Entries Tagged 'GStreamer'

An update on PipeWire – the multimedia revolution

We launched PipeWire last September with this blog entry. I thought it would be interesting for people to hear about the latest progress on what I believe is going to be a gigantic step forward for the Linux desktop. So I caught up with PipeWire creator Wim Taymans during DevConf 2018 in Brno, where Wim was giving a talk about PipeWire, and we discussed the current state of the code. Wim also demonstrated a few of the things that PipeWire can now do.

Christian Schaller and Wim Taymans testing PipeWire with Cheese

Priority number 1: video handling

So as we said when we launched, the top priority for PipeWire is to address our needs on the video side of multimedia. This is critical both because of the more secure nature of Wayland, which breaks the old methods for screen sharing, and because of the emergence of desktop containers in the form of Flatpak. Thus we need PipeWire to provide application and desktop developers with a new method for doing screen sharing, and also to provide a secure way for applications inside a container to access audio and video devices on the system.

There are three major challenges PipeWire wants to solve for video. The first is device sharing, meaning that multiple applications can share the same video hardware device. The second is to do so in a secure manner, ensuring your video streams are not hijacked by a rogue process. And finally it wants to provide an efficient method for sharing multimedia between applications, for instance full screen capture from your compositor (like GNOME Shell) to a video conferencing application running in your browser, such as Google Hangouts, Blue Jeans or Pexip.

So the first thing Wim showed me in action was device sharing. We launched the GNOME photo booth application Cheese, which gets PipeWire support for free thanks to the PipeWire GStreamer plugin. This is an important thing to remember: because so many Linux applications use GStreamer these days, we don't need to port each one of them to PipeWire; instead the PipeWire GStreamer plugin does the 'porting' for us. We then launched a gst-launch command line pipeline in a terminal. The result is two applications sharing the same webcam input without one of them blocking access for the other.

Cheese and a GStreamer pipeline running on PipeWire

As you can see from the screenshot above it worked fine, and this was actually done on my Fedora Workstation 27 system. The only thing we had to do was to start the 'pipewire' process in a terminal before starting Cheese and the gst-launch pipeline; GStreamer autoplugging took care of the rest. So feel free to try this out yourself if you are interested, but be aware that you will find bugs quickly if you try things like on-the-fly resolution changes or switching video devices. This is still tech-preview-level software in Fedora 27.
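
For those who want to reproduce the experiment, a minimal sketch of what we ran could look something like the commands below, assuming the PipeWire GStreamer plugin (which provides the pipewiresrc element) is installed:

  # Start the PipeWire daemon in one terminal (tech preview in Fedora 27)
  pipewire

  # In a second terminal, start Cheese as usual
  cheese &

  # In a third terminal, view the same webcam through the PipeWire GStreamer
  # plugin while Cheese keeps using it
  gst-launch-1.0 pipewiresrc ! videoconvert ! autovideosink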

The plan is for Wim Taymans to sit down with the web browser maintainers at Red Hat early next week and see if we can make progress on supporting PipeWire in Firefox and Chrome, so that conferencing software like the ones mentioned above can start working fully under Wayland.

Since security was one of the drivers for the move from the X Window System to Wayland, we of course also put a lot of emphasis on not recreating the security holes of X in the compositor. So the way PipeWire now works is that if an application wants to do full screen capture, it has to go through the compositor via a D-Bus API (a portal, in Flatpak and Wayland terminology), and only the permitted application is allowed to do the screen capture, so the stream can't be hijacked by a random rogue application or process on your computer. This also works from within a sandboxed setting like Flatpak.
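
To give an idea of how a consumer fits into this, here is a hedged sketch: once the portal has approved the request and handed the application a PipeWire node for the screen content, the stream can be consumed like any other video source. The property name and the placeholder node id below are assumptions for illustration, not the final API:

  # Hypothetical: consume a screen-capture stream the portal has granted,
  # identified by the node id the portal handed back
  gst-launch-1.0 pipewiresrc path=<node-id-from-portal> ! videoconvert ! autovideosink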

Jack Support

Another important goal of PipeWire is to bring all Linux audio and video together, which means PipeWire needs to be an equally good or better replacement for Jack for the pro-audio use case. This is a tough use case to satisfy, so while getting the video part working has been the top development priority, Wim has also worked on verifying that the design allows for the low latency and control needed for pro-audio. To do this Wim has implemented the Jack protocol on top of PipeWire.

Carla, a Jack application running on top of PipeWire.


Through that work he has now verified that he is able to achieve the low latency needed for pro-audio with PipeWire and that he will be able to run Jack applications without changes on top of PipeWire. So above you see a screenshot of Carla, a Jack-based application running on top of PipeWire with no Jack server running on the system.
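
As a rough sketch of what running an unmodified Jack client against PipeWire looks like, the idea is simply to have the client load PipeWire's drop-in libjack instead of the real one. The library path below is a placeholder, not the actual install location:

  # Point the dynamic linker at PipeWire's libjack replacement so an unmodified
  # Jack client (here Carla) talks to PipeWire instead of a real jackd
  LD_LIBRARY_PATH=<path-to-pipewire-jack-libs> carla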

ALSA/Legacy applications

Another item Wim has written the first code for, and verified will work well, is the ALSA emulation. The goal of this piece of code is to allow applications using the ALSA userspace API to output to PipeWire without needing special porting or application developer effort. At Red Hat we have many customers with older bespoke applications using this API, so it has been of special interest for us to ensure this works just as well as native ALSA output. It is also worth noting that PipeWire does mixing, so sound being routed through ALSA will be seamlessly mixed with audio coming through the Jack layer.
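
A quick way to sanity check this, assuming the PipeWire ALSA plugin registers an ALSA PCM named "pipewire" (as the packaged plugin does), would be something like:

  # Play a wav file through the plain ALSA userspace API, routed into PipeWire
  # (assumes the plugin defines a PCM called "pipewire")
  aplay -D pipewire some-test-file.wav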

Bluetooth support

The last item Wim has spent some time on since last September is making sure Bluetooth output works, and he demonstrated this to me while we were talking during DevConf. The PipeWire Bluetooth module plugs directly into the BlueZ Bluetooth framework, meaning that things like the GNOME Bluetooth control panel just work with it without any porting needed. And while the code is still quite young, Wim demonstrated pairing and playing music over Bluetooth using it.

What about PulseAudio?

As you probably noticed, one thing we didn't mention above is how to deal with PulseAudio applications. Handling this use case is still on the todo list, and the plan is, at least initially, to keep PulseAudio running on the system and have it output its sound through PipeWire. That said, we are a bit unsure how many applications would actually be using this path, because, as mentioned above, all GStreamer applications for instance would be PipeWire-native automatically through the PipeWire GStreamer plugins. And for legacy applications the PipeWire ALSA layer would replace the current PulseAudio ALSA layer as the default ALSA output, meaning that the only applications left are those outputting to PulseAudio directly themselves. The plan is also to keep the PulseAudio ALSA device around, so if people want to use things like the PulseAudio networked audio functionality they can choose the PA ALSA device manually and keep doing so.
Over time the goal would of course be to not have to keep the PulseAudio daemon around, but dropping it completely is likely to be a multiyear process with current plans, so it is kinda like XWayland on top of Wayland.

Summary

So you might read this and think: hey, with all this work, we are almost done, right? Well, unfortunately no. The components mentioned here are good enough for us to verify the design and features, but they still need a lot of maturing and testing before they will be in a state where we can consider switching Fedora Workstation over to using them by default. So there are many warts that still need to be cleaned up, but a lot of things have become much more tangible now than when we last spoke about PipeWire in September. The video handling we hope to enable in Fedora Workstation 28, while the other pieces we will work towards enabling in later releases as the components mature.
Of course, the more people interested in joining the PipeWire community to help us out, the quicker we can mature these different pieces. So if you are interested, please join us in #pipewire on irc.freenode.net or just clone the code from GitHub and start hacking. You find the details for IRC and git here.

AAC support will be available in Fedora Workstation 27!

So I am really happy to announce another major codec addition to Fedora Workstation 27, namely the addition of the codec called AAC. As you might have seen from Tom Callaway's announcement, this has just been cleared for inclusion in Fedora.

For those not well versed in the arcane lore of audio codecs, AAC is the codec used by things like iTunes and is found in a lot of general media files online. AAC stands for Advanced Audio Coding and was created by the MPEG working group as the successor to mp3. Especially due to Apple embracing the format, there are a lot of files out there using it, and thus we wanted to support it in Fedora too.

What we will be shipping in Fedora is a modified version of the AAC implementation released by Google, which was originally written by Fraunhofer. On top of that we will of course be providing GStreamer plugins to enable full support for playing and creating AAC files in GStreamer applications.
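
Once the plugin is installed, GStreamer applications should not need to do anything special; for testing from the command line something like the following should work (the fdkaacenc element name assumes the FDK-based GStreamer plugin is what ends up shipping):

  # Playback: playbin autoplugs the AAC decoder like any other codec
  gst-launch-1.0 playbin uri=file:///path/to/some-file.m4a

  # Creating an AAC file from a test tone
  gst-launch-1.0 audiotestsrc num-buffers=500 ! audioconvert ! fdkaacenc ! mp4mux ! filesink location=test.m4a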

Be aware though that AAC is a bit of an umbrella term for a lot of different technologies, and thus you might come across files that claim to use AAC but which we cannot play back. The most likely reason for that would be that the file requires an AAC profile we do not support. The version of AAC that we will be shipping has also been carefully created to fit within the requirements for software in Fedora, so if you are a packager be aware that, unlike with for instance mp3, this change does not mean you can package and ship any AAC implementation you want in Fedora.

I am expecting to have more major codec announcements soon, so stay tuned :)

Running Wayland on the Nvidia driver

I know many of you have wanted to test running Wayland on NVidia. The work on this continues between Jonas Ådahl, Adam Jackson and various developers at NVidia. It is not ready for primetime yet, as we are still working on the server-side glvnd piece we need for XWayland. That said, with both Adam Jackson looking at this from our side and Kyle Brenneman looking at it from NVidia, I am sure we will be able to hash out the remaining open questions and get that done.

In the meantime Miguel A. Vico from NVidia has set up a Copr to let people start testing using EGLStreams under Wayland. I haven’t tested it myself yet, but if you do and have trouble make sure to let Miguel and Jonas know.

As a sidenote, I am heading off to GUADEC in Manchester tomorrow, and we do plan to discuss efforts like these there. We have team members like Jonas Ådahl flying in from Taiwan and Peter Hutterer flying in from Australia, so it will be a great chance to meet core developers who are far away from us in terms of timezone and geographical distance. GUADEC this year should be a lot of fun, and from what I hear we are going to have record attendance based on early registration numbers, so if you can make it to Manchester I strongly recommend joining us, as I think this year's event will have a lot of energy and a lot of interesting discussions on what the next steps are for GNOME.

Another media codec on the way!

One of the things we are working hard on currently is ensuring you have the codecs you need available in Fedora Workstation. Our main avenue for doing this is looking at the various codecs out there and trying to determine if the intellectual property situation allows us to start shipping all or parts of the technologies involved. This is how we were able to start shipping mp3 playback support in Fedora Workstation 25. In cases where that is clearly not possible, we have things like the agreement with our friends at Cisco allowing us to offer H264 support using their licensed codec, which is how OpenH264 started being available in Fedora Workstation 24.

As you might imagine clearing a codec for shipping is a slow and labour intensive process with lawyers and engineers spending a lot of time reviewing stuff to figure out what can be shipped when and how. I am hoping to have more announcements like this coming out during the course of the year.

So I am very happy to announce today that we are now working on packaging the codec known as AC3 (also known as A52) for Fedora Workstation 26. The name AC3 might not be very well known to you, but AC3 is part of a set of technologies developed by Dolby and marketed as Dolby Surround. This means that if you have video files with surround sound audio, it is most likely something we can play back with an AC3 decoder. AC3/A52 is also used for surround sound TV broadcasts in the US, and it is the audio format used by some Sony and Panasonic video cameras.

We will be offering AC3 playback in Fedora Workstation 26, and we are looking into options for offering an encoder. To be clear, there is nothing stopping us from offering an encoder apart from finding an implementation that can be packaged and shipped with Fedora with a reasonable amount of effort. The most well known open source implementation we know about is the one found in ffmpeg/libav, but extracting a single codec to ship from ffmpeg or libav is a lot of work and not something we currently have the resources to do. We found another implementation called Aften, but that seems to have been unmaintained for years; we will still look at it to see if it could be used.
But if you are interested in AC3 encoding support, we would love it if someone started working on a standalone AC3 encoder we could ship, be that by picking up maintainership of Aften, splitting out AC3 encoding from libav or ffmpeg, or writing something new.
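
On the playback side, once the decoder plugin lands it should be picked up automatically by GStreamer applications; for a command line test something like this should do (the a52dec element name assumes the liba52-based GStreamer plugin):

  # Playback: playbin autoplugs the AC3 decoder once the plugin is installed
  gst-launch-1.0 playbin uri=file:///path/to/movie-with-ac3-audio.mkv

  # Decoding a raw AC3 stream explicitly
  gst-launch-1.0 filesrc location=audio.ac3 ! ac3parse ! a52dec ! audioconvert ! audioresample ! autoaudiosink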

If you want to learn more about AC3, the best place to look is probably the Wikipedia page for Dolby Digital, or the ATSC A/52 audio standard document for more of a technical deep dive.

Mp3 support now coming to Fedora Workstation 25

So, in Fedora Workstation 24 we added H264 support through OpenH264. In Fedora Workstation 25 I am happy to tell you all that we are taking another step in improving our codec support by adding support for mp3 playback. I know this has been a big wishlist item for a long time for a lot of people so I am really happy that we are finally in a position to fulfil that wish. You should be able to download the mp3 plugin on day 1 through GNOME Software or through the missing codec installer in various GStreamer applications. For Fedora Workstation 26 I would not be surprised if we decide to ship it on the install media.

For the technically inclined out there, our initial enablement is through the mpg123 library and the corresponding GStreamer plugin. The main reason we chose this library over the others available was a combination of it using the same license as GStreamer (LGPLv2) and it being a well established library already used by a lot of different applications. There might be other mp3 decoders added in the future depending on interest in and effort by the community. So get ready to install Fedora Workstation 25 when it's released soon and play some tunes :)
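
For the curious, a quick command line test once the plugin is installed could look like this; playbin will autoplug the decoder, or you can force the mpg123-based element explicitly:

  # Playback through playbin, which autoplugs the new mp3 decoder
  gst-launch-1.0 playbin uri=file:///path/to/song.mp3

  # Or force the mpg123-based decoder element
  gst-launch-1.0 filesrc location=song.mp3 ! mpegaudioparse ! mpg123audiodec ! audioconvert ! audioresample ! autoaudiosink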

P.S. To be 110% clear we will not be adding encoding support at this time.

H264 in Fedora Workstation

So after a lot of work to put the policies and pieces in place, we are now giving Fedora users access to the OpenH264 plugin from Cisco.
Dennis Gilmore posted a nice blog entry explaining how you can install OpenH264 in Fedora 24.

That said, the plugin is of limited use today for a variety of reasons. The first is that the plugin only supports the Baseline profile. For those not intimately familiar with H264 profiles: they are basically a way to define subsets of the codec. So as you might guess from the name, the Baseline profile is pretty much at the bottom of the H264 profile list, and thus any file encoded with another H264 profile will not work with it. The profile you need for most online videos is the High profile. If you encode a file using OpenH264, though, it will work with any decoder that can do Baseline or higher, which is basically all of them.
And there are some things using H264 Baseline, like WebRTC.
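
If you want to play with the plugin from the command line, a sketch along these lines should work once it is installed (using the openh264enc and openh264dec GStreamer elements):

  # Encode a ten second test clip with OpenH264 (Baseline profile output)
  gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1280,height=720 ! openh264enc ! h264parse ! mp4mux ! filesink location=test.mp4

  # Play it back, forcing the OpenH264 decoder rather than whatever playbin would pick
  gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! openh264dec ! videoconvert ! autovideosink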

But we realize that to make this a truly useful addition for our users we need to improve the profile support in OpenH264 and luckily we have Wim Taymans looking at the issue and he will work with Cisco engineers to widen the range of profiles supported.

Of course just adding H264 doesn't solve the codec issue, and we are looking at ways to bring even more codecs to Fedora Workstation. There is a limit to what we can do there, but I do think we will have some announcements this year that will bring us a lot closer, and long term I am confident that efforts like the Alliance for Open Media will provide us with a path to a future dominated by royalty-free media formats.

But for now thanks to everyone involved from Cisco, Fedora Release Engineering and the Workstation Working Group for helping to make this happen.

GStreamer Conference 2015

Wanted to let everyone know that the GStreamer Conference 2015 is happening for the 6th time this year. So if you want to attend the premier open source multimedia conference you can do so in Dublin, Ireland between the 8th and 9th of October. If you want to attend I suggest registering as early as possible using the GStreamer Conference registration webpage. Like earlier years the GStreamer Conference is co-located with other great conferences like the Embedded Linux Conference Europe so you have the chance to combine the attendance into one trip.

The GStreamer Conference has always been a great opportunity to not only learn about the latest developments in GStreamer, but also about what's happening in the general Linux multimedia stack, the latest news from the world of codec development, and other related topics. I strongly recommend setting aside the 8th and 9th of October for a trip to Dublin and the GStreamer Conference.

Also a heads up for those interested in giving a talk: the formal deadline for submitting a proposal is this Sunday, the 9th of August, so you need to hurry to send one in. You find the details for how to submit a talk on the GStreamer Conference 2015 website. While talks submitted before the 9th will be prioritized, I do recommend that anyone seeing this after the deadline still send in a proposal, as there might be a chance to get on the program anyway if you get it in during the following week.

Fedora Workstation next steps : Introducing Pinos

So this will be the first in a series of blogs talking about some major initiatives we are doing for Fedora Workstation. Today I want to present and talk about a thing we call Pinos.

So what is Pinos? One of the original goals of Pinos was to provide the same level of advanced hardware handling for video that PulseAudio provides for audio. For those of you who have been around for a while, you might remember how once upon a time you could only have one application using the sound card at a time, until PulseAudio properly fixed that. Well, Pinos will allow you to share your video camera between multiple applications and also provide an easy to use API to do so.

Video providers and consumers are implemented as separate processes communicating over D-Bus and exchanging video frames using fd passing.

Some features of Pinos

  • Easier switching of cameras in your applications – Pinos makes it easier for applications to switch between multiple cameras or to mix the content from multiple sources.
  • Multiple types of video inputs – Pinos supports more than cameras; it also supports other types of video sources, for instance your desktop as a video source.
  • GStreamer integration – Pinos is built using GStreamer and also has GStreamer elements supporting it, to make integrating it into GStreamer applications simple and straightforward (see the sketch after this list).
  • Some audio support – While Pinos primarily tries to solve for video the issues PulseAudio solves for audio, namely letting multiple applications share the same hardware, it also includes audio support in order to let you handle both.
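
To give a feel for the GStreamer integration mentioned above, here is a minimal sketch; the pinossrc and pinossink element names below are an assumption based on the current prototype, so treat them as subject to change:

  # Show whatever Pinos exposes (for instance a shared camera) in a window
  gst-launch-1.0 pinossrc ! videoconvert ! autovideosink

  # Feed a test stream into Pinos so other applications can consume it
  gst-launch-1.0 videotestsrc ! pinossink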

What do we want to do with this in Fedora Workstation?

  • One thing we know is of great use and importance for many of our users, including many developers who want to make videos demonstrating their software, is better screen capture support. One of the test cases we are using for Pinos is improving the built-in screen casting capabilities of GNOME 3, the goal being to reduce overhead and to allow for easy setup of picture-in-picture capturing. So you can easily set it up so that a camera captures your face and voice and mixes that into your screen recording.
  • Video support for desktop sandboxes. We have been working for a while on providing technology for sandboxing your desktop applications, and while we can, with a little work, use PulseAudio to give sandboxed applications audio access, we needed something similar for video. Pinos provides us with such a solution.

Who is working on this?
Pinos is being designed and written by Wim Taymans, who is the co-creator of the GStreamer multimedia framework and also a regular contributor to the PulseAudio project. Wim also works for Red Hat as a Principal Engineer, being in charge of a lot of our multimedia support in both Red Hat Enterprise Linux and Fedora. It is also worth noting that Pinos draws many of its ideas from an early prototype by William Manley called PulseVideo, and it builds upon some of the code that was merged into GStreamer as part of that effort.

Where can I get the code?
The code is currently hosted in Wim's private repository on freedesktop.org. You can get it at cgit.freedesktop.org/~wtay/pinos.

How can I get involved or talk to the author
You can find Wim on Freenode IRC, he uses the name wtay and hangs out in both the #gstreamer and #pulseaudio IRC channels.
Once the project is a bit further along we will get some basic web presence set up and a mailing list created.

FAQ

If Pinos contains Audio support will it eventually replace PulseAudio too?
Probably not. The use cases and goals for the two systems are somewhat different, and it is not clear that trying to make Pinos accommodate all the PulseAudio use cases would be worth the effort, or even possible without feature loss. So while there is always a temptation to think 'hey, wouldn't it be nice to have one system that can handle everything', we are at this point unconvinced that the gain outweighs the pain.

Will Pinos offer redirection of kernel APIs for video devices, like PulseAudio does for audio, in order to handle legacy applications?
No, that was possible due to the way ALSA worked, but V4L2 doesn’t have such capabilities and thus we can not take advantage of them.

Why the name Pinos?
The code name for the project was PulseVideo, but to avoid confusion with the PulseAudio project, and to avoid people making too many assumptions based on the name, we decided to follow in the tradition of Wayland and Weston and take inspiration from local place names related to the creator. Since Wim lives in Pinos de Alhaurin, close to Malaga in Spain, we decided to call the project Pinos. Pinos is the word for pines in Spanish :)

Congratulations to Endless Computer

So the Endless Computer Kickstarter just succeeded in reaching its funding goal of 100K USD. A big heartfelt congratulations to the whole team, and I am looking forward to receiving my system. For everyone else out there, I strongly recommend getting in on their Kickstarter: not only do you get a cool looking computer with a really nice Linux desktop, you are helping forward a company that has the potential to take the Linux desktop to the next level. And be aware that the computer is, at the end of the day, a standard (yet very cool looking) computer, so if you want to install Fedora Workstation on it you can :)

Thanks for all the applications

Jobs at Red Hat
So I got a LOT of responses to my blog post about the open positions we have here at Red Hat working on Fedora and the desktop. In fact I got so many that it will probably take a bit of time before we can work through them all, so you might have to wait a little before getting a response from us. Anyway, thank you to everyone who sent me their CV, it is much appreciated, and I am looking forward to working with those of you we end up hiring!

Builder campaign closes in 13 hours
I want to make one last pitch for everyone to contribute to the Builder crowdfunding campaign. It has just passed 47,000 USD as I write this, which means we just need another 3,000 USD to reach the graphical debugger stretch goal. Don't miss out on this opportunity to help this exciting open source project!