Launching Pipewire!

In quite a few blog posts I have been referencing Pipewire, our new Linux infrastructure piece to handle multimedia under Linux better. Well, we are finally ready to formally launch Pipewire as a project, and we have created a Pipewire website and logo.

To give you all some background, Pipewire is the latest creation of GStreamer co-creator Wim Taymans. The original reason it was created was that we realized that as desktop applications moved towards primarily being shipped as containerized Flatpaks, we would need something for video similar to what PulseAudio was doing for audio. As part of his job here at Red Hat, Wim had already been contributing to PulseAudio for a while, including implementing a new security model for PulseAudio to ensure we could securely have containerized applications output sound through PulseAudio. So he set out to write Pipewire, although initially the name he used was PulseVideo.

As he was working on figuring out the core design of Pipewire, he came to the conclusion that designing it to handle only video would be a mistake, as a major challenge he knew from working on GStreamer was how to ensure perfect audio and video synchronisation. If both audio and video could be routed through the same media daemon, ensuring that audio and video worked well together would be a lot simpler, and frameworks such as GStreamer would need to do a lot less heavy lifting to make it work. So just before we started sharing the code publicly, we renamed the project to Pinos, after Pinos de Alhaurín, a small town close to where Wim lives in southern Spain. In retrospect Pinos was probably not the world’s best name :)

Anyway, as work progressed, Wim decided to also take a look at JACK, since supporting the pro-audio use case was something PulseAudio had never attempted. We felt that if we could ensure Pipewire supported the pro-audio use case in addition to consumer-level audio and video, it would improve our multimedia infrastructure significantly and ensure pro-audio became a first-class citizen on the Linux desktop. Of course, as the scope grew, the development time got longer too.

Another major use case for Pipewire for us was that we knew that with the migration to Wayland we would need a new mechanism to handle screen capture, as the way it was done under X was very insecure. So Jonas Ådahl started working on creating an API we could support in the compositor, with Pipewire carrying the resulting video. This is meant to cover everything from single-frame captures like screenshots, to local desktop recording, to remoting protocols. It is important to note here that what we have done is not just implement support for a specific protocol like RDP or VNC; we have ensured there is an advanced infrastructure in place that any protocol can be built on top of. For instance, we will be working with the SPICE team here at Red Hat to ensure SPICE can take advantage of Pipewire and the new API. We will also ensure Chrome and Firefox support this, so that you can share your Wayland desktop through systems such as Blue Jeans.
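To give a sense of what the consuming side can look like, here is a minimal sketch in C that plays a Pipewire video stream through GStreamer, using the pipewiresrc element from Pipewire’s GStreamer plugin. The bare pipeline string is just for illustration; a real consumer would select a specific stream and negotiate formats, and error handling is kept to a minimum:

```c
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* pipewiresrc pulls a video stream out of the Pipewire daemon;
     * videoconvert and autovideosink just render it to a window. */
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "pipewiresrc ! videoconvert ! autovideosink", &error);
    if (pipeline == NULL) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
        gst_message_unref(msg);

    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```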

Where we are now
So after multiple years of development we are now landing Pipewire in Fedora Workstation 27. This initial version is video only, as that is the most urgent thing we need supported for Flatpaks and Wayland. Audio is thus completely unaffected for now, and rolling that out will require quite a bit of work, as we do not want to risk breaking audio on your system as a result of this change. We know that for many the original rollout of PulseAudio was painful, and we do not want a repeat of that history.

So I strongly recommend grabbing the Fedora Workstation 27 beta to test Pipewire, and check out the new website at Pipewire.org and the initial documentation at the Pipewire wiki. Especially interesting are probably the pages that will eventually outline our plans for handling the PulseAudio and JACK use cases.

If you are interested in Pipewire, please join us on IRC in #pipewire on freenode. Also, if things go as planned, Wim will be on Linux Unplugged tonight talking to Chris Fisher and the Unplugged crew about Pipewire, so tune in!

25 thoughts on “Launching Pipewire!”

  1. On the audio side of Pipewire, would it be a replacement for PulseAudio/JACK on top of ALSA/something else, or would it run on top of PulseAudio/JACK themselves?

    I’m guessing it’s the former, but I didn’t get exactly where its audio side would sit in the current stack. I’m not criticizing, I have no knowledge of the technical side of these things.

  2. Is it going to have a stable and sane plugin interface which I could use to filter all of the audio which passes through my system?

    I once wrote a plugin for PulseAudio – the code which interfaced with PA was a horrible, thousand-line-long mess that broke on every new version of PA. After a few updates I got frustrated and just rewrote the thing on top of JACK – it was less than a hundred lines long (as opposed to over a thousand), and it **never** broke in several years.

    So right now I have a really “fun” audio setup, thanks to our pal PulseAudio, e.g. for programs which use ALSA the audio flows like this:

    ALSA -> PulseAudio -> JACK -> ALSA

    Totally insane. But it’s still significantly less painful than dealing with the absolutely horrible mess that is PulseAudio’s plugin (module) API.
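    For reference, a minimal JACK passthrough client in C looks roughly like this – the client name, port names, and the identity copy standing in for a real filter are all placeholders, and error handling is trimmed:

    ```c
    #include <jack/jack.h>
    #include <stdio.h>
    #include <unistd.h>

    static jack_port_t *in_port, *out_port;

    /* Called by JACK once per period: copy input to output unchanged.
     * A real filter would transform the samples here instead. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
        for (jack_nframes_t i = 0; i < nframes; i++)
            out[i] = in[i];
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("passthrough", JackNullOption, NULL);
        if (client == NULL) {
            fprintf(stderr, "could not connect to JACK – is the server running?\n");
            return 1;
        }

        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);

        if (jack_activate(client) != 0)
            return 1;

        /* Keep running; routing to/from other ports is done externally. */
        for (;;)
            sleep(1);
    }
    ```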

    • Please join us in #pipewire on freenode to discuss this with wtay. He would love to get detailed feedback from people like yourself as he embarks on fleshing out the audio side of Pipewire.

    • It means we are still in the early days and not all details are set in stone yet :) There is still a goal to somehow offer ABI compatibility with PulseAudio, but the exact form of that is not decided yet; multiple approaches are being discussed, for example keeping the current API implementation and just putting Pipewire in behind it.

  3. I’ve long thought that we need something similar to Apple’s CoreAudio on macOS/iOS, and I hope Pipewire ends up doing that, tying pro audio and consumer audio together so there is no need for API bridging.

  4. Yes! So will it allow “OBS-like” software to function under Wayland? And what do you think about an AviSynth-like scripting language for manipulating video/audio in Pipewire – wouldn’t that be awesome?

  5. Very happy to hear this. Please aim for the best audio handling of all platforms. Better that it takes longer to finish than that it ends up as a second-class audio/video editing platform.

  6. Cool, could VJ programmes use this to capture and composite the output of arbitrary programmes, like Spout or Syphon on Windows and OSX?

  7. So, the holy grail for some applications on Linux would be a way to do GPU texture sharing of video data (like Syphon does on OSX). Is that something supported or in the pipeline for Pipewire?

  8. I don’t want to ruin the party, but all this fancy progress always comes at the expense of breaking years of applications and backward compatibility. So I have to ask: is this going to be the “default” cross-desktop API standard for things like sharing the desktop/taking screenshots/producing video screencasts? Will this work with X11-based applications *transparently*?

    • No, but kinda yes. The new API works across X and Wayland, but for old X apps the old way will keep working without porting, except that the application is unable to share anything but its own window.

      • Currently, an X11 application needs to know its window ID, because asking for the root or the composite overlay window won’t work? I think the X11 API should be patched to return empty content for the desktop plus only those windows that belong to the application asking for them. The same goes for the GTK libraries that deal with screenshots (afaik those do not work under Wayland either; you need to request a screenshot by querying D-Bus, but I think that is not a cross-desktop solution). So if Pipewire could solve this, that would be awesome, even if it means getting a black pixel buffer for everything except the actual source application. We really need a cross-desktop solution.

  9. If this is going to pipe video through a processing graph, with audio added and so forth and everything kept aligned, in a live/real-time way, that sounds good! Video data is so heavy, though; my speculation is that it will have a lot of latency for anything beyond a hello-world example.
    Best of luck
