Preparing the ground for the Fedora Workstation

Things are moving forward for the Fedora Workstation project. For those of you who don’t know about it, it is part of a broader plan to refocus Fedora around 3 core products, each with a clear and distinctive use case. The goal is to have a clear definition of what Fedora is and to have something that, for instance, ISVs can clearly identify and target with their products. At the same time the project is trying to move away from the traditional distribution model, a model where you primarily take whatever comes your way from upstream, apply a little duct tape to try to keep things together and ship it. That model was good in the early years of Linux’s existence, but it does not seem a great fit for what people want from an operating system today.

If we look at successful products like Mac OS X, the PlayStation 4, Android and ChromeOS, the common thread between them is that while they were all built on top of existing open source efforts, they didn’t just indiscriminately shovel in any open source code and project they could find. Instead they decided upon the product they wanted to make and then cherry-picked the pieces out there that could help them with that, developing themselves whatever they couldn’t find a good fit for. The same is to some degree true for things like Red Hat Enterprise Linux and Ubuntu. Both products, while based almost solely on existing open source components, have cherry-picked what they wanted and then developed the pieces they needed on top of them. For instance, the custom kernel in Red Hat Enterprise Linux has always been part of the value add offered: a Linux kernel with a core set of dependable APIs.

Fedora on the other hand has historically followed a path more akin to Debian’s, with a ‘the more the merrier’ attitude, trying to welcome anything into the group. A metaphor often used in the Fedora community to describe this state was that Fedora was like a collection of Lego blocks: if you had the time and the interest you could build almost anything with it. The problem was that the products you built also ended up feeling like the creations you make with a random box of Lego blocks, with a lot of pointy edges and some weird-looking sections, because you had to solve problems with the pieces you happened to have available rather than the pieces best suited for the job.

With the 3 products we are switching to a model where, although we start with that big box of Lego blocks, we add some engineering capacity on top of it, make some clear and hard decisions on direction, and actually start creating something that looks and feels like it was made as a whole instead of just being assembled from a random set of pieces. So when we are planning the Fedora Workstation we are not just looking at what features we can develop for individual libraries or applications like GTK+, Firefox or LibreOffice; we are looking at what we want the system as a whole to look like. And maybe most importantly, we try our hardest to look at things from a feature/use-case viewpoint first, as opposed to a specific technology viewpoint. So instead of asking ‘what features are there in systemd that we can expose or use in the desktop?’, the question becomes ‘what new features do we want to offer our users in future versions of the product, and what do we need from systemd, the kernel and others to be able to do that?’.

So while technologies such as systemd, Wayland, Docker and btrfs are on our roadmap, they are not there because they are ‘cool technologies’; they are there because they provide us with the infrastructure we need to achieve our feature goals. And what’s more, we make sure to work closely with the core developers to make the technologies what we need them to be. This means, for example, that between myself and other members of the team we are having regular conversations with people such as Kristian Høgsberg and Lennart Poettering, and of course contributing code where possible.

To explain our mindset with the Fedora Workstation effort let me quickly summarize some old history. In 2001 Jim Gettys, one of the original creators of the X Window System, gave a talk at GUADEC in Seville called ‘Draining the Swamp’. I don’t think the talk can be found online anywhere, but he outlined some of the same thoughts in this email reply to Richard Stallman some time later. I think that presentation has shaped the thinking of the people who saw it ever since; I know it has shaped mine. Jim’s core message was that the idea that we can create a great desktop system by trying to work around the shortcomings or weirdness in the rest of the operating system is a total fallacy. If we look at the operating system as a collection of 100% independent parts, all developing at their own pace and with their own agendas, we will never be able to create a truly great user experience on the desktop. Instead we need to work across the stack, fixing the issues we see where they should be fixed, and through that ‘drain the swamp’. If we instead continued to try to solve the problems by adding layers upon layers of workarounds and abstraction layers, we would be growing the swamp, making it even more unmanageable. We are trying to bring that ‘draining the swamp’ mindset with us into creating the Fedora Workstation product.

With that in mind, what are the driving ideas behind the Fedora Workstation? The Fedora Workstation effort is meant to provide a first class desktop for your laptop or workstation computer, combining a polished user interface with access to new technologies. We are putting a special emphasis on developers with our first releases, both looking at how we can improve the desktop experience for developers, and looking at what tools we can offer to let them be productive as quickly as possible. And to be clear, when we say developers we are not only thinking about developers who want to develop for the desktop or the desktop itself, but any kind of software developer or DevOps engineer out there.

The full description of the Fedora Workstation can be found here, but the essence of our plan is to create a desktop system that not only provides incremental improvements over how things are done today, but which truly tries to take a fresh look at how a Linux desktop operating system should operate. The traditional distribution model, built up around software packages like RPM or Deb, has both its pluses and minuses. Its biggest challenge is probably that it creates a series of fiefdoms where 3rd party developers can’t easily target the system, or a family of systems, except by spending time very specifically supporting each one. And even once a developer decides to commit to supporting a given system, it is not clear what system services they can depend on always being available or what human interface design they should aim for. Solving these kinds of issues is part of our agenda for the new workstation.

So to achieve this we have decided on a set of core technologies to build this solution upon. The central piece of the puzzle is the so-called LinuxApps proposal from Lennart Poettering. LinuxApps is currently a combination of high level ideas and some concrete building blocks. The building blocks are technologies such as Wayland, kdbus, overlayfs and software containers. The ideas side includes developing a permission system, similar to what you see Android applications employ, to decide what rights a given application has, and defining versioned library bundles that 3rd party applications can depend on regardless of the version of the operating system. On the container side we plan on expanding on the work Red Hat is doing with Docker and Project Atomic.

In terms of some of the other building blocks, I think most of you already know of the big push we are doing to get the new Wayland display server ready. This includes work on core infrastructure like libinput, a new library for handling input devices being developed by Jonas Ådahl and our own Peter Hutterer. There is also a lot of work happening on the GNOME 3 side of things to make GNOME 3 Wayland-ready. Jasper St. Pierre wrote up a great blog entry outlining his work to make GDM and the GNOME Shell work better with Wayland. It is an ongoing effort, but there is a big community around it, as most recently seen at the West Coast Hackfest at the Endless Mobile office.

As I mentioned, there is a special emphasis on developers for the initial releases. This includes both small and big changes. For instance, we decided to put some time into improving the GNOME Terminal application, as we know it is a crucial piece of technology for a lot of developers and system administrators alike. Some of the terminal improvements can be seen in GNOME 3.12, but we have more features lined up, including the return of translucency. But we are also looking at the tools provided in general, and the great thing here is that we are able to build upon a lot of efforts that Red Hat is developing for the Red Hat product portfolio, like Software Collections, which gives easy access to a wide range of development tools and environments. Together with Developer Assistant this should greatly enhance your developer experience in the Fedora Workstation. The inclusion of Software Collections also means that Fedora becomes an even better tool for developing software that you expect to deploy on RHEL: a software collection identical to the one you developed against on Fedora will be available on RHEL, ensuring that you have the exact same toolchain and toolchain versions on both systems.

Of course creating a great operating system isn’t just about the applications and the shell, but also about supporting the kind of hardware people want to use. A good example here is the effort we put into HiDPI support. HiDPI screens are not very common yet, but a lot of the new high end laptops coming out are already using them. Anyone who has used something like a Google Pixel or a Samsung Ativ Book 9 Plus has quickly come to appreciate the improved sharpness and image quality these displays bring. Thanks to that effort I have been very pleased to see many GNOME 3.12 reviews mentioning this work recently, saying that GNOME 3.12 is currently the best Linux desktop for HiDPI systems.

Another part of the puzzle for creating a better operating system is software installation. The traditional distribution model often tended to bundle as many applications as possible, as there was no good way for users to discover new software for their system. This is a brute force approach that assumes that if you checked the ‘scientific researcher’ checkbox you wanted a random collection of 100 applications useful for ‘scientific researchers’ installed. To me this is a symptom of a system that does not provide a good way of finding and installing new applications. Thanks to the ardent efforts of Richard Hughes we have a new Software installer that keeps going from strength to strength. It was originally launched in Fedora 19, but as we move towards the first Fedora Workstation release we are enabling new features and adding polish to it. One area where we need the wider Fedora community to work with us is in increasing the coverage of appdata files. Appdata files essentially contain the metadata the installer needs to describe and advertise the application in question, including descriptive text and screenshots. Ideally upstreams should ship their own appdata file, but where they don’t, we should add one to the Fedora package directly. Currently applications from the GTK+ and GNOME sphere have relatively decent appdata coverage, but we need to put more effort into getting applications using other toolkits covered too.
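
To make the appdata request concrete, here is a rough sketch of what a minimal appdata file looks like. The field names follow the early appdata specification as used by the installer around this time, and all the values are invented for illustration rather than taken from any real package:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative example only; ids, URLs and text are made up. -->
    <application>
      <id type="desktop">myapp.desktop</id>
      <licence>CC0</licence>
      <summary>One line summarizing what the application does</summary>
      <description>
        <p>A paragraph or two the installer can show when describing
        and advertising the application.</p>
      </description>
      <url type="homepage">http://example.org/myapp</url>
      <screenshots>
        <screenshot type="default">http://example.org/myapp-main.png</screenshot>
      </screenshots>
    </application>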

Which brings me to another item of importance to the workstation. The Linux community has, for natural reasons, been very technical in nature, which has meant that some things that are not even a question on other operating systems have become defining traits on Linux. The choice of GUI development toolkit is one of these, and it has been a great tool the open source community has used to shoot itself in the foot for many years now. While users of Windows or Mac OS X probably never ask themselves what toolkit was used to implement a given application, it seems to be a frequently asked question for Linux applications. We want to move away from that with the Workstation. So while we do ship the GNOME Shell as our interface and use GTK+ for developing tools ourselves, including spending time evolving the toolkit itself, that does not mean we think applications written using, for instance, Qt, EFL or Java are evil and should be exorcised from the system. In fact, if an application developer wants to write an application for the Linux desktop at all, we greatly appreciate that effort regardless of what tools they decide to use. The choice of development toolkit is a choice meant to empower developers, not create meaningless distinctions for the end user. So one effort we have underway is to work on the necessary theming and other glue code to make sure that if you run a Qt application under the GNOME Shell it feels like it belongs there, which also extends to accessibility-related setups like the high contrast theme. We hope to expand upon that effort both in width and in depth going forward.

And on a somewhat related note, we are also trying to address the elephant in the room when it comes to the desktop, and that is the fact that the importance of the traditional desktop is decreasing in favor of the web. A lot of things that you used to do locally on your computer you are probably just doing online these days. And a lot of the new things you have started doing on your computer or other internet-capable device are actually web services as opposed to local applications. The old Sun slogan of ‘The Network is the Computer’ is more true today than it has ever been. We don’t believe the desktop is dead in any way or form, as some of the hipsters in the media like to claim; in fact we expect it to stay around for a long time. What we do envision, though, is that the amount of time you spend in webapps will continue to grow and that more and more of your computing tasks will be done using web services as opposed to local applications. Which is why we are continuing to deeply integrate the web into your desktop, be that through things like GNOME Online Accounts or the new webapps introduced in the Software installer. And as I have mentioned before on this blog, we are also still working on improving the integration of Chrome and Firefox apps into the desktop along the same lines. So while we want the desktop to help you use the applications you run locally as efficiently as possible, we also realize that you, like us, are living in a connected world, and thus we need to give you easy access to your online life to stay relevant.

There are of course a lot of other parts to the Fedora Workstation effort, but this has already turned into a very long blog post as it is, so I will leave the rest for later. Please feel free to post any questions or comments and I will try to respond.

GNOME 3.12 release comments

So the recent GNOME 3.12 release has gotten a very positive reception. Since I know that many members of my team have worked very hard on GNOME 3.12, I am delighted to see all the positive feedback the release is getting. And of course it doesn’t hurt having it give us a flying start to the Fedora Workstation effort. Anyway, for the fun of it I tried putting together a set of press quotes, kinda like how they tend to do for computer game advertisements.

  • “GNOME 3.12: Pixel perfect” “GNOME 3 has finally arrived” – The Register
  • “It is the GNOME release I have been waiting for” – Linux Action Show
  • “The Very Exciting GNOME 3.12 Has Been Released” – Phoronix.com
  • “…. a milestone feature update for users …” – eweek.com
  • “The design team has refined everything …” – omgubuntu.co.uk
  • “One of the big Linux desktops is updated” – TheInquirer
  • “High Resolution screens are best managed under Gnome 3.12” – laptopspirit.fr
  • “One of the most striking innovations..” – Heise.de
  • “has resurrected what was once the darling of the Linux desktop” – TechRepublic.com

Some of the quotes might feel a little out of context, but as I said I did it for fun, and if you end up spending time reading GNOME 3.12 articles to verify the quotes, then all the better ;)

Also you should really check out the nice GNOME 3.12 release video that can be found on the GNOME 3.12 release page.

Anyway, I plan on doing a blog post about the Fedora Workstation effort this week and will talk a bit about how GNOME 3.12 and later fits into that.

Transmageddon 1.0 released!

It has been a long time in the making, but I have finally cut a new release of the Transmageddon transcoder application. The code inside Transmageddon has seen a major overhaul as I have updated it to take advantage of new GStreamer APIs and features. New features in this release include:

  • Support for files with multiple audio streams, allowing you to transcode them to different codecs or drop them from the new file
  
  • DVD ripping support. So now you can use your movie DVDs as input in Transmageddon. Be aware though that you need to install things like lsdvd and the GStreamer dvdread plugin from gst-plugins-ugly for it to become available, and you probably also want libdvdcss installed to be able to transcode most movie DVDs.
  • You can now set language information on files with a single audio stream (see the sketch just below this list). I hope to extend this to also work with files that have multiple audio streams. If you rip a DVD with multiple audio streams, Transmageddon will preserve the existing audio information, so in that case you shouldn’t need to set the language metadata manually.
  • Enabled VP9 support in the code.
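
To illustrate the language-tagging feature mentioned in the list above, here is a minimal sketch of what setting a language tag looks like with the GStreamer 1.x Python bindings. This is not Transmageddon’s actual pipeline-building code: it leans on the taginject element from gst-plugins-good, and the file names are placeholders.

    #!/usr/bin/env python3
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    # taginject parses a GstTagList string; 'language-code' is the
    # standard GStreamer tag for stream language (ISO 639 codes).
    pipeline = Gst.parse_launch(
        'filesrc location=in.ogg ! oggdemux ! '
        'taginject tags="language-code=eng" ! '
        'oggmux ! filesink location=out.ogg')
    pipeline.set_state(Gst.State.PLAYING)
    # Wait until the remux finishes or fails, then shut down.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)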

There are some other smaller niceties too, like the use of blue default action buttons to match the GNOME 3 style better, and I also switched to a new icon designed by Jakub Steiner. There is also an appdata file now, which should make Transmageddon available in a nice way inside the new Fedora Software installer.

Also there is now an Advanced section on the Transmageddon website explaining how you can create custom presets that allow you to do things like resize the video or change the bitrate of the audio.

And last but not least, here is a screenshot of the new version.
[Screenshot: transmageddon-1.0-blue-button]

You can download the new version from the Transmageddon website, I will update the version in Fedora soon.

Update from GStreamer Hackfest at Google Office in Munich

To give the wider community a chance to see what happened during the GStreamer hackfest last weekend I put together this blog post. It is based on a summary written by Wim Taymans, so a big thanks to Wim for letting me reuse parts of his summary.

So last weekend 21 GStreamer hackers got together at the Google office in Munich to spend the weekend hacking on their favourite GStreamer bits. At this point in time we didn’t have any major basic plumbing tasks that needed tackling, so the time was spent hacking on a lot of different projects and using the opportunity to discuss designs and challenges with each other.

There were 3 people attending from Red Hat and Fedora: Wim Taymans, Alberto Ruiz and myself.

With the release of GStreamer 1.0 in September 2012 we drastically changed the way memory is handled in the multimedia pipeline, and a large body of work still lies in exploring and improving that new memory model and porting elements to it. We are also mostly working on improving the existing elements, with comparatively little new infrastructure work.

We are also seeing a lot of people from different companies contributing significant amounts of code to the official GStreamer repositories. This has traditionally been a much more closed effort, with various pieces of code living in multiple repositories, especially for the hardware acceleration bits. It is good to see that the 1.0 series brings all these efforts together again, with more coordination and a more coherent story.

HW acceleration

One of the large ongoing tasks is to improve our support for hardware accelerated decoding, effects and display. With 1.0 we can finally get this done cleanly and efficiently in a great many use cases.

Matthew Waters flew in from Australia to work on moving the gst-plugins-gl set of plugins into the core GStreamer plugins packages. He has been working on these plugins for a while now. Their goal is to use OpenGL to apply operations to the video, like rotating it on a cube or applying a shader. With the 1.0 memory management it becomes possible to do this efficiently with a minimal amount of texture uploads/downloads. More work is needed here; we can optimize things some more by delaying the work and running the shaders as part of the rendering operation.
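
If you want to play with the GL plugins yourself, a pipeline along the following lines shows the basic idea. Treat it as a hedged sketch: it assumes the gst-plugins-gl elements (glupload, gleffects, glimagesink) are installed under those names on your system.

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)
    # Upload test video to a GL texture, run a shader-based effect on
    # the GPU and display the result without downloading it again.
    pipeline = Gst.parse_launch(
        'videotestsrc ! glupload ! gleffects effect=twirl ! glimagesink')
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()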

Andoni Morales (Fluendo) has also been working on improving hardware acceleration on Android. He used some of the new features of 1.0 to make the Android codecs use zero-copy by implementing the texture-upload metadata on buffers. This allows the video sink to efficiently create a texture from the decoded data for display. Andoni also ported winks, a video capture source on Windows, to GStreamer 1.0.

Nicolas Dufresne (Collabora) has been working on adding a new set of decoders based on the mem2mem API in v4l2. Not many drivers provide this API yet, but it is implemented in some Samsung Exynos SoCs. We would also like to support other m2m operations later, such as color conversion, but for that we need to make some of our base classes support the required asynchronous behaviour of mem2mem. The memory management in our v4l2 elements has gone through several iterations of improvements during the 1.0 cycle, but it still is not entirely what it should be, and we agreed on what we should do to fix it in the near future. We also briefly discussed the need for a new event that can be used to reclaim memory from a pipeline; many elements that use hardware buffers need to free those before they can negotiate a new format with the hardware, so we need a way to make that possible.

Mathieu Bourron (Collabora) has been working on libva, the library for GPU-based video decoding and encoding on Intel hardware, and spent his time at the hackfest fixing up the SPU overlay element to enable hardware accelerated subpicture overlays in the video sink. Traditionally GStreamer would use the CPU to overlay the subpictures (of a DVD, for example) on top of the video images. With new GL-based sinks and hardware accelerated decoders this is very undesirable, and it can be done much more efficiently as part of the final rendering. In 1.0 we have the infrastructure to delay this overlay operation by attaching extra metadata (with the subpicture) to the video images when the video sink knows how to overlay them. We have been doing this with subtitles in cluttersink and other sinks for a while now, and soon we can also do it with subpictures.

Plugin Hacking

Arun Raghavan, GStreamer hacker and PulseAudio maintainer, worked on disabling the audio and video filters in playbin when passthrough mode is selected. In passthrough mode, a video or audio sink can directly handle the encoded media (think a Bluetooth headset that can handle mp3 directly, or a hardware sink that takes encoded data). He expanded on that work in a blog entry.

As a cool hack, Arun also made a source element to read from torrent files, so you can watch a movie while you torrent it. He provides more information on that element on his blog; it is actually really cool.

Thiago Santos (Collabora) continued his work to improve the DASH demuxer, reworking the buffering code to make it buffer less and more smoothly. DASH is one of the new formats (along with HLS and MSS) for streaming media over HTTP while adapting to bandwidth changes. On the server side it makes media available in various bitrates, while a client switches between bitrates depending on its measured network conditions. Andoni Morales also worked on a new dashsink element that implements the server side of the DASH format.

Mathieu Duponchelle, a former GSoC student, was trying to improve support for seeking in MPEG Transport Streams in order to use them in PiTiVi. Seeking in MPEG TS is not an easy thing, because the format is really optimized for streaming only. He got help from Thibault Saunier (Collabora), who was also hacking on PiTiVi and preparing a new release of gnonlin, GES and gst-python 1.2 (which he released on Sunday). Mathieu is one of the developers able to work fulltime on PiTiVi now thanks to the PiTiVi fundraiser, so be sure to contribute to that!

Jan Schmidt (Centricular), a long time GStreamer core hacker, was working on debugging some DVB issues and also ended up taking part in a lot of the general design and troubleshooting discussions happening during the hackfest, helping other people move forward with their projects.

Long time GStreamer hacker Edward Hervey (Collabora) was planning to do a lot of DVB hacking, but had to give up on that effort when it became clear that Google had signal-isolated the office for security reasons, so there was no DVB signal in the Google office. Instead he worked on merging some pending DVB patches and implemented GAP support in the MPEG transport stream plugin. GAP support deals with streams that have long periods of no media (like missing audio for some time on a DVD). It makes sure that downstream elements keep processing the silence instead of waiting for more data.

Applications

Meg Ford, a GSoC student mentored by Sebastian Dröge (Centricular), was working on GNOME Sound Recorder, fixing up the last bugs and preparing it for a new release.

I, Christian Schaller (Red Hat), was on a bug-fixing spree in Transmageddon (a transcoding application written in Python and GStreamer) and managed to reduce the number of known bugs to only 1. I fixed that last bug once I got home, so now I just need to hammer at Transmageddon for a bit to make sure I have caught all the corner cases, and then I can do a major new release with new features such as handling files with multiple audio streams, DVD ripping, VP9 encoding, setting audio stream language information, reducing decoding overhead for streams that we are going to throw away, and more. I also had help from Alberto Ruiz reviewing and cleaning up the Transmageddon code, freeing it from some ugly code that had survived many library updates and rewrites.

Alessandro Decina (Spotify) kept working on his patches to update the Firefox GStreamer backend to GStreamer 1.0. We hope to deploy this work in Fedora in the not too distant future. As a hack for the hackfest he provided patches to implement audio and video capture.

Wim Taymans (Red Hat) was hacking on a new library that can parse and generate MIKEY messages (RFC 3830). He wants to use this in the GStreamer RTSP server to negotiate SRTP (secure RTP) encryption parameters.

We had 2 people from the Swedish company AXIS, who provide network cameras that all run GStreamer and who contribute on a regular basis to the RTP and RTSP elements and libraries. Ognyan Tonchev was mostly writing some unit tests for RTSP and multicast handling in the RTSP server. Sebastian Rasmussen had been hacking on our watchdog element and the payloaders.

Infrastructure

Long time GStreamer hacker Stefan Sauer (Google) gave a demo of his idea for a tracing infrastructure in GStreamer. The idea is to place trace macros at strategic places that would send structured data to pluggable tracer modules. Some of the tracer modules could, for example, measure the CPU usage of a plugin or measure latency. The idea is to gradually replace our extensive (but unstructured) logging with this new trace infrastructure. This would allow us to do interesting new things, like sending the debug log to a remote machine or producing STF (Structured Trace Format) to analyse with standard tools. No immediate plans were made to merge this, but there seems to be very little resistance to getting it merged soon.

Core hacker Sebastian Dröge (Centricular) has been going over the current stream selection ideas. One of the long outstanding issues is that of switching streams between different languages: you have a movie in different languages and you want to switch between them. To achieve low latency, old data should be kept around for the streams that are not currently selected so that it can be quickly decoded and sent to the audio device. The idea is a combination of events to select a stream and to have the demuxer seek back in the stream on switches. No final conclusion or plan that can solve all the requirements has been reached yet.

Investigations have also begun into making decodebin deal with renegotiation. For example, when a new stream is selected we might need to use a different decoder for it, but also when new input is received decodebin should be able to reconfigure itself. The decodebin code is a complicated beast, so any change to it should be done carefully.

GStreamer maintainer Tim-Philipp Müller (Centricular) spent his time merging the new device probing and monitoring API (written by Olivier Crête from Collabora) that had been sitting in bugzilla for a while now. Its purpose is to be able to probe devices and their capabilities, such as v4l2 and ALSA devices. It is also possible to be notified when devices appear and disappear in the system. An implementation for pulseaudio devices and another for v4l2 devices using gudev have been committed as well. This reimplements a feature that was in 0.10 but got cut from 1.0 because we were not happy with the old design. One of the complications was the fact that we ran out of bits in one of our enums, so we needed to find a good solution for that.
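
For the curious, this is roughly what the new API looks like from Python once the merge has landed; a minimal sketch assuming the GstDeviceMonitor API and a system with the v4l2 or pulseaudio device providers installed.

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    monitor = Gst.DeviceMonitor.new()
    # Only report video capture devices, e.g. v4l2 webcams; pass a
    # Gst.Caps instead of None to filter on capabilities too.
    monitor.add_filter('Video/Source', None)
    monitor.start()
    for device in monitor.get_devices():
        print(device.get_display_name(), device.get_device_class())
    monitor.stop()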

We briefly discussed how to implement the SKIP seek flag. This extra flag can be used when doing fast forward or reverse playback, and instructs the decoders that they are allowed to throw away data in order to perform the trick mode more efficiently (at reduced accuracy). There is a prototype for AVI playback, implemented a while back, that we discussed a bit. We’ll see if someone takes up the task of finalizing this work and implementing SKIP mode in more demuxers.
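
From the application side, using the flag would look something like the sketch below: a 2x fast-forward seek that tells the decoders they may drop data. The fast_forward helper is hypothetical and assumes an already-playing pipeline; only the SKIP flag itself is the point here.

    from gi.repository import Gst

    def fast_forward(pipeline, rate=2.0):
        # Start the trick-mode seek from the current position.
        ok, position = pipeline.query_position(Gst.Format.TIME)
        if not ok:
            return False
        # FLUSH makes the seek take effect immediately; SKIP allows
        # decoders to throw away data at reduced accuracy.
        return pipeline.seek(rate, Gst.Format.TIME,
                             Gst.SeekFlags.FLUSH | Gst.SeekFlags.SKIP,
                             Gst.SeekType.SET, position,
                             Gst.SeekType.NONE, -1)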

I took some photos during the event to capture the spirit and put them on Google Plus for your viewing pleasure.

A big thank you to Google for hosting us and providing us with free lunch and free drinks through the weekend.

PiTiVi fundraiser passes the 10,000 Euro mark

Hi, so I wrote a blog entry asking people to contribute to the PiTiVi fundraiser effort last week. I am happy to see they have already reached 10K Euro, but their goal of course is to get more. I am also happy to say that the GStreamer Project decided to support the fundraiser using some of the money we have earned over the years doing Google Summer of Code and organizing the GStreamer Conference, which added 2500€ to the effort. You can read about the GStreamer donation here.

Anyway, I would once again like to ask people to contribute to this effort. There is a proven team behind this fundraiser, so this is not like a Kickstarter where people are starting from scratch and you have no idea where they will end up going or what they might achieve. This is an existing project that just needs some polish to get it to critical mass. So visit the fundraising page and make your pledge.

PiTiVi Fundraising Campaign – Why you should donate

So the PiTiVi team announced the PiTiVi fundraising campaign on Friday. I sincerely hope they are successful, because I think we really need a good non-linear video editor that runs on the Linux desktop, especially one that is built on top of GStreamer and thus shares the core multimedia infrastructure of the rest of your desktop. The current PiTiVi team has the right skills and enthusiasm in my view, and their project is scoped in a manner that makes me believe they can pull it off. PiTiVi is already functional, and this fundraiser is more about accelerating ongoing development as opposed to creating something from scratch. And their funding requirements for reaching the base milestone are rather modest; for example, if just the employees of the 3 main Linux distribution companies pitched in 3-4 Euro each, it would be enough to cover the base funding goal.

But I think this fundraiser is important beyond the PiTiVi project too, because it can serve as a precedent showing that it is possible to do significant crowdfunding around open source development, and thus open the gate for more projects to accelerate their development this way. There are a lot of great open source projects out there created on a volunteer basis, which is great, and like PiTiVi they will flourish even without crowdfunding, but crowdfunding can be a great way for the developers of the most interesting projects to focus solely on their project for some time and thus accelerate its development significantly. So in the case of PiTiVi, I am sure the team will be able to achieve all the goals they have outlined in the funding campaign even if the fundraiser raises no money; the difference is whether they do it in 1 year or in 5.

So personally I donated 60 Euro to the PiTiVi fundraiser, and I hope everyone reading this blog entry will do the same. Let’s give the people developing this and other great open source tools our support and help them make their great software even better. This fundraiser is run by people passionate about open source and their project, because, to be fair, no matter whether the effort ends up raising closer to 30 000€ or closer to 100 000€, it is in no way what anyone could call a get-rich-quick scheme, but rather a modest amount that will let two talented open source developers spend time working fulltime on a project we all want.

And remember, whenever a major project using GStreamer gets a boost, it gives all GStreamer projects a boost. For instance, in my own pet project, Transmageddon, I have gotten a lot of help over the years from general improvements in GStreamer made due to the involvement of PiTiVi developers, and I have even ended up copying code from PiTiVi itself a few times to quickly and easily solve some challenges I had in Transmageddon.

Hardware encoding in Transmageddon

So thanks to the new GStreamer 1.x VAAPI package for Fedora I was able to do a hardware encode with Transmageddon for the first time today. This has been working in theory for a while, but because I migrated Transmageddon from GStreamer 0.10.x to GStreamer 1.x at the wrong time relative to the gstreamer-vaapi development timeline, I wasn’t able to test it before.

Below is the GStreamer pipeline that Transmageddon uses now if you have the Intel hardware encoder packages installed. Click on the image for the full image.
[Screenshot: libva-pipeline-extract]
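
As a rough textual equivalent of the pictured pipeline, the sketch below shows the general shape of a VAAPI hardware encode. Element names vary between gstreamer-vaapi releases, so vaapiencode_h264 (from the 0.5.x series) and the file names are assumptions here, not something taken from Transmageddon itself.

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    # Decode the input, convert to a format the encoder accepts, then
    # hand the frames to the Intel hardware encoder and mux the result.
    pipeline = Gst.parse_launch(
        'filesrc location=input.webm ! decodebin ! videoconvert ! '
        'vaapiencode_h264 ! h264parse ! matroskamux ! '
        'filesink location=output.mkv')
    pipeline.set_state(Gst.State.PLAYING)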

I am also close to being able to release the new version of Transmageddon. Most of the recent bugs found have turned out to be in various GStreamer elements, so I am starting to feel confident about the current pipeline-building code in Transmageddon. I think that during the GStreamer hackfest in Munich next month I will make the new release with nice new features such as general support for multiple audio streams, DVD ripping support and language tag setting support.

[Screenshot: transmageddon-git-master]

Excited about Cockpit

So we had the DevConf conference here in Brno this weekend. One of the projects I am really excited about is Cockpit. Cockpit is a new server administration tool developed by Red Hat engineers which aims at providing a modern-looking and user-friendly interface for your servers. There have been many such efforts over the years, but what I feel makes this one special is that it got graphical designers and interface designers involved to ensure that the user experience is kept in focus instead of being taken hostage by underlying APIs or systems. Too many such interfaces, be they web-based or not, tend to both feel and look clunky, for instance sometimes exposing features not because anyone would realistically ever want them, but because the underlying library happens to have a call for them.

Cockpit should also hopefully put the final nail in the coffin of the so-called ‘server desktop’. The idea that you need to be able to run a graphical shell using X on your server adds a lot of pain for little gain in my opinion. The Fedora Server product should hopefully become a great showpiece for how nice a Linux server can be to use and configure when you have something like Cockpit available.

There were some nice videos shown at the conference demonstrating what is already in Cockpit, so hopefully they will be available online soon. In the meantime I recommend taking a look at the Cockpit web page.

Getting Wayland Input handling ready

I noticed an article on Phoronix today about libinput which made me think I should post a little Wayland update again. So, libinput is developed by Peter Hutterer, who is part of the Graphics team here at Red Hat and our resident input expert. He is developing libinput as part of our work to get Fedora Wayland-ready.

That said, input is a complex area, and if we do end up not having a Wayland option with feature parity with the X.org option in Fedora 21, then not having gotten input sorted in time is the most likely scenario. Still, we are charging ahead with the goal of getting things ready for Fedora 21, but in our last status meeting we did flag input as our biggest risk. Luckily Peter is not alone in his efforts on libinput: there is a healthy amount of community contributions, and at Red Hat we have recently had Hans de Goede join Peter on input. So we are doing our utmost to make sure the pieces fall into place.

Our minimum todo list for input, which will put Wayland on par with X for at least 90% of our users, is as follows:

  • keyboard works as-is
  • mouse supports left/right-handed button mapping
  • mouse/touchpad middle mouse button emulation
  • touchpad scrolling and tapping
  • touchpad software-emulated buttons work on clickpads
  • touchpad disable-while-typing

But there are of course other items too, like Wacom tablet support, which is of interest to a much smaller segment of our users but still important to get done. We might have to push some of these more niche items onto the F22 timescale.

Also, if anyone is wondering about gamepads and such, we don’t currently have any concrete plans around them in the context of Wayland. When we spoke to Valve and the SDL team, they told us they currently access game controllers directly through the kernel interfaces and preferred to keep doing that. So we decided not to try to second-guess them on this, considering they have been doing game development for years and we haven’t :)
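
To make ‘accessing game controllers directly through the kernel interfaces’ a bit more concrete, here is a small sketch that reads the classic Linux joystick device. The device path is an assumption, and this is the legacy js interface rather than anything SDL- or Valve-specific.

    import struct

    # struct js_event from linux/joystick.h:
    # __u32 time (ms), __s16 value, __u8 type, __u8 number
    JS_EVENT_FORMAT = 'IhBB'
    JS_EVENT_SIZE = struct.calcsize(JS_EVENT_FORMAT)

    with open('/dev/input/js0', 'rb') as device:
        while True:
            data = device.read(JS_EVENT_SIZE)
            if not data:
                break
            time_ms, value, ev_type, number = struct.unpack(
                JS_EVENT_FORMAT, data)
            print(time_ms, value, ev_type, number)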

Integrating Chrome Applications into your desktop

With the growing popularity of ChromeOS and Chrome applications, we have been doing a little research project inside Red Hat to make such applications a bit more integrated into your Fedora desktop. As you might know, if you go into the ‘Tools’ menu in Chrome/Chromium there is an option called ‘Create application shortcuts’. If you choose it you can turn any web page or Chrome application into something easily reached directly from the desktop, and with a lot of Chrome apps now working offline this is a quite nice feature. But there are some issues with this setup. First of all it uses the app icon as the application icon, which looks really ugly compared to the other icons on your desktop; secondly it is a little cumbersome to have to go into that menu to set up your application; and lastly there is no way of uninstalling it again short of manually deleting the generated .desktop file.

Well, our resident WebKit developer, Tomas Popela, has created a Chrome/Chromium extension which you can download using this link.
To install it you need to go to the extensions page (chrome://extensions/) and enable ‘Developer mode’. Once you have done that you can, for instance, drag and drop the downloaded extension onto the Chrome extensions page to install it. Once it is installed it will automatically create a desktop entry for any application you install from the Chrome store, using a nice-looking icon. It will also remove the entry again once you uninstall the application.
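
For reference, the entries the extension generates are ordinary freedesktop.org .desktop files, something along the lines of the sketch below. The application id and icon name are invented for illustration; the --app-id flag is how Chrome launches an installed app in its own window.

    [Desktop Entry]
    # Hypothetical example; the app id below is made up.
    Type=Application
    Name=Example Chrome App
    Exec=/usr/bin/google-chrome --app-id=aaaabbbbccccddddeeeeffffgggghhhh
    Icon=chrome-aaaabbbbccccddddeeeeffffgggghhhh
    StartupWMClass=crx_aaaabbbbccccddddeeeeffffgggghhhh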

Some screenshots of this feature in action:

As you can see the ChromeApp is using its own image in the Shell activity menu and is session managed separately from a ‘normal’ Chrome window.

[Screenshot: angrybird-activity]

And you can search and find your Chrome Apps in the GNOME Shell activity overview just like any other application.

Unfortunately the shelf life of this extension is limited, as it relies on Chrome supporting NPAPI, which it will stop doing in April according to current plans. But we are trying to work with Google to see if we can make this standard functionality going forward.

For those interested you can find the source code here on github.

Enjoy!