Back from GStreamer Conference in Prague

I am back home for about a day today after spending a week in Prague for the GStreamer Conference and LinuxCon Europe.

Had an absolute blast and I am really happy the GStreamer Conference again turned out to be a big success. A big thanks to the platinum sponsor Collabora and the two silver sponsors, Fluendo and Google, who made it all possible. Also a big thanks to Ubicast, who were there on-site recording all the talks. They aim to have all the talks online within a month.

While I had to run back and forth a bit to make sure things were running smoothly, I did get to some very interesting talks, like Monty Montgomery from Xiph.org talking about the new Opus audio codec they are working on with the IETF and the strategies they are using to fend off bogus patent claims.

On a related note I saw that Apple released their lossless audio codec ALAC as free software under the Apache license. Always nice to see such things, even if ALAC has for the most part failed to make any headway against the already free FLAC codec. If Apple would now join the effort around WebM, things would really start looking great in the codec space.

We ran a Collabora booth during the LinuxCon and Embedded Linux days that followed the GStreamer Conference. Our demos showcasing an HTML5 video editing UI using GStreamer and the GStreamer Editing Services, and video conferencing using Telepathy through HTML5, were a great success, and our big-screen TV running the Media Explorer media center combined with Telepathy-based video conferencing provided us with a steady stream of people to our booth. For those who missed the conference, all the tech demos can be grabbed from this Prague-demo Ubuntu PPA.

So as you might imagine I was quite tired by the time Friday was almost done, but thanks to Tim Bird and Sony I got a really nice end to the week as I won a Sony Tablet S through the eLinux wiki editing competition. The tablet is really nice and it was the first tablet I ever wanted, so winning one was really great. The feature set is really nice, with built-in DLNA support, the ability to function as a TV remote, and support for certain PlayStation 1 titles. The ‘folded magazine’ shape makes it really nice to hold, and I am going to try to use it as an e-book reader as I fly off to Lahore tomorrow morning for my sister-in-law’s wedding.

OpenOffice vs LibreOffice – the next chapter

Been watching with interest the latest moves around OpenOffice. While a lot of people see it as almost a direct attack on LibreOffice, to me personally it seems like a clumsy result of Oracle trying to ditch OpenOffice without frustrating their main OpenOffice business partner, IBM. With the Lotus Symphony suite based on OpenOffice under a special license from Sun/Oracle, I wouldn’t be surprised if switching to the pure LGPL LibreOffice seemed painful to them. And thus the idea of an Apache-licensed OpenOffice must have seemed appealing.

Personally I hope people stick with LibreOffice and build upon their existing success. Chasing a big company like IBM might seem tempting, but big companies change their minds and priorities all the time; just look at Nokia. So if you have something viable without a big company involved, stick with it, and let the big company contribute on your terms if they want. That way the project can stay around even when the big company goes elsewhere.

Real Networks makes me smile

As many of you probably have noticed, GStreamer is experiencing fast growth in the embedded and mobile space, with major companies shipping phones and other kinds of devices using it, the Palm Pre being among the latest. Of course not everyone appreciates this as much as we do at Collabora Multimedia.

Jan pointed me to this little piece of FUD that had been added to the GStreamer Wikipedia article recently:

While the project is licensed under LGPL, there is no copyright indemnity for source code available from Fluendo or other sources primarily because the GStreamer project does not collect copyright assignments from each and every one of its contributors. This incurs significant risk on commercial distributors of GStreamer.

The funny part was that the IP address of the anonymous person making this change was 207.188.29.244. So out of curiosity we did a little whois on that IP address, and whose name do we see popping up? Well, none other than Real Networks. Seriously guys, if you are going to try to add anonymous FUD to Wikipedia articles you might want to do it from outside your own network…

Anyway, I have removed it now, but it is of course still viewable in the Wikipedia history.

Stephen Fry on FSF anniversary

Just noticed today that the FSF managed to get Stephen Fry to make a video in celebration of the 25th anniversary of the FSF. Been a fan of Stephen for a long time, ever since I first saw him in Blackadder many years ago, so it is cool to see him doing this sort of promo for free software. Been aware that Stephen Fry has advocated free software on his blog for some time, but it is still nice to see such direct interaction with the community. The video is available in Ogg format using Theora video and Vorbis audio, which also makes me happy. I even ended up emailing them saying I would be happy to convert their source material into an HD Dirac+Vorbis version if they are interested. Every time I see stuff being published in free formats it makes me feel very good about the work we are doing here at Collabora and the goals we have set for ourselves.

Dirac Everywhere

On the topic of Dirac there is a lot of fun stuff happening. One thing I failed to mention before is that there is a Dirac QuickTime component available now. Still alpha quality, but part of the effort to reach out to as wide a community as possible with Dirac. There has also been work happening on wider Dirac support in GStreamer and on integrating that support better. For instance Thiago merged a patch from David Schleef to add Dirac support to the new QuickTime muxer Thiago created as part of the Summer of Code. It already works well, but we need to do a little Pitivi hacking to enable it there. Edward hopes to get at that before the weekend. Finally Sebastian Dröge merged the transport stream muxer library and plugin into gst-plugins-bad, which can also mux Dirac video (the library used to be hosted on the old Dirac website). Sebastian will also be working on making sure that muxer can create some PlayStation 3 friendly files going forward.
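Just to give a feel for how little plumbing this needs, here is a minimal sketch of muxing Dirac into a QuickTime file from Python. It assumes GStreamer 0.10 with the Python bindings and the schroenc (Schrödinger Dirac encoder) and qtmux elements from gst-plugins-bad installed; the filenames and settings are purely illustrative and element names may differ on other GStreamer versions.

    #!/usr/bin/env python
    # Minimal sketch: encode a test pattern to Dirac and mux it into a
    # QuickTime (.mov) file. Assumes GStreamer 0.10 with pygst and the
    # schroenc and qtmux elements from gst-plugins-bad installed.
    import pygst
    pygst.require("0.10")
    import gst

    pipeline = gst.parse_launch(
        "videotestsrc num-buffers=300 ! video/x-raw-yuv,width=320,height=240 "
        "! schroenc ! qtmux ! filesink location=dirac-test.mov"
    )

    pipeline.set_state(gst.STATE_PLAYING)
    # Block until the encode finishes or an error is posted on the bus.
    pipeline.get_bus().poll(gst.MESSAGE_EOS | gst.MESSAGE_ERROR, -1)
    pipeline.set_state(gst.STATE_NULL)

Swapping qtmux for the new mpegtsmux element should, in the same way, give you Dirac in an MPEG transport stream instead.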

Also, thanks to Fluendo and Zaheer, we now have a working transport stream demuxer in gst-plugins-bad, which of course also handles Dirac.

Centralising GStreamer plugins

On the back of this I think we will try to make a bigger effort to merge some of the external plugin repositories into GStreamer proper. For instance at Collabora we have the gst-plugins-farsight module, which should have its plugins moved over. Our latest team member Mark Nauwelaerts has his GEntrans plugins, which should also move over. Having a central set of repositories and plugins makes them easier for everyone to find and will also make maintenance easier. And of course it reduces the risk of people doing something which someone else has already done.

How to mix code with different licenses

Got a question on IRC today about the licensing of a specific file in GStreamer CVS, as it was under an MIT license instead of the LGPL. While we strive to keep our licensing simple by making all new code LGPL, or in some specific cases dual licensed, there are a few cases where we have code which is under the MIT or BSD license. This creates a situation where we have some files in a directory under the LGPL while others are MIT, for example. While I think we have kept things on an even keel within GStreamer, I have noticed that there is a lot of confusion in the open source community in general about how you deal with MIT and BSD code in a GPL/LGPL context. In some extreme cases I have even seen people just cut'n'pasting MIT code into their GPL project, believing that the MIT ‘do what you want’ license includes the right to relicense the code. It does not.

Anyway, to clear up the details for myself I contacted Luis Villa to get some help understanding some of the possible corner cases. Luis then pointed me at this great resource from the Software Freedom Law Center on how MIT and GPL code can co-exist in your codebase. I absolutely recommend reading it to better understand the implications.

Ubuntu Disappointment

One thing I have ranted about multiple times on my blog over the years is how Linux distributions have failed to provide their content in Ogg format. Especially when the content is targeted at Linux users, it surprises me that they do not make sure to have the video available in the format that basically all Linux users have support for out of the box. That said, both Red Hat and Novell have actually taken this feedback to heart, and more often than not they do provide Ogg videos these days (in addition to various other formats).

It saddened me then when I checked out the link in Jono Bacon's latest blog entry, where the Ubuntu MOTU videos seem to be available only in the proprietary Flash format. For a distribution which likes to drape itself so loudly in the colours of community and freedom this is a huge letdown. And while you can view these videos with things like swfdec, you still need to have the patent-encumbered codecs available through gst-ffmpeg to actually play them. Would it be so hard to also offer those videos as Ogg Theora torrents, for instance?

Update: Talked to Jono. Turns out they do plan on making Oggs available, but haven't gotten around to it yet. While I kicked Ubuntu here, it wasn't really about them specifically, but about the fact that even though the tools for creating Oggs have gotten quite good and widespread over the last few years, the open source and free software community is still rather lackluster in its willingness to help push the free formats. It's kinda like how I used PNG images on my website even before there was widespread PNG support, because if my page got just one person (hi mom) to use a PNG-supporting browser it was a step forward.
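To illustrate how far the tools have come, here is a rough sketch of what transcoding an existing video into Ogg Theora/Vorbis looks like with GStreamer 0.10 from Python. The input and output filenames are just placeholders, and decoding the source file of course still requires whatever demuxer and decoder plugins that particular format needs.

    #!/usr/bin/env python
    # Rough sketch: transcode an existing video file to Ogg Theora/Vorbis
    # using GStreamer 0.10. Filenames are placeholders.
    import pygst
    pygst.require("0.10")
    import gst

    pipeline = gst.parse_launch(
        "filesrc location=input.flv ! decodebin name=dec "
        "oggmux name=mux ! filesink location=output.ogv "
        "dec. ! queue ! ffmpegcolorspace ! theoraenc ! mux. "
        "dec. ! queue ! audioconvert ! vorbisenc ! mux."
    )

    pipeline.set_state(gst.STATE_PLAYING)
    # Block until the whole file has been transcoded (or an error occurs).
    pipeline.get_bus().poll(gst.MESSAGE_EOS | gst.MESSAGE_ERROR, -1)
    pipeline.set_state(gst.STATE_NULL)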

Nokia on Ogg

Slashdot linked this weekend to a Nokia position paper on the use of Ogg in the HTML5 proposal for the media elements. For those of us who have followed the HTML5 discussion for some time there is little new in the position paper; it simply regurgitates the same arguments the Apple Safari people came up with.

Let me start by saying that I know Nokia is a big organisation and that the opinions expressed by Stephan Wenger in the linked position paper do not reflect the opinion of everyone at Nokia. So unlike the Slashdot crowd I am not putting this position paper at the feet of everyone at Nokia. I would point out that Stephan Wenger’s job at Nokia seems to consist of traveling the world attending MPEG meetings eating canapés, so there is probably a lot of self-interest in that position paper also :)

That said, I do feel it is right to address some of the concerns and claims made in that document and by some Slashdot commenters.

To start with the Slashdot headline: the definition of proprietary in this context could, I guess, be open to debate. I have myself referred to technologies as proprietary if they are mostly a one-group or one-company effort, even if the code is available. Whether that description fits Ogg, Vorbis and Theora is another matter, but I will for the sake of argument allow that what makes something proprietary in the context of software is open to some discussion.

What I felt was the biggest red herring in the paper was actually the musing about DRM. There is nothing stopping you from DRM-protecting Ogg, Vorbis and Theora content, and thus his arguments about the need for DRM support seemed rather misplaced. Sure, you can not play back a DRM-protected file on a system only supporting normal playback, but that is true for any format. You can not play back a Windows Media DRM’ed file on a non-DRM-supporting Windows Media stack either. Same for playing back a FairPlay-protected AAC file on a system with no FairPlay support. And unless he wanted to also standardize on a specific DRM system in HTML5, it doesn’t matter what format you use, because if people use different DRM systems you don’t get better interoperability anyway. An OMA DRM-protected AAC file does not work with a FairPlay-enabled AAC playback system and vice versa.

He also spent some time nagging about what the currently popular formats on the net are and what terms they are commonly available under. The cutest argument, however, was how he managed to suggest that if the W3C accepted a royalty-bearing set of codecs for this specification, it could at the same time try to push for more royalty-free stuff through MPEG and ITU-T… yes… sounds brilliant… no better way to convince organizations creating royalty-bearing standards that they need to do royalty-free standards than to start paying money to use their standards… errrrr, NOT.

His section on alternative ways forward is also quite hilarious. Proposal 1: leave it up to market forces. Dude, standardizing on Vorbis/Theora is part of creating market forces. And the claim that the market had quickly chosen something at earlier points in other, unrelated markets was also quite amusing. If he had instead looked at the Web of today, there is a big mix of stuff being used, like Windows Media, QuickTime, Flash Video, DivX, Real Media and more. And it’s been like that for a long time. The only way the HTML5 media tags have any hope of causing consolidation is of course to propose specific codecs for HTML5. If not, there will be zero motivation for anyone to move away from their current Windows Media or QuickTime or Flash or whatever solution.

His second option was to adopt some ancient standards which were sure to be out of patent. One would have hoped a position paper from a world-leading organisation like Nokia would be held to a higher professional standard than being based on the ‘author’s personal experience’, to quote the article. That said, Theora as it is could be better, and it is in the process of getting a lot better thanks to Monty’s ongoing work.

His last proposal is of course the oldest, most tried and tested way of trying to derail an effort: propose to set up a committee to investigate the issue…

Hopefully the next time Nokia wants to write a position paper on something, they will choose an author who wants to be part of the solution and not part of the problem.

The Sun and NetApp Ordeal

Read Miguel’s post about the patent suit between Sun and NetApp. I guess both parties are carefully avoiding stating anything which is an outright lie, so instead they tiptoe around the issue a bit.

Here is my guess of how things went down:

Step 1: NetApp first approached StorageTek behind the cover of a third-party intermediary, seeking to purchase STK patents. This is what Jonathan mentions in his blog, and nothing in the blog entry of the NetApp CEO contradicts this; it just omits it.
Step 2: Sun, when asked about the patent purchase, turns around and says ‘sorry, not for sale, but you can license.’ Having talked to the third party, Sun makes this offer directly to NetApp. So, as NetApp’s CEO says, Sun contacted them with a list of patents (the same list NetApp had asked to purchase) and said they were available for licensing.
Step 3: NetApp realizes that their patent purchase request has backfired a bit and starts looking for a way out. Their first solution is to look through their own patent portfolio, probably hoping to find something to cross-license with Sun. (Or, if they got stupid with greed, they tried to both get Sun to agree that they were not infringing on Sun’s patents and at the same time demand patent fee payments for their own.)
Step 4: Licensing lawyers/people at Sun are faced with what is more than a ‘standard’ patent licensing agreement and for some reason try to just drop it instead of dealing with it. (A scarily common event in many big companies.)
Step 5: NetApp, either due to worry about future legal action from Sun or due to greed, decides that a request for a summary judgement on the validity of the Sun patents (which they originally wanted to buy) and a countersuit would be the best way forward.
Step 6: NetApp’s CEO presents his view in a blog post in the hope of not getting too much of a bad reaction from the open source/free software community.
Step 7: Sun’s CEO replies in his own blog.

All the above is of course just guesswork on my part, but this set of events would not contradict either of the two versions of what happened.

My guess is that NetApp just got greedy in this process and started behaving stupidly. Probably in the end they made the same fatal mistake that SCO did: they assumed the cost of a lawsuit is what you pay your lawyers. Instead, the real cost of a lawsuit will often be the collateral damage it inflicts on your business. NetApp might end up experiencing the same thing SCO did (although on a smaller scale): that suddenly their customer base wants to avoid doing business with them, as they are seen as a patent troll and an enemy of open source software.

Is releasing the code always important?

Been briefly taking part in and watching a discussion about whether Launchpad should be released. The debate made me think about whether all code releasing is truly important or even a good thing.

Once upon a time I was writing articles for a now defunct news site called linuxpower.org. For this site a special publishing system had been written. I know Jeremy considered releasing the code we used for the site a couple of times, but in the end I remember him concluding that the code wasn’t really in a release-worthy state and that he didn’t have the time or the interest to clean it up just to add yet another half-done publishing system to the world.

While we were all strong supporters of free software, none of us had any problems with this decision. Part of the reason for that is that releasing the code for something doesn’t automatically make it useful to people. In fact it may only be a distraction, as you get more useless crap showing up on Google when you are trying to find something.

For the release of source code to be truly useful, the code needs to be in a state where it has been prepared for consumption by anyone other than the original creator. Getting hold of a source package that does not compile or run, because you don’t have access to the 7 post-it notes with manual instructions or the 19 steps stored only in the memory of the creator, and because it uses some database tables you don’t have an SQL script to create, tends to be of abysmally little value.

A lot of source code is written by one or two people for their own private or professional use. Code written like that often uses a lot of shortcuts to achieve its tasks: hardcoded values, no code comments, no documentation, no real build system, reliance on a database structure that has been created manually and incrementally over a period of time, and so on. Thus sending that code out there doesn’t make it instantly useful. So unless your application is truly special, nobody will probably ever bother spending the weeks or months it would take to make it useful to themselves, or the even longer period it would take to make it useful to the world at large.

That said, there are of course cases where even such code could be useful, for instance if the code documents a certain piece of hardware or file format. But once again it would require the code to actually correctly document the hardware or file format in question; sending out a file called nvidia-driver.tar.gz which contains a driver you tried to make by trial and error, but which never did anything apart from cause 4 of your graphics cards to stop working permanently, is probably not doing anyone any favours. At least not without a lot of code comments and a big warning.

Which brings me back to trying to pressure someone to open source something. In many cases, unless the person asked to release some code actually wants to release it to the world, and thus is willing to take the time and effort to make sure the world would truly be able to use it, getting the code released would probably be of little or no value. In fact it might just be adding to the noise, making googling for actually useful code a little harder.

So, in terms of Launchpad: I am sure it could be a useful tool for various people or groups if released, but a release means more than doing ‘tar -cvf lp.tar /var/www/’. Thus, unless one can convince Canonical that there is true value for them in spending the time and the money to prepare LP for a release and to maintain that release as a public project, all that is achieved is probably getting a big tarball of useless crud put onto the net while wasting developer time on an effort of little value.

In the meantime, maybe the effort should instead be spent on improving existing projects which already have a feature set similar or close to what Launchpad offers.

Patently troublesome

Saw another article today where Ballmer talks about the Novell/Microsoft deal. Once again he demonstrates, in my opinion, how extremely broken the whole patent system around software is and how companies are trying to abuse that brokenness.

While I have little love for organisations like MPEG LA, at least they clearly define what they license out. If you take a license for MPEG-4, for instance, you will get a full list of patent numbers and the nations they apply to. If one would like to challenge or work around those patents, one would at least be able to figure out what one is up against.

In the Microsoft case they are not licensing something concrete for a specific amount of money. Instead they are basically saying ‘we have a thicket of patents and we think an unquantified subset of them applies to you; pay a fee or you risk a lawsuit’. So if you want to do a risk assessment or try to work around these patents, your only option is to dig through the global (primarily US) patent office databases for anything concerning Microsoft or companies bought by Microsoft and try to figure out whether any of those patents apply to anything you do or have. The cost of such a move is probably prohibitive.

Of course, if you do find some patents which could apply to something you do, then the question of whether they should have been granted in the first place comes up. You then have the option of spending lots of money on trying to find prior art to invalidate the patent(s) in question. But the problem here is that most companies who do patent blackmail tend to make sure that their licensing fees are lower than the expected cost of getting their patents invalidated, so you are stuck in a lose/lose situation. You can give in to their crooked ways and license their patents no matter how bogus, or you can try to fight them and end up spending even more money. One could dream of a situation where the cost of any patent prior art research and litigation was covered by the US patent office, as they are the ones primarily to blame for the current mess.

Not sure this situation can be fully remedied without the US doing a full overhaul of their patent system, but maybe a stopgap measure would be a law that forbids claiming patents against a competitor without at least being specific about which patent applies and which application implements it. That would put much more of the cost on the would-be attacker instead of the defendant.