So I spent quite a bit of time this weekend working on an article I’ve been brewing on for quite a while. I don’t know if the article will ever reach a state where I publish it, but as I was writing it, it turned out that people were already blogging about working on two of the things I was covering. So a big thanks to John Stowers for his work on Conduit and to Ryan Lortie for his work on panelcompositebin. Thanks to guys like you doing great stuff, my article almost became redundant before it was even out of first-draft status.
The endless “Linux ready for the desktop” discussion
Slashdot has a story from a guy giving up on using Linux on the desktop. Looking into it, what he is actually giving up on is having a ‘rogue’ Linux system seamlessly integrate into a Microsoft environment. The community is getting better all the time at providing technologies that emulate your Windows software of choice, but unfortunately Microsoft is not standing still, so it’s chasing a moving target. So unless you have an IT organisation which is interested in truly supporting Linux, you will probably always have some pains. On the other hand, I do believe that for a lot of corporate environments Linux is ready for their desktop, as long as they actually want Linux on their desktops, meaning they tune their purchasing strategies and software choices accordingly.
That said, I have myself been thinking about the state of the desktop. I too have been part of this community for a long time now, and there always seems to be a new big hurdle to cross before we reach desktop nirvana. Sometimes it can be a bit discouraging. But if I look back to when I started out with Linux and compare the problems then to the problems we face today, I realize that we have gotten a lot closer to where we want to be. A lot of applications simply didn’t exist on Linux when I started out. In fact, the only two advanced desktop applications I can remember being at a level where they fulfilled my needs were Netscape and Gimp. Today there are a host of applications in most major categories; ones that come to mind are Inkscape, OpenOffice, Evolution, Gaim, Ekiga, Totem and Rhythmbox.
For instance, I remember getting my friends to switch to Yahoo Messenger, as it was the only IM service I managed to get working under Linux at the time.
Hardware support has gotten a lot better too. When I first installed Linux on my laptop back then, I spent a full weekend getting basic audio working by grabbing CVS versions of ALSA and trying to figure out magic options in the alsarc file. It also seemed like the only time hardware got detected under Linux at that point was during install, so when I changed hardware I tended to end up re-installing Linux on the machine. That problem could have been avoided had I understood more of how Linux worked back then, but today it’s not an issue, as installing new hardware seems rather automated. I also remember spending a lot of time trying to get Linux to read the ‘Joliet’ CDs I burned under Windows; Linux at the time only supported a different standard, one which only a single, silly expensive burning application I ever saw under Windows supported.
CD burning under Linux only existed in the form of the command-line cdrecord application, which took me quite some time to figure out.
Today my burning needs are mostly taken care of by Nautilus.
Hardware support is still an issue today, as it was back then, but it is much less of one, and hardware that is supported tends to be supported in a much more ‘real’ way. Back when I started, ‘supported’ meant ‘you can get it to work if you spend a few days on it’; today ‘supported’ means it will plug and play.
We have succeeded in creating standardized abstractions or subsystems for most things today, so when someone writes a new driver your application doesn’t need to support it specifically anymore. As long as it is written against this shared interface, it will just work. I am sure there are still some subsystems which are not perfect yet in this regard, but let’s face it: we are currently working on fixing usability issues more often than we are looking into the ‘how can we get this type of thing to work at all’ kind of issues.
It’s like libgimme-codec. We are no longer trying to solve the problem of ‘how do we get any kind of support for media playback’; instead we are trying to solve the problem of ‘how can we do this in the most user-friendly way possible, given the constraints we are facing’.
We might not be 100% there yet, but looking a few years back in time makes it clear we are moving forward rapidly. And I don’t think we need many more major announcements like the recent Novell Peugeot Citroën deal before the hardware makers all realize that the Linux desktop is here to stay and needs to be supported properly.
Article with some quotes from me
Nathan Willis recently interviewed me about the codec shop launch Fluendo did.
The result is on linux.com now. It ended up less an interview and more an article than I expected, but I guess it still explains a few things around our shop launch.
The lagging fortunes of XaraLX
About a year ago the source code and Linux port of XaraLX were announced to much fanfare and excitement. Unfortunately, things haven’t developed that well as time has gone by. First of all, Xara has had limited resources to devote to the project themselves, and little in the way of an outside developer community has formed. What I learned to my surprise today is that part of the reason for this is that XaraLX depends on a binary blob called cdraw/gdraw, which plays a role similar to pixman in Cairo. This, among other things, keeps most distros from shipping it. I am not sure what will happen with XaraLX now, but one hope would be that someone sat down and ported the XaraLX code from cdraw/gdraw to Cairo, and through that made it a truly free software application. Xara is interesting and should have a chance to fulfill a need alongside Gimp and Inkscape.
Nokia 800
A little later than the rest of the herd, I got my Nokia 800 today. It is a worthy upgrade to the 770. Especially with Tigert’s Plankton theme it looks and feels very snazzy.
My main frustration from the 770 is still there, though: when entering new streams into the internet radio I have to have the actual stream URI; the playlist URI, which is the easiest and most common to find, is not supported. That said, finding the place to enter such URIs is much easier now compared to the 770.
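The workaround today is to dig the raw stream URI out of the playlist yourself. As a rough sketch of what that involves (the helper is my own illustration, not anything shipped on the device), the two common playlist formats can be handled with a few lines of Python:

```python
# Illustrative sketch: pull the raw stream URIs out of the two common
# playlist formats (.pls and .m3u) so they can be pasted into the
# internet radio dialog. The function name is mine, not from Maemo.

def extract_stream_uris(playlist_text):
    """Return the stream URIs found in .pls or .m3u playlist text."""
    uris = []
    for line in playlist_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("file") and "=" in line:
            # .pls entries look like: File1=http://example.com/stream
            uris.append(line.split("=", 1)[1].strip())
        elif not line.startswith("#") and "://" in line:
            # .m3u entries are bare URIs; '#' lines are metadata/comments
            uris.append(line)
    return uris
```

Feeding it the downloaded .pls file of a station gives you the `http://` stream address the device does accept.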
One feature we are looking to get into Pitivi is Nokia 770/800 output profiles. So if you want to prepare a video to take on a trip, you just choose the Nokia 770/800 profile and it will be transcoded using the optimal combination of codecs and image size/framerate for playback on your device. A new release of Pitivi is out today, by the way, so be sure to check it out.
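Part of what such a profile has to compute is the target frame size. As a sketch (the function and numbers are mine for illustration, not Pitivi code; I am assuming the tablets’ 800×480 screen as the target): scale the source to fit the screen, keep the aspect ratio, and round down to even dimensions since most codecs require them:

```python
# Rough sketch of the frame-size calculation a device output profile
# would need. Target 800x480 (the Nokia 770/800 screen), never upscale,
# and keep dimensions even as most video codecs require.

def fit_to_screen(src_w, src_h, max_w=800, max_h=480):
    """Return (width, height) scaled to fit max_w x max_h, even-valued."""
    scale = min(max_w / float(src_w), max_h / float(src_h), 1.0)
    # Round down to the nearest even number for codec friendliness.
    w = int(src_w * scale) // 2 * 2
    h = int(src_h * scale) // 2 * 2
    return w, h
```

A 1920×1080 source would come out as 800×450, for instance, while a video already smaller than the screen is left untouched.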
Miguel on ODF vs OOXML
Miguel has a very interesting blog entry discussing the ongoing ODF vs OOXML ISO debacle. Since he linked to an email I sent not long ago regarding SVG, I felt a bit compelled to comment on that part of his essay. I agree that it is weird to attack OOXML for its lack of SVG use when OpenOffice doesn’t support it either, but instead uses its own OOD format. On the other side, just because ODF fucked up in this area doesn’t mean OOXML needs to repeat the stupidity. Of course, if they do end up using SVG then it would be a bit funny, as suddenly, for graphics at least, OOXML would be a better and freer standard than ODF.
Miguel also points out the size of SVG as a problem, to which I agree, but the solution I have advocated for a long while within the librsvg community is to aim to support the SVG Mobile profile, as it is for the most part the sensible subset of SVG we are all looking for. Speaking of librsvg, it is currently in maintenance mode. Caleb, who pushed many of the major changes for a long while, has unfortunately gone AWOL, and Carl Worth is naturally putting most of his energy into Cairo itself. Dom is still around maintaining and holding the fort, but lacks the time to take librsvg the last steps to match the SVG Tiny profile. So if anyone out there is interested in joining the librsvg team to flesh out the remaining holes so librsvg fully conforms with one of the W3C SVG specs, please drop by #librsvg on Gimpnet or join the mailing list.
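To make the “conform to a profile” idea concrete, here is a toy sketch (my own illustration, not librsvg code) that walks an SVG document and flags elements outside an SVG Tiny-style element set. The set below is deliberately abbreviated for the example; the authoritative list is in the W3C Mobile SVG spec:

```python
# Toy conformance check: list element names in an SVG document that
# fall outside a (partial, illustrative) SVG Tiny element set.
import xml.etree.ElementTree as ET

SVG_TINY_ELEMENTS = {
    "svg", "g", "defs", "use", "title", "desc", "switch", "a",
    "rect", "circle", "ellipse", "line", "polyline", "polygon", "path",
    "text", "image",
}

def elements_outside_tiny(svg_source):
    """Return sorted tag names in svg_source not in the Tiny set."""
    root = ET.fromstring(svg_source)
    offenders = set()
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]  # strip the XML namespace prefix
        if tag not in SVG_TINY_ELEMENTS:
            offenders.add(tag)
    return sorted(offenders)
```

Run over a document using gradients or filters, it would flag exactly the kind of features a Tiny-profile renderer gets to skip, which is why the subset is so much more tractable than full SVG.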
Fun little odd game from Oddlabs
So from time to time I head over to LinuxGames.com to see the latest news and keep track of how Linux as a gaming platform is evolving. Gamers have been one of those early-adopter segments I have been hoping we would be able to lure to the Linux platform at some point, but of course currently it’s mostly about wondering which of the chicken and the egg will come first to kick off Linux as a competitive gaming platform.
Last week I found the game
Tribal Trouble from Oddlabs.com, a small Danish gaming company. It’s a 3D real-time strategy game available for Windows, Mac and Linux, with the multiplatform support enabled by the game being written in Java. It was an enlightening experience for me for a variety of reasons, one being that it is possible to write a 3D game like this in Java and get good performance out of it. It is not the first Java game I have seen recently, and I get the impression that there are quite a few of these Java-based games out there, which thus have a very low threshold for supporting Linux. Puzzle Pirates is another of this new generation of games written in Java. With Sun’s recent decision to GPL their implementation of Java, I think we have a great opportunity to integrate Java closely into the desktop to enable easy playing of games like these. Sun’s great work on integrating look-and-feel-wise with GTK+ is of course another great boon. One thing I did find in the Oddlabs development blog was a mention that their paying customers were 47% Mac, 9% Linux and 44% Windows. Come on everyone, there have to be more people out there using Linux interested enough in getting fun little games onto our favourite platform. Let’s at least try to match the market for Mac software. Personally I have already bought the game and spent quite a few hours playing it.
Tried eating an OLPC laptop?
So we have one of those cute little green OLPC laptops here at the Fluendo office. What suddenly struck me today is how much it looks like a children’s toy, which is appropriate considering who it is targeted at.
But I am sure things like Fisher-Price toys go through a lot of child-safety testing to make sure they are, for instance, not poisonous. So the question is: has anyone tried eating parts of their OLPC to make sure we don’t risk killing any kids somewhere with it? Or do I need to pick an office volunteer to try eating some OLPC to make sure it’s truly safe for the world’s children?
LightScribe for Linux
Some time ago I bought a USB DVD burner that supports a feature called LightScribe. Essentially it means you can buy DVDs or CD-ROMs with a special coating, and then use the drive’s laser to burn text and/or images onto the disc. Looks kinda cool. As I expected back then, the feature was not supported under Linux. But today I noticed that they have actually released an SDK for Linux, which means CD-burning applications, or even graphics applications like Gimp, could potentially support it.
The SDK is available under a standard restrictive proprietary license though, so don’t expect the functionality to be included with your average distro anytime soon, unless some of the DVD-burning software developers have licenses that allow bundling with this non-free library. There is a simple application available from their site, but unfortunately it seemed unable to detect my drive, so I couldn’t test whether the LightScribe functionality actually works.
There are of course two ways to look at this. Either one thinks that their supporting Linux is cool and helps validate the platform for desktop use, even though their support is not free software; or one considers the support worthless since it is not free software. Personally, I do hope that this non-free library doesn’t stop people from trying to create open source support for LightScribe burners, but in the meantime I take their closed-source support as a positive sign that the Linux desktop is gaining in importance.
State of vector graphics support
Decided to look into the current state of vector graphics support today. My original test case was whether I would be able to load a graphic into Inkscape, save it, and then load the image into OpenOffice. As I tested, I expanded my scope with various other interoperability tests. The origin of my testing was the hope that SVG support would be so commonplace and good by now that we had achieved full interoperability between large parts of the desktop. I ended up testing a lot of random file formats and viewers.
I put together a page with my test results, and they were not exactly what I had hoped for. Be aware that I don’t consider any of the results as proof of anything, except that as a normal user spending 2-3 hours on the problem, this was as far as I got.
GNOME plans for the future
Noticed some tiny disturbance in the force before Christmas as
Thom Holwerda of OSNews posted an article about what he felt was the sorry state of the free desktops. It seems most people in the GNOME camp simply ignored the article as irrelevant, but Aaron Seigo of Trolltech and KDE let it get to him somewhat.
Personally, I felt Thom pointed out some troublesome points, but that his context and conclusions were wrong.
First of all, he criticized GNOME for not having a clear vision for GNOME 3. Well, this is true, but that is mostly due to not having any clear ideas for something that would require a GNOME 3. GNOME 2 came about as a result of shortcomings in GTK+ at the time, which forced the GTK+ maintainers to break API compatibility in order to improve, for instance, the handling of various writing systems and fonts.
As part of the port to the new GTK+, some policy changes were made for GNOME in terms of focus and goals; goals and policies which people are still very happy with and don’t see a big need to change.
At the moment GNOME is doing quite well with incremental improvements, with a lot of the major effort by GNOME contributors and companies going into projects such as HAL, X.org, Cairo, NetworkManager, GStreamer, Telepathy, OpenOffice, Firefox and Bluetooth support, to mention a few. The thinking is that having a full-featured office suite, for instance, is more important to potential users than having a panel that can be themed into the shape of a sextant. At the same time, the core parts of GNOME are continuously moving forward with incremental improvements or replacements.
Why GNOME’s incremental approach is rated lower by Thom than Apple’s is not clear to me, but for some reason he feels that unless you put a major version number behind something in the Linux world, you are by definition standing still.
And to be honest, incremental improvements are what everyone is doing these days. Windows Vista, Mac OS X and KDE 4 don’t really contain anything earth-shattering; they are basically incremental improvements over their predecessors. Thom mentions KDE’s Plasma, Appeal and Solid in his article as KDE 4 efforts, and Aaron Seigo mentioned Phonon and Decibel as other examples. Well, if you look at each of them, none of them are actually doing anything ‘new’; they are all attempts at doing what is already being done, but in what each project maintainer feels is a better way. Which is just the same as how GNOME currently increments forward, although since GTK+ is not breaking API, the need/motivation to call it GNOME 3 is not very big. The excitement around Compiz recently showed how more glitz can be brought to the desktop as an incremental improvement.
The thing is that until we find a new way to interact with our desktops, nobody will be doing anything truly significantly new anytime soon, apart from maybe in the application space.
And to give an example of what I am talking about, I want to point to the Nintendo Wii as a device which actually is doing something ‘new’. While parts of the technology have been around for quite a while, the way the Wii controller works truly changes the way you interact with the system (making it much more physical, for one) compared to previous and competing consoles.
In the desktop space I think doing something that I feel deserves the title ‘new’ will be harder, but I think Mirco’s Lowfat experiment has the potential. And if it pans out, it might become the foundation and focus of a GNOME 3 cycle. But as with all such experimental efforts, we can’t commit to it before the proof of concept has reached a bit further, so we know we can accommodate all the major use cases. And maybe in the end it will turn out to be more like what we at Fluendo try to do with Elisa: an add-on to the current desktop/system for a specific use case rather than a full desktop replacement, yet using many of the same building blocks as the desktop.
So while both GNOME and KDE could do with more developers, I don’t see any truly dark clouds on the horizon. And if Thom or anyone else has any clear ideas for something that would require GNOME to change so many of its internals as to justify switching the major version number to 3, then please bring them forward. In the meantime, let’s just continue incrementing our way towards perfection within the constraints of the current paradigm.
The Magic Lamp of Standardization
Just saw an article on what’s up next in Linux desktop standardization, where there is a blurb about the discussion at the latest OSDL Desktop Architects Meeting on the issues around Linux audio (or multimedia in general, if you want).
The final outcome was this: “It was decided to start addressing these issues by creating a focus group and mailing list about what’s needed from audio APIs, and how to deal with bringing consistency to Linux audio.”
Which is exactly the only kind of outcome a conference like this can ever hope to achieve, and which also shows why it is a complete waste of time trying to address the issue in such a forum.
The DAM meetings are set up as a ‘let’s get everyone together and see if we can find common ground’ type of meeting. Which isn’t bad, but it only lends itself to certain types of problems.
The problem with Linux audio is not that people out there do not ‘know’ what the problem is; the problem is that people disagree, often quite strongly, on how to solve it, and that there are limited resources available for solving it with any solution. So instead of actually moving towards solving it, we get a lot of cute discussions on related topics, such as, to take one example, when to push and when to pull.
To solve the problems mentioned in the article you will need to step on some toes, sink a lot of man-hours into the chosen solution, and in the end win out on excellence. Which, surprisingly, is a bit harder than one would think.
I think the OSDL DAM context fails horribly already at the first hurdle, in the sense that the DAM conference is not set up to do any kind of toe-stepping; rather the contrary. Not that I am saying that the DAM-type meetings are useless, not at all, but a format made for increasing cooperation between two quite mature and well-established solutions like GNOME and KDE doesn’t necessarily lend itself well to a problem requiring some painful trailblazing.
That said, I do think the problems are being addressed, but they are not and will not be addressed by things like OSDL DAM; instead they are being addressed by individual developers, contributing companies and distributions, who are already all moving in mostly the same direction. The thing is, however, that for the contexts these people and groups are moving in, something like DAM is more of a distraction than a valuable contribution towards the end goal at this point in time.
To let on where I see things moving based on what the distros are doing: on the audio side, the solution gaining traction is PulseAudio, which runs on top of ALSA and provides legacy support for ESD and OSS. For the codec problem there are solutions being worked on around GStreamer.
Within the context of these technologies, and some others that follow as a consequence of them, longer-term plans are also being made for addressing the issues faced; for instance, making sure that PulseAudio is an acceptable solution both for people doing so-called pro audio and for the normal desktop, like the audio system is on Mac OS X.
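To illustrate the PulseAudio-on-top-of-ALSA stacking in practice: plain ALSA applications can be routed through PulseAudio via the alsa-plugins ‘pulse’ module, with a snippet like this in ~/.asoundrc (or the system-wide asound.conf); exact packaging and defaults vary by distro:

```
# Route the default ALSA device through PulseAudio
# (requires the alsa-plugins 'pulse' module to be installed)
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```

With that in place, applications written against the raw ALSA API end up going through the sound server just like native PulseAudio clients, which is what makes the legacy-support story workable.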
Ogg Trick modes!
OK, so Wim went on a hacking spree this Sunday and implemented trick modes with Theora video. This means you can now grab gst-plugins-base CVS and play Theora back in reverse using it. Elisa has some support for this already in Subversion, but the best test application is the ‘seek’ example under tests/examples/seek in gst-plugins-base. I have been having some fun playing some of the Ogg Theora movies I have in fast-forward, slow motion, slow-motion reverse and fast rewind by modifying the ‘rate’ value in ‘seek’. Vorbis sound also speeds up, slows down and plays in reverse perfectly. I tried recording a screencast of it using Istanbul, but my machine is not even close to being fast enough to do the trick-mode playback and record to Ogg at the same time :). A big thanks to Wim. I am really looking forward to demonstrating this at future conferences and meetings.
With the support now implemented in a set of open source decoders and demuxers, hopefully more people in the community will be able to help out with trick-mode enabling the rest of our supported formats. Up to this point only the Fluendo Windows Media plugins supported trick modes, but that doesn’t really help anyone outside Fluendo understand how trick modes are supposed to work in GStreamer. With these changes the secret sauce is available for anyone to study, alongside the already public design document.
Update on the Texas Instruments OpenMax plugins
Texas Instruments sent in the first version of their OpenMAX-optimized GStreamer plugins. Their plugins use the open source OpenMAX implementation Bellagio/Omxil, which was created by ST. As far as I know, ST plans on releasing a set of GStreamer plugins of their own as well.
DVD and GStreamer
For a long time now, things have been moving slowly in terms of improving the state of DVD support in GStreamer. Tim Müller has been working on improving it bit by bit over time, but being only one person with a lot of other tasks on his plate, progress has been slow. Well, now it seems some help has arrived in the form of Jason Gerard DeRose, who sent information about his new dvdread element to the mailing list last Friday, at the same time expressing interest in also working on the DVD navigation problem space. Jason’s original reason for writing the dvdread element was his KungFu DVD ripper, but with the DVD stuff fresh in his head he is willing to put time into improving the playback side of things as well. A big thanks to Jason!
Proprietary 3D drivers
After Mark Shuttleworth’s invitation for SUSE devs to switch to Ubuntu, which he clarified today, a lot of discussion has happened. A frequent topic that has come up in that discussion is the planned use of binary drivers by default in future Ubuntu releases for ATI and NVidia graphics cards. Well, the binary graphics driver issue is old news, I say; I thought about that over 9 months ago.
NVidia/Fedora Quake crash
I would like to thank those who provided some feedback on my crash problem. It turned out to be a known bug in the latest released drivers and upgrading to the latest beta solved it. Thanks to the AC who pointed me in this direction.