Ubuntu packages for Rygel

I promised Zeeshan that I’d have a look at his Rygel UPnP Media Server a few months back, and finally got around to doing so.  For anyone else who wants to give it a shot, I’ve put together some Ubuntu packages for Jaunty and Karmic in a PPA here:

Most of the packages there are just rebuilds or version updates of existing packages, but the Rygel ones were done from scratch.  It was the first Debian package I’d put together myself, and it wasn’t as difficult as I thought it might be.  The tips from the “Teach me packaging” workshop at the Canonical All Hands meeting last month were quite helpful.

After installing the package, you can configure it by running the “rygel-preferences” program.  The first notebook page lets you configure the transcoding support, and the second page lets you configure the various media source plugins.

I wasn’t able to get the Tracker plugin working on my system, which I think is due to Rygel expecting the older Tracker D-Bus API.  I was able to get the folder plugin working pretty easily though.

Once things were configured, I ran Rygel itself and an extra icon showed up on my PlayStation 3.  Getting folder listings was quite slow, but apparently this is limited to the folder back end and is currently being worked on.  It’s a shame I wasn’t able to test the more mature Tracker back end.

With LPCM transcoding enabled, I was able to successfully play a Vorbis file on the PS3.  With transcoding disabled, I wasn’t able to play any music — even files in formats the PS3 could handle natively.  This was apparently due to the folder back end not providing the necessary metadata.  I didn’t have any luck with MPEG2 transcoding for video.

It looks like Rygel has promise, but is not yet at a stage where it could replace something like MediaTomb.  The external D-Bus media source support looks particularly interesting.  I look forward to trying out version 0.4 when it is released.

Sansa Fuze

On my way back from Canada a few weeks ago, I picked up a SanDisk Sansa Fuze media player.  Overall, I like it.  It supports Vorbis and FLAC audio out of the box, has a decent amount of on-board storage (8 GB) and can be expanded with a MicroSDHC card.  The one thing I don’t like is the proprietary dock connector used for data transfer and charging: the choice of accessories for it is underwhelming, and a standard mini-USB connector would have been preferable since I wouldn’t need as many cables.

The first thing I tried was copying some music to the device using Rhythmbox.  This appeared to work, but took longer than expected.  When I tried to play the music, it was listed as having an unknown artist and album name.  A look at the player’s filesystem made the reason obvious: Rhythmbox had transcoded the music to MP3 and lost the tags in the process.  Copying the Ogg Vorbis files directly worked a lot better: it was quicker and preserved the metadata.

Of course, getting Rhythmbox to do the right thing would be preferable to telling people not to use it.  Rhythmbox depends on information about the device provided by HAL, so I had a look at the relevant FDI files.  There was one section for Sansa Clip and Fuze players which didn’t list Vorbis support, and another section for “Sansa Clip version II”.  The second section was a much better match for the capabilities of my device.  As all Clip and Fuze devices support the extra formats when running the latest firmware, I merged the two sections (hal bug 20616, ubuntu bug 345249).  With the updated FDI file in place, copying music with Rhythmbox worked as expected.
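For illustration, the change amounts to something like the following FDI fragment.  This is a hand-written sketch rather than the actual contents of the merged file: the real file matches on specific USB product IDs and lists more keys, and the format list here is abbreviated.

```xml
<!-- Hypothetical sketch of an FDI fragment appending the extra
     formats for SanDisk (USB vendor ID 0x0781) players; the real
     file's match rules and key list are more involved. -->
<match key="@storage.originating_device:usb.vendor_id" int="0x0781">
  <append key="portable_audio_player.output_formats"
          type="strlist">application/ogg</append>
  <append key="portable_audio_player.output_formats"
          type="strlist">audio/x-flac</append>
</match>
```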

The one downside to this change is that if you have a device with old firmware, Rhythmbox will no longer transcode music to a format the device can play.  There doesn’t seem to be any obvious way to tell if a device has a new enough firmware via USB IDs or similar, so I’m not sure how to handle it automatically.  That said, it is pretty easy to upgrade the firmware following the instructions from their forum, so it is probably best to just do that.

PulseAudio

Since it seems to be fashionable to blog about experiences with PulseAudio, I thought I’d join in.

I’ve actually had some good experiences with PulseAudio, seeing some tangible benefits over the ALSA setup I was using before.  I’ve got a cheapish surround sound speaker set connected to my desktop.  While it gives pretty good sound when all the speakers are used together, it sounds like crap if only the front left/right speakers are used.

ALSA supports multi-channel audio with the motherboard’s sound card well enough, but apps producing stereo sound would only play out of the front two speakers.  There are howtos on the internet for setting up a separate ALSA device that routes stereo audio to all the speakers in the right way, but that requires knowing in advance what sort of audio an application is going to generate: something like Totem could produce mono, stereo or surround output depending on the file I want to play.  This was more effort than I was usually willing to put in, so I ended up flicking a switch on the amplifier to duplicate the front left/right channels to the rear.

With PulseAudio, I just had to edit the /etc/pulse/daemon.conf file and set default-sample-channels to 6, and it took care of converting mono and stereo output from apps to play on all the speakers while still letting apps producing surround output play as expected.  This means I automatically get the best result without any special effort on my part.
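For reference, the change is a one-line edit to /etc/pulse/daemon.conf, where the shipped defaults are commented out with a leading “;”:

```ini
; /etc/pulse/daemon.conf
; default-sample-channels = 2   ; shipped default, commented out
default-sample-channels = 6
```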

I’m not too worried that I had to tell PulseAudio how many speakers I had, since it is possible to plug in a number of speaker configurations and I don’t think the card is capable of sensing what has been attached (the manual documents manually selecting the speaker configuration in the Windows driver).  It might be nice if there was a way to configure this through the GUI though.

I’m looking forward to trying the “flat volume” feature in future versions of PulseAudio, as it should get the best quality out of the sound hardware (if I understand things right, 50% volume with current PulseAudio releases means you only get 15 bits of quantisation on a 16-bit sound card).  I just hope that it manages to cope with the mixers my sound card exports: one two-channel mixer for the front speakers, one two-channel mixer for the rear two speakers and two single channel mixers for the center and LFE channels.
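The 15-bit figure is easy to sanity-check with a toy calculation (this is just arithmetic on sample values, not how PulseAudio actually processes audio):

```python
# Attenuating 16-bit samples in software throws away resolution:
# scaling the full 16-bit range by 0.5 (i.e. 50% volume) leaves only
# 2**15 distinct sample values -- one bit of the card's 16 is wasted.
full_range = range(-32768, 32768)           # all 16-bit sample values
attenuated = {int(s * 0.5) for s in full_range}
print(len(attenuated))                      # 32768, i.e. 2**15
```

Flat volumes avoid this by applying as much of the attenuation as possible in the hardware mixer instead of by scaling samples.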

Using Twisted Deferred objects with gio

The gio library provides both synchronous and asynchronous interfaces for performing IO.  Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.

For C programs this is unavoidable, but for Python we should be able to do better.  And if you’re doing asynchronous event driven code in Python, it makes sense to look at Twisted.  In particular, Twisted’s Deferred objects can be quite helpful.

Deferred

The Twisted documentation describes deferred objects as “a callback which will be put off until later”.  The deferred will eventually be passed the result of some operation, or information about how it failed.

From the consumer side, you can register one or more callbacks that will be run:

def callback(result):
    # do stuff
    return result

deferred.addCallback(callback)

The first callback will be called with the original result, while subsequent callbacks will be passed the return value of the previous callback (this is why the above example returns its argument). If the operation fails, one or more errbacks (error callbacks) will be called:

def errback(failure):
    # do stuff
    return failure

deferred.addErrback(errback)

If the operation associated with the deferred has already completed (or already failed) when the callback/errback is added, it will be called immediately, so there is no need to check whether the operation is complete beforehand.
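Both behaviours are easy to see with a toy re-implementation of this part of the Deferred contract (purely illustrative; Twisted’s real Deferred also handles errbacks, Failure wrapping and more):

```python
# Toy sketch of the two behaviours described above: callbacks are
# chained (each receives the previous one's return value), and a
# callback added after the result has arrived fires immediately.
class ToyDeferred:
    _UNFIRED = object()

    def __init__(self):
        self._result = self._UNFIRED
        self._callbacks = []

    def addCallback(self, cb):
        if self._result is self._UNFIRED:
            self._callbacks.append(cb)
        else:
            # The operation already completed: run the callback now.
            self._result = cb(self._result)
        return self

    def callback(self, result):
        self._result = result
        for cb in self._callbacks:
            self._result = cb(self._result)

d = ToyDeferred()
d.addCallback(lambda r: r + 1)
d.addCallback(lambda r: r * 2)
d.callback(10)
print(d._result)       # (10 + 1) * 2 == 22

late = ToyDeferred()
late.callback("done")
late.addCallback(lambda r: r.upper())   # fires immediately
print(late._result)    # DONE
```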

Using Deferred objects with gio

We can easily use gio’s asynchronous API to implement a new API based on deferred objects.  For example:

import gio
from twisted.internet import defer

def file_read_deferred(file, io_priority=0, cancellable=None):
    d = defer.Deferred()
    def callback(file, async_result):
        try:
            in_stream = file.read_finish(async_result)
        except gio.Error:
            d.errback()
        else:
            d.callback(in_stream)
    file.read_async(callback, io_priority, cancellable)
    return d

def input_stream_read_deferred(in_stream, count, io_priority=0,
                               cancellable=None):
    d = defer.Deferred()
    def callback(in_stream, async_result):
        try:
            bytes = in_stream.read_finish(async_result)
        except gio.Error:
            d.errback()
        else:
            d.callback(bytes)
    # the argument order seems a bit weird here ...
    in_stream.read_async(count, callback, io_priority, cancellable)
    return d

This is a fairly simple transformation, so you might ask what this buys us. We’ve gone from an interface where you pass a callback to the method to one where you pass a callback to the result of the method. The answer is in the tools that Twisted provides for working with deferred objects.

The inlineCallbacks decorator

You’ve probably seen code examples that use Python’s generators to implement simple co-routines. Twisted’s inlineCallbacks decorator basically implements this for generators that yield deferred objects. It uses the enhanced generators feature from Python 2.5 (PEP 342) to pass the deferred result or failure back to the generator. Using it, we can write code like this:

import sys

@defer.inlineCallbacks
def print_contents(file, cancellable=None):
    in_stream = yield file_read_deferred(file, cancellable=cancellable)
    bytes = yield input_stream_read_deferred(
        in_stream, 4096, cancellable=cancellable)
    while bytes:
        # Do something with the data.  For this example, just print to stdout.
        sys.stdout.write(bytes)
        bytes = yield input_stream_read_deferred(
            in_stream, 4096, cancellable=cancellable)

Other than the use of the yield keyword, the above code looks quite similar to the equivalent synchronous implementation.  The only thing that would improve matters would be if these were real methods rather than helper functions.

Furthermore, the inlineCallbacks decorator causes the function to return a deferred that will fire when the function body finally completes or fails. This makes it possible to use the function from within other asynchronous code in a similar fashion. And once you’re using deferred results, you can mix in the gio calls with other Twisted asynchronous calls where it makes sense.
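To demystify the decorator a little, here is a minimal sketch of the mechanism with plain functions standing in for deferreds, so the example is self-contained.  The real inlineCallbacks attaches a callback to each yielded Deferred and resumes the generator when it fires; run_inline, read_twice and the stand-in lambdas are hypothetical names for illustration only:

```python
# Sketch of the inlineCallbacks mechanism: drive a generator,
# feeding each yielded "result" back in via send() (PEP 342).
def run_inline(gen):
    try:
        pending = next(gen)              # run up to the first yield
        while True:
            # Real inlineCallbacks would do pending.addCallback(...);
            # here we pretend the asynchronous result is ready now.
            result = pending()
            pending = gen.send(result)   # resumes at the yield point
    except StopIteration:
        pass                             # generator body finished

log = []

def read_twice():
    data = yield (lambda: "chunk1")      # stands in for a deferred read
    log.append(data)
    data = yield (lambda: "chunk2")
    log.append(data)

run_inline(read_twice())
print(log)   # ['chunk1', 'chunk2']
```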

Metrics for success of a DVCS

One thing that has been mentioned in the GNOME DVCS debate is that “git diff” is as easy to run as “svn diff”, so the learning curve issue is moot.  I’d have to disagree here.

Traditional Centralised Version Control

With traditional version control systems (e.g. CVS and Subversion), as used by Free Software projects like GNOME, there are effectively two classes of users, which I will refer to as “committers” and “patch contributors”:

(Diagram: Centralised VCS Users)

Patch contributors are limited to read-only access to the version control system.  They can check out a working copy to make changes, and then produce a patch with the “diff” command to submit to a bug tracker or send to a mailing list.  This is where new contributors start, so it is important that it be easy to get started in this mode.

Once a contributor is trusted enough, they may be given write access to the repository, moving them into the committers group.  They now have access to more functionality from the VCS, including the ability to checkpoint changes into focused commits, possibly on branches.  The contributor may still be required to go through patch review before committing, or may be given free rein to commit changes as they see fit.

Some problems with this arrangement include:

  • New developers are given a very limited set of tools to do their work.
  • If a developer goes to the trouble of learning the advanced features of the version control system, they are still limited to the read-only subset if they decide to start contributing to another project.

Distributed Workflow

A DVCS allows anyone to commit to their own branches and provides the full feature set to all users.  This splits the “committers” class into two classes:

(Diagram: Distributed VCS Users)

The social aspect of the “committers” group now becomes the group of people who can commit to the main line of the project – the core developers. Outside this group, we have people who make use of the same features of the VCS as the core developers but do not have write access to the main line: their changes must be reviewed and merged by a core developer.

I’ve left the “patch contributor” class in the above diagram because not all contributors will bother learning the details of the VCS.  For projects I’ve worked on that used a DVCS, I’ve still seen people send simple patches (either from the “xxx diff” command, or as diffs against a tarball release) and I don’t think that is likely to change.

Measuring Success

Making the lives of core developers better is often brought up as a reason to switch to a DVCS (e.g. through features like offline commits, a local cache of history, etc.).  I’d argue that making life easier for non-core contributors is at least as important.  One way to measure this is to look at whether such contributors are actually using VCS features beyond what they could with a traditional centralised setup.

By looking at the relative numbers of contributors who submit regular patches and those that either publish branches or submit changesets we can get an idea of how much of the VCS they have used.

It’d be interesting to see the results of a study based on contributions to various projects that have already adopted a DVCS.  Although I don’t have any reliable numbers, I can guess at two things that might affect the results:

  1. Familiarity for existing developers.  There is a lot of cross-pollination in Free Software, so it isn’t uncommon for a new contributor to have worked on another project beforehand.  Using a VCS with a familiar command set can help here (or using the same VCS).
  2. A gradual learning curve.  New contributors should be able to get going with a small command set, and easily learn more features as they need them.

I am sure that there are other things that would affect the results, but these are the ones that I think would have the most noticeable effects.