Red Bull Air Race

Yesterday, I went to see the finals of the Red Bull Air Race here in Perth.  This was my first time watching the event, since I was overseas when it was held the previous two years.

The weather was good, and gave me a good opportunity to play with my camera a bit.  Sometimes you can miss out on the action by trying to take photos, but in this case the camera made it a lot easier to see the planes from the shore.

As the overall winner was decided by points scored over the full series, the series winner didn’t need to win the Perth race.  That turned out to be the case, with Hannes Arch finishing third in Perth but still winning the series.

Hannes Arch passing through one of the gates in front of the WACA

The Perth final ended up being between two English pilots: Nigel Lamb and Paul Bonhomme, with Bonhomme winning the race.

Paul Bonhomme passing through the finishing gate

After the race, there was a display by the RAAF Roulettes formation flying team, and a fly over by an F/A-18 Hornet.

F/A-18 Hornet

Overall, it was a pretty good day.  I don’t know if I’d have been up for watching the entire two days’ worth, but the finals were entertaining.  Hopefully I’ll be around for next year’s race.

Re: Continuing to Not Quite Get It at Google…

David: taking a quick look at Google’s documentation, it sure looks like OpenID to me.  The main items of note are:

  1. It documents the use of OpenID 2.0’s directed identity mode.  Yes, this is “a departure from the process outlined in OpenID 1.0”, but that could be said of all the new features found in 2.0.  Google certainly isn’t the first to implement this feature:
    • Yahoo’s OpenID page recommends users enter “yahoo.com” in the identity box on web sites, which will initiate a directed identity authentication request.
    • We’ve been using directed identity with Launchpad to implement single sign on for various Canonical/Ubuntu sites.

    Given that Google account holders identify themselves by email address, users aren’t likely to know a URL to enter, so this kind of makes sense.

  2. The identity URLs returned by the OpenID provider do not directly reveal information about the user, containing a long random string to differentiate between users.  If the relying party wants any user details, they must request them via the standard OpenID Attribute Exchange protocol.
  3. They are performing access control based on the OpenID realm of the relying party.  I can understand doing this in the short term, as it gives them a way to handle a migration should they make an incompatible change during the beta.  If they continue to restrict access after the beta, you might have a valid concern.

It looks like there would be no problem talking to their provider using existing off-the-shelf OpenID libraries (like the ones from JanRain).
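
For what it’s worth, here is roughly what kicking off such a directed identity request looks like with the JanRain python-openid library.  This is only a sketch: the session handling is elided, and the endpoint and Attribute Exchange type URI are the ones I remember from Google’s documentation, so treat them as assumptions.

# Sketch of starting a directed identity OpenID request with python-openid.
# The Google endpoint and AX attribute URI are from memory of their docs;
# the realm and return_to values are made-up examples.
from openid.consumer import consumer
from openid.extensions import ax

session = {}  # normally this lives in the user's web session
oid_consumer = consumer.Consumer(session, store=None)  # store=None gives stateless mode

# Starting discovery on the OP identifier (rather than on a user's own URL)
# is what triggers OpenID 2.0 directed identity.
auth_request = oid_consumer.begin('https://www.google.com/accounts/o8/id')

# Ask for the user's email address via Attribute Exchange.
fetch = ax.FetchRequest()
fetch.add(ax.AttrInfo('http://axschema.org/contact/email', required=True))
auth_request.addExtension(fetch)

redirect_url = auth_request.redirectURL(
    realm='http://www.example.com/',
    return_to='http://www.example.com/openid/return')
# The user agent is then redirected to redirect_url to authenticate.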

If you have an existing site using OpenID for login, chances are that after registering the realm with Google you’d be able to log in by entering Google’s OP server URL.  At that point, it’d be fairly trivial to add another button to the login page – sites seem pretty happy to plaster provider-specific radio buttons and entry boxes all over the page already …

Streaming Vorbis files from Ubuntu to a PS3

One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer.  Unfortunately, the set of codecs it supports is pretty limited, which is a problem since most of my music is encoded as Vorbis.  MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.

Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3.  So I put together a package of a Subversion snapshot in my PPA, which should work on Intrepid.

With the newer package, it was pretty easy to get things working:

  1. Install the mediatomb-daemon package
  2. Edit the /etc/mediatomb/config.xml file and make the following changes (a rough sketch of the result appears after this list):
    • Change the <protocolInfo/> line to set extend="yes".
    • In the <extension-mimetype> section, uncomment the line to map “avi” to “video/divx”.  This will get a lot of videos to play without problem.
    • In the <mimetype-upnpclass> section, add a line to map “application/ogg” to “object.item.audioItem.musicTrack”.  This is needed for the Vorbis files to be recognised as music.
    • In the <mimetype-contenttype> section add a line to map “audio/L16” to “pcm”.
    • On the <transcoding> element, change the enabled attribute to “yes”.
    • Add the settings from here to the <transcoding> section.
  3. Edit the /etc/default/mediatomb script and set INTERFACE to the network interface you want to advertise on.
  4. Restart the mediatomb daemon.
  5. Go to the web UI (try opening /var/lib/mediatomb/mediatomb.html in a web browser), and add the directories you want to export.
  6. Test things on the PS3.
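
For reference, the mapping and transcoding edits end up looking roughly like the abbreviated config.xml fragment below.  This is a sketch from memory rather than a complete file: unrelated elements are omitted, element placement can differ between MediaTomb versions, and the actual transcoding profiles still need to come from the settings linked in step 2.

<config>
  <server>
    <!-- ... other server settings ... -->
    <protocolInfo extend="yes"/>
  </server>
  <import>
    <mappings>
      <extension-mimetype>
        <map from="avi" to="video/divx"/>
      </extension-mimetype>
      <mimetype-upnpclass>
        <map from="application/ogg" to="object.item.audioItem.musicTrack"/>
      </mimetype-upnpclass>
      <mimetype-contenttype>
        <treat mimetype="audio/L16" as="pcm"/>
      </mimetype-contenttype>
    </mappings>
  </import>
  <transcoding enabled="yes">
    <!-- profiles from the settings linked in step 2 go here -->
  </transcoding>
</config>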

Things aren’t perfect though.  As MediaTomb is simply piping the transcoded audio to the PS3, it doesn’t implement seeking on such files, and it seems that the PS3 won’t even let you pause a stream that doesn’t allow seeking.  With a less generalised transcoding backend it should be possible to support seeking in an uncompressed PCM stream, though, since byte offsets map directly to sample numbers.
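
As a rough illustration of that mapping (the sample rate and sample format here are just the usual CD-style values, not anything taken from MediaTomb):

# Map a seek position in seconds to a byte offset in an uncompressed
# 16-bit stereo PCM stream; the default parameters are assumed CD-style values.
def pcm_byte_offset(seconds, sample_rate=44100, channels=2, bytes_per_sample=2):
    frame_size = channels * bytes_per_sample   # bytes per audio frame
    return int(seconds * sample_rate) * frame_size

# Seeking two minutes into the stream:
print pcm_byte_offset(120)   # 21168000 bytes, roughly 20 MiB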

The other problem I found was that none of the recent music I’d ripped showed up.  It seems that they’d been ripped with the .oga file extension rather than .ogg.  This change appears to have been made in bug 543306, but the reasoning seems suspect: the guidelines from Xiph indicate that the files generated by this encoding profile should continue to use the .ogg file extension.

I tried adding some extra mappings to the MediaTomb configuration file to recognise the files without success, but eventually decided to just rename them and fix the encoding profile locally.

A Perfect Media Server

While MediaTomb mostly works for me, it doesn’t do everything I’d like.  A few of the things I’d like out of a media server include:

  1. No need to configure things via a web UI.  In fact, I could do without a web UI altogether – something nicely integrated into the desktop would be nice.
  2. No need to set model specific settings in the configuration file.  Ideally it would know how to talk to common players by default.
  3. Supports transcoding and seeking within transcoded files.  Preferably knows what needs transcoding for common players.
  4. Picks up new files in real time.  So something inotify-based rather than periodic reindexing (see the sketch after this list).
  5. A virtual folder tree for music based on artist/album metadata. A plain folder tree for other media would be fine.
  6. Cached video thumbnails would be nice too.  The build of MediaTomb in my PPA includes support for thumbnails (it needs to be enabled in the config file), but they aren’t cached, so they are slow to appear.
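
The inotify part of that wish seems like the easy bit; a minimal sketch using pyinotify might look something like the following, where the watched directory and the “index this file” action are placeholders:

# Watch a media directory tree and react to new files as they appear.
# The directory and the indexing action are made up for illustration.
import pyinotify

MEDIA_DIR = '/srv/media'   # hypothetical export directory

class NewFileHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        print 'index new file:', event.pathname

    def process_IN_MOVED_TO(self, event):
        print 'index moved-in file:', event.pathname

wm = pyinotify.WatchManager()
mask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO
# rec=True watches the whole tree; auto_add=True picks up new subdirectories.
wm.add_watch(MEDIA_DIR, mask, rec=True, auto_add=True)
pyinotify.Notifier(wm, NewFileHandler()).loop()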

Perhaps Zeeshan’s media server will be worth trying out at some point.

Thoughts on OAuth

I’ve been playing with OAuth a bit lately. The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services. The expectation that OpenID would handle these cases seems a bit misguided, since the two use cases are quite different:

  • OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
  • OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.

While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:

  • There is a tighter coupling between the service provider and client. A client designed to talk to a photo sharing service won’t have much luck if you point it at a micro-blogging service.
  • Involving a web browser session in the authentication process for individual web service requests is not a workable solution: the client might be designed to run offline, for instance.

While the idea of a universal web services client is not achievable, there are areas of commonality between the different services: gaining authorisation from the user and authenticating individual requests. This is the area that OAuth targets.

While the two protocols target different applications, it is still interesting to look at some of the choices made in OAuth and how they compare with OpenID:

  1. The secrets for request and access tokens are sent to the client in the clear. So at a minimum, a service provider’s request token URL and access token URL should be served over SSL. OpenID nominally avoids this by using Diffie-Hellman key exchange to prevent eavesdropping, but ended up needing SSL anyway to avoid man-in-the-middle attacks. So sending the secrets in the clear is probably a more honest approach.
  2. Actual web service methods can be authenticated over plain HTTP in a fairly secure manner using the HMAC-SHA1 or RSA-SHA1 signature methods (a short HMAC-SHA1 signing sketch follows this list). Although if you’re using SSL anyway, the PLAINTEXT signature method is probably not any worse than HMAC-SHA1.
  3. The authentication protocol supports both web applications and desktop applications. Though any security gained through consumer secrets is invalidated for desktop applications, since anyone with a copy of the application will necessarily have access to the secrets. A few other points follow on from this:
    • The RSA-SHA1 signature method is not appropriate for use by desktop applications. The signature is based only on information available in the web service request and the RSA key associated with the consumer, and the private key will need to be distributed as part of the application. So if an attacker discovers an access token (even without the corresponding access token secret), they can make authenticated requests.
    • The other two authentication methods — HMAC-SHA1 and PLAINTEXT — depend on an access token secret. Along with the access token, this is essentially a proxy for the user name and password, so should be protected as such (e.g. via the GNOME keyring).  It still sounds better than storing passwords directly, since the token won’t give access to unrelated sites the user happened to use the same password on, and can be revoked independently of changing the password.
  4. While the OpenID folks found a need for a formal extension mechanism in version 2.0 of that protocol, nothing like that seems to have been added to OAuth.  There are now a number of proposed extensions for OAuth, so it probably would have been a good idea.  Perhaps it isn’t as big a deal, due to the tighter coupling of service providers and consumers, but I could imagine it being useful as the two parties evolve over time.
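
To make item 2 concrete, here is a minimal HMAC-SHA1 signing sketch written directly against the OAuth Core 1.0 specification rather than any particular library.  The endpoint, keys and secrets are invented, and the parameter normalisation is simplified (it assumes unique parameter names and no extra query string parameters):

# Minimal OAuth 1.0 HMAC-SHA1 signing sketch; the endpoint, keys and
# secrets below are invented for illustration.
import base64
import hashlib
import hmac
import random
import time
from urllib import quote


def escape(value):
    # OAuth uses RFC 3986 percent-encoding; only unreserved characters stay bare.
    return quote(str(value), safe='-._~')


def hmac_sha1_signature(method, url, params, consumer_secret, token_secret=''):
    # Normalise the parameters: encode, sort and join as key=value pairs
    # (simplified: assumes unique keys and excludes oauth_signature).
    normalised = '&'.join('%s=%s' % (escape(k), escape(v))
                          for k, v in sorted(params.items()))
    # The signature base string ties together the method, URL and parameters.
    base_string = '&'.join([method.upper(), escape(url), escape(normalised)])
    # The key is the consumer secret and token secret joined by '&';
    # the token secret is empty when asking for a request token.
    key = '%s&%s' % (escape(consumer_secret), escape(token_secret))
    return base64.b64encode(hmac.new(key, base_string, hashlib.sha1).digest())


params = {
    'oauth_consumer_key': 'example-consumer-key',
    'oauth_token': 'example-access-token',
    'oauth_signature_method': 'HMAC-SHA1',
    'oauth_timestamp': str(int(time.time())),
    'oauth_nonce': str(random.getrandbits(64)),
    'oauth_version': '1.0',
}
params['oauth_signature'] = hmac_sha1_signature(
    'GET', 'http://photos.example.net/photos', params,
    'example-consumer-secret', 'example-token-secret')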

So the standard seems decent enough, and better than trying to design such a system yourself.  Like OpenID, it’ll probably take until the second release of the specification for some of the ambiguities to be taken care of and for wider adoption.

From the Python programmer point of view, things could be better.  The library available from the OAuth site seems quite immature and lacks support for a few aspects of the protocol.  It looks okay for simpler uses, but may be difficult to extend for use in more complicated projects.

Django support landed in Storm

Since my last article on integrating Storm with Django, I’ve merged my changes to Storm’s trunk.  This missed the 0.13 release, so you’ll need to use Bazaar to get the latest trunk or wait for 0.14.

The focus since the last post was to get Storm to cooperate with Django’s built in ORM.  One of the reasons people use Django is the existing components that can be used to build a site.  This ranges from the included user management and administration code to full web shop implementations.  So even if you plan to use Storm for your Django application, your application will most likely use Django’s ORM for some things.

When I last posted about this code, it was possible to use both ORMs in a single app, but they would use separate database connections.  This had a number of disadvantages:

  • The two connections would be running separate transactions in parallel, so changes made by one connection would not be visible to the other connection until after the transaction was complete.  This is a problem when updating records in one table that reference rows that are being updated on the other connection.
  • When you have more than one connection, you introduce a new failure mode where one transaction may commit successfully while the other fails, leaving you with only half the changes recorded.  This can be fixed by using two-phase commit, but that is not supported by either Django or Storm at this point in time.

So it is desirable to have the two ORMs sharing a single connection.  The way I’ve implemented this is as a Django database engine backend that uses the connection of a particular named per-thread Storm store and passes transaction commit or rollback requests through to the global transaction manager.  Configuration is as simple as:

DATABASE_ENGINE = 'storm.django.backend'
DATABASE_NAME = 'store-name'
STORM_STORES = {'store-name': 'database-uri'}

This will work for PostgreSQL or MySQL connections; Django requires some additional setup for SQLite connections that Storm doesn’t do.

Once this is configured, things mostly just work.  However, as Django and Storm each maintain their own cache of data retrieved from the database, accessing the same table with both ORMs could give unpredictable results.  My code doesn’t attempt to solve this problem, so it is probably best to access each table through one ORM or the other.
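
As an illustration of what the shared connection buys you, here’s a hypothetical view that touches both ORMs inside the one transaction.  The Order model and AuditEntry Storm class are made up, and get_store() stands in for the helper used to fetch the named per-thread store in the earlier article:

# Hypothetical Django view mixing the two ORMs on one connection; the Order
# model, the AuditEntry Storm class and the get_store() helper are assumptions.
from django.http import HttpResponse
from storm.django.stores import get_store  # assumed helper for the per-thread store

from myapp.models import Order             # an ordinary Django model
from myapp.audit import AuditEntry         # an ordinary Storm class

def place_order(request):
    # Both of these use the same underlying connection and transaction.
    order = Order.objects.create(customer=request.user.username)
    store = get_store('store-name')
    # The row created through Django's ORM is already visible to Storm,
    # because only one transaction is in play.
    store.add(AuditEntry(message=u'order %d created' % order.id))
    # Commit or rollback is driven by Django (e.g. its transaction middleware)
    # and passed through to the global transaction manager, so both sets of
    # changes succeed or fail together.
    return HttpResponse('ok')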

I suppose the next step here would be to implement something similar to Storm’s Reference class to represent links between objects managed by Storm and objects managed by Django and vice versa.