Extracting BIOS images and tools from ThinkPad update ISOs

With my old ThinkPad, Lenovo provided BIOS updates in the form of Windows executables or ISO images for a bootable CD.  Since I had wiped the Windows partition, the first option was of no use to me.  The second option didn’t work either, since it expected me to be using the optical drive in the base unit I hadn’t bought.  Luckily I was able to just copy the needed files out of the ISO image to a USB stick that had been set up to boot DOS.

When I got my new ThinkPad, I had hoped to do the same thing but found that the update ISO images appeared to be empty when mounted.  It seems that the update is handled entirely from an El Torito emulated hard disk image (as opposed to using the image only to bootstrap the drivers needed to access the CD).

So I needed some way to extract that boot image from the ISO.  After a little reading of the spec, I put together the following Python script that does the trick:

import struct
import sys

SECTOR_SIZE = 0x800


def find_image(fp):
    # el-torito boot record descriptor
    fp.seek(0x11 * SECTOR_SIZE)
    data = fp.read(SECTOR_SIZE)
    assert data[:0x47] == b'\x00CD001\x01EL TORITO SPECIFICATION' + b'\x00' * 41
    boot_catalog_sector = struct.unpack('<L', data[0x47:0x4B])[0]

    # check the validation entry in the catalog
    fp.seek(boot_catalog_sector * SECTOR_SIZE)
    data = fp.read(0x20)
    assert data[0:1] == b'\x01'
    assert data[0x1e:0x20] == b'\x55\xAA'
    assert sum(struct.unpack('<16H', data)) % 0x10000 == 0

    # Read the initial/default entry
    data = fp.read(0x20)
    (bootable, image_type, load_segment, system_type, sector_count,
     image_sector) = struct.unpack('<BBHBxHL', data[:12])
    image_offset = image_sector * SECTOR_SIZE
    if image_type == 1:
        # 1.2MB floppy
        image_size = 1200 * 1024
    elif image_type == 2:
        # 1.44MB floppy
        image_size = 1440 * 1024
    elif image_type == 3:
        # 2.88MB floppy
        image_size = 2880 * 1024
    elif image_type == 4:
        # Hard disk image.  Read the MBR partition table to locate the file system
        fp.seek(image_offset)
        data = fp.read(512)
        # Read the first partition entry
        (bootable, part_type, part_start, part_size) = struct.unpack_from(
            '<BxxxBxxxLL', data, 0x1BE)
        assert bootable == 0x80 # is partition bootable?
        image_offset += part_start * 512
        image_size = part_size * 512
    else:
        raise AssertionError('unhandled image format: %d' % image_type)

    fp.seek(image_offset)
    return fp.read(image_size)

if __name__ == '__main__':
    with open(sys.argv[1], 'rb') as iso, open(sys.argv[2], 'wb') as img:
        img.write(find_image(iso))

It isn’t particularly pretty, but it does the job and spits out a 32MB FAT disk image when run on the ThinkPad X230 update ISOs.  From there it is a pretty easy task to copy the needed files from that image onto the USB stick and run the update as before.  Hopefully owners of similar laptops find this useful.

There appears to be an EFI executable in there too, so it is possible that the firmware update could be run from the EFI system partition too.  I haven’t had the courage to try that though.

u1ftp: a demonstration of the Ubuntu One API

One of the projects I’ve been working on has been to improve aspects of the Ubuntu One Developer Documentation web site.  While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.

I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients).  To help verify that the documentation was useful, I wrote a small program to exercise those APIs.  The result is u1ftp: a program that exposes a user’s files via an FTP daemon running on localhost.  In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.

You can download the program from:


To make it easy to run on as many systems as possible, I packaged it up as a runnable zip file so it can be run directly by the Python interpreter.  As well as a Python interpreter, you will need the following installed to run it:

  • On Linux systems, either the gnomekeyring extension (if you are using a GNOME derived desktop), or PyKDE4 (if you have a KDE derived desktop).
  • On Windows, you will need pywin32.
  • On MacOS X, you shouldn’t need any additional modules.

These could not be included in the zip file because they are extension modules rather than pure Python.

Once you’ve downloaded the program, you can run it with the following command:

python u1ftp-0.1.zip

This will start the FTP server listening at ftp://localhost:2121/.  Pointing a file manager at that URL should prompt you to log in, where you can use your standard Ubuntu One credentials and start browsing your files.  It will verify the credentials against the Ubuntu SSO service and issue an OAuth token that it stores in the keyring.  The OAuth token is then used to authenticate requests to the file storage REST API.

While I expect this program to be useful on its own, it was also intended to act as an example of how the Ubuntu One API can be used.  One way to browse the source is to simply unzip the package and poke around.  Alternatively, you can check out the source directly from Launchpad:

bzr branch lp:u1ftp

If you come up with an interesting extension to u1ftp, feel free to upload your changes as a branch on Launchpad.

Packaging Python programs as runnable ZIP files

One feature in recent versions of Python I hadn’t played around with until recently is the ability to package up a multi-module program into a ZIP file that can be run directly by the Python interpreter.  I didn’t find much information about it, so I thought I’d describe what’s necessary here.

Python has had the ability to add ZIP files to the module search path since PEP 273 was implemented in Python 2.3.  That can let you package up most of your program into a single file, but doesn’t help with the main entry point.

Things improved a bit when PEP 338 was implemented in Python 2.4, which allows any module that can be located on the Python search path to be executed as a script.  So if you have a ZIP file foo.zip containing a module foo.py, you could run it as:

PYTHONPATH=foo.zip python -m foo

This is a bit cumbersome to type though, so Python 2.6 lets you run directories and zip files directly.  So if you run

python foo.zip

it is roughly equivalent to:

PYTHONPATH=foo.zip python -m __main__

So if you place a file called __main__.py inside your ZIP file (or directory), it will be treated as the entry point to your program.  This gives us something that is as convenient to distribute and run as a single file script, but with the better maintainability of a multi-module program.
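
As a concrete sketch (the file and directory names here are made up), such an archive can be built with nothing more than the standard zipfile module:

# build_zip.py -- bundle a small program into a runnable ZIP file.
# Assumed layout:
#   src/__main__.py   -- entry point; imports the other modules and runs them
#   src/mylib.py      -- the rest of the program
import os
import zipfile

def build(source_dir, output):
    zf = zipfile.ZipFile(output, 'w', zipfile.ZIP_DEFLATED)
    try:
        for dirpath, dirnames, filenames in os.walk(source_dir):
            for name in filenames:
                if not name.endswith('.py'):
                    continue
                path = os.path.join(dirpath, name)
                # Store paths relative to source_dir so that __main__.py
                # ends up at the root of the archive.
                zf.write(path, os.path.relpath(path, source_dir))
    finally:
        zf.close()

if __name__ == '__main__':
    build('src', 'myprogram.zip')

The resulting myprogram.zip can then be run with “python myprogram.zip”.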

If your program has dependencies that you don’t expect to find present on the target systems, you can easily include them in the zip file alongside your program.  If you need to provide some data files along with your program, you could use the pkg_resources module from setuptools or distribute to access them.
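
For example, pkg_resources can read a data file bundled inside the archive without extracting it first (the package and file names here are hypothetical):

import pkg_resources

# Works whether the 'myprogram' package lives on the file system or
# inside the runnable ZIP file.
template = pkg_resources.resource_string('myprogram', 'data/template.txt')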

There are still a few warts with this set up though:

  • If your program fails, the traceback will not include lines of source code.  This is a general problem for modules loaded from zip files.
  • You can’t package extension modules into a zip file.  Of course, if you’re in a position where the target platforms are locked down tight enough that you could reliably provide compiled code that would run on them, you’d probably be better off using the platform’s package manager.
  • There is no way to tell whether a ZIP file can be executed directly with Python without inspecting its contents.  Perhaps this could be addressed by defining a new file extension to identify such files.

pygpgme 0.3

This week I put out a new release of pygpgme: a Python extension that lets you perform various tasks with OpenPGP keys via the GPGME library.  The new release is available from both Launchpad and PyPI.

There aren’t any major new extensions to the API, but this is the first release to support Python 3 (Python 2.x is still supported though).  The main hurdle was ensuring that the module correctly handled text vs. binary data.  The split I ended up with was to treat most things as text (including textual representations of binary data such as key IDs and fingerprints), and to treat the data passed into or returned from the encryption, decryption, signing and verification commands as binary data.  I haven’t done a huge amount with the Python 3 version of the module yet, so I’d appreciate bug reports if you find issues.
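
As a rough illustration of that split, the following sketch encrypts some data: the key is looked up using a text fingerprint (just a placeholder here), while the plaintext and ciphertext are binary file-like objects.

import io
import gpgme

ctx = gpgme.Context()
ctx.armor = True

# Key IDs and fingerprints are text ...
key = ctx.get_key('0123456789ABCDEF0123456789ABCDEF01234567')

# ... while the data passed to encrypt() is binary.
plaintext = io.BytesIO(b'hello world\n')
ciphertext = io.BytesIO()
ctx.encrypt([key], gpgme.ENCRYPT_ALWAYS_TRUST, plaintext, ciphertext)

print(ciphertext.getvalue())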

So now you’ve got one less reason not to try Python 3 if you were previously using pygpgme in your project.

Watching iView with Rygel

One of the features of Rygel that I found most interesting was the external media server support.  It looked like an easy way to publish information on the network without implementing a full UPnP/DLNA media server (i.e. handling the UPnP multicast traffic, transcoding to a format that the remote system can handle, etc).

As a small test, I put together a server that exposes the ABC’s iView service to UPnP media renderers.  The result is a bit rough around the edges, but the basic functionality works.  The source can be grabbed using Bazaar:

bzr branch lp:~jamesh/+junk/rygel-iview

It needs Python, Twisted, the Python bindings for D-Bus and rtmpdump to run.  The program exports the guide via D-Bus, and uses rtmpdump to stream the shows via HTTP.  Rygel then publishes the guide via the UPnP media server protocol and provides MPEG2 versions of the streams if clients need them.

There are still a few rough edges though.  The video from iView comes as 640×480 with a 16:9 aspect ratio, so it has a 4:3 pixel aspect ratio, but there is nothing in the video file to indicate this (I am not sure if Flash video supports this metadata).

Getting Twisted and D-Bus to cooperate

Since I’d decided to use Twisted, I needed to get it to cooperate with the D-Bus bindings for Python.  The first step here was to get both libraries using the same event loop.  This can be achieved by setting Twisted to use the glib2 reactor, and enabling the glib mainloop integration in the D-Bus bindings.
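
Concretely, that set up looks something like this, and needs to happen before the reactor is started or any D-Bus connections are made:

# Install the glib2 reactor before anything imports twisted.internet.reactor.
from twisted.internet import glib2reactor
glib2reactor.install()
from twisted.internet import reactor

# Tell the D-Bus bindings to use the same GLib main loop by default.
import dbus.mainloop.glib
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)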

Next was enabling asynchronous D-Bus method implementations.  There is support for this in the D-Bus bindings, but it has quite a different (and less convenient) API compared to Twisted.  A small decorator was enough to overcome this impedance mismatch:

from functools import wraps

import dbus.service
from twisted.internet import defer

def dbus_deferred_method(*args, **kwargs):
    def decorator(function):
        function = dbus.service.method(*args, **kwargs)(function)
        @wraps(function)
        def wrapper(*args, **kwargs):
            dbus_callback = kwargs.pop('_dbus_callback')
            dbus_errback = kwargs.pop('_dbus_errback')
            d = defer.maybeDeferred(function, *args, **kwargs)
            d.addCallbacks(
                dbus_callback, lambda failure: dbus_errback(failure.value))
        wrapper._dbus_async_callbacks = ('_dbus_callback', '_dbus_errback')
        return wrapper
    return decorator

This decorator could then be applied to methods in the same way as the @dbus.service.method decorator, but it would correctly handle the case where the method returns a Deferred. Unfortunately it can’t be used in conjunction with @defer.inlineCallbacks, since the D-Bus bindings don’t handle varargs functions properly. You can of course call another function or method that uses @defer.inlineCallbacks though.
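
For example, applied to a method on a dbus.service.Object it looks something like this (the interface name and the fetch_series() helper are made up for the example):

class Guide(dbus.service.Object):

    @dbus_deferred_method('au.net.abc.iView.Guide',
                          in_signature='s', out_signature='aa{sv}')
    def GetSeries(self, series_id):
        # fetch_series() stands in for anything that returns a Deferred,
        # e.g. a twisted.web.client.getPage() call for a guide XML file.
        return fetch_series(series_id)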

The iView Guide

After coding this, it became pretty obvious why it takes so long to load up the iView flash player: it splits the guide data over almost 300 XML files.  This might make sense if it relied on most of these files remaining unchanged and stored in cache, however it also uses a cache-busting technique when requesting them (adding a random query component to the URL).

Most of these files are series description files (some for finished series with no published programs).  These files contain a title, a short description, the URL for a thumbnail image and the IDs for the programs belonging to the series.  To find out about those programs, you need to load all the channel guide XML files until you find which one contains the program.  Going in the other direction, if you’ve got a program description from the channel guide and want to know about the series it belongs to (e.g. to get the thumbnail), you need to load each series description XML file until you find the one that contains the program.  So there aren’t many opportunities to delay loading of parts of the guide.

The startup time would be a lot shorter if this information was collapsed down to a smaller number of larger XML files.


django-openid-auth

Last week, we released the source code to django-openid-auth.  This is a small library that can add OpenID based authentication to Django applications.  It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you’ve already used the code.

Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django’s django.contrib.auth authentication system.  As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
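
In practice those two files are the application’s settings.py and urls.py.  The changes look roughly like this sketch (check the README.txt for the exact setting names):

# settings.py
INSTALLED_APPS += ('django_openid_auth',)
AUTHENTICATION_BACKENDS = (
    'django_openid_auth.auth.OpenIDBackend',
    'django.contrib.auth.backends.ModelBackend',
)
LOGIN_URL = '/openid/login/'
LOGIN_REDIRECT_URL = '/'

# urls.py
from django.conf.urls.defaults import include, patterns
urlpatterns = patterns('',
    (r'^openid/', include('django_openid_auth.urls')),
    # ... the application's other URL patterns ...
)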

The library has a number of useful features:

  • As well as the standard method of prompting the user for an identity URL, you can configure a fixed OpenID server URL.  This is useful for deployments where OpenID is being used for single sign on, and you always want users to log in using a particular OpenID provider.  Rather than asking the user for their identity URL, they are sent directly to the provider.
  • It can be configured to automatically create accounts when new identity URLs are seen.
  • User names, full names and email addresses can be set on accounts based on data sent via the OpenID Simple Registration extension.
  • Support for Launchpad’s Teams OpenID extension, which lets you query membership of Launchpad teams when authenticating against Launchpad’s OpenID provider.  Team memberships are mapped to Django group membership.

While the code can be used for generic OpenID login, we’ve mostly been using it for single sign on.  The hope is that it will help members of the Ubuntu and Launchpad communities reuse our authentication system in a secure fashion.

The source code can be downloaded using the following Bazaar command:

bzr branch lp:django-openid-auth

Documentation on how to integrate the library is available in the README.txt file.  The library includes some code written by Simon Willison for django-openid, and uses the same licensing terms (2 clause BSD) as that project.

Getting “bzr send” to work with GMail

One of the nice features of Bazaar is the ability to send a bundle of changes to someone via email.  If you use a supported mail client, it will even open the composer with the changes attached.  If your client isn’t supported, then it’ll let you compose a message in your editor and then send it to an SMTP server.

GMail is not a supported mail client, but there are a few workarounds listed on the wiki.  Those really come down to using an alternative mail client (either the editor or Mutt) and sending the mails through the GMail SMTP server.  Neither solution really appealed to me.  There doesn’t seem to be a programmatic way of opening up GMail’s compose window and adding an attachment (not too surprising for a web app).

What is possible though is connecting via IMAP and adding messages to the drafts folder (assuming IMAP support is enabled).  So I wrote a small plugin to do just that.  It can be installed with the following command:

bzr branch lp:~jamesh/+junk/bzr-imapclient ~/.bazaar/plugins/imapclient

Then configure the IMAP server, username and mailbox according to the instructions in the README file.  You can then use “bzr send” as normal and complete and send the draft at your leisure.

One nice thing about the plugin implementation is that it didn’t need any GMail specific features: it should be useful for anyone who has their drafts folder stored on an IMAP server and uses an unsupported mail client.
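
The underlying trick is just an IMAP APPEND to the drafts mailbox, which the standard imaplib module can do on its own.  A stripped-down sketch (the credentials are placeholders, and this is not the plugin’s actual code):

import imaplib
import time
from email.mime.text import MIMEText

def save_draft(host, user, password, mailbox, message):
    imap = imaplib.IMAP4_SSL(host)
    try:
        imap.login(user, password)
        # APPEND the message to the drafts mailbox.
        imap.append(mailbox, '', imaplib.Time2Internaldate(time.time()),
                    message.as_string())
    finally:
        imap.logout()

msg = MIMEText('See the attached merge directive.')
msg['Subject'] = '[MERGE] my changes'
# GMail exposes its drafts mailbox over IMAP as "[Gmail]/Drafts".
save_draft('imap.gmail.com', 'user@example.com', 'secret',
           '[Gmail]/Drafts', msg)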

The main area where this could be improved would be to open up the compose screen in the web browser.  However, this would require knowing the internal message ID for the new message, which I can’t see how to access via IMAP.

Using Twisted Deferred objects with gio

The gio library provides both synchronous and asynchronous interfaces for performing IO.  Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.

For C programs this is unavoidable, but for Python we should be able to do better.  And if you’re doing asynchronous event driven code in Python, it makes sense to look at Twisted.  In particular, Twisted’s Deferred objects can be quite helpful.


Deferred objects

The Twisted documentation describes deferred objects as “a callback which will be put off until later”.  The deferred will eventually be passed the result of some operation, or information about how it failed.

From the consumer side, you can register one or more callbacks that will be run:

def callback(result):
    # do stuff
    return result

deferred.addCallback(callback)

The first callback will be called with the original result, while subsequent callbacks will be passed the return value of the previous callback (this is why the above example returns its argument). If the operation fails, one or more errbacks (error callbacks) will be called:

def errback(failure):
    # do stuff
    return failure

deferred.addErrback(errback)

If the operation associated with the deferred has already been completed (or already failed) when the callback/errback is added, then it will be called immediately. So there is no need to check if the operation is complete before hand.
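
A quick way to see this behaviour is to attach a callback to a deferred that has already fired, using defer.succeed():

from twisted.internet import defer

def show(result):
    print result
    return result

# succeed() returns a deferred that already has a result, so the
# callback runs as soon as it is added.
d = defer.succeed(42)
d.addCallback(show)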

Using Deferred objects with gio

We can easily use gio’s asynchronous API to implement a new API based on deferred objects.  For example:

import gio
from twisted.internet import defer

def file_read_deferred(file, io_priority=0, cancellable=None):
    d = defer.Deferred()
    def callback(file, async_result):
        try:
            in_stream = file.read_finish(async_result)
        except gio.Error:
            d.errback()
        else:
            d.callback(in_stream)
    file.read_async(callback, io_priority, cancellable)
    return d

def input_stream_read_deferred(in_stream, count, io_priority=0,
                               cancellable=None):
    d = defer.Deferred()
    def callback(in_stream, async_result):
        try:
            bytes = in_stream.read_finish(async_result)
        except gio.Error:
            d.errback()
        else:
            d.callback(bytes)
    # the argument order seems a bit weird here ...
    in_stream.read_async(count, callback, io_priority, cancellable)
    return d

This is a fairly simple transformation, so you might ask what this buys us. We’ve gone from an interface where you pass a callback to the method to one where you pass a callback to the result of the method. The answer is in the tools that Twisted provides for working with deferred objects.

The inlineCallbacks decorator

You’ve probably seen code examples that use Python’s generators to implement simple co-routines. Twisted’s inlineCallbacks decorator basically implements this for generators that yield deferred objects. It uses the enhanced generators feature from Python 2.5 (PEP 342) to pass the deferred result or failure back to the generator. Using it, we can write code like this:

@defer.inlineCallbacks
def print_contents(file, cancellable=None):
    in_stream = yield file_read_deferred(file, cancellable=cancellable)
    bytes = yield input_stream_read_deferred(
        in_stream, 4096, cancellable=cancellable)
    while bytes:
        # Do something with the data.  For this example, just print to stdout.
        sys.stdout.write(bytes)
        bytes = yield input_stream_read_deferred(
            in_stream, 4096, cancellable=cancellable)

Other than the use of the yield keyword, the above code looks quite similar to the equivalent synchronous implementation.  The only thing that would improve matters would be if these were real methods rather than helper functions.

Furthermore, the inlineCallbacks decorator causes the function to return a deferred that will fire when the function body finally completes or fails. This makes it possible to use the function from within other asynchronous code in a similar fashion. And once you’re using deferred results, you can mix in the gio calls with other Twisted asynchronous calls where it makes sense.

Thoughts on OAuth

I’ve been playing with OAuth a bit lately. The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services. The expectation that OpenID would handle these cases seems a bit misguided, since the two use cases are quite different:

  • OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
  • OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.

While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:

  • There is a tighter coupling between the service provider and client. A client designed to talk to a photo sharing service won’t have much luck if you point it at a micro-blogging service.
  • Involving a web browser session in the authentication process for individual web service requests is not a workable solution: the client might be designed to run offline, for instance.

While the idea of a universal web services client is not achievable, there are areas of commonality between the different services: gaining authorisation from the user and authenticating individual requests. This is the area that OAuth targets.

While the two protocols have different applications, it is possible to compare some of the choices made in OAuth with those made in OpenID:

  1. The secrets for request and access tokens are sent to the client in the clear. So at a minimum, a service provider’s request token URL and access token URL should be served over SSL. OpenID nominally avoids this by using Diffie-Hellman key exchange to prevent eavesdropping, but ended up needing SSL anyway to protect against man-in-the-middle attacks. So sending the secrets in the clear over SSL is probably a more honest approach.
  2. Actual web service methods can be authenticated over plain HTTP in a fairly secure manner using the HMAC-SHA1 or RSA-SHA1 signature methods (a rough sketch of HMAC-SHA1 signing appears after this list). Although if you’re using SSL anyway, the PLAINTEXT signature method is probably not any worse than HMAC-SHA1.
  3. The authentication protocol supports both web applications and desktop applications. Though any security gained through consumer secrets is invalidated for desktop applications, since anyone with a copy of the application will necessarily have access to the secrets. A few other points follow on from this:
    • The RSA-SHA1 signature method is not appropriate for use by desktop applications. The signature is based only on information available in the web service request and the RSA key associated with the consumer, and the private key will need to be distributed as part of the application. So if an attacker discovers an access token (not even the access token secret), they can make authenticated requests.
    • The other two authentication methods — HMAC-SHA1 and PLAINTEXT — depend on an access token secret. Along with the access token, this is essentially a proxy for the user name and password, so should be protected as such (e.g. via the GNOME keyring).  It still sounds better than storing passwords directly, since the token won’t give access to unrelated sites the user happened to use the same password on, and can be revoked independently of changing the password.
  4. While the OpenID folks found a need for a formal extension mechanism for version 2.0 of that protocol, nothing like that seems to have been added to OAuth.  There are now a number of proposed extensions for OAuth, so it probably would have been a good idea.  Perhaps it isn’t as big a deal, due to tighter coupling of service providers and consumers, but I could imagine it being useful as the two parties evolve over time.
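
As mentioned in point 2 above, here is a rough sketch of what HMAC-SHA1 request signing involves, using only the standard library (the parameter values are invented, and a real client would also need to handle POST bodies and build the Authorization header):

import base64
import hashlib
import hmac
import time
import urllib

def escape(value):
    # OAuth uses RFC 3986 percent-encoding, leaving '~' untouched.
    return urllib.quote(str(value), safe='~')

def sign_request(method, url, params, consumer_secret, token_secret):
    # Sort and encode the parameters into a single normalised string,
    # then combine it with the method and URL to form the base string.
    normalised = '&'.join(
        '%s=%s' % (escape(k), escape(v)) for k, v in sorted(params.items()))
    base_string = '&'.join([method.upper(), escape(url), escape(normalised)])
    # The signing key is the consumer secret and token secret joined by '&'.
    key = '%s&%s' % (escape(consumer_secret), escape(token_secret))
    digest = hmac.new(key, base_string, hashlib.sha1).digest()
    return base64.b64encode(digest)

params = {
    'oauth_consumer_key': 'consumer-key',
    'oauth_token': 'access-token',
    'oauth_nonce': '123456',
    'oauth_timestamp': str(int(time.time())),
    'oauth_signature_method': 'HMAC-SHA1',
    'oauth_version': '1.0',
}
params['oauth_signature'] = sign_request(
    'GET', 'http://example.com/photos', params,
    'consumer-secret', 'access-token-secret')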

So the standard seems decent enough, and better than trying to design such a system yourself.  Like OpenID, it’ll probably take until the second release of the specification for some of the ambiguities to be taken care of and for wider adoption.

From the Python programmer point of view, things could be better.  The library available from the OAuth site seems quite immature and lacks support for a few aspects of the protocol.  It looks okay for simpler uses, but may be difficult to extend for use in more complicated projects.

Django support landed in Storm

Since my last article on integrating Storm with Django, I’ve merged my changes to Storm’s trunk.  This missed the 0.13 release, so you’ll need to use Bazaar to get the latest trunk or wait for 0.14.

The focus since the last post was to get Storm to cooperate with Django’s built in ORM.  One of the reasons people use Django is the existing components that can be used to build a site.  This ranges from the included user management and administration code to full web shop implementations.  So even if you plan to use Storm for your Django application, your application will most likely use Django’s ORM for some things.

When I last posted about this code, it was possible to use both ORMs in a single app, but they would use separate database connections.  This had a number of disadvantages:

  • The two connections would be running separate transactions in parallel, so changes made by one connection would not be visible to the other connection until after the transaction was complete.  This is a problem when updating records in one table that reference rows that are being updated on the other connection.
  • When you have more than one connection, you introduce a new failure mode where one transaction may successfully commit but the other fail, leaving you with only half the changes being recorded.  This can be fixed by using two phase commit, but that is not supported by either Django or Storm at this point in time.

So it is desirable to have the two ORMs sharing a single connection.  The way I’ve implemented this is as a Django database engine backend that uses the connection for a particular named per-thread store and passes transaction commit or rollback requests through to the global transaction manager.  Configuration is as simple as:

DATABASE_ENGINE = 'storm.django.backend'
DATABASE_NAME = 'store-name'
STORM_STORES = {'store-name': 'database-uri'}

This will work for PostgreSQL or MySQL connections: Django requires some additional setup for SQLite connections that Storm doesn’t do.

Once this is configured, things mostly just work.  As Django and Storm both maintain caches of data retrieved from the database though, accessing the same table with both ORMs could give unpredictable results.  My code doesn’t attempt to solve this problem so it is probably best to access tables with only one ORM or the other.

I suppose the next step here would be to implement something similar to Storm’s Reference class to represent links between objects managed by Storm and objects managed by Django and vice versa.