Extracting BIOS images and tools from ThinkPad update ISOs

With my old ThinkPad, Lenovo provided BIOS updates in the form of Windows executables or ISO images for a bootable CD.  Since I had wiped the Windows partition, the first option was out.  The second option didn’t work either, since it expected me to be using the optical drive in the base unit I hadn’t bought.  Luckily I was able to just copy the needed files out of the ISO image to a USB stick that had been set up to boot DOS.

When I got my new ThinkPad, I had hoped to do the same thing but found that the update ISO images appeared to be empty when mounted.  It seems that the update is handled entirely from an El Torito emulated hard disk image (as opposed to using the image only to bootstrap the drivers needed to access the CD).

So I needed some way to extract that boot image from the ISO.  After a little reading of the spec, I put together the following Python script that does the trick:

import struct
import sys

SECTOR_SIZE = 2048

def find_image(fp):
    # el-torito boot record descriptor
    fp.seek(0x11 * SECTOR_SIZE)
    data = fp.read(SECTOR_SIZE)
    assert data[:0x47] == b'\x00CD001\x01EL TORITO SPECIFICATION' + b'\x00' * 41
    boot_catalog_sector = struct.unpack('<L', data[0x47:0x4B])[0]

    # check the validation entry in the catalog
    fp.seek(boot_catalog_sector * SECTOR_SIZE)
    data = fp.read(0x20)
    assert data[0:1] == b'\x01'
    assert data[0x1e:0x20] == b'\x55\xAA'
    assert sum(struct.unpack('<16H', data)) % 0x10000 == 0

    # Read the initial/default entry
    data = fp.read(0x20)
    (bootable, image_type, load_segment, system_type, sector_count,
     image_sector) = struct.unpack('<BBHBxHL', data[:12])
    image_offset = image_sector * SECTOR_SIZE
    if image_type == 1:
        # 1.2MB floppy
        image_size = 1200 * 1024
    elif image_type == 2:
        # 1.44MB floppy
        image_size = 1440 * 1024
    elif image_type == 3:
        # 2.88MB floppy
        image_size = 2880 * 1024
    elif image_type == 4:
        # Hard disk image.  Read the MBR partition table to locate file system
        fp.seek(image_offset)
        data = fp.read(512)
        # Read the first partition entry
        (bootable, part_type, part_start, part_size) = struct.unpack_from(
            '<BxxxBxxxLL', data, 0x1BE)
        assert bootable == 0x80 # is partition bootable?
        image_offset += part_start * 512
        image_size = part_size * 512
    else:
        raise AssertionError('unhandled image format: %d' % image_type)

    fp.seek(image_offset)
    return fp.read(image_size)

if __name__ == '__main__':
    with open(sys.argv[1], 'rb') as iso, open(sys.argv[2], 'wb') as img:
        img.write(find_image(iso))

It isn’t particularly pretty, but it does the job, spitting out a 32MB FAT disk image when run on the ThinkPad X230 update ISOs.  It is then a simple matter of copying the files out of that image onto the USB stick to run the update as before.
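
For example, assuming the script has been saved as extract_boot_image.py (both file names here are placeholders of my choosing):

python extract_boot_image.py bios-update.iso boot.img

The extracted boot.img is a plain FAT file system image, so it can be loopback mounted (or opened with an archive tool) to get at its contents.  Hopefully owners of similar laptops find this useful.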

There appears to be an EFI executable in there too, so it is possible that the firmware update could also be run from the EFI system partition.  I haven’t had the courage to try that though.

u1ftp: a demonstration of the Ubuntu One API

One of the projects I’ve been working on has been to improve aspects of the Ubuntu One Developer Documentation web site.  While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.

I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients).  To help verify that the documentation was useful, I wrote a small program to exercise those APIs.  The result is u1ftp: a program that exposes a user’s files via an FTP daemon running on localhost.  In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.

You can download the program from:

https://launchpad.net/u1ftp/trunk/0.1/+download/u1ftp-0.1.zip

To make it easy to run on as many systems as possible, I packaged it up as a runnable zip file so it can be run directly by the Python interpreter.  As well as a Python interpreter, you will need the following installed to run it:

  • On Linux systems, either the gnomekeyring extension (if you are using a GNOME derived desktop), or PyKDE4 (if you have a KDE derived desktop).
  • On Windows, you will need pywin32.
  • On Mac OS X, you shouldn’t need any additional modules.

These could not be included in the zip file because they are extension modules rather than pure Python.

Once you’ve downloaded the program, you can run it with the following command:

python u1ftp-0.1.zip

This will start the FTP server listening at ftp://localhost:2121/.  Pointing a file manager at that URL should prompt you to log in, where you can use your standard Ubuntu One credentials and start browsing your files.  It will verify the credentials against the Ubuntu SSO service and issue an OAuth token that it stores in the keyring.  The OAuth token is then used to authenticate requests to the file storage REST API.
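
If you’d rather script access than use a file manager, the same server can be driven from Python’s standard ftplib module.  This is just a minimal sketch, with placeholder credentials standing in for your own:

import ftplib

ftp = ftplib.FTP()
ftp.connect('localhost', 2121)
# Log in with your Ubuntu One email address and password (placeholders here)
ftp.login('user@example.com', 'secret')
# List the files at the top level of your Ubuntu One storage
print(ftp.nlst())
ftp.quit()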

While I expect this program to be useful on its own, it was also intended to act as an example of how the Ubuntu One API can be used.  One way to browse the source is to simply unzip the package and poke around.  Alternatively, you can check out the source directly from Launchpad:

bzr branch lp:u1ftp

If you come up with an interesting extension to u1ftp, feel free to upload your changes as a branch on Launchpad.

Packaging Python programs as runnable ZIP files

One feature of recent versions of Python that I hadn’t played around with until now is the ability to package up a multi-module program into a ZIP file that can be run directly by the Python interpreter.  I didn’t find much information about it, so I thought I’d describe what’s necessary here.

Python has had the ability to add ZIP files to the module search path since PEP 273 was implemented in Python 2.3.  That can let you package up most of your program into a single file, but doesn’t help with the main entry point.

Things improved a bit when PEP 338 was implemented in Python 2.4, which allows any module that can be located on the Python search path to be executed as a script.  So if you have a ZIP file foo.zip containing a module foo.py, you could run it as:

PYTHONPATH=foo.zip python -m foo

This is a bit cumbersome to type though, so Python 2.6 added the ability to run directories and zip files directly.  If you run

python foo.zip

it is roughly equivalent to:

PYTHONPATH=foo.zip python -m __main__

So if you place a file called __main__.py inside your ZIP file (or directory), it will be treated as the entry point to your program.  This gives us something that is as convenient to distribute and run as a single file script, but with the better maintainability of a multi-module program.
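
As a concrete sketch, such a zip file can be built with the standard zipfile module (the file and module names here are made up for the example):

import zipfile

# Bundle a helper module and a __main__.py entry point into one runnable zip
zf = zipfile.ZipFile('foo.zip', 'w')
zf.writestr('greet.py', "def hello():\n    print('Hello from inside the zip')\n")
zf.writestr('__main__.py', "import greet\ngreet.hello()\n")
zf.close()

Running python foo.zip should then print the greeting, with __main__.py importing greet from within the zip.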

If your program has dependencies that you don’t expect to find present on the target systems, you can easily include them in the zip file alongside your program.  If you need to provide some data files along with your program, you could use the pkg_resources module from setuptools or distribute.
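
For instance, a module inside the zip could load a bundled data file like this (the package and file names are hypothetical):

import pkg_resources

# Read data/template.txt shipped inside the zip alongside the myapp package
template = pkg_resources.resource_string('myapp', 'data/template.txt')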

There are still a few warts with this setup though:

  • If your program fails, the traceback will not include lines of source code.  This is a general problem for modules loaded from zip files.
  • You can’t package extension modules into a zip file.  Of course, if you’re in a position where the target platforms are locked down tight enough that you could reliably provide compiled code that would run on them, you’d probably be better off using the platform’s package manager.
  • There is no way to tell whether a ZIP file can be executed directly with Python without inspecting its contents.  Perhaps this could be addressed by defining a new file extension to identify such files.

NBN talk at PLUG

Earlier in the week, I attended a PLUG discussion panel about the National Broadband Network.  While I had been following some of the high level information about the project, it was interesting to hear some of the more technical information.

The evening started with a presentation by Chris Roberts from NBN Co, and was followed by a panel discussion with Gavin Tweedie from iiNet and Warrick Mitchel from AARNet.

[Photo: James Bromberger introducing the panel: Chris Roberts (NBN Co), Gavin Tweedie (iiNet) and Warrick Mitchel (AARNet)]

One question I had was when they’ll get round to building out the network where I live.  There is a rollout map on the NBN Co site, but it currently only shows plans for works that will commence within a year.  Apparently they plan to release details on the three year plan by the end of this month, so hopefully my suburb will appear in that list.

The NBN is being built on top of three methods of connection: GPON fibre for built up areas, fixed LTE wireless (non-roaming) for the smaller towns where it is not economical to provide fibre, and satellite broadband for the really remote areas.  All three connection methods provide a common interface to service providers, so companies that provide services over the network are not required to treat the three methods differently.  The wireless and satellite connections will initially run at 12Mb/s down and 1Mb/s up, while fibre connections can range from 25/5 to 100/40 (with the higher connection speeds incurring higher wholesale prices).  It should be quite an improvement over the upload speed I’m currently getting on ADSL2.

Chris brought in some sample “User Network Interface” (UNI) boxes of the kind that would be installed in premises with a fibre connection.  Each provided 4 gigabit Ethernet ports and 2 telephony ports.

[Photo: The inside of a current generation NBN interface box]

Rather than the 4 Ethernet ports being part of a single network, as you’d expect from similar-looking routers, each port represents a separate service.  So the single box can support connections to 4 retail ISPs, or to any other services delivered over the network (e.g. a cable TV service).  You would still need a router to provide firewall, NAT and wifi services, but since it only requires Ethernet for the WAN port there should be a bit more choice in routers than if you limit yourself to ones with ADSL modems built in.  In particular, it should be easier to find a router capable of running an open firmware like OpenWRT or CeroWRT.

The box also acts as a SIP ATA, where each of the two telephony ports can be configured to talk to the servers of different service providers.

It is also possible for NBN Co to remotely monitor the UNI boxes in people’s houses, so they can tell when one drops off the network.  This means they have the ability to detect and respond to faults without relying on customer complaint calls, as happens with the current Telstra copper network.

Since the NBN is supposed to provide a service equivalent to the current copper telephone network, the UNI box is paired with a battery pack to keep the telephony ports active during blackouts, similar to how a wired telephone draws power from the exchange.  This battery pack is somewhat larger than the UNI box, holding a 7.2 Ah lead acid battery (presumably a nominal 12 V unit, so roughly 86 Wh).  At a 10 W draw, that works out to around 8 hours of run time.  The battery pack will automatically cut power before it is completely drained, but has an emergency switch to deliver the remaining energy at the expense of ruining the battery.

Next PLUG Event

If you’re in Perth, why not come down to the next PLUG event on March 26th?  It is an open source themed pub quiz at the Moon & Sixpence.  Last year’s quiz was a lot of fun, and I expect this one will be the same.

pygpgme 0.3

This week I put out a new release of pygpgme: a Python extension that lets you perform various tasks with OpenPGP keys via the GPGME library.  The new release is available from both Launchpad and PyPI.

There aren’t any major new extensions to the API, but this is the first release to support Python 3 (Python 2.x is still supported though).  The main hurdle was ensuring that the module correctly handled text vs. binary data.  The split I ended up with was to treat most things as text (including textual representations of binary data such as key IDs and fingerprints), and to treat the data passed into or returned from the encryption, decryption, signing and verification commands as binary data.  I haven’t done a huge amount with the Python 3 version of the module yet, so I’d appreciate bug reports if you find issues.
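
To illustrate that split, here is a minimal encryption sketch against the new API (the key fingerprint is a placeholder, and you’d need a matching key in your keyring):

import gpgme
from io import BytesIO

ctx = gpgme.Context()
ctx.armor = True
# Key fingerprints are text strings...
key = ctx.get_key('0123456789ABCDEF0123456789ABCDEF01234567')
# ...while the plain text and cipher text streams carry binary data
plain = BytesIO(b'hello world')
cipher = BytesIO()
ctx.encrypt([key], gpgme.ENCRYPT_ALWAYS_TRUST, plain, cipher)
print(cipher.getvalue())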

So now you’ve got one less reason not to try Python 3 if you were previously using pygpgme in your project.