I am at Narita Airport at the moment, on the way to Boston for some of the meetings being held during UDS. It’ll be good to catch up with everyone again.
Hopefully this trip won’t be as eventful as the previous one to Florida.
Most people have probably seen or used OpenID. If you have used it, then it was most likely with the 1.x protocol. Now that OpenID 2.0 is close to release (apparently they really mean it this time …), it is worth looking at the new features it enables. A few that have stood out to me include:
- extension support
- large requests and responses
- directed identity
- the attribute exchange extension
- i-name support
I’ll now discuss each of these in a bit more detail.
Extension Support
OpenID 1.1 had one well known extension: the Simple Registration Extension. An OpenID relying party (RP) would send a request with an openid.sreg.required field, and get back user information in openid.sreg.* fields from the OpenID Provider (OP). The RP and OP would just need to know that “openid.sreg” fields mean that the simple registration extension is being used.
But what if I want to define my own extension? If my RP sends openid.foo.* fields, how does the OP know that it refers to my extension and not some other extension that happened to pick the same prefix?
OpenID 2.0 solves this problem by borrowing the idea of name space URIs from XML. If I am sending some openid.foo.* fields in an OpenID message, then I also need to send an openid.ns.foo field set to a URI that identifies the extension. This means that a message that sends the same data as openid.bar.* fields should be treated the same provided that openid.ns.bar is set to the extension’s name space URI.
As with XML name spaces, this allows us to piggyback on top of DNS as a way of avoiding conflicts.
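To make this concrete, here is a rough sketch of the key/value pairs involved (the “foo” and “bar” aliases, the colour field and the URI are made-up examples, not a real extension):
# Two messages carrying the same hypothetical extension data under
# different aliases; they must be treated identically because the
# alias is resolved through the name space URI.
message_a = {
    'openid.ns.foo': 'http://example.com/my-extension/1.0',
    'openid.foo.colour': 'blue',
}
message_b = {
    'openid.ns.bar': 'http://example.com/my-extension/1.0',
    'openid.bar.colour': 'blue',
}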
Large Requests and Responses
OpenID 1.1 uses HTTP redirects as a way of transferring control between the RP and OP (and vice versa). This means that the upper limit on a message is effectively the same as the smallest upper limit on the length of URLs in common web browsers and servers. Internet Explorer seems to have the lowest limit—2,083 characters—so it sets the effective limit on message size.
For simple authentication checks (what OpenID was originally designed for), this is not generally a problem. But once you start to introduce a few extensions, this limit can easily be reached.
OpenID 2.0 allows messages to be sent as an HTTP POST body which effectively removes the upper limit. The recommended way of achieving this is by sending a page to the user’s browser that contains a form that posts to the appropriate endpoint and contains the data as hidden form fields. The form would then get submitted by a JavaScript onload handler.
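As an illustration of the technique (a sketch only: the endpoint URL is hypothetical, and real messages contain more fields), the RP might generate a page along these lines:
# Build an auto-submitting form carrying the OpenID message as hidden
# fields; the browser POSTs it to the OP as soon as the page loads.
fields = {
    'openid.ns': 'http://specs.openid.net/auth/2.0',
    'openid.mode': 'checkid_setup',
    # ... plus whatever extension fields pushed the message over the
    # URL length limit ...
}
inputs = '\n'.join(
    '<input type="hidden" name="%s" value="%s">' % item
    for item in fields.items())
page = '''<html><body onload="document.forms[0].submit()">
<form method="post" action="https://op.example.com/endpoint">
%s
<input type="submit" value="Continue">
</form></body></html>''' % inputs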
Directed Identity
For OpenID 1.1, the authentication process goes something like this:
1. The user enters their identity URL into a login form on the RP.
2. The RP fetches that URL and discovers the user’s OP.
3. The RP redirects the user to the OP with an authentication request.
4. The user logs in to the OP (if necessary) and approves the request.
5. The OP redirects the user back to the RP with a signed assertion about the identity URL, which the RP then verifies.
With OpenID 2.0, the discovery process may tell the RP that the URL identifies the OP rather than the user. If this happens, the RP proceeds with the authentication request using the special “http://specs.openid.net/auth/2.0/identifier_select” value as the identity URL. The OP will then fill in the user’s actual identity URL in the subsequent authentication response. As an additional step, the RP is then required to perform discovery on this URL to ensure that the OP is entitled to authenticate it.
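For illustration (the return_to URL here is hypothetical), the relevant fields in such a request look roughly like this:
# Request sent by the RP after discovering that the URL identifies
# the OP rather than a particular user:
request = {
    'openid.ns': 'http://specs.openid.net/auth/2.0',
    'openid.mode': 'checkid_setup',
    'openid.claimed_id': 'http://specs.openid.net/auth/2.0/identifier_select',
    'openid.identity': 'http://specs.openid.net/auth/2.0/identifier_select',
    'openid.return_to': 'https://rp.example.com/openid/return',
}
# In the OP's response, openid.claimed_id and openid.identity are
# replaced with the user's actual identity URL, which the RP must then
# verify with a fresh round of discovery.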
There are a number of cases where this feature can be useful:
- Users only need to remember their OP’s URL (such as the provider’s domain name) rather than their full identity URL.
- The OP can return a different identity URL for each RP, making it harder to correlate a user’s accounts across sites.
- Users with multiple identities managed by one OP can choose between them at the OP rather than at the RP.
Attribute Exchange Extension
The OpenID Attribute Exchange extension is like the simple registration extension on steroids. The major differences are:
- Attributes are identified by URIs rather than a small fixed set of names, so new attribute types can be defined without conflicting.
- As well as fetching attribute values from the OP, an RP can ask the OP to store updated values.
- An RP can ask to be notified when an attribute’s value changes, rather than only receiving it once.
Prop Up A New Naming Monopoly
With OpenID 2.0, a user is supposed to be able to enter an i-name in place of an identity URL in an RP, and be authenticated against the i-broker managing that name. So rather than entering an ugly URL, users can enter an ugly string starting with “=” or “@”.
All it costs to take advantage of this is US$12 per year (or US$55 for an organisation name). They claim that it will be possible to use an i-name in many contexts in the future, but for now it appears to be limited to (1) a subset of OpenID RPs, (2) a web form that people can use to send you emails and (3) an HTTP redirection to your website.
At this point, it seems that i-name support in OpenID is more important to the i-name crowd than the OpenID crowd. That said, the complexity is hidden by most of the existing OpenID libraries, so it’ll most likely get implemented by default on most RPs moving forward.
Conclusion
Overall OpenID 2.0 looks like a worthwhile upgrade, even if some parts like i-names are questionable.
Assuming the attribute exchange extension takes off, it should provide a much richer user experience. Imagine being able to update your shipping address in one place when you move house and having all the online retailers you use receive the updated address immediately. Or changing your email address and having all the bugzilla instances you use pick up the new address instantly (perhaps requiring you to verify the new address first, of course).
The improved extension support should also make it easier for people to experiment with new extensions without accidentally conflicting with each other, which should accelerate development of new features.
Last week I was in sunny Dunedin for a Launchpad/Bazaar integration sprint with Tim and Jonathan. Some of the smaller issues we addressed should make their way to users in the next Launchpad release (these were mainly fixes to confusing error messages on bazaar.launchpad.net). Some of the others will probably only become available a release or two further on (mostly related to improving development workflow for branches hosted on Launchpad).
My previous trip to New Zealand had also been to Dunedin (for last year’s linux.conf.au). Since then they’d replaced all the coins for denominations less than NZ$1. Other than being less familiar to Australians, the smaller coins seem like a good idea. They don’t seem to have taken Australia’s lead in making the $2 coin smaller than the $1 coin though.
It is probably old news to some, but Google have put up an information page on the upcoming Australian Federal Election.
The most useful tool is the Google Maps overlay that provides information about the different electorates. At the moment it only has information about the sitting members, their margin and links to relevant news articles. Presumably more information will become available once the election actually gets called.
Presumably they are planning on offering similar tools for next year’s US elections, and this is a beta. So even if you aren’t interested in Australian politics, it might be worth a peek to see what is provided.
One useful feature of Bazaar is the ability to cryptographically sign revisions. I was discussing this with Ryan on IRC, and thought I’d write up some of the details as they might be useful to others.
Anyone who remembers the past security incidents on GNOME and Debian servers should be able to understand the benefits of being able to verify the integrity of a source code repository after such an incident. Rather than requiring all revisions made since the last known safe backup to be examined, much of the verification could be done mechanically.
Turning on Revision Signing
The first thing you’ll need to do is get a PGP key and configure GnuPG to use it. The GnuPG handbook is a good reference on doing this. As the aim is to provide some assurance that the revisions you publish were really made by you, it’d be good to get the key signed by someone.
Once that is done, it is necessary to configure Bazaar to sign new revisions. The easiest way to do this is to edit ~/.bazaar/bazaar.conf to look something like this:
[DEFAULT]
email = My Name <me@example.com>
create_signatures = always
Now when you run “bzr commit”, a signature for the new revision will be stored in the repository. With this configuration change, you will be prompted for your pass phrase when making commits. If you’d prefer not to enter it repeatedly, there are a few options available:
- run gpg-agent, which will cache your pass phrase so that you only need to enter it once per session
- sign revisions in batches with “bzr sign-my-commits” (described below) instead of signing at commit time
Signatures are transferred along with revisions when you push or pull a branch, perform merges, etc.
How Does It Work?
So what does the signature look like, and what does it cover? There is no command for printing out the signatures, but we can access them using bzrlib. As an example, let’s look at the signature on the head revision of one of my branches:
>>> from bzrlib.branch import Branch
>>> b = Branch.open('http://bazaar.launchpad.net/~jamesh/storm/reconnect')
>>> b.last_revision()
'james.henstridge@canonical.com-20070920110018-8e88x25tfr8fx3f0'
>>> print b.repository.get_signature_text(b.last_revision())
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
bazaar-ng testament short form 1
revision-id: james.henstridge@canonical.com-20070920110018-8e88x25tfr8fx3f0
sha1: 467b78c3f8bfe76b222e06c71a8f07fc376e0d7b
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
iD8DBQFG8lMHAa+T2ZHPo00RAsqjAJ91urHiIcu4Bim7y1tc5WtR+NjvlACgtmdM
9IC0rtNqZQcZ+GRJOYdnYpA=
=IONs
-----END PGP SIGNATURE-----
>>>
If we save this signature to a file, we can verify it with a command like “gpg --verify signature.txt” to prove that it was made using my PGP key. Looking at the signed text, we see three lines:
1. a header identifying the format of the signed text (“bazaar-ng testament short form 1”)
2. the ID of the revision that the signature applies to
3. a SHA1 checksum of the revision’s testament
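If you want to script this check, something along the following lines should work (a rough sketch: it assumes the gpg command line tool is installed and can find the signer’s public key):
# Save the head revision's signature to a file and check it with gpg.
import subprocess
from bzrlib.branch import Branch

b = Branch.open('http://bazaar.launchpad.net/~jamesh/storm/reconnect')
sig = b.repository.get_signature_text(b.last_revision())
open('signature.txt', 'w').write(sig)
subprocess.call(['gpg', '--verify', 'signature.txt'])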
For the current signing algorithm, the checksum is made over the long form testament for the revision, which can easily be verified:
$ bzr branch http://bazaar.launchpad.net/~jamesh/storm/reconnect
$ cd reconnect
$ bzr testament --long > testament.txt
$ sha1sum testament.txt
467b78c3f8bfe76b222e06c71a8f07fc376e0d7b  testament.txt
Looking at the long form testament, we can see what the signature ultimately covers:
- the revision ID, committer and commit timestamp
- the revision IDs of the parent revisions
- the commit message
- the full list of files in the tree, along with SHA1 checksums of their contents
- any revision properties
So if the revision testament matches the revision signature and the revision signature validates, you can be sure that you are looking at the same code as the person who made the signature.
It is worth noting that the signature only makes an assertion about the state of the tree at that revision: the only thing it tells you about the ancestry is the revision IDs of the parents. If you need assurances about those revisions, you will need to check their signatures separately. One of the reasons for this is that you might not know the full history of a branch if it has ghost revisions (as might happen when importing code from certain foreign version control systems).
Signing Past Revisions
If you’ve already been using Bazaar but had not enabled revision signing, it is likely that you’ve got a bunch of unsigned revisions lying around. If that is the case, you can sign the revisions in bulk using the “bzr sign-my-commits” command. It will go through all revisions in the ancestry, and generate signatures for all the commits that match your committer ID.
Verifying Signatures in Bulk
To verify all signatures found in a repository, you can use John Arbash Meinel’s signing plugin, which provides a “bzr verify-sigs” command. It can be installed with the following commands:
$ mkdir -p ~/.bazaar/plugins
$ bzr branch http://bzr.arbash-meinel.com/plugins/signing/ ~/.bazaar/plugins/signing
When the command is run it will verify the integrity of all the signatures, and give a summary of how many revisions each person has signed.
When Storm was released, one of the comments made was that it did not include the ability to generate a database schema from the Python classes used to represent the tables, a feature available in a number of competing ORMs. The simple reason for this is that we haven’t used schema generation in any of our ORM-using projects.
Furthermore, I’d argue that schema generation is not really appropriate for long lived projects where the data stored in the database is important. Imagine developing an application along these lines:
1. Design an initial database schema.
2. Write the application and deploy it.
3. Users start entering data into the application.
4. Develop a new version of the application that requires changes to the database schema.
5. Deploy the new version of the application.
In order to perform step 5, it will be necessary to modify the existing database to match the new schema. These changes might be in a number of forms, including:
- addition of new tables or columns
- changes to the types or constraints of existing columns
- renaming or removal of existing tables or columns
- reorganisation of data between tables or columns
Assuming that you want to keep the existing data, it isn’t enough to simply represent the new schema in the updated application: we need to know how that new schema relates to the old one in order to migrate the existing data.
For some changes, like the addition of new tables, it is pretty easy to update the existing database given knowledge of the new schema. For others it is more difficult, and will often require custom migration logic. So it is likely that you will need to write a custom script to migrate the schema and data.
Now we have two methods of building the database schema for the application:
1. generating a fresh schema directly from the current version of the code
2. loading the original base schema and then applying each migration script in turn
Are you sure that the two methods will result in the same schema? How about if we iterate the process another 10 times or so? As a related question, are you sure that the database environment your tests are running under match the production environment?
The approach we settled on with Launchpad development was to only deal with migration scripts and not generate schemas from the code. The migration scripts are formulated as a sequence of SQL commands to migrate the schema and data as needed. So to set up a new instance, a base schema is loaded then patched up to the current schema. Each patch leaves a record in the database that it has been applied so it is trivial to bring a database up to date, or check that an application is in sync with the database.
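As a minimal sketch of this idea (the schema_patch table and patch file layout here are hypothetical, not Launchpad’s actual scheme), the patch runner only needs to do something like this:
# Apply any .sql patch files that have not yet been recorded as applied.
import os
import sqlite3

def apply_patches(conn, patch_dir):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_patch (name TEXT PRIMARY KEY)")
    applied = set(name for (name,) in
                  conn.execute("SELECT name FROM schema_patch"))
    # Patches run in a fixed order, so every database instance goes
    # through the same sequence of schema states.
    for name in sorted(os.listdir(patch_dir)):
        if not name.endswith('.sql') or name in applied:
            continue
        conn.executescript(open(os.path.join(patch_dir, name)).read())
        conn.execute("INSERT INTO schema_patch (name) VALUES (?)", (name,))
        conn.commit()

conn = sqlite3.connect('app.db')
apply_patches(conn, 'patches')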
When the schema is not generated from the code, it also means that the code can be simpler. As far as the Python ORM layer is concerned, does it matter what type of integer a field contains? Does the Python code care what indexes or constraints are defined for the table? By only specifying what is needed to effectively map data to Python objects, we end up with easy to understand code without annotations that probably can’t specify everything we want anyway.
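For example, a Storm class mapping a hypothetical person table only needs to describe how columns map to Python values:
from storm.locals import Int, Unicode

class Person(object):
    # The table name is all the mapping needs to know about the
    # database side; column widths, indexes and constraints live in
    # the SQL schema patches instead.
    __storm_table__ = 'person'
    id = Int(primary=True)
    name = Unicode()
    email = Unicode()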
I got round to upgrading my desktop system to Gutsy today. I’d upgraded my laptop the previous week, so was not expecting much in the way of problems.
I’d done the original install on my desktop back in the Warty days, and the root partition was a bit too small to perform the upgrade. As there was a fair bit of accumulated crud, I decided to do a clean install. Things mostly worked, but there were a few problems, which I detail below:
Dual Head Configuration
With previous releases, I was using the Radeon driver’s MergedFB mode, as it gives a better user experience than the traditional Xinerama code (3D acceleration on both heads, better performance, etc). After adding the MergedFB options to xorg.conf, I was just getting the same image cloned on both displays.
Looking at the X server log file, there was a message saying that MergedFB support had been removed in favour of RandR 1.2 support. And it was possible to get dual head working with the xrandr command line tool:
xrandr --output VGA-0 --right-of DVI-0
It was good to know that dual-head still worked, but I didn’t want to reconfigure this every time I restarted the machine. I didn’t find much information on how to set up the initial RandR configuration on the X.org website, but did find a useful guide on the Intel Linux Graphics website. While the guide was aimed at the Intel driver, it had enough information to get things configured for the Radeon driver. The main difference was in the naming of the outputs. Below is an excerpt of my configuration file that configures things the way I had them previously:
Section "Device"
Identifier "ATI Technologies Inc RV280 [Radeon 9200 SE]"
Driver "ati"
BusID "PCI:1:0:0"
Option "monitor-DVI-0" "Sony SDM-S74 [1]"
Option "monitor-VGA-0" "Sony SDM-S74 [2]"
EndSection
Section "Monitor"
Identifier "Sony SDM-S74 [1]"
Option "DPMS"
HorizSync 30-65
VertRefresh 50-75
Option "LeftOf" "Sony SDM-S74 [2]"
EndSection
Section "Monitor"
Identifier "Sony SDM-S74 [2]"
Option "DPMS"
HorizSync 30-65
VertRefresh 50-75
EndSection
Section "Screen"
Identifier "Default Screen"
Device "ATI Technologies Inc RV280 [Radeon 9200 SE]"
Monitor "Sony SDM-S74 [1]"
DefaultDepth 16
SubSection "Display"
Modes "1280x1024" "1024x768" "800x600" "640x480"
Virtual 2560 1024
EndSubSection
EndSection
I had originally tried setting the VGA monitor to be “RightOf” the monitor connected to the DVI, but that left me with the desktop in clone mode. The main difference I’ve noticed with this configuration compared to my previous one is that the GDM login prompt displays on the right hand head (VGA) rather than the left hand head (DVI).
Window Shadows Don’t Render
Desktop Effects were enabled by default after the install (and on the live CD). While some effects seemed to work, the shadows on the panel and drop down menus were rendered as opaque grey boxes around the windows. I ended up just disabling the effects to clear up the problem.
This bug had already been reported as bug 141304 (which may be the same as bug 116808).
Firefox Crashes on Startup
When I tried to start firefox, it would momentarily display a window and then crash. This appears to be bug 133124, and seems to only occur on AMD64 systems. The problem appears to be in the ubuntulooks theme engine, and switching to a different control theme makes the problem go away, but hopefully it’ll get fixed for the final release.
Problems Rendering Ligatures in Firefox
The problems rendering ligatures in firefox seem to be back again. This problem was never really fixed, but was worked around by removing the ligature table entries from the DejaVu fonts. With the ligature table entries back, the symptoms have returned. This is bug 37828.
When I got back from Florida, I found a copy of the Manic Times in the mail. It seems that I received the copy because I used to be subscribed to The Chaser back when it was a newspaper. The newspaper is being edited by Charles Firth, who was the US correspondent in the last series of The Chaser’s War on Everything.
The content is fairly different to what was published in The Chaser, in that the articles are nominally true. That said, they are written from a non-mainstream point of view. The issue I received seemed to focus on the recent APEC conference and related security measures (which appear to have been fairly poor).
I haven’t yet decided whether to subscribe, but the online subscription form thoughtfully includes “Governor-General”, “Deputy Vice-President”, “The Right Hon”, “Queen” and “Archbishop” in the dropdown list of titles to pick from – something that most likely inconveniences people on other web sites.
This week I am in Florida for a Launchpad sprint. I was meant to arrive on Sunday night, but I fell asleep in the boarding lounge and missed the San Francisco → Orlando flight (the flight out of Perth was an early morning one, and I didn’t get enough sleep on the plane). The earliest alternative flight was the same time the next day, so I ended up arriving on Monday night.
I can’t say I was impressed with United’s customer service though. I was directed to the customer service centre in the airport and queued up behind about 10 other people. After a short while, the one staff member at the desk announced that her shift was over and that her replacements would not be arriving for another hour. It seems like really bad management to leave the desk unattended for an hour, particularly when they knew that there were people waiting.
They had a bunch of check-in computers which were supposed to let you change your flight details, so I gave one of these a try. Unfortunately, the computers directed me to pick up the phone to talk to a representative, and the representative ended up directing me to talk to someone at the customer service centre. After waiting for the next shift, things got sorted out okay though, which was good.
This was also my first experience with SSSS screening. In fact I got to experience it twice: once when checking in for the flight I missed, and again for the later flight. On my way back to Australia, I’ll have two more flights leaving from US airports so it’ll be interesting to see what happens then.
The new Canonical Shop was opened recently, allowing you to buy anything from Ubuntu t-shirts and DVDs up to a 24/7 support contract for your server.
One thing to note is that this is the first site using our new Launchpad single sign-on infrastructure. We will be rolling this out to other sites in time, which should give a better user experience than the existing shared authentication system currently in place for the wikis.