OpenID 2.0

Most people have probably seen or used OpenID. If you have used it, it was most likely with the 1.x protocol. Now that OpenID 2.0 is close to release (apparently they really mean it this time ...), it is worth looking at the new features it enables. A few that have stood out to me include:

- proper extension support
- support for larger requests/responses
- directed identity
- attribute exchange extension
- support for a new naming monopoly

I'll now discuss each of these in a bit more detail.

Extension Support

OpenID 1.1 had one well known extension: the Simple Registration Extension. An OpenID relying party (RP) would send a request with an openid.sreg.required field, and get back user information in openid.sreg.* fields from the OpenID Provider (OP). The RP and OP would just need to know that "openid.sreg" fields mean that the simple registration extension is being used.

But what if I want to define my own extension? If my RP sends openid.foo.* fields, how does the OP know that it refers to my extension and not some other extension that happened to pick the same prefix? OpenID 2.0 solves this problem by borrowing the idea of name space URIs from XML. If I am sending some openid.foo.* fields in an OpenID message, then I also need to send an openid.ns.foo field set to a URI that identifies the extension. This means that a message that sends the same data as openid.bar.* fields should be treated the same, provided that openid.ns.bar is set to the extension's name space URI. As with XML name spaces, this allows us to piggyback on DNS as a way of avoiding conflicts.

Large Requests and Responses

OpenID 1.1 uses HTTP redirects as a way of transferring control between the RP and OP (and vice versa). This means that the upper limit on a message is effectively the same as the smallest upper limit on URL length in common web browsers and servers. Internet Explorer seems to have the lowest limit (2,083 characters), so it sets the effective limit on message size. For simple authentication checks (what OpenID was originally designed for), this is not generally a problem. But once you start to introduce a few extensions, this limit can easily be reached.

OpenID 2.0 allows messages to be sent as an HTTP POST body, which effectively removes the upper limit. The recommended way of achieving this is by sending a page to the user's browser that contains a form that posts to the appropriate endpoint and contains the data as hidden form fields. The form would then get submitted by a JavaScript onload handler.

Directed Identity

For OpenID 1.1, the authentication process goes something like this:

1. the user enters their identity URL into a form on the RP
2. the RP performs discovery on that URL to find the user's OP
3. the RP initiates an OpenID authentication request with that OP

With OpenID 2.0, the discovery process may tell the RP that the…
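
Going back to the extension support described above, here is a rough sketch (mine, not taken from the OpenID specification) of the key/value pairs an RP might send for a custom extension. The "foo" and "bar" aliases and the example.com name space URI are made up for illustration; only the openid.ns.* mechanism itself comes from the protocol.

    # Hypothetical OpenID 2.0 request fields for a custom extension.
    request_a = {
        'openid.ns': 'http://specs.openid.net/auth/2.0',
        'openid.ns.foo': 'http://example.com/openid-ext/1.0',   # declares the alias
        'openid.foo.favourite_colour': 'blue',
    }

    # The same data sent under a different alias should be treated identically,
    # because the alias is bound to the same name space URI.
    request_b = {
        'openid.ns': 'http://specs.openid.net/auth/2.0',
        'openid.ns.bar': 'http://example.com/openid-ext/1.0',
        'openid.bar.favourite_colour': 'blue',
    }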

Back from Dunedin

Last week I was in sunny Dunedin for a Launchpad/Bazaar integration sprint with Tim and Jonathan. Some of the smaller issues we addressed should make their way to users in the next Launchpad release (these were mainly fixes to confusing error messages on bazaar.launchpad.net). Some of the others will probably only become available a release or two further on (mostly related to improving development workflow for branches hosted on Launchpad). My previous trip to New Zealand had also been to Dunedin (for last year's linux.conf.au). Since then they'd replaced all the coins for denominations less than NZ$1. Other than being less familiar to Australians, the smaller coins seem like a good idea. They don't seem to have taken Australia's lead in making the $2 coin smaller than the $1 coin though.

Google’s Australian Election Tools

It is probably old news to some, but Google have put up an information page on the upcoming Australian Federal Election. The most useful tool is the Google Maps overlay that provides information about the different electorates. At the moment it only has information about the sitting members, their margins and links to relevant news articles. Presumably more information will become available once the election actually gets called. They are presumably planning to offer similar tools for next year's US elections, and this is a beta. So even if you aren't interested in Australian politics, it might be worth a peek to see what is provided.

Signed Revisions with Bazaar

One useful feature of Bazaar is the ability to cryptographically sign revisions. I was discussing this with Ryan on IRC, and thought I'd write up some of the details as they might be useful to others. Anyone who remembers the past security incidents involving the GNOME and Debian servers should be able to understand the benefits of being able to verify the integrity of a source code repository after such an incident. Rather than requiring all revisions made since the last known safe backup to be examined, much of the verification could be done mechanically.

Turning on Revision Signing

The first thing you'll need to do is get a PGP key and configure GnuPG to use it. The GnuPG handbook is a good reference on doing this. As the aim is to provide some assurance that the revisions you publish were really made by you, it'd be good to get the key signed by someone.

Once that is done, it is necessary to configure Bazaar to sign new revisions. The easiest way to do this is to edit ~/.bazaar/bazaar.conf to look something like this:

    [DEFAULT]
    email = My Name <me@example.com>
    create_signatures = always

Now when you run "bzr commit", a signature for the new revision will be stored in the repository. With this configuration change, you will be prompted for your pass phrase when making commits. If you'd prefer not to enter it repeatedly, there are a few options available:

- install gpg-agent, and use it to remember your pass phrase in the same way you use ssh-agent.
- install the gnome-gpg wrapper, which lets you remember your pass phrase in your Gnome keyring. To use gnome-gpg, you will need to add an additional configuration value: "gpg_signing_command = gnome-gpg".

Signatures are transferred along with revisions when you push or pull a branch, perform merges, etc.

How Does It Work?

So what does the signature look like, and what does it cover? There is no command for printing out the signatures, but we can access them using bzrlib. As an example, let's look at the signature on the head revision of one of my branches:

    >>> from bzrlib.branch import Branch
    >>> b = Branch.open('http://bazaar.launchpad.net/~jamesh/storm/reconnect')
    >>> b.last_revision()
    'james.henstridge@canonical.com-20070920110018-8e88x25tfr8fx3f0'
    >>> print b.repository.get_signature_text(b.last_revision())
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    bazaar-ng testament short form 1
    revision-id: james.henstridge@canonical.com-20070920110018-8e88x25tfr8fx3f0
    sha1: 467b78c3f8bfe76b222e06c71a8f07fc376e0d7b
    -----BEGIN PGP SIGNATURE-----
    Version: GnuPG v1.4.6 (GNU/Linux)

    iD8DBQFG8lMHAa+T2ZHPo00RAsqjAJ91urHiIcu4Bim7y1tc5WtR+NjvlACgtmdM
    9IC0rtNqZQcZ+GRJOYdnYpA=
    =IONs
    -----END PGP SIGNATURE-----
    >>>

If we save this signature to a file, we can verify it with a command like "gpg --verify signature.txt" to prove that it was made using my PGP key. Looking at the signed text, we see three lines:

1. An identifier for the checksum algorithm. This is included to future proof old signatures should the need arise to alter the checksum algorithm at a later date.
2. The revision ID that the signature applies to. Note that this is the full globally unique identifier rather than the shorter numeric identifiers that are only unique in the context of an individual branch.
3. The checksum, in SHA1 form. For the…
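
The session above inspects a single revision's signature. As a rough sketch of my own (not from the original post, and assuming the bzrlib API of the time), the same approach could be used to check mechanically which mainline revisions of a branch carry stored signatures at all:

    # Sketch: list which mainline revisions of a branch have stored signatures.
    # Reuses the branch URL from the example above; any branch URL would do.
    from bzrlib.branch import Branch

    b = Branch.open('http://bazaar.launchpad.net/~jamesh/storm/reconnect')
    b.lock_read()
    try:
        for revision_id in b.revision_history():
            if b.repository.has_signature_for_revision_id(revision_id):
                print 'signed:  ', revision_id
            else:
                print 'unsigned:', revision_id
    finally:
        b.unlock()

Actually verifying each signature against a trusted key ring would still be done with gpg, as described above, but something along these lines gives an idea of how the repository-level checks could be automated.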

Schema Generation in ORMs

When Storm was released, one of the comments made was that it did not include the ability to generate a database schema from the Python classes used to represent the tables, while this feature is available in a number of competing ORMs. The simple reason for this is that we haven't used schema generation in any of our ORM-using projects. Furthermore, I'd argue that schema generation is not really appropriate for long lived projects where the data stored in the database is important.

Imagine developing an application along these lines:

1. Write the initial version of the application.
2. Generate a schema from the code.
3. Deploy one or more instances of the application in production, and accumulate some data.
4. Do further development on the application that involves modifications to the schema.
5. Deploy the new version of the application.

In order to perform step 5, it will be necessary to modify the existing database to match the new schema. These changes might take a number of forms, including:

- adding or removing a table
- adding or removing a column from a table
- changing the way data is represented in a particular column
- refactoring one table into two related tables or vice versa
- adding or removing an index

Assuming that you want to keep the existing data, it isn't enough to simply represent the new schema in the updated application: we need to know how that new schema relates to the old one in order to migrate the existing data. For some changes, like the addition of tables, it is pretty easy to update the database given knowledge of the new schema. For others it is more difficult, and will often require custom migration logic. So it is likely that you will need to write a custom script to migrate the schema and data.

Now we have two methods of building the database schema for the application:

1. generate a schema from the new version of the application.
2. generate a schema from the old version of the application, then run the migration script.

Are you sure that the two methods will result in the same schema? How about if we iterate the process another 10 times or so? As a related question, are you sure that the database environment your tests are running under matches the production environment?

The approach we settled on with Launchpad development was to only deal with migration scripts and not generate schemas from the code. The migration scripts are formulated as a sequence of SQL commands to migrate the schema and data as needed. So to set up a new instance, a base schema is loaded and then patched up to the current schema. Each patch leaves a record in the database that it has been applied, so it is trivial to bring a database up to date, or to check that an application is in sync with the database. When the schema is not generated from the code, it also means that the code can be simpler. As far as Python ORM…
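
To make the patch-tracking idea above a bit more concrete, here is a minimal sketch of my own (not Launchpad's actual tooling). It assumes numbered SQL patch files such as patch-0001.sql, a hypothetical schema_patch table created as part of the base schema, and a DB-API connection with psycopg-style %s parameters:

    # Sketch only: apply any numbered SQL patches not yet recorded as applied.
    # Assumes the base schema includes something like:
    #   CREATE TABLE schema_patch (number integer PRIMARY KEY,
    #                              applied timestamp DEFAULT now());
    import os

    def apply_pending_patches(connection, patch_dir):
        cursor = connection.cursor()
        cursor.execute("SELECT number FROM schema_patch")
        applied = set(number for (number,) in cursor.fetchall())
        for filename in sorted(os.listdir(patch_dir)):
            if not filename.startswith('patch-') or not filename.endswith('.sql'):
                continue
            number = int(filename[len('patch-'):-len('.sql')])
            if number in applied:
                continue
            # Run the patch, then record that it has been applied.
            cursor.execute(open(os.path.join(patch_dir, filename)).read())
            cursor.execute("INSERT INTO schema_patch (number) VALUES (%s)",
                           (number,))
        connection.commit()

With something like this, checking whether an application is in sync with the database amounts to comparing the highest patch number the code expects with the highest number recorded in schema_patch.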