Allocated Seating at Greater Union

On the weekend, I had my first encounter with allocated seating at the Greater Union Innaloo cinemas.

As usual, we’d bought our tickets separately. It wasn’t until we went into the actual cinema that a staff member told us we were expected to sit in seats scattered around the cinema (one of which was right on the edge).

As the cinema wasn’t completely full, we did the only sensible thing: ignored the allocations and picked some seats next to each other. Looking around, it seemed that a number of other people were doing the same (the seat I’d been allocated had been taken by someone else from a group of about five people).

As far as I can understand, the reason for introducing this was to make the internet booking more compelling by letting you pick your seat. I guess they felt the need to do something, since the current system has never seemed worth it:

  • They charge an extra dollar per ticket for internet sales. This is despite the fact that they get the money earlier, and you might not even turn up (the tickets are sold on a no returns basis).
  • While there is a special queue for picking up internet sales tickets, there often isn’t anyone staffing it. The few times I have seen people in the queue, they had to wait until one of the other ticket sellers was free.

Maybe they thought screwing with the majority of their customers’ experience would make the extra dollar worth it.

I sent a complaint to Greater Union, and in future I plan to treat their seating allocations as suggestions. It is a shame that so many other cinemas have been closing down over the years 🙁

urlparse considered harmful

Over the weekend, I spent a number of hours tracking down a bug caused by the cache in the Python urlparse module. The problem has already been reported as Python bug 1313119, but has not been fixed yet.

First a bit of background. The urlparse module does what you’d expect and parses a URL into its components:

>>> from urlparse import urlparse
>>> urlparse('http://www.gnome.org/')
('http', 'www.gnome.org', '/', '', '', '')

As well as accepting byte strings (which you’d be using at the HTTP protocol level), it also accepts Unicode strings (which you’d be using at the HTML or XML content level):

>>> urlparse(u'http://www.ubuntu.com/')
(u'http', u'www.ubuntu.com', u'/', '', '', '')

As the result is immutable, urlparse implements a cache of up to 20 previous results. Unfortunately, the cache does not distinguish between byte strings and Unicode strings, so parsing a byte string may return unicode components if the result is in the cache:

>>> urlparse('http://www.ubuntu.com/')
(u'http', u'www.ubuntu.com', u'/', '', '', '')

When you combine this with Python’s automatic promotion of byte strings to unicode on concatenation, things can get really screwed up when you do want to work with byte strings. If you hit such a problem, the code in front of you may all look correct, but the problem was actually introduced up to 20 urlparse calls earlier. Even if your own code never passes in Unicode strings, one of the libraries you use might be doing so.
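
As a contrived illustration (this assumes the Unicode result from the earlier parse is still sitting in the cache), the promotion silently turns what should be a protocol-level byte string into Unicode:

>>> path = urlparse('http://www.ubuntu.com/')[2]
>>> 'GET ' + path + ' HTTP/1.0'
u'GET / HTTP/1.0'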

The problem affects more than just the urlparse function. The urljoin function from the same module is also affected since it uses urlparse internally:

>>> from urlparse import urljoin
>>> urljoin('http://www.ubuntu.com/', '/news')
u'http://www.ubuntu.com/news'

It seems safest to avoid the module altogether if possible, or at least until the underlying bug is fixed.
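
If avoiding the module isn’t practical, one possible stop-gap is to wrap the call and coerce the components back to byte strings whenever the input was a byte string. This is just a sketch I haven’t battle-tested, and byte_urlparse is my own name rather than anything in the standard library:

from urlparse import urlparse as _urlparse

def byte_urlparse(url):
    # wrap the stdlib function; only post-process byte string input
    parts = _urlparse(url)
    if isinstance(url, str):
        # coerce any unicode components leaked from the cache back to
        # byte strings -- URLs at the protocol level should be plain ASCII
        parts = tuple(part.encode('ascii') if isinstance(part, unicode)
                      else part for part in parts)
    return parts

A similar wrapper could be put around urljoin if you depend on it.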

OpenID 2.0 Specification Approved

It looks like the OpenID Authentication 2.0 specification has finally been released, along with OpenID Attribute Exchange 1.0. While there are some questionable features in the new specification (namely XRIs), it seems like a worthwhile improvement over the previous specification. It will be interesting to see how quickly the new specification gains adoption.

While this is certainly an important milestone, there are still areas for improvement.

Best Practices For Managing Trust Relationships With OPs

The proposed Provider Authentication Policy Extension allows a Relying Party to specify what level of checking it wants the OpenID Provider to perform on the user (e.g. phishing-resistant, multi-factor, etc.). The OP can then tell the RP what level of checking was actually performed.

What the specification doesn’t cover is why the RP should believe the OP. I can easily set up an OP that performs no checking on the user but claims that it performed “Physical Multi-Factor Authentication” in its responses. Any RP that acted on that assertion would be buggy.

This isn’t to say that the extension is useless. If the entity running the RP also runs the OP, then they might have good reason to believe the responses and act on them. Similarly, they might decide that JanRain are quite trustworthy and so believe responses from myOpenID.

What these situations have in common is a trust relationship between the OP and the RP that exists outside of the protocol. As the specification gives no guidance on how to set up these relationships, they are likely to be ad hoc, resulting in some OpenIDs being more useful than others.

At a minimum, it would be good to see a best-practices document on how to handle this.
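
In the meantime, to illustrate the kind of out-of-band check I have in mind, here is a rough sketch of an RP that only believes PAPE assertions coming from OP endpoints it has explicitly decided to trust. The endpoint URL, the policy names and the trust table are all placeholders of mine rather than anything from the specification:

# A sketch only: the trust table stands in for whatever out-of-band
# trust data the RP actually maintains about specific OPs.
TRUSTED_OPS = {
    'https://op.example.com/server': ['phishing-resistant', 'multi-factor'],
}

def believable_policies(op_endpoint, claimed_policies):
    # Only believe the subset of claimed policies that we already trust
    # this particular OP to enforce; claims from unknown OPs are ignored.
    trusted = TRUSTED_OPS.get(op_endpoint, [])
    return [policy for policy in claimed_policies if policy in trusted]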

Trusted Attribute Exchange

As I mentioned in my previous article on OpenID Attribute Exchange, attribute values provided by the OP should be treated as self-asserted. So if the RP receives an email address or Jabber ID via attribute exchange, there is no guarantee that the user actually owns them. This is a problem if the RP wants to start emailing or instant messaging the user (e.g. OpenID-enabled mailing list management software). Assuming the RP doesn’t want to get users to revalidate their email address, what can it do?

One of the simplest solutions is to rely on a trust relationship with the OP. If the RP knows that the OP will only transfer email addresses the user has previously verified, then it need not perform a second verification. This leaves us in the same position as described in the previous section.

Another solution, proposed by Sxip, is to make the attribute values carry their own proof. This entails having the attribute value contain both the desired information and a digital signature. Using the email example, if the email address comes with a valid digital signature and the RP trusts the signer to perform email address verification, then it can accept the address without further verification.

This means that the RP only needs to manage trust relationships with the attribute signers rather than with every OP used by its user base. If there are fewer attribute signers than OPs, this is of obvious benefit to the RP. It also benefits the user, since they are no longer limited to one of the “approved” OPs.
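
To make the idea concrete, here is a rough sketch of the RP side. The pipe-separated attribute format, the verify_signature() callable and the trusted-signer list are all inventions of mine for illustration; Sxip’s actual proposal defines its own encoding:

# Entirely illustrative: assume the signed attribute value arrives as
# "email|signer|signature" and that verify_signature() checks the
# signature against the signer's published key.
TRUSTED_EMAIL_SIGNERS = ['https://signer.example.com/']

def accept_signed_email(attribute_value, verify_signature):
    email, signer, signature = attribute_value.split('|')
    if signer not in TRUSTED_EMAIL_SIGNERS:
        return None   # we don't trust this signer to verify email addresses
    if not verify_signature(email, signer, signature):
        return None   # bad signature, so treat the value as self-asserted
    return email      # safe to email without asking the user to re-verify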

Canonical IDs for URL Identifiers

I’ve stated previously that I think the support for identifier reuse with respect to URL identifiers is a bit lacking. It’d be nice to see it expanded in a future specification revision.

States in Version Control Systems

Elijah has been writing an interesting series of articles comparing different version control systems. While the previous articles have been very informative, I think the latest one was a bit muddled. What follows is an expanded version of my comment on that article.

Elijah starts by making an analogy between text editors and version control systems, which I think is quite a useful one. When working with a text editor, there is the base version of the file on disk, and the version you are currently working on, which will become the next saved version.

This does map quite well to the concepts of most VCS’s. You have a working copy that starts out identical to a base tree from the branch you are editing. You make local changes and eventually commit, creating a new base tree for future edits.

In addition to these two “states”, Elijah goes on to list three more states that are actually orthogonal to the original two. These additional states refer to certain categorisations of files within the working copy, rather than particular versions of files or trees. Rather than simplifying things, I believe that mingling the two concepts together is more likely to cause confusion. I think this is evident from the fact that the additional states do not fit the analogy we started with.

Versioned and Unversioned Files

If you are going to use a version control system seriously, it is worth understanding how files within a working copy are managed. Rather than thinking of a flat list of possible states, I think it is helpful to think of a hierarchy of categories. The most basic categorisation is whether a file is versioned or not.

Versioned files are those whose state will be saved when committing a new version of the tree. Conversely, unversioned files exist in the working copy but are not recorded when committing new versions of the tree.

This concept does not map very well to the original text editor analogy. If text editors did support such a feature, it would be the ability to add paragraphs to the document that do not get stored to disk when you save, but would persist inside the editor.

Types of Versioned Files

There are various ways to categorise versioned files, but here are some fairly generic ones that fit most VCS’s.

  1. unchanged
  2. modified
  3. added
  4. removed

Each of these categorisations is relative to the base tree for the working copy. The modified category contains both files whose contents have changed and files whose metadata has changed (e.g. files that have been renamed).

The removed category is interesting because files in this category don’t actually exist in the working copy. That said, the VCS knows that such files did exist, so it knows to delete them when committing the next version of the tree.

Types of Unversioned Files

There are two primary categories for unversioned files:

  1. ignored
  2. unknown

The ignored category consists of unversioned files that the VCS knows the user does not want added to the tree (either through a set of default file patterns, or because the user explicitly said the file should be ignored). Object files and executables built from source code in the tree are prime examples of files that the user would want to ignore.

The unknown category is a catch-all for any other unversioned file in the tree. This is what Elijah referred to as “limbo” in his article.
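
To make the hierarchy concrete, here is a toy classification function. It is entirely my own sketch, not modelled on any particular VCS’s internals, but it shows how the versioned/unversioned split sits above the finer-grained categories:

import fnmatch
import os

# A toy model only: base_tree maps versioned paths to their committed
# contents, added_paths is the set of files the user has explicitly added,
# and ignore_patterns is a list of glob patterns.  No real VCS stores its
# data this way.
def classify(path, base_tree, added_paths, working_dir, ignore_patterns):
    full_path = os.path.join(working_dir, path)
    if path in base_tree:
        # versioned in the base tree: unchanged, modified or removed
        if not os.path.exists(full_path):
            return 'removed'    # this toy treats deletion on disk as removal
        if open(full_path).read() == base_tree[path]:
            return 'unchanged'
        return 'modified'       # (renames and other metadata are ignored here)
    if path in added_paths:
        return 'added'          # versioned, but new in the next commit
    # unversioned: either ignored or unknown
    for pattern in ignore_patterns:
        if fnmatch.fnmatch(path, pattern):
            return 'ignored'
    return 'unknown'

The differences between real systems are mostly about how and when files move between these buckets, which is what the next section looks at.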

Differences between VCS’s

These concepts are roughly applicable to most version control systems, but there are differences in how the categories are handled. Some of the areas where they differ are:

  • Are newly created files in the working copy counted as added or unknown?
    Some VCS’s (or configurations of VCS’s) don’t have a concept of unknown files. In such a system, newly created files will be treated as added rather than unknown.
  • Are unknown files allowed in the working copy when committing?
    One of the issues Elijah brought up was forgetting to add new files before commit. Some VCS’s avoid this problem by not letting you commit a tree with unknown files.
  • When renaming a versioned file, does it count as a single modified file, or a removed file and an added file?
    This one is a basic question of whether the VCS supports renames or not.
  • If I delete a versioned file, is it put in the removed category automatically?
    With some VCS’s you need to explicitly tell them that you are removing a file. With others it is enough to delete the file on disk.

These differences are the sorts of things that affect the workflow for the VCS, so are worth investigating when comparing different systems.