Using Twisted Deferred objects with gio

The gio library provides both synchronous and asynchronous interfaces for performing IO.  Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.

For C programs this is unavoidable, but for Python we should be able to do better.  And if you’re doing asynchronous, event-driven code in Python, it makes sense to look at Twisted.  In particular, Twisted’s Deferred objects can be quite helpful.

Deferred

The Twisted documentation describes deferred objects as “a callback which will be put off until later”.  The deferred will eventually be passed the result of some operation, or information about how it failed.

From the consumer side, you can register one or more callbacks that will be run:

def callback(result):
    # do stuff
    return result

deferred.addCallback(callback)

The first callback will be called with the original result, while subsequent callbacks will be passed the return value of the previous callback (this is why the above example returns its argument). If the operation fails, one or more errbacks (error callbacks) will be called:

def errback(failure):
    # do stuff
    return failure

deferred.addErrback(errback)

If the operation associated with the deferred has already completed (or already failed) when the callback/errback is added, it will be called immediately, so there is no need to check whether the operation is complete beforehand.
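For example, adding a callback to a deferred that has already fired (defer.succeed() creates one) runs the callback straight away:

from twisted.internet import defer

d = defer.succeed(42)   # a deferred whose result is already available

def callback(result):
    print 'got result:', result
    return result

d.addCallback(callback)  # called immediately with 42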

Using Deferred objects with gio

We can easily use gio’s asynchronous API to implement a new API based on deferred objects.  For example:

import gio
from twisted.internet import defer

def file_read_deferred(file, io_priority=0, cancellable=None):
    d = defer.Deferred()
    def callback(file, async_result):
        try:
            in_stream = file.read_finish(async_result)
        except gio.Error:
            # Called with no argument, errback() wraps the exception
            # currently being handled in a Failure.
            d.errback()
        else:
            d.callback(in_stream)
    file.read_async(callback, io_priority, cancellable)
    return d

def input_stream_read_deferred(in_stream, count, io_priority=0,
                               cancellable=None):
    d = defer.Deferred()
    def callback(in_stream, async_result):
        try:
            bytes = in_stream.read_finish(async_result)
        except gio.Error:
            d.errback()
        else:
            d.callback(bytes)
    # the argument order seems a bit weird here ...
    in_stream.read_async(count, callback, io_priority, cancellable)
    return d

This is a fairly simple transformation, so you might ask what this buys us. We’ve gone from an interface where you pass a callback to the method to one where you pass a callback to the result of the method. The answer is in the tools that Twisted provides for working with deferred objects.

The inlineCallbacks decorator

You’ve probably seen code examples that use Python’s generators to implement simple co-routines. Twisted’s inlineCallbacks decorator basically implements this for generators that yield deferred objects. It uses the enhanced generators feature from Python 2.5 (PEP 342) to pass the deferred result or failure back to the generator. Using it, we can write code like this:

import sys

@defer.inlineCallbacks
def print_contents(file, cancellable=None):
    in_stream = yield file_read_deferred(file, cancellable=cancellable)
    bytes = yield input_stream_read_deferred(
        in_stream, 4096, cancellable=cancellable)
    while bytes:
        # Do something with the data.  For this example, just print to stdout.
        sys.stdout.write(bytes)
        bytes = yield input_stream_read_deferred(
            in_stream, 4096, cancellable=cancellable)

Other than the use of the yield keyword, the above code looks quite similar to the equivalent synchronous implementation.  The only thing that would improve matters would be if these were real methods rather than helper functions.

Furthermore, the inlineCallbacks decorator causes the function to return a deferred that will fire when the function body finally completes or fails. This makes it possible to use the function from within other asynchronous code in a similar fashion. And once you’re using deferred results, you can mix in the gio calls with other Twisted asynchronous calls where it makes sense.
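As a sketch of how this might be driven (assuming the static gio bindings used above, and Twisted’s glib2reactor, which runs the reactor on the GLib main loop that gio dispatches its async callbacks from):

import sys

from twisted.internet import glib2reactor
glib2reactor.install()   # must happen before the reactor is imported
from twisted.internet import reactor

import gio

def finished(result):
    # Stop the main loop whether print_contents succeeded or failed.
    reactor.stop()
    return result

d = print_contents(gio.File(sys.argv[1]))
d.addBoth(finished)
reactor.run()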

Thoughts on OAuth

I’ve been playing with OAuth a bit lately.  The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services.  The expectation that OpenID would handle these cases seems a bit misguided, since the two use cases are quite different:

  • OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
  • OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.

While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:

  • There is a tighter coupling between the service provider and client. A client designed to talk to a photo sharing service won’t have much luck if you point it at a micro-blogging service.
  • Involving a web browser session in the authentication process for individual web service requests is not a workable solution: the client might be designed to run offline, for instance.

While the idea of a universal web services client is not achievable, there are areas of commonality between the different services: gaining authorisation from the user and authenticating individual requests.  This is the area that OAuth targets.

While OAuth has different applications to OpenID, it is possible to compare some of the choices made in the protocol:

  1. The secrets for request and access tokens are sent to the client in the clear.  So at a minimum, a service provider’s request token URL and access token URL should be served over SSL.  OpenID nominally avoids this by using Diffie-Hellman key exchange to prevent eavesdropping, but ended up needing SSL anyway to avoid man-in-the-middle attacks.  So sending them in the clear is probably a more honest approach.
  2. Actual web service methods can be authenticated over plain HTTP in a fairly secure manner using the HMAC-SHA1 or RSA-SHA1 signature methods (see the sketch after this list), although if you’re using SSL anyway, the PLAINTEXT signature method is probably no worse than HMAC-SHA1.
  3. The authentication protocol supports both web applications and desktop applications. Though any security gained through consumer secrets is invalidated for desktop applications, since anyone with a copy of the application will necessarily have access to the secrets. A few other points follow on from this:
    • The RSA-SHA1 signature method is not appropriate for use by desktop applications. The signature is based only on information available in the web service request and the RSA key associated with the consumer, and the private key will need to be distributed as part of the application. So if an attacker discovers an access token (not access token secret), they can authenticate.
    • The other two authentication methods — HMAC-SHA1 and PLAINTEXT — depend on an access token secret. Along with the access token, this is essentially a proxy for the user name and password, so should be protected as such (e.g. via the GNOME keyring).  It still sounds better than storing passwords directly, since the token won’t give access to unrelated sites the user happened to use the same password on, and can be revoked independently of changing the password.
  4. While the OpenID folks found a need for a formal extension mechanism for version 2.0 of that protocol, nothing like that seems to have been added to OAuth.  There are now a number of proposed extensions for OAuth, so it probably would have been a good idea.  Perhaps it isn’t as big a deal, due to the tighter coupling of service providers and consumers, but I could imagine it being useful as the two parties evolve over time.
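To make the signing discussion concrete, here is a rough sketch of HMAC-SHA1 signing as described in OAuth Core 1.0.  The helper name is mine, and the params dictionary is assumed to already contain the oauth_* protocol parameters (other than oauth_signature):

import binascii
import hmac
import urllib
from hashlib import sha1

def escape(s):
    # Percent-encode using the RFC 3986 unreserved character set.
    return urllib.quote(s, safe='~')

def hmac_sha1_signature(method, url, params, consumer_secret,
                        token_secret=''):
    # The signature base string concatenates the HTTP method, the
    # request URL and the normalised (sorted, encoded) parameters.
    normalised = '&'.join('%s=%s' % (escape(k), escape(v))
                          for k, v in sorted(params.items()))
    base_string = '&'.join([method.upper(), escape(url),
                            escape(normalised)])
    # The key combines the consumer secret and token secret; for a
    # request token request there is no token secret yet.
    key = '%s&%s' % (escape(consumer_secret), escape(token_secret))
    return binascii.b2a_base64(
        hmac.new(key, base_string, sha1).digest())[:-1]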

So the standard seems decent enough, and better than trying to design such a system yourself.  Like OpenID, it will probably take until the second release of the specification for some of the ambiguities to be resolved and for adoption to widen.

From the Python programmer point of view, things could be better.  The library available from the OAuth site seems quite immature and lacks support for a few aspects of the protocol.  It looks okay for simpler uses, but may be difficult to extend for use in more complicated projects.

Django support landed in Storm

Since my last article on integrating Storm with Django, I’ve merged my changes to Storm’s trunk.  This missed the 0.13 release, so you’ll need to use Bazaar to get the latest trunk or wait for 0.14.

The focus since the last post was to get Storm to cooperate with Django’s built in ORM.  One of the reasons people use Django is the existing components that can be used to build a site.  This ranges from the included user management and administration code to full web shop implementations.  So even if you plan to use Storm for your Django application, your application will most likely use Django’s ORM for some things.

When I last posted about this code, it was possible to use both ORMs in a single app, but they would use separate database connections.  This had a number of disadvantages:

  • The two connections would be running separate transactions in parallel, so changes made by one connection would not be visible to the other connection until after the transaction was complete.  This is a problem when updating records in one table that reference rows that are being updated on the other connection.
  • When you have more than one connection, you introduce a new failure mode where one transaction may successfully commit but the other fail, leaving you with only half the changes being recorded.  This can be fixed by using two phase commit, but that is not supported by either Django or Storm at this point in time.

So it is desirable to have the two ORMs sharing a single connection.  The way I’ve implemented this is as a Django database engine backend that uses the connection for a particular named per-thread store and passes transaction commit or rollback requests through to the global transaction manager.  Configuration is as simple as:

DATABASE_ENGINE = 'storm.django.backend'
DATABASE_NAME = 'store-name'
STORM_STORES = {'store-name': 'database-uri'}

This will work for PostgreSQL or MySQL connections; for SQLite, Django performs some additional connection setup that Storm doesn’t do.

Once this is configured, things mostly just work.  However, as Django and Storm each maintain their own caches of data retrieved from the database, accessing the same table through both ORMs could give unpredictable results.  My code doesn’t attempt to solve this problem, so it is probably best to access any given table with one ORM or the other, not both.
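As an illustration of the intended usage, a view can then mix the two ORMs inside the one transaction.  This is a sketch only: the import path for get_store() comes from my branch and may change, and Person (a Django model) and Address (a Storm class) are invented for the example:

from storm.django.stores import get_store

def add_address(request, person_id, address_text):
    person = Person.objects.get(pk=person_id)   # via Django's ORM
    store = get_store('store-name')             # the shared connection
    store.add(Address(person_id=person.id, address=address_text))
    # Both changes are made on the same connection, so they commit
    # or roll back together.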

I suppose the next step here would be to implement something similar to Storm’s Reference class to represent links between objects managed by Storm and objects managed by Django and vice versa.

Transaction Management in Django

In my previous post about Django, I mentioned that I found the transaction handling strategy in Django to be a bit surprising.

Like most object relational mappers, it caches information retrieved from the database, since you don’t want to be constantly issuing SELECT queries for every attribute access.  However, it defaults to committing after saving changes to each object.  So a single web request might end up issuing many transactions:

Transaction 1:  Change object 1
Transaction 2:  Change object 2
Transaction 3:  Change object 3
Transaction 4:  Change object 4
Transaction 5:  Change object 5

Unless yours is the only client accessing the database, there is a chance that other users could modify objects that the ORM has cached across those transaction boundaries.  This also makes it difficult to test your application in any meaningful way, since it is hard to predict what changes will occur at those points.  Django does provide a few ways to get better transactional behaviour.

The @commit_on_success Decorator

The first is a decorator that turns on manual transaction management for the duration of the function, committing or rolling back when it completes depending on whether an exception was raised.  In the above example, if the middle three operations were made inside a @commit_on_success function, the transactions would look something like this:

Transaction 1:  Change object 1
Transaction 2:  Change object 2
                Change object 3
                Change object 4
Transaction 3:  Change object 5

Note that the decorator is typically used on view functions, so it will usually cover most of the request.  That said, there are a number of cases where extra work might be done outside of the function.  Some examples include work done in middleware classes and views that call other view functions.
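For reference, the decorator lives in django.db.transaction, and wrapping the middle operations would look something like this (update_objects is a made-up name):

from django.db import transaction

@transaction.commit_on_success
def update_objects(obj2, obj3, obj4):
    # The three saves are committed together, or rolled back if any
    # of them raises an exception.
    obj2.save()
    obj3.save()
    obj4.save()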

The TransactionMiddleware class

Another alternative is to install the TransactionMiddleware middleware class for the site.  This turns on transaction management for the duration of each request, similar to what you’d see with other frameworks, giving results something like this:

Transaction 1:  Change object 1
                Change object 2
                Change object 3
                Change object 4
                Change object 5
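Enabling it is just a matter of adding it to the middleware list in the settings file; middleware listed after it runs inside the transaction:

MIDDLEWARE_CLASSES = (
    'django.middleware.transaction.TransactionMiddleware',
    # ... remaining middleware classes ...
)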

Combining @commit_on_success and TransactionMiddleware

At first, it would appear that these two approaches cover pretty much everything you’d want. But there are problems when you combine the two. If we use the @commit_on_success decorator as before and TransactionMiddleware, we get the following set of transactions:

Transaction 1:  Change object 1
                Change object 2
                Change object 3
                Change object 4
Transaction 2:  Change object 5

The transaction for the @commit_on_success function has extended to cover the operations made beforehand.  This also means that operations #1 and #5 are now in separate transactions, despite the use of TransactionMiddleware.  The problem also occurs with nested use of @commit_on_success, as reported in Django bug 2227.

A better behaviour for nested transaction management would be something like this:

  1. On success, do nothing. The changes will be committed by the outside caller.
  2. On failure, do not abort the transaction, but instead mark it as uncommittable. This would have similar semantics to the Zope transaction.doom() function.

It is important that the nested call does not abort the transaction because that would cause a new transaction to be started by subsequent code: that should be left to the code that began the transaction.
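As a hypothetical sketch of how such a decorator might look, built on Django’s existing transaction management primitives (the doom flag and nesting counter are my own invention, not part of Django):

import threading

from django.db import transaction

_state = threading.local()

def nestable_commit_on_success(func):
    # Nested calls never commit or abort; they just doom the
    # transaction on failure, leaving the outermost caller to decide.
    def wrapper(*args, **kwargs):
        outermost = not getattr(_state, 'depth', 0)
        if outermost:
            transaction.enter_transaction_management()
            transaction.managed(True)
            _state.doomed = False
        _state.depth = getattr(_state, 'depth', 0) + 1
        try:
            return func(*args, **kwargs)
        except:
            _state.doomed = True   # mark uncommittable, don't abort
            raise
        finally:
            _state.depth -= 1
            if outermost:
                if _state.doomed:
                    transaction.rollback()
                else:
                    transaction.commit()
                transaction.leave_transaction_management()
    return wrapper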

The @autocommit decorator

While the above interaction looks like a simple bug, the @autocommit decorator is another matter. It turns autocommit on for the duration of a function call, no matter what the transaction mode for the caller was. If we took the original example and wrapped the middle three operations with @autocommit and used TransactionMiddleware, we’d get 4 transactions: one for the first two operations, then one for each of the remaining operations.

I can’t think of a situation where it would make sense to use @autocommit, and wonder if it was just added for completeness.

Conclusion

While the nesting bugs remain, my recommendation would be to go with TransactionMiddleware and avoid use of the decorators (both in your own code and in third party components).  If you are writing reusable code that requires transactions, it is probably better to assert that django.db.transaction.is_managed() is true, so that you get a failure on improperly configured systems without introducing unwanted transaction boundaries.
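That assertion is a one-liner; a minimal sketch (transfer_funds and its models are invented for illustration):

from django.db import transaction

def transfer_funds(source, destination, amount):
    # Fail early on improperly configured systems rather than
    # silently issuing a commit after each save.
    assert transaction.is_managed(), 'managed transactions required'
    source.balance -= amount
    destination.balance += amount
    source.save()
    destination.save()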

For the Storm integration work I’m doing, I’ve set it to use managed transaction mode to avoid most of the unwanted commits, but it still falls prey to the extra commits when using the decorators. So I guess inspecting the code is still necessary. If anyone has other tips, I’d be glad to hear them.

Storm 0.13

Yesterday, Thomas rolled the 0.13 release of Storm, which can be downloaded from Launchpad.  Storm is the object relational mapper for Python used by Launchpad and Landscape, so it is capable of supporting quite large scale applications.  It is seven months since the last release, so there are a lot of improvements.  Here are a few simple statistics:

                        0.12   0.13   Change
Tarball size (KB)        117    155       38
Mainline revisions       213    262       49
Revisions in ancestry    552    875      323

So it is a fairly significant update by any of these metrics.  Among the new features are:

  • Infrastructure for tracing the SQL statements issued by Storm.  Sample tracer implementations are provided to implement bounded statement run times and for logging statements (both features used for QA of Launchpad).
  • A validation framework.  The property constructors take a validator keyword argument, which should be a function taking the arguments (object, attr_name, value) and returning the value to set.  If the function raises an exception, it can prevent a value from being set; by returning something different from its third argument, it can transform values.  A sketch follows this list.
  • The find() and ResultSet API has been extended to make it possible to generate queries that use GROUP BY and HAVING.  The primary use case is result sets that contain an object plus some aggregates associated with that object.
  • Some core parts of Storm have been accelerated through a C extension.  This code is turned off by default, but can be enabled by defining the STORM_CEXTENSIONS environment variable to 1.  While it is disabled by default, it is pretty stable.  Barring any serious problems reported over the next release cycle, I’d expect it to be enabled by default for the next release.
  • The minimum dependencies of the storm.zope.zstorm module have been reduced to just the zope.interface and transaction modules.  This makes it easier to use the per-thread store management code and global transaction management outside of Zope apps (e.g. for integrating with Django).
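As a minimal sketch of the validator feature mentioned above (the non_negative function and Account class are invented for illustration):

from storm.locals import Int, Storm

def non_negative(obj, attr_name, value):
    # Raising an exception here prevents the value from being set;
    # returning a different value would transform it instead.
    if value < 0:
        raise ValueError('%s must not be negative' % attr_name)
    return value

class Account(Storm):
    __storm_table__ = 'account'
    id = Int(primary=True)
    balance = Int(validator=non_negative)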

It doesn’t include my Django integration code though, since that isn’t fully baked.  I’ll post some more about that later.

Using Storm with Django

I’ve been playing around with Django a bit for work recently, and it has been interesting to see what choices they’ve made differently from Zope 3.  There were a few things that surprised me:

  • The ORM and database layer defaults to autocommit mode rather than using transactions.  This seems like an odd choice given that all the major free databases support transactions these days.  While autocommit might work fine when a web application is under light use, it is a recipe for problems at higher loads.  By using transactions that last for the duration of the request, the testing you do is more likely to help with high-load situations.
  • While there is a middleware class to enable request-duration transactions, it only covers the database connection.  There is no global transaction manager to coordinate multiple DB connections or other resources.
  • The ORM appears to only support a single connection for a request.  While this is the most common case and should be easy to code with, allowing an application to expand past this limit seems prudent.
  • The tutorial promotes schema generation from Python models, which I feel is the wrong choice for any application that is likely to evolve over time (i.e. pretty much every application).  I’ve written about this previously and believe that migration based schema management is a more workable solution.
  • It poorly reinvents thread local storage in a few places.  This isn’t too surprising for things that existed prior to Python 2.4, and probably isn’t a problem for its default mode of operation.

Other than these things I’ve noticed so far, it looks like a nice framework.

Integrating Storm

I’ve been doing a bit of work to make it easy to use Storm with Django.  I posted some initial details on the mailing list.  The initial code has been published on Launchpad but is not yet ready to merge. Some of the main details include:

  • A middleware class that integrates the Zope global transaction manager (which requires just the zope.interface and transaction packages).  There doesn’t appear to be any equivalent functionality in Django, and this made it possible to reuse the existing integration code (an approach that has been taken to use Storm with Pylons).  It will also make it easier to take advantage of other future improvements (e.g. only committing stores that are used in a transaction, two phase commit).
  • Stores can be configured through the application’s Django settings file, and are managed as long lived per-thread connections.
  • A simple get_store(name) function is provided for accessing per-thread stores within view code.

What this doesn’t do yet is provide much integration with existing Django functionality (e.g. django.contrib.admin).  I plan to try and get some of these bits working in the near future.

How not to do thread local storage with Python

The Python standard library contains a function called thread.get_ident().  It will return an integer that uniquely identifies the current thread at that point in time.  On most UNIX systems, this will be the pthread_t value returned by pthread_self().  At first glance, this might seem like a good value to key a thread local storage dictionary with.  Please don’t do that.

The value uniquely identifies the thread only as long as it is running.  The value can be reused after the thread exits.  On my system, this happens quite reliably with the following sample program printing the same ID ten times:

import thread, threading

def foo():
    print 'Thread ID:', thread.get_ident()

for i in range(10):
    t = threading.Thread(target=foo)
    t.start()
    t.join()

If the return value of thread.get_ident() was used to key thread local storage, all ten threads would share the same storage. This is not generally considered to be desirable behaviour.

If you can depend on Python 2.4 (released 3.5 years ago), just use a threading.local object.  It will result in simpler code, correctly handle serially created threads, and you won’t hold onto TLS data past the exit of a thread.
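A minimal example of the correct approach, mirroring the sample program above:

import threading

tls = threading.local()

def foo():
    # Each thread sees its own independent 'counter' attribute, so
    # this prints 1 for every thread rather than accumulating.
    tls.counter = getattr(tls, 'counter', 0) + 1
    print 'Counter:', tls.counter

for i in range(10):
    t = threading.Thread(target=foo)
    t.start()
    t.join()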

You will save yourself (or another developer) a lot of time at some point in the future. Debugging these problems is not fun when you combine code doing proper TLS with other code doing broken TLS.

Psycopg migrated to Bazaar

Last week we moved psycopg from Subversion to Bazaar.  I did the migration using Gustavo Niemeyer’s svn2bzr tool with a few tweaks to map the old Subversion committer IDs to the email address form conventionally used by Bazaar.

The tool does a good job of following tree copies and creating related Bazaar branches.  It doesn’t have any special handling for stuff in the tags/ directory (it produces new branches, as it does for other tree copies).  To get real Bazaar tags, I wrote a simple post-processing script to calculate the heads of all the branches in a tags/ directory and set them as tags in another branch (provided those revisions occur in its ancestry).  This worked pretty well, except for a few revisions synthesised by a previous cvs2svn migration.  As these tags were from pretty old psycopg 1 releases, I don’t know how much it matters.

As there is no code browsing set up on initd.org yet, I set up mirrors of the 2.0.x and 1.x branches on Launchpad.

It is pretty cool having access to the entire revision history locally, and should make it easier to maintain full credit for contributions from non-core developers.

Psycopg2 2.0.7 Released

Yesterday Federico released version 2.0.7 of psycopg2 (a Python database adapter for PostgreSQL).  I made a fair number of the changes in this release to make it more usable for some of Canonical’s applications.  The new release should work with the development version of Storm, and it shouldn’t be too difficult to get everything working with other frameworks.

Some of the improvements include:

  • Better selection of exceptions based on the SQLSTATE result field.  This causes a number of errors that were reported as ProgrammingError to use a more appropriate exception (e.g. DataError, OperationalError, InternalError).  This was the change that broke Storm’s test suite as it was checking for ProgrammingError on some queries that were clearly not programming errors.
  • Proper error reporting for commit() and rollback(). These methods now use the same error reporting code paths as execute(), so an integrity error on commit() will now raise IntegrityError rather than OperationalError.
  • The compile-time switch that controls whether the display_size member of Cursor.description is calculated is now turned off by default.  The code was quite expensive and the field is of limited use (and not provided by a number of other database adapters).
  • New QueryCanceledError and TransactionRollbackError exceptions.  The first is useful for handling queries that are canceled by statement_timeout.  The second provides a convenient way to catch serialisation failures and deadlocks: errors that indicate the transaction should be retried (see the sketch after this list).
  • Fixes for a few memory leaks and GIL misuses. One of the leaks was in the notice processing code that could be particularly problematic for long-running daemon processes.
  • Better test coverage and a driver script to run the entire test suite in one go.  The tests should all pass too, provided your database cluster uses unicode (there was a report just before the release of one test failing for a LATIN1 cluster).
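As a sketch of the retry case mentioned above (run_with_retry is a made-up helper; psycopg2 itself only raises the exception):

from psycopg2.extensions import TransactionRollbackError

def run_with_retry(conn, operation, max_attempts=3):
    # operation is any callable that issues queries on conn.
    for attempt in range(max_attempts):
        try:
            operation(conn)
            conn.commit()
            return
        except TransactionRollbackError:
            # Serialisation failure or deadlock: roll back and retry.
            conn.rollback()
            if attempt == max_attempts - 1:
                raise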

If you’re using previous versions of psycopg2, I’d highly recommend upgrading to this release.

Future work will probably involve support for the DB-API two phase commit extension, but I don’t know when I’ll have time to get to that.

Running Valgrind on Python Extensions

As most developers know, Valgrind is an invaluable tool for finding memory leaks.  However, when debugging Python programs, the pymalloc allocator gets in the way.

There is a Valgrind suppression file distributed with Python that gets rid of most of the false positives, but it does not give particularly good diagnostics for memory allocated through pymalloc.  To properly analyse leaks, you often need to recompile Python with pymalloc disabled.

As I don’t like having to recompile Python I took a look at Valgrind’s client API, which provides a way for a program to detect whether it is running under Valgrind. Using the client API I was able to put together a patch that automatically disables pymalloc when appropriate. It can be found attached to bug 2422 in the Python bug tracker.

The patch still needs a bit of work before it will be mergeable with Python 2.6/3.0 (mainly autoconf foo).  I also need to do a bit more benchmarking on the patch.  If the overhead of turning on this patch is negligible, then it’d be pretty cool to have it enabled by default when Valgrind is available.