Running Valgrind on Python Extensions

As most developers know, Valgrind is an invaluable tool for finding memory leaks. However, when debugging Python programs, the pymalloc allocator gets in the way: it hands out small objects from larger arenas obtained from the system allocator, so Valgrind only sees the arena allocations and its reports become much less useful.

There is a Valgrind suppression file distributed with Python that gets rid of most of the false positives, but it does not give particularly good diagnostics for memory allocated through pymalloc. To properly analyse leaks, you often need to recompile Python with pymalloc disabled.

As I don’t like having to recompile Python, I took a look at Valgrind’s client API, which provides a way for a program to detect whether it is running under Valgrind. Using the client API, I was able to put together a patch that automatically disables pymalloc when appropriate. It can be found attached to bug 2422 in the Python bug tracker.

The patch still needs a bit of work before it will be mergeable with Python 2.6/3.0 (mainly autoconf foo).  I also need to do a bit more benchmarking on the patch.  If the overhead of turning on this patch is negligible, then it’d be pretty cool to have it enabled by default when Valgrind is available.

Two-Phase Commit in Python’s DB-API

Marc uploaded a new revision of the Python DB-API 2.0 Specification yesterday that documents the new two-phase commit extension that I helped develop on the db-sig mailing list.

My interest in this started from the desire to support two-phase commit in Storm – without that feature, there are far fewer occasions where its ability to talk to multiple databases can be put to use. As I was doing some work on psycopg2 for Launchpad, I initially put together a PostgreSQL-specific patch, which was (rightly) rejected by Federico.

He suggested that it would be better to try to standardise on an API on the db-sig list, so that’s what I did. I looked over the APIs exposed by other database adapters that supported 2PC, and the 2PC facilities of the major free databases that did not have support in their Python adapters (MySQL and PostgreSQL). The resulting API is a bit more complicated than my original PostgreSQL-only patch, but has the advantage of being implementable on other databases such as MySQL.

Below is a simple example of using the API directly (missing some of the error handling):

# begin transactions for each database connection
conn1.tpc_begin(conn1.xid(42, 'transaction ID', 'connection 1'))
conn2.tpc_begin(conn2.xid(42, 'transaction ID', 'connection 2'))
# Do stuff with both connections
...
try:
    conn1.tpc_prepare()
    conn2.tpc_prepare()
except DatabaseError:
    conn1.tpc_rollback()
    conn2.tpc_rollback()
else:
    conn1.tpc_commit()
    conn2.tpc_commit()

Or alternatively, if you’ve got one connection supporting 2PC and the other only supporting one-phase commit, it could be structured as follows:

# begin transactions for each database connection
conn1.tpc_begin(conn1.xid(42, 'transaction ID', 'connection 1'))
# Do stuff with both connections
...
try:
    conn1.tpc_prepare()
    conn2.commit()
except DatabaseError:
    conn1.tpc_rollback()
    conn2.rollback()
else:
    conn1.tpc_commit()

While it is possible to use the 2PC API directly, it is expected that most applications will rely on a transaction manager, such as Zope’s transaction module, to coordinate global transactions.
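
To give a feel for what that coordination looks like, here is a rough sketch of the loop such a manager might run over any number of 2PC-capable connections (commit_all, do_work and the bare DatabaseError name are placeholders of my own, not part of the specification or of Zope’s transaction module):

def commit_all(connections, format_id, transaction_id):
    try:
        # Open a branch of the global transaction on every connection.
        for i, conn in enumerate(connections):
            conn.tpc_begin(conn.xid(format_id, transaction_id,
                                    'branch-%d' % i))
        do_work(connections)        # application work on the connections
        # Phase one: every resource must prepare successfully.
        for conn in connections:
            conn.tpc_prepare()
    except DatabaseError:
        # Any failure aborts the global transaction everywhere.
        for conn in connections:
            conn.tpc_rollback()
        raise
    else:
        # Phase two: everything is prepared, so commit.
        for conn in connections:
            conn.tpc_commit()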

The hope is that by offering a consistent API, Python application frameworks will be more likely to bother supporting this feature of databases. Hopefully you’ll be able to use the API with PostgreSQL and Storm soon.

Zeroconf Branch Sharing with Bazaar

At Canonical, one of the approaches taken to accelerate development is to hold coding sprints (otherwise known as hackathons, hackfests or similar). Certain things get done a lot quicker face to face compared to mailing lists, IRC or VoIP.

When collaborating with someone at one of these sprints, the usual way to let others look at my work would be to commit my changes somewhere they could be pulled or merged from. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.

In contrast, with a modern VCS like Bazaar we should be able to avoid this, since the full history of the branch is available locally – enough information to let others pull or merge the changes. That said, we’ve often ended up using a server on the internet to exchange changes anyway. This is the same workflow we use when working from home, so I guess the pain of switching to a new workflow outweighs the potential productivity gains.

The Solution

Bazaar makes it easy to run a read-only server locally:

bzr serve [--directory=DIR]

However, there is still the issue of others finding the branch. They’d need to know the IP address assigned to my computer at the sprint, and the path to the branch on the server. Ideally, they’d just need to know the name of my branch. As it happens, we’ve got the technology to fix this.

Avahi makes it trivial to advertise and browse for services on the local network without having to worry about what IP addresses have been assigned or what people name their computer. So the solution is to hook Avahi and Bazaar together. This was fairly easy due to Avahi’s DBus interface and the dbus-python bindings.
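
Hooking the two together boils down to a few DBus calls. Here is a rough sketch (not the plugin’s actual code) of publishing a service through Avahi’s DBus interface using dbus-python and the python-avahi helper module; the '_bzr._tcp' service type is an assumption on my part, and 4155 is simply bzr’s default serve port:

import dbus
import avahi  # constants and helpers from the python-avahi module

def publish_branch(name, port=4155, stype='_bzr._tcp'):
    # Talk to the Avahi daemon over the system bus.
    bus = dbus.SystemBus()
    server = dbus.Interface(
        bus.get_object(avahi.DBUS_NAME, avahi.DBUS_PATH_SERVER),
        avahi.DBUS_INTERFACE_SERVER)
    # Create an entry group, add the service record and commit it.
    group = dbus.Interface(
        bus.get_object(avahi.DBUS_NAME, server.EntryGroupNew()),
        avahi.DBUS_INTERFACE_ENTRY_GROUP)
    group.AddService(avahi.IF_UNSPEC, avahi.PROTO_UNSPEC, dbus.UInt32(0),
                     name, stype, '', '', dbus.UInt16(port),
                     avahi.string_array_to_txt_array([]))
    group.Commit()
    return group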

The result is my bzr-avahi plugin. You can either download tarballs or install the latest version directly from Bazaar:

bzr branch lp:bzr-avahi ~/.bazaar/plugins/avahi

To use the plugin, you must have at least version 1.1 of Bazaar, the Python bindings for DBus and Avahi, and a working Avahi setup. Once the plugin is installed, it hooks into the standard “bzr serve” command to do the following:

  • scan the directory being served for branches that the user has asked to advertise.
  • ask Avahi to advertise said branches.

You can ask to advertise a branch using the new “bzr advertise” command:

bzr advertise [BRANCH-NAME]

If no name is specified, the branch’s nickname is used. The advertise command sends a signal over the session bus to tell any running servers about the change, so there is no need to restart “bzr serve” to see the change.

At this point, the advertised branches should be visible with a service browser like avahi-discover, so that’s half the problem solved. From the client side two things are provided: a special redirecting transport and a command to list all advertised branches on the local network.

The transport allows you to access the branch by its advertised name with most Bazaar commands. For example, merging a branch is as simple as:

$ bzr merge local:BRANCH-NAME
local:BRANCH-NAME is redirected to bzr://hostname.local:4155/path/to/branch
...
All changes applied successfully.
$

If you want to get a list of all advertised branches on the network, the “bzr browse” command will print out a list of branch names and the URLs they translate to.

I believe these tools together make the overhead of sharing branches directly at sprints low enough that people will actually bother to use them. It should be quite useful at the next sprint I go to.

Re: Python factory-like type instances

Nicolas: Your metaclass example is a good illustration of when not to use metaclasses. I wouldn’t be surprised if it executes slightly differently to how you expect. Let’s look at how Foo is evaluated, starting with what’s written:

class Foo:
    __metaclass__ = FooMeta

This is equivalent to the following assignment:

Foo = FooMeta('Foo', (), {...})

As FooMeta has a __new__() method, the attempt to instantiate FooMeta will result in that method being called. As the return value of __new__() is not a FooMeta instance, there is no attempt to call FooMeta.__init__(). So we could further simplify the code to:

Foo = {
    'linux2': LinuxFoo,
    'win32': WindowsFoo,
}.get(PLATFORM, None)
if not Foo:
    # XXX: this should _really_ raise something other than Exception
    raise Exception, 'Platform not supported'

So the factory function is gone completely here, and it is clear that the decision about which class to use is being made at module import time rather than class instantiation time.
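
As an aside, the claim that __init__() gets skipped is easy to verify with a tiny standalone example (mine, not from Nicolas’s post):

class Meta(type):
    def __new__(mcs, name, bases, attrs):
        return 42                     # not a Meta instance
    def __init__(cls, name, bases, attrs):
        print 'Meta.__init__ called'  # never reached

class Demo(object):
    __metaclass__ = Meta

print Demo                            # prints 42; __init__() was skipped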

Now this isn’t to say that metaclasses are useless. In both implementations, the code responsible for selecting the class has knowledge of all implementations. To add a new implementation (e.g. for Solaris or MacOS X), the factory function needs to be updated. A better solution would be to provide a way for new implementations to register themselves with the factory. A metaclass could be used to make the registration automatic:

class FooMeta(type):
    def __init__(cls, name, bases, attrs):
        super(FooMeta, cls).__init__(name, bases, attrs)
        # Concrete subclasses set a platform, so register them automatically.
        if cls.platform is not None:
            register_foo_implementation(cls.platform, cls)

class Foo:
    __metaclass__ = FooMeta
    platform = None
    ...

class LinuxFoo(Foo):
    platform = 'linux2'

Now the simple act of defining a SolarisFoo class would be enough to have it registered and ready to use.
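
For completeness, register_foo_implementation() is only referenced above; a minimal sketch of what that registry and a matching factory function could look like (the names and exception choice are my own) is:

_foo_implementations = {}

def register_foo_implementation(platform, cls):
    # Called by FooMeta each time a concrete subclass is defined.
    _foo_implementations[platform] = cls

def create_foo(*args, **kwargs):
    # The factory no longer needs to know about every implementation.
    try:
        cls = _foo_implementations[PLATFORM]
    except KeyError:
        raise NotImplementedError('Platform not supported: %r' % (PLATFORM,))
    return cls(*args, **kwargs)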

urlparse considered harmful

Over the weekend, I spent a number of hours tracking down a bug caused by the cache in the Python urlparse module. The problem has already been reported as Python bug 1313119, but has not been fixed yet.

First a bit of background. The urlparse module does what you’d expect and parses a URL into its components:

>>> from urlparse import urlparse
>>> urlparse('http://www.gnome.org/')
('http', 'www.gnome.org', '/', '', '', '')

As well as accepting byte strings (which you’d be using at the HTTP protocol level), it also accepts Unicode strings (which you’d be using at the HTML or XML content level):

>>> urlparse(u'http://www.ubuntu.com/')
(u'http', u'www.ubuntu.com', u'/', '', '', '')

As the result is immutable, urlparse implements a cache of up to 20 previous results. Unfortunately, the cache does not distinguish between byte strings and Unicode strings, so parsing a byte string may return unicode components if the result is in the cache:

>>> urlparse('http://www.ubuntu.com/')
(u'http', u'www.ubuntu.com', u'/', '', '', '')

When you combine this with Python’s automatic promotion of byte strings to unicode when the two are concatenated, it can really screw things up when you do want to work with byte strings. If you hit such a problem, the code may all look correct, but the problem could have been introduced many urlparse calls earlier. Even if your own code never passes in Unicode strings, one of the libraries you use might be doing so.

The problem affects more than just the urlparse function. The urljoin function from the same module is also affected since it uses urlparse internally:

>>> from urlparse import urljoin
>>> urljoin('http://www.ubuntu.com/', '/news')
u'http://www.ubuntu.com/news'

It seems safest to avoid the module altogether if possible, or at least until the underlying bug is fixed.
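
If avoiding it isn’t practical, one stopgap I’d consider (my own suggestion, not a fix from the bug report) is to flush the module’s cache before parsing a byte string, using the clear_cache() function the module exposes. It only protects the call that immediately follows, since any library can refill the cache:

import urlparse

def parse_bytes(url):
    # Flush the shared result cache so this str URL is parsed afresh
    # rather than being served from a cached unicode result.
    urlparse.clear_cache()
    return urlparse.urlparse(url)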