Filtering derived fields with Wagtail search

Wagtail’s built-in search functionality has a nifty feature where it will index callables on your model. Say your model has a start and end date, but you want to search on duration: hypothetically we could add a FilterField to index this for Elasticsearch¹.

from django.db import models
from wagtail.wagtailsearch import index  # Wagtail 1.x import path


class Job(models.Model, index.Indexed):
    """A job you can apply for."""

    start_date = models.DateField()
    end_date = models.DateField()

    search_fields = (
        index.FilterField('duration', type='IntegerField'),
    )

    @property
    def duration(self):
        """Duration of the assignment."""
        return 12 * (self.end_date.year - self.start_date.year) + \
            self.end_date.month - self.start_date.month

Wagtail is quite clever in that it takes a Django QuerySet and decomposes the filters.

# `data` here is e.g. the cleaned_data of a search form with a duration range.
queryset = queryset\
    .filter(duration__range=(data['duration'].lower or 0,
                             data['duration'].upper or 99999))
query = backend.search(keywords, queryset)

Of course Django will get upset about this, since duration isn’t a database field you can filter on. So we can annotate the value in.

from django.db.models import F, Func, fields

class DurationInMonths(Func):  # pylint:disable=abstract-method
    """
    SQL Function to calculate the duration of assignments in months.
    """
    template = \
        'EXTRACT(year FROM age(%(expressions)s)) * 12 + ' \
        'EXTRACT(month FROM age(%(expressions)s))'
    output_field = fields.IntegerField()


queryset = queryset\
    .annotate(duration=DurationInMonths(
        F('end_date'), F('start_date')
    ))\
    .filter(duration__range=(data['duration'].lower or 0,
                             data['duration'].upper or 99999))

Django will also be upset that it can’t assign the annotated value to the read-only duration property, so you’ll need to add a (no-op) setter to your model.

class Job(models.Model, index.Indexed):
    @property
    def duration(self):
        ...

    @duration.setter
    def duration(self, value):
        """Ignored to make Django annotations happy."""
        pass

And you would ideally be done, except that now Wagtail gets upset because it can’t determine the attribute name for the field. You can kludge around this for now:

class DurationInMonths(Func):  # pylint:disable=abstract-method
    """
    SQL Function to calculate the duration of assignments in months.
    """
    template = \
        'EXTRACT(year FROM age(%(expressions)s)) * 12 + ' \
        'EXTRACT(month FROM age(%(expressions)s))'
    output_field = fields.IntegerField()
    target = type('IntegerFieldKludge',
                  (fields.IntegerField,),
                  {'attname': 'duration'})()
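
With all the pieces in place, the flow looks something like this (a sketch: the query keywords and range bounds are placeholders, and the import path is the Wagtail 1.x one this post assumes):

from django.db.models import F
from wagtail.wagtailsearch.backends import get_search_backend

backend = get_search_backend()

# Annotate the queryset so Django can filter on duration, then hand it to
# Wagtail, which decomposes the filter into the Elasticsearch query.
queryset = Job.objects\
    .annotate(duration=DurationInMonths(F('end_date'), F('start_date')))\
    .filter(duration__range=(0, 24))

results = backend.search('python developer', queryset)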
  1. Note the requirement for a type, otherwise this will be indexed as a string.

Multiple choice using Django’s postgres ArrayField

There are a lot of times you want a multiple-choice set of flags, and a many-to-many database relation is massively overkill. Django 1.9 added a built-in model field leveraging Postgres’ built-in array support. Unfortunately the default form field for ArrayField, SimpleArrayField, is not even a little bit useful (it’s a comma-delimited text input).

If you’re writing your own form, you can simply use the MultipleChoiceField form field, but if you’re using something that builds forms with the ModelForm automagic factories and no overrides (e.g. Wagtail’s admin site), you need a way to specify the form field on the model field itself.

Instead, subclass ArrayField:

from django import forms
from django.contrib.postgres.fields import ArrayField


class ChoiceArrayField(ArrayField):
    """
    A field that allows us to store an array of choices.
    
    Uses Django 1.9's postgres ArrayField
    and a MultipleChoiceField for its formfield.
    """

    def formfield(self, **kwargs):
        defaults = {
            'form_class': forms.MultipleChoiceField,
            'choices': self.base_field.choices,
        }
        defaults.update(kwargs)
        # Skip our parent's formfield implementation completely as we don't
        # care for it.
        # pylint:disable=bad-super-call
        return super(ArrayField, self).formfield(**defaults)

You use this like ArrayField, except that choices is required on the base field.

FLAG_CHOICES = (
    ('defect', 'Defect'),
    ('enhancement', 'Enhancement'),
)

# (Newer Django versions will warn that a mutable default is shared between
# instances; a callable default avoids that.)
flags = ChoiceArrayField(models.CharField(max_length=...,
                                          choices=FLAG_CHOICES),
                         default=['defect'])

You can similarly use this with any other field that supports choices, e.g. IntegerField (but you’re not storing choices as integers… are you).
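
If you really are, an integer version looks like this (the field and choices here are hypothetical):

PRIORITY_CHOICES = (
    (1, 'Low'),
    (2, 'Medium'),
    (3, 'High'),
)
priorities = ChoiceArrayField(models.IntegerField(choices=PRIORITY_CHOICES),
                              default=list)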

Gist.

Two new fixtures-related utilities for Django and Wagtail

During “constantly bettering” today I got around to forking two small tools I wrote out of their parent codebases and uploading them to GitHub/PyPI.

wagtailimporter is a Django app that reads Wagtail pages from a YAML file and imports them into the database. It’s designed to handle pages much more neatly than DB fixtures, including things like foreign-key lookups (via YAML objects) and inline support for StreamField. It’s sort of a work in progress, where I’m adding support for more things as needed.

django_loaddata_stdin is a very small utility that extends Django’s loaddata command to support reading from stdin. This is extremely useful when you have fragments of site config loaded into the database (e.g. Django sites, Allauth socialaccounts), but you don’t want this in a fixture file committed to revision control.

For instance, this is how I deploy my site into Docker, where I don’t have site-config fixture files built into the containers:

docker-compose run web ./manage.py loaddata --format=yaml - << EOF
<paste>
EOF

Redmine analytics with Python, Pandas and IPython Notebooks

Originally posted on ixa.io

We use Redmine at Infoxchange to manage our product backlog, using the Agile plugin from Redmine CRM. While this is generally good enough for making sure we don’t lose any cards, and has features like burndown charting, you still have to keep your own metrics. This is sort of annoying, because you’re sure the data is in there somehow; what do you do when you want to explore a new metric?

This is where IPython Notebooks and the data-analysis framework Pandas come in. IPython Notebooks are an interactive web-based workspace where you can intermix documentation and Python code, with the graphs and tables you produce rendered directly into the document. Individual “cells” of the notebook are cached, so you can work on part of the program and experiment (great for data analysis) without having to re-run big, slow number-crunching or data-download steps.

Pandas is a library for loading and manipulating data. It is built on the well-known numpy and scipy scientific packages and extends them to load data from almost any file type or source (e.g. a Redmine CSV straight from your Redmine server) without you having to know much programming. Your data becomes intuitively exposed. It also has built-in plotting and great integration with IPython Notebooks.

The first metric I wanted to collect was the relationship between our story estimates (in points), our tasking estimates (in hours) and reality. First, let’s plot our velocity. It’s simply a matter of saving a custom query whose CSV export URL I can copy into my notebook, which means I can re-run the notebook at any time to get the latest data.

Remember that IPython Notebooks cache the results of cells, so I can put the Redmine access in a cell of its own and won’t have to keep re-downloading the data. I can even work offline if I want.

import pandas

# `key` is your Redmine API key.
data = pandas.read_csv(
    'https://redmine/projects/devops/issues.csv'
    '?query_id=111&key={key}'.format(key=key))

Pandas exposes the result as a data frame, which is something we can manipulate in meaningful ways. For instance, we can extract all of the values of a column with data['column name']. We can also select all of the rows that match a certain condition:

data[data['Target version'] != 'Backlog']

We can go a step further and group this data by the sprint it belongs to:

data[data['Target version'] != 'Backlog']\
    .groupby('Target version')

We can then even create a new data series by summing the points column within each group to find our velocity (as_index=True makes the sprint version the row identifier for each row, which will be useful when plotting the data):

data[data['Target version'] != 'Backlog']\
    .groupby('Target version', as_index=True)\
    ['Story Points'].sum()

If you’ve entered each of these into an IPython Notebook and executed them, you’ll notice the table appears underneath your code block.

We can use this data series to do calculations. For example, to find the moving average of our sprint velocity we can do this:

velocity = data[data['Target version'] != 'Backlog']\
    .groupby('Target version', as_index=True)\
    ['Story Points'].sum()

# (pandas.rolling_mean has since been removed from Pandas; modern versions
# spell this velocity.rolling(window=5, min_periods=1).mean().)
avg_velocity = pandas.rolling_mean(velocity, window=5, min_periods=1)
We can then plot our series together (note the passing of ax to the second plot, which lets both series share one set of axes):

ax = velocity.plot(kind='bar', color='steelblue', label="Velocity",
                   legend=True, title="Velocity", figsize=(12,8))
avg_velocity.plot(ax=ax, color='r', style='.-', label="Average velocity (window=5)",
                  legend=True)
ax.xaxis.grid(False)

You can view the full notebook for this demo online. If you want to learn more about Pandas, read 10 Minutes to Pandas. Of course, what makes IPython Notebooks so much more powerful than Excel, SPSS and R is our ability to use any third-party Python package we like. Another time I’ll show how we can use the third-party Python-Redmine API to build data frames of metrics from the individual issues’ journals.

Using an SSL intermediate as your CA cert with Python Requests

Originally posted on ixa.io

We recently had to work around an issue integrating with a service that did not provide the full SSL certificate chain. That is to say it had the server certificate installed but not the intermediate certificate required to chain back up to the root. As a result we could not verify the SSL connection.

Not a concern, we thought: we can just pass the appropriate intermediate certificate as the CA cert using the verify keyword to Requests. This is, after all, what we do in test with our custom root CA.

response = requests.post(url, body, verify=settings.INTERMEDIATE_CA_FILE)

While this worked using curl and the openssl command, it continued not to work in Python, which instead gave us a vague error about certificate validity. openssl was actually giving us the answer, though, by showing the root CA in the trust chain.

The problem, it turns out, is that you need to provide the full path back to a root CA (a certificate not issued by somebody else). The openssl command does this by including the system root CAs when it considers the CA you supply, but Python considers only the CAs in the bundle you provide. Concatenate the two certificates together into a bundle:

-----BEGIN CERTIFICATE-----
INTERMEDIATE CA CERT
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ROOT CA CERT THAT ISSUES THE INTERMEDIATE CA CERT
-----END CERTIFICATE-----

Pass this bundle to the verify keyword.
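
In code that might look like the following sketch (the file names are assumptions; url and body are as in the earlier snippet):

import requests

# Build the bundle: intermediate first, then the root that issued it.
with open('ca-bundle.pem', 'wb') as bundle:
    for cert_file in ('intermediate.pem', 'root.pem'):
        with open(cert_file, 'rb') as cert:
            bundle.write(cert.read())

response = requests.post(url, body, verify='ca-bundle.pem')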

You can read up about Intermediate Certificates on Wikipedia.

Running Django on Docker: a workflow and code

It has been an extremely long time between beers (10 months!). I’ve gotten out of the habit of blogging, and somehow I never blogged about the talk I co-presented at PyCon AU this year on Pallet and Forklift, the standard and tool we’ve developed at Infoxchange to make it easier to develop web applications on Docker¹.

Infoxchange is one of the few places I’m aware of that runs Docker in prod. If you’re looking at using Docker to do web development, it’s worth checking out what we’ve been doing over on the Infoxchange devops blog.

  1. There’s also Straddle Carrier, a set of Puppet manifests for loading Docker containers on real infrastructure, but they’ve not been released yet as they rely too much on our custom Puppet config.

lazy-loading class-based-views in Django

So one of the nice things with function-based views in Django is the ability to do this sort of thing to load a view located at frontend.views.home:

urlpatterns = patterns(
    'frontend.views',

    url(r'^$', 'home', name='home'),
)

Unfortunately, if you’re using class-based-views, you can’t do this:

urlpatterns = patterns(
    'frontend.views',

    url(r'^$', 'HomeView', name='home'),
)

Instead you have to resort to importing the view and calling HomeView.as_view(), which is sort of annoying when you don’t want to import all of those views.

It turns out, however, that overloading the code that resolves 'HomeView' is not that difficult, and we can do it with a pretty straightforward monkeypatch. This version uses the kwargs argument of url() to pass keyword arguments to as_view(); there’s a usage example after the patch.

from django.conf import urls
from django.views.generic import View


class ClassBasedViewURLPattern(urls.RegexURLPattern):
    """
    A version of RegexURLPattern able to handle class-based-views

    Monkey-patch it in to support class-based-views
    """

    @property
    def callback(self):
        """
        Hook locating the view to handle class based views
        """

        view = super(ClassBasedViewURLPattern, self).callback

        if isinstance(view, type) and issubclass(view, View):
            view = view.as_view(**self.default_args)
            self.default_args = {}

        return view

urls.RegexURLPattern = ClassBasedViewURLPattern
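
With the patch applied, the string form from the first example works for class-based views too, and kwargs are forwarded to as_view() (the template_name argument here is hypothetical):

urlpatterns = patterns(
    'frontend.views',

    url(r'^$', 'HomeView', name='home',
        kwargs={'template_name': 'frontend/home.html'}),
)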

Django utility methods (including New Relic deployment notification)

So we’ve moved to GitHub here at Infoxchange as our primary development platform, because pull requests and Travis CI are much nicer than yelling across the room at each other¹. To let Travis build our code, we’ve needed to move our little utility libraries to GitHub too. Since some of these were already on PyPI, it made sense to open source the rest of them as well.

The most useful is a package called IXDjango, which includes a number of generally useful management commands for Django developers. Especially useful are deploy, which runs a sequence of other commands for deployment, and newrelic_notify_deploy, which notifies New Relic of your deployment and annotates all of your graphs with the version number.
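
Since they’re ordinary management commands, you can also trigger them from a script with Django’s call_command (a sketch; check IXDjango itself for each command’s options):

from django.core.management import call_command

call_command('deploy')
call_command('newrelic_notify_deploy')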

We hope these are useful to people.

  1. Big shout out to both GitHub and Travis CI for supporting our not-for-profit mission with gratis private accounts.

Writing your first web app using Python and Flask

I presented a tutorial at linux.conf.au a couple of weeks ago on what there is for Python developers between CGI scripts and Django: developers needn’t still be writing CGI scripts in 2014 (it happens), and there are frameworks that meet your needs.

This tutorial introduces the microframework Flask and shows off a whole bunch of things you can do with it, up to being a fully-fledged replacement for Django if you’re so inclined.
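
To give a taste of how small a Flask app starts out, here is the canonical hello-world (this is Flask’s own quickstart example, not part of the tutorial):

from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    """Say hello to the visitor."""
    return 'Hello, world!'


if __name__ == '__main__':
    app.run(debug=True)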

Video
Examples source

In other news, I am now a maintainer of Lettuce, a BDD framework for Python/Django, so expect a few more Lettuce-related blog posts (if I stop ignoring my blog).

More than a side salad: behaviour driven testing and test driven design in Django with Lettuce

Been quiet lately because I am super busy getting a project out the door. However, I did find time to give this talk last night on behaviour driven testing with Lettuce at MelbDjango.

Apologies for the glitches in the PDF. reveal.js is amazing for doing presentations, but I didn’t have time to fix the glitches in the PDF output. The presentation can also be cloned from Git (view index.html), but you’ll need the fonts (they’re all libre, but I was too lazy to use webfonts).

Hopefully, when I get a breather, I can write about some other code I’ve written, or am writing. Or maybe catch up on the rest of my life/projects.