python-pkcs11 with the Nitrokey HSM

So my Nitrokey HSM arrived and it works great, thanks to the Nitrokey peeps for sending me one.

The OpenSC PKCS #11 module is a little more lightweight than some of the other vendors' modules, which often advertise mechanisms that aren't actually supported by the hardware (e.g. the Opencryptoki TPM module). So I wrote up some documentation on how to use the device, focusing on how to extract the public keys for use outside of PKCS #11, since the Nitrokey doesn't implement any of the public key functions.

Nitrokey with python-pkcs11

This also encouraged me to add a whole bunch more import/extraction functions for the various key formats. Along the way I got very frustrated at the lack of documentation for little things like how OpenSSL stores EC public keys (the answer: as SubjectPublicKeyInfo from X.509). I think there might still be some operating-system-specific glitches with encoding some DER structures; I think I need to move from pyasn1 to asn1crypto.
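
As a taste, pulling an EC public key off the token for use elsewhere looks something like this (the module path, token label, PIN and key label are placeholders, and the helper names are from the pkcs11.util modules as I remember them, so check the docs above for specifics):

import pkcs11
from pkcs11 import ObjectClass
from pkcs11.util.ec import encode_ec_public_key

lib = pkcs11.lib('/usr/lib/opensc-pkcs11.so')  # placeholder path
token = lib.get_token(token_label='UserPIN (SmartCard-HSM)')

with token.open(user_pin='123456') as session:
    key = session.get_key(object_class=ObjectClass.PUBLIC_KEY,
                          label='my-ec-key')
    # DER-encoded SubjectPublicKeyInfo, usable with OpenSSL and friends
    der = encode_ec_public_key(key)

with open('pubkey.der', 'wb') as f:
    f.write(der)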

Announcing new high-level PKCS#11 HSM support for Python

Recently I've been working on a project that makes use of Thales HSM devices to encrypt/decrypt data. There are a number of ways to talk to the HSM, but the most straightforward from Linux is via PKCS#11. There have been a number of attempts to wrap the PKCS#11 spec for Python, based on SWIG, cffi, etc., but they were all some combination of (a) low level, (b) not very Pythonic, (c) terrible at error handling, (d) broken, (e) inefficient for large files and (f) very difficult to fix.

Anyway, nearly all documentation on how to actually use PKCS#11 has to be discerned from C examples, so I'd developed a pretty good working knowledge of the C API, and I'd wanted to learn Cython for a while. So I decided to write a new binding, based on a high-level wrapper I'd already put into my app. It's designed to be accessible, pick sane defaults for you, use generators where appropriate to reduce work, stream large files, be introspectable in your programming environment and be easy to read and extend.

https://github.com/danni/python-pkcs11

It's currently a work in progress, but it's now available on PyPI. You can get a session on a device, create a symmetric key, find objects, and encrypt and decrypt data. The Cryptoki spec is quite large, so I'm focusing on the support I need first, but it should be pretty straightforward for anyone who wants to add something else they need. I like to think I write reasonably clear, self-documenting code.
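
A taste of the API as it stands today (the module path, token label and PIN are placeholders; SoftHSMv2 is fine for experimenting):

import pkcs11

lib = pkcs11.lib('/usr/lib/softhsm/libsofthsm2.so')  # placeholder path
token = lib.get_token(token_label='DEMO')

with token.open(user_pin='1234') as session:
    # Generate a 256-bit AES secret key on the token
    key = session.generate_key(pkcs11.KeyType.AES, 256)

    # AES defaults to CBC with PKCS padding, so we need an IV
    iv = session.generate_random(128)
    ciphertext = key.encrypt(b'Some secret data', mechanism_param=iv)
    assert key.decrypt(ciphertext, mechanism_param=iv) == b'Some secret data'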

At the moment it’s only tested on SoftHSMv2 and the Thales nCipher Edge, which is what I have access to. If someone at Amazon wanted this to work flawlessly on CloudHSM, send me an account and I’ll do it 😛 Then I can look at releasing my Django integrations for fields, storage, signing, etc.

PostgreSQL date ranges in Django forms

Django's postgres extensions support data types like DateRange, which is super useful when you want to query your database against dates; however, they have no form field to expose this in HTML.

Handily Django 1.11 has made it super easy to write custom widgets with complex HTML.
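
To give a flavour, here's a minimal sketch of a form field that combines two date inputs into a psycopg2 DateRange (the class names are illustrative, and this uses the stock MultiWidget rather than the fancier Django 1.11 template-based widgets the full post is about):

from django import forms
from psycopg2.extras import DateRange


class DateRangeWidget(forms.MultiWidget):
    """Two date inputs rendered side by side."""

    def __init__(self, attrs=None):
        super().__init__((forms.DateInput(), forms.DateInput()), attrs)

    def decompress(self, value):
        if value:
            return [value.lower, value.upper]
        return [None, None]


class DateRangeField(forms.MultiValueField):
    """Combine two dates into a psycopg2 DateRange."""

    widget = DateRangeWidget

    def __init__(self, **kwargs):
        super().__init__((forms.DateField(), forms.DateField()), **kwargs)

    def compress(self, data_list):
        if data_list:
            return DateRange(lower=data_list[0], upper=data_list[1])
        return None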

Continue reading “PostgreSQL date ranges in Django forms”

Websockets + socket.io on the ESP8266 w/ Micropython

I recently learned about the ESP8266 while at Pycon AU. It's pretty nifty: it's tiny, it has wifi, a reasonable amount of RAM (for a microcontroller), oh, and it can run Python. Specifically Micropython. Anyway, I purchased a couple from Adafruit (specifically this one) and installed the Micropython UNIX port on my computer (be aware that the cheaper ESP8266 boards might not be very reflashable, or so I've been told; spend the extra money for one with decent flash).

The first thing you learn is that the ports are all surprisingly different in terms of what functionality they support, and the docs don't make that clear the way they do for CPython. I learned the hard way that there is a set of docs per port, which may be why the method you're looking for isn't there.

The other thing is that even though you're getting to write in Python, and it has many Pythonic abstractions, many of those abstractions are based around POSIX and leak heavily on microcontrollers. Still, a number of them look implementable without actually reinventing UNIX (probably).

The biggest problem at the moment is that there's no "platform independent" way to do asynchronous IO. On the microcontroller you can set top-half interrupt handlers for IO events (no malloc here, yay!), gate the CPU, and then execute bottom halves from the main loop. However, that's not going to work on UNIX. Or you can use select, but that's not available on the ESP8266 (yet). Micropython does support Python 3.5 asyncio coroutines, so hopefully the port of asyncio to the ESP8266 happens soon. I'd be especially ecstatic if I could do await pin.trigger(Pin.FALLING).

There are a few other things that could really help make it feel like Python. Why isn't disabling interrupts a context manager/decorator? It's great that you can try/finally your interrupt code, but the with keyword is so much more Pythonic. Perhaps this is because the code is being written by microprocessor people… which is why they're so into protocols like MQTT for talking to their devices.
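
Something like this sketch is what I have in mind, wrapping MicroPython's machine.disable_irq()/enable_irq() (whether contextlib is available depends on the port; micropython-lib ships a cut-down ucontextlib):

import machine

try:
    from contextlib import contextmanager
except ImportError:
    from ucontextlib import contextmanager  # micropython-lib


@contextmanager
def interrupts_disabled():
    """Mask IRQs for the duration of the with-block."""
    state = machine.disable_irq()
    try:
        yield
    finally:
        machine.enable_irq(state)


# with interrupts_disabled():
#     ...  # touch state shared with an interrupt handler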

Don’t get me wrong, MQTT is a great protocol that you can cram onto all sorts of devices, with all sorts of crappy PHYs, but I have wifi, and working SSL. I want to do something more web 2.0. Something like websockets. In fact, I want to take another service’s REST API and websockets, and deliver that information to my device… I could build a HTTP server + MQTT broker, but that sounds like a pain. Maybe I can just build a web server with socket.io and connect to that directly from the device?!

The ESP8266 already has some very basic websocket support for its WebREPL, but that’s not very featureful and seems to only implement half of the spec. If we’re going to have Python on a device, maybe we can have something that looks like the great websockets module. Turns out we can!

socket.io is a little harder: it requires a handshake which is not documented (I reversed it in the end), and decoding a HTTP payload, which is not very clearly documented either (I had to read the source). It's not the most efficient protocol out there, but the chip is more than fast enough to deal with it. Also, fun times: it turns out there's no platform-independent way to return from waiting for IO. Basically it turned out there were a lot of yaks to shave.

Where it all comes into its own, though, is the ability to write what is pretty much everyday, beautiful Python. It's worth it over Arduino sketches or whatever else takes your fancy.

uwebsockets/usocketio on Github.
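
For the curious, talking to a plain websocket ends up looking roughly like this (the URI is a placeholder, and treat the exact module and function names as from memory; the READMEs on Github are canonical):

import uwebsockets.client

# Connect to a websocket echo server (URI is a placeholder)
websocket = uwebsockets.client.connect('ws://echo.example.com/')
websocket.send('ping')
print(websocket.recv())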

Electronics breadboard with a project on it sitting on a laptop keyboard

Django and PostgreSQL composite types

PostgreSQL has this nifty feature called composite types that you can use to create your own types from the built-in PostgreSQL types. It’s a bit like hstore, only structured, which makes it great for structured data that you might reuse multiple times in a model, like addresses.

Unfortunately, to date they have been pretty much a pain to use in Django. There were some older implementations for versions of Django before 1.7, but they tended to do things like create surprise new objects in the namespace, not be migratable, and require a connection to the DB at all times (i.e. even during your build).

Anyway, after reading a bunch of their implementations and then the Django source code I wrote django-postgres-composite-types.

Install with:

pip install django-postgres-composite-types

Then you can define a composite type declaratively:

from django.db import models
from postgres_composite_types import CompositeType


class Address(CompositeType):
    """An address."""

    address_1 = models.CharField(max_length=255)
    address_2 = models.CharField(max_length=255)

    suburb = models.CharField(max_length=50)
    state = models.CharField(max_length=50)

    postcode = models.CharField(max_length=10)
    country = models.CharField(max_length=50)

    class Meta:
        db_type = 'x_address'  # Required

And use it in a model:

class Person(models.Model):
    """A person."""

    address = Address.Field()

The field should provide all of the things you need, including the formfield etc., and you can even inherit from this field to extend it in your own way:

class AddressField(Address.Field):
    def __init__(self, in_australia=True, **kwargs):
        self.in_australia = in_australia

        super().__init__(**kwargs)

Finally, to set up the DB there is a migration operation you can add that will create the type:

import address
from django.db import migrations


class Migration(migrations.Migration):

    operations = [
        # Registers the type
        address.Address.Operation(),
        migrations.AddField(
            model_name='person',
            name='address',
            field=address.Address.Field(blank=True, null=True),
        ),
    ]

It’s not smart enough to add it itself (can you do that?). Nor would it be smart enough to write the operations to alter a type. That would be a pretty cool trick. But it’s useful functionality all the same, especially when the alternative is creating lots of 1:1 models that are hard to work with and hard to garbage collect.

It’s still pretty early days, so the APIs are subject to change. PRs accepted of course.

Filtering derived fields with Wagtail search

Wagtail's built-in search functionality has a nifty feature where it will index callables on your model: e.g. your model has a start and end date, but you want to search on duration. Hypothetically we could add a FilterField here to index this for Elasticsearch (note the requirement for a type; otherwise this will be indexed as a string):

from django.db import models
from django.db.models import Model
from wagtail.wagtailsearch import index  # wagtail.search from Wagtail 2.0


class Job(Model, index.Indexed):
    """A job you can apply for."""

    start_date = models.DateField()
    end_date = models.DateField()

    search_fields = (
        index.FilterField('duration', type='IntegerField'),
    )

    @property
    def duration(self):
        """Duration of the assignment."""
        return 12 * (self.end_date.year - self.start_date.year) + \
            self.end_date.month - self.start_date.month

Wagtail is quite clever in that it takes a Django QuerySet and decomposes the filters.

queryset = queryset\
    .filter(duration__range=(data['duration'].lower or 0,
                             data['duration'].upper or 99999))
query = backend.search(keywords, queryset)

Of course Django will get upset about this, since that’s not a field you can filter on. So we can annotate the value in.

from django.db.models import F, Func, fields

class DurationInMonths(Func):  # pylint:disable=abstract-method
    """
    SQL Function to calculate the duration of assignments in months.
    """
    template = \
        'EXTRACT(year FROM age(%(expressions)s)) * 12 + ' \
        'EXTRACT(month FROM age(%(expressions)s))'
    output_field = fields.IntegerField()


queryset = queryset\
    .annotate(duration=DurationInMonths(
        F('end_date'), F('start_date')
    ))\
    .filter(duration__range=(data['duration'].lower or 0,
                             data['duration'].upper or 99999))

Django will also be upset that it can't assign the annotated duration back onto the model (it's a read-only property), so you'll need to add a setter to your model.

class Job(Model, index.Indexed):
    @property
    def duration(self):
        ...

    @duration.setter
    def duration(self, value):
        """Ignored to make Django annotations happy."""
        pass

And ideally you would be done, except now Wagtail gets upset that it can't determine the attribute name for the field. You can kludge around this for now:

class DurationInMonths(Func):  # pylint:disable=abstract-method
    """
    SQL Function to calculate the duration of assignments in months.
    """
    template = \
        'EXTRACT(year FROM age(%(expressions)s)) * 12 + ' \
        'EXTRACT(month FROM age(%(expressions)s))'
    output_field = fields.IntegerField()
    target = type('IntegerFieldKludge',
                  (fields.IntegerField,),
                  {'attname': 'duration'})()

Multiple choice using Django’s postgres ArrayField

There are a lot of times you want a multiple-choice set of flags, and using a many-to-many database relation is massively overkill. Django's postgres extensions include a built-in model field, ArrayField, that leverages Postgres' native array support. Unfortunately the default form field for ArrayField, SimpleArrayField, is not even a little bit useful (it's a comma-delimited text input).

If you’re writing your own form, you can simply use the MultipleChoiceField formfield, but if you’re using something that builds forms using the ModelForm automagic factories with no overrides (e.g. Wagtail’s admin site), you need a way to specify the formfield.

Instead subclass ArrayField:

from django import forms
from django.contrib.postgres.fields import ArrayField


class ChoiceArrayField(ArrayField):
    """
    A field that allows us to store an array of choices.
    
    Uses Django 1.9's postgres ArrayField
    and a MultipleChoiceField for its formfield.
    """

    def formfield(self, **kwargs):
        defaults = {
            'form_class': forms.MultipleChoiceField,
            'choices': self.base_field.choices,
        }
        defaults.update(kwargs)
        # Skip our parent's formfield implementation completely as we don't
        # care for it.
        # pylint:disable=bad-super-call
        return super(ArrayField, self).formfield(**defaults)

You use this like ArrayField, except that choices is required on the base field.

FLAG_CHOICES = (
    ('defect', 'Defect'),
    ('enhancement', 'Enhancement'),
)
flags = ChoiceArrayField(models.CharField(max_length=...,
                                          choices=FLAG_CHOICES),
                         default=['defect'])
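
On the query side the usual ArrayField lookups still apply; for instance (the model name here is hypothetical):

# Rows flagged as a defect (ArrayField __contains lookup)
Issue.objects.filter(flags__contains=['defect'])

# Rows with either flag set (__overlap lookup)
Issue.objects.filter(flags__overlap=['defect', 'enhancement'])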

You can similarly use this with any other field that supports choices, e.g. IntegerField (but you’re not storing choices as integers… are you).

Gist.

Two new fixtures-related utilities for Django and Wagtail

During constantly bettering today I got around to forking two small tools I wrote from their parent codebases and uploading them to GitHub/PyPI.

wagtailimporter is a Django app that reads Wagtail pages from a YAML file and imports them into the database. It's designed to handle pages in a much neater way than DB fixtures, including things like foreign key lookups (via YAML objects) and inline support for StreamField. It's sort of a work in progress; I'm adding more support for things as needed.

django_loaddata_stdin is a very small utility that extends Django’s loaddata command to support reading from stdin. This is extremely useful when you have fragments of site config loaded into the database (e.g. Django sites, Allauth socialaccounts), but you don’t want this in a fixture file committed to revision control.

For instance, to deploy my site into Docker, where I don't have site-config fixture files built into the containers:

docker-compose run web ./manage.py loaddata --format=yaml - << EOF
<paste>
EOF

Redmine analytics with Python, Pandas and iPython Notebooks

Originally posted on ixa.io

We use Redmine at Infoxchange to manage our product backlog, using the Agile plugin from Redmine CRM. While this is generally good enough for making sure we don't lose any cards, and has features like burndown charting, you still have to keep your own metrics. This is sort of annoying because you're sure the data is in there somehow, and what do you do when you want to explore a new metric?

This is where iPython Notebooks and the data analysis framework Pandas come in. iPython Notebooks are an interactive web-based workspace where you can intermix documentation and Python code, with the graphs and tables you produce rendered directly into the document. Individual “cells” of the notebook are cached, so you can work on part of the program and experiment (great for data analysis) without having to re-run big, slow number-crunching or data-download steps.

Pandas is a library for loading and manipulating data. It is based on the well-known numpy and scipy scientific packages and extends them to load data from almost any file type or source (e.g. a Redmine CSV straight from your Redmine server) without you having to know much programming; your data becomes intuitively exposed. It also has built-in plotting and great integration with iPython Notebooks.

The first metric I wanted to collect was the relationship between our story estimates (in points), our tasking estimates (in hours) and reality. First, let's plot our velocity. It's a matter of saving a custom query whose CSV export URL I can copy into my notebook; this means I can re-run the notebook at any time to get the latest data.

Remember that iPython Notebooks cache the results of cells, so I can put the Redmine access in its own cell and not have to constantly re-download the data. I can even work offline if I want.

import pandas

# `key` is your Redmine API key, defined in an earlier notebook cell
data = pandas.read_csv('https://redmine/projects/devops/issues.csv?query_id=111&key={key}'.format(key=key))

Pandas exposes the result as a data frame, which is something we can manipulate in meaningful ways. For instance, we can extract all of the values of a column with data['column name']. We can also select all of the rows that match a certain condition:

data[data['Target version'] != 'Backlog']

We can go a step further and group this data by the sprint it belongs to:

data[data['Target version'] != 'Backlog']\
    .groupby('Target version')

We can then even create a new series of data by summing the Story Points column for each group to find our velocity (as_index=True makes the sprint version the row identifier for each row, which will be useful when plotting the data):

data[data['Target version'] != 'Backlog']\
    .groupby('Target version', as_index=True)\
    ['Story Points'].sum()

If you've entered each of these into an iPython Notebook and executed them, you'll notice that the table appears underneath your code block.

We can use this data series to do calculations. For example to find the moving average of our sprint velocity we can do this:

velocity = data[data['Target version'] != 'Backlog']\
    .groupby('Target version', as_index=True)\
    ['Story Points'].sum()

# (in newer pandas this is velocity.rolling(window=5, min_periods=1).mean())
avg_velocity = pandas.rolling_mean(velocity, window=5, min_periods=1)

We can then plot our series together (note the passing of ax to the second plot; this allows us to share the graphs on one plot):

ax = velocity.plot(kind='bar', color='steelblue', label="Velocity",
                   legend=True, title="Velocity", figsize=(12,8))
avg_velocity.plot(ax=ax, color='r', style='.-', label="Average velocity (window=5)",
                  legend=True)
ax.xaxis.grid(False)

You can view the full notebook for this demo online. If you want to learn more about Pandas read 10 minutes to Pandas. Of course what makes iPython Notebooks so much more powerful than Excel, SPSS and R is our ability to use any 3rd party Python package we like. Another time I’ll show how we can use a 3rd party Python-Redmine API to load dataframes to extract metrics from the individual issues’ journals.

Using an SSL intermediate as your CA cert with Python Requests

Originally posted on ixa.io

We recently had to work around an issue integrating with a service that did not provide the full SSL certificate chain. That is to say it had the server certificate installed but not the intermediate certificate required to chain back up to the root. As a result we could not verify the SSL connection.

Not a concern, we thought; we can just pass the appropriate intermediate certificate as the CA cert using the verify keyword to Requests. This is, after all, what we do in testing with our custom root CA.

response = requests.post(url, body, verify=settings.INTERMEDIATE_CA_FILE)

While this worked using curl and the openssl command, it continued not to work in Python, which instead gave us a vague error about certificate validity. openssl was actually giving us the answer, though, by showing the root CA in the trust chain.

The problem, it turns out, is that you need to provide the path all the way back to a root CA (a certificate not issued by somebody else). The openssl command does this by including the system root CAs when it considers the CA you supply, but Python considers only the CAs you provide in your new bundle. So concatenate the two certificates together into a bundle:

-----BEGIN CERTIFICATE-----
INTERMEDIATE CA CERT
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ROOT CA CERT THAT ISSUES THE INTERMEDIATE CA CERT
-----END CERTIFICATE-----

Pass this bundle to the verify keyword.
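
In code that might look something like this (the file names, url and body are placeholders):

import requests

# Build the CA bundle once; file names are placeholders
with open('intermediate.pem') as intermediate, \
        open('root.pem') as root, \
        open('ca-bundle.pem', 'w') as bundle:
    bundle.write(intermediate.read())
    bundle.write(root.read())

url = 'https://api.example.com/endpoint'  # placeholder
body = {'example': 'payload'}             # placeholder
response = requests.post(url, body, verify='ca-bundle.pem')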

You can read up about Intermediate Certificates on Wikipedia.

Creative Commons Attribution-ShareAlike 2.5 Australia
This work by Danielle Madeley is licensed under a Creative Commons Attribution-ShareAlike 2.5 Australia.