Websockets + socket.io on the ESP8266 w/ Micropython

I recently learned about the ESP8266 while at Pycon AU. It’s pretty nifty: it’s tiny, it has wifi, a reasonable amount of RAM (for a microcontroller) oh, and it can run Python. Specifically Micropython. Anyway I purchased a couple from Adafruit (specifically this one) and installed the Micropython UNIX port on my computer (be aware that the cheaper ESP8266 boards might not be very reflashable, or so I’ve been told; spend the extra money for one with decent flash).

The first thing you learn is that the ports are all surprisingly different in terms of what functionality they support, and the docs don’t make this clear like they do for CPython. I learned the hard way that there is a set of docs per port, which might be why the method you’re looking for isn’t there.

The other thing is that even though you’re getting to write in Python, and it has many Pythonic abstractions, many of those abstractions are based around POSIX and leak heavily on microcontrollers. Still, a number of them look implementable without actually reinventing UNIX (probably).

The biggest problem at the moment is there’s no “platform independent” way to do asynchronous IO. On the microcontroller you can set top-half interrupt handlers for IO events (no malloc here, yay!), gate the CPU, and then execute bottom halves from the main loop. However, that’s not going to work on UNIX. Or you can use select, but that’s not available on the ESP8266 (yet). Micropython does support Python 3.5 asyncio coroutines, so hopefully the port of asyncio to the ESP8266 happens soon. I’d be especially ecstatic if I could do await pin.trigger(Pin.FALLING).
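
For now, the top-half/bottom-half dance looks something like this on the ESP8266 port (a minimal sketch: the button on GPIO0 and the print() bottom half are stand-ins for whatever your project actually does):

import machine
from machine import Pin

button = Pin(0, Pin.IN, Pin.PULL_UP)  # hypothetical button on GPIO0
pending = False


def top_half(pin):
    """Interrupt context: set a flag and get out, no allocation allowed."""
    global pending
    pending = True


button.irq(trigger=Pin.IRQ_FALLING, handler=top_half)

while True:
    machine.idle()  # gate the CPU until an interrupt wakes us up
    if pending:
        pending = False
        print('button pressed')  # bottom half: safe to do real work here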

There’s a few other things that could really help make it feel like Python. Why isn’t disabling interrupts a context manager/decorator? It’s great that you can try/finally your interrupt code, but the with keyword is so much more Pythonic (see the sketch below). Perhaps this is because the code is being written by microprocessor people… which is why they’re so into protocols like MQTT for talking to their devices.
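
A hand-rolled version is only a few lines (a sketch: machine.disable_irq()/enable_irq() are the documented MicroPython calls, while contextlib needs to be available, e.g. from micropython-lib):

import machine
from contextlib import contextmanager


@contextmanager
def interrupts_disabled():
    """Disable interrupts for the duration of the with block."""
    state = machine.disable_irq()
    try:
        yield
    finally:
        machine.enable_irq(state)


counter = 0

with interrupts_disabled():
    counter += 1  # touch state shared with an interrupt handler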

Don’t get me wrong, MQTT is a great protocol that you can cram onto all sorts of devices, with all sorts of crappy PHYs, but I have wifi, and working SSL. I want to do something more web 2.0. Something like websockets. In fact, I want to take another service’s REST API and websockets, and deliver that information to my device… I could build a HTTP server + MQTT broker, but that sounds like a pain. Maybe I can just build a web server with socket.io and connect to that directly from the device?!

The ESP8266 already has some very basic websocket support for its WebREPL, but that’s not very featureful and seems to only implement half of the spec. If we’re going to have Python on a device, maybe we can have something that looks like the great websockets module. Turns out we can!
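
Here’s a sketch of roughly the usage I was after, assuming a connect()/send()/recv() surface modelled on the websockets package (the echo URL is a placeholder; check the repo’s README for the exact API):

import uwebsockets.client

# Placeholder endpoint; substitute a real websocket echo server.
websocket = uwebsockets.client.connect('ws://echo.example.com/')

websocket.send('hello world')
print(websocket.recv())  # an echo server sends 'hello world' straight back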

socket.io is a little harder: it requires a handshake which is not documented (I reversed it in the end), and decoding a HTTP payload, which is not very clearly documented (I had to read the source). It’s not the most efficient protocol out there, but the chip is more than fast enough to deal with it. Also, fun times: it turns out there’s no platform independent way to return from waiting for IO. Basically it turned out there were a lot of yaks to shave.

Where it all comes into its own though is the ability to write what is pretty much everyday, beautiful Python. It’s worth it over Arduino sketches or whatever else takes your fancy.

uwebsockets/usocketio on Github.

Electronics breadboard with a project on it sitting on a laptop keyboard

Django and PostgreSQL composite types

PostgreSQL has this nifty feature called composite types that you can use to create your own types from the built-in PostgreSQL types. It’s a bit like hstore, only structured, which makes it great for structured data that you might reuse multiple times in a model, like addresses.

Unfortunately, to date they have been pretty much a pain to use in Django. There were some older implementations for versions of Django before 1.7, but they tended to do things like create surprise new objects in the namespace, not be migratable, and require a connection to the DB at all times (i.e. even during your build).

Anyway, after reading a bunch of their implementations and then the Django source code I wrote django-postgres-composite-types.

Install with:

pip install django-postgres-composite-types

Then you can define a composite type declaratively:

from django.db import models
from postgres_composite_types import CompositeType


class Address(CompositeType):
    """An address."""

    address_1 = models.CharField(max_length=255)
    address_2 = models.CharField(max_length=255)

    suburb = models.CharField(max_length=50)
    state = models.CharField(max_length=50)

    postcode = models.CharField(max_length=10)
    country = models.CharField(max_length=50)

    class Meta:
        db_type = 'x_address'  # Required

And use it in a model:

class Person(models.Model):
    """A person."""

    address = Address.Field()

The field should provide all of the things you need, including the formfield, and you can even inherit from this field to extend it in your own way:

class AddressField(Address.Field):
    def __init__(self, in_australia=True, **kwargs):
        self.in_australia = in_australia

        super().__init__(**kwargs)
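
One thing to remember if you add extra kwargs like this: Django migrations will want to serialise them, so you would also give AddressField a deconstruct(). This is the standard Django pattern for custom field arguments, not something the library adds for you; a sketch:

    def deconstruct(self):
        # Standard Django pattern: tell migrations how to recreate this
        # field, including our extra kwarg.
        name, path, args, kwargs = super().deconstruct()
        kwargs['in_australia'] = self.in_australia
        return name, path, args, kwargs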

Finally to set up the DB there is a migration operation that will create the type that you can add:

import address
from django.db import migrations


class Migration(migrations.Migration):

    operations = [
        # Registers the type
        address.Address.Operation(),
        migrations.AddField(
            model_name='person',
            name='address',
            field=address.Address.Field(blank=True, null=True),
        ),
    ]

It’s not smart enough to add it itself (can you do that?). Nor would it be smart enough to write the operations to alter a type. That would be a pretty cool trick. But it’s useful functionality all the same, especially when the alternative is creating lots of 1:1 models that are hard to work with and hard to garbage collect.

It’s still pretty early days, so the APIs are subject to change. PRs accepted of course.

Fixing botched migrations with `oc debug`

When using Openshift Origin to deploy software, you often have your containers execute a database migration as part of their deployment, e.g. in your Dockerfile:

CMD ./manage.py migrate --noinput && \
    gunicorn -w 4 -b 0.0.0.0:8000 myapp.wsgi:application

This works great until your migration won’t apply cleanly without intervention, your newly deploying pods are in crash loop backoff, and you need to understand why. This is where the `oc debug` command comes in. Using `oc debug` we can ask for a shell in a running or newly created pod.

Assuming we have a deployment config `frontend`:

oc debug dc/frontend

will give us a shell in a running pod for the latest stable deployment (i.e. your currently running instances, not the ones that are crashing).

However let’s say deployment #44 is the one crashing. We can debug a pod from the replication controller for deployment #44.

oc debug rc/frontend-44

will give us a shell in a new pod for that deployment, with our new code, and allows us to manually massage our broken migration in (e.g. by faking the data migration that was retroactively added for production).

Filtering derived fields with Wagtail search

Wagtail’s built-in search functionality has this nifty feature where it will index callables on your model, e.g. your model has a start and end date, but you want to search on duration. Hypothetically we could add a FilterField here to index this for Elasticsearch [1].

class Job(Model, index.Indexed):
    """A job you can apply for."""

    start_date = models.DateField()
    end_date = models.DateField()

    search_fields = (
        index.FilterField('duration', type='IntegerField'),
    )

    @property
    def duration(self):
        """Duration of the assignment."""
        return 12 * (self.end_date.year - self.start_date.year) + \
            self.end_date.month - self.start_date.month

Wagtail is quite clever in that it takes a Django QuerySet and decomposes the filters.

queryset = queryset\
    .filter(duration__range=(data['duration'].lower or 0,
                             data['duration'].upper or 99999))
query = backend.search(keywords, queryset)

Of course Django will get upset about this, since that’s not a field you can filter on. So we can annotate the value in.

from django.db.models import F, Func, fields

class DurationInMonths(Func):  # pylint:disable=abstract-method
    """
    SQL Function to calculate the duration of assignments in months.
    """
    template = \
        'EXTRACT(year FROM age(%(expressions)s)) * 12 + ' \
        'EXTRACT(month FROM age(%(expressions)s))'
    output_field = fields.IntegerField()


queryset = queryset\
    .annotate(duration=DurationInMonths(
        F('end_date'), F('start_date')
    ))\
    .filter(duration__range=(data['duration'].lower or 0,
                             data['duration'].upper or 99999))

Django will also be upset that it can’t assign the annotated duration onto instances (the property is read-only), so you’ll need to add a setter to your model.

class Job(Model, index.Indexed):
    @property
    def duration(self):
        ...

    @duration.setter
    def duration(self, value):
        """Ignored to make Django annotations happy."""
        pass

And ideally you would be done, except now Wagtail gets upset that it can’t determine the attribute name for the field. You can kludge around this for now:

class DurationInMonths(Func):  # pylint:disable=abstract-method
    """
    SQL Function to calculate the duration of assignments in months.
    """
    template = \
        'EXTRACT(year FROM age(%(expressions)s)) * 12 + ' \
        'EXTRACT(month FROM age(%(expressions)s))'
    output_field = fields.IntegerField()
    target = type('IntegerFieldKludge',
                  (fields.IntegerField,),
                  {'attname': 'duration'})()
[1] Note the requirement for a type, otherwise this will be indexed as a string.

Multiple choice using Django’s postgres ArrayField

There are a lot of times you want to have a multiple choice set of flags, and using a many-to-many database relation is massively overkill. Django 1.9 added a built-in model field to leverage Postgres’ built-in array support. Unfortunately the default formfield for ArrayField, SimpleArrayField, is not even a little bit useful (it’s a comma-delimited text input).

If you’re writing your own form, you can simply use the MultipleChoiceField formfield, but if you’re using something that builds forms using the ModelForm automagic factories with no overrides (e.g. Wagtail’s admin site), you need a way to specify the formfield.

Instead subclass ArrayField:

from django import forms
from django.contrib.postgres.fields import ArrayField


class ChoiceArrayField(ArrayField):
    """
    A field that allows us to store an array of choices.
    
    Uses Django 1.9's postgres ArrayField
    and a MultipleChoiceField for its formfield.
    """

    def formfield(self, **kwargs):
        defaults = {
            'form_class': forms.MultipleChoiceField,
            'choices': self.base_field.choices,
        }
        defaults.update(kwargs)
        # Skip our parent's formfield implementation completely as we don't
        # care for it.
        # pylint:disable=bad-super-call
        return super(ArrayField, self).formfield(**defaults)

You use this like ArrayField, except that choices is required.

FLAG_CHOICES = (
    ('defect', 'Defect'),
    ('enhancement', 'Enhancement'),
)
flags = ChoiceArrayField(models.CharField(max_length=...,
                                          choices=FLAG_CHOICES),
                         default=['defect'])

You can similarly use this with any other field that supports choices, e.g. IntegerField (but you’re not storing choices as integers… are you).

Gist.

Two new fixtures-related utilities for Django and Wagtail

During constantly bettering today I got around to forking two small tools I wrote from their parent codebases and uploading them to GitHub/PyPI.

wagtailimporter is a Django app that reads Wagtail pages from a yaml file and imports them into the database. It’s designed to handle pages in a much neater way than DB fixtures, including things like foreign key lookups (via Yaml objects) and inline support for StreamField. It’s sort of a work in progress, where I’m adding more support for things as needed.

django_loaddata_stdin is a very small utility that extends Django’s loaddata command to support reading from stdin. This is extremely useful when you have fragments of site config loaded into the database (e.g. Django sites, Allauth socialaccounts), but you don’t want this in a fixture file committed to revision control.

For instance, to deploy my site into Docker, where I don’t have site-config fixture files built into the containers:

docker-compose run web ./manage.py loaddata --format=yaml - << EOF
<paste>
EOF

Redmine analytics with Python, Pandas and iPython Notebooks

ORIGINALLY POSTED ON IXA.IO

We use Redmine at Infoxchange to manage our product backlog using the Agile plugin from Redmine CRM. While this is generally good enough for making sure we don’t lose any cards, and has features like burndown charting, you still have to keep your own metrics. This is sort of annoying, because you’re sure the data is in there somehow; what do you do when you want to explore a new metric?

This is where iPython Notebooks and the data analysis framework Pandas come in. iPython Notebooks are an interactive web-based workspace where you can intermix documentation and Python code, with the graphs and tables you produce appearing directly in the document. Individual “cells” of the notebook are cached, so you can work on part of the program and experiment (great for data analysis) without having to re-run big, slow number-crunching or data-download steps.

Pandas is a library for loading and manipulating data. It is based on the well-known numpy and scipy scientific packages and extends them to be able to load data from almost any file type or source (i.e. a Redmine CSV straight from your Redmine server) without having to know much programming. Your data becomes intuitively exposed. It also has built-in plotting and great integration with iPython Notebooks.

The first metric I wanted to collect was to study the relationship between our story estimates (in points), our tasking estimates (in hours) and reality. First let’s plot our velocity. It is a matter of saving a custom query whose CSV export URL I can copy into my notebook. This means I can run the notebook at any time to get the latest data.

Remember that iPython notebooks cache the results of cells, so I can put the Redmine access into its own cell and won’t have to constantly be downloading the data. I can even work offline if I want.

import pandas

# `key` holds your Redmine API key (defined elsewhere in the notebook)
data = pandas.read_csv('https://redmine/projects/devops/issues.csv?query_id=111&key={key}'.format(key=key))

Pandas exposes the result as a data frame, which is something we can manipulate in meaningful ways. For instance, we can extract all of the values of a column with data['column name']. We can also select all of the rows that match a certain condition:

data[data['Target version'] != 'Backlog']

We can go a step further and group this data by the sprint it belongs to:

data[data['Target version'] != 'Backlog']\
    .groupby('Target version')

We can then even create a new series of data by summing the points column of each group to find our velocity (as_index means we wish to make the sprint version the row identifier for each row; this will be useful when plotting the data):

data[data['Target version'] != 'Backlog']\
    .groupby('Target version', as_index=True)\
    ['Story Points'].sum()

If you’ve entered each of these into an iPython Notebook and executed them you’ll notice that the table appeared underneath your code block.

We can use this data series to do calculations. For example to find the moving average of our sprint velocity we can do this:

velocity = data[data['Target version'] != 'Backlog']\
    .groupby('Target version', as_index=True)\
    ['Story Points'].sum()

avg_velocity = pandas.rolling_mean(velocity, window=5, min_periods=1)

We can then plot our series together (note the passing of ax to the second plot; this allows us to share the graphs on one plot):

ax = velocity.plot(kind='bar', color='steelblue', label="Velocity",
                   legend=True, title="Velocity", figsize=(12,8))
avg_velocity.plot(ax=ax, color='r', style='.-', label="Average velocity (window=5)",
                  legend=True)
ax.xaxis.grid(False)

You can view the full notebook for this demo online. If you want to learn more about Pandas read 10 minutes to Pandas. Of course what makes iPython Notebooks so much more powerful than Excel, SPSS and R is our ability to use any 3rd party Python package we like. Another time I’ll show how we can use a 3rd party Python-Redmine API to load dataframes to extract metrics from the individual issues’ journals.

Returning screenshots in Gitlab CI (and Travis CI)

ORIGINALLY POSTED ON IXA.IO

Our code base includes a large suite of functional tests using Lettuce, Selenium and PhantomJS. When a test fails, we have a hook that captures the current screen contents and writes it to a file. In an ideal CI system these would be collected as a failure artifact (along with the stack trace, etc.) but that’s not currently possible with Gitlab CI (hint hint Gitlab team, investigate Subunit for streaming test output).

Instead what we do on Gitlab is output our images as base64 encoded text:

if $SUCCESS; then
    echo "Success"
else
    echo "Failed ------------------------------------"
    for i in Test*.html; do echo $i; cat "$i"; done
    for i in Test*.png; do echo $i; base64 "$i"; echo "EOF"; done
fi

$SUCCESS
Of course now you have test output full of meaningless base64’ed data. Enter Greasemonkey.

// ==UserScript==
// @name View CI Images
// @namespace io.ixa.ci
// @description View CI Images
// @version 1
// @match https://gitlabci/projects/*/builds/*
// ==/UserScript==

(function ($) {
    var text = $('#build-trace').html();
    text = text.replace(/(Test_.+\.png)\n([A-Za-z0-9\n\/\+=]+)\nEOF\n/g,
                        '<h2>$1</h2><img src="data:image/png;base64,$2" ' +
                        'style="display:block; max-width:800px; width: auto; height: auto;" ' +
                        '/>');

    $('#build-trace').html(text);
})(jQuery);

Web browsers (handily) can already display base64’ed images. This little user script will match builds on the CI server you specify and then replace those huge chunks of base64 with the image they represent.

This technique could easily be replicated on Travis CI by updating the jQuery selector.

Using an SSL intermediate as your CA cert with Python Requests

Originally posted on ixa.io

We recently had to work around an issue integrating with a service that did not provide the full SSL certificate chain. That is to say it had the server certificate installed but not the intermediate certificate required to chain back up to the root. As a result we could not verify the SSL connection.

Not a concern, we thought: we can just pass the appropriate intermediate certificate as the CA cert using the verify keyword to Requests. This is, after all, what we do in test with our custom root CA.

response = requests.post(url, body, verify=settings.INTERMEDIATE_CA_FILE)

While this worked using curl and the openssl command, it continued not to work in Python, which instead gave us a vague error about certificate validity. openssl was actually giving us the answer, though, by showing the root CA in the trust chain.

The problem, it turns out, is that you need to provide the path back to a root CA (the certificate not issued by somebody else). The openssl command does this by including the system root CAs when it considers the CA you supply, but Python considers only the CAs you provide in your new bundle. Concatenate the two certificates together into a bundle:

-----BEGIN CERTIFICATE-----
INTERMEDIATE CA CERT
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ROOT CA CERT THAT ISSUES THE INTERMEDIATE CA CERT
-----END CERTIFICATE-----

Pass this bundle to the verify keyword.
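
If you would rather not maintain the combined file by hand, building it on the fly is only a few lines. A minimal sketch (the certificate filenames are placeholders, and url and body are whatever you were already posting):

import requests

# Hypothetical filenames; substitute your intermediate and root certificates.
with open('ca-bundle.pem', 'w') as bundle:
    for cert_file in ('intermediate-ca.pem', 'root-ca.pem'):
        with open(cert_file) as cert:
            bundle.write(cert.read())

response = requests.post(url, body, verify='ca-bundle.pem')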

You can read up about Intermediate Certificates on Wikipedia.

Searching for documents with arrays of objects using Elasticsearch

Originally posted on ixa.io.

Elasticsearch is pretty nifty in that searching for documents that contain an array item requires no additional work compared to if that document were flat. Furthermore, searching for documents that contain an object with a given property in an array is just as easy.

For instance, given documents with a mapping like so:

{
    "properties": {
        "tags": {
            "properties": {
                "tag": {
                    "type": "string"
                },
                "tagtype": {
                    "type": "string"
                }
            }
        }
    }
}

We can do a query to find all the documents with a tag object with tag term find me:

{
    "query": {
        "term": {
            "tags.tag": "find me"
        }
    }
}

Using a boolean query we can extend this to find all documents with a tag object with tag find me or tagtype list.

{
    "query": {
        "bool": {
            "must": [
                {"term": {"tags.tag": "find me"},
                {"term": {"tags.tagtype": "list"}
            ]
        }
    }
}

But what if we only wanted documents that contained tag objects of tagtype list *and* tag find me? While the above query would find them, and they would be scored higher for having two matches, what people often don’t expect is that hide me lists will also be returned when you don’t want them to: a document whose tags include a hide me tag of tagtype list alongside a find me tag of some other tagtype still matches both terms.

This is especially surprising if you’re doing a filter instead of a query; and especially-especially surprising if you’re doing it using an abstraction API such as elasticutils and expecting Django-esque filtering.

How Elasticsearch stores objects

By default the object mapping type stores the values in a flat dotted structure. So:

{
    "tags": {
        "tag": "find me",
        "tagtype": "list"
    }
}

Becomes:

{
    "tags.tag": "find me",
    "tags.tagtype": "list"
}

And for lists:

{
    "tags": [
        {
            "tag": "find me",
            "tagtype": "list"
        }
    ]
}

Becomes:

{
    "tags.tag": ["find me"],
    "tags.tagtype": ["list"]
}

This saves a whole bunch of complexity (and memory and CPU) when implementing searching, but it’s no good for us finding documents containing specific objects.

Enter: the nested type

The solution to finding what we’re looking for is to use the nested query and mark our object mappings up with the nested type. This preserves the objects and allows us to execute a query against the individual objects. Internally it maps them as separate documents and does a child query, but they’re hidden documents, and Elasticsearch takes care to keep the documents together to keep things fast.

So what does it look like? Our mapping only needs one additional property:

{
    "properties": {
        "tags": {
            "type": "nested",
            "properties": {
                "tag": {
                    "type": "string"
                },
                "tagtype": {
                    "type": "string"
                }
            }
        }
    }
}

We then make a nested query. The path is the dotted path of the array we’re searching. query is the query we want to execute inside the array; in this case it’s our bool query from above. Because individual sub-documents have to match the subquery for the main query to match, this is now the and operation we are looking for.

{
    "query": {
        "nested": {
            "path": "tags",
            "query": {
                "bool": ...
            }
        }
    }
}

Using nested with Elasticutils

If you are using Elasticutils, unfortunately it doesn’t support nested out of the box, and calling query_raw or filter_raw breaks your Django-esque chaining. However it’s pretty easy to add support using something like the following:

    def process_filter_nested(self, key, value, action):
        """
        Do a nested filter

        Syntax is filter(path__nested=filter).
        """

        return {
            'nested': {
                'path': key,
                'filter': self._process_filters((value,)),
            }
        }
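
For this to be picked up it needs to live on your own S subclass, roughly like so (a sketch, assuming elasticutils dispatches filter actions to process_filter_<action> methods, which is what the method above relies on):

from elasticutils import S as BaseS


class S(BaseS):
    """S with support for filter(path__nested=F(...))."""

    def process_filter_nested(self, key, value, action):
        """Do a nested filter. Syntax is filter(path__nested=filter)."""
        return {
            'nested': {
                'path': key,
                'filter': self._process_filters((value,)),
            }
        }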

Which you can use something like this:

S().filter(tags__nested=F(**{
    'tags.tag': 'find me',
    'tags.tagtype': 'list',
}))