OpenHardware and code signing (update)

I posted a few weeks ago about the difficulty of providing device-side verification of firmware updates while remaining OpenHardware and thus easily hackable. The general consensus was that allowing anyone to write any kind of firmware to the device without additional authentication was probably a bad idea, even for OpenHardware devices. I think I’ve come up with an acceptable compromise I can write up as a recommendation, as usual using the ColorHug+ as an example. For some background: I’ve sold nearly 3,000 original ColorHug devices, and in the last 4 years just three people have wanted help writing custom firmware, so I hope you can see why the need to protect the majority outweighs making the power users happy.

ColorHug+ will be supplied with a bootloader that accepts only firmware encrypted with the secret XTEA key that I’m using for my devices. XTEA is an acceptable compromise: not as secure as ECC, but actually acceptable in speed and memory usage for an 8-bit microcontroller running at 6MHz with 8k of ROM. Flashing a DIY or modified firmware isn’t possible, and by the same logic flashing a malicious firmware will also not work.
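For reference, part of why XTEA fits in an 8k bootloader is that the whole cipher is a dozen lines of code. This is the standard public-domain XTEA reference implementation (64-bit blocks, 128-bit key, 32 rounds), not the actual ColorHug+ bootloader code:

```c
#include <stdint.h>

/* Encrypt one 64-bit block (two 32-bit words) with a 128-bit key. */
void xtea_encrypt(uint32_t v[2], const uint32_t key[4])
{
    uint32_t v0 = v[0], v1 = v[1];
    uint32_t sum = 0, delta = 0x9E3779B9u;
    for (unsigned i = 0; i < 32; i++) {
        v0 += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
        sum += delta;
        v1 += (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
    }
    v[0] = v0; v[1] = v1;
}

/* Decrypt one 64-bit block: the same schedule run in reverse. */
void xtea_decrypt(uint32_t v[2], const uint32_t key[4])
{
    uint32_t v0 = v[0], v1 = v[1];
    uint32_t delta = 0x9E3779B9u, sum = delta * 32;
    for (unsigned i = 0; i < 32; i++) {
        v1 -= (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
        sum -= delta;
        v0 -= (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
    }
    v[0] = v0; v[1] = v1;
}
```

Note this is symmetric crypto: the same key both encrypts and decrypts, which is what makes the scheme a scrambling/integrity measure rather than a true signature (a point raised in the comments below).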

To unlock the device (so that it stays OpenHardware) you just have to remove the two screws and use a paper-clip to connect TP5 and GND while the device is being plugged into the USB port. Both lights will come on and stay on for 5 seconds, and then the code protection is turned off. This means you can now flash any home-made or malicious firmware to the device as you please.

There are downsides to unlocking: you can’t re-lock the hardware so that it supports official updates again. I don’t know if this is a huge problem; flashing home-made firmware could damage the device (e.g. changing a pin mapping from input to output and causing something to get hot). If this turns out to be a huge problem I can fix CH+ to allow re-locking and update the guidelines, although I’m erring towards unlocking being a one-way operation.

Comments welcome.

Published by

Richard has over 10 years of experience developing open source software. He is the maintainer of GNOME Software, PackageKit, GNOME Packagekit, GNOME Power Manager, GNOME Color Manager, colord, and UPower, and also contributes to many other projects and open source standards. Richard has three main areas of interest on the free desktop: color management, package management, and power management. Richard graduated a few years ago from the University of Surrey with a Masters in Electronics Engineering. He now works for Red Hat in the desktop group, and also manages a company selling open source calibration equipment. Richard's outside interests include taking photos and eating good food.

22 thoughts on “OpenHardware and code signing (update)”

  1. Is it too difficult to do the check at flash-time instead of boot-time? It would be nice if we could disable the check just for one flash. Otherwise, this sounds like a decent plan. Also, is there a way for users to add their own keys after unlocking?

    1. So you mean something like “unlock for this flash only” — I don’t know how well that would work for the verification stage and the integrity checking in fwupd. You can of course add your own keys after unlocking, although you’re only allowed to do this once — until you “unlock” again.

      1. You could permit the user to upload a password to the device which could later be used as a weaker version of the jumper-level bypass. Though that comes with its own problems, not least of which is: what if every vendor does this sort of thing in a way that is ever so slightly incompatible with the others…

  2. If you wanted to use ECC, the microECC software works very well on 8-bit micros. P256 takes a few seconds but you can pick a smaller curve if you are worried about performance.

    There is a hardware option using Atmel’s ATECC108/508. The chip is unfortunately under an NDA but Atmel releases software that uses it. It implements P256 in the hardware. For about $1, it can store a private key that is generated in the device, or you can load public keys (for firmware verification, for example).

    I made a few ports of the software for it:

    Out-of-tree kernel module:

    User-space c-library (linux):

    Arduino wrapper:

    And then there is Atmel’s code:

    The hardware setup is pretty minimal (just I2C), but the design files are available on sparkfun’s site:

    I understand the argument against using black-box style crypto hardware, so you may not be interested. If that’s the case, microECC is your friend.

    1. I actually looked at microECC, but the code space used was very high and it was kinda complex to audit. I’m not against a hardware solution like the ATECC108, the issue is both financial (e.g. $1 -> £1, plus taxes, plus the bigger PCB, plus the cost of soldering it) and the idea that it’s black-box crypto. Thanks for the links tho!

    1. Yes, in an area of EEPROM that is protected from being read with an external programmer. The processor EEPROM space is split up into two halves, bootloader and firmware. The former is writable only with an external programmer, and can only read and write EEPROM in the upper half. The keys are stored in the last page of flash in the bootloader space. XTEA isn’t actually checking a signature, it just descrambles the EEPROM data, so if the startup sequence is even slightly wrong the firmware will crash, get caught by the hardware watchdog and then revert to the bootloader. Better ideas welcome! :)
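The crash-then-watchdog fallback described here can be sketched as a boot-mode decision. This is a hypothetical illustration of the flow, not the real ColorHug+ bootloader; all the names are made up for the sketch:

```c
#include <stdbool.h>

/* Sketch of the described boot flow: the bootloader descrambles the
 * application area with XTEA and jumps to it; if the result was garbage
 * the application crashes, the hardware watchdog fires, and the next
 * reset stays in the bootloader so the device can be re-flashed. */
enum boot_mode { BOOT_APPLICATION, BOOT_BOOTLOADER };

enum boot_mode choose_boot_mode(bool watchdog_reset, bool app_header_ok)
{
    /* A watchdog reset means the last application start failed. */
    if (watchdog_reset)
        return BOOT_BOOTLOADER;
    /* Refuse to jump to an application whose first page never
     * descrambled to a sane header. */
    if (!app_header_ok)
        return BOOT_BOOTLOADER;
    return BOOT_APPLICATION;
}
```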

      1. > Yes, in an area of EEPROM that is protected from reading with an external programmer.

        This means, obviously, that you are restricted to the one single key for each and every device, which might suit you if you only want basic checksum verification of the fw (you trust nobody will ever go to the trouble of writing a malicious fw). However, surely basic checksumming can be implemented far more easily and robustly in fwupd itself using proper signature-checking methods.

        On the other hand, if the intention is to protect against malicious fw then history suggests a proper key update scheme/protocol has to be baked in from the start or else if someone is motivated enough to write custom fw for nefarious purposes, then they will also find a way to get to your secret.

        Once you have a proper way to do key updates then your partitioning scheme can work for preserving integrity of the flashing process and it seems to me you only need to encrypt the signature of the fw and check it before deciding whether or not to “commit” to the fw. Failure mode could be either:
        * Refuse to write to flash in which case you’d need sufficient RAM to store the entire fw and signature. This is the most robust, obviously.
        * Refuse to boot in anything other than DFU mode or something by which fwupd can recognise a failure and then reflash using the old fw. So it would need to download the fw before uploading the new one.

        Then the jumper by pass idea could be used to permit flashing your own (public) keys unconditionally. (A more complex fw might allow uploading keys that have been signed by trusted keys apart from flashing…)

        1. Right; fwupd already uses gnupg to do proper public/private key crypto on the archive downloaded from LVFS, and before the update is scheduled. This device-side verification is for people “going behind the back of fwupd” where the device is exposed as user-accessible in programmer mode (not all devices change the PID when going from runtime to DFU mode). I’m pretty sure I can check the first block of the uploaded firmware on the device for a known signature, and if not found refuse to flash the device.
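The "check the first block for a known signature" idea amounts to a magic-value test on the decrypted upload before anything is committed to flash. A minimal sketch, assuming a hypothetical 4-byte magic value (the real ColorHug+ header format is not specified here):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical magic value; illustrative only. */
static const uint8_t FW_MAGIC[4] = { 'C', 'H', '+', 0x01 };

/* Return true if the first block of the (already descrambled) upload
 * starts with the expected magic; otherwise the bootloader should
 * refuse to write anything to flash. */
bool fw_first_block_valid(const uint8_t *block, size_t len)
{
    if (block == NULL || len < sizeof(FW_MAGIC))
        return false;
    return memcmp(block, FW_MAGIC, sizeof(FW_MAGIC)) == 0;
}
```

Because XTEA decryption with the wrong key turns the magic into noise, a firmware image encrypted with any other key fails this check before the flash write ever starts, rather than after a crash-and-watchdog cycle.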

  3. As long as one can reproduce your official builds oneself, I don’t see a big problem with unlocking not allowing to flash the official firmware binaries.

    That said, I am not entirely sure what specific cases the signature checking is intended to protect against?
    I guess it is a nice workaround for signature checking of the firmware (for authenticity) on the host being such a hassle (especially cross-platform)…

    1. Yes, public/private keys would be best of course, but most devices have an order of magnitude less of everything needed for that. I guess for vendors it’s a way of restricting only official firmware being installed onto the device to avoid damage, and from a user point of view it stops other people installing “prank” or malicious firmware on your device without your consent.

      1. True, but at the very least it ought to be possible to flash a new key as part of a fw update. Because unless you have that, your protection scheme lasts only as long as the time it takes for the first successful social engineering attack against your vendor/their subcontractors. :/

        That’s the problem: you don’t generally have to protect against “your caddish ‘mate’ grabbing your stuff, sitting down with a breadboard and JTAG debugger, engineering a ‘funny’ new fw and then tricking you into flashing said fw as part of a prank”. Said mate is more likely to just execute a fork bomb on your terminal window or cleverly rearrange your preferences or do something that takes much less effort.

        But you do have to protect the masses against people who go to the trouble of designing dropper malware to deploy cryptolocker type nasties for payoffs from extortion. And eventually those people are no longer going to be dissuaded by the existence of a single, immutable, fixed secret — they’ll either hack your vendor or crack the key if they are determined enough to own the world’s precious ColorHug devices.

        1. Yes, we can flash a new key onto a device, as long as the previous key is still secret. If the old key was leaked, the attacker could just decrypt the firmware update and then the key would be present somewhere in the new file. I agree a determined enough hacker could break the crypto, but I don’t know if that’s enough rationale for doing *nothing*… Good points tho, thanks for your comments.

          1. > then the key would be present somewhere in the new file.

            Obviously: but that just goes to show why using symmetric key crypto cannot ever work properly for this purpose.

            If I may be a bit reductive, for the purpose of defending against bad fw, your scheme is more or less like using a HMAC and therefore in some ways more like verifying a known checksum as with ISBN and other barcode schemes and rather less like verifying a signature that guarantees authenticity.

            You can use it to check that the fw wasn’t obviously/trivially tampered with but you cannot use it to check that the fw is “authentic”.

            It wasn’t clear to me from your original post whether or not your proposal would address that distinction and temper expectations accordingly.

  4. IMHO, shipping with a firmware lock and requiring jumpering to disable it completely goes against the openness principle.

    And why does unlocking prevent installing official firmware? That is particularly nasty and surprising. I’d expect an unlocked device to accept ANY firmware, including the official one.

    1. Well, the device is openable, but it also provides some security against malicious firmware. There’s always a tradeoff between security and openness. I’m thinking hard about how to allow the official firmware on an unlocked device.

  5. How do I lock the device to my key instead of your key?


  6. How can a consumer verify that the device they’re using is using your “genuine”, trusted firmware without having to trust the chain of custody between you and them?

    1. You can use `fwupd verify` which will read the firmware from the device and cross-reference the SHA1 hash from the LVFS (assuming the vendor has submitted the update through the LVFS).

Comments are closed.