Using a client certificate to set the attestation checksum

For a while, fwupd has been able to verify the PCR0 checksum for the system firmware. The attestation checksum can be used to verify that the installed firmware matches that supplied by the vendor, which means the end user can be confident the firmware has not been modified by a third party. I think this is a really important and useful thing the LVFS can provide. The PCR0 value can easily be found using tpm2_pcrlist if the TPM is in v2.0 mode, or cat /sys/class/tpm/tpm0/pcrs if the TPM is still in v1.2 mode. It is also reported in the fwupdmgr get-devices output for versions of fwupd >= 1.2.2.
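
As a quick sketch, the three approaches look something like this (the grep pattern is just a convenience and assumes the usual layout of the pcrs file):

# TPM 2.0 mode (tpm2-tools)
tpm2_pcrlist
# TPM 1.2 mode
grep PCR-00 /sys/class/tpm/tpm0/pcrs
# or let fwupd >= 1.2.2 report it as a device checksum
fwupdmgr get-devices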

A device checksum that is a PCR0 value is slightly different from the device checksum of a typical firmware. For instance, a DFU device checksum can be created using sha256sum firmware.bin (assuming the image fills the device completely) and you don’t actually have to flash the image to the hardware to get the device checksum out. For a UEFI UpdateCapsule you need to schedule the update, reboot, then read back the PCR0 from the hardware. There must be an easier way…
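
For example, a minimal sketch for the DFU case (firmware.bin is just a placeholder name):

# the device checksum is simply the hash of the image itself,
# assuming the image fills the whole device
sha256sum firmware.bin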

Assuming you have a vendor account on the LVFS, first upload the client certificate for your user account to the LVFS:
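
The certificate to upload is the world-readable client.pem that fwupd generates (described in more detail later in this post); printing it ready to paste into the upload form is just:

cat /var/lib/fwupd/pki/client.pem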

Then, assuming you’re using fwupd >= 1.2.6 you can now do this:

fwupdmgr refresh
fwupdmgr update
…reboot…
fwupdmgr report-history --sign

Notice the --sign there? Looking back at the LVFS, there now exists a device checksum:

This means the firmware gets the magic extra green tick that makes everyone feel a lot happier:

New AppStream Validation Requirements

In the next release of appstream-glib the appstream-util validate requirements have changed, which might make your life easier or harder, depending on whether you already pass or fail the validation. The details are here, but the rough gist is that we’ve relaxed a lot of the style rules (e.g. starts with a capital letter, ends with a full stop, fewer than a certain number of characters, etc.) and tightened some of the more important optional parts of the specification. For instance, <content_rating> is now required for any desktop or console application.
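
As an illustration, an application with nothing objectionable to declare can satisfy the new <content_rating> requirement with a single self-closing tag in its appdata file; this assumes the OARS 1.1 scheme, and apps that do need ratings would add child content_attribute tags instead:

<content_rating type="oars-1.1"/>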

Even if you don’t care about validation upstream, the new checks will soon be turned on for any apps built on Flathub, and downstream “packagers” will be pestering you for details when updates start failing. Although only a few apps currently fail, some of the missing metadata tags are important enough to fail the build. To test your app right now with the new validator:

$ flatpak remote-add --if-not-exists gnome-nightly https://sdk.gnome.org/gnome-nightly.flatpakrepo
$ flatpak install gnome-nightly org.gnome.Sdk
$ flatpak run --command=bash --filesystem=home:ro org.gnome.Sdk//master
# appstream-util validate /home/hughsie/Code/gnome-software/data/appdata/org.gnome.Software.appdata.xml.in
# exit

Of course, when the next tarball is released it’ll be available in your distribution as normal, but I wanted to get some early sanity checks in before I tag the release.

The LVFS is now a Linux Foundation project

The LVFS is now an official Linux Foundation project! I did a mini-interview if you want some more details about where the project came from and where it’s heading. I’m hoping the move to the Linux Foundation gives the project a lot more credibility with existing LF members, and it certainly takes some of the load from me. I’ll continue to develop the lvfs-website codebase as before, and still be the friendly face when talking to OEMs and ODMs.

In the short term, not much changes, although you might start to see some rebranding of the website itself. The server is also moving from a little VM in AMS to a fully scalable orchestrated thing maintained by people who actually understand how to be a sysadmin. If you’re interested in what’s happening on the LVFS, be sure to join the announcement mailing list. We’re averaging about 450,000 firmware downloads a month and still growing steadily, with more vendors joining every month.

In related news, there’s lots of new firmware on the LVFS, much of it addressing serious CVEs on lots of different laptop models. If you’ve not updated recently, now is the time to fix that.

Even more fun with SuperIO

My fun with SuperIO continues, and may now be at its logical end. I’ve now added the required code to the superio plugin to flash IT89xx embedded controllers. Most of the work was figuring out how to talk to the hardware on ports 0x62 and 0x66, although the flash “commands” are helpfully JEDEC-compliant. The actual flashing process is the typical sequence:

  • Enter into a bootloader mode (which disables your keyboard, fans and battery reporting)
  • Mark the internal EEPROM as writable
  • Erase blocks of data
  • Write blocks of data to the device
  • Read back the blocks of data to verify the write
  • Mark the internal EEPROM as read-only
  • Return to runtime mode

There were a few slight hiccups, in that when you read the data back from the device exactly one byte is predictably wrong, but nothing that can’t be worked around in software. Working around the wrong byte means we can verify the attestation checksum correctly.

Now, don’t try flashing your EC with random binaries. The binaries look unsigned, don’t appear to have any kind of checksum, and flashing the wrong binary to the wrong hardware has the failure mode of “no I/O devices appear at boot” so unless you have a hardware programmer handy it’s probably best to wait for an update from your OEM.

We also do the EC update from a special offline-update mode where nothing other than fwupd is running, much like we do the system updates in Fedora. All this work was supported by the people at Star Labs, and now basically everything in the LabTop Mk III is updatable in Linux. EC updates for Star Labs hardware should appear on the LVFS soon.

A fwupd client side certificate

In the soon-to-be-released fwupd 1.2.6 there’s a new feature that I wanted to talk about here, if nothing else so this post can serve as the documentation when people find these files and wonder what they are. The fwupd daemon now creates a self-signed client certificate for PKCS#7 signing at startup (if GnuTLS is enabled and new enough), which creates the root-readable /var/lib/fwupd/pki/secret.key and world-readable /var/lib/fwupd/pki/client.pem files.
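
If you’re curious about what got generated, either GnuTLS certtool or OpenSSL can dump the certificate details:

# either of these shows the subject and validity of the generated certificate
certtool --certificate-info --infile /var/lib/fwupd/pki/client.pem
openssl x509 -in /var/lib/fwupd/pki/client.pem -noout -subject -dates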

These certificates are used to sign text data sent to a remote server. At the moment this is only useful for vendors who also have accounts on the LVFS, so that when someone in their QA team tests a firmware update on real hardware, they can upload the firmware report with the extra --sign argument to sign the JSON blob with the certificate. This allows the LVFS to be sure the report upload comes from the vendor themselves, and will in future allow the trusted attestation device checksums (i.e. the PCR0 values) to be set automatically from these reports. Of course, the LVFS user needs to upload the certificate to the LVFS to make this work; I’ve written that functionality already and am just waiting for someone to review it.

It’ll take some time for the new fwupd to get included in all the major distributions, but when practical I’ll add instructions for companies using the LVFS on how to use this feature. I’m hoping that by making it easier to securely set the PCR0, more devices will have the attestation metadata needed to verify that the machine is indeed running the correct firmware and is secure.

Of course, fwupd doesn’t care whether the certificate is self-signed or issued from a corporate certificate signing request. The files in /var/lib/fwupd/pki/ can be replaced according to whatever policy is in place. We can also use this client certificate for any future agent check-in which we might need for the enterprise use cases: it allows us to send data from the client to a remote server and prove who the client is. Comments welcome.
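
As a rough sketch of the corporate case, assuming fwupd will happily use whatever valid key and certificate pair lives at those paths (the subject and file names here are made up):

# generate a key and CSR for the corporate CA to sign
openssl req -new -newkey rsa:2048 -nodes \
  -keyout /var/lib/fwupd/pki/secret.key \
  -out fwupd-client.csr \
  -subj "/O=Example Corp/CN=fwupd-client"
# then install the certificate the CA hands back
cp fwupd-client.pem /var/lib/fwupd/pki/client.pem
chmod 600 /var/lib/fwupd/pki/secret.key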

Making the LVFS and fwupd work in the enterprise

I’ve spent some time over the weekend thinking about how firmware updates should work when you’re an enterprise, i.e. when you’re responsible for more than about 100 broadly similar computers. Some companies using fwupd right now are managing over 100,000 devices (!) using a variety of non-awesome workarounds. So far we’ve not had a very good story on how to make firmware updates for corporate or IoT deployments “just work”, as we’ve been concentrating on the desktop use cases.

We’ve started working on some functionality in fwupd to install an optional “agent” that reports the versions of firmware installed to a central internal web service daily, so that the site admin can see which computers are not up to date with the latest firmware updates. I’d expect the admin could also approve updates there after in-house QA testing, and rate-limit the flow of updates to hardware of the same type. The reference web app would visually look like some kind of dashboard, although I’d be happy to also plug this information into existing system management systems like Lenovo XClarity or even Red Hat Satellite. The deliverable here would be to provide the information and the mechanism that can be used to implement whatever policy the management console defines.

This stuff isn’t particularly relevant to the average Linux user, and enabling this special “enterprise mode” would involve spinning up a web app on the internal network, manually enabling a systemd timer on all clients in the enterprise, and perhaps also setting up an LVFS mirror. The console certainly isn’t the kind of thing you’d run on the Internet, or something the LVFS would provide.
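
None of this exists yet, but a sketch of the kind of client-side timer an admin might enable could look like the units below; the unit names, the dashboard URL and the use of plain fwupdmgr get-devices output are all assumptions, not a design decision:

# /etc/systemd/system/fwupd-report.timer (hypothetical)
[Unit]
Description=Daily firmware inventory report

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fwupd-report.service (hypothetical)
[Unit]
Description=Report installed firmware versions to the internal dashboard

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'fwupdmgr get-devices | curl -sS --data-binary @- https://dashboard.internal.example/fwupd'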

If this sounds interesting, I’d love to hear some comments, feedback and wish list items. We’re at the pre-alpha stage right now and are just prototyping some toy code. Thanks!

Making ATA updates just work

The fwupd project has supported updating the microcode on ATA devices for about a month, and Star Labs is already shipping firmware on the LVFS. More are coming, but as part of the end-to-end testing with various deliberately-unnamed storage vendors we hit a thorny issue.

Most drives require the firmware updater to use the so-called 0xE mode, more helpfully called ATA_SUBCMD_MICROCODE_DOWNLOAD_CHUNKS in fwupd. This command transfers chunks of firmware to the device, and the ATA hardware then waits for a COMRESET before switching to the new firmware version. On most drives you can also use the 0x3 mode, which downloads the chunks and switches to the new firmware straight away using an ATA RESET. As in, the drive currently providing your root filesystem disconnects from your running system and then reconnects with the new firmware version running. The kernel should be okay with that (and it seems to work for me), but various people have advised us it would be a good way to cause accidental Bad Things™ to happen, which certainly seems plausible. Needless to say, we defaulted to the safe 0xE mode in fwupd 1.2.4, and thus require the user to reboot to switch to the new firmware version.
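
If you want to check what your own drive advertises, hdparm can show whether the DOWNLOAD MICROCODE feature is present at all (/dev/sda is obviously a placeholder for your device):

hdparm -I /dev/sda | grep -i microcode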

The issue we found is that about half of the ATA drive vendors require the drive to receive a COMRESET before switching to the new firmware. Depending on your main system firmware (and seemingly, the phase of the moon) you might only get a COMRESET when the device is initially powered on, rather than during a reset. This means we’d have to tell the user to shut down and then manually restart their system rather than just doing a system restart, which means various fwupd front ends like GNOME Software and KDE Discover would need updating with new strings and code. This isn’t exactly trivial for enterprise distros like RHEL, and fwupd doesn’t know the capabilities of the front end, so it can’t do anything sensible like hold back the update.

Additionally, the failure mode of installing a firmware update and then just restarting rather than shutting down would be that the firmware version is unchanged on the next boot; fwupd would recognize this and mark the update as failed. The user would then be prompted to update the firmware on the device they thought they had just updated. As my boss would say, “disappointing”.

Complexity to the rescue! There is one extra little-used mode in the ATA specification, called 0xF. This command causes the drive to immediately switch to the new firmware version, which as we’ve previously lamented might cause data loss. We can, however, use this mode on shutdown when the filesystems have all been remounted read-only. In fwupd git master (which is what will become version 1.2.6) we actually install a /usr/lib/systemd/system-shutdown/fwupd.shutdown script which checks the history database and activates the new firmware if any activation is required. This way it’ll always come back with the new firmware version when the user restarts, regardless of how the storage vendor interpreted the ATA specification.
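
The script fwupd actually ships may differ in detail, but the idea is a tiny hook along these lines, assuming the history database lives at its usual /var/lib/fwupd/pending.db path and that the fwupdtool activate subcommand is available:

#!/bin/sh
# /usr/lib/systemd/system-shutdown/fwupd.shutdown (sketch)
# only activate pending firmware if there is a history database at all
if [ -f /var/lib/fwupd/pending.db ]; then
    /usr/bin/fwupdtool activate
fi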

I guess I should also thank Mario Limonciello and the storage team at Dell for all the help with this. We’ll hopefully have some more good news to share soon.

Lenovo ThinkCentre joins the LVFS

Lenovo ThinkPad and ThinkStation have already been using the LVFS for some time, with many models supported from each group. Now the first firmware for the ThinkCentre line of hardware has appeared. ThinkCentre machines are often found in the enterprise, tucked neatly behind other hardware or under counter tops, working away for years without problems. With LVFS support, site administrators can now update firmware on machines either locally or over ssh. At the moment only the M625q model is listed as supported on the LVFS, but other models are in the pipeline and will appear when ready.

It’s been a good month for the LVFS: six new devices were added, and we celebrated the numerically significant milestone of 5 million firmware downloads. The move to the Linux Foundation is going well, and we’ll hopefully be moving the staging instance from a little VM to a proper cloud deployment, providing the scalability and uptime we need for critical infrastructure like this. If all goes to plan the main instance will move after a few months of testing.

PackageKit is dead, long live, well, something else

It’s probably no surprise to many of you that PackageKit has been in maintenance mode for quite some time. Although it was started over ten years ago (!), it’s not really had active maintenance since about 2014. Of course, I’ve still been merging PRs and slinging tarballs over the wall every few months, but nothing new was happening with the project, and I’ve worked on many other things since.

I think a little retrospective is useful here. PackageKit was conceived as an abstraction layer over about a dozen different package management frameworks. Initially it succeeded, with a lot of front-end UIs being written for the PackageKit API, making the Linux desktop a much nicer place for many years. Over the years most package managers have withered and died, and for the desktop at least only two formats really remain: .rpm and .deb. The former is handled by the dnf PackageKit backend, and the latter by aptcc.

Canonical seems to be going all in on Snaps, and I don’t personally think of .deb files as first-class citizens on Ubuntu any more – which is no bad thing. Snaps and Flatpaks are better than packages for desktop software in almost every way. Fedora is concentrating on Modularity and is joining most of the other distros in a shared Flatpak and Flathub future, and seems to be thriving because of it. Of course, I’m leaving out a lot of other distros, but from a statistics point of view they’re unfortunately not terribly relevant. Arch users are important, but they’re also installing primarily on the command line, not using an abstraction layer or GUI. Fedora is also marching towards an immutable base image using rpm-ostree, containers and Flatpaks, and then PackageKit isn’t just not required – it doesn’t actually get installed at all in Fedora Silverblue.

GNOME Software and the various KDE software centers already have an abstraction in the session, which they kind of have to, to support per-user Flatpak applications and per-user pet containers like Fedora Toolbox. I’ve also been talking to people in the Cockpit project; they’re in the same boat, and basically agree that having a shared system API to get the installed package list isn’t as useful as it used to be. Of course, we’ll need to support mutable systems for a long time (RHEL!) and so something has to provide a D-Bus interface for that. I’m not sure whether that should be dnfdaemon providing a PackageKit-compatible API, or whether it should just implement a super-simple interface that’s not using an API design from the last decade. At least from a gnome-software point of view it would just be one more plugin, just as we have a plugin for Flatpak, a plugin for Snap, and a plugin for PackageKit.

Comments welcome.

Using fwupd and updating firmware without using the LVFS

The LVFS is a webservice designed to allow system OEMs and ODMs to upload firmware easily, and for it to be distributed securely to tens of millions of end users. For some people, this simply does not work for good business reasons:

  • They don’t trust me, fwupd.org, GPG, certain OEMs or the CDN we use
  • They don’t want thousands of computers on an internal network downloading all the files over and over again
  • The internal secure network has no internet connectivity

For these cases there are a few different ways to keep your hardware updated, in order of simplicity:

Download just the files you need manually

Download the .cab files you found for your hardware and then install them on the target hardware via Ansible or Puppet using fwupdmgr install foo.cab — you can use fwupdmgr get-devices to get the existing firmware versions of all hardware. If someone wants to write the patch to add JSON/XML export to fwupdmgr that would be a very welcome thing indeed.
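
As a sketch, pushing an update with an Ansible ad-hoc command might look like this; the inventory group and the foo.cab path are placeholders, and depending on the device you may need extra options to avoid interactive reboot prompts:

# record what is currently installed
ansible workstations -b -m command -a "fwupdmgr get-devices"
# install the archive you downloaded for that hardware
ansible workstations -b -m command -a "fwupdmgr install /srv/firmware/foo.cab"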

Download and deploy firmware as part of an immutable image

If you’re shipping an image, you can just dump the .cab files into a directory in the deployment along with something like /etc/fwupd/remotes.d/immutable.conf (only on fwupd >= 1.2.3):

[fwupd Remote]
Enabled=true
Title=Vendor (Automatic)
Keyring=none
MetadataURI=file:///usr/share/fwupd/remotes.d/vendor/firmware

Then once you disable the LVFS, running fwupdmgr or fwupdtool will use only the cabinet archives you deploy in your immutable image (or even from an .rpm for that matter). Of course, you’re deploying a larger image because you might have several firmware files included, but this is how Google ChromeOS is using fwupd.
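
Disabling the LVFS remote can be as simple as flipping the Enabled key in the stock config (the sed invocation here assumes the default lvfs.conf layout); fwupdmgr get-remotes then shows what is active:

sed -i 's/^Enabled=true/Enabled=false/' /etc/fwupd/remotes.d/lvfs.conf
fwupdmgr get-remotes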

Sync all the public firmware from the LVFS to a local directory

You can use Pulp to mirror the entire contents of the LVFS (not private or embargoed firmware, for obvious reasons). Create a repo pointing to PULP_MANIFEST and then sync that on a regular basis to download the metadata and firmware. The Pulp documentation can explain how to set all this up. Make sure the local files are available from a webserver in your private network using SSL.
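
I won’t reproduce the Pulp documentation here, but with the Pulp 2 iso plugin the rough shape is something like the following; the repo id, the manifest URL and the exact CLI syntax are assumptions you should check against the Pulp docs:

# create a repo fed from the LVFS PULP_MANIFEST, then sync it regularly
pulp-admin iso repo create --repo-id=lvfs --feed=https://example.com/downloads/PULP_MANIFEST
pulp-admin iso repo sync run --repo-id=lvfs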

Then, disable the LVFS by deleting or modifying lvfs.conf, and create a myprivateserver.conf file in /etc/fwupd/remotes.d on the clients:

[fwupd Remote]
Enabled=true
Type=download
Keyring=gpg
MetadataURI=https://my.private.server/mirror/firmware.xml.gz
FirmwareBaseURI=https://my.private.server/mirror

Export a share to all clients

Again, use Pulp to create a big directory holding all the firmware (currently ~10GB), and keep it synced. This time create an NFS or Samba share and export it to clients. Map the folder on clients (an example mount command follows the snippet below), and then create a myprivateshare.conf file in /etc/fwupd/remotes.d:

[fwupd Remote]
Enabled=true
Title=Vendor
Keyring=none
MetadataURI=file:///mnt/myprivateshare/fwupd/remotes.d/firmware.xml.gz
FirmwareBaseURI=file:///mnt/myprivateshare/fwupd/remotes.d
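
Mapping the folder itself is just an ordinary mount; the server name and export path here are placeholders for whatever your site uses:

mount -t nfs fileserver.internal:/srv/lvfs-mirror /mnt/myprivateshare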

Create your own LVFS instance

The LVFS is a free software Python 3 Flask application and can be set up internally, or even externally for that matter; a rough sketch of standing up a test instance follows the list below. You have to configure much more this way, including things like generating your own GPG keys, uploading your own firmware and setting up users and groups on the server. Doing all this has a few advantages, namely:

  • You can upload your own private firmware and QA it, only pushing it to stable when ready
  • You don’t ship firmware which you didn’t upload
  • You can control the staged deployment, e.g. only allowing the same update to be deployed to 1000 servers per day
  • You can see failure reports from clients, to verify if the deployment is going well
  • You can see nice graphs about how many updates are being deployed across your organisation
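
As promised above, a very rough sketch of standing up a test instance; the repository URL, the virtualenv layout and the flask invocation are assumptions rather than documented steps, so check the lvfs-website README for the real instructions:

# all names below are assumptions; this only gets you a local test instance
git clone https://github.com/hughsie/lvfs-website.git
cd lvfs-website
python3 -m venv env && . env/bin/activate
pip install -r requirements.txt
FLASK_APP=app.py flask run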

I’m hoping to make the satellite-deployment LVFS use cases more concrete, and to add some code to the LVFS to make this easier, although it’s not currently required by any Red Hat customer. Certainly a “setup wizard” would make setting up the LVFS much easier than obscure commands on the console.

Comments welcome.