libxmlb now a dependency of fwupd and gnome-software

I’ve just released libxmlb 0.1.3, and merged the branches for fwupd and gnome-software so that it becomes a hard dependency of both projects. A few people have reviewed the libxmlb code, and Mario, Kalev and Robert reviewed the fwupd and gnome-software changes, so I’m pretty confident I’ve not broken anything too important — but more testing is very welcome. GNOME Software RSS usage is now about 50% of what ships in 3.30.x, and fwupd’s is down by 65%! If you want to ship the upcoming fwupd 1.2.0 or gnome-software 3.31.2 in your distro you’ll need to have libxmlb packaged, or be happy using a meson subproject to download libxmlb during the build of each dependent project.

Announcing the first release of libxmlb

Today I did the first 0.1.0 preview release of libxmlb. We’re at the “probably API stable, but no promises” stage. This is the library I introduced a couple of weeks ago, and since then I’ve been porting both fwupd and gnome-software to use it. The former is almost complete, and nearly ready to merge, but the latter is still work in progress with a fair bit of code to write. I did manage to launch gnome-software with libxmlb yesterday, and modulo a bit of brokenness it’s both faster to start (over 800ms faster from cold boot!) and uses an amazing 90Mb less RSS at runtime. I’m planning to merge the libxmlb branch into the unstable branch of fwupd in the next few weeks, so I need volunteers to package up the new hard dep for Debian, Ubuntu and Arch.

The tarball is in the usual place – it’s a simple Meson-built library that doesn’t do anything crazy. I’ve imported and built it already for Fedora, much thanks to Kalev for the super speedy package review.

I guess I should explain how applications are expected to use this library. At its core, there are just five different kinds of objects you need to care about:

  • XbSilo – a deduplicated string pool and a read-only node tree. This is typically kept alive for the application lifetime.
  • XbNode – a GObject-wrapped immutable node available as a query result from XbSilo.
  • XbBuilder – a “compiler” to build the XbSilo from XbBuilderNodes or XbBuilderSources. This is typically created and destroyed at startup or when the blob needs regenerating.
  • XbBuilderNode – a mutable node that can have a parent, children, attributes and a value.
  • XbBuilderSource – a source of data for XbBuilder, e.g. a .xml.gz file or just a raw XML string.
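To make the division of labour concrete, here is a tiny Python analogue of that object model. It is purely illustrative: the real library is C/GObject, and none of these class or method names exist in libxmlb — only the builder-compiles-into-silo flow and the string deduplication are faithful to the description above.

```python
# Illustrative Python analogue of the libxmlb object model; all names
# here are invented, only the concepts mirror the real library.

class BuilderNode:
    """Mutable node: has children, attributes and a text value."""
    def __init__(self, element, text=None, attrs=None):
        self.element = element
        self.text = text
        self.attrs = attrs or {}
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

class Silo:
    """Read-only result of compilation: a deduplicated string pool
    plus an immutable node tree, kept for the application lifetime."""
    def __init__(self, root, strtab):
        self.root = root
        self.strtab = strtab

class Builder:
    """The 'compiler': turns BuilderNodes into a Silo, interning
    every element name and text value exactly once."""
    def __init__(self):
        self.root = BuilderNode("root")

    def compile(self):
        strtab = {}
        def intern(s):
            return strtab.setdefault(s, len(strtab))
        def visit(node):
            intern(node.element)
            if node.text is not None:
                intern(node.text)
            for child in node.children:
                visit(child)
        visit(self.root)
        return Silo(self.root, strtab)

builder = Builder()
component = builder.root.add_child(BuilderNode("component"))
component.add_child(BuilderNode("id", text="gimp.desktop"))
component.add_child(BuilderNode("id", text="gimp.desktop"))  # duplicate
silo = builder.compile()
print(len(silo.strtab))  # → 4: the duplicate strings were stored once
```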

The way most applications will use libxmlb is to create a local XbBuilder instance, add some XbBuilderSources and then “ensure” a local cache file. The “ensure” process either loads the binary blob with mmap() if all the file mtimes are unchanged, or compiles a new blob and saves it to a new file. You can also tell the XbSilo to watch all the sources that it was built with, so that if any files change at runtime the valid property gets set to FALSE and the application can call xb_builder_ensure() again at a convenient time.
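The “ensure” logic can be sketched in a few lines of Python. This is not the libxmlb implementation (the real API is xb_builder_ensure(), and the `ensure`, `compile_fn` and sidecar `.meta` file here are all invented for illustration), but it shows why the second run is nearly free: when every source mtime matches, only the cached blob is touched.

```python
# Sketch of the "ensure" idea: reuse the compiled blob when every
# source mtime is unchanged, otherwise recompile. All helper names and
# the .meta sidecar file are hypothetical; the real code mmap()s the
# blob instead of reading it.
import json
import os

def ensure(cache_path, sources, compile_fn):
    meta_path = cache_path + ".meta"
    mtimes = {src: os.path.getmtime(src) for src in sources}
    if os.path.exists(cache_path) and os.path.exists(meta_path):
        with open(meta_path) as f:
            if json.load(f) == mtimes:
                with open(cache_path, "rb") as f:  # real code: mmap()
                    return f.read()
    blob = compile_fn(sources)  # the expensive XML parse + compile
    with open(cache_path, "wb") as f:
        f.write(blob)
    with open(meta_path, "w") as f:
        json.dump(mtimes, f)
    return blob

# demo: the second ensure() is just a cheap mtime comparison
import tempfile
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "example.xml")
with open(src, "w") as f:
    f.write("<components/>")

calls = []
def fake_compile(sources):
    calls.append(sources)
    return b"<binary blob>"

cache = os.path.join(tmpdir, "example.xmlb")
ensure(cache, [src], fake_compile)
ensure(cache, [src], fake_compile)
print(len(calls))  # → 1: the second call reused the cached blob
```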

Once the XbBuilder has been compiled, an XbSilo pops out. With the XbSilo you can query using most common XPath statements – I actually ended up implementing a FORTH-style stack interpreter, so we can now do queries like /components/component/id[contains(upper-case(text()),'GIMP')] – I’ll say a bit more on that in a minute. Queries can limit the number of results (for speed), and are deduplicated in a sane way, so it’s really quite a simple process to achieve something that would otherwise require a lot of C code. It’s possible to query an attribute or text value directly from a node, so the silo doesn’t have to be passed around either.

In the process of porting gnome-software, I had to make libxmlb thread-safe – which required some internal reorganisation. We now have a non-exported XbMachine stack interpreter, and the XbSilo registers the XML-specific methods (like contains()) and functions (like ~=). These get passed some per-method user data, and also some per-query private data that is shared with the node tree – allowing things like [last()] and position()=3 to work. The function callbacks just get passed a query-specific stack, which means you can allow things like comparing “1” to 1.00f. This makes it easy to support more of XPath in the future, or to support something completely application-specific like gnome-software-search() without editing the library.
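A toy version of such a stack machine shows the shape of the idea. This is Python rather than C, and the opcode format, function names and postfix compilation step are all invented for illustration; the real interpreter is the non-exported XbMachine.

```python
# Toy FORTH-style stack machine: the XPath predicate is compiled into
# postfix opcodes, and each registered method pops its arguments off a
# shared stack. Everything here is an illustrative sketch, not the
# libxmlb implementation.

def run(opcodes, text_value, methods):
    stack = []
    for op, arg in opcodes:
        if op == "push":
            stack.append(arg)
        elif op == "text":
            stack.append(text_value)  # text() of the current node
        else:
            methods[op](stack)  # a registered method, e.g. contains()
    return stack.pop()

def upper_case(stack):
    stack.append(stack.pop().upper())

def contains(stack):
    needle = stack.pop()
    haystack = stack.pop()
    stack.append(needle in haystack)

methods = {"upper-case": upper_case, "contains": contains}

# contains(upper-case(text()),'GIMP') compiled to postfix order:
program = [("text", None), ("upper-case", None),
           ("push", "GIMP"), ("contains", None)]
print(run(program, "gimp.desktop", methods))      # → True
print(run(program, "shotwell.desktop", methods))  # → False
```

Registering application-specific methods is then just a matter of adding another entry to the method table, which is why something like gnome-software-search() needs no library changes.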

If anyone wants to do any API or code review I’d be super happy to answer any questions. Coverity and valgrind seem happy enough with all the self tests, but that’s no replacement for a human asking tricky questions. Thanks!

Speeding up AppStream: mmap’ing XML using libxmlb

AppStream and the related AppData are XML formats that have been adopted by thousands of upstream projects and are being used in about a dozen different client programs. The AppStream metadata shipped in Fedora is currently a huge 13Mb XML file, which with gzip compresses down to a more reasonable 3.6Mb. AppStream is awesome; it provides translations of lots of useful data into basically all languages and includes screenshots for almost everything. GNOME Software is built around AppStream, and we even use a slightly extended version of the same XML format to ship firmware update metadata from the LVFS to fwupd.

XML does have two giant weaknesses. The first is that you have to decompress and then parse the files – which might include all the ~300 tiny AppData files as well as the distro-provided AppStream files, if you want to list installed applications not provided by the distro. Seeking lots of small files isn’t so slow on an SSD, and loading+decompressing a small file is actually quicker than loading an uncompressed larger file. The second is that parsing an XML file typically means you set up some callbacks, which then get called for every start tag, text section, then end tag – so for a 13Mb XML document that’s nested very deeply you have to do a lot of callbacks. This means you have to process the description of GIMP in every language before you can even see if Shotwell exists at all.
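The callback-per-tag cost is easy to see with any SAX-style parser; this short sketch (using Python’s expat bindings purely for illustration) counts how many callbacks fire even for a trivial document, and every one of them runs whether you care about that component or not.

```python
# Count the start-tag, end-tag and text callbacks a SAX-style parse
# fires: for a 13Mb deeply-nested document this adds up fast.
import xml.parsers.expat

def count_events(xml_bytes):
    counts = {"start": 0, "end": 0, "text": 0}
    parser = xml.parsers.expat.ParserCreate()

    def on_start(name, attrs):
        counts["start"] += 1

    def on_end(name):
        counts["end"] += 1

    def on_text(data):
        counts["text"] += 1

    parser.StartElementHandler = on_start
    parser.EndElementHandler = on_end
    parser.CharacterDataHandler = on_text
    parser.Parse(xml_bytes, True)
    return counts

# even a trivial three-element document fires a callback per tag
print(count_events(b"<components><component>"
                   b"<id>gimp</id></component></components>"))
```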

The typical way of parsing XML involves creating a “node tree” as the document is parsed. This lets you treat the XML document as a Document Object Model (DOM), navigating the tree and reading the contents in an object-oriented way. It also means you typically allocate the nodes themselves on the heap, plus copies of all the string data. AsNode in libappstream-glib has a few tricks to reduce RSS usage after parsing, which include:

  • Interning common element names like description, p, ul, li
  • Freeing all the nodes, but retaining all the node data
  • Ignoring node data for languages you don’t understand
  • Reference counting the strings from the nodes into the various appstream-glib GObjects

This still has two drawbacks: we need to store in hot memory all the screenshot URLs of all the apps you’re never going to search for, and we also need to parse all the long translated description data just to find out if gimp.desktop is actually installable. Deduplicating strings at runtime takes a nontrivial amount of CPU, and means we build a huge hash table that uses nearly as much RSS as we save by deduplicating.

On a modern system, parsing ~300 files takes less than a second, and the total RSS is only a few tens of Mb – which is fine, right? Except on resource constrained machines it takes 20+ seconds to start, and 40Mb is nearly 10% of the total memory available on the system. We have exactly the same problem with fwupd, where we get one giant file from the LVFS, all of which gets stored in RSS even though you never have the hardware that it matches against. Slow starting of fwupd and gnome-software is one of the reasons they stay resident, and don’t shutdown on idle and restart when required.

We can do better.

We do need to keep the source format, but that doesn’t mean we can’t create a managed cache to do some clever things. Traditionally I’ve been quite vocal against squashing structured XML data into databases like sqlite and Xapian, as it’s like pushing a square peg into a round hole, and forces you to think like a database, doing 10-level nested joins to query some simple thing. What we want to use is something like XPath, where you can query data using the XML structure itself.

We also want to be able to navigate the XML document as if it were a DOM, i.e. be able to jump from one node to its sibling without parsing all the great-great-great-grandchild nodes to get there. This means storing the offset to the sibling in a binary file.

If we’re creating a cache, we might as well do the string deduplication at creation time once, rather than every time we load the data. This has the added benefit that we’re converting the string data from variable-length strings that you compare using strcmp() to quarks that you can compare just by checking two integers. This is much faster, as any SAT solver will tell you. If we’re storing a string table, we can also store the NUL byte. This seems wasteful at first, but has one huge advantage – you can mmap() the string table. In fact, you can mmap the entire cache. If you order the string table in a sensible way then you store all the related data in one block (e.g. the <id> values) so that you don’t jump all over the cache invalidating almost everything just for a common query. mmap’ing the strings means you can avoid strdup()ing every string just in case; under memory pressure the kernel automatically reclaims the memory, and the next time it’s needed it is automatically loaded from disk again. It’s almost magic.
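A minimal sketch of that string table, in Python for clarity: each unique string is stored once with its trailing NUL, and its byte offset doubles as the integer “quark”. (In the real library the table lives in the mmap()ed cache file rather than a Python bytes object, and these function names are invented.)

```python
# Deduplicated string table sketch: offset == quark, so equality checks
# become integer comparisons instead of strcmp(). Names are illustrative.

def build_strtab(strings):
    table = bytearray()
    quarks = {}
    for s in strings:
        if s not in quarks:
            quarks[s] = len(table)          # the offset is the quark
            table += s.encode() + b"\x00"   # store the NUL byte too
    return bytes(table), quarks

def lookup(table, quark):
    end = table.index(b"\x00", quark)       # scan to the stored NUL
    return table[quark:end].decode()

table, quarks = build_strtab(["id", "gimp.desktop", "id", "name"])
print(quarks["id"])                           # → 0: stored only once
print(lookup(table, quarks["gimp.desktop"]))  # → gimp.desktop
```

Because the NUL is stored in the table, `table + quark` is already a valid C string in the mmap()ed file: no copy, no strdup().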

I’ve spent the last few days prototyping a library, which is called libxmlb until someone comes up with a better name. I’ve got a test branch of fwupd that I’ve ported from libappstream-glib and I’m happy to say that RSS has reduced from 3Mb (peak 3.61Mb) to 1Mb (peak 1.07Mb) and the startup time has gone from 280ms to 250ms. Unless I’ve missed something drastic I’m going to port gnome-software too, and will expect even bigger savings as the amount of XML is two orders of magnitude larger.

So, how do you use this thing? First, let’s create a baseline doing things the old way:

$ time appstream-util search gimp.desktop
real	0m0.645s
user	0m0.800s
sys	0m0.184s

To create a binary cache:

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.497s
user	0m0.453s
sys	0m0.028s

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.016s
user	0m0.004s
sys	0m0.006s

Notice the second time it compiled nearly instantly, as none of the filename or modification timestamps of the sources changed. This is exactly what programs would do every time they are launched.

$ du -h appstream.xmlb
4.2M	appstream.xmlb

$ time xb-tool query appstream.xmlb "components/component[@type='desktop']/id[text()='firefox.desktop']"
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
real	0m0.008s
user	0m0.007s
sys	0m0.002s

8ms includes the time to load the file, search for all the components that match the query and the time to export the XML. You get three results as there’s one AppData file, one entry in the distro AppStream, and an extra one shipped by Fedora to make Firefox featured in gnome-software. You can see the whole XML component of each result by appending /.. to the query. Unlike appstream-glib, libxmlb doesn’t try to merge components – which makes it much less magic, and a whole lot simpler.

Some questions answered:

  • Why not just use a GVariant blob?: I did initially, and the cache was huge. The deeply nested structure was packed inefficiently as you have to assume everything is a hash table of a{sv}. It was also slow to load; not much faster than just parsing the XML. It also wasn’t possible to implement the zero-copy XPath queries this way.
  • Is this API and ABI stable?: Not yet; it will be as soon as gnome-software is ported.
  • You implemented XPath in C‽: No, only a tiny subset. See the README.md.

Comments welcome.

3 Million Firmware Files and Counting…

In the last two years the LVFS has supplied over 3 million firmware files to end users. We now have about two dozen companies uploading firmware, of which 9 are multi-billion dollar companies.

Every month about 200,000 more devices get upgraded and from the reports so far the number of failed updates is less than 0.01% averaged over all firmware types. The number of downloads is going up month-on-month, although we’re no longer growing exponentially, thank goodness. The server load average is 0.18, and we’ve made two changes recently to scale even more for less money: signing files in a 30 minute cron job rather than immediately, and switching from Amazon to BunnyCDN.

The LVFS is mainly run by just one person (me!) and my time is sponsored by the ever-awesome Red Hat. The hardware costs, which recently included random development tools for testing the dfu and nvme plugins, and the server and bandwidth costs are being paid from charitable donations from the community. We’re even cost positive now, so I’m building up a little pot for the next server or CDN upgrade. By pretty much any metric, the LVFS is a huge success, and I’m super grateful to all the people that helped the project grow.

The LVFS does have one weakness: it has a bus factor of one. In other words, if I got splattered by a bus, the LVFS would probably cease to exist in its current form. To further grow the project, and to reduce the dependence on me, we’re going to be moving various parts of the LVFS to the Linux Foundation. This means that there’ll be sysadmins who don’t have to google basic server things, a proper community charter, and access to an actual legal team. From an OEM point of view, nothing much should change, including the most important thing: it’ll continue to be free to use for everyone. The existing server and all the content will be migrated to the new infrastructure. From a user’s point of view, new metadata and firmware will be signed by the Linux Foundation key rather than my key, although we haven’t decided on a date for the switch-over yet. The LF key has been trusted by fwupd for firmware since 1.0.8, and it’s trivial to backport to older branches if required.

Before anyone gets too excited and starts pointing me at all my existing bugs for my other stuff: I’ll probably still be the person “onboarding” vendors onto the LVFS, and I’m fully expecting to remain the maintainer and core contributor to the lvfs-website code itself — but I certainly should have a bit more time for GNOME Software and color stuff.

In related news, even more vendors are jumping on the LVFS. No more public announcements yet, but hopefully soon. A lot of hardware companies will only be comfortable “going public” when the new hardware currently in development is on shelves in stores. So, please raise a glass to the next 3 million downloads!

Realtek on the LVFS!

For the last week I’ve been working with Realtek engineers adding USB3 hub firmware support to fwupd. We’re still fleshing out all the details, as we also want to update any devices attached to the hub using i2c – which is more important than it first seems. A lot of “multifunction” dongles or USB3 hubs are actually USB3 hubs with other hardware connected internally. We’re going to be working on updating the HDMI converter firmware next, probably just by dropping in a quirk file and adding some standard keys to fwupd. This will let us use the same plugin for any hardware that uses the rts54xx chipset as the base.

Realtek have been really helpful and open about the hardware, which is a refreshing difference to a lot of other hardware companies. I’m hopeful we can get the new plugin in fwupd 1.1.2 although supported hardware won’t be available for a few months yet, which also means there’s no panic getting public firmware on the LVFS. It will mean we get a “works out of the box” experience when the new OEM branded dock/dongle hardware starts showing up.

Fun with SuperIO

While I’m waiting to hear back from NVMe vendors (already one tentatively onboard!) I’ve started looking at “embedded controller” devices. The EC on your laptop historically just controlled the PS/2 keyboard and mouse, but now does fan control, power management, UARTs, GPIOs, LEDs, SMBus, and various tasks the main CPU is too important to care about. Vendors issue firmware updates for this kind of device, but normally wrap up the EC update as part of the “BIOS” update, as the system firmware and EC work together using various ACPI methods. Some vendors do the EC update out-of-band, and so we need to teach fwupd how to query the EC to get the model and version on that specific hardware. The Linux laptop vendor Tuxedo wants to update the EC and system firmware separately using the LVFS, and helpfully loaned me an InfinityBook Pro 13 that was immediately disassembled and connected to all kinds of exotic external programmers. On first impressions the N131WU seems quick, stable and really well designed internally — I’m sure it would get a 10/10 for repairability.

At the moment I’m just concentrating on SuperIO devices from ITE. If you’re interested in what SuperIO chip(s) you have on your machine you can either use superiotool from coreboot-utils or sensors-detect from lm_sensors. If you’ve got a SuperIO device from ITE please post what signature, vendor and model machine you have in the comments and I’ll ask if I need any more information from you. I’m especially interested in vendors that use devices with the signature 0x8587, which seems to be a favourite with the Clevo reference board. Thanks!

Please welcome AKiTiO to the LVFS

Another week, another vendor. This time it’s AKiTiO, a company that makes a large number of very nice Thunderbolt peripherals.

Over the last few weeks AKiTiO added support for the Node and Node Lite devices, and I’m sure there’ll be more in the future. It’s been a pleasure working with their engineers and getting them up to speed with uploading to the LVFS.

In other news, Lenovo also added support for the ThinkPad T460 on the LVFS, so get any updates while they’re hot. If you want to try this you’ll have to enable the lvfs-testing remote either using fwupdmgr enable-remote lvfs-testing or using the sources dialog in recent versions of GNOME Software. More Lenovo updates coming soon, and hopefully even more vendor announcements too.

Adventures with NVMe, part 2

A few days ago I asked people to upload their NVMe “cns” data to the LVFS. So far, 908 people did that, and I appreciate each and every submission. I promised I’d share my results, and this is what I’ve found:

Number of vendors implementing the slot 1 read-only “s1ro” factory fallback: 10 – this was way less than I hoped. Not all is lost: the number of firmware slots in a device, “nfws”, indicates how many different versions of firmware the drive can hold, just like some wireless broadband cards. The idea is that a bad firmware flash means you can “fall back” to an old version that actually works. It was surprising how many drives didn’t have this feature because they only had one slot in total.

I also wanted to know how many firmware versions there were for a specific model (deduping by removing the capacity string in the model); the idea being that if drives with the same model string all had the same version firmware then the vendor wasn’t supplying firmware updates at all, and might be a lost cause, or have perfect firmware. Vendors don’t usually change shipped firmware on NVMe drives for no reason, and so a vendor having multiple versions of firmware for a given model could indicate a problem, or an enhancement important enough to re-run all the QA checks.

So, not all bad, but we can’t just assume that trying to flash a firmware is a safe thing to do for all drives. The next, much bigger problem was trying to identify which drives should be flashed with a specific firmware. You’d think this would be a simple problem, where the existing firmware version would be stored in the “fr” firmware revision string and the model name would be stored in the “mn” string. Alas, only Lenovo and Apple store a sane semver like 1.2.3; other vendors seem to encode the firmware revision using as-yet-unknown methods. Unhelpfully, the model name alone isn’t all we need to identify the firmware to flash, as different drives can have different firmware for the laptop OEM without changing the mn or fr. For this I think we need to look into the elusive “vs” vendor-defined block, which was the reason I was asking for the binary dump of the CNS rather than the nvme -H or nvme -o json output. The vendor block isn’t formally defined as part of the NVMe specification and the ODM (and maybe the OEM?) can use this however they want.

Only 137 out of the supplied ~650 NVMe CNS blobs contained vendor data. SK hynix drives contain an interesting-looking string of something like KX0WMJ6KS0760T6G01H0, but I have no idea how to parse that. Seagate has simply 2002. Liteon has a string like TW01345GLOH006BN05SXA04. Some Samsung drives have things like KR0N5WKK0184166K007HB0 and CN08Y4V9SSX0087702TSA0 – the same format as Toshiba CN08D5HTTBE006BEC2K1A0 but it’s weird that the blob is all ASCII – I was somewhat hoping for a packed GUID in the sea of NULs. They do have some common sub-sections, so if you know what these are please let me know!

I’ve built a fwupd plugin that should be able to update firmware on NVMe drives, but it’s 100% untested. I’m going to use the leftover donation money for the LVFS to buy various types of NVMe hardware that I can flash with different firmware images and not cry if all the data gets wiped or the device gets bricked. I’ve already emailed my contact at Samsung and fingers crossed something nice happens. I’ll do the same with Toshiba and Lenovo next week. I’ll also update this blog post next week with the latest numbers, so if you upload your data now it’s still useful.

NVMe Firmware: I Need Your Data

In a recent Google Plus post I asked what kind of hardware was most interesting to be focusing on next. UEFI updating is now working well with a large number of vendors, and the LVFS “onboarding” process is well established now. On that topic we’ll hopefully have some more announcements soon. Anyway, back to the topic in hand: The overwhelming result from the poll was that people wanted NVMe hardware supported, so that you can trivially update the firmware of your SSD. Firmware updates for SSDs are important, as most either address data consistency issues or provide nice performance fixes.

Unfortunately there needs to be some plumbing put in place first, so don’t expect anything awesome very fast. The NVMe ecosystem is pretty new, and things like “what version number firmware am I running now” and “is this firmware OEM firmware or retail firmware” are still queried using vendor-specific extensions. I only have two devices to test with (Lenovo P50 and Dell XPS 13) and so I’m asking for some help with data collection. Primarily I’m trying to find out what NVMe hardware people are actually using, so I can approach the most popular vendors first (via the existing OEMs). I’m also going to be looking at the firmware revision string that each vendor sets to find quirks we need — for instance, Toshiba encodes MODEL VENDOR, and everyone else specifies VENDOR MODEL. Some drives contain the vendor data with a GUID, some don’t, and I have no idea of the relative numbers or how many different formats there are. I’d also like to know how many firmware slots the average SSD has, and the percentage of drives that have a protected slot 1 firmware. This all lets us work out how safe it would be to attempt a new firmware update on specific hardware — the very last thing we want to do is brick an expensive new NVMe SSD with all your data on it.

So, here’s what I would like you to do. You don’t need to reboot, unmount any filesystems or anything like that. Just:

  1. Install nvme (e.g. dnf install nvme-cli) or build it from source
  2. Run the following command:
    sudo nvme id-ctrl --raw-binary /dev/nvme0 > /tmp/id-ctrl
    
  3. If that worked, run the following command:
    curl -F type=nvme \
        -F "machine_id="`cat /etc/machine-id` \
        -F file=@/tmp/id-ctrl \
        https://staging.fwupd.org/lvfs/upload_hwinfo

If you’re not sure whether you have an NVMe drive you can check with the nvme command above. The command isn’t doing anything with the firmware; it’s just asking the NVMe drive to report what it knows about itself. It should be 100% safe; the kernel already did the same request at system startup.

We are sending your random machine ID to ensure we don’t record duplicate submissions — if that makes you unhappy for some reason just choose some other 32-character hex string. The binary file created by nvme contains the encoded model number and serial number of your drive; if this makes you uneasy please don’t send the file.
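If you’re curious (or cautious) about what that binary file contains, the layout is fixed by the NVMe specification, so a few lines of Python are enough to inspect it before uploading — a rough sketch, assuming the standard 4096-byte Identify Controller structure:

```python
# Rough parser for the id-ctrl blob created above. Offsets follow the
# NVMe Identify Controller layout: "sn" at bytes 4-23, "mn" at 24-63,
# "fr" at 64-71 and the vendor-specific "vs" block at 3072-4095.

def parse_id_ctrl(blob):
    def field(lo, hi):
        # the ASCII fields are space padded per the spec
        return blob[lo:hi].decode("ascii", "replace").strip("\x00 ")
    return {
        "sn": field(4, 24),      # serial number
        "mn": field(24, 64),     # model name
        "fr": field(64, 72),     # firmware revision
        "vs": blob[3072:4096],   # vendor-specific block (raw bytes)
    }

# usage: parse_id_ctrl(open("/tmp/id-ctrl", "rb").read())["fr"]
```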

Many thanks, and needless to say I’ll be posting some stats here when I’ve got enough submissions to be statistically valid.

GNOME Software and automatic updates

For GNOME 3.30 we’ve enabled something that people have been asking for since at least the birth of the gnome-software project: automatically installing updates.

This of course comes with some caveats. Since it’s still not safe to auto-update packages (trust me, I triaged the hundreds of bugs) we will restrict automatic updates to Flatpaks. Although we do automatically download things like firmware updates, ostree content and package updates by default, they’re deployed manually as before. I guess it’s important to say that the auto-update of Flatpaks is optional and can easily be turned off in the GUI, and that you’ll be notified when applications have been auto-updated and need restarting.

Another common complaint with gnome-software was that it didn’t show the same list of updates as command line tools like dnf. The internal refactoring required for auto-deploying updates also allows us to show updates that are available, but not yet downloaded. We’ll still try to auto-download them ahead of time if possible, but won’t hide them until then. This does mean that “new” updates could sit in the updates panel for some time while they download, before either the firmware update is performed or the offline update is scheduled.

This also means we can add some additional UI controlling whether updates should be downloaded and deployed automatically. This doesn’t override the existing logic regarding metered connections or available battery power, but does give the user some more control without resorting to a gsettings invocation on the command line.

Also notable for GNOME 3.30 is that we’ve switched to the new libflatpak transaction API, which both simplifies the flatpak plugin considerably and means we install the same runtimes and extensions as the flatpak CLI. This was another common source of frustration, as anyone trying to install from a flatpakref with RuntimeRepo set will testify.

With these changes we’ve also bumped the plugin interface version, so if you have out-of-tree plugins they’ll need recompiling before they work again. After a little more polish, the new GNOME Software 3.29.90 will soon be available in Fedora Rawhide, and will thus be available in Fedora 29. If 3.30 is as popular as I think it might be, we might even backport gnome-software 3.30.1 into Fedora 28 like we did for 3.28 and Fedora 27 all those moons ago.

Comments welcome.