BuildStream 2 news

After a long time passed without any BuildStream updates, I’m proud to finally announce that the unstable BuildStream 2 development phase is coming to a close.

As of the 1.95.0 beta release, we have made the promise to remain stable and to refrain from introducing any further API breaking changes.

At this time, we encourage people to try migrating to the BuildStream 2 API and to report any issues they encounter via our issue tracker.

Installing BuildStream 2

At this time we recommend installing BuildStream 2 into a Python venv; this allows it to be installed in parallel with BuildStream 1, in case you need both on the same system.

First you need to install BuildBox using these instructions, and then you can install BuildStream 2 from source following these commands:

# Install BuildStream 2 in a virtual environment
# (this assumes you have already cloned the buildstream git repository)
cd buildstream
git checkout 1.95.0
python3 -m venv ~/buildstream2
~/buildstream2/bin/pip install .

# Activate the virtual environment to use `bst`
. ~/buildstream2/bin/activate

Porting projects to BuildStream 2

In order to assist in the effort of porting your projects, we have compiled a guide which should be helpful in updating your BuildStream YAML files.

The guide does not cover the Python API. If you have custom plugins which need to be ported to the new API, you can consult the API reference here, and you are encouraged to reach out on our project mailing list, where we will be eager to answer questions and help out with the porting effort.

What’s new in BuildStream 2?

An exhaustive list would be very long, so I’ll try to summarize the main changes as succinctly as possible.

Remote Execution

BuildStream now supports building remotely using the Remote Execution API (REAPI), which is a standard protocol used by various applications and services such as Bazel, recc, BuildGrid, Buildbarn and Buildfarm.

As the specification allows for variations in the setup and implementation of remote execution clusters, there are some limitations on which setups BuildStream can use, documented here.
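To give a concrete idea of what this looks like from the user’s side, here is a rough sketch of a user configuration fragment declaring a remote execution cluster. The service layout and endpoint URLs are purely illustrative assumptions; consult the documentation for the exact schema your cluster requires:

```yaml
# Hypothetical buildstream.conf fragment enabling remote execution.
# Endpoints and ports are illustrative, not real services.
remote-execution:
  execution-service:
    url: http://execution.example.com:50051
  storage-service:
    url: https://storage.example.com:11001
  action-cache-service:
    url: http://cache.example.com:50052
```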

Load time performance

In order to support large projects (in the 100,000 element range), the loading codepaths have undergone a massive overhaul. Various recursive algorithms have been eliminated in favor of iterative ones, and Cython is used for the hot code paths.

This makes large projects manageable, and noticeably improves performance and responsiveness on the command line.

Artifact and Source caching

BuildStream now exclusively uses implementations of the Content Addressable Storage (CAS) and Remote Asset services, which are part of the Remote Execution API, to store artifacts and sources in remote caches. As such, we no longer ship our own custom implementation of an artifact server, and currently recommend using BuildBarn.

Supported implementations of cache servers are documented here.

In addition to caching and sharing built artifacts on artifact servers, BuildStream now also caches source code in CAS servers. This implies a performance penalty on the initial build while BuildStream populates the caches, but should improve performance under regular operation, assuming that you have persistent local caches, or decent bandwidth between your build machines and CAS storage services.
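As a sketch of how this is configured (the URL is an illustrative assumption, and the exact schema is described in the documentation), remote artifact and source caches are declared in the user configuration:

```yaml
# Hypothetical buildstream.conf fragment declaring remote caches.
artifacts:
- url: https://cache.example.com:11001
  push: false      # pushing normally requires server-side credentials
source-caches:
- url: https://cache.example.com:11001
  push: false
```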

BuildBox Sandboxing

Instead of using bubblewrap directly, BuildStream now uses the BuildBox abstraction for its sandboxing implementation, which will use bubblewrap on Linux platforms.

On Linux build hosts, BuildBox will use a FUSE filesystem to expose files in CAS directly to the containerized build environment. This optimizes the staging of the build environment and also avoids the hard limit on hardlinks which was sometimes encountered with BuildStream 1.

BuildBox is also used for the worker implementation when building remotely using the BuildGrid remote execution cluster implementation.

Cache Key Stability

BuildStream 2 comes with a new promise to never inadvertently change artifact cache keys in future releases.

This helps to reduce unnecessary rebuilds, and the potential re-validation of artifacts that can result, when upgrading BuildStream to new versions.

It is always possible that a future feature might require a change to the artifact content and format. Should such a scenario arise, our policy is to continue supporting the old format and to make sure that new features which require artifact changes are opt-in.

Caching failed builds

With BuildStream 2, it is now possible to preserve and share the state of a failed build artifact.

This should be useful for local debugging of builds which have failed in CI or on another user’s machine.

YAML Format Enhancements

A variety of enhancements have been made to the YAML format:

  • Variable expansion is now performed unconditionally on all YAML. This means it is now possible to use variable substitution when declaring sources as well as elements.
  • The toplevel-root and project-root automatic variables have been added, allowing some additional flexibility for specifying sources which must be obtained in project relative locations on the build host.
  • Element names are more clearly specified, and it is now possible to refer to elements across multiple junction/project boundaries, both on the command line and also as dependencies.
  • It is now possible to override an element in a subproject, including the ability to override junction definitions in subprojects. This can be useful to resolve conflicts where multiple projects depend on the same junction (diamond shaped project dependency graphs). This can also be useful to override how a single element is built in a subproject, or cause subproject elements to depend on local dependencies.
  • The core link element has been introduced, which simply acts as a symbolic link to another element in the same project or in a subproject.
  • New errors are produced when loading multiple junctions to the same project; these errors can be explicitly avoided using the duplicates or internal keywords in your project.conf.
  • ScriptElement and BuildElement implementations now support extended configuration in dependencies, allowing one to stage dependencies in alternative locations at build time.
  • Loading pip plugins now supports version constraints, which offers a more reliable method for specifying which plugins your project depends on when loading plugins from pip packages installed in the host Python environment.
  • Plugins can now be loaded across project boundaries using the new junction plugin origin. This is now the recommended way to load plugins, and plugin packages such as buildstream-plugins are accessible using this method. An example of creating a junction to a plugin package and using plugins from that package is included in the porting guide.
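To illustrate a couple of these enhancements, here is a hypothetical element sketch (the names, URL and paths are illustrative) showing variable substitution in a source declaration and extended dependency configuration:

```yaml
# hello.bst — illustrative sketch only
kind: autotools

variables:
  version: "1.2.3"

sources:
- kind: tar
  # Variables are now expanded in sources as well as elements
  url: https://example.com/releases/hello-%{version}.tar.xz

build-depends:
- base.bst
# Extended dependency configuration: stage this dependency
# at an alternative location in the build sandbox
- filename: buildtools.bst
  config:
    location: "/buildtools"
```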

New Plugin Locations

Plugins which used to be a part of the BuildStream core, along with some additional plugins, have been migrated to the buildstream-plugins repository. A list of migrated core plugins and their new homes can be found in the porting guide.
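For reference, plugin origins are declared in project.conf; this sketch shows both the pip origin with a version constraint and the junction origin (the package, junction and plugin names here are illustrative assumptions):

```yaml
# project.conf fragment — illustrative sketch only
name: example-project
min-version: 2.0

plugins:
# Load element plugins from a pip package installed on the host,
# with a version constraint:
- origin: pip
  package-name: buildstream-plugins>=1.91
  elements:
  - autotools

# Load source plugins from a junctioned project (the recommended method):
- origin: junction
  junction: plugins.bst
  sources:
  - git_tag
```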

Porting Progress

Abderrahim has been maintaining the BuildStream 2 ports of freedesktop-sdk and gnome-build-meta, and the work looks close to complete, as can be observed in the epic issue.

Special Thanks

We’ve come a long way over the last few years, and I’d like to thank:

  • All of our contributors individually, who are listed in our beta release announcement.
  • Our downstream projects and users who have continued to have faith in the project.
  • Our stakeholder Bloomberg, who funded a good portion of the 2.0 development, took us into the Remote Execution space and also organized the first Build Meetup events.
  • Our stakeholder Codethink, who has supported and funded the project since founding it in 2016, organized several hackfests over the years, organized the 2021 chapter of the Build Meetup during lockdowns, and has granted us the opportunity to tie up the loose ends and bring BuildStream 2.0 to fruition.

BuildStream news and 2.0 planning

Hi all,

It has been a very long time since my last BuildStream related post, and there has been a huge amount of movement since then, including the initial inception of our website, the 1.2 release, the beginnings of BuildGrid, and several hackfests, including one in Manchester hosted by Codethink and another in London followed by the Build Meetup, both hosted by Bloomberg at their London office.

BuildStream Gathering (BeaverCon) January 2019 group photo

As a New Year’s resolution, I should commit to blogging about upcoming hackfests as soon as we decide on dates; sorry I have been failing at this!

Build Meetup

While the other BuildStream related hackfests were very eventful (each of them with at least 20 attendees and jam packed with interesting technical and planning sessions), the Build Meetup in London is worth a special mention as it marks the beginnings of a collaborative community of projects in the build space.

Developers of Bazel, BuildStream & BuildGrid and Buildbarn (and more!) united in London to discuss our wide variety of requirements for the common remote execution API specification, had various discussions on how we can improve the performance and interoperability of our tooling in distributed build scenarios and, most importantly, had an opportunity to meet in person and drink beer together!

BuildStream 2.0 planning

As some may have noticed, we did not release BuildStream 1.4 in the expected timeframe, and the big news is that we won’t be.

After much deliberation and hair pulling, we have decided to focus on a new BuildStream 2.0 API.

Why?

The reasons for this are multifold, but mainly:

  • The project has received much more interest and funding than anticipated; as such, we currently have a staggering number of contributors working full time on BuildStream & BuildGrid combined. Considering the magnitude of features and changes we want to make, committing to a 6 month stable release cycle for this development is causing too much friction.
  • A majority of contributors would prefer to refine and perfect the command line interface rather than commit to the interface we initially had, and indeed the interface will be more comprehensive and intuitive as a result of adding new functionality. These changes will not, however, be backwards compatible.
  • With a long term plan to make it easier for external plugins to be developed, along with a plan to split plugins out into more sensible separate repositories, we also anticipate some unavoidable API breakages in the parts of the format related to plugin loading.

To continue with the 1.x API while making the changes we want to make would be dishonest and just introduce unexpected breakages.

What are we planning?

Some highlights of what we are working on for future BuildStream 2.x include:

Remote execution support

A huge amount of the ongoing work is towards supporting distributed build scenarios and interoperability with other tooling on server clusters which implement the remote execution API.

This means:

  • Element builds which occur on distributed worker machines
  • Ability to implement a Bazel element which operates on the same build servers and shares the same caching mechanics
  • Ability to use optimizations such as RECC which interact with the same servers and caching mechanics

Optimizing for large scale projects

While performance of BuildStream 1.x is fairly acceptable when working with smaller projects with 500-1000 elements, it falls flat on its face when processing larger projects with ~50K elements or more.

New commands for interacting with artifacts

For practical purposes, we have been getting along well enough without the ability to list the artifacts in your cache or examine details about specific artifacts, but the overall experience will be greatly enriched with the ability to specify and manipulate artifacts (rather than elements) directly on the command line.

Stability of artifact cache keys

This is a part of BuildStream which has never yet been stable, meaning that any minor point upgrade (e.g. from 1.0 -> 1.2) would result in a necessary rebuild of all of a given project’s artifacts.

While this is not exactly on the roadmap yet, there is a growing interest in all contributing parties to make these keys stable, so I have an expectation that this will stabilize before any 2.0 release.

Compatibility concerns

BuildStream 2.x will be incompatible with 1.x; as such, we will need to guarantee safety to users of either 1.x or 2.x, probably in the form of ensuring parallel installability of BuildStream itself, ensuring that the correct plugins are always loaded for the correct version, and ensuring that 1.x and 2.x clients do not cross contaminate each other in any way.

While there are no clear commitments about just how much the 2.x API will resemble the 1.x API, we are committed to providing a clear migration guide for projects which need to migrate, and we have a good idea what is going to change.

  • The command line interface will change a lot. We consider this to be the least painful for users to adapt to, and as there are a lot of things which we want to enhance, we expect to take a lot of liberties in changing the command line interface.
  • The plugin facing Python API will change; we cannot be sure by how much. One important change is that plugins will not interact directly with the filesystem but instead must use an abstract API. There are not that many plugins out in the wild, so we expect that this migration will not be very painful for end users.
  • The YAML format will change the least. We recognize that this is the most painful API surface to change and as such we will try our best to not change it too much. Not only will we ensure that a migration process is well documented, but we will make efforts to ensure that such migrations require minimal effort.

We have also made some commitment to testing BuildStream 2.x on a continuous basis and ensuring that we can build the freedesktop-sdk project with the new BuildStream at all times.


We have not committed to any date for the 2.0 release as of yet, and we explicitly do not have any intention to commit to a date until such a time that we are comfortable with the new API and that our distributed build story has sufficiently stabilized. I can say with confidence that this will certainly not be in 2019.

We will be releasing development snapshots regularly while working towards 2.0, and these will be released in the 1.90.x range.

In the meantime, as far as I know there are no features which are desperately needed by users at this time, and as long as bugs are identified and fixed in the 1.2.x line, we will continue to issue bugfix releases.

We expect that this will provide the safety required for users while also granting us the freedom we need to develop towards the new 2.0.


Often, a conference is an opportunity to visit exotic and distant locations like Europe, and maybe we forget to check out the exciting conferences closer to home.

In this spirit, I will be giving another BuildStream talk at FOSSASIA this year. This conference has been running strong since 2009, takes place in Singapore and lasts four days including talks, workshops, an exhibition and a hackathon.

I’m pretty excited to finally see this for myself and would like to encourage people to join me there!

BuildStream Hackfest and FOSDEM

Hi all!

In case you happen to be here at FOSDEM, we have a BuildStream talk in the distributions track tomorrow at noon.

BuildStream Hackfest

I also wanted to sum up a last minute BuildStream hackfest which occurred in Manchester just a week ago. Bloomberg sent some of their Developer Experience engineering team members over to the Codethink office in Manchester where the whole BuildStream team was present, and we split up into groups to plan upcoming coding sprints, land some outstanding work and fix some bugs.

Here are some of the things we’ve been talking about…

Caching the build tree

Currently we only cache the output in a given artifact, but caching the build tree itself, including the sources and intermediate object files can bring some advantages to the table.

Some things which come to mind:

  • When opening a workspace, if the element has already been built, it becomes interesting to check out the whole built tree instead of only the sources. This can turn your first workspace build into an incremental build.
  • Reconstruct build environment to test locally. In case you encounter failures in remote CI runners, it’s interesting to be able to reproduce the shell environment locally in order to debug the build.

Benchmarking initiative

There is an initiative started by Angelos Evripiotis to get a benchmarking process in place to measure BuildStream’s performance in various aspects, such as load times for complex projects, staging time for preparing a build sandbox, etc., so we can keep an eye on performance improvements and regressions over time.

We talked a bit about the implementation details of this, I especially like that we can use BuildStream’s logging output to drive benchmarks as this eliminates the observer side effects.

Build Farm

We talked about farming out tasks and discussed the possibility of leveraging the remote execution framework which the Bazel folks are working on. We noticed that if we could leverage such a framework to farm out the execution of a command against a specific filesystem tree, we could essentially use the same protocol to farm out a command to a worker with a specified CPU architecture, solving the cross build problem at the same time.

Incidentally, we also spent some time with some of the Bazel team here at FOSDEM, and seeing as there are some interesting opportunities for these projects to complement each other, we will try to see about organizing a hackfest for these two projects.

Incremental Builds

Chandan Singh finally landed his patches which mount the workspaces into the build environment at build time instead of copying them in; this is the first big step towards giving us incremental builds for workspaced elements.

Besides this, we need to handle the case where configure commands should not be run again once they have already been run, with an option to reset or force them to be run (i.e. avoid recreating config.h and rebuilding everything anyway). There is also another subtle bug to investigate here.

But more than this, we also got to talking about a possible workflow where one runs CI on the latest of everything in a given mainline, and runs the builds incrementally, without workspaces. This would require some client state which would recall what was used in the last build in order to feed in the build tree for the next build and then apply/patch the differences to that tree, causing an incremental build. This can be very interesting when you have highly active projects which want to run CI on every commit (or branch merge), and then scale that to thousands of modules.

Reproducibility Testing

BuildStream strives for bit-for-bit reproducibility in builds, but it should have some minimal support for allowing one to easily check just how reproducible their builds are.

Jim MacArthur has been following up on this.


In Summary

The hackfest was a great success but was organized on rather short notice; we will try to organize the next hackfest well in advance and give more people a chance to participate.


And… here is a group photo!

BuildStream Hackfest Group Photo


If you’re at FOSDEM… then see you in a bit at the GNOME beer event!

GUADEC & BuildStream

After a much needed 2 week vacation following GUADEC, finally I’m getting around to writing up a GUADEC post.


Great venue, conveniently located close to downtown. I feel like it was a smaller event this year compared to other years that I have attended (which are not many); still, I enjoyed most, if not all, of the talks I went to.

Some highlights for me were attending Juan’s Glade talk; he always does something fun with the presentation, and creates the presentation using Glade itself, which is pretty cool. It would always be nice to get some funding for that project, as it’s always had potential…

Both Christian and Michael’s talks on Builder and LibreOffice respectively were entertaining and easy to digest.

Probably the most interesting and important talk I attended was Richard Brown’s talk which was about software distribution models and related accountability (broadly speaking). It’s important that we hear the concerns of those who have been maintaining traditional distros, so that we are prepared for the challenges and responsibilities that come with distributing binary stacks (Flatpak runtimes and apps).

My deepest regret is that I missed out on all of Carlos Garnacho’s talks. Here we are, all concerned with the one talk we have to give, and meanwhile Carlos is so generous that he is giving three talks, all without breaking a sweat (a round of applause please!).

First and foremost to me, GUADEC is an opportunity to connect in person with the human backend component of the IRC personas we interact with on a day to day basis, and this is always a pleasure 🙂

So thanks to the local organization team for organizing a great event, for manning the booths and doing all the stuff that makes this possible, and thanks to everyone who sponsored the event!


It has been a huge year for BuildStream so far, and I’m very happy with the results. Last year we started out in November from scratch, and now, while our initial road map is not exactly complete, it’s in great shape. This is thanks in no small part to the efforts and insights of my team members who have been dealing with build system related problems for years before me (Sam Thursfield, Jürg Billeter, Jonathan Maw and more).

I threw together a last minute demo for my talk at GUADEC, showing that we can viably build Flatpak runtimes using BuildStream and that we can do it using the same BuildStream project (automatically generated from the current jhbuild modulesets) used to build the rest of GNOME. I messed up the demo a bit because it required a small feature that I wrote up the day before and that had a bug; but I was able to pull it together just in time and show gedit running in an SDK that causes GtkLabels to be upside down (this allowed me to also demo the workspaces and non-strict build mode features which I’ve been raving about).

The unconference workshop did not go quite as well as I had hoped; I think I just lacked the talent to coordinate around 15 people and get everyone to follow a step by step hands on experience, lesson learned I guess. Still, people did try building things and there was a lot of productive conversation which went on throughout the workshop.

GNOME & BuildStream

During the unconference days I had some discussions with some of the release team members regarding moving forward with building GNOME with BuildStream.

The consensus at this time is that we are going to look into a migration after releasing GNOME 3.26 (which is soon !), which means I will be working full time on this for… a while.

Some things we’re looking into (but not all at once):

  • Migrate to maintaining a BuildStream project instead of JHBuild modulesets
  • Ensure the GNOME BuildStream project can be used to build Flatpak runtimes
  • Migrate the Flatpak sdkbuilder machines to use BuildStream to build the GNOME SDK
  • Also build the freedesktop SDK using BuildStream as we can tie things in better this way
  • Finally replace the Yocto based freedesktop-sdk-base project with a runtime bootstrap built with BuildStream
  • Adapt GNOME Continuous to consume the same GNOME BuildStream project, and also have GNOME Continuous contribute to a common artifact share (so developers don’t have to build things which GNOME Continuous has already built).

These are all things which were discussed but I’m only focusing on landing the first few of the above points before moving on with other enhancements.

Also, I have accepted an invitation to join the GNOME release team!

This probably won’t change my life dramatically, as I expect I’ll be working on building and releasing GNOME anyway, just in a more “official” capacity.


This year we did a lot of great work, and GUADEC was a climax in our little BuildStream journey. All in all I enjoyed the event and am happy about where things are going with the project, it’s always exciting to show off a new toy and rewarding when people are interested.

Thanks to everyone who helped make both GUADEC and BuildStream happen!

Continuous BuildStream conversions of GNOME Modulesets

This is another followup post in the BuildStream series.

As I’ve learned from my recent writings on d-d-l, it’s hard to digest information when too much of it is presented at once, so today I’m going to try to talk about one thing only: A continuous migration path for moving away from the JHBuild modulesets to the YAML files which BuildStream understands.

Why continuous migrations?

Good question!

It would be possible to just provide a conversion tool (which is indeed a part of the mechanics we’ve put in place), but that would leave GNOME in a situation where there would have to be a decided “flag day” on which the JHBuild based modulesets would be deprecated and everyone would have to move at once.

This continuous migration allows us to improve the user and maintainer story in BuildStream, and allows people to try building GNOME with BuildStream along the way until such a point that the GNOME release team and community is ready to take a plunge.

How does it work?

The migrations are running on a dedicated server (which just happens to be one of the servers that Codethink contributed to build GNOME things on arm architectures last year) and there are a few moving parts, which should be quite transparent to the end user.

Here is a little diagram I threw together to illustrate the big picture:

About debootstrap-ostree

One of the beautiful things about BuildStream is that we do not ever allow host tools or dependencies to infiltrate the build sandbox. This means that in order to obtain the system dependencies required to build GNOME, we need to provide them from somewhere. Further, we need that source to be something revisioned so that builds can be reproducible.

The debootstrap-ostree script does this by running Debian’s multistrap program to cross bootstrap the GNOME system dependencies (currently using Debian testing) for the four architectures we are currently interested in: i386, x86_64, arm and aarch64.

This process simply commits the results of the multistrap into an ostree repository hosted on the same machine, and runs once daily.

In case you are wondering “why use a debian base” instead of “my favorite distro foo”, the answer is just that time is precious and this was very easy to set up. With that said, longer term I think it will be interesting to set up a service for running builds against a multitude of baselines (as I tentatively outlined in this message).

  • Code Repository:
  • Hosted outputs:

About jhbuild2bst

The jhbuild2bst conversion program is a fork of upstream JHBuild itself; this was just the easiest way to do it, as we leverage all of the internal XML parsing logic.

Again because of the “no host dependency” nature of BuildStream, there is some extra static data which goes into the conversion. This data defines the base system that everything we convert depends on to build. Naturally this static data defines a BuildStream element which imports the debootstrap-ostree output, you can take a look at that here.

The automated conversion works by first checking out the latest gnome-modulesets-base repository and then running jhbuild2bst in that repository to get a conversion of the latest upstream jhbuild modulesets and the latest static data together.

This conversion runs every 10 minutes and results in a no-op in the case that the static data and upstream modulesets have not changed. The output is then committed to the master branch of this git repository.

  • jhbuild2bst fork:
  • static data:
  • converted:

The “tracked” branch

In addition to the automated conversions taking place above, which produce output in the master branch of the automated git repository, there is another task which runs bst track on the same output; this is a heavier workload and runs less frequently than the conversions themselves.

So what is “bst track” exactly?

If you look at the GNOME modulesets, you will find that (except for in previous stable releases which refer to precise tarball versions), the currently developed version of GNOME specifies only branches for every module.

The raw conversions (available in the master branch of the converted repo) will only show the tracking branches taken from the jhbuild modulesets, but running “bst track” will modify the BuildStream project inline with the latest source references for each respective tracking branch.

As BuildStream favors determinism, it will refuse to build anything without explicit references to an exact set of inputs; however, BuildStream will automate this part for the user with “bst track” so you don’t have to care. It is however important to us that the activity of changing your exact source references be an explicit one. For instance, BuildStream also now supports “bst build --track” so that you can track the newest source references on each branch and then build them, all in one session.
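As a sketch, here is what a git source looks like once “bst track” has recorded an exact reference (the URL and sha here are illustrative):

```yaml
# Illustrative git source declaration in a converted element.
sources:
- kind: git
  url: https://git.gnome.org/browse/gedit
  track: master            # the branch to follow when tracking
  ref: 1b4f9e2c...         # exact commit sha, written by `bst track`
```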

While it is certainly possible to just grab the latest conversion from the master branch and build it with “bst build --track”, it is also interesting that we have a repository where it has been done automatically. This allows me to say that I have built exactly this <commit sha> from the tracking branch, and that if you build the same commit on your host, the results you get will be exactly the same.

To get the latest tracked sources:

git clone
cd gnome-modulesets
git checkout tracked

How can I try it?

So we’ve finally reached the fun part!

First, install BuildStream by following the “Installing BuildStream” instructions in the documentation.

Now just run these commands:

git clone
cd gnome-modulesets
bst build --track meta-gnome-apps-tested.bst

So the above will build or fail depending on the time of day, while building the tracked branch will build or fail entirely predictably. In the meantime I will be babysitting the converted builds and trying to get as much as I can building reliably.

Now lets say you want to run something that you’ve built:

bst shell --scope run gnome-sudoku.bst

[ ... BuildStream now prepares a sandbox and launches a shell ... ]

# gnome-sudoku

Warning: This will take around 15GB of disk space (better have 20) and a lot of processing (on my very powerful laptop, just over 2 hours to build around 130 modules). The artifact sharing feature, which is almost complete at this time, will help to at least reduce the initial amount of disk space needed to get off the ground. Eventually (and after we get dual cache key modes implemented), if there are enough builders contributing to the artifact cache, you will be able to get away with only building the modules you care about working on.

What Next?

Flying pigs, naturally 🙂

Jokes aside, the next steps for this will be to leverage the variants feature in BuildStream to allow building multiple variants of GNOME pipelines. Most particularly, we will want to build only a subset of the GNOME modulesets against the freedesktop-sdk-images runtime as a base, and output a GNOME Flatpak runtime. This will be particularly challenging, as we will have to integrate it into the continuous conversion, so it will most probably have to just take a list of which modules in the modulesets we want to include in the output GNOME Flatpak runtime.

Also, for the full GNOME builds, we will be looking into creating a single target which encompasses everything we want to roll into a desktop system and generate bootable images directly from the continuously converted GNOME builds.


BuildStream progress and booting images

It’s been a while since my initial post about BuildStream and I’m happy to see that it has generated some discussion.

Since then, here at Codethink we’ve made quite a bit of progress, but we still have some road to travel before we can purport to solve all of the world’s build problems.

So here is a report on the progress we’ve made in various areas.


Last time I blogged, project infrastructure was still not entirely sorted. Now that this is in place and will remain fixed for the foreseeable future, I’ll provide the more permanent links:

Links to the same resources in the previous post have been updated accordingly.

A note on GitLab

GitLab provides us with some irresistible features.

Aside from the Merge Request feature, which really does lower the barrier to contributing patches, the pre-merge CI pipelines allow us to ensure the test cases run prior to accepting any patch, and were a deciding factor in remaining hosted on GitLab for our git repository in lieu of creating a repo on

Another feature we get for free with GitLab’s pipelines is automatic publishing of our documentation, generated from source, whenever a commit lands on the master branch; this was all very easy to set up.

User Experience

A significantly large portion of a software developer’s time is spent building and assembling software. Especially in tight debug and test loops, the seconds it takes, and the menial tasks which stand between an added printf() statement and a running test to reproduce some issue, can make the difference between tooling which is actually helpful to the user and tooling which just gets in the way of progress.

As such, we are paying attention to the user experience and have plans in place to ensure the most productive experience is possible.

Here are some of the advancements made since my first post


Some of the elements we considered important when viewing the output of a build include:

  • Separate and easy-to-find log files. Many build tools which use a serial build model will leave you with one huge log file to parse in order to figure out what happened, which is rather unwieldy to read. On the other hand, tools which exercise a parallelized build model can leave you searching through log directories for the build log you are looking for.
  • Constant feedback of what is being processed. When your build appears to hang for 30 minutes while all of your cores are being hammered down by a WebKit build, it’s nice to have some indication that a WebKit build is in fact taking place.
  • Consideration of terminal width. It is desirable, however not always possible, to avoid wrapping lines in the output of any command line interface.
  • Colorful and aligned output. When viewing a lot of terminal output, it helps to use some colors to assist the user in identifying some output they may be searching for. Likewise, alignment and formatting of text helps the user to parse more information with less frustration.
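The width and alignment concerns above boil down to a few lines of code. Here is a purely illustrative Python sketch (the function name and column layout are invented for the example, this is not BuildStream’s actual formatting code):

```python
import shutil

def format_status(element, action, width=None):
    # Fall back to 80 columns when no terminal size is available
    if width is None:
        width = shutil.get_terminal_size(fallback=(80, 24)).columns
    # Left-align the action in a fixed-width column so rows line up
    line = "{:<10} {}".format(action, element)
    # Truncate rather than wrap when the terminal is too narrow
    if len(line) > width:
        line = line[:width - 3] + "..."
    return line

print(format_status("gedit.bst", "build", width=80))
```

In the real tool the same idea is applied to every rolling-log row, with colors layered on top.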

Here is a short video showing what the output currently looks like:

I’m particularly happy about how the status bar remains at the bottom of the terminal output while the regular rolling log continues above. While the status bar tells us what is going on right now, the rolling log above provides detail about what tasks are being launched, how long they took to complete and in what log files you can find the detailed build log.

Note that colors and status lines are automatically disabled when BuildStream is not connected to a tty. Interactive mode is also automatically disabled in that case. However, using the bst --log-file /path/to/build.log … option will allow you to preserve the master build log of the entire session and also work in interactive mode.
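The tty detection itself is trivial; a minimal sketch of the check (with a hypothetical helper name, not BuildStream’s actual code) looks like this:

```python
import io
import sys

def want_fancy_output(stream=sys.stdout, force=False):
    # Colors and status lines only make sense on a real terminal,
    # unless the user explicitly forces them on
    if force:
        return True
    return hasattr(stream, "isatty") and stream.isatty()

# A pipe or file is not a tty, so fancy output is disabled for it
logfile = io.StringIO()
```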

Job Control

Advancements have also been made in the scheduler and how child tasks are managed.

When Ctrl-C is pressed in interactive mode, all ongoing tasks are suspended and the user is presented with some choices:

  • continue – Carries on processing and queuing jobs
  • quit – Carries on with ongoing jobs but stops queuing new jobs
  • terminate – Terminates any ongoing jobs and exits immediately

Similarly, if an ongoing build fails in interactive mode, all ongoing tasks will be suspended while the user has the same choices, and an additional choice to debug the failing build in a shell.

Unfortunately, continuing with a “repaired” build is not possible at this time in the way it is with JHBuild. One day it should be possible in some developer mode where the user accepts that anything built from that point on can only be used locally: any generated artifacts would be tainted, as they no longer correspond to their deterministic cache keys, and those artifacts would have to be rebuilt with a fix to the input .bst file before they could be shared with peers.
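To see why such artifacts would be tainted, consider how a deterministic cache key behaves: it is a digest over an element’s own inputs plus the keys of its dependencies, so any change propagates to every consumer. A rough sketch (not BuildStream’s actual key algorithm):

```python
import hashlib
import json

def cache_key(element_inputs, dep_keys):
    # Serialize deterministically so identical inputs
    # always produce identical keys
    payload = json.dumps(
        {"inputs": element_inputs, "deps": sorted(dep_keys)},
        sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

base = cache_key({"kind": "import", "ref": "abc123"}, [])
app = cache_key({"kind": "autotools", "ref": "def456"}, [base])

# A "repair" made inside the sandbox without touching the .bst file
# changes the artifact contents but not its key -- which is exactly
# why such artifacts would be considered tainted.
```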

New Element Plugins

For those who have not been following closely, BuildStream is a system for the modeling and running of build pipelines. While this is fully intended for software building and the decoupling of the build problem and the distribution problem; in a more abstract perspective it can be said that BuildStream provides an environment for the modeling of pipelines, which consist of elements which perform mutations on filesystem data.

The full list of Element and Source plugins currently implemented in BuildStream can be found on the front page of the documentation.

As a part of my efforts to fully reproduce and provide a migration path for Baserock’s declarative definitions, some interesting new plugins were required.


The meson element is a BuildElement for building modules which use meson as their build system.

Thanks goes to Patrick Griffis for filing a patch and adding this to BuildStream.


The compose plugin creates a composition of its own build dependencies; which is to say that its direct dependencies are not transitive, and depending on a compose element can only pull in the output artifact of the compose element itself and none of its dependencies (a brief explanation of build and runtime dependencies can be found here).

Basically, this is just a way to collect the output of various dependencies and compress them into a single artifact, with some additional options.

For the purpose of categorizing the output of a set of dependencies, we have also introduced the split-rules public data, which can be read from the dependencies of a given element. The default split-rules are defined in BuildStream’s default project configuration, and can be overridden on a per-project and also on a per-element basis.

The compose element makes use of this public data in order to provide a more versatile composition, which is to say that it’s possible to create an artifact composition of all of the files which are captured by a given domain declared in your split-rules, for instance all of the files related to internationalization, or the debugging symbols.
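The domain-based filtering can be pictured as simple pattern matching over an artifact’s file list. The patterns below are invented for illustration; the real default split-rules live in BuildStream’s project configuration and are more complete:

```python
from fnmatch import fnmatchcase

# Hypothetical split-rules; the real defaults are more complete
SPLIT_RULES = {
    "runtime": ["/usr/bin/*", "/usr/lib/*.so*"],
    "debug":   ["/usr/lib/debug/*"],
    "locale":  ["/usr/share/locale/*"],
}

def split_domain(files, domain, rules=SPLIT_RULES):
    # Keep only the files captured by one split-rules domain
    return [f for f in files
            if any(fnmatchcase(f, p) for p in rules[domain])]

files = ["/usr/bin/gedit",
         "/usr/lib/libgedit.so.1",
         "/usr/lib/debug/gedit.debug",
         "/usr/share/locale/fr/gedit.mo"]
```

A compose element which includes only the runtime domain would then keep the first two files and drop the debugging symbols and translations.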


kind: compose
description: Initramfs composition
depends:
- filename: gnu-toolchain.bst
  type: build
- filename: initramfs/initramfs-scripts.bst
  type: build

config:
  # Include only the minimum files for the runtime
  include:
  - runtime

The above example takes the gnu-toolchain.bst stack which basically includes a base runtime with busybox, and adds to this some scripts. In this case the initramfs-scripts.bst element just imports an init and shutdown script required for the simplest of initramfs variations. The output is integrated; which is to say that things like ldconfig have run and the output of those has been collected in the output artifact. Further, any documentation, localization, debugging symbols etc, have been excluded from the composition.


The script element is a simple but powerful element allowing one to stage more than one set of dependencies into the sandbox in different places.

One set of dependencies is used to stage the base runtime for the sandbox, and the other is used to stage the input which one intends to mutate in some way to produce output, to be collected in the regular /buildstream/install location.


kind: script
description: The compressed initramfs
depends:
- filename: initramfs/initramfs.bst
  type: build
- filename: foundation.bst
  type: build

config:
  base: foundation.bst
  input: initramfs/initramfs.bst

  commands:
  - mkdir -p %{install-root}/boot
  - (find . -print0 | cpio -0 -H newc -o) |
    gzip -c > %{install-root}/boot/initramfs.gz

This example element will take the foundation.bst stack element (which in this context, is just a base runtime with your regular shell tools available) and stage that at the root of the sandbox, providing the few tools and runtime we want to use. Then, still following the same initramfs example as above, the integrated composition element initramfs/initramfs.bst will be staged as input in the /buildstream/input directory of the build sandbox.

The script commands then simply use the provided base tools to create a gzipped cpio archive inside the /buildstream/install directory, which will be collected as the artifact produced by this script.

A bootable system

Another thing we’ve been doing since last we touched base is providing a migration path for Baserock users to use BuildStream.

This is a particularly interesting case for BuildStream because Baserock systems provide metadata to build a bootable system from the ground up, from a libc and compiler bootstrapping phase all the way up to the creation and deployment of a bootable image.

In this way we cover a lot of ground and can now demonstrate that bootstrapping, building and deploying a bootable image as a result is all possible using BuildStream.

The bootstrap

One of the more interesting parts is that the bootstrap remains almost unchanged, except for the key ingredient which is that we never allow any host tools to be present in the build sandbox.

The working theory is that whenever you bootstrap, you bootstrap from some tools. If you were ever able to obtain these tools in binary form installed on your computer, then it should also be possible to obtain them in the form of a chrootable sysroot (or “SDK”).

Anyone who has had a hand in maintaining a tree of build instructions which include a bootstrap phase from host tooling to first get off the ground (like buildroot or yocto) will have lived through the burden of vetting new distros as they roll out and patching builds so as to work “on the latest debian” or whatnot. This whole maintenance burden is simply dropped by ensuring that host tools are not a variable in the equation but rather a constant.

Assembling the image

When it comes time to assemble an image to boot with, there are various options and it should not be such a big deal, right ? Well, unfortunately it’s not quite that simple.

It turns out that even in 2017, the options we have for assembling a bootable file system image as a regular unprivileged user are still quite limited.

Short of building qemu and using some virtualization, I’ve found that the only straightforward method of installing a boot loader is with syslinux on a vfat filesystem. There are some tools around for manipulating ext2 filesystems in user space, but these are largely unneeded anyway, as static device nodes and assigning file ownership to arbitrary uid/gids are mostly unnecessary when using modern init systems. In any case, recent versions of e2fsprogs provide an option for populating the filesystem at creation time.

Partitioning an image for your file systems is also possible as a regular user, but populating those partitions is a game of splicing filesystem images into their respective partition locations.

I am hopeful however that with some virtualization performed entirely inside the build sandbox, we can achieve a much better outcome using libguestfs. I’m not altogether clear on how supermin and libguestfs come together but from what I understand, this technology will allow us to mount any linux supported filesystem in userspace, and quite possibly without even having (or using) the supporting filesystem drivers in your host kernel.

That said, for now we settle for the poor man’s basic tooling and live with the restriction of having our boot partition be a vfat partition. The image can be created using the script element described above.


kind: script
description: Create a deployment of the GNOME system
depends:
- filename: gnome/gnome-system.bst
  type: build
- filename: deploy-base.bst
  type: build

variables:
  # Size of the disk to create
  # Should be able to calculate this based on the space
  # used, however it must be a multiple of (63 * 512) bytes
  # as mtools wants a size that is divisible by sectors (512 bytes)
  # per track (63).
  boot-size: 252000K

  rootfs-size: 4G
  swap-size: 1G
  sector-size: 512

config:
  base: deploy-base.bst
  input: gnome/gnome-system.bst

  commands:
  - |
    # Split up the boot directory and the other
    # This should be changed so that the /boot directory
    # is created separately.

    cd /buildstream
    mkdir -p /buildstream/sda1
    mkdir -p /buildstream/sda2

    mv %{build-root}/boot/* /buildstream/sda1
    mv %{build-root}/* /buildstream/sda2

  - |
    # Generate an fstab
    cat > /buildstream/sda2/etc/fstab << EOF
    /dev/sda2 / ext4 defaults,rw,noatime 0 1
    /dev/sda1 /boot vfat defaults 0 2
    /dev/sda3 none swap defaults 0 0
    EOF

  - |
    # Create the syslinux config
    mkdir -p /buildstream/sda1/syslinux
    cat > /buildstream/sda1/syslinux/syslinux.cfg << EOF
    PROMPT 0

    SERIAL 0 115200

    DEFAULT boot
    LABEL boot

    KERNEL /vmlinuz
    INITRD /initramfs.gz

    APPEND root=/dev/sda2 rootfstype=ext4 init=/sbin/init
    EOF

  - |
    # Create the vfat image
    truncate -s %{boot-size} /buildstream/sda1.img
    mkdosfs /buildstream/sda1.img

  - |
    # Copy all that stuff into the image
    mcopy -D s -i /buildstream/sda1.img -s /buildstream/sda1/* ::/

  - |
    # Install the bootloader on the image, it will load the
    # config file from inside the vfat boot partition
    syslinux --directory /syslinux/ /buildstream/sda1.img

  - |
    # Now create the root filesys on sda2
    truncate -s %{rootfs-size} /buildstream/sda2.img
    mkfs.ext4 -F -i 8192 /buildstream/sda2.img \
              -L root -d /buildstream/sda2

  - |
    # Create swap
    truncate -s %{swap-size} /buildstream/sda3.img
    mkswap -L swap /buildstream/sda3.img

  - |

    #        Partition the disk            #

    # First get the size in bytes
    sda1size=$(stat --printf="%s" /buildstream/sda1.img)
    sda2size=$(stat --printf="%s" /buildstream/sda2.img)
    sda3size=$(stat --printf="%s" /buildstream/sda3.img)

    # Now convert to sectors
    sda1sec=$(( ${sda1size} / %{sector-size} ))
    sda2sec=$(( ${sda2size} / %{sector-size} ))
    sda3sec=$(( ${sda3size} / %{sector-size} ))

    # Now get the offsets in sectors; the first sector is
    # reserved for the MBR partition table
    sda1offset=1
    sda2offset=$(( ${sda1offset} + ${sda1sec} ))
    sda3offset=$(( ${sda2offset} + ${sda2sec} ))

    # Get total disk size in sectors and bytes
    sdasectors=$(( ${sda3offset} + ${sda3sec} ))
    sdabytes=$(( ${sdasectors} * %{sector-size} ))

    # Create the main disk and do the partitioning
    truncate -s ${sdabytes} /buildstream/sda.img
    parted -s /buildstream/sda.img mklabel msdos
    parted -s /buildstream/sda.img unit s mkpart primary fat32 \
       ${sda1offset} $(( ${sda1offset} + ${sda1sec} - 1 ))
    parted -s /buildstream/sda.img unit s mkpart primary ext2 \
       ${sda2offset} $(( ${sda2offset} + ${sda2sec} - 1 ))
    parted -s /buildstream/sda.img unit s mkpart primary \
       linux-swap \
       ${sda3offset} $(( ${sda3offset} + ${sda3sec} - 1 ))

    # Make partition 1 the boot partition
    parted -s /buildstream/sda.img set 1 boot on

    # Now splice the existing filesystems directly into the image
    dd if=/buildstream/sda1.img of=/buildstream/sda.img \
      ibs=%{sector-size} obs=%{sector-size} conv=notrunc \
      count=${sda1sec} seek=${sda1offset} 

    dd if=/buildstream/sda2.img of=/buildstream/sda.img \
      ibs=%{sector-size} obs=%{sector-size} conv=notrunc \
      count=${sda2sec} seek=${sda2offset} 

    dd if=/buildstream/sda3.img of=/buildstream/sda.img \
      ibs=%{sector-size} obs=%{sector-size} conv=notrunc \
      count=${sda3sec} seek=${sda3offset} 

  - |
    # Move the image where it will be collected
    mv /buildstream/sda.img %{install-root}
    chmod 0644 %{install-root}/sda.img

As you can see, the script element is a bit too verbose for this type of task. Following the pattern we have in place for the various build elements, we will soon be creating a reusable element with some simpler parameters (filesystem types, image sizes, swap size, partition table type, etc.) for the purpose of whipping together bootable images.

A booting demo

So for those who want to try this at home, we’ve prepared a complete system which can be built in the build-gnome branch of the buildstream-tests repository.

BuildStream now requires Python 3.4 instead of 3.5, so this should hopefully be repeatable on most stable distros; e.g. Debian Jessie ships 3.4 (and also has the required ostree and bubblewrap available in the jessie-backports repository).

Here are some instructions to get you off the ground:

mkdir work
cd work

# Clone related repositories
git clone
git clone

# Checkout build-gnome branch
cd buildstream-tests
git checkout build-gnome
cd ..

# Make sure you have ostree and bubblewrap provided by your distro
# first, you will also need pygobject for python 3.4

# Install BuildStream as local user, does not require root
# If this fails, it's because you lack some required dependency.
cd buildstream
pip install --user -e .
cd ..

# If you've gotten this far, then the following should also succeed
# after many hours of building.
cd buildstream-tests
bst build gnome/gnome-system-image.bst

# Once the above completes, there is an image which can be
# checked out from the artifact cache.
# The following command will create ~/VM/sda.img
bst checkout gnome/gnome-system-image.bst ~/VM/

# Now you can use your favorite VM to boot the image, e.g.:
qemu-system-x86_64 -m size=1024 ~/VM/sda.img

# GDM is currently disabled in this build, once the VM boots
# you can login as root (no password) and in that VM you can run:
systemctl start gdm

# And the above will bring up gdm and start the regular
# gnome-initial-setup tool.

With SSD storage and a powerful quad core CPU, this build completes in less than 5 hours (and pretty much makes full use of your machine’s resources all along the way). All told, the build will take around 40GB of disk space to build and store the result of around 500 modules. I would advise having at least 50GB of free space for this though, especially to account for some headroom in the final step.

Note: This is not an up to date GNOME system based on current modulesets yet, but rather a clone/conversion of the system I tried integrating last year using YBD. I will soon be starting on creating a more modular repository which builds only the components relevant to GNOME and follows the releases, for that I will need to open some dialog and sort out some of the logistics.

Note on modularity

The mentioned buildstream-tests repository is one huge repository with build metadata to build everything from the compiler up to a desktop environment and some applications.

This is not what we ultimately want: first off, it’s obviously a huge mess to maintain, and you don’t want your project to be littered with build metadata that you’re not going to use (which is what happens when forking projects like buildroot). Secondly, even when you are concerned with building an entire operating system from scratch, we have found that without modularity, changes introduced in the lower levels of the stack tend to be pushed onto the stacks which consume those modules. This introduces much friction in the development and integration process for such projects.

Instead, we will eventually be using recursive pipeline elements to allow modular BuildStream projects to depend on one another in such a way that consuming projects can always decide what version of a project they depend on will be used.


Introducing BuildStream

Greetings fellow Gnomies 🙂

At Codethink over the past few months we’ve been revisiting our approach to assembly of whole-stack systems, particularly for embedded Linux systems and custom GNOME based systems.

We’ve taken inspiration, lessons and use-cases from various projects including OBS, Reproducible Builds, Yocto, Baserock, buildroot, Aboriginal, GNOME Continuous, JHBuild, Flatpak Builder and Android repo.

The result of our latest work is a new project, BuildStream, which aims initially to satisfy clear requirements from GNOME and Baserock, and grow from there. BuildStream uses some key GNOME plumbing (OSTree, bubblewrap) combined with declarative build-recipe description to provide sandboxed, repeatable builds of GNOME projects, while maintaining the flexibility and speed required by GNOME developers.

But before talking about BuildStream, let’s go over what this can mean for GNOME in 2017.

Centralization of build metadata

Currently we build GNOME in various ways, including JHBuild XML, Flatpak JSON for the GNOME Runtime and SDK, and GNOME Continuous JSON for CI.

We hope to centralize all of this so that the GNOME release team need only maintain one single set of core module metadata in one repository in the same declarative YAML format.

To this end, we will soon be maintaining a side branch of the GNOME release modulesets so people can try this out early.

GNOME Developer Experience

JHBuild was certainly a huge improvement over the absolutely nothing that we had in place before it, but is generally unreliable due to its reliance on host tooling and dependencies.

  • Newcomers can have a hard time getting off the ground and making sure they have satisfied the system dependencies.
  • Builds are not easily repeatable; you cannot easily build GNOME 3.6 today with a modern set of dependencies.
  • Not easy to test core GNOME components like gnome-session or the gnome-initial-setup tool.

BuildStream nips these problems in the bud with an entirely no-host-tooling policy; in fact, you can potentially build all of GNOME on your computer without ever installing gcc. Instead, GNOME will be built on top of a deterministic runtime environment which closely resembles the freedesktop-sdk-images Flatpak runtime but will also include the minimal requirements for booting the results in a VM.

Building in the Swarm

BuildStream supports artifact cache sharing so that authenticated users may upload successful build results to share with their peers. I doubt that we’ll want to share all artifacts between random users, but having GNOME Continuous upload to a common artifact cache will alleviate the pain of WebKit rebuilds (unless you are hacking on WebKit, of course).

Flatpak / Flathub support

BuildStream will also be available as an alternative to flatpak-builder.

We will be providing an easy migration path and conversion script for Flatpak JSON which should be good enough for most if not all Flatpak app builds.

As the Flathub project develops, we will also work towards enabling submission of BuildStream metadata as an alternative to the Flatpak Builder JSON.

About BuildStream

Unlike many existing build systems, BuildStream treats the problem of building and distribution as separate problem spaces. Once you have built a stack in BuildStream it should be trivial enough to deploy it as rpms, debian packages, a tarball/ostree SDK sysroot, as a flatpak, or as a bootable filesystem image which can be flashed to hardware or booted in a VM.

Our view is that build instructions as structured metadata used to describe modules and how they integrate together is a valuable body of work on its own. As such we should be able to apply that same body of work reliably to a variety of tasks – the BuildStream approach aims to prove this view while also offering a clean and simple user experience.

BuildStream is written in Python 3, has fairly good test coverage at this stage and is quite well documented.

BuildStream works well right now but still lacks some important features. Expect some churn over the following months before we reach a stable release and become a viable alternative for developing GNOME on your laptop/desktop.


Note that for the time being the OSTree requirement may be too recent for many users running currently stable distros (e.g. debian Jessie). This is because we use the OSTree gobject introspection bindings which require a version from August 2016. Due to this hard requirement it made little sense to include special case support for older Python versions.

However with that said; if this transitional period is too painful, we may decide to lower the Python requirement and just use the OSTree command line interface instead.

Build Pipelines

The BuildStream design in a nutshell is to have one abstract core, which provides the mechanics for sandboxing build environments (currently using bubblewrap as our default sandbox), interpreting the YAML data model and caching/sharing the build results in an artifact cache (implemented with OSTree), plus an ecosystem of “Element” plugins which process filesystem data as inputs and outputs.
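In code, that split between an abstract core and element plugins might look roughly like this. This is a sketch of the concept only; the real BuildStream plugin API has different names and considerably more machinery:

```python
class Element:
    """Abstract element: consumes staged filesystem data, produces more."""

    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependencies = list(dependencies)

    def assemble(self, staged):
        # Plugins override this; the core handles sandboxing,
        # scheduling and artifact caching around it
        raise NotImplementedError

class ImportElement(Element):
    """An element which simply contributes its own files verbatim."""

    def __init__(self, name, files):
        super().__init__(name)
        self.files = dict(files)

    def assemble(self, staged):
        # Merge the imported files over whatever was staged
        output = dict(staged)
        output.update(self.files)
        return output

base = ImportElement("base.bst", {"/bin/busybox": b"..."})
```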

In a very abstract view, one can say that BuildStream is like GStreamer but its extensible set of element plugins operate on filesystem data instead of audio and video buffers.

This should allow for a virtually unlimited variety of pipelines, here are some sketches which attempt to illustrate the kinds of tasks we expect to accomplish using BuildStream.

Import a custom vendor tarball, build an updated graphics stack and BSP on top of that, and use a custom export element to deploy the build results as RPMS:

Import the base runtime ostree repository generated with Yocto, build the modules for the freedesktop-sdk-images repository on top of that runtime, and then deploy both Runtime and SDK from that base, while filtering out the irrelevant SDK specific bits from the Runtime deployment:

Import an arbitrary but deterministic SDK (not your host !) to bootstrap a compiler, C runtime and linux kernel, deploy a bootable filesystem image:

Build pipelines are modular and can be built recursively. So a separate project/pipeline can consume the same base system we just built and extend it with a graphics stack:

A working demo

What follows are some instructions to try out BuildStream in its early stages.

For this demo we chose to build a popular application (gedit) in the flatpak style, however this does not yet include an ostree export or generation of the metadata files which flatpak requires; the built gedit result cannot be run with flatpak without those steps but can be run in a `build-stream shell` environment.

# Installing BuildStream

# Before installing BuildStream you will need to first install
# Python >= 3.5, bubblewrap and OSTree >= v2016.8 as stated above.

# Create some arbitrary directory; don't use ~/buildstream because
# that's currently used by buildstream unless you override the
# configuration.
mkdir ~/testing
cd ~/testing
git clone

# There are a handful of ways to install a python setuptools
# package, we recommend for developer checkouts that you first
# install pip, and run the following command.
# This should install build-stream and its pythonic dependencies
# into your users local python environment without touching any
# system directories:
cd buildstream
pip install --user -e .

# Clone the demo project repository
cd ..
git clone
cd buildstream-tests

# Take a peek of the gedit.bst pipeline state (optional)
# This will tell us about all the dependencies in the pipeline,
# what their cache keys are and their local state (whether they
# are already cached or need to be built, or are waiting for a
# dependency to be built first).
build-stream show --deps all gedit.bst

# Now build gedit on top of a GNOME Platform & Sdk
build-stream build gedit.bst

# This will take some time and quite some disk space, building
# on SSD is highly recommended.
# Once the artifact cache sharing features are in place, this
# will take half the disk space it currently takes, in the majority
# of cases where BuildStream already has an artifact for the
# GNOME Platform and SDK bases.

# Ok, the build may have taken some time but I'm pretty sure it
# succeeded.
# Now we can launch a sandbox shell in an environment with the
# built gedit:
build-stream shell --scope run gedit.bst

# And launch gedit. Use the --standalone option to be sure we are
# running the gedit we just built, not a new window in the gedit
# installed on your host
gedit --standalone

Getting Involved

As you can see, we’re currently hosted from my user account on GitLab, so our next step is to sort out proper hosting for the project, including a mailing list, bug tracking and a place to publish our documentation.

For right now, the best place to reach us and talk about BuildStream is in the #buildstream channel on GNOME IRC.

If you’d like to play around with the source, a quick read into the HACKING file will provide some orientation for getting started, coding conventions, building documentation and running tests.


With that, I hope you’ve all enjoyed FOSDEM and the beer that it entails 🙂

Flatpak builds available on a variety of architectures

Following the recent work we’ve been doing at Codethink in cooperation with Endless, we have for a while now had the capability of building flatpak SDKs and apps for ARM architectures, and consequently also for 32bit Intel architectures.

Alex has been tying this together and setting up the Intel build machines, and as of this week, flatpak builds are available in a variety of arches and flavors.


The supported architectures are as follows

  • x86_64, the 64bit Intel architecture which is the only one we’ve been building until now
  • i386, this is the name we are using for 32bit Intel, this is only i386 in name but the builds are in fact tuned for the i586 instruction set
  • aarch64, speaks for itself, this is the 64bit ARM architecture
  • arm, like i386, this is a generic name chosen to indicate 32bit ARM; this build is tuned for ARMv7-A processors and will make use of modern features such as VFPv3 and the NEON SIMD unit. In other words, this will not run on older ARM architectures but should run well on modern ARM processors such as the Cortex-A7 featured in the Raspberry Pi 2.

Build Bots

The build bots are currently driven with this set of build scripts, which should be able to turn an Intel or ARM machine with a vanilla Ubuntu 16.04 or RHEL 7 installation into a flatpak build machine.

ARM and Intel builds run on a few distributed build machines and are then propagated for distribution.

The build machines also push notifications of build status to IRC, currently we have it setup so that only failed builds are announced in #flatpak on freenode, while the fully verbose build notifications are announced in #flatpak-builds also on freenode (so you are invited to lurk in #flatpak-builds if you would like to monitor how your favorite app or library is faring on various build architectures).


Many thanks to all who were involved in making this happen, thanks to Alex for being exceptionally responsive and helpful on IRC, thanks to Endless for sponsoring the development of these build services and ARM support, thanks to Codethink for providing the build machines for the flatpak ARM builds and a special thanks to Dave Page for setting up the ARM build server infrastructure and filling in the IT knowledge gap where I fall short (specifically with things networking related).

Endless and Codethink team up for GNOME on ARM

A couple of months ago Alberto Ruiz issued a Call to Arms here on planet GNOME. This was met with an influx of eager contributions including a wide variety of server grade ARM hardware, rack space and sponsorship to help make GNOME on ARM a reality.

Codethink and Endless are excited to announce their collaboration in this initiative and it’s my pleasure to share the details with you today.

Codethink has donated 8 cartridges dedicated to building GNOME things for ARM architectures in our Moonshot server. These cartridges are AppliedMicro™ X-Gene™ with 8 ARMv8 64-bit cores at 2.4Ghz, 64GB of DDR3 PC3L-12800 (1600 MHz) Memory and 120GB M.2 solid state storage.

Endless has also enlisted our services for the development and deployment of a Flatpak (formerly known as xdg-app) build farm to run on these machines. The goal of this project is to build and distribute both stable and bleeding edge versions of GNOME application bundles and SDKs on a continuous basis.

And we are almost there!

After one spontaneous hackfest and a long list of patches, I am happy to report that runtimes, SDKs and apps are building and running on both AArch64 and 32-bit ARMv7-A architectures. As a side effect of this effort, Flatpak SDKs and applications can now also be built for 32-bit Intel platforms (this may have already been possible, but not from an x86_64 build host).

The builds are already automated at this time, and the results will shortly be finding their way out to users.

In the interest of keeping everything repeatable, I have been maintaining a set of scripts which setup nightly builds on a build machine, which can be configured to build various stable/unstable branches of the SDK and app repositories. These are capable of building our 4 supported target architectures: x86_64, i386, aarch64 and arm.

Currently they are only well tested with vanilla installations of Ubuntu 16.04 and are also known to work on Debian Stretch, but it should be trivial to support some modern RPM based distros as well.


Stay tuned for further updates on GNOME’s newfound build farm, brought to you by Endless and Codethink!

Aboriginal YBD – An exploration in cross building

The last couple of months at Codethink have been an exploration into cross compiling, or rather, cross compiling without the hassle of cross compiling.

In brief, this post is about an experimental technique for cross building operating systems we’ve come up with, in which we use a virtual machine to run the builds, a cross compiler over distccd to do the heavy lifting and a virtfs 9p mount to share the build directory with the guest build slave.

Let’s start at the beginning

In a recent post, I showcased a build of GNOME from scratch. This was created using the ybd build tool to build GNOME from Baserock YAML definitions.

Once we had a working system, I was asked if I could repeat that for ARM. There was already a build story for building ARM with Baserock definitions, but getting off the ground to bootstrap it was difficult, and the whole system needed to be built inside an ARM emulator or on ARM hardware. So we started looking at improving the build story for cross compilation.

We examined a few approaches…

Full traditional cross compile

Some projects, such as Yocto or Buildroot, provide techniques for cross compiling an entire OS from scratch.

I did a writeup on the complications involved in cross building systems
in this email, but in summary:

  • The build process is complex: packages need to be compiled for both the $host and $target all the way up the stack, since modules tend to provide tooling which needs to run on the build host, usually used by other modules which depend on them (e.g. icu-config or pkg-config).
  • Building involves trickery: one needs to set up the build environment very specifically so that host tools are run in the build scripts of a given module, and this setup varies from module to module depending on the kind of build scripts they use.
  • The further up the stack you get, the more modules tend to expect a self hosting (or native) build environment. This means there is a lot of friction in maintaining something like Buildroot: in some cases it involves very strange autogen/configure incantations, and in worse cases downstream patches need to be maintained just to get things to work.
  • Sometimes you even encounter projects which compile C programs that are not distributed, but only used to generate header files and the like during the build, and often these programs are not compiled with $HOST_CC as they should be, but directly with $CC.

In any case, this was obviously not a viable option. If one wants to be able to build the bleeding edge on a regular basis, cross compiling all the way up the stack involves too much friction.

The scratchbox2 project

This was an avenue which showed promise indeed. The scratchbox2 project allows one to set up a build environment that is completely tweaked for optimal build performance, using qemu user mode emulation, and much, much more.

I took a look at the internals PDF document and, while I remain impressed, I just don’t think this is the right fit.

The opening statement of that PDF says:

Documenting a system as complex as Scratchbox 2 is not an easy task.

And this is no understatement by any means. Scratchbox’s internal design is extremely difficult to grasp, there are many moving parts and details to this build environment; all of which, at least at face value, I perceive to be potential points of failure.

Scratchbox 2 inserts itself in between the qemu user mode emulator and the host operating system and makes decisions, based on configuration data the user needs to provide, about what tooling can be used during a build, and what paths are really going to be accessed.

In short, scratchbox 2 will sometimes call host tools and run them directly without emulation, and sometimes it will use target tools in user mode emulation. These modes are managed by a virtual filesystem “view”, and the two execution modes see the underlying filesystem in different ways. This way you basically get the fastest solution possible: you run a host cross compiler to build binaries for the target at build time; you run host built coreutils, shells, perl and such at configure time; and if you are well configured, you presumably only ever run target binaries in user emulation when they are tools which were built during your build and need to run during the build.

Scratchbox is what you get when you try to get the maximum performance out of a virtualized native build environment. And it is a remarkable feat, but I have reservations about depending on something as complex as this:

  • Will I be able to easily repeat the same build I did today 10 years from now and obtain the same result?
  • If something ever goes wrong, will it always be possible to find an engineer who is capable of fixing it?
  • When creating entirely new builds, how much effort is going to go into setting up and configuring the scratchbox environment?

But we’re getting closer: scratchbox2 provides a virtualized environment so that, when compiling, your upstream packages believe they are in a native environment, removing that friction with upstreams and allowing one to upgrade modules without maintaining obscure build instructions and downstream patches.

The Aboriginal Linux project

This is the approach we took as a starting point; it’s not the fastest build environment, but it has some values which align quite nicely with our goals.

What Aboriginal Linux provides is mostly just a handful of shell scripts which allow one to bootstrap for a given architecture quite elegantly.

When running the Aboriginal build itself, you just have to tell it what the host and target architectures are, and after the build completes, you end up with the following ingredients:

A statically linked, relocatable $host -> $target cross compiler

This is interesting: you get a gcc which you can untar on any machine of the given $host architecture, and it will compile for $target.

A minimal system image to run on the target

This includes:

    • A minimal kernel configured for that arch
    • Busybox / Toybox for your basic utilities
    • Bash for your basic shell utilities
    • A native compiler for the target arch
    • distcc
    • An init script to boot the system

A set of scripts to launch your kernel & rootfs under qemu

These scripts are generated for your specific target arch so they “just work”, and they set up the guest so that distcc is plugged into the cross compiler you just built.

A couple of nice things about Aboriginal

Minimal build requirements

Running the Aboriginal scripts and getting a build requires:

ar, as, nm, ranlib, ld, cc, gcc, g++, objdump, make and sh

The build starts out by setting up an environment which has access to these, and only these binaries.

This highly controlled early stage build environment is attractive to me because I think the chances are very high that in 10 years I can launch the same build script and get a working result; this is to be at the very base of our stack.

Elegant configuration for bootstrapping targets

Supporting a target architecture in Aboriginal Linux is a bit tricky, but once it’s done it should remain reliable and repeatable.

Aboriginal keeps some target configuration files which are sourced at various stages of the build in order to determine:

  • Extra compiler flags for building binutils & gcc and libc
  • Extra configuration options for the kernel build
  • Magical obscure qemu incantation for bringing up the OS in qemu

Getting a compiler, kernel and emulator tuple to work is a delicate dance of matching up these configurations exactly. Once that is done however, it should basically keep working forever.

The adventure begins

When I started testing things, I first just wanted a proof of concept: let’s see if we can build our stack from within the Aboriginal Linux emulator.

In my first attempts, I built all the dependencies I needed to run python and git, which are basically the base requirements for running the ybd build tool. This was pretty smooth sailing, except that I had to relocate everything I was building into a home directory (read-only root). By the time I started to build baserock’s definitions, though, I hit a wall. I, quite innocently, wanted to just go ahead and build glibc with Aboriginal’s compiler, thinking: no big deal, right? Boy was I wrong.

The first problem was that glibc seems to care a great deal about which compiler is used to build it, and the last GPLv2 version of gcc was not going to cut it. Surprisingly, the errors I encountered were not about the compiler lacking support for a recent C standard or such; they were explicitly about gcc itself: glibc has a deep longing desire to be compiled with gcc, and a moderately recent version of it at that.

Aboriginal Linux had frozen at the very latest releases (and even git commits) of packages which were still available under GPLv2. It took some convincing but since that toolchain is getting old, Rob Landley agreed that it would be desirable, in a transitional period until llvm is ready, to have an optional build mode allowing one to build Aboriginal Linux using the newer GPLv3 contaminated toolchain.

So, I set myself to work and, hoping that it would just cost me a weekend (wrong again), cooked up a branch which supports an option to compile Aboriginal with GCC 5.3 and binutils 2.25.1. A report of the changes this branch introduced can be found on the aboriginal mailing list.

In this time I became intimately acquainted with building compilers and cross compilers. As I mentioned, Aboriginal has a very neat build model which bootstraps everything in stages.


So essentially you choose the host arch and target arch (both of which need to be supported, i.e. have a description file like this one in the aboriginal sources), and then the build runs in stages, sourcing the description files for the given architecture depending on what it’s building along the way.
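The staged flow just described can be sketched roughly like this. The stage names below follow the scripts found in the Aboriginal sources, but this is an illustration only; the real entry point is a driver script that handles far more detail:

```shell
# Illustrative only: the real Aboriginal driver wraps these stages and
# handles downloads, sanity checks and packaging along the way.
HOST=i686
TARGET=armv5l
STAGES="download host-tools simple-cross-compiler-$TARGET \
cross-compiler-$HOST cross-compiler-$TARGET \
native-compiler-$TARGET system-image-$TARGET"
for stage in $STAGES; do
  echo "would run stage: $stage"
done
```

Each stage sources the description file for the architecture it is currently building for, which is how one set of scripts can produce the cross compilers, the native compiler and the system image in turn.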

Because I spent considerable time on this, and am still sufficiently fascinated, I’m going to give a break down of the process here.

1.) Build host tooling

First we create a host directory where we will install the tools we want to use during the build. We intentionally symlink to only a few minimal host tools that we require to be on your system: your host compilers, linkers, a functional shell and a make implementation.

Then, we build toybox, busybox, e2fsprogs and distcc, basically any tools which we actually have a chance of running on your host.

2.) Build a stage 1 cross compiler for ${target}

This is the compiler we’re going to use to build everything that is going to run on your target qemu guest image, including of course the native compiler.

In this step we build gcc, musl libc and then gcc again, we build gcc again in order to complete the runtime and get libstdc++.

Previous versions of Aboriginal did not require this second build of gcc, but since GCC folks decided to start using C++, we need a C++ capable cross compiler to build the native compiler.

3.) Build a stage 1 cross compiler for ${host}

This is the first step towards building your statically linked and relocatable cross compiler, which you’ll want to be plugging into distcc and using on any machine of the given ${host} arch.

This step is run in exactly the same way as the previous step, except that we are building a cross compiler from your real host -> ${host}

4.) Build the full ${host} -> ${target} cross compiler

In this stage we will use the cross compiler we built in the previous step, in order to build a cross compiler which runs on ${host} and produces code for ${target}. Neither of these have to be the host arch you are actually running on, you could be building a cross compiler for arm machines to build x86 code on a mips, if you were that sadistic.

For this second stage compiler the setup is a bit different: we start out by compiling musl libc right away and build gcc just once, since we already have a full compiler and we already have musl libc compiled for the target ${host}.

Note: This is actually called a “Canadian Cross”, and no, I was also surprised to find out that it is not named after a tattoo one gets when joining a fringe religious group in Canada.
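In autoconf terms, a Canadian Cross is simply a build where all three machine triples differ. For the sadistic example above (built on a mips, running on arm, emitting x86 code), the gcc configure line would look roughly like this; the triples are illustrative:

```shell
# build  = the machine performing this compile
# host   = the machine the resulting compiler will run on
# target = the machine the resulting compiler emits code for
./configure --build=mips-unknown-linux-gnu \
            --host=arm-unknown-linux-gnueabi \
            --target=x86_64-unknown-linux-gnu
```

When all three differ, the build machinery must already have a build -> host cross compiler available, which is exactly what step 3 produced.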

5.) Build the native compiler

Now, in exactly the same way we build the Canadian Cross, we’re going to build a native compiler using the stage 1 cross compiler for ${target}.

This compiler is cross compiled to run on target, and configured to produce code for the same target, so it is a cross compiled native compiler.

6.) Build the kernel and system image

Finally, we use our stage 1 cross compiler again to compile the kernel which will run in qemu, and the root filesystem, which includes toybox, busybox, make, bash and distcc.

We wrap this up with a few scripts: an init script to run on the resulting guest image, and a generated script which just “knows” how to properly bring up this guest.

A word on the ccwrap compiler frontend

Before moving on, I should say a word or two about the compiler frontend ccwrap.c.

I mentioned before that the cross compiler Aboriginal creates is statically linked and relocatable. This is achieved with the said frontend to the compiler tooling, whose purpose in life is to fight, tooth and nail, GCC’s desire to hard code itself into the location you’ve compiled it to.

ccwrap counters gcc’s tactics by sitting in place of gcc, cc, g++, c++ and cpp, figuring out the real location of the standard includes and linking stubs, and then calling into the original gcc binaries with a modified set of command line arguments: adding -nostdinc and -nostdlib where necessary, and providing the include paths and stubs on the command line.

This is a violent process, and gcc puts up a good fight, but the result is that the cross compiler you generate can be untarred anywhere on any host of the correct ${host} architecture, and it will just run and create binaries for ${target}, building and linking against musl libc by default (more on libc further down).
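The essence of the trick can be sketched in a few lines. This is a simplified Python illustration, not the actual frontend (which is written in C); the binary name "rawcc" and the directory layout are assumptions made up for the example:

```python
import os

# Simplified sketch of the ccwrap idea: locate the real toolchain relative
# to wherever the wrapper was untarred, then rewrite the argument list so
# gcc never consults the paths that were hard coded at its build time.
def wrap_args(wrapper_path, argv):
    # The toolchain root is found relative to the wrapper binary itself,
    # which is what makes the tarball relocatable.
    topdir = os.path.dirname(os.path.dirname(os.path.abspath(wrapper_path)))
    return (
        [os.path.join(topdir, "tools", "rawcc")]        # the real gcc binary
        + ["-nostdinc", "-isystem", os.path.join(topdir, "include")]
        + ["-nostdlib", os.path.join(topdir, "lib", "crt1.o")]
        + argv[1:]                                      # the user's arguments
        + ["-L" + os.path.join(topdir, "lib"), "-lc"]   # our libc, not gcc's
    )

args = wrap_args("/opt/cross/bin/gcc", ["gcc", "-O2", "-c", "hello.c"])
```

The real frontend only adds the -nostdlib machinery when actually linking; here it is shown unconditionally to keep the sketch short.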


To port all of this to work with new GCC and binutils versions, I needed to find the right patches for gcc and binutils; these are mostly upstream already in unreleased versions of both. Then I had to reconstruct the building of the stage 1 compilers so that they build with C++ support, and finally iron out the remaining kinks.

This part was all pretty fun to wrap my head around, hope it was also enjoyable to read about 🙂

The travel from musl libc to glibc

So after all that, we have an Aboriginal Linux setup which is capable of building glibc, but the ride is not over! When building a whole operating system, there is a small chance that someone out there used C++, and if we’re going to distribute a glibc based system, we’re probably also going to want a libstdc++ that is actually linked against that glibc.

Well, that was what I was thinking; in fact, it runs deeper than this. gcc itself provides libgcc.a and its start/end stubs which complement the host libc’s start/end stubs, but it also provides a shared library and other runtime pieces which need to be linked against the host libc.

In any case, at this stage I was a bit worried that the musl-linked gcc compiler I had might not be capable of building and linking programs against the new glibc. Of course it should work: this is just a standards compliant compiler on one hand and a standard C library on the other. But seeing that the gcc / glibc entanglement runs so deep, we had to be sure.

After some time building and rebuilding glibc and gcc on a puny armv5l qemu emulator, I found the magic concoction which makes the build pass. For glibc the build pretty much runs smoothly; you first have to install the appropriate linux kernel includes and tell glibc that --with-headers=/usr/include, lest it tread off the beaten path and go searching obscure host-triple prefixed paths all on its own.

To build the gcc runtimes (so that you get the desired libstdc++), you actually have to build gcc as if you were building a cross compiler.

In the armv5l transition from musl libc to GNU libc, you would tell it that:
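The exact invocation did not survive in this copy of the post; the following is a reconstruction based on the triples discussed here, and the surrounding flags are assumptions rather than the original command line:

```shell
# Reconstruction, not the post's original flags: the host triple keeps the
# -musleabi suffix (the tools are built with the musl-hardwired compiler),
# while the target triple's -gnueabi suffix points the runtimes at glibc.
./configure --build=armv5l-unknown-linux-musleabi \
            --host=armv5l-unknown-linux-musleabi \
            --target=armv5l-unknown-linux-gnueabi \
            --prefix=/usr --enable-languages=c,c++
```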


With this setup, it will build all the host tooling using the existing musl libc which our existing compiler is hardwired to use, but when building the runtimes, it will look into ${prefix} and find the glibc we previously compiled, linking the gcc runtimes against the fresh glibc.

And yeah, it’s actually important to specify ‘-musleabi’ and ‘-gnueabi’ in the host triples specifically; gcc’s build scripts will parse the triples and behave differently depending on what suffix you give them.

In my setup, I did not want to use the new compiler, just the runtimes. So I did a custom install of the gcc runtimes in precisely the way that the aboriginal frontend expects to find them.

At this stage, we can now use environment variables to tell the Aboriginal compiler frontend how to behave, telling it the runtime linker we want to use and where it should look for its start stubs, end stubs and such.

Once we have installed glibc and new gcc runtimes into a chroot staging area on the target emulator, we can now set the following env vars:
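The variable list itself is missing from this copy of the post. The names below are purely hypothetical stand-ins; only the intent, pointing the frontend at the glibc runtime linker and the /usr search prefix, comes from the surrounding text:

```shell
# Hypothetical variable names -- the real ones belong to the modified
# ccwrap frontend and did not survive in this copy of the post.
export CCWRAP_DYNAMIC_LINKER=/lib/ld-linux.so.3
export CCWRAP_RUNTIME_PREFIX=/usr
```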


And gcc will look for standard headers and library paths in /usr and use the dynamic linker installed by glibc.

Now we can compile C and C++ programs, against glibc and glibc based libstdc++, using our nifty compiler which was built against, and statically linked, to musl libc.

What have we done with this?

The next step was to integrate all of this into the YBD build tool and use the Aboriginal compilers and emulator image to virtually cross-compile baserock definitions from whatever host you are running.

What we have now is a build model composed of a few cooperating components.

I’ll just take a bit more space here to give a run down of what each component is doing.

YBD Builder

The YBD builder tool remains mostly unchanged in my downstream branch.

Mostly it differs inasmuch as it no longer performs the builds in a chroot sandbox, but instead marshals those builds out to slaved Aboriginal guests running in qemu emulators (plural, of course, because we want to parallelize the builds as much as dependencies and host resources allow).

What YBD does is basically:

  • Clones the sources to be built from git, all sources are normalized into git repositories hosted on the trove.
  • Stages dependencies, i.e. results of previous builds into a sysroot in a temporary directory for a build, this is done in the virtfs staging grounds.
  • Stages the git repository into the build directory
  • Tells a running emulator that it’s time to build
  • Waits for the result
  • If successful, collects the build results and creates an “artifact” (tarball).
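The sequence above can be sketched as a simple dispatch loop. Every function name here is a placeholder passed in by the caller, not YBD's real API:

```python
# Sketch of the dispatch loop described above.  All callables are
# placeholders for illustration; none of these names exist in YBD.
def build_component(component, clone, stage_deps, stage_source,
                    dispatch_build, collect_artifact):
    clone(component)                            # fetch sources from git
    sysroot = stage_deps(component)             # stage previous results (virtfs)
    stage_source(component, sysroot)            # stage the git checkout
    if not dispatch_build(component, sysroot):  # tell an emulator to build, wait
        raise RuntimeError("build failed: %s" % component)
    return collect_artifact(sysroot)            # tarball of the build results

# Stubbed usage, to show the flow end to end
artifact = build_component(
    "glibc",
    clone=lambda c: None,
    stage_deps=lambda c: "/tmp/staging/glibc",
    stage_source=lambda c, s: None,
    dispatch_build=lambda c, s: True,
    collect_artifact=lambda s: s + ".tar.gz",
)
```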

Also, of course YBD parses the YAML definitions format and constructs and navigates a dependency graph.

IPC Interpreter

This component currently lives in the aboriginal controller repository, but should eventually be migrated into the YBD build tool itself as it makes little sense to have this many moving parts.

This is essentially some host side shell scripts and some guest side shell scripts. The guest is launched in a specific way so as to run in the background and listen for commands over the virtio serial port (this IPC needs to be fixed; it’s a shaky thing and should probably be done over the actual network instead of the serial ports).

Build Sandbox

The build sandbox is just your basic chroot calling shell script, except that it is a bit peculiar in the way it does things.

  • It conditionally stages toybox/busybox if and only if tools are not already found in the staging area
  • It stages statically linked binaries only and is perfectly operational in the absence of any libc

Well, not all that peculiar I guess.

Virtfs 9p shared directory

Here is another, really fun part of this experimental build process.

Qemu has support for exporting a shared directory which can be accessed by the guest kernel if it is compiled with:
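The kernel config snippet did not survive in this copy; the standard options for virtio 9p support are:

```
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_9P_FS=y
```

On the host side the export is declared with something like `-virtfs local,path=/path/to/staging,mount_tag=host0,security_model=mapped-file,id=host0` on the qemu command line, and the guest mounts it with `mount -t 9p -o trans=virtio host0 /mnt`.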


When a guest mounts the exported directory with -t 9p, qemu will basically just perform the reads and writes on the guest’s behalf.

More interestingly, qemu provides a few security models, the most basic being passthrough, which just reads and writes using the credentials of the user who launched qemu. In any case, qemu can only access the underlying filesystem using the credentials it has; however, qemu does provide a security model called “mapped” (or “mapped-file”, which we ended up using).

Firstly, the shared directory is of course practical because it allows the host running the YBD tool to stage things in the same directory where they will be built by the emulator, but things become interesting when the emulator is installing files under specific uids/gids, or creating device files which should be shipped in the resulting OS: basically anything that normally requires root.

Using the “mapped-file” security model allows the guest emulator to believe that it, as root, can manipulate the 9p mounted filesystem as root for all intents and purposes. On the actual underlying filesystem that qemu is writing to, everything will be created in mode 0600 and belong to the user running qemu, but the extra metadata about the files qemu creates goes into corresponding files in a .virtfs_metadata directory.

The solution we came up with (I had much help in this area from Rob Taylor) was to write a small translation layer which allows us to also interact with the virtfs staging directory on the host side. What this translation layer does is basically:

  • Collect build results and create “real” tarballs from these results. The regular user is not allowed to create device files or files which belong to root, but it is at least allowed to own a tarball containing such files
  • The reverse of the previous: stage the contents of a real tarball into a virtfs staging ground, so that files are extracted under the user’s credentials but the correct virtfs metadata is created, ensuring the guest (build slave) will see the right thing
  • Stage files and directories into the virtfs staging grounds. This part is required for extracted git repositories which we intend to build.
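The host side of this can be sketched in Python. The metadata format assumed here (virtfs.uid=... lines in files under .virtfs_metadata/) is how qemu's mapped-file model stores attributes, but treat the details as assumptions; this is an illustration, not an excerpt from the actual tool:

```python
import os

# Read the ownership/mode the guest believes it gave a file, so that a
# host side tool can emit a tarball whose members carry those attributes
# instead of the on-disk mode-0600, user-owned reality.
def guest_attrs(staging_dir, name):
    attrs = {}
    meta_path = os.path.join(staging_dir, ".virtfs_metadata", name)
    with open(meta_path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key in ("virtfs.uid", "virtfs.gid", "virtfs.mode"):
                attrs[key.split(".", 1)[1]] = int(value)
    return attrs
```

A tar writer would then override each member's uid/gid/mode with these values rather than the ones stat() reports, which is how a regular user ends up owning a tarball full of root-owned files and device nodes.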

This way, the whole operating system image can potentially be built from scratch by a regular user on the host.


At this still unfinished stage, I have built over 300 of the ~420 components which go into the basic GNOME system using this method of compilation, building for armv5l on my x86_64 laptop. The only build instructions which needed to be changed in order to build these were the base compiler and glibc builds, plus a couple of minor changes to get some packages to build on armv5l.

Most of the kinks have been ironed out. I still have to build over 100 high level components and then deploy and test the resulting image, but the higher up the stack you get, the fewer problems you tend to encounter; I presume we’re through the worst of it.

Performance wise, this will never be as fast as scratchbox, though it’s possible we explore qemu’s user mode emulation at some point. The problem with performance is that the more you optimize here, the more nasty hacks you introduce (if, say, you want to run host perl while building in the emulator), and the less comprehensible a build system you end up with. We will try to keep a nice balance here and prioritize repeatability and the convenience this can offer in terms of bootstrapping an OS with baserock build instructions on new architectures.

I can say, however, that regarding performance, libtool is probably next on the chopping block. It serves basically no purpose when building on linux, and building libtool objects costs about 8 to 10 times as much as simply compiling a regular object over distcc.

I will have to put this work down for a while as I have other work landing on my plate which requires attention, so I hope there will be an army of developers to continue this work in my absence 🙂

If you would like to try and repeat this work, a HOWTO can be found at the bottom of this email. Note that in that email, we had not yet tried the virtfs mapped security model which solves the problem of building as a regular user, however the instructions to get a build off the ground are still valid.

For now I see this as an interesting research project, we have tried some pretty new and interesting things, I am curious to see where this will lead us.

And special thanks are owed to Rob Landley for giving me pointers along the way while navigating the Aboriginal build system, and for being generally entertaining in #toybox on freenode. Also thanks to Rob Taylor for digging into the qemu sources and coming up with the wild idea of manhandling the virtfs mapped metadata.