Gaming with GThree

The last couple of weeks I’ve been on holiday, and I spent some of that time hacking on gthree. Gthree is a port of three.js, and a good way to get some testing of it is to port a three.js app. Benjamin pointed out HexGL, a WebGL racing game similar to F-Zero.

This game uses a bunch of cool features like shaders, effects, sprites, particles, etc, so it was a good target. I had to add a bunch of features to gthree and fix some bugs, but it’s now at a state where it looks pretty cool as a demo. However, it needs more work to be playable as a game.

Check out this screenshot:

Or this (lower resolution) video:

If you’re interested in playing with it, the code is on github. It needs latest git versions of graphene and gthree to build.

I hope to have a playable version of this for GUADEC. See you there!

Gthree update: It moves!

Recently I have been backporting some missing three.js features and fixing some bugs. In particular, gthree supports:

  • An animation system based on keyframes and interpolation.
  • Skinning, where a model can have a skeleton and modifying a bone affects the whole model.
  • Support in the glTF loader for the above.

This is pretty cool as it enables us to easily load and animate character models. Check out this video:

Introducing flat-manager

A long time ago I wrote a blog post about how to maintain a Flatpak repository.

It is still a nice, mostly up-to-date description of how Flatpak repositories work. However, it doesn’t really have a great answer to the issue that post calls “syncing updates”. In other words, it is really more about how to maintain a repository on a single machine.

In practice, at least at a larger scale (such as Flathub), you don’t want to do all the work on a single machine like this. Instead you have an entire build-system where the repository is the last piece.

Enter flat-manager

To support this I’ve been working on a side project called flat-manager. It is a service written in Rust that manages Flatpak repositories. Recently we migrated Flathub to use it, and it seems to work quite well.

At its core, flat-manager serves and maintains a set of repos, and has an API that lets you push updates to it from your build-system. However, the way it is set up is a bit more complex, which allows some interesting features.

Core concept: a build

When updating an app, the first thing you do is create a new build, which just allocates an id that you use in later operations. Then you can upload one or more build results to this id.

This separation of the build creation and the upload is very powerful, because it allows you to upload the app in multiple operations, potentially from multiple sources. For example, in the Flathub build-system each architecture is built on a separate machine. Before flat-manager we had to collect all the separate builds on one machine before uploading to the repo. In the new system each build machine uploads directly to the repo with no middle-man.
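
For illustration, the flow with the flat-manager-client script that ships with flat-manager looks roughly like this; the repo URL is a placeholder, and the exact arguments may differ between versions:

# Create a new build against the "stable" repo; this prints the URL of the build
$ flat-manager-client create https://repo.example.com stable
# Upload the result of a local flatpak build (an OSTree repo) to that build,
# using the build URL printed by the create command
$ flat-manager-client push "$BUILD_URL" ./local-repo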

Committing or purging

An important idea here is that the new build is not finished until it has been committed. The central build-system waits until all the builders report success before committing the build. If any of the builds fail, we purge the build instead, making it as if the build never happened. This means we never expose partially successful builds to users.
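
Continuing the hedged example above, the central build-system then finishes the build one way or the other:

# All builders reported success: seal the build
$ flat-manager-client commit "$BUILD_URL"
# ...or, if any builder failed, drop it as if it never happened
$ flat-manager-client purge "$BUILD_URL"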

Once a build is committed, flat-manager creates a separate repository containing only the new build. This allows you to use Flatpak to test the build before making it available to users.
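
For example, the per-build repo can be added as a temporary flatpak remote and installed from directly (the URL below only illustrates the kind of path flat-manager serves; the real one is printed by the build-system):

$ flatpak remote-add --user --no-gpg-verify test-build https://repo.example.com/build-repo/123
$ flatpak install --user test-build org.example.App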

This makes builds useful even for builds that were never meant to be generally available. Flathub uses this for test builds: if you make a pull request against an app, it is automatically built and a comment is added to the pull request with the build results and a link to the repo where you can test it.

Publishing

Once you are satisfied with the new build you can trigger a publish operation, which will import the build into the main repository and do all the required operations, like:

  • Sign builds with GPG
  • Generate static deltas for efficient updates
  • Update the appstream data and screenshots for the repo
  • Generate flatpakref files for easy installation of apps
  • Update the summary file
  • Call out to scripts that let you do local customization

The publish operation is actually split into two steps: first it imports the build result into the repo, and then it queues a separate job to do all the updates needed for the repo. This way, if multiple builds are published at around the same time, the update work can be shared. This saves time on the server, but it also means fewer updates to the metadata, which means less churn for users.
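
With the same hedged client example as before, publishing is a single call, and the repo update job is queued behind it:

$ flat-manager-client publish "$BUILD_URL"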

You can use whatever policy you want for how and when to publish builds. Flathub lets individual maintainers choose, but by default successful builds are published after 3 hours.

Delta generation

The traditional way to generate static deltas is to run flatpak build-update-repo --generate-static-deltas. However, this is a very computationally expensive operation that you might not want to run on your main repository server. It’s also not very flexible in which deltas it generates.
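
For reference, that traditional approach is a single (slow) command run against the repository directory:

$ flatpak build-update-repo --generate-static-deltas /path/to/repo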

To minimize the server load flat-manager allows external workers that generate the deltas on different machines. You can run as many of these as you want and the deltas will be automatically distributed to them. This is optional, and if no workers connect the deltas will be generated locally.

flat-manager also has configuration options for which deltas should be generated. This allows you to avoid generating unnecessary deltas and to add extra levels of deltas where needed. For example, Flathub no longer generates deltas for sources and debug refs, but we have instead added multiple levels of deltas for runtimes, allowing you to go efficiently to the current version from either one or two versions ago.

Subsetting tokens

flat-manager uses JSON Web Tokens to authenticate API clients. This means you can assign different permissions to different clients. Flathub uses this to give minimal permissions to the build machines: the tokens they get only allow uploads to the specific build they are currently handling.

This also allows you to hand out access to parts of the repository namespace. For instance, the Gnome project has a custom token that allows them to upload anything in the org.gnome.Platform namespace in Flathub. This way Gnome can control the build of their runtime and upload a new version whenever they want, but they can’t (accidentally or deliberately) modify any other apps.
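
As a purely illustrative sketch, a client with a broad token could ask the server for a narrower one. Note that the endpoint name and JSON fields below are assumptions about flat-manager’s API rather than verified documentation:

# Hypothetical request for a subset token limited to uploading to build 123
$ curl -H "Authorization: Bearer $REPO_TOKEN" \
       -H "Content-Type: application/json" \
       -d '{"sub": "build/123", "scope": ["upload"]}' \
       https://repo.example.com/api/v1/token_subset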

Rust

I need to mention Rust here too. This is my first real experience with using Rust, and I’m very impressed by it. In particular, I appreciate the sense of trust I have in the code once I’ve gotten it past the compiler. The compiler caught a lot of issues, and once things built I saw very few bugs at runtime.

It can sometimes be a lot of work to express the code in a way that Rust accepts, which makes it not an ideal language for sketching out ideas. But for production code it really excels, and I can heartily recommend it!

Future work

Most of the initial list of features for flat-manager is now implemented, so I don’t expect it to see a lot of work in the near future.

However, there is one more feature that I want to see: the ability to (automatically) create subset versions of the repository. In particular, we want to produce a version of Flathub containing only free software.

I have the initial plans for how this will work, but it is currently blocking on some work inside OSTree itself. I hope this will happen soon though.

Nvidia drivers in Fedora Silverblue

UPDATE:

The updated driver packages are now in the repos, so you don’t need the specially built rpm. Running rpm-ostree install kmod-nvidia xorg-x11-drv-nvidia is enough. If you installed the custom build, you need to uninstall it, as it can cause upgrade issues.

I really like how Fedora Silverblue combines the best of atomic, image-based updates and local tweaking with its package layering idea.

However, one major issue many people have had with it is support for the NVIDIA drivers. Since they are not free software, they can’t be shipped as part of the image, so one would imagine that package layering is a good way to install them. In theory this works, but unfortunately it often runs into issues, because frequent kernel updates mean there is often no pre-built nvidia module for your particular kernel/driver combination.

In a normal Fedora installation this is handled by something called akmods. This is a system where the kernel modules ship as sources which get automatically rebuilt on the target system itself when a new kernel is installed.

Unfortunately this doesn’t quite work on Silverblue, because the system image is immutable. So, I’ve recently been working on making akmods work in Silverblue. The approach I’ve taken is to have the modules built during the rpm-ostree update command (in the %post script) and have the output integrated into the newly constructed image.

Last week the final work landed in the akmods and kmodtool packages (currently available in updates-testing), which means that anyone can easily experiment with akmods, including the nvidia drivers.

Preparing the system

First we need the latest of everything:

$ sudo rpm-ostree update

The required akmods packages are in updates-testing at the moment, so we’ll enable that for now:

$ sudo vi /etc/yum.repos.d/fedora-updates-testing.repo
... change enabled=0 to enabled=1 ...

Then we add the RPM Fusion repositories:

$ sudo rpm-ostree install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-29.noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-29.noarch.rpm

At this point you need to reboot into the new ostree image to enable installation from the new repositories.

$ systemctl reboot

Installing the driver

The akmod-nvidia package currently in RPM Fusion is not built against the new kmodtool, so it will not work until it is rebuilt. This is a temporary issue, but in the meantime I have built a new version we can use.

To install it and the driver itself, we do:

$ sudo rpm-ostree install http://people.redhat.com/alexl/akmod-nvidia-418.43-1.1rebuild.fc29.x86_64.rpm xorg-x11-drv-nvidia

Once the driver in RPM Fusion is rebuilt, the custom rpm will no longer be necessary.

We also need to blacklist the built-in nouveau driver to avoid driver conflicts:

$ sudo rpm-ostree kargs --append=rd.driver.blacklist=nouveau --append=modprobe.blacklist=nouveau --append=nvidia-drm.modeset=1

Now you’re ready to boot into your fancy new Silverblue nvidia experience:

$ systemctl reboot

What about Fedora 30/Rawhide?

All the changes necessary for this to work have landed, but there is no Fedora 30 Silverblue image yet (only a rawhide one), and the rawhide kernel is built with mutex debugging which is not compatible with the nvidia driver.

However, the second we have a Fedora 30 Silverblue image with a non-debug kernel the above should work there too.

Changes in Flathub land

The last month or so we’ve been working in the background on a major update to the Flathub infrastructure. This has been available for testing for a while, but this week we finally enabled it on the live system. There are some pretty cool internal changes, including a new repo manager microservice written in Rust. Later blog posts will talk about some of the technical details, but for now I’ll just talk about the user-visible changes.

Power to the maintainers!

Flathub uses buildbot to manage the builds, and we have updated and customized the UI a bit to be nicer for maintainers. For example, we now have a page listing all the apps ever built, with links to per-app pages showing the builds of that app.

We also integrated GitHub authentication, so maintainers of individual applications automatically have the authority to do operations on their own apps and builds. For example, the home and per-app pages have buttons that let you start builds, which anyone with write permission to the corresponding GitHub repository can use. Similarly, they can cancel or retry builds of their own apps. Previously you had to ask a Flathub administrator to restart or cancel a build, but no more!

New publish workflow

There has also been a major change in the workflow for builds. It used to be the case that a successful build was immediately imported into the repository and then available to users. Now, instead, a successful build is made available for installation in a test repository. The build system will display a link to it so that you can easily install and test the build results. When you’re satisfied that the build is OK, you can manually push a button to export it to the public repository.

If you don’t manually publish the build, it will be automatically published after 24 hours by default, though this is configurable by the app maintainer. See the wiki for details.

Testing the test builds

Test builds used to only verify that the app built, but with the new system they get built into test repositories just like regular builds. This means you can actually install and test the builds, for example from a pull request against your application. Such test repos stay around for 5 days, or until you explicitly delete them in the build web UI.

Test builds are also more useful now due to the permission work, as developers can easily create or cancel them from the web UI, or by using the “bot, build” command in a GitHub issue, without needing help from the Flathub admins.

Also, test builds started from a GitHub issue get nice comments pointing to the test build and the build result. Here is an example of a pull request with automatically built tests showing how this looks.

We now automatically queue test builds for all new PRs, although such builds have lower priority than regular builds (for resource reasons) and can take a while to start.

Publish beta releases!

In addition to the existing stable repository, Flathub has added a repository for beta builds. Exactly if and how this is used is up to each individual application maintainer, but the goal is to give developers a way to get early releases of new stable versions into the hands of regular users.

This isn’t meant to be used for nightly builds, but for releases that have had some level of testing and are expected to mostly work and be usable by non-developer end users.

The way this works is that, for each GitHub repository, the master branch builds into the stable repository (with the flatpak branch name “stable”), while the beta git branch builds into the beta repository (with the flatpak branch name “beta”).

As a user, the beta channel looks like a separate remote. First you configure it as a remote:

$ flatpak remote-add flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo

And then you can install any apps from it:

$ flatpak install --user flathub-beta org.godotengine.Godot

Alternatively, you can use a flatpakref file, which is generated for each app:

$ flatpak install https://flathub.org/beta-repo/appstream/org.godotengine.Godot.flatpakref

The above Godot example is the latest beta of Godot 3.1, whereas the stable repo still contains 3.0. You can see how this beta build is set up on GitHub.

If you install both the beta and the stable version of an app, they will be installed in parallel. However, only one will be shown in the menus. You can switch which one is currently shown like this:

$ flatpak make-current org.godotengine.Godot [beta|stable]

But from the command line you can always start any installed version explicitly, like this:

$ flatpak run --branch=beta org.godotengine.Godot
or
$ flatpak run org.godotengine.Godot//beta

Now, go build some betas!

Moving away from the 1.6 freedesktop runtime

A flatpak runtime contains the basic dependencies that an application needs. It is shared between applications so that application authors don’t have to bother with complicated low-level dependencies, and also so that those dependencies can be shared and receive shared updates.

Most flatpaks these days use the freedesktop runtime or one of its derivatives (like the Gnome and KDE runtimes). Historically, these have been using the 1.6 version of the freedesktop runtime, which is based on Yocto.

The 1.6 runtime served its purpose of kickstarting flatpak and flathub well, but it is getting quite long in the tooth. We still fix security issues in it now and then, but it has not seen a lot of maintenance recently. Additionally, not a lot of people know enough Yocto to work on it, so we were never able to build a larger community around it.

However, earlier this summer a complete reimplementation, version 18.08, was announced, and starting with version 3.30 the Gnome runtime is now based on it as well, with a KDE version in the works. This runtime is based on BuildStream, making it much easier to work with, which has resulted in a much larger team working on it. Partly this is due to the awesome fact that Codethink has several people paid to work on it, but there is also a lot of community support.

The result is a better supported, easier to maintain runtime with more modern content. What we need to do now is to phase out the old runtime and start using the new one in apps.

So, this is a call to action!

If you maintain a flatpak application, especially on flathub, please try to move to a runtime based on 18.08, and if you run into any problems, please report them to the upstream freedesktop-sdk project.
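
Concretely, for most apps this means installing the new SDK and bumping the runtime version in the manifest (the manifest name below is just a placeholder):

# Install the 18.08 platform and SDK (assuming the flathub remote is configured)
$ flatpak install flathub org.freedesktop.Platform//18.08 org.freedesktop.Sdk//18.08
# Change "runtime-version" from "1.6" to "18.08" in the manifest, then rebuild
$ flatpak-builder --force-clean build-dir org.example.App.json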

Flatpak on windows

As I teased last week, I recently played around with WSL (the Windows Subsystem for Linux), which lets you run Linux applications on Windows. This isn’t necessarily very useful, as there isn’t really a lack of native applications on Windows, but it is still interesting from a technical viewpoint.

I created a wip/WSL branch of flatpak with some workarounds needed to make it work there, and wrote some simple docs on how to build and test it.

There are some really big problems with this port. For example, WSL doesn’t support seccomp or network namespaces, which removes some of the utility of the sandbox. There is also a bad bug that makes read-only bind-mounts not work for flatpak, which is really unsafe, as apps can modify themselves (or the runtime). There were also various other bugs that I reported. Additionally, some apps rely on things from the Linux host that don’t exist in the WSL environment (such as pulseaudio, or various dbus services).

Still, it’s amazing that it works as well as it does. I was able to run various games, Gnome and KDE apps, and even the Linux version of Telegram. Massive kudos to the Microsoft developers who worked on this!

I know you crave more screenshots, so here is one:

Kick-starting the revolution 1.0

Yesterday marked the day when we finally released Flatpak 1.0 (check out the release video!). I want to thank everyone who helped make this a reality, from writing code, to Flathub packaging, to just testing stuff and spreading the word. Large projects like this can’t be done by a single person; it’s all about the community.

With 1.0 out, I expect the rate of change in Flatpak itself to slow down. Going forward the focus will be more on the infrastructure around it. Things like getting 1.0 into all distributions, making portals work well, ensuring Flathub works smoothly and keeps growing, improving our test-suites and working on the runtimes.

Most of my blog posts are about technical details, but the reason for the existence of Flatpak is not technical. I created flatpak because the Linux desktop application ecosystem is fundamentally broken. As an app developer you have no sane way to distribute the result of your work to users.

Unless you have massive resources, the only realistic way is to wait for distributions to pick up your app. However, there are many problems with this. First of all, not all distros pick up all apps, and even if they do, they often wait until the app is well known, leading to a chicken-and-egg problem for new apps. And when they finally ship it, you have no control over what version is shipped, or when it is updated.

This is a real problem! For instance, maybe some web service your app uses changed its API. It’s a quick fix for your app, but it takes a long time before the fix propagates to distros. In fact, many stable distros never update to new versions at all.

This model leads to a disconnect between the developer and the users. Users file bugs against older versions about issues that are already fixed, yet never get the fixes. Developers add new features that users can’t use, and get no feedback on them.

With Flatpak, the goal is for the upstream developer to have control of updates. If the developer fixes an important bug, a new stable version is released that users can immediately use. Any bugs filed will be against the latest stable version, so they are not stale, and once the bug report is closed the user will actually get the fix. That means reporting bugs is useful to the user. Similarly, any new feature development will get immediate feedback, and user feedback will be based on the current state of the app.

This kind of virtuous cycle helps improve both the speed of development and software quality. My hope is that this in turn will increase the interest in writing native Linux applications and trigger a revolution, leading to the YEAR OF THE LINUX DESKTOP! (ahem)

The birth of a new runtime

Runtimes are a core part of the flatpak design. They are a way to make bundling feasible, while still fully isolating applications from the host system. Application authors can bundle the libraries specific to their application, but don’t have to care about the low-level dependencies that are uninteresting (yet important) for the application.

Many people think of runtimes primarily as a way to avoid duplication (and thus bloat). However, they play two other important roles. First of all, they allow an independent stream of updates for core libraries, so even dead apps get fixes. And secondly, they allow the work of bundling to be shared between all application authors.

There are some runtimes based on pre-existing distribution packages, such as the Fedora and Debian ones. These are very useful if you want to produce flatpaks of the packages from those distributions. However, most of the “native” Flatpaks these days are based on the Freedesktop 1.6 runtime or one of its derivatives (like the Gnome and KDE runtimes).

Unfortunately this runtime is starting to show its age.

The freedesktop runtime is built in two steps. The first is based on Yocto, a cross-compilation system maintained by the Linux Foundation. An image is created from this build, which is then further extended using flatpak-builder. This was a great way to get something going initially. However, Yocto focuses mainly on cross-compilation and embedded systems, which isn’t a great fit, and the weird two-layer split and the complex Yocto build files meant that very few people were able to build, or do any work on, the runtime. It also didn’t help that the build system was a bunch of crappy scripts that needed a lot of handholding by me.

Fortunately this is now getting much better, because today the new Freedesktop runtime, version 18.08, was released!

This runtime has the same name, and its content is very similar, but it is really a complete re-implementation. It is based on a new build system called BuildStream, which is much nicer and a great fit for flatpak. So, no more Yocto, no more BitBake, no more multi-layer builds!

Additionally, it has an entire group of people working on it, including support from Codethink. It’s already using GitLab, with automatic builds, CI, etc. There is also a new release model (year.month) with a well-defined support period. Also, all the packages are much newer!

Gnome is also looking at using this as the basis for its releases, its CI system, and eventually the Gnome runtime.

The old freedesktop runtime is dead, long live the freedesktop runtime!

Flatpak – a history

I’ve been working on Flatpak for almost 4 years now, and 1.0 is getting closer. I think it might be interesting at this point to take a retrospective look at the history of Flatpak.

Early history

Ancient Egyptian Flatpak

The earliest history goes back to the summer of 2007. I had played a bit with an application image system called Klik, which had some interesting ideas. However, I was not really satisfied with some of the technical details. One day at the beach I got an interesting idea for a hack that could improve this.

Fast forward to August 2007, when I released Glick into the wild, based on these ideas. The name is sort of a pun on the old KDE/Gnome first-letter naming scheme, although neither Klik nor Glick is really desktop-specific.

Glick was a single-file-image system. It predates usable kernel container APIs, so it uses FUSE and some weird hacks. It doesn’t integrate with the desktop in any way, and applications have to decide what to bundle, falling back to system libraries for the non-bundled things. This means it’s not terribly robust, but it is completely stand-alone and needs nothing installed on the host system.

Around 2011 the initial support for kernel namespaces had landed and started being useful. Using these I could avoid some of the hacks that my earlier experiment used. So, I got interested in bundling again and released Glick 2 based on this.

Glick 2 requires some software to be installed on the host, which allows it to integrate better with the system. For example, you can “install” bundles by putting the file in a known location, and doing this allows some level of desktop integration. Glick 2 also uses SHA1 checksums to try to automatically de-duplicate files shared between applications. Here we can see an early version of the ideas that make up OSTree.

Bundling using namespaces was a lot more robust than the previous hacks, but it still relied on the system for the core libraries that the application didn’t bundle. So an app would sometimes work on one distro, but not another.

Around this time I posted a blog post about how I thought application bundling combined with read-only OS images could make a really good model for an OS. This idea is very similar to what Project Atomic / Silverblue are doing now.

Containers, Portals and Runtimes

A few years later, around 2013, kernel support for containers was starting to shape up, and Docker hit the market. I did a lot of work on early Docker, like porting it away from aufs in order to run on RHEL.

Around this time I also attended the Gnome Developer Experience hackfest in Brussels, where one of the topics was application deployment and sandboxing. From the discussions there (and my previous experiences), a lot of the core ideas of Flatpak, like runtimes, sandboxing and portals, originated.

In 2014 the first version (then called xdg-app) was released. The current Flatpak is a lot more polished, but the initial version of xdg-app is still very much recognizable today.

xdg-app used OSTree to download, store and de-duplicate applications. It uses kernel namespaces (via a helper called xdg-app-helper) to run unprivileged containers. It has a split between applications and runtimes, which allows applications to be portable between distros in a very robust fashion, while still limiting the duplication between applications and allowing security updates. There is also integration with the desktop (icons, desktop files, mimetypes, etc), and some very early portal code can be seen.

The great renaming

Modern Flatpak

The name xdg-app was just something I picked for the first commit without much consideration, and it was not very good. However, names are hard, and we spent a lot of time trying to come up with another one, eventually settling on “Flatpak” (with the above logo). The 0.6.0 release in May 2016 was the first with the new name.

The 0.6 release was also the first that split out the unprivileged container launcher (xdg-app-helper) into its own project, now known as bubblewrap, hosted by Project Atomic.

Soon thereafter we had the first release of xdg-desktop-portal which is the host-side implementation of the portal idea, allowing sandboxed applications to safely break out of the sandbox in a controlled fashion.

Version 0.8.0, released in December 2016, was the first long-term stable release, and it was included in Debian Stretch and RHEL 7. Since then we have had another stable release series, 0.10.x.

We want apps!

Flatpak was always a decentralized system, in that anyone can host their own applications and be on equal terms with everyone else. However, while this is an important feature, it leads to a poor initial experience, both for users (apps are hard to find) and for developers (who need to maintain their own repository).

To solve this we started the Flathub project, a single repository where you can find most apps. In the last year it has gone from a minimum viable product building its first app to something with more than 300 apps and a diverse group of developers.

Onwards and upwards!

Future Flatpak

No software is ever finished, or bug-free, but we had a list of core things that we wanted to have before calling Flatpak 1.0, and that list is now empty. So, I’m planning to release a release candidate (called 0.99.1) later this week.

Then 1.0 will be released later this summer.