Nightly development builds using xdg-app

When reporting a bug in some software, the developer will often ask you to check whether the bug is fixed in the latest development version. This makes a lot of sense from the perspective of a developer who gets a lot of bug reports. However, unless you are very experienced with building software, this is prohibitively hard.

This is an area where xdg-app shines, because it allows you to create binary builds of desktop applications that work on any distribution. In order to demonstrate this I set up an automated build system that builds Gimp and Inkscape from the development branch every day and produces a new binary that you can easily install and run:

Screenshots: the app icons, and launching the apps

To make it easy to use I also created packages of xdg-app for some common distributions.

For more information about how to use these builds, see the nightly builds page.
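
To give a flavor of what this looks like in practice, installing and running one of the nightly apps boils down to a couple of commands. This is just a sketch: the remote URL and app id below are placeholders, the real ones are listed on the nightly builds page.

$ # URL and app id are placeholders; use the ones from the nightly builds page
$ xdg-app add-remote --user --no-gpg-verify nightly http://example.com/nightly/repo/
$ xdg-app install-app --user nightly org.gimp.GimpNightly
$ xdg-app run org.gimp.GimpNightly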

Playing games with runtime extensions

One of the core ideas of xdg-app is that users should be running the same build of everything that the developers tested on. Not only does this mean that you can trust the testing that went into the app, but it also means that an app can run on multiple distributions, and on different versions of the same distribution.

However, an application does not have to bundle everything. Instead the app specifies a dependency on a runtime, which contains the base system libraries. I like to compare this to dynamic libraries, where xdg-app is “dynamically linked” to its runtime, whereas container systems (like docker) are “statically linked” (by shipping a complete runtime in each app).

This may seem weird and contrary to the first paragraph, but it turns out that this is pretty much a requirement. We want third parties to be able to produce a binary that will keep running “forever”, but the base system may need fixes or support for new hardware. We can’t expect every vendor to rebuild every application (for example some old game) each time something needs fixing at the lower levels. Therefore we allow the runtime to be updated separately from the app (although any update must be compatible).

An app can only depend on one runtime, and everything not in the runtime must be bundled with the application. There is (by design) no way to depend on multiple runtimes, or have runtimes depend on each other. However, there is something called runtime extensions. Extensions are a way to split off parts of a runtime, make them optional, and recombine them at runtime.

For instance, I’m working on a runtime called org.freedesktop.Platform that has the basic freedesktop libraries (X11, Mesa, DBus, etc). It has this snippet in the configuration:

[Extension org.freedesktop.Platform.Timezones]
directory=share/zoneinfo

This means that whenever another runtime called org.freedesktop.Platform.Timezones is installed its contents will replace the directory share/zoneinfo in the runtime. This is very useful as it allows the timezone info (which changes frequently) to be updated separately from the runtime.

It also has this:

[Extension org.freedesktop.Platform.Locale]
directory=share/runtime/locale
subdirectories=true

This means that if a runtime like org.freedesktop.Platform.Locale.sv is installed, it will replace the contents of share/runtime/locale/sv in the runtime. During the build all the locale data and translations are separated out into per-language runtimes which can be installed separately.
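
To illustrate, installing one of these extension runtimes works just like installing any other runtime. A hedged example, where the remote name “fdo” and the 1.2 version are placeholders for whatever your setup uses:

$ # remote name and version are placeholders
$ xdg-app install-runtime --user fdo org.freedesktop.Platform.Timezones 1.2
$ xdg-app install-runtime --user fdo org.freedesktop.Platform.Locale.sv 1.2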

And finally, it has:

[Extension org.freedesktop.Platform.GL]
directory=lib/GL

There is no (official) runtime with this name, but if one is installed it will appear in lib/GL, and the main runtime has been programmed to look into this directory for libGL.

The idea here is that if your system uses an OpenGL driver that does not ship with the regular runtime (i.e. Mesa), or needs a more recent version of it, then you can create your own runtime with this name and get your drivers into the runtime.

For a long time this has been a theoretical solution, but recently I acquired an NVidia card in order to test this. The result is this script, which takes an upstream nvidia driver release and converts it into a runtime that matches the (soon to be released) 1.2 version of the Freedesktop runtime.
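
Once such a driver runtime has been built and exported to a repository, installing it should look much like installing any other runtime. A rough sketch, where the repository URL is a placeholder and the runtime name matches the extension point above:

$ # the repository URL is a placeholder
$ xdg-app add-remote --user --no-gpg-verify nvidia http://example.com/nvidia/repo/
$ xdg-app install-runtime --user nvidia org.freedesktop.Platform.GL 1.2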

To verify that it works I created an xdg-app bundle for the Unreal Editor. Here are some screenshots of a sandboxed version of Unreal to show this working:

Launching the Unreal Editor
Editing the sample project

While runtimes can’t have dependencies on other runtimes, they can be built from the same base, and thus be compatible. For instance, the official Gnome runtime takes the Freedesktop runtime and adds the Gnome modules to it. Since we have the same ABI, we can reuse the same extensions. The two runtimes have different versions (Gnome is 3.18, Freedesktop is 1.2), so we have to specify the version (which is otherwise inferred from the runtime version). Here is how the Gnome runtime config looks:

[Extension org.freedesktop.Platform.GL]
version=1.2
directory=lib/GL

I hope to have stable builds of the Freedesktop 1.2 and Gnome 3.18 runtimes out shortly, so that other people can play with them. Unfortunately I’m not allowed to distribute the Unreal Editor app.

xdg-app moving to freedesktop.org

For anyone following the development of xdg-app: all development has now moved to freedesktop.org, which is where things are happening now.

Testing rawhide apps using xdg-app

An important aspect of xdg-app is application sandboxing, which will require application changes to use sandbox-specific APIs. However, xdg-app is also a good way to deploy and run non-sandboxed (or partially sandboxed) regular applications.

A very interesting use case for this is an image-based operating system, for instance a Workstation spin of Fedora Atomic. Such a system would have a basic workstation installation with a read-only /usr, and atomic updates/rollback. However, installing an application is painful, and customizing your install in that way undoes many of the advantages of an image-based OS.

With xdg-app you can install apps into /var (or $HOME) and have them fully integrate with the system, while still being isolated from changes to the host. This makes for a great combination, just like Atomic + Docker is a good combination in the server space.

I’ve spent some time recently making a prototype runtime based on the Fedora packages, as reported on the desktop list. This is kind of interesting as it lets you test applications from rawhide on Fedora 21 or 22. Just install xdg-app from fedora-updates and then install the runtime:

$ xdg-app add-remote --no-gpg-verify --user fedora http://fedorapeople.org/~alexl/repo/
$ xdg-app install-runtime --user fedora org.fedoraproject.Platform 23

And then you can try gedit 3.17.0:

$ xdg-app install-app --user fedora org.gnome.gedit
$ xdg-app run org.gnome.gedit

Or evince 3.17.2:

$ xdg-app install-app --user fedora org.gnome.evince
$ xdg-app run org.gnome.evince

Once installed you can also just start them from the desktop environment as usual.  They should be there like any regular application as the desktop files and icons are exported to the host.

Official GNOME SDK runtime builds are out

As people who have followed the work on sandboxed applications know, we have promised a developer preview for GNOME 3.16. Well, 3.16 has now been released, so the time is now!

I spent last week setting up a build system on the GNOME infrastructure, and the output of this is finally available at:

http://sdk.gnome.org/repo/

This repository contains the GNOME 3.16 runtime, org.gnome.Platform, as well as a smaller one that is useful for less integrated apps (like games) called org.freedesktop.Platform. It also has corresponding development runtimes (org.gnome.Sdk and org.freedesktop.Sdk) that you can use to create applications for the platforms.

This is a developer preview, so consider these builds weakly supported. This means I will try to keep them somewhat updated if there are major issues and that I will keep them API and ABI stable. I will probably also pick up at least some 3.16.x minor releases as they are released.

I also did the first official release of xdg-app. For easy testing this is available for Fedora 21 and 22 as a copr repo.

Testing the SDK

Using the repo above makes it really easy to test this. Just install the xdg-app package from copr, log out and in again (needed to update the environment for the session), then follow these instructions (as a regular user):

  1. Install the Gnome SDK public key into /usr/share/ostree/trusted.gpg.d (or alternatively, use --no-gpg-verify when you add the remote below).
  2. Install the basic Gnome and freedesktop runtimes:
    $ xdg-app add-remote --user gnome-sdk http://sdk.gnome.org/repo/
    $ xdg-app install-runtime --user gnome-sdk org.gnome.Platform 3.16
    $ xdg-app install-runtime --user gnome-sdk org.freedesktop.Platform 1.0
  3. Optionally install some locale packs:
    $ xdg-app install-runtime --user gnome-sdk org.gnome.Platform.Locale.se 3.16
    $ xdg-app install-runtime --user gnome-sdk org.freedesktop.Platform.Locale.se 1.0
  4. Install some apps from my repository of test apps:
    $ xdg-app add-remote --user --no-gpg-verify test-apps https://people.gnome.org/~alexl/test-apps/repo/
    $ xdg-app install-app --user test-apps org.gnome.gedit
    $ xdg-app install-app --user test-apps org.freedesktop.glxgears
  5. Run the apps! You should find gedit listed among the regular applications in the shell as it exports a desktop file. But you can also run them manually like this:
    $ xdg-app run org.gnome.gedit
    $ xdg-app run org.freedesktop.glxgears
  6. I also packaged the latest gnome builder from git. It requires the full sdk which takes a bit longer to download:
    $ xdg-app install-runtime --user gnome-sdk org.gnome.Sdk 3.16
    $ xdg-app install-app --user test-apps org.gnome.Builder

All of the above installs the apps into your home directory (in ~/.local/share/xdg-app). You can also run the commands as root and skip the --user arguments to do system-wide application installs.
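
If your xdg-app build has the corresponding update subcommands (hedged: the exact command names may differ between releases), keeping things fresh later is a matter of, for example:

$ # assumes update-runtime/update-app subcommands exist in your xdg-app version
$ xdg-app update-runtime --user org.gnome.Platform 3.16
$ xdg-app update-app --user org.gnome.gedit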

Future work

With the basics now laid down to run current applications in a minimally isolated environment, the next step is to work on the sandboxing aspects more. This will require lots of work, both on the system side (things like kdbus), in the desktop (adding sandbox-aware APIs, making PulseAudio protect clients from each other, etc.) and in modifying applications.

If you’re interested in this, you can follow the work on the wiki.

Building your own apps

If you download the SDKs you have enough tooling to build your own applications. There is some documentation on how to do this here.

I also created a git repository with the scripts I used to build the test applications above. It uses the gnome-sdk-bundles repository, which has some tooling and spec files to easily bundle dependencies with the application.
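
For reference, the overall shape of such a build (simplified from those scripts; the app id here is made up, and the exact build-finish options depend on what the app needs) looks roughly like this:

$ # app id and options are illustrative
$ xdg-app build-init appdir org.example.MyApp org.gnome.Sdk org.gnome.Platform 3.16
$ xdg-app build appdir ./configure --prefix=/app
$ xdg-app build appdir make
$ xdg-app build appdir make install
$ xdg-app build-finish --command=myapp appdir
$ xdg-app build-export repo appdir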

Building the SDK

If you ever want to build the SDK yourself, it is available at:

https://git.gnome.org/browse/gnome-sdk-images

This repository contains the desktop-specific parts of the SDK, which are layered on top of a core built with Yocto. When you build the SDK, that core will be automatically checked out and built from:

https://git.gnome.org/browse/freedesktop-sdk-base

However, if you don’t want to build all of this you can download the pre-built images from http://sdk.gnome.org/images/x86_64/ and put them in the freedesktop-sdk-base/images/x86_64 subdirectory of gnome-sdk-images. This can save you a lot of time and space.
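
For example, something along these lines (the image file name below is just an illustration; use whatever is actually listed in that directory):

$ mkdir -p freedesktop-sdk-base/images/x86_64
$ cd freedesktop-sdk-base/images/x86_64
$ # replace the file name with the actual image(s) listed at the URL above
$ wget http://sdk.gnome.org/images/x86_64/<image-tarball-from-the-listing>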

First fully sandboxed Linux desktop app

It’s not a secret that I’ve been working on sandboxed desktop applications recently. In fact, I recently gave a talk at devconf.cz about it. However, up until now I’ve mainly been focusing on the bundling and deployment aspects of the problem. I’ve been running applications in their own environment, but with pretty open access to the system.

Now that the basics are working it’s time to start looking at how to create a real sandbox. This is going to require a lot of changes to the Linux stack. For instance, we have to use Wayland instead of X11, because X11 is impossible to secure. We also need to use kdbus to allow desktop integration that is properly filtered at the kernel level.

Recently Wayland has made some pretty big strides though, and we now have working Wayland sessions in Fedora 21. This means we can start testing real sandboxing for simple applications. To get something running I chose to focus on a game, because games require very little interaction with the system. Here is a video I made of Neverball, running in a minimal sandbox:

In this example we’re running a regular build of neverball in an environment which:

  • Is independent of the host distribution
  • Has no access to any system or user files other than the ones from the runtime and application itself
  • Has no access to any hardware devices, other than DRI (for GL rendering)
  • Has no network access
  • Can’t see any other processes in the system
  • Can only get input via Wayland
  • Can only show graphics via Wayland
  • Can only output audio via PulseAudio
  • … plus more sandboxing details

Yet the application is still simple to install and integrates nicely with the desktop. If you want to test it yourself, just follow the instructions on the project page and install org.neverball.Neverball.
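
The install follows the same pattern as the other examples in these posts (plus installing the runtime it depends on, as shown earlier). Roughly, with the repository URL below being a placeholder for the one on the project page:

$ # use the repository URL from the project page instead of this placeholder
$ xdg-app add-remote --user --no-gpg-verify neverball http://example.com/neverball/repo/
$ xdg-app install-app --user neverball org.neverball.Neverball
$ xdg-app run org.neverball.Neverball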

Of course, there is still a lot to do here. For instance, PulseAudio doesn’t protect clients from each other, and for more complex applications we need to add new APIs to safely grant access to things like user files and devices. The sandbox details page has a more detailed list of what has to be done.

The road is long, but at least we have now started our journey!

Back from DevX hackfest

I’m now back from a week in Cambridge at the developer experience hackfest. This was a great event, it was a lot of fun to meet people again, and we got a lot of things done. I spent a lot of time talking to people about things related to xdg-app and sandboxed applications, both spreading information and actually implementing features.

I spent some time with Emmanuele, Ryan and Lars working on glib stuff, which resulted in the G_DECLARE_*_TYPE macros finally being merged. Additionally I reviewed the new list model abstraction, which I hope we can land soon, and Ryan and I worked out a new fancy __attribute__((cleanup)) approach that we hope to merge into glib soon.

We also worked a bit on Gtk+ OpenGL support. Based on feedback from early users we’re making some changes to how GL contexts are created, to allow you to configure them in more detail. We also decided that we want to completely drop support for legacy OpenGL contexts, as these had issues cooperating with Core 3.2 contexts, and because we don’t live in the 90s anymore. Carlos was working on converting GtkPopover to use (override redirect) toplevels on X11, and I gave him moral support and generally hated on ancient crappy X11 behaviour.

Props to Collabora and Philip for arranging a great event!

The modern Gtk drawing model

Over a few releases now, culminating in Gtk+ 3.10, we have restructured the way drawing works internally. This hasn’t really been written about a lot, and I still see a lot of people unaware of these changes, so I thought I’d do a writeup of the brave new world.

All Gtk+ applications are mainloop driven, which means that most of the time the app is idle inside a loop that just waits for something to happen and then calls out to the right place when it does. On top of this Gtk+ has a frame clock that gives a “pulse” to the application. This clock beats at a steady rate, which is tied to the framerate of the output (this is synced to the monitor via the window manager/compositor). The clock has several phases:

  • Events
  • Update
  • Layout
  • Paint

The phases happen in this order, and we always run each phase through before going back to the start. The Events phase is a long stretch of time between each redraw where we get input events from the user and other events (like e.g. network i/o). Some events, like mouse motion, are compressed so that we only get a single mouse motion event per clock cycle[1].

Once the event phase is over we pause all external events and run the redraw loop. First is the Update phase, where all animations are run to calculate the new state based on the estimated time the next frame will be visible (available via the frame clock). This often involves geometry changes which drive the next phase, Layout. If there are any changes in widget size requirements we calculate a new layout for the widget hierarchy (i.e. we assign sizes and positions). Then we go to the Paint phase where we redraw the regions of the window that need redrawing.

If nothing requires the update/layout/paint phases we will stay in the event phase forever, as we don’t want to redraw if nothing changes. Each phase can request further processing in the following phases (e.g. the update phase will cause there to be layout work, and layout changes cause repaints).

There are multiple ways to drive the clock. At the lowest level you can request a particular phase with gdk_frame_clock_request_phase(), which will schedule a clock beat as needed so that it eventually reaches the requested phase. However, in practice most things happen at higher levels:

  • If you are doing an animation, you can use gtk_widget_add_tick_callback() which will cause a regular beating of the clock with a callback in the update phase until you stop the tick.
  • If some state change causes the size of your widget to change, you call gtk_widget_queue_resize() which will request a layout phase and mark your widget as needing relayout.
  • If some state changes so you need to redraw some area of your widget you use the normal gtk_widget_queue_draw() set of functions. These will request a paint phase and mark the region as needing redraw.

There are also a lot of implicit triggers of these from the CSS layer (which does animations, resizes and repaints as needed).

During the Paint phase we will send a single expose event to the toplevel window[2]. The event handler will create a cairo context for the window and emit a GtkWidget::draw() signal on it, which will propagate down the entire widget hierarchy in back-to-front order, using the clipping and transform of the cairo context. This lets each widget draw its content at the right place and time, correctly handling things like partial transparencies and overlapping widgets.

There are some notable differences here compared to how it used to work:

  • Normally there is only a single cairo context which is used in the entire repaint, rather than one per GdkWindow. This means you have to respect (and not reset) existing clip and transformations set on it.
  • Most widgets, including those that create their own GdkWindows have a transparent background, so they draw on top of whatever widgets are below them. This was not the case before where the theme set the background of most widgets to the default background color. (In fact, transparent GdkWindows used to be impossible.)
  • The whole rendering hierarchy is captured in the call stack, rather than having multiple separate draw emissions as before, so you can use effects like e.g.  cairo_push/pop_group() which will affect all the widgets below you in the hierarchy. This lets you have e.g. partially transparent containers.
  • Due to backgrounds being transparent we no longer use a self-copy operation when GdkWindows move or scroll. We just mark the entire affected area for repainting when these operations are used. This allows (partially) transparent backgrounds, and it also more closely models modern hardware where self-copy operations are problematic (they break the rendering pipeline).
  • Due to the above, old scrolling code is slower, so we do scrolling differently. Containers that scroll a lot (GtkViewport, GtkTextView, GtkTreeView, etc) allocate an offscreen image during scrolling and render their children to it (which is now possible since draw is fully hierarchical). The offscreen image is a bit larger than the visible area, so most of the time when scrolling it just needs to draw the offscreen in a different position.  This matches contemporary graphics hardware much better, as well as allowing efficient transparent backgrounds.
    In order for this to work such containers need to detect when child widgets are redrawn so that they can update the offscreen. This can be done with the new gdk_window_set_invalidate_handler() function.

This skips some of the details, but with this overview you should have a better idea of what happens when your code is called, and be in a better position to do further research if necessary.

[1] If you need more mouse event precision you need to look at the mouse device history.

[2] Actually each native subwindow will get one too, but that is not very common these days.

Adventures in Docker land

Connoisseurs of this blog know that I have an interest in application deployment systems, having created three different application bundling systems (1, 2, 3). These were all experiments in the area of desktop applications, but recently there have been some interesting developments in related areas, namely Docker.

Docker is a server application/container deployment system, which nicely sidesteps a lot of the complexity with desktop apps (not having to integrate deeply with the desktop) which makes it a lot easier to deploy. Additionally, docker is more than a deployment system, it also has some interesting ideas about how to create and distribute applications.

Every docker container is a copy-on-write clone of a specific parent image, which means instantiation of docker containers is very fast and cheap. It also gives some very interesting properties because you can track the changes in a container (compared to its parent image) and “commit” this to a new image. This creates a git-like hierarchy of images where every commit is a filesystem layer that applies on top of a previous layer, up to some base image. The git-like workflow is really nice to work with when creating images, and the final result is very easy to share and deploy, while at the same time automatically sharing as much as possible with common base images.

Unfortunately Docker relies on AUFS, a union filesystem that is not in the upstream kernel, nor is it likely to ever be there. Also, while AUFS is in the current Ubuntu kernel, it is deprecated there and will eventually be removed. This means Docker doesn’t run on Fedora, which has a primarily-upstream approach to packaging.

So, the last month or so I’ve been working on making Docker work in Fedora (and thus eventually in RHEL, which is the number one requested Docker feature). Of course, this work will also benefit other distributions that don’t have AUFS.

I started looking at possible replacements for the copy-on-write support, and there are a few possibilities available:

  • overlayfs
  • btrfs
  • lvm snapshots
  • lvm thin provisioning

Overlayfs is a different union filesystem implementation than AUFS, and the one that seems most likely to land upstream. But that is happening slowly, if at all. Long-term I think this is the best option, but right now it is out of the question.

Btrfs has copy-on-write both using filesystem snapshots and on a per-file basis using reflinks. However, btrfs is not currently used much in production as it’s not considered stable enough. It would also be a very heavy dependency for Docker, as many users would have to reformat their disks to use it.

Lvm snapshots are useful for doing e.g. backup of a snapshot, but regress badly in performance when you start having many snapshots of the same device.

This leaves us with only lvm thin provisioning. This is a fairly recent, but relatively stable technology that allows you to create copy-on-write block devices that are “thinly” provisioned, meaning they don’t use real space until the device is in use. This is not ideal for Docker as it really wants copy-on-write at the file level, but with some work it is possible to work around this.

Rather than interacting with lvm which is a very generic volume manager I chose to use the lower level device-mapper kernel APIs directly (via libdevmapper). This allows us greater ease of access to the devices programmatically, as well as avoiding confusion with possible system use of LVM. Also it avoids some LVM performance issues with very many devices.

So, we set up a single large block device on which we create a device-mapper “thinp” pool. On this we then create a single “base” block device formatted with ext4. Every image and container is then created as a snapshot (in multiple steps) from this base device. So, say you’re starting a container based on an image “apache” which itself is based on a “fedora” image, we would:

  1. Create a snapshot of the base device.
  2. Mount it and apply the changes in the fedora image.
  3. Create a snapshot based on the fedora device.
  4. Mount it and apply the changes in the apache image.
  5. Create a snapshot based on the apache device.
  6. Mount it and use as the root in the new container.

And of course, these devices will be reused (with corresponding steps skipped) as needed by other images/containers.
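
To make the snapshot chain a bit more concrete, here is roughly what the steps above correspond to in raw device-mapper terms. This is a hand-written sketch using dmsetup directly, with made-up device names, sizes and ids; Docker does the equivalent programmatically through libdevmapper:

$ # the base device is thin device id 0 in the pool (pool setup sketched below)
$ # snapshot it for the fedora layer (id 1): the origin must be suspended first
$ dmsetup suspend /dev/mapper/docker-base
$ dmsetup message /dev/mapper/docker-pool 0 "create_snap 1 0"
$ dmsetup resume /dev/mapper/docker-base
$ # activate the snapshot as a device (10G = 20971520 sectors) and mount it
$ dmsetup create docker-fedora --table "0 20971520 thin /dev/mapper/docker-pool 1"
$ mkdir -p /mnt/fedora
$ mount /dev/mapper/docker-fedora /mnt/fedora
$ # ...apply the fedora image contents, then repeat the suspend/create_snap/resume
$ # dance to derive the apache layer (id 2) and finally the container device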

The devicemapper pool needs to be set up on a large block device that fits all the images and containers that you will be using, which would be painful to require from most people. Docker handles this by automatically creating a large sparse file and using it as a loopback device for the devicemapper work. Additionally we ensure that DISCARD support is enabled in the filesystem so that any files removed in the container filter down to the loopback file, making it sparse again.

This means that there is no need for setup, and space for images and containers will only be used as needed. Of course, there are still issues, like the max size of the loopback mount (100G by default, but this should be easy to grow) and the max size of the base ext4 image (10G by default; resizing is harder after initial construction, but should be possible).
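
Spelled out by hand, that automatic setup amounts to roughly the following. Again a simplified sketch with made-up paths and device names; Docker does this internally via libdevmapper, and the sizes are the defaults mentioned above:

$ # sparse backing files for the pool data (100G) and metadata
$ truncate -s 100G /var/lib/docker/data
$ truncate -s 2G /var/lib/docker/metadata
$ losetup /dev/loop0 /var/lib/docker/data
$ losetup /dev/loop1 /var/lib/docker/metadata
$ # thin pool: 100G = 209715200 sectors, 64K data blocks, low-water mark 32768
$ dmsetup create docker-pool --table "0 209715200 thin-pool /dev/loop1 /dev/loop0 128 32768"
$ # 10G base device (thin id 0), formatted ext4 and mounted with discard enabled
$ dmsetup message /dev/mapper/docker-pool 0 "create_thin 0"
$ dmsetup create docker-base --table "0 20971520 thin /dev/mapper/docker-pool 0"
$ mkfs.ext4 /dev/mapper/docker-base
$ mkdir -p /mnt/base
$ mount -o discard /dev/mapper/docker-base /mnt/base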

We’re currently in the process of landing this in Docker, and hope to have a 0.7 release out based on my device-mapper work pretty soon. Then I will continue working on making docker a first-class citizen on Fedora.

HiDPI support in Gnome

For some time I’ve been working on HiDPI support for Gnome, in order to support all the new laptops with very high resolution displays. This work is now at a stage where I can start showing it off and adventurous users might even want to test it out.

The support happens on many layers, like:

  • Wayland: I’ve added support in the protocol for scaled windows and outputs, and implemented this in Weston (the Wayland compositor reference implementation) and the wayland sample clients. This is currently in the unstable branches and will be in Wayland 1.2.
  • Cairo: In order to seamlessly support hidpi rendering I’ve added support for setting device-scale on cairo surfaces. This code is a prerequisite for the Gtk+ support, and is currently in a cairo branch, but will land in Cairo 1.13 eventually.
  • Gnome-icon-theme: Added high resolution versions of most images used in the theme.

But most of the work has been in Gtk+. There is a branch called wip/window-scales that contains this code. It has:

  • Support for scaled windows in the wayland backend, including support for different scale factors on different outputs.
  • Support for scaled windows on X, limited to one scaling factor for all monitors.
  • Support for retina displays on OSX
  • Support for alternative CSS assets for scaled windows, so that you can specify higher resolution background and border images in your themes that will be used on hidpi screens.
  • Support for alternative high-resolution icons for scaled windows, including additions to the icon theme spec so that themes can specify higher resolution images with less details (i.e. 24×24@2x is the same size as 48×48@1x, but has less detail).

Here is a screenshot of how it looks right now.

This is work in progress, but if you want to try it out, the code you need is in the branches mentioned above (in particular the Gtk+ wip/window-scales branch).

You can then enable the scaling either by setting GDK_SCALE=2 in the environment, or by using gsettings (only works under gnome):

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides \
   "{ 'Gdk/WindowScalingFactor':<2>, 'Gdk/UnscaledDPI':<92160> }"

There is still a lot of work to do, but I hope to have this mostly done for my talk at Guadec. Hopefully I’ll see you there!

And last, but not least, many thanks to Brion Vibber who generously donated a Chromebook Pixel to me so that I could work on this stuff.