When reporting a bug in some software, it is common for the developer to ask you to check whether the bug is fixed in the latest development version. This makes a lot of sense from the perspective of a developer who gets a lot of bug reports. However, unless you are very experienced with building software, this is prohibitively hard.
This is an area where xdg-app shines, because it allows you to create binary builds of desktop applications that work on any distribution. To demonstrate this I set up an automated build system that builds Gimp and Inkscape from their development branches every day and produces new binaries that you can easily install and run:
To make it easy to use I also created packages of xdg-app for some common distributions.
One of the core ideas of xdg-app is that users should be running the same build of everything that the developers tested on. Not only does this mean that you can trust the testing that went into the app, but it also means that an app can run on multiple distributions, and on different versions of the same distribution.
However, an application does not have to bundle everything. Instead the app specifies a dependency on a runtime, which contains the base system libraries. I like to compare this to dynamic libraries, where xdg-app is “dynamically linked” to its runtime, whereas container systems (like docker) are “statically linked” (by shipping a complete runtime in each app).
This may seem weird and contrary to the first paragraph, but it turns out that this is pretty much a requirement. We want third parties to be able to produce a binary that will keep running “forever”, but the base system may need fixes or support for new hardware. We can’t expect every vendor to rebuild every application (for example some old game) each time something needs fixing at the lower levels. Therefore we allow the runtime to be updated separately from the app (although any update must be compatible).
An app can only depend on one runtime, and everything not in the runtime must be bundled with the application. There is (by design) no way to depend on multiple runtimes, or to have runtimes depend on each other. However, there is something called runtime extensions. Extensions are a way to split optional parts out of the runtime and recombine them at run time.
For instance, I’m working on a runtime called org.freedesktop.Platform that has the basic freedesktop libraries (X11, Mesa, DBus, etc). It has this snippet in the configuration:
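In the runtime’s key-file metadata this looks roughly as follows (a sketch from memory, so the exact key names may differ slightly):

[Extension org.freedesktop.Platform.Timezones]
directory=share/zoneinfo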
This means that whenever another runtime called org.freedesktop.Platform.Timezones is installed its contents will replace the directory share/zoneinfo in the runtime. This is very useful as it allows the timezone info (which changes frequently) to be updated separately from the runtime.
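The locale data and translations use a similar extension point, this time with subdirectories so that each language can be shipped as its own runtime (again a sketch, with approximate key names):

[Extension org.freedesktop.Platform.Locale]
directory=share/runtime/locale
subdirectories=true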
This means that if a runtime like org.freedesktop.Platform.Locale.sv is installed, it will replace the contents of share/runtime/locale/sv in the runtime. During the build all the locale data and translations are separated out into per-language runtimes which can be installed separately.
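There is also an extension point for OpenGL drivers. Roughly (the extension name here is my assumption, matching what later runtimes call it):

[Extension org.freedesktop.Platform.GL]
directory=lib/GL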
There is no (official) runtime with this name, but if one is installed its contents will appear in lib/GL, and the main runtime is set up to look for libGL in this directory.
The idea here is that if your system uses an OpenGL driver that does not ship with the regular runtime (i.e. Mesa), or needs a more recent version of it, then you can create your own runtime with this name and get your drivers into the runtime.
For a long time this has been a theoretical solution, but recently I acquired an NVidia card in order to test this. The result is this script, which takes an upstream nvidia driver release and converts it into a runtime that matches the (soon to be released) 1.2 version of the Freedesktop runtime.
To verify that it works I created an xdg-app bundle for the Unreal Editor. Here are some screenshots of a sandboxed version of Unreal to show this working:
While runtimes can’t have dependencies on other runtimes, they can be built from the same base, and thus be compatible. For instance, the official Gnome runtime takes the Freedesktop runtime and adds the Gnome modules to it. Since we have the same ABI we can reuse the same extensions. The two runtimes have different versions (Gnome is 3.18, Freedesktop is 1.2), so we have to specify the version (which is otherwise inferred from the runtime version). Here is how the Gnome runtime config looks:
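Schematically, it reuses the Freedesktop extension points but pins them to the Freedesktop version (a sketch; the key names and the full extension list are approximate):

[Extension org.freedesktop.Platform.GL]
version=1.2
directory=lib/GL

[Extension org.freedesktop.Platform.Locale]
version=1.2
directory=share/runtime/locale
subdirectories=true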
An important aspect of xdg-app is application sandboxing, which will require application changes to use sandbox-specific APIs. However, xdg-app is also a good way to deploy and run non-sandboxed (or partially sandboxed) regular applications.
A very interesting use case for this is an image-based operating system, for instance a Workstation spin of Fedora Atomic. Such a system would have a basic workstation installation with a read-only /usr, and atomic updates/rollback. However, installing an application is painful, and customizing your install in that way undoes many of the advantages of an image-based OS.
With xdg-app you can install apps into /var (or $HOME) and have them fully integrate with the system, while still being isolated from changes to the host. This makes for a great combination, just like atomic + docker is a good combination for the server space.
I’ve spent some time recently making a prototype runtime based on the Fedora packages, as reported on the desktop list. This is kind of interesting as it lets you test applications from rawhide on Fedora 21 or 22. Just install xdg-app from fedora-updates and then install the runtime:
This repository contains the gnome 3.16 runtime, org.gnome.Platform, as well as a smaller one that is useful for less integrated apps (like games), called org.freedesktop.Platform. It also has corresponding development runtimes (org.gnome.Sdk and org.freedesktop.Sdk) that you can use to create applications for the platforms.
This is a developer preview, so consider these builds weakly supported. This means I will try to keep them somewhat updated if there are major issues and that I will keep them API and ABI stable. I will probably also pick up at least some 3.16.x minor releases as they are released.
Using the repo above makes it really easy to test this. Just install the xdg-app package from copr, log out and in again (needed to update the environment for the session), then follow these instructions (as a regular user):
Install the Gnome SDK public key into /usr/share/ostree/trusted.gpg.d (or alternatively, use --no-gpg-verify when you add the remote below).
All the commands above install the apps into your home directory (in ~/.local/share/xdg-app). You can also run the commands as root and skip the --user arguments to do system-wide application installs.
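As a rough sketch, the commands look something like this (the remote name, repository URL and application id are examples, and the exact subcommand names varied a bit between early xdg-app releases):

xdg-app add-remote --user gnome-sdk http://sdk.gnome.org/repo/
xdg-app install-runtime --user gnome-sdk org.gnome.Platform 3.16
xdg-app install-app --user gnome-sdk org.gnome.gedit
xdg-app run org.gnome.gedit

If you skipped installing the GPG key, pass --no-gpg-verify when adding the remote instead.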
With the basics now laid down to run current applications in a minimally isolated environment, the next step is to work more on the sandboxing aspects. This will require lots of work, both on the system side (things like kdbus), in the desktop (adding sandbox-aware APIs, making pulseaudio protect clients from each other, etc) and in modifying applications.
If you’re interested in this, you can follow the work on the wiki.
Building your own apps
If you download the SDKs you have enough tooling to build your own applications. There is some documentation on how to do this here.
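The basic flow, once you have the SDK installed, is roughly this (a sketch: the app id, the build commands and the /app prefix are just illustrative):

xdg-app build-init appdir org.example.MyApp org.gnome.Sdk org.gnome.Platform 3.16
xdg-app build appdir ./configure --prefix=/app
xdg-app build appdir make
xdg-app build appdir make install
xdg-app build-finish --command=myapp appdir
xdg-app build-export repo appdir

The exported repo can then be added as a remote and the app installed from it, just like in the examples above.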
I also created a git repository with the scripts I used to build the test applications above. It uses the gnome-sdk-bundles repository, which has some tooling and specfiles to easily bundle dependencies with the application.
Building the SDK
If you ever want to build the SDK yourself, it is available at:
However, if you don’t want to build all of this you can download the pre-built images from http://sdk.gnome.org/images/x86_64/ and put them in the freedesktop-sdk-base/images/x86_64 subdirectory of gnome-sdk-images. This can save you a lot of time and space.
It’s not a secret that I’ve been working on sandboxed desktop applications recently. In fact, I recently gave a talk at devconf.cz about it. However, up until now I’ve mainly been focusing on the bundling and deployment aspects of the problem. I’ve been running applications in their own environment, but with pretty open access to the system.
Now that the basics are working it’s time to start looking at how to create a real sandbox. This is going to require a lot of changes to the Linux stack. For instance, we have to use Wayland instead of X11, because X11 is impossible to secure. We also need to use kdbus to allow desktop integration that is properly filtered at the kernel level.
Recently Wayland has made some pretty big strides though, and we now have working Wayland sessions in Fedora 21. This means we can start testing real sandboxing for simple applications. To get something running I chose to focus on a game, because they require very little interaction with the system. Here is a video I made of Neverball, running in a minimal sandbox:
In this example we’re running a regular build of neverball in an environment which:
Is independent of the host distribution
Has no access to any system or user files other than the ones from the runtime and application itself
Has no access to any hardware devices, other than DRI (for GL rendering)
Yet the application is still simple to install and integrates nicely with the desktop. If you want to test it yourself, just follow the instructions on the project page and install org.neverball.Neverball.
Of course, there is still a lot to do here. For instance, PulseAudio doesn’t protect clients from each other, and for more complex applications we need to add new APIs to safely grant access to things like user files and devices. The sandbox details page has a more detailed list of what has to be done.
The road is long, but at least we have now started our journey!
I’m now back from a week in Cambridge at the developer experience hackfest. This was a great event, it was a lot of fun to meet people again, and we got a lot of things done. I spent a lot of time talking to people about things related to xdg-app and sandboxed applications, both spreading information and actually implementing features.
We also worked a bit on Gtk+ OpenGL support. Based on feedback from early users we’re doing some changes in how GL contexts are created to allow you to configure them in more detail. We also decided that we want to completely drop support for legacy OpenGL contexts, as these had issues cooperating with Core 3.2 contexts, and because we don’t live in the 90s anymore. Carlos was working on converting GtkPopover to use (override redirect) toplevels on X11, and I gave him moral support and generally hated on ancient crappy X11 behaviour.
Props to Collabora and Philip for arranging a great event!
I’ve recently been working on OpenGL support in Gtk+, and last week it landed in master. However, the demos we have are pretty lame and are not very good for showing off or even testing the OpenGL support. I looked around for some open source demos using modern GL, but I didn’t find anything that we could easily reuse.
What I did find, though, was a lot of WebGL demos that used three.js. This looked like a very nice open source library for high-level 3D rendering. At first I had some plans to bind OpenGL to gjs so that we could run three.js, but this turned out to be hard.
Instead I started converting three.js into C + GObject, using the Gtk+ OpenGL support and the vector/matrix library graphene that Emmanuele has been working on recently.
After about a week of frantic hacking it is now at a stage where it may be interesting for others. So, without further ado I introduce:
It does not yet support everything that three.js can do, but it does support meshes with most mesh material types and lighting, including a loader for the json model format of three.js, which means that it is minimally useful.
Here are some screenshots of the examples that ship with the code:
This has been a lot of fun to work on as I’ve seen a lot of progress very fast. Mad props to mrdoob and the other three.js developers for creating three.js and making it free software. Gthree is a huge rip-off of their work and would never be possible without it. Thanks also to Emmanuele for his graphene library.
What are you sitting here for, go ahead and play with it! Make some demos, port some more three.js features, marvel at the fancy graphics!
I’ve been doing some experiments recently with ways to build docker images. As a test bed for this I’m building an image with the Gtk+ Broadway backend, and some apps. This will both be useful (have a simple way to show off Broadway) as well as being a complex real-world use case.
I have two goals for the image. First of all, building the custom code has to be repeatable and controlled. Secondly, the produced image should be clean and have a minimal size. In particular, we don’t want any of the build dependencies or intermediate results in any layer of the final docker image.
The approach I’m using involves a build Dockerfile, which installs all the build tools and the development dependencies. Inside this container I build a set of rpms. Then another Dockerfile creates a runtime image which just installs the necessary runtime rpms built in the first container.
However, there is a complication to this. The runtime image is created using a Dockerfile, but there is no way for a Dockerfile to access the files produced from the build container, as volumes don’t work during docker build. We could extract the rpms from the container and use ADD in the dockerfile to insert the files into the container before installing them, but this would create an extra layer in the runtime image that contained the rpms, making it unnecessarily large.
To solve this I made the build container construct a yum repository of all the built rpms and then, when started, serve it with lighttpd. So, by doing:

docker build -t broadway-repo .
docker run -d -p 9999:80 broadway-repo

I get a yum repository server at port 9999 that can be used by the second container. Ideally we would want to use docker links to get access to the build container, but unfortunately links don’t work with docker build. Instead we use a hack in the Makefile to generate a yum repo file with the host IP address and add that to the runtime image.
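The generated repo file looks schematically like this (the repository id is illustrative, and HOST_IP is filled in by the Makefile):

[broadway-build]
name=rpms from the broadway build container
baseurl=http://HOST_IP:9999/
enabled=1
gpgcheck=0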
Instead of starting by adding all the source files we add them one by one as needed. This allows us to use the caching that docker build does, so that if you change something at the end of the Dockerfile docker will reuse the previously built images all the way up to the first line that has changed.
We try to do multiple commands in each RUN invocation, chaining them together with &&. This way we can avoid the current limit of 127 layers in a docker image.
At the end we use createrepo to create the repository and set the default command to run lighttpd on the repository at port 80.
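Schematically, the build Dockerfile therefore follows this pattern (the package names and paths here are illustrative, not the actual file):

FROM fedora:21

# One RUN with chained commands creates a single layer instead of several
RUN yum install -y rpm-build createrepo lighttpd && \
    yum clean all

# Add sources one by one, in build order, so that a change near the end of
# the Dockerfile only invalidates the build cache from that point on
ADD glib.tar.xz /root/build/
ADD gtk.tar.xz /root/build/

# ... build the rpms here ...

# Build the yum repository and serve it with lighttpd on port 80
# (lighttpd.conf is assumed to point its document root at the repo)
RUN createrepo /root/rpmbuild/RPMS
EXPOSE 80
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]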
We rebuild all packages in the Gtk+ stack, so there are no X11 dependencies in the final image. Additionally, we strip a bunch of dependencies and use some other tricks to make the rpms themselves a bit smaller.
Here is a screenshot of the final results:
You can try it yourself by running “docker run -d -p 8080:8080 alexl/broadway” and loading http://127.0.0.1:8080 in a browser.
Over a few releases now, and culminating in Gtk+ 3.10 we have restructured the way drawing works internally. This hasn’t really been written about a lot, and I still see a lot of people unaware of these changes, so I thought I’d do a writeup of the brave new world.
All Gtk+ applications are mainloop driven, which means that most of the time the app is idle inside a loop that just waits for something to happen, and then calls out to the right place when it does. On top of this Gtk+ has a frame clock that gives a “pulse” to the application. This clock beats at a steady rate, which is tied to the framerate of the output (this is synced to the monitor via the window manager/compositor). The clock has several phases:

Events
Update
Layout
Paint
The phases happen in this order, and we will always run each phase through before going back to the start. The Events phase is a long stretch of time between each redraw where we get input events from the user and other events (like e.g. network i/o). Some events, like mouse motion, are compressed so that we only get a single mouse motion event per clock cycle.
Once the event phase is over we pause all external events and run the redraw loop. First is the Update phase, where all animations are run to calculate the new state based on the estimated time the next frame will be visible (available via the frame clock). This often involves geometry changes, which drive the next phase, Layout. If there are any changes in widget size requirements we calculate a new layout for the widget hierarchy (i.e. we assign sizes and positions). Then we go to the Paint phase, where we redraw the regions of the window that need redrawing.
If nothing requires the update/layout/paint phases we will stay in the event phase forever, as we don’t want to redraw if nothing changes. Each phase can request further processing in the following phases (e.g. the update phase will cause there to be layout work, and layout changes cause repaints).
There are multiple ways to drive the clock. At the lowest level you can request a particular phase with gdk_frame_clock_request_phase(), which will schedule a clock beat as needed so that it eventually reaches the requested phase. However, in practice most things happen at higher levels:
If you are doing an animation, you can use gtk_widget_add_tick_callback() which will cause a regular beating of the clock with a callback in the update phase until you stop the tick (see the sketch after this list).
If some state changes that causes the size of your widget to change you call gtk_widget_queue_resize() which will request a layout phase and mark your widget as needing relayout.
If some state changes so you need to redraw some area of your widget you use the normal gtk_widget_queue_draw() set of functions. These will request a paint phase and mark the region as needing redraw.
There are also a lot of implicit triggers of these from the CSS layer (which does animations, resizes and repaints as needed).
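As a minimal sketch (not taken from any real widget), an animation driven by the tick callback looks something like this:

#include <gtk/gtk.h>

/* Runs in the Update phase of each frame while the tick is active */
static gboolean
tick_cb (GtkWidget *widget, GdkFrameClock *frame_clock, gpointer user_data)
{
  /* The time to use for animations this frame, in microseconds */
  gint64 frame_time = gdk_frame_clock_get_frame_time (frame_clock);

  /* ... update the animation state based on frame_time ... */

  gtk_widget_queue_draw (widget);  /* request the Paint phase */

  return G_SOURCE_CONTINUE;        /* keep the clock beating */
}

static void
start_animation (GtkWidget *widget)
{
  gtk_widget_add_tick_callback (widget, tick_cb, NULL, NULL);
}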
During the Paint phase we will send a single expose event to the toplevel window. The event handler will create a cairo context for the window and emit a GtkWidget::draw() signal on it, which will propagate down the entire widget hierarchy in back-to-front order, using the clipping and transform of the cairo context. This lets each widget draw its content at the right place and time, correctly handling things like partial transparencies and overlapping widgets.
There are some notable differences here compared to how it used to work:
Normally there is only a single cairo context which is used for the entire repaint, rather than one per GdkWindow. This means you have to respect (and not reset) the existing clip and transformations set on it (see the sketch after this list).
Most widgets, including those that create their own GdkWindows have a transparent background, so they draw on top of whatever widgets are below them. This was not the case before where the theme set the background of most widgets to the default background color. (In fact, transparent GdkWindows used to be impossible.)
The whole rendering hierarchy is captured in the call stack, rather than having multiple separate draw emissions as before, so you can use effects like e.g. cairo_push/pop_group() which will affect all the widgets below you in the hierarchy. This lets you have e.g. partially transparent containers.
Due to backgrounds being transparent we no longer use a self-copy operation when GdkWindows move or scroll. We just mark the entire affected area for repainting when these operations are used. This allows (partially) transparent backgrounds, and it also more closely models modern hardware where self-copy operations are problematic (they break the rendering pipeline).
Due to the above, old scrolling code is slower, so we do scrolling differently. Containers that scroll a lot (GtkViewport, GtkTextView, GtkTreeView, etc) allocate an offscreen image during scrolling and render their children to it (which is now possible since draw is fully hierarchical). The offscreen image is a bit larger than the visible area, so most of the time when scrolling it just needs to draw the offscreen in a different position. This matches contemporary graphics hardware much better, as well as allowing efficient transparent backgrounds.
In order for this to work, such containers need to detect when child widgets are redrawn so that they can update the offscreen. This can be done with the new gdk_window_set_invalidate_handler() function.
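Coming back to the first point above, here is a minimal sketch (not from any real widget) of a draw handler that respects the clip and transform it is handed:

#include <gtk/gtk.h>

/* A "draw" signal handler: the cairo context already carries the clip and
 * transform set up during the toplevel expose handling, so save and restore
 * around any changes instead of resetting them. */
static gboolean
on_draw (GtkWidget *widget, cairo_t *cr, gpointer user_data)
{
  int width = gtk_widget_get_allocated_width (widget);
  int height = gtk_widget_get_allocated_height (widget);

  cairo_save (cr);
  /* Semi-transparent fill: whatever widgets are below show through */
  cairo_set_source_rgba (cr, 0.2, 0.4, 0.8, 0.5);
  cairo_rectangle (cr, 0, 0, width, height);
  cairo_fill (cr);
  cairo_restore (cr);

  return FALSE;  /* keep propagating so children get drawn as usual */
}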
This skips some of the details, but with this overview you should have a better idea of what happens when your code is called, and be in a better position to do further research if necessary.
 If you need more mouse event precision you need to look at the mouse device history.
 Actually each native subwindow will get one too, but that is not very common these days.