The Truth they are not telling you about “Themes”

Before we start, let’s get this out of the way, because the week-long delirium on social media has dragged on enough.

Yes, libadwaita “hardcodes” Adwaita. Yes, applications, as is, will not follow a custom system theme. Yes, this does improve the default behavior of applications made for GNOME when they run on other platforms, like elementary OS. However, this is the result of a technical limitation, and not some evil plot as Twitter will keep telling you…

The reason is that in order for High Contrast (and the upcoming Dark Style) to work, libadwaita needs to override the theme name property so it doesn’t fall back to GTK’s “Default” High Contrast style. The “Default” style is an older version of Adwaita, not your system style.

Compared to GTK 3, there is no new way of enforcing the “hardcoded” style. The GTK_THEME variable still works, as does ~/.config/gtk-4.0/gtk.css, and probably three other ways of doing this. The process of theming your system might be a bit different compared to GTK 3, but it will still work. Likewise, if you are developing a distribution, you have control of the end product and can do anything you want with the code. There is a plethora of options available. Apparently complaining on social media and bullying volunteers into submission was one such option…
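To illustrate, here’s a hedged sketch of both override mechanisms; the theme name, the app, and the CSS rule are placeholders of my own, not a recommendation:

# Force a different style for a single run (theme name is an example):
GTK_THEME=MyTheme ./my-gtk4-app

# Or apply user-wide CSS overrides:
echo '@define-color accent_bg_color purple;' >> ~/.config/gtk-4.0/gtk.css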

And I guess this also needs to be stated: this change only affects apps that choose to use libadwaita and adopt the GNOME Design Guidelines, not “every” GTK 4 application.

As usual, the fact that the themes keep working doesn’t mean they are supported. The same issues apply as before when restyling applications that don’t expect it, and GNOME cannot realistically support arbitrary stylesheets that none of the contributors develop against and test.

Now onto the actual blogpost.

There seems to be some confusion when it comes to libadwaita’s stylesheet and coloring APIs. It’s easy to mix them up when you haven’t heard of libadwaita before, so here is a short introduction on what they are and how they differ.

Keep in mind that the features discussed below are not guaranteed to land. After libadwaita 1.0 the stylesheet will be frozen and treated as an API. That means that if a feature doesn’t make it by 1.0 it will be a breaking change and will have to wait for libadwaita 2.0.

An Application Coloring API / Accent Colors

The idea here is that you can define “accent” colors to be applied for various parts of widgets. You can also recolor any part of a widget however you like. Take a look at Epiphany’s private mode header bar for an example. For this to be possible the whole stylesheet had to be reworked. Extra care was needed to ensure that the functionality wouldn’t conflict with the high contrast preference and wouldn’t need special handling. I hope Alexander will blog about this work in more detail, as it was truly fascinating. I am very excited to see what developers do with the coloring API.

For now the colors can be controlled with the GTK-specific @define-color, similar to CSS variables. Programmatic API will be added later on as the dust settles. The API will be based on AdwStyleManager which is getting introduced by the Dark style preference MR and hasn’t landed yet.

Here’s a quick example:

@define-color accent_color @yellow_5;
@define-color accent_bg_color @yellow_2;
@define-color accent_fg_color black;

.controls {
    color: white;
    background: linear-gradient(to right, shade(@blue_3, .8), @purple_2);
}

.slider > trough > highlight {
    background: linear-gradient(to left, shade(@red_1, .8), @yellow_4);
}

.controls textview text {
  background: none;
}

.controls entry,
.controls spinbutton,
.controls textview {
  background-color: alpha(black, .15);
  color: white;
}

.navigation-sidebar,
headerbar {
  background: alpha(white, .1);
  color: white;
}

GNOME Patterns application showcasing the capabilities of the CSS styles.

For a more detailed example of what you can do check Federico’s recent blogpost.

System Accents

This is heavily inspired by the system accent settings in elementary OS, and it’s similar in function. Think of it as a way to set the accent color system-wide; applications can then read it and decide to follow or override it. A case where you would want to override it is if your app had a Sepia mode, for example.

The coloring API mentioned above is designed in a way that makes this feature easy to implement. The interface and UI for this are not yet fleshed out completely, and it’s debatable if it’s going to be implemented/merged at all. There are a couple of design issues and concerns that need further research. It’s a possibility, but don’t bet on it.

Picture of the Elementary OS 6 Appearance Settings panel.

Vendor Styling

The story behind this idea is extensive and best left for another post, so here’s the current status on this infamous topic.

There have been great strides in reducing the possible fallout of restyling applications with brand colors. Nowadays vendors recognize that arbitrary restyling can be damaging to application developers and have taken some precautions.

Yaru reworked its style and rebased it on Adwaita, which helped reduce the changes to mostly the color palette and minor stylistic tweaks. This got rid of a lot of bugs surfacing in applications, as Yaru now at least has the same spacing, margins and padding as Adwaita. Pop!_OS followed suit shortly after; I believe it’s now based on Yaru.

However, both Ubuntu and Pop also introduced “Dark modes”, with Pop making it the default, which broke applications’ expectations. They did it despite being warned about it. As a result this ended up increasing the issues with theming by about an order of magnitude, as now you would frequently end up with black on black, grey on grey, and other fun coloring bugs. It should also be noted that neither Ubuntu nor System76 approached any contributor I know of about properly implementing a Dark Style preference upstream, even though GNOME and elementary contributors had been collaborating on it in public for the last three years.

Screenshot of gedit with Yaru Dark stylesheet, where the selected text becomes invisible.

Yaru developers did some research on the topic and there was a call for engagement by GNOME, but unfortunately, ever since the last theming BoF in 2019, the conversation has dried up. The interested parties haven’t provided any details on what the scope of the API would need to be, what it would look like, or the detailed requirements. Nobody stepped up to help with the required Adwaita changes either, or with dark mode, or to work on the QA tooling, or to figure out the implementation details. Now we are sadly out of time for libadwaita 1.0, and there isn’t much hope for such a complex thing to be ready in the next four months.

Conclusion

For libadwaita 1.0 and GNOME 42 the work on recoloring widgets will likely be completed. A proper Dark Style setting will likely also be implemented by then. System-wide accent colors are being discussed and looked at, but there are design-related concerns about them, so it’s possible that they will never land. And there won’t be any “Theming API” for libadwaita 1.0. Maybe there will be renewed interest from the vendors that want it in the future, but given the story so far, I won’t hold my breath. I hope to be proven wrong.

GNOME Nightly Annual ABI Break

This only affects GNOME Nightly; if you are using the stable runtimes you have nothing to worry about.

It’s that time of the year again. We’ve updated the base of the GNOME Nightly Flatpak runtime to the Freedesktop-SDK 21.08 beta release.

This brings lots of improvements and updates to the underlying toolchain, but it also means that between yesterday and today there is an ABI break, and all your Nightly apps will need to be rebuilt against the newer base.

Thankfully this should be as simple as triggering a new Gitlab CI pipeline. Merging anything will also trigger a new build.

I suggest you also take the time to set up a daily scheduled CI job so that your applications keep up with runtime changes automatically, even if there hasn’t been new activity in the app for some time. It’s quite simple.

Go to your project, Settings -> CI/CD -> Schedules -> New schedule button -> Select the daily preset.

Happy hacking.

Toolbox your Debian

Last week I needed a Debian system to test things. I had heard others were using toolbox with Debian images without much trouble, so I decided to give it a go instead of creating a VM.

Toolbox only requires a handful of utilities to work with any given docker image. After a quick search I stumbled upon Philippe’s post, which in turn linked to this PR about an Ubuntu-based toolbox image. It looks like the last major issues were worked out recently in toolbox, and there isn’t anything extra needed apart from the image.

Until the upstream PR is merged, I’ve adapted the image for Debian Sid and inlined the deps in one Dockerfile so it’s easier to fetch and use. The Dockerfile is hosted in a gist here.
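To give a rough idea of its shape (a hedged sketch only; the authoritative package list and label are in the gist), the Dockerfile boils down to a Debian Sid base plus the utilities toolbox expects:

FROM debian:sid

# Mark the image as toolbox-compatible (assumed label; see the gist for the real one)
LABEL com.github.containers.toolbox="true"

# Illustrative subset of the dependencies toolbox needs inside the container
RUN apt-get update && \
    apt-get install -y sudo libcap2-bin procps && \
    rm -rf /var/lib/apt/lists/*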

Here is how to use it:

curl -o debian.Dockerfile 'https://gitlab.gnome.org/-/snippets/1653/raw/master/debian.Dockerfile'
podman build -t debian-sid-toolbox -f debian.Dockerfile .
toolbox create -c sid --image debian-sid-toolbox
toolbox enter sid

Enjoy!

Try GTK 4 Demos Now!

You watched Matthias’s talk at Linux App Summit last week and you wish you could try the demos yourself? Excellent!

During the Q&A section, Cassidy James asked if there was any Flatpak available; an hour after that, we published a Nightly version of the GTK 4 Demo, along with the Widget Factory and the Icon Browser, in GNOME’s Nightly repository.

To try it, all you need to do is the following:

$ flatpak install https://nightly.gnome.org/repo/appstream/org.gtk.Demo4.flatpakref
$ flatpak run org.gtk.Demo4

The Flatpak is built by the GTK CI pipeline on each commit, so it will keep updating as time goes on. You can try the most bleeding edge there is, all in its little isolated sandbox, without worrying about messing up your system or having to go through the soul-crushing process of figuring out how to build the software.

If you are curious how we do CI/CD with Flatpak, take a look at the Merge Request and the relevant wiki page.

What is GNOME OS?

With the release of GNOME 3.38.0, we started producing and distributing bootable VM images for debugging and testing features before they hit any distribution repository. We called the images GNOME OS. The name itself is not new, and what it stands for has not changed dramatically since it was introduced, but let’s restate its goals.
 
GNOME OS aims to better facilitate development of GNOME by providing a working system for development, design, and user testing purposes.
 
The main feature of GNOME OS is that we can produce a new system image for each commit made in any of our modules. The ability to have these VM images is truly amazing, since we are dealing with hundreds of modules that depend on and integrate with each other, and with the lower layers of the OS stack. This effort is a game changer with regard to automating boot and session initialization, testing design and implementation changes, catching regressions earlier in the development cycle, and many other possibilities.
 
GNOME OS will also allow the engagement team to more easily create visual assets ahead of the release, present features and bug fixes to the free and open source software community at large, and make initiatives like the release video a lot simpler. Journalists will be able to get their hands on the new release of GNOME before the final release.

What GNOME OS is not

Despite its name, GNOME OS should not be regarded as GNOME’s own platform or general-purpose operating system. As much as I would personally like it to be, and have talked in the past about it, we have to recognize that it’s not, and for good reasons. In its current state, GNOME OS is still an incomplete reference system. It’s the closest approximation that we have to a GNOME platform OS, but we don’t actually have the resources that would be required to support a fully realized, general-purpose OS for everyday use.
 
Maintaining an OS would, at the bare minimum, require keeping up with security fixes (CVEs), doing hardware enablement, and having some kind of user support story. Each one of these is a gigantic task requiring a dedicated team to do properly, and GNOME is not currently set up to be able to tackle them. Distributions put a lot of effort into building their platforms to maintain the changes coming from various upstreams and QAing them together.
 
Furthermore, many GNOME contributors are already working for companies that develop distributions (Canonical, Red Hat, SUSE, Endless, etc.) and it’s unclear how many would be willing or able to maintain another one.

Who is it for?

Firstly it’s for designers, so they no longer have to suffer through countless hours of trying to build software themselves, in order to test the latest development versions of some of our core modules (most notably GNOME Shell). Tightening that feedback loop is incredibly valuable for delivering a polished product. After that, it’s for the release team, so it can validate releases before slinging them out the door; for developers and translators, so they can have a complete system to test and debug their changes on; for our downstream distributors and OS vendors, so that they can have a known to be working baseline against which they can compare their own products. Last but not least, it’s for the machines and robots that keep an eye out for regressions.
 
Enjoy GNOME OS!  

Navigating Docker for Windows versions

Recently, gitlab-runner got support for running jobs inside Windows containers. This greatly simplifies the setup needed to get a Windows CI job up and running, and it makes it similar to how we do Linux builds.
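For context, here is a hedged sketch of the relevant gitlab-runner configuration; the docker-windows executor is the real feature, while the name and image values are placeholders of my own:

# config.toml - illustrative values only
[[runners]]
  name = "windows-container-runner"
  executor = "docker-windows"
  [runners.docker]
    image = "mcr.microsoft.com/windows/servercore:ltsc2019"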

Windows, though, has a couple of gotchas: the behavior of Docker on Windows can vary vastly depending on which binary and/or configuration you use.

Containers on Windows are dependent on the server version of the host. For example, your Server 2016 (1607) containers can only be executed on a Server 2016 host. Currently there are two popular base versions that Docker supports: Server 2016 and Server 2019. Gitlab-runner only supports Server 2019, so we will go with that.

Microsoft recently introduced a set of process isolation APIs; up till now, Docker had been using a Hyper-V backend to launch containers. This results in possibly more lightweight containers, as opposed to the light VMs you get with Hyper-V, but it also means that Docker for Windows now has two isolation backends, which can affect the behavior of your containers differently! Process isolation seems to be the default on Server 2019.
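To illustrate (the image tag and command are just examples), the backend can be picked per container with the --isolation flag:

docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver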

Next, you want to go and install Docker, so you look up the Microsoft documentation. There’s one option for the servers, but there’s also an option for Docker Desktop for Windows 10, and to add to this, Docker Desktop itself has two versions, depending on whether you want the Community or the Enterprise Edition! Also, you can’t install Docker on Windows 10 the same way you would on the Server, so you are left having to mix different binaries and hope everything works out. Spoiler: it didn’t work for me! There were a couple of build/networking failures when using Docker Desktop, where the same Dockerfile built fine on the server host.

Bonus round:

Docker Desktop has two modes, one for running Linux containers and one for Windows containers! We talked about the two different isolation backends for Windows above. Now it looks like the Linux containers mode got a backend based on WSL2, in addition to the existing Hyper-V backend it has been using.

Hope you made it to the end without losing count of all the different “containers” in the windows land. 🙂

GUADEC 2018

It has been more than a month since GUADEC wrapped up, but I’ve been busy and did not manage to write my report until now.

Traveling

I was a bit anxious about the travel. It was my first time flying, and not only that, but I had to spend the night at the airport due to the departure being at 6am. The flights went smoothly and I arrived in Málaga in the evening. Afterwards I took a bus to get to Almería; it was a pleasant surprise to find out that other gnomies were also on board.

View of Málaga from a plane, right before landing.

Talks

There were many exciting talks. In particular, I really enjoyed the Auditorium track on Saturday, about Flatpak and BuildStream. On Sunday I attended Carlos Soriano’s talk DevOps for GNOME and Tobias Bernard’s Designing GNOME Mobile. I was really looking forward to Federico’s talk too; sadly I couldn’t make it in time and watched it online instead.

BoF days

On Monday morning I attended the Librem 5 BoF along with Julian Sparber, in order to talk about Fractal and the Messages application with the folks from Purism. We discussed Fractal’s upcoming features and plans going forward. Afterwards I headed over to the Gitlab workshop to help with whatever I could. During that time, Jean Felder and I debugged an issue, and we got the Music development build to run alongside the Music stable/system install successfully!

The highlight of Tuesday was the Theming BoF. It was really interesting to see so many different groups attending and discussing the issues the Design Team brought forward. App developers, platform developers, downstreams, designers, and even elementary folks were there to give their opinions and talk about their experience and how they deal with such issues in elementary. Tobias Bernard and elementary summarized it really well in their posts here and here. Cassidy James also took great notes and added them to the wiki page of the BoF here.

Jakub presenting the new Icon designs.

Wednesday was also packed. In the morning we had the What is a GNOME App BoF; more on that another time though, as Carlos and Tobias are working on a proper proposal. After that, Tobias and I took some time to work on the blocker issues before making the first “public” release of the new Podcasts app for GNOME. We’ve been working on the app for the past year, and it’s now available on Flathub!

Social Events

By far the thing I enjoyed the most at GUADEC was the social events. Talking with people about all sorts of things and seeing the perspectives of others from all around the world was a magical, thought-provoking experience. I don’t really like going to the beach, but I loved both the beach party and the Sandcastle BoFs. The visit to the Alcazaba Castle and the Flamenco show afterwards were absolutely delightful too.

View of the Alcazaba Castle Walls.

A big thanks to the Foundation

My trip was sponsored by the GNOME Foundation. Thanks a lot to the volunteers on the Travel Committee for all their hard work in making it possible for me to attend GUADEC!

Flatpak builds in the CI

This is a follow-up to Carlos Soriano’s blog post about the new GNOME workflow that has emerged following the transition to gitlab.gnome.org. The post is pretty damn good, and if you haven’t read it already, you should. In this post I will walk through setting up the Flatpak build and test job that runs on the nautilus CI. The majority of the work on this was done by Carlos Soriano and Ernestas Kulik.

Let’s start by defining what we want to accomplish. First of all, we want to ensure that every commit will be buildable in a clean environment and against a Flatpak runtime. Second to that, we want to ensure that each project’s test suite will be run and pass. After these succeed, we want to be able to export the resulting Flatpak to install and/or test it locally. Lastly, we don’t want to waste the precious time of the shared CI runners that other projects also use, so we want to utilize Flatpak’s ostree artifacts for caching.

To summarize we want to achieve the following:

  1. Build the project
  2. Run the Test-Suite
  3. Create a Flatpak package/bundle and export it
  4. Use a caching mechanism to reduce build times

Building the project

If your Flatpak manifest targets the gnome-3.26/3.28 or Freedesktop-1.6 runtime, you can use these container images, provided by the Flatpak project directly. They are a good fit for your stable release branches too. Nautilus’ master branch though, like most GNOME projects, targets the GNOME Nightly runtime. I created a custom image using the same process with which the stable 3.26/3.28 images are built. It will be updated every day, an hour after the new Nightly runtime is composed. You can use it by changing the image: key in the .gitlab-ci.yml to point to registry.gitlab.com/alatiera/gnome-nightly-oci/gnome-master:latest.

Currently we invoke flatpak-builder manually to build the resulting Flatpak. The reason for this is that the manifest always points at the GNOME/nautilus master branch, which would ignore the fork/branch we want it to build. That’s why we do the following in nautilus to sidestep that.

script:
  - flatpak-builder --stop-at=nautilus app build-aux/flatpak/org.gnome.Nautilus.json
  # Make sure to keep this in sync with the Flatpak manifest, all arguments
  # are passed except the config-args because we build it ourselves
  - flatpak build app meson --prefix=/app --libdir=/app/lib -Dprofile=development -Dtests=all _build
  - flatpak build app ninja -C _build install
  - flatpak-builder --finish-only --repo=repo app build-aux/flatpak/org.gnome.Nautilus.json

This builds all the modules up till nautilus. Then we take over and build nautilus ourselves from the local checkout. Finally, we invoke the manifest again to finish the build.

It works with SDK extensions too

There’s a small caveat here, I was only able to use sdk-extensions with flatpak-builder.
It’s probably possible to use flatpak build too, but my knowledge of Flatpak
is limited. If you know of a way to do it better please let me know!

I’ve created a Rust-sdk image for my own use. Here is an example of how to use its used.
If you need any other sdk-extension, like C#, open an issue in this repo or even better make an MR!
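For reference, the relevant manifest keys look roughly like this; a hedged sketch, assuming the rust-stable extension, with the path taken from where SDK extensions are usually mounted:

"sdk": "org.gnome.Sdk",
"sdk-extensions": [
    "org.freedesktop.Sdk.Extension.rust-stable"
],
"build-options": {
    "append-path": "/usr/lib/sdk/rust-stable/bin"
}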

Running tests inside the Flatpak environment

In order to run the nautilus test suite we will add the following line:

flatpak build app ninja -C _build test

If your test suite requires a display, you could use Xvfb. Since that’s quite common for GNOME apps, I’ve included it in the gnome-nightly container image directly, so you won’t have to install it. You can just prefix the above command with xvfb-run and its arguments:

xvfb-run -a -s "-screen 0 1024x768x24" flatpak build app ninja -C _build test

Thanks to Emmanuele Bassi for showing me how to get the display tests up and running with Xvfb.

Retrieving a Flatpak package

To export a Flatpak bundle, named nautilus-dev.flatpak, we will add the following line:

flatpak build-bundle repo nautilus-dev.flatpak --runtime-repo=https://sdk.gnome.org/gnome-nightly.flatpakrepo org.gnome.NautilusDevel

Then we will add an artifacts: block, in order to extract our Flatpak package and be able to download it locally.

artifacts:
  paths:
    - nautilus-dev.flatpak
  expire_in: 2 days

After that, there should be a “Browse” and a “Download” button in the job’s logs, from where you can get the Flatpak bundle. You can either get a zip with all the artifacts by clicking “Download”, or get the individual nautilus-dev.flatpak from “Browse”. To install it, you can either open it with gnome-software (probably KDE Discover too) or use the following command:

flatpak install --bundle nautilus-dev.flatpak

Caching in-between builds

In order to introduce caching between CI runs, we just need to add the following lines. In principle this should work, but there seem to be frequent cache misses that I am still investigating. If anyone is able to reduce the misses somehow, please let me know.

cache:
  paths:
    - .flatpak-builder/cache

Complete config

Here is what the nautilus .gitlab-ci.yml config for the Flatpak job looks like right now. It might have changed slightly, depending on when you read this.

flatpak:
  image: registry.gitlab.com/alatiera/gnome-nightly-oci/gnome-master:latest
  stage: test

  script:
    - flatpak-builder --stop-at=nautilus app build-aux/flatpak/org.gnome.Nautilus.json
    # Make sure to keep this in sync with the Flatpak manifest, all arguments
    # are passed except the config-args because we build it ourselves
    - flatpak build app meson --prefix=/app --libdir=/app/lib -Dprofile=development -Dtests=all _build
    - flatpak build app ninja -C _build install
    - flatpak-builder --finish-only --repo=repo app build-aux/flatpak/org.gnome.Nautilus.json
    # Make a Flatpak Nautilus bundle for people to test
    - flatpak build-bundle repo nautilus-dev.flatpak --runtime-repo=https://sdk.gnome.org/gnome-nightly.flatpakrepo org.gnome.NautilusDevel
    # Run automatic tests inside the Flatpak env
    - xvfb-run -a -s "-screen 0 1024x768x24" flatpak build app ninja -C _build test

  artifacts:
    paths:
      - nautilus-dev.flatpak
      - _build/meson-logs/meson-log.txt
      - _build/meson-logs/testlog.txt
    expire_in: 2 days

  cache:
    paths:
      - .flatpak-builder/cache

Going forward

The current config is not ideal yet. We have to keep it in sync with various parts of the Flatpak manifest, and we essentially replicate half of the functionality that’s specified in it. It would be nice if we could use the usual one-liner to build the Flatpak instead.
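For comparison, this is the kind of one-liner I mean, letting flatpak-builder drive the whole build from the manifest on its own:

flatpak-builder --repo=repo app build-aux/flatpak/org.gnome.Nautilus.json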

Also, the cache-miss issues mentioned above are driving me mad: when setting up gnome-builder’s CI, it would spend 11-12 min rebuilding all the dependencies and half a minute building gnome-builder. What seems to happen is that it hits the ostree cache points for gnome-builder, but rejects them for the modules/dependencies. Sadly, I never figured out why.

The Wikipedia page for Xvfb states that it has been replaced by xf86-video-dummy since X.org 7.8, and someone should probably look into using that for the tests instead.

But this setup should be good enough to get you started. If you are an app maintainer and want to set this up but don’t have the time, or you are having trouble with something, I want to hear from you! Feel free to ping me anytime in #gnome-hackers or send me an email.

Continuous Integration in Librsvg, Part 3

Caching stuff

Generally, 5 min per job does not seem like a terribly long time to wait, but it can add up really quickly when you add a couple of jobs to the pipeline. First, let’s take a look at where most of the time is spent. Jobs currently are spawned in a clean environment, which means that each time we want to build the Rust part of librsvg, we download the whole cargo registry and all of the cargo dependencies. That’s our first low-hanging fruit! Another side effect of the clean environment is that we build librsvg from scratch each time, meaning we don’t make use of the incremental compilation that modern compilers offer. So let’s get started.

Cargo Registry

According to the cargo docs, the registry cache is stored in $CARGO_HOME (by default under $HOME/.cargo). Gitlab-CI though only allows you to cache things that exist inside your project’s root (they do not need to be tracked by git). So we have to somehow relocate $CARGO_HOME to somewhere we can extract it from. Thankfully, that’s as easy as setting $CARGO_HOME to our desired path.

.test_template: &distro_test
  before_script:
    - mkdir -p .cargo_cache
    # Only stuff inside the repo directory can be cached
    # Override the CARGO_HOME variable to force it location
    - export CARGO_HOME="${PWD}/.cargo_cache"
  script:
    - echo foo
 
  cache:
    paths:
      - .cargo_cache/

What’s new in our template above, compared to part 1, are the before_script: and cache: blocks. In the before_script: we first create the .cargo_cache folder if it does not exist (cargo is probably smart enough to not need this, but ccache isn’t, so better safe than sorry I guess). Then we export the new $CARGO_HOME location. Finally, in the cache: block we set which folder we want to cache. That’s it; now our cargo registry and downloaded crates should persist across builds!

Caching Rust Artifacts

The only thing needed to cache the rustc build artifacts is to add target/ to the cache: block. That’s it, I am serious.

cache:
  paths:
    - target/

Caching C Artifacts with ccache

C and ccache, on the other hand, are a completely different story sadly. It did not help that my knowledge of C and build systems approximates zero. Thankfully, while searching I found another post, from Ted Gould, where he describes how ccache was set up for Inkscape. The following config ended up working for librsvg’s current autotools setup.

.test_template: &distro_test
  before_script:
    # ccache Config
    - mkdir -p ccache
    - export CCACHE_BASEDIR=${PWD}
    - export CCACHE_DIR=${PWD}/ccache
    - export CC="ccache gcc"

  script:
    - echo foo
  cache:
    paths:
      - ccache/

I got stuck on how to actually call gcc through ccache, since it depends on the build system you use (see the export CC line). Shout out to Christian Hergert for showing me how to do it!

Cache behavior

One last thing: we want each of our jobs to have an independent cache, as opposed to one shared across the pipeline. This can be achieved by using the key: directive. I am not sure exactly how it works, and I wish the docs would elaborate a bit more. In practice, the following line will make sure that each job on each branch has its own cache. For more complex configurations I suggest looking at the gitlab docs.

cache:
    # JOB_NAME - Each job will have its own cache
    # COMMIT_REF_SLUG = Lowercase name of the branch
    # ^ Keep different caches for each branch
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

Final config and results

So here is the cache config as it exists today on Librsvg’s master branch. This brought the build time of each job from the ~4-5 min where we left it in part 2 down to ~1-1.5 min. Pretty damn fast! But what if you want to do a clean build, or rule out the possibility that the cache is causing bugs and failed runs? Well, if you happen to be on gitlab 10.4 or later (and GNOME is) you can clear the cache from the web GUI. If not, you probably have to contact a gitlab administrator.

.test_template: &distro_test
  before_script:
    # CCache Config
    - mkdir -p ccache
    - mkdir -p .cargo_cache
    - export CCACHE_BASEDIR=${PWD}
    - export CCACHE_DIR=${PWD}/ccache
    - export CC="ccache gcc"

    # Only stuff inside the repo directory can be cached
    # Override the CARGO_HOME variable to force it location
    - export CARGO_HOME="${PWD}/.cargo_cache"
  script:
    - echo foo

  cache:
    # JOB_NAME - Each job will have its own cache
    # COMMIT_REF_SLUG = Lowercase name of the branch
    # ^ Keep different caches for each branch
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
      - target/
      - .cargo_cache/
      - ccache/

Continuous Integration in Librsvg, Part 2

Custom Images for everyone!

In the previous post we set up a small pipeline that builds and runs the test suite across some distributions, checks the code formatting, and (Soon™) will deny clippy warnings.

In this part we will try to reduce the time it takes for the pipeline to run by using custom container images bundled with the dependencies needed to build Librsvg, instead of downloading and installing them each time.

Creating an image

First we will need to find a base image of the distro we want to build upon. Most distributions have official images published on Dockerhub. I am gonna use the fedora image, but the same process can be done for any image.

We will create a new file named fedora_latest.yml, but the name doesn’t really matter. Then add the following content to it:

FROM fedora:latest

RUN dnf -y update && dnf -y upgrade && \
dnf install -y gcc rust rust-std-static cargo make vala \
automake autoconf libtool gettext itstool \
gdk-pixbuf2-devel gobject-introspection-devel \
gtk-doc git redhat-rpm-config gtk3-devel ccache \
libxml2-devel libcroco-devel cairo-devel pango-devel && \
dnf clean all

The first line, FROM fedora:latest, specifies the base image we will use. The first part, fedora, specifies the repository it will come from; if no registry is passed, it will look in Dockerhub by default, I think.

Then, in the rest of the file, we use the standard dnf commands to install the dependencies we want.

Last we will need to build the image with the following command:

docker build -f fedora_latest.yml -t librsvg/fedora:latest .

Notice the dot at the end. That cost me half an hour to debug the first time 🙁 (I would suggest looking at buildah for building OCI images instead of docker).

After that is complete, docker images should have an entry like the following:

REPOSITORY        TAG       IMAGE ID        CREATED          SIZE
librsvg/fedora    latest    3d92c1b0ea5a    6 minutes ago    992 MB

It’s now possible to emulate the Gitlab CI environment with the following command:

docker run -ti librsvg/fedora:latest bash

This will give us a bash shell inside the image we just built, from where we can test cloning the git repo and running the test suite. I plan on writing a separate post about using the containers we are going to build for compiling and testing librsvg, in a bit more depth.
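As a quick taste, a manual run inside that shell would look roughly like this (a hedged sketch; the clone URL and the autotools steps are assumptions based on librsvg’s setup at the time):

git clone https://gitlab.gnome.org/GNOME/librsvg.git
cd librsvg
./autogen.sh   # generates and runs configure
make check     # builds librsvg and runs the test suite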

Pushing our image to a registry

We built our image; now we need some way for the Gitlab CI to fetch it and use it. One way is to push it to an online registry like Dockerhub. Ideally though, we want to avoid depending on Dockerhub or any other (possibly proprietary) registry that we have no control over. Turns out gitlab can have an integrated container registry FOR EACH PROJECT.

As of the time of writing this post, the GNOME gitlab migration is still ongoing and the container registry is not yet enabled. I plan on migrating the librsvg-oci-registry from gitlab.com as soon as the feature becomes available. For now, and for the rest of the post, I will make use of the gitlab.com infrastructure, but migrating or replicating the setup on the gitlab deployment of your choice (e.g. gitlab.gnome.org or salsa.debian.org) should be identical, apart from changing the base URLs.

The only prerequisite to using the gitlab registry (apart from it being enabled) is to have a gitlab project (a repository, in Github terms). I’ve created a new project under my gitlab.com account called librsvg-oci-images. When navigating to the Registry section (in the left bar in the gitlab 10.x series), the following instructions are shown.

docker login registry.gitlab.com

Once you log in, you’re free to create and upload a container image using the common build and push commands

docker build -t registry.gitlab.com/alatiera/librsvg-oci-images . 
docker push registry.gitlab.com/alatiera/librsvg-oci-images

Assuming you rebuilt the image with a tag that matches the registry’s namespace, that’s all it takes to push an image to it.

Automating Image builds

Building an image and pushing it manually can get tiresome and take a long time (especially on slow internet connections). What if we could use Gitlab-CI to build and push the images automatically, on a regular basis?

It’s certainly possible; in fact, there’s even a whole section about it in the gitlab docs. I’ve also found useful this post from .

First we will commit the fedora_latest.yml file we created earlier to the repo. Then we will need to add a .gitlab-ci.yml file to the librsvg-oci-images repository. For now we will just put in the following, to assert that it’s working.

variables:
  FEDORA_LATEST: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/fedora:latest

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
    - docker build --pull -f cross-distro-images/fedora_latest.yml -t ${FEDORA_LATEST} .
    - docker push ${FEDORA_LATEST}

Gitlab-CI sets up some environment variables on every run, which we can use to log in and push to the registry. It’s almost a carbon copy of the examples in the gitlab docs, though for now we’ve hardcoded the path to the fedora Dockerfile (currently under the cross-distro-images folder in the repo). If this works, the resulting image will end up at registry.gitlab.com/alatiera/librsvg-oci-images/fedora:latest.

Indeed the pipeline was successful and we can pull our new image with the following command:

docker pull registry.gitlab.com/alatiera/librsvg-oci-images/fedora:latest

Alright, now that this works let’s add the rest of the images we want to build and refactor the .gitlab-ci.yml file a bit.

First we will add some more dockerfiles in the cross-distro-images folder in the repository root.

Then I will fast-forward a bunch of failed attempts to get this working. Eventually our .gitlab-ci.yml will look something like this:

stages:
  - distro_image

# Expects $DISTRO_NAME variable which should be the name of the distro image ex. ubuntu
# Expects $DISTRO_VER variable which should be the version distro image ex. 18.04
# Expects $OCI_YML variable which should be the path to the dockerfile
.distro_template: &distro_build
  before_script:
    - export IMAGE=${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${DISTRO_NAME}:${DISTRO_VER}
  script:
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
    - docker build --pull -f ${OCI_YML} -t ${IMAGE} .
    - docker push ${IMAGE}
  allow_failure: true

fedora:latest:
  variables:
    DISTRO_NAME: "fedora"
    DISTRO_VER: "latest"
    OCI_YML: "cross-distro-image/fedora_latest.yml"

  <<: *distro_build

fedora:rawhide:
  variables:
    DISTRO_NAME: "fedora"
    DISTRO_VER: "rawhide"
    OCI_YML: "cross-distro-image/fedora_rawhide.yml"

  <<: *distro_build

debian:testing:
  variables:
    DISTRO_NAME: "debian"
    DISTRO_VER: "testing"
    OCI_YML: "cross-distro-image/debian_testing.yml"

  <<: *distro_build

opensuse:tumbleweed:
  variables:
    DISTRO_NAME: "opensuse"
    DISTRO_VER: "tumbleweed"
    OCI_YML: "across-distro-image/opensuse_tumbleweed.yml"

  <<: *distro_build

Basically, what happened is that we made a generic template and used environment variables to pass it the arguments we want in each case. Here is the result after we push the file and the pipeline runs.

And here is what the registry looks like:

Now the only thing that’s left is to switch the librsvg pipeline to use the custom images instead. To do so we simply have to change the image: key and delete the before_script: that used to install the dependencies, since they are now included in the image.

So from this:

fedora:latest:
  image: fedora:latest

  before_script:
    - dnf install -y gcc rust ...  gtk3-devel
 <<: *distro_test

It will become like this:

fedora:latest:
  image: registry.gitlab.com/alatiera/librsvg-oci-images/fedora:latest

 <<: *distro_test

Automatically rebuilding and updating the Images

On each commit now, Gitlab-CI will build and push new images to the registry. But Gitlab-CI also has a nice feature where you can schedule pipelines to run on regular time intervals. So we can have, for example, weekly or monthly rebuilds of updated images without having to push new commits or trigger manual rebuilds. When a new image is pushed, the downstream CI in librsvg that uses the image should automatically pick it up. That means that as long as the image builds don’t fail, we won’t ever need to touch the repo again.

Setting up a scheduled pipeline is straightforward for the most part (they thought cron-like syntax would make for good UX…). I won’t go into detail, since there seems to be good documentation on how to do it in the gitlab docs here.

Conclusion and Results

Using custom images and avoiding downloading and installing package dependencies in each run brought the opensuse job down from ~20 min to ~5 min, and the fedora/debian jobs from ~10 min to ~4 min.

That means we can now run all the cross-distro jobs in less time than it initially took us to just test an opensuse build, and still have some spare time!

Though we still download the whole cargo registry and all the Rust dependencies each time, and we also build librsvg from scratch in each pipeline run. In the next part we will see how to cache C and Rust artifacts across builds and essentially do incremental builds.

Here is a link to the librsvg container registry, currently hosted at gitlab.com, if you want to take a closer look at the repo. Some stuff may be slightly different.