Vivid colors in Brno

Co-authored by Sebastian Wick & Jonas Ådahl.

From April 24 to 26, Red Hat invited people working on compositors and display drivers to come together to collaborate on bringing the Linux graphics stack to the next level. Three high-level topics were discussed at length: Color Management, High Dynamic Range (HDR), and Variable Refresh Rate (VRR). This post goes through the discussions that took place, and the occasional rough consensus reached among the people who attended.

The event itself aimed to be as inclusive and engaging as possible, meaning participants could attend either in person, at the Red Hat office in Brno, Czech Republic, or remotely via a video link. The format of the event was structured to give remote and physical attendees an equal opportunity to participate in discussions. While the hallway track can be a great way to collaborate, discussions accessible remotely were prioritized by having two available rooms, each with its own video link.

This meant that if the main room wanted to continue on the same topic while some wanted to do a breakout session, they could go to the other room, and anyone attending remotely could tag along by connecting to the other video link. In the end, the breakout room became the room where people collaborated on various things in a less structured manner, leaving the main room to cover the main topics. One reason for this is that the microphones in both rooms were a bit too good, effectively catching any conversation anyone had anywhere in the room. Making one of the rooms a bit more chaotic, while the other stayed focused, also allowed for both ways of collaborating.

For the kernel side, people working on AMD, Intel and NVIDIA drivers were among the attendees, and for user space there was representation from gamescope, GNOME, KDE, smithay, Wayland, weston and wlroots. Some of those people are community contributors, and some attended on behalf of Red Hat, Canonical, System76, sourcehut, Collabora, Blue Systems, Igalia, AMD, Intel, Google, and NVIDIA. We had a lot of productive discussion, ending up with a 20 (!) page document of notes.

Discussion with remote attendees during the hackfest

Color management & HDR

Wayland

Color management in the Linux graphics stack is shifting in how it is implemented: away from the style used in X11, where the display server (X.org) takes a hands-off approach and the end result depends on individual client capabilities, to an architecture where the Wayland display server takes an active role in ensuring that all clients, be they color aware or not, show up on screen correctly.

Pekka Paalanen and Sebastian Wick gave a summary of the current state of digital color on Linux and Wayland. For full details, see the Color and HDR documentation repository.

They described the in-development color-representation and color-management Wayland protocols. The color-representation protocol lets clients describe the way color channels are encoded, and the color-management protocol lets clients describe the color channels’ meaning, which together completely describe the appearance of surfaces. The protocols also give clients information about how they can optimize their content for the target monitor’s capabilities, minimizing the color transformations the compositor has to do.

Another key aspect of the Wayland color protocols in development is that compositors will be able to choose what they want to support. This allows, for example, implementing HDR without involving ICC workflows.

There is already a broad consensus that this type of active color management aligns with the Wayland philosophy and while work is needed in compositors and client toolkits alike, the protocols in question are ready for prototyping and review from the wider community.

Colors in kernel drivers & compositors

There are two parts to HDR and color management for compositors. The first is creating content from different SDR and HDR sources using color transformations. The second is signaling the monitor to enter the desired mode. Given the current state of kernel API capabilities, compositors are in general required to handle all of their color transformations using shaders during composition. In the short term we will focus on removing the last blockers for HDR signaling; in the long term we will work on making it possible to offload color space conversions to the display hardware, which should ideally make it possible to power down the GPU while playing e.g. a movie.

Short term

Entering HDR mode is done by setting the colorimetry (KMS Colorspace property) and overriding the transfer characteristics (KMS HDR_OUTPUT_METADATA property).

Unfortunately the design of the Colorspace property does not mix well with the current broader KMS design where the output format is an implementation detail of the driver. We’re going to tweak the behavior of the Colorspace property such that it doesn’t directly control the InfoFrame but lets the driver choose the correct variant and transparently convert to YCC using the correct matrix if required. This should allow AMD to support HDR signaling upstream as well.

The HDR_OUTPUT_METADATA property is a bit weird as well and should be documented. Changing it might require a mode set, and changing the transfer characteristics part of the blob will make monitors glitch, while changing other parameters must not require a mode set and must not glitch.
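To make the short-term plan concrete, here is a rough, hedged sketch of what HDR10 signaling looks like from user space with today’s connector properties, using the libdrm atomic API. The property IDs and the enum value for BT.2020 colorimetry are assumed to have been looked up beforehand (e.g. via drmModeObjectGetProperties() and drmModeGetProperty()), and error handling is omitted.

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static int enter_hdr_mode(int drm_fd, uint32_t connector_id,
                              uint32_t colorspace_prop_id,
                              uint64_t bt2020_enum_value,
                              uint32_t hdr_metadata_prop_id)
    {
        struct hdr_output_metadata meta = {0};
        drmModeAtomicReq *req;
        uint32_t blob_id = 0;
        int ret;

        meta.metadata_type = 0;                  /* HDMI static metadata type 1 */
        meta.hdmi_metadata_type1.metadata_type = 0;
        meta.hdmi_metadata_type1.eotf = 2;       /* SMPTE ST 2084 (PQ), per CTA-861-G */
        /* Mastering display primaries, white point, min/max luminance, MaxCLL
         * and MaxFALL would be filled in from the content or the monitor's EDID. */

        drmModeCreatePropertyBlob(drm_fd, &meta, sizeof(meta), &blob_id);

        req = drmModeAtomicAlloc();
        drmModeAtomicAddProperty(req, connector_id, colorspace_prop_id,
                                 bt2020_enum_value);
        drmModeAtomicAddProperty(req, connector_id, hdr_metadata_prop_id, blob_id);

        /* As noted above, changing the transfer characteristics may require a
         * full mode set, so allow one. */
        ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        drmModeAtomicFree(req);

        return ret;
    }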

Both landing support upstream for the AMD driver, and improvements to the documentation should happen soon, enabling proper upstream HDR signaling.

Vendor specific uAPI for color pipelines

Recently a proposal for adding vendor-specific properties to expose hardware color pipelines via KMS has been posted, and while it is great to see work being done to improve the situation in the Linux kernel, there are concerns that this opens the door to per-vendor APIs that end up being necessary for compositors to implement, effectively reintroducing per-vendor GPU drivers in user space outside of Mesa.

Still, upstream support in the kernel has its upsides, as it for example makes it much easier to experiment. A way forward that was discussed is to propose that vendor-specific color pipeline properties be handled with care, by requiring them to be clearly documented as experimental, and disabled by default both behind a build configuration option and an off-by-default module parameter.

A proposal for this will be sent by Harry Wentland to the relevant kernel mailing lists.

Color pipelines in KMS

Long term, KMS should support color pipelines without any experimental flags, and there is wide agreement that it should be done with a vendor agnostic API. To achieve this, a proposal was discussed at length; to summarize, the goal is to introduce a new KMS object for color operations. A color operation object exposes a low-level mathematical function (e.g. matrix multiplication, 1D or 3D look-up tables) and a link to the next operation. To declare a color pipeline, drivers construct a linked list of these operations, for example 1D LUT → matrix → 1D LUT to describe the current DEGAMMA_LUT → CTM → GAMMA_LUT KMS pipeline.
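Since the proposal is still an RFC, any concrete interface shown here would be speculative; the following sketch, with made-up names rather than real KMS interfaces, only illustrates the shape of the idea: color operation objects, each exposing one mathematical function, chained into a pipeline.

    /* Illustrative only; not actual uAPI. */
    enum color_op_type {
        COLOR_OP_1D_LUT,
        COLOR_OP_3D_LUT,
        COLOR_OP_MATRIX,
    };

    struct color_op {
        enum color_op_type type;
        unsigned int data_blob_id;  /* LUT entries or matrix coefficients, set by user space */
        struct color_op *next;      /* link to the next operation, or NULL at the end */
    };

    /*
     * A driver advertising the equivalent of today's
     * DEGAMMA_LUT -> CTM -> GAMMA_LUT pipeline would expose the chain
     * 1D LUT -> matrix -> 1D LUT; user space walks the list and fills in
     * the blobs it needs, leaving the rest as identity operations.
     */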

The discussions primarily focused on per plane color pipelines for the pre-blending stage, but the same concept should be reusable for the post blending stage on the CRTC.

Eventually this work should also make it possible to cleanly separate KMS properties which change the colors (i.e. color operations) from properties changing the mode and signaling to sinks, such as Broadcast RGB, Colorspace, max_bpc.

It was also agreed that user space needs more control over the output format, i.e. what is transmitted over the wire. Right now this is a driver implementation detail, chosen such that the bandwidth requirements of the selected mode are satisfied. In particular, making it possible to turn off YCC subsampling, specifying the minimum bit depth, and specifying the compression strength for DSC (Display Stream Compression) seem to have consensus.

There are many more details handling all the quirks that hardware may have. For more details and further discussion about the color pipeline proposal, head over to the RFC that Simon Ser just sent to the relevant mailing lists.

Testing & VKMS

Testability of color pipelines and KMS in general was a topic that was brought up as well, with two areas of interest: testing compositors and the generic DRM layer in the kernel using VKMS, and testing actual kernel drivers.

The state of VKMS is to some degree problematic; it currently lacks a large enough pool of established contributors that can take on maintainership responsibilities, i.e. reviewing and landing code, but at the same time there is an urge to make it a more central part of GPU driver development in general, where it can take a more active role in ensuring cross-driver conformance. Ways to create more incentive for both kernel developers and compositor developers to help out were discussed, and while the ability to test compositors is a relatively good incentive, one idea was to require new DRM properties to also get a VKMS implementation in order to land. This is, however, not easy, since a significant amount of bootstrapping is needed to make that viable. Some ideas were thrown around, and hopefully something will come out of it; keep an eye on the relevant mailing lists for something related to this area.

For testing actual drivers, the usage of Chamelium was discussed, and while everyone agreed it’s something that is definitely nice to have, it takes a significant amount of resources to maintain wired up CI runners for the community to rely on. Ideally a setup that can be shared across the different compositors and GPU drivers would be great, but it’s a significant task to handle.

Variable Refresh Rate

Smoothing out refresh rate changes

Variable Refresh Rate monitors driven at a certain mode have a minimum and maximum refresh cycle duration and the actual duration can be chosen for every refresh cycle. One problem with most existing VRR monitors however is that when the refresh duration changes too quickly, they tend to produce visible glitches. They appear as brightness changes for a fraction of a second and can be very jarring. To avoid them, each refresh cycle must change the duration only up to some fixed amount. The amount however varies between monitors, with some having no restriction at all.

A VESA certification program is currently being rolled out that aims to certify monitors where any change in the refresh cycle duration does not result in glitches. For all other monitors, the increase and decrease in duration that does not result in glitches is unknown, unless provided by optional EDID/DisplayID data blocks.

Driving monitors glitch-free without machine-readable information therefore requires another approach. One idea is to make the limits configurable. Requiring all users to tweak and fiddle to make it work well enough, however, is not very user friendly, so another idea that was discussed is to maintain a database, similar to the one used by libinput but in libdisplay-info, that contains the required information about monitors even when no such information is made available by the vendor.

With all of the required information, the smoothing of refresh rate changes still needs to happen somewhere. It was debated whether this should be handled transparently by the kernel, or whether it should be completely up to user space. There are pros and cons to both, for example better timing ability in the kernel, but less black-box magic if handled by user space. In the end, the conclusion was for user space components (i.e. compositors) to handle this themselves first, and then reconsider at some point in the future whether that is enough, or whether new kernel uAPI is needed.
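As a rough illustration of what this user space smoothing could look like, here is a minimal sketch that clamps how much the refresh cycle duration may change per frame. The struct fields, and in particular the per-monitor max_step_us limit, are assumptions; the real values would come from optional EDID/DisplayID data blocks or a libdisplay-info database entry.

    /* Minimal sketch of refresh-duration smoothing in a compositor. All
     * durations are in microseconds; max_step_us is the hypothetical
     * per-monitor limit on glitch-free changes per refresh cycle. */
    #include <stdint.h>

    struct vrr_limits {
        int64_t min_duration_us;  /* from the monitor's VRR range */
        int64_t max_duration_us;
        int64_t max_step_us;      /* largest glitch-free change per cycle */
    };

    static int64_t next_refresh_duration(const struct vrr_limits *lim,
                                         int64_t previous_us, int64_t target_us)
    {
        int64_t next = target_us;

        /* Move towards the target, but never change by more than max_step_us
         * in a single refresh cycle, to avoid visible brightness glitches. */
        if (next > previous_us + lim->max_step_us)
            next = previous_us + lim->max_step_us;
        else if (next < previous_us - lim->max_step_us)
            next = previous_us - lim->max_step_us;

        /* Stay within what the current mode allows. */
        if (next < lim->min_duration_us)
            next = lim->min_duration_us;
        if (next > lim->max_duration_us)
            next = lim->max_duration_us;

        return next;
    }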

Low Framerate Compensation

The refresh rates that a VRR monitor can achieve typically do not cover a number of commonly used low frame rates, such as 30, 25, or 24 Hz. To still be able to show such content without stutter, the display can be driven at a multiple of the target frame rate, presenting new content only on every n-th refresh cycle.
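A hedged sketch of that idea (the function and its parameters are illustrative, not an existing kernel or compositor API): pick the smallest integer multiple of the content rate that falls inside the monitor’s VRR range.

    /* For 24 Hz content on a 48-144 Hz panel this returns 2: the display is
     * driven at 48 Hz and new content is presented on every second cycle. */
    static int lfc_multiplier(double content_hz, double vrr_min_hz, double vrr_max_hz)
    {
        for (int n = 1; n * content_hz <= vrr_max_hz; n++) {
            if (n * content_hz >= vrr_min_hz)
                return n;
        }
        return 0; /* no suitable multiple; fall back to a fixed refresh rate */
    }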

Right now this Low Framerate Compensation (LFC) feature is built into the kernel driver, and when VRR is enabled, user space can transparently present content at rates even lower than what the display supports. While this seems like a good idea, there are problems with this approach. For example, the cursor can only be updated when there is a content update, making it very sluggish because of the low rate of content updates, even though the screen refreshes multiple times in between. Fixing this either requires a special KMS commit which does not result in an immediate page flip but ends up on one of the refresh cycles inserted by LFC, or implementing LFC in user space instead. Like with the refresh rate change smoothing discussed earlier, moving LFC to user space might be possible, but it might also require some help from the kernel to be able to time page flips well enough.

Wayland

For VRR to work, applications need to provide content updates on a surface at a semi-regular interval. GUI applications, for example, often only draw when something changes, which makes the updates irregular, driving VRR to its minimum refresh rate until e.g. an animation is playing and VRR ramps the refresh rate back up over multiple refresh cycles. This results in choppy mouse cursor movements and animations for some time. GUI applications do sometimes provide semi-regular updates, e.g. during animations or video playback, while some applications, like games, always provide semi-regular updates.

Currently there is no¹ Wayland protocol letting applications advertise that a surface works with VRR at a given moment, or at all. Nor is there a way for a compositor to automatically determine whether an app or a surface is suitable for VRR. For Wayland-native applications a protocol to communicate this information could be created, but there are a lot of applications out there which would work fine with VRR yet will never be updated to support such a protocol.

Maintaining a database similar to the one mentioned above, but for applications, was discussed, but there is no clear winner in how to do so, or where to store the data. Maintaining a list is cumbersome, and it complicates the ability for applications to work with VRR on release, or on distributions with out-of-date databases. Another idea was a desktop file entry stating support, but this too has its downsides. All in all, there is no clear path forward for how to transparently enable VRR for applications without causing issues.

1. Except for a protocol proposal.

Wrap-up

Brno, Czech Republic

The hackfest was a huge success! Not only was this a good opportunity to get everyone up to speed and learn about what everyone is doing, having people with different backgrounds in the discussions made it possible to discuss problems, ideas and solutions spanning all the way from clients over compositors, to drivers and hardware. Especially on the color and HDR topics we came up with good, actionable consensus and a clear path to where we want to go. For VRR we managed to pin-point the remaining issues and know which parts require more experimentation.

For GNOME, color management, HDR and VRR are all topics that are being actively worked on, and the future is both bright and dynamic, not only when it comes to luminance and color intensity, but also when it comes to the rate at which monitors present all these intense colors.

Dor Askayo, who has been working on bringing VRR to GNOME, attended part of the hackfest, and together we can hopefully bring experimental VRR to GNOME soon. There will be more work needed to iron out the overall experience, as covered above, but getting the fundamental building blocks in place is a critical first step.

For HDR, work has been going on to attach color state information to the scene graph, and at the hackfest Georges Basile Stavracas, Sebastian Wick and Jonas Ådahl sat down and sketched out a new Clutter rendering API that aims to replace the current Clutter paint nodes API used in Mutter and GNOME Shell, making color transformations a first-class citizen. We will initially focus on using shaders for everything, but down the road the goal is to utilize the future color pipeline KMS uAPI for both performance and power consumption improvements.

We’d like to thank Red Hat for organizing and hosting the hackfest and for allowing us to work on these interesting topics, Red Hat and Collabora for sponsoring food and refreshments, and especially Carlos Soriano Sanchez and Tomas Popela for actually doing all the work making the event happen. It was great. Also thanks to Jakub Steiner for the illustration, and Carlos Soriano Sanchez for the photo from the hackfest.

For another great hackfest write-up, head over to Simon Ser’s blog post.

Automated testing of GNOME Shell

Automated testing is important to ensure software continues to behave as it is intended and it’s part of more or less all modern software projects, including GNOME Shell and many of the dependencies it builds upon. However, as with most testing, we can always do better to get more complete testing. In this post, we’ll dive into how we recently improved testing in GNOME Shell, and what this unlocks in terms of future testability.

Already existing testing

GNOME Shell already performs testing as part of its continuous integration (CI) pipeline, but tests have been limited to unit testing, meaning testing selected components in isolation to ensure they behave as expected. Due to the nature of the functionality the Shell implements, however, the amount of testing one can do as unit testing is rather limited. In something like GNOME Shell, it is just as important to test how things behave when used in their natural environment, i.e. instead of testing specific functionality in isolation, the whole Shell instance needs to be executed with all bits and pieces running as a whole, as if it was a real session.

In other words, what we need is to be able to run all of GNOME Shell as if it was installed and logged into on a real system.

Test Everything

As discussed, to actually test enough things, we need to run all of GNOME Shell with all its features, as if it was a real session. What this also means is that we don’t necessarily have the ability to set up actual test cases filled with asserts as one does with unit testing; instead we need mechanisms to verify the state of the compositor in a way that looks more like regular usage. Enter “perf tests“.

For many years, GNOME Shell has had automated performance tests that measure how well the Shell performs doing various tasks. Each test is a tiny JavaScript function that performs a few operations, while making sure all the performed operations actually happened; when it finishes, the Shell instance is terminated. For example, a “perf test” could look like

  1. Open overview
  2. Open notifications
  3. Close notifications
  4. Leave overview

As it turns out, this infrastructure fits rather neatly with the kind of testing we want to add here – tests that perform various tasks exercising user facing functionality.

There are, however, more ways to verify that things behave as expected than triggering these operations and ensuring that they executed correctly. The most immediate next step is to ensure that no warnings are logged during the whole test run. This is useful in part because GNOME Shell is largely written in JavaScript, which means the APIs provided by lower level components such as Mutter and GLib tend to have runtime input validation in introspected API entry points. Consequently, if an API is misused by some JavaScript code, it tends to result in warnings being logged. When GNOME Shell runs completely without warnings, we can be more confident that a particular change won’t introduce regressions.

This, however, is easier said than done, for two main reasons: we’ll be running in a container, and there are complications that come with mixing the memory management models of different programming languages.

Running GNOME Shell in a container

For tests to be useful, they need to run in CI. Running in CI means running in a container, and that is not all that straightforward when it comes to compositors. The containerized environment is rather different from a regularly installed and set-up Linux distribution; it lacks many services that are expected to be running and that provide important functionality needed to build a desktop environment, like service and session management (e.g. logging out), system management (e.g. rebooting), dealing with network connectivity, and so on.

Running with most of these services missing is possible, but results in many warnings, and a partially broken session. To get any useful testing done, we need to eliminate all of these warnings, without just silencing them. Enter service mocking.

Mocked D-Bus Services

In the world of testing, “mocking” involves creating an implementation of an API without the actual real-world implementation sitting behind it. Often these mocked services provide a limited, pre-defined subset of functionality, for example hard coding the results of API operations given a pre-defined set of possible input arguments. Sometimes a mocked API only needs to be there to pretend a service is available, and nothing more is needed unless the functionality it provides must be actively triggered.

As part of CI testing in Mutter, the basic building blocks for mocking services needed to run a display server in CI have been implemented, but GNOME Shell needs many more compared to plain Mutter. As of this writing, in addition to the few APIs Mutter relies on, GNOME Shell also needs the following:

  • org.freedesktop.Accounts (accountsservice) – For e.g. the lock screen
  • org.freedesktop.UPower (upower) – E.g. battery status
  • org.freedesktop.NetworkManager (NetworkManager) – Network management
  • org.freedesktop.PolicyKit1 (polkit) – Act as a PolKit agent
  • net.hadess.PowerProfiles (power-profiles-daemon) – Power profiles management
  • org.gnome.DisplayManager (gdm) – Registering with GDM
  • org.freedesktop.impl.portal.PermissionStore (xdg-permission-store) – Permission checking
  • org.gnome.SessionManager (gnome-session) – Log out / Reboot / …
  • org.freedesktop.GeoClue2 (GeoClue) – Geolocation control
  • org.gnome.Shell.CalendarServer (gnome-shell-calendar-server) – Calendar integration

The mock services used by Mutter are implemented using python-dbusmock, and Mutter conveniently installs its own service mocking implementations. Building on top of this, we can easily continue mocking API after API until all the needed ones are provided.

As of now, either upstream python-dbusmock or GNOME Shell itself has mock implementations of all the mentioned services. All but one, org.freedesktop.Accounts, either already existed or needed only a trivial implementation. In the future, for further testing that involves interacting with the system, e.g. configuring Wi-Fi, we will need to expand what these mocked API implementations can do, but for what we’re doing initially, it’s good enough.
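The actual mocks are python-dbusmock templates, but the underlying idea is language-agnostic: a process owns a well-known bus name and answers with hard-coded data. Purely as an illustration (not the code GNOME Shell or Mutter uses), a stripped-down stub for org.freedesktop.UPower that exposes only its OnBattery property could look like this in C with GDBus:

    #include <gio/gio.h>

    static const char introspection_xml[] =
        "<node>"
        "  <interface name='org.freedesktop.UPower'>"
        "    <property name='OnBattery' type='b' access='read'/>"
        "  </interface>"
        "</node>";

    static GVariant *
    handle_get_property (GDBusConnection *connection, const char *sender,
                         const char *object_path, const char *interface_name,
                         const char *property_name, GError **error,
                         gpointer user_data)
    {
        /* Hard-coded answer: pretend we are always on AC power. */
        return g_variant_new_boolean (FALSE);
    }

    static const GDBusInterfaceVTable vtable = {
        .get_property = handle_get_property,
    };

    static void
    on_bus_acquired (GDBusConnection *connection, const char *name,
                     gpointer user_data)
    {
        GDBusNodeInfo *info = g_dbus_node_info_new_for_xml (introspection_xml, NULL);

        g_dbus_connection_register_object (connection, "/org/freedesktop/UPower",
                                           info->interfaces[0], &vtable,
                                           NULL, NULL, NULL);
    }

    int
    main (void)
    {
        /* In CI this would own the name on a private test bus rather than
         * the real system bus. */
        g_bus_own_name (G_BUS_TYPE_SYSTEM, "org.freedesktop.UPower",
                        G_BUS_NAME_OWNER_FLAGS_NONE,
                        on_bus_acquired, NULL, NULL, NULL, NULL);
        g_main_loop_run (g_main_loop_new (NULL, FALSE));
        return 0;
    }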

Terminating GNOME Shell

Mixing JavaScript, a garbage collected language, with C, with all its manual memory management, has its caveats, and this is especially true during tear down. In the past, the Mutter context was terminated first, followed later by the JavaScript context. Terminating the JavaScript context last prevented Clutter and Mutter objects from being destroyed, as JavaScript may still have held references to these objects. If you ever wondered why there tend to be warnings in the journal when logging out, this is why. All of these warnings and potential crashes mean any tests that rely on zero warnings would fail. We can’t have that!

To improve this situation, we had to shuffle things around a bit. In rough terms, we now terminate the JavaScript context first, ensuring there are no references held by JavaScript, before tearing down the backend and the Mutter context. To make this possible without introducing even more issues, the whole UI tree is torn down on shut-down, so that the actual JavaScript context disposal more or less only involves cleaning up defunct JavaScript objects.

In the past this was complicated too, since not all components could easily handle bits and pieces of the Shell getting destroyed in a rather arbitrary order, as it means signals get emitted when they are not expected, e.g. when parts of the Shell that were expected to still exist have already been cleaned up. A while ago, a new door was opened making it possible to handle this rather conveniently: enter the signal tracker, a helper that makes it possible to write code using signal handlers that are automatically disconnected on shutdown.

With the signal tracker in place and in use, a few smaller final fixes, and the aforementioned reversed tear-down order of the JavaScript context and the Mutter bits, we can now terminate without any warnings being logged.

And as a result, the tests pass!

Enabled in CI

Right now we’re running the “basic” perf test on each merge request in GNOME Shell. It performs some basic operations, including opening the quick settings menu, handles an incoming notification, opens the overview and application grid. A screen recording of what it does can be seen below.

What’s Next

More Tests

Testing more functionality than basic.js. There are some more existing “perf tests” that could potentially be used, but tests that aim at specific functionality unrelated to performance, for example window management or configuring the Wi-Fi, don’t really exist yet. This will become easier after the port to standard JavaScript modules, when tests no longer have to be included in the gnome-shell binary itself.

Input Events

So far, widgets are triggered programmatically. Using input events via virtual input devices means we get more fully tested code paths. Better test infrastructure for things related to input is being worked on for Mutter, and can hopefully be reused in GNOME Shell’s tests.

Running tests from Mutter’s CI

GNOME Shell provides a decent sanity test for Clutter, Mutter’s compositing library, so ensuring that it runs successfully and without warnings is useful to make sure changes there don’t introduce regressions.

Screenshot-based Tests

Using so-called reference screenshots, tests will be able to ensure there are no unintended visual changes. The basic infrastructure exists in, and can be exposed by, Mutter, but for something like GNOME Shell we probably need a way other than the in-tree reference images used in Mutter, in order to not make the gnome-shell git repository grow out of hand.

Multi-monitor

Currently the tests use a single virtual monitor with a fixed resolution, but this should be expanded to involve multiple monitors and hotplugging. Mutter has ways to create virtual monitors, but does not yet export this via an API consumable by GNOME Shell.

GNOME Shell Extensions

Not only does GNOME Shell itself need testing; running tests specifically for extensions, or running GNOME Shell’s own tests as part of testing extensions, would have benefits as well.

Splitting up the Frame Clock

Readers be advised, this is somewhat of a deep dive into the guts of Mutter. With that out in the open, let’s start!

Not too long ago, Mutter saw a merge request land that has one major aim: split up the frame clock so that, when using the Wayland session, each monitor is driven by its own frame clock. In effect, the goal is that e.g. a 144 Hz monitor and a 60 Hz monitor active in the same session will not have to wait for each other to update, and the space they occupy on the screen will be drawn at their own pace. A window on the 144 Hz monitor will paint at 144 Hz, and Mutter will composite to that monitor at 144 Hz, while a window on the 60 Hz monitor will paint at 60 Hz and Mutter will composite to that monitor at 60 Hz.

glxgears on a 75 Hz monitor next to weston-simple-egl on a 60 Hz monitor.

All of this is roughly achieved by the changes summarized below.

Preface

In the beginning of times, Clutter was an application toolkit. As such, it assumed (1) the existence of a window compositor, and (2) that the compositor is a different process. Back then, Wayland was still in its early infancy, and those assumptions didn’t conflict with writing an X11 window manager. After all, an X11 window manager is pretty much just another client application.

Over time, however, Clutter started to grow Wayland integration in itself. Deeper and deeper surgeries were made to it to accommodate it being used by a Wayland compositor.

In 2016, the Cogl and Clutter codebases were merged with the Mutter codebase, and they all live in the same repository now. However, to this day, relics from the time when Clutter was an application toolkit are still present in Mutter’s Clutter. One such relic is ClutterMasterClock.

ClutterMasterClock

ClutterMasterClock was the main frame clock that drove Clutter painting. For an application toolkit, only a single, global frame clock was necessary; but for a compositor toolkit, this design doesn’t fit the requirements of multi-monitor setups.

Over the last few cycles, there have been attempts to make it handle multiple monitors slightly better, juggling monitors with their own refresh rates and clocks using various tricks, but the fundamental design was standing in the way of substantial progress, so it has been completely decommissioned.

Enter ClutterFrameClock.

ClutterFrameClock is the new frame clock object that aims to drive a single “output”. Right now, it has a fixed refresh rate, and a single “frame listener” and “presenter” notifying about frames being presented. It is also possible to have multiple frame clocks running in parallel.

However, ClutterFrameClock alone isn’t enough to achieve independence of monitor redraws.

Stage Views

Mutter has a single stage that covers the union of all monitor rectangles. But how does it render different contents to each one of them?

That’s one of the main responsibilities of ClutterStageView.

ClutterStageView was the answer to the need of drawing the stage onto different framebuffers. A ClutterStageView corresponds roughly to one monitor. Each ClutterStageView holds the on-screen framebuffer that the monitor displays; if shadow framebuffers are used, ClutterStageView handles them as well; and finally, it also handles the monitor rotation.

Now, ClutterStageView also handles the monitor’s frame clock. By handling the frame clock, each view is also responsible for notifying about frames being presented, and for dispatching the frame clock.
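To make that relationship a bit more tangible, here is a toy model, written against hypothetical names rather than the actual Clutter API, of one frame clock per stage view, with each view repainting on its own schedule. The key point is simply that the two views never block on each other.

    #include <stdint.h>
    #include <stdio.h>

    struct frame_clock {
        double refresh_rate;           /* fixed per monitor, e.g. 60.0 or 144.0 */
        int64_t next_presentation_us;  /* when the next frame is due */
    };

    struct stage_view {
        const char *name;              /* stands in for the view's framebuffers etc. */
        struct frame_clock clock;      /* drives redraws for this view only */
    };

    static void dispatch_view(struct stage_view *view, int64_t now_us)
    {
        if (now_us < view->clock.next_presentation_us)
            return;

        printf("repaint and present %s\n", view->name);  /* paint + page flip */
        view->clock.next_presentation_us +=
            (int64_t)(1000000.0 / view->clock.refresh_rate);
    }

    int main(void)
    {
        struct stage_view views[] = {
            { "monitor-144hz", { 144.0, 0 } },
            { "monitor-60hz",  { 60.0, 0 } },
        };

        /* Simulate 50 ms of wall-clock time in 1 ms steps; the 144 Hz view
         * dispatches more often than the 60 Hz one, independently. */
        for (int64_t now_us = 0; now_us <= 50000; now_us += 1000)
            for (int i = 0; i < 2; i++)
                dispatch_view(&views[i], now_us);

        return 0;
    }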

The frame scheduling related logic (including flip counting, schedule time calculation, etc.) used to be spread out across ClutterMasterClockDefault, ClutterStage, ClutterStageCogl, MetaRendererNative, MetaStageNative, and MetaStageX11, but has now been concentrated in ClutterFrameClock and ClutterStageView alone.

Actors, Actors Everywhere

When animating interface elements, the core object doing that is ClutterTimeline and its subclass, ClutterTransition.

Timelines and transitions saw frames whenever the master clock ticked. With the master clock now gone, they need to find an appropriate frame clock to drive them. In most (and after this change, effectively all) cases a timeline was used to directly drive an animation related to an actor. This indirect relationship is now made explicit: the timeline uses the actor to find what stage view it is being displayed on, and with that information picks an appropriate frame clock to attach to.

For transitions, used extensively by GNOME Shell to implement animations, this is handled by making a ClutterAnimatable provide the actor, and for stand-alone timelines, it’s a property set directly on the timeline before it’s started.

This means that when an actor moves across the stage and enters a different stage view, the timeline will be notified about this and will decide whether to migrate to a different frame clock.

What About X11?

In the X11 session, we composite the whole X11 screen at once, without any separation between monitors. This remains unchanged, with the difference being where scheduling takes place (as mentioned in an earlier point). The improvements described here are thus limited to using the Wayland session.

Be aware of API changes

This is quite a substantial change in how painting works in Mutter, so API changes could not be avoided. With that in mind, the changes needed are small, and mostly handled transparently by GNOME Shell itself. In fact, in all of GNOME Shell’s JavaScript code, only two places needed changes.

To be specific, for extension developers, there are two things to keep in mind:

  • If you use a St.Adjustment, you must now pass an actor when constructing it. This actor will determine what frame clock drives the adjustment.
  • Some signals saw their type signatures change, namely ClutterStage::presented, ClutterStage::after-paint.

Final Thoughts

This is a big achievement for Mutter, GNOME Shell, their users, and especially for the contributors that were part of this. The road to reach this point was long and tortuous, and required coordinated efforts of dozens of contributors over the course of at least 5 years. We’d like to take a moment to appreciate this milestone and congratulate each and every contributor that was part of it. Thank you so much!