Trying TypeScript for GNOME Apps

JavaScript is an excellent language for getting things done quickly. However, flexibility and speed come at the cost of not catching certain classes of errors before your code runs. For example, you can easily miss type errors and syntax errors. Developers can address these issues via type annotations and static analyzers like ESLint, but that requires a significant amount of effort on the developer’s part – which contrasts with the ability to write a program in a flexible language quickly.

Generally, I prefer to work in a language that watches my back – the tooling is my partner as I write my code. Rust is an exemplary language in this department, but certain design decisions make it slower to write an app in Rust than in JS. I figured it wasn’t the right choice for the JS apps I maintain, but I also wasn’t fond of continuing to work in plain JS. These reasons led me to look into TypeScript, and after some research, I decided I would port an app as an experiment: GNOME Sound Recorder. As far as I know, this is the first complete application in the GNOME ecosystem written in TypeScript.

I ported the app in 6 main steps:

  • Porting to ES modules
  • Using type-checked JS
  • Porting to TypeScript in full
  • Enabling strict type checking
  • Re-adding ESLint
  • Fixing the flatpak manifest and our CI

Along the way, I sprinkled in some minor cleanups and style changes. Before we dive in, I want to say thank you to everyone who has helped with this effort, particularly:

  • Zander Brown
  • Philip Chimento
  • Sonny Piers
  • Andy Holmes
  • Evan Welsh
  • Michael Murphy

Everyone here provided invaluable advice, tooling, or code that helped me in the process of working on this.

Porting to ES Modules

While Sound Recorder may be the first TypeScript app, I am not the first person in GNOME to work with TypeScript. The developers at System76 are using it to write their tiling shell extension. Evan Welsh created a type definition generator and published type definitions on npm. 2 years ago, Michael Murphy was posting issues on a few GJS repositories about porting to TypeScript, and I noticed one such issue. I was curious, so I wrote an email, and he sent some helpful information about how their shell extension worked.

When I decided to give TypeScript a try recently, I remembered that information. I noticed that System76 has a script to change the module syntax of their TypeScript to the style of imports and exports that GJS supported at the time. Thankfully GJS now supports modules – so there’s no script required. Or rather, we wouldn’t need a script if we transitioned the Sound Recorder codebase to use modules. So I did precisely that. Most GJS code already uses modules, so it was something I needed to do as a general housekeeping task anyway.
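For reference, the change is essentially a syntax swap. Here is a sketch of what the transition looks like in GJS (the module names and paths are illustrative, not Sound Recorder’s actual files):

```
// Legacy GJS import style:
//   const { Gio, Gtk } = imports.gi;
//   const Utils = imports.utils;

// ES module style now supported by GJS:
import Gtk from 'gi://Gtk?version=4.0';
import Gio from 'gi://Gio';
import * as Utils from './utils.js';
```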

Type-checked JavaScript

Type-checking JavaScript is the use case that most developers have for using the GJS type definitions, but it’s also a first step in integrating TypeScript tooling with your JS code. The typescriptlang.org docs have a chapter on setting it up. At this stage, I received completion in my text editor and a few warnings for mistakes that no one had noticed. For example, I forgot to have a return statement in a non-void virtual function.
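If you want to try this stage yourself, a minimal configuration might look like the following sketch. These are standard tsc options; the include path is illustrative:

```
// tsconfig.json (or jsconfig.json) — type-check plain JS without emitting output
{
    "compilerOptions": {
        "allowJs": true,
        "checkJs": true,
        "noEmit": true
    },
    "include": ["src/**/*.js"]
}
```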

It wasn’t all sunny at first – GJS has some built-in functions and objects that it allows you to access globally. The TypeScript checker wasn’t aware of those at all. Philip Chimento helped me learn how to define pkg and the _() helper for translations in an ambient type definition file. I also needed to re-declare modules like gtk as gi://Gtk so that I could import them in the way that GJS would expect.
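An ambient declaration file for those globals might look roughly like this. This is a sketch – the exact shape of pkg and the module re-declaration depend on the type definitions in use:

```
// ambient.d.ts — globals that GJS provides at runtime
declare function _(msgid: string): string;

declare const pkg: {
    name: string;
    version: string;
};

// Make the URL-style specifier GJS uses resolve to the generated types:
declare module 'gi://Gtk' {
    export * from 'gtk';
}
```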

In addition to the module pains, there was another issue: The GJS-provided GObject.registerClass() function sets up property and template children fields. If you have a child named button, GJS will allow you to refer to it as _button. To get type-checking for fields, I needed to declare them at the beginning of the class. However, re-declaring the fields set them to null, breaking the application. So I came up with an ugly hack:

// @ts-ignore
/** @type {Gtk.Stack} */ _mainStack = this._mainStack;
// @ts-ignore
/** @type {Adw.StatusPage} */ _emptyPage = this._emptyPage;
// @ts-ignore
/** @type {Adw.Clamp} */ _column = this._column;
// @ts-ignore
/** @type {Gtk.Revealer} */ _headerRevealer = this._headerRevealer;
// @ts-ignore
/** @type {Adw.ToastOverlay} */ _toastOverlay = this._toastOverlay;

Yeah, that’s not pretty at all. I was scared to continue after writing such hacky code – I wasn’t sure if the final product would be this awful. Still, I pressed on.

Initial TypeScript Port

This part was surprisingly uneventful. I mainly edited the code to provide type annotations for functions and closures. I had already annotated fields in the previous step – I just needed to switch to TypeScript syntax. The new syntax came with a big boon: I could clean up the messy hack I had before. TypeScript has a ! operator that can be used on fields, telling the type checker that, even though it may seem like a field isn’t initialized, it really is. So the above code became:

_mainStack!: Gtk.Stack;
_emptyPage!: Adw.StatusPage;
_column!: Adw.Clamp;
_headerRevealer!: Gtk.Revealer;
_toastOverlay!: Adw.ToastOverlay;

After this, I was pretty happy with the app’s state – until my friend Zander Brown told me I could make TypeScript even stricter.

Strict Type Checking

Strict type checking enables null safety in TypeScript. That means that for every value that can be undefined or null, you must declare and handle those states explicitly. TypeScript has some neat shorthand for this. For example:

level?: Gst.Element;

Here the ? means that level can either be a Gst.Element or undefined. You only need this for fields you don’t set when declared or in the constructor.

let audioCaps = Gst.Caps.from_string(profile.audioCaps);
audioCaps?.set_value('channels', this._getChannel());

? can also be used as a quick null check. Gst.Caps.from_string() returns Gst.Caps | null, so in the second line above, audioCaps?.set_value() says “if audioCaps exists, set a value.”

This stage helped me catch areas where we simply forgot to check whether or not a value could be null before using it.

ESLint

ESLint caught a bunch of formatting and convention issues, but it also let me know that a “clever trick” I thought of to get around some troubling code blocks wasn’t so clever.

In JS I had this segment:

_init() {
    this._peaks = [];
    super._init({});

    let srcElement, audioConvert, caps;
    try {
        this.pipeline = new Gst.Pipeline({ name: 'pipe' });
        srcElement = Gst.ElementFactory.make('pulsesrc', 'srcElement');
        audioConvert = Gst.ElementFactory.make('audioconvert', 'audioConvert');
        caps = Gst.Caps.from_string('audio/x-raw');
        this.level = Gst.ElementFactory.make('level', 'level');
        this.ebin = Gst.ElementFactory.make('encodebin', 'ebin');
        this.filesink = Gst.ElementFactory.make('filesink', 'filesink');
    } catch (error) {
        log(`Not all elements could be created.\n${error}`);
    }

    try {
        this.pipeline.add(srcElement);
        this.pipeline.add(audioConvert);
        this.pipeline.add(this.level);
        this.pipeline.add(this.ebin);
        this.pipeline.add(this.filesink);
    } catch (error) {
        log(`Not all elements could be added.\n${error}`);
    }

    srcElement.link(audioConvert);
    audioConvert.link_filtered(this.level, caps);
}

Many of these functions could return null, so I had no idea how to rework this segment nicely. I cheated and used the ! operator to tell TS to ignore the nullability of these items:

constructor() {
    super();
    this._peaks = [];

    let srcElement: Gst.Element;
    let audioConvert: Gst.Element;
    let caps: Gst.Caps;

    this.pipeline = new Gst.Pipeline({ name: 'pipe' });

    try {
        srcElement = Gst.ElementFactory.make('pulsesrc', 'srcElement')!;
        audioConvert = Gst.ElementFactory.make('audioconvert', 'audioConvert')!;
        caps = Gst.Caps.from_string('audio/x-raw')!;
        this.level = Gst.ElementFactory.make('level', 'level')!;
        this.ebin = Gst.ElementFactory.make('encodebin', 'ebin')!;
        this.filesink = Gst.ElementFactory.make('filesink', 'filesink')!;
    } catch (error) {
        log(`Not all elements could be created.\n${error}`);
    }

    try {
        this.pipeline.add(srcElement!);
        this.pipeline.add(audioConvert!);
        this.pipeline.add(this.level!);
        this.pipeline.add(this.ebin!);
        this.pipeline.add(this.filesink!);
    } catch (error) {
        log(`Not all elements could be added.\n${error}`);
    }

    srcElement!.link(audioConvert!);
    audioConvert!.link_filtered(this.level!, caps!);
}

Using that operator was awful, and ESLint refused to let me get away with it. Thankfully Philip Chimento helpfully provided a solution with tuples and destructuring:

constructor() {
    super();
    this._peaks = [];

    let srcElement: Gst.Element;
    let audioConvert: Gst.Element;
    const caps = Gst.Caps.from_string('audio/x-raw');

    this.pipeline = new Gst.Pipeline({ name: 'pipe' });

    const elements = [
        ['pulsesrc', 'srcElement'],
        ['audioconvert', 'audioConvert'],
        ['level', 'level'],
        ['encodebin', 'ebin'],
        ['filesink', 'filesink']
    ].map(([fac, name]) => {
        const element = Gst.ElementFactory.make(fac, name);
        if (!element)
            throw new Error('Not all elements could be created.');
        this.pipeline.add(element);
        return element;
    });

    [srcElement, audioConvert, this.level, this.ebin, this.filesink]  = elements;

    srcElement.link(audioConvert);
    audioConvert.link_filtered(this.level, caps);
}

Fixing Flatpak & CI

Since TypeScript is part of the general JS ecosystem built around npm, I somehow needed to integrate the npm sources with my own. Thankfully, a handy manifest generator provides a list of sources in a format flatpak knows how to handle. I couldn’t find a good way for my build scripts to access the directory where flatpak installed the sources, so I needed to set up a module with steps to move the sources somewhere visible to my app module:

{
    "name" : "yarn-deps",
    "buildsystem" : "simple",
    "build-commands" : [
        "/usr/lib/sdk/node18/enable.sh",
        "mkdir -p /app",
        "cp -r $FLATPAK_BUILDER_BUILDDIR/flatpak-node/yarn-mirror/ /app"
    ],
    "sources" : [
        "generated-sources.json"
    ]
}

Then I set up meson to take the mirror dir as an option. After verifying that my build still worked locally and within flatpak, fixing my CI was as simple as adding two lines to the YAML:

  before_script:
    - flatpak --user install -y org.freedesktop.Sdk.Extension.node18//22.08beta

Next Steps

There are still two remaining tasks before I consider this port fully finished:

  • Make Sound Recorder use Promises and async/await syntax
  • Remove the type definitions from the repo and use them from npm

Promises have a significant papercut, making them difficult to use in the current state. There are workarounds, but none that I’ve gotten to work. For the type definitions, having less code to maintain in the Sound Recorder repo will be helpful.

Porting Sound Recorder to TypeScript has been a positive experience overall. I want to continue using TypeScript in place of plain JavaScript, including in core apps like Weather. There are a few questions that the community needs to answer before I would feel comfortable using TypeScript in core components:

  • How should distros (or component developers) handle sources from NPM?
  • Are we as a community comfortable depending on the NPM ecosystem?

I look forward to seeing how discussions about these questions pan out.

If you’re interested in following and supporting my future work, please consider sponsoring me on Patreon, GitHub Sponsors, or with one-time donations via PayPal.

In the next few days, I plan to share what I’ve done since my previous planning post.

Plans for GNOME 43 and Beyond

GNOME 42 is fresh out the door, bringing some exciting new features – like the new dark style preference and the new UI for taking screenshots and screencasts. Since we’re right on the heels of a release, I want to keep some momentum going and share my plans for features I want to implement during the upcoming release cycles.

Accent Colors and Libadwaita Recoloring API

The arrival of libadwaita allows us to do a few new things with the GNOME platform, since we have a platform library to help developers implement new platform features properly. For example, libadwaita gave us the opportunity to implement a global dark style preference with machinery that allows developers to choose whether they support it and easily adjust their app’s styling when it’s enabled. Alexander Mikhaylenko spent a long time reworking Adwaita so that it works with recoloring, and I want to take full advantage of that with the next two features: global accent colors and a recoloring API.

Libadwaita makes it simple to implement a much-wanted personalization feature: customizable accent colors. Global accent colors will be opt-in for app developers. For the backend I want accent colors to be desktop- and platform-agnostic like the new dark style preference. I plan to submit a proposal for this to xdg-desktop-portal in the near future. In GNOME it’d probably be best to show only a few QA-tested accents in the UI, but libadwaita would support arbitrary colors so that apps from KDE, GNOME, elementary OS, and more all use the same colors if they support the preference.

Developers using the recoloring API will be able to programmatically change colors in their apps and have dependent colors update automatically. They’ll be able to easily create presets which can be used, for example, to recolor the window based on a text view’s color scheme. Technically this is already possible with CSS in libadwaita 1.0, but the API will make it simpler. Instead of having to consider every single color, they’ll only need to set a few and libadwaita will handle the rest properly. The heuristics used here will also be used to ensure that accent colors have proper contrast against an app’s background.

There’s no tracking issue for this, but if you’re interested in this work you may want to track the libadwaita repository: https://gitlab.gnome.org/GNOME/libadwaita/

Adaptive Nautilus and Improved File Chooser

The GTK file chooser has a few issues. For example, it doesn’t support GNOME features like starred files, and it needs to be patched by downstream vendors (e.g. PureOS, Mobian) to work on mobile form factors. In order to keep up with platform conventions, ideally the file chooser should become part of GNOME’s core rather than part of GTK. There’s some discussion to be had on solutions, but I believe that it would be smart to keep the file chooser and our file browser in sync by making the file chooser a part of the file browser.

With all of that in mind, I plan to make Nautilus adaptive for mobile form factors and add a new file chooser mode to it. The file chooser living in Nautilus instead of GTK allows us to support GNOME platform features at GNOME’s pace rather than GTK’s pace, follow GNOME design patterns, and implement features like a grid view with thumbnails without starting from scratch.

If you’re interested in seeing this progress, keep track of the Nautilus repository: https://gitlab.gnome.org/GNOME/nautilus/

Loupe (Image Viewer)

For a while now I’ve been working on and off on Loupe, a new image viewer written in Rust using GTK4 and libadwaita. I plan for Loupe to be adaptive, touchpad- and touchscreen-friendly, and easy to use. I also want it to integrate with Nautilus, so that Loupe will follow the sorting settings you have for a folder in Nautilus.

In the long term we want Loupe to gain simple image editing capabilities, namely cropping, rotation, and annotations. With annotations Loupe can integrate with the new screenshot flow, allowing users to take screenshots and annotate them without needing any additional programs.

If Loupe sounds like an interesting project to you, feel free to track development on GitLab: https://gitlab.gnome.org/BrainBlasted/loupe/

Rewrite Baobab in Rust, and Implement new Design

Baobab (a.k.a. Disk Usage Analyzer) is written in Vala. Vala does not have access to a great ecosystem of libraries, and the tooling has left something to be desired. Rust, however, has a flourishing library ecosystem and wonderful tooling. Rust also has great GTK bindings that are constantly improving. By rewriting Baobab in Rust, I will be able to take full advantage of the ecosystem while improving the performance of its main function: analyzing disk usage. I’ve already started work in this direction, though it’s not available on GitLab yet.

In addition to the rewrite, I also plan to implement a redesign based on Allan Day’s mockups. The new design will modernize the UI, using new patterns and fixing a few major UI gripes people have with the current design.

You can keep track of progress on Baobab on GitLab: https://gitlab.gnome.org/GNOME/baobab

Opening Neighboring Files from FileChooser Portal

The xdg-desktop-portal file picker doesn’t allow opening neighboring files when you choose a file. Apps like image browsers, web browsers, and program runners all need a sandbox hole if they want to function without friction. If you use a web browser as a flatpak you may have run into this issue: opening an HTML file won’t load associated HTML files or media files. If you are working on a website locally, you need to serve it with a web server in order to preview it – e.g. with python -m http.server.

I want to work on a portal that allows developers to request access to neighboring files when opening one file. With this portal I could ship Loupe as a flatpak without requiring any sandbox holes, and apps like Lutris or Bottles would also be more viable as flatpaks.

If you want to learn more and follow along with progress on this issue, see the xdg-desktop-portal issue on GitHub: https://github.com/flatpak/xdg-desktop-portal/issues/463

Accessibility Fixups

GTK4 makes accessibility simpler than ever. However, there are still improvements to be made when it comes to making core apps accessible. I want to go through our core app set, test them with the accessibility tooling available, and document and fix any issues that come up.

Sponsoring my Work

I hope to be able to work on all of these items (and more I haven’t shared) this year. However, I am currently looking for work. Right now I would need to be looking for work full-time or working on something else full-time instead of working on these initiatives – I don’t have the mental bandwidth to do both. If you want to see this work get done, I could really use your support. There are three places where you can support me.

If I get enough sponsorships, I plan to start posting regular (bi-weekly) updates for GitHub sponsors and Patrons. I also may start streaming some of my work on Twitch or YouTube, since people seem to be interested in seeing how things get done in GNOME.

That’s all for now – thanks for reading until the end, and I hope we can get some or all of this done by the time of the next update.

Lifetimes, Clones, and Closures: Explaining the “glib::clone!()” Macro

One thing that I’ve seen confuse newcomers to writing GObject-based Rust code is the glib::clone!() macro. It’s foreign to people coming from normal Rust code who are trying to write GObject-based code, and it’s foreign to many people used to writing GObject-based code in other languages (e.g. C, Python, JavaScript, and Vala). Over the years I’ve explained it a few times, so I figured I should write a blog post I can point people to, describing in detail what the clone!() macro is, what it does, and why we need it.

Closures and Clones in Plain Rust

Rust has a nifty thing called a closure. To quote the official Rust book:

…closures are anonymous functions you can save in a variable or pass as arguments to other functions. You can create the closure in one place and then call the closure to evaluate it in a different context. Unlike functions, closures can capture values from the scope in which they’re defined.

Simply put, a closure is a function you can use as a variable or an argument to another function. Closures can “capture” variables from the environment, meaning that you can easily pass variables within your scope without needing to pass them as arguments. Here’s an example of capturing:

let num = 1;
let num_closure = move || {
    println!("Num times 2 is {}", num * 2); // `num` captured here
};

num_closure();

num is an i32, or a signed 32-bit integer. Integers are cheap, statically sized primitives, and they don’t require any special behavior when they are dropped. Because of this, it’s safe to keep using them after a move – so the type can and does implement the Copy trait. In practice, that means we can use our integer after the closure captures it, as it captures a copy. So we can have:

// Everything above stays the same
num_closure();
println!("Num is {}", num);

And the compiler will be happy with us. What happens if you need something dynamically sized and stored on the heap, like the data from a String? If we try this pattern with a String:

let string = String::from("trust");
let string_closure = move || {
    println!("String contains \"rust\": {}", string.contains("rust"));
};

string_closure();
println!("String is \"{}\"", string); 

We get the following error:

error[E0382]: borrow of moved value: `string`
  --> src/main.rs:10:34
   |
4  |     let string = String::from("trust");
   |         ------ move occurs because `string` has type `String`, which does not implement the `Copy` trait
5  |     let string_closure = move || {
   |                          ------- value moved into closure here
6  |         println!("String contains \"rust\": {}", string.contains("rust"));
   |                                                  ------ variable moved due to use in closure
...
10 |     println!("String is \"{}\"", string); 
   |                                  ^^^^^^ value borrowed here after move

Values of the String type cannot be copied, so the compiler instead “moves” our string, giving the closure ownership. In Rust, only one thing can have ownership of a value. So when the closure captures string, our outer scope no longer has access to it. That doesn’t mean we can’t use string in our closure, though. We just need to be more explicit about how it should be handled.

Rust provides the Clone trait that we can implement for objects like this. Clone provides the clone() method, which explicitly duplicates an object. Types that implement Clone but not Copy are generally types that can be of an arbitrary size and are stored on the heap. Values of the String type can vary in size, which is why it falls into this category. When you call clone(), you are usually creating a new full copy of the object’s data on the heap. So, we want to create a clone, and only pass that clone into the closure:

let s = string.clone();
let string_closure = move || {
    println!("String contains \"rust\": {}", s.contains("rust"));
};

The closure will only capture our clone, and we can still use the original in our original scope.

If you need more information on cloning and ownership, I recommend reading the “Understanding Ownership” chapter of the official Rust book.

Reference Counting, Abbreviated

When working with types of an arbitrary size, we may have types that are too large to efficiently clone(). For these types, we can use reference counting. In Rust, there are two types for this you’re likely to use: Rc<T> for single-threaded contexts, and Arc<T> for multi-threaded contexts. For now let’s focus on Rc<T>.

When working with reference-counted types, the reference-counted object is kept alive for as long as anything holds a “strong” reference. Rc<T> creates a new Rc<T> instance when you call .clone() and increments the number of strong references instead of creating a full copy. The number of strong references is decreased when an instance of Rc<T> goes out of scope. An Rc<T> can often be used in contexts where a reference &T is used. In particular, calling a method that takes &self on an Rc<T> will call the method on the underlying T. For example, some_string.as_str() would work the same whether some_string were a String or an Rc<String>.

For our example, we can simply wrap our String constructor with Rc::new():

let string = Rc::new(String::from("trust"));
let s = string.clone();
let string_closure = move || {
    println!("String contains \"rust\": {}", s.contains("rust"));
};

string_closure();
println!("String is \"{}\"", string); 

With this, we can capture and use larger values without creating expensive copies. There are some consequences to naively using clone(), and we’ll get into those below, but in a slightly different context.
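The counting behavior described above can be observed directly in plain Rust with Rc::strong_count():

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("hello"));
    assert_eq!(Rc::strong_count(&a), 1);

    // clone() increments the count; no heap copy of the string is made.
    let b = a.clone();
    assert_eq!(Rc::strong_count(&a), 2);

    // &self methods work through the Rc just like on a plain String:
    assert!(b.contains("ell"));

    // The count decreases when a handle goes out of scope.
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
}
```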

Closures and Copies in GObject-based Rust

When working with GObject-based Rust, particularly gtk-rs, closures come up most often when working with signals. Signals are a GObject concept. To (over)simplify, signals are used to react to and modify object-specific events. For more detail I recommend reading the “Signals” section in the “Type System Concepts” documentation. Here’s what you need to know:

  • Signals are emitted by objects.
  • Signals can carry data in the form of parameters that connections may use.
  • Signals can expect their handlers to have a return type that’s used elsewhere.

Let’s take a look at how this works with a C example. Say we have a GtkButton, and we want to react when the button is clicked. Most code will use the g_signal_connect () function macro to register a signal handler. g_signal_connect () takes 4 parameters:

  • The GObject that we expect to emit the signal
  • The name of the signal
  • A GCallback that is compatible with the signal’s parameters
  • data, which is a pointer to a struct.

The object here is our GtkButton instance. The signal we want to connect to is the “clicked” signal. The signal expects a callback with the signature of void clicked (GtkButton *self, gpointer user_data). So we need to write a function that has that signature. user_data here corresponds to the data parameter that we give g_signal_connect (). With all of that in mind, here’s what connecting to the signal would typically look like in C:

void
button_clicked_cb (GtkButton *button,
                   gpointer   user_data)
{
    MyObject *self = MY_OBJECT (user_data);
    my_object_do_something_with_button (self, button);
}


static void
my_object_some_setup (MyObject *self)
{
    GtkWidget *button = gtk_button_new_with_label ("Do Something");
    g_signal_connect (button, "clicked",
                      G_CALLBACK (button_clicked_cb), self);
    
    my_object_add_button (button); // Assume this does something to keep button alive
}

This is the simplest way to handle connecting to the signal. But we have an issue with this setup: what if we want to pass multiple values to the callback that aren’t necessarily part of MyObject? You would need to create a custom struct that houses each value you want to pass, use that struct as data, and read each field of that struct within your callback.

Instead of having to create a struct for each callback that needs to take multiple arguments, in Rust we can and do use closures. The gtk-rs bindings are nice in that they have generated functions for each signal a type can emit. So for gtk::Button we have connect_clicked (). These generated functions take a closure as an argument, with the closure taking the same arguments that the signal expects – except user_data. However, because Rust closures can capture variables, we don’t need user_data – the closure essentially becomes a struct containing captured variables, and the pointer to it becomes user_data. So, let’s try to do a direct port of the functions above, and condense them down to one function with a closure inside:

impl MyObject {
    pub fn some_setup(&self) {
        let button = gtk::Button::with_label("Do Something");

        button.connect_clicked(move |btn| {
            self.do_something_with_button(btn);
        });

        self.add_button(button);
    }
}

This looks pretty nice, right? The catch is, it doesn’t compile:

error[E0759]: `self` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
  --> src/lib.rs:33:36
   |
30 |           pub fn some_setup(&self) {
   |                             ----- this data with an anonymous lifetime `'_`...
...
33 |               button.connect_clicked(move |btn| {
   |  ____________________________________^
34 | |                 self.do_something_with_button(btn);
35 | |             });
   | |_____________^ ...is captured here...
   |
note: ...and is required to live as long as `'static` here
  --> src/lib.rs:33:20
   |
33 |             button.connect_clicked(move |btn| {
   |                    ^^^^^^^^^^^^^^^

Lifetimes can be a bit confusing, so I’ll try to simplify. &self is a reference to our object. It’s like the C pointer MyObject *self, except it has guarantees that C pointers don’t have: notably, references must always be valid where they are used. The compiler is telling us that by the time our closure runs – which could be any point where button is alive – our reference may not be valid, because our &self method argument (by declaration) only lives to the end of the method. There are a few ways to solve this: change the lifetime of our reference and ensure it matches the closure’s lifetime, or find a way to pass an owned object to the closure.

Lifetimes are complex – I don’t recommend worrying about them unless you really need the extra performance from using references everywhere. There’s a big complication with trying to work with lifetimes here: our closure has a specific lifetime bound. If we take a look at the function signature for connect_clicked():

fn connect_clicked<F: Fn(&Self) + 'static>(&self, f: F) -> SignalHandlerId

We can see that the closure (and thus everything captured by the closure) has the 'static lifetime. This can mean different things in different contexts, but here that means that the closure needs to be able to hold onto the type for as long as it wants. For more detail, see “Rust by Example”’s chapter on the static lifetime. So, the only option is for the closure to own the objects it captures.
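The same constraint can be reproduced in plain Rust, with no GTK involved. This sketch stores a closure behind a 'static bound, which is roughly what a signal handler registration does:

```rust
// Storing a closure behind `Box<dyn Fn()>` (which implies `+ 'static`)
// mirrors a signal handler registration: the closure may run long after
// this scope ends, so it must own everything it captures.
fn store_handler<F: Fn() + 'static>(f: F) -> Box<dyn Fn()> {
    Box::new(f)
}

fn main() {
    let name = String::from("button");
    // `move` hands ownership of `name` to the closure; capturing `&name`
    // instead would be rejected, since the reference can't satisfy 'static.
    let handler = store_handler(move || println!("clicked: {name}"));
    handler();
}
```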

The trick to giving ownership to something you don’t necessarily own is to duplicate it. Remember clone()? We can use that here. You might think it’s expensive to clone your object, especially if it’s a large and complex widget, like your main window. There’s something very nice about GObjects though: all GObjects are reference-counted. So, cloning a GObject instance is like cloning an Rc<T> instance. Instead of making a full copy, the number of strong references increases. So, we can change our code to use clone just like we did in our original String example:

pub fn some_setup(&self) {
    let button = gtk::Button::with_label("Do Something");

    let s = self.clone();
    button.connect_clicked(move |btn| {
        s.do_something_with_button(btn);
    });

    self.add_button(button);
}

All good, right? Unfortunately, no. This might look innocent, and in some programs cloning like this might not cause any issues. But what if button wasn’t owned by MyObject? Take this version of the function:

pub fn some_setup(&self, button: &gtk::Button) {
    let s = self.clone();
    button.connect_clicked(move |btn| {
        s.do_something_with_button(btn);
    });
}

button is now merely passed to some_setup(). It may be owned by some other widget that may be alive for much longer than we want MyObject to be alive. Think back to the description of reference counting: objects are kept alive for as long as a strong reference exists. We’ve given a strong reference to the closure we attached to the button. That means MyObject will be forcibly kept alive for as long as the closure is alive, which is potentially as long as button is alive. MyObject and the memory associated with it may never be cleaned up, and that gets more problematic the bigger MyObject is and the more instances we have.

Now, we can structure our program differently to avoid this specific case, but for now let’s continue using it as an example. How do we keep our closure from controlling the lifetime of MyObject when we need to be able to use MyObject when the closure runs? Well, in addition to “strong” references, reference counting has the concept of “weak” references. The number of weak references an object has is tracked, but it doesn’t need to be zero in order for the object to be dropped. With an Rc<T> instance we’d use Rc::downgrade() to get a Weak<T>, and with a GObject we use ObjectExt::downgrade() to get a WeakRef<T>. In order to turn a weak reference back into a usable instance of an object we need to “upgrade” it. Upgrading a weak reference can fail, since weak references do not keep the referenced object alive. So Weak<T>::upgrade() returns an Option<Rc<T>>, and WeakRef<T>::upgrade() returns an Option<T>. Because the result is optional, we should only move forward if T still exists.
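These mechanics are easy to see with plain Rc and Weak from Rust’s standard library. This standalone sketch (not gtk-rs code) mirrors what downgrade() and upgrade() do with a GObject:

```rust
use std::rc::Rc;

fn main() {
    let strong = Rc::new(String::from("my-object"));
    assert_eq!(Rc::strong_count(&strong), 1);

    // Cloning an Rc only bumps the strong reference count.
    let strong2 = strong.clone();
    assert_eq!(Rc::strong_count(&strong), 2);

    // A weak reference is tracked separately and does not
    // keep the value alive.
    let weak = Rc::downgrade(&strong);
    assert_eq!(Rc::strong_count(&strong), 2);

    // Upgrading succeeds while a strong reference still exists...
    assert!(weak.upgrade().is_some());

    // ...but once the last strong reference is dropped, the value
    // is freed, and upgrading returns None.
    drop(strong2);
    drop(strong);
    assert!(weak.upgrade().is_none());
}
```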

Let’s rework our example to use weak references. Since we only care about doing something when the object still exists, we can use if let here:

pub fn some_setup(&self, button: &gtk::Button) {
    let s = self.downgrade();
    button.connect_clicked(move |btn| {
        if let Some(obj) = s.upgrade() {
            obj.do_something_with_button(btn);
        }
    });
}

Only two more lines, but a little more annoying than just calling clone(). Now, what if we have another widget we need to capture?

pub fn some_setup(&self, button: &gtk::Button, widget: &OtherWidget) {
    let s = self.downgrade();
    let w = widget.downgrade();
    button.connect_clicked(move |btn| {
        if let (Some(obj), Some(widget)) = (s.upgrade(), w.upgrade()) {
            obj.do_something_with_button(btn);
            widget.set_visible(false);
        }
    });
}

That’s getting harder to parse. Now, what if the closure needed a return value? Let’s say it should return a boolean. We need to handle our intended behavior when MyObject and OtherWidget still exist, and the fallback for when they don’t:

pub fn some_setup(&self, button: &gtk::Button, widget: &OtherWidget) {
    let s = self.downgrade();
    let w = widget.downgrade();
    button.connect_clicked(move |btn| {
        if let (Some(obj), Some(widget)) = (s.upgrade(), w.upgrade()) {
            obj.do_something_with_button(btn);
            widget.visible()
        } else {
            false
        }
    });
}

Now we have something pretty off-putting. If we want to avoid keeping around unwanted objects or potential reference cycles, this will get worse for every object we want to capture. Thankfully, we don’t have to write code like this.

Enter the glib::clone!() Macro

The glib crate provides a macro to solve all of these cases. The macro takes the variables you want to capture as @weak or @strong: @weak captures downgrade the variable and upgrade it inside the closure, while @strong captures simply call clone(). So, starting with the example behavior that kept MyObject around, if we really wanted that we would write the function like this:

pub fn some_setup(&self, button: &gtk::Button) {
    button.connect_clicked(clone!(@strong self as s => move |btn| {
        s.do_something_with_button(btn);
    }));
}

We use self as s because self is a keyword in Rust. We don’t need to rename a variable unless it’s a keyword or some field (e.g. foo.bar as bar). Here, glib::clone!() doesn’t prevent us from holding onto s forever, but it does provide a nicer way of doing it should we want to. If we want to use a weak reference instead, it would be:

button.connect_clicked(clone!(@weak self as s => move |btn| {
    s.do_something_with_button(btn);
}));

Just one word and we no longer have to worry about MyObject sticking around when it shouldn’t. For the example with multiple captures, we can use comma separation to pass multiple variables:

pub fn some_setup(&self, button: &gtk::Button, widget: &OtherWidget) {
    button.connect_clicked(clone!(@weak self as s, @weak widget => move |btn| {
        s.do_something_with_button(btn);
        widget.set_visible(false);
    }));
}

Very nice. It’s also simple to provide a fallback for return values:

button.connect_clicked(clone!(@weak self as s, @weak widget => @default-return false, move |btn| {
    s.do_something_with_button(btn);
    widget.visible()
}));

Now, instead of spending time and code on using weak references and fallbacks correctly, we can rely on glib::clone!() to handle it for us succinctly.

There are a few caveats to using glib::clone!(). Errors in your closures may be harder to spot, as the compiler may point to the site of the macro instead of the exact site of the error. rustfmt also can’t format the contents inside the macro. For that reason, if your closure is getting too long I recommend moving the behavior into a proper function and calling that from the closure.

Overall, I recommend using glib::clone!() when working on gtk-rs codebases. I hope this post helps you understand what it’s doing when you come across it, and that you know when you should use it.

System76: A Case Study on How Not To Collaborate With Upstream

Preface: the following post was written in the context of the events that happened in September. I held off on publishing it in the hopes that we could reach a happy ending with System76, but as time has passed, that hope has faded. Attempts to reach out to System76 have not been productive, and I feel we’ve let the impression they’ve given the wider tech community about GNOME sit for far too long. Some things have changed since I originally wrote the post, so some bits have been removed.


Recently there’s been some heated discussion regarding GNOME’s future. This has led to a lot of fear, uncertainty, and doubt being spread about GNOME, as well as attacks and hostility toward GNOME as a whole and toward individual contributors. This largely started due to the actions of one company’s employees in particular: System76.

This is not the first time System76 has been at the center of a public conflict with the GNOME community, nor is it the first time it was handled poorly. At this point, I no longer feel comfortable working with System76 without some sort of acknowledgment and apology for their poor behavior, and a promise that this won’t happen again.

You might be thinking: what sort of behavior are you talking about? What has System76 done to deserve this treatment? Well, it’s not any one incident – it’s a pattern of behavior that’s repeated multiple times over the past few years. I’ll share incidents I know of from the past, what their behavior has been like in the present, and my own thoughts on the future.

Disclaimer: The following content is my opinion and my opinion alone. I do not speak for GNOME as a whole, only for myself using the observations I’ve made. These observations may be incorrect.

System76 & the LVFS

This is the first major issue that I’m aware of. It will also showcase some of the behaviors that will repeat. Back in 2017 and 2018, Richard Hughes was communicating with System76 to get their firmware working on the LVFS and fwupd. Then in a sudden move, System76 announced their own infrastructure and software for firmware updates. They also told Richard that he should use their infrastructure instead of his. Richard expressed his surprise and disappointment at this move on his blog.

Initially this seems like a case of a communication breakdown alongside a technical disagreement. However, the real unacceptable behavior comes with System76’s response to Richard’s post. They start by sharing part of the correspondence between both parties. I believe the communication breakdown occurs in this section, where Richard explains why their current methods will not work for the LVFS. Then they go further and imply that the LVFS is handing away private data:

Be wary

We had intended to use LVFS in the future, but that is no longer the case; there are too many problems with the project.
If you want to use LVFS, disable the data collection. There’s no need for it. Understand that the first instinct of the project leaders was to unnecessarily opt-in data collection from user installations.
Understand that you’re electing for your distribution to communicate with third party servers. The more servers your distribution communicates with out of the box (especially as root), the more surface area there is for vulnerabilities. Heavily scrutinize any addition of a third-party server for updates.
Understand that if you are a company specializing in Linux-only devices and considering using LVFS, you are handing your private sales data to LVFS. We suggest hosting LVFS on your own servers.

System76 employees continued to spread this idea online into early 2019, likely prompting a post from Richard to refute their claims. Later on, the LVFS team put in work to support System76’s use case so that vendors could use the LVFS without any sort of reporting. Soon after, System76 began using the LVFS and fwupd, without any fanfare or retraction of their prior statements.

Now, before we move on to the next incident there are a few key takeaways that should be kept in mind. These will be repeating behaviors:

  • System76 blindsided upstream with criticism and advertised their own in-house solutions.
  • System76 spread misinformation for months after not getting their way.

System76, Ubuntu, & Bug Fixes

In 2019 Sebastien Bacher from Canonical shared a post about System76’s treatment of upstreams – in this case, both Ubuntu and GNOME:

I saw a few cases of those situations happening recently

  1. System76 / Pop! OS finds a bug (where ‘find’ often means that they confirm an existing upstream bug is impacting their OS version)

  2. They write a patch or workaround, include it in their package but don’t upstream the change/fix (or just drop a .patch labelled as workaround in a comment rather than submitting it for proper review)

  3. Later-on they start commenting on the upstream (Ubuntu, GNOME, …) bug trackers, pointing out to users that the issue has been addressed in Pop! OS, advertising how they care about users and that’s why they got the problem solved in their OS

System76 / Pop! OS team, while you should be proud of the work you do for your users I think you are going the wrong way there. Working on fixes and including them early in your product is one thing, not upstreaming those fixes and using that for marketing you as better than your upstreams is a risky game.

At this point in time System76 had already established a pattern of behavior where instead of working with upstreams, they take the opportunity to market themselves and their solutions.

System76, GNOME Shell, & Tiling

This is a fairly straightforward incident, but it’s one that’s concerning, as it involves a misleading narrative that System76 has been running for two years. In late 2019 System76 began work on an extension that allowed for i3-like tiling in GNOME. When an upstream designer reached out for collaboration on tiling upstream, Jeremy Soller, principal engineer at System76, refused to work with him. Now, the truly problematic part is that System76 has continued to repeat that upstream is not interested in tiling for the past few years. A few examples:

I’m afraid GNOME does not want to support quarter tiling windows.

It would be nice if GNOME wanted to have tiling window functionalities integrated into their desktop. But the truth is that they don’t.

– Michael Murphy

This is untrue, and the linked tweet shows that. Yet System76 employees keep repeating the idea that we don’t want tiling.

System76 & GNOME 40

Soon after we announced the direction for GNOME 40, Jeremy Soller surprised us with the statement that System76 did not “consent” to the new GNOME Shell design. He also claimed that feedback from their designer was dismissed. GNOME 40’s design was largely discussed on design calls that were not recorded, which makes this claim hard to verify. It’s also hard to address because while there may not have been any intent of harm, their designer may still have felt unheard or hurt – causing pain doesn’t require intending to cause pain. So, I will focus on what I can address.

Jeremy claims and has continued to claim that their proposals were rejected. That is not the case. The reality is that System76 had a designer involved for roughly one year of the three-year design process. Their designer participated in design calls and discussions with the design team, and her role was focused on the UX research itself (defining research questions, running interviews, parsing the results, etc.). There were never any mockups or concrete proposals from System76 during the design process. I’ve also heard that System76 conducted a survey and did not share the results with the team working on the 40 design.

To my understanding, the one time System76 made a proposal was at the very end of the design process during a meeting the design team had with stakeholders. There they pitched COSMIC and didn’t receive the feedback they wanted at the time. Perhaps this is what they were referring to, but at this point it was too late for any proposed changes to be made.

The behavior of the company repeated the initial pattern. When System76 did not get their way with their design, they started sharing misinformation. This particular instance of misinformation has been incredibly harmful for GNOME’s reputation. I’ve often seen the first two tweets brought up as “proof” that GNOME doesn’t want to work with downstreams or listen to users. In reality they are an example of how System76 acts toward others when they don’t get what they want.

System76, libadwaita, & GTK Themes

Now we’ve arrived at the latest incident. I’ve been involved in this one more than the others, and I have watched it unfold from the beginning to the present. I’ve decided to go into further depth, breaking down the most important parts by the day.

September 1

This all started at the beginning of the month, when Jeremy noticed that we override gtk-theme-name and set the Adwaita stylesheet for apps using libadwaita. He shared this on his Twitter. Jeremy then posted misinformation about dark style support. Why is this misinformation?

Jeremy then very publicly called out GNOME as a whole – something that a few people interpreted as a threatening ultimatum. Jeremy was soon made aware that libadwaita is a WIP library and that there is still time to work with us regarding vendor theming. Yet, this was not the end for him posting about this on Twitter. Federico wrote and linked a blog post detailing our goals and what can be done to help resolve the situation, with help from Alexander.

September 2

Jeremy replied to Federico’s blog post with a quote from Alexander. This quote was taken out of context and given a different meaning, one convenient to the narrative that had been building up to this point. In reality, Alexander was only explaining the status of things as they are in the present, and what the current goals are. Jeremy also quoted an old blog post from Adrien from before libadwaita existed. Do note that in said blog post Adrien went over the possibility for vendors and users to style within this framework, despite his personal opinion. When asked to work with us upstream instead of going to Twitter, Jeremy refused – he would not work with us unless he got exactly what he wanted.

September 3

Jeremy stated that System76 was excited to use libadwaita. The very next day Michael Murphy, one of their core team members, contradicted this saying:

I wasn’t even aware that libadwaita even existed when Jeremy started tweeting about it. I don’t think any of us knew about libadwaita and it’s goals.

Michael also made the claim that we don’t want to collaborate when at this point we’d asked them to collaborate instead of pointing fingers multiple times, as shown in the prior archives.

Finally, after three days an opportunity to talk appeared and I took it. Below is a summary of the conversation, of which I also kept screenshots:

  • Jeremy sent the proposal he mentioned
  • I read through the proposal, then gave my feedback. I said I thought that it was interesting, but I also gave a detailed breakdown on why I didn’t think it would be implemented, including the following points:
    • The GTK team wants to put platform libraries in charge of platform styling
    • Most app developers I know (even those who hadn’t signed the well-known open letter at the time) are not happy with the state of arbitrarily applied stylesheet changes
    • Many app developers don’t want to take on the burden of supporting multiple stylesheets
    • Other stylesheets may lag behind in implementing Adwaita’s API, re-creating the existing problem
    • “Hard coding” Adwaita is part of our work toward things like a recoloring API for apps and a properly supported dark style preference
  • I expressed there was room to discuss distro branding of default apps, but not a will to bring back arbitrary stylesheet support.
  • I explained the concerns of app developers regarding custom widgets
  • Jeremy expressed his fear about replacing default apps because of these changes
  • I explained that it would be better to work with us on a non-stylesheet method for vendor branding
  • As the conversation wrapped up, I asked that in the future they take issues to GitLab before going to Twitter. I also expressed the pain they had caused and subtly tried to tell them that some developers would want an apology before working with them.
  • Jeremy did not acknowledge the first part and refused to make an apology.

Jeremy then opened the GTK issue, and was redirected to libadwaita. After some discussion on the libadwaita issue he opened a merge request.

September 6

After a lot of discussion on the issue and the MR, the merge request was closed once it was made clear that:

  • Third-party and core application developers largely did not want the proposed solution.
  • Multiple developers, including myself, were not comfortable working with System76 without an acknowledgment of their behavior and an apology.

The Aftermath

Once again, System76 employees decided to spread misinformation and complaints on Twitter instead of making an attempt to work with us upstream on their issues. It’s reasonable to assume that they learned about libadwaita and began posting ultimatums on Twitter on the same day. Once again, they have spread misinformation about the project and our goals.

In the weeks since the MR was closed, a few different employees have continued spreading misinformation on Twitter about the situation. Their existing misinformation has already caused ripples within the community, getting many people stirred up and making claims like GTK4 being only for GNOME, or GNOME wanting to be the “only” desktop on Linux because of the approach to themes we have. We’ve had a lot of vitriol thrown our way.

System76’s words also have reach beyond the Linux ecosystem, and large content creators are only hearing their side of the story.

There have been additional attempts by others to reach out diplomatically, but those seem to have fallen through. Just the other day the CEO claimed that there were no technical or quality reasons for the change, heavily implying that it was done just to hurt downstreams.

With all of the misinformation and the consequences of said misinformation, and with the refusal to listen or engage with our own needs, I do not feel like it is worth my time to engage with System76. Until they recognize their behavior for what it is and make a commitment to stop it, I will not be doing free labor for them. If they can do that, I would be happy to work with them on resolving their issues.

Epilogue: What’s Really Going On With Theming?

Now that we’ve addressed System76’s hand in stirring up controversy, let me clarify what the situation actually is, and what vendors and other desktop environments can do moving forward.

Terms To Know

This is a high-level overview, but since we’re talking about technical details there will be some terms to know:

  • Stylesheet: A compilation of CSS styles for widgets. “Themes” as we know them are stylesheets, e.g. Adwaita, Arc, Pop, or Yaru.
  • Platform library: A platform library is a collection of widgets, useful functions, constants, and styles for apps on a platform to use. Granite and libadwaita are examples of platform libraries.
  • Visual API: An API or “contract” for the visuals of an application.

GTK & Theming

Let me clear this up immediately: nothing about the theming infrastructure has changed in GTK itself between GTK3 and GTK4. The only changes are listed in the migration guide, and they mostly cover removed or replaced GTK-specific CSS functions. All the changes that have caused controversy have to do with libadwaita.

You might have seen some people bring up two particular issues as counterpoints to this:

What do those two issues mean? Well, they mean that the GTK team thinks these particular issues are better handled in platform libraries. This does not mean only libadwaita or only libgranite, nor does it mean that GTK-based applications would no longer be able to have a dark theme or different themes. The platform library would be responsible for handling those items in accordance with its own goals.

Desktops X, Y, and Z could work together and create a libxyz that loads different stylesheets just like GTK does now, or they could each make their own platform library that handles their specific needs. The proposed GTK changes would simply put the choice in the hands of each platform.

GNOME, libadwaita, & Theming

In April when Adwaita was moved to libadwaita, the controversial change to override gtk-theme-name was made. The commit explains the reasoning for this particular change:

Ensure Adwaita is loaded instead of the default, take care to load Adwaita-hc when HighContrast is selected system-wide.

Why do we want to ensure Adwaita is loaded at all instead of other stylesheets? It comes down to two key things:

  • Adwaita for platform integration
  • Adwaita as a visual API

In GNOME, we want to have a stable platform for app developers to use to build new, useful, and well-integrated apps. Part of having a stable platform is having a stable visual API. The GNOME Human Interface Guidelines lay out the ideas and goals for apps targeting GNOME to follow, and libadwaita helps developers implement those ideas. The stylesheet itself is part of keeping interfaces in line with the HIG.

With libadwaita, the widgets, the stylesheet, and the HIG are all tied together in one easy-to-use package. With each new version of GNOME we can iterate on them all at once, so it will be easier for developers to keep up with changes. This is in contrast to the current state of development, where app developers need to implement their own versions of common patterns or copy-paste them from other apps.

Adwaita being used this way means that apps can rely on it being there, and they will make assumptions based on that. For example, we want it to be very simple for app developers to recolor widgets. Previously, if apps wanted to do this, they needed to depend on sassc and libsass and occasionally regenerate the stylesheet on the fly. Now all they need to do is set the background-color on a widget.
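For illustration, recoloring now takes only a tiny app-provided CSS fragment. This is a hypothetical sketch (the class name is made up, not from any real app); an app would load it with a GtkCssProvider and add the class to the widget:

```css
/* Hypothetical example: recoloring a widget on top of the Adwaita
   stylesheet. No sass recompilation needed, just one rule. */
.demo-colored {
  background-color: #3584e4;
}
```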

This new method works perfectly with the new Adwaita stylesheet. Here we have a simple Color Picker app:

Color Picker using the Adwaita Stylesheet

Recoloring still works well with stylesheets that are based on Adwaita, like Yaru:

Color Picker using the Yaru Stylesheet

Things begin breaking down with stylesheets that deviate too far from Adwaita:

Color Picker using the Pop stylesheet

Without enforcing Adwaita as the stylesheet apps may not look correct or in some cases may not function at all. For a more in-depth breakdown of the problem with more examples, see Tobias Bernard’s post “Restyling apps at scale.”

Announcing Solanum

A little while ago I released version 3.0.0 of my app Solanum. When doing the release, I realized something: I never announced the app here. Whoops. So, here is the announcement.

Screenshot of Solanum's Initial State

Solanum is an app that lets you work in timed sessions with regular breaks, based on the Pomodoro Technique. By default, work sessions are 25 minutes, short breaks are 5 minutes, and long breaks are 15 minutes; you work 4 sessions before each long break. These defaults can be configured in the Preferences window from Solanum 3.0.0 onward.

Screenshot of Solanum's Preferences window
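The default cycle described above can be sketched as a tiny function. This is purely illustrative, not Solanum’s actual code:

```rust
// Toy sketch of the default schedule: 25-minute work sessions,
// 5-minute short breaks, and a 15-minute long break after every
// 4th work session. Not Solanum's real implementation.
fn lap_minutes(completed_work_sessions: u32, on_break: bool) -> u32 {
    if !on_break {
        25 // a work session
    } else if completed_work_sessions > 0 && completed_work_sessions % 4 == 0 {
        15 // a long break after every 4th work session
    } else {
        5 // a short break
    }
}

fn main() {
    assert_eq!(lap_minutes(0, false), 25); // first work session
    assert_eq!(lap_minutes(1, true), 5); // short break
    assert_eq!(lap_minutes(4, true), 15); // long break
}
```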

When GNOME 42 releases with the new dark style preference, Solanum will support it out of the box.

If you’re looking for a pomodoro app to help you with time management, you can find Solanum on Flathub.


If you find Solanum useful, please consider sponsoring me on GitHub or Patreon 🙂

Developing With The Flatpak CLI

Flatpak is a very powerful tool for development, and is well-integrated into GNOME Builder. This is what I’d recommend for most developers. But what if you use a plain text editor? Barring Visual Studio Code, there aren’t many extensions for common text editors to use flatpak. This tutorial will go over how to use flatpak to build and test your apps with only the command line.

Building & Testing Your App

First, you’ll need to have flatpak installed and a flatpak manifest. You’ll also need the right runtime and SDK installed. Then, you’ll need to set up the environment to build your application. Navigate to your project directory from the terminal. Once there run the following command:

# $MODULE_NAME is the name of your application's flatpak module
$ flatpak-builder --repo=repo --build-only --stop-at=$MODULE_NAME --force-clean flatpak_app $APP_ID.json

This will fetch all the dependencies declared in the flatpak manifest and build each one, stopping before it builds your app’s module. Then, you use flatpak build to run the build commands for your application yourself.

First you configure the buildsystem:

# $CONFIG_OPTS should match the `config-opts` for the module in your flatpak manifest
$ flatpak build --filesystem=host flatpak_app meson _build --prefix=/app $CONFIG_OPTS

Then run the build & install command:

# $BUILD_OPTS should match the build-options in the flatpak manifest.
# `append-path` turns into `--env=PATH=$PATH:$APPEND_PATH`
$ flatpak build --filesystem=host $BUILD_OPTS flatpak_app ninja -C _build install

After that, you can also use flatpak build to test the application:

# $FINISH_ARGS would be the `finish-args` from your flatpak manifest
$ flatpak build --filesystem=host $FINISH_ARGS flatpak_app $APP_EXECUTABLE_NAME

Creating Dist Tarballs

One of the responsibilities an app maintainer has is creating tarballs of their applications for distribution. This can be challenging, as the maintainer needs to build using an environment that has all dependencies – including versions of dependencies that aren’t yet released.

Flatpak allows developers to do this in a simple way. If you haven’t run the command above to fetch and build your dependencies, do so now, and also run the configuration step. Now you should be ready to run the dist command:

$ flatpak build --filesystem=host flatpak_app ninja -C _build dist

Now you should have a release tarball ready in _build/meson-dist.

Notes

While this method works for development, it’s a bit clumsy. I highly recommend using GNOME Builder or Visual Studio Code with the flatpak extension. These tools handle the clumsiness for you, allowing you to focus entirely on development. However, if you find yourself wanting to develop using flatpak and don’t want to use either of the above options, this is the way to do so.

Glade Not Recommended

If you are starting out with GTK development, you may have heard of a tool called Glade. Glade is a UI designer application for GTK projects that allows you to create, modify, and preview UI files before writing code. In that sense, Glade is a very useful tool for GTK apps.

With that said, I must implore that you do not use Glade.

Why? Glade was built for its own format, before the advent of GtkBuilder. It does not know certain properties, does not know of certain features, and does not know modern GTK practices.

Instead of ignoring what it does not know, it aggressively “corrects” it. Glade will rewrite your UI files at its whim, including but not limited to:

  • Removing property bindings
  • Removing unknown properties
  • Adding child properties to GtkBox children everywhere
    • These are gone in GTK 4, which adds extra work to the porting process
  • Adding explicit GtkViewports that cause a double-border
  • Forcing minimum width and height on GtkListBoxRows
  • Re-arranging objects within the UI file
  • Changing the format of all properties in an edited UI file

This makes Glade harmful in different ways. Your UI files will be bloated, due to Glade adding placeholders, child properties, and extra widgets where they aren’t needed. When you make a contribution to an app, you may be asked to re-do it by hand because Glade made unnecessary changes. When porting to GTK 4 you will need to do more work, as Glade doesn’t know to avoid deprecated widgets, properties, and child properties.

All of these elements combine to make Glade more harmful than helpful. So please, do not use Glade. If you are working with UI files, write them by hand. An editor with code folding and syntax highlighting can be very helpful. If you need to mock something up, you can try sketching on a piece of paper or using our mockup resources. Whatever you choose, don’t use Glade.

New Release: Color Picker v2.4.0

Gcolor3 is now “Color Picker”! With the rename comes a new maintainer, a new icon, lots of new improvements, and many translation updates.

Release Notes

  • Color Picker now works on Wayland!
  • The typography and iconography of Color Picker have changed to be consistent and more in line with the GNOME HIG.
  • The use of deprecated GTK APIs has been dropped, which will make porting to GTK4 a smooth process.
  • Multiple other under-the-hood improvements

Networks Of Trust: Dismantling And Preventing Harassment

Purism’s David Seaward recently posted an article titled Curbing Harassment with User Empowerment. In it, they posit that “user empowerment” is the best way to handle harassment. Yet many of their suggestions do nothing to prevent or stop harassment; instead, they only provide ways for users to plug their ears as it occurs.

Trusting The Operator

David Seaward writes with the assumption that the operator is always untrustworthy. But, what if the operator was someone you knew? Someone you could reach out to if there were any issues, who could reach out to other operators? This is the case on the Fediverse, where Purism’s Librem Social operates. Within this system of federated networks, each node is run by a person or group of people. These people receive reports in various forms. In order to continue to be trusted, moderators of servers are expected to handle reports of spam, hate speech, or other instances of negative interactions from other services. Since the network is distributed, this tends to be sustainable.

In practice, this means that as a moderator my users can send me things they’re concerned by, and I can send messages to the moderators of other servers if something on their server concerns me or one of my users. If the operator of the other node breaches trust (e.g. not responding, expressing support for bad actors) then I can choose to defederate from them. If I as a user find that my admin does not take action, I can move to a node that will take action. The end result is that there are multiple layers of trust:

  • I can trust my admins to take action
  • My admins can trust other admins to take action

This creates a system where, without lock-in, admins are incentivized to respond to things in good faith and in the best interests of their users.

User Empowerment And Active Admins

The system of trust above does not conflict with Purism’s goal of user empowerment. In fact, these two systems need to work together. Providing users tools to avoid harassment works in the short term, but admins need to take action to prevent harassment in the long term. There’s a very popular saying: with great power comes great responsibility. When you are an admin, you have both the power and responsibility to prevent harassment.

To continue using the fediverse for this discussion, there are two ways harassment occurs in a federated system:

  1. A user on a remote instance harasses people
  2. A user on the local instance harasses people

When harassment occurs, it comes in various forms like harassing speech, avoiding blocks, or sealioning. In all cases and forms, the local admin is expected to listen to reports and handle them accordingly. For local users, this can mean a stern warning or a ban. For remote users, the form of response could range from contacting the remote admin to blocking that instance. Some fediverse software also supports blocking individual remote accounts. Each action helps prevent the harasser from further harming people on your instance or other instances.

Crowdsourcing Does Not Solve Harassment

One solution David proposes in the article is crowdsourced tagging. Earlier in the article he mentions that operators can be untrustworthy, but trusting everyone to tag things does not solve this. In fact, this can contribute to dogpiling and censorship. Let’s use an example to illustrate the issue. A trans woman posts about her experience with transphobia, and how transphobic people have harmed her. Her harassers can see this post, and tag it with “#hatespeech”. They tell their friends to do it too, or use bots. This now means anyone who filters “#hatespeech” would have her post hidden – even people that would have supported her. Apply this for other things and crowdsourced tagging can easily become a powerful tool to censor the speech of marginalized people.

Overall, I’d say Purism needs to take a step back and review their stance on moderation and anti-harassment. It would do them well if they also took a minute to have conversations with the experts they cite.

The Paradox of Tolerance In Online Spaces

Author’s Note: I have never felt uncomfortable contributing to GNOME. This is a more general post about online communities, and targeted at some of the other FOSS communities I’ve been interested in contributing to or have contributed to in the past.

In certain online spaces, the idea that excluding people who are intolerant of others is itself a harmful form of intolerance has gained traction. Exclusion here means keeping out those who post white supremacist content, misogyny, homophobia, transphobia, etc. A common argument against excluding these people is the “slippery slope” – if you exclude these people for hate speech, soon you’ll try to exclude any people for anything.

In reality, this slope does not exist. In healthy online communities, these people are kept out and the community continues to move forward. How? Well, it’s because these communities could not be healthy and safe without the exclusion of harmful elements. This is where we hit the paradox of tolerance.

What Is The Paradox of Tolerance?

…if a society is tolerant without limit, its ability to be tolerant is eventually seized or destroyed by the intolerant.

In online spaces, “tolerance” refers to who you allow in the community. To be tolerant means to allow people from all walks of life into your space, regardless of race, sexual or gender identity, or other factors used to marginalize people within society. To go further, a good community should do more than tolerate them, but let them know that they are welcome and that they will not be marginalized within the community.

A person is marginalized when they are abused for their identity, or made to feel less important because of it. In real life, this manifests as workforce discrimination, housing discrimination, police brutality, and many other forms of oppression that make it so that the value of a victim’s life and livelihood are less important than the oppressor’s. In an online space, marginalization is more subtle. It would be if a black person saw someone use the “n word” – or worse, was called one – without repercussion. It would be if a trans woman had to deal with someone saying that trans women are “men trying to invade women’s spaces”. It would be if a woman in general had to deal with men making sexual remarks and unwanted advances. These things all make the victims uncomfortable, and the lack of action taken can make them feel unimportant.

Some communities like to think of themselves as “perfectly tolerant”. This means that they would tolerate people that take actions to make marginalized people uncomfortable. When a community does this, they are actually being intolerant, and enabling abusers.

Isn’t It Intolerance To Keep Out The Intolerant?

Yes. In a very literal sense, it is intolerance to keep these people out of communities. However, the effect of this intolerance is that people who face real intolerance in their day-to-day lives feel safer in those communities. So it comes down to what you think is important. Do you think it’s more important to let people abuse others, or to have a safe and productive community? If you want to run a community, you have to make that choice.