Rethinking Window Management

Window management is one of those areas I’m fascinated with because even after 50 years, nobody’s fully cracked it yet. Ever since the dawn of time we’ve relied on the window metaphor as the primary way of multitasking on the desktop. In this metaphor, each app can spawn one or more rectangular windows, which are stacked by most recently used, and moved or resized manually.

Overlapping windows can get messy quickly

The traditional windowing system works well as long as you only have a handful of small windows, but issues emerge as soon as the number and size of the windows grow. As new windows are opened, existing ones are obscured, sometimes completely hiding them from view. Or, when you open a maximized window, suddenly every other window is hidden.

Over the decades, different OSes have added different tools and workflows to deal with these issues, including workspaces, taskbars, and switchers. However, the basic primitives have not changed since the 70s and, as a result, the issues have never gone away.

While most of us are used to this system and its quirks, that doesn’t mean it’s without problems. This is especially apparent when you do user research with people who are new to computing, including children and older people. Manually placing and sizing windows can be fiddly work, and requires close attention and precise motor control. It’s also what we jokingly refer to as shit work: work the user has to do that is generated by the system itself and serves no other purpose.

Most of the time you don’t care about exact window sizes and positions and just want to see the windows that you need for your current task. Often that’s just a single, maximized window. Sometimes it’s two or three windows next to each other. It’s incredibly rare that you need a dozen different overlapping windows. Yet this is what you end up with by default today, when you simply use the computer, opening apps as you need them. Messy is the default, and it’s up to you to clean it up.

What about tiling?

Traditional tiling window managers solve the hidden window problem by preventing windows from overlapping. While this works well in some cases, it falls short as a general replacement for stacked, floating windows. The first reason for this is that tiling window managers size windows according to the amount of available screen space, yet most apps are designed to be used at a certain size and aspect ratio. For example, chat apps are inherently narrow and end up having large amounts of empty space at large sizes. Similarly, reading a PDF in a tiny window is not fun.

GNOME 44 with the “Forge” tiling extension. Just because windows can be tall and narrow doesn’t mean they should be :)

Another issue with tiling window managers is that they place new windows in seemingly arbitrary positions. This is a consequence of them not having any knowledge about the content of a window or the context in which it is being used, and it leads to having to manually move or resize windows after the fact, which is exactly the kind of fiddling we want to avoid in the first place.

More constrained tiling window managers, such as the one on iPadOS, are interesting in that they’re more purposeful (you always intentionally create the tiling groups). However, this approach only allows tiling two windows side by side, and does not scale well to larger screens.

History

This topic has been of interest to the design team for a very long time. I remember discussing it with Jakub at my first GUADEC in 2017, and there have been countless discussions, ideas, and concepts since. Some particular milestones in our thinking were the concept work leading up to GNOME 40 in 2019 and 2020, and the design sessions at the Berlin Mini GUADEC in 2022 and the Brno hackfest in 2023.

Tiling BoF in Brno during the HDR hackfest. Left to right: Robert Mader, Marco Trevisan, Georges Stavracas, Jakub Steiner and Allan Day (remote), Florian Müllner, Jonas Dreßler

I personally have a bit of a tradition of working on this problem for at least a few weeks per year. For example, during the first lockdown in 2020 I spent quite a bit of time trying to envision a tiling-first version of GNOME Shell.

2020 mockup for a tiling-first GNOME Shell. More mockups in the OS mockups repo on Gitlab.

Problems with our current tiling

GNOME has had basic tiling functionality since early in the GNOME 3 series. While this is nice to have, it has obvious limitations:

  • It’s completely manual
  • Only 2 windows are supported, and the current implementation is not extensible to more complex layouts
  • Tiled windows are not grouped in the window stack, so both windows are not raised simultaneously and other windows get in the way
  • Workspaces are manual, and not integrated into the workflow

Because tiled windows are currently mixed with overlapping floating windows, they’re not really helping make things less messy in practice.

We’ve wanted more powerful tiling for years, but there has not been much progress due to the huge amount of work involved on the technical side and the lack of a clear design direction we were happy with. We now finally feel like the design is at a stage where we can take concrete next steps towards making it happen, which is very exciting!

Get out of my way

The key point we keep coming back to with this work is that, if we do add a new kind of window management to GNOME, it needs to be good enough to be the default. We don’t want to add yet another manual opt-in tool that doesn’t solve the problems the majority of people face.

To do this we landed on a number of high level ideas:

  • Automatically do what people probably want, allow adjusting if needed
  • Make use of workspaces as a fully integrated part of the workflow
  • Richer metadata from apps to allow for better integration

Our current concept imagines windows having three potential layout states:

  • Mosaic, a new window management mode which combines the best parts of tiling and floating
  • Edge Tiling, i.e. windows splitting the screen edge-to-edge
  • Floating, the classic stacked windows model

Mosaic is the default behavior. You open a window, it opens centered on the screen at a size that makes the most sense for the app. For a web browser that might be maximized, for a weather app maybe only 700×500 pixels.

As you open more windows, the existing windows move aside to make room for the new ones. If a new window doesn’t fit (e.g. because it wants to be maximized) it moves to its own workspace. If the window layout comes close to filling the screen, the windows are automatically tiled.
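
To make the described behavior a bit more concrete, here is a minimal sketch of the kind of placement decision mosaic mode would make when a new window opens. Everything in it (names, thresholds, the area heuristic) is hypothetical and purely illustrative; it is not GNOME Shell code or API.

```python
# Illustrative sketch only: how a mosaic layout might decide where a new
# window goes. All names and thresholds here are made up, not GNOME Shell API.

SCREEN_W, SCREEN_H = 1920, 1080
FILL_THRESHOLD = 0.85  # assumed point at which the mosaic is "close to filling the screen"

def place_new_window(existing, new):
    """existing: list of (width, height) preferred sizes; new: (width, height)."""
    new_w, new_h = new

    # A window that wants to be maximized (or simply doesn't fit alongside
    # the others) gets moved to its own workspace.
    if new_w >= SCREEN_W or new_h >= SCREEN_H:
        return "new workspace"

    used = sum(w * h for w, h in existing) + new_w * new_h
    if used / (SCREEN_W * SCREEN_H) >= FILL_THRESHOLD:
        # The layout is close to filling the screen: switch to tiling.
        return "tile all windows"

    # Otherwise keep the mosaic: existing windows shift aside and the new one
    # opens centered at its preferred size.
    return "mosaic (centered, preferred size)"

print(place_new_window([(700, 500)], (900, 700)))    # -> mosaic
print(place_new_window([(700, 500)], (1920, 1080)))  # -> new workspace
```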

You can also manually tile windows. If there’s enough space, other windows are left in a mosaic layout. However, if there’s not enough space for this mosaic layout, you’re prompted to pick another window to tile alongside.

You’re not limited to tiling just two windows side by side. Any tile (or the remaining space) can be split by dragging another window over it, and freely resized as the window minimum sizes allow.

There are always going to be cases that require placing a window in a specific position on the screen. The new system allows windows to be used with the classic floating behavior, on a layer above the mosaic/tiling windows. However, we think that this floating behavior is going to be relatively uncommon, similar to the existing “always on top” behavior that we have today.

There’s of course much more to this, but hopefully this gives an idea of what we have in mind in terms of behavior.

New window metadata

As mentioned above, to avoid the pitfalls of traditional tiling window managers we need more information from windows about their content. Windows can already set a fixed size and they have an implicit minimum size, but to build a great tiling experience we need more.

Some apps should probably never be maximized/tiled on a 4K monitor…

One important missing piece is having information on the maximum desired size of a window. This is the size beyond which the window content stops looking good. Not having this information is one of the reasons that traditional tiling window managers have issues, especially on larger screens. This maximum size would not be a hard limit and manual resizing would still be possible. Instead, the system would use the maximum size as one factor when it calculates an optimal window layout. For example, when tiling to the side of the screen, a window would only grow as wide as its maximum width rather than filling exactly half of the screen.

In addition, it’d be helpful to know the range of ideal sizes where an app works best. While an app may technically work at mobile sizes, that’s probably not the best way to use it if you have a large display. To stay with our chat example, you probably want to avoid folding the sidebar if possible, so the range of ideal sizes would be between the point where it becomes single-pane and its maximum usable size.

Ideally these properties could be set dynamically depending on the window content. For example, a spreadsheet with a lot of columns but few rows could have a wider ideal size than one with lots of rows.
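
As a thought experiment, such hints might look something like the sketch below. This is not an existing GTK or Wayland API; the names and values are assumptions made purely to illustrate the kind of metadata discussed above, including how a compositor could use the maximum width when tiling.

```python
# Hypothetical size hints an app could provide. None of this is an existing
# GTK/Wayland API; it only illustrates the kind of metadata discussed above.
from dataclasses import dataclass

@dataclass
class SizeHints:
    min_width: int           # apps already have an implicit minimum size today
    min_height: int
    max_width: int | None    # size beyond which the content stops looking good
    max_height: int | None
    ideal_width: range       # range of widths where the app works best

# A chat app: ideally wide enough to keep the sidebar unfolded, but there is
# no point in growing wider than ~1200px.
chat_hints = SizeHints(
    min_width=360, min_height=400,
    max_width=1200, max_height=None,
    ideal_width=range(720, 1200),
)

def tiled_width(hints: SizeHints, half_screen: int) -> int:
    # When tiling to the side of the screen, only grow the window as wide as
    # its maximum width rather than filling exactly half of the screen.
    return half_screen if hints.max_width is None else min(hints.max_width, half_screen)

print(tiled_width(chat_hints, 1920))  # -> 1200, not 1920
```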

Depending on apps to adopt new system APIs can be challenging and slow — it’s not easy to move the entire ecosystem! However, we think there’s a good chance of success in this case, due to the simplicity and universal usefulness of the API.

Next steps

At the Brno hackfest in April we had an initial discussion with GNOME Shell developers about many of the technical details. There is tentative agreement that we want to move in the direction outlined in this post, but there’s still a lot of work ahead.

On the design side, the biggest uncertainty is the mosaic behavior — it’s a novel approach to window management without much prior art. That’s exciting, but also makes it a bit risky to jump head-first into implementation. We’d like to do user research to validate some of our assumptions on different aspects of this, but it’s the kind of project that’s very difficult to test outside of an actual prototype that’s usable day to day.

If you’d like to get involved with this initiative, one great way to help out would be to work on an extension that implements (parts of) the mosaic behavior for testing and refining the interactions. If you’re interested in this, please reach out :)

There’s no timeline or roadmap at this stage, but it’s definitely 46+ material and likely to take multiple cycles. There are individual parts of this that could be worked on independently ahead of the more contingent pieces, for example tiling groups or new window metadata. Help in any of these areas would be appreciated.

This post is summarizing collaborative work over the past years by the entire design team (Allan Day, Jakub Steiner, Sam Hewitt, et al). In particular, thanks to Jakub for the awesome animations bringing the behaviors to life!

Post Collapse Computing Part 4: The Road Ahead

Part 1 of this series looks at the state of the climate emergency we’re in, and how we can still get our governments to do something about it. Part 2 looks at collapse scenarios we’re likely to face if we fail in those efforts, and part 3 is about concrete things we could work towards to make our software more resilient in those scenarios. In this final part we’re looking at obstacles and contradictions on the path to resilience.

Part 3 of this series was, in large part, a pretty random list of ideas for how to make software resilient against various effects of collapse. Some of those ideas are potentially contradictory, so in this part I want to explore these contradictions, and hopefully start a discussion towards a realistic path forward in these areas.

Efficient vs. Repairable

The goals of wanting software to be frugal with resources but also easy to repair are often hard to square. Efficiency is generally achieved by using lower-level technology and having developers do more work to optimize resource use. However, for repairability you want something high-level with short feedback loops and introspection, i.e. the opposite.

An app written and distributed as a single Python file with no external dependencies is probably as good as it gets in terms of repairability, but there are serious limitations to what you can do with such an app and the stack is not known for being resource-efficient. The same applies to other types of accessible programming environments, such as scripts or spreadsheets. When it comes to data, plain text is very flexible and easy to work with (i.e. good for repairability), but it’s less efficient than binary data formats, can’t be queried as easily as a database, etc.

My feeling is that in many cases it’s a matter of choosing the right tradeoffs for a given situation, and knowing which side of the spectrum is more important. However, there are definitely examples where this is not a tradeoff: Electron is both inefficient and not very repairable due to its complexity.

What I’m more interested in is how we could bring both sides of the spectrum closer together: Can we make the repair experience for a Rust app feel more like a single-file Python script? Can we store data as plain text files, but still have the flexibility to arbitrarily query them like a database?
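
As a rough illustration of that second question, here is a minimal sketch (standard-library Python, with a made-up notes folder) of keeping data as plain Markdown files on disk while building a throwaway SQLite index over them for querying. The files stay the source of truth and remain readable in any text editor; only the index is database-shaped.

```python
# Sketch: plain text files as the source of truth, with a disposable SQLite
# index built on the fly for querying. The folder path and schema are made up.
import sqlite3
from pathlib import Path

NOTES_DIR = Path("~/Notes").expanduser()  # hypothetical notes folder

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (path TEXT, mtime REAL, body TEXT)")

for f in NOTES_DIR.glob("*.md"):
    db.execute(
        "INSERT INTO notes VALUES (?, ?, ?)",
        (str(f), f.stat().st_mtime, f.read_text(encoding="utf-8")),
    )

# "Query them like a database": e.g. the five most recently edited notes
# mentioning solar power, without ever converting the files themselves.
rows = db.execute(
    "SELECT path FROM notes WHERE body LIKE '%solar%' ORDER BY mtime DESC LIMIT 5"
)
for (path,) in rows:
    print(path)
```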

As with all degrowth discussions, there’s also the question whether reducing the scope of what we’re trying to achieve could make it much easier to square both goals. Similar to how we can’t keep using energy at the current rate and just swap fossil fuels out for renewables, we might have to cut some features in the interest of making things both performant and repairable. This is of course easier said than done, especially for well-established software where you can’t easily remove things, but I think it’s important to keep this perspective in mind.

File System vs. Collaboration

If you want to store data in files while also doing local-first sync and collaboration, you have a choice to make: You can either have a global sync system (per-app or system wide), or a per-file one.

Global sync: Files can use standard formats because history, permissions for collaboration, etc. are managed globally for all files. This has the advantage that files can be opened with any editor, but the downside is that copying them elsewhere means losing this metadata, so you can no longer collaborate on the file. This is basically what file sync services à la Nextcloud do (though I’m not sure to what degree these support real-time collaboration).

Per-file sync: The alternative is having a custom file format that includes all the metadata for history, sync, and collaboration in addition to the content of the file. The advantage of this model is that it’s more flexible for moving files around, backing them up, etc. because they are self-contained. The downside is that you lose access to the existing ecosystem of editors for the file type. In some cases that may be fine because it’s a novel type of content anyway, but it’s still not great because you want to ensure there are lots of apps that can read your content, across all platforms. The Fullscreen whiteboard app is an example of this model.

Of course ideally what you’d want is a combination of both: Metadata embedded in each file, but done in such a way that at least the latest version of the content can still be opened with any generic editor. No idea how feasible that’d be in general, but for text-based formats I could imagine this being a possibility, perhaps using some kind of front-matter with a bunch of binary data?
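
One entirely hypothetical way this could look for a text-based format: the sync and history metadata lives in a front-matter block that generic editors simply show as text, while the latest version of the content follows below as ordinary Markdown. The delimiter, keys, and parsing in the sketch below are invented for illustration and do not correspond to any existing format.

```python
# Sketch of the "front-matter + content" idea: collaboration metadata at the
# top of the file, the latest content as plain Markdown below it. The
# delimiter and keys are invented for illustration.
EXAMPLE = """\
---
format: example-sync/1        # hypothetical format tag
history: <opaque CRDT blob>   # edit history for sync/collaboration
permissions: [alice, bob]
---
# Meeting notes

Regular Markdown content that any generic editor can still open and read.
"""

def split_front_matter(text: str) -> tuple[dict, str]:
    """Return (metadata, content); a plain editor would just see both as text."""
    if not text.startswith("---\n"):
        return {}, text
    header, _, content = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.split("#")[0].strip()
    return meta, content

meta, content = split_front_matter(EXAMPLE)
print(meta["permissions"])        # -> [alice, bob]
print(content.splitlines()[0])    # -> # Meeting notes
```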

More generally, there’s a real question of where this kind of real-time collaboration is needed in the first place. For which use cases is the ability to collaborate in real time worth the added complexity (and hence reduced repairability)? Perhaps in many cases simple file sync is enough? Maybe the cases where collaboration is needed are rare enough that it doesn’t make sense to invest in the tech to begin with?

Bandwidth vs. Storage

In thinking about building software for a world with limited connectivity, it’s generally good to cache as much as possible on disk and hit the network as little as possible. But of course that also means using more disk space, which can itself become a resource problem, especially in the case of older computers or mobile devices. This would be accelerated if you had local-first versions of all kinds of data-heavy apps that currently only work with a network connection (e.g. having your entire photo and music libraries stored locally on disk).

One potential approach could be to also design for situations with limited storage. For example, could we prioritize different kinds of offline content in case something has to be deleted/offloaded? Could we offload large, but rarely used content or apps to external drives?

For example, I could imagine moving extra Flatpak SDKs you only need for development to a separate drive, which you only plug in when coding. Gaming could be another example: Your games would be grayed-out in the app grid unless you plug in the hard drive they’re on.

Having properly designed and supported workflows and failure states for low-storage cases like these could go a long way here.

Why GNOME?

Perhaps you’re wondering why I’m writing about this topic in the context of free software, and GNOME in particular. Beyond the personal need to contextualize my own work in the reality of the climate crisis, I think there are two important reasons: First, free software may have an important role to play in keeping computers useful in coming crisis scenarios, so we should make sure it’s good at filling that role. GNOME’s position in the GNU/Linux space and our close relationships and personnel overlap with projects up and down the stack make it a good forum to discuss these questions and experiment with solutions.

But secondly, and perhaps more importantly, I think this community has the right kinds of people for the problems at hand. There aren’t very many places where low-level engineering and principled UX design are done together at this scale, in the commons.

Some resilience-focused projects are built on the very un-resilient web stack because that’s what the authors know. Others have a tiny community of volunteer developers, making it difficult to build something that has impact beyond isolated experiments. Conversely, GNOME has a large community of people with expertise all across the stack, making it an interesting place to potentially put some of these ideas into practice.

People to Learn From?

While it’s still quite rare in tech circles overall, there are some other people thinking about computing from a climate collapse point of view, and/or working on adjacent problems. While most of this work is not directly relevant to GNOME in terms of technology, I find some of the ideas and perspectives very valuable, and maybe you do as well. I definitely recommend following some of these people and projects on Mastodon :)

Permacomputing is a philosophy trying to apply permaculture-like principles to computing. The term was coined by Ville-Matias “Viznut” Heikkilä in a 2020 essay. Permaculture aims to establish natural systems that can work sustainably in the long term, and the goal with permacomputing is to do something similar for computing, by rethinking its relationship to resource and energy use, and the kinds of things we use it for. As further reading, I recommend this interview with Heikkilä and Marloes de Valk.

100 Rabbits is a two-person art collective living on a sailboat and experimenting with ideas around resilience, wherein their boat studio setup is a kind of test case for the kinds of resource constraints collapse might bring. One of their projects is uxn, a tiny, portable emulator, which serves as a super-constrained platform to build apps and games for in a custom Assembly language. I think their projects are especially interesting because they show that you don’t need fancy hardware to build fun, attractive things – what’s far more important is the creativity of the people doing it.

Screenshot of a few uxn apps and games (source)

Collapse OS is an operating system written in Forth by Virgil Dupras for a further-away future, where industrial society has not only collapsed, but where people no longer have access to any working modern computers. For that kind of scenario it aims to provide a simple OS that can run on microcontrollers that are easy to find in all kinds of electronics, in order to build or repair custom electronics and simple computers.

Low Tech is an approach to technology that tries to keep things simple and resilient, often by re-discovering older technologies and recombining them in new ways. An interesting example of this philosophy in both form and content is Low Tech Magazine (founded in 2007 by Kris De Decker). Their website uses a dithered aesthetic for images that allows them to be just a few kilobytes each, and their server is solar-powered, so it can go down when there’s not enough sunlight.

Screenshot of the Low Tech Magazine website with its battery meter background

Ink & Switch is a research lab exploring ambitious high-level ideas in computing, some of which are very relevant to resilience and autonomy, such as local-first software, p2p collaboration, and new approaches to digital identity.

p2panda is a protocol for building local-first apps. It aims to make it easy enough to build p2p applications that developers can spend their time thinking about interesting user experiences rather than focusing on the basics of making p2p work. It comes with reference implementations in Rust and TypeScript.

Earthstar is a local-first sync system developed by Sam Gwilym with the specific goal to be “like a bicycle”, i.e. simple, reliable, and easy enough to understand top-to-bottom to be repairable.

Funding Sources

Unfortunately, as with all the most important work, it’s hard to get funding for projects in this area. It’ll take tons of work by very skilled people to make serious progress on things like power profiling, local-first sync, or mainlining Android phones. And of course, the direction is one where we’re not only not enabling new opportunities for commerce, but rather eliminating them. The goal is to replace subscription cloud services with (free) local-first ones, and make software so efficient that there’s no need to buy new hardware. Not an easy sell to investors :)

However, while it’s difficult to find funding for this work it’s not impossible either. There are a number of public grant programs that fund projects like these regularly, and where resilience projects around GNOME would fit in well.

If you’re based in the European Union, there are a number of EU funds under the umbrella of the Next Generation Internet initiative. Many of them are managed by the Dutch nonprofit NLNet, and have funded a number of different projects with a focus on peer-to-peer technology and other relevant topics. NLNet has also funded other GNOME-adjacent projects in the past, most recently Julian’s work on the Fractal Matrix client.

If you’re based in Germany, the German Ministry of Education’s Prototype Fund is another great option. They provide 6-month grants to individuals or small teams working on free software in a variety of areas including privacy, social impact, peer-to-peer, and others. They’ve also funded GNOME projects before, most recently the GNOME Shell mobile port.

The Sovereign Tech Fund is a new grant program by the German Ministry of Economic Affairs, which will fund work on software infrastructure starting in 2023. The focus on lower-level infrastructure means that user-facing projects would probably not be a good fit, but I could imagine, for example, low-level work on local-first technology being relevant.

These are some grant programs I’m personally familiar with, but there are definitely others (don’t hesitate to reach out if you know some, I’d be happy to add them here). If you need help with grant applications for projects making GNOME more resilient don’t hesitate to reach out, I’d be happy to help :)

What’s Next?

One of my hopes with this series was to open a space for a community-wide discussion on topics like degrowth and resilience, as applied to our development practice. While this has happened to some degree, especially at in-person gatherings, it hasn’t been reflected in our online discourse and actual day-to-day work as much as I’d hoped. Finding better ways to do that is definitely something I want to explore in 2023.

On the more practical side, we’ve had sporadic discussions about various resilience-related initiatives, but nothing too concrete yet. As a next step I’ve opened a Gitlab issue for discussion around practical ideas and initiatives. To accelerate and focus this work I’d like to do a hackfest with this specific focus sometime soon, so stay tuned! If you’d be interested in attending, let me know :)

Closing Thoughts

It feels surreal to be writing this. There’s something profoundly weird about discussing climate collapse… on my GNOME development blog. Believe me, I’d much rather be writing about fancy animations and porting apps to phones. But such are the times. The climate crisis affects, or will affect, every aspect of our lives. It’d be more surreal not to think about how it will affect my work, to ignore or compartmentalize it as a separate thing.

As I write this in late 2022, we’ve just had one of the hottest years on record, with an unprecedented number of catastrophes across the globe. At the same time, we’ve also seen the complete inability of the current political and economic system to enact meaningful policies to actually reduce emissions. This is especially dire in the context of the new IPCC report released earlier in the year, which says that global emissions need to peak before 2025 at the latest. But instead of getting started on the massive transition this will require, governments are building new fossil infrastructure with public money, further fueling the crisis.

Yours truly at a street blockade with Letzte Generation

But no matter how bad things get, there’s always hope in action. Whether you glue yourself to the road to force the government to enact emergency measures, directly stop emissions by blocking the expansion of coal mines, seize the discourse with symbolic actions in public places, or disincentivize luxury emissions by deflating SUV tires, there’s a wing of this movement for everyone. It’s not too late to avoid the worst outcomes – if you, too, come and join the fight.

See you in action o/

Post Collapse Computing Part 3: Building Resilience

Part 1 of this series looks at the state of the climate crisis, and how we can still get our governments to do something about it. Part 2 considers the collapse scenarios we’re likely to face if we fail in those efforts. In this part we’re looking at concrete things we could work towards to make our software more resilient in those scenarios.

The takeaway from part 2 was that if we fail to mitigate the climate crisis, we’re headed for a world where it’s expensive or impossible to get new hardware, where electrical power is scarce, internet access is not the norm, and cloud services don’t exist anymore or are largely inaccessible due to lack of internet.

What could we do to prepare our software for these risks? In this part of the series I’ll look at some ideas and relevant art for resilient technology, and how we could apply this to GNOME.

Local-First

Producing power locally is comparatively doable given the right equipment, but internet access is contingent on lots of infrastructure both locally and across the globe. This is why reducing dependence on connectivity is probably the most important challenge for resilience.

Unfortunately we’ve spent the past few decades making software ever more reliant on having fast internet access, all the time. Many of the apps people spend all day in are unusable without an internet connection. So what would be the opposite of that? Is anyone working in the direction of minimizing reliance on the network?

As it turns out, yes! It’s called “local-first”. The idea is that instead of the primary copy of your data being on a server and local apps acting as clients to it, the client is the primary source of truth. The network is only used optionally for syncing and collaboration, with potential conflicts automatically resolved using CRDTs. This allows for superior UX because you’re not waiting on the network, better privacy because you can end-to-end encrypt everything, and better handling of low-connectivity cases. All of this is of course technically very challenging, and there aren’t many implementations of it in production today, but the field is growing and maturing quickly.
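
To give a flavor of how this automatic conflict resolution works, here is a toy example of one of the simplest CRDTs, a grow-only set: two devices add photos to an album while offline, and because the merge operation is just a set union, both devices converge to the same state no matter the order in which they sync. Real libraries such as automerge implement much richer data types on the same principle; the code below is only a minimal illustration.

```python
# Toy CRDT (a grow-only set): a minimal illustration of how local-first sync
# can merge offline edits without a central server.
def merge(a: set, b: set) -> set:
    return a | b  # commutative, associative, idempotent, so merges never conflict

# Two devices start from the same album and add photos while offline.
laptop = {"beach.jpg", "forest.jpg"}
phone = {"beach.jpg", "birthday.jpg"}

# Whenever they next see each other (LAN, internet, or a USB stick),
# merging in either order converges to the same result.
assert merge(laptop, phone) == merge(phone, laptop)
print(merge(laptop, phone))  # all three photos, on both devices
```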

Among the most prominent proponents of the local-first idea are the community around the Ink & Switch research lab and Muse, a sketching/knowledge work app for Apple platforms. However, there’s also prior work in this direction from the GNOME community: There’s Christian Hergert’s Bonsai, the Endless content apps, and it’s actually one of the GNOME Foundation’s newly announced goals to enable more people to build local-first apps.

For more on local-first software, I recommend watching Rob’s GUADEC talk (Recording on Youtube), reading the original paper on local-first software (2019), or listening to this episode of the Metamuse podcast (2021) on the subject.

Other relevant art for local-first technology:

  • automerge, a library for building local-first software
  • Fullscreen, a web-based whiteboard app which allows saving to a custom file format that includes history and editing permissions
  • Magic Wormhole, a system to send files directly between computers without any servers
  • Earthstar, a local-first sync system with USB support

USB Fallback

Local-first often assumes it’s possible to sometimes use the network for syncing or transferring data between devices, but what if you never have an internet connection?

It’s possible to use the local network in some instances, but this is not very reliable in practice. Local networks are often weirdly configured, and things can fail in many ways that are hard to debug (Source: Endless tried it and decided it was not worth the hassle). In contrast, USB storage is reliable, flexible, and well-understood by most people, making it a much better fallback.

As a practical example, a photo management app built in this paradigm would

  • Store all photos locally so there’s never any spinners after first setup
  • Allow optionally syncing with other devices and collaborative album management with other people via local network or the internet
  • Automatically reconcile conflicts if something changed on other devices while they were disconnected
  • Allow falling back to USB, i.e. copying some of the albums to a USB drive and then importing them on another device (including all metadata, collaboration permissions, etc.)

Mockup for USB drive support in GNOME Software (2020)

Some concrete things we could work on in the local-first area:

  • Investigate existing local-first libraries, if/how they could be integrated into our stack, or if we’d need to roll our own
  • Prototype local-first sync in some real-world apps
  • Implement USB app installation and updates in GNOME Software (mockups)

Resource Efficiency

While power can be produced locally, it’s likely that in the future it will be far less abundant than today. For example, you may only have power a few hours a day (already a reality in parts of the global south), or only when there’s enough sun or wind at the moment. This makes power efficiency in software incredibly important.

Power Measurement is Hard

Improving power efficiency is not straightforward, since power use is hard to attribute to individual programs. Measuring the computer’s power consumption as a whole is trivial, but knowing which program caused how much of it is very difficult to pin down (for more on this check out Aditya Manglik’s GUADEC talk (Recording on Youtube) about power profiling tooling). Making progress in this area is important to allow developers to make their software more power-efficient.

However, while better measurements would be great to have, in practice there’s a lot developers can do even without it. Power is in large part a function of CPU, GPU, and memory use, so reducing each of these definitely helps, and we do have mature profiling tools for these.

Choose a Low-Power Stack

Different tech stacks and dependencies are not created equal when it comes to power consumption, so this is a factor to take into account when starting new projects. One area where there are actual comparative studies on this is programming languages: For example, according to this paper Python uses way more power than other languages commonly used for GNOME app development.

Relative energy use of different programming languages (Source: Pereira et al.)

Another important choice is user interface toolkit. Nowadays many applications just ship their own copy of Chrome (in the form of Electron) to render a web app, resulting in huge downloads, slow startup times, large CPU and memory footprints, and laggy interfaces. Using native toolkits instead of web technologies is a key aspect of making resilient software, and GTK4/Adwaita is actually in a really good position here given its performance, wide language support, modern feature set and widgets, and community-driven development model.

Schedule Power Use

It’s also important to actively consider the temporal aspect of power use. For example, if your power supply is a solar panel, the best time to charge batteries or do computing-intensive tasks is during the day, when there’s the most sunlight.

If we had a way for the system to tell apps that right now is a good/bad time to use a lot of power, they could adjust their behavior accordingly. We already do something similar for metered connections, e.g. Software doesn’t auto-download updates if your connection is metered. I could also imagine new user-facing features in this direction, e.g. a way to manually schedule certain tasks for when there will be more power so you can tell Builder to start compiling the long list of dependencies for a newly cloned Rust project tomorrow morning when the sun is back out.
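
For reference, the metered-connection precedent mentioned above is exposed through GLib’s GNetworkMonitor. The sketch below shows that real check alongside a purely hypothetical “power budget” hint; the power-related function does not exist in any current API and is only there to illustrate the idea.

```python
# The real metered-network check (GLib/Gio), plus a hypothetical equivalent
# hint for power. Only the Gio call actually exists today.
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

def should_download_updates() -> bool:
    # Real API: e.g. GNOME Software skips auto-downloading updates on
    # metered connections.
    monitor = Gio.NetworkMonitor.get_default()
    return not monitor.get_network_metered()

def is_power_cheap_right_now() -> bool:
    # Hypothetical: a future system service could expose a similar boolean
    # ("now is a good/bad time to use a lot of power"), e.g. derived from
    # whether the solar panel currently produces a surplus. Hardcoded here.
    return True

if should_download_updates() and is_power_cheap_right_now():
    print("Good moment to compile that long list of dependencies.")
```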

Some concrete things we could work on in the area of resource efficiency:

  • Improve power efficiency across the stack
  • Explore a system API to tell apps whether now is a good time to use lots of power or not
  • Improve the developer story for GTK on Windows and macOS, to allow more people to choose it over Electron

Data Resilience

In hedging against loss of connectivity, it’s not enough to have software that works offline. In many cases what’s more important is the data we read/write using that software, and what we can do with it in resource-constrained scenarios.

The File System is Good, Actually

The 2010s saw lots of experimentation with moving away from the file system as the primary way to think about data storage, both within GNOME and across the wider industry. It makes a lot of sense in theory: Organizing everything manually in folders is shit work people don’t want to do, so they end up with messy folder hierarchies and it’s hard to find things. Bespoke content apps for specific kinds of data, with rich search and layouts custom-tailored to the data are definitely a nicer, more human-friendly way to deal with content–in theory.

In practice we’ve seen a number of problems with the content app approach though, including

  • Flexibility: Files can be copied/pasted/deleted, stored on a secondary internal drive, sent as email attachments, shared via a USB key, opened/changed using other apps, and more. With content apps you usually don’t have all of these options.
  • Interoperability: The file system is a lowest common denominator across all OSes and apps.
  • Development Effort: Building custom viewers/editors for every type of content is a ton of work, in part because you have to reimplement all the common operations you get for free in a file manager.
  • Familiarity: While it’s messy and not that easy to learn, most people have a vague understanding of the file system by now, and the universality of this paradigm means it only has to be learned once.
  • Unmaintained Apps: Data living in a specific app’s database is useless if the app goes unmaintained. This is especially problematic in free software, where volunteer maintainers abandoning projects is not uncommon.

Due to the above reasons, we’ve seen in practice that the file system is not in fact dying. It’s actually making its way into places where it previously wasn’t present, including iPhones (which now come with a Files app) and the web (via Nextcloud, Google Drive, and company).

From a resilience point of view some of the shortcomings of content apps listed above are particularly important, such as the flexibility to be moved via USB when there’s no internet, and cross-platform interoperability. This is why I think user-accessible files should be the primary source of truth for user data in apps going forward.

Simple, Standardized Formats

With limited connectivity, a potential risk is that you don’t have the ability to download new software to open a file you’re encountering. This is why sticking to well-known standard formats that any computer is likely to have a viewer/editor for is generally preferable (plain text, standard image formats, PDF, and so on).

When starting a new app, ask yourself, is a whole new format needed or could it use/extend something pre-existing? Perhaps there’s a format you could use that already has an ecosystem of apps that support it, especially on other platforms?

For example, if you were to start a new notes app that can do inline media you could go with a custom binary format and a database, but you could also go with Markdown files in a user-accessible folder. In order to get inline media you could use Textbundle, an extension to Markdown implemented by a number of other Markdown apps on other platforms, which basically packs the contained media into an archive together with the Markdown file.

Side note: I really want a nice GTK app that supports Textbundle (more specifically, its compressed variant Textpack), if you want to make one I’d be delighted to help on the design side :)
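
To show how simple the idea is, here is a rough sketch of packing a Markdown note and its images into a Textpack-style archive. The exact file names and info.json keys are my assumptions and should be checked against the actual Textbundle specification; the point is only to show the shape of the format.

```python
# Rough sketch of packing a note into a Textpack-style archive (a zipped
# Textbundle). File names and info.json keys should be verified against the
# actual Textbundle spec; they are assumptions here.
import json
import zipfile
from pathlib import Path

def pack_note(markdown_path: Path, assets: list[Path], out_path: Path) -> None:
    with zipfile.ZipFile(out_path, "w") as zf:
        # The Markdown content plus a small metadata file describing the bundle.
        zf.writestr("text.md", markdown_path.read_text(encoding="utf-8"))
        zf.writestr("info.json", json.dumps({"version": 2, "type": "net.daringfireball.markdown"}))
        # Inline media referenced from the Markdown, e.g. ![](assets/photo.jpg)
        for asset in assets:
            zf.write(asset, f"assets/{asset.name}")

pack_note(Path("note.md"), [Path("photo.jpg")], Path("note.textpack"))
```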

Export as Fallback

Ideally data should be stored in standardized formats with wide support, and human-readable in a text editor as a fallback (if applicable). However, this isn’t possible in every case, for example if an app produces a novel kind of content there are no standardized formats for yet (e.g. a collaborative whiteboard app). In these cases it’s important to make sure the non-standard format is well-documented for people implementing alternative clients, and has support for exporting to more common formats, e.g. exporting the current state of a collaborative whiteboard as PDF or SVG.

Some concrete things we could work on towards better data resilience:

  • Explore new ways to do content apps with the file system as a backend
  • Look at where we’re using custom formats in our apps, and consider switching to standard ones
  • Consider how this fits in with local-first syncing

Keep Old Hardware Running

There are many reasons why old hardware stops being usable, including software built for newer, faster devices becoming too slow on older ones, vendors no longer providing updates for a device, some components (especially batteries) degrading with use over time, and of course planned obsolescence. Some of these factors are purely hardware-related, but some also only depend on software, so we can influence them.

Use Old Hardware for Development

I already touched on this in the dedicated section above, but obviously using less CPU, RAM, etc. helps not only with power use, but also allows the software to run on older hardware for longer. Unfortunately most developers use top of the line hardware, so they are least impacted by inefficiencies in their personal use.

One simple way to ensure you keep an eye on performance and resource use: Don’t use the latest, most powerful hardware. Maybe keep your old laptop for a few years longer, and get it repaired instead of buying a new one when something breaks. Or if you’re really hardcore, buy an older device on purpose to use as your main machine. As we all know, the best way to get developers to care about something is to actually dogfood it :)

Hardware Enablement for Common Devices

In a world where it’s difficult to get new hardware, it’ll become increasingly important to reuse existing devices we have lying around. Unfortunately, a lot of this hardware is stuck on very old versions of proprietary software that are both slow and insecure.

With Windows devices there’s an easy solution: Just install an up-to-date free software OS. But while desktop hardware is fairly well-supported by mainline Linux, mobile is a huge mess in this regard. The Android world almost exclusively uses old kernels with lots of non-upstreamable custom patches. It takes years to mainline a device, and it has to be done for every device.

Projects like PostmarketOS are working towards making more Android devices usable, but as you can see from their device support Wiki, success is limited so far. One especially problematic aspect from a resilience point of view is that the devices that tend to be worked on are the ones that developers happen to have, which are generally not the models that sell the most units. Ideally we’d work strategically to mainline some of the most common devices, and make sure they actually fully work. Most likely that’d be mid-range Samsung phones and iPhones. For the latter there’s curiously little work in this direction, despite being a gigantic, relatively homogeneous pool of devices (for example, there are 224 million iPhone 6 out there which don’t get updates anymore).

Hack Bootloaders

Unfortunately, hardware enablement alone is not enough to make old mobile devices more long-lived by installing more up-to-date free software. Most mobile devices come with locked bootloaders, which require contacting the manufacturer to get an unlock code to install alternative software – if they allow it at all. This means if the vendor company’s server goes away, or you don’t have internet access, there’s no way to repurpose a device.

What we’d probably need is a collection of exploits that allow unlocking bootloaders on common devices in a fully offline way, and a user-friendly automated unlocking tool using these exploits. I could imagine this being part of the system’s disk utility app or a separate third-party app, which allows unlocking the bootloader and installing a new OS onto a mobile device you plug in via USB.

Some concrete things we could work on to keep old hardware running:

  • Actively try to ensure older hardware keeps working with new versions of our software (and ideally getting faster with time rather than slower thanks to ongoing performance work)
  • Explore initiatives to do strategic hardware enablement for some of the most common mobile devices (including iPhones, potentially?)
  • Forge alliances with the infosec/Android modding community and build convenient offline bootloader unlocking tools

Build for Repair

In a less connected future it’s possible that substantial development of complex systems software will stop being a thing, because the necessary expertise will not be available in any single place. In such a scenario being able to locally repair and repurpose hardware and software for new uses and local needs is likely to become important.

Repair is a relatively clearly defined problem space for hardware, but for software it’s kind of a foreign concept. The idea of a centralized development team “releasing” software out into the world at scale is built into our tools, technologies, and culture at every level. You generally don’t repair software, because in most cases you don’t even have the source code, and even if you do (and the software doesn’t depend on some server component) there’s always going to be a steep learning curve to being able to make meaningful changes to an unfamiliar code base, even for seasoned programmers.

In a connected world it will therefore always be most efficient to have a centralized development team that maintains a project and makes releases for the general public to use. But with that possibly no longer an option in the future, someone else will end up having to make sure things work as best they can at the local level. I don’t think this will mean most people will start making changes to their own software, but I could see software repair becoming a role for specialized technicians, similar to electricians or car mechanics.

How could we build our software in a way that makes it most useful to people in such a future?

Use Well-Understood, Accessible Tech

One of the most important things we can do today to make life easier for potential future software repair technicians is using well-established technology, which they’re likely to already have experience with. Writing apps in Haskell may be a fun exercise, but if you want other people to be able to repair/repurpose them in the future, GJS is probably a better option, simply because so many more people are familiar with the language.

Another important factor determining a technology stack’s repairability is how accessible it is to get started with. How easy is it for someone to get a development environment up and running from scratch? Is there good (offline) documentation? Do you need to understand complex math or memory management concepts?

Local-First Development

Most modern development workflows assume a fast internet connection on a number of levels, including downloading and updating dependencies (e.g. npm modules or flatpak SDKs), documentation, tutorials, Stackoverflow, and so on.

In order to allow repair at the local level, we also need to rethink development workflows in a local-first fashion, meaning things like:

  • Ship all the source code and development tools needed to rebuild/modify the OS and apps with the system
  • Have a first-class flow for replacing parts of the system or apps with locally modified/repaired versions, allowing easy management of different versions, rollbacks, etc.
  • Have great offline documentation and tutorials, and maybe even something like a locally cached subset of Stackoverflow for a few technologies (e.g. the 1000 most popular questions with the “gtk” tag)

Getting the tooling and UX right for a fully integrated local-first software repair flow will be a lot of work, but there’s some interesting relevant art from Endless OS from a few years back. The basic idea was that you transform any app you’re running into an IDE editing the app’s source code (thanks to Will Thompson for the screencast below). The devil is of course in the details for making this a viable solution to local software repair, but I think this would be a very interesting direction to explore further.

Some concrete things we could work on to make our software more repairable:

  • Avoid using obscure languages and technologies for new projects
  • Avoid overly complex and brittle dependency trees
  • Investigate UX for a local-first software repair flow
  • Revive or replace the Devhelp offline documentation app
  • Look into ways to make useful online resources (tutorials, technical blog posts, Stackoverflow threads, etc.) usable offline

This was part three of a four-part series. In the fourth and final installment we’ll wrap up the series by looking at some of the hurdles in moving towards resilience and how we could overcome them.

Post Collapse Computing Part 2: What if we Fail?

This is a lightly edited version of my GUADEC 2022 talk, given at c-base in Berlin on July 21, 2022. Part 1 briefly summarizes the horrors we’re likely to face as a result of the climate crisis, and why civil resistance is our best bet to still avoid some of the worst-case scenarios. Trigger Warning: Very depressing facts about climate and societal collapse.

While I think it’s critical to use the next few years to try and avert the worst effects of this crisis, I believe we also need to think ahead and consider potential failure scenarios.

What would it mean if we fail to force our governments to enact the necessary drastic climate action, both for society at large but also very concretely for us as free software developers? In other words: What does collapse mean for GNOME?

In researching the subject I discovered that there’s actually a discipline studying questions like this, called “Collapsology”.

Collapsology studies the ways in which our current global industrial civilization is fragile and how it could collapse. It looks at these systemic risks in a transdisciplinary way, including ecology, economics, politics, sociology, etc. because all of these aspects of our society are interconnected in complex ways. I’m far from an expert on this topic, so I’m leaning heavily on the literature here, primarily Pablo Servigne and Raphaël Stevens’ book How Everything Can Collapse (translated from the French original).

So what does climate collapse actually look like? What resources, infrastructure, and organizations are most likely to become inaccessible, degrade, or collapse? In a nutshell: Complex, centralized, interdependent systems.

There are systems like that in every part of our lives of course, from agriculture, to pharma, to energy production, and of course electronics. Because this talk’s focus is specifically the impact on free software, I’ll dig deeper on a few areas that affect computing most directly: Supply chains, the power grid, the internet, and Big Tech.

Supply Chains

As we’ve seen repeatedly over the past few years, the supply chains that produce and transport goods across the globe are incredibly fragile. During the first COVID lockdowns it was toilet paper, then we got the chip shortage affecting everything from PlayStations to cars, and more recently a baby formula shortage in the US, among others. To make matters worse, many industries have moved to just-in-time manufacturing over the past decades, making them even less resilient.

Now add to that more and more extreme natural disasters disrupting production and transport, wars and sanctions disrupting trade, and financial crises triggered or exacerbated by some of the above. It’s not hard to imagine goods that are highly dependent on global supply chains becoming prohibitively expensive or just impossible to get in parts of the world.

Computers are one of the most complex things manufactured today, and therefore especially vulnerable to supply chain disruption. Without a global system of resource extraction, manufacturing, and trade there’s no way we can produce chips anywhere near the current level of sophistication. On top of that chip supply chains are incredibly centralized, with most of global chip production being controlled by a single Taiwanese company, and the machines used for that production controlled by a single Dutch company.

Power Grid

Access to an unlimited amount of power, at any time, for very little money, is something we take for granted, but probably shouldn’t. In addition to disruptions by extreme weather events one important factor here is that in an ever-hotter world, air conditioning starts to put an increasing amount of strain on the power grid. In parts of the global south this is one of the reasons why power outages are a daily occurrence, and having power all the time is far from guaranteed.

In order to do computing we of course need power, not only to run/charge our own devices, but also for the data centers and networking infrastructure running a lot of the things we’re connecting to while using those devices.

Which brings us to our next point…

Internet

Having a reliable internet connection requires a huge amount of interconnected infrastructure, from undersea cables, to data centers, to the local cable infrastructure that goes to your neighborhood, and ultimately your router or a nearby cellular tower.

All of this infrastructure is at risk of being disrupted by sea level rise and extreme weather, taken over by political actors wanting to control the flow of information, abandoned by companies when it becomes unprofitable to operate in a certain area due to frequent extreme weather, and so on.

Big Tech

Finally, at the top of the stack there’s the actual applications and services we use. These, too, have become ever more centralized and fragile at all levels over the past decades.

At the most basic level there’s OS updates and app stores. There are billions of iOS devices out there that are literally unable to get security updates or install new software if they lose access to Apple’s servers. Apple collapsing seems unlikely in the short term, but, for example, what if they stop doing business in your country because of sanctions?

We used to warn about lock-in to proprietary software and formats, but at least Photoshop CS2 continues to run on your computer regardless of what happens to the company. With Figma et al you can not only not access your existing files anymore if the server isn’t accessible, you can’t even create new ones.

In order to get a few nice sharing and collaboration features people are increasingly just running all software in the cloud on someone else’s computer, whether it’s Google Slides for presentations, SketchUp for 3D modeling, Notion for note taking, Figma for design, and even games via game streaming services like Stadia.

From a free software perspective another particularly risky point of corporate centralization is Github, given that a huge number of important projects are hosted there. Even if you’re not actively using it yourself for development, you’re almost certainly depending on other projects hosted on Github. If something were to happen to it… yikes.

Failure Scenarios

So to summarize, this is a rough outline of a potential failure scenario, as applied to computing:

  • No new hardware: It’s difficult and expensive to get new devices because there’s little to no new ones being made, or they’re not being sold where you live.
  • Limited power: There’s power some of the time, but only a few hours a day or when there’s enough sun for your solar panels. It’s likely that you’ll want to use it for more important things than powering computers though…
  • Limited connectivity: There’s still a kind of global internet, but not all countries have access to it due to both degraded infrastructure and geopolitical reasons. You’re able to access a slow connection a few times a month, when you’re in another town nearby.
  • No cloud: Apple and Google still exist, but because you don’t have internet access often enough or at sufficient speeds, you can’t install new apps on your iOS/Android devices. The apps you do have on them are largely useless since they assume you always have internet.

This may sound like an unrealistically dystopian scenario, until you realize: Parts of the global south are experiencing this today. Of course a collapse of these systems at the global level would have a lot of other terrible consequences, but I think seeing the global south as a kind of preview of where everyone else is headed is a helpful reference point.

A Smaller World

The future is of course impossible to predict, but in all likelihood we’re headed for a world where everything is a lot more local, one way or the other. Whether by choice (to reduce emissions and be more resilient), or through a full-on collapse, our way of life is going to change drastically over the next decades.

The future we’re looking at is likely to be a lot more disconnected in terms of the movement of goods, people, as well as information. This will necessitate producing things locally, with whatever resources are available locally. Given the complexity of most supply chains, this means many things we build today probably won’t be produced at all anymore, so there will need to be a lot more repair, and a lot less consumption.

Above all though, this will necessitate much stronger communities at the local level, working together to keep things running and make life liveable in the face of the catastrophes to come.

To be Clear: Fuck Nazis

When discussing apocalyptic scenarios like these I think a lot of people’s first point of reference is the Hollywood version of collapse – People out for themselves, fighting for survival as rugged individuals. There are certain types of people attracted by that who hold other reprehensible views, so when discussing topics like preparing for collapse it’s important to distance oneself from them.

That said, individual prepping is also not an effective strategy, because real life is not a Hollywood movie. In crisis scenarios mutual aid is just as natural a response for people as selfishness, and it’s a much better approach to actually surviving longer-term. Resilient communities of people helping each other are our best bet to withstand whatever worst case scenarios might be headed our way.

We’ll Still Need Computers…

If this future comes to pass, how we do computing will be far from our biggest concern. Having enough food, drinkable water, and other necessities of life is likely to be higher on our priority list. However, there will definitely be important things that we will need computers for.

The thing to keep in mind is that we’re not talking about the far future here: The buildings, roads, factories, fields, etc. we’ll be working with in this future are basically what we have today. The factories where we’re currently building BMWs are not going away overnight, even if no BMWs are being built. Neither are the billions of Intel laptops and mid-range Android phones currently in use, even if they’ll be slow and won’t get updates anymore.

So what might we need computers for in this hyper-local, resource-constrained future?

Information Management

At the most basic level, a lot of our information is stored primarily on computers today, and using computers is likely to remain the most efficient way to access it. This includes everything from teaching materials for schools, to tutorials for DIY repairs, books, scientific papers, and datasheets for electronics and other machines.

The same goes for any kind of calculation or data processing. Computers are of course good at the calculations needed for construction/engineering (that’s kind of what they were invented for), but even things like spreadsheets, basic scripting, or accounting software are orders of magnitude more efficient than doing the same things without a computer.

Local Networking

We’re used to networking always meaning “access to the entire internet”, but that’s not the only way to do networks – Our existing computers are perfectly capable of talking to each other on a local network at the level of a building or town, with no connection to a global internet.

There are lots of examples of potential use cases for local-only networking and communication, e.g. city-level mesh networks, or low-connectivity chat apps like Briar.
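
To make this a bit more tangible, here’s a minimal sketch of what local-only networking can look like at the lowest level: a chat that broadcasts messages to everyone on the same network segment, using nothing but the Python standard library, with no server and no internet involved. The port number and message format are arbitrary choices for illustration, not anything Briar or existing mesh projects actually use.

    #!/usr/bin/env python3
    # Minimal LAN-only chat sketch: every line you type is broadcast to the
    # local network segment; everything received is printed. No server, no
    # internet. Port and message format are arbitrary illustration choices.
    import socket
    import sys
    import threading

    PORT = 18500  # arbitrary UDP port for this sketch

    def listen():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))  # receive broadcasts from the whole LAN
        while True:
            data, addr = sock.recvfrom(4096)
            print(f"[{addr[0]}] {data.decode(errors='replace')}")

    def main():
        threading.Thread(target=listen, daemon=True).start()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        for line in sys.stdin:
            sock.sendto(line.strip().encode(), ("255.255.255.255", PORT))

    if __name__ == "__main__":
        main()

Real projects obviously layer routing, store-and-forward, and encryption on top of primitives like this, but the point stands: the hardware we already have is perfectly capable of all of it without any global infrastructure.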

Reuse, Repair, Repurpose

Finally, there’s a ton of existing infrastructure and machinery that needs computers in order to be able to run, be repaired, or repurposed, including farm equipment, medical devices, public transit, and industrial tools.

I’m assuming – but this is conjecture on my part, it’s really not my area of expertise – the machines we’re currently using to build cars and planes could be repurposed to make something more useful, which can actually still be constructed with locally available resources in this future.

…Running Free Software?

As we’ve already touched on earlier, the centralized nature of proprietary software means it’s inherently less resilient than free software. If the company building it goes away or doesn’t sell you the software anymore, there’s not much you can do.

Given all the risks discussed earlier, it’s possible that free software will therefore have a larger role in a more localized future, because it can be adapted and repaired at the local level in ways that are impossible with proprietary software.

Assumptions to Reconsider?

However, while free software has structural advantages that make it more resilient than proprietary software, there are problematic aspects of current mainstream technology culture that affect us, too. Examples of assumptions that are pretty deeply ingrained in how most modern software (including free software) is built include:

  • Fast internet is always available, offline/low-connectivity is a rare edge case, mostly relevant for travel
  • New, better hardware is always around the corner and will replace the current hardware within a few years
  • Using all the resources available (CPU, storage, power, bandwidth) is fine

Assumptions like these manifest in many subtle ways in how we work and what we build.

Dependencies and Package Managers

Over the past decade, language-specific package managers such as npm and crates.io have taken off in an unprecedented way, leading to software with larger and more complex dependency graphs than ever before. This is the dominant paradigm for building software today; newer languages all come with their own built-in package manager.

However, just like physical supply chains, more complex dependency graphs are also less resilient. More dependencies, especially with pinned versions and no caching between projects, mean huge downloads and long build times when building software locally, wasting bandwidth, power, and disk space. Fully offline development is basically impossible, because every project you build needs to download its own specific version of every dependency.

It’s possible to imagine some kind of cross-project shared local dependency cache for this, but to my knowledge no language ecosystem is doing this by default at the moment.
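
As a rough illustration of how much such a cache could help (this is my own sketch, not an existing tool), the script below walks a few local Rust checkouts, reads their Cargo.lock files, and counts how many exact pinned dependencies they have in common – i.e. how many downloads a shared cache would only need to do once. The project paths are placeholders.

    #!/usr/bin/env python3
    # Rough sketch (not an existing tool): measure how many pinned dependencies
    # a set of local Rust projects share, i.e. how much a cross-project cache
    # could avoid downloading twice. Project paths are passed as arguments.
    import sys
    import tomllib  # Python 3.11+; Cargo.lock is TOML
    from collections import Counter
    from pathlib import Path

    def pinned_deps(project: Path):
        with open(project / "Cargo.lock", "rb") as f:
            lock = tomllib.load(f)
        # Each [[package]] entry pins an exact name + version.
        return {(p["name"], p["version"]) for p in lock.get("package", [])}

    def main(projects):
        counts = Counter()
        for project in projects:
            counts.update(pinned_deps(Path(project)))
        shared = sum(1 for c in counts.values() if c > 1)
        print(f"{len(counts)} unique pinned dependencies across {len(projects)} projects")
        print(f"{shared} appear in more than one project and would only need fetching once")

    if __name__ == "__main__":
        main(sys.argv[1:])  # e.g. ./dep-overlap.py ~/src/app-a ~/src/app-b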

Web-Based Tooling

Core parts of the software development workflow are increasingly moving to web-based tools, especially around code forges like Github or Gitlab. Issue management, merge requests, CI, releases, etc. all happen on these platforms, which are primarily or exclusively used via very, very slow websites. It’s hard to overstate this: Code forges are among the slowest, shittiest websites out there, basically unusable unless you have a fast connection.

This is, of course, not resilient at all and a huge problem given that we rely on these tools for many of our key workflows.

Cloud Storage & Streaming

As already discussed, relying on data centers is problematic on a number of levels, but in practice most people (even in the free software community) have embraced cloud services in some areas, at least at a personal level.

Instead of local photo, music, and movie collections, many of us just use Google Photos, Spotify, and Netflix nowadays, which of course affects which kinds of apps are being built. For example, there are no modern, actively developed apps to manage your photo collection locally anymore, but we do have a nice, modern Spotify client.

Global Community Without the Internet?

Scariest of all, I think, is imagining free software development without the internet. This movement came into existence and grew alongside the global internet in the 80s and 90s, and it’s almost impossible to imagine what it could look like without it.

Maybe the movement as a whole, as well as individual projects, would splinter into smaller, local versions in the regions that are still connected? But would there be enough expertise in each of those regions? Would development at any real scale just stop, with people only doing small repairs to what they have at the local level?

I don’t have any smart answers here, but I believe it’s something we really ought to think about.

This was part two of a four-part series. In part 3 we’ll look at concrete ideas and examples of things we can work towards to make our software more resilient.

Post Collapse Computing Part 1: The Crisis is Here

This is a lightly edited version of my GUADEC 2022 talk, given at c-base in Berlin on July 21, 2022. Trigger Warning: Very depressing facts about climate and societal collapse.

In this community I’m primarily known for my work as a designer, but if you know me a bit better you’re aware that I also do a different kind of activism, which sometimes looks like this:

Yours truly (bottom left), chained to a 1.5 degree symbol blocking a bridge near the German parliament.

This was an Extinction Rebellion action in Berlin earlier this year, the week the new IPCC report was released. Among other things, the report says that keeping global warming to within 1.5 degrees, the goal all our governments agreed to, is basically impossible at this point.

The idea with this action in particular was to force the state to symbolically destroy the 1.5 degree goal in order to clear our street blockade. Here’s the police doing that:

Police literally dismantling the 1.5 degree target :P

It’s Happening Now

The climate crisis is no longer a thing future generations will one day have to deal with, like we were told as kids. It’s here, affecting all of us today, including in the global north. Some of the people travelling to this year’s Berlin Mini GUADEC were delayed by the massive heatwave, because train tracks on the way could not handle the heat.

There are already a number of unavoidable horrible consequences on the horizon. These include areas around the equator where the combination of temperature and humidity is deadly for humans for parts of the year, crop failures causing ever larger famines, conflicts around resources such as water, and general infrastructure breakdown caused by a combination of ever more extreme weather events and decreasing capacity to deal with them.

Second-order consequences will include billions of people having to flee to less affected areas, which in turn will have almost unimaginable political consequences – If 5 million refugees from the Syrian civil war caused a Europe-wide resurgence in proto-fascist parties, what will 100 million or more do?

And that’s not the worst of it.

Tipping Over

The climate system is not linear. There are a number of tipping elements which, once destabilized, cannot be brought back to their previous state, and instead go from being carbon sinks to actually releasing carbon into the atmosphere.

These include forests such as the Amazon, polar ice sheets such as in Greenland, and perhaps most ominously, the gigantic amounts of methane frozen in the Russian permafrost. If some or all of these elements tip, they can kick off a self-reinforcing feedback loop of ever-accelerating warming, independent of human emissions.

We don’t know which tipping points are reached at what temperature exactly, but past 2 degrees it’s very likely that we’ll cross enough of them to cause 4, 5, or more degrees of warming.

We Have 3 Years

While many terrible things can’t be avoided anymore, scientists tell us that we still have a “brief and rapidly closing window of opportunity” to avoid some of the worst consequences – If we manage to turn things around and start actually reducing emissions in the next few years, and then continue doing so over the following decades.

That doesn’t mean each of us individually deciding to buy organic food and bamboo toothbrushes – The individual carbon footprint was literally invented by BP to deflect responsibility from corporations onto people. It’s obviously important to reduce our emissions as much as possible individually (especially luxury emissions such as meat and air travel), but that should not be where we stop or invest most of our energy.

No amount of individual action can really move the needle when just 100 companies are responsible for 70% of global emissions. All the real solutions are structural.

Unfortunately, on that front we’ve seen zero actual progress over the 40+ years that we’ve known about the impending catastrophe. Emissions have continuously increased in the past decades, rather than decreased. We’ve emitted more since the release of the first IPCC report in 1992 than in the entire history of humanity before that point. Even now, our governments are still subsidizing new fossil infrastructure with public money, while failing to meet the (already insufficient) goals they set for themselves.

Climate policy so far has completely failed to achieve even a reduction in new emissions, let alone removing carbon from the atmosphere to get back to a safe level below 350 ppm.

It’s not too Late – Yet

This political and economic system is clearly not capable of the kind of action needed to avert this crisis. However, we’re also not going to be able to build an entirely new system in the next few years; there’s just not enough time. We can either try to use the existing state regulatory apparatus to reduce emissions now, or accept collapse as inevitable.

That sounds incredibly bleak, and it is – but there really is still a path to turn this around, and there are people and movements with a plan. Depending on where you live they have different names, logos, and tactics, but the strategy is roughly the same:

  1. Mass Mobilization: Organize a small part of the population (something like a single digit percentage) into a mass civil resistance movement, and generate awareness of the emergency in the broader population.
  2. Civil Resistance: Use civil disobedience tactics to disrupt business, politics, and infrastructure and do enough economic damage that the government can’t ignore it.
  3. Citizens’ Assemblies: Demand that the government give the power to decide how to respond to the climate crisis to Citizens’ Assemblies. Members of these assemblies are chosen at random, in a way that is representative of the population, and advised by scientific experts. The assemblies can then decide how to reduce emissions and mitigate the effects of the crisis in a way that is both effective and socially equitable, because they are not beholden to capital interests.

This is of course an incredibly simplified version of the strategy (and I’d recommend reading up on it in detail), but it’s basically what groups such as Extinction Rebellion (international), Just Stop Oil (UK), Letzte Generation (DE), Dernière Rénovation (FR), and many others are working towards.

Successful multi-day blockade at “Großer Stern” in Berlin, 2019

So in the face of this, should we all just drop everything and start doing blockades for the next few years?

Well, yes. If you’re not currently doing civil disobedience wherever you live, I’d recommend looking into what groups exist locally and joining them. Even if you’re not ready to glue yourself to the road, there’s plenty of stuff you can do to help. They need your support to succeed, and we all really need them to succeed.

If you’re based in or near Germany, there’s actually a great opportunity coming up for getting involved: There’s a big rebellion wave September 17-20, so now’s an ideal time to get in touch with a local group nearby, do an action training, and book the trip to Berlin! See you there ;)

This is the first part of a four-part series. In part 2 we’ll explore what happens if we don’t manage to force our governments to enact radical change in the next few years, and what that would mean concretely for free software.

GUADEC 2019

Last week I was in Thessaloniki (Greece) for this year’s GUADEC. This time I took vacations before the conference, visiting Athens and Delphi among other places, before coming to Thessaloniki.

View from the Acropolis in Athens

Conference Days

I made an effort to see more talks this year, because there were so many interesting ones. Kudos to the speakers and organizers for getting such an excellent program together! Among my favorites were Allan’s talk on UX strategy and tactics, Cassidy’s about his research on dark styles, and Deb Nicholson’s closing keynote on building a free software utopia.

On Sunday I gave a talk about adaptive patterns and making GNOME apps that work well across form factors, from phones all the way to desktops. There is a video of my talk, and the slides are here.

Julian, myself, Bastian, Adrien, and Heather waiting for the Foundation AGM to start (and Sigu photobombing)

Monday: GTK

I spent most of Monday in the GTK BoF, where we discussed (among other things) menus, dark styles, upstreaming Libhandy widgets, and a new pattern library for GNOME. Since there were so many app maintainers in the room, we inevitably also talked about random things in various apps, such as the Mouse/Touchpad settings, the Display settings, and Evince.

Pattern Library

One of the things we’ve talked a lot about recently on the GNOME design team is making it easier to implement our UI patterns.

Many of the platform widgets are in GTK directly, which makes them easy to use, but hard to iterate on since we don’t want to break API there. Other things are in third party libraries, such as Libdazzle or Libhandy, but those are not “official”, and app developers have to know about them. Other widgets are just copy-pasted between apps, or completely custom everywhere. This makes it needlessly complex to follow our design guidelines, and has resulted in inconsistency in how different apps implement patterns. One idea to fix this was to have a separate official pattern library, and a new widget factory that showcases these patterns. This library could move significantly faster than GTK, and its release cycle could be better aligned with GNOME.

On a parallel track we’ve also been discussing for some time (both within Purism and GNOME) how best to upstream the widgets in Libhandy. Some of the things in there are fairly generic and should go into GTK (e.g. HdyColumn), but since GTK3 is stable we can’t upstream them directly before GTK4. Other widgets are more GNOME-specific (e.g. HdyPreferencesWindow), and would ideally be able to move at a faster speed, so a separate library would be a better fit.

We discussed this at length over the course of GUADEC, and this seems like the most likely way forward: For GTK4 we’ll have a separate GNOME pattern library outside GTK, which contains “official” widgets implementing patterns from the GNOME HIG. Some of these widgets would come from currently separate libraries such as Libhandy, while others might be moved out of GTK (e.g. GtkShortcutsWindow). The new library would have a clearly defined inclusion/review process for new widgets, and would be kept in sync with the HIG.

For GTK3 it seems like the path of least resistance is to just adopt Libhandy upstream, after removing a few things that are too specific or no longer needed (e.g. HdyDialer and HdyArrows), and instituting the same review process for new widgets as for the GTK4 library.

It’s still early days for all of this, but this is the current consensus as I understood it from various conversations at GUADEC.

Adrien and Bastian in front of the Arch of Galerius

Tuesday: Vendor Themes

On Tuesday we had a BoF with various interested parties (including design team, distros, and app developers) to discuss a possible future Vendor Styles API.

GNOME does not currently support making changes to the platform stylesheet/icons, and app developers build their apps with this in mind. Changing these things downstream means ripping an API out from under developers, which often results in apps looking broken. Some downstreams are doing it anyway, however, which is a problem for our overall platform developer story.

The good news is that some of these downstreams realize this is an issue, and are willing to work upstream to improve the situation. During the BoF we discussed the motivations for changing the platform style, and the kinds of changes they’re interested in. We broadly categorized these changes in 3 groups:

  1. “Accent Colors”: Making it possible to change some UI colors (at both the system and app level) without breaking things
  2. “Upstreamable”: Stylistic changes which upstream might be interested in (such as rounded menus, flatter checkboxes, etc.)
  3. “Here Be Dragons”: Anything that touches widget sizing, margins, and the like (because changes to these break apps the hardest)

We spent most of the BoF discussing the “Accent Colors” category, because that’s where most of the low-hanging fruit is. The main things we need to figure out for this are:

  • Which variables do we want, and how many can realistically be supported?
  • How would app developers test for different color combinations? What kind of tooling do we need to make that easier?
  • How do we ensure good contrast?
  • Can colors be set arbitrarily or are there constraints?
  • How are the colors set? What should that API look like?
  • Could we do this for GTK3 given that it’s stable? Would it be GTK4 only?
  • How would we handle Appstream screenshots looking different from the app once installed?

We discussed most of these questions in some detail, but all of this needs a lot more work before we can definitively say if and how we’ll go forward with it. At the BoF we outlined some first steps in this direction, namely documenting the current set of color variables within Adwaita, and looking at what other variables we might need. For more on this read the Discourse topic.

Once all of that is figured out though, there are some pretty exciting things on the horizon. For example, some third party app developers would like to use color in more interesting ways in their apps, but the way Adwaita is set up, this is currently not easy. If done right, making Adwaita more flexible would not only allow for vendor styles, but also empower app developers to do more cool stuff.

Epiphany uses a blue header bar in incognito mode. This kind of thing would be much easier to implement with an accent color API.
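
For a sense of what this looks like today, here’s a minimal GTK3/PyGObject sketch of how an app currently gets a colored header bar: loading a CSS snippet with a hardcoded color value (the blue below is arbitrary). An accent color API would essentially let apps and vendors replace that hardcoded value with a named color the platform defines and can keep legible.

    #!/usr/bin/env python3
    # Minimal GTK3 sketch: hardcoding a header bar color via custom CSS,
    # which is roughly what apps have to do today. The color is arbitrary.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk, Gdk

    css = b"headerbar.incognito { background: #3465a4; }"

    provider = Gtk.CssProvider()
    provider.load_from_data(css)
    Gtk.StyleContext.add_provider_for_screen(
        Gdk.Screen.get_default(), provider,
        Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)

    win = Gtk.Window(title="Incognito")
    header = Gtk.HeaderBar(show_close_button=True)
    header.get_style_context().add_class("incognito")
    win.set_titlebar(header)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()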

Another potential benefit is that a lot of the work around ensuring contrast on different background colors, better documentation around color variables, etc. would be needed anyway if we get a global “prefer dark” preference. If and when that happens, it will be a much easier transition if we’ve already worked out some of these things.

Lunch with some Purism people and other GNOME friends

Software Freedom/Ethics Ratings

I also sat down with François on Tuesday to discuss the software ethics rating system we’ve been thinking about for a while. The goal of this initiative is to make the value of software freedom more tangible to people when they’re looking for applications in an app store (e.g. “this app won’t leak your data” instead of “this app is licensed GPLv3+”). On the developer side, the goal is to encourage ethical practices (e.g. encryption) and discourage unethical ones (e.g. tracking). So far we’ve mostly discussed this idea internally at Purism, but as with everything else we’d ideally want it to be an ecosystem-wide thing others can benefit from rather than something we do downstream.

There’s plenty of relevant prior art for nudging people towards more ethical choices. In other industries there are many examples of both info badges/warnings (e.g. food labels with nutritional information) and indirect incentives (e.g. higher taxes on unhealthy foods). In the software realm, interesting examples include FDroid’s anti-feature warnings, and Terms of Service; Didn’t Read, which has curated summaries and letter grades.

Screenshot of the ToS;DR rating for Twitter

In our case we have limited options in the short term, because we have to deal with multiple different software sources (someone might have apps from a distribution repo, multiple Flatpak remotes, and who knows what else). In addition to that, giving a simple answer to the question “Is this app safe to use?” is often complicated and somewhat subjective. For example, it’s impossible to build an email client that doesn’t send unencrypted messages, or an RSS app that doesn’t connect to servers which could track you. Does that mean we should mark these apps as insecure/unethical?

There currently isn’t a trusted entity which could make the complex value judgements that are involved with deciding whether applications are ethical at scale, for many different software sources.

My feeling is that instead of coming up with a complicated process that may or may not work the way we expect it to, it would be good to first experiment with some leaner solutions and test the general approach. A potential first step we discussed could be something similar to the anti-features on FDroid, perhaps tied in with the existing Appstream metadata we have. In combination with Flatpak sandboxing/permissions there are a number of cases where you can actually say with relative certainty that an app is safe/ethical without complicated judgement calls (e.g. fully sandboxed apps without network access). If we can find a few such categories of apps this could be a good starting point, to see if this helps us reach our goals.
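
As a sketch of the kind of mechanical check that’s already possible (my own illustration, not an existing tool): Flatpak stores each app’s static permissions in a small keyfile, so a script can flag apps that request neither network access nor broad filesystem access. The installation path below is the common system-wide location and may differ on other setups.

    #!/usr/bin/env python3
    # Sketch: list installed Flatpak apps that request neither network access
    # nor broad filesystem access, based on their static permission keyfile.
    # The path is the usual system-wide installation; adjust as needed.
    import configparser
    from pathlib import Path

    FLATPAK_APPS = Path("/var/lib/flatpak/app")

    def permissions(app_dir: Path):
        keyfile = configparser.ConfigParser(interpolation=None)
        keyfile.read(app_dir / "current" / "active" / "metadata")
        context = keyfile["Context"] if "Context" in keyfile else {}
        shared = context.get("shared", "").split(";")
        filesystems = context.get("filesystems", "").split(";")
        return shared, filesystems

    if not FLATPAK_APPS.is_dir():
        raise SystemExit(f"No Flatpak apps found at {FLATPAK_APPS}")

    for app_dir in sorted(FLATPAK_APPS.iterdir()):
        shared, filesystems = permissions(app_dir)
        if "network" not in shared and not any(fs in ("host", "home") for fs in filesystems):
            print(f"{app_dir.name}: no network, no broad filesystem access")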

Obviously this needs a lot more work, but I’m hoping to do at least some mockups/prototypes soon. Also, I haven’t talked to a ton of people about this so far, but I imagine other projects with a focus in software freedom/ethics might be interested in this problem as well. If you work on such a project and have ideas/comments/concerns, let’s chat!

Wednesday: Beach BoF!

After 5 intense days of GUADEC and BoFs we had a more chill day at the beach on Wednesday. We still had lots of productive discussions about GNOME stuff there of course, maybe even better in some ways because we didn’t have our laptops distracting us :)

Thanks

I’d like to thank the organizers for putting together an awesome GUADEC, all of my GNOME friends old and new for being there, and the GNOME Foundation for sponsoring my attendance!

GUADEC 2018

A few weeks ago I attended GUADEC in Almeria, Spain. The travel was a bit of an adventure, because Julian and I went there and back from Italy by train. It was great though, because we had lots of time to hack on Fractal on the train.

We also met Bastian on the train from Madrid to Almeria

Conference Days

The conference days were great, though I didn’t manage to see many talks because I kept getting tied up in interesting discussions (first world problem, I know :D). I did give a talk of my own though, about my work at Purism on UI patterns for making GNOME apps work on mobile. There is a video recording of the talk, and here are my slides.

The main thing I tried to get across is that Purism isn’t trying to create a separate ecosystem or platform, but to make the GNOME platform better upstream. We ship vanilla GNOME on our laptops, and we want to do the same on phones. It’s of course early days, and it will take a while for everything to get into place, but it feels great to work for a company that has upstream-first as a core principle.

The biggest area where our efforts will make an impact upstream in the short term is the Libhandy widgets Adrien and Guido have been working on. These widgets allow regular GNOME/GTK apps to scale to smaller sizes using adaptive UI patterns. The patterns only extend GNOME’s existing HIG in a few small details, and can be used to make many existing GNOME apps adaptive without requiring major UI changes. We’re still experimenting with them, but once the patterns are solid and the widgets stable, we will work to upstream them into GTK.

Using Libhandy widgets will not only enable apps to run on phones, but also yield benefits on the desktop. For example, HdyColumn solves a very old problem many GTK apps have: Lists that need to grow with the window’s width, but also need a maximum width to ensure legibility. By enabling this, HdyColumn allows apps to work better on both very small and very large screens.
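
To illustrate, here’s roughly what using it looks like from Python – a sketch from memory of the libhandy 0.0 API of that time, so the exact property names may differ in other versions: the list fills narrow windows, but is capped at a comfortable reading width on wide ones.

    #!/usr/bin/env python3
    # Sketch of the HdyColumn idea: a list that fills narrow windows but is
    # capped at a readable width on wide ones. Property names are from memory
    # of the libhandy 0.0 API and may not match other versions exactly.
    import gi
    gi.require_version("Gtk", "3.0")
    gi.require_version("Handy", "0.0")
    from gi.repository import Gtk, Handy

    win = Gtk.Window(title="HdyColumn demo", default_width=900, default_height=400)

    listbox = Gtk.ListBox()
    for i in range(10):
        listbox.add(Gtk.Label(label=f"Row {i}", xalign=0, margin=12))

    # Grow with the window up to 500px, but never wider than 600px.
    column = Handy.Column(maximum_width=600, linear_growth_width=500)
    column.add(listbox)

    win.add(column)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()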

BoF Days

The BoF days were packed with interesting sessions, but sadly many of them happened simultaneously, so I was only able to attend a few of them. However, the ones I did attend were all incredibly productive and interesting, and I’m excited about the things we worked on and planned for the future.

Monday: Librem 5

On Monday I attended the all-day Librem 5 BoF, together with my colleagues from Purism, and some community members, such as Jordan and Julian from Fractal.

We talked about apps, particularly the messaging situation and Fractal. We discussed what will be needed in order to split the app, make the UI adaptive, and get end-to-end encryption. Daniel’s work on the database and Julian’s message history refactor are currently laying the groundwork for these.

On the shell side we talked through the design of various parts of the shell, such as keyboard, notifications, multitasking, and gestures. Though many of those things won’t be implemented in the near future, we have a plan for where we’re going with these, and getting designers and developers in one room was very productive for working out some of the details.

We also discussed a number of exciting new widgets to make it easier to get GNOME apps to work at smaller sizes, such as a new adaptive preferences window, and a way to allow modal windows to take up the entire screen at small sizes.

Multi-Monitor & Theming

On Tuesday we had a Multi-Monitor BoF, where we discussed multi-monitor behaviors with people from System76 and Ubuntu, among others. The most interesting parts to me were the discussions about adding some usecase-driven modes, such as a presentation mode, and a potential new keyboard-driven app switching interface (think Alt-Tab, but good). All of this will require a lot of work, but it’s great to see downstreams like System76 interested in driving initiatives like this.

In the afternoon we had the Theming & Ecosystem BoF, where we got designers, upstream GNOME developers, and people from various downstreams together to talk about the state of theming, and its impact on our ecosystem.

The basic problem we were discussing is that app developers want stable APIs and control over what their app looks like on user’s systems, while some distributions want to apply their own branding to everything. The current situation is pretty bad, because users end up with broken apps, developers constantly need to fix bugs for setups they didn’t want to support in the first place, and distributions need to invest lots of resources into building forever-slightly-broken custom themes. We discussed a number of possible approaches to tackle this problem, in order to make our platform easier to target. I’ll blog about this in more detail, but I’m excited about the possibility of finally solving this long-standing problem.

We also talked about some of our future plans with regard to icons. This includes a new “library” of symbolics, and a push for app developers to ship symbolic icons with their apps, rather than referencing arbitrary icon names which then have to be maintained in the platform forever. We also introduced the new app icon style initiative, which will make it drastically easier to make icons because they are simpler, more geometric, and there are fewer sizes to draw. All of this will still develop over the next cycle, since it’s not going to ship until 3.32.

Jakub presenting the new icon style

Wednesday: What is a GNOME app?

On Wednesday we had a small but very productive BoF to work on a proposal for a policy for including new apps as part of GNOME, and more generally getting a clearer definition of what it means for a project to be part of GNOME. There is currently no clear process for the inclusion of new projects as part of GNOME, so it doesn’t happen very often, and usually in a very disorganized fashion. This is a problem, because it leaves people who are excited about making new GNOME apps without a clear path to do so.

For example, both Fractal and Podcasts were built from the ground up to be GNOME apps, but still haven’t made it to the GNOME/ group on Gitlab officially, because there’s no clear policy. The new Calls app Bob is building for the Librem 5 is in the same limbo, just waiting around for someone to say if/how it can become a part of GNOME officially.

At the BoF we drafted a proposal for an explicit inclusion policy. The idea is for apps that follow a set of criteria (e.g. following the HIG, using our tech stack) to be able to apply for inclusion, and to have some kind of committee that could review these requests.

This is only a very rough proposal for now, but I’m excited about the potential it has to bring in more developers from the wider ecosystem. And of course, as the designer of a half dozen semi-official GNOME apps I’m very selfishly interested in getting this in place ;)

Thanks everyone!

Some of the Fractal core team meeting for the first time at the pre-registration event (Daniel, Jordan, Julian, and me)

In addition to all of the above, it was great to meet and hang out with so many of the awesome people in our community at social events, beach BoFs, and ad-hoc hacking sessions on the corridor. It’s hard to believe that one year ago I came to GUADEC as a newcomer. This year it felt like coming home.

Thanks everyone, and see you next year o/