Re-Decentralizing Development

As I’ve already announced internally, I’m stepping down from putting together an STF application for this year. For inquiries about the 2025 application, please contact Adrian Vovk going forward. This is independent of the 2024 STF project, which we’re still in the process of wrapping up. I’m sticking around for that until the end.

The topic of this blog post is not the only reason I’m stepping down, but it is an important one, and I think some of it is general enough to be worth discussing more widely.

In the context of the Foundation issues we’ve had throughout the STF project I’ve been thinking a lot about what structures are best suited for collectively funding and organizing development, especially in the context of a huge self-organized project like GNOME. There are a lot of aspects to this, e.g. I hadn’t quite realized just how important having a motivated, talented organizer like Sonny is to successfully delivering a complex project. But the specific area I want to talk about here is how power and responsibilities should be split up between different entities across the community.

This is my personal view, based on having worked on GNOME in a variety of structures over the years (volunteer, employee, freelancer, for- and non-profit, worked under grants, organized grants, etc.). I don’t have any easy answers, but I wanted to share how my perspective has shifted as a result of the events of the past year, which I hope will contribute to the wider ongoing discussion around this.

A Short History

Unlike many other user-facing free software projects, GNOME has had strong corporate involvement since early in its history, with many different product companies and consultancies paying people to work on various parts of it. The project grew up during the Dotcom bubble (younger readers may not remember this, but “Linux” was the “AI” of that era), and many of our structures date back to this time.

The Foundation was created in those early days as a neutral organization to hold resources that should not belong to any one of the companies involved (such as the trademark, donation money, or the development infrastructure). A lot of checks and balances were put in place to avoid one group taking over the Foundation or the Foundation itself overshadowing other players. For example, hiring developers via the Foundation was an explicit non-goal, advisory board companies do not get a say in the project’s technical direction, and there is a limit to how many employees of any single company can be on the board. See this episode of Emmanuele Bassi’s History of GNOME Podcast for more details.

The Dotcom bubble burst and some of those early companies died, but there continued to be significant corporate investment, e.g. from enterprise desktop companies like Sun, and then later via companies from the mobile space during the hype cycles around netbooks, phones, and tablets around 2010.

Fast forward to today, and the situation has changed drastically. In 2025 the desktop is not a growth area for anyone in the industry, and it hasn’t been in over a decade. Ever since the demise of Nokia and the consolidation of the iOS/Android duopoly, most of the money in the ecosystem has been in server and embedded use cases.

Today, corporate involvement in GNOME is limited to a handful of companies with an enterprise desktop business (e.g. Red Hat), and consultancies that mostly do low-level embedded work (e.g. Igalia with browsers, or Centricular with GStreamer).

Retaining the Next Generation

While the current level of corporate investment, in combination with volunteer work from the wider community, has been enough to keep the project afloat in recent years, we have a pretty glaring issue with our new contributor pipeline: There are very few job openings in the field.

As a result, many people end up dropping out or reducing their involvement after they finish university. Others find jobs working on adjacent technologies where they occasionally get work time for GNOME-related stuff, and put in a lot of volunteer time on top. Others still are freelancing, applying for grants, or trying to make Patreon work.

While I don’t have firm numbers, my sense is that the number of people in precarious situations like these has been going up since I got involved around 2015. The time when you could just get a job at Red Hat was already long gone when I joined, but for a while companies like Endless and Purism had quite a few people doing interesting stuff.

In a sense this lack of corporate interest is not unusual for user-facing free software — maybe we’re just reverting to the mean. Public infrastructure simply isn’t commercially profitable. Many other projects, particularly ones without corporate use cases (e.g. Tor), have always been in this situation, and thus have always relied on grants and donations to fund their development. Others have newly moved in this direction in recent years with some success (e.g. Thunderbird).

Foundational Issues

I think what many of us in the community have wanted to see for a while is exactly what Tor, Thunderbird, Blender et al. are doing: Start doing development at the Foundation, raise money for it via donations and grants, and grow the organization to pick up the slack from shrinking corporate involvement.

I know why this idea is so seductive to many of us, and has been for years. It’s in fact so popular that I found four board candidacies (1, 2, 3, 4) from the last few election cycles proposing something like it.

On paper, the Foundation seems perfect as the legal structure for this kind of initiative. It already exists, it has the same name as the wider project, and it already has the infrastructure to collect donations. Clearly all we need to do is to raise a bit more money, and then use that money to hire community members. Easy!

However, after having been in the trenches trying to make it work over the past year, I’m now convinced it’s a bad idea, for two reasons: Short/medium term the current structure doesn’t have the necessary capacity, and longer term there are too many risks involved if something goes wrong.

Lack of Capacity

Simply put, what we’ve experienced in the context of the STF project (and a few other initiatives) over the past year is that the Foundation in its current form is not set up to handle projects that require significant coordination or operational capacity. There are many reasons for this — historical, personal, structural — but my takeaway after this year is that there need to be major changes across many of the Foundation’s structures before this kind of thing is feasible.

Perhaps given enough time the Foundation could become an organization that can fund and coordinate significant development, but there is a second, more important reason why I no longer think that’s the right path.

Structural Risk

One advantage of GNOME’s current decentralized structure is its resilience. Having a small Foundation at the center that only handles infrastructure, with several independent companies and consultancies around it doing development, means the different parts are insulated from each other if something goes wrong.

If there are issues inside e.g. Codethink or Igalia, the maximum damage is limited and the wider project is never at risk. People don’t have to leave the community if they want to quit their current job, ideally they can just move to another company and continue most of their upstream work.

The same is not true of projects with a monolithic entity at the center. If there’s a conflict in that central monolith it can spiral ever wider if it isn’t resolved, affecting more and more structures and people, and doing drastically more damage.

This is a lesson we unfortunately had to learn the hard way when, out of the blue, Sonny was banned last year. I’m not going to talk about the ban here (it’s for Sonny to talk about if/when he feels like it), but suffice to say that it would not have happened had we not done the STF project under the Foundation, and many community members including myself do not agree with the ban.

What followed was, for some of us, maybe the most stressful 6 months of our lives. Since last summer we’ve had to try and keep the STF project running without its main architect, while also trying to get the ban situation fixed, as well as dealing with a number of other issues caused by the ban. Thousands of volunteer hours were probably burned on this, and the issue is not even resolved yet. Who knows how many more will be burned before it’s over. I’m profoundly sad thinking about the bugs we could have fixed, the patches we could have reviewed, and the features we could have designed in those hours instead.

This is, to me, the most important takeaway and the reason why I no longer believe the Foundation should be the structure we use to organize community development. Even if all the current operational issues are fixed, the risk of something like this happening is too great, the potential damage too severe.

What are the Alternatives?

If using the Foundation is too risky, what other options are there for organizing development collectively?

I’ve talked to people in our community who feel that NGOs are fundamentally a bad structure for development projects, and that people should start more new consultancies instead. I don’t fully buy that argument, but it’s also not without merit in my experience. Regardless, I think everyone has also seen at one point or another how dysfunctional corporations can be. My feeling is that it depends heavily on the people and culture involved, rather than just the specific legal structure.

I don’t have a perfect solution here, and I’m not sure there is one. Maybe the future is a bunch of new consulting co-ops doing a mix of grants and client work. Maybe it’s new non-profits focused on development. Maybe we need to get good at Patreon. Or maybe we all just have to get a part time job doing something else.

Time will tell how this all shakes out, but the realization I’ve come to is that the current decentralized structure of the project has a lot of advantages. We should preserve this and make use of it, rather than trying to centralize everything on the Foundation.

Boiling The Ocean Hackfest

Last weekend we had another edition of last year’s post-All Systems Go hackfest in Berlin. This year it was even more of a collaborative event with friends from other communities, particularly postmarketOS. Topics included GNOME OS, postmarketOS, systemd, Android app support, hardware enablement, app design, local-first sync, and many other exciting things.

This left us with an awkward branding question, since we didn’t want to name the event after one specific community or project. Initially we had a very long and unpronounceable acronym (LMGOSRP), but I couldn’t bring myself to use that on the announcement post so I went with something a bit more digestible :)

“Boiling The Ocean” refers to the fact that this is what all the hackfest topics share in common: They’re all very difficult long-term efforts that we expect to still be working on for years before they fully bear fruit. A second, mostly incidental, connotation is that the ocean (and wider biosphere) is currently being boiled thanks to the climate crisis, and that much of our work has a degrowth or resilience angle (e.g. running on older devices or local-first).

I’m not going to try to summarize all the work done at the event since there were many different parallel tracks, many of which I didn’t participate in. Here’s a quick summary of a few of the things I was tangentially involved in, hopefully others will do their own write-ups about what they were up to.

Mobile

Mainline Linux on ex-Android phones was a big topic, since there were many relevant actors from this space present. This includes the postmarketOS crew, Robert with his camera work, and Jonas and Caleb who are still working on Android app support via Alien Dalvik.

To me, one of the most exciting things here is that we’re seeing more well-supported Qualcomm devices (in addition to everyone’s favorite, the OnePlus 6) these days thanks to all the work being done by Caleb and others on that stack. Between this, the progress on cameras, and the Android app support, maybe we can finally do the week-long daily driving challenge we’ve wanted to do for a while at GUADEC 2025 :)

Design

On Thursday night we already did a bit of pre-event hacking at a cafe, and I had an impromptu design session with Luca about eSIM support. He has an app for this at the moment, though of course ideally this should just be in Settings longer-term. For now we discussed how to clean up the UI a bit and bring it more in line with the HIG, and I’ll push some updates to the cellular settings mockups based on this soon.

On Friday I looked into a few Papers things with Pablo, in particular highlights/annotations. I pushed the new mockups, including a new way to edit annotations. It’s very exciting to see how energetic the Papers team is, huge kudos to Pablo, Qiu, Markus, et al for revitalizing this app <3

On Saturday I sat down with fellow GNOME design contributor Philipp, and looked at a few design questions in Decibels and Calendar. One of my main takeaways is that we should take a fresh look at the adaptive Calendar layout now that we have Adwaita breakpoints and multi-layout.

47 Release Party

On Saturday night we had the GNOME 47 release party, featuring a GNOME trivia quiz. Thanks to Ondrej for preparing it, and congrats to the winners: Adrian, Marvin, and Stefan :)

Local-First

Adrian and Andreas from p2panda had some productive discussions about a longer-term plan for a local-first sync system, and immediate next steps in that direction.

We have a first collaboration planned in the form of a Hedgedoc-style local-first syncing pad, codenamed “Aardvark” (initial mockups). This will be based on a new, more modular version of p2panda (still WIP, but to be released later this year). Longer-term the idea is to have some kind of shared system level daemon so multiple apps can use the same syncing infrastructure, but for now we want to test this architecture in a self-contained app since it’s much easier to iterate on. There’s no clear timeline for this yet, but we’re aiming to start this work around the end of the year.

GNOME OS

On Sunday we had a GNOME OS planning meeting with Adrian, Abderrahim, and the rest of the GNOME OS team (remote). The notes are here if you’re interested in the details, but the upshot is that the transition to the next-generation stack using systemd sysupdate and homed is progressing nicely (thanks to the work Adrian and Codethink have been doing for our Sovereign Tech Fund project).

If all goes to plan we’ll complete both of these this cycle, making GNOME OS 48 next spring a real game changer in terms of security and reliability.

Community

Despite the very last-minute announcement and some logistical back and forth, the event worked out beautifully, and we had over 20 people joining across the various days. In addition to the usual suspects I was happy to meet some newcomers, including from outside Berlin and outside the typical desktop crowd. Thanks for joining everyone!

Thanks also to Caleb and Zeeshan for helping with organization, and to the venues that hosted us across the various days:

  • offline, a community space in Neukölln
  • JUCR, for hosting us in their very cool Kreuzberg office and even paying for drinks and food
  • The x-hain hackerspace in Friedrichshain

See you next time!

Local-First Workshop (feat. p2panda)

This week we had a local-first workshop at offline in Berlin, co-organized with the p2panda project. As I’ve written about before, some of us have been exploring local-first approaches as a way to sync data between devices, while also working great offline.

We had a hackfest on the topic in September, where we mapped out the problem space and discussed different potential architectures and approaches. We also realized that while there are mature solutions for the actual data syncing part with CRDT libraries like Automerge, the network and discovery part is more difficult than we thought.

Network Woes

The issues we need to address at the network level are the classic problems any distributed system has, including:

  • Discovering other peers
  • Connecting to the other peers behind a NAT
  • Encryption and authentication
  • Replication (which clients need what data?)

We had sketched out a theoretical architecture for first experiments at the last hackfest, using a WebRTC data channel to send data and hardcoding a public STUN server for rendezvous.

A few weeks after that I met Andreas from p2panda at an event in Berlin. He mentioned that in p2panda they have robust networking already, including mDNS discovery on the local network, remote peer discovery using rendezvous servers, p2p connections via UDP holepunching or relays, data replication, etc. Since we’re very interested in getting a low-fi prototype working sooner rather than later it seemed like a promising direction to explore.

p2panda

The p2panda project aims to provide a batteries-included SDK for easy local-first app development, including all the hard networking stuff mentioned above. It’s been around since about 2020, and is currently primarily developed by Andreas Dzialocha and Sam Andreae.

The architecture consists of nodes and clients. Nodes include networking, materialization, and an SQL database. Clients sign and create data, and interact with the node using a GraphQL API.

As of the latest release there’s TLS transport encryption between nodes, but end-to-end data encryption using MLS is still being worked on, as are a capabilities system and privacy-respecting deletion. Currently there’s a single key/value CRDT being used for all data, with no high-level way for apps to customize this.

The Workshop

The idea for the workshop was to bring together people from the GNOME and local-first communities, discuss the problem space, and do some initial prototyping.

For the latter Andreas prepared a little bookmark manager demo project (git repository) that people can open in Workbench and hack on easily. This demo runs a node in the background and accesses the database via GraphQL from a simple GTK frontend, written in Rust. The demo app automatically finds other peers on the local network and syncs the data between them.

Bookmark syncing demo using p2panda, running in Workbench
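
To give a feel for the client/node split described above, here is a rough, hypothetical sketch of what querying a locally running node over GraphQL might look like from Rust. The endpoint URL and the query/field names are placeholders made up for illustration — the real schema and setup live in the demo repository linked above.

```rust
// Hypothetical sketch: a client asking a locally running p2panda node for data
// over GraphQL. The URL, query, and field names are illustrative placeholders,
// not the demo's actual schema.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Made-up query shape: fetch all bookmark documents the node has materialized.
    let query = r#"
        query {
            bookmarks {
                title
                url
            }
        }
    "#;

    let response: serde_json::Value = client
        .post("http://localhost:2020/graphql") // assumed local node endpoint
        .json(&json!({ "query": query }))
        .send()?
        .json()?;

    // Pretty-print whatever the node returned.
    println!("{response:#}");
    Ok(())
}
```

The point is mainly that the frontend never deals with networking or storage directly; it just asks its local node for data.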

We had about 10 workshop participants with diverse backgrounds, including an SSB developer, a Mutter developer, and some people completely new to both local-first and GTK development. We didn’t get a ton of hacking done due to time constraints (realistically we had enough program for an all-day workshop :D), but a few people did start projects they plan to pursue after the workshop, including C/GObject bindings for p2panda-rs and an app/demo to sync a list of map locations. We also had some really good discussions on local-first architecture, and the GNOME perspective on this.

Thoughts on Local-First Architectures

The way p2panda splits responsibilities between components is optimized for simple client development, and being able to use it in the browser using the GraphQL API. All of the heavy lifting is done in the node, including networking, data storage, and CRDTs. It currently only supports one CRDT, which is optimized for database-style apps with lots of discrete fields.

One of the main takeaways from our previous hackfest was that data storage and CRDTs should ideally be in the client. Different apps need different CRDTs, because these encode the semantics of the data. For example, a text editor would need a custom text CRDT rather than the current p2panda one.

Longer-term we’ll probably want an architecture where clients have more control over their data to allow for more complex apps and diverse use cases. p2panda can provide these building blocks (generic reducer logic, storage providers, networking stack, etc.) but these APIs still need to be exposed for more flexibility. How exactly this could be done and if/how parts of the stack could be shared needs more exploration :)

Theoretical future architectures aside, p2panda is a great option for local-first prototypes that work today. We’re very excited to start playing with some real apps using it, and actually shipping them in a way that people can try.

What’s Next?

There’s a clear path towards first prototype GNOME apps using p2panda for sync. However, there are two constraints to keep in mind when considering ideas for this:

  • Data is not encrypted end-to-end for now (so personal data is tricky)
  • The default p2panda CRDT is optimized for key / value maps (more complex ones would need to be added manually)

This means that unfortunately you can’t just plug this into a GtkSourceView and have a Hedgedoc replacement. That said, there’s still lots of cool stuff you can do within these constraints, especially if you get creative in the client around how you use/access data. If you’re familiar with Rust, the Workbench demo Andreas made is a great starting point for app experiments.

Some examples of use cases that could be well-suited, because the data is structured but not very sensitive (a rough sketch of how such data might be modeled follows the list):

  • Expense splitting (e.g. Splittypie)
  • Meeting scheduling (e.g. Doodle)
  • Shopping list
  • Apartment cleaning schedule
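
Here is the rough sketch mentioned above (plain Rust, not real p2panda API): keeping the data for one of these use cases flat enough to fit the current key/value CRDT, with every field as an independent string entry so concurrent edits to different fields can merge cleanly. The field names are made up for this example.

```rust
// Illustrative only: a shopping list entry modeled as a flat map of string
// fields, the kind of shape a key/value CRDT handles well.
use std::collections::HashMap;

fn shopping_item(name: &str, quantity: u32, done: bool) -> HashMap<String, String> {
    HashMap::from([
        ("name".to_string(), name.to_string()),
        ("quantity".to_string(), quantity.to_string()),
        ("done".to_string(), done.to_string()),
    ])
}

fn main() {
    let item = shopping_item("Oat milk", 2, false);

    // Because each field is its own key/value entry, one device ticking "done"
    // while another fixes the name can both be merged without a conflict.
    for (key, value) in &item {
        println!("{key}: {value}");
    }
}
```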

Thanks to Andreas Dzialocha for co-organizing the event and providing the venue, Sebastian Wick for co-writing this blog post, Sonny Piers for his help with Workbench, and everyone who joined the event. See you next time!

GNOME 45 Release Party & Hackfest

In celebration of the 45 release we had a hackfest and release party in Berlin last week. It was initially supposed to be a small event, but it turns out the German community is growing more rapidly than we thought! In the end we were around 25 people, about half of them locals from Berlin :)

GNOME OS

Since many of the GNOME OS developers were in town for All Systems Go, this was one of the main topics. In addition to Valentin, Javier, and Jordan (remote), we also had Lennart from systemd and Adrian from carbonOS, and we discussed many of the key issues for image-based operating systems.

I was only present for part of these discussions so I’ll leave it to others to report the results in detail. It’s very exciting how things are maturing in this area though, as everyone is standardizing on systemd’s tools for image-based OSes.

Discussing the developer story on image-based OSes with Jonas and Sebastian from GNOME Shell/mutter. Left side: Adrian, Javier, Valentin, Sebastian. Right side: Jonas, Kai, Lennart

Local-First

On Saturday the primary topic was local-first. This is the idea that software should always work offline, and optionally use the network when available for device sync and collaboration. This allows for people to own their data, but still have access to modern features like multiplayer editing.

People in the GNOME community have long been interested in local-first and we’ve had various discussions and experiments in this direction over the past few years. However, so far we have not really investigated how we’d implement it at a larger scale, and what concrete steps in that direction would look like.

For context, any sync system (local-first or not) needs the following things:

  • Network: Device discovery, channel to send the actual data, way to handle offline nodes, encryption, device authentication, account management
  • Sync: Merging data from different peers, handling conflicts
  • UI: User interface for viewing and manipulating the data, showing sync status, managing devices, permissions, etc.

Local-first usually refers to systems that do the “sync” part on the client, though that doesn’t mean the other areas are easy :)

Muse

Adam Wiggins stopped by on Saturday morning to tell us about his work on Muse, a local-first whiteboard app for Apple platforms. While it’s a totally different tech stack and background, it was super interesting because Muse is one of very few consumer apps using local-first sync in production today.

Some of my takeaways from the session with Adam:

  • Local-first means all the logic lives in the client. In the Muse architecture, the server is extremely simple, basically just a dumb pipe routing data between clients. While data is not encrypted end-to-end in their case, it’s possible to use this same architecture with E2E.
  • CRDTs (conflict-free replicated data types) are a magical new advancement in computer science from the past few years, which makes the actual merging of content relatively easy (see the minimal sketch after this list).
  • Merge conflicts are not as big a deal as one might think, and not the hardest problem to solve in this space.
  • Local-first is a huge opportunity for desktop projects like GNOME. We were not really able to be competitive with proprietary software in the past decade on features like sync and multiplayer because we can’t realistically run huge cloud services for every single app/use case. Local-first could change this, since the logic is shifting back to the client. Servers become generic dumb pipes, which all kinds of apps can use without needing their own custom sync server.
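
The sketch referenced above: a minimal, self-contained illustration of the CRDT idea (plain Rust, not Muse’s or anyone’s production code) using the simplest kind of CRDT, a grow-only set. Merging is just a set union, and because that operation is commutative, associative, and idempotent, replicas converge no matter in which order updates arrive.

```rust
// Minimal grow-only set CRDT for illustration. Real systems use richer CRDTs
// (text, maps, counters), but the merge-is-a-union idea is the same.
use std::collections::HashSet;

#[derive(Clone, Default)]
struct GrowOnlySet {
    items: HashSet<String>,
}

impl GrowOnlySet {
    fn add(&mut self, item: &str) {
        self.items.insert(item.to_string());
    }

    // Merge another replica's state into this one.
    fn merge(&mut self, other: &GrowOnlySet) {
        self.items.extend(other.items.iter().cloned());
    }
}

fn main() {
    let mut device_a = GrowOnlySet::default();
    let mut device_b = GrowOnlySet::default();

    // Both devices make changes while offline.
    device_a.add("buy oat milk");
    device_b.add("water the plants");

    // When they reconnect, each merges the other's state; both end up identical.
    let snapshot_a = device_a.clone();
    device_a.merge(&device_b);
    device_b.merge(&snapshot_a);

    assert_eq!(device_a.items, device_b.items);
    println!("{:?}", device_a.items);
}
```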

To learn more about Muse, I recommend watching Adam’s Local-First Meetup talk from earlier this year, which touches on many of the topics we discussed in our hackfest session as well.

Other Relevant Art

The two projects we discussed as relevant art from our community are Christian Hergert’s Bonsai, and Carlos Garnacho’s work on RDF sync in tracker (codename “Emergence”).

Bonsai is not quite local-first architecturally, since it assumes an always-on home server. This server hosts your data and runs services, which apps on your other devices can use to access data, sync, etc. This is quite different from the dumb pipe server model discussed above, and of course comes with the usual caveats with any kind of public-facing service on local networks (NAT, weird network configurations, etc.).

Codename “Emergence” is a way to sync graph databases (such as tracker’s SPARQL database). It only touches on the “sync” layer, and is only intended for app data, e.g. bookmarks, contacts, and the like. There was a lot of discussion at the hackfest about whether the conflict resolution algorithm is/could be a CRDT, but regardless, using this system for syncing some types of content wouldn’t affect the overall architecture. We could use it for syncing e.g. bookmarks, and share the rest of the stack (e.g. network layer) with other apps not using tracker.

Afternoon local-first discussion. Left-to-right: Zeeshan, Sebastian, Andrei, Marvin, Adrian, Carlos, and myself

Next Steps

By the end of the hackfest, we had a rough consensus that long-term we probably want something like this:

  • Muse-style architecture with dumb pipe sync servers that only route encrypted traffic between clients
  • Some kind of system daemon that apps can use to send packets to sync servers, so they don’t all have to run in the background
  • The ability to fall back to other kinds of transport with full compatibility, e.g. local network or USB keys
  • A client library that makes it easy to integrate sync into apps, using well-established CRDTs

However, there was also a general feeling that we want to go slowly and explore the space before coming up with over-engineered solutions. To this end, we think the best next step is to try CRDTs in small, self-contained apps. We brainstormed a number of potentially interesting use cases, including:

  • Alarms: Make extra sure you hear your alarms by having them synced on all devices
  • Scratchpad: Super simple notepad that’s always in sync across devices
  • Emoji history: The same recently used emoji on all devices
  • Podcasts: Sync subscription list, episode playback state, per-episode progress, currently played episode, etc.
  • Birthday reminders: Simple list of birthdays with reminder notifications that syncs across all devices

For a first minimum viable prototype we discussed ways to cut as many corners as possible, and came up with the following plan:

  • Use an off-the-shelf plain text CRDT to build a syncing scratchpad as a first experiment
  • To avoid having to deal with servers, do peer-to-peer transfer only and send data via WebRTC data channel
  • For peer discovery, just hardcode a public WebRTC STUN server in the client
  • Simple Rust GTK app mostly consisting of a text area, using GStreamer for WebRTC and Automerge for CRDTs
  • Sync only between two devices

We’ll see how this develops, but it’s great to have mapped out the territory, and put together a concrete plan for next steps in this direction. I’m also conscious that we’re a huge community and only a handful of people were present at the hackfest. It’s very likely that these plans will evolve as more people get involved and we get more experience working with the technology.

For more detail on the discussions, check out the full notes from our local-first sessions.

If you’d like to experiment with this in your own app and have any questions, don’t hesitate to reach out :)

Our beautiful GNOME 45 cake :)

Transparent Panel

Jonas has had an open merge request for a transparent panel for a number of years, and while we’ve tried to get it over the line a few times we never quite managed. Recently Adrian Vovk was interested in giving it another try, so at the hackfest he and Jonas sat down and did some archaeology on Jonas’ old commits, rebased it, got it to work again, and opened a new merge request.

Adrian and Jonas hard at work rebasing ancient commits

While there are still a few open questions and edge cases, it’s early in the cycle so there’s a real chance that we might finally get this in for 46 :)

And More!

A few other things I was involved with during the hackfest:

  • Julian looked into one of my pet bugs: The default generated avatars when you create a new account not looking like AdwAvatar, but using an older, uglier implementation. This is surprisingly tricky because GDM/GNOME Shell can’t show GTK widgets, and the PNG avatars exported from libadwaita can only be exported at the size they’re displayed at (which is smaller than in GDM/GNOME Shell).
  • We worked a bit on Annotations in Evince with Pablo, and also interviewed Keywan about how they use annotations in the editorial process for their magazine.
  • DieBahn was officially renamed to Railway, and we discussed next steps for the app and train APIs in general. Railway works for so many providers by accident, because so many of them use the same backend (HAFAS), but it’d be great to have actual open APIs for querying trains from all providers. Perhaps we need a lobbying group to get some EU legislation for this? :)
  • We discussed a “demo mode”, i.e. an easy way to set up a device with a bunch of nice-looking apps, pre-loaded with nice-looking content. One potential approach we discussed was a script that installs a set of apps, and sets them up with data by pre-filling their ~/.var/app/ directories. The exact process for creating and updating this data would need looking into, but I’m very interested in getting something set up for this, because not having it really makes our software hard to demo.
  • Marvin showed me how he uses CLion for C/Vala development, and we discussed what features Builder would need to gain for him to switch from his custom Vala setup in CLion to Builder.
Julian working on fixing the default avatars

Thanks to Sonny for co-organizing, Cultivation Space for hosting us, and the GNOME Foundation for financial sponsorship! See you next time :)

Berlin Mini GUADEC 2022

Given the location of this year’s GUADEC many of us couldn’t make it to the real event (or didn’t want to because of the huge emissions), but since there’s a relatively large local community in Berlin and nearby central Europe, we decided to have a new edition of our satellite event, to watch the talks together remotely.

This year we were quite a few more people than last year (a bit more than 20 overall), so it almost had a real conference character, though the organization was a lot more barebones than a real event of course.

Thanks to Sonny Piers we had c-base as our venue this year, which was very cool. In addition to the space ship interior we also had a nice outside area near the river we could use for COVID-friendly hacking.

The main hacking area inside at c-base

We also had a number of local live talks streamed from Berlin. Thanks to the people from c-base for their professional help with the streaming setup!

On Thursday I gave my talk about post-collapse computing, i.e. why we need radical climate action now to prevent a total catastrophe, and failing that, what we could do to make our software more resilient in the face of an ever-worsening crisis.

Unfortunately I ran out of time towards the end so there wasn’t any room for questions/discussion, which is what I’d been looking forward to the most. I’ll write it up in blog post form soon though, so hopefully that can still happen asynchronously.

Hacking outside c-base on the river side

Since Allan, Jakub, and I were there we wanted to use the opportunity to work on some difficult design questions in person, particularly around tiling and window management. We made good progress in some of these areas, and I’m personally very excited about the shape this work is taking.

Because we had a number of app maintainers attending we ended up doing a lot of hallway design reviews and discussions, including about Files, Contacts, Software, Fractal, and Health. Of course, inevitably there were also a lot of the kinds of cross-discipline conversations that can only happen in these in-person settings, and which are often what sets the direction for big things to come.

One area I’m particularly interested in is local-first and better offline support across the stack, both from a resilience and UX point of view. We never quite found our footing in the cloud era (i.e. the past decade) because we’re not really set up to manage server infrastructure, but looking ahead at a local-first future, we’re actually in a much better position.

The Purism gang posing with the Librem 5: Julian, Adrien, and myself

For some more impressions, check out Jakub’s video from the event.

Thanks to everyone for joining, c-base for hosting, the GNOME Foundation for financial support for the event, and hopefully see you all next year!

Save the Date: Berlin Mini GUADEC

Since GUADEC is hard to get to from Europe and some of us don’t do air travel, we’re going to do another edition of Berlin Mini GUADEC this year!

We have a pretty solid local community in Berlin these days, and there are a lot of other contributors living reasonably close by in and around central Europe. Last year’s edition was very fun and productive with minimal organizational effort, and this year will be even better!

Location and other details are TBA, but it’s going to be in Berlin during the conference and BoF days (July 20th to 25th).

Update: The location is C-Base (Rungestraße 20 in Kreuzberg, near U Jannowitzbrücke), and there’s now a Wiki page. Please add yourself to the attendee list so we get an idea how many people will be joining :)

See you in Berlin!

Berlin Mini GUADEC

Like everyone else, I’m sad that we can’t have in-person conferences at the moment, especially GUADEC. However, thanks to the lucky/privileged combination of low COVID case numbers in central Europe over the summer, vaccines being available to younger people now, and a relatively large local community in and around Berlin we were able to put together a tiny in-person GUADEC satellite event.

Despite the somewhat different context we did a surprising number of classic GUADEC activities such as struggling to make it to the venue by lunchtime, missing talks we wanted to watch, and walking around forever to find food.

As usual we also did quite a bit of hacking (on Adwaita, Fractal, and Shell among other things), and had many interesting cross-domain discussions that rarely happen outside of physical meetups.

Thanks to Elio Qoshi and Onion Space for hosting, the GNOME Foundation for sponsoring, and everyone for attending. See you all at a real GUADEC next year, hopefully!

Community Power Part 5: First Steps

In the previous parts of this series (part 1, part 2, part 3, part 4) we looked at how power works within GNOME, and what this means for people wanting to have an impact in the project. An important takeaway was that the most effective way to do that is to get acquainted with the project’s ethos and values, and then work towards things that align with them.

However, you have to start somewhere. In practical terms, how do you do that?

Start Small

Perhaps you have lots of big ideas and futuristic plans for the project, and your first impulse is to start working on those. However, if you’re a new contributor keep the following in mind:

  • There’s often important context and history around a subject that you may not be aware of yet. Having this context inform your ideas generally makes them better and easier for others to get on board with.
  • It’s important to build trust with the community. People are likely to be skeptical of super ambitious proposals from people they don’t know yet, and who may not stick around long term.
  • Learning to effectively advertise your ideas and get buy-in from various people takes time. This goes especially for bigger changes, e.g. ones which impact many different modules.

Ideally the size of the things you propose should be proportionate to how well-integrated into the community you are. Trying to do a complete rewrite of GNOME Shell as your first contribution is likely not going to result in much. Something simple and self-contained, such as an individual view in an app is usually a good place to get started.

This doesn’t mean newcomers shouldn’t dream big (I certainly did). However, realistically you’ll be more successful starting with small tasks and working your way up to larger ones as you gain a better understanding of the project’s history, the underlying technologies, and the interests of various stakeholders.

Jumping In

What exactly to do first depends on the area you’re planning on contributing to. I’ll keep this focused on the areas I’m personally most involved with and which have the most immediate impact on the product, but of course there are lots of other great ways to get involved, such as documentation, engagement, and localization.

  • For programming there is a newcomer guide that guides you towards your first merge request. Check out the developer portal for documentation and other resources. Beyond the newcomer projects you can of course also just look at open newcomer (and non-newcomer) issues in specific projects written in your language of choice on GNOME Gitlab.
  • For design it’s easiest to just reach out to the design team and ask them to help you find a good first task. Ideally you’d start working with developers on something real as soon as possible, and the design team usually know what urgently needs design at the moment.

Of course, if you’re a developer there’s also the option of starting out by writing your own third-party apps, rather than contributing to existing ones. A great third-party app is a very valuable contribution to the project, and with GNOME Circle there is a direct path to GNOME Foundation membership.

Community

Becoming a part of the community is not just about doing work. It’s also about generally being active in community spaces, whether that’s hanging out in chat rooms, interacting with fellow contributors on social media, or going to physical meetups, hackfests, and conferences.

Some starting points for that:

  • Join the Matrix channels for the projects you’re interested in. Depending on the channel it’s possible that not much is going on at the moment, but this tends to be seasonal. Especially app-specific channels can fluctuate wildly in activity depending on how many people are working on the app right now.
  • Join some of the larger “general” GNOME Matrix channels for project-wide discussions and community stuff.
  • Reach out to people who work on things you want to get into and ask them about ways to get involved more closely. Of course it’s important to be respectful of people’s time, but most people I know are happy to answer a few quick questions once in a while.
  • Come to GUADEC, LAS, or other real-world meetups. Meeting other contributors face to face is one of the best ways to truly become part of the community, and it’s a lot of fun! Once it’s possible again COVID-wise, I highly recommend attending an in-person event.

Doing the Work

If you follow the above steps and contribute on a regular basis for a few months you’ll find that you’ve organically become a part of the project.

People will start to ask your opinion about what they’re currently doing, or for you to review their work. You’ll probably specialize in one or a few areas, and maybe become the go-to person for those things. Before you know it someone will ask you if you’re coming to the next hackfest, if you’ve already got your Foundation membership, or if you’d like to become co-maintainer of a module.

If you’ve joined the project with big ideas, this is the point where you can really start moving towards making those ideas a reality. Of course, making big changes isn’t easy even as a long-time contributor. Depending on the scope of an initiative it can take months or years to get something done (for example, our adaptive apps initiative started in 2018 and is still ongoing).

However, as an experienced contributor you have the technical, social, and ideological context to push for your ideas in a way that aligns with other people’s goals and motivations. This not only makes it less likely that your plans will face opposition, but if you’re doing it right people will join you and help make it happen.

Conclusion

This concludes my 5-part series on how power works in the GNOME community, and how to get your pet feature implemented. Sorry to disappoint if you thought it was going to be quick and easy :)

On the plus side though, it’s a chance to be part of this amazing community. The friends you make along the way are truly worth it!

While this is the end of the series as I originally planned it, there are definitely areas it doesn’t cover and that I might write about in the future. If there are specific topics you’d be interested in, feel free to leave a comment.

Happy hacking!

Community Power Part 4: The GNOME Way

In the first three parts of this series (part 1, part 2, part 3) we looked at how power works within GNOME and what that means for getting things done. We got to the point that to make things happen you (or someone you’ve hired) need to become a trusted member of the community, which requires understanding the project’s ethos.

In this post we’ll go over that ethos, both in terms of high level values, and what those translate to in more practical terms.

Values and Principles

GNOME is a very principled project, and there’s a fair amount of writing on this topic already.

Allan Day’s “The GNOME Way” (2017) is a great starting point, but I’d also recommend Havoc Pennington’s classic “Choosing our Preferences” (2002), and Emmanuele Bassi’s “Dev v. Ops” (2017). I’ve also written about some aspects of this in the past, including “There is no Linux Platform (2019)”. For some broader historical context also check out Emmanuele’s excellent History of GNOME podcast.

To give you an overview though, here’s my personal bullet point summary. It follows the same structure as the development process laid out in part 2 based on what areas specific values and ideas apply to. It’s not meant to be comprehensive, but rather give you an idea of the way people inside the project think.

The Why

Base motivations that inform everything we do.

  • We believe in software freedom as an inclusive, accountable model for producing technology in the commons.
  • Our software is built to be usable by everyone. We care deeply about user experience, accessibility, internationalization, and support for a diverse range of hardware.
  • Software should be structurally and aesthetically elegant, both in terms of underlying technology and user interface.

The What

What kinds of things we think are worth pursuing, and (just as important) what kinds of things should be avoided.

  • Third-party apps are the best abstraction to extend the core system with additional functionality. This is why we put a huge amount of work into empowering third party app developers to build more and better apps.
  • Every preference has a cost, and this cost rises exponentially as you add more of them. This is why we avoid preferences as much as possible, and focus on fixing the underlying problems instead.
  • Similarly, there is a direct relationship between how vertically integrated a product is and how cohesive you can make it. Every unnecessary variable you eliminate across the stack frees up time and energy, and creates opportunities for features you couldn’t otherwise build.
  • People’s attention is precious. We pride ourselves in being distraction free.

The How

Useful rules of thumb around how we go about making things.

  • We don’t do hacks. Rather than working around a problem at the wrong layer of abstraction, we believe in going to the root of the problem and fixing it for everyone, even if that means digging into lower layers (and ends up being far more difficult as a result).
  • We see design holistically, rather than as an isolated thing the design team does. It’s not just about functionality and aesthetics, but also underlying technology, and what to build in the first place. Even if you’re not contributing on the design team, developing an affinity for design will make you a more effective contributor.
  • Looking at relevant art is important, but simply copying the competition doesn’t usually produce great results. We have a proud history of inventing new paradigms that are better than the status quo.
  • As a general rule, start from the user experience you want and then go about building the technology necessary to create it, not the other way around. However: This is not an excuse for bad engineering or pursuing ideas that are conceptually impossible (e.g. multi-protocol chat clients).
  • Defer to the Expert. Everyone has different areas of expertise, such as user experience, security, accessibility, performance, or localization. Listen to the people most experienced in a given domain.
  • Design is all about trade-offs. Be wary of hard and fast rules that only look at one part of a problem (e.g. “vertical space is at a premium, therefore…”), and instead try to balance various concerns in a way that works well overall.

In Practice

Some of the above principles are quite abstract, so what do they translate to when actually building software day to day? Here are some examples of how they apply to real-world questions.

  • App developers should do their own packaging. It’s the only way to do it sustainably at scale.
  • Flatpak is the future of app distribution.
  • The “traditional desktop” is dead, and it’s not coming back (Note: I’m talking about Windows 95 era UI patterns here, not desktop vs. mobile). Instead of trying to bring back old concepts like menu bars or status icons, invent something better from first principles.
  • System-wide theming is a broken idea. If you don’t like the way apps look, contribute to them directly (or to the platform style).
  • Shell extensions are always going to be a niche thing. If you want to have real impact your time is better invested working on apps or GNOME Shell itself.
  • “Filling the available space” is rarely a good goal by itself, and an easy way to design yourself into a corner.

All of the above is of course my personal perception, and you’ll find variations on these ideas depending on who you talk to. However, in my experience most of them are shared fairly consistently by people across the community, especially given our informal structure.

Now that we’ve covered how things get done, by whom, and why, you’re in a great position to start making your mark. In the next part of this series we’ll look at practical first steps for contributing.

Until then, happy hacking!

Community Power Part 3: Just Do It!

In parts 1 and 2 of the series we looked at how different groups inside the GNOME community work together to get things done. In this post we’ll look at what that means for people wanting to push for their personal agenda, e.g. getting a specific feature implemented or bug fixed.

Implicit in the theoretical question of how power works in GNOME is often a more practical one: How can I get access to it? How can I exercise power to get something I want?

At a high level that’s very easy to answer: You either do the work yourself, or you convince someone else to do it.

Do It Yourself

If you’re the person working on something you have a ton of power over that thing. Designing and building software is in essence an endless stream of decisions. The more work you do, the more of those decisions you end up making.

Of course, in practice it’s not quite that simple. User-visible features need design reviews, and unless you’re the sole maintainer of a project you also need to go through code reviews to get your changes merged. As a designer, most things you design need to be implemented by someone else, so you have to convince them to do that.

However, it’s definitely possible to have a huge impact simply by doing a lot of work, and not only because of all the decisions you end up making directly as you implement things. If you contribute regularly to a module you’ll eventually end up reviewing other people’s work, and generally being asked for your opinion on topics you’re knowledgeable about.

Making Your Case

If you can’t, don’t want to, or don’t have time to do the work yourself, you’ll need to find someone else to do it for you. This is obviously a difficult task, because you’re essentially trying to convince people to work for you for free.

Some general tips for this:

  • Get an idea of what kinds of things the people you’re trying to convince are interested in, e.g. technologies they like and types of problems they care about.
  • Make the case that your idea fits into something they are already working on, or will help them reach goals they are already pursuing.
  • Generally speaking, you’ll have a much better chance with new-ish contributors. They tend to be less overworked since they don’t maintain as many mission-critical modules.

Realistically, unless your idea is very small in scope, or exactly what someone was already looking for, this strategy is not very likely to succeed. Most contributors, volunteer or paid, already have a huge backlog of their own to work through. There are only so many hours in the day, and GtkTimeMachine is not yet a thing :)

However, the chances are not zero either, and it’s always possible that even if your idea isn’t picked up right away it will spark something later on, or influence future discussions.

Paying Someone Else

You can of course also convince people to work on something you want by hiring them (radical, I know!).

There are plenty of very talented people in the GNOME community who do contract development, from individuals to fairly large consultancies. You can also hire someone from outside the project, but then they will have to build trust with the community first, which is non-trivial overhead. In most cases, hiring existing contributors is orders of magnitude more effective than hiring people who aren’t already part of the project.

How to hire people to implement things for you is out of scope for this series, but if you’d like advice on it feel free to contact me or leave a comment. If there’s enough interest I might write about it in the future.


All of that said, if the thing you want doesn’t align with the ethos of the project it’s going to be difficult regardless of which strategy you go with. This is why familiarizing yourself with that ethos is important if you want to make your mark on the project. To help with that we’ll go over GNOME’s principles and values in the next part of this series.

Until then, happy hacking!

Community Power Part 2: The Process

In part 1 of this series we looked at some common misconceptions about how power works inside the GNOME project and went over the roles and responsibilities of various sub-groups.

With that in place, let’s look at how a feature (or app, redesign, or other product initiative) goes from idea to reality.

The Why

At the base of everything are the motivations for why we embark on new product initiatives. These are our shared values, beliefs, and goals, rooted in GNOME’s history and culture. They include goals like making the system more approachable or empowering third party developers, as well as non-goals, such as distracting people or introducing unnecessary complexity.

Since people across the project generally already agree on these it’s not something we talk about much day-to-day, but it informs everything we do.

This topic is important for understanding our development process, but big enough to warrant its own separate post in this series. I’ll go into a lot more detail there.

The What

At any given moment there are potentially hundreds of equally important things people working on GNOME could do to further the project’s goals. How do we choose what to work on when nobody is in charge?

This often depends on relatively hard to predict internal and external factors, such as

  • A volunteer taking a personal interest in solving a problem and getting others excited about it (e.g. Alexander Mikhaylenko’s multi-year quest for better 1-1 touchpad gestures)
  • A company giving their developers work time to focus on getting a specific feature done upstream (e.g. Endless with the customizable app grid)
  • The design team coming up with something and convincing developers to make it happen (e.g. the Shell dialog redesign in 3.36)
  • A technological shift presenting a rare opportunity to get a long-desired feature in (e.g. the Libadwaita stylesheet refresh enabling recoloring)

For larger efforts, momentum is key: If people see exciting developments in an area they’ll want to get involved and help make it even better, resulting in a virtuous cycle. A recent example of this was GNOME 40, where lots of contributors who don’t usually do much GNOME Shell UI work pitched in during the last few weeks of the cycle to get it over the line.

If something touches more than a handful of modules (e.g. the app menu migration), the typical approach is to start a formal “Initiative”: This is basically a Gitlab issue with a checklist of all affected modules and information on how people can help. Any contributor can start an initiative, but it’s of course not guaranteed that others will be interested in helping with it and there are plenty of stalled or slow-moving ones alongside the success stories.

The How

If a new app or feature is user-facing, the first step towards making it happen is to figure out the user experience we’re aiming for. This means that at some point before starting implementation the designers need to work through the problem, formulate goals, look at relevant art, and propose a way forward (often in the form of mockups). This usually involves a bunch of iterations, conversations with various stakeholders, and depending on the scale of the initiative, user research.

If the feature is not user-facing but has non-trivial technical implications (e.g. new dependencies) it’s good to check with some experienced developers or the release team whether it fits into the GNOME stack from a technical point of view.

Once there is a more or less agreed-upon design direction, the implementation can start. Depending on the size and scope of the feature there are likely additional design or implementation questions that require input from different people throughout the process.

When the feature starts getting to the point where it can be tested by others it gets more thorough design reviews (if it’s user facing), before finally being submitted for code review by the module’s maintainers. Once the maintainers are happy with the code, they merge it into the project’s main branch.


In the next installment we’ll look at what this power structure and development process mean for individual contributors wanting to work towards a specific goal, such as getting their pet bug fixed or feature implemented.

Until then, happy hacking!

Community Power Part 1: Misconceptions

People new to the GNOME community often have a hard time understanding how we set goals, make decisions, assume responsibility, prioritize tasks, and so on. In short: They wonder where the power is.

When you don’t know how something works it’s natural to come up with a plausible story based on the available information. For example, some people intuitively assume that since our product is similar in function and appearance to those made by the Apples and Microsofts of the world, we must also be organized in a similar way.

This leads them to think that GNOME is developed by a centralized company with a hierarchical structure, where developers are assigned tasks by their manager, based on a roadmap set by higher management, with a marketing department coordinating public-facing messaging, and so on. Basically, they think we’re a tech company.

This in turn leads to things like

  • People making customer service style complaints, like they would to a company whose product they bought
  • General confusion around how resources are allocated (“Why are they working on X when they don’t even have Y?”)
  • Blaming/praising the GNOME Foundation for specific things to do with the product

If you’ve been around the community for a while you know that this view of the project bears no resemblance to how things actually work. However, given how complex the reality is it’s not surprising that some people have these misconceptions.

To understand how things are really done we need to examine the various groups involved in making GNOME, and how they interact.

GNOME Foundation

The GNOME Foundation is a US-based non-profit that owns the GNOME trademark, hosts our Gitlab and other infrastructure, organizes conferences, and employs one full-time GTK developer. This means that beyond setting priorities for said GTK developer, it has little to no influence on development.

Update: As of June 14, the GNOME Foundation no longer employs any GTK developers.

Individual Developers

The people actually making the product are either volunteers (and thus answer to nobody), or work for one of about a dozen companies employing people to work on various parts of GNOME. All of these companies have different interests and areas of focus depending on how they use GNOME, and tend to contribute accordingly.

In practice the line between “employed” contributor and volunteer can be quite blurry, as many contributors are paid to work on some specific things but also additionally contribute to other parts of GNOME in their free time.

Maintainers

Each module (e.g. app, library, or system component) has one or more maintainers. They are responsible for reviewing proposed changes, making releases, and generally managing the project.

In theory the individual maintainers of each module have more or less absolute power over those modules. They can merge any changes to the code, add and remove features, change the user interface, etc.

However, in practice maintainers rarely make non-trivial changes without consulting/communicating with other stakeholders across the project, for example the design team on things related to the user experience, the maintainers of other modules affected by a change, or the release team if dependencies change.

Release Team

The release team is responsible for coordinating the release of the entire suite of GNOME software as a single coherent product.

In addition to getting out two major releases every year (plus various point releases) they also curate what is and isn’t part of the core set of GNOME software, take care of the GNOME Flatpak runtimes, manage dependencies, fix build failures, and other related tasks.

The Release Team has a lot of power in the sense that they literally decide what is and isn’t part of GNOME. They can add and remove apps from the core set, and set system-wide default settings. However, they do not actually develop or maintain most of the modules, so the degree to which they can concretely impact the product is limited.

Design Team

Perhaps somewhat unusually for a free software project GNOME has a very active and well-respected design team (If I do say so myself :P). Anything related to the user experience is their purview, and in theory they have final say.

This includes most major product initiatives, such as introducing new apps or features, redesigning existing ones, the visual design of apps and system, design patterns and guidelines, and more.

However: There is nothing forcing developers to follow design team guidance. The design team’s power lies primarily in people trusting them to make the right decisions, and working with them to implement their designs.

How do things get done then?

No one person or group ultimately has much power over the direction of the project by themselves. Any major initiative requires people from multiple groups to work together.

This collaboration requires, above all, mutual trust on a number of levels:

  • Trust in the abilities of people from other teams, especially when it’s not your area of expertise
  • Trust that other people also embody the project’s values
  • Trust that people care about GNOME first and foremost (as opposed to, say, their employer’s interests)
  • Trust that people are in it for the long run (rather than just trying to quickly land something and then disappear)

This atmosphere of trust across the project allows for surprisingly smooth and efficient collaboration across dozens of modules and hundreds of contributors, despite there being little direct communication between most participants.


This concludes the first part of the series. In part 2 we’ll look at the various stages of how a feature is developed from conception to shipping.

Until then, happy hacking!