Author: Jordan Petridis

  • DHH and Omarchy: Midlife crisis

    A couple of weeks ago Cloudflare announced it would be sponsoring some Open Source projects. Throwing money at the pet projects of random techbros would hardly be news, but there was a certain vibe behind the chosen projects and the people leading them.

    In an unexpected turn of events, the millionaire receiving money from the billion-dollar company thought it would be important to devote a whole blog post to a random brokeboy from Athens who had an opinion on the Internet.

    I was astonished to find the blog post. Now that I've moved from normal stalkers to millionaire stalkers, is it a sign that I've made it? Have I become such a menace? But more importantly: who the hell even is this guy?

    D-H-Who?

    When I was painting with crayons in a deteriorating kindergarten somewhere in Greece, DHH, David Heinemeier Hansson, was busy dumping Ruby on Rails on the world and becoming a niche tech celebrity. His street cred for releasing Ruby on Rails would later be eclipsed by his writing on remote work, most famously “Remote: Office Not Required”, a book based on his own company, 37signals.

    That cultural cachet went out the window in 2022, when he got in hot water with his own employees after an internal review process concluded that 37signals had been less than stellar when it came to handling race and diversity. Said review process culminated in a clash: the employees were interested in exploring the topic further, to which DHH responded with “You are the person you are complaining about” (meaning: you, pointing out a problem, are the problem).

    No politics at work

    This incident led the two founders of 37signals to the executive decision to forbid any kind of “societal and political discussions” inside the company, which, predictably, led to a third of the company resigning in protest. This was a massive blow to 37signals. The company was famous for being extremely selective when hiring, as well as affording employees great benefits. Suddenly having a third of the workforce resign over a disagreement with management sent a far more powerful message than anything they could have imagined.

    It would become the starting point of the downward, radicalizing spiral and the extended, very public crashout DHH would be going through in the coming years.

    Starting your own conference so you can never be banned from it

    Subsequently, DHH was uninvited from keynoting RailsConf, on account of everyone being grossed out by the handling of the matter and in solidarity with the community members and the employees who quit in protest.

    That, in turn, would lead to the creation of the Rails Foundation and the launch of Rails World: a new conference about Rails that 100%-swear-to-god was not just about DHH having his own conference where he could keynote and never be banned.

    In the following years, DHH would go on to explore and express the full spectrum of “down the alt-right pipeline” opinions.

    Omarchy

    You either log off a hero, or you live long enough to see yourself create another Linux distribution. Having failed the first part, DHH has been pouring his energy into a new project, while letting everyone know how much he prefers that to going to therapy. Thus Omarchy was born: a set of copy-pasted window manager and Vim configs turned distro, and one of the two projects that Cloudflare will shortly be proudly funding. The only possible option for the compositor would be Hyprland, and even though it’s Wayland (bad!), it’s one of the good-non-woke ones. In a similar tone, the project website features the tight integration of Omarchy with SuperGrok.

    Rubygems

    On a parallel track, the entire Ruby community more or less collapsed over the last two months. Long story short, one of the major Ruby Central sponsors, Sidekiq, pulled its funding after DHH was invited to speak at RailsConf 2025. Shopify, where DHH sits on the board of directors, was quick to save the day and match the lost funding. Coincidentally, an (alleged) takeover of key parts of the Ruby infrastructure was carried out by Ruby Central, placing it under the control of Shopify in the following weeks.

    This story is ridiculous, and the entire Ruby community is imploding because of it. There’s an excellent write-up of the story so far here.

    On a similar note, and at the same time, we also find DHH drooling over Off-brand Peter Thiel and calling for an Anduril takeover of the Nix community in order to purge all the wokes.

    On Framework

    At the same time, Framework had been promoting Omarchy on their social media accounts for a good while, and DHH in turn has been posting about how great Framework hardware is and how the Framework CEO is contributing to his Arch Linux reskin. On October 8th, Framework announced its sponsorship of the Hyprland project, following 37signals doing the same thing a couple of weeks earlier. On the same day they made another post promoting Omarchy yet again. This caused a huge backlash and an overall PR nightmare, with the apex being a forum thread with over 1700 comments so far.

    The first reply in the forum thread comes from Nirav, Framework’s CEO, with a very questionable choice of words:

    We support open source software (and hardware), and partner with developers and maintainers across the ecosystem. We deliberately create a big tent, because we want open source software to win. We don’t partner based on individual’s or organization’s beliefs, values, or political stances outside of their alignment with us on increasing the adoption of open source software.

    I definitely understand that not everyone will agree with taking a big tent approach, but we want to be transparent that bringing in and enabling every organization and community that we can across the Linux ecosystem is a deliberate choice.

    Mentioning a “big tent” twice as the official policy and response to complaints about supporting Fascist and Racist shitheads is nothing short of digging a hole for yourself so deep that it reemerges on another continent.

    Later on, Nirav would mention that they were finalizing sponsorships of the GNOME Foundation (12k/year) and KDE e.V. (10k/year). In the linked page you can also find a listing for Rails World (DHH’s personal conference) with a one-time payment of 24k dollars.

    There has not been an update since, and at no point have they addressed their support of and collaboration with DHH. Can’t lose the cash cow and free Twitter clout, I guess.

    While I personally would like to see the donation rejected, I am not involved with the ongoing discussion on the GNOME Foundation side, nor with the Foundation itself. What I can say is that I and others from the GNOME OS team were involved in initial discussions with Framework about future collaborations and hardware support. GNOME OS, much like the GNOME Flatpak runtime, is very useful as a reference point for identifying whether a bug, in hardware or software, is distro-specific or not.

    It’s been a month since the initial debacle with Framework. Regardless of what the GNOME Foundation plans on doing, the GNOME OS team certainly does not feel comfortable with further collaboration given how they have handled the situation so far. It’s sad, because the people working there understand the issue, but this does not seem to be a trait shared by management.

    A software midlife crisis

    During all this, DHH decided that his attention must be devoted to getting into a mouth-off with a Greek kid who called him a Nazi. Since this is not violence (see the “Words are not violence” essay), he decided to respond in kind, by calling for violence against me (see the “Words are violence” essay).

    To anyone who knows a nerd or two over the age of 35, all of the above is unsurprising. This is not some grand heel turn, or some brainwashing that DHH suffered. This is straight up a midlife crisis turned fash speedrun.

    Here’s a dude who barely had any time to confront the world before falling into an infinite money glitch in the form of Ruby on Rails, Jeff Bezos throwing crazy money at him, Apple bundling his software as a highlighted feature, and becoming a “new work” celebrity and Silicon Valley “Guru”. Is it any surprise that such a person would later perceive the most minuscule opposition as an all-out attack on his self-image?

    DHH has never had the “best” opinions on a range of things, and they have been dutifully documented by others, but neither have many other developers who are also ignorant of topics outside of software. Being insecure about your hairline and masculine aesthetic to the point of adopting the Charles Manson haircut to cover your balding is one thing. It is entirely different to become a drop-shipped version of Elon, tweeting all day and stopping only to write opinion pieces that read as attempts at proving others wrong rather than original thoughts.

    Case in point: DHH recently wrote about “men who’d prefer to feel useful over being listened to”. The piece is unironically titled “Building competency is better than therapy”. It is an insane read, and I’ll speculate that it reads as if someone whom DHH can’t outright dismiss suggested he go to therapy. It’s a very “I’ll show you in front of my audience” kind of text.

    Add to that a three-year speedrun decrying the “theocracy of DEI” and the seemingly authoritarian powers of “the wokes”, all coincidentally starting after he could not get over his employees disagreeing with him on racial sensitivities.

    How can someone suggest his workers read Ta-Nehisi Coates’s “Between the World and Me” and Michelle Alexander’s “The New Jim Crow” in the aftermath of George Floyd’s killing and the BLM protests, while a couple of months later writing salivating blogposts after the EDL eugenics rally in England and giving the highest possible praise to Tommy Robinson?

    Can these people be redeemed?

    It is certainly not going to help that niche celebrities, like DHH, still hold clout and financial power and are able to spout the worst possible takes without any backlash because of their position.

    A bunch of Ruby developers recently started a petition to get DHH distanced from the community, and it didn’t get far before being brigaded by the worst people you didn’t need to know existed. This, of course, was amplified to oblivion by DHH and a bunch of sycophants chasing the clout provided by being retweeted by him. It would shortly be followed by yet another “I’m never wrong” piece.

    Is there any chance for these people, shielded as they are by their well-paying jobs and an exclusively occupational media diet, where all stimuli happen to reinforce the default world view?

    I think there is hope, but it demands that more voices in tech spaces speak up about how having empathy for others, or valuing diversity, is not some grand conspiracy but rather an enrichment of our lives and spaces. This goes hand in hand with firmly shutting down concern trolling and ridiculous “extreme centrist” takes where someone is expected to find common ground with others advocating for their extermination.

    One could argue that the true spirit of FLOSS, which attracted many of the current midlife-crisis developers in the first place, is about diversity and empathy for the varied circumstances and opinions that enrich our space.

    Conclusion

    I do not know if his heart is filled with hate or if he is incredibly lost, but it makes little difference since this is his output in the world.

    David, when you read this I hope it will be a wake-up call. It’s not too late, you only need to go offline and let people help you. Stop the pathetic TemuElon speedrun and go take care of your kids. Drop the anti-woke culture wars and pick up a Ta-Nehisi Coates book again.

    To everyone else: Push back against their vile and misanthropic rhetoric at every turn. Don’t let their poisonous roots fester into the ground. There is no place for their hate here. Don’t let them find comfort and spew their vomit in any public space.

    Crush Fascism. Free Palestine ✊.

  • Nightly Flatpak CI gets a cache

    Recently I got around to tackling a long-standing issue for good. There have been multiple attempts over the past 6 years to cache flatpak-builder artifacts with GitLab, but none had worked so far.

    On the technical side of things, flatpak-builder relies heavily on extended attributes (xattrs) on files to do cache validation. Using GitLab’s built-in cache or artifacts mechanisms results in a plain zip archive, which strips all the attributes from the files, causing the cache to always be invalid once restored. Additionally, the hardlinks/symlinks in the cache break. One workaround for this is to always tar the directories and then manually extract them after they are restored.
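    In practice the tar dance looks roughly like this (the archive name is arbitrary; .flatpak-builder is flatpak-builder’s local state and cache directory):

    # Pack the cache before uploading; --xattrs keeps the extended
    # attributes that a plain zip archive would strip
    tar --xattrs --xattrs-include='*' -cf cache.tar .flatpak-builder/

    # Unpack after the archive is restored on a runner
    tar --xattrs --xattrs-include='*' -xf cache.tar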

    On the infrastructure side of things we stumble once again into GitLab. When a cache or artifact is created, it’s uploaded to the GitLab instance’s storage so it can later be reused/redownloaded on any runner. While this is great, it also quickly ramps up the network egress bill we have to pay, along with storage. And since it’s a public GitLab instance that anyone can make requests against, it gets out of hand fast.

    A couple of weeks ago Bart pointed me to Flathub’s workaround for this same problem. It comes down to making it someone else’s problem, ideally someone who is willing to fund FOSS infrastructure. We can use ORAS to wrap files and directories into an OCI wrapper and publish it to public registries. And it worked. Quite handy! OCI images are the new tarballs.
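    A minimal sketch of the idea (the registry, repository name and tag here are made up):

    # Wrap the cache tarball into an OCI artifact and push it
    oras push registry.example.org/gnome/build-cache:main cache.tar

    # On any other runner, pull it back down and extract it
    oras pull registry.example.org/gnome/build-cache:main
    tar --xattrs --xattrs-include='*' -xf cache.tar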

    Now when a pipeline runs against your default branch (and assuming it’s protected), it will create a cache artifact and upload it to the currently configured OCI registry. Afterwards, any build, including Merge Request pipelines, will download the image, extract the artifacts and check how much of them is still valid.

    From some quick tests and numbers, GNOME Builder went from a ~16 minute build to 6 minutes for our x86_64 runners. While on the AArch64 runner the impact was even bigger, going from 50 minutes to 16 minutes. Not bad. The more modules you are building in your manifest, the more noticeable it is.

    Unlike BuildStream, there is no Content Addressable Server, and flatpak-builder itself isn’t aware of the artifacts we publish, nor can it associate them with cache keys. The OCI/ORAS cache artifacts are a manual and somewhat hacky solution, but it works well in practice until we have better tooling. To optimize for fewer cache misses, consider building modules from pinned commits/tags/tarballs and building modules from moving branches as late as possible.

    If you are curious about the details, take a look at the related Merge Request in the templates repository and the follow-up commits.

    Free Palestine ✊

  • The Flatpak Runtime drops the 32-bit compatibility extension

    Last month GNOME 49 was released, very smooth overall, especially given the amount of changes across the entire stack that we shipped.

    One thing that is missing, and that flew under the radar, is that the 32-bit compatibility extension (org.gnome.Platform.i386.Compat) of the GNOME Flatpak Runtime is now gone. We were planning on making an announcement earlier, but life got in the way.

    That extension is a 32-bit version of the Runtime that applications could request to use. This is mostly helpful so Wine can have a 32-bit environment to run against. However, your Wine or legacy applications most likely don’t require a 32-bit build of GTK 4, libadwaita or WebKitGTK.
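    For context, this is roughly what requesting the extension looked like from an application’s flatpak-builder manifest; a minimal sketch, where the directory and version values are illustrative assumptions, not copied from a real manifest:

    "add-extensions" : {
      "org.gnome.Platform.i386.Compat" : {
        "directory" : "lib/i386",
        "version" : "48"
      }
    }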

    We rebuild all of GNOME from the latest commits in git of each module, at least twice a day. This includes 2 builds of WebKitGTK, a build of mozjs, and a couple of Rust libraries and applications, multiplied by each architecture we support. This is no small task for our CI machines to handle. There were also a couple of updates that were blocked on 32-bit-specific build failures, as projects rarely test for that before merging code. Suffice it to say that supporting builds almost nobody used or needed was a universal annoyance across developers and projects.

    When we lost our main pool of donated CI machines and builders, the first thing on the chopping block was the 32-bit build of the runtime. It affected no applications, as none rely on the Nightly version of the extension, but it would have affected some applications on Flathub once released.

    In order to keep the applications working, and to avoid having to overload our runners again, we thought about another approach: in theory it would be possible to make the runtime compatible with the org.Freedesktop.i386.Compat extension point instead. We already use freedesktop-sdk as the base for the runtime, so we did not expect many issues.

    There were exactly 4 applications that made use of the gnome specific extension, 2 in Flathub, 1 in Flathub Beta and 1 archived.

    Abderrahim and I worked on porting all the applications to the GNOME 49 runtime and have Pull Requests open. The developers of Bottles were a great help in our testing, and the subsequent PR is almost ready to be merged. Lutris and Minigalaxy need some extra work to upgrade the runtime, but that’s for unrelated reasons.

    Since everything was working we never re-published the i386 GNOME compatibility extension again in Nightly, and thus we also didn’t for GNOME 49. As a result, the GNOME Runtime is only available for x86_64 and AArch64.

    A couple of years ago we dropped the normal armv7 and i386 builds of the Runtime. With the i386 compatibility extension also gone, we no longer have any 32-bit targets we QA before releasing GNOME as a whole. Previously, all modules we released were guaranteed to at least compile for i386/x86, but going forward that will not be the case.

    Some projects, for example GLib, have their own CI specifically for 32-bit architectures. What was a project-wide guarantee before is now a per-project opt-in. While many maintainers will no longer go out of their way to fix 32-bit-specific issues, they will most likely still review and merge any patches sent their way.

    If you are a distributor relying on 32-bit builds of GNOME, you will now be expected to debug and fix issues on your own for the majority of the projects. Alternatively, you could get involved upstream and help avoid further bit rot of the 32-bit builds.

    Free Palestine ✊

  • X11 Session Removal FAQ

    Here is a quick series of frequently asked questions about the X11 session kissing us goodbye. Shoutout to Nate, from whom I copied the format of the post.

    Is Xorg unmaintained and abandoned?

    No, the Xorg Server is still very much maintained; however, its feature development is halted. It still receives occasional bugfixes, and there are timely security releases when needed.

    The common sentiment, shared among Xorg, graphics, kernel, platform and application developers, is that any future development is a dead-end and shortcomings can’t be addressed without breaking X11. That’s why the majority of Xorg developers moved on to make a new, separate thing: Wayland.

    In doing so, Xorg’s main focus became being as reliable as possible and fixing security issues as they come up.

    It’s the same people that still maintain Xorg. Thanklessly.

    If you are interested in Xorg’s history, I can’t recommend this timeless talk by Daniel enough.

    What will happen to Xorg?

    The Xorg server is still there and will continue to be maintained. Of course, with GNOME and KDE transitioning away from it, it will receive even less attention, but none of this has a direct impact on your other favorite X11-only desktops, nor does it mean they will magically stop working overnight.

    Your favorite distribution will most likely keep shipping Xorg packages for a couple more years, if not decades. What’s going away is the GNOME on Xorg session, not the Xorg server itself.

    Why did GNOME move to Wayland now?

    Early during the GNOME 46 development cycle, I created the gnome-session Merge Requests in an attempt to gather feedback and identify leftover issues.

    48.rc addressed the last big remaining a11y issues, and Orca 48 is so much nicer, in large part thanks to funding from the STF, and to Igalia donating a bunch of work on top of that to get things over the line. With the functionality of the Wayland session now on par with (if not straight up better than) Xorg, we all collectively decided that it was time to move on with the removal of the Xorg session.

    However, 48.rc was also too late to plan and proceed with the removal of the session. In hindsight this was a good thing, because we found a couple of very obscure bugs last month and we’d have had to rush and crunch to fix them otherwise.

    On May 6th, we held a meeting of the GNOME Release Team, where we discussed the X11 session among other things. There was one known issue with color calibration, but a fix was planned. We discussed timelines and possible scenarios for the removal, and pointed out that it would be a great opportunity to go ahead with it for 49, which aligns with the Ubuntu 25.10 release, rather than postponing to GNOME 50 and the upcoming 26.04 LTS. We set the topic aside afterwards, as we’d wait for feedback from the Ubuntu team, which had a planning meeting scheduled a week or so later.

    On May 19th we (the Release Team) held another meeting, picking up the X11 topic again. While we didn’t have a concrete decision from the Ubuntu side on what they planned to do, there also weren’t any new or unexpected issues or usecases from their side, so overall good news. Thus Adrian and myself continued with the preparations for disabling the X11 sessions for 49.

    On May 20th FESCO approved the proposal to remove the GNOME on Xorg session for Fedora 43.

    On June 1st I started working on an earlier-than-usual 49.alpha release, and 3 days later I got private confirmation that Ubuntu would indeed follow along with completely disabling the Xorg session for 49, matching the upstream defaults.

    Late night on June 7th, more like the morning of the 8th, and after dealing with a couple of infrastructure issues, I finished all the preparations, tagged 49.alpha.0 for GDM, gnome-shell, mutter and gnome-session, and published the announcement blogpost. 2 days later, Ubuntu followed suit with a public announcement from their side.

    Will my applications stop working?

    Most application toolkits have Wayland backends these days; for those that do not, we have XWayland. This lets X11-native applications keep running on Wayland as if they were using an X11 session. It happens transparently, and XWayland will be around with us for decades. You don’t have to worry about losing your applications.

    Is everything working for real?

    GNOME on Wayland is as functional as the Xorg session, and in plenty of cases a lot more capable and efficient. There are some niche workflows that are only possible on X11, but there isn’t any functionality regression.

    What’s the state of accessibility?

    There has been a lot of concern trolling and misinformation around this topic, sadly from people who don’t care about it and have been abusing the discourse as a strawman argument, drowning out all the people who rely on it and need to be heard. Thankfully, Aaron of fireborn fame recently wrote a blogpost talking about all this in detail and clearing up misconceptions.

    GNOME itself is already there when it comes to accessibility, but the next task will be rebuilding the third-party tooling (or integrating it directly when possible). We now have a foundation that allows us to provide better accessibility support and options to people, with designed solutions rather than piles of hacks held together by duct tape on top of a protocol from the 80s.

    Is Wayland Gay?

    Yes and Xorg is Trans.

    Picture of the libxtrans gitlab repository with a Trans flag as the project banner.

    Happy Pride month and Free Palestine ✊

  • An update on the X11 GNOME Session Removal

    A year and a half ago, shortly after the GNOME 45 release, I opened a pair of Pull Requests to deprecate and remove the X11 Session.

    A lot has happened since. The GNOME 48 release addressed all the remaining blocking issues, mainly accessibility regressions, but it was too late in the development cycle to drop the session as well.

    Now the time has come.

    We went ahead and disabled the X11 session by default; from now on it needs to be explicitly enabled when building the affected modules (gnome-session, GDM, mutter/gnome-shell). This does not affect XWayland, it’s only about the X11/Xorg session and related functionality. GDM’s ability to launch other X11 sessions will also be preserved.
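    For distributors who do need to flip it back on, it is a build-time meson option. A purely illustrative sketch follows; the exact option name differs per module, so check each module’s meson_options.txt rather than trusting this verbatim:

    # Hypothetical example: rebuild a module with its X11 support enabled
    meson setup _build -Dx11=true
    ninja -C _build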

    Usually we release a single Alpha snapshot, but this time we have released earlier snapshots (49.alpha.0), 3 weeks ahead of the normal schedule, to gather as much feedback and testing as possible. (There will be another snapshot along with the complete GNOME 49 Alpha release.)

    If you are a distributor, please try not to change the default, or at least let us (or me directly) know why you’d still need to ship the X11 session.

    As I mentioned in the tracking issue ticket, there are 3 possible scenarios.

    The most likely scenario is that all the X11 session code stays disabled by default for 49 with a planned removal for GNOME 50.

    The ideal scenario is that everything is perfect, there are no more issues and bugs, and we can go ahead and drop all the code before GNOME 49.beta.

    And the very unlikely scenario is that we discover some deal-breaking issue, revert the changes and postpone the whole thing.

    Having gathered feedback from our distribution partners, it now depends entirely on how well the early testing will go and what bugs will be uncovered.

    You can test GNOME OS Nightly with all the changes today. We found a couple of minor issues, but everything is fixed in the alpha.0 snapshot. Given how smoothly things are going so far, I believe there is a high likelihood there won’t be any further issues and we might be able to proceed with the ideal scenario.

    TLDR: The X11 session for GNOME 49 will be disabled by default and is scheduled for removal, either during this development cycle or, more likely, during the next one (GNOME 50). There are release snapshots of 49.alpha.0 for some modules already available. Go and try them out!

    Happy Pride month and Free Palestine ✊

  • The Fedora Project Leader is willfully ignorant about Flathub

    Update 1: Cassidy wrote a much more comprehensive and well-written explanation of the review guidelines, permissions, Flathub infrastructure and other things discussed in this post. I highly recommend you check it out.

    Update 2: A couple of people mentioned that the mystery hardware survey application was indeed Hardware Probe. Miller actually opened a thread on Flathub’s Discourse about it. At the time it was a terminal application, and due to a bug that affected gnome-software, the confirmation prompt was getting skipped. This wasn’t affecting other storefronts or launching from the application grid.

    Now to the original post:

    Today I woke up to a link to an interview with the current Fedora Project Leader, Matthew Miller. Brodie, who conducted the interview, mentioned that Miller was the one who reached out to him. The background of this video was the ongoing issue regarding OBS, Bottles and the Fedora project, which Niccolò made an excellent video explaining and summarizing. You can also find the article over at thelibre.news. “Impressive” as this story is, it’s for another time.

    What I want to talk about in this post is the outrageous, smearing and straight up slanderous statements about Flathub that the Fedora Project Leader made during the interview.

    I am not directly involved with the Flathub project (a lot of my friends are); however, I am a maintainer of the GNOME Flatpak Runtime and a contributor to the freedesktop-sdk and ElementaryOS Runtimes. I also maintain applications that are published on Flathub directly. So you could say I am someone invested in the project who has put a lot of time into it. It was extremely frustrating to hear what would only qualify as reddit-level, completely made-up arguments with no basis in reality coming directly from Matthew Miller.

    Below is a transcript, slightly edited for brevity, of all the times Flathub and Flatpak were mentioned. You can refer to the original video as well, as there were many more interesting things Miller talked about.

    It starts off with an introduction and some history, and around the 10-minute mark the conversation starts to involve Flathub.

    Miller: [..] long way of saying I think for something like OBS we’re not really providing anything by packaging that.

    Miller: I think there is an overall place for the Fedora Flatpaks, because Flathub part of the reason its so popular (there’s a double edged sword), (its) because the rules are fairly lax about what can go into Flathub and the idea is we want to make it as easy for developers to get their things to users, but there is not really much of a review

    This is not the main reason why Flathub is popular; it’s a lot more involved and interesting in practice. I will go into this in a separate post, hopefully soon.

    Claiming that Flathub does not have any review process or inclusion policies is straight up wrong and incredibly damaging. It’s the kind of thing we’ve heard ad nauseam from Flathub haters, but never from a person in charge of one of the most popular distributions, who really, really should have known better.

    You can find the Requirements in the Flathub documentation if you spend 30 seconds googling for them, along with the submission guidelines for developers. If those documents qualify as a wild west free-for-all, I can’t possibly take you seriously.

    I haven’t maintained a Linux distribution package myself, so I won’t go into comparisons between Flathub and other distros; however, you can find people, with red hats even, who do so and have talked about it. Of course this is a one-off example with some social bias on my part, but it shows how laughable the claim is that things are not reviewed. Additionally, the most popular story I hear from developers is how Flathub’s requirements are often stricter and sometimes cause annoyances.

    Screenshot of the post from this link: https://social.vivaldi.net/@sesivany/114030210735848325

    Additionally, Flathub has been the driving force behind encouraging applications to update their metadata, completely reworking the user experience and handling of permissions, and making them prominent to the user (to the point where even network access is marked as potentially unsafe).

    Miller: [..] the thing that says verified just says that it’s verified from the developer themselves.

    No, verified does not just mean that the developer signed off on it. Let’s take another 30 seconds to look at the Flathub documentation page about exactly this.

    A verified app on Flathub is one whose developer has confirmed their ownership of the app ID […]. This usually also may mean that either the app is maintained directly by the developer or a party authorized or approved by them.

    It still went through the review process, and all the rest of the requirements and policies apply. The verified program is basically a badge telling users that this is an application supported by the upstream developers, rather than the current free-for-all where you may or may not get an application release from years ago depending on how stable your distribution is.

    Sidenote: did you know that 1483/3003 applications on Flathub are verified as of the writing of this post? As opposed to maybe a dozen at best in the distributions. You can check for yourself.

    Miller: .. and it doesn’t necessarily verify that it was build with good practices, maybe it was built in a coffee shop on some laptop or whatever which could be infected with malware or whatever could happen

    Again, if Miller had done the bare minimum of effort, he would have come across the Requirements page, which describes exactly how an application on Flathub is built, instead of further spreading made-up takes about the infrastructure. I can’t stress enough how damaging it has been throughout the years to claim that “Flathub may be potential malware”. Why is it malware? Because I don’t like its vibes and I just assume so.

    I am sure if I did the same about Fedora in a very, very public medium with thousands of listeners, I would probably end up with a lawyer’s letter from Red Hat.

    Now, applications on Flathub are all built without network access, on Flathub’s build servers, using flatpak-builder and Flatpak manifests, which are a declarative format. This means all the sources required to build the application are known and validated/checksummed, the build is reproducible to the extent possible, and you can easily inspect the resulting binaries. The manifest used to build the application ends up in /app/manifest.json, which you can inspect with the following command and use to rebuild the application yourself exactly like it’s done on Flathub.

    $ flatpak run --command=cat org.gnome.TextEditor /app/manifest.json
    {
      "id" : "org.gnome.TextEditor",
      "runtime" : "org.gnome.Platform",
      "runtime-version" : "47",
      "runtime-commit" : "d93ca42ee0c4ca3a84836e3ba7d34d8aba062cfaeb7d8488afbf7841c9d2646b",
      "sdk" : "org.gnome.Sdk",
      "sdk-commit" : "3d5777bdd18dfdb8ed171f5a845291b2c504d03443a5d019cad3a41c6c5d3acd",
      "command" : "gnome-text-editor",
      "modules" : [
        {
    ...

    The exception to this are proprietary applications, naturally, and a handful of applications (under an OSI-approved license) where Flathub developers helped the upstream projects integrate a direct publishing workflow into their deployment pipelines. I am aware of Firefox and OBS as the main examples, both of which publish to Flathub through their Continuous Deployment (CI/CD) pipelines, the same way they generate their builds for the other platforms they support, and the code for how this happens is available in their repos.

    If you have issues trusting Mozilla’s infrastructure, then how are you trusting Firefox in the first place? Good luck auditing Gecko to make sure it does not start shipping malware. Surely distribution packagers audit every single change that happens from release to release for each package they maintain and can verify no malicious code ever gets merged. The xz backdoor was very recent; it was identified by pure chance, and none of this prevented it.

    Then Miller proceeds to describe the Fedora build infrastructure and afterward we get into the following:

    Miller: I will give an example of something I installed in Flathub, I was trying to get some nice gui thing that would show me like my system Hardware stats […] one of them ones I picked seemed to do nothing, and turns out what it was actually doing, there was no graphical application it was just a script, it was running that script in the background and that script uploaded my system stats to a server somewhere.

    Firstly, we don’t really have many details to be able to identify which application it was; I would be very curious to know. Speculating on my part, the most popular application matching that description is Hardware Probe, and it absolutely has a GUI, no matter how minimal. It also asks you before uploading.

    Maybe there is an org.upload.MySystem application that I don’t know about, and it ended up doing exactly what was in its description. Again, I would love to know more, and I will update the post if you can recall!

    Miller: No one is checking for things like that and there’s no necessarily even agreement that that was was bad.

    Second time! Again with the “there is no review and inclusion process in Flathub” narrative. There absolutely is, and these are exactly the kinds of things that get brought up during it.

    Miller: I am not trying to be down on Flathub because I think it is a great resource

    Yes, I can see that; however, in your ignorance you were something much worse than “down”. This is pure slander and defamation, coming from the current “Fedora Project Leader”, the “technically voice of Fedora” (direct quote from a couple of seconds later). All the statements made above are manufactured and inaccurate: myths you’d hear from people who never asked, looked, or cared about any of this, because the moment you do, it’s obvious how laughable all these claims are.

    Miller: And in a lot of ways Flathub is a competing distribution to Fedora’s packaging of all applications.

    Precisely! He is spot on here, and I believe this is what kept Miller willfully ignorant and caused him to happily pick up the first anti-flatpak/anti-flathub arguments he came across on reddit and repeat them verbatim without putting any thought into them. I do not believe Miller is malicious on purpose; I truly believe he means well and does not know better.

    However, we can’t ignore the conflict of interest arising from his current job position as a big influence on why incidents like this happen, nor the influence and damage this causes when it comes from a person in Matthew Miller’s position.

    Moving on:

    Miller: One of the other things I wanted to talk about Flatpak, is the security and sandboxing around it.

    Miller: Like I said the stuff in the Flathub are not really reviewed in detail and it can do a lot of things:

    Third time with the “no review” theme. I was fuming when I first heard this, and I am still very, very angry about it, if you can’t tell. Not only is this an incredibly damaging lie, as covered above, it gets repeated over and over again.

    With Flatpak basically the developer defines what the permissions are. So there is a sandbox, but the sandbox is what the person who put it there is, and one can imagine that if you were to put malware in there you might make your sandboxing pretty loose.

    Brodie: One of the things you can say is “I want full file system access, and then you can do anything”

    No. Again, as stated in the Flathub documentation, permissions are very carefully reviewed, and updates that change permissions get blocked until another review has happened.

    Miller: Android and Apple have pretty strong leverage against application developers to make applications work in their sandbox

    Brodie: the model is the other way around where they request permissions and then the user grants them whereas Flatpak, they get the permission and then you could reject them later

    This is partially correct. The first part, about leverage, I will talk about in a bit, but here’s a primer on how permissions work in Flatpak and how it compares to the sandboxing technologies in iOS and Android.

    In all of them we have a separation between static and dynamic permissions. Static permissions are the ones the application always has access to, for example the network, or the ability to send you notifications. These are always there and are usually mentioned at install time. Dynamic permissions are the ones where the application has to ask the user before being able to access a resource. For example, opening a file chooser dialog so the user can upload a file: the application then gets access only to the file the user consented to, or none. Another example is using the camera on the device and capturing photos/video with it.

    Brodie here gets a bit confused and only mentions static permissions. If I had to guess, it would be because we usually refer to the dynamic permission system in the Flatpak world as “Portals”.
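    To make the distinction concrete, here is a quick sketch of poking at the static side from the command line, using GNOME Text Editor as an arbitrary example:

    # Static permissions are baked in at build time; inspect them for
    # any installed application
    flatpak info --show-permissions org.gnome.TextEditor

    # Users (or admins) can tighten them locally, overriding whatever
    # the developer shipped
    flatpak override --user --unshare=network org.gnome.TextEditor

    Dynamic permissions never show up there: they go through portals at runtime, one user prompt at a time.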

    Miller: it didn’t used to be that way and and in fact um Android had much weaker sandboxing like you could know read the whole file system from one app and things like that […] they slowly tightened it and then app developers had to adjust

    Miller: I think with the Linux ecosystem we don’t really have the way to tighten that kind of thing on app developers … Flatpak actually has that kind of functionality […] with portals […] but there’s no not really a strong incentive for developers to do that because, you know well, first of all of course my software is not going to be bad so why should I you know work on sandboxing it, it’s kind of extra work and I I don’t know I don’t know how to solve that. I would like to get to the utopian world where we have that same security for applications and it would be nice to be able to install things from completely untrusted places and know that they can’t do anything to harm your system and that’s not the case with it right now

    As with any technology and its adoption, we don’t get to perfection from day 1. Static permissions are necessary to provide a migration path for existing applications until the appropriate, much more complex dynamic permission mechanisms have been developed. For example, up until iOS 18 it wasn’t possible to give applications access to only a subset of your contacts list. Think of it like having to give access to your entire filesystem instead of the specific files you want. Similarly, partial-only access to your photo library arrived only a couple of years ago in iOS and Android.

    In an ideal world all permissions are dynamic, but this takes time and resources and adaptation for the needs of applications and the platform as development progresses.

    Now about the leverage part.

    I do agree that “the Linux ecosystem” as a whole does not have any leverage over application developers, but that’s because Miller is looking in the wrong place for it. There is no Linux ecosystem, but rather platforms that developers target.

    GNOME and KDE, as they distribute all their applications on Flathub, absolutely have leverage. Similarly, Flathub itself has leverage, by changing the publishing requirements and inclusion guidelines (which I keep being told don’t exist). Every other application that wants to publish also has to abide by the rules on Flathub. ElementaryOS and their AppCenter have leverage on developers. Canonical has the same pull with the Snap Store. Fedora, on the other hand, doesn’t have any leverage, because the Fedora Flatpak repository is irrelevant, broken, and nobody wants to use it.

    [..] The xz backdoor gets brought up when discussing dependencies and how software gets composed together.

    Miller: we try to keep all of those things up to date and make sure everything is patched across the dist even when it’s even when it’s difficult. I think that really is one of the best ways to keep your system secure and because the sandboxing isn’t very strong that can really be a problem, you know like the XZ thing that happened before. If XZ is just one place it’s not that hard of an update but if you’ve got a 100 Flatpaks from different places […] and no consistency to it it’s pretty hard to manage that

    I am not going to go in depth into this problem domain and the arguments over it; in fact, I have been writing another blog post about it for a while, which I hope to publish shortly. Till then, I cannot recommend highly enough Emmanuele’s and Lennart’s blog posts, as well as one of the very early posts from Alex, written when Flatpak was in its early design phase, on the shortcomings of the current distribution model.

    Now, about bundled dependencies. The concept of Runtimes has served us well so far, and we have been doing a pretty decent job providing most of the things applications need but would not want to bundle themselves. This makes the Runtimes a single place for most of the high-profile dependencies (curl, openssl, WebKitGTK and so on) that you’d frequently update for security vulnerabilities, and once that’s done, the fixes roll out to everyone without anyone needing to manually update or even rebuild the applications.

    Applications only need to bundle their direct dependencies, and as mentioned above, the Flatpak manifest includes the exact definition of all of them. They are available for anyone to inspect, and there’s tooling that can scan them and, hopefully in the future, alert us.

    If the Docker/OCI model, where you end up bundling the entire toolchain and runtime, and now have to maintain it, keep up with updates and rebuild your containers, is good enough for all those enterprise distributions, then the Flatpak model, which is much more efficient, streamlined and thought out, and much much much less maintenance-intensive, is probably fine.

    Miller: part of the idea of having a distro was to keep all those things consistent so that it’s easier for everyone, including the developers

    As mentioned above, this is nothing fundamentally different from the leverage that Flathub and the platform developers have.

    Brodie: took us 20 minutes to get to an explanation [..] but the tldr Fedora Flatpak is basically it is built off of the Fedora RPM build system and because that it is more well tested and sort of intended, even if not entirely for the Enterprise, designed in a way as if an Enterprise user was going to use it the idea is this is more well tested and more secure in a lot of cases not every case.
    Miller: Yea that’s basically it

    This is a conclusion that Brodie reaches after the previous statements, and it is by far the most enraging thing in this interview. It is also an excellent example of the damage Matthew Miller caused today; if I were a Flathub developer, I would stop at nothing short of a public apology from the Fedora project itself. Hell, I want one just for being an application developer who publishes on it. The interview has basically been shitting on both the developers of Flathub and the people who choose to publish on it. And if that’s not enough, there should be an apology just out of decency. Dear god.

    Brodie: how should Fedora handle upstreams that don’t want to be packaged  like the OBS case here where they did not want there to be a package in Fedora Flatpak or another example is obviously bottles which has made a lot of noise about the packaging

    Lastly I want to touch on this closing question in light of recent events.

    Miller: I think we probably shouldn’t do it. We should respect people’s wishes there. At least when it is an open source project working in good faith there. There maybe some other cases where the software, say theoretically there’s somebody who has commercial interests in some thing and they only want to release it from their thing even though it’s open source. We might want to actually like, well it’s open source we can provide things, we in that case we might end up you having a different name or something but yeah I can imagine situations where it makes sense to have it packaged in Fedora still but in general especially and when it’s a you know friendly successful open source project we should be friendly yeah. The name thing is something people forget history like that’s happened before with Mozilla with Firefox and Debian.

    This is an excellent idea! But it gets better:

    Miller: so I understand why they strict about that but it was kind of frustrating um you know we in Fedora have basically the same rules if you want to take Fedora Linux and do something out of it, make your own thing out of it, put your own software on whatever, you can do that but we ask you not to call it Fedora if it’s a fedora remix brand you can use in some cases otherwise pick your own name it’s all open source but you know the name is ours. yeah and I the Upstream as well it make totally makes sense.

    Brodie: yeah no the name is completely understandable especially if you do have a trademark to already even if you don’t like it’s it’s common courtesy to not name the thing the exact same thing

    Miller: yeah I mean and depending on the legalities like you don’t necessarily have to register a trademark to have the trademark kind of protections under things so hopefully lawyers you can stay out of the whole thing because that always makes the situations a lot more complicated, and we can just get along talking like human beings who care about making good software and getting it to users.

    And I completely agree with all of this, all of it. But let’s break it down a bit, because no matter how nice the words and intentions, it hasn’t been working out this way with the Fedora community so far.

    First, Miller agrees the Fedora project should respect application developers’ wishes not to have their application distributed by Fedora, but rather a renamed version of it if Fedora wishes to keep distributing it.

    However, every single time a developer has asked for this, they have been ridiculed, laughed at and straight up bullied by Fedora packagers and the rest of the Fedora community. It has been a similar response from other distribution projects and companies as well; it’s not just Fedora. You can look at Bottles’ story for the most recent example. It is very nice to hear Miller’s intentions, but they mean nothing in practice.

    Then Miller proceeds to assure us that he understands why naming and branding are such a big deal to those projects (unlike the rest of the Fedora community, again). He further informs us how Fedora has the exact same policies and asks the same of people who want to fork Fedora. Which makes the treatment that every single application developer has received when asking about the exact same thing ever more outrageous.

    What I didn’t know is that in certain cases you don’t even need a registered trademark to be covered by some of the protections, depending on jurisdiction and all.

    And lastly we come to lawyers. Neither Fedora nor application developers would ever want it to come to this, and the Bottles developers stated multiple times that they don’t want to have to file for a trademark just to be taken seriously. Similarly, the OBS developers said that resorting to legal action would be the last thing they would want to do, and that they would rather have the issue resolved before that. But it took OBS, a project with a high enough profile and the resources required to acquire a trademark and threaten legal action, before the Fedora leadership cared to treat application developers like human beings and got the Fedora packagers and community members to comply (something they had stated multiple times they simply couldn’t do).

    I hate all of this. Fedora and all the other distributions need to do better. They all claim to care about their users, but happily keep shipping broken and misconfigured software to them over the upstream version, just because it’s what aligns with their current interests. In this case it’s the promotion of Fedora tooling and Fedora Flatpaks over the application on Flathub they have no control over. In previous incidents it was about branding applications like the rest of the system even though it made them unusable. And I can just as easily find and list a bunch of examples from other distributions.

    They don’t care about their users; they care about their bottom line first and foremost. Any civil attempts at fixing issues get ignored and laughed at, up until there is a threat of legal action or a big enough PR disaster, drama and shitshow that they can’t ignore it anymore and have to backtrack.

    These are my two angry cents. Overall, I am not exactly sure how Matthew Miller, in a rushed and desperate attempt at damage control for the OBS drama, managed not only to make it worse, but to piss off the entire Flathub community at the same time. But what’s done is done; let’s see what we can do to address the issues that have festered and persisted for years now.

  • Thoughts on employing PGO and BOLT on the GNOME stack

    Christian was looking at PGO and BOLT recently, so I figured I’d write down my notes from the discussions we had on how we’d go about making things faster on our stack, since I don’t have the time or the resources to pursue those plans myself atm.

    First off, let’s start with the basics. PGO (Profile Guided Optimization) and BOLT (Binary Optimization and Layout Tool) work in similar ways: you capture one or more “profiles” of a workload that’s representative of a usecase of your code, and then the tools do their magic to make the common hot paths more efficient/cache-friendly/etc. Afterwards they produce a new binary that is hopefully faster than the old one and functionally identical, so you can just replace it.
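    As a rough sketch of what the two workflows look like in practice (main.c, app, and the workload invocation are placeholders, not anything from our stack):

    # PGO with GCC: build instrumented, run a representative workload,
    # then rebuild using the collected profile
    gcc -O2 -fprofile-generate -o app main.c
    ./app --some-representative-workload
    gcc -O2 -fprofile-use -o app main.c

    # BOLT: profile the stock binary with perf (the binary must be
    # linked with -Wl,--emit-relocs), convert the profile, rewrite it
    perf record -e cycles:u -j any,u -- ./app --some-representative-workload
    perf2bolt -p perf.data -o perf.fdata ./app
    llvm-bolt ./app -o ./app.bolt -data=perf.fdata -reorder-blocks=ext-tsp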

    Right away, two issues arise here:

    First of all, we don’t really have any benchmarks in our stack, let alone ones well-rounded enough to account for the majority of usecases. Additionally, we need better instrumentation to capture stats like frames and frame-times, and to export them both for sysprof and for making the benchmark runners more useful.

    Once we have the benchmarks we can use them to create the profiles for optimizations and to verify that any changes have the desired effect. We will need multiple profiles of all the different hardware/software configurations.

    For example, for GTK we’d ideally want a matrix of profiles for the different render backends (NGL/Vulkan), along with the Mesa drivers they’d use depending on the hardware (AMD/Intel), and then also different architectures, so additional profiles for the Raspberry Pi 5 and Asahi stacks. We might also want to add a profile captured under qemu+virtio while we are at it.

    Maintaining the benchmarks and profiles would be a lot of work, and very tailored to each project, so they would all have to live in their upstream repositories.

    On the other hand, the optimization itself has to be done during the tree/userland/OS composition, and we’d have to aggregate all the profiles from all the projects to apply them. This is easily done when you are in control of the whole deployment, as we are for the GNOME Flatpak Runtime. It’s also easy if you are targeting an embedded deployment, where most of the time you have custom images you are in full control of and know exactly the workload you will be running.

    If we want distros to also apply these optimizations, and for this to be done at scale, we’d have to make the whole process automatic and part of the usual compilation process, so there would be no room for error during integration. The downside would be far fewer opportunities for aggregating different usecases/profiles, as projects would either have to own the optimization of the stack beneath them (e.g., GTK being the one relinking Pango) or only relink their own libraries.

    To conclude, post-link-time optimization would be a great avenue to explore, as it seems to be one of the lower-hanging fruits when it comes to optimizing the whole stack. It would be quite the effort and require a decent amount of committed work, but it would be worth it in the long run.

  • Thessaloniki spring Hackfests!

    Hello everyone!

    I am here to terrorize your calendar by dropping the dates for two back-to-back hackfests we are organizing in the beautiful city of Thessaloniki, Greece (who doesn’t like coming to Greece on work time, right?).

    May 27-29th we will be hosting the annual GStreamer Spring Hackfest. If multimedia is your thing, you know the drill. Newcomers are also welcome ofc!

    From May 31st to June 5th we will be hosting another edition of the GNOME ♥️ Rust Hackfest, the first in-person Rust hackfest since the pandemic started. From what I hear, half of Berlin will be coming for this one, so we might change its scope to an all-around GNOME one, but we will see. You are all welcome!

    See the pages of each hackfest for more details.

    We are in the final steps of booking the venue, but it will most likely be in the city center, and it should be safe to book accommodation and travel tickets.

    Additionally, the venue we are looking at can accommodate around 40 people, so please, please add yourself to the organizing pad of each hackfest you are interested in, along with any dietary restrictions you might have.

    See you all IRL!

  • Developing in GNOME OS: systemd-sysext

    This is the first post in a series about tools used to develop GNOME and GNOME OS. Part two coming soon.

    In the old days, developing the desktop was simple™️. You only had to install a handful of toolchains, development headers, and tools from the distribution packages, run make install, execute say_prayer.sh, and if you had not eaten meat on Friday, you had a 25% chance of your system working after a reboot.

    But how do you develop software in the brave new world of Image-Based systems and containerization?

    Well, if you are an application developer, Flatpak makes this very simple. Applications run against containerized runtimes, with their own userspace. The only thing they need from the host system is a working Desktop Environment, Flatpak, and Portals. Builder and flatpak-builder, along with all the integration we built into the Desktop, make sure it will be a breeze.

    But what if you are developing a system component, like a daemon or perhaps GNOME Shell?

    Till now the go-to solution has been “Grab a Fedora container and open enough sandbox holes until things work”. This is what Fedora Toolbox does, and it works great for a lot of things. However, the sandbox makes things way more difficult than they need to be, and it’s rather limiting in what parts of the system you can test. But there is another way.

    In GNOME OS we provide two images. The first one is how we envision GNOME to be, with all the applications and services we develop. The second one is complementary: it adds all the development tools, headers, and debugging information we need to develop GNOME. The popular image-based OSes don’t provide you with something like that, and you have to layer everything on your own, which makes it much harder to do things like running gdb against the host system. But in GNOME OS it’s easy: by switching to the Development Edition/Image you get access to all the tools required to build any GNOME component, and even GNOME OS itself.

    Alright, I have a compiler and all the dependencies I need to build my project but /usr is still immutable. How do I install and run my modified build?

    I am glad you asked! Enter systemd-sysext.

    If you are familiar with ostree-based operating systems, you have probably used ostree admin unlock at some point. systemd-sysext is another take on the same concept. You build your software as usual with a /usr (or /opt) prefix and install it into a special directory along with some metadata. Then, upon running systemd-sysext merge, systemd creates an overlayfs with the merged contents of /usr and your directory, and replaces the existing /usr mount atomically.
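
    In day-to-day use, the whole lifecycle boils down to a handful of verbs:

    sudo systemd-sysext merge    # overlay all installed extensions on top of /usr
    sudo systemd-sysext status   # show what is currently merged
    sudo systemd-sysext unmerge  # drop the overlay, back to the pristine /usr
    sudo systemd-sysext refresh  # unmerge + merge, picking up updated contents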

    The format is very simple; there are two directories we care about for our use case:

    • /run/extensions
    • /var/lib/extensions

    /run/extensions is a temporary directory that’s wiped on reboot, so it’s excellent for experimental changes that might take down the system. After a power cycle you will boot back into a clean slate.

    /var/lib/extensions is for persistent changes. Say you are experimenting with UI changes or want to thoroughly test a patch set for a couple of days. Or you might simply want a local FFmpeg build with extra codecs, cause god lawyer forbid we have working video playback.
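
    For orientation, a populated persistent extension ends up looking roughly like this (using the hypothetical custom_install name from the script further down):

    /var/lib/extensions/custom_install/
    └── usr/
        ├── bin/
        │   └── your-rebuilt-binary
        └── lib/
            └── extension-release.d/
                └── extension-release.custom_install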

    If you are installing a build into /run/extensions, the only thing you need to do is run the following two commands:

    sudo DESTDIR=/run/extensions/custom_install meson install -C _build
    sudo systemd-sysext refresh --force

    This installs our tree into the custom_install directory and tells sysext to refresh, which means it will look at all the extensions we might have, unmerge (unmount) them, and then merge the updated contents again. Congrats, you can launch your new binaries now.

    Normally sysext checks whether the extension is compatible with the host operating system. This is done with a metadata file that includes the ID= field you’d find in /etc/os-release (see the systemd-sysext manpage for more). The --force argument ignores this check; since we build the project on the host, we can be reasonably sure things will work.
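
    In other words, the check boils down to comparing two files (values shown as they would be on GNOME OS, matching the script below):

    grep ^ID= /etc/os-release
    # ID=org.gnome.gnomeos

    cat /run/extensions/custom_install/usr/lib/extension-release.d/extension-release.custom_install
    # ID=org.gnome.gnomeos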

    If we want our build to be persistent and available across reboots, we have to install it into /var/lib/extensions/ and create the metadata file so systemd-sysext will mount it automatically upon boot. It’s rather simple to do, but it gets repetitive after a while, and hunting through your console history is never fun. Here is a simple script that takes care of it.

    #!/bin/bash

    set -eu

    EXTENSION_NAME="custom_install"
    # Swap in /var/lib/extensions for a persistent extension:
    # DESTDIR="/var/lib/extensions/$EXTENSION_NAME"
    DESTDIR="/run/extensions/$EXTENSION_NAME"
    VERSION_FILE="$DESTDIR/usr/lib/extension-release.d/extension-release.$EXTENSION_NAME"

    sudo mkdir -p "$DESTDIR"

    # Uncomment to set up and compile the project from scratch:
    # rm -rf _build
    # meson setup _build --prefix=/usr
    # meson compile -C _build
    sudo meson install --destdir="$DESTDIR" -C _build --no-rebuild

    # Create the metadata file so systemd-sysext mounts the extension on boot.
    # Use ID=_any to ignore the host compatibility check completely.
    sudo mkdir -p "$DESTDIR/usr/lib/extension-release.d/"
    echo ID=org.gnome.gnomeos | sudo tee "$VERSION_FILE"

    sudo systemd-sysext refresh
    sudo systemd-sysext list

    Here’s a demo. I used sysprof as the example since it’s a more visible change than my gnome-session MR. You can also test gnome-shell the same way by installing, refreshing, and then logging out and logging in again.

    Another example is from today, when I was bisecting gdm. Before systemd-sysext, I’d be building complete images with different commits of gdm in order to bisect. It was still fast, at ~25 minutes per build for the whole OS, but that’s still 24 minutes more than it takes to become annoying.

    Now, I switched to the gdm checkout, started a bisect, compiled, installed, and then ran systemctl restart gdm.service. The login greeter would either come up, and I’d continue the bisect, or it would be blank, at which point I’d ssh in, switch to a tty, or even hit the power button, and continue knowing it was a bad commit. Repeat. Bisect done in 10 minutes.
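
    Sketched out as commands, each iteration of that loop looks roughly like this (reusing the hypothetical custom_install extension from above):

    # inside the gdm checkout, after git bisect start <bad> <good>
    meson compile -C _build
    sudo DESTDIR=/run/extensions/custom_install meson install -C _build --no-rebuild
    sudo systemd-sysext refresh --force
    sudo systemctl restart gdm.service
    # greeter comes up -> git bisect good
    # blank screen     -> ssh in, git bisect bad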

    And the best part is that we can keep updating the operating system image uninterrupted, and on the next boot the trees will get merged again. Want to go back? Simply systemd-sysext unmerge or remove the extension directories!

    One caveat when using systemd-sysext is that you might occasionally need to run systemctl daemon-reload. Another, when using custom DESTDIRs, is that meson won’t run post-install/integration commands for you (nor would they work), so if you need to recompile glib schemas, you will have to first systemd-sysext refresh, compile the schemas, place the new binary in the extension (or make a new extension), and systemd-sysext refresh again.
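
    For the glib schemas case, that dance could look something like this (a sketch, again assuming the custom_install extension from the script above):

    # First refresh, so the merged /usr contains the new schema files.
    sudo systemd-sysext refresh --force

    # Compile all the merged schemas into a scratch directory.
    mkdir -p /tmp/schemas
    glib-compile-schemas /usr/share/glib-2.0/schemas --targetdir=/tmp/schemas

    # Drop the compiled binary into the extension tree and merge again.
    sudo mkdir -p /run/extensions/custom_install/usr/share/glib-2.0/schemas
    sudo cp /tmp/schemas/gschemas.compiled \
        /run/extensions/custom_install/usr/share/glib-2.0/schemas/
    sudo systemd-sysext refresh --force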

    Another use case I plan on exploring in the near future is generating systemd-sysext images in CI for merge requests, the same way we generate Flatpak Bundles for applications. That proved really useful for people wanting to test apps in an easier way. Begone shall be the days where we had to teach designers how to set up JHBuild in order to test UI changes in the Shell. Just grab the disk image, drop it in GNOME OS, refresh, and you are done!

    And that’s not all: none of this is specific to GNOME OS, other than having bleeding-edge versions of all the GNOME components, that is! You can use systemd-sysext the same way on Fedora Workstation, Arch, Elementary OS, etc. The only requirements are a recent enough systemd and a merged /usr tree. Next time you are about to meson install on your host system, give systemd-sysext a try!

    This whole post is basically a retelling of Lennart’s blogpost about systemd-sysext; it has more details and you should check it out. It is also how I initially found out about this awesome tool! I tried to get people hooked on it in the past but it didn’t bear fruit, so here’s one post specific to GNOME development!

    Happy Hacking!

  • You are not actually mad at Flatpak

    It’s that time of the month again, when some clueless guy tries to write a hit piece about Flatpak and we all get déjà vu.

    One of my favorite pastimes for a while now has been watching people on the internet try to rationalize concepts and decisions. Instead of asking why things ended up the way they did, what the issues and goals of system A and design B were, and what the compromises were, they just pick the first idea that comes to their mind and go with it.

    For example, a very common scenario is that someone picks a random proprietary application, points out all the sandbox holes it needs to function, and thus declares the sandbox useless. At no point do they ever ask: “Hey, why does Flatpak allow punching such holes?”, “What development has been done to limit that?”, “What tools are available to deal with that?”, “Why am I cherry-picking an evil proprietary application as my example, one that no distribution would be able to distribute anyway and that I wouldn’t want any person to use?”, and “What went wrong with my life that I have to write hate posts to get attention and feel any kind of emotion?”. These are just a few of the questions that should have come up and given one pause, way before getting anywhere near the publish button.

    Now, I can answer most of these questions, and you would be happy to know that even Chromium and Electron have been adopting more and more of the sandboxed Portal APIs as the years pass. But there isn’t any point in talking about it, cause none of the delirium is about the technical decisions behind Flatpak or how it works. None.

    Let me explain.

    Flatpak itself is a piece of software. It provides major advantages for distributing and running applications, such as atomic updates, binary deltas, reproducible build and run environments, mandatory sandboxing for many resources, and so on. How the software is built and distributed, however, has nothing to do with Flatpak. If you think the distribution model is what’s best for you, you can already use Fedora’s flatpaked applications, Canonical’s snaps, or your fav distro’s version of this. Everything is still built from distribution packages, by your distribution vendor, vetted by the package maintainers, and comes with the same downstream patches you’d see in the normal rpm/deb/etc variations. And you would still get the advantages of sandboxing, atomicity, etc., even though you don’t need them cause you love and trust your distro so much.

    On the other hand, what every single post really complains about is Flathub. You see, what Flatpak gave us was the ability to decouple the applications from the host system. Instead of taking the existing runtime from some distro, we (the platform and application developers) built our runtimes from scratch: runtimes we were in full control of, that we could update and mold at will, that were not bound to any existing distribution or corporation, and that we could make sure our applications were fully functional with, without any downstream patches that made things orange or blue. And unlike the old distribution model, Flathub gave application developers the same autonomy. We no longer had to wait for dependencies to be packaged, or worry about some distribution shipping an incompatible version. We didn’t have to wait until a new enough version of a library was included in an LTS release before making use of it. We could now ship our applications on our own cadence, without gatekeepers, in the way we envisioned and intended.

    This is what made applications truly work on any distribution. This is what was truly disruptive about Flatpak. This is what the haters are mad about.

    Thanks to Flathub, the social dynamic of distributing applications has changed. Now the people who create the platforms (GNOME, KDE, Elementary, etc.) and the applications are in charge of distributing them. The sysadmin-turned-distro-packager middleman trope from the 90s is gone, and no developer or user wants it back. This is why Flathub took over, and why no application developer became a Fedora packager even when they could build Flatpaks from the packaged rpms. If we ever want “Desktop Linux” to succeed, we have to let go of the idea of Linux distributions and “Linux” as a monolith.

    The old distribution model is still useful for very specific enterprise environments where you depend on a single ISV for all your software, but unless you are Mr. IBM or Mr. Canonical, you gain nothing by asking for this on your desktop.

    If you want to read more on the subject, I highly suggest these two blogposts, along with Richard Brown’s FOSDEM 2023 talk.