Category: Fedora

  • Fedora Must (Carefully) Embrace Flathub

    Motivation

    Opportunity is upon us! For the past few years, the desktop Linux user base has been growing at a historically high rate. StatCounter currently has us at 4.14% desktop OS market share for Q2 2025. For comparison, when Fedora Workstation was first released in Q4 2014, desktop Linux was at 1.38%. Now, StatCounter measures HTTP requests, not computers, but it’s safe to say the trend is highly encouraging. Don’t trust StatCounter? Cloudflare reports 2.9% for Q2 2025. One of the world’s most popular websites reports 5.1%. And although I was unable to figure out how to make a permanent link to the results, analytics.usa.gov is currently reporting a robust 6.2% for the past 90 days, and increasing. The Linux user base is already much larger than I previously suspected would ever be possible, and it seems to be increasing quickly. I wonder if we are perhaps nearing an inflection point where our user base may soon increase even more considerably. The End of 10 and enthusiastic YouTubers are certainly not hurting.

    Compared to its peers, Fedora is doing particularly well. It’s pretty safe to say that Fedora is now one of the 2 or 3 most popular and successful desktop Linux operating systems, a far cry from its status 10 years ago, when Fedora suffered from an unfortunate longstanding reputation that it was an unstable “test bed” OS only suitable for experienced technical users. Those days are long gone; nowadays, Fedora has an army of social media users eager to promote it as a reliable, newcomer-friendly choice.

    But we cannot stop here. If we become complacent and content ourselves with the status quo, then we will fail to take maximum advantage of the current opportunity.

    Although Fedora Workstation works well for most users, and although quality and reliability has improved considerably over the past decade, it is still far too easy for inexperienced users to break the operating system. Today’s Fedora Workstation is fundamentally just a nicer version of the same thing we already had 10 years ago. The original plan called for major changes that we have thus far failed to deliver, like “Robust Upgrades,” “Better upgrade/rollback control,” and “Container based application install.” These critical goals are notably all already achieved by Fedora Silverblue, the experimental image-based alternative to Fedora Workstation, but few Fedora users benefit because only the most experienced and adventurous users are willing to install Silverblue. I had long assumed that Silverblue would eventually become the next Fedora Workstation, and that the Silverblue code name would eventually be retired. This is now an explicit project goal of Fedora’s Strategy 2028, and it is critical for Fedora’s success. The Fedora Workstation of the future must be:

    • Safe and image-based by default: an atomic operating system composed of RPMs built on bootc. Most users should stick with image-based mode because it’s much harder to break the OS, and easier to troubleshoot when something does go wrong.
    • Flexible if you so choose: converting the image-based OS into the traditional package-based OS managed by RPM and dnf must be allowed, for users who prefer or require it. Or alternatively, if converting is not possible, then installing a traditional non-atomic Fedora must remain possible. Either way, we must not force users to use image-based desktops if they do not want to, so no need to panic. But image-based must eventually become the new default.

    Silverblue is not ready yet, but Fedora has a large community of developers and should be able to eventually resolve the remaining problems.

    But wait, wasn’t this supposed to be a blog post about Flathub? Well, consider that with an image-based OS, you cannot easily install traditional RPM packages. Instead, in Fedora Silverblue, desktop applications are installed only via Flatpak. (This is also true of Fedora Kinoite and Fedora’s other atomic desktop variants.) So Fedora must have a source of Flatpaks, and that source must be enabled by default, or there won’t be any apps available.

    (Don’t like Flatpak? This blog post is long enough already, so I’ll ask you to just make a leap of faith and accept that Flatpak is cool. Notably, Flatpak applications that keep their bundled dependencies updated and do not subvert the sandbox are much safer to use than traditional distro-packaged applications.)

    In practice, there are currently only two interesting sources of Flatpaks to choose from: Fedora Flatpaks and Flathub. Flathub is the much better choice, and enabling it by default should be our end goal. Fedora is already discussing whether to do this. But Flathub also has several disadvantages, some of which ought to be blockers.

    Why Flathub?

    There are important technical differences between Fedora’s Flatpaks, built from Fedora RPMs, vs. Flathub’s Flatpaks, which are usually built on top of freedesktop-sdk. But I will not discuss those, because the social differences are more important than the technical differences.

    Users Like Flathub

    Feedback from Fedora’s user base has been clear: among users who like Flatpaks, Flathub is extremely popular. When installing a Flatpak application, users generally expect it to come from Flathub. In contrast, many users of Fedora Flatpaks do not install them intentionally, but rather by accident, only because they are the preferred software source in GNOME Software. Users are often frustrated to discover that Fedora Flatpaks are not supported by upstream software developers and have a different set of bugs than upstream Flatpaks do. It is also common for users and even Fedora developers to entirely remove the Fedora Flatpak application source.

    Not so many users prefer to use Fedora Flatpaks. Generally, these users cite some of Flathub’s questionable packaging practices as justification for avoiding use of Flathub. These concerns are valid; Flathub has some serious problems, which I will discuss in more detail below. But improving Flathub and fixing these problems would surely be much easier than creating thousands of Fedora Flatpak packages and attempting to compete with Flathub, a competition that Fedora would be quite unlikely to win.

    Flathub is drastically more popular than Fedora Flatpaks even among the most hardcore Fedora community members who participate in change proposal debate on Fedora Discussion. (At time of writing, nearly 80% of discussion participants favor filtering out Fedora Flatpaks.)

    This is the most important point. Flathub has already won.

    Cut Out the Middleman

    In general, upstream software developers understand their software much better than downstream packagers. Bugs reported to downstream issue trackers are much less likely to be satisfactorily resolved. There are a variety of ways that downstream packagers could accidentally mess up a package, whether by failing to enable a feature flag, or upgrading a dependency before the application is compatible with the new version. Downstream support is almost never as good as upstream support.

    Adding a middleman between upstream and users really only makes sense if the middleman is adding significant value. Traditional distro-packaged applications used to provide considerable value by making it easy to install the software. Nowadays, since upstreams can distribute software directly to users via Flathub, that value is far more limited.

    Bus Factor is Critical

    Most Flatpak application developers prefer to contribute to Flathub. Accordingly, there are very few developers working on Fedora Flatpaks. Almost all of the Fedora Flatpaks are actually owned by one single developer who has packaged many hundreds of applications. This is surely not a healthy situation.

    Bugs in Fedora Flatpaks are reported on the Fedora Flatpak SIG issue tracker. This SIG notably does not have a list of active members, but rather a years-old list of people who are interested in joining the SIG, who are encouraged to attend the first meeting. Needless to say the SIG does not seem to be in a very good state.

    I suspect this situation is permanent, reflecting a general lack of interest in Fedora Flatpak development, not just a temporary shortfall of contributors. Quality is naturally going to be higher where there are more contributors. The quality of Fedora Flatpak applications is often lower than Flathub applications, sometimes significantly so. Fedora Flatpaks also receive significantly less testing than Flathub Flatpaks. Upstream developers do not test the Fedora Flatpaks, and downstream developers are spread too thin to have plausible hope of testing them adequately.

    Focus on What Really Matters

    Fedora’s main competency and primary value is the core operating system, not miscellaneous applications that ship on top of it for historical reasons.

    When people complain that “distros are obsolete,” they don’t mean that Linux operating systems are not needed anymore. Of course you need an OS on which to run applications. The anti-distro people notably all use distros.

    But it’s no longer necessary for a Linux distribution to attempt to package every open source desktop application. That used to be a requirement for a Linux operating system to be successful; nowadays it is an optional activity that we perform mainly for historical reasons, because it is what we have always done, not because it is still strategic or essential. It is a time-consuming, resource-intensive side quest that no longer adds meaningful value.

    The Status Quo

    Let’s review how things work currently:

    • By default, Fedora Workstation allows users to install open source software from the following sources: Fedora Flatpaks, Fedora RPMs, and Cisco’s OpenH264 RPM.
    • The post-install initial setup workflow, gnome-initial-setup, suggests enabling third-party repositories. If the user does not click the Enable button, then GNOME Software will make the same suggestion the first time it is run. Clicking this button enables all of Flathub, plus a few other RPM repositories.
    (Image: the Third-Party Repositories page in Fedora’s gnome-initial-setup.)

    Fedora will probably never enable software sources that contain proprietary software by default, but it’s easy to enable searching for proprietary software if desired.

    (Technically, Fedora actually has a filter in place to allow hiding any Flathub applications we don’t want users to see. But since Fedora 38, this filter is empty, so no apps are hidden in practice. The downstream filter was quite unpopular with users, and the mechanism still exists only as a safety hatch in case there is some unanticipated future emergency.)

    The Future

    Here are my proposed requirements for Fedora Workstation to become a successful image-based OS.

    This proposal applies only to Fedora Workstation (Fedora’s GNOME edition). These proposals could just as well apply to other Fedora editions and spins, like Fedora KDE Plasma Desktop, but different Fedora variants have different needs, so each should be handled separately.

    Flathub is Enabled by Default

    Since Flathub includes proprietary software, we cannot include all of Flathub by default. But Flathub already supports subsets. Fedora can safely enable the floss subset by default, and replace the “Enable Third-Party Repositories” button with an “Enable Proprietary Software Sources” button that would allow users to switch from the floss subset to the full Flathub if they so choose.

    This goal can be implemented today, but we should wait because Flathub has some problems that we ought to fix first. More on that below.
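
    For reference, here is roughly what that would look like at the command line today. This is only a sketch: the --subset option exists in recent flatpak releases and the floss subset name comes from Flathub, but the exact mechanism GNOME Software and gnome-initial-setup would use is an implementation detail still to be worked out.

      # restrict the preconfigured Flathub remote to the open source subset
      $ flatpak remote-modify --subset=floss flathub

    The “Enable Proprietary Software Sources” button would then amount to switching the remote back to the full, unfiltered Flathub.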

    All Default Applications are Fedora Flatpak Applications

    All applications installed by default in Fedora Workstation should be Fedora Flatpaks. (Or almost all. Certain exceptions, like gnome-control-center, would make more sense as part of the OS image rather than as a Flatpak.)

    Notice that I said Fedora Flatpaks, not Flathub. Fedora surely does need to control the handful of applications that are shipped by default. We don’t want to be at the mercy of Flathub to provide the core user experience.

    There has been recent progress towards this goal, although it’s not ready yet.

    All Other Applications are Flathub Flatpaks

    With the exception of the default Fedora Flatpak applications, Flathub should be the only source of applications in GNOME Software.

    It will soon be time to turn off GNOME Software’s support for installing RPM applications, making it a Flatpak-only software center by default. (Because GNOME Software uses a plugin architecture, users of traditional package-based Fedora who want to use GNOME Software to install RPM applications would still be able to do so by installing a subpackage providing a plugin, if desired.)

    This requirement is an end goal. It can be done today, but it doesn’t necessarily need to be an immediate next step.

    Flathub Must Improve

    Flathub has a few serious problems, and needs to make some policy changes before Fedora enables it by default. I’ll discuss this in more detail next.

    Fedora Must Help

    We should not make demands of Flathub without helping to implement them. Fedora has a large developer community and significant resources. We must not barge in and attempt to take over the Flathub project; instead, let’s increase our activity in the Flathub community somewhat, and lend a hand where requested.

    The Case for Fedora Flatpaks

    Earlier this year, Yaakov presented The Case for Fedora Flatpaks. This is the strongest argument I’ve seen in favor of Fedora Flatpaks. It complains about five problems with Flathub:

    • Lack of source and build system provenance: on this point, Yaakov is completely right. This is a serious problem, and it would be unacceptable for Fedora to embrace Flathub before it is fixed. More on this below.
    • Lack of separation between FOSS, legally encumbered, and proprietary software: this is not a real problem. Flathub already has a floss subset to separate open source vs. proprietary software; it may not be a separate repository, but that hardly matters because subsets allow us to achieve an equivalent user experience. Then there is indeed no separate subset for legally-encumbered software, but this also does not matter. Desktop users invariably wish to install encumbered software; I have yet to meet a user who does not want multimedia playback to work, after all. Fedora cannot offer encumbered multimedia codecs, but Flathub can, and that’s a major advantage for Flathub. Users and operating systems can block the multimedia extensions if truly desired. Lastly, some of the plainly-unlicensed proprietary software currently available on Flathub does admittedly seem pretty clearly outrageous, but if this is a concern for you, simply stick to the floss subset.
    • Lack of systemic upgrading of applications to the latest runtime: again, Yaakov is correct. This is a serious problem, and it would be unacceptable for Fedora to embrace Flathub before it is fixed. More on this below.
    • Lack of coordination of changes to non-runtime dependencies: this is a difference from Fedora, but it’s not necessarily a problem. In fact, allowing applications to have different versions of dependencies can be quite convenient, since upgrading dependencies can sometimes break applications. It does become a problem when bundled dependencies become significantly outdated, though, as this creates security risk. More on this below.
    • Lack of systemic community engagement: it’s silly to claim that Flathub has no community. Unresponsive Flathub maintainers are a real problem, but Fedora has an unresponsive maintainer problem too, so this can hardly count as a point against Flathub. That said, yes, Flathub needs a better way to flag unresponsive maintainers.

    So now we have some good reasons to create Fedora Flatpaks. But maintaining Flatpaks is a tremendous effort. Is it really worth doing if we can improve Flathub instead?

    Flathub Must Improve

    I propose the following improvements:

    • Open source software must be built from source on trusted infrastructure.
    • Applications must not depend on end-of-life runtimes.
    • Applications must use flatpak-external-data-checker to monitor bundled dependencies wherever possible.
    • Sandbox holes must be phased out, except where this is fundamentally technically infeasible.

    Let’s discuss each point in more detail.

    Build Open Source from Source

    Open source software can contain all manner of vulnerabilities. Although unlikely, it might even contain malicious backdoors. Building from source does nothing to guarantee that the software is in any way safe to use (and if it’s written in C or C++, then it’s definitely not safe). But it sets an essential baseline: you can at least be confident that the binary you install on your computer actually corresponds to the provided source code, assuming the build infrastructure is trusted and not compromised. And if the package supports reproducible builds, then you can reliably detect malicious infrastructure, too!

    In contrast, when shipping a prebuilt binary, whoever built the binary can easily insert an undetectable backdoor; there is no need to resort to stealthy obfuscation tactics. With proprietary software, this risk is inherent and unavoidable: users just have to accept the risk and trust that whoever built the software is not malicious. Fine. But users generally do not expect this risk to extend to open source software, because all Linux operating systems fortunately require open source software to be built from source. Open source software not built from source is unusual and is invariably treated as a serious bug.

    Flathub is different. On Flathub, shipping prebuilt binaries of open source software is, sadly, a common accepted practice. Here are several examples. Flathub itself admits that around 6% of its software is not built from source, so this problem is pervasive, not an isolated issue. (Although that percentage unfortunately considers proprietary software in addition to open source software, overstating the badness of the problem, because building proprietary software from source is impossible and not doing so is not a problem.) Update: I’ve been advised that I misunderstood the purpose of extra-data. Most apps that ship prebuilt binaries do not use extra-data. I’m not sure how many apps are shipping prebuilt binaries, but the problem is pervasive.

    Security is not the only problem. In practice, Flathub applications that are not built from source sometimes package binaries only for x86_64, leaving aarch64 users entirely out of luck, even though Flathub normally supports aarch64, an architecture that is important for Fedora. This is frequently cited by Flathub’s opponents as a major disadvantage relative to Fedora Flatpaks.

    A plan to fix this should exist before Fedora enables Flathub by default. I can think of a few possible solutions:

    • Create a new subset for open source software not built from source, so Fedora can filter out this subset. Users can enable the subset at their own risk. This is hardly ideal, but it would allow Fedora to enable Flathub without exposing users to prebuilt open source software.
    • Declare that any software not built from source should be treated as equivalent to proprietary software, and moved out of the floss subset. This is not quite right, because it is open source, but it has the same security and trust characteristics as proprietary software, so it’s not unreasonable either.
    • Set a flag date by which any open source software not built from source must be delisted from Flathub. I’ll arbitrarily propose July 1, 2027, which should be a generous amount of time to fix apps. This is my preferred solution. It can also be combined with either of the above.

    Some of the apps not currently built from source are Electron packages. Electron takes a long time to build, and I wonder if building every Electron app from source might overwhelm Flathub’s existing build infrastructure. We will need some sort of solution to this. I wonder if it would be possible to build Electron runtimes to provide a few common versions of Electron. Alternatively, Flathub might just need more infrastructure funding.

    Tangent time: a few applications on Flathub are built on non-Flathub infrastructure, notably Firefox and OBS Studio. It would be better to build everything on Flathub’s infrastructure to reduce risk of infrastructure compromise, but as long as this practice is limited to only a few well-known applications using trusted infrastructure, then the risk is lower and it’s not necessarily a serious problem. The third-party infrastructure should be designed thoughtfully, and only the infrastructure should be able to upload binaries; it should not be possible for a human to manually upload a build. It’s unfortunately not always easy to assess whether an application complies with these guidelines or not. Let’s consider OBS Studio. I appreciate that it almost follows my guidelines, because the binaries are normally built by GitHub Actions and will therefore correspond with the project’s source code, but I think a malicious maintainer could bypass that by uploading a malicious GitHub binary release? This is not ideal, but fortunately custom infrastructure is an unusual edge case, rather than a pervasive problem.

    Penalize End-of-life Runtimes

    When a Flatpak runtime reaches end-of-life (EOL), it stops receiving all updates, including security updates. How pervasive are EOL runtimes on Flathub? Using the Runtime Distribution section of Flathub Statistics and some knowledge of which runtimes are still supported, I determined that 994 out of 3,438 apps are currently using an EOL runtime. Ouch. (Note that the statistics page says there are 3,063 total desktop apps, but for whatever reason, the number of apps presented in the Runtime Distribution graph is higher. Could there really be 375 command line apps on Flathub?)

    Using an EOL runtime is dangerous and irresponsible, and developers who claim otherwise are not good at risk assessment. Some developers will say that security does not matter because their app is not security-critical. It’s true that most security vulnerabilities are not actually terribly important or worth panicking over, but this does not mean it’s acceptable to stop fixing vulnerabilities altogether. In fact, security matters for most apps. The few exceptions would be apps that neither open files nor use the network, and that describes very few apps.

    I recently saw a developer use the example of a music player application to argue that EOL runtimes are not actually a serious problem. This developer picked a terrible example. Our hypothetical music player application can notably open audio files. Applications that parse files are inherently high risk because users love to open untrusted files. If you give me a file, the first thing I’m going to do is open it to see what it is. Who wouldn’t? Curiosity is human nature. And a music player probably uses GStreamer, which puts it at the very highest tier of security risk (alongside your PDF reader, email client, and web browser). I know of exactly one case of a GNOME user being exploited in the wild: it happened when the user opened a booby-trapped video using Totem, GNOME’s GStreamer-based video player. At least your web browser is guaranteed to be heavily sandboxed; your music player might very well not be.

    The Flatpak sandbox certainly helps to mitigate the impact of vulnerabilities, but sandboxes are intended to be a defense in depth measure. They should not be treated as a primary security mechanism or as an excuse to not fix security bugs. Also, too many Flatpak apps subvert the sandbox entirely.

    Of course, each app has a different risk level. The risk of you being attacked via GNOME Calculator is pretty low. It does not open files, and the only untrusted input it parses is currency conversion data provided by the International Monetary Fund. Life goes on if your calculator is unmaintained. Any number of other applications are probably generally safe. But it would be entirely impractical to assess 3000 different apps individually to determine whether they are a significant security risk or not. And independent of security considerations, use of an EOL runtime is a good baseline to determine whether the application is adequately maintained, so that abandoned apps can be eventually delisted. It would not be useful to make exceptions.

    The solution here is simple enough:

    • It should not be possible to build an application that depends on an EOL runtime, to motivate active maintainers to update to a newer runtime. Flathub already implemented this rule in the past, but it got dropped at some point.
    • An application that depends on an EOL runtime for too long should eventually be delisted. Perhaps 6 months or 1 year would be good deadlines.
    • A monitoring dashboard would make it easier to see which apps are using maintained runtimes and which need to be fixed.
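
    Such a dashboard does not exist yet, but as a rough stopgap you can already survey your own machine with the flatpak CLI. A sketch (the column names come from current flatpak releases; deciding which runtimes are EOL still means checking the output against Flathub’s list of supported runtimes by hand):

      # which runtime does each installed app depend on?
      $ flatpak list --app --columns=application,runtime

      # tally apps per runtime to spot concentrations on old runtimes
      $ flatpak list --app --columns=runtime | sort | uniq -c | sort -rn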

    Monitor Bundled Dependencies

    Flatpak apps have to bundle any dependencies not present in their runtime. This creates considerable security risk if the maintainer of the Flathub packaging does not regularly update the dependencies. The negative consequences are identical to using an EOL runtime.

    Fortunately, Flathub already has a tool to deal with this problem: flatpak-external-data-checker. This tool automatically opens pull requests to update bundled dependencies when a new version is available. However, not all applications use flatpak-external-data-checker, and not all applications that do use it do so for all dependencies, and none of this matters if the app’s packaging is no longer maintained.
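
    If you maintain Flathub packaging and have not set this up yet, it is easy to try locally. A sketch, assuming the checker is still published on Flathub as org.flathub.flatpak-external-data-checker and that your manifest carries x-checker-data annotations for its sources (org.example.App.json is a placeholder for your manifest):

      $ flatpak install flathub org.flathub.flatpak-external-data-checker

      # check the manifest in the current directory for outdated sources
      $ flatpak run --filesystem=$PWD org.flathub.flatpak-external-data-checker org.example.App.json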

    I don’t know of any easy ways to monitor Flathub for outdated bundled dependencies, but given the number of apps using EOL runtimes, I assume the status quo is pretty bad. The next step here is to build better monitoring tools so we can better understand the scope of this problem.

    Phase Out Most Sandbox Holes (Eventually)

    Applications that parse data are full of security vulnerabilities, like buffer overflows and use-after-frees. Skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted malicious data to gain total control of your user account on your computer. They can then install malware, read all the files in your home directory, use your computer in a botnet, and do whatever else they want with it. But if the application is sandboxed, then a second type of exploit, called a sandbox escape, is needed before the app can harm your host operating system and access your personal data, so the attacker now has to exploit two vulnerabilities instead of just one. And while app vulnerabilities are extremely common, sandbox escapes are, in theory, rare.

    In theory, Flatpak apps are drastically safer than distro-packaged apps because Flatpak provides a strong sandbox by default. The security benefit of the sandbox cannot be overstated: it is amazing technology and greatly improves security relative to distro-packaged apps. But in practice, Flathub applications routinely subvert the sandbox by using expansive static permissions to open sandbox holes. Flathub claims that it carefully reviews apps’ use of static permissions and allows only the most narrow permissions that are possible for the app to function properly. This claim is dubious because, in practice, the permissions of actual apps on Flathub are extremely broad, as often as not making a total mockery of the sandbox.
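
    If you want to see this for yourself, an app’s static permissions are easy to inspect, and individual users can already tighten them locally with overrides. A sketch (org.example.App is a placeholder for a real application ID):

      # show the static permissions the app was built with
      $ flatpak info --show-permissions org.example.App

      # locally revoke a broad hole, e.g. full host filesystem access
      $ flatpak override --user --nofilesystem=host org.example.App

    Per-user overrides like these are a useful band-aid, but the real fix has to happen in the applications and the portals they use, as discussed below.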

    While some applications use sandbox holes out of laziness, in many cases it’s currently outright impossible to sandbox the application without breaking key functionality. For example, Sophie has documented many problems that necessitate sandbox holes in GNOME’s image viewer, Loupe. These problems are fixable, but they require significant development work that has not happened yet. Should we punish the application by requiring it to break itself to conform to the requirements of the sandbox? The Flathub community has decided that the answer is no: application developers can, in practice, use whatever permissions they need to make the app work, even if this entirely subverts the sandbox.

    This was originally a good idea. By allowing flexibility with sandbox permissions, Flathub made it very easy to package apps, became extremely popular, and allowed Flatpak itself to become successful. But the original understanding of the Flatpak community was that this laxity would be temporary: eventually, the rules would be tightened and apps would be held to progressively higher standards, until sandbox holes would eventually become rare. Unfortunately, this is taking too long. Flatpak has been around for a decade now, but this goal is not within reach.

    Tightening sandbox holes does not need to be a blocker for adopting Flathub in Fedora because it’s not a problem relative to the status quo in Fedora. Fedora Flatpaks have the exact same problem, and Fedora’s distro-packaged apps are not sandboxed at all (with only a few exceptions, like your web browser). But it’s long past time to at least make a plan for how to eventually phase out sandbox holes wherever possible. (In some cases, it won’t ever be possible; e.g. sandboxing a file manager or disk usage analyzer does not make any sense.) It’s currently too soon to use sticks to punish applications for having too many sandbox holes, but sticks will be necessary eventually, hopefully within the next 5 years. In the meantime, we can immediately begin to use carrots to reward app developers for eliminating holes. We will need to discuss specifics.

    We also need more developers to help improve xdg-desktop-portal, the component that allows sandboxed apps to safely access resources on the host system without using sandbox holes. This is too much work for any individual; it will require many developers working together.

    Software Source Prioritization

    So, let’s say we successfully engage with the Flathub project and make some good progress on solving the above problems. What should happen next?

    Fedora is a community of doers. We cannot tell Fedora contributors to stop doing work they wish to do. Accordingly, it’s unlikely that anybody will propose to shut down the Fedora Flatpak project so long as developers are still working on it. Don’t expect that to happen.

    However, this doesn’t mean Fedora contributors have a divine right for their packaged applications to be presented to users by default. Each Fedora edition (or spin) should be allowed to decide for itself what should be presented to the user in its software center. It’s time for the Fedora Engineering Steering Committee (FESCo) to allow Fedora editions to prefer third-party content over content from Fedora itself.

    We have a few options as to how exactly this should work:

    • We could choose to unconditionally prioritize all Flathub Flatpaks over Fedora Flatpaks, as I proposed earlier this year (Workstation ticket, LWN coverage). The precedence in GNOME Software would be Flathub > Fedora Flatpaks.
    • Alternatively, we could leave Fedora Flatpaks with highest priority, and instead apply a filter such that only Fedora Flatpaks that are installed by default are visible in GNOME Software. This is my preferred solution; there is already an active change proposal for Fedora 43 (proposal, discussion), and it has received considerable support from the Fedora community. Although the proposal only targets atomic editions like Silverblue and Kinoite for now, it makes sense to extend it to Fedora Workstation as well. The precedence would be Filtered Fedora Flatpaks > Flathub.

    When considering our desired end state, we can stop there; those are the only two options because of my “All Other Applications are Flathub Flatpaks” requirement: in an atomic OS, it’s no longer possible to install RPM-packaged applications, after all. But in the meantime, as a transitional measure, we still need to consider where RPMs fit in until such time that Fedora Workstation is ready to remove RPM applications from GNOME Software.

    We have several possible precedence options. The most obvious option, consistent with my proposals above, is: Flathub > Fedora RPMs > Fedora Flatpaks. And that would be fine, certainly a huge improvement over the status quo, which is Fedora Flatpaks > Fedora RPMs > Flathub.

    But we could also conditionally prioritize Flathub Flatpaks over Fedora Flatpaks or Fedora RPMs, such that the Flathub Flatpak is preferred only if it meets certain criteria. This makes sense if we want to nudge Flathub maintainers towards adopting certain best practices we might wish to encourage. Several Fedora users have proposed that we prefer Flathub only if the app has Verified status, indicating that the Flathub maintainer is the same as the upstream maintainer. But I do not care very much whether the app is verified or not; it’s perfectly acceptable for a third-party developer to maintain the Flathub packaging if the upstream developers do not wish to do so, and I don’t see any need to discourage this. Instead, I would rather consider whether the app receives a Probably Safe safety rating in GNOME Software. This would be a nice carrot to encourage app developers to tighten sandbox permissions. (Of course, this would be a transitional measure only, because eventually the goal is for Flathub to be the only software source.)

    There are many possible outcomes here, but here are my four favorites, in order:

    1. My favorite option: Filtered Fedora Flatpaks > Probably Safe Flathub > Fedora RPMs > Potentially Unsafe Flathub. Fedora Flatpaks take priority, but this won’t hurt anything because only applications shipped by default will be available, and those will be the ones that receive the most testing. This is not a desirable end state because it is complicated and it will be confusing to explain to users why a certain software source was preferred. But in the long run, when Fedora RPMs are eventually removed, it will simplify to Filtered Fedora Flatpaks > Flathub, which is elegant.
    2. A simple option, the same thing but without the conditional prioritization: Filtered Fedora Flatpaks > Flathub > Fedora RPMs.
    3. Alternative option: Probably Safe Flathub > Fedora RPMs > Potentially Unsafe Flathub > Unfiltered Fedora Flatpaks. When Fedora RPMs are eventually removed, this will simplify to Flathub > Unfiltered Fedora Flatpaks. This alternative option behaves almost the same as the above, except that it allows users to manually select the Fedora Flatpak if they wish to do so, rather than filtering them out. But there is a significant disadvantage: if you uninstall an application that is installed by default, then reinstall the application, it would come from Flathub rather than Fedora Flatpaks, which is unexpected. So we’ll probably want to hardcode exceptions for default apps to prefer Fedora Flatpaks.
    4. The corresponding simple option without conditional prioritization: Flathub > Fedora RPMs > Unfiltered Fedora Flatpaks.

    Any of these options would be fine.

    Conclusion

    Flathub is, frankly, not safe enough to be enabled by default in Fedora Workstation today. But these problems are fixable. Helping Flathub become more trustworthy will be far easier than competing against it by maintaining thousands of Fedora Flatpaks. Enabling Flathub by default should be a strategic priority for Fedora Workstation.

    I anticipate a lively debate on social media, on Matrix, and in the comments. And I am especially eager to see whether the Fedora and Flathub communities accept my arguments as persuasive. FESCo will be considering the Filter Fedora Flatpaks for Atomic Desktops proposal imminently, so the first test is soon.

  • WebKitGTK API Versions Demystified

    WebKitGTK has a bunch of different confusing API versions. Here are the three API versions that are currently supported by upstream:

    • webkitgtk-6.0: This is WebKitGTK for GTK 4 (and libsoup 3), introduced in WebKitGTK 2.40. This is what’s built by default if you build WebKit with -DPORT=GTK.
    • webkit2gtk-4.1: This is WebKitGTK for GTK 3 and libsoup 3, introduced in WebKitGTK 2.32. Get this by building with -DPORT=GTK -DUSE_GTK3=ON.
    • webkit2gtk-4.0: This is WebKitGTK for GTK 3 and libsoup 2, introduced in WebKitGTK 2.6. Get this by building with -DPORT=GTK -DUSE_GTK3=ON -DUSE_SOUP2=ON.
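
    Putting those flags together, configuring a local build of each API version looks roughly like this. It is only a sketch, assuming a checkout of upstream WebKit and a reasonably recent CMake with the Ninja generator; pick the configure line for the API version you want:

      # webkitgtk-6.0 (GTK 4, libsoup 3), the current default
      $ cmake -S . -B build -GNinja -DPORT=GTK -DCMAKE_BUILD_TYPE=Release

      # webkit2gtk-4.1 (GTK 3, libsoup 3)
      $ cmake -S . -B build -GNinja -DPORT=GTK -DCMAKE_BUILD_TYPE=Release -DUSE_GTK3=ON

      # webkit2gtk-4.0 (GTK 3, libsoup 2)
      $ cmake -S . -B build -GNinja -DPORT=GTK -DCMAKE_BUILD_TYPE=Release -DUSE_GTK3=ON -DUSE_SOUP2=ON

      $ ninja -C build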

    webkitgtk-6.0 contains a bunch of API changes, and all deprecated APIs were removed. If you’re upgrading to webkitgtk-6.0, then you’re also upgrading your application from GTK 3 to GTK 4, and have to adapt to much bigger GTK API changes anyway, so this seemed like a good opportunity to break compatibility and fix old mistakes for the first time in a very long time.

    webkit2gtk-4.1 is exactly the same as webkit2gtk-4.0, except for the libsoup API version that it links against.

    Remarkably, webkit2gtk-4.0 has remained mostly API stable since it was released in September 2014. Some particular C APIs have been deprecated and don’t work properly anymore, but no stable C APIs have been removed during this time, and library ABI is maintained. (Caveat: WebKitGTK used to have unstable DOM APIs, some of which were removed before the DOM API was eventually stabilized. Nowadays, the DOM APIs are all stable but deprecated in webkit2gtk-4.1 and webkit2gtk-4.0, and removed in webkitgtk-6.0.)

    If you are interested in history, here are the older API versions that do not matter anymore:

    • webkit2gtk-5.0: This was an unstable API version used during development of the GTK 4 API, intended for WebKit developers rather than application developers. It was obsoleted by webkitgtk-6.0.
    • webkit2gtk-3.0: original API version of WebKitGTK for GTK 3 and libsoup 2, obsoleted by webkit2gtk-4.0. This was the original “WebKit 2” API version, which was only used by a few applications before it was removed one decade ago (history).
    • webkitgtk-3.0: note the missing “2”, this is “WebKit 1” (predating the modern multiprocess architecture) for GTK 3. This API version was widely-used on Linux, and its removal one decade ago precipitated a security crisis which I called The Great API Break. (This crisis was worth it. Modern WebKit’s multiprocess architecture is far more secure than the old single-process architecture.)
    • webkitgtk-1.0: the original WebKitGTK API version, this is “WebKit 1” for GTK 2. This API version was also widely-used on Linux before it was removed in The Great API Break.

    Fedora and RHEL users, are you confused by all the different confusing downstream package names? Here is your map:

    • webkitgtk6.0, webkit2gtk4.1, and webkit2gtk4.0: This is the current binary package naming in Fedora, corresponding precisely to the WebKitGTK API version to reduce confusion.
    • webkit2gtk3: old name for webkit2gtk4.0, still used in RHEL 9 and RHEL 8
    • webkitgtk4: even older name for webkit2gtk4.0, still used in RHEL 7
    • webkitgtk3: this is the webkitgtk-3.0 API version, still used in RHEL 7
    • webkitgtk: this is webkitgtk-1.0, used in RHEL 6

    Notably, webkit2gtk4.0, webkit2gtk3, and webkitgtk4 are all the same thing.

  • Safety on Matrix

    Hello Matrix users, please review the following:

    Be cautious with unexpected private message invites. Do not accept private message invites from users you do not recognize. If you want to talk to somebody who has rejected your invite, contact them in a public room first.

    Fractal users: Fractal 11.rc will block media preview by default in public rooms, and can be configured to additionally block media preview in private rooms if desired.

    Matrix server administrators: please review the Matrix.org Foundation announcement Introducing Policy Servers.

  • I Entered My GitHub Credentials into a Phishing Website!

    We all think we’re smart enough to not be tricked by a phishing attempt, right? Unfortunately, I know for certain that I’m not, because I entered my GitHub password into a lookalike phishing website a year or two ago. Oops! Fortunately, I noticed right away, so I simply changed my unique, never-reused password and moved on. But if the attacker were smarter, I might have never noticed. (This particular attack website was relatively unsophisticated and proxied only an unauthenticated view of GitHub, a big clue that something was wrong. Update: I want to be clear that it would have been very easy for the attacker to simply redirect me to the real github.com after stealing my credentials, in which case I would not have noticed the attack and would not have known to change my password.)

    You might think multifactor authentication is the best defense against phishing. Nope. Although multifactor authentication is a major security improvement over passwords alone, and the particular attack that tricked me did not attempt to subvert multifactor authentication, it’s actually unfortunately pretty easy for phishers to defeat most multifactor authentication if they wish to do so:

    • Multifactor authentication based on SMS is extremely insecure (because SS7 is insecure)
    • Multifactor authentication based on phone calls is also insecure (because SIM swapping isn’t going away; determined attackers will steal your phone number if it’s an obstacle to them)
    • Multifactor authentication based on authenticator apps (using TOTP or HOTP) is much better in general, but still fails against phishing. When you paste your one-time access code into a phishing website, the phishing website can simply “proxy” the access code you kindly provided to them by submitting it to the real website. This only allows authenticating once, but once is usually enough.

    Fortunately, there is a solution: passkeys. Based on FIDO2 and WebAuthn, passkeys resist phishing because the authentication process depends on the domain of the service that you’re actually connecting to. If you think you’re visiting https://example.com, but you’re actually visiting a copycat website with a Cyrillic а instead of Latin a, then no worries: the authentication will fail, and the frustrated attacker will have achieved nothing.

    The most popular form of passkey is local biometric authentication running on your phone, but any hardware security key (e.g. YubiKey) is also a good bet.

    target.com Is More Secure than Your Bank!

    I am not joking when I say that target.com is more secure than your bank (which is probably still relying on SMS or phone calls, and maybe even allows you to authenticate using easily-guessable security questions):

    (Screenshot: target.com prompting the user to register a passkey. Caption: “target.com is introducing passkeys!”)

    Good job for supporting passkeys, Target.

    It’s probably perfectly fine for Target to support passkeys alongside passwords indefinitely. Higher-security websites that want to resist phishing (e.g. your employer’s SSO service) should consider eventually allowing only passkeys.

    No Passkeys in WebKitGTK

    Unfortunately for GNOME users, WebKitGTK does not yet support WebAuthn, so passkeys will not work in GNOME Web (Epiphany). That’s my browser of choice, so I’ve never touched a passkey before and don’t actually know how well they work in practice. Maybe do as I say and not as I do? If you require high security, you will unfortunately need to use Firefox or Chrome instead, at least for the time being.

    Why Was Michael Visiting a Fake github.com?

    The fake github.com appeared higher than the real github.com in the DuckDuckGo search results for whatever I was looking for at the time. :(

  • How to Get Hacked by North Korea

    Good news: exploiting memory safety vulnerabilities is becoming more difficult. Traditional security vulnerabilities will remain a serious threat, but attackers prefer to take the path of least resistance, and nowadays that is to attack developers rather than the software itself. Once the attackers control your computer, they can attempt to perform a supply chain attack and insert backdoors into your software, compromising all of your users at once.

    If you’re a software developer, it’s time to start focusing on the possibility that attackers will target you personally. Yes, you. If you use Linux, macOS, or Windows, take a moment to check your home directory for a hidden .n2 folder. If it exists, alas! You have been hacked by the North Koreans. (Future malware campaigns will presumably be more stealthy than this.)

    Attackers who target developers are currently employing two common strategies:

    • Fake job interview: you’re applying to job postings and the recruiter asks you to solve a programming problem of some sort, which involves installing software from NPM (or PyPI, or another language ecosystem’s package manager).
    • Fake debugging request: you receive a bug report and the reporter helpfully provides a script for reproducing the bug. The script may have dependencies on packages in NPM (or PyPI, or another language ecosystem’s package manager) to make it harder to notice that it’s malware. I saw a hopefully innocent bug report that was indistinguishable from such an attack just last week.

    But of course you would never execute such a script without auditing the dependencies thoroughly! (Insert eyeroll emoji here.)

    By now, most of us are hopefully already aware of typosquatting attacks, and the high risk that untrustworthy programming language packages may be malicious. But you might not have considered that attackers might target you personally. Exercise caution whenever somebody asks you to install packages from these sources. Take the time to start a virtual machine before running the code. (An isolated podman container probably works too?) Remember, attackers will target the path of least resistance. Virtualization or containerization raises the bar considerably, and the attacker will probably move along to an easier target.

  • GNOME 45 Core Apps Update

    It’s been a few months since I last reviewed the state of GNOME core apps. For GNOME 45, we have implemented the changes proposed in the “Imminent Core App Changes” section of that blog post:

    • Loupe enters core as GNOME’s new image viewer app, developed by Christopher Davis and Sophie Herold. Loupe will be branded as Image Viewer and replaces Eye of GNOME, which will no longer use the Image Viewer branding. Eye of GNOME will continue to be maintained by Felix Riemann, and contributions are still welcome there.
    • Snapshot enters core as GNOME’s new camera app, developed by Maximiliano Sandoval and Jamie Murphy. Snapshot will be branded as Camera and replaces Cheese. Cheese will continue to be maintained by David King, and contributions are still welcome there.
    • GNOME Photos has been removed from core without replacement. This application could have been retained if more developers were interested in it, but we have made the decision to remove it due to lack of volunteers interested in maintaining it. Photos will likely be archived eventually, unless a new maintainer volunteers to save it.

    GNOME 45 beta will be released imminently with the above changes. Testing the release and reporting bugs is much appreciated.

    We are also looking for volunteers interested in helping implement future core app changes. Specifically, improvements are required for Music to remain in core, and improvements are required for Geary to enter core. We’re also not quite sure what to do with Contacts. If you’re interested in any of these projects, consider getting involved.

  • GNOME Core Apps Update

    It’s been a while since my big core app reorganization for GNOME 3.22. Here is a history of core app changes since then:

    • GNOME 3.26 (September 2017) added Music, To Do (which has since been renamed to Endeavor), and Document Scanner (simple-scan). (I blogged about this at the time, then became lazy and stopped blogging about core app updates, until now.)
    • To Do was removed in GNOME 3.28 (March 2018) due to lack of consensus over whether it should really be a core app.  As a result of this, we improved communication between GNOME release team and design team to ensure both teams agree on future core app changes. Mea culpa.
    • Documents was removed in GNOME 3.32 (March 2019).
    • A new Developer Tools subcategory of core was created in GNOME 3.38 (September 2020), adding Builder, dconf Editor, Devhelp, and Sysprof. These apps are only interesting for software developers and are not intended to be installed by default in general-purpose operating systems like the rest of GNOME core.
    • GNOME 41 (September 2021) featured the first larger set of changes to GNOME core since GNOME 3.22. This release removed Archive Manager (file-roller), since Files (nautilus) is now able to handle archives, and also removed gedit (formerly Text Editor). It added Connections and a replacement Text Editor app (gnome-text-editor). It also added a new Mobile subcategory of core, for apps intended for mobile-focused operating systems, featuring the dialer app Calls. (To date, the Mobile subcategory has not been very successful: so far Calls is the only app included there.)
    • GNOME 42 (March 2022) featured a second larger set of changes. Screenshot was removed because GNOME Shell gained a built-in screenshot tool. Terminal was removed in favor of Console (kgx). We also moved Boxes to the Developer Tools subcategory, to recommend that it no longer be installed by default in general purpose operating systems.
    • GNOME 43 (September 2022) added D-Spy to Developer Tools.

    OK, now we’re caught up on historical changes. So, what to expect next?

    New Process for Core Apps Changes

    Although most of the core app changes have gone smoothly, we ran into some trouble replacing Terminal with Console. Console provides a fresher and simpler user interface on top of vte, the same terminal backend used by Terminal, so Console and Terminal share much of the same underlying functionality. This means the work of the Terminal maintainers is actually key to the success of Console. Using a new terminal app rather than evolving Terminal allowed for bigger changes to the default user experience without upsetting users who prefer the experience provided by Terminal. I think Console is generally nicer than Terminal, but it is missing a few features that Fedora Workstation developers thought were important to have before replacing Terminal with Console. Long story short: this core app change was effectively rejected by one of our most important downstreams. Since then, Console has not seen very much development, and accordingly it is unlikely to be accepted into Fedora Workstation anytime soon. We messed up by adding the app to core before downstreams were comfortable with it, and at this point it has become unclear whether Console should remain in core or whether we should give up and bring back Terminal. Console remains for now, but I’m not sure where we go from here. Help welcome.

    To prevent this situation from happening again, Chris and Sophie developed a detailed and organized process for adding or removing core apps, including a new Incubator category designed to provide notice to downstreams that we are considering adding new apps to GNOME core. The new Incubator is much more structured than my previous short-lived Incubator attempt in GNOME 3.22. When apps are added to Incubator, I’ve been proactively asking other Fedora Workstation developers to provide feedback to make sure the app is considered ready there, to avoid a repeat of the situation with Console. Other downstreams are also welcome to watch the  Incubator/Submission project and provide feedback on newly-submitted apps, which should allow plenty of heads-up so downstreams can let us know sooner rather than later if there are problems with Incubator apps. Hopefully this should ensure apps are actually adopted by downstreams when they enter GNOME core.

    Imminent Core App Changes

    Currently there are two apps in Incubator. Loupe is a new image viewer app developed by Chris and Sophie to replace Image Viewer (eog). Snapshot is a new camera app developed by Maximiliano and Jamie to replace Cheese. These apps are maturing rapidly and have received primarily positive feedback thus far, so they are likely to graduate from Incubator and enter GNOME core sooner rather than later. The time to provide feedback is now. Don’t be surprised if Loupe is included in core for GNOME 45.

    In addition to Image Viewer and Cheese, we are also considering removing Photos. Photos is one of our “content apps” designed to allow browsing an entire collection of files independently of their filesystem locations. Historically, the other two content apps were Documents and Music. The content app strategy did not work very well for Documents, since a document browser doesn’t really offer many advantages over a file browser, but Photos and Music are both pretty decent at displaying your collection of pictures or songs, assuming you have such a collection. We have been discussing what to do with Photos and the other content apps for a very long time, at least since 2015. It took a very long time to reach some rough consensus, but we have finally agreed that the design of Photos still makes sense for GNOME: having a local app for viewing both local and cloud photos is still useful. However, Photos is no longer actively maintained. Several basic functionality bugs imperiled the timely release of Fedora 37 last fall, and the app is less useful than previously because it no longer integrates with cloud services like Google Photos. (The Google integration depends on libgdata, which was removed from GNOME 44 because it did not survive the transition to libsoup 3.) Photos has failed the new core app review process due to lack of active maintenance, and will soon be removed from GNOME core unless a new maintainer steps up to take care of it. Volunteers welcome.

    Future Core App Changes

    Lastly, I want to talk about some changes that are not yet planned, but might occur in the future. Think of this entire section as brainstorming rather than any concrete plans.

    Like Photos, we have also been discussing the status of Music. The popularity of DRM-encumbered cloud music services has increased, and local music storage does not seem to be as common as it used to be. If you do have local music, Music is pretty decent at handling it, but there are prominent bugs and missing features (like the ability to select which folders to index) detracting from the user experience. We do not have consensus on whether having a core app to play local music files still makes sense, since most users probably do not have a local music collection anymore. But perhaps all that is a moot point, because Videos (totem) 3.38 removed support for opening audio files, leaving us with no core apps capable of playing audio for the past 2.5 years. Previously, our default music player was Videos, which was really weird, and now we have none; Music can only play audio files that you’ve navigated to using Music itself, so it’s impossible for Music to be our default music player. My suggestion to rename Videos to Media Player and handle audio files again has not been well-received, so the most likely solution to this conundrum is to teach Music how to open audio files, likely securing its future in core. A merge request exists, but it does not look close to landing. Fedora Workstation is still shipping Rhythmbox rather than Music specifically due to this problem. My opinion is this needs to be resolved for Music to remain in core.

    It would be nice to have an email client in GNOME core, since everybody uses email and local clients are much nicer than webmail. The only plausible candidate here is Geary. (If you like Evolution, consider that you might not like the major UI changes and many, many feature removals that would be necessary for Evolution to enter GNOME core.) Geary has only one active maintainer, and adding a big application that depends on just one person seems too risky. If more developers were interested in maintaining Geary, it would feel like a safer addition to GNOME core.

    Contacts feels a little out of place currently. It’s mostly useful for storing email addresses, but you cannot actually do anything with them because we have no email application in core. Like Photos, Contacts has had several recent basic functionality bugs that imperiled timely Fedora releases, but these seem to have been largely resolved, so it’s not causing urgent problems. Still, for Contacts to remain in the long term, we’re probably going to need another maintainer here too. And perhaps it only makes sense to keep if we add Geary.

    Finally, should Maps move to the Mobile category? It seems clearly useful to have a maps app installed by default on a phone, but I wonder how many desktop users really prefer to use Maps rather than a maps website.

    GNOME 44 Core Apps

    I’ll end this blog post with an updated list of core apps as of GNOME 44. Here they are:

    • Main category (26 apps):
      • Calculator
      • Calendar
      • Characters
      • Cheese
      • Clocks
      • Connections
      • Console (kgx)
      • Contacts
      • Disks (gnome-disk-utility)
      • Disk Usage Analyzer (baobab)
      • Document Scanner (simple-scan)
      • Document Viewer (evince)
      • Files (nautilus)
      • Fonts (gnome-font-viewer)
      • Help (yelp)
      • Image Viewer (eog)
      • Logs
      • Maps
      • Music
      • Photos
      • Software
      • System Monitor
      • Text Editor
      • Videos (totem)
      • Weather
      • Web (epiphany)
    • Developer Tools (6 apps):
      • Boxes
      • Builder
      • dconf Editor
      • Devhelp
      • D-Spy
      • sysprof
    • Mobile (1 app):
      • Calls
  • WebKitGTK API for GTK 4 Is Now Stable

    With the release of WebKitGTK 2.40.0, WebKitGTK now finally provides a stable API and ABI for GTK 4 applications. The following API versions are provided:

    • webkit2gtk-4.0: this API version uses GTK 3 and libsoup 2. It is obsolete and users should immediately port to webkit2gtk-4.1. To get this with WebKitGTK 2.40, build with -DPORT=GTK -DUSE_SOUP2=ON.
    • webkit2gtk-4.1: this API version uses GTK 3 and libsoup 3. It contains no other changes from webkit2gtk-4.0 besides the libsoup version. With WebKitGTK 2.40, this is the default API version that you get when you build with -DPORT=GTK. (In 2.42, this might require a different flag, e.g. -DUSE_GTK3=ON, which does not exist yet.)
    • webkitgtk-6.0: this API version uses GTK 4 and libsoup 3. To get this with WebKitGTK 2.40, build with -DPORT=GTK -DUSE_GTK4=ON. (In 2.42, this might become the default API version.)
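
    For illustration, here is a minimal sketch (the module names and version match the API versions listed above; everything else is hypothetical) of how an application’s meson.build might declare a dependency on the new stable GTK 4 API version:

    # GTK 4 applications use the new webkitgtk-6.0 API version.
    webkit_dep = dependency('webkitgtk-6.0', version: '>= 2.40.0')

    # GTK 3 applications would instead use:
    # webkit_dep = dependency('webkit2gtk-4.1')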

    WebKitGTK 2.38 had a different GTK 4 API version, webkit2gtk-5.0. This was an unstable/development API version and it is gone in 2.40, so applications using it will break. Fortunately, that should be very few applications. If your operating system ships GNOME 42, or any older version, or the new GNOME 44, then no applications use webkit2gtk-5.0 and you have no extra work to do. But for operating systems that ship GNOME 43, webkit2gtk-5.0 is used by gnome-builder, gnome-initial-setup, and evolution-data-server:

    • For evolution-data-server 3.46, use this patch which applies on evolution-data-server 3.46.4.
    • For gnome-initial-setup 43, use this patch which applies on gnome-initial-setup 43.2. (Update: for your convenience, this patch will be included in gnome-initial-setup 43.3.)
    • For gnome-builder 43, all required changes are present in version 43.7.

    Remember, patching is only needed for GNOME 43. Other versions of GNOME will have no problems with WebKitGTK 2.40.

    There is no proper online documentation yet, but in the meantime you can view the markdown source for the migration guide to help you with porting your applications. Although the API is now stable and close to feature parity with the GTK 3 version, there are still a few problems to be aware of.

    Big thanks to everyone who helped make this possible.

  • Stop Using QtWebKit

    Today, WebKit in Linux operating systems is much more secure than it used to be. The problems that I previously discussed in this old, formerly-popular blog post are nowadays a thing of the past. Most major Linux operating systems now update WebKitGTK and WPE WebKit on a regular basis to ensure known vulnerabilities are fixed. (Not all Linux operating systems include WPE WebKit. It’s basically WebKitGTK without the dependency on GTK, and is the best choice if you want to use WebKit on embedded devices.) All major operating systems have removed older, insecure versions of WebKitGTK (“WebKit 1”) that were previously a major security problem for Linux users. And today WebKitGTK and WPE WebKit both provide a webkit_web_context_set_sandbox_enabled() API which, if enabled, employs Linux namespaces to prevent a compromised web content process from accessing your personal data, similar to Flatpak’s sandbox. (If you are a developer and your application does not already enable the sandbox, you should fix that!)

    Unfortunately, QtWebKit has not benefited from these improvements. QtWebKit was removed from the upstream WebKit codebase back in 2013. Its current status in Fedora is, unfortunately, representative of other major Linux operating systems. Fedora currently contains two versions of QtWebKit:

    • The qtwebkit package contains upstream QtWebKit 2.3.4 from 2014. I believe this is used by Qt 4 applications. For the avoidance of doubt: you should not use applications that depend on a web engine that has not been updated in eight years.
    • The newer qt5-qtwebkit contains Konstantin Tokarev’s fork of QtWebKit, which is de facto the new upstream and without a doubt the best version of QtWebKit currently available. Although it has received occasional updates, most recently 5.212.0-alpha4 from March 2020, it’s still based on WebKitGTK 2.12 from 2016, and the release notes bluntly state that it’s not very safe to use. Looking at WebKitGTK security advisories beginning with WSA-2016-0006, I manually counted 507 CVEs that have been fixed in WebKitGTK 2.14.0 or newer.

    These CVEs are mostly (but not exclusively) remote code execution vulnerabilities. Many of those CVEs no doubt correspond to bugs that were introduced more recently than 2.12, but the exact number is not important: what’s important is that it’s a lot, far too many for backporting security fixes to be practical. Since qt5-qtwebkit is two years newer than qtwebkit, the qtwebkit package is no doubt in even worse shape. And because QtWebKit does not have any web process sandbox, any remote code execution is game over: an attacker that exploits QtWebKit gains full access to your user account on your computer, and can steal or destroy all your files, read all your passwords out of your password manager, and do anything else that your user account can do with your computer. In contrast, with WebKitGTK or WPE WebKit’s web process sandbox enabled, attackers only get access to content that’s mounted within the sandbox, which is a much more limited environment without access to your home directory or session bus.

    In short, it’s long past time for Linux operating systems to remove QtWebKit and everything that depends on it. Do not feed untrusted data into QtWebKit. Don’t give it any HTML that you didn’t write yourself, and certainly don’t give it anything that contains injected data. Uninstall it and whatever applications depend on it.

    Update: I forgot to mention what to do if you are a developer and your application still uses QtWebKit. You should port it to the most recent release of QtWebEngine for Qt 6. Do not use old versions of Qt 6, and do not use QtWebEngine for Qt 5.

  • Best Practices for Build Options

    Build options are sometimes tricky to get right. Here’s my take on best practices. The golden rule is to set good upstream defaults. Everything else follows from this.

    Rule 1: Choose Good Upstream Defaults

    Occasionally I see upstream developers complain that a downstream operating system has built their software “incorrectly,” generally because some important dependency or feature has been disabled. Sometimes downstreams really do mess up, but more often poor upstream defaults are to blame. Upstreams must set good defaults because upstream software developers know far more about their projects than downstream packagers do. Upstreams generally have a good idea of how they expect software to be built by downstreams, whereas downstreams generally do not. Accordingly, do the thinking upstream whenever possible. When you set good defaults, it becomes easier for downstreams to build your software the way you expect, because active effort is required for downstreams to mess things up.

    For example, say a project has the following two build options:

    Option Name                                   Default Value
    --enable-thing-you-usually-want-enabled       false
    --disable-thing-you-rarely-want-disabled      true

    The thing you usually want enabled is not enabled by default, and the thing you rarely want disabled is disabled by default. Sad. Unfortunately, this pattern used to be extremely common with Autotools build systems, because in the real world, the names of the options are more subtle than this, and also because nobody likes squinting at configure.ac files to audit whether the options make sense. Meson build systems tend to be somewhat better because meson_options.txt is separate from the rest of the build definitions, making it easier to review all your options and check to ensure their defaults are set appropriately. However, there are still a few more subtle ways you can mess up your Meson build system, which I’ll discuss below.
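
    For contrast, here is a sketch of how those same two features might be declared with sensible defaults in a hypothetical meson_options.txt (the option names are invented for illustration):

    # Both features default to enabled, so downstreams must actively opt out.
    option('thing-you-usually-want', type: 'boolean', value: true,
           description: 'Build the feature most users expect')
    option('thing-you-rarely-disable', type: 'boolean', value: true,
           description: 'Disable only if you know you need to')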

    Rule 2: Prefer Upstream Defaults Downstream

    Conversely, downstreams should not second-guess upstream defaults unless you have a good reason to do so and really know what you’re doing.

    For example, glib-networking’s Meson build system provides you with two different TLS backend options: OpenSSL or GnuTLS. The GnuTLS backend is enabled by default (well, sort of, see the next section on auto dependencies) while the OpenSSL backend is disabled by default. There’s a good reason for this: the OpenSSL backend is half-baked, partially due to bugs in glib-networking, and partially because OpenSSL just cannot do certain things that GnuTLS can. The OpenSSL backend is provided because some environments really do require it for license reasons, but it’s not the right choice for general-purpose operating systems. It may be tempting to think that you can pick whichever library you prefer, but you really should not.

    Another example: WebKitGTK’s CMake build system provides a USE_WPE_RENDERER build option, which is enabled by default. This option controls which graphics rendering stack is used: if enabled, rendering uses libwpe and wpebackend-fdo, whereas if disabled, rendering uses a legacy internal Wayland compositor. The option is provided because libwpe and wpebackend-fdo are newer dependencies that are expected to be unavailable on older (pre-2020) operating systems, so older operating systems legitimately need to be able to disable it. But this configuration receives little serious testing and the upstream developers do not notice when it breaks, so you really should not be using it unless you have to. This recently caused rendering bugs that appeared to be distribution-specific, which upstream developers were not willing to investigate because they could not reproduce the issue.

    Sticking with upstream defaults is generally safest. Sometimes you really need to override them. If so, go ahead. Just be careful.

    Rule 3: Handle Auto Dependencies and Features with Care

    The worst default ever is “build with feature enabled only if dependency xyz is installed; otherwise, disable it.” This is called an auto dependency. If using CMake or Autotools, auto dependencies are almost never permissible, and in this case “handle with care” means repent and fix it. Auto dependencies are acceptable only if you are using the Meson build system.

    The theory behind auto dependencies is that it’s convenient for people casually building the software to do so with the fewest number of build errors possible, which is true. Problem is, this screws over serious production builds of your software by requiring your downstreams to possess magical knowledge of what dependencies are required to build your software properly. Users generally expect most features to be enabled at build time, but if upstream uses auto dependencies, the result is a build dependencies lottery: your feature will be enabled or disabled due to luck, based on which downstream build dependencies transitively depend on which other build dependencies. Even if it’s built properly today, that could easily change tomorrow when some other dependency changes in some other package. Just say no. Do not expect downstreams to look at your build system at all, let alone study the possible build options and make accurate judgments about which build dependencies are required to enable them. Avoiding auto dependencies is part of setting good upstream defaults.

    Look at this example from WebKit’s OptionsGTK.cmake:

    if (ENABLE_SPELLCHECK)
        find_package(Enchant)
        if (NOT PC_ENCHANT_FOUND)
            message(FATAL_ERROR "Enchant is needed for ENABLE_SPELLCHECK")
        endif ()
    endif ()
    

    ENABLE_SPELLCHECK is ON by default. If you don’t have enchant installed, the build will fail unless you manually disable it by passing -DENABLE_SPELLCHECK=OFF. This makes it hard to mess up: downstreams have to make an intentional choice to build with spellchecking disabled. It cannot happen by accident.

    Many projects would instead write it like this:

    if (ENABLE_SPELLCHECK)
        find_package(Enchant)
        if (NOT PC_ENCHANT_FOUND)
            set(ENABLE_SPELLCHECK OFF)
        endif ()
    endif ()

    But this is an auto dependency, which results in downstream build dependency lottery. If you write your build system like this, you cannot complain when the feature winds up disabled by mistake in downstream builds. Don’t do this.

    Exception: if you use Meson, auto dependencies are acceptable if you use the feature option type and set the default to auto. Although auto features are silently enabled or disabled by default depending on whether the required dependency is present, you can easily override this behavior for serious production builds by passing -Dauto_features=enabled, which enables all the auto features and will result in build failures if dependencies are missing. All major Linux operating systems do this when building Meson packages, so Meson’s auto features should not cause problems.
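
    For example, a hypothetical Meson feature option along these lines silently degrades for casual builds, but can be forced on for production builds with -Dauto_features=enabled (or -Dspellcheck=enabled):

    # meson_options.txt (hypothetical)
    option('spellcheck', type: 'feature', value: 'auto',
           description: 'Spell checking support via enchant')

    # meson.build (hypothetical)
    enchant_dep = dependency('enchant-2', required: get_option('spellcheck'))
    if enchant_dep.found()
      add_project_arguments('-DENABLE_SPELLCHECK=1', language: 'c')
    endif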

    Rule 4: Be Very Careful with Meson’s Build Types

    Let’s categorize software builds into production builds or non-production builds. A production build is intended to be either distributed to end users or else run production workloads, whereas a non-production build is intended for testing or development and might have extra debug features enabled, like slow assertions. (These are more commonly called release builds or debug builds, but that terminology would be very confusing in the context of this discussion, as you’re about to see.)

    The CMake and Meson build systems give us more than just two build types. Compare CMake build types to the corresponding Meson build types:

    CMake Build Type    Meson Build Type    Meson debug Option    Production Build? (excludes Windows)
    Release             release             false                 Yes
    Debug               debug               true                  No
    RelWithDebInfo      debugoptimized      true                  Yes, be careful!
    MinSizeRel          minsize             true                  Yes, be careful!
    N/A                 plain               false                 Yes

    To simplify, let’s exclude Windows from the discussion for now. (We’ll come back to Windows in a bit.) Now, notice the nomenclature difference between CMake’s RelWithDebInfo (“release with debuginfo”) build type versus Meson’s debugoptimized build type. This build type functions exactly the same for both Meson and CMake, but CMake’s name is better because it clearly indicates that this is a release or production build type, whereas the Meson name seems to indicate it is a debug or non-production build type, and Meson’s debug option is set to true. In fact, it is an optimized production build with debuginfo enabled, the same style of build that almost all Linux operating systems use for their packages (although operating systems use the plain build type instead). The same problem exists for Meson’s minsize build type. This is another production build type where debug is true.

    The Meson build type name accurately reflects that the debug option is enabled, but this is very confusing because for most platforms, that option only controls whether debuginfo is generated. Looking at the table above, you can see that you must never use the debug option alone to decide whether you have a production build or a non-production build. As the table indicates, the only non-production build type is the vanilla debug build type, which you can detect by checking the combination of the debug and optimization options. You have a non-production (debug) build if debug is true and if optimization is 0 or g; otherwise, you have a production build.  I wrote this in bold because it is important and not at all obvious. (However, before applying this rule in a cross-platform project, keep reading below to see the huge caveat regarding Windows.)
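
    Expressed as a Meson condition, the rule looks something like this (a sketch; Meson does not provide such a variable for you):

    # A non-production (debug) build only when debug is enabled AND optimization
    # is effectively off. Everything else is a production build.
    # (Caveat: see the Windows discussion below; this check is wrong on Windows.)
    is_production_build = not (get_option('debug') and get_option('optimization') in ['0', 'g'])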

    Here’s an example of what not to do in your meson.build:

    # Use debug/optimization flags to determine whether to enable debug or disable
    # cast checks
    gtk_debug_cflags = []
    debug = get_option('debug')
    optimization = get_option('optimization')
    if debug
      gtk_debug_cflags += '-DG_ENABLE_DEBUG'
      if optimization in ['0', 'g']
        gtk_debug_cflags += '-DG_ENABLE_CONSISTENCY_CHECKS'
      endif
    elif optimization in ['2', '3', 's']
      gtk_debug_cflags += ['-DG_DISABLE_CAST_CHECKS', '-DG_DISABLE_ASSERT']
    endif
    

    This is from GTK’s meson.build. The code based only on the optimization option is OK, but the code that sets -DG_ENABLE_DEBUG is looking only at the debug option. What the code really wants to do is set G_ENABLE_DEBUG if this is a non-production build, but instead it is tied to debuginfo, which is not the desired result. Downstreams are forced to scratch their heads as to what they should do. Impassioned build engineers have held spirited debates about this particular meson.build snippet. Don’t do this! (I will submit a merge request to improve this.)

    Here’s a much better, although still not perfect, example of how to do the same thing, this time from GLib’s meson.build:

    # Use debug/optimization flags to determine whether to enable debug or disable
    # cast checks
    glib_debug_cflags = []
    glib_debug = get_option('glib_debug')
    if glib_debug.enabled() or (glib_debug.auto() and get_option('debug'))
      glib_debug_cflags += ['-DG_ENABLE_DEBUG']
      message('Enabling various debug infrastructure')
    elif get_option('optimization') in ['2', '3', 's']
      glib_debug_cflags += ['-DG_DISABLE_CAST_CHECKS']
      message('Disabling cast checks')
    endif
    
    if not get_option('glib_assert')
      glib_debug_cflags += ['-DG_DISABLE_ASSERT']
      message('Disabling GLib asserts')
    endif
    
    if not get_option('glib_checks')
      glib_debug_cflags += ['-DG_DISABLE_CHECKS']
      message('Disabling GLib checks')
    endif
    

    Notice how GLib provides explicit build options that allow downstreams to decide whether debug should be enabled or not. Using explicit build options here was a good idea! The defaults for glib_assert and glib_checks are intentionally set to true to encourage their use in production builds, while G_DISABLE_CAST_CHECKS is based only on the optimization level. But sadly, if not explicitly configured, GLib sets the value of the glib_debug option automatically, based only on the value of the debug option. This is actually an OK use of an auto feature, because it is a carefully-considered attempt to provide good default behavior for downstreams, but it fails here because it assumes that debug means “non-production build,” which we have previously established cannot be determined without checking optimization as well. (I will submit a merge request to improve this.)

    Here’s another helpful table that shows how the various build types correspond to CFLAGS:

    CMake/Meson Build Type           CMake CFLAGS        Meson CFLAGS
    Release/release                  -O3 -DNDEBUG        -O3
    Debug/debug                      -g                  -O0 -g
    RelWithDebInfo/debugoptimized    -O2 -g -DNDEBUG     -O2 -g
    MinSizeRel/minsize               -Os -DNDEBUG        -Os -g

    Notice Meson’s minsize build type includes debuginfo, while CMake’s does not. Since debuginfo requires a huge amount of space, CMake’s behavior seems better here. We’ll discuss NDEBUG momentarily.

    OK, so that all makes sense, right? Well I thought so too, until I ran a draft of this blog post past Jussi, who pointed out that the Meson build types function completely differently on Windows than they do on other platforms. Unfortunately, whereas on most platforms the debug option only controls debuginfo generation, on Windows it instead controls whether the C library enables extra runtime debugging checks. So while debugoptimized and minsize are production build types on Linux and have nice corresponding CMake build types, they are non-production build types on Windows. This is a Meson defect. The point to remember is that the debug option is completely different on Windows than it is on other platforms, so my otherwise-nice rule for detecting production builds does not work properly on Windows. Cross-platform projects need to be especially careful with the debug option. There are various ways this could be fixed in Meson in the future: a nice simple proposal would be to add a new debuginfo option separate from debug, then deprecate the debugoptimized build type and replace it with releasewithdebuginfo.

    CMake dodges all these problems and avoids any ambiguity because its build types are named differently: “RelWithDebInfo” and “MinSizeRel” leave no doubt that you are dealing with a release (production) build.

    Rule 5: Think about NDEBUG

    The other behavior difference visible in the table above is that CMake defines NDEBUG for its production build types, whereas Meson has a separate option b_ndebug that controls whether to define NDEBUG. NDEBUG controls whether the C and C++ assert() macro is enabled: if this value is defined, asserts are disabled. CMake is the only build system that defines NDEBUG for you automatically. You really need to think about this: if your software is performance-sensitive and contains slow assertions, the consequences of messing this up are severe, e.g. see this historical mesa bug where Fedora’s mesa package suffered a 10x slowdown because mesa upstream accidentally enabled assertions by default. Again, please, do not blame downstreams for bad upstream defaults: downstreams are (usually) not experts on upstream software, and cannot possibly be expected to pick better defaults than upstream’s.

    Meson allows developers to explicitly choose whether to enable assertions in production builds. Assertions are enabled in production by default, the opposite of CMake’s behavior. Some developers prefer that all asserts be disabled in production builds to optimize speed as far as possible, but this is usually not the best choice: having assertions enabled in production provides valuable confidence that your code is actually functioning as intended, and often improves security by converting many code execution exploits into denial of service. Most assertions do not have noticeable performance impact, so I prefer to leave most assertions enabled by default in production, and disable only asserts that are slow. Hence, I like Meson’s default behavior. But many engineers disagree with me, and some projects really do need assertions disabled in production; in particular, everyone agrees that performance-sensitive assertions should not be running in production builds. If you’re using Meson and want assertions disabled in production builds, you’re in theory supposed to use b_ndebug=if-release, but it doesn’t actually work because it only disables assertions if your build type is release or plain, while leaving assertions enabled for debugoptimized and minsize builds. We’ve already established that these are both production build types, so sadly that behavior is broken. Instead, it’s better to manually define NDEBUG except in non-production builds. Again, you have a non-production (debug) build when debug is true and if optimization is 0 or g; otherwise, you have a production build (except on Windows).
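
    If your project does need assertions disabled in production, a Meson sketch following that same rule might look like this:

    # Define NDEBUG (disabling assert()) for every production build type,
    # using the debug + optimization check described above.
    if not (get_option('debug') and get_option('optimization') in ['0', 'g'])
      add_project_arguments('-DNDEBUG', language: ['c', 'cpp'])
    endif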

    Rule 6: plain Means “Production Build,” Not “No Flags”

    The GNOME release team recently had an exciting debate about the meaning of Meson’s plain build type. It is impressive how build engineers can be so enthusiastic about build options!

    I asked Jussi to explain the plain build type. His response was: “Meson does not, by itself, add compiler flags,” emphasis mine. It does not mean your project should not add its own compiler flags, and it certainly does not mean it’s OK to set bad defaults as long as they are vanilla-flavored. It is a production build type, and you should ensure that it receives defaults in line with the other production build types. You’ll be fine if you follow the same rule we already established: you have a non-production (debug) build if debug is true and if optimization is 0 or g; otherwise, you have a production build (except on Windows).

    The plain build type exists because it makes it easier for downstreams to implement their own compiler flags. Downstreams have to pass -O2 -g via CFLAGS anyway, because CMake and Meson are the only build systems that can add those flags automatically, and it’s easier to let downstreams disable this functionality than to force them to set different CFLAGS separately for each supported build system.

    Rule 7: Don’t Forget Hardening Flags

    Sadly, by default all build systems generate insecure, unhardened binaries that should never be used in production. This is true of Autotools, CMake, Meson, and likely also whatever other build system you are thinking of. You must manually add your own hardening flags or your builds will be insecure. Unfortunately this is a little complicated to do. Fedora and RHEL’s recommended compiler flags are documented here. The freedesktop-sdk and GNOME Flatpak runtimes use these recommendations as the basis for their compiler flags, and by default, so do Flatpak applications based on these runtimes. It’s actually not very easy to replicate the same hardening flags since libraries and executables require different flags, so naively setting CFLAGS is not possible. Fedora and RHEL use GCC spec files to achieve this, whereas freedesktop-sdk relies on building GCC itself with a non-default configuration (yes, second-guessing upstream defaults). The good news is that all major downstreams have figured this out one way or another, so you only need to worry about it if you’re doing your own production builds.

    Conclusion

    That’s all you need to know. Remember, upstreams know their software better than downstreams do, so the hard thinking should happen upstream. We can minimize mistakes and trouble if upstreams carefully set good defaults, and if downstreams deviate from those defaults only when truly necessary. Keep these rules in mind to avoid unnecessary bug reports from dissatisfied users.

    History

    I updated this blog post on August 3, 2022 to change the primary guidance to “You have a non-production (debug) build if debug is true and if optimization is 0 or g; otherwise, you have a production build.” Originally, I failed to consider g. -Og means “optimize debugging experience” and it is supposedly a better choice than -O0 for debugging according to gcc(1). In practice it isn’t, but at least that’s the intent.

    Jussi responded to this blog post on August 13, 2022 to discuss why Meson’s build types don’t work so well. Read his response.