Author: Michael Catanzaro

  • On WebKit Security Updates

    Linux distributions have a problem with WebKit security.

    Major desktop browsers push automatic security updates directly to users on a regular basis, so most users don’t have to worry about security updates. But Linux users are dependent on their distributions to release updates. Apple fixed over 100 vulnerabilities in WebKit last year, so getting updates out to users is critical.

    This is the story of how that process has gone wrong for WebKit.

    Before we get started, a few disclaimers. I want to be crystal clear about these points:

    1. This post does not apply to WebKit as used in Apple products. Apple products receive regular security updates.
    2. WebKitGTK+ releases regular security updates upstream. It is safe to use so long as you apply the updates.
    3. The opinions expressed in this post are my own, not my employer’s, and not the WebKit project’s.

    Browser Security in a Nutshell

    Web engines are full of security vulnerabilities, like buffer overflows and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.

    If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities. Chromium has a top-class Linux sandbox. WebKit does have a Linux sandbox, but it’s not any good, so it’s (rightly) disabled by default. Firefox does not have a sandbox due to major architectural limitations (which Mozilla is working on).

    For this blog post, it’s enough to know that attackers use crafted input to exploit vulnerabilities to gain control of your computer. This is why it’s not a good idea to browse to dodgy web pages. It also explains how a malicious email can gain control of your computer. Modern email clients render HTML mail using web engines, so malicious emails exploit many of the same vulnerabilities that a malicious web page might. This is one reason why good email clients block all images by default: image rendering, like HTML rendering, is full of security vulnerabilities. (Another reason is that images hosted remotely can be used to determine when you read the email, violating your privacy.)

    WebKit Ports

    To understand WebKit security, you have to understand the concept of WebKit ports, because different ports handle security updates differently.

    While most code in WebKit is cross-platform, there’s a large amount of platform-specific code as well, to improve the user and developer experience in different environments. Different “ports” run different platform-specific code. This is why two WebKit-based browsers, say, Safari and Epiphany (GNOME Web), can display the same page slightly differently: they’re using different WebKit ports.

    Currently, the WebKit project consists of six different ports: one for Mac, one for iOS, two for Windows (Apple Windows and WinCairo), and two for Linux (WebKitGTK+ and WebKitEFL). There are some downstream ports as well; unlike the aforementioned ports, downstream ports are, well, downstream, and not part of the WebKit project. The only one that matters for Linux users is QtWebKit.

    If you use Safari, you’re using the Mac or iOS port. These ports get frequent security updates from Apple to plug vulnerabilities, which users receive via regular updates.

    Everything else is broken.

    Since WebKit is not a system library on Windows, Windows applications must bundle WebKit, so each application using WebKit must be updated individually, and updates are completely dependent on the application developers. iTunes, which uses the Apple Windows port, does get regular updates from Apple, but beyond that, I suspect most applications never get any security updates. This is a predictable result, the natural consequence of environments that require bundling libraries.

    (This explains why iOS developers are required to use the system WebKit rather than bundling their own: Apple knows that app developers will not provide security updates on their own, so this policy ensures every iOS application rendering HTML gets regular WebKit security updates. Even Firefox and Chrome on iOS are required to use the system WebKit; they’re hardly really Firefox or Chrome at all.)

    The same scenario applies to the WinCairo port, except this port does not have releases or security updates. Whereas the Apple ports have stable branches with security updates, with WinCairo, companies take a snapshot of WebKit trunk, make their own changes, and ship products with that. Who’s using WinCairo? Probably lots of companies; the biggest one I’m aware of uses a WinCairo-based port in its AAA video games. It’s safe to assume few to no companies are handling security backports for their downstream WinCairo branches.

    Now, on to the Linux ports. WebKitEFL is the WebKit port for the Enlightenment Foundation Libraries. It’s not going to be found in mainstream Linux distributions; it’s mostly used in embedded devices produced by one major vendor. If you know anything at all about the internet of things, you know these devices never get security updates, or if they do, the updates are superficial (updating only some vulnerable components and not others), or end a couple months after the product is purchased. WebKitEFL does not bother with pretense here: like WinCairo, it has never had security updates. And again, it’s safe to assume few to no companies are handling security backports for their downstream branches.

    None of the above ports matter for most Linux users. The ports available on mainstream Linux distributions are QtWebKit and WebKitGTK+. Most of this post will focus on WebKitGTK+, since that’s the port I work on and the one that matters most to the people reading this blog, but QtWebKit is widely used and deserves some attention first.

    It’s broken, too.

    QtWebKit

    QtWebKit is the WebKit port used by Qt software, most notably KDE. Some cherry-picked examples of popular applications using QtWebKit are Amarok, Calligra, KDevelop, KMail, Kontact, KTorrent, Quassel, Rekonq, and Tomahawk. QtWebKit provides an excellent Qt API, so in the past it’s been the clear best web engine to use for Qt applications.

    After Google forked WebKit, the QtWebKit developers announced they were switching to work on QtWebEngine, which is based on Chromium, instead. This quickly led to the removal of QtWebKit from the WebKit project. This was good for the developers of other WebKit ports, since lots of Qt-specific code was removed, but it was terrible for KDE and other QtWebKit users. QtWebKit is still maintained in Qt and is getting some backports, but from a quick check of their git repository it’s obvious that it’s not receiving many security updates. This is hardly unexpected; QtWebKit is now years behind upstream, so providing security updates would be very difficult. There’s not much hope left for QtWebKit; these applications have hundreds of known vulnerabilities that will never be fixed. Applications should port to QtWebEngine, but for many applications this may not be easy or even possible.

    Update: As pointed out in the comments, there is some effort to update QtWebKit. I was aware of this effort and in retrospect should have mentioned it in the original version of this article, because it is relevant. Keep an eye out for this; I am not confident it will make its way into upstream Qt, but if it does, this problem could be solved.

    WebKitGTK+

    WebKitGTK+ is the port used by GTK+ software. It’s most strongly associated with its flagship browser, Epiphany, but it’s also used in other places. Some of the more notable users include Anjuta, Banshee, Bijiben (GNOME Notes), Devhelp, Empathy, Evolution, Geany, Geary, GIMP, gitg, GNOME Builder, GNOME Documents, GNOME Initial Setup, GNOME Online Accounts, GnuCash, gThumb, Liferea, Midori, Rhythmbox, Shotwell, Sushi, and Yelp (GNOME Help). In short, it’s kind of important, not only for GNOME but also for Ubuntu and Elementary. Just as QtWebKit used to be the web engine of choice for Qt applications, WebKitGTK+ is the clear choice for GTK+ applications due to its nice GObject APIs.

    Historically, WebKitGTK+ has not had security updates. Of course, we released updates with security fixes, but not with CVE identifiers, which is how software developers track security issues; as far as distributors are concerned, without a CVE identifier, there is no security issue, and so, with a few exceptions, distributions did not release our updates to users. For many applications, this is not so bad, but for high-risk applications like web browsers and email clients, it’s a huge problem.

    So, we’re trying to improve. Early last year, my colleagues put together our first real security advisory with CVE identifiers; the hope was that this would encourage distributors to take our updates. This required data that Apple provides to WebKit security team members, mapping CVE identifiers to the Bugzilla bugs they correspond to; with that mapping, we can correlate Bugzilla IDs with Subversion revisions and determine in which WebKitGTK+ release each issue was fixed. That data is critical, because without it, there’s no way to know if an issue has been fixed in a particular release or not. After we released this first advisory, Apple stopped providing the data; this was probably just a coincidence due to some unrelated internal changes at Apple, but it certainly threw a wrench in our plans for further security advisories.

    This changed in November, when I had the pleasure of attending the WebKit Contributors Meeting at Apple’s headquarters, where I was finally able to meet many of the developers I had interacted with online. At the event, I gave a presentation on our predicament, and asked Apple to give us information on which Bugzilla bugs correspond to which CVEs. Apple kindly provided the necessary data a few weeks later.

    During the Web Engines Hackfest, a yearly event that occurs at Igalia’s office in A Coruña, my colleagues used this data to put together WebKitGTK+ Security Advisory WSA-2015-0002, a list of over 130 vulnerabilities disclosed since the first advisory. (The Web Engines Hackfest was sponsored by Igalia, my employer, and by our friends at Collabora. I’m supposed to include their logos here to advertise how cool it is that they support the hackfest, but given all the doom and gloom in this post, I decided they would perhaps prefer not to have their logos attached to it.)

    Note that 130 vulnerabilities is an overcount, as it includes some issues that are specific to the Apple ports. (In the future, we’ll try to filter these out.) Only one of the issues — a serious error in the networking backend shared by WebKitGTK+ and WebKitEFL — resided in platform-specific code; the rest of the issues affecting WebKitGTK+ were all cross-platform issues. This is probably partly because the trickiest code is cross-platform code, and partly because security researchers focus on Apple’s ports.

    Anyway, we posted WSA-2015-0002 to the oss-security mailing list to make sure distributors would notice, crossed our fingers, and hoped that distributors would take the advisory seriously. That was one month ago.

    Distribution Updates

    There are basically three different approaches distributions can take to software updates. The first approach is to update to the latest stable upstream version as soon as, or shortly after, it’s released. This is the strategy employed by Arch Linux. Arch does not provide any security support per se; it’s not necessary, so long as upstream projects release real updates for security problems and not simply patches. Accordingly, Arch almost always has the latest version of WebKitGTK+.

    The second main approach, used by Fedora, is to provide only stable release updates. This is more cautious, reflecting that big updates can break things, so they should only occur when upgrading to a new version of the operating system. For instance, Fedora 22 shipped with WebKitGTK+ 2.8, so it would release updates to new 2.8.x versions, but not to WebKitGTK+ 2.10.x versions.

    The third approach, followed by most distributions, is to take version upgrades only rarely, or not at all. For smaller distributions this may be an issue of manpower, but for major distributions it’s a matter of avoiding regressions in stable releases. Holding back on version updates actually works well for most software. When security problems arise, distribution maintainers for major distributions backport fixes and release updates. The problem is that this is not feasible for web engines; due to the huge volume of vulnerabilities that need to be fixed, security issues can only practically be handled upstream.

    So what’s happened since WSA-2015-0002 was released? Did it convince distributions to take WebKitGTK+ security seriously? Hardly. Fedora is the only distribution that has made any changes in response to WSA-2015-0002, and that’s because I’m one of the Fedora maintainers. (I’m pleased to announce that we have a 2.10.7 update headed to both Fedora 23 and Fedora 22 right now. In the future, we plan to release the latest stable version of WebKitGTK+ as an update to all supported versions of Fedora shortly after it’s released upstream.)

    Ubuntu

    Ubuntu releases WebKitGTK+ updates somewhat inconsistently. For instance, Ubuntu 14.04 came with WebKitGTK+ 2.4.0; 2.4.8 is available via updates, but even though 2.4.9 was released upstream over eight months ago, it has not yet been released as an update for Ubuntu 14.04.

    By comparison, Ubuntu 15.10 (the latest release) shipped with WebKitGTK+ 2.8.5, which has never been updated; it’s affected by about 40 vulnerabilities fixed in the latest upstream release. Ubuntu organizes its software into various repositories, and provides security support only to software in the main repository. This version of WebKitGTK+ is in Ubuntu’s “universe” repository, not in main, so it is excluded from security support. Ubuntu users might be surprised to learn that a large portion of Ubuntu software is in universe and therefore excluded from security support; this is in contrast to almost all other distributions, which typically provide security updates for all the software they ship.

    I’m calling out Ubuntu here not because it is especially negligent, but simply because it is our biggest distributor. It’s not doing any worse than most of our other distributors.

    Debian

    Debian provides WebKit updates to users running unstable, and to testing except during freeze periods, but not to released versions of Debian. Debian is unique in that it has a formal policy on WebKit updates. Here it is, reproduced in full:

    Debian 8 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers with backported security fixes. Additionally, library interdependencies make it impossible to update to newer upstream releases. Therefore, browsers built upon the webkit, qtwebkit and khtml engines are included in Jessie, but not covered by security support. These browsers should not be used against untrusted websites.

    For general web browser use we recommend Iceweasel or Chromium.

    Chromium – while built upon the Webkit codebase – is a leaf package, which will be kept up-to-date by rebuilding the current Chromium releases for stable. Iceweasel and Icedove will also be kept up-to-date by rebuilding the current ESR releases for stable.

    (Iceweasel and Icedove are Debian’s de-branded versions of Firefox and Thunderbird, the product of an old trademark spat with Mozilla.)

    Debian is correct that we do not provide long term support branches, as it would be very difficult to backport security fixes. But it is not correct that “library interdependencies make it impossible to update to newer upstream releases.” This might have been true in the past, but for several years now, we have avoided requiring new versions of libraries whenever it would cause problems for distributions, and — with one big exception that I will discuss below — we ensure that each release maintains both API and ABI compatibility. (Distribution maintainers should feel free to get in touch if we accidentally introduce some compatibility issue for your distribution; if you’re having trouble taking our updates, we want to help. I recently worked with openSUSE to make sure WebKitGTK+ can still be compiled with GCC 4.8, for example.)

    The risk in releasing updates is that WebKitGTK+ is not a leaf package: a bad update could break some application. This seems to me like a good reason for application maintainers to carefully test the updates, rather than a reason to withhold security updates from users, but it’s true there is some risk here. One possible solution would be to have two different WebKitGTK+ packages, say, webkitgtk-secure, which would receive updates and be used by high-risk software like web browsers and email clients, and a second webkitgtk-stable package that would not receive updates to reduce regression potential.

    Recommended Distributions

    We regularly receive bug reports from users with very old versions of WebKit, who trust their distributors to handle security for them and might not even realize they are running ancient, unsafe versions of WebKit. I strongly recommend using a distribution that releases WebKitGTK+ updates shortly after they’re released upstream. That is currently only Arch and Fedora. (You can also safely use WebKitGTK+ in Debian testing — except during its long freeze periods — and Debian unstable, and maybe also in openSUSE Tumbleweed, and (update) also in Gentoo testing. Just be aware that the stable releases of these distributions are currently not receiving our security updates.) I would like to add more distributions to this list, but I’m currently not aware of any more that qualify.

    The Great API Break

    So, if only distributions would ship the latest release of WebKitGTK+, then everything would be good, right? Nope, because of a large API change that occurred two and a half years ago, called WebKit2.

    WebKit (an API layer within the WebKit project) and WebKit2 are two separate APIs around WebCore. WebCore is the portion of the WebKit project that Google forked into Blink; it’s too low-level to be used directly by applications, so it’s wrapped by the nicer WebKit and WebKit2 APIs. The difference between the WebKit and WebKit2 APIs is that WebKit2 splits work into multiple secondary processes. Aside from the UI process, an application will have one or many separate web processes (for the actual page rendering), possibly a separate network process, and possibly a database process for IndexedDB. This is good for security, because it allows the secondary processes to be sandboxed: the web process is the one that’s likely to be compromised first, so it should not have the ability to access the filesystem or the network. (Remember, though, that there is no Linux sandbox yet, so this is currently only a theoretical benefit.) The other main benefit is robustness. If a web site crashes the renderer, only a single web process crashes (corresponding to one tab in Epiphany), not the entire browser. UI process crashes are comparatively rare.
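
    For a concrete taste of the API, here’s roughly how an application opts in to one web process per view using the webkit2gtk-4.0 API (a minimal sketch; as far as I know, the single shared web process model is still the default):

    #include <webkit2/webkit2.h>

    /* Sketch (webkit2gtk-4.0 API): request one web process per web view,
     * so that a renderer crash takes down only one tab rather than the
     * whole browser. */
    static void
    enable_multiprocess (void)
    {
      WebKitWebContext *context = webkit_web_context_get_default ();
      webkit_web_context_set_process_model (context,
          WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES);
    }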

    Intermission: Certificate Verification

    Another advantage provided by the API change is the opportunity to handle HTTPS connections more securely. In the original WebKitGTK+ API, applications must handle certificate verification on their own. This was a serious mistake; predictably, applications performed no verification at all, or did so improperly. For instance, take this Shotwell bug which is not fixed in any released version of Shotwell, or this Banshee bug which is still open. Probably many more applications are affected, because I have not done a comprehensive check. The new API is secure by default; applications can ignore verification errors, but only if they go out of their way to do so.
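
    To illustrate the secure-by-default design, here’s a minimal sketch using the webkit2gtk-4.0 API (the function and signal names are from the current API; the error UI is up to the application):

    #include <webkit2/webkit2.h>

    /* Sketch of WebKit2 certificate handling (webkit2gtk-4.0 API):
     * loads fail on certificate errors unless the application opts out. */
    static gboolean
    on_tls_errors (WebKitWebView       *web_view,
                   const char          *failing_uri,
                   GTlsCertificate     *certificate,
                   GTlsCertificateFlags errors,
                   gpointer             user_data)
    {
      /* The load has already been stopped when this signal fires; a real
       * browser would show an interstitial here. To proceed anyway, the
       * application must explicitly call
       * webkit_web_context_allow_tls_certificate_for_host() and reload,
       * i.e. it has to go out of its way to ignore the error. */
      g_warning ("TLS verification failed for %s", failing_uri);
      return TRUE; /* handled */
    }

    static void
    setup_certificate_handling (WebKitWebView *web_view)
    {
      WebKitWebContext *context = webkit_web_view_get_context (web_view);

      /* FAIL is already the default in recent releases; set for emphasis. */
      webkit_web_context_set_tls_errors_policy (context,
                                                WEBKIT_TLS_ERRORS_POLICY_FAIL);
      g_signal_connect (web_view, "load-failed-with-tls-errors",
                        G_CALLBACK (on_tls_errors), NULL);
    }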

    Remember that even though WebKitGTK+ 2.4.9 was released upstream over eight months ago, Ubuntu 14.04 is still on 2.4.8? It’s worth mentioning that 2.4.9 contains the fix for that serious networking backend issue I mentioned earlier (CVE-2015-2330). The bug is that TLS certificate verification was not performed until an HTTP response was received from the server; it’s supposed to be performed before sending an HTTP request, to prevent secure cookies from leaking. This is a disaster, as attackers can easily use it to get your session cookie and then control your user account on most websites. (Credit to Ross Lagerwall for reporting that issue.) We reported this separately to oss-security due to its severity, but that was not enough to convince distributions to update. But most applications in Ubuntu 14.04, including Epiphany and Midori, would not even benefit from this fix, because the change only affects WebKit2; remember, there’s no certificate verification in the original WebKitGTK+ API. (Modern versions of Epiphany do use WebKit2, but not the old version included in Ubuntu 14.04.) Old versions of Epiphany and Midori load pages even if certificate verification fails; the verification result is only used to change the status of a security indicator, basically giving up your session cookies to attackers.

    Removing WebKit1

    WebKit2 has been around for Mac and iOS for longer, but the first stable release for WebKitGTK+ was the appropriately-versioned WebKitGTK+ 2.0, in March 2013. This release actually contained three different APIs: webkitgtk-1.0, webkitgtk-3.0, and webkit2gtk-3.0. webkitgtk-1.0 was the original API, used by GTK+ 2 applications. webkitgtk-3.0 was the same thing for GTK+ 3 applications, and webkit2gtk-3.0 was the new WebKit2 API, available only for GTK+ 3 applications.

    Maybe it should have remained that way.

    But, since the original API was a maintenance burden and not as stable or robust as WebKit2, it was deleted after the WebKitGTK+ 2.4 release in March 2014. Applications had had a full year to upgrade; surely that was long enough, right? The original WebKit API layer is still maintained for the Mac, iOS, and Windows ports, but the GTK+ API for it is long gone. WebKitGTK+ 2.6 (September 2014) was released with only one API, webkit2gtk-4.0, which was basically the same as webkit2gtk-3.0 except for a couple small fixes; most applications were able to upgrade by simply changing the version number. Since then, we have maintained API and ABI compatibility for webkit2gtk-4.0, and intend to do so indefinitely, hopefully until GTK+ 4.0.

    A lot of good that does for applications using the API that was removed.

    WebKit2 Adoption

    While upgrading to the WebKit2 API will be easy for most applications (it took me ten minutes to upgrade GNOME Initial Setup), for many others it will be a significant challenge. Since rendering occurs out of process in WebKit2, the DOM API can only be accessed by means of a shared object injected into the web process (see the sketch after this paragraph). For applications that perform only a small amount of DOM manipulation, this is a minor inconvenience compared to the old API. For applications that use extensive DOM manipulation — the email clients Evolution and Geary, for instance — it’s not just an inconvenience, but a major undertaking to upgrade to the new API. Worse, some applications (including both Geary and Evolution) placed GTK+ widgets inside the web view; this is no longer possible, so such widgets need to be rewritten using HTML5. To say nothing of applications like GIMP and Geany that are stuck on GTK+ 2. They first have to upgrade to GTK+ 3 before they can consider upgrading to modern WebKitGTK+. GIMP is working on a GTK+ 3 port anyway (GIMP uses WebKitGTK+ for its help browser), but many applications like Geany (the IDE, not to be confused with Geary) are content to remain on GTK+ 2 forever. Such applications are out of luck.
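
    To give a flavor of what that injected shared object looks like, here’s a minimal sketch of a WebKit2 “web extension” using the webkit2gtk-4.0 API (real extensions are more involved, but this is the basic shape):

    #include <gmodule.h>
    #include <webkit2/webkit-web-extension.h>

    /* Minimal sketch of a WebKit2 web extension. WebKit loads this shared
     * object into each web process at startup; in WebKit2 it is the only
     * place the DOM can be touched. */
    static void
    on_page_created (WebKitWebExtension *extension,
                     WebKitWebPage      *web_page,
                     gpointer            user_data)
    {
      /* DOM manipulation would happen here, starting from
       * webkit_web_page_get_dom_document(). */
      g_message ("Web page %" G_GUINT64_FORMAT " created",
                 webkit_web_page_get_id (web_page));
    }

    /* WebKit looks for this entry point when loading the extension. */
    G_MODULE_EXPORT void
    webkit_web_extension_initialize (WebKitWebExtension *extension)
    {
      g_signal_connect (extension, "page-created",
                        G_CALLBACK (on_page_created), NULL);
    }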

    As you might expect, most applications are still using the old API. How does this work if it was already deleted? Distributions maintain separate packages, one for old WebKitGTK+ 2.4, and one for modern WebKitGTK+. WebKitGTK+ 2.4 has not had any updates since last May, and the last real comprehensive security update was over one year ago. Since then, almost 130 vulnerabilities have been fixed in newer versions of WebKitGTK+. But since distributions continue to ship the old version, few applications are even thinking about upgrading. In the case of the email clients, the Evolution developers are hoping to upgrade later this year, but Geary is completely dead upstream and probably will never be upgraded. How comfortable are you with using an email client that has now had no security updates for a year?

    (It’s possible there might be a further 2.4 release, because WebKitGTK+ 2.4 is incompatible with GTK+ 3.20, but maybe not, and if there is, it certainly will not include many security fixes.)

    Fixing Things

    How do we fix this? Well, for applications using modern WebKitGTK+, it’s a simple problem: distributions just have to start taking our security updates.

    For applications stuck on WebKitGTK+ 2.4, I see a few different options:

    1. We could attempt to provide security backports to WebKitGTK+ 2.4. This would be very time consuming and therefore very expensive, so count this out.
    2. We could resurrect the original webkitgtk-1.0 and webkitgtk-3.0 APIs. Again, this is not likely to happen; it would be a lot of work to restore them, and they were removed to reduce maintenance burden in the first place. (I can’t help but feel that removing them may have been a mistake, but my colleagues reasonably disagree.)
    3. Major distributions could remove the old WebKitGTK+ compatibility packages. That will force applications to upgrade, but many will not have the manpower to do so: good applications will be lost. This is probably the only realistic way to fix the security problem, but it’s a very unfortunate one. (But don’t forget about QtWebKit. QtWebKit is based on an even older version of WebKit than WebKitGTK+ 2.4. It doesn’t make much sense to allow one insecure version of WebKit but not another.)

    Or, a far more likely possibility: we could do nothing, and keep using insecure software.

  • On Boot Times

    Why does it take as long to boot Fedora 23 in 2016 as it did to boot Windows 95 in 1995?

    I knew we were slow, but I did not realize how slow:

    $ systemd-analyze
    Startup finished in 9.002s (firmware) + 5.586s (loader) + 781ms (kernel) + 24.845s (initrd) + 1min 16.803s (userspace) = 1min 57.019s

    Two minutes. (Edit: The 25 seconds in initrd is mostly time spent waiting for me to enter my LUKS password. Still, 1.5 minutes.)

    $ systemd-analyze blame
    32.247s plymouth-quit-wait.service
    22.837s systemd-cryptsetup@luks\x2df1993bc3\x2da397\x2d4b38\x2d9bef\x2d
    18.058s systemd-journald.service
    16.804s firewalld.service
    9.314s systemd-udev-settle.service
    8.905s libvirtd.service
    7.890s dev-mapper-fedora_victory\x2d\x2droad\x2droot.device
    5.712s abrtd.service
    5.381s accounts-daemon.service
    2.982s packagekit.service
    2.871s lvm2-monitor.service
    2.646s systemd-tmpfiles-setup-dev.service
    2.589s systemd-journal-flush.service
    2.370s dmraid-activation.service
    2.230s proc-fs-nfsd.mount
    2.024s systemd-udevd.service
    2.000s lm_sensors.service
    1.932s polkit.service
    1.931s systemd-fsck@dev-disk-by\x2duuid-30901da9\x2dab7e\x2d41fc\x2d9b
    1.852s systemd-fsck@dev-mapper-fedora_victory\x2d\x2droad\x2dhome.serv
    1.795s iio-sensor-proxy.service
    1.786s gssproxy.service
    1.759s gdm.service

    (Truncated.)

    This review of Fedora 23 shows how severely our boot speed has regressed (spoiler: 56.5% slower than Fedora 21, 49% slower than Ubuntu 15.10). The review also shows that Fedora 23 takes twice as long to power off as Fedora 22.

    I think we can do better.

  • Time to use header bars in Unity?

    My employer, Igalia, recently purchased a Gazelle Pro from System76 for me to use. So far, it seems like a great laptop, but time will tell. It came with Ubuntu 15.04 preinstalled, so before replacing that with Fedora Workstation, I decided to check out how some of our applications look under Ambiance, the GTK+ theme that Ubuntu uses instead of Adwaita.

    For the most part, Ambiance looks great. The overlay scrollbars leave much to be desired compared to upstream’s, but that’s my only real complaint. I found that Ambiance makes better use of space in general, using much less padding than Adwaita to fit significantly more content into application windows. (This is the reason behind complaints that “everything is bigger” in GNOME.) On the whole, that seems like a big advantage over Adwaita to me, though I’m concerned that might make it harder to use a touchscreen.

    But I found some of the applications I maintain did not look so great, through no fault of Ambiance, but because of some non-ideal use of GtkHeaderBar.

    When we started using GtkHeaderBars to replace system title bars a couple years ago, many GTK+ themes needed some time to catch up, as they were suddenly responsible for theming title bars to look similar to the window manager’s title bars. One disadvantage of this is that it’s no longer possible to mix-and-match GTK+ themes with different window manager themes and get a good result, but if the GTK+ theme matches the window manager’s theme, there is no problem.

    This approach worked well enough for us with most distributions, but Ubuntu, rather than improving their theme (which is not easy work, to be sure) and using the provided settings to put the proper window decorations in the right place, started patching our applications to set the header bars as the title bars only in GNOME. These patches took various forms: in some cases, like Calculator, the header bar was removed and its contents replaced with a menu bar (a strategy I dislike: we’ve been getting rid of menu bars because they are difficult to use), but generally the header bar was kept and simply packed underneath the title bar, instead of replacing the title bar. Since this made things worse in environments with updated themes that used the new window decoration settings, I decided to start accepting these patches upstream (in most cases), but tweaked so that the header bar would be used as the title bar in all desktops except Unity, rather than only in GNOME.
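
    In code, the difference between the two configurations is small. Roughly (a minimal sketch; widget details vary by application):

    #include <gtk/gtk.h>

    /* Sketch of the two configurations discussed above. */

    static void
    header_bar_as_titlebar (GtkWindow *window)
    {
      GtkWidget *header = gtk_header_bar_new ();
      gtk_header_bar_set_show_close_button (GTK_HEADER_BAR (header), TRUE);
      /* The header bar replaces the window manager's title bar. */
      gtk_window_set_titlebar (window, header);
    }

    static void
    header_bar_under_titlebar (GtkWindow *window, GtkWidget *content)
    {
      GtkWidget *header = gtk_header_bar_new ();
      GtkWidget *box = gtk_box_new (GTK_ORIENTATION_VERTICAL, 0);
      /* The header bar becomes ordinary window content, packed beneath
       * the window manager's own title bar (the patched Unity behavior). */
      gtk_box_pack_start (GTK_BOX (box), header, FALSE, FALSE, 0);
      gtk_box_pack_start (GTK_BOX (box), content, TRUE, TRUE, 0);
      gtk_container_add (GTK_CONTAINER (window), box);
    }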

    The problem is, Ubuntu’s handling of header bars as title bars has since improved considerably, and it seems applications look better now with the header bars used as title bars than with the header bars underneath the title bars. Compare Font Viewer (which uses a header bar as the title bar) to Disks and Sudoku (which pack the header bar underneath the title bar, but only when running in Unity):

    Disks and Sudoku pack header bars underneath the title bar when running in Unity. Font Viewer sets the header bar as the title bar unconditionally. Which looks better?

    Seems to me that Font Viewer is looking much nicer than Disks and Sudoku. Sudoku is also suffering from redundancy, since the application title is included in both the title bar and the header bar. That’s fixable, but since this is a non-default configuration that developers never test, similar problems are just going to return.

    The same applications running under GNOME. (Look at Disks to see how Ambiance uses less space than Adwaita, though it’s more noticeable in other applications.)

    So my inclination is to drop our special handling of Unity. Ubuntu might patch it back in — it is free software, after all — but maybe not, and in any case I’d feel better about the code we have upstream. Which approach looks better to you?

  • Your _get_type() function is not G_GNUC_CONST

    It’s not uncommon in GNOME to annotate the _get_type() function declaration of a GObject with G_GNUC_CONST. Like so:

    GType         ephy_download_get_type              (void) G_GNUC_CONST;

    What does this do? It expands to __attribute__((__const__)) if the compiler is GCC (or a compiler that pretends to be GCC, like Clang); otherwise, it expands to nothing. What does that attribute do? I could point you at the GCC documentation, but GLib’s documentation is simpler: “Declaring a function as const enables better optimization of calls to the function. A const function doesn’t examine any values except its parameters, and has no effects except its return value.” That’s really all there is to it. What’s important to keep in mind is that if your function doesn’t meet the preconditions for the attribute, the compiler is free to make optimizations that break your code.

    Since G_DEFINE_TYPE defines our _get_type() functions for us, it can be easy to forget what’s actually in there. Here’s the canonical example, from the GObject documentation:

    GType maman_bar_get_type (void)
    {
      static GType type = 0;
      if (type == 0) {
        const GTypeInfo info = {
          /* You fill this structure. */
        };
        type = g_type_register_static (G_TYPE_OBJECT,
                                       "MamanBarType",
                                       &info, 0);
      }
      return type;
    }
    

    The first thing you should notice is that it examines a value (type) that’s not a parameter. Next, you should notice that it has an effect other than its return value: it modifies type, and then registers with the type system. Obviously G_GNUC_CONST is not appropriate here. Fix your headers. Update: If you scroll down to the first comment below, Giovanni recommends using G_GNUC_CONST anyway and also g_type_ensure as a workaround for when you don’t use the return value of the function.
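
    Here’s a self-contained sketch of the failure mode and of the g_type_ensure() workaround, reusing the ephy_download_get_type() declaration from above (the function definition is omitted, since only the declaration matters here):

    #include <glib-object.h>

    /* Declaration as it appears in the (broken) header. */
    GType ephy_download_get_type (void) G_GNUC_CONST;

    static void
    register_types (void)
    {
      /* The return value is unused, and a __const__ function has no
       * effects besides its return value (as far as the compiler knows),
       * so this call may be deleted entirely: the type is then silently
       * never registered. */
      ephy_download_get_type ();

      /* g_type_ensure() is GLib's workaround: it uses the GType value,
       * so the _get_type() call cannot be optimized away even when the
       * function is marked G_GNUC_CONST. */
      g_type_ensure (ephy_download_get_type ());
    }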

    Note that the new, highly recommendable G_DECLARE_FINAL_TYPE and G_DECLARE_DERIVABLE_TYPE macros declare this function for you, so future code should be immune to this problem. Update: Those macros do not use G_GNUC_CONST, but maybe they will in the future? Who can say!

    P.S. I’m not the one who noticed this — it was brought up by somebody (Christian?) at the Boston Summit last year — but I don’t think anybody has blogged about it yet. Update: It was pointed out in the comments that this was noticed long ago. Here’s a GLib bug report about breakage in Glade, and my colleague Andy Wingo has a blog post about a GStreamer bug this caused.

  • Useful DuckDuckGo bangs

    DuckDuckGo bangs are just shortcuts to redirect your search to another search engine. My personal favorites:

    • !gnomebugs — Runs a search on GNOME Bugzilla. Especially useful followed by a bug number. For example, search for ‘!gnomebugs 100000’ and see what you get.
    • !wkb — Same thing for WebKit Bugzilla.
    • !w — Searches Wikipedia.

    There are 6,388 more, but those are the three I can remember. If you work on GNOME or WebKit, these are super convenient.

  • Stop using RC4

    A follow-up to my previous post: in response to my letter, NIST is going to increase the CVSS score of CVE-2013-2566 (RC4) to match CVE-2011-3389 (BEAST). Yay!

    In other news, WebKitGTK+ 2.8 has full support for RFC 7465. That’s a fancy way of saying that we will no longer negotiate RC4 connections and you will now be unable to access the small minority of HTTPS sites that offer nothing but RC4. Hopefully other browsers will follow along sooner rather than later. In particular, Firefox nightly has stopped negotiating RC4 except for a few whitelisted sites: I would very much like to see that whitelist removed. Internet Explorer has stopped negotiating RC4 except when it performs voluntary protocol version fallback. It would be great to see a firmer stance from Mozilla and Microsoft, and some action from Google and Apple.
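
    WebKitGTK+’s TLS support goes through glib-networking and GnuTLS, and conceptually, refusing RC4 amounts to removing it from the session’s cipher priorities. A sketch in raw GnuTLS terms (this is not the actual glib-networking code; ARCFOUR is GnuTLS’s name for RC4):

    #include <gnutls/gnutls.h>

    /* Sketch, not actual glib-networking code: start from the NORMAL
     * priority set and remove the RC4 (ARCFOUR) ciphers, so RC4-only
     * servers simply fail the handshake. */
    static int
    refuse_rc4 (gnutls_session_t session)
    {
      return gnutls_priority_set_direct (session,
                                         "NORMAL:-ARCFOUR-128:-ARCFOUR-40",
                                         NULL);
    }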

  • Security and Privacy Roadmap for Epiphany and WebKitGTK+

    I’ve laid out some informal thoughts on where we should be heading with regards to new security and privacy features in Epiphany. It’s in the form of a list of features we really ought to have. (That is, it’s a wishlist.) Most of these features would be implemented in WebKitGTK+, so other applications using WebKitGTK+ would benefit as well.

    There’s certainly no shortage of work to be done, so except for a couple items on the list, this is not a list of things you should expect to be implemented soon. Comments welcome on the wiki or on this blog. Volunteers especially welcome! Most of the tasks on the list would make for great GSoC projects (but I’m not accepting more applicants this year: prospective students should find another mentor who’s interested in one of the tasks).

    The list will also be used to help assign one or more bounties using some of the money we raised in our 2013 security and privacy campaign.

  • RC4 vs. BEAST: which is worse?

    RFC 7465 has been published, and in a perfect world it would spell doom for the use of RC4 in TLS. But, spoiler alert, the theme of this blog is that there are tons of problems with TLS that your browser either cannot or willfully will not protect you against — major browser vendors love nothing more than sacrificing your security in the name of compatibility with lousy servers — so it’s too soon for optimism.

    This guy who sounds like he knows what he’s talking about and who I’ve blindly decided to trust says that PCI-compliant sites must disable CBC-based block ciphers so that they’re not vulnerable to the BEAST attack against TLS 1.0. But CBC is the only mode for block ciphers that provides a reasonable level of security in TLS 1.0, so these servers are limited to negotiating only stream ciphers. And RC4 is the only stream cipher in TLS, so that’s the only thing these poor servers are left with. But nobody is actually vulnerable to BEAST anymore — web browsers have been able to prevent the BEAST attack for several years — so this makes no sense.
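
    For reference, the browser-side BEAST mitigation is the 1/n-1 record split. Conceptually it looks like this (a sketch against a hypothetical record-layer API, not any real TLS library’s code):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical record-layer API, for illustration only. */
    typedef struct tls_session tls_session;
    void send_record (tls_session *session, const uint8_t *data, size_t len);

    /* The 1/n-1 split: send the first plaintext byte in its own record.
     * Its ciphertext then serves as an unpredictable IV for the remaining
     * n-1 bytes, defeating BEAST's chosen-plaintext attack against CBC
     * in TLS 1.0. */
    void
    send_application_data (tls_session *session,
                           const uint8_t *buf, size_t len)
    {
      if (len > 1)
        {
          send_record (session, buf, 1);
          send_record (session, buf + 1, len - 1);
        }
      else
        send_record (session, buf, len);
    }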

    So what is a PCI-compliant site? In theory, it’s any site that processes credit card data. For instance, check out the SSL Labs report for www.bankofamerica.com. (In case you’re not yet thoroughly convinced of the truth of the second sentence in this post, take note of the eight bold WEAK warnings and also the bold DANGER. Even major banks don’t care.) Scroll down to the handshake simulations and note how AES is only sometimes used with TLS 1.2, and RC4 is always picked with TLS 1.0. In practice, I’ve checked SSL Labs results for sites like www.amazon.com that do take credit card data and yet do use AES with TLS 1.0, so I’m not sure if guy-who-sounds-like-he-knows-what-he’s-talking-about has the full story, but maybe audits come less frequently than I would expect.

    Hopefully browser vendors will push forward and disable RC4 anyway, but that doesn’t seem sufficiently probable, and these poor sites are hardly going to disable RC4 if it means they will fail their next security audit. So what better way to spend a Friday afternoon than write a letter to NIST?

    Hi,

    The CVSS score for CVE-2011-3389 (BEAST) [1] relative to the score for CVE-2013-2566 [2] may discourage efforts to implement RFC 7465 [3], which prohibits use of RC4-based ciphersuites with TLS. Delays in the implementation of this RFC will harm the overall security of the TLS ecosystem.

    The issue is described succinctly at [4]: PCI-compliant servers may not enable CBC-based ciphersuites because CVE-2011-3389 has a base score of 4.3, leaving RC4-based ciphersuites as the only possible options for the server to use with TLS 1.0. CVE-2013-2566, the RC4 vulnerability, has a lower CVSS score. However, CVE-2013-2566 is a much more serious issue in practice. CVE-2011-3389 has been long-since mitigated on the client side in major browsers using the 1/n-1 split technique [5], allowing CBC-based ciphersuites to be used safely. In contrast, no client-side mitigation for CVE-2013-2566 is available short of disabling RC4. Note also that a more serious attack against RC4 will be published next month [6].

    In summary, a properly-configured TLS server *should not* attempt to mitigate CVE-2011-3389, as this discourages clients from mitigating CVE-2013-2566, and clients already mitigate CVE-2011-3389. Please reconsider the relative ratings for these vulnerabilities to allow PCI-compliant servers to re-enable CBC-based ciphersuites, so that browser vendors can more comfortably disable support for RC4 as required by RFC 7465 [4] [7] [8].

    Thank you for your consideration,

    Michael Catanzaro

    [1] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3389
    [2] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-2566
    [3] http://www.rfc-editor.org/rfc/rfc7465.txt
    [4] https://code.google.com/p/chromium/issues/detail?id=375342#c17
    [5] https://bugzilla.mozilla.org/show_bug.cgi?id=665814#c59
    [6] https://www.blackhat.com/asia-15/briefings.html#bar-mitzva-attack-breaking-ssl-with-13-year-old-rc4-weakness
    [7] https://bugzilla.mozilla.org/show_bug.cgi?id=999544
    [8] https://bugs.webkit.org/show_bug.cgi?id=140014

    Now, will this actually work? Will I even get a response? I have no clue. Let’s find out!

    Update: NIST responded.

  • redhat.corpmerchandise.com is fixed

    redhat.corpmerchandise.com is no longer broken. That’s a relatively good reaction time to the problem. Unfortunately, I’ve seen no talk of change in NSS to prevent sites from making similar mistakes in the future, and they are out of medium Fedora T-shirts.

  • Mozilla is responsible for the redhat.corpmerchandise.com fiasco

    First of all, I should probably admit that, despite the title of this post, no, the redhat.corpmerchandise.com fiasco is not Mozilla’s fault: it’s Red Hat’s, because obviously Mozilla has no control over that domain. But that wouldn’t make for a very interesting title for a blog post, and Mozilla set the stage for this to happen, so let’s go with “Mozilla’s fault.” Also, it’s not really Red Hat’s fault; Staples is really to blame, since corpmerchandise.com is their domain, but I really shouldn’t be pointing that out when the point of this blog post is to blame Mozilla. And gosh now I’m off on a tangent, but it’s not really a fiasco either: it’s a significant screw-up, but not that big a deal; but words like “fiasco” make for good clickbait headlines, so let’s go with that. FIASCO.

    One last note before I begin. I hold Mozilla to a higher standard than other software development companies. Sometimes it makes mistakes, like the one I’m about to present, and it’s important to call them out when this happens, but it’s because of good choices at Mozilla that Firefox still (mostly) respects your freedom, unlike other major browsers. It’s a good company.

    OK, you’ve read this far in suspense, so I should probably explain the redhat.corpmerchandise.com fiasco before you reach the end of your three-paragraph Internet-length attention span. Yesterday the Fedora Store went live, where you can buy low-cost Fedora-branded items: a T-shirt, water bottle, pub glass, or baseball cap. I want a T-shirt. OK, that’s great, so what is the fiasco? Well click on this link (quick! before it gets fixed!) to find out: https://redhat.corpmerchandise.com/ProductList.aspx?did=20588

    Now, depending on your browser, you may or may not have discovered the problem. When I load that site in Firefox, I see Fedora merchandise. When I load it in Epiphany, I see something noticeably less friendly:

    Screenshot from 2015-01-30 20:06:22

    “Legitimate banks, stores, and other public sites will not ask you to do this.” Ouch. (I actually took that language from Firefox when I designed that interstitial for Epiphany.) Ah, well, clearly there is some bug in Epiphany, because Firefox is a major browser and Firefox doesn’t get stuff like this wrong, right? Well, no, Epiphany is not wrong. Then Firefox is wrong? Well… from a certain point of view… (like mine)….

    Firefox and Epiphany use different cryptography libraries to determine if the certificate is valid, and they sometimes differ in what certificates they will accept. Firefox uses NSS, a library maintained by Mozilla primarily for use by Firefox (it’s also used by Chrome on Linux), while Epiphany (indirectly) uses GnuTLS, originally a GNU project that is now de-facto maintained by Red Hat. So is NSS just better than GnuTLS at determining whether a certificate is valid? Actually, NSS really is more permissive than GnuTLS, and this does sometimes lead Firefox to approve of sites that Epiphany will not, but that’s not the case here. Let’s try a little experiment to see what’s happening. Firefox has a weird feature that feels like it was designed in the 90s for the era when computers had one user account apiece: it lets you create multiple profiles for bookmarks, history, and other settings. So let’s give this a whirl:

    $ firefox -ProfileManager

    Create a new profile, launch Firefox with it, then load https://redhat.corpmerchandise.com/ProductList.aspx?did=20588 to try this experiment again. Or just keep reading and trust me when I say that you’ll see this:

    Screenshot from 2015-01-30 20:23:45

    Oooh, that’s not good, now Firefox thinks we’re being attacked. We’re not. So what’s going on here? Why is Firefox so inconsistent?

    First off, let’s get one thing straight: this site is totally and hopelessly broken. To see why, let’s use the super-handy tool gnutls-cli:

    $ gnutls-cli redhat.corpmerchandise.com
    Processed 182 CA certificate(s).
    Resolving 'redhat.corpmerchandise.com'...
    Connecting to '174.47.191.32:443'...
    - Certificate type: X.509
    - Got a certificate list of 1 certificates.
    - Certificate[0] info:
    - subject `C=US,ST=Kansas,L=Overland Park,O=STAPLES CONTRACT & COMMERCIAL\, INC.,OU=Information Techology,CN=*.corpmerchandise.com', issuer `C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert SHA2 High Assurance Server CA', RSA key 2048 bits, signed using RSA-SHA256, activated `2014-11-12 00:00:00 UTC', expires `2015-12-09 12:00:00 UTC', SHA-1 fingerprint `50cfb26c680434d132dc64e80db54de51a5a07a6'
    Public Key ID:
    c273ca58bfdb2902ea30dbf5946c27178affd588
    Public key's random art:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |                 |
    |       .         |
    |      = S        |
    |     .* O o o    |
    |o   .+.X E o .   |
    | =.. =.+.+ .     |
    |..o oo+oo        |
    +-----------------+

    - Status: The certificate is NOT trusted. The certificate issuer is unknown.
    *** PKI verification of server certificate failed...
    *** Fatal error: Error in the certificate.
    *** Handshake has failed
    GnuTLS error: Error in the certificate.

    If you’re familiar with digital certificates, it’s pretty obvious what’s wrong here. When you connect securely to a web site, it sends a chain of certificates: the first certificate is owned by the web site, then it sends some number of additional certificates, usually one or two, that belong to certificate authorities. Each certificate is signed by the next certificate in the chain (not quite, but it’s almost true, so let’s go with that for this post), up until you get to the last one in the chain, which must be signed by a certificate in your browser’s (or operating system’s) root trust store. The certificates in your root trust store are super valuable, and if one were to be compromised by an attacker the devastation to the Web would be terrible, so certificate authorities must keep their roots safe at all costs, and they do this by almost never using them. Legitimate certificate authorities never sign web sites’ certificates with their root certificate; instead, they create a few other certificates, sign them with the root, and use only those to sign web sites’ certificates. So if you ever visit a site and it sends you only one certificate, you know that the site is broken for sure. And here we have a site that has sent only one certificate (there’s a Certificate[0] but no Certificate[1]), a classic case of server misconfiguration (aka fiasco).

    So why did Firefox allow the site at first, even though it has no chain of trust, but not allow it with a fresh profile? Well, even though the site has presented no chain of trust, NSS goes far, far out of its way to find one. Whenever you visit a web site, NSS saves each intermediate certificate it sees, makes sure it’s signed by a trusted root, and caches it for future use. Then, whenever you visit a site that sends a broken chain of trust, NSS will effectively treat all those intermediates as roots, and use them to complete the chain of trust. This is completely safe, since it has already verified them. Those intermediate certs are saved in your Firefox profile, so by switching to a fresh profile they are no longer used, and you can’t access the broken site anymore.
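
    To make that concrete, here’s a toy model of the lookup (this is not NSS’s actual code; the types and logic are invented for illustration):

    #include <glib.h>

    /* Toy model of intermediate-certificate caching, not NSS's actual
     * code. A "cert" here is just a subject name plus the subject of
     * whatever signed it. */
    typedef struct {
      const char *subject;
      const char *issuer;
    } ToyCert;

    static const ToyCert *
    find_issuer (const ToyCert *cert, GPtrArray *store)
    {
      for (guint i = 0; i < store->len; i++)
        {
          const ToyCert *candidate = g_ptr_array_index (store, i);
          if (g_str_equal (candidate->subject, cert->issuer))
            return candidate;
        }
      return NULL;
    }

    static gboolean
    chain_is_trusted (const ToyCert *cert,
                      GPtrArray     *roots,
                      GPtrArray     *cached)
    {
      if (find_issuer (cert, roots) != NULL)
        return TRUE;  /* signed directly by a trusted root */

      /* The server sent no usable chain, so fall back to intermediates
       * cached from earlier visits to unrelated sites. A fresh profile
       * has an empty cache, hence the inconsistent results. */
      const ToyCert *intermediate = find_issuer (cert, cached);
      if (intermediate != NULL)
        return chain_is_trusted (intermediate, roots, cached);

      return FALSE;   /* untrusted: show the scary warning */
    }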

    If you were able to access redhat.corpmerchandise.com in Firefox, you can verify this for yourself: open Preferences -> Advanced -> Certificates -> View Certificates -> Authorities. Anything listed as Default Trust or System Trust is a root, and anything listed as Software Security Device is a cached intermediate cert. Don’t touch those root certs, but feel free to Delete or Distrust any Software Security Device — it will just be cached again the next time you visit a web site that uses it. Scroll down to DigiCert SHA2 High Assurance Server CA. That’s the cached intermediate cert that is allowing you to visit redhat.corpmerchandise.com — it’s not shipped with Firefox, and new Firefox users won’t have it. Delete it, restart the browser, then try reloading https://redhat.corpmerchandise.com. Oh no, it’s untrusted! Now visit https://stackoverflow.com, which sends a certificate signed by DigiCert SHA2 High Assurance Server CA, which will cause NSS to cache it once again. Now back to https://redhat.corpmerchandise.com, and Firefox knows it’s safe again. And that, folks, is how you screw up your web site so that it only works if you first visit Stack Overflow.

    So why does NSS do this? Well, once upon a time (ten years ago), browsers were less strict about verifying chains of trust, and on an untrusted connection would let you proceed with maybe just a pop-up warning, and maybe not even that. So sites were less diligent about making sure they had valid chains of trust than they are today, in the era of nasty interstitial warnings that discourage the user from visiting the site. Since there were a lot of sites with broken chains, NSS chose to cache intermediate certificates to reduce the number of unnecessary validation errors for Firefox users. At the time, this might have been an OK choice.

    Today, if your online store is missing a chain of trust, the browser makes clear in no uncertain terms that this site is not to be trusted, and sites lose visitors/customers, so they try pretty hard to get this right. (How many lost visitors depends on the browser — a large majority of Chrome and probably Epiphany users will click through the warnings, but a large majority of Firefox users will not, because Firefox’s UI for this is much scarier.) When setting up a new site, you check it in a couple of browsers to make sure it works properly, and you trust that if it works in Firefox on your machine, surely it will work in Firefox for everyone else, right? Well, no, it won’t. When setting up a secure web site, you must always test it with a fresh Firefox profile to make sure that you got the chain of trust correct. Of course, nobody knows to do this, which is how we wind up with broken sites like redhat.corpmerchandise.com.

    I suspect this breakage would happen far less often if NSS did not cache intermediate certs, tricking site admins into thinking their sites are set up properly. Sure, cached certs don’t hurt the user who has them cached, but they’re bad for all other users of Firefox, as well as users of browsers that do no certificate caching. And there’s no good reason for this: browsers don’t need to cache intermediate certificates in 2015, because almost all sites that redirect from HTTP to HTTPS get this right nowadays, and those that get it wrong are probably getting it wrong because they tested with a browser that had the right cached intermediate. Chicken and egg much? There’s only one way to fix this problem, and that brings me to my request: Mozilla, do the Web a favor and stop caching intermediate certificates.

    P.S. Astute readers would note that there’s absolutely no point in deleting an intermediate certificate with the Firefox certificate manager, except to test things like this. It’s just going to come back the next time you see it.