On reCAPTCHA Dread

When did reCAPTCHA become so maddeningly difficult and time-consuming?

I wanted to read Matthew Garrett’s post on Intel’s remote AMT vulnerability, but since I’m using Private Internet Access, Cloudflare has gated it behind reCAPTCHA. reCAPTCHA is much, much harder than it used to be. Although there seem to be a couple of other variants, nowadays you’re generally expected to identify squares that contain street signs and squares that contain mountains. Now either the answer key is regularly wrong, or I just don’t know what street signs and mountains are. You’d think the former… but there actually is a good degree of ambiguity in selecting which squares to tag. Do I tag only the squares that contain the face of the sign, or do I also tag the squares containing the signpost? (The former seems to work better, in my experience.) What if only a little bit of the sign extends into a particular square? (Jury’s out.) What if there are very distant signs in the background of the image, with many big signs in the foreground: should the distant signs be tagged too? And what constitutes a mountain anyway? Most of the “mountains” I see in the reCAPTCHA images look more like impressive hills to me. My guess is that reCAPTCHA wants me to tag any bit of elevated land as a mountain, but who knows, really.

Worse, once you tag some squares as signs or mountains, more squares appear in their places, making the captcha seem almost never-ending: you never know when you’ll finish. I dread each time new squares appear. I regularly find myself taking much too long to solve the captcha, often as much time as I would actually spend reading the gated content. Once I finish, I await the answer… “try again.” Which squares did I get wrong, Google? There’s no way to know. Back to start.

Today, for the first time that I can recall, I gave up after I was told to Try Again a few too many times in a row. I refreshed the page, thinking to try the captcha again… no luck this time, either. So congratulations, Google, on designing a usability nightmare captcha that can keep humans out. I know it has to be hard enough to make it difficult to solve with computer vision, but if humans can’t pass it, your captcha has failed. And so I am stuck reading Matthew’s article in Liferea.

Remember the days when all you had to do was spell out two words?

More On Private Internet Access

A few quick follow-up thoughts from my original review. First, problems I haven’t solved yet:

  • I forgot an important problem in my first post: email. Evolution is borderline unusable with PIA. My personal Gmail account usually works reliably, but my Google Apps school Gmail account (which you’d think would behave the same) and my Igalia email both time out with the error “Source doesn’t support prompt for credentials”. That’s the generic error Evolution throws up whenever the mail server is taking too long to respond. So what’s going on here? I can check my email via webmail as a workaround in the meantime, but this is really terrible.
  • Still no solution for the first attempt to connect always failing. That’s really annoying! I was expecting some insight (or at least guesses) as to what might be going wrong here, but nobody has suggested anything about this yet. Update: The problem is that I had selected “Make available to other users” but “Store the password only for this user”, which results in the first attempt to connect always failing, because it’s performed by the gdm user. The fix is to store the password for all users. (A rough sketch of making this change from the command line follows below.)
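As promised just above, here is a rough sketch of making that change from the command line rather than the GUI, driving nmcli from Python. The connection name is made up, and I am assuming the nm-openvpn plugin stores the secret flag as password-flags inside vpn.data; double-check against your own connection before running anything like this.

```python
import subprocess

# Hypothetical connection name; use whatever `nmcli connection show` lists.
VPN = "PIA - US East"

# Make the connection available to all users (empty permissions = everyone).
subprocess.run(["nmcli", "connection", "modify", VPN,
                "connection.permissions", ""], check=True)

# Store the OpenVPN password system-wide (flags=0) instead of in the user's
# keyring, so the gdm user can bring the VPN up before anyone logs in.
# Assumes nm-openvpn keeps this flag as "password-flags" in vpn.data.
subprocess.run(["nmcli", "connection", "modify", VPN,
                "+vpn.data", "password-flags=0"], check=True)

# You may need to re-enter the password afterwards (for example in
# nm-connection-editor) so NetworkManager actually saves it system-wide.
```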

Some solutions and answers to problems from my original post:

  • Jonh Wendell suggested using TCP instead of UDP to connect to PIA. I’ve been trying this and so far have not noticed a single instance of connection loss. So I think my biggest problem has been solved. Yay! (A sketch of how to make the switch follows this list.)
  • Dan LaManna posted a link to vpnfailsafe. I’m probably not going to use this since it’s a long shell script that I don’t understand, and since my connection drop problems seem to be solved now that I’ve switched to TCP, but it looks like it would probably be a good solution to the problem it targets. Real shame this is not built into NetworkManager already.
  • Christel Dahlskjaer has confirmed that freenode requires NickServ/SASL authentication when connecting via PIA. This isn’t acceptable for me, since Empathy can’t handle it well, so I’m probably just going to stop using freenode for the most part. The only room I was ever really active in was #webkitgtk+, but in practice our use of that room is basically redundant with #epiphany on GIMPNet (where you’ll still find me, and which would be a better location for a WebKitGTK+ channel anyway), so I don’t think I’ll miss it. I’ve been looking to reduce the number of IRC rooms I join for a long time anyway. The only thing I really need freenode for is Fedora Workstation meetings, which I can attend via a web gateway. (Update: I realized there is another channel I am going to miss as well. Hmm, this could be a problem….)
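Since the TCP suggestion in the first item above is what actually solved my connection drops, here is a hedged sketch of flipping an existing NetworkManager connection from UDP to TCP with nmcli. The connection name is hypothetical, and I am assuming the nm-openvpn vpn.data keys are proto-tcp and port; check your imported profile before relying on this.

```python
import subprocess

VPN = "PIA - US East"  # hypothetical connection name

# Ask the nm-openvpn plugin to use TCP instead of UDP. Assumes the vpn.data
# key is "proto-tcp"; PIA's TCP endpoints usually listen on a different port
# than the UDP ones (check the .ovpn profile, often 443).
subprocess.run(["nmcli", "connection", "modify", VPN,
                "+vpn.data", "proto-tcp=yes"], check=True)
subprocess.run(["nmcli", "connection", "modify", VPN,
                "+vpn.data", "port=443"], check=True)
```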

So my biggest issue now is that I can’t use my email. That’s pretty surprising, as I wouldn’t think using a VPN would make any difference for that. I don’t actually care about my Google Apps account, but I need to be able to read my Igalia mail in Evolution. (Note: My actual IP seems to leak in my email headers, but I don’t care. My name is on my emails anyway. I just care that it works.)

On Private Internet Access

I’m soon going to be moving to Charter Communications territory, but I don’t trust Charter and don’t want it to keep records of all the websites that I visit.  The natural solution is to use a VPN, and the natural first choice is Private Internet Access, since it’s a huge financial supporter of GNOME, and I haven’t heard anybody complain about problems with using it. This will be a short review of my experience.

The service is not free. That’s actually good: it means I’m the customer, not the product. Cost is $40 per year if you pay a year in advance, but you should probably start with the $7/month plan until you’re sure you’re happy with the service and will be keeping it long-term. Anyway, this is a pretty reasonable price that I’m happy to pay.

The website is fairly good. It makes it easy to buy or discontinue service, so there are no pricing surprises, and there’s a pretty good library of support documentation. Unfortunately some of the claims on the website seem to be — arguably — borderline deceptive. A VPN provides excellent privacy from your ISP, which sees only encrypted traffic to the VPN provider, but relying on a VPN would be a pretty bad idea if your adversary is the government (it can perform a traffic correlation attack) or advertising companies (they know your screen resolution, the performance characteristics of your graphics card, and until recently the rate your battery drains…). But my adversary is going to be Charter Communications, so a VPN is the perfect solution for me. If you need real anonymity, you absolutely must use the Tor Browser Bundle, but that’s going to make your life harder, and I don’t want my life to be harder, so I’ll stick with a VPN.

Private Internet Access provides an Ubuntu app, but I’m going to ignore that because (a) I use Fedora, not Ubuntu, and (b) why on Earth would you want a separate desktop app for your VPN when OpenVPN integration is already built in on Ubuntu and all modern Linux desktops? Unfortunately the documentation provided by Private Internet Access is not really sufficient — they have a script to set it up automatically, but it’s really designed for Ubuntu and doesn’t work on Fedora — so configuration was slightly challenging. I wound up following instructions on some third-party website, which I have long since forgotten. There are many third-party resources for how to configure PIA on Linux, which you might think is good but actually indicates a problem with the official documentation in my opinion. So there is some room for improvement here. PIA should ditch the pointless desktop app and improve its documentation for configuring OpenVPN via NetworkManager. (Update: After publishing this post, I discovered this article. It seems the installation script now supports Fedora/RHEL and Arch Linux, so my claim that it only works on Ubuntu is outdated.) But anyway, once you get it configured properly with NetworkManager, it works: no need to install anything (besides the OpenVPN certificate, of course).
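In case it helps anyone hunting for those missing instructions, here is roughly what I mean by configuring it through NetworkManager instead of the desktop app: import PIA’s OpenVPN profiles as NetworkManager connections. The paths are made up and this is only a sketch, assuming the nmcli import syntax available on current Fedora; adapt as needed.

```python
import glob
import subprocess

# PIA publishes a zip of .ovpn profiles; after unpacking it somewhere
# (the path below is hypothetical), each profile can be imported as its own
# NetworkManager connection. Keep the certificate the profiles reference
# alongside them.
for ovpn in sorted(glob.glob("/home/me/pia-profiles/*.ovpn")):
    subprocess.run(["nmcli", "connection", "import",
                    "type", "openvpn", "file", ovpn], check=True)
```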

Well, it mostly works. Now, I have two main requirements to ensure that Charter can’t keep records of the websites I’m visiting:

  • NetworkManager must autoconnect to the VPN, so I don’t have to do it manually.
  • NetworkManager must reconnect to the VPN if the connection drops, and must never send any data outside the tunnel while the VPN is down.

The first requirement was hard to solve, and I still don’t have it working perfectly. There is no GUI configuration option for this in gnome-control-center, but I eventually found it in nm-connection-editor: you have to edit your normal non-VPN connection, which has a preference to select a VPN to connect to automatically. So we should improve that in gnome-control-center. Unfortunately, it doesn’t work at all the first time your computer connects to the internet after it’s booted. Each time I boot my computer, I’m greeted with a Connection Failed notification on the login screen. This is probably a NetworkManager bug. Anyway, after logging in, I just have to manually connect once, then it works.
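For anyone who prefers the command line to nm-connection-editor, my understanding is that the GUI preference just sets the connection.secondaries property on the primary connection. A hedged sketch, with hypothetical connection names:

```python
import subprocess

# Hypothetical names; adjust to match `nmcli connection show`.
PRIMARY = "Wired connection 1"
VPN = "PIA - US East"

# Look up the VPN connection's UUID. Terse output looks like
# "connection.uuid:<uuid>", so split off the field name.
out = subprocess.check_output(
    ["nmcli", "-t", "-f", "connection.uuid", "connection", "show", VPN],
    text=True).strip()
uuid = out.split(":", 1)[1]

# Register the VPN as a secondary of the primary connection, which should
# make NetworkManager bring the VPN up whenever the primary connects.
subprocess.run(["nmcli", "connection", "modify", PRIMARY,
                "connection.secondaries", uuid], check=True)
```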

As for the next requirement, I’ve given up. My PIA connection is routinely lost about once every 30-45 minutes, usually when watching YouTube or otherwise using a lot of data. This is most likely a problem with PIA’s service, but I don’t know that: it could just as well be my current ISP cutting the connection, or maybe even some client-side NetworkManager bug. Anyway, I could live with brief connection interruptions, but when this happens, I lose connection entirely for about a minute — too long — and then the VPN times out and NetworkManager switches back to sending all the data outside the VPN. That’s totally unacceptable. To be clear, sending data outside the VPN is surely a NetworkManager problem, not a PIA problem, but it needs to be fixed for me to be comfortable using PIA. I see some discussion about that on this third-party GitHub issue, but the “solution” there is to stop using NetworkManager, which I’m not going to do. This is probably one of the reasons why PIA provides a desktop app — I think the PIA app doesn’t suffer from this issue? — but like I said, I’m not going to use a third-party OpenVPN app instead of the undoubtedly-nicer support that’s built in to GNOME.

Another problem is that I can’t connect to Freenode when I’m using the VPN. GIMPNet works fine, so it’s not a problem with IRC in general: Freenode is specifically blocking Private Internet Access users. This seems very strange, since Freenode has a bunch of prominent advertising for PIA all over its website. I could understand blocking PIA if there are too many users abusing it, but not if you’re going to simultaneously advertise it.

I also cannot access Igalia’s SIP service when using PIA. I need that too, but that’s probably something we have to fix on our end.

So I’m not sure what to do now. We have two NetworkManager bugs and a problem with Freenode. Eventually I’ll drop Empathy in favor of Matrix or some other IRC client where registering with NickServ is not a terrible mistake (presumably they’re only blocking unregistered users?), so the Freenode issue seems less important. I think I’d be willing to just stop visiting Freenode if required to use PIA, anyway. But those NetworkManager issues are blockers to me. With those unfixed, I’m not sure if I’m going to renew my PIA subscription or not. I would definitely renew if someone were to fix those two issues. The ideal solution would be for PIA to adopt NetworkManager’s OpenVPN plugin and ensure it gets cared for, but if not, maybe someone else will fix it?

Update: See part two for how to solve some of these problems.

How to install Ubuntu safely with non-US keyboards

I use Fedora Workstation on my desktop computer for all my daily work, and Fedora Workstation is the only operating system that I ever recommend to others. But sometimes I like to try out other operating systems on my travel laptop just for a change of pace. In light of the recent announcement that Ubuntu is switching back to GNOME, I decided Ubuntu would be a good choice. If you’ll pardon the pun, this is the time for a show of unity between GNOME and Ubuntu, which is soon going to be our largest distributor by far, and that means we all ought to be more familiar with where Ubuntu users are coming from. And besides, this is a System76 laptop with an Ubuntu key on the keyboard, so it seems appropriate anyway.

I have just two constraints:

  • Must have full-disk encryption. Anything less is totally unacceptable.
  • Must have non-US keyboard layout for both installed system and encryption passphrase.

The problem is that Ubuntu’s installer asks for the disk encryption passphrase before allowing you to set the keyboard layout, and there seems to be no way to avoid this. If I type my passphrase before setting the keyboard layout, it obviously won’t work to unlock the installed system. The only workaround I could think of is to manually work out how to type my passphrase the first time on a US keyboard, but this is a huge pain. I have no trouble installing Ubuntu if I settle for home directory encryption, because the installer asks you to choose to encrypt your home directory after setting keyboard layout. But I don’t consider encrypting only the home directory to be acceptable. What a shame!

When I started writing this blog post, I thought all hope was lost and I’d just have to give up on Ubuntu, so I was writing this post to complain and hope against hope that somebody would fix it. But then I discovered that keyboard layout options are available from Unity’s top bar, in the top-right corner of the screen nestled between the Wi-Fi and Bluetooth menus. Just click on Text Entry Settings and you’ll be good to go. Pretty hidden, but it’s there. You’re welcome, Internet!

On “Insights On Companies/Developers Behind Wayland”

I recently read a peer-reviewed academic paper from a couple years ago that analyzed the contributions of different companies to WebKit. The authors didn’t bother to account for individuals using non-corporate email addresses, since that’s hard work, and did not realize that most Google developers contribute to the project using @chromium.org email addresses, resulting in Google’s contributions being massively undercounted. There were other serious mistakes in the paper too, but this is the one that came to mind when reading The FOSS Post’s article Insights On Companies/Developers Behind Wayland.

The FOSS Post didn’t bother to account for where some big developers work, incorrectly trusting that all employees use corporate emails when contributing to open source projects. It contains some interesting claims, like “Clearly, Samsung and the individual ‘Bryce Harrington’ are almost doing the same work [on Wayland build tools]” and “75% of the code [in libinput] is written by Peter Hutterer. Followed by 10% for a group of individuals and 5% by Red Hat.” I have only very passing familiarity with the Wayland project, but I do know that Bryce works for Samsung, and that Peter works for Red Hat. Suggesting that Red Hat contributed only 5% of the code to libinput, when the real number looks more like 80%, does not speak well of the quality of The FOSS Post’s insights. Also notably, Kristian Høgsberg’s massive contributions to the project were not classed as contributions from Intel, where he was working at the time.

You don’t have to be an expert on a community to take the time to account for people not using corporate emails before publishing an analysis, but you do need to understand the community you are analyzing at least somewhat before publishing such “insights.”

Update: The FOSS Post’s article was completely updated with new charts to address this issue.

Update #2: Jonas reports in the comments below that the charts are still completely wrong.

A Web Browser for Awesome People (Epiphany 3.24)

Are you using a sad web browser that integrates poorly with GNOME or elementary OS? Was your sad browser’s GNOME integration theme broken for most of the past year? Does that make you feel sad? Do you wish you were using an awesome web browser that feels right at home in your chosen desktop instead? If so, Epiphany 3.24 might be right for you. It will make you awesome. (Ask your doctor before switching to a new web browser. Results not guaranteed. May cause severe Internet addiction. Some content unsuitable for minors.)

Epiphany was already awesome before, but it just keeps getting better. Let’s look at some of the most noticeable new features in Epiphany 3.24.

You Can Load Webpages!

Yeah that’s a great start, right? But seriously: some people had trouble with this before, because it was not at all clear how to get to Epiphany’s address bar. If you were in the know, you knew all you had to do was click on the title box, then the address bar would appear. But if you weren’t in the know, you could be stuck. I made the executive decision that the title box would have to go unless we could find a way to solve the discoverability problem, and wound up following through on removing it. Now the address bar is always there at the top of the screen, just like in all those sad browsers. This is without a doubt our biggest user interface change:

[Screenshot: the address bar, now always visible at the top of the window. Caption: Discover GNOME 3! Discover the address bar!]

You Can Set a Homepage!

A very small subset of users have complained that Epiphany did not allow setting a homepage, something we removed several years back since it felt pretty outdated. While I’m confident that not many people want this, there’s not really any good reason not to allow it — it’s not like it’s a huge amount of code to maintain or anything — so you can now set a homepage in the preferences dialog, thanks to some work by Carlos García Campos and myself. Retro! Carlos has even added a home icon to the header bar, which appears when you have a homepage set. I honestly still don’t understand why having a homepage is useful, but I hope this allows a wider audience to enjoy Epiphany.

New Bookmarks Interface

There is now a new star icon in the address bar for bookmarking pages, and another new icon for viewing bookmarks. Iulian Radu gutted our old bookmarks system as part of his Google Summer of Code project last year, replacing our old and seriously-broken bookmarks dialog with something much, much nicer. (He also successfully completed a major refactoring of non-bookmarks code as part of his project. Thanks Iulian!) Take a look:

Manage Tons of Tabs

One of our biggest complaints was that it’s hard to manage a large number of tabs. I spent a few hours throwing together the cheapest-possible solution, and the result is actually pretty decent:

Firefox has an equivalent feature, but Chrome does not. Ours is not perfect, since unfortunately the menu is not scrollable, so it still fails if there is a sufficiently huge number of tabs. (This is actually surprisingly difficult to fix while keeping the menu a popover, so I’m considering switching it to a traditional non-popover menu as a workaround. Help welcome.) But it works great up until the point where the popover is too big to fit on your monitor.

Note that the New Tab button has been moved to the right side of the header bar when there is only one tab open, so it has less distance to travel to appear in the tab bar when there are multiple open tabs.

Improved Tracking Protection

I modified our adblocker — which has been enabled by default for years — to subscribe to the EasyPrivacy filters provided by EasyList. You can disable it in preferences if you need to, but I haven’t noticed any problems caused by it, so it’s enabled by default, not just in incognito mode. The goal is to compete with Firefox’s Disconnect feature. How well does it work compared to Disconnect? I have no clue! But EasyPrivacy felt like the natural solution, since we already have an adblocker that supports EasyList filters.

Disclaimer: tracking protection on the Web is probably a losing battle, and you absolutely must use the Tor Browser Bundle if you really need anonymity. (And no, configuring Epiphany to use Tor is not clever, it’s very dumb.) But EasyPrivacy will at least make life harder for trackers.

Insecure Password Form Warning

Recently, Firefox and Chrome have started displaying security warnings  on webpages that contain password forms but do not use HTTPS. Now, we do too:

I had a hard time selecting the text to use for the warning. I wanted to convey the near-certainty that the insecure communication is being intercepted, but I wound up using the word “cybercriminal” when it’s probably more likely that your password is being gobbled up by various  governments. Feel free to suggest changes for 3.26 in the comments.

New Search Engine Manager

Cedric Le Moigne spent a huge amount of time gutting our smart bookmarks code — which allowed adding custom search engines to the address bar dropdown in a convoluted manner that involved creating a bookmark and manually adding %s into its URL — and replacing it with a real search engine manager that’s much nicer than trying to add a search engine via bookmarks. Even better, you no longer have to drop down to the command line in order to change the default search engine to something other than DuckDuckGo, Google, or Bing. Yay!

New Icon

Jakub Steiner and Lapo Calamandrei created a great new high-resolution app icon for Epiphany, which makes its debut in 3.24. Take a look.

WebKitGTK+ 2.16

WebKitGTK+ 2.16 improvements are not really an Epiphany 3.24 feature, since users of older versions of Epiphany can and must upgrade to WebKitGTK+ 2.16 as well, but it contains some big improvements that affect Epiphany. (For example, Žan Doberšek landed an important fix for JavaScript garbage collection that has resulted in massive memory reductions in long-running web processes.) But sometimes WebKit improvements are necessary for implementing new Epiphany features. That was true this cycle more than ever. For example:

  • Carlos García added a new ephemeral mode API to WebKitGTK+, and modified Epiphany to use it in order to make incognito mode much more stable and robust, avoiding corner cases where your browsing data could be leaked on disk. (A rough sketch of this API appears after this list.)
  • Carlos García also added a new website data API to WebKitGTK+, and modified Epiphany to use it in the clear data dialog and cookies dialog. There are no user-visible changes in the cookies dialog, but the clear data dialog now exposes HTTP disk cache, HTML local storage, WebSQL, IndexedDB, and offline web application cache. In particular, local storage and the two databases can be thought of as “supercookies”: methods of storing arbitrary data on your computer for tracking purposes, which persist even when you clear your cookies. Unfortunately it’s still not possible to protect against this tracking, but at least you can view and delete it all now, which is not possible in Chrome or Firefox.
  • Sergio Villar Senin added new API to WebKitGTK+ to improve form detection, and modified Epiphany to use it so that it can now remember passwords on more websites. There’s still room for improvement here, but it’s a big step forward.
  • I added new API to WebKitGTK+ to improve how we handle giving websites permission to display notifications, and hooked it up in Epiphany. This fixes notification requests appearing inappropriately on websites like https://riot.im/app/.
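As promised above, here is a rough sketch of what the ephemeral mode API looks like from an application, written from Python against the introspected bindings. This is not Epiphany code, just an illustration, and it assumes the usual GObject Introspection mapping of the C constructors.

```python
import gi
gi.require_version("Gtk", "3.0")
gi.require_version("WebKit2", "4.0")
from gi.repository import Gtk, WebKit2

# An ephemeral web context keeps cookies, cache, local storage, and so on in
# memory only, so nothing browsed in this view is ever written to disk.
context = WebKit2.WebContext.new_ephemeral()
view = WebKit2.WebView.new_with_context(context)

window = Gtk.Window(title="Throwaway browsing")
window.set_default_size(800, 600)
window.add(view)
window.connect("destroy", Gtk.main_quit)
window.show_all()
view.load_uri("https://example.org/")
Gtk.main()
```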

Notice the pattern? When there’s something we need to do in Epiphany that requires changes in WebKit, we make it happen. This is a lot more work, but it’s better for both Epiphany and WebKit in the long run. Read more about WebKitGTK+ 2.16 on Carlos García’s blog.

Future Features

Unfortunately, a couple exciting Epiphany features we were working on did not make the cut for Epiphany 3.24. The first is Firefox Sync support. This was developed by Gabriel Ivașcu during his Google Summer of Code project last year, and it’s working fairly well, but there are still a few problems. First, our current Firefox Sync code is only able to sync bookmarks, but we really want it to sync much more before releasing the feature: history and open tabs at the least. Also, although it uses Mozilla’s sync server (please thank Mozilla for their quite liberal terms of service allowing this!), it’s not actually compatible with Firefox. You can sync your Epiphany bookmarks between different Epiphany browser instances using your Firefox account, which is great, but we expect users will be quite confused that their Epiphany bookmarks do not sync with their Firefox bookmarks, which are stored separately. Some things, like preferences, will never be possible to sync with Firefox, but we can surely share bookmarks. Gabriel is currently working to address these issues while participating in the Igalia Coding Experience program, and we’re hopeful that sync support will be ready for prime time in Epiphany 3.26.

Also missing is HTTPS Everywhere support. It’s mostly working properly, thanks to lots of hard work from Daniel Brendle (grindhold) who created the libhttpseverywhere library we use, but it breaks a few websites and is not really robust yet, so we need more time to get this properly integrated into Epiphany. The goal is to make sure outdated HTTPS Everywhere rulesets do not break websites by falling back automatically to use of plain, insecure HTTP when a load fails. This will be much less secure than upstream HTTPS Everywhere, but websites that care about security ought to be redirecting users to HTTPS automatically (and also enabling HSTS). Our use of HTTPS Everywhere will just be to gain a quick layer of protection against passive attackers. Otherwise, we would not be able to enable it by default, since the HTTPS Everywhere rulesets are just not reliable enough. Expect HTTPS Everywhere to land for Epiphany 3.26.
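To make the intended fallback behavior concrete, here is a rough sketch of the policy described above. It is not Epiphany’s actual code and the helper names are invented; it only illustrates the decision we want to make when a stale ruleset rewrite breaks a site.

```python
class LoadError(Exception):
    """Stand-in for a failed page load."""

def load_with_https_everywhere(load, rewrite, url):
    """Try the HTTPS Everywhere rewrite first; if the rewritten load fails
    (for example because the ruleset is outdated), quietly fall back to the
    original URL. `load` and `rewrite` are hypothetical helpers."""
    rewritten = rewrite(url)   # e.g. http://example.org -> https://example.org
    if rewritten != url:
        try:
            return load(rewritten)
        except LoadError:
            pass               # outdated ruleset broke the site
    return load(url)           # plain, insecure HTTP as a last resort
```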

Help Out

Are you a computer programmer? Found something less-than-perfect about Epiphany? We’re open for contributions, and would really appreciate it if you would try to fix that bug or add that feature instead of slinking back to using a less-awesome web browser. One frequently-requested feature is support for extensions. This is probably not going to happen anytime soon — we’d like to support WebExtensions, but that would be a huge effort — but if there’s some extension you miss from a sadder browser, ask if we’d allow building it into Epiphany as a regular feature. Replacements for popular extensions like NoScript and Greasemonkey would certainly be welcome.

Not a computer programmer? You can still help by reporting bugs on GNOME Bugzilla. If you have a crash to report, learn how to generate a good-quality stack trace so that we can try to fix it. I’ve credited many programmers for their work on Epiphany 3.24 up above, but programming work only gets us so far if we don’t know about bugs. I want to give a shout-out here to Hussam Al-Tayeb, who regularly built the latest code over the course of the 3.24 development cycle and found lots of problems for us to fix. This release would be much less awesome if not for his testing.

OK, I’m done typing stuff now. Onwards to 3.26!

On Problems with Vala

If you’re going to be writing a new application based on GNOME technologies and targeting the GNOME ecosystem, then you should seriously consider writing it in the Vala programming language.

That’s a pretty controversial statement! Emmanuele just told us that Vala is dying and that you should find an alternative. So, if I’m recommending that you start writing new applications in Vala, clearly I disagree with him at least somewhat. Even so, I actually think pretty much all of Emmanuele’s points are correct. Vala really is dying! The status quo really is pretty bad. Using a dying programming language to write your application is rarely a good idea. You should think twice before doing so.

Still, I wouldn’t be so quick to write off Vala. For one thing, it’s a pleasure to use. The design of the language is very good. The integration with GObject and the GNOME ecosystem, from GObject signals and properties to native support for D-Bus and composite GTK+ widget templates, is second to none, and will probably never be surpassed by another language. It’s hard to overstate how good the syntax of the language is, and how tailored it is for GNOME programming. People like Vala for good reasons.

Emmanuele says that it’s time to look at alternatives to Vala, but the alternatives we have to Vala right now have big problems too. If I were to start writing a new GNOME app today, Vala is still the language I would use. So now I have to try to convince you of that! First, let’s look at current problems with Vala in more detail. Then, let’s look into the alternatives we have available.

The problems with Vala are real and very serious, so I can only give it that qualified recommendation. The Vala community is slowly dying, and I would not recommend starting a big, complex application in Vala today, given the risk that the compiler might be completely unmaintained in a few years. But most GNOME applications are fairly small — only a few, like Builder or Evolution or Epiphany, are big and complex — and I think most will probably do well enough in the long run with Vala even if the Vala compiler stops improving.

Problems with Vala

Yeah, I’m afraid this is not going to be a short post. Let’s take this in two parts: first, common complaints that I don’t think are actually serious problems, and second, the actual serious problems.

Minor Problems with Vala

Let me start off by pointing out a couple things that I don’t consider to be serious problems with Vala: bindings issues and tooling issues.

People often complain that there are bugs with the bindings. And there are! Debugging bindings bugs is not fun at all. But I get the impression that bindings complaints are generally about the state of the bindings five years ago. The bindings situation is a lot better now than it was then, and it is constantly improving. Vala bindings actually are well-maintained (thanks mostly to Rico Tzschichholz); it’s only the compiler itself that is having maintenance problems. And most of the bindings problems are caused by introspection issues in the upstream libraries. That means that if you’re hitting a bindings problem in Vala, it’s probably a problem in every other language you might want to use as well… except C and C++, of course. And bindings issues are actually arguably far easier to debug in Vala than they would be in Python or JavaScript, since you can look for errors in the generated C code. Fixing bindings is generally easy, and you can work around the problems using a drop-in vapi file if you can’t wait to get the fix upstreamed. Adding new bindings is work if the library is not introspectable, but much easier than it is in other languages. No doubt programming would be nicer if bindings were not necessary, but unless you want to write everything in C or C++, bindings are a fact of life that won’t go away, and Vala’s are pretty darn good.

As far as tooling: it’s true that the Vala ecosystem does not have great tools, but I don’t think this is really a horrible problem. The most common complaint I see is that debugging requires looking at generated C code, and there’s no special Vala debugger. Now, in the case of crashes, it usually does indeed require looking at the generated code. But, in my experience, crashes are much, much rarer in applications that are written in Vala, so we’re going to be spending a lot less time in the debugger than we would when working on C applications anyway. And debugging the generated code with gdb isn’t so horrible. It’s hardly a great experience, but you get used to it. Be sure that you’re using valac -g to emit Vala line numbers into the generated code, otherwise you’re just making your life unnecessarily difficult. At any rate, gdb plus line numbers is the way to go here. Vala debugging is never going to be as simple as C or C++ debugging, but you’ll have to do less of it than you would in C or C++, and that’s a reasonable trade-off.

Another problem with Vala is that it suffers from the same safety issues as C and C++. You will make mistakes, and your mistakes will allow remote attackers to take control of your users’ computers. Vala doesn’t do anything to avoid buffer overflows, for instance. That’s pretty bad. But you will at least make fewer mistakes than you would in C or C++. For instance, I believe the language makes refcounting errors an order of magnitude less likely, drastically reducing the number of use-after-free vulnerabilities in your code. Compared to Rust or Python or JavaScript, this is not very good at all, but compared to C or C++, it’s excellent.

Major Problems with Vala

I see two serious problems with Vala. The first is that the compiler has bugs, and debugging compiler bugs is very unpleasant. The second is that the compiler is not well-maintained. Like Emmanuele says, the Vala community is dying, or, if you want to be generous, at least not in a very healthy state. So when you report compiler bugs, probably nobody is going to fix those bugs. This can be very frustrating.

Vala Bugs

I can confidently say that Vala has more bugs than any other programming language you might be considering using for GNOME development. It’s sad, but true. Most of the bugs are just small annoyances; for instance, bugs in which the Vala compiler emits C code that does not actually compile. These are usually easy to work around, but that can be pretty annoying. Other bugs are more serious. For instance, see “signal handler spuriously runs when signal is emitted by object not connected to once every 98 emissions” (which was fixed a few years ago, but is a good example of how Vala bugs can cause runtime problems) or “Incorrect choice of signal marshaller causes crash when promoting a pawn in GNOME Chess when built with Fedora or Debian hardening flags” (still broken).

Of course, bugs are tolerable if there is an active community of developers fixing them. But, as Emmanuele has already pointed out, that is not going so well.

Vala Maintainership and Community

Vala’s greatest strength — its focus on GNOME — is also its greatest weakness. Vala is not very interesting to anyone outside the GNOME and GTK+ development communities. Accordingly, the community of Vala developers and maintainers is several orders of magnitude smaller than those of other programming languages.

Relative to the fairly small size of the GNOME ecosystem, there are actually a very large number of Vala applications in existence. (All of elementary’s applications use Vala, for example.) So there is a relatively large number of Vala application maintainers with a stake in the success of the Vala project. But they’re mostly focused on developing their applications, not Vala itself. A programming language is probably not the greatest tool for any job if it requires that you participate in maintaining the compiler, after all. And the barrier for entry to Vala compiler development is high. For starters, compilers are difficult and complicated; working on a compiler is far more difficult than working on desktop applications. Moreover, of the people who are motivated to contribute to the compiler and submit a patch, most probably get discouraged pretty quickly, because most patches posted on Bugzilla do not get reviewed. There are currently 179 unreviewed patches in Vala’s request queue. The oldest patch there is 2,556 days old; that patch has been waiting for review for seven years. Any of those discouraged contributors might have eventually turned into Vala maintainers if only their patches were reviewed. Of course, most would not have, but if only one or two of the people who submitted patches were active Vala maintainers today, the project would be in a significantly better state. And I see patches there from a large number of different developers.

But who can review the patches? Vala needs more maintainers. Rico is taking good care of the bindings and appears to be committing patches to the compiler as well, but he’s just one person and can’t do it alone. Vala stakeholders need to increase investment in the compiler. But this is a familiar problem: the majority of our modules need more maintainers. Maintainers do not grow on trees. Ideally a company will step in to support Vala compiler development, but few companies seem to have taken an interest in Vala, so this doesn’t seem likely. This is unfortunate.

I frankly expect that Emmanuele’s prediction will prove true, and that the Vala situation will only get worse in the next five years. It’s more likely than not. But I’m not very confident in that guess! Several people have contributed significant patches to the Vala compiler recently. (Carlos Garnacho, you have earned much beer.) The future is still uncertain. I very much hope that my pessimistic expectation is proven wrong, and that the maintainership situation will improve soon.

But while the Vala compiler may stagnate, it’s probably not going to get worse. I think it’s good enough for writing GNOME applications today, and I expect it will still be good enough in five years.

Alternatives to Vala

So Vala is not in great state. What else can we use to write GNOME applications? The only serious programming languages in the GNOME ecosystem are C, C++, Vala, Python (using PyGObject), and JavaScript (using gjs). No, I did not miss any options. If your favorite language isn’t listed there, it’s either because (a) it doesn’t have decent GObject bindings, or (b) the language is not popular at all. To my knowledge, all GNOME software is written in one of those five languages, except for a couple old applications that use C#. And the state of C# in GNOME makes Vala look like an active vibrant language. If you want to start writing a GTK+ 2 app in 2017, go ahead and use C#. The rest of us will restrict our search to C, C++, Vala, Python, and JavaScript.

(Tangent time! Rust is trendy, but I’m told it needs more help to improve the GObject bindings before we start using it in applications. I’m hoping that it will emerge as the superior option in the not so distant future, but it’s definitely not ready for use in GNOME yet. It has to have better GObject integration. It has to have some degree of ABI stability, even if it’s limited. Dynamic linking has to be the default. It’s not going to be successful in the GNOME community otherwise. You should join the Rust folks and help out!)

Let’s start with C. C is undoubtedly the most popular language used in GNOME programming, but it would be flatly irresponsible to choose it for writing new applications. I enjoy writing C, but like everyone else, I make mistakes, and I think it would be desirable if my programming mistakes did not allow attackers to execute arbitrary code on your computer. It’s also extremely verbose, requiring far more lines of code to do simple things than the other programming languages that we’re considering do. C is not a reasonable option for new applications in 2017, even if it is the language you are most familiar with. I wouldn’t go so far as to say that our existing applications need to be rewritten in a safer language, because rewriting applications is hard and our developer community is small, but I certainly would not want to start writing any new applications in C. We need a C migration plan.

Modern C++ is a bit safer and much more pleasant to use than C, but that’s really not saying all that much. Footguns abound. You have to know all sorts of arcane rules to use it properly. The barrier for entry to new contributors is much higher than it is with C. Developers still make lots of mistakes, and those mistakes still allow remote attackers to take control of your users’ computers. So C++ is not a good choice for new applications either.

Python… OK, I suppose Python is pretty good, if you’re willing to give up compiler errors and static typing. I prefer to use a compiled language for writing serious software, because I make a lot of mistakes, and I’d rather the compiler catch those mistakes when possible than find out about them at runtime. So I would still prefer Vala. But if you prefer scripting languages, then Python is just fine, and doesn’t suffer from any of the disadvantages of Vala, and you should use it for your new app. Some developers have mentioned that there are some gotchas and interoperability issues with moving between Python APIs and GNOME APIs, but no programming environment is ever going to be perfect. PyGObject is good enough, and I’m pretty sure we’re going to be using it for a long time.
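For what it’s worth, here is the kind of thing PyGObject makes pleasant: a minimal (and entirely hypothetical) GTK+ 3 app with a GObject signal hooked up to a plain Python function.

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def on_clicked(button):
    # GObject signal handlers are just Python callables.
    button.set_label("Thanks!")

window = Gtk.Window(title="Hello from PyGObject")
button = Gtk.Button(label="Click me")
button.connect("clicked", on_clicked)
window.add(button)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()
```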

The last option is JavaScript. With all due respect to the gjs folks — and Philip Chimento in particular, who has been working hard at Endless to improve the JavaScript experience for GNOME developers — there’s no way to change the reality that JavaScript is a terrible language. It has close to zero redeeming features, and many confusing ones. You use it in web browsers because you have to, but for a desktop application, I have no clue why you would choose to use this over Python. We have to maintain gjs forever (for some value of “forever”) because GNOME Shell uses it, and it’s also being used by a couple apps like GNOME Weather and GNOME Documents. But it should be your last choice for a desktop application. Do not use JavaScript for new projects.

Another disadvantage of using JavaScript is that there is a huge barrier to entry for newcomers. But wait, lots of web developers are familiar with JavaScript; wasn’t the whole point of using it to lower the barrier of entry to newcomers? Well look how well that worked out for us! We have approximately zero new developers flocking to work on our JavaScript applications. The only documentation currently available online is over three years old, covers only a subset of the introspectable libraries that you want to use, and is frankly pretty bad. Unless opening gir files in a text editor and reading internal gjs unit tests to figure out how to call functions sounds like a good newcomer experience to you, then we need to steer far clear of JavaScript. The documentation situation is a fixable problem — Philip has much improved documentation that’s just waiting for hosting to materialize — but there’s no momentum to fix it right now, and the defects of the language can’t ever be fixed.

So all of the alternatives to Vala have big problems too, except maybe for Python, which is not a compiled language, which many of us would consider a serious disadvantage in itself. If you don’t want to use Vala, you have to pick one of the alternatives. So which will it be? I have no doubt that many or even most of our community places different weight on the various advantages and disadvantages of the languages. I actually expect mine is a minority opinion. But at the very least, I think I’ve shown why Vala still seems attractive to many developers.

(Note that the above analysis does not apply to libraries. You cannot write a system library in Python or JavaScript. You can do so with Vala or C++, but it requires special care. GNOME platform libraries must have a C API in order to be introspectable and useful.)

Conclusion

If you ignore its bugs and its maintainership status, Vala is by far the best language for writing GNOME applications. But those are pretty big things to ignore. I’d still use it anyway. It’s hard to overstate how pleasant it is to develop with. The most frequent complaints I see are about problems that I don’t actually consider very serious. I don’t know. I also don’t know what the language of GNOME’s future is, but I do know that we need to stop writing new applications in C, and until GObject integration for Rust is ready, Vala still seems like our best shot at that.

Who Maintains That Stuff?

If you use GNOME or Ubuntu, then GNOME Disks is probably what you rely on if you ever need to do any disk management operations, so it’s a relatively important piece of software for GNOME and Ubuntu users. Now if you’re a command line geek, you might handle disk management via command line, and that’s fine, but most users don’t know how to do that. Or if you’re living in the past like Ubuntu and not yet using Wayland, you might prefer GParted (which does not work under Wayland because it requires root permissions, while we intentionally will not allow applications to run as root in Wayland). But for anyone else, you’re probably using GNOME Disks. So it would be good for it to work reliably, and for it to be relatively free of bugs.

I regularly receive new bug reports against GNOME Disks. Sometimes they’re not very well-constructed or based on some misunderstanding of how partitioning works, in which case I’ll close them, but most of them are good and valid. So who fixes bug reports against GNOME Disks? The answer is: nobody! Unless it’s really, really easy — in which case I might allocate five minutes for it — nobody is going to fix the bug that you reported. What a shame!

Who is the maintainer? In this case, it’s me, but I don’t actually know much of anything about the application and certainly don’t have time to fix things; I just check Bugzilla to see if anybody has posted a patch, so that contributors’ patches (which are rare) don’t get totally neglected, and make new releases every once in a while. I do even that much only because I didn’t want to see such a critical piece of software go completely unmaintained.

If you’re a software developer with an interest in both GNOME and disk management, GNOME Disks would be a great place to help out. A great place to start would be to search through GNOME Bugzilla for issues to work on, and submit patches for them.

Of course, Disks is far from the only unmaintained or undermaintained software in GNOME. Last year, Sébastien set up a wiki page to track unmaintained and undermaintained apps. It has had some success: in that time, GNOME Calculator, Shotwell, Gtranslator, and Geary have all found maintainers and been removed from the list of unmaintained modules. (Geary is still listed as undermaintained, and no doubt it would be nice to have more Geary maintainers, but the current maintainer seems to be quite active, so I would hesitate to list it as undermaintained. Epiphany would love to have a second maintainer as well. No doubt most GNOME apps would.)

But we still have a few apps that are listed as unmaintained:

  • Bijiben (GNOME Notes)
  • Empathy
  • GNOME Disks

No doubt there are more GNOME modules that should be listed. If you know of some, please add them or leave a comment here.

Help would be very much welcome with any of these. In particular, Empathy and Bijiben are both slated to be removed from Fedora beginning with Fedora 27 due to their unacceptable dependencies on an old, insecure version of WebKitGTK+ that is about to be removed from the distribution. Most of the work to port these applications to modern WebKitGTK+ is already done (and, in the case of Empathy, I’ve already committed the port to git), but an active maintainer is required to finish the job and get things to a releasable state. Last I checked, Bijiben also still needed to be ported to GTK+ 3.20. If nobody is interested in helping out, these apps are going to disappear sooner rather than later.

Disks, fortunately, is not going to disappear anytime soon. But the bugs aren’t going to fix themselves.

P.S. This blog is not the right place to complain about no longer allowing applications to run as root. Such applications can and should use Polkit to move privileged operations out of the GUI and into a helper process. This should have been done roughly a decade ago. Such applications might themselves be unmaintained or undermaintained; can you help them out?

An Update on WebKit Security Updates

One year ago, I wrote a blog post about WebKit security updates that attracted a fair amount of attention at the time. For a full understanding of the situation, you really have to read the whole thing, but the most important point was that, while WebKitGTK+ — one of the two WebKit ports present in Linux distributions — was regularly releasing upstream security updates, most Linux distributions were ignoring the updates, leaving users vulnerable to various security bugs, mainly of the remote code execution variety. At the time of that blog post, only Arch Linux and Fedora were regularly releasing WebKitGTK+ updates, and Fedora had only very recently begun doing so comprehensively.

Progress report!

So how have things changed in the past year? The best way to see this is to look at the versions of WebKitGTK+ in currently-supported distributions. The latest version of WebKitGTK+ is 2.14.3, which fixes 13 known security issues present in 2.14.2. Do users of the most popular Linux operating systems have the fixes?

  • Fedora users are good. Both Fedora 24 and Fedora 25 have the latest version, 2.14.3.
  • If you use Arch, you know you always have the latest stuff.
  • Ubuntu users rejoice: 2.14.3 updates have been released to users of both Ubuntu 16.04 and 16.10. I’m very  pleased that Ubuntu has decided to take my advice and make an exception to its usual stable release update policy to ensure its users have a secure version of WebKit. I can’t give Ubuntu an A grade here because the updates tend to lag behind upstream by several months, but slow updates are much better than no updates, so this is undoubtedly a huge improvement. (Anyway, it’s hardly a bad idea to be cautious when releasing a big update with high regression potential, as is unfortunately the case with even stable WebKit updates.) But if you use the still-supported Ubuntu 14.04 or 12.04, be aware that these versions of Ubuntu cannot ever update WebKit, as it would require a switch to WebKit2, a major API change.
  • Debian does not update WebKit as a matter of policy. The latest release, Debian 8.7, is still shipping WebKitGTK+ 2.6.2. I count 184 known vulnerabilities affecting it, though that’s an overcount as we did not exclude some Mac-specific security issues from the 2015 security advisories. (Shipping ancient WebKit is not just a security problem, but a user experience problem too. Actually attempting to browse the web with WebKitGTK+ 2.6.2 is quite painful due to bugs that were fixed years ago, so please don’t try to pretend it’s “stable.”) Note that a secure version of WebKitGTK+ is available for those in the know via the backports repository, but this does no good for users who trust Debian to provide them with security updates by default without requiring difficult configuration. Debian testing users also currently have the latest 2.14.3, but you will need to switch to Debian unstable to get security updates for the foreseeable future, as testing is about to freeze.
  • For openSUSE users, only Tumbleweed has the latest version of WebKit. The current stable release, Leap 42.2, ships with WebKitGTK+ 2.12.5, which is coincidentally affected by exactly 42 known vulnerabilities. (I swear I am not making this up.) The previous stable release, Leap 42.1, originally shipped with WebKitGTK+ 2.8.5 and was later updated to 2.10.7, but never past that. It is affected by 65 known vulnerabilities. (Note: I have to disclose that I told openSUSE I’d try to help out with that update, but never actually did. Sorry!) openSUSE has it a bit harder than other distros because it has decided to use SUSE Linux Enterprise as the source for its GCC package, meaning it’s stuck on GCC 4.8 for the foreseeable future, while WebKit requires GCC 4.9. Still, this is only a build-time requirement; it’s not as if it would be impossible to build with Clang instead, or a custom version of GCC. I would expect WebKit updates to be provided to both currently-supported Leap releases.
  • Gentoo has the latest version of WebKitGTK+, but only in testing. The latest version marked stable is 2.12.5, so this is a serious problem if you’re following Gentoo’s stable channel.
  • Mageia has been updating WebKit and released a couple security advisories for Mageia 5, but it seems to be stuck on 2.12.4, which is disappointing, especially since 2.12.5 is a fairly small update. The problem here does not seem to be lack of upstream release monitoring, but rather lack of manpower to prepare the updates, which is a typical problem for small distros.
  • The enterprise distros from Red Hat, Oracle, and SUSE do not provide any WebKit security updates. They suffer from the same problem as Ubuntu’s old LTS releases: the WebKit2 API change  makes updating impossible. See my previous blog post if you want to learn more about that. (SUSE actually does have WebKitGTK+ 2.12.5 as well, but… yeah, 42.)

So results are clearly mixed. Some distros are clearly doing well, and others are struggling, and Debian is Debian. Still, the situation on the whole seems to be much better than it was one year ago. Most importantly, Ubuntu’s decision to start updating WebKitGTK+ means the vast majority of Linux users are now receiving updates. Thanks Ubuntu!

To arrive at the above vulnerability totals, I just counted up the CVEs listed in WebKitGTK+ Security Advisories, so please do double-check my counting if you want. The upstream security advisories themselves are worth mentioning, as we have only been releasing these for two years now, and the first year was pretty rough when we lost our original security contact at Apple shortly after releasing the first advisory: you can see there were only two advisories in all of 2015, and the second one was huge as a result of that. But 2016 seems to have gone decently well. WebKitGTK+ has normally been releasing most security fixes even before Apple does, though the actual advisories and a few remaining fixes normally lag behind Apple by roughly a month or so. Big thanks to my colleagues at Igalia who handle this work.
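If you want to double-check the totals, the counting really is as unsophisticated as it sounds. Something along these lines, pointed at the published advisory texts (the file names are up to you), is all it takes:

```python
import re
import sys

# Collect the unique CVE identifiers mentioned across the advisory files
# passed on the command line.
cves = set()
for path in sys.argv[1:]:
    with open(path, encoding="utf-8") as f:
        cves.update(re.findall(r"CVE-\d{4}-\d{4,}", f.read()))

print(f"{len(cves)} unique CVEs")
```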

Challenges ahead

There are still some pretty big problems remaining!

First of all, the distributions that still aren’t releasing regular WebKit updates should start doing so.

Next, we have to do something about QtWebKit, the other big WebKit port for Linux, which stopped receiving security updates in 2013 after the Qt developers decided to abandon the project. The good news is that Konstantin Tokarev has been working on a QtWebKit fork based on WebKitGTK+ 2.12, which is almost (but not quite yet) ready for use in distributions. I hope we are able to switch to use his project as the new upstream for QtWebKit in Fedora 26, and I’d encourage other distros to follow along. WebKitGTK+ 2.12 does still suffer from those 42 vulnerabilities, but this will be a big improvement nevertheless and an important stepping stone for a subsequent release based on the latest version of WebKitGTK+. (Yes, QtWebKit will be a downstream of WebKitGTK+. No, it will not use GTK+. It will work out fine!)

It’s also time to get rid of the old WebKitGTK+ 2.4 (“WebKit1”), which all distributions currently parallel-install alongside modern WebKitGTK+ (“WebKit2”). It’s very unfortunate that a large number of applications still depend on WebKitGTK+ 2.4 — I count 41 such packages in Fedora — but this old version of WebKit is affected by over 200 known vulnerabilities and really has to go sooner rather than later. We’ve agreed to remove WebKitGTK+ 2.4 and its dependencies from Fedora rawhide right after Fedora 26 is branched next month, so they will no longer be present in Fedora 27 (targeted for release in November). That’s bad for you if you use any of the affected applications, but fortunately most of the remaining unported applications are not very important or well-known; the most notable ones that are unlikely to be ported in time are GnuCash (which won’t make our deadline) and Empathy (which is ported in git master, but is not currently in a  releasable state; help wanted!). I encourage other distributions to follow our lead here in setting a deadline for removal. The alternative is to leave WebKitGTK+ 2.4 around until no more applications are using it. Distros that opt for this approach should be prepared to be stuck with it for the next 10 years or so, as the remaining applications are realistically not likely to be ported so long as zombie WebKitGTK+ 2.4 remains available.

These are surmountable problems, but they require action by downstream distributions. No doubt some distributions will be more successful than others, but hopefully many distributions will be able to fix these problems in 2017. We shall see!

On Epiphany Security Updates and Stable Branches

One of the advantages of maintaining a web browser based on WebKit, like Epiphany, is that the vast majority of complexity is contained within WebKit. Epiphany itself doesn’t have any code for HTML parsing or rendering, multimedia playback, or JavaScript execution, or anything else that’s actually related to displaying web pages: all of the hard stuff is handled by WebKit. That means almost all of the security problems exist in WebKit’s code and not Epiphany’s code. While WebKit has been affected by over 200 CVEs in the past two years, and those issues do affect Epiphany, I believe nobody has reported a security issue in Epiphany’s code during that time. I’m sure a large part of that is simply because only the bad guys are looking, but the attack surface really is much, much smaller than that of WebKit. To my knowledge, the last time we fixed a security issue that affected a stable version of Epiphany was 2014.

Well, that streak has unfortunately ended; you need to make sure to update to Epiphany 3.22.6, 3.20.7, or 3.18.11 as soon as possible (or Epiphany 3.23.5 if you’re testing our unstable series). If your distribution is not already preparing an update, insist that it do so. I’m not planning to discuss the embarrassing issue here (you can check the bug report if you’re interested); instead, I want to discuss why I made new releases on three different branches. That’s quite unlike how we handle WebKitGTK+ updates! Distributions must always update to the very latest version of WebKitGTK+, as it is not practical to backport dozens of WebKit security fixes to older versions of WebKit. This is rarely a problem, because WebKitGTK+ has a strict policy dictating when it’s acceptable to require new versions of runtime dependencies, designed to ensure roughly three years of WebKit updates without the need to upgrade any of its dependencies. But new major versions of Epiphany are usually incompatible with older releases of system libraries like GTK+, so it’s not practical or expected for distributions to update to new major versions.

My current working policy is to support three stable branches at once: the latest stable release (currently Epiphany 3.22), the previous stable release (currently Epiphany 3.20), and an LTS branch defined by whatever’s currently in Ubuntu LTS and elementary OS (currently Epiphany 3.18). It was nice of elementary OS to make Epiphany its default web browser, and I would hardly want to make it difficult for its users to receive updates.

Three branches can be annoying at times, and it’s a lot more than is typical for a GNOME application, but a web browser is not a typical application. For better or for worse, the majority of our users are going to be stuck on Epiphany 3.18 for a long time, and it would be a shame to leave them completely without updates. That said, the 3.18 and 3.20 branches are very stable and only getting bugfixes and occasional releases for the most serious issues. In contrast, I try to backport all significant bugfixes to the 3.22 branch and do a new release every month or thereabouts.

So that’s why I just released another update for Epiphany 3.18, which was originally released in September 2015. Compare this to the long-term support policies of Chrome (which supports only the latest version of the browser, and only for six weeks) or Firefox (which provides nine months of support for an ESR release), and I think we compare quite favorably. (A stable WebKit series like 2.14 is only supported for six months, but that’s comparable to Firefox.) Not bad?