Epiphany 3.20

So, what’s new in Epiphany 3.20?

First off: overlay scrollbars. Because web sites have the ability to style their scrollbars (which you’ve probably noticed on Google sites), WebKit embedders cannot use a normal GtkScrolledWindow to display content; instead, WebKit has to paint the scrollbars itself. Hence, when overlay scrollbars appeared in GTK+ 3.16, WebKit applications were left out. Carlos García Campos spent some time working on this, and the result speaks for itself (fullscreen the video to see it properly):

Overlay scrollbars did not actually require any changes in Epiphany itself — all applications using an up-to-date version of WebKit will immediately benefit — but I mention it here as it’s one of the most noticeable changes. Read about other WebKit improvements, like the new Faster Than Light (FTL/B3) JavaScript compilation tier, on Carlos’s blog.

Next up, there is a new downloads manager, also by Carlos García Campos. This replaces the old downloads bar that used to appear at the bottom of the screen:

Screenshot of the new downloads manager in Epiphany 3.20.

I flipped the switch in Epiphany to enable WebGL:

If you watched that video in fullscreen, you might have noticed that the page is marked as insecure, simply because it doesn’t use HTTPS. Like most browsers, we used to have several confusing security states. Pages with mixed content received a security warning that all users ignored, but pages with no security at all received no such warning. That’s pretty dumb, which is why Firefox and Chrome have been talking about changing this for a year or so now. I went ahead and implemented it. We now have exactly two security states: secure and insecure. If your page loads any content not over HTTPS, it will be marked as insecure. The vast majority of pages will be displayed as insecure, but it’s no less than such sites deserve. I’m not concerned at all about “warning fatigue,” because users are not generally expected to take any action on seeing these warnings. In the future, we will take this further, and use the insecure indicator for sites that use SHA-1 certificates.
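
Under the hood, deciding between those two states with the WebKit2GTK+ API does not take much code. Here is a rough sketch of the idea — not Epiphany’s actual implementation, and the helper names are mine:

#include <webkit2/webkit2.h>

static gboolean
is_displaying_secure_page (WebKitWebView *web_view)
{
  GTlsCertificate *certificate;
  GTlsCertificateFlags errors;
  const char *uri = webkit_web_view_get_uri (web_view);

  /* Anything that is not HTTPS is insecure, full stop. */
  if (uri == NULL || !g_str_has_prefix (uri, "https://"))
    return FALSE;

  /* No TLS info, or a certificate with validation errors: insecure. */
  if (!webkit_web_view_get_tls_info (web_view, &certificate, &errors))
    return FALSE;

  return errors == 0;
}

/* Mixed content demotes the page to insecure as well; connect this with
 * g_signal_connect (web_view, "insecure-content-detected", ...). */
static void
insecure_content_detected_cb (WebKitWebView             *web_view,
                              WebKitInsecureContentEvent event,
                              gpointer                   user_data)
{
  /* Flip the address bar indicator to the insecure state here. */
}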

Moving on. By popular request, I exposed the previously-hidden setting to disable session restore in the preferences dialog, as “Remember previous tabs on startup:”

Screenshot of the preferences dialog, with the new "Remember previous tabs on startup" setting.

Meanwhile, Carlos worked in both WebKit and Epiphany to greatly improve session restoration. Previously, Epiphany would save only the URLs of the pages loaded in each tab, and on startup it would load each URL in a new tab; those tabs had no back/forward history, and the rest of their state was lost. Carlos worked on serializing the WebKit session state and exposing it in the WebKitGTK+ API, allowing us to restore full back/forward history for each tab, plus details like your scroll position. Thanks to Carlos, we also use this functionality when reopening closed tabs, so a reopened tab has its full back/forward history, and when opening new tabs, so a new tab inherits the history of the tab it was opened from (a feature we had in the past, but lost when we switched to WebKit2).
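
If you’re curious what the new WebKitGTK+ 2.12 API looks like, here is a rough sketch of how an application might use it. This is a hedged illustration based on my reading of the API, not Epiphany’s actual code:

#include <webkit2/webkit2.h>

/* On shutdown: serialize a tab's full back/forward history and state. */
static GBytes *
save_tab_state (WebKitWebView *web_view)
{
  WebKitWebViewSessionState *state = webkit_web_view_get_session_state (web_view);
  GBytes *data = webkit_web_view_session_state_serialize (state);

  webkit_web_view_session_state_unref (state);
  return data; /* write this blob to the session file */
}

/* On startup: restore the state, then load the page the tab was on. */
static void
restore_tab_state (WebKitWebView *web_view, GBytes *data)
{
  WebKitWebViewSessionState *state = webkit_web_view_session_state_new (data);
  WebKitBackForwardListItem *item;

  webkit_web_view_restore_session_state (web_view, state);
  webkit_web_view_session_state_unref (state);

  item = webkit_back_forward_list_get_current_item (
      webkit_web_view_get_back_forward_list (web_view));
  if (item != NULL)
    webkit_web_view_go_to_back_forward_list_item (web_view, item);
}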

Interestingly, we found the session restoration was at first too good: it would restore the page exactly as you last viewed it, without refreshing the content at all. This means that if, for example, you were viewing a page in Bugzilla, then after restarting the browser you would miss any comments posted since you last loaded the page, until you refreshed it manually. This is actually the current behavior in Safari; it’s desirable on iOS, where the browser must launch instantly, but questionable for desktop Safari. Carlos decided to always refresh the page content when restoring the session for WebKitGTK+.

Last, and perhaps least, there’s a new empty state displayed for new users, developed by Lorenzo Tilve and polished up by me, so that we don’t greet new users with a completely empty overview (where your most-visited sites are normally displayed):

Empty State

That, plus a bundle of the usual bugfixes, significant code cleanups, and internal architectural improvements (e.g. I converted the communication between the UI process and the web process extension to use private D-Bus connections instead of the session bus). The best things have not changed: it still starts up about 5-20 times faster than Firefox in my unscientific testing; I expect you’ll find similar results.
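
(For the curious, the general technique behind that last parenthetical looks roughly like the GDBus sketch below. This illustrates the approach, not Epiphany’s actual code, and the address handling is simplified.)

#include <gio/gio.h>

/* UI process: listen on a private socket rather than owning a name on
 * the session bus. */
static GDBusServer *
start_private_dbus_server (void)
{
  GError *error = NULL;
  char *guid = g_dbus_generate_guid ();
  GDBusServer *server = g_dbus_server_new_sync ("unix:tmpdir=/tmp",
                                                G_DBUS_SERVER_FLAGS_NONE,
                                                guid, NULL, NULL, &error);

  g_free (guid);
  if (server == NULL)
    g_error ("Failed to start D-Bus server: %s", error->message);

  /* Handle the "new-connection" signal to talk to each web extension, and
   * hand g_dbus_server_get_client_address (server) to the extension, e.g.
   * via webkit_web_context_set_web_extensions_initialization_user_data(). */
  g_dbus_server_start (server);
  return server;
}

/* Web process extension: connect directly to that address. */
static GDBusConnection *
connect_to_ui_process (const char *address)
{
  GError *error = NULL;
  GDBusConnection *connection =
      g_dbus_connection_new_for_address_sync (address,
                                              G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT,
                                              NULL, NULL, &error);

  if (connection == NULL)
    g_error ("Failed to connect to UI process: %s", error->message);
  return connection;
}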

Enjoy!

Do you trust this package?

Your distribution’s package manager probably uses GPG signature checking to provide an extremely strong guarantee that the software packages you download have not been maliciously modified by a man in the middle (MITM) attacker when traveling over the Internet from your distribution to you. Smaller distros might have no such infrastructure in place (these distros are not safe to use), but for most major distros, a MITM attack between your distribution and your computer would be very difficult to pull off once your distribution has been installed. (Installing a distribution for the first time is another matter.)

But what guarantee is there that no MITM attacker compromised the tarballs when they were downloaded from upstream by a distro package maintainer? If you think distro package maintainers bother with silly things like GPG signature checking when downloading tarballs, then I regret to inform you that Santa is not real, and your old pet is not on vacation; it is dead.

HTTPS is far from perfect, but it’s much better than no HTTPS, and it is the only effective way to secure packages between upstreams and distributions. Now for an easy game: find an important free software package that is distributed upstream without using HTTPS. Don’t bother with small desktop software; focus on big-name stuff. You have a one-minute time limit, because this game would be too easy otherwise. Ready, set, go.

Done? Think about how many different ways exist for an attacker to insert arbitrary code into the tarball you found. HTTPS makes these attacks far more difficult. Webmasters, please take a few minutes to secure your site with HTTPS and HSTS.

Do you trust this application?

Much of the software you use is riddled with security vulnerabilities. Anyone who reads Matthew Garrett knows that most proprietary software is a lost cause. Some Linux advocates claim that free software is more secure than proprietary software, but it’s an open secret that tons of popular desktop Linux applications have many known, unfixed vulnerabilities. I rarely see anybody discuss this, as if it’s taboo, but it’s been obvious to me for a long time.

Usually vulnerabilities go unreported simply because nobody cares to look. Here’s an easy game: pick any application that makes HTTP connections — anything stuck on an old version of WebKit is a good place to start — and look for the following basic vulnerabilities (a sketch of what proper certificate verification looks like follows the list):

  • Failure to use TLS when required (GNOME Music, GNOME Weather; note these are the only apps I mention here that do not use WebKit). This means the application has no security.
  • Failure to perform TLS certificate verification (Shotwell and Pantheon Photos). This means the application has no security against active attackers.
  • Failure to perform TLS certificate verification on subresources (Midori and Xombrero, Liferea). As sites usually send JavaScript in subresources, this means active attackers can get total control of the page by changing the script, without being detected (update: provided JavaScript is enabled). (Regrettably, Epiphany prior to 3.14.0 was also affected by this issue.)
  • Failure to perform TLS certificate verification before sending HTTP headers (private Midori bug, Banshee). This leaks secure cookies, usually allowing attackers full access to your user account on a website. It also leaks the page you’re visiting, which HTTPS is supposed to keep private. (Update: Regrettably, Epiphany prior to 3.14.0 was affected by this issue. Also, the WebKit 2 API in WebKitGTK+ prior to 2.6.6, CVE-2015-2330.)
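
For contrast, here is roughly what the correct baseline looks like for an application using libsoup 2.x: strict certificate verification against the system trust store, and a check that the certificate was actually trusted before using the response. This is a minimal sketch, not code from any of the applications above, and exact defaults vary by libsoup version.

#include <libsoup/soup.h>

static gboolean
fetch_securely (const char *uri) /* expects an https:// URI */
{
  /* ssl-strict makes the request fail, rather than proceed, on an invalid
   * certificate; ssl-use-system-ca-file verifies against the system store. */
  SoupSession *session =
      soup_session_new_with_options (SOUP_SESSION_SSL_USE_SYSTEM_CA_FILE, TRUE,
                                     SOUP_SESSION_SSL_STRICT, TRUE,
                                     NULL);
  SoupMessage *msg = soup_message_new ("GET", uri);
  guint status = soup_session_send_message (session, msg);

  /* Belt and suspenders: even on success, check that the certificate was
   * actually trusted before trusting the response. */
  gboolean trusted =
      (soup_message_get_flags (msg) & SOUP_MESSAGE_CERTIFICATE_TRUSTED) != 0;

  g_object_unref (msg);
  g_object_unref (session);
  return SOUP_STATUS_IS_SUCCESSFUL (status) && trusted;
}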

Except where noted, the latest releases of all the applications listed above are still vulnerable at the time of this writing, even though almost all of these bugs were reported long ago. With the exception of Shotwell, nobody has fixed any of these issues. Perhaps nobody working on the project cares to fix it, or perhaps nobody working on the project has the time or expertise to fix it, or perhaps nobody is working on the project anymore at all. This is all common in free software.

In the case of Shotwell, the issue has been fixed in git, but it might never be released because nobody works on Shotwell anymore. I informed distributors of the Shotwell vulnerability three months ago via the GNOME distributor list, our official mechanism for communicating with distributions, and advised them to update to a git snapshot. Most distributions ignored it. This is completely typical; to my knowledge, the stable releases of all Linux distributions except Fedora are still vulnerable.

If you want to play the above game, it should be very easy for you to add to my list by checking only popular desktop software. A good place to start would be to check if Liferea or Xombrero (supposedly a security-focused browser) perform TLS certificate verification before sending HTTP headers, or if Banshee performs verification on subresources, on the principle that vulnerable applications probably have other related vulnerabilities. (I did not bother to check.)

On a related note, many applications use insecure dependencies. Tons of popular GTK+ applications are stuck on an old, deprecated version of WebKitGTK+, for example. Many popular KDE applications use QtWebKit, which is old and deprecated. These deprecated versions of WebKit suffer from well over 100 remote code execution vulnerabilities fixed upstream that will probably never be backported. (100 is a lowball estimate; I would be unsurprised if the real number for QtWebKit was much, much higher.)

I do not claim that proprietary software is generally more secure than free software, because that is absolutely not true. Proprietary software vendors, including big name corporations that you might think would know better, are still churning out consumer products based on QtWebKit, for example. (This is unethical, but most proprietary software vendors do not care about security.) Not that it matters too much, as proprietary software vendors rarely provide comprehensive security updates anyway. (If your Android phone still gets updates, guess what: they’re superficial.) A few prominent proprietary software vendors really do care about security and do good work to keep their users safe, but they are rare exceptions, not the rule.

It’s a shame we’re not able to do better with free software.

Do you trust this website?

TLS certificate validation errors are much less common on today’s Internet than they used to be, but you can still expect to run into them from time to time. Thanks to a decade of poor user interface decisions by web browsers (only very recently fixed in major browsers), users do not understand TLS and think it’s OK to bypass certificate warnings if they trust the site in question.

This is completely backwards. You should only bypass the warning if you do not trust the site.

The TLS certificate does not exist to state that the site is somehow trustworthy. It exists only to state that the site is the site you think it is: to ensure there is no man in the middle (MITM) attacker. If you are visiting https://www.example.com and get a certificate validation error, that means that even though your browser is displaying the URL https://www.example.com, there’s zero reason to believe you’re really visiting https://www.example.com rather than an attack site. Your browser can tell the difference, and it’s warning you. (More often, the site is just broken, or “misconfigured” if you want to be generous, but you and your browser have no way to know that.)

If you do not trust the site in question (e.g. you do not have any user account on the site), then there is not actually any harm in bypassing the warning. You don’t trust the site, so you do not care if a MITM is changing the page, recording your passwords, sending fake data to the site in your name, or whatever else.

But if you do trust the site, this error is cause to freak out and not continue, because it gives you strong reason to believe there is a MITM attacker. Once you click continue, you should assume the MITM has total control over your interaction with the trusted website.

I will pick on Midori for an example of how bad design can confuse users:

The button label reads "Trust this website," but it should read "I do not trust this website."
The button label reads “Trust this website,” but it should read “I do not trust this website.”

As you can see from the label, Midori has this very wrong. Users are misled into continuing if they trust the website: the very situation in which it is unsafe to continue.

Firefox and Chrome handle this much better nowadays, but not perfectly. Firefox says “Your connection is not secure” while Chrome says “Your connection is not private.” It would be better to say: “This doesn’t look like the real www.example.com.”

On Subresource Certificate Validation

Ryan Castellucci has a quick read on subresource certificate validation. It is accurate; I fixed this shortly after joining Igalia. (Update: This was actually in response to a bug report from him.) Run his test to see if your browser is vulnerable.

Epiphany, Xombrero, Opera Mini and Midori […] were loading subresources, such as scripts, from HTTPS servers without doing proper certificate validation. […] Unfortunately Xombrero and Midori are still vulnerable. Xombrero seems to be dead, and I’ve gotten no response from them. I’ve been in touch with Midori, but they say they don’t have the resources to fix it, since it would require rewriting large portions of the code base in order to be able to use the fixed webkit.

I reported this to the Midori developers in late 2014 (private bug). It’s hard to overstate how bad this is: it makes HTTPS completely worthless, because an attacker can silently modify JavaScript loaded via subresources.

This is actually a unique case in that it’s a security problem that was fixed only thanks to the great API break, which has otherwise been the cause of many security problems. Thanks to the API break, we were able to make the new API secure by default without breaking any existing applications. (But this does no good for applications unable to upgrade.)
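
Concretely, “secure by default” means something like the following for a WebKit2GTK+ application: certificate errors cause the load to fail unless the application explicitly opts into the old behavior. (A sketch based on my understanding of the API; the policy functions are real WebKit2GTK+ API, the helper is mine.)

#include <webkit2/webkit2.h>

static void
configure_tls_policy (WebKitWebContext *context)
{
  /* Fail loads on certificate errors instead of silently proceeding. A
   * browser can then handle "load-failed-with-tls-errors" on the web view
   * to show its own interstitial warning. */
  webkit_web_context_set_tls_errors_policy (context,
                                            WEBKIT_TLS_ERRORS_POLICY_FAIL);

  /* The unsafe legacy behavior now requires an explicit request:
   * webkit_web_context_set_tls_errors_policy (context,
   *                                           WEBKIT_TLS_ERRORS_POLICY_IGNORE);
   */
}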

(A note to folks who read Ryan’s post: most mainstream browsers do silently block invalid certificates, but Safari will warn instead. I’m not sure which behavior I prefer.)

Stop using RC4

A follow-up to my previous post: in response to my letter, NIST is going to increase the CVSS score of CVE-2013-2566 (RC4) to match CVE-2011-3389 (BEAST). Yay!

In other news, WebKitGTK+ 2.8 has full support for RFC 7465. That’s a fancy way of saying that we will no longer negotiate RC4 connections and you will now be unable to access the small minority of HTTPS sites that offer nothing but RC4. Hopefully other browsers will follow along sooner rather than later. In particular, Firefox nightly has stopped negotiating RC4 except for a few whitelisted sites: I would very much like to see that whitelist removed. Internet Explorer has stopped negotiating RC4 except when it performs voluntary protocol version fallback. It would be great to see a firmer stance from Mozilla and Microsoft, and some action from Google and Apple.
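
WebKitGTK+ gets its TLS from glib-networking, which sits on top of GnuTLS, and at that level refusing to negotiate RC4 essentially comes down to a priority string. The following is only an illustrative sketch of the mechanism, not the actual glib-networking change:

#include <stdio.h>
#include <gnutls/gnutls.h>

static int
disable_rc4 (gnutls_session_t session)
{
  /* "NORMAL" minus the RC4 (ARCFOUR) ciphersuites: a server offering
   * nothing but RC4 will simply fail the handshake. */
  const char *err_pos = NULL;
  int ret = gnutls_priority_set_direct (session, "NORMAL:-ARCFOUR-128", &err_pos);

  if (ret != GNUTLS_E_SUCCESS)
    fprintf (stderr, "Invalid priority string near: %s\n",
             err_pos != NULL ? err_pos : "(unknown)");
  return ret;
}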

RC4 vs. BEAST: which is worse?

RFC 7465 has been published, and in a perfect world it would spell doom for the use of RC4 in TLS. But, spoiler alert, the theme of this blog is that there are tons of problems with TLS that your browser either cannot or willfully will not protect you against — major browser vendors love nothing more than sacrificing your security in the name of compatibility with lousy servers — so it’s too soon for optimism.

This guy who sounds like he knows what he’s talking about and who I’ve blindly decided to trust says that PCI-compliant sites must disable CBC-based block ciphers so that they’re not vulnerable to the BEAST attack against TLS 1.0. But CBC is the only mode for block ciphers that provides a reasonable level of security in TLS 1.0, so these servers are limited to negotiating only stream ciphers. And RC4 is the only stream cipher in TLS, so that’s the only thing these poor servers are left with. But nobody is actually vulnerable to BEAST anymore — web browsers have been able to prevent the BEAST attack for several years — so this makes no sense.

So what is a PCI-compliant site? In theory, it’s any site that processes credit card data. For instance, check out the SSL Labs report for www.bankofamerica.com. (In case you’re not yet thoroughly convinced of the truth of the second sentence in this post, take note of the eight bold WEAK warnings and also the bold DANGER. Even major banks don’t care.) Scroll down to the handshake simulations and note how AES is only sometimes used with TLS 1.2, and RC4 is always picked with TLS 1.0. In practice, I’ve checked SSL Labs results for sites that do take credit card data and do use AES with TLS 1.0, like www.amazon.com, so I’m not sure if guy-who-sounds-like-he-knows-what-he’s-talking-about has the full story; maybe audits come less frequently than I would expect.

Hopefully browser vendors will push forward and disable RC4 anyway, but that doesn’t seem sufficiently probable, and these poor sites are hardly going to disable RC4 if it means they will fail their next security audit. So what better way to spend a Friday afternoon than write a letter to NIST?

Hi,

The CVSS score for CVE-2011-3389 (BEAST) [1] relative to the score for CVE-2013-2566 [2] may discourage efforts to implement RFC 7465 [3], which prohibits use of RC4-based ciphersuites with TLS. Delays in the implementation of this RFC will harm the overall security of the TLS ecosystem.

The issue is described succinctly at [4]: PCI-compliant servers may not enable CBC-based ciphersuites because CVE-2011-3389 has a base score of 4.3, leaving RC4-based ciphersuites as the only possible options for the server to use with TLS 1.0. CVE-2013-2566, the RC4 vulnerability, has a lower CVSS score. However, CVE-2013-2566 is a much more serious issue in practice. CVE-2011-3389 has been long-since mitigated on the client side in major browsers using the 1/n-1 split technique [5], allowing CBC-based ciphersuites to be used safely. In contrast, no client-side mitigation for CVE-2013-2566 is available short of disabling RC4. Note also that a more serious attack against RC4 will be published next month [6].

In summary, a properly-configured TLS server *should not* attempt to mitigate CVE-2011-3389, as this discourages clients from mitigating CVE-2013-2566, and clients already mitigate CVE-2011-3389. Please reconsider the relative ratings for these vulnerabilities to allow PCI-compliant servers to re-enable CBC-based ciphersuites, so that browser vendors can more comfortably disable support for RC4 as required by RFC 7465 [4] [7] [8].

Thank you for your consideration,

Michael Catanzaro

[1] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3389
[2] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-2566
[3] http://www.rfc-editor.org/rfc/rfc7465.txt
[4] https://code.google.com/p/chromium/issues/detail?id=375342#c17
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=665814#c59
[6] https://www.blackhat.com/asia-15/briefings.html#bar-mitzva-attack-breaking-ssl-with-13-year-old-rc4-weakness
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=999544
[8] https://bugs.webkit.org/show_bug.cgi?id=140014

Now, will this actually work? Will I even get a response? I have no clue. Let’s find out!

Mozilla is responsible for the redhat.corpmerchandise.com fiasco

First of all, I should probably admit that, despite the title of this post, no, the redhat.corpmerchandise.com fiasco is not Mozilla’s fault: it’s Red Hat’s, because obviously Mozilla has no control over that domain. But that wouldn’t make for a very interesting title for a blog post, and Mozilla set the stage for this to happen, so let’s go with “Mozilla’s fault.” Also, it’s not really Red Hat’s fault; Staples is really to blame, since corpmerchandise.com is their domain, but I really shouldn’t be pointing that out when the point of this blog post is to blame Mozilla. And gosh now I’m off on a tangent, but it’s not really a fiasco either: it’s a significant screw-up, but not that big a deal; but words like “fiasco” make for good clickbait headlines, so let’s go with that. FIASCO.

One last note before I begin. I hold Mozilla to a higher standard than other software development companies. Sometimes it makes mistakes, like the one I’m about to present, and it’s important to call them out when this happens, but it’s because of good choices at Mozilla that Firefox still (mostly) respects your freedom, unlike other major browsers. It’s a good company.

OK, so you’ve read this far in suspense, I should probably explain the redhat.corpmerchandise.com fiasco before you reach the end of your three-paragraph Internet-length attention span. Yesterday the Fedora Store went live, where you can buy low-cost Fedora-branded items: a T-shirt, water bottle, pub glass, or baseball cap. I want a T-shirt. OK, that’s great, so what is the fiasco? Well click on this link (quick! before it gets fixed!) to find out: https://redhat.corpmerchandise.com/ProductList.aspx?did=20588

Now, depending on your browser, you may or may not have discovered the problem. When I load that site in Firefox, I see Fedora merchandise. When I load it in Epiphany, I see something noticeably less friendly:

Screenshot from 2015-01-30 20:06:22

“Legitimate banks, stores, and other public sites will not ask you to do this.” Ouch. (I actually took that language from Firefox when I designed that interstitial for Epiphany.) Ah, well, clearly there is some bug in Epiphany, because Firefox is a major browser and Firefox doesn’t get stuff like this wrong, right? Well, no, Epiphany is not wrong. Then Firefox is wrong? Well… from a certain point of view… (like mine)….

Firefox and Epiphany use different cryptography libraries to determine if the certificate is valid, and they sometimes differ in what certificates they will accept. Firefox uses NSS, a library maintained by Mozilla primarily for use by Firefox (it’s also used by Chrome on Linux), while Epiphany (indirectly) uses GnuTLS, originally a GNU project that is now de-facto maintained by Red Hat. So is NSS just better than GnuTLS at determining whether a certificate is valid? Actually, NSS really is more permissive than GnuTLS, and this does sometimes lead Firefox to approve of sites that Epiphany will not, but that’s not the case here. Let’s try a little experiment to see what’s happening. Firefox has a weird feature that feels like it was designed in the 90s for the era when computers had one user account apiece: it lets you create multiple profiles for bookmarks, history, and other settings. So let’s give this a whirl:

$ firefox -ProfileManager

Create a new profile, launch Firefox with it, then load https://redhat.corpmerchandise.com/ProductList.aspx?did=20588 to try this experiment again. Or just keep reading and trust me when I say that you’ll see this:

Screenshot from 2015-01-30 20:23:45

Oooh, that’s not good, now Firefox thinks we’re being attacked. We’re not. So what’s going on here? Why is Firefox so inconsistent?

First off, let’s get one thing straight: this site is totally and hopelessly broken. To see why, let’s use the super-handy tool gnutls-cli:

$ gnutls-cli redhat.corpmerchandise.com
Processed 182 CA certificate(s).
Resolving 'redhat.corpmerchandise.com'...
Connecting to '174.47.191.32:443'...
- Certificate type: X.509
- Got a certificate list of 1 certificates.
- Certificate[0] info:
- subject `C=US,ST=Kansas,L=Overland Park,O=STAPLES CONTRACT & COMMERCIAL\, INC.,OU=Information Techology,CN=*.corpmerchandise.com', issuer `C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert SHA2 High Assurance Server CA', RSA key 2048 bits, signed using RSA-SHA256, activated `2014-11-12 00:00:00 UTC', expires `2015-12-09 12:00:00 UTC', SHA-1 fingerprint `50cfb26c680434d132dc64e80db54de51a5a07a6'
Public Key ID:
c273ca58bfdb2902ea30dbf5946c27178affd588
Public key's random art:
+--[ RSA 2048]----+
| |
| |
| |
| . |
| = S |
| .* O o o |
|o .+.X E o . |
| =.. =.+.+ . |
|..o oo+oo |
+-----------------+

- Status: The certificate is NOT trusted. The certificate issuer is unknown.
*** PKI verification of server certificate failed...
*** Fatal error: Error in the certificate.
*** Handshake has failed
GnuTLS error: Error in the certificate.

If you’re familiar with digital certificates, it’s pretty obvious what’s wrong here. When you connect securely to a web site, it sends a chain of certificates: the first certificate is owned by the web site, then it sends some number of additional certificates, usually one or two, that belong to certificate authorities. Each certificate is signed by the next certificate in the chain (not quite, but it’s almost true, so let’s go with that for this post), up until you get to the last one in the chain, which must be signed by a certificate in your browser’s (or operating system’s) root trust store. The certificates in your root trust store are super valuable, and if one were to be compromised by an attacker the devastation to the Web would be terrible, so certificate authorities must keep their roots safe at all costs, and they do this by almost never using them.

Legitimate certificate authorities never sign web sites’ certificates with their root certificate; instead, they create a few other certificates, sign them with the root, and use only those to sign web sites’ certificates. So if you ever visit a site and it sends you only one certificate, you know that the site is broken for sure. And here we have a site that has sent only one certificate (there’s a Certificate[0] but no Certificate[1]), a classic case of server misconfiguration (aka fiasco).
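
(If you want to poke at this programmatically, GIO chains certificates via their issuer property, so counting the certificates a server actually presented takes a few lines of C. A sketch, with a helper name of my own invention; the certificate would come from, say, a GTlsConnection’s peer-certificate property.)

#include <gio/gio.h>

static guint
certificate_chain_length (GTlsCertificate *certificate)
{
  guint length = 0;
  GTlsCertificate *cert;

  /* Each certificate points at the next one the server sent via its
   * "issuer" property; a length of 1, for a certificate that is not itself
   * in the root store, is the broken case described above. */
  for (cert = certificate; cert != NULL; cert = g_tls_certificate_get_issuer (cert))
    length++;

  return length;
}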

So why did Firefox allow the site at first, even though it has no chain of trust, but not allow it with a fresh profile? Well, even though the site has presented no chain of trust, NSS goes far, far out of its way to find one. Whenever you visit a web site, NSS saves each intermediate certificate it sees, makes sure it’s signed by a trusted root, and caches it for future use. Then, whenever you visit a site that sends a broken chain of trust, NSS will effectively treat all those intermediates as roots, and use them to complete the chain of trust. This is completely safe, since it has already verified them. Those intermediate certs are saved in your Firefox profile, so by switching to a fresh profile they are no longer used, and you can’t access the broken site anymore.

If you were able to access redhat.corpmerchandise.com in Firefox, you can verify this for yourself: open Preferences -> Advanced -> Certificates -> View Certificates -> Authorities. Anything listed as Default Trust or System Trust is a root, and anything listed as Software Security Device is a cached intermediate cert. Don’t touch those root certs, but feel free to Delete or Distrust any Software Security Device — it will just be cached again the next time you visit a web site that uses it. Scroll down to DigiCert SHA2 High Assurance Server CA. That’s the cached intermediate cert that is allowing you to visit redhat.corpmerchandise.com — it’s not shipped with Firefox, and new Firefox users won’t have it. Delete it, restart the browser, then try reloading https://redhat.corpmerchandise.com. Oh no, it’s untrusted! Now visit https://stackoverflow.com, which sends a certificate signed by DigiCert SHA2 High Assurance Server CA, which will cause NSS to cache it once again. Now back to https://redhat.corpmerchandise.com, and Firefox knows it’s safe again. And that, folks, is how you screw up your web site so that it only works if you first visit Stack Overflow.

So why does NSS do this? Well, once upon a time (ten years ago), browsers were less strict about verifying chains of trust, and on an untrusted connection would let you proceed with maybe just a pop-up warning, and maybe not even that. So sites were less diligent about making sure they had valid chains of trust than they are today, in the era of nasty interstitial warnings that discourage the user from visiting the site. Since there were a lot of sites with broken chains, NSS chose to cache intermediate certificates to reduce the number of unnecessary validation errors for Firefox users. At the time, this might have been an OK choice.

Today, if your online store is missing a chain of trust, the browser makes clear in no uncertain terms that this site is not to be trusted, and sites lose visitors/customers, so they try pretty hard to get this right. (How many lost visitors depends on the browser — a large majority of Chrome and probably Epiphany users will click through the warnings, but a large majority of Firefox users will not, because Firefox’s UI for this is much scarier.) When setting up a new site, you check it in a couple of browsers to make sure it works properly, and you trust that if it works in Firefox on your machine, surely it will work in Firefox for everyone else, right? Well, no, it won’t. When setting up a secure web site, you must always test it with a fresh Firefox profile to make sure that you got the chain of trust correct. Of course, nobody knows to do this, which is how we wind up with broken sites like redhat.corpmerchandise.com.

I suspect this breakage would happen far less often if NSS did not cache intermediate certs, tricking site admins into thinking their sites are set up properly. Sure, cached certs don’t hurt the user who has them cached, but they’re bad for all other users of Firefox, as well as users of browsers that do no certificate caching. And there’s no good reason for this, because browsers don’t need to cache intermediate certificates in 2015, because almost all sites that redirect from HTTP to HTTPS get this right nowadays, and those that get it wrong are probably getting it wrong because they tested with a browser that had the right cached intermediate. Chicken and egg much? There’s only one way to fix this problem, and that brings me to my request: Mozilla, do the Web a favor and stop caching intermediate certificates.

P.S. Astute readers would note that there’s absolutely no point in deleting an intermediate certificate with the Firefox certificate manager, except to test things like this. It’s just going to come back the next time you see it.

Amazon redirecting to HTTP

For the past couple of weeks, https://www.amazon.com and https://amazon.com have redirected me to http://www.amazon.com. Region-specific sites like https://www.amazon.co.uk/ still work fine. There is probably no MITM attacker, since the secure page is performing the redirect, so a MITM would have to have a valid certificate for www.amazon.com, and if so he would presumably not add a redirect.
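
If you want to check this sort of thing from code rather than from a browser, a libsoup 2.x sketch like the following works: request the page without following redirects and print the Location header. (Just an illustration; I only watched the browser do it.)

#include <libsoup/soup.h>

static void
check_redirect (const char *uri)
{
  SoupSession *session = soup_session_new ();
  SoupMessage *msg = soup_message_new ("GET", uri);
  const char *location;

  /* Don't follow the redirect; we want to see where it points. */
  soup_message_set_flags (msg, SOUP_MESSAGE_NO_REDIRECT);
  soup_session_send_message (session, msg);

  location = soup_message_headers_get_one (msg->response_headers, "Location");
  g_print ("%s -> %u %s\n", uri, msg->status_code,
           location != NULL ? location : "(no Location header)");

  g_object_unref (msg);
  g_object_unref (session);
}

/* e.g. check_redirect ("https://www.amazon.com/"); */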

Questions for Amazon:

  • What the hell?
  • Why does your site work at all without HTTPS?
  • How am I going to buy things now?

It’s 2014, and this is unacceptable for an e-commerce site, plain and simple. Repent by implementing HSTS.