TODO: Break Internet

So there’s this thing at Mozilla where we try not to break the internet.  Call us wacky, but it seems like a bad play.  And so Rob Sayre is right to be a little miffed when it looks like we’ve done exactly that.  Sayre is often right; in fact, it’s his thing that he does.

Backstory
The web has this technology called SSL that lets you do two important things:

  1. Know who you’re talking to (because companies exist which verify this information, we’ve been over this)
  2. Talk to them in an encrypted, validated way so that no one can eavesdrop or tamper with the message
  3. Show a little padlock on your browser window

As I said, only two of them are important.

Because SSL makes these relatively useful promises, it is sort of a popular technology.  Because it’s generally important to get security things *precisely right* though, and because humans are people, there’s a lot of broken SSL out there too.

What’s “broken”?  Sometimes it means using the identification for one site on another site (because it’s cheaper/easier/faster than getting a second one).  Sometimes it means using it after it has expired.  Sometimes “broken” isn’t actually broken at all; it’s just that the site is using SSL with identification they wrote themselves, so they’re getting promise 2 (encrypted, validated) but not promise 1 (knowing who you’re talking to).
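
If you want to see how those failure modes come apart in practice, here’s a minimal sketch (emphatically not Firefox’s code) that validates a server certificate the way a browser would and names the failure.  It assumes a modern Python (3.7+), and the numeric codes are OpenSSL’s X509_V_ERR values:

```python
import socket
import ssl

def classify_cert_problem(host, port=443):
    ctx = ssl.create_default_context()  # strict validation, like a browser
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "ok"  # chain validated and the hostname matched
    except ssl.SSLCertVerificationError as err:
        if err.verify_code == 10:        # X509_V_ERR_CERT_HAS_EXPIRED
            return "expired"
        if err.verify_code in (18, 19):  # self-signed (leaf or in chain)
            return "self-signed"
        if err.verify_code == 62:        # X509_V_ERR_HOSTNAME_MISMATCH
            return "domain mismatch (cert was issued for another site)"
        return "other: " + err.verify_message
```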

In the past, most browsers did a very dumb thing here:

[Screenshot: the Firefox 2 domain mismatch error dialog]

This dialog, in the hands of normal people, feels like it basically amounts to:

[Image: the “Snotweasel omegaforce” parody warning dialog]

Why change such a fun and exciting system, I hear you ask?  The real problem is that once in a while, when this kind of dialog appears, it might represent an actual attack.  Most of the time it’s site administrator laziness, but it’s hard to tell, and it could be a real problem; it could mean that someone has hacked your internet connection (or, more likely, totally controls it because you connected at some public WiFi spot like a coffee shop) and is redirecting you from your bank’s web site to their own.  When that happens, the fact that we’ve taught everyone to click OK blindly is a really bad thing, because we need you to stop and ask yourself what’s going on.

That’s a lot of backstory; if it was new to you, take a break here.  Have a cookie.

The State of Things
In Firefox 3, one of the things a lot of people were really pushing on was that we dump these dialogs, and we have.   Rob has a screenshot of what the current code does, and in case you missed it the first time, here’s another link.

Before we start talking about changing it, I want to give the crypto dudes, and in particular Kai Engert from Red Hat, a shout-out here, because (believe it or not) I think this is actually a good first step, and it was a lot of work to get implemented.

So now instead of a little, cryptic dialog box with an OK button, there’s a big, cryptic error page with no OK button.   Hmm.

[Screenshot: the Firefox 3 certificate error page]

People are seeing that error page, and making a couple really important points:

  1. Everything needs to be less cryptic.  Human readable would be a good start.  Bug 398718
  2. There needs to be a way to get past it so that it’s not a dead-end. (There is, of course. There’s the Add Exception dialog added in bug 387480, which people generally seem to like, but it’s buried in the bowels of advanced prefs, so bugs like 399275 argue for making it much more directly accessible).
  3. You’re (excuse me) batshit fucking loco.

Security and ease of use are not intrinsically a tradeoff. Indeed, a lot of the time, good security comes from a better understanding of how people naturally work.  But there are times, and this feels like one of them, where doing the safer thing for users means annoying them more, and annoying them less means failing to honour our obligation to keep them safe.  Boo.

Walking and Chewing Gum

The thing is, we don’t get to just throw up our hands and say “well, better safe than sorry,” nor do I think we get to say “too annoying, let’s revert.”  That slider has middle positions, where annoyance and safety are in better balance; let’s get there.

Fixing the text is important.  It needs to speak in human terms about why this is a problem, and about what you can do to fix it.  I do think, though, that we need to consider giving people a path from the error page to the override UI.  I can already hear the furious head-smashing of anyone who understands PKI and has read the relevant literature.   Click-throughs beget bad security habits, which is why I think it should still be a multi-step process that hammers home the fact that you’re doing something aggressive.   But full-stop blocking our users is something that’s contentious even for known malware sites; here it feels like too much.

IE7 does this.  I think they win big points for human readability there – even though they still have a click-through.  I don’t know how much the red shield scares users off, maybe it does, but one-click override still turns my stomach a little.  What I’d like to see from us is an action like that, but which, rather than automatically extending trust, simply shortcuts you to the exception adding dialog.  The argument will be made that it’s just a longer click-through, I understand that, but my feeling is that it’s long enough, and scary enough, to get more of users’ attention.  My feeling is also that we might have to eat that possibility anyhow, because if we make it sufficiently annoying for users to browse the web, they really will decide it’s a Firefox problem, since other browsers let them through.  At that point we not only fail our users on the security front, we also go back to the bad old days of “only works on IE.”

Why Don’t You Just…

I love it when people have alternate suggestions, but some of the frequently recurring ones have pretty big problems.  I’ll call out a few here to save re-treading (unless I’m getting them wrong, in which case we should totally retread, since they’re often held up as much simpler than this other thing we’re doing).

“Why don’t you just let the connections through quietly, and just remove any indicators of security, like the padlock, yellow address bar, verified identity, etc?”   The argument here being that rather than blocking the load, why not serve the content, but not let users think it’s a secure site?  Compelling, no?

Approaches like this have the really unpleasant side effect of subverting whatever good security practices our users have developed.  Banks tell their customers to go to the website via a saved bookmark, rather than clicking on links in email or other web pages.  That’s a good practice.  Some even tell users to look for the “https” in the URL.  In the case where you’re being attacked, where the cert presented is a forgery (since only the legit site can present the real one), all of these habits will tell you you’re safe.  The URL says https, and you clicked on the same bookmark you always click on to get to your bank.  This would be a gift-wrapped present for attackers.

“Why don’t you treat self-signed certs, which legitimate sites use when they want encryption but not identity, differently from actual breakages?”

The thing is that self-signed is no more or less trustworthy than, say, a domain-mismatched cert.  Likewise for the argument about treating a self-signed cert differently from one that is signed, but by an unknown signer.  I did open bug 398721 about the idea of using “Key Continuity Management” as a way to mitigate the hurt in the self-signed case while still getting the basics right, but in any event that wouldn’t make it in for Firefox 3.
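
For the curious, the SSH-ish idea behind Key Continuity Management works roughly like this: remember the certificate the first time you see it, and only complain loudly if it later changes.  A toy Python sketch of that shape (the pin store is hypothetical, and this is emphatically not the patch in bug 398721):

```python
import hashlib
import json
import os
import ssl

PIN_FILE = "cert_pins.json"  # hypothetical store, analogous to SSH's known_hosts

def check_continuity(host, port=443):
    # Fetch the server cert without chain validation; that's fine here,
    # because we pin the certificate itself rather than trusting a CA.
    pem = ssl.get_server_certificate((host, port))
    fingerprint = hashlib.sha256(pem.encode()).hexdigest()

    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)

    key = host + ":" + str(port)
    if key not in pins:
        pins[key] = fingerprint  # first contact: remember this cert
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return "first visit: pinned"
    if pins[key] == fingerprint:
        return "same cert as before"
    return "DANGER: certificate changed since last visit"
```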

Closing
To my friends and family using Firefox: don’t panic.  None of this is happening in the currently released browser, and you’re not going to see this debate enacting itself on a desktop near you anytime soon.  We are extremely cautious about changing the experience in released products after shipping.  This is happening purely among those running the up-to-the-minute versions under active development.

It will get better.  Bug 398718 (my fingers have already learned how to type that one automatically) will land, and the error pages will be things that make sense, and explain your options.  Bug 399275 will morph into a general discussion of what kind of path we want to create to add exceptions, or if it doesn’t, I’ll create a new one which does.  We’re not going to ship a browser you can’t use.  Even on sites that are doing it wrong, we put the choice in your hands, because it’s your browser.  And we like you very much.

19 thoughts on “TODO: Break Internet”

  1. Oh hey, cool! There’s a padlock icon next to your URL. How’d you make your site secure without using HTTPS? 🙂

    Agreed that users have been spoon-fed on what to click on, even when they don’t understand the context. This is a forcing function and long overdue. Legitimate companies will finally be forced to correct their systems so that legit users will not see spurious error messages.

    We need to do something to start failing “secure” on the Internet. There’s a continuum and a balance to find between usability and security. We haven’t pushed hard enough in the past towards “more secure”.

    One effect of this will be to cause hackers to spend the extra $20 to buy SSL certs rather than spin their own. Hopefully the CAs are prepared for more business and will appropriately vet applicants.

  2. One thing that you should consider is doing something like SSH does. For my private sites I’d like to approve the self-signed certificate once, and then have it work seamlessly unless the certificate changes, in which case I’d like a clear warning.

    And for the future: something similar to link fingerprints (http://weblogs.mozillazine.org/gerv/archives/007798.html). It’s quite common for secure sites to send registration confirmations by e-mail. If the link had a checksum for the certificate embedded in the link, then the browser could automagically trust that certificate for that site. No need for a central signing authority in that case.
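
    To make that idea concrete, here’s a hypothetical sketch of what such a link scheme could look like (the `#cert-sha256=` fragment format is invented for illustration; the link-fingerprints spec may differ):

    ```python
    import hashlib
    import ssl

    # The confirmation mail embeds a hash of the site's certificate in the
    # URL fragment. The fragment is never sent over the wire, so an attacker
    # who controls the connection can't see or strip it; the mail itself is
    # the out-of-band channel.
    def make_confirmation_link(base_url, host, port=443):
        pem = ssl.get_server_certificate((host, port))
        digest = hashlib.sha256(pem.encode()).hexdigest()
        return base_url + "#cert-sha256=" + digest

    def cert_matches_link(link, presented_cert_pem):
        expected = link.rsplit("#cert-sha256=", 1)[-1]
        actual = hashlib.sha256(presented_cert_pem.encode()).hexdigest()
        return expected == actual
    ```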

  3. One problem I see (I’ve voiced this in one of the bugs) is that someone who can’t afford several IPs (and certificates, but there are now some free ones) doesn’t have the tooling support available yet to get TLS/SSL right. IP addresses are expensive (probably not in the US, but elsewhere), so you’re forced to host several domains on one IP address, and usually you use one port for https. But using TLS certificates for several domains on the same IP makes Server Name Indication (SNI) a necessity. OpenSSL will soon release a version of its library with SNI support, but it will take some time (a few years) until “turn key” hosting software is available with support for SNI. Until then, all those people don’t have a chance to get their setups really correct.
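
    For what it’s worth, the server-side mechanics are not complicated once the libraries support it: the client names the host it wants during the handshake, and the server picks the matching certificate before answering. A toy sketch (assuming a modern Python ssl module; the hostnames and file paths are made up):

    ```python
    import ssl

    # One SSLContext per virtual host, each with its own certificate.
    contexts = {}
    for name in ("example.org", "example.net"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("/etc/ssl/" + name + ".pem")
        contexts[name] = ctx

    def choose_cert(ssl_obj, server_name, default_ctx):
        # Called mid-handshake with the SNI value the client sent.
        if server_name in contexts:
            ssl_obj.context = contexts[server_name]
        return None  # None lets the handshake continue

    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("/etc/ssl/example.org.pem")  # fallback cert
    server_ctx.sni_callback = choose_cert
    ```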

    The other problem is that task oriented users will switch browsers extremely quickly if they encounter a dead end in one browser. Which doesn’t make the internet any safer at all.

  4. Excellent post that summarizes the major issues that led to our changes in IE7.

    Has FF disabled the 40- and 56-bit SSL ciphers? IE7 on Vista has these disabled, and Opera 9.5 will be turning these off as well.

    @Bill: One nice thing about forcing the bad guys to buy certificates is that revocation checking can bring down the phishers faster. It also increases the cost of attacks, since attackers must pay for each SSL cert for each fake domain they create.

  5. “Why don’t you just let the connections through quietly, and just remove any indicators of security, like the padlock, yellow address bar, verified identity, etc?”

    Another thing to consider is letting people click through to get to the site from the error page, but add “danger” indicators to the chrome. For example, outline the content area with red, have a notification box saying “Danger, Will Robinson”, etc. Perhaps warn again when the user enters data into the site, since tricking the user into entering certain information is what most phishing sites are about.

    Forcing the user to add a site to the whitelist to access it is similar to the way we treated addons installation in earlier fx versions — weird. Sometimes I don’t necessarily trust the cert, but want to see the content anyway.

  6. @Eric: don’t wish too loudly for those things, as you might get them! It would be extraordinarily unlikely that phishers, the masters of a planet-load of stolen credit cards, would find paying for SSL certs any difficulty whatsoever. Also, on revocation as a takedown possibility, phishers have already moved way beyond that with things like fast flux and so forth.

    If you want to affect phishing, any of these things will help:

    (1) implement TLS/SNI in the Apache and IIS servers (thankfully, Firefox and IE already have it) so that TLS can be as easy to use as HTTP. (hat-tip to Arthur!)

    (2) display the name of the certificate signer for all TLS connections made by the browser, so that the PKI security model is properly implemented, and/or

    (3) implement petnaming, apparently a.k.a. KCM as in Jonath’s bug 398721, a.k.a. the baby duck protocol, a.k.a. the SSH security model, a.k.a. trusted first party.

  7. “Approaches like this have the really unpleasant side effect of subverting whatever good security practices our users have developed.”

    For the most part, I think the evidence is that few people have developed any good security practices … it seems to be a bit weird to rely on this as an argument when we use the exact reverse to get rid of the click-OK training popups.

    A better reason not to block anything is that it assumes that you the developer know more than the user, which is very hard to sustain. The only area where you can clearly know better than the user is in the understanding of what the security model is supposed to do, but that’s paper thin. Otoh, the user always knows more about the local circumstances. In such a mixed situation, it is far better to support the user than block her.

  8. Iang– uh, did it occur to you that there’s a reason that phishers aren’t doing this already?

    “fast flux” and all of these other techniques require a large pool of addresses. SSL increases cost for such a pool, and thus makes it harder to generate.

    phishers are subject to the laws of economics, just like the rest of us.

  9. I’m sorry, but I don’t have time to see if this was already suggested.

    Why not use an IE-like page where clicking “Continue to this website (not recommended)”
    leads the user to a page that reads, in big red letters, “By entering this site, an attacker may be able to capture the data you enter on the website”. And make the user wait about 3 seconds for the first 3 times he tries to enter the site; after that, just the warning appears.

    Seems like a good trade-off between making users aware and making them angry…

  10. I will further second the complaint about treating self-signed certificates as an error. SSL exists to provide encryption, and sucks massively as an identity protocol. An SSL cert means two things to me: encryption occurs, and money changed hands. That second one doesn’t mean “identity”, no matter how much people might think it does. Witness the various documented cases of people obtaining SSL certificates for well-known business names.

  11. I second Iang’s comments. It’s absolutely insane that browsers make it appear that using encryption to talk to a web site — if it isn’t certified by Verisign — is more dangerous than talking to the same web site without encryption.

  12. Excellent post. I’ve been looking for an article that explains the self-signed cert issue etc. in, ahhh, proper English and clearly outlines the pros and cons and why browsers work that way, blah blah… really helped.

  13. Thanks for the post. One problem I have, is that I work for a hardware appliance manufacturer, and on a daily basis, I have multiple workstations connecting to multiple new devices that just came up with a self signed certificate. Every new OS load means a new, generated self-signed cert.

    I’d like to find a plugin or option that allows insecure certs on a range of addresses, whether RFC1918 addresses only, or something like that, so that I can just get on with configuring my test hosts, and not have to worry about the security. I know the sites are insecure, and it drives me batshit crazy trying to get the new box up and running to get a base configuration on it, including using openssl to create a new cert from a local CA on my workstation.

    Please make this feature more flexible. Please. I get it in an end-user environment, but not in a QA or Manufacturing facility. I know we’re a 1% minority of the community, but if Mozilla wants us to test the product with their browser, it shouldn’t hurt so much to bring up a test site.

  14. I don’t want to go through several clicks and windows every time I connect to one of my servers or devices (which have self-signed certificates because we just don’t need any identity verification there). I don’t need such stupid “features” added for idiots who don’t know what they’re doing when they click a button. So as long as I can’t get usable behavior with FF3, I’m forced to switch back to FF2 or another more usable browser. Even IE7 has less paranoid (and still usable) behavior.
