SSL Infoquickie (with Bonus Firefox Pro-Tip!)

There is less public information out there about SSL certificate usage than one might like to see. Netcraft has a for-pay report with some interesting figures, and occasionally makes some of that data public, and I’ve blogged about other sources in the past, but in general, it’s pretty sparse. I keep meaning to do something coordinated about that; I have some ideas, but they keep getting back-burnered.

So it came to pass that when someone idly remarked that it would be nice to know what percentage of certs on the top sites were valid, I pounced upon it as a way to quickly release some pent-up info-gathering angst.

It’s profoundly unscientific, but so was the question. Are the Alexa top 500 sites even a good reflection of the most popular SSL sites? Not really. I think it will bias the data towards higher counts of untrusted certs (since the admins aren’t expecting them to be used) and towards lower overall cert counts (since many of those sites won’t answer SSL hails, whereas presumably a list of the top 500 SSL sites all would). Is blindly connecting to their main page on port 443 the best way to harvest their certs? Probably not; lots of them use secure.frobber.tld constructions, so that will also bias the data lower. Let’s just agree that it’s a sort of fun number to have as an order-of-magnitude style signpost.

Of the 500 top sites on Alexa, October 15, 2008:

  • 217 responded to an SSL query on port 443
  • 199 of those replies used valid certs chaining to trusted roots
  • The other 18 were a mix of self-signed, bad chains (likely from trusted roots, though I didn’t investigate), and expired certs.
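For the curious, a harvest like this only takes a few lines of Python. This is a hypothetical reconstruction, not the script I actually ran; it classifies each host by whether a port-443 handshake succeeds against the system’s trusted roots:

```python
import socket
import ssl

def check_cert(hostname, timeout=5):
    """Classify a host's port-443 cert: 'valid', 'untrusted', or 'no-ssl'."""
    ctx = ssl.create_default_context()  # verifies chain and hostname against trusted roots
    try:
        with socket.create_connection((hostname, 443), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname):
                return "valid"
    except ssl.SSLCertVerificationError:
        return "untrusted"  # self-signed, broken chain, expired, wrong name...
    except OSError:
        return "no-ssl"     # refused, timed out, or otherwise unreachable
```

Run over a list of 500 hostnames, the three buckets fall out directly.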

If you prefer pretty pictures:

SSL Certificate Stats

Any conclusions you want to draw from this data will be only as good as the aforementioned biases within it, but don’t say I never do anything for you in a feeble attempt to vent my own info-lust urges.

Bonus Firefox Pro-Tip: If you are on Firefox 3.1 Nightlies or the upcoming Firefox 3.1 Beta 2, you now have the ability to turn off link-visited colouring.  David Baron recently landed a fix for bug 147777 that adds a new about:config preference to control the behaviour, layout.css.visited_links_enabled.
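If you prefer to set it persistently rather than flipping it in about:config, the same preference can go in the user.js file in your profile directory:

```js
// disable :visited colouring to keep sites from sniffing your history
user_pref("layout.css.visited_links_enabled", false);
```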

“Great!” I hear you all saying, “We’ve been hoping for a way to turn off an occasionally useful feature!”

And who hasn’t, really? But the thing of it is that colouring links can give away information to tricky sites about where you’ve been. It’s up to you whether you think that privacy/functionality trade-off is worth making, and the bug is still open while more universal solutions are contemplated, but in the meantime, you have the choice.

SSL Question Corner

From time to time, in the blogosphere or mailing lists, I will get questions about various security decisions we make in Firefox.  Here’s one that has been popular lately:

Q: I think you are dumb.

It is worded in a variety of ways, of course, but that’s the basic thrust.  A longer version might read:

Q: Why has Firefox started treating self-signed SSL certificates as untrustworthy?  I just want encryption, I don’t care that the cert hasn’t been signed by a certificate authority, and anyhow I don’t want to pay hundreds of dollars just to secure my communications.

There are a couple of implicit assumptions we should dispense with up front, before tackling the meat of the question, to wit:

  1. “Why has Firefox started treating…”  Firefox has been treating self-signed certificates as disconcerting for quite some time.  In Firefox 2, you would get a giant dialog box popping up asking what to do with them.  It was farcically easy to dismiss since just hitting OK would proceed to the site, and since the default was a temporary pass, not a permanent one, you saw the dialog frequently, making it even easier to ignore.  Firefox 3 has absolutely changed that flow — more on that later — but there is nothing new here.
  2. “ … I don’t want to pay hundreds of dollars …” Several CAs accepted by all major browsers sell certificates for less than $20/yr, and StartSSL, in the Firefox 3 root store, offers them for free.

Those concerns are red herrings; the real concern is in the middle:  “Why treat self-signed SSL as untrustworthy?  I just want encryption.”  Let’s explore this.

First of all, this isn’t quite right.  You never *just* want encryption, you want encryption to a particular system.  The whole reason for having encryption is that you don’t want various ill-doers doing ill with your data, so clearly you want encryption that isn’t going to those people.
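The distinction is easy to make concrete with Python’s ssl module (a sketch for illustration, nothing to do with Firefox’s own code): turning off verification still gets you a perfectly good encrypted channel, just to whoever happened to answer.

```python
import socket
import ssl

def encryption_only_context():
    """What 'just encryption' buys: a channel to *whoever answered*, identity unchecked."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # don't care who you are...
    ctx.verify_mode = ssl.CERT_NONE  # ...or who vouches for you
    return ctx

def connect(hostname, port=443):
    sock = socket.create_connection((hostname, port))
    return encryption_only_context().wrap_socket(sock, server_hostname=hostname)
```

A man in the middle presenting a self-signed cert sails straight through this; the “just encryption” part is intact, but it may well be encryption to your attacker.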

“So fine, I want encryption to a particular system,” you say, “but I don’t need a CA to prove that my friend’s webmail is trustworthy.  CAs don’t even do that anyhow.  I trust him, Firefox should get out of my way.”

Yes, absolutely – the browser is your agent, and if you trust your friend’s webmail, you should be able to tell Firefox to do so as well.  But how do you know that’s who you’re talking to?

Permit me 3 short digressions…

Digression the First: Ettercap, webmitm, and friends

What if I told you that there were a group of programs out there that made it trivial, brain-dead simple, to intercept your web traffic, log it, and then pass it through without you ever noticing?  These “Man in the Middle” attacks used to be the stuff of scary security fiction, but now they are point-and-click.

If one of these is running on your network (you know, like the packet sniffers you’re protecting against with encryption in the first place) it will poison your network so that all requests go through them.  It will then transparently fetch and pass off any regular web pages without you noticing (after logging anything juicy, of course).  If you request an SSL page, it will generate its own certificate whose human readable details match the real site, same organization name, same domain name, everything, and use that to masquerade as the site in question.  The only difference is, it will be self-signed, since the tool obviously can’t get a CA signature.

Digression the Second: Drive-By Router Reconfig

Do you use one of those home cable-dsl-router/wifi-access-point thingies?  For the last couple years, security folks have gotten giggles out of finding ways to break them, and the number one thing they do is rewrite your network configuration so that your connections go to computers of their choosing.  If your router is subverted in this way, the only hint you might have is that your secure sites have all become self-signed.

Digression the Third: Kaminsky Breaks the Internet

This week I’m at the Black Hat security conference in Vegas, where it is a virtual certainty that Dan Kaminsky is going to outline an attack that lets any site on the internet pretend to be any other site on the internet.  I can pretend to be paypal.com.  You can pretend to be bankofamerica.com.  If your ISP doesn’t fix all of their servers, one aforementioned doer-of-ill can trick them into sending all of their customers to forgeries of the actual sites they seek.  They don’t even have to be on the same network anymore.  This is substantially easier than packet sniffing. The only thing that will tell you whether the sites you are visiting are real is the existence of a trusted certificate, which only the legitimate site can have.

Back to the Plot

The question isn’t whether you trust your buddy’s webmail – of course you do, your buddy’s a good guy – the question is whether that’s even his server at all.  With a CA-signed cert, we trust that it is – CAs are required to maintain third party audits of their issuing criteria, and Mozilla requires verification of domain ownership to be one of them.

With a self-signed certificate, we don’t know whether to trust it or not.  It’s not that these certificates are implicitly evil, it’s that they are implicitly untrusted – no one has vouched for them, so we ask the user.  There is language in the dialogs that talks about how legitimate banks and other public web sites shouldn’t use them, because it is in precisely those cases that we want novice users to feel some trepidation, and exercise some caution. There is a real possibility there, hopefully slim, that they are being attacked, and there is no other way for us to know.

On the other hand – if you visit a server which does have a legitimate need for a self-signed certificate, Firefox basically asks you to say “I know you don’t trust this certificate, but I do.”  You add an exception, and assuming you make it permanent, Firefox will begin trusting that specific cert to identify that specific site.  What’s more, you’ll now get the same protection as a CA signed cert – if you are attacked and someone tries to insert themselves between you and your webmail, the warning will come up again.

I don’t think the approach in Firefox 3 is perfect; I’m not sure any of us do. I have filed bugs, and talked about things I think we could do to continue to enhance our users’ security while at the same time reducing unnecessary annoyances.  You’ll notice that Firefox 3 has fewer “Warning: you are submitting a search to a search engine” dialog boxes than Firefox 2 did, and it’s because of precisely this desire.

I welcome people who want to make constructive progress towards a safer internet and a happier browsing experience. That’s what motivated this change, it’s what motivates everything we do with the browser, really.  So it sure would be nice if we didn’t start from the assumption that changes are motivated by greed, malice, or stupidity.

The Most Important Thing

… or How Mozilla Does Security and What You Can Steal

As promised last week, I have now put my presentation slides for my talk at FIRST2008 online.  I’ve also put up a video I recorded of a dry-run through the slides, in case you want to experience the talk, and not just read it.

Slides (CC-BY-SA):

Video (CC-BY-SA):

Thanks again to Mike Shaver for helping me put these slides together, and to all the people who reviewed them ahead of time.  I really enjoyed this talk, and hope to give it again – as I’ve said many times before, we have learned a lot of lessons the hard way; we should be sharing that experience broadly, since we’re one of the few organizations that can.

I would love any edits or suggestions for the slides themselves, or my presentation of them.  I’ll also accept offers of exciting cash and prizes to give this talk at your campus/company/private island.

Security Screencast(s)

As Alix mentions, I recently put together a quick screencast of some of the new security features in Firefox 3. Of course, beltzner promptly scooped me with his own inimitable screencast, and what with the launch, it’s only now that I’m getting around to posting mine.

What’s interesting to me, though, is the difference between what I originally recorded, and what Alix published. I recorded the raw screencast using Jing, which is a simple, free screencasting tool for Mac and Windows. It caps you at 5 minutes, and records as flash, but it’s super easy to use, and screencast.com will host the resultant video for you. You can see what I recorded here:

http://content.screencast.com/bootstrap.swf

But then I handed it off to Alix and David and Rainer, and they turned my 5 minutes of low production values into 2 minutes of edited, titled video, with helpful visuals! See if you notice the difference…


Firefox 3: Security from Mozilla Firefox on Vimeo.

As promised in my last post, I’ll soon be posting yet another video, this time an hour long talk I gave at FIRST. And then, I think, no more blatant self-promotion for a couple weeks, eh?

Have you installed Firefox 3 yet?

Hello Vancouver! Briefly!

A quick note, to any Vancouverites that may be interested, that I will be in town on Wednesday to speak at the FIRST 2008 conference. The title of the talk is “The Most Important Thing – How Mozilla Does Security, and What You Can Steal.” If you’re attending the conference, I hope I’ll see you there. Once the conference is over, I’ll post my slides and a video of a presentation dry-run, in case anyone is interested.

I had a lot of help from several people, most notably Shaver, in putting this presentation together; my goal is to keep adapting it and ideally get other people giving it as well. Security is something that the Mozilla project has a lot of experience with, and a lot to be proud of. It is important to our mission that we share that expertise. Even when what we’re saying isn’t new (“have unit tests”), the fact that we have achieved the success we have lets us be a proof point for people trying to make change in their own projects (“Mozilla didn’t think code review was too time-intensive.”)

I may not be an official member of the evangelism team, but I will do whatever I can to encourage more people in our community to take their knowledge outbound. We are doing crazy awesome stuff here (how many IT people, on the planet, have dealt with what Justin‘s team has?) and we should consider it an obligation to spread that knowledge around. Heck, that’s actually sort of what my talk is about.

Firing Up Browser Security

Window and I recently did a joint interview for Federico Biancuzzi at SecurityFocus about many of the security changes we’ve made in Firefox 3. It covers both front-end and back-end information, and mentions several changes that I haven’t had a chance to mention here in the past.

If you’re interested, check it out.

[PS – Full props to r80o on flickr – this is a pretty excellent photo for “caution”, and CC too!]

Mal-what? Firefox 3 vs. Bad People

A lot of the things I write here are for geeks.  That’s unsurprising, given my own wonkish leanings, but I appreciate that it makes me a tough guy to love, much less read, at times.  Sorry about that, and thanks for sticking with me.

With Firefox 3 on the cusp of the precipice of the knife’s edge of release, though, I wanted to stop pretending that everyone reads the same articles I do and talk about one of the many, really concrete things we’re doing to keep our users, like you, safe.  There will be graphs.


Security UI in Firefox 3plus1

We’ve made a lot of changes (and more importantly, a lot of positive progress) in security UI for Firefox 3.

We have built-in malware protection now, and better phishing protection.  We have a password manager that intelligently lets you see whether your login was successful before saving, instead of interrupting the page load.  We have gotten rid of several security dialogs that taught users to click OK automatically, unseeingly.  We have OCSP on by default.

We have a consistent place in the UI now where users can get information about the site they are visiting, including detailed secondary information about their history with the site; all of which are first steps in a long road towards equipping users with more sophisticated tools for browsing online, by taking advantage of habits they already have, and things we already know.  All the people who worked on this stuff know who they are, and I want to thank them, because it sure as hell wasn’t all me.

With Firefox 3 in full down-hunker for final release (and with conference silly season upon us) though, I’ve started to get serious about thinking through what comes next.

Here’s my initial list of the 3 things I care most about, what have I missed?

1. Key Continuity Management

Key continuity management is the name for an approach to SSL certificates that focuses more on “is this the same site I saw last time?” instead of “is this site presenting a cert from a trusted third party?”  Those approaches don’t have to be mutually exclusive, and shouldn’t in our case, but supporting some version of this would let us deal more intelligently with crypto environments that don’t use CA-issued certificates.

The exception mechanism in Firefox 3 is a very weak version of KCM, in that security exceptions, once manually added, do have “KCM-ish” properties (future visits are undisturbed, changes are detected).  But without the whole process being transparent to users, we miss the biggest advantage to this approach.

Why I care: KCM lets us eliminate the most-benign and most-frequently-occurring SSL error in Firefox 3.  Self-signed certs aren’t intrinsically dangerous, even if they do lack any identification information whatsoever.  The problem is that case-by-case, we don’t have a way to know whether a given self-signed cert represents an attack in progress.  The probability of that event is low, but the cost when it happens is high, so we get in the way.  That’s not optimal, though.  When the risk is negligible, we should get out of the way, and save our warnings for the times when they can be most effective.
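The core mechanism is small enough to sketch. This is a toy, not a proposal for the actual implementation: remember a fingerprint of whatever cert a host presents on first sight, and flag any later change.

```python
import hashlib
import socket
import ssl

def cert_fingerprint(hostname, port=443):
    """SHA-256 of whatever certificate the host presents (no CA check at all)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the cert bytes, not a verdict
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_continuity(hostname, fingerprint, pins):
    """KCM in miniature: first sight pins the cert; any later change is flagged."""
    if hostname not in pins:
        pins[hostname] = fingerprint  # first visit: remember it, don't warn
        return "new"
    return "same" if pins[hostname] == fingerprint else "changed"
```

The hard parts, of course, are everything this sketch ignores: when to warn on “changed” (certs legitimately rotate), and how to do all of it without asking the user anything.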

2. Secure Remote Passwords

Secure Remote Password protocol is a mechanism (have some math!) for allowing a username/password-style exchange to happen, without an actual password going out along the wire. Rob Sayre already has a patch.  That patch makes the technology available, but putting together a UI for it that resists spoofing (and is attractive enough that sites want to participate) will be interesting.

Why I care: SRP is not the solution to phishing, but it does make it harder to make use of stolen credentials, and that’s already a big deal.  It also has the happy side effect of authenticating the site to you while it’s authenticating you to the site.  I wouldn’t want this useful technology to get stuck in the chicken-egg quagmire of “you implement it first.”
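To give a flavour of the math: the server never stores the password itself, only a verifier derived from it. A toy version of that derivation looks like this (the group parameters here are deliberately tiny and illustrative; real SRP deployments use the large safe-prime groups from RFC 5054):

```python
import hashlib

# Toy parameters for illustration only. M127 is prime, but far too small
# for real use; see RFC 5054 for proper SRP groups.
N = 2**127 - 1
g = 3

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def srp_verifier(username: str, password: str, salt: bytes) -> int:
    """v = g^x mod N, where x = H(salt, H(username ':' password)).

    The server stores (salt, v); the password never crosses the wire,
    and v alone is not enough to impersonate the user."""
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    x = H(salt, inner)
    return pow(g, x, N)
```

The actual login exchange layers ephemeral values on top of this so that both sides prove knowledge without revealing anything reusable; that’s where the spoofing-resistant UI work comes in.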

3. Private Browsing Mode

This is the idea of a mode for Firefox that would protect the user’s privacy more aggressively, and erase any trace of having been in that mode after the fact.  Ehsan Akhgari has done a bunch of work here, and in fact has a working patch.  While his version hooks into all the various places we might store personal data, I’ve also wondered about a mode where we just spawn a new profile on the spot (possibly with saved passwords intact) and then delete it once finished.

Why I care: Aside from awkward teenagers (and wandering fiancés), there are a lot of places in the world where the sites you choose to visit can be used as a weapon against you.  Private browsing mode is not some panacea for governmental oppression, but as the user’s agent, I think it is legitimately within our scope (and morally within our responsibility) to put users in control of their information.  We began this thinking with the “Clear Private Data” entry in the tools menu, but I think we can do better.
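The throwaway-profile idea is simple enough to sketch from the outside, since Firefox already has real -profile and -no-remote command-line flags. Everything else here is illustrative (and it punts entirely on carrying saved passwords over):

```python
import shutil
import subprocess
import tempfile

def launch_command(firefox, profile_dir, url):
    """-profile points at the throwaway dir; -no-remote keeps this instance
    separate from any Firefox already running."""
    return [firefox, "-no-remote", "-profile", profile_dir, url]

def private_session(firefox="firefox", url="about:blank"):
    """Throwaway-profile sketch: browse in a fresh profile, erase it on exit."""
    profile_dir = tempfile.mkdtemp(prefix="throwaway-profile-")
    try:
        subprocess.run(launch_command(firefox, profile_dir, url))
    finally:
        shutil.rmtree(profile_dir)  # history, cookies, cache vanish with the directory
```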

(And also…)

Outside of these 3, there are a couple things that I know will get some of my attention, but involve more work to understand before I can talk intelligently about how to solve them.

The first is for me to get a better understanding of user certificates. In North America (outside of the military, at least) client certificates are not a regular matter of course for most users, but in other parts of the world, they are becoming downright commonplace.  As I understand it, Belgium and Denmark already issue certs to their citizenry for government interaction, and I think Britain is considering its options as well.  We’ve fixed some bugs in that UI in Firefox 3, but I think it’s still a second-class UI in terms of the attention it has gotten, and making it awesome would probably help a lot of users in the countries that use them.  If you have experience and feedback here, I would welcome it.

The second is banging on the drum about our mixed content detection.  We have some very old bugs in the area, and mixed content has the ability to break all of our assumptions about secure connections.  I think it’s just a matter of getting the right people interested in the problem, so it may be that the best way for me to solve this is with bottles of single malt.  Whatever it takes.  If you can help here, name your price.
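At its simplest, mixed content detection is just a scheme check on every subresource a secure page loads; a toy illustration (the real work is doing this inside the network layer, for every load type, without breaking half the web):

```python
from urllib.parse import urlparse

def mixed_content(page_url, subresource_urls):
    """Return the insecure loads that undermine an otherwise-secure page."""
    if urlparse(page_url).scheme != "https":
        return []  # only an https page can have mixed content
    return [u for u in subresource_urls if urlparse(u).scheme == "http"]
```

A single http:// script on an https:// page is the worst case, since it can rewrite everything the lock icon claims to vouch for.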

Obviously I’ve left out all the tactical fixup work on the UI we already have.  We all know that those things will need to happen, to be re-evaluated and evolved.  I wanted to get these bigger-topic thoughts out early, so that people like you can start thinking about whether they are interesting and relevant to the things you care about, and shouting angrily if they aren’t.