Jono recently wrote a blog post about Firefox updates, and Atul wrote a follow up. They are two of the brightest usability thinkers I know. When they talk about users, I listen. I listen, even though some of the things they say sound confused to me, and some are plain wrong. And I listen because if people as bright and in tune with Mozilla as them think these things, I bet others do, too.
When I read (and re-read) the posts, I see 3 main points:
- The constant interruption of updates is toxic to the usability of any piece of software, especially one as important as your web browser.
- Our reasons for frequent updates were arbitrary, and based on the wrong priorities.
- We take our users for granted.
To be honest, if it weren’t for the third point, I wouldn’t be writing this. Anytime you do something that impacts lots of people, especially lots of impassioned and vocal people, you’re gonna get criticism. Listening to that is essential, but fanning the flames can consume all your energy and even then some people won’t be convinced. The third point, though, made by two people who know and love Mozilla even if they haven’t been close to the release process, isn’t something I want to leave sitting. I understand how it can fit a narrative, but it’s just not true.
Since I’m writing anyhow, though, let’s take them in order.
Interruptions Suck
Yes. They do. One criticism that I think we should openly accept is that the move to regular releases was irritating. The first releases on the new schedule had noisy prompts (both ours and the operating systems’). They broke extensions. Motives aside, our early execution was lacking and we heard about it. Plenty.
Today our updates are quiet. Addons have been compatible by default since Firefox 10 back in January. But that was a mountain of work that would have been much nicer to have in hand up front. As Jono says, hindsight is 20/20, but we should have done better with foresight there.
Motivations
It was hard for me to read the misapprehension of motives in these posts. Hard because I think Mozilla’s earned more credit than that, and hard because it means I haven’t done a good job articulating them.
Let me be clear here because I’m one of the guys who actually sits in these conversations: when we get together to talk about a change like this, concepts like “gotta chase the other guys” are nowhere in the conversation. When we get together and draw on whiteboards, and pound on the table, and push each other to be better, it is for one unifying purpose: to do right by our users and the web.
I wrote about this a while back, but it bears repeating. We can’t afford to wait a year between releases ever again; we can’t afford to wait 6 months. Think how much the web changes in a year, how different your experience is. Firefox 4 was 14 months in the making. A Firefox that updates once every 14 months is not moving at the speed of the web; we can’t go back there. Every Firefox release contains security, compatibility, technology and usability improvements; they should not sit on the shelf.
There’s nothing sacred about a 6-week cycle, but it’s not arbitrary either. It is motivated directly by our earnest belief that it is the best way for us to serve our users, and the web at large.
And so the hardest thing for me to read was the suggestion that…
We Take Our Users For Granted
Nonsense. I don’t know how else to say it. In a very literal way, it just doesn’t make sense for a non-profit organization devoted to user choice and empowerment on the web to take users for granted. The impact of these changes on our users was a topic of daily conversation (and indeed, clearly, remains one).
To watch a Mozilla conversation unfold, in newsgroups or in blogs, in bugzilla or in a pub, is an inspiring thing because of how passionately everyone, on every side of an issue, is speaking in terms of the people of the web and how we can do right by them. We are at our most excellent then.
There’s beauty in the fact that this is another of those conversations. It is not lost on me, nor on Jono and Atul, I’d wager. They are Mozillians. And I believe they care deeply about Firefox users. I hope they realize how much the rest of us do, too.
SSL Question Corner
From time to time, in the blogosphere or mailing lists, I will get questions about various security decisions we make in Firefox. Here’s one that has been popular lately. It is worded in a variety of ways, of course, and the longer versions carry a couple of implicit assumptions we should dispense with up front as red herrings, but the basic thrust, and the real concern, is this: “Why treat self-signed SSL as untrustworthy? I just want encryption.” Let’s explore this.
First of all, this isn’t quite right. You never *just* want encryption, you want encryption to a particular system. The whole reason for having encryption is that you don’t want various ill-doers doing ill with your data, so clearly you want encryption that isn’t going to those people.
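To make that concrete, here is a minimal Python sketch (the hostname and port are placeholders, not anything from this post) contrasting a TLS connection that authenticates the peer with one that only encrypts. The second variant scrambles the bytes just fine, but to whoever answered, which is exactly the hole a man-in-the-middle walks through.

```python
import socket
import ssl

HOST, PORT = "webmail.example.org", 443  # placeholder server, for illustration only

# Encryption *to a particular system*: verify the certificate chain against
# trusted roots and check that the certificate actually names the host we asked for.
strict = ssl.create_default_context()

# "Just encryption": the traffic is encrypted, but to whoever answered,
# which is exactly what a man-in-the-middle hopes you will settle for.
just_encryption = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
just_encryption.check_hostname = False
just_encryption.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT)) as sock:
    with strict.wrap_socket(sock, server_hostname=HOST) as tls:
        print("verified peer:", tls.getpeercert()["subject"])
```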
“So fine, I want encryption to a particular system,” you say, “but I don’t need a CA to prove that my friend’s webmail is trustworthy. CAs don’t even do that anyhow. I trust him, Firefox should get out of my way.”
Yes, absolutely – the browser is your agent, and if you trust your friend’s webmail, you should be able to tell Firefox to do so as well. But how do you know that’s who you’re talking to?
Permit me 3 short digressions…
Digression the First: Ettercap, webmitm, and friends
What if I told you that there were a group of programs out there that made it trivial, brain-dead simple, to intercept your web traffic, log it, and then pass it through without you ever noticing? These “Man in the Middle” attacks used to be the stuff of scary security fiction, but now they are point-and-click.
If one of these is running on your network (you know, like the packet sniffers you’re protecting against with encryption in the first place) it will poison your network so that all requests go through them. It will then transparently fetch and pass off any regular web pages without you noticing (after logging anything juicy, of course). If you request an SSL page, it will generate its own certificate whose human readable details match the real site, same organization name, same domain name, everything, and use that to masquerade as the site in question. The only difference is, it will be self-signed, since the tool obviously can’t get a CA signature.
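For a sense of how little effort that forged certificate takes, here is a rough sketch using Python’s cryptography package (the site and organization names are made up for illustration; this is not any particular tool’s code). A couple dozen lines produce a self-signed certificate whose human-readable fields read exactly like the real site’s; the one thing an attacker cannot mint is a CA’s signature.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Anyone can generate a key pair and a certificate whose subject copies the
# real site's details; only the CA signature is out of reach.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "webmail.example.org"),
    x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Webmail Inc."),
])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer is the same as subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())  # looks just like the real thing
```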
Digression the Second: Drive-By Router Reconfig
Do you use one of those home cable-dsl-router/wifi-access-point thingies? For the last couple years, security folks have gotten giggles out of finding ways to break them, and the number one thing they do is rewrite your network configuration so that your connections go to computers of their choosing. If your router is subverted in this way, the only hint you might have is that your secure sites have all become self-signed.
Digression the Third: Kaminsky Breaks the Internet
This week I’m at the Black Hat security conference in Vegas, where it is a virtual certainty that Dan Kaminsky is going to outline an attack that lets any site on the internet pretend to be any other site on the internet. I can pretend to be paypal.com. You can pretend to be bankofamerica.com. If your ISP doesn’t fix all of their servers, one aforementioned doer-of-ill can trick them into sending all of their customers to forgeries of the actual sites they seek. They don’t even have to be on the same network anymore. This is substantially easier than packet sniffing. The only thing that will tell you whether the sites you are visiting are real is the existence of a trusted certificate, which only the legitimate site can have.
Back to the Plot
The question isn’t whether you trust your buddy’s webmail – of course you do, your buddy’s a good guy – the question is whether that’s even his server at all. With a CA-signed cert, we trust that it is – CAs are required to maintain third party audits of their issuing criteria, and Mozilla requires verification of domain ownership to be one of them.
With a self-signed certificate, we don’t know whether to trust it or not. It’s not that these certificates are implicitly evil, it’s that they are implicitly untrusted – no one has vouched for them, so we ask the user. There is language in the dialogs that talks about how legitimate banks and other public web sites shouldn’t use them, because it is in precisely those cases that we want novice users to feel some trepidation, and exercise some caution. There is a real possibility there, hopefully slim, that they are being attacked, and there is no other way for us to know.
On the other hand – if you visit a server which does have a legitimate need for a self-signed certificate, Firefox basically asks you to say “I know you don’t trust this certificate, but I do.” You add an exception, and assuming you make it permanent, Firefox will begin trusting that specific cert to identify that specific site. What’s more, you’ll now get the same protection as a CA signed cert – if you are attacked and someone tries to insert themselves between you and your webmail, the warning will come up again.
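A permanent exception behaves a lot like pinning that one certificate. As a rough sketch of the idea (not Firefox’s actual implementation; the host and the stored fingerprint below are placeholders), a client can remember the certificate’s SHA-256 fingerprint at the moment the user adds the exception, and complain the instant a different certificate shows up in its place:

```python
import hashlib
import socket
import ssl

def certificate_fingerprint(host: str, port: int = 443) -> str:
    """Fetch whatever certificate the server presents and return its SHA-256 hash."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # not relying on the CA path here...
    ctx.verify_mode = ssl.CERT_NONE  # ...only on the pinned fingerprint below
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

# Fingerprint recorded when the user said "I know you don't trust this
# certificate, but I do" (placeholder value).
PINNED = "0000000000000000000000000000000000000000000000000000000000000000"

if certificate_fingerprint("webmail.example.org") != PINNED:
    print("Certificate changed: possible man-in-the-middle, warn the user.")
```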
I don’t think the approach in Firefox 3 is perfect, I’m not sure any of us do. I have filed bugs, and talked about things I think we could do to continue to enhance our users’ security while at the same time reducing unnecessary annoyances. You’ll notice that Firefox 3 has fewer “Warning: you are submitting a search to a search engine” dialog boxes than Firefox 2 did, and it’s because of precisely this desire.
I welcome people who want to make constructive progress towards a safer internet and a happier browsing experience. That’s what motivated this change, it’s what motivates everything we do with the browser, really. So it sure would be nice if we didn’t start from the assumption that changes are motivated by greed, malice, or stupidity.