At Our Most Excellent

Jono recently wrote a blog post about Firefox updates, and Atul wrote a follow up. They are two of the brightest usability thinkers I know. When they talk about users, I listen. I listen, even though some of the things they say sound confused to me, and some are plain wrong. And I listen because if people as bright and in tune with Mozilla as them think these things, I bet others do, too.

When I read (and re-read) the posts, I see 3 main points:

  1. The constant interruption of updates is toxic to the usability of any piece of software, especially one as important as your web browser.
  2. Our reasons for frequent updates were arbitrary, and based on the wrong priorities.
  3. We take our users for granted.

To be honest, if it weren’t for the third point, I wouldn’t be writing this. Anytime you do something that impacts lots of people, especially lots of impassioned and vocal people, you’re gonna get criticism. Listening to that is essential, but chasing every flame can consume all your energy, and even then some people won’t be convinced. The third point, though, made by two people who know and love Mozilla even if they haven’t been close to the release process, isn’t something I want to leave sitting. I understand how it can fit a narrative, but it’s just not true.

Since I’m writing anyhow, though, let’s take them in order.

Interruptions Suck

Yes. They do. One criticism that I think we should openly accept is that the move to regular releases was irritating. The first releases on the new schedule had noisy prompts (both ours and the operating systems’). They broke extensions. Motives aside, our early execution was lacking and we heard about it. Plenty.

Today our updates are quiet. Addons have been compatible by default since Firefox 10 back in January. But that was a mountain of work that would have been much nicer to have in hand up front. As Jono says, hindsight is 20/20, but we should have done better with foresight there.

Motivations

It was hard for me to read the misapprehension of motives in these posts. Hard because I think Mozilla’s earned more credit than that, and hard because it means I haven’t done a good job articulating them.

Let me be clear here because I’m one of the guys who actually sits in these conversations: when we get together to talk about a change like this, concepts like “gotta chase the other guys” are nowhere in the conversation. When we get together and draw on whiteboards, and pound on the table, and push each other to be better, it is for one unifying purpose: to do right by our users and the web.

I wrote about this a while back, but it bears repeating. We can’t afford to wait a year between releases ever again; we can’t afford to wait 6 months. Think how much the web changes in a year, how different your experience is. Firefox 4 was 14 months in the making. A Firefox that updates once every 14 months is not moving at the speed of the web; we can’t go back there. Every Firefox release contains security, compatibility, technology and usability improvements; they should not sit on the shelf.

There’s nothing inviolate about a 6 week cycle, but it’s not arbitrary either. It is motivated directly by our earnest belief that it is the best way for us to serve our users, and the web at large.

And so the hardest thing for me to read was the suggestion that…

We Take Our Users For Granted

Nonsense. I don’t know how else to say it. In a very literal way, it just doesn’t make sense for a non-profit organization devoted to user choice and empowerment on the web to take users for granted. The impact of these changes on our users was a topic of daily conversation (and indeed, clearly, remains one).

To watch a Mozilla conversation unfold, in newsgroups or in blogs, in bugzilla or in a pub, is an inspiring thing because of how passionately everyone, on every side of an issue, is speaking in terms of the people of the web and how we can do right by them. We are at our most excellent then.

There’s beauty in the fact that this is another of those conversations. It is not lost on me, nor on Jono and Atul, I’d wager. They are Mozillians. And I believe they care deeply about Firefox users. I hope they realize how much the rest of us do, too.

The SSL Observatory

Oh ho, lookit what the EFF went and did!

The EFF SSL Observatory is a project to investigate the certificates used to secure all of the sites encrypted with HTTPS on the Web. We have downloaded a dataset of all of the publicly-visible SSL certificates, and will be making that data available to the research community in the near future.

This is exciting. I knocked together a less ambitious version of this last year, but the EFF guys are doing it like grown-ups, and are getting some interesting data.

Numbers-wise, they’re in the right ballpark, as far as I can tell. Their numbers (1-2m CA-signed certs) coarsely match ones I’ve seen from private sources. I’ve heard from a few CAs that public-crawl estimates tend to err 50-80% low, since they miss intranet dark matter, but at least the EFF’s numbers track other public crawls. Given that their collection tools and data are going to be made public, that’s a really big deal. Previously, I haven’t been able to get this kind of data without paying for it or collecting it myself. If the database is actively maintained and updated, this will be a great resource for research.

Their analysis of CA certificate usage is also interesting. I’d like to see more work done here, and in particular I’d like to see how CA usage breaks down between the Mozilla root store and others. We spend considerable effort managing our root store, and recently removed a whole pile of CA certificates that were idle. In some places, the paper seems to claim that fully half of trusted CAs are never used, but in other places, the number of active roots they count exceeds the size of our entire root program. I understand why they blurred the line for the initial analysis, but it would be swell to see it broken out.
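To make that breakdown concrete, here’s a minimal sketch (in Python) of the comparison I have in mind. It assumes you’ve already dumped root fingerprints from two sources into flat files; the file names, and the idea that you have such files at all, are my assumptions, not anything the Observatory ships:

    # Hypothetical inputs, one fingerprint per line:
    #   mozilla_roots.txt  - roots in the Mozilla root store
    #   observed_roots.txt - roots actually seen anchoring chains in the crawl

    def load_fingerprints(path):
        # Normalize case and whitespace so equal fingerprints compare equal.
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    mozilla = load_fingerprints("mozilla_roots.txt")
    observed = load_fingerprints("observed_roots.txt")

    print("Mozilla roots seen issuing:", len(mozilla & observed))
    print("Mozilla roots never seen:  ", len(mozilla - observed))
    print("Active roots outside the Mozilla store:", len(observed - mozilla))

That last number is the one I suspect explains the discrepancy: roots trusted by other programs, or by nobody, can inflate a count of “active CAs” well past the size of any one root store.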

As they mention, there are legit reasons for root certs to be idle, particularly future-proofing. We have several elliptic curve roots, and some large-modulus RSA roots, which are waiting for technology to catch up before they become active issuers; they give CAs a panic switch in case of an Interesting Mathematical Result, and that feels okay to me. On the other hand, if there are certs which are just redundant, it would be great to know, so that we can have that conversation with the relevant CAs and understand the need to keep the cert active.

This is exactly what I hoped would come of my crawler last year, but they’ve done a much more thorough job. We’ve seen an uptick in research interest in SSL over the last few years. Having a high quality data source to poke when testing a hunch is going to make it easier to spot trends, positive or otherwise. Interesting work, folks; keep it going!

Kathleen, a FAQ

Q: Kathleen who?

Kathleen Wilson works for the Mozilla Corporation, and manages our queue of incoming certificate authority requests. She coordinates the information we need from the CAs, shepherds them through our public review process and, if approved, files the bugs to get them into the product.

Q: Holy crap! One person does all of that? Is she superhuman?

It has been proven by science. She is 14% unobtainium by volume.

Q: That’s really awesome, but I am a terrible, cynical person and require ever-greater feats of amazing to maintain any kind of excitement.

She came into a root program with a long backlog and sparse contact information, and has reduced the backlog, completely updated our contact information, and is now collecting updated audit information for every CA, to be renewed yearly.

Q: Hot damn! She’s like some kind of awesome meta-factory that just produces new factories which each, in turn, produce awesome!

I know, right? She has also now removed several CAs that have grown inactive, or for which up-to-date audits cannot be found. They’ll be gone as of Firefox 3.6.7. They’re already gone on trunk.

Q: Wait, what?

Yeah – you can check out the bug if you like. I’m not positive, but I think this might represent one of the first times that multiple trust anchors have ever been removed from a shipping browser. It’s almost certainly the largest such removal.

Q: I don’t know what to say. Kathleen completes Mozilla. It is inconceivable to me that there could be anything more!

Inconceivable, yes. And yet:

  1. She’s also made what I believe to be the first comprehensive listing of our roots, with signature algorithms, moduli, expiry dates, &c.
  2. In her spare time, she’s coordinating with the CAs in our root program around the retirement of the MD5 hash algorithm, which should be a good practice run for the retirement of 1024-bit RSA (and eventually, in the moderately distant but foreseeable future, SHA-1).
  3. She has invented a device that turns teenage angst into arable land suitable for agriculture.

Fully 2 of the above statements are true!

Q: All I can do is whimper.

Not true! You can also help! Kathleen ensures that every CA in our program undergoes a public review period where others can pick apart their policy statements or issuing practices and ensure that we are making the best decisions in terms of who to trust, and she’d love you to be a part of that.

Q: I’ll do it! Thanks!

No, thank you. That wasn’t a question.

Interview with a 419 Scammer

For those who haven’t seen it, scam-detectives.co.uk has a really interesting 3-part interview with a former Nigerian scammer.

Scam-Detective: A reader has asked me to talk to you about face to face scams. Were you ever involved in meeting a victim, or was all of your contact by email?

John: I never met a victim, but I was involved in a couple of Wash-Wash scams.

Scam-Detective: Wash Wash scams? What does that involve?

John: We would tell the victim that we had a trunk full of money, millions of dollars. One victim met some of my associates in a hotel in Amsterdam, where he was shown a box full of black paper. He was told that the money had been dyed black to get through customs, and that it could be cleaned with a special chemical that was very expensive. My associates showed him how this worked with a couple of $100 bills from the top of the box, which they rinsed with some liquid to remove the black dye. Of course the rest of the bills were only black paper, but the victim saw real money. He handed over $27,000 (about £17,000) to buy the chemicals and was told to return to the hotel later that day to pick up the cash. Of course when he came back, there was nobody there. He couldn’t report it to anybody because if it had been real it would have been illegal, so he would have gotten himself into trouble.

Part 1, Part 2, Part 3.

We build tools in Firefox like stale-plugin warnings and malware blocking to help protect our users, to neuter the technological attacks they may encounter on the web. But we also try, and need to keep trying, to build tools that inform our users so that they can make better decisions. Our phishing warnings and certificate errors try to do this, but mostly by scaring users away from specific attack situations. I hope we’ll continue to build tools like Larry which try to give people some affirmative context as well, to lend some nuance to their sense of place online. I want us to help our users know when they’re on Main Street, and when they’re in an alley.

I know: People get conned in the real world, too, and certainly no browser UI is going to save you from an email-based scam. Stories like this, though, are just specific instances of what I believe to be a more universal principle:

the biggest security risk most people face is misplaced trust

John: Some of the blame has to go to the victims. They wanted the money too because they were greedy. Lots of times I would get emails telling me that they wanted more money than I was offering because of the money they were having to send. They could afford to lose the money.

Scam-Detective: John, I think you have been basically honest with me so far. Please don’t stop that now. You know as well as I do that not all of your victims were motivated by greed. I have seen plenty of scam emails that talk about dying widows who want to give their money to charity, or young people who are in refugee camps and need help to get out. You targetted vulnerable, charitable people as well as greedy businessmen, didn’t you? You didn’t care whether they could afford it or not, did you?

John: Ok, you are right. I am not proud of it but I had to feed my family.

If you have ideas for how we can help users place their trust online more deliberately and carefully: please comment here, or build an addon, or file a bug.

Videos – Firefox Privacy & Security Features

Preamble (with Discussion Question)

I don’t know if there are people out there who like the way they sound in audio recordings, or look on video. I certainly don’t. I don’t think it’s a self-image issue, either, and I know I’m not alone. My recorded voice lacks the resonance I experience internally, and my recorded image just looks… mouthier (?!) than I imagine myself to be. I don’t even know what that means.

Proposed:

Nightingale’s Corollary to the Uncanny Valley Hypothesis: The depth of one’s psychological attachment to, and familiarity with, one’s own image, amplifies feelings of canny/uncanniness. This can result in greater than average affinity for moderately dissimilar representations (c.f. the popularity of “realistic cartoon avatar” generators, or caricature artists), but also particularly heightened sensitivity to minor dissimilarities.

[Discuss. Cite examples.]

The Point (i.e. Where You Should Have Started Reading)

I bring this up because the inimitable duo of Alix and Rainer recently took some of my scattered ramblings and knit them together into an educational piece on some of the security features in Firefox. I think they did a lovely job:



[Embedded video: the Firefox security features piece, on YouTube]

In very much related news, Drew worked with Alix and Rainer to put together a video that talks about some of Firefox’s privacy features. I find it much easier to listen to Drew’s calm, matter of fact, “we did awesome stuff, and want you to know about it” delivery. I suspect you will, as well.


[Embedded video: the Firefox privacy features piece, on YouTube]

Deletion

To a first approximation, I think you can gauge how much people think about software quality by how highly they value deletion. While most rookie developers are chiefly interested in building rather than in tearing down (for what I hope are obvious reasons), great throbbing brains like Graydon speak about deletion with the kind of reverence that I presume cardinals reserve for only the coolest of popes.

In what history will likely judge as a vain attempt to impress him, then, I recently landed bug 513147, the deletion of the now-antiquated “Properties” dialog that used to be available when right-clicking things like images and links. Not because it was useless (every feature is someone’s baby, and is added for a reason) but because it wasn’t useful enough, to enough people, to justify the cost.

50kb of code in our product that is poorly understood, not often used, and not covered by unit tests is not free. When bugs show up, it takes longer than it should to fix them. If a security bug were to show up (which is always a risk when content mixes with chrome, however remote it may seem) it would be particularly expensive for us to reload that context into our brains to fix it.

Deleting it isn’t free either, of course – there are 4 extensions that build off that dialog that will need to be updated, and there may be some who use it regularly who will be disappointed. But the forces of software (inertia, squeaky wheels, cynicism and inertia) bias so heavily towards keeping code in the tree that we should all try to take clear deletion opportunities when they come up. Not capriciously, not without sensitivity to the impact it can have, but with recognition that the hidden cost to keeping them is also large and… hidden.

It is in the spirit of this sensitivity that we, on the Firefox team, have tagged this bug and others like it: [killthem].  What else do you think should go? (And please, be gentle. Remember, every feature is someone’s baby.)

[Update: Geoff Lankow has taken the code that used to be built in, and made it into an add-on, which I think is fantastic. As I said to him, and as I said above, my assertion has never been that the code was useless, just that it wasn’t useful enough to justify its cost in the core product. An add-on is a great place for functionality like that, and I thank Geoff for his work.]

Updated SSL Certificate Database

When I blogged about my database of SSL certs from the top 1M Alexa sites, it got much more reaction than I expected. It’s nice to have peers in this microcosm of nerdspace.

Easily the most often requested improvement was to include intermediates in the database. People wanted to see which issuers had a bunch of subordinate CAs and which issued right from the root. They wanted to see what kind of key sizes and algorithms CAs chose, and how they compared to the key sizes and algorithms used in regular site certs.

I’ve gone and re-crawled to gather that information now, and you can download the zipped db (509M). It’s still an SQLite3 database, though I’ve changed the schema a bit, with certificates now stored in their own table.  Let me know in the comments/email if you need help working with the data.

The schema, if you can call it that, was 100% expediency over forethought, so I would welcome any suggestions on DB organization/performance tweaking. I have done no optimizing so low-hanging fruit abounds, and a complicated query can take more than a day right now, so your suggestions will have visible effects!
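In the meantime, the lowest-hanging fruit I know of is indexing: a join that has to scan the whole certificates table for every row on the other side is exactly the kind of thing that turns a query into a day-long affair. A minimal sketch, with column names that are guesses at the schema rather than gospel:

    import sqlite3

    conn = sqlite3.connect("ssl_certs.db")

    # Check the real schema first:
    #   SELECT sql FROM sqlite_master WHERE type = 'table';
    # The column names below (issuer, subject) are illustrative guesses.
    conn.executescript("""
        CREATE INDEX IF NOT EXISTS idx_certs_issuer  ON certificates (issuer);
        CREATE INDEX IF NOT EXISTS idx_certs_subject ON certificates (subject);
        ANALYZE;
    """)
    conn.commit()
    conn.close()

Prefixing a slow query with EXPLAIN QUERY PLAN will show whether SQLite is scanning or using an index, which makes it easy to tell whether a new index is actually earning its keep.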

Deep Packet Inspection Considered Harmful?

I was recently asked, in the context of the ongoing Phorm debacle, to meet with members of the UK government and other interested parties to discuss deep packet inspection technologies and their impact on the web. I’m still organizing my thoughts on the subject. I’ve put some of them here, but I’d love to know where else you think I should look to ensure I have considered the relevant angles.

Brief Background

Phorm‘s technology hooks in at the ISP level, watches and logs user traffic, and uses it to assemble a comprehensive profile for targeting advertising. While an opt-out mechanism was provided, many users have complained that there was no notice, or that it was insufficiently clear what was going on. NebuAd, another company with a similar product, has apparently used its position at the ISP level not only to observe, but also to inject content into pages before they reach the user. It’s hard to get unbiased information here, but this is what I understand of the situation.

Thoughts

1. Deep packet inspection, in the general case, is a neutral technology. Some technologies are malicious by design (virus code, for instance), but I think DPI has as many positive uses as negative. DPI can let an ISP make better quality of service decisions, and can be done with the full knowledge and support of its users. I don’t think DPI, as a technology, should be treated as insidious.

2. Using deep packet inspection to assemble comprehensive browsing profiles of users without explicit opt-in is substantially more questionable. My browsing history and habits are things I consider private in aggregate, even though any single visit is obviously visible to the site I’m browsing.

It’s possible that I will choose to allow this surveillance in exchange for other things I value, but it must be a deliberate exchange. I would want to have that choice in an explicit way, and not to be opted in by default, even for aggregate data. Moreover, given the complexity of this technology, I would want a great deal of care to go into the quality of the explanation.  Explaining this well to non-technical users might be so difficult as to be impossible, which is why it’s so important that it be opt-in.

3. Using deep packet inspection in conjunction with software that modifies the resultant pages to include, for instance, extra advertising content, is profoundly offensive and undermines the web. The content provider and the user have a reasonable expectation that no one else is modifying the content, and a typical user should not be expected to understand the mechanics of the web sufficiently to be able to anticipate such modifications.

Solutions

As a browser maker, we do some things to help our users here, but we can’t solve the problem. https resists this kind of surveillance and tampering well, but requires sites to provide 100% of their content over SSL. Technologies like signed http content would prevent tampering, if not surveillance, but once again assume that sites (and browsers!) support the technology. Ad blockers can turn off injected ads, and tools like NoScript can de-fang injected javascript, but fundamentally, http content is not tamper-proof, and no plaintext protocol is eavesdropping-proof.

People trust their ISPs with a huge amount of very personal data. It’s fine to say that customers should vote with their feet if their ISP is breaking that trust, but in many areas, the list of available ISPs is small, and so the need for prudence on the part of ISPs is large.

That’s what I’m thinking so far, what am I missing?

SSL Information Wants to be Free

Recent events have really thrown light onto something I’ve been feeling for a while now: we need better public information about the state of the secure internet.  We need to be able to answer questions like:

  • What proportion of CA-signed certs are using MD5 signatures?
  • What key lengths are being used, with which algorithms?
  • Who is issuing which kinds of certificates?

So I decided to go get some of that information, so that I could give it to all of you wonderful people.
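As a taste, here’s the shape of query the first question turns into once the certs are sitting in an SQLite database like the one from my crawl. A hedged sketch: the table and column names are my shorthand, and the real schema may differ:

    import sqlite3

    conn = sqlite3.connect("ssl_certs.db")
    # Table and column names are illustrative, not the actual schema.
    md5_count, total = conn.execute("""
        SELECT
            SUM(CASE WHEN signature_algorithm LIKE '%md5%' THEN 1 ELSE 0 END),
            COUNT(*)
        FROM certificates
    """).fetchone()
    print("MD5-signed: %d of %d (%.1f%%)" % (md5_count, total, 100.0 * md5_count / total))
    conn.close()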


Firefox Malware?

A crappy thing happened last week – someone wrote some malware that infects Firefox. We obviously don’t like that very much at all, but I wanted to at least make it clear what is and isn’t happening, since there’s some confusion out there.

What is going on?

Basically for as long as there has been software, there have been nasty people out there who get you to download and install software which turns out to have hidden cargo.  Security folks use names like “virus,” “trojan,” “worm,” and “malware” to describe different types, but the point is that if a person can be tricked into running nasty programs, they can do nasty things.

In this case, rather than wiping your hard drive or turning all your icons upside down, this particular jerk has decided to mess with your Firefox. Once you run the program, it hooks into your Firefox and watches for you to visit certain sites, at which point it will steal your username and password.

How Can I Tell If I Have It?

You can open up your Firefox add-ons manager (Tools->Add-ons) and go to the “Plugins” section. If you have a plugin called “Basic Example Plugin for Mozilla”, you should disable it.


[Image: the rogue plugin as it appears in the Plugins list. Original credit to TrustDefender Labs’ blog post on the subject.]

Does This Mean that Firefox is Insecure?

No, and here’s why:

  • This particular malware targets our program, but once you have malicious software running on your system, it can just as easily attack other programs, or harm your computer in other ways.
  • This isn’t contracted by just browsing around the web with Firefox 3. In fact, the Malware Protection features in Firefox 3 are designed specifically to prevent sites from being able to attack your computer.

The people getting infected here are either downloading enticing files that have the malware hiding inside (which is why Firefox 3 hands off all downloads to your computer’s virus scanner once they’re downloaded) or, as some sites are reporting, were already infected in the past and are having their computers forced to download this file as well.

Typical Firefox 3 users who avoid downloading software they don’t trust are unlikely to ever see this, and even the sites reporting it describe its incidence as “rare”.

What’s this I hear about GreaseMonkey?

There are some mentions of greasemonkey in a couple of the early reports based on some analysis of the code used by this malware, but I want to be clear that the (legitimate, and awesome) Greasemonkey Addon is not involved in this malware in any way. It is not involved in the installation or execution of the attack.

As always, the best defense is vigilance. Use a browser with a solid security record and modern anti-malware defenses built in, and be very careful about downloading and running programs you find online. If a bad guy is able to get you to run a program on your machine, they will be able to do bad things, so we’ll keep trying to stop them, and you keep trying as well.

More details are also available on the official Mozilla security blog.