Jono recently wrote a blog post about Firefox updates, and Atul wrote a follow-up. They are two of the brightest usability thinkers I know. When they talk about users, I listen. I listen, even though some of the things they say sound confused to me, and some are plain wrong. And I listen because if people as bright and in tune with Mozilla as they are think these things, I bet others do, too.
When I read (and re-read) the posts, I see three main points:
- The constant interruption of updates is toxic to the usability of any piece of software, especially one as important as your web browser.
- Our reasons for frequent updates were arbitrary, and based on the wrong priorities.
- We take our users for granted.
To be honest, if it weren’t for the third point, I wouldn’t be writing this. Anytime you do something that impacts lots of people, especially lots of impassioned and vocal people, you’re gonna get criticism. Listening to that criticism is essential, but chasing every flame can consume all your energy, and even then some people won’t be convinced. The third point, though, made by two people who know and love Mozilla even if they haven’t been close to the release process, isn’t something I want to leave sitting. I understand how it can fit a narrative, but it’s just not true.
Since I’m writing anyhow, though, let’s take them in order.
Interruptions Suck
Yes. They do. One criticism that I think we should openly accept is that the move to regular releases was irritating. The first releases on the new schedule had noisy prompts (both ours and the operating systems’). They broke extensions. Motives aside, our early execution was lacking and we heard about it. Plenty.
Today our updates are quiet. Add-ons have been compatible by default since Firefox 10, back in January. But that was a mountain of work that would have been much nicer to have in hand up front. As Jono says, hindsight is 20/20, but we should have done better with foresight there.
Motivations
It was hard for me to read the misapprehension of motives in these posts. Hard because I think Mozilla’s earned more credit than that, and hard because it means I haven’t done a good job articulating them.
Let me be clear here because I’m one of the guys who actually sits in these conversations: when we get together to talk about a change like this, concepts like “gotta chase the other guys” are nowhere in the conversation. When we get together and draw on whiteboards, and pound on the table, and push each other to be better, it is for one unifying purpose: to do right by our users and the web.
I wrote about this a while back, but it bears repeating. We can’t afford to wait a year between releases ever again; we can’t afford to wait 6 months. Think how much the web changes in a year, how different your experience is. Firefox 4 was 14 months in the making. A Firefox that updates once every 14 months is not moving at the speed of the web; we can’t go back there. Every Firefox release contains security, compatibility, technology and usability improvements; they should not sit on the shelf.
There’s nothing sacrosanct about a 6-week cycle, but it’s not arbitrary either. It comes directly from our earnest belief that it is the best way for us to serve our users and the web at large.
And so the hardest thing for me to read was the suggestion that…
We Take Our Users For Granted
Nonsense. I don’t know how else to say it. In a very literal way, it just doesn’t make sense for a non-profit organization devoted to user choice and empowerment on the web to take users for granted. The impact of these changes on our users was a topic of daily conversation (and indeed, clearly, remains one).
To watch a Mozilla conversation unfold, in newsgroups or in blogs, in Bugzilla or in a pub, is an inspiring thing because of how passionately everyone, on every side of an issue, speaks in terms of the people of the web and how we can do right by them. We are at our most excellent then.
There’s beauty in the fact that this is another of those conversations. It is not lost on me, nor on Jono and Atul, I’d wager. They are Mozillians. And I believe they care deeply about Firefox users. I hope they realize how much the rest of us do, too.
05 Mar 2009
Deep Packet Inspection Considered Harmful?
I was recently asked, in the context of the ongoing Phorm debacle, to meet with members of the UK government, along with other interested parties, and discuss deep packet inspection technologies and their impact on the web. I’m still organizing my thoughts on the subject. I’ve put some here, but I’d love to know where else you think I should look to ensure I have considered the relevant angles.
Brief Background
Phorm’s technology hooks in at the ISP level, watches and logs user traffic, and uses it to assemble a comprehensive profile for targeting advertising. While an opt-out mechanism was provided, many users have complained that there was no notice, or that it was insufficiently clear what was going on. NebuAd, another company with a similar product, has apparently used its position at the ISP level not only to observe, but also to inject content into pages before they reach the user. It’s hard to get unbiased information here, but this is what I understand of the situation.
Thoughts
1. Deep packet inspection, in the general case, is a neutral technology. Some technologies are malicious by design (virus code, for instance), but I think DPI has as many positive uses as negative. DPI can let an ISP make better quality of service decisions, and can be done with the full knowledge and support of its users. I don’t think DPI, as a technology, should be treated as insidious. (For a concrete sense of what “deep” means here, see the sketch after this list.)
2. Using deep packet inspection to assemble comprehensive browsing profiles of users without explicit opt-in is substantially more questionable. My browsing history and habits are things I consider private in aggregate, even though any single visit is obviously visible to the site I’m browsing.
It’s possible that I will choose to allow this surveillance in exchange for other things I value, but it must be a deliberate exchange. I would want to have that choice in an explicit way, and not to be opted in by default, even for aggregate data. Moreover, given the complexity of this technology, I would want a great deal of care to go into the quality of the explanation. Explaining this well to non-technical users might be so difficult as to be impossible, which is why it’s so important that it be opt-in.
3. Using deep packet inspection in conjunction with software that modifies the resultant pages to include, for instance, extra advertising content, is profoundly offensive and undermines the web. The content provider and the user have a reasonable expectation that no one else is modifying the content, and a typical user should not be expected to understand the mechanics of the web sufficiently to be able to anticipate such modifications.
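To make the first point concrete, here’s a minimal sketch of what separates “deep” packet inspection from an ordinary port- or address-based filter: the classifier reads the application payload itself. The use of Python and the scapy library is purely my illustrative choice; nothing here is meant to describe Phorm’s or NebuAd’s actual systems.

```python
# A minimal DPI sketch using scapy (an illustrative choice, not anything
# Phorm or NebuAd is known to use). A shallow filter would stop at the
# port number; "deep" inspection reads the application payload itself.
from scapy.all import Raw, TCP, sniff

def classify(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # Looking inside the payload is what makes this "deep": here we
        # pull the request line out of a plaintext HTTP GET, which is
        # exactly the kind of data a profiling system could log.
        if payload.startswith(b"GET "):
            request_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
            print("observed:", request_line)

# Capturing packets requires elevated privileges; the BPF filter keeps
# this to plaintext HTTP, which is all such systems can read anyway.
sniff(filter="tcp port 80", prn=classify, store=False)
```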
Solutions
As a browser maker, we do some things to help our users here, but we can’t solve the problem. HTTPS resists this kind of surveillance and tampering well, but requires sites to provide 100% of their content over SSL. Technologies like signed HTTP content would prevent tampering, if not surveillance, but once again assume that sites (and browsers!) will support the technology. Ad blockers can turn off any injected ads, and tools like NoScript can defang any injected JavaScript, but fundamentally, HTTP content is not tamper-proof, and no plaintext protocol is eavesdropping-proof.
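To illustrate what signed content might buy us, here’s a hypothetical sketch of the client-side check. The scheme, the helper name, and the use of Python’s cryptography package are all assumptions for illustration; no such standard exists for plain HTTP.

```python
# A hypothetical sketch of "signed HTTP content": the site publishes a
# detached signature for the page body, and the client checks it against
# the site's public key before trusting the content. Any bytes an ISP
# injects in transit would break the signature. The signature scheme and
# key distribution here are assumptions; no such HTTP standard exists.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def page_is_untampered(body: bytes, signature: bytes, site_public_key) -> bool:
    """Return True if body matches the detached signature the site published."""
    try:
        site_public_key.verify(
            signature,
            body,
            padding.PKCS1v15(),  # scheme chosen for illustration only
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        # Injected ads, altered markup, any modification at all lands here.
        return False
```

Note that a check like this would only catch tampering; the ISP can still read everything in transit, which is exactly the plaintext problem above.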
People trust their ISPs with a huge amount of very personal data. It’s fine to say that customers should vote with their feet if their ISP is breaking that trust, but in many areas, the list of available ISPs is small, and so the need for prudence on the part of ISPs is large.
That’s what I’m thinking so far. What am I missing?