21 Jun 08

Firing Up Browser Security

[Photo: "Low Flying Dogs" on Flickr] Window and I recently did a joint interview for Federico Biancuzzi at SecurityFocus about many of the security changes we’ve made in Firefox 3. It covers both front-end and back-end information, including several changes I haven’t had a chance to mention here before.

If you’re interested, check it out.

[PS – Full props to r80o on flickr – this is a pretty excellent photo for “caution”, and CC too!]


21 May 08

Mal-what? Firefox 3 vs. Bad People

A lot of the things I write here are for geeks.  That’s unsurprising, given my own wonkish leanings, but I appreciate that it makes me a tough guy to love, much less read, at times.  Sorry about that, and thanks for sticking with me.

With Firefox 3 on the cusp of the precipice of the knife’s edge of release, though, I wanted to stop pretending that everyone reads the same articles I do and talk about one of the many, really concrete things we’re doing to keep our users, like you, safe.  There will be graphs.

Continue reading →


06 May 08

About Larry

[Image: Blue Larry] I’ve been meaning to write a post like this for a while, and maybe I still will, but in the meantime Deb has done a great job of introducing the world to Larry.  Her writing is enviably clearer than my own, so you should go check it out right now.

I bet she’d love it if you gave her some digg love, too.

[Killing comments on this one to reduce forking/repetition – take ’em to digg or Deb]


16 Apr 08

Security UI in Firefox 3plus1

We’ve made a lot of changes (and more importantly, a lot of positive progress) in security UI for Firefox 3.

We have built-in malware protection now, and better phishing protection.  We have a password manager that intelligently lets you see whether your login was successful before saving, instead of interrupting the page load.  We have gotten rid of several security dialogs that taught users to click OK automatically, unseeingly.  We have OCSP on by default.  And we have a consistent place in the UI where users can get information about the site they are visiting, including detailed secondary information about their history with the site.

All of these are first steps on a long road towards equipping users with more sophisticated tools for browsing online, by taking advantage of habits they already have and things we already know.  All the people who worked on this stuff know who they are, and I want to thank them, because it sure as hell wasn’t all me.

With Firefox 3 in full down-hunker for final release (and with conference silly season upon us), though, I’ve started to get serious about thinking through what comes next.

Here’s my initial list of the 3 things I care most about.  What have I missed?

1. Key Continuity Management

Key continuity management is the name for an approach to SSL certificates that focuses more on “is this the same site I saw last time?” instead of “is this site presenting a cert from a trusted third party?”  Those approaches don’t have to be mutually exclusive, and shouldn’t in our case, but supporting some version of this would let us deal more intelligently with crypto environments that don’t use CA-issued certificates.

The exception mechanism in Firefox 3 is a very weak version of KCM, in that security exceptions, once manually added, do have “KCM-ish” properties (future visits are undisturbed, changes are detected).  But without the whole process being transparent to users, we miss the biggest advantage to this approach.

Why I care: KCM lets us eliminate the most benign and most frequently occurring SSL error in Firefox 3.  Self-signed certs aren’t intrinsically dangerous, even if they do lack any identification information whatsoever.  The problem is that, case by case, we don’t have a way to know whether a given self-signed cert represents an attack in progress.  The probability of that event is low, but the potential harm is high, so we get in the way.  That’s not optimal, though.  When the risk is negligible, we should get out of the way, and save our warnings for the times when they can be most effective.
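To make the idea concrete, here’s a toy sketch of the core KCM check in Python.  This is purely illustrative (it is not how Firefox’s certificate code works, and the class name is invented): remember the fingerprint of the cert a host presented last time, accept a repeat quietly, and complain only when the key changes.

```python
import hashlib

class KeyContinuityStore:
    """Toy key-continuity check: remember the certificate fingerprint
    seen for each host, and flag any change on later visits.
    Hypothetical sketch only -- not Firefox's actual implementation."""

    def __init__(self):
        self.seen = {}  # host -> SHA-256 fingerprint of the DER cert

    def check(self, host, der_cert_bytes):
        fp = hashlib.sha256(der_cert_bytes).hexdigest()
        if host not in self.seen:
            self.seen[host] = fp
            return "first-visit"   # accept quietly, start tracking
        if self.seen[host] == fp:
            return "continuity"    # same key as last time: no warning needed
        return "changed"           # key changed: this is the time to warn
```

The interesting design question isn’t the lookup, of course; it’s deciding what the browser should do in each of those three states without training users to click through.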

2. Secure Remote Passwords

The Secure Remote Password (SRP) protocol is a mechanism (have some math!) for carrying out a username/password-style exchange without the actual password ever going out over the wire. Rob Sayre already has a patch.  That patch makes the technology available, but putting together a UI for it that resists spoofing (and is attractive enough that sites want to participate) will be interesting.

Why I care: SRP is not the solution to phishing, but it does make it harder to make use of stolen credentials, and that’s already a big deal.  It also has the happy side effect of authenticating the site to you while it’s authenticating you to the site.  I wouldn’t want this useful technology to get stuck in the chicken-egg quagmire of “you implement it first.”
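For the curious, the “math” amounts to a Diffie-Hellman-style exchange keyed off a password verifier: both sides derive the same session key, but the password itself never crosses the wire.  Here’s a toy Python sketch of the SRP-6a arithmetic with deliberately tiny parameters (real deployments use a large safe prime; this is not Firefox code or Rob’s patch, just an illustration of why the two sides agree):

```python
import hashlib
import random

def H(*args):
    """Hash helper: derive an integer from SHA-256 over the arguments."""
    digest = hashlib.sha256("|".join(str(a) for a in args).encode()).hexdigest()
    return int(digest, 16)

# Toy group parameters, for illustration only (real SRP uses a ~2048-bit safe prime).
N = 2267  # a small prime
g = 2

def srp_demo(username, password, salt):
    k = H(N, g) % N
    x = H(salt, username, password)   # private key derived from the password
    v = pow(g, x, N)                  # verifier: the only thing the server stores

    a = random.randrange(1, N)        # client ephemeral secret
    A = pow(g, a, N)                  # client sends A
    b = random.randrange(1, N)        # server ephemeral secret
    B = (k * v + pow(g, b, N)) % N    # server sends B

    u = H(A, B)                       # scrambling parameter, from public values

    # Client: knows the password (via x), never sends it.
    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    # Server: knows only the verifier v, computes the same value.
    S_server = pow(A * pow(v, u, N), b, N)
    return S_client, S_server
```

Both expressions reduce algebraically to g^(b·(a + u·x)) mod N, which is why the keys match; an eavesdropper sees only A, B, and the salt.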

3. Private Browsing Mode

This is the idea of a mode for Firefox that would protect users’ privacy more aggressively, and erase any trace of having been in that mode after the fact.  Ehsan Akhgari has done a bunch of work here, and in fact has a working patch.  While his version hooks into all the various places we might store personal data, I’ve also wondered about a mode where we just spawn a new profile on the spot (possibly with saved passwords intact) and then delete it once finished.
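The throwaway-profile version of the idea is simple enough to sketch.  This hypothetical Python outline (Firefox’s `-profile` flag is real; the function and everything else here are illustrative, not Ehsan’s patch) shows the shape of it:

```python
import os
import shutil
import tempfile

def browse_privately():
    """Sketch of the throwaway-profile approach to private browsing:
    spawn a fresh profile directory, browse with it, then delete it.
    Hypothetical outline only."""
    profile_dir = tempfile.mkdtemp(prefix="ephemeral-profile-")
    try:
        # Here we would launch the browser against the scratch profile,
        # e.g. subprocess.run(["firefox", "-profile", profile_dir])
        pass
    finally:
        # Erase every trace: history, cache, cookies, the lot.
        shutil.rmtree(profile_dir)
    return profile_dir  # returned only so callers can confirm deletion
```

The appeal of this approach is that it fails safe by construction: anything the browser writes lands in a directory that is guaranteed to be destroyed, rather than depending on every storage subsystem remembering to clean up after itself.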

Why I care: Aside from awkward teenagers (and wandering fiancés), there are a lot of places in the world where the sites you choose to visit can be used as a weapon against you.  Private browsing mode is not some panacea for governmental oppression, but as the user’s agent, I think it is legitimately within our scope (and morally within our responsibility) to put users in control of their information.  We began this thinking with the “Clear Private Data” entry in the tools menu, but I think we can do better.

(And also…)

Outside of these 3, there are a couple things that I know will get some of my attention, but involve more work to understand before I can talk intelligently about how to solve them.

The first is to get a better understanding of user certificates. In North America (outside of the military, at least) client certificates are not a matter of course for most users, but in other parts of the world they are becoming downright commonplace.  As I understand it, Belgium and Denmark already issue certs to their citizenry for government interaction, and I think Britain is considering its options as well.  We’ve fixed some bugs in that UI in Firefox 3, but it’s still second-class in terms of the attention it has gotten, and making it awesome would probably help a lot of users in the countries that rely on them.  If you have experience and feedback here, I would welcome it.

The second is banging on the drum about our mixed content detection.  We have some very old bugs in the area, and mixed content has the ability to break all of our assumptions about secure connections.  I think it’s just a matter of getting the right people interested in the problem, so it may be that the best way for me to solve this is with bottles of single malt.  Whatever it takes.  If you can help here, name your price.
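The core check a mixed-content detector performs is easy to state, even if wiring it into every load path in the browser is not.  A minimal illustrative sketch in Python (not the actual Gecko logic; the function name is invented):

```python
from urllib.parse import urlparse

def find_mixed_content(page_url, resource_urls):
    """Flag sub-resources fetched insecurely from a secure page --
    the basic check behind any mixed-content detector. Illustrative only."""
    if urlparse(page_url).scheme != "https":
        return []  # mixed content is only meaningful on secure pages
    return [u for u in resource_urls if urlparse(u).scheme == "http"]
```

The hard part is everything around this one-liner: catching every load path (scripts, frames, XHR, favicons), and deciding whether to warn, block, or silently downgrade the security indicators when a match is found.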

Obviously I’ve left out all the tactical fixup work on the UI we already have.  We all know that those things will need to happen, to be re-evaluated and evolved.  I wanted to get these bigger-topic thoughts out early, so that people like you can start thinking about whether they are interesting and relevant to the things you care about, and shouting angrily if they aren’t.


17 Mar 08

Should Malware Warnings have a Clickthrough?

In the latest nightly builds of FF3, and in the upcoming Beta 5, we let users choose to ignore our phishing warning, and click through to the site, just like they could in Firefox 2:

[Screenshot: the “Ignore this Warning” link]

But that same spot is empty in the malware case (unless you install my magic extension).  Should it be?  It’s a harder question than it seems at first blush.

Continue reading →


26 Feb 08

State of the Malware Nation

It’s a couple of weeks old, I know, but for anyone who hasn’t seen it: Google’s Online Security Blog has linked to a draft article by some of their malware researchers about the trends they’ve observed in malware hosting and distribution.  Aside from a troubling preoccupation with CDF graphs, it’s a really interesting look at the way malware networks spread through the internet.

I found this snippet interesting:

We also examined the network location of the malware distribution servers and the landing sites linking to them. Figure 8 shows that the malware distribution sites are concentrated in a limited number of /8 prefixes. About 70% of the malware distribution sites have IP addresses within 58.* — 61.* and 209.* — 221.* network ranges.

Our results show that all the malware distribution sites’ IP addresses fall into only 500 ASes. Figure 9 shows the cumulative fraction of these sites across the 500 ASes hosting them (sorted in descending order by the number of sites in each AS).  The graph further shows the highly nonuniform concentration of the malware distribution sites— 95% of these sites map to only 210 ASes.

But I think this is the big takeaway:

[Figure: Malware Landing Site Distribution]

Because malware is being distributed via ad networks more and more, it’s no longer safe to assume that you’ll be okay if you just avoid the seedy parts of the net.  And because infection often no longer requires user interaction, the old-school “don’t run executables from random websites” best practice might not be enough either.  To stay on top of things, you’ll want a browser that is as hardened as we can make it, and that also actively checks against known malware sites.

And lookit, the Firefox 3 beta is right over here.


23 Jan 08

Being Green, easiness of

As of today’s nightly Firefox build, we’ve turned on EV support and activated the VeriSign EV root for testing purposes.  What this means is that when you go to sites that have VeriSign-issued EV certificates like, say, British Airways, the site-identity button (shall we call it Larry? Yes. Let’s.) will pick up the name of the site owner, all green-like.

I rather suspect this might startle a few of you.

[Screenshot: Larry on British Airways]

I’ve talked a lot about identity and security in Firefox 3, but some of the actual changes were easy to ignore if you weren’t looking for them.  The site button has been around for a while, with Larry telling you what he knows about a site, but you could choose not to click on him, not to get that information.  A while ago, I mentioned a way to get the EV behaviour ahead of schedule, if you wanted to test, but now those steps are no longer necessary.

So things are going to feel a little weird for a few days.  There are about 4000 EV sites these days (the AOTA has a pretty long list), so you will probably hit a few.  By all means, open bugs.  The whole reason we’re doing this is to get more sunlight on the code, because it has required weird custom builds and secret handshakes for too long.

The story goes that when London first introduced street signs, there was significant protest.  They were gaudy, the argument went, and anyhow the locals already knew where they were going.  Many streets in London still don’t have them.  I’m excited about getting feedback into the UI to help users know better who they’re dealing with online, help them orient themselves, and rebuild some of the cues that we all take for granted in the real world.  But like the London signposts, I suspect it’ll take some getting used to.  Especially on Proto. Where it currently looks, as Shaver so eloquently puts it, like the South end of a North-facing horse.


10 Jan 08

Standardizing UI, and other Crazy Ideas

[Image: Decision making, by nerovivo] Standards make the web go ’round.  I hope it doesn’t come as too much of a surprise that Mozilla cares a lot about standards, or that a significant percentage of the community, myself included, participates in active standards groups, be they W3C, WHATWG, industry consortia, or others.

They are often, to be honest, a slog.  Anything important enough to be standardized is important enough to attract a variety of interests and motivations, and being in the middle of multiple divergent forces is just as fun as it sounds.  They are usually noble slogs, though.  An open web needs a set of lingua francas, and as it matures, people invent new creoles to express new ideas, so our standards need to constantly evolve and add that new wealth to the growing lexicon of awesome.

A little while ago though, the W3C decided to try something sort of odd.  They formed up a working group to look at standardizing security UI.

Standardizing. UI.

To anyone who has designed a user interface, that sort of feels like standardizing art. Not that we are quite so full of hubris as to imagine ourselves Caravaggios, but UI design is a complex interplay of functionality, ergonomics, and subjective experience.  There are general principles, sure, but it’s a very different beast from, say, CSS2 margin properties, where everyone can at least agree that there ought to be a single correct result, even if they disagree about what that result should be or how to obtain it.

Nevertheless, boldly forth they have gone and established the Web Security Context working group with a pretty broad charter. Capturing current best practice is certainly fair game, but it is equally permissible for the group to try to move the state of the art forward.  We’re active members, as are Opera and Konqueror (though not Apple or MS), but like most standards bodies, the group includes folks from academia, from other companies, and from various interested groups as well.

This workgroup has put out its First Public Working Draft (FPWD), which means I have two things to ask you, or maybe ask of you.  In marketing, I believe they call this the Call to Action, so if you were looking for it, here it is!

The first thing I would ask, if you are at all interested, is that you read it and remark upon it.  The group needs public comment, and you fabulous people are ably placed to provide it.

This first draft was kept deliberately inclusive, to make sure that the majority of recommendation proposals got public airings. So if your main criticism is just “too much,” that is unsurprising, but still welcome, feedback.

The second thing is harder.

We participate in this group for all the reasons mentioned above, and I personally take that participation seriously.  Even on the sketchy topic of standardized UI, I think there’s potential. A document which all browsers conform to as a baseline guide, which says things like “Don’t let javascript arbitrarily resize windows, because it lets this spoofing attack happen,” is a valuable one.  At Mozilla, we talk about things like making the mobile web a better place, for example. One thing we can do right up front in that world is spare this new generation of browser implementors (and their users!) from rediscovering our mistakes the hard way.  This standard could help do that.

But this draft is also defining new UIs, new interactions, new metaphors for online browsing.  The academics in the group have offered to gather usability data on several proposed recommendations, but at a fundamental level, I have asked the group a couple times whether it’s right to use a standard to do this kind of work at all.  I think several of the proposed requirements sound like interesting, probably fruitful UI experiments.  But that’s not the same as “Standards-compliant user agents MUST …”

My second question is this: as members of the Mozilla community, is this an effort that you want me (or people like me) participating in, and helping drive to final publication?

I’m still engaged on the calls and the mailing list – I still see good things coming out of the group, and I have my own opinions about how to best contribute.  But as an employee of Mozilla, I feel an obligation to steward my own resources responsibly, and to expend them on things that the community finds valuable, so it’s important for me to hear how people feel about the value of this work.

Opinions? Suggestions? Funny anecdotes?