We’ve made a lot of changes (and more importantly, a lot of positive progress) in security UI for Firefox 3.
We have built-in malware protection now, and better phishing protection. We have a password manager that intelligently lets you see whether your login was successful before saving, instead of interrupting the page load. We have gotten rid of several security dialogs that taught users to click OK automatically, unseeingly. We have OCSP on by default. We have a consistent place in the UI now where users can get information about the site they are visiting, including detailed secondary information about their history with the site. All of which are first steps on a long road towards equipping users with more sophisticated tools for browsing online, by taking advantage of habits they already have and things we already know. All the people who worked on this stuff know who they are, and I want to thank them, because it sure as hell wasn’t all me.
With Firefox 3 in full down-hunker for final release (and with conference silly season upon us), though, I’ve started to get serious about thinking through what comes next.
Here’s my initial list of the 3 things I care most about. What have I missed?
1. Key Continuity Management
Key continuity management is the name for an approach to SSL certificates that focuses on “is this the same site I saw last time?” rather than “is this site presenting a cert from a trusted third party?” Those approaches don’t have to be mutually exclusive, and shouldn’t be in our case, but supporting some version of this would let us deal more intelligently with crypto environments that don’t use CA-issued certificates.
The exception mechanism in Firefox 3 is a very weak version of KCM, in that security exceptions, once manually added, do have “KCM-ish” properties (future visits are undisturbed, changes are detected). But until the whole process happens transparently, without users having to add exceptions by hand, we miss the biggest advantage of this approach.
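To make the idea concrete, here’s a rough trust-on-first-use sketch in Python. It has nothing to do with our actual code, and the store name and function names are made up for illustration; the point is just the shape of the decision: remember the fingerprint a host presented last time, stay quiet while it stays the same, and speak up when it changes.

```python
# Illustrative only: a trust-on-first-use check on a site's certificate.
import hashlib
import json
import socket
import ssl

STORE = "seen_certs.json"  # hypothetical local store of host -> fingerprint

def fetch_fingerprint(host, port=443):
    """Connect and return the SHA-256 fingerprint of the presented cert."""
    ctx = ssl.create_default_context()
    # We want the raw cert even if no CA signed it, so skip CA validation here.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_continuity(host):
    try:
        with open(STORE) as f:
            seen = json.load(f)
    except FileNotFoundError:
        seen = {}
    current = fetch_fingerprint(host)
    previous = seen.get(host)
    if previous is None:
        seen[host] = current              # first visit: remember it quietly
        verdict = "first visit, remembered"
    elif previous == current:
        verdict = "same cert as last time" # no reason to interrupt anyone
    else:
        verdict = "cert changed -- this is the moment a warning earns its keep"
    with open(STORE, "w") as f:
        json.dump(seen, f)
    return verdict

if __name__ == "__main__":
    print(check_continuity("example.org"))
```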
Why I care: KCM lets us eliminate the most benign and most frequently occurring SSL error in Firefox 3. Self-signed certs aren’t intrinsically dangerous, even if they do lack any identification information whatsoever. The problem is that, case by case, we don’t have a way to know whether a given self-signed cert represents an attack in progress. The probability of that event is low, but the cost when it does happen is high, so we get in the way. That’s not optimal, though. When we can tell the risk is negligible, because the cert is the same one the user has been seeing all along, we should get out of the way, and save our warnings for the times when they can be most effective.
2. Secure Remote Passwords
The Secure Remote Password (SRP) protocol is a mechanism (have some math!) for allowing a username/password-style exchange to happen without the actual password ever going out over the wire. Rob Sayre already has a patch. That patch makes the technology available, but putting together a UI for it that resists spoofing (and is attractive enough that sites want to participate) will be interesting.
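For the curious, here’s a toy walk-through of the SRP-6a arithmetic in Python. It is not Rob’s patch, the parameters are illustration-sized rather than the real groups from RFC 5054, and it stops at the shared secret (the follow-up proof messages are omitted), but it shows the property that matters: both sides derive the same key, the password never crosses the wire, and the server only ever stores a verifier.

```python
# Toy SRP-6a key agreement. Parameters are NOT secure; real deployments
# use the large named groups from RFC 5054.
import hashlib
import secrets

N = 2**127 - 1   # toy prime modulus (Mersenne prime), illustration only
g = 3            # toy generator

def H(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

k = H(N, g)

# Enrollment: the server stores only (salt, verifier), never the password.
username, password = "alice", "correct horse battery staple"
salt = secrets.randbits(64)
x = H(salt, username, password)
v = pow(g, x, N)                       # the verifier

# Login: client sends A, server replies with salt and B.
a = secrets.randbelow(N); A = pow(g, a, N)
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N
u = H(A, B)

# Client side: knows the password, so it can recompute x.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)

# Server side: knows only the verifier, derives the same secret.
S_server = pow(A * pow(v, u, N), b, N)

assert S_client == S_server
session_key = H(S_client)              # both sides proceed with this key
print("shared key agreed without the password ever being sent")
```

Only a server that actually holds the verifier ends up with the matching key, which is where the “it authenticates the site to you too” side effect comes from.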
Why I care: SRP is not the solution to phishing, but it does make it harder to make use of stolen credentials, and that’s already a big deal. It also has the happy side effect of authenticating the site to you while it’s authenticating you to the site. I wouldn’t want this useful technology to get stuck in the chicken-and-egg quagmire of “you implement it first.”
3. Private Browsing Mode
This is the idea of a mode for Firefox that would protect the user’s privacy more aggressively, and erase any trace of having been in that mode after the fact. Ehsan Akhgari has done a bunch of work here, and in fact has a working patch. While his version hooks into all the various places we might store personal data, I’ve also wondered about a mode where we just spawn a new profile on the spot (possibly with saved passwords intact) and then delete it once finished.
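To be clear about what I mean by that second option, here’s a back-of-the-napkin sketch in Python, purely illustrative: point the browser at a scratch profile directory and remove the directory when the session ends. It assumes the -no-remote and -profile command-line arguments, and all the real work would be in making something like this seamless (and in deciding what, like saved passwords, gets carried over).

```python
# Illustrative "throwaway profile" session: everything the browser writes
# (history, cookies, cache) lands in a scratch directory we delete afterwards.
import shutil
import subprocess
import tempfile

def browse_privately(firefox="firefox"):
    profile_dir = tempfile.mkdtemp(prefix="ephemeral-profile-")
    try:
        # -no-remote keeps this instance separate from any running Firefox;
        # -profile points it at the scratch directory instead of the default.
        subprocess.run([firefox, "-no-remote", "-profile", profile_dir])
    finally:
        # Removing the profile directory erases the trace of the session.
        shutil.rmtree(profile_dir, ignore_errors=True)

if __name__ == "__main__":
    browse_privately()
```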
Why I care: Aside from awkward teenagers (and wandering fiancés), there are a lot of places in the world where the sites you choose to visit can be used as a weapon against you. Private browsing mode is not some panacea for governmental oppression, but as the user’s agent, I think it is legitimately within our scope (and morally within our responsibility) to put users in control of their information. We began this thinking with the “Clear Private Data” entry in the tools menu, but I think we can do better.
(And also…)
Outside of these 3, there are a couple of things that I know will get some of my attention, but that need more work to understand before I can talk intelligently about how to solve them.
The first is for me to get a better understanding of user certificates. In North America (outside of the military, at least), client certificates are not a matter of course for most users, but in other parts of the world they are becoming downright commonplace. As I understand it, Belgium and Denmark already issue certs to their citizenry for government interaction, and I think Britain is considering its options as well. We’ve fixed some bugs in that UI in Firefox 3, but I think it’s still a second-class UI in terms of the attention it has gotten, and making it awesome would probably help a lot of users in the countries that rely on client certs. If you have experience and feedback here, I would welcome it.
The second is banging the drum about our mixed content detection. We have some very old bugs in this area, and mixed content can break all of our assumptions about secure connections: a single insecure script loaded into an HTTPS page can read and rewrite everything else on that page. I think it’s just a matter of getting the right people interested in the problem, so it may be that the best way for me to solve this is with bottles of single malt. Whatever it takes. If you can help here, name your price.
Obviously I’ve left out all the tactical fixup work on the UI we already have. We all know that work will need to happen, and that the existing UI will need to be re-evaluated and evolved. I wanted to get these bigger-topic thoughts out early, so that people like you can start thinking about whether they are interesting and relevant to the things you care about, and shouting angrily if they aren’t.