So we need to get better. We need to start fixing our messages to users so that we are more accurately communicating security information, while being mindful not to bury them in technicalities they neither want nor need. We need cues that are persistent (not relying on people to notice their absence), that are difficult to spoof, and that don’t mix metaphors.
We also, difficult as it is, need to get out of the “safety” game. We can’t tell users “this site is safe” because we don’t know that. Even ignoring the liabilities that might come with such a claim, there isn’t a good technological way to tell, right now, whether a particular site is safe in the way users care about. Do they handle credit card information properly? Do they ignore angry customers? Are they a front for stolen goods? Naughty people like these could get SSL certificates (and accompanying padlocks), and even the extended validation practices being discussed wouldn’t really stop them.
What we can do is equip people to make the safety decision for themselves, just as they often have to in the physical world, because we do have some information. It’s like putting ingredient labels on food. What we can do is change the conversation to be about identity instead of safety. This is important, so pay attention:
We need to change the conversation to be one about identity, not safety.
Identity is something we can verify. The padlock conflated identity with other things like encryption status and security, and while that conflation is almost natural to PKI veterans, it has proven misleading for users.
This is a preliminary mockup, and mostly it demonstrates my inability to draw. Having said that though, it’s something I’d like to see us looking at for Firefox. The idea is that as soon as the user starts loading a page on a site they’ve never visited, Firefox tries to identify it.
Why The Dude?
He is the international standard passport guy. I call him Larry.
If we’re going to be making a change like this, to talk about identity instead of security, then our visual language needs to reflect that too. I’m not at all stuck on the passport dude in particular, but he is iconic, somewhat visually simple (though not as much as I might like), internationalizable, and already familiar to a large percentage of the population, in a role not unlike the one he would be playing here (i.e. a verifier of trusted identity documents).
Other thoughts have included:
- The blue i for “information”. Visually simple and already in common use, but we’re looking for something a little more specific than just generic “information.” It also might not internationalize well to non-Latin alphabets.
- A simpler icon representing a passport (i.e. sans-Larry). This would seem to get us over the visual simplicity hump, but it’s hard to distinguish a passport from a generic book without resorting to either fine detail or language, both of which hurt us here.
- Very simple icons like ? or checkmarks in place of the lock. There isn’t a visual constant (like Larry) to tie the icons together in this case, which risks leaving the icons as visual clutter, and users without a clear idea of what they represent.
But other ideas (or more attractive, royalty-free renderings of Larry) are certainly welcome.
How Does It Work?
To avoid being profoundly irritating, I’m thinking we don’t get in your face on sites that have already been checked out once. In all cases the little dude will live in the address bar, to be interacted with as and if desired, but only on new sites will the speech bubble and text come up. This means that on, say, a phishing site, the speech bubble is basically guaranteed to pop up, actively informing the user. The fact that the speech bubble crosses between chrome and content area is something that also makes spoofing more challenging.
Technologically, what’s happening here is that we’re looking for an EV or other high-assurance certificate. This is a precursor to loading an HTTPS connection anyhow, which means this can be happening before content is presented, minimizing the impact on actual web interactions. And yes, those among you who know how this works might object to the claim that Firefox is “verifying” – it takes us milliseconds to verify an SSL cert’s validity and we’re really only checking an OID attached by the CA. But it’s a sensible mental model to develop with our users, I’d argue.
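The check being described amounts to looking for a known EV policy OID in the certificate’s policies. A minimal sketch of that idea follows; the function and the OID table are illustrative (2.23.140.1.1 is the CA/Browser Forum’s generic EV policy OID, and 2.16.840.1.114412.2.1 is an OID DigiCert uses for EV), and a real browser of course also validates the chain, hostname, and revocation status first:

```python
# Illustrative sketch, not Firefox code: after the certificate chain is
# validated, map an EV policy OID from the certificatePolicies extension
# to the CA that vouched for the site's identity.

# Tiny example subset of EV policy OIDs and the verifiers behind them.
EV_POLICY_OIDS = {
    "2.23.140.1.1": "CA/Browser Forum EV",
    "2.16.840.1.114412.2.1": "DigiCert",
}

def identify(cert_policy_oids):
    """Return the verifier's name if the cert carries a known EV policy
    OID, or None if the site can't be identified this way."""
    for oid in cert_policy_oids:
        if oid in EV_POLICY_OIDS:
            return EV_POLICY_OIDS[oid]
    return None

print(identify(["2.16.840.1.114412.2.1"]))  # → DigiCert
print(identify(["1.3.6.1.4.1.99999.1"]))    # → None (no EV OID present)
```

Which is why “verifying” is a fair mental model even though, mechanically, it’s a dictionary lookup on work the CA already did.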
Firefox verifies, and then, assuming everything’s super, we get:
After a few seconds (or on any activity within the content area: scrolling, mouse clicks, or typing) the bubble will collapse back into the address bar icon and get out of your way. This collapsing action helps tie the two pieces of presentation together, and invites the user to interact with the address bar entry in the future. On mouseover or click, we can bring the speech bubble back up and so reinforce our users’ behaviour to go here when seeking information.
The visuals, once again, are open to design revision, but the key takeaways are that when a site can be properly identified, we:
- Change the visual treatment to reflect the fact that we have received valid identification.
- Show the user a meaningful, verified business name, giving the user more than just the domain name to work with.
- Identify the party responsible for verifying that identification, since until now users have had very little basis for making informed decisions about which CAs they trust – the supposed root of the entire public SSL infrastructure.
- Provide them with a discoverable method to get more information.
If that last bullet makes you wonder whether we are also looking to change the way we present the “Page Info” dialogs, you get a gold star. That is another blog post though.
When the site cannot be identified, we get this instead:
Personally, I think that text is a little wordy, though less so than my first attempts. We have to be mindful here not to make every site without an EV cert feel criminal. Even the red in the question mark might be too harsh, but again the key design points are:
- The visuals have been weakened – lower contrast, question marks. The idea is not to portray danger, just uncertainty.
- Instead of identifying company and verifier, we have only some text to elaborate on the situation. It must be kept to a sentence, ideally a short one, so that it has some chance of being read.
- Once again, there is a call to investigate the page further should the user desire. Once again we are putting the decision making power in the hands of the person who can make it. Ingredients on a soup can.
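Pulling the two cases together, the presentation logic amounts to something like the following toy sketch. All names and strings here are my own shorthand for the mockups, not actual Firefox UI code:

```python
# Toy sketch of the two speech-bubble states described above.

def speech_bubble(org_name=None, verifier=None):
    """Build the bubble contents: confident treatment plus organization
    and verifier when identification succeeds; a weakened treatment and
    one short explanatory sentence when it doesn't."""
    if org_name and verifier:
        return {
            "icon": "larry-check",          # strong visuals: identity verified
            "line1": org_name,              # meaningful, verified business name
            "line2": f"Verified by: {verifier}",
            "more": "More information...",  # discoverable path to dig deeper
        }
    return {
        "icon": "larry-question",  # low contrast: uncertainty, not danger
        "line1": "This site does not provide identity information.",
        "more": "More information...",      # same call to investigate
    }

bubble = speech_bubble("Mozilla Corporation", "DigiCert")
print(bubble["line2"])  # → Verified by: DigiCert
```

Note that the manually-trusted case from later in this post fits the same shape: the user’s own vetting just becomes the verifier (“Verified by: You”).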
Putting It All Together
Changing the conversation about web security is easier said than done. And it’s easier to bitch about the padlock than it is to try to put something new out there. But passive, intermittent, spoofable and misleading security cues really are a bad thing because there are lots of bad people out there. The design I discuss here is evolving, but it is persistent, elaborative, difficult to spoof and avoids complicating things with mixed metaphors.
There are pragmatic issues aplenty, of course. This blog post isn’t intended to delve into all of them, but a couple we’ll need to look at are:
- How to handle self-signed certs or CA-issued but non-EV certs that you are confident you can trust. Answer: probably we just let you manually add those certs, at which point they get the checkmark, and read “Verified by: You.” This task needn’t be particularly easy or prominent in the UI, since it’s somewhat rare, and relatively expertish. It’s not like you can’t proceed without it.
- Do we have the right set of information in the speech bubble? Should the domain name be there too? Is it more likely to help, or hinder, decision making?
As for the padlock, I think it has had its day. That’s a hard thing for me to say, not just because I remember its birth, and have lived with it ever since, but also because it has a very significant amount of user learning behind it, and that’s a hard thing to abandon. But that inertia can’t keep us stuck somewhere we don’t want to be. If the padlock really had all that hifalutin learning behind it, I dare say phishing incidents would be somewhat rarer than they are, and a bad idea doesn’t get better by being old.
I’m not opposed to displaying the padlock as a secondary indicator somewhere for users that want to dig for it, as an emblem purely of encryption (as it was, arguably, initially intended). But I think its time as the primary security indicator is ending.
What do you think?
[N.B. I know beltzner hates it when people ask open-ended questions like that, but it turns out I already know what he thinks. And he gets partial credit for much of this thinking, having been Mozilla’s sit-in on a lot of security UI stuff before I came around.]