Revisiting Security UI – Part 1 of 2

I tend to get excited about things. One of the key problems I have when writing – blogs, articles; books will probably be even worse – is that my writing tends to wander to whichever dog has a puffy tail at the moment, and I sometimes look back and wish each piece had been tighter and more single-minded.

Take my post last week. Right now I’m excited about Firefox security UI, and about how to do a better job with the way we give users information. This is a good thing for me to be excited about, since it pays my bills. But I want to engender conversation about it, and to build context around my thoughts on the matter, and meandering isn’t necessarily the best way to do that.

So. This is the first of two posts I will write in the next week or so about this stuff. The goal is to outline:

  1. The way things are, and why we need to change them
  2. My thoughts on where we need to go next

This is the first. What are we, as browser builders, doing for the user today when it comes to security UI?

The Padlock

The most iconic (ha!) of browser security indicators is the little padlock icon that appears on “safe” websites. It can be in the address bar…

Padlock in Address Bar

… or the status bar…

Padlock in Status Bar

… and is one of the few really universal indicators (please ignore the monkey.)

It, and its predecessor, Netscape’s key icon, try to tie a complicated statement about web site encryption to a concrete metaphor, a lock and key. The good thing about this indicator is that it has some history of user education behind it, and it’s a relatively easy concept to understand.

The padlock has a lot of problems, though. First of all, it is misleading; it doesn’t mean “safe” at all. The padlock appears when a website presents a valid SSL certificate, issued by a company that your browser thinks is trustworthy. But the bar for getting one of these can be as low as $10, and the validation the companies do varies from excellent to non-existent. Even back in 2005, there were over 400 phishing attacks using SSL. So clearly, the padlock is not equivalent to safety.
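To make the distinction concrete, here is a toy sketch – the data structures and trusted-issuer list are entirely hypothetical, nothing like a browser's actual certificate code – of the narrow statement the padlock actually makes:

```python
# Hypothetical sketch of the check behind the padlock. The padlock's
# statement is only: "this cert chains to an issuer we trust, names this
# host, and hasn't expired." It says nothing about the site being safe.
TRUSTED_ISSUERS = {"Cheap Certs Inc.", "Careful Certs Ltd."}  # made-up roots

def padlock_shown(cert: dict, hostname: str) -> bool:
    """True iff the browser would draw a padlock -- nothing more."""
    return (cert["issuer"] in TRUSTED_ISSUERS
            and cert["subject"] == hostname
            and not cert["expired"])

# A $10 cert for a phishing domain passes every one of these checks:
phish = {"issuer": "Cheap Certs Inc.",
         "subject": "paypa1-security.example",
         "expired": False}
print(padlock_shown(phish, "paypa1-security.example"))  # True: padlock, not safety
```

Nothing in that check ever asks whether the site is honest – which is exactly why over 400 phishing attacks could use SSL.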

Moreover, as with some of the other cues I discuss below, the padlock has no anti-padlock equivalent. That is, if the padlock is meant to signal safety, then the possibility of danger is indicated by… nothing. Users are expected to notice the absence of an indicator, and there is a wealth of data showing that users are very bad at treating the absence of something as a cue to change their behaviour.

Finally, the padlock’s positioning is pretty weak from a usability point of view. Putting it in the address bar was a step in the right direction, since it associates the cue more strongly with the page you are viewing, but it is still a small, peripheral indicator that only helps those who know, and remember, to check it regularly. An intrusive indicator could be worse, if it caused people to disable it completely, but if your cue is invisible to most users, it might as well not be there at all.

Address Bar Decorations

More recently, a lot of attention has been paid to the various ways to use the address bar as an indicator. The flagship example of this is IE7’s “green bar”.

IE7 Green Bar

The idea with the green bar is to call out sites which have gone through the extra trouble of obtaining an “Extended Validation” certificate. The standard is still being drafted, but MS went ahead and included support for the cue anyhow, since standards bodies rarely line up with product release schedules. Mozilla, in a similar vein, turns the address bar yellow to supplement the padlock icon on encrypted sites.

I was in New York a couple of weeks ago with other browser vendors and certificate authorities, and let me tell you, there is a lot of interest in whether or not we’ll start shipping a green bar. A consistent user experience around these things is important, so they’re not wrong to want to know where we stand.

But the green bar has most of the same problems the padlock did. Like the padlock, it is misconstrued to mean safety when it oughtn’t. “Green means go” has become the press spin on the green bar, but just like the padlock, it makes a statement about the encryption and identity of the website, not about whether its owners have honest business practices or protect your personal information.

Like the padlock, it expects users to notice its absence on sites without EV certs, leaving the address bar white in those cases. In fact, Microsoft turns the address bar yellow on “suspicious” web sites, in delicious contrast to Firefox turning it yellow on encrypted sites.

Finally, affirmative address bar decorations are spoofable. In a now relatively infamous study, Microsoft Research found that the green bar actually made users more susceptible to a particular kind of phishing attack called a picture-in-picture attack. A clever attacker can use doctored images to make it look like there’s an IE window within the real IE window, recreating the toolbars and menu items and, oh yes, a green address bar in the process. Users trained to look only for the green bar are easily fooled by this deception, in part because the real IE window isn’t offering any counter-indicating cues – just the absence of the green bar.

Page Info

When I say that the padlock isn’t about safety, I mean it. We can’t really be in the business, as browser vendors, of telling users whether a site is absolutely safe or not. What we can do is arm them with the information to make their own decisions. Not “AES encrypted with a 256 bit key”, which is so impenetrable to most users as to constitute a total lack of information, but something. In fact, we already do this. If you click on the padlock icon or right-click on a page, you can get to the page’s security information. It looks like this (at least, on a Mac):

Page Info

I don’t have much to say here except that we can do better. I don’t mean it as a knock on the developers of this code – I know it’s intended to be for advanced users, and that if you dig into it, it really does provide complete information about the certs you’re working with.

But it’s a missed opportunity. Page info, particularly the security tab, should be giving you information to help you make security judgements. Have you been to this site before? Do you have a stored password for this site? My mom might or might not ever check for that information, but on the off chance that she did, I am certain it would be more helpful to her than “(RC4 128 bit)”.
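As a sketch of the kind of summary a security tab could surface – the history and password stores here are hypothetical stand-ins, not Firefox’s actual APIs:

```python
# Hypothetical sketch: answer the questions a user can actually act on
# ("Have I been here before? Do I have a password saved here?") from
# made-up history and password stores, instead of showing cipher names.
def security_summary(url, history, saved_passwords):
    host = url.split("/")[2]  # naive scheme://host/... parsing, for illustration
    visits = sum(1 for visited in history if visited.split("/")[2] == host)
    return {
        "visited_before": visits > 0,
        "visit_count": visits,
        "has_saved_password": host in saved_passwords,
    }

history = ["https://bank.example/login", "https://bank.example/home",
           "https://news.example/"]
saved_passwords = {"bank.example"}
print(security_summary("https://bank.example/transfer", history, saved_passwords))
```

A first-ever visit to a “bank” you log into every week is exactly the signal “(RC4 128 bit)” never conveys.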


Anti-Phishing

In more recent versions of most browsers, things have gotten a little better. Most major browsers now ship with support for anti-phishing protection that looks something like this:

Anti-Phishing Warning
Anti-phishing has a lot going for it as a piece of security UI. It’s attention-getting. It’s very difficult to spoof, since it visibly crosses between content and chrome (although presumably an attacker wouldn’t want to spoof a warning message anyhow). Even the iconography is better: unlike the padlock, which provides a misleading signal, the red circle means DO NOT ENTER in the real world, and it means the same thing here. It has some minor bugs, but is mostly a good UI.

The big objection raised against it is that blacklist UIs are only as good as their blacklist, which guarantees you’re always behind the times. We’re pretty happy with it nonetheless, but it’s obviously not a complete solution.
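The objection is easy to see in a sketch like this one (the list and host parsing are hypothetical; real implementations, such as Google Safe Browsing, match hashed URL prefixes against a frequently updated list):

```python
# A minimal sketch of the lookup behind a blacklist warning. The warning
# can only fire for sites somebody has already reported and listed.
PHISHING_BLACKLIST = {"phish.example.com", "paypa1-login.example.net"}

def should_warn(url):
    host = url.split("//", 1)[-1].split("/", 1)[0].lower()
    return host in PHISHING_BLACKLIST

print(should_warn("http://phish.example.com/login"))   # True: a known, listed site
print(should_warn("http://brand-new-scam.example/"))   # False: too new to be listed
```

A phisher who registers a fresh domain an hour before the spam run sails straight past the check – hence “only as good as the blacklist.”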

So. That’s where we’re at. A lot of our cues miss the mark in some important ways. We need cues that resist spoofing, that are clearer about the kind of information they provide, and that provide it in ways that are meaningful. I have ideas here, so do lots of other smart people, and those ideas will no doubt evolve heavily in the coming months.

That’s part 2.

13 thoughts on “Revisiting Security UI – Part 1 of 2”

  1. ..and to build context around my thoughts on the matter, and meandering isn’t necessarily the best way to do that.

    It’s possible, in that case, that you should rename your blog. 🙂

    Also, I refuse to ignore the monkey.

    Also, very interesting post. Not news, mostly, but I suppose that in doing what you do, that would be your hope. 🙂

  2. Have you been to this site before? Do you have a stored password for this site?

    Those are good points, and a corresponding UI could certainly be implemented. You could also add bookmarks to the list.

  3. The statement that the browser makes is crucial to aligning the parties to work at this thing. At the moment, the browser sweeps it all under the table … to the security tab … so ordinary users have no conception of the fraud.

    EV makes the right statement: Site XYZ is identified by CA Blue. It is crucial to identify who says the site is what it claims to be; otherwise it falls to the browser. The CA must be represented as the one who said so, not only because it is their cert, but because this correctly allocates the liability to them for getting it wrong.

    But EV mucked up and turned it into a marketing-driven campaign: those who pay more (somehow) get the valuable green bar. Which means “the few” … who really aren’t so much of an issue. Meanwhile, the rest, the $10 cert factories with low or no security thresholds, are hidden from user scrutiny. To be a security statement and not a marketing statement, it should have been the other way around: Low cost CAs will always be named and shamed with the statements they make. (High cost CAs might be able to buy a better class of shame…)

  4. Browsers need to make this UI non-spoofable by phishers if it is to be of any real value. They can easily do so, (FF has hidden user preferences that enable that now) but so far have lacked the will to do so. Will that change now?

  5. Interesting post, thanks. However… blacklist-based anti-phishing indicators, apparently johnath’s favorite (and that of most or all browsers), are a dangerous dead end. Blacklists are not just always behind the times; they are simply too easy to circumvent through dynamic IP addresses and domain names. So if they become a major defense, phishers will break them, and we’ll have to fix the secure UI once more – and every time we do, there is a higher price in user adoption and confidence.

    IMHO, the solution will ultimately have to break down into:
    1. Secure the login process (many good ideas available here!). This does not solve the entire problem! We still need to prevent spoofed sites from spreading malware and false information (content)!
    2. An improved site-identification indicator. IEv7 improved a bit by indicating the organization name and CA in the location bar (unfortunately, for EV-equipped sites only). Their display, unfortunately, is not very visible and not customizable – I believe that in TrustBar, a FF extension I did with students, we did better: more visible, customizable, etc. Further improvements are needed (we are testing now), e.g. we may ask users to click on the site identifier to enter each (sensitive) site.
    3. Improved protection against malware – in particular, it may be a good idea to require explicit user identification of each site from which the browser receives content.
    4. Allow users to delegate the trust decision (which CA identifications are acceptable) to a security/trust service provider (like an anti-virus service).

