Every Six Weeks

It’s astounding to me, but we’ve been living rapid release for a few months now. We’re moving faster. A new feature implemented today and landed on mozilla-central can be delivered to our users in 12 to 18 weeks, not months or years. Incredibly, the same process that gives us that agility is giving us greater robustness, too. Testing and stabilization of each release across progressively larger audiences helps us find and fix bugs early, and build confidence in the quality of each release.

I want to clarify an important part of the process, though, that I think many people haven’t yet understood. Remember, an individual release train is 6 weeks of development time followed by 12 weeks of stabilization:

[Diagram: a single release train, 6 weeks of development followed by 6 weeks on Aurora and 6 weeks on Beta]

New work doesn’t land on Aurora and Beta. Instead, those channels focus exclusively on working with our heroic and growing community of testers to spot any unexpected issues introduced during development, and then resolve them. Looking at this diagram, you might well conclude that we’d have a release ready every 18 weeks.

Aurora and Beta are so single-minded in their focus on stabilization and testing, though, that many engineers can move on to new work. If we take a step back and look at the broader picture, this is what actually happens:

[Diagram: overlapping release trains, with a new train entering each channel every 6 weeks]

During the 12 weeks that a release spends on Aurora and Beta, the Mozilla community is not sitting idle. They are already working on features and fixes for the next release, and the release after that. Every 6 weeks their work is picked up into the next Aurora, the next Beta, and the next release. When you look at this broader picture, you notice an important point:

There can be a new release of Firefox every 6 weeks, not every 12 or 18.

I’ll say it again, because it’s important: most of the time, we’ll release a new Firefox every 6 weeks.
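
To make the pipelining concrete, here’s a minimal sketch of the arithmetic (the start date is hypothetical; only the 6-week stagger matters):

```python
from datetime import date, timedelta

CYCLE = timedelta(weeks=6)  # time a train spends on each channel

def ship_date(train: int, first_merge: date = date(2011, 4, 12)) -> date:
    """Each train takes 18 weeks end to end (6 weeks each on the
    development channel, Aurora, and Beta), but trains start 6 weeks
    apart, so one reaches users every 6 weeks. first_merge is a
    made-up anchor date for illustration."""
    return first_merge + 3 * CYCLE + train * CYCLE

for train in range(4):
    print(f"train {train} ships {ship_date(train)}")
```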

Many people are surprised by this fact, though it’s been part of the process all along. When Firefox 4 came out, we committed to ship the next release of Firefox within 3 months. We did it, and when we did I think many people concluded that we had moved to a 3-month cycle. In truth, though, the only reason it took us 3 months was that our Aurora and Beta channels started off empty; they had to wait for the new release to make it through the process. The next Firefox is already in Beta, and is scheduled to come out 6 weeks after the last one. When that happens, yet another Firefox will enter Beta, and so on.

We’re studying the effects of the process carefully; it’s a big change and we will be flexible in our approach as new information comes in. We may decide that 6 weeks is the wrong interval, for instance, though it’s worth remembering that Firefox maintenance releases have been released on 6-8 week intervals for years, and sometimes included major changes. We’re also paying close attention to the impacts this cycle has on our ecosystem of add-ons, plugins, and other 3rd party software that interacts with Firefox. We’re working with large organizations, too, to understand how rapid release can fit into their software deployment systems.

Whatever adjustments we make, it’s clear that rapid release is a major improvement in our ability to respond to the needs of our users and the web. Every 6 weeks we have a new Firefox to evaluate and, unless some surprising and irreconcilable breakage is discovered, release to the world. No one will have to wait a year for the developer scratchpad now in Beta, or the massive memory and performance improvements already on Aurora, or the slick tab management animations soon to land on Nightly. Rapid release is already paying dividends, and we’re just getting started.

[This post originally appeared on the Channels blog]

Deliberacy

[Image: a team in a rowboat on blue water]
The Firefox community is kicking ass. We just worked through our first Aurora merge as part of our new rapid release process and in less than 6 weeks, the next train leaves the station. We are rewriting the way we build software and we are doing it fast.

We’re succeeding because we’re acting deliberately. We’re doing it on purpose. We know what we need our release process to do and we’re building forward from that, instead of shooting first and calling whatever we hit the target.

I’m glad that we’re being deliberate about how we build Firefox. We need to be deliberate about what we build, too. I’ll tell you how I think that should go, after a brief digression on how it has gone up until now.

(If you have no time for that, deb’s written a more concise introduction.)

Continue reading “Deliberacy”

Vacuums and You (or, Estimating Like an Astronaut)

I’m going to teach you a surprisingly effective trick for estimating better, but first I need to talk about dressing up vacuum cleaners.

Ze Frank is a pretty creative guy, but what makes him really interesting to me is his ability to make other people creative. It’s what he does. He catalyzes creativity, frequently among those who don’t consider themselves creative. And when he talks about how he does it, he talks about the value of constraint.

Asked to go and “be creative,” he notes, most people shut down. So, instead, he asks for something more specific. He asked them to make a whole earth sandwich; they made a few. He asked people to send in pictures of vacuum cleaners dressed as people. He got 215. Constraining people, forcing them to solve a smaller problem, made them better at it.

Creativity isn’t the only thing that benefits from constraint. Asking engineers (or, really, anyone) for “an estimate” is basically akin to asking them to “be creative.” They know what examples of the thing in question look like, they understand that it’s a reasonable request, they just don’t actually know how to get there from here, much less how to be accurate about it.

Back in the sixties, NASA and the US DoD were spending a great deal of money on engineering. They therefore took a keen interest in improving planning and estimation, not unlike the interest you might take if someone were setting all of your money on fire. Out of this interest sprang the mellifluously titled “PERT/COST SYSTEMS DESIGN” which, on the subject of estimation, made this central observation:

If you ask engineers for 3 estimates (Best Case, Most Likely, Worst Case) instead of 1, you get different answers.

That’s pretty exciting! Constraints get us different answers, and different answers mean more bits of information. If you’re not convinced that this is brilliant, though, here comes some next level awesome: a (weighted) average of these 3 estimates is a better predictor of actual completion time than any one of them. Specifically,

(Best + 4*Most Likely + Worst) / 6

turns out to work pretty well in the general case. These so-called “PERT Estimates” or “3-point Estimates” give engineers credit for their assessment of “most likely” by weighting it heavily, but still allow optimism and pessimism to pull the average. I dare you to argue with this graph:

[Figure: Likelihood of project completion date vs. estimates (Science, bitches!)]

Having 3 data points actually helps in other ways, too. It means you can more clearly quantify the uncertainty of a project by comparing best and worst case estimates, and watching to see if the distance between them shrinks over time. It means you can produce “optimistic” and “pessimistic” schedules. And, most importantly, it means that everyone is saying the same thing when they estimate.
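
As a concrete illustration, here’s a minimal sketch of the three-point arithmetic; the (Worst - Best) / 6 spread is the standard PERT companion to the weighted mean, and the example numbers are made up:

```python
def pert(best: float, likely: float, worst: float) -> tuple[float, float]:
    """Three-point (PERT) estimate: a weighted mean plus a spread.

    The mean weights "most likely" 4x; the standard deviation
    (worst - best) / 6 is the usual PERT measure of uncertainty.
    Watch it shrink as a project firms up.
    """
    mean = (best + 4 * likely + worst) / 6
    stdev = (worst - best) / 6
    return mean, stdev

# Hypothetical task, estimated in days:
mean, stdev = pert(best=3, likely=5, worst=14)
print(f"expect ~{mean:.1f} days, +/- {stdev:.1f}")  # ~6.2 days, +/- 1.8
```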

Best, Worst, Most Likely. Try it for your next project, and see how it works. As we finish Firefox 4 and start looking at what comes next, there will be plenty of estimation happening, and I’m keen to see us bringing more science to the table. This may not be the right model for us, or we may discover that the coefficients need changing in our version of the equation; that’s fine. That would actually be a great result. My interest isn’t in pushing a particular tool, my interest is in getting better at planning, getting more awesome out to our users faster. I think we do that by looking for systems that have worked for others, and seeing how well they adapt to us.

And then we dress up the vacuum cleaners.

Automatic Date Links in MediaWiki

I had time between 1:1s today to solve a wiki problem that’s been nagging me. My codes, let me show you them.

Problem: We have meetings.

What’s worse, we persist in having them every week. Being the kind of project we are, we keep agendas and notes from those meetings publicly and invite the community to participate (does your browser do the same? Great!)

What you want, then, is for each week’s meeting notes to link to next week’s and last week’s, like such:

[Image: « previous week / next week » links at the top of a meeting-notes page]

And so, we do. But those links have to be hand-edited every week. Indeed, the pages for various meeting notes have earnest, heart-wrenching pleas in HTML comments, like

<!-- REPLACE YYYY-MM-DD with the previous and next week's year-month-date -->

No one should have to live like that.

Solution: ParserFunctions

Our MediaWiki install includes the ParserFunctions extension, which has a whole bag of tricks. One of these tricks is {{#time}}. #time lets you produce various kinds of time/date strings, to wit:

{{#time: Y-m-d }}

Particularly nice, though, is that you can specify relative times, e.g.

{{#time: Y-m-d|+1 week}}

The relative syntax is so flexible, in fact, that I can utter this monstrosity:

[[Platform/{{#time: Y-m-d|tuesday last week}}|« previous week]]

to link to last week’s notes from a given page!

Still with me? Because there’s one snag left. The above works for people who have a static front page with this week’s info, and only ever want to link one week back. But those relative dates are relative to now — what if I want each link in the chain to link to the week prior?

No problem — our pages are named according to their dates, so just make the link relative to that, instead:

[[Platform/{{#time: Y-m-d | {{SUBPAGENAME}} -1 week}}|« previous week]]

Presto.
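
If the template syntax is hard to read, here’s roughly the same computation sketched in Python (my own illustration of what {{#time}} is doing; MediaWiki actually does this date math in PHP):

```python
from datetime import datetime, timedelta

def previous_week_link(subpagename: str) -> str:
    """Rough equivalent of
    [[Platform/{{#time: Y-m-d | {{SUBPAGENAME}} -1 week}}|« previous week]]:
    parse the page's own date, step back one week, and re-format."""
    page_date = datetime.strptime(subpagename, "%Y-%m-%d")
    previous = page_date - timedelta(weeks=1)
    return f"[[Platform/{previous:%Y-%m-%d}|« previous week]]"

print(previous_week_link("2011-01-18"))
# -> [[Platform/2011-01-11|« previous week]]
```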

The things you learn while waiting for a phone call. If you want to get really exciting, you can do all this in transclusion tags, to have last week’s notes automatically added to this week, but that’s left as a terrifyingly-recursive exercise for the reader.

What’s your favourite MediaWiki hack?

(PS – Full credit to Melissa for giving me the idea in the first place. I am naught but the implementor.)

It’s Almost Ready

Shipping great software to lots of people is hard. At Mozilla we talk about shipping only “when it’s ready,” and the devotion our community has to Firefox users, and to shipping them a high quality product, is unlike anything I’ve seen elsewhere. We answer to no one but you.

“When it’s ready” doesn’t mean we can take our time, though. Firefox 4 is good for the web, good for our users, and puts the heat on other vendors to up their own game. We need to ship it ASAP – we want release candidates in weeks, not months. And that means a hard look at our blocker list.

Blocker bugs have a rank order. If you can’t have all of them, there are some you’d want more than others, even though every single one of them is a bug we want to fix. That’s healthy. Building software means making those calls. Each bug is evaluated against whether it’s worth holding back the thousands of fixes that have already made it into the Firefox 4 tree. At this point, very few bugs are worth holding back that much awesome.

Hard vs. Soft Blocking[1]

To that end, then, if you watch bugzilla, you’ve seen blocker bugs sprouting one of two new whiteboard labels:

  • [hardblocker] – These bugs prevent us from shipping. We’ll hold the release for the very last one of them. A hard blocker is a failure of a core part of our release criteria, e.g. a crash, a memory leak, a performance hit, a security issue, a UI breakage that can’t be recovered from, an incompatibility we can’t stomach.
  • [softblocker] – These bugs are things we want to fix as soon as possible, but that we can ship with if the hard blockers are done. They can be fixed in maintenance releases if needed, or in Firefox 5 which, remember, is not so very far away. Soft blockers might include visual polish, strange edge cases, optional aspects of new specs, or opportunistic performance wins.

Hard blockers trump everything. That doesn’t mean they are the only things that will get fixed – indeed we hope and expect many of our soft blockers to make it in as well. We didn’t clear their blocking flags; they are still legit work items and have landing pre-approval. Soft blockers are what beltzner calls the “opportunity space” – the work that lifts the quality and delight of the product. But we have to make the hard calls, and soft blockers are second priority to shipping. People paid to work on Firefox will be focusing exclusively on hard blockers, first.
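
To make the triage rule concrete, here’s a hypothetical sketch of ordering a work queue by these whiteboard labels (the bug numbers and summaries are invented, not real Firefox 4 blockers):

```python
# Hypothetical bug records; only the whiteboard labels come from the post.
bugs = [
    {"id": 111111, "whiteboard": "[softblocker]", "summary": "visual polish nit"},
    {"id": 222222, "whiteboard": "[hardblocker]", "summary": "startup crash"},
    {"id": 333333, "whiteboard": "[hardblocker]", "summary": "memory leak"},
    {"id": 444444, "whiteboard": "[softblocker]", "summary": "edge-case glitch"},
]

# Hard blockers trump everything: False sorts before True, so they come
# first; soft blockers stay in the queue as the "opportunity space."
queue = sorted(bugs, key=lambda bug: bug["whiteboard"] != "[hardblocker]")
for bug in queue:
    print(bug["id"], bug["whiteboard"], bug["summary"])
```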

The hard blocker list is currently at 143. When it hits 0, we can ship. Let’s kill it dead.

[1] Inevitably, when we do a pass like this, someone will want to digress into a thread about nomenclature. “Why are they blockers if they don’t block?” “Are there hard soft blockers, or soft hard blockers?” I love the creativity of our community, but I think it’s a distraction right now, and I’d suggest to you that we have more interesting problems to solve in the next little while!

First Impressions from China

[Image: Great Wall in fog]
China is different.

When I got back from my recent trip to visit Mozilla Online in Beijing, I heard myself saying that often, but it’s very nearly a content-free statement. Of course China is different. A better, albeit clumsier, way to express things is:

The Chinese web is not the web we are used to.

“We” Mozilla, “We” the Western tech world, “We” the builders of the web. China is going about things differently, and they’re bringing more than a billion people online with them. The folks at Mozilla Online understand this, and were exceedingly patient and generous with their time helping me begin to do so as well.

Here’s one way of thinking about that difference:

Continue reading “First Impressions from China”

The SSL Observatory

Oh ho, lookit what the EFF went and did!

The EFF SSL Observatory is a project to investigate the certificates used to secure all of the sites encrypted with HTTPS on the Web. We have downloaded a dataset of all of the publicly-visible SSL certificates, and will be making that data available to the research community in the near future.

This is exciting. I knocked together a less ambitious version of this last year, but the EFF guys are doing it like grown-ups, and are getting some interesting data.

Numbers-wise, they’re in the right ballpark, as far as I can tell. Their numbers (1-2M CA-signed certs) coarsely match ones I’ve seen from private sources. I’ve heard from a few CAs that public-crawl estimates tend to err 50-80% low, since they miss intranet dark matter, but the EFF’s counts at least track other public crawls. Given that their collection tools and data are going to be made public, that’s a really big deal. Previously, I haven’t been able to get this kind of data without paying for it or collecting it myself. If the database is actively maintained and updated, this will be a great resource for research.
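
For a sense of what a crawl like this collects, here’s a minimal sketch that grabs the certificate a single site presents, using only Python’s standard library (the hostname is just an example; a real crawler would also have to record certs that fail validation, which this simple version rejects):

```python
import socket
import ssl

def fetch_cert(host: str, port: int = 443) -> dict:
    """Return the parsed certificate presented by host:port.

    One data point of the kind the Observatory gathers at scale.
    Note: the default context verifies the chain, so invalid certs
    raise an error here instead of being recorded.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = fetch_cert("www.eff.org")  # example host
print(cert["issuer"])
print(cert["notAfter"])
```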

Their analysis of CA certificate usage is also interesting. I’d like to see more work done here, and in particular I’d like to see how CA usage breaks down between the Mozilla root store and others. We spend considerable effort managing our root store, and recently removed a whole pile of CA certificates that were idle. In some places, the paper seems to claim that fully half of trusted CAs are never used, but in other places, the number of active roots they count exceeds our entire root program. I understand why they blurred the line for the initial analysis, but it would be swell to see it broken out.

As they mention, there are legit reasons for root certs to be idle, particularly for future-proofing. We have several elliptic curve roots, and some large-modulus RSA roots, which are waiting for technology to catch up before they become active issuers, while giving CAs a panic switch in the case of an Interesting Mathematical Result — that feels okay to me. On the other hand, if there are certs which are just redundant, it would be great to know, so that we can have that conversation with the relevant CAs, and understand the need to keep the cert active.

This is exactly what I hoped would come of my crawler last year, but they’ve done a much more thorough job. We’ve seen an uptick in research interest in SSL over the last few years. Having a high quality data source to poke when testing a hunch is going to make it easier to spot trends, positive or otherwise. Interesting work, folks; keep it going!

Kathleen, a FAQ

Q: Kathleen who?

Kathleen Wilson works for the Mozilla Corporation, and manages our queue of incoming certificate authority requests. She coordinates the information we need from the CAs, shepherds them through our public review process and, if approved, files the bugs to get them into the product.

Q: Holy crap! One person does all of that? Is she superhuman?

It has been proven by science. She is 14% unobtainium by volume.

Q: That’s really awesome, but I am a terrible, cynical person and require ever-greater feats of amazing to maintain any kind of excitement.

She came into a root program with a long backlog and sparse contact information, and has reduced the backlog, completely updated our contact information, and is now collecting updated audit information for every CA, to be renewed yearly.

Q: Hot damn! She’s like some kind of awesome meta-factory that just produces new factories which each, in turn, produce awesome!

I know, right? She has also now removed several CAs that have grown inactive, or for which up-to-date audits cannot be found. They’ll be gone as of Firefox 3.6.7. They’re already gone on trunk.

Q: Wait, what?

Yeah – you can check out the bug if you like. I’m not positive, but I think this might represent one of the first times that multiple trust anchors have ever been removed from a shipping browser. It’s almost certainly the largest such removal.

Q: I don’t know what to say. Kathleen completes Mozilla. It is inconceivable to me that there could be anything more!

Inconceivable, yes. And yet:

  1. She’s also made what I believe to be the first comprehensive listing of our roots, with signature algorithms, moduli, expiry dates, &c.
  2. In her spare time, she’s coordinating with the CAs in our root program around the retirement of the MD5 hash algorithm, which should be a good practice run for the retirement of 1024-bit RSA (and eventually, in the moderately distant but foreseeable future, SHA-1).
  3. She has invented a device that turns teenage angst into arable land suitable for agriculture.

Fully 2 of the above statements are true!

Q: All I can do is whimper.

Not true! You can also help! Kathleen ensures that every CA in our program undergoes a public review period where others can pick apart their policy statements or issuing practices and ensure that we are making the best decisions in terms of who to trust, and she’d love you to be a part of that.

Q: I’ll do it! Thanks!

No, thank you. That wasn’t a question.

Developer Tools in Firefox

[Image credit: jk5854/flickr, CC]

Web developers make the open web go.

For Mozilla, that means that if we want to see the open web succeed, we need to help web developers build it. When we talk to them about building for the web, most of what they want to talk about is web features: CSS improvements, new HTML5 goodness, content magic like geolocation and orientation events. We invest a lot in making those things awesome, but they are only part of the answer.

The other thing that web developers talk about is tools. Specifically, when we talk to them about tools they ask for two things:

  1. Mozilla should invest in Firebug. The Firebug and Firefox communities should be working together to fix bugs, not working around them. Firefox releases should ship with a compatible Firebug out of the gate, not weeks or months later.
  2. Mozilla should be leading in developer tools. Before Firebug, View Source and DOM Inspector were the state of the art. Now other browsers are copying Firebug and shipping their tools by default, and the question is where the tools are going to go next. We should be a strong voice there, and back it up with code.

For #1: got it. Loud and clear. Firefox 3.6 shipped with a compatible Firebug from day 1, due in no small part to the contributions of Mozilla employees paid to work on Firebug. Jan “Honza” Odvarko has been fixing bugs and building out features left and right, and Rob Campbell has helped drive the project, and made sure that Firefox dependencies get attention. We don’t want to try to take Firebug over; it has its own, healthy community. We are much more active participants than we used to be, though.

#2 is harder. What tools do web developers need that don’t yet exist? Which tools would be broadly useful, and which ones niche? What can Mozilla bring to the table, as the developer of a browser, to make the design & development experience better/easier/faster/funner? We’re trying to figure that out; we’re working on some early ideas that I’ll write about in subsequent posts, but I’d also like to hear what you think is missing.

Building developer tools into Firefox will mean a lot of exploration, and a lot of new code – that’s scary, but the benefits are huge. In the short term, this work will rekindle the conversation about developer tools, and get us all thinking outside of the existing boxes for a few minutes. In the long term, it should make life better for web devs and tool authors; everybody wins.

Web devs are smart; it’s no coincidence that #1 and #2 above pull in the same direction: make Firefox the best platform for web development and tool building. We all want web authors to have an awesome, empowered experience, and I think working together in this way is the best play we have for continuing to build that.