Brooks on “Plan to Throw One Away”

In an excellent short interview by Kevin Kelly, Fred Brooks, author of The Mythical Man-Month, has this to say when asked how his thinking has changed:

“When I first wrote The Mythical Man-Month in 1975, I counseled programmers to ‘throw the first version away,’ then build a second one. By the 20th-anniversary edition, I realized that constant incremental iteration is a far sounder approach. You build a quick prototype and get it in front of users to see what they do with it. You will always be surprised.”

The 20th-anniversary edition was 15 years ago, yet I still hear “plan to throw one away” quoted and attributed to Brooks. True, but way out of date.

Tight Schedules and Gridlock, Illustrated

Software development that’s done under pressure often turns into a train wreck. Managers pressure project leads and developers for aggressive estimates. Project leads and developers comply. The earliest that a task can possibly be completed magically turns into a commitment. Anything that looks remotely like slack gets taken out of the plan. Long hours ensue. And the slightest hiccup—there is always a hiccup—can throw everything into chaos.

The effect can be hard to describe to people who haven’t experienced a thrashing project. Everything seems to be under control, and then, suddenly, it’s not. Things that shouldn’t get gridlocked get gridlocked, and the gridlock spreads.

A good illustration of this comes to us from the Mathematical Society of Traffic Flow. Tom Vanderbilt, author of Traffic: Why We Drive the Way We Do (and What It Says About Us), uses this video in his talks. (No cars were harmed in the film that follows.)

Watch it again. Can you point to any one cause for the stopped traffic? I’ve watched this video several dozen times, and the best I can say is that I see people trying to adjust. The problem always seems to be upstream of wherever I’m about to place the blame.

Now in place of cars, think of little boxes on a Microsoft Project PERT chart, with dependent tasks lined up tightly behind them. (Tom explains that, by traffic engineers’ standards, the spacing between the cars is minimal; what looks like slack in the video isn’t.) Now imagine people doing their best to push their tasks forward, with little room for error. Even if nothing goes wrong, things won’t run as smoothly as the project plan seems to promise. And something will go wrong.

Imagine also a manager confidently claiming that if the team would only drive faster, all lost time could be made up for.

One way out is to allow for some slack in the schedule, because things will go faster when you do. There’s more reasoning behind this in Tom DeMarco’s Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, which is a good book to have read when you next find yourself managing under time pressure.
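The arithmetic behind “things go faster when you allow slack” is easy to sketch. Here’s a toy simulation of my own (not from DeMarco or Vanderbilt; the task counts, estimates, and delays are invented) of a serial chain of tasks on a critical path. With no buffers, every slip propagates straight through to the end date; with modest buffers, most slips get absorbed along the way.

```python
def schedule(estimates, delays, buffer_days=0):
    """Run a serial task chain (a PERT critical path) and return
    (planned_finish, actual_finish) in days.

    Each task is *planned* to start buffer_days after its
    predecessor's estimated finish, and *actually* starts at its
    planned start or when the predecessor really finishes,
    whichever is later. Actual duration = estimate + delay.
    """
    planned_start = actual_finish = planned_finish = 0
    for est, delay in zip(estimates, delays):
        start = max(planned_start, actual_finish)
        actual_finish = start + est + delay
        planned_finish = planned_start + est
        planned_start += est + buffer_days
    return planned_finish, actual_finish

estimates = [5, 5, 5, 5, 5, 5]   # six 5-day tasks
delays    = [0, 2, 0, 1, 0, 1]   # 4 days of hiccups in total

print(schedule(estimates, delays, buffer_days=0))  # (30, 34): 4 days late
print(schedule(estimates, delays, buffer_days=1))  # (35, 36): 1 day late
```

With zero buffer the plan promises day 30 and delivers day 34; with one-day buffers the plan promises day 35 and delivers day 36. The buffered plan is longer on paper but far closer to the truth, which is the trade DeMarco is arguing for.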

A less obvious benefit of iterations

In Tiny projects keep it new, Jason Fried of 37signals talks about the cycle of waning interest as long projects lumber on (“No one likes being stuck on a project that never seems to end”), and the power of small projects to keep motivation up by presenting frequent fresh starts.

On the surface, this is one of the benefits of the Agile practice of breaking work into iterations/sprints/episodes. Every few weeks, the team gets a fresh start. Now this isn’t quite true, as many who have been on Agile projects for more than a year will tell you: Runs of iterations with similar themes can get monotonous. Still, the cadence of doing work in discrete iterations, and the frequent customer feedback this enables, can help team members stay engaged for much longer than they might on a long waterfall project. And iterations provide natural stopping points for rotating developers between teams, which keeps things fresh by spreading knowledge around.

Iterations have another, less obvious, benefit: They reduce the opportunity for people to get dangerously ahead of the rest of the team by latching on to interesting technical bits that appear somewhere out ahead on the project roadmap. I’ve seen this happen over and over in long waterfall projects. Someone sees a technical challenge that will have to be solved eventually, rationalizes that starting on it now will “reduce risk”, and then pours time, effort, love, and raw ego into the problem, making architectural decisions that the rest of the team comes to feel constrained by and resentful of later. And if the project makes a mid-course adjustment, the “risk reducing” cherry-picked work can end up being expensive waste.

Wisdom from Uncle Bob

In Business software is Messy and Ugly, Robert Martin writes:

“One of the developers asked the question point blank: ‘What do you do when your managers tell you to make a mess?’ I responded: ‘You don’t take it. Behave like a doctor whose hospital administrator has just told him that hand-washing is too expensive, and he should stop doing it.’”


I’ve seen this problem time and again. Organizations get addicted to “going for it”, taking hideous shortcuts, and letting testing, refactoring, and overall code base hygiene slip while pursuing some short-term goal, leaving the developers stuck in a tar pit. And it always seems to surprise everyone that the tar pit is so bloody difficult to get out of, except perhaps the few developers who’ve lived the story before. Some developers get out by simply walking away.

The few teams I’ve seen avoid this tar pit have set, and self-enforced, an internal standard for what “done” means, and have quietly ignored attempts by management to force the team into taking shortcuts unless there’s a clearly explained, compelling business goal that the team is willing to sign up for.

An appropriate level of care

Thumbing through Gerald Weinberg’s The Psychology of Computer Programming (first edition) to verify a quote, I found a snippet that I’d underlined back in college, while working for the computer center.

“[E]ach program has an appropriate level of care and sophistication dependent on the uses to which it will be put. Working above that level is, in a way, even less professional than working below it.”

I had no way then of knowing how true that would continue to be, but must have had some inkling of how finding the right level would be a challenge for many years to follow. It’s a challenge for teams and companies as well as for individuals. Companies, especially startups, can wound themselves mortally by shipping a bug-ridden product. And they can burn through time and cash trying to build a “quality” product, when putting something half-baked but “good enough” out in front of customers sooner could have gained vital feedback that the product wasn’t quite right.

Data that supports part of the Agile Manifesto

Those who’ve become proponents of Agile after using Agile practices and seeing that they work are often at a loss when prospects ask for hard data instead of anecdotes and “trust me, it works!”. Steve McConnell has hard data on “soft factors” that supports “Individuals and interactions over processes and tools”. It’s not new, but I missed it the first time around.

One caveat is that Steve leans heavily on a vetting of the Cocomo II estimation model. He writes:

“In Cocomo II terms, the influence of office environment is 2.6, which is significantly greater than that of process maturity.”
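For context, Cocomo II treats factors like office environment as multiplicative cost drivers: estimated effort is a nominal base, A × KSLOC^E, scaled by the product of the driver ratings. If I read the 2.6 as the driver’s best-to-worst range, then that one factor swings the whole estimate by 2.6x regardless of project size. Here’s a rough sketch; the 2.94 and 1.10 constants are the published nominal calibration values, and treating the exponent as fixed (rather than derived from the five scale factors) is a simplification of mine.

```python
def cocomo_effort(ksloc, multipliers=(), a=2.94, e=1.10):
    """Simplified Cocomo II effort estimate, in person-months:
    effort = A * KSLOC^E * product(effort multipliers).

    In the full model, E is derived from five scale factors;
    a fixed nominal exponent is an assumption here.
    """
    effort = a * ksloc ** e
    for m in multipliers:
        effort *= m
    return effort

# A single cost driver with a 2.6x best-to-worst spread moves the
# whole estimate by 2.6x, independent of project size:
ratio = cocomo_effort(100, [2.6]) / cocomo_effort(100, [1.0])
print(round(ratio, 2))  # 2.6
```

That multiplicative structure is what makes a 2.6 influence “significantly greater” than process maturity’s: the drivers compound, so the biggest levers dominate the estimate.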

My dim memory on this, which I need to follow his references to check, is that the Cocomo II analysis happened before XP/Agile caught on, so the effect of “an office with a door” didn’t have the XP alternative of “quiet open seating for a team that is doing pair programming” to compare against. When you’re in a prairie dog cube farm, especially if you’re mixed in among people who yack on the phone, an office with a door is a big step up for productivity. A place for a team to work together where the only interruptions are other team members talking about the work the team has in progress can be a bigger step up. Or so it’s claimed. I’ve seen it work, but I need some better data to cite.