Gladwell vs Brooks


By Matt Zeitlin

David Brooks’s column today hits on a very Brooksian theme: stuff can be really complicated, and it’s hard to blame individuals for any given system failure. Not the most insightful point, but a fair one. What’s really interesting, though, are the examples he uses. Most of the ones that don’t have to do with oil drilling come from a 1996 New Yorker piece by Malcolm Gladwell.

Brooks cites the piece, quoting Gladwell as writing “Human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another” and “We have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life.” This is all well and good, but what the average Times reader might not realize is that a good number of Brooks’s examples are taken straight from Gladwell.

Here’s Brooks on Three Mile Island:

In the first place, people have trouble imagining how small failings can combine to lead to catastrophic disasters. At the Three Mile Island nuclear facility, a series of small systems happened to fail at the same time. It was the interplay between these seemingly minor events that led to an unanticipated systemic crash.

And Gladwell:

Here, in other words, was a major accident caused by five discrete events. There is no way the engineers in the control room could have known about any of them. No glaring errors or spectacularly bad decisions were made that exacerbated those events. And all the malfunctions-the blocked polisher, the shut valves, the obscured indicator, the faulty relief valve, and the broken gauge-were in themselves so trivial that individually they would have created no more than a nuisance. What caused the accident was the way minor events unexpectedly interacted to create a major problem.

Brooks on the Challenger disaster:

Second, people have a tendency to get acclimated to risk. As the physicist Richard Feynman wrote in a report on the Challenger disaster, as years went by, NASA officials got used to living with small failures. If faulty O rings didn’t produce a catastrophe last time, they probably won’t this time, they figured.

Feynman compared this to playing Russian roulette. Success in the last round is not a good predictor of success this time. Nonetheless, as things seemed to be going well, people unconsciously adjust their definition of acceptable risk.

Gladwell:

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize-winning physicist who served on the Challenger commission, said that at NASA decision-making was “a kind of Russian roulette.” When the O-rings began to have problems and nothing happened, the agency began to believe that “the risk is no longer so high for the next flights,” Feynman said, and that “we can lower our standards a little bit because we got away with it last time.” But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

Brooks on crosswalks:

Third, people have a tendency to place elaborate faith in backup systems and safety devices. More pedestrians die in crosswalks than when jay-walking. That’s because they have a false sense of security in crosswalks and are less likely to look both ways.

Gladwell:

Why are more pedestrians killed crossing the street at marked crosswalks than at unmarked crosswalks? Because they compensate for the “safe” environment of a marked crossing by being less vigilant about oncoming traffic.

Obviously, this isn’t plagiarism or anything like that. He cites Gladwell twice, even if he doesn’t make clear that his examples come straight from the article he had read.

I just read Gladwell’s piece yesterday because Megan McArdle linked to it. Brooks probably read the article, liked it, decided he’d write a column on it, and figured that Gladwell had already picked out the best examples. The irony, of course, is that Gladwell’s piece culls all of its examples and analysis from academics who did original research on these disasters and on risk homeostasis. What goes around comes around?

