Probably the clearest difference between Freakonomics and the sequel is that in the original book, Dubner & Levitt were writing about Steven Levitt’s actual research. People like this research! He won important prizes for it. And not only is the research mathematically sophisticated and prize-worthy, it’s often about quirky, interesting subjects. The sequel, by contrast, has basically nothing to do with Levitt’s research. They just decided to deploy the brand to help sell copies of what’s really just a lot of third-rate political punditry. Interestingly, though, Levitt’s still doing the kind of work that made him famous in the first place.
For an example, check out this recent paper “Professionals Do Not Play Minimax: Evidence from Major League Baseball and the National Football League”:
In the perfect world of game theory, two players locked in a zero-sum contest always make rational choices. They opt for the “minimax” solution — the set of plays that minimizes their maximum possible loss — and their play selection does not follow a predictable pattern that might give their opponent an edge. But minimax predictions typically have not fared well in lab experiments. And real-world studies, while more supportive, have often used small samples.
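To make the minimax idea concrete, here is a minimal sketch using a hypothetical pitcher-vs-batter matchup. The payoff numbers are invented for illustration and are not from the Kovash-Levitt paper; the solver just applies the standard indifference condition for a 2x2 zero-sum game with a mixed-strategy equilibrium.

```python
def solve_2x2_zero_sum(loss):
    """Solve a 2x2 zero-sum game where the row player minimizes `loss`
    and the column player maximizes it. Returns the row player's
    equilibrium probability of playing row 0, plus the game's value.
    Assumes no saddle point, so the equilibrium is a mixed strategy."""
    (a, b), (c, d) = loss
    # The row player mixes so the column player is indifferent between
    # columns:  p*a + (1-p)*c = p*b + (1-p)*d
    p = (d - c) / ((a - b) + (d - c))
    value = p * a + (1 - p) * c
    return p, value

# Rows: pitcher throws fastball / off-speed.
# Columns: batter guesses fastball / off-speed.
# Entries: expected OPS yielded (higher is worse for the pitcher).
# These numbers are hypothetical.
loss = [
    [0.900, 0.650],   # fastball: punished if anticipated, effective if not
    [0.700, 0.800],   # off-speed: the reverse
]

p_fastball, expected_ops = solve_2x2_zero_sum(loss)
print(f"minimax fastball rate: {p_fastball:.3f}")          # ~0.286
print(f"expected OPS at equilibrium: {expected_ops:.3f}")  # ~0.757
```

The point of the equilibrium mix is unpredictability: a pitcher who throws fastballs more often than this rate hands the batter an exploitable pattern, which is exactly the kind of deviation the paper documents.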
Now a new study, Professionals Do Not Play Minimax: Evidence from Major League Baseball and the National Football League (NBER Working Paper No. 15347), looks at two of the biggest high-stakes examples of zero-sum contests: pitch selection in Major League Baseball and play-calling in the National Football League. Authors Kenneth Kovash and Steven Levitt find that: “Pitchers appear to throw too many fastballs; football teams pass less than they should.” They also find that the selection of pitches or plays is too predictable. The researchers conclude that “correcting these decisionmaking errors could be worth as many as two additional victories a year to a Major League Baseball franchise and more than a half win per season for a professional football team.”
Kovash and Levitt examine all Major League pitches — more than 3 million of them — during the regular seasons from 2002 to 2006 (excluding extra innings). They categorize them as fastballs, curveballs, sliders, or changeups. They measure the outcome of each pitch using the sum of the batter’s on-base percentage and slugging percentage (a measure they label OPS) and they determine that fastballs lead to a slightly higher OPS than other types of pitches.
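For readers unfamiliar with the measure, here is a quick sketch of how OPS is built from the standard baseball definitions of on-base percentage and slugging percentage. The stat line below is invented for illustration; the paper applies the same measure at the level of individual pitch outcomes.

```python
def on_base_pct(h, bb, hbp, ab, sf):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slugging_pct(singles, doubles, triples, hr, ab):
    """SLG = total bases / at-bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    return total_bases / ab

# Hypothetical season line: 150 hits in 500 at-bats, with
# 30 doubles, 5 triples, 20 home runs, 60 walks, 5 HBP, 5 sac flies.
ab, h, doubles, triples, hr = 500, 150, 30, 5, 20
bb, hbp, sf = 60, 5, 5
singles = h - doubles - triples - hr

obp = on_base_pct(h, bb, hbp, ab, sf)
slg = slugging_pct(singles, doubles, triples, hr, ab)
ops = obp + slg
print(f"OBP {obp:.3f}, SLG {slg:.3f}, OPS {ops:.3f}")  # OPS 0.877
```

Because OPS rises with both reaching base and extra-base power, a pitch type that yields even a slightly higher average OPS is, all else equal, a worse pitch to throw — which is the sense in which pitchers throw "too many" fastballs.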
That’s interesting! With the world mired in the most serious recession in decades, it’s arguably not the most important subject for economists to focus on. But still interesting. And it suggests additional research questions. Are pitchers and managers just making a mistake in throwing too many fastballs? Or is it maybe that for biomechanical reasons most pitchers can’t throw the optimal number of breaking balls without wrecking their arms?