What The Austerity Paper’s Intellectual Collapse Tells Us About Modern Journalism


Tuesday afternoon, Mike Konczal transformed the debate over austerity and growth by reporting on a new study finding that a paper by Carmen Reinhart and Kenneth Rogoff that served as one of the principal intellectual justifications for austerity turns out to have been fatally flawed. Its conclusions, for example, were based in part on an elementary Excel coding error (Reinhart and Rogoff, for their part, aren’t conceding the game).

This error is obviously of immense economic and political importance; Matt Yglesias rightly calls it “literally the most influential article cited in public and policy debates about the importance of debt stabilization.” But the young controversy also says something interesting about the way that modern journalists do business, especially as arguments over academic papers become a bigger and bigger part of public debate, particularly over economics.

The Reinhart/Rogoff paper wasn’t just influential for self-described deficit hawks like Rep. Paul Ryan (R-WI) or the Washington Post editorial page; it got serious play in an astonishing array of media outlets. Because Reinhart and Rogoff were eminent economists who had written a widely-praised history of financial crises, their work was generally treated very respectfully by the mainstream press, often invoked totemically as proof that the United States couldn’t cross the 90 percent debt-to-GDP “threshold” without dire consequences for growth.

But let’s be honest: most journalists had no real way of knowing whether or not Reinhart/Rogoff had actually provided *good* proof of the 90 percent theory. The reason economists tend to have PhDs is that economics is a difficult, technical profession; assessing the robustness of any individual finding requires a degree of econometric expertise, background knowledge, and free time that the vast majority of journalists simply don’t have. That goes double for the evaluation of disputes between leading economists. As Tim Harford, a seasoned economics journalist with an Oxford Master’s in the subject, put it when confronted with an early critique of Reinhart/Rogoff: “this is beyond my pay grade.”

The data Reinhart and Rogoff based their initial finding on wasn’t even publicly available; the debunking authors were specifically provided with it for the purpose of replicating Reinhart and Rogoff’s results. So even if most journalists had been capable of rigorously testing Reinhart/Rogoff before reporting it, they wouldn’t have been able to unless Reinhart and Rogoff had deigned to give them the means to do so.

This is a deeper problem for journalists than many will care to admit. One of the greatest successes of blogging as a medium has been serving as a bridge between academia and the broader world. Online journalism has often brought academic research to bear on public policy questions in ways that disrupt staid, false conventional wisdom among journalists; Nate Silver’s use of political science literature in election writing is the most famous, but far from the only, example.

However, it’s easy for respect for academic work to slip into lazy invocations of authority. Slapping a link on the phrase “study finds” is an easy way of making your argument sound stronger; no one will know whether you’ve bothered to assess the underlying solidity of the study you’re citing. And even writing a whole post or article about an individual study doesn’t guarantee sufficient examination of its merits, as the many write-ups of Reinhart/Rogoff suggest. This raises the disturbing prospect that the new spate of academic-study blogging might, far from informing the public, actually be lulling it into a false sense of intellectual security.

The point here isn’t that the trend towards writing up empirical studies is an intrinsically bad development; people like Silver, the crew over at Wonkblog, and Konczal himself do a great public service by doing it right. Rather, the point is that more care on the part of journalists (myself included) citing academic research wouldn’t be a bad thing. After the Reinhart/Rogoff story broke, Tyler Cowen wondered whether it was a sign that he should read less quantitative research, recalling that “[t]his is not the first time that an extremely influential major empirical result has been overturned or at least thrown into serious doubt.” Cowen suggests supplementing it with “narrative history”; I’d also suggest philosophical research (something journalists generally ignore) and a generally more skeptical attitude about what mathematical manipulation of data can tell us. The integration of quantitative economic research into journalism is undoubtedly a good thing, but it’s more than possible to go overboard.
