In Which Economics Enters a Period of Critical Self-Examination


What’s behind the spate of public criticism of the work of a number of prominent applied economists?

Some of the top names in the new generation — Caroline Hoxby of Harvard University, Steven Levitt of the University of Chicago, and the research trio known as AJR, consisting of Daron Acemoglu, Simon Johnson (both of the Massachusetts Institute of Technology) and James Robinson of Harvard — have recently been subjected to review, often by young economists just starting out on their careers. Sometimes the criticism has made headlines — most recently, for example, “Oops-onomics.”

That was the headline of a page in last week’s Economist, which asked whether University of Chicago professor Levitt, co-author of the best-selling Freakonomics, just possibly did “get his most notorious paper wrong.”

A pair of economists at the Federal Reserve Bank of Boston had discovered a programming error in an influential 2001 technical paper by Levitt and John Donohue of Yale Law School, an error that rendered less convincing its conclusion — that the legalization of abortion in the US in the 1970s had been responsible for as much as half of the drop in crime in the 1990s.

Needless to say, the notion that unwanted children tend to become criminals has generated a great deal of controversy in various communities concerned with reproductive rights.

When the test for statistical significance of certain effects (those involving state-of-residence data) was actually performed as described, according to Christopher Foote and Christopher Goetz, that particular piece of the argument fell apart. (Foote and Goetz had nothing to say about the rest of Levitt and Donohue’s evidence of the link between abortion and crime.)

First The Wall Street Journal wrote the story up, whereupon the Economist chimed in, gibing “…[F]or someone of Mr. Levitt’s iconoclasm and technical ingenuity, technical ineptitude is a much graver charge than moral turpitude. To be politically incorrect is one thing; to be simply incorrect quite another.”

Levitt quickly posted a reply (“Back to the drawing board for our latest critics”) on his blog, readers posted eighteen comments, and the Boston Fed authors promised a speedy response of their own.

In other cases — Jesse Rothstein on Caroline Hoxby, David Albouy on Acemoglu, Johnson and Robinson — a lengthy review process must take place, in which editors and anonymous referees examine the charges and countercharges, before journals will publish an exchange of comments. Only then will it be possible to discriminate between trivial and meaningful concerns.

Taken together, though, the controversies signal that something is up. What?

It’s not as though it hasn’t all happened before. Back in the 1950s, when the dream of the unification of economic theory, statistics and mathematics was new, the econometrics movement seemed to promise great new certainties with which to rearrange the social world. The high-tech neologism itself signaled as much.

Like most new technologies, the new discipline turned out to be subject to a familiar cycle of mood swings: wild optimism followed by excessive discouragement. By the end of the decade, much of the excitement in empirical economics was shifting to a new generation of labor economists, many of them trained by H. Gregg Lewis at the University of Chicago, and to another of productivity researchers, many of them students of Zvi Griliches of Harvard.

Another part of the answer clearly has to do with an evolving standard of curiosity. In the last fifteen years or so, economists began choosing new and clever instrumental variables and assembling their own data, instead of simply relying on off-the-shelf measurements whose collection had been set in motion by prior theory. Hoxby’s use of the prevalence of rivers and streams as a proxy for school district fragmentation is a case in point. So is Acemoglu, Johnson and Robinson’s adoption of colonial mortality data to uncover the effects on economic growth of property rights institutions.
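The two-stage least-squares logic behind such instrumental-variable work can be sketched on simulated data. This is a minimal illustration of the general technique, not of any of the papers named above; every number below is invented, and the instrument here is just a made-up variable standing in for something like “rivers and streams.”

```python
import numpy as np

# Simulate a textbook endogeneity problem: the regressor x is correlated
# with an unobserved confounder u, so plain OLS overstates its effect.
rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                        # instrument: affects x, not y directly
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + 0.5 * u + rng.normal(size=n)    # outcome; the true effect of x is 2.0

def ols(X, y):
    """Least-squares coefficients for y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased upward, because x moves with the confounder u.
beta_ols = ols(np.column_stack([np.ones(n), x]), y)

# Stage 1: project the endogenous regressor on the instrument.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)

# Stage 2: regress the outcome on the fitted values from stage 1.
beta_iv = ols(np.column_stack([np.ones(n), x_hat]), y)

print(f"OLS estimate:  {beta_ols[1]:.3f}")    # noticeably above the true 2.0
print(f"2SLS estimate: {beta_iv[1]:.3f}")     # close to the true 2.0
```

The point of the exercise is the one the column’s critics press on: the whole construction stands or falls on the instrument being genuinely unrelated to the confounder, which is exactly what reviewers of these papers dispute.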

The advent of powerful and inexpensive computers and sophisticated econometric software has greatly stimulated the search for cause and effect as well. And, of course, with the vast expansion of economic arguments into legal reasoning and the law, the market for expert testimony in litigation has boomed.

Not surprisingly, then, best practice in empirical economics today is subject to a process of high-level critical evaluation. Michael Murray, a Bates College professor, for example, is the author of a new graduate textbook in econometrics. In a recent working paper, “The Bad, the Weak and the Ugly: Avoiding the Pitfalls of Instrumental Variable Estimation” (which is headed slowly towards publication in the Journal of Economic Perspectives), Murray discusses the methods and findings of seven published papers that he says furnish illustrations of exemplary techniques.

Acemoglu, Johnson and Robinson on the effects of institutions on economic growth; Levitt on the effects of incarceration on crime; Janet Currie and Aaron Yelowitz on whether living in public housing is good for kids; Hoxby on the relationship between test scores and class size and composition; Motohiro Yogo on inter-temporal elasticity of substitution in eleven countries; Acemoglu, Johnson, Robinson and Pierre Yared on the relationship between democracy and education; Jeffrey Kling, Jens Ludwig and Lawrence Katz on whether poor households gain from moving to middle-class neighborhoods: it’s not that any of these cynosure papers exhibits all seven of his cardinal virtues, Murray says. But all demonstrate particular strengths.

Moreover, many economics journals are tightening up their requirements that authors make available on request the data on which their results are based. The Journal of Political Economy, for instance, recently adopted the stringent policy of the American Economic Review. A third leader, the Quarterly Journal of Economics, has not embraced the change. Harvard professor Robert Barro, a co-editor, explains:

“We think it is important to encourage researchers to disseminate data and also to generate and assemble new data. This environment involves a tradeoff between the benefits of free access to existing data versus the incentives to create new data. The situation is analogous to the costs and benefits from patents on inventions. We think it unlikely that the best resolution of this tradeoff is always to insist that data be made fully available upon its first use. Such a policy would often not provide sufficient individual reward for putting together the data. Therefore, although we encourage authors to disseminate their data (and also point out that this dissemination can be individually valuable to the author who put together or discovered the data), we do not insist on this immediate distribution as a condition for publication. (Proprietary data involves different issues.)”

Meanwhile, there are claims for powerful new methods in econometrics. Get ready for a period of renewed adventure and optimism. One constant, however, is the disdain of the priestly econometricians for applied economists who, they say, are mere dabblers in cause and effect. “Most of their work, so influential at the Ivies and in the policy world, would receive a failing grade in my introductory classes,” says Essie Maasoumi, a prominent econometrician. The deans of the present-day understanding of practical work have yet to be heard from on the most recent developments: Jerry Hausman and Franklin Fisher of MIT; Daniel McFadden of the University of California at Berkeley; James Heckman of the University of Chicago.

In the final analysis, though, some of the present-day controversy surely has to do with sharp differences of opinion, behind the scenes, about what it means to run too fast or press too hard in a scholarly career. Economists are a competitive lot. Inevitably, sometimes their reach exceeds their grasp.

During the last 30 years, a great deal has changed in the way economics is done — in theory, measurement and statistical inference; in the way departments, business schools and research collectivities are organized; in the way that computers are employed and publication is pursued. (Levitt was so busy writing a response to his critics for his blog last week that he forgot to write a succinct letter to the Economist.) In all that time, applied microeconomics has run uninterrupted before the wind.

Now the profession is seeking to re-establish benchmarks of what constitutes consistently admirable work over the span of a research career. That’s what the current spate of criticism and self-examination is about.