PHILADELPHIA — When historians look back on economics in the last quarter of the 20th century, one of its more striking features will be the explosion in the quantity and quality of empirical work that was done. It’s not that economics became less mathematical. It didn’t.
But the advent of the desktop computer made it possible to make a distinguished career doing something besides theory, namely sorting through the rich details of the real world in hopes of illuminating underlying mechanisms that are supposed to exist, or even finding relationships whose existence was unexpected.
That much was conspicuous last week when the American Economic Association and its many affiliates held their annual meetings here. The incoming president-elect gets to plan the program. Because econometrician Daniel McFadden of the University of California at Berkeley had the job this year, many sessions of the meetings reflected his taste for applied work.
Work was presented on all kinds of practical topics. For example: cars, gas and pollution policies; downtown parking and traffic congestion; kidney exchange; private funding in China’s education system; patent examiners’ impact on enforcement; rural and urban poverty in Africa; the sources of racial differences in health care in the United States; the relationship between wealth and democracy; the U.S. gender pay gap; the growing population of postdoctoral students in U.S. universities; the question of who receives IPO allocations and why — all of them buttressed by careful empirical work.
Two Nobel Prizes have been awarded in the past four years to econometricians for achievements in connecting theory more closely to data. (Berkeley’s McFadden was one of the four economists honored.)
The centerpiece of the Philadelphia meetings was a call by a leading applied economist, Martin Feldstein of Harvard University, for an ambitious program of forced savings to replace a series of government social insurance programs — unemployment insurance, Social Security and Medicare — that he said had been shown to have deleterious effects.
And in the forthcoming spring 2005 issue of the Journal of Economic Perspectives, David Colander of Middlebury College reports the results of a survey of students in seven top graduate schools of economics, which he conducted first in the early 1980s and again in the early ’00s.
Economics has changed over the last twenty years, he says.
The proportion of those reporting a belief that empirical work was very important had doubled (to 30 percent), while the proportion describing excellence in math as vital to a successful career had halved (to 30 percent).
“Creativity in actually saying something, finding the ‘killer app,’ or the perfect field or natural experiment, has gained in importance,” writes Colander, “and pure technique has faded in importance.” Among graduate students, he writes, “The perception of a rigid neoclassical economics has been replaced by an eclectic mainstream whose central thesis is, ‘What can you tell me that I don’t already know?’”
Much of this activity goes forward under the banner of econometrics, a relatively recent branch of economics. The idea of combining rigorous measurement techniques and statistical methods with economic theory gained prominence in the 1930s, in hopes of imparting to the discipline a real-world flavor that theory by itself at the time seemed to lack.
When Sweden’s Royal Academy of Sciences began giving a Nobel Prize in Economics in 1969, the first one went to two pioneers of the econometric movement, the Norwegian Ragnar Frisch and the Dutchman Jan Tinbergen.
Like a lot of good ideas, econometrics was more easily envisaged than accomplished. Great enthusiasm for the field in the 1950s was followed by significant disillusionment, amid the recognition that the issues were much more complicated than initially had been thought, and that practical applications, from the comparison of costs and benefits to large-scale forecasting, would prove very difficult indeed. The mood was memorably expressed as early as 1960 by S. Valavanis, as cited in Peter Kennedy’s A Guide to Econometrics:
Econometric theory is like an exquisitely balanced French recipe, spelling out precisely with how many turns to mix the sauce, how many carats of spice to add, and for how many milliseconds to bake the mixture at 474 degrees of temperature. But when the statistical cook turns to raw materials, he finds that hearts of cactus fruit are unavailable, so he substitutes chunks of cantaloupe; where the recipe calls for vermicelli, he uses shredded wheat; and he substitutes green garment dye for curry; ping-pong balls for turtles’ eggs; and, for Chalifougnac 1883, a can of turpentine.
Things have changed since then. Data have improved, particularly microeconomic data. Theory is more subtle. Mainly, sophisticated number-crunching software has made it possible to implement ambiguity-reducing procedures that would have been unthinkable before. The improved precision is real.
That emphatically does not mean, however, that computers and econometric testing have put to rest the extensive differences of opinion, both about theory itself and the role of values in making essentially political decisions, that have made economics such a lively matter for debate since its earliest days.
Differences of opinion still make horse races. Just watch what happens in the social insurance debate.