Among the most prescient sentences I have ever written was this one, in 1984, in my first book, The Idea of Economic Complexity: “Complexity is an idea on the tip of the modern tongue.”
Three years later, New York Times reporter James Gleick created a sensation in certain circles with his best-seller Chaos: Making a New Science (1987). In 1992 appeared two books about the Santa Fe Institute, a research center established near Los Alamos National Laboratory in 1984 with National Science Foundation funding: Complexity: The Emerging Science at the Edge of Order and Chaos, by Science news writer M. Mitchell Waldrop, and Complexity: Life at the Edge of Chaos, by his editor-colleague, Roger Lewin.
The chemistry professor who reviewed both the
All three writers struck out on their own soon thereafter. The excitement they had created in the upper realms of popular culture touched off a vogue among many high-end consulting firms for offering advice on “the business applications of complexity theory.” My shelves hold a couple dozen trade books by strategists and by various complexity grandmasters: Ilya Prigogine, Per Bak, Stuart Kauffman, John Holland, Murray Gell-Mann, Christopher Langton. W. Brian Arthur and J. Doyne Farmer let others tell their tales. Two late starters: Eric Beinhocker, of McKinsey & Co. (The Origin of Wealth), and Paul Ormerod, of Volterra Consulting (Why Most Things Fail).
It was only after Massachusetts Institute of Technology physicist Seth Lloyd delineated 45 slightly different meanings of the word – a finding given wide exposure in June 1995 by science writer John Horgan in a Scientific American article (“From Complexity to Perplexity”) – that the enthusiasm began to subside. In The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age, published the following year, Horgan coined the term “chaoplexity” to indicate the overlap of chaos and complexity with a pair of fads left over from earlier decades, cybernetics and catastrophe theory.
My own sense of the word’s meaning derived from its use in a famous paper by the economist Allyn Young (“Increasing Returns and Economic Progress”) to describe the growing variety of goods and services, their apparently ever-increasing specialization and differentiation: what we mean today by the supremely hazy term “development.” (New York City is more complex than Dubuque, Iowa, not more “highly developed.”) By the late 1980s, therefore, I was paying far more attention to mainstream economics, especially to a number of young economists working on the determinants of international trade and technological change, than to the complexity biz.
I remained fond of those complexitarians. With their genetic algorithms and their cellular automata and their get-rich-quick schemes, they seemed like fugitives from some novel by Malcolm Bradbury or David Lodge. Moreover, I expect that, in some fundamental if cloudy sense, they are right. They are soldiers in the “brave army of heretics” that John Maynard Keynes described at the end of his General Theory, “who, following their intuitions, have preferred to see the truth obscurely and imperfectly rather than to maintain error, reached indeed with clearness and consistence and by easy logic but on hypotheses inappropriate to the facts.”
So ten days ago I journeyed to James Madison University, in Harrisonburg, Virginia, in the Shenandoah Valley, to a meeting on “Transdisciplinary Perspectives on Economic Complexity,” to see what had become of them.
I was glad and, I confess, slightly surprised to see how much real progress had been made in 25 years.
The convener was J. Barkley Rosser Jr., a robust 60-year-old JMU professor who has traveled a long intellectual road since 1977, when he arrived in
In 1999, in an article in the Journal of Economic Perspectives, Rosser acknowledged (as John Horgan had charged) that it was difficult to identify a “concrete discovery,” as opposed to the colorful metaphors that had emerged from the new field (the butterfly that causes a storm by flapping its wings, for example). Last week, playing host to a gaggle of complexity experts from around the world as part of JMU’s centenary celebrations, he was more sanguine – but only slightly. Probably a “grand synthetic transdisciplinary perspective” on economic complexity was beyond reach. But as economics plumbed its relation to its scientific ancestors, physics and biology in particular, certain important features of the family tree were coming into focus.
Like what? Alan Kirman, of the University of Aix-Marseille, France, a much-decorated veteran, and Robert Axtell, of George Mason University, an up-and-comer, laid out the case for a point of view known as “agent-based” modeling. They describe this as a “bottom-up” approach to thinking about economic phenomena, made practicable by modern computers, in which people are heterogeneous and interact directly with each other; in which their information is local and their behavior governed by rules of thumb; and in which the aggregate behavior of the system emerges from the behavior of individuals rather than from a “representative” agent. Simulations of this sort by Thomas Schelling, which formally identified the “tipping point” phenomenon (a fairly concrete discovery, after all) and elucidated mechanisms that give rise to residential segregation, had been laboriously worked out on a checkerboard in the 1970s, Axtell noted; with today’s models, 5 million more intricate calculations can be performed in an instant.
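Schelling’s checkerboard exercise is simple enough to reconstruct in a few dozen lines. Below is a minimal sketch of such a model – not Schelling’s own procedure, and all parameter values are illustrative – in which agents of two types occupy a grid and any agent with too few like neighbors moves to a random vacant cell:

```python
import random

def schelling(size=20, vacancy=0.1, threshold=0.3, max_sweeps=100):
    """Minimal Schelling-style segregation model on a size x size grid.

    Cells hold agent type 1 or 2, or None if vacant. An agent is
    unhappy if fewer than `threshold` of its occupied neighbors
    share its type; unhappy agents move to a random vacant cell.
    """
    cells = [1, 2] * int(size * size * (1 - vacancy) / 2)
    cells += [None] * (size * size - len(cells))
    random.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def unhappy(r, c):
        me, same, other = grid[r][c], 0, 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < size and 0 <= nc < size and grid[nr][nc] is not None:
                    if grid[nr][nc] == me:
                        same += 1
                    else:
                        other += 1
        return same + other > 0 and same / (same + other) < threshold

    for _ in range(max_sweeps):
        movers = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] is not None and unhappy(r, c)]
        if not movers:
            break  # everyone is content; the pattern has settled
        vacancies = [(r, c) for r in range(size) for c in range(size)
                     if grid[r][c] is None]
        random.shuffle(movers)
        for r, c in movers:
            vr, vc = vacancies.pop(random.randrange(len(vacancies)))
            grid[vr][vc], grid[r][c] = grid[r][c], None
            vacancies.append((r, c))
    return grid
```

Even with quite tolerant agents – a threshold of 0.3 means an agent is content as a local minority – repeated rounds of moves typically tip the grid into sharply segregated blocks, which is the sort of emergent aggregate behavior the paragraph above describes.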
Seem sensible? It does to anyone not properly trained in economics, said Kirman. For the last 125 years, however, ever since the views of Leon Walras and other theorists of general equilibrium became encoded in a famous textbook of Alfred Marshall (or, rather, partially encoded), technical economists have viewed the economy as a system in which individuals deal with one another only through the market mechanism, reacting to signals about prices and quantities as if they were determined by some central authority, best thought of as an auctioneer. Individual actors adapt as best they can to market signals which they are powerless to affect, until some sort of balance between supply and demand is achieved. Then things settle down and no individual has any reason to change his behavior, unless some external “shock” to the system occurs.
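The auctioneer story can itself be made concrete with a toy price-adjustment (“tatonnement”) loop: the auctioneer calls out a price, measures excess demand, and nudges the price up or down until the market clears, with no trading along the way. The demand and supply schedules below are hypothetical stand-ins chosen only to illustrate the mechanism:

```python
def tatonnement(demand, supply, price=1.0, step=0.1, tol=1e-6, max_iters=10_000):
    """Walrasian auctioneer as iterative price adjustment: raise the
    called price when demand exceeds supply, lower it when supply
    exceeds demand, until supply and demand balance."""
    for _ in range(max_iters):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        price += step * excess    # no trades occur until balance is reached
        price = max(price, 1e-9)  # keep the called price positive
    return price

# Hypothetical linear schedules: demand falls with price, supply rises.
clearing = tatonnement(lambda p: 10 - p, lambda p: 2 * p)  # clears near p = 10/3
```

In the complexity view, the point is precisely that no such central price-caller exists; order, if it emerges, must come from decentralized interaction instead.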
What about “imperfect competition”? Present-day economists have plenty of models in which individuals have market power and seek to use it. But, says Kirman, such models depend on game theory, which attributes superpowers to individuals – unlimited calculating power and superior analytic ability. The complexity vision of the economy falls somewhere between these two approaches, says Kirman, requiring neither the coordinating mechanism of the market nor sophisticated game-players to make it work. This is why David Colander, of
It all mapped nicely onto the work of Philip Mirowski, who, for twenty years, has been working through various relationships among images and analytic forms that economics shares with physics, biology and computer science. Whether or not economists simply have been “absorbing methods from other sciences with no conception of what they are about,” as Mirowski maintains, remains an open question. It was debated by conference-goers once again last week when he sketched his latest project, extensively hashed over in the June 2007 JEBO: an approach to economics in which markets are conceived as evolving computational entities (markomata, a wordplay on automata), like so much software to which people must conform.
Certainly there was plenty of evidence of transdisciplinarity at JMU: Duncan Foley, of the
Ironically, solid-state physicist Jean-Philippe Bouchaud, of
Many senior figures in the complexity community didn’t attend. Absent, for instance, were Robert Axelrod, of the University of Michigan; William (Buz) Brock and Steven Durlauf, of the University of Wisconsin; Blake LeBaron, of Brandeis University; Kenneth Judd, of the Hoover Institution at Stanford University; physicist Joseph McCauley, of the University of Houston; Jose Scheinkman, of Princeton University; and Leigh Tesfatsion, of Iowa State University.
Most poignant of all was the absence of Peter Albin, of John Jay College of the City University of New York, who was perhaps the first economist to take complexity seriously; he died last winter, at 73, after a long illness. It was in 1971 that Albin tumbled onto the “game of Life,” a recreational version of John von Neumann’s cellular automata theory, in Martin Gardner’s column in Scientific American. “What struck me,” he wrote in The Analysis of Complex Socioeconomic Systems (1975), “was that in working with automata that derived from ‘Life,’ a very few operating principles generated model behavior which seemed to be as interesting as those produced by quite massive constructions…. With a bit of ingenuity, one could combine the elements to form an intriguing world apparently filled with social interactions, threshold phenomena, environmental effects, and evolutionary or adaptive developments.” Albin called the method he introduced Single Unit Representation; it was the forerunner of today’s agent-based computational economics.
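The “very few operating principles” Albin admired are easy to exhibit. Here is a minimal sketch of one generation of the game of Life – the standard formulation, not Albin’s own code:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's game of Life.

    `live` is a set of (x, y) cells. The whole "physics" is two rules:
    a live cell with 2 or 3 live neighbors survives, and a dead cell
    with exactly 3 live neighbors is born.
    """
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Three cells in a row form a "blinker," which oscillates with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
```

From rules this sparse come gliders, oscillators and self-sustaining structures, which is what made Albin suspect the same machinery could generate interesting socioeconomic behavior.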
John Horgan, who now teaches science writing at the Stevens Institute of Technology, long retired from the strenuous reconnaissance he once performed for Scientific American, came to the JMU conference. He pronounced a genial benediction on the meeting, noting that “for all its sense of intellectual fizziness and ferment, [it] merely reinforced my skepticism that chaoplexity or physics or biology or all of the above can make economics and other ‘soft’ social sciences more rigorous.” Complexity was still perplexity in Horgan’s book. But then forty years ago, philosopher Margaret Masterman demonstrated that Thomas Kuhn had attached 21 different meanings to the word paradigm in The Structure of Scientific Revolutions. That didn’t make paradigms seem less real, and probably complexity is here to stay as well.
Perhaps the best evidence that complexity is under active consideration can be found in the aforementioned second edition of The New Palgrave Dictionary of Economics, which is more of an encyclopedia than an alphabetical list of definitions. It appeared earlier this month, five years in the making, with 1,850 articles by 1,500 economists requiring eight volumes to print (though the plan is to sell access mostly by online subscription). The original Palgrave, compiled by Sir Robert Harry Inglis Palgrave, editor of The Economist, and published in 1884, required two. The first edition of the New Palgrave, appearing exactly a hundred years later, required four. That project had problems. It came out just as economics was changing dramatically, and the emphasis it gave to various topics reflected the judgments of a trio of editors – neoclassical, Marxist and historian of thought – who couldn’t agree.
The editors of the new edition,
Driving away from
The project had begun two years ago under former World Bank president Paul Wolfowitz. No longer did it enjoy the enthusiastic backing of top bank officials. But the 21 commissioners themselves were very senior politicians and business executives from around the world – among them Zhou Xiaochuan, governor of the People’s Bank of China; Montek Ahluwalia, deputy chairman of India’s Planning Commission; Robert Rubin, former Treasury Secretary of the United States; Ernesto Zedillo, former president of Mexico; Kemal Dervis, the former Finance Minister of Turkey; Goh Chok Tong, chair of the Monetary Authority of Singapore; Trevor Manuel, South African minister of Finance; and Nobel laureate Robert Solow, of MIT.
They had worked hard, commissioning more than forty background papers, consulting closely with an array of top economists in hopes of devising a blueprint for growth more persuasive than the “Washington Consensus” of the early 1990s, with its single-minded emphasis on free trade, privatization and fiscal restraint. They recognized that the events of the 1990s – the rapid growth that occurred in nations such as China and India that substantially ignored the standard recipe, the series of crises that had gripped the nations that followed the Washington rules – had considerably undermined faith in the standard approach. They knew, too, that a considerable re-thinking had occurred within technical economics since the mid-1980s, under the heading of “new” growth economics.
As Spence explained last week to Krishna Guha of the Financial Times, “What we learned is not that things went crazily off base in the Washington Consensus, but that in some sense that set of propositions was not enough to get the job done.… I suspect that the role of government as envisaged by the Washington Consensus needs to be reconsidered. I think it was defined too narrowly and not sufficiently pragmatically… Things you can confidently delegate to the private sector in Europe or
It was true, as Guha wrote last week, that the World Bank’s Growth Report seeks to provide a counterpart to what Michael Porter, of the
Only after the appearance of the first few articles in mainstream journals, such as The Journal of International Economics and The Journal of Political Economy, did a variety of concerns previously considered heterodox begin to migrate from the periphery to the center – not the result of a butterfly flapping its wings but instead the outcome of processes of discovery and the spread of ideas that are relatively well understood.
I had the feeling that, in due course, something like it is bound to happen again, and again.