It is not often that a book’s reception warrants front-page coverage in *The New York Times*, but this one did, in October 1946, some eighteen months after Oskar Morgenstern joined with John von Neumann to publish *Theory of Games and Economic Behavior*. The new approach, explained reporter Will Lissner, “has caused a sensation among professional economists.” A 29-year-old acolyte of the new approach named Leonid Hurwicz told Lissner that game theory held out the hope of “revamping and enriching in realism a good deal of economic theory” – not overnight, but eventually.

The authors had fled Hitler before the war. Morgenstern, the man who had replaced Hayek in Vienna in 1931, was, on the surface, just another economist, albeit a sharp one. Von Neumann, on the other hand, was a genius who, as a young mathematician in Göttingen in the 1920s, had established the mathematical underpinnings necessary to describe quantum mechanics. Soon after Hitler became chancellor, in 1933, von Neumann joined Einstein and others at the Institute for Advanced Study, in Princeton, N.J. Morgenstern followed a few years later, hired by the economics department of Princeton University.

The new book – 650 pages, much of it consisting of dense math – purported to be a grammar of economic decision-making. “We hope,” the authors wrote, “to obtain a real understanding of the problem of exchange by studying it from an altogether different angle” – as a matter of strategic behavior by participants, each of whom knows that his ability to obtain his objective depends on the objectives of the others. In *Fortune* magazine, John McDonald assured readers that the new theory had “none of the naiveté popularly associated with longhairs.” He wrote that “from the striking of a bargain in the market to the dread clash of war” the theory of strategy was “more *avant garde* than Sartre, more subtle than a Jesuit, and as honest as one can safely be.”

The authors boldly asserted that a new mathematics would be required to do what needed to be done. Indirectly, they disparaged the methodological claims that Paul Samuelson was about to make in *Foundations of Economic Analysis*.

The decisive phase of the application of mathematics to physics – Newton’s creation of a rational discipline of mechanics—brought about, and can hardly be separated from, the discovery of the infinitesimal calculus. … [T]he importance of the social phenomena, the wealth and multiplicity of their manifestations, and the complexity of their structure, are at least equal to those in physics. It is therefore to be expected – or feared – that mathematical discoveries of a stature comparable to that of calculus will be needed in order to produce decisive success in this field.

They sought to circumscribe their own claims:

The great progress in every science came when, in the study of problems that were modest as compared with ultimate aims, methods were developed which could be extended further and further… The sound procedure is first to obtain utmost precision and mastery in a limited field, and then to proceed to another, somewhat wider one, and so on.

Even as they cast aspersions on the ambitions of others:

This would do away with the unhealthy process of applying so-called theories to economic or social reform where they are in no way useful.

That *Theory of Games and Economic Behavior* received such respectful attention was partly explained by the experience of the war. Mobilizing for that titanic struggle had moved economists to the center of the action. Already in 1946 it was clear that the Cold War would keep them there. Suddenly there was plenty of money to support economists in their careers. Nuclear weapons, bomber fleets and, soon enough, intercontinental missiles, had transformed the nature of warfare and changed the nature of peace. Perhaps the application of the new strategic thinking to institutions – nations, armies, companies – would do as much. Enthusiasm for game theory was greatest among Practicals who ran the War Department (now renamed the Defense Department). Moreover, von Neumann had been at the very heart of the war effort, advising the Pentagon on the design of everything from computers to atomic bombs.

From the start, there was some confusion about the use of the word *game*. The authors might have been understood more widely if they had called their book *Theory of Strategic Economic Behavior*. Once the hype died down, the new discipline was often characterized as “interactive decision theory”; or, even more elementally, as a matter of putting yourself in the other’s place and looking several moves ahead. But then “game” was the whole point of their approach. A game was the unit of analysis, a way of isolating an aspect of interaction in the world, as Adam Smith had singled out the idea of the market, in order to be precise about it.

However attractive game theory was to military planners, it had a hard time catching on in economics. Economists in the 1950s were far more interested in the new macroeconomics than in devising a more comprehensive and precise account of price theory. They were slow to pick up the new tools. The new ideas emanated not from the economics departments at MIT and Chicago but from the Institute for Advanced Study and the Princeton University department of mathematics; the Cowles Commission, in Chicago; and the RAND Corp., in Santa Monica, California.

**Chess Board to Blackboard**

Game theory began with chess. You might think that the severe and highly constrained board game would be an unlikely source of illumination of what goes on in the hurly-burly of markets. A good starting place is Robert Leonard’s brilliant account of its origins, *Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900–1960* (Cambridge, 2010). I can’t recommend this book more warmly, at least to a certain kind of reader (the kind who might already have read *The Man without Qualities*, by Robert Musil). As an account of the historical forces that shaped the ambitions of the field, it has no peer. Most of the details here and much of the interpretation are drawn from it.

Leonard’s first chapter begins with an epigraph from a 1908 newspaper column by world chess champion Emanuel Lasker:

I was in one of those moods where danger is attractive. Hence I plunged from the start into a combination the outcome of which was exceedingly doubtful. For the gain of a pawn I risked to retard [my] development and to accelerate that of the opponent. Mr. Speijer [his Dutch adversary in a championship match] also wisely sacrificed the exchange, and opened a concentrated fire upon my king; but he missed the best continuation, and therefore lost quickly. Games of this character, where every move counts for much, are best suited to entertain spectators, and they are of great value for the ripening of the “situation judgement.” He who relies solely on tactics he can wholly comprehend is liable, in the course of time to weaken his imagination. And he is at a disadvantage against an opponent who tries to win through bold venture, yet who does not step beyond the finely-drawn boundary of what is sound.

Chess in the first years of the twentieth century enjoyed enormous popularity in Europe and Britain. Newspapers covered championship matches the way they cover major sporting events today. Lasker (1868-1941) was the game’s foremost figure, world champion from 1897 to 1921. A mathematician by training, he played chess for a living, and in 1907 published an 80-page pamphlet titled *Kampf*, in which, Leonard says, “he extended the metaphor of ‘game’ to social life, developing an embryonic ‘science of struggle.’” Lasker called it “machology,” after *machē*, a classical Greek term for a fight.

The most obvious problem stemmed from the sheer multiplicity of possibilities. Even in chess, Lasker wrote, almost anything can happen from a particular situation near the beginning of the game. But most possible moves in chess are not useful, and the master strategist will economize on the moves he explores in his mind as the game unfolds. Lasker continued in that vein to make many connections with economic reasoning, including the role of chance: the master strategist would take into account the probabilities of random mistakes, his and others’. These ideas were quickly taken up by others, especially after Lasker’s domination of world chess increased after 1908. He had become, Leonard writes, “a central figure in the emergence of a discourse that combined elements of games, mathematics and social interaction.”

“Machology” gave way in the hands of Ernst Zermelo to a much more ambitious program after 1912. That was when Zermelo presented “On an Application of Set Theory to the Theory of the Game of Chess” to the International Congress of Mathematicians meeting in Cambridge. Zermelo was a pretty good chess player, but he was a very good mathematician, a professor at Göttingen. If Lasker was writing mostly about the psychology of chess, Zermelo struck out in the opposite direction. He wanted to abstract away from the tactical aspects of the game in order to give a thorough mathematical description of it. He wasn’t considering strategic interaction or equilibrium, so the mathematician wasn’t really doing game theory, as Ulrich Schwalbe and Paul Walker have pointed out.

Given that there was a finite number of positions on the board (64 squares and 32 pieces), Zermelo asked whether it was possible to say with precision when a player had achieved a “winning position” – and whether, once such a position had been achieved, it could be said with certainty how many moves would be required to end the game. The sheer complexity of the problem would require a different sort of math, Zermelo noted. He was an expert in set theory, which had undergone rapid development since Georg Cantor had broached it in the 1870s. Calculus was clearly no help in thinking about chess, but this new branch of mathematics might be.

Those first twenty years after Lasker’s pamphlet appeared produced many of game theory’s most basic concepts – the zero-sum game, for instance, in which one player’s win is another’s equal loss: chess, checkers, scissors-paper-stone. There could be no point in cooperating in such a game. The analysis of poker, not to mention most military and economic situations, in which gains could be shared, seemed to be a much more forbidding task. It turned out, though, that many of the tools developed for the zero-sum case were much more widely applicable.

They included the idea of the extensive (or tree) form of a particular game, meaning all its moves, from beginning to end; the pure strategy of each player, meaning a complete plan of how to play in response to moves by the other, a plan that never varies; and the notion of a mixed, or randomized, strategy, meaning the desirability of playing probabilistically, switching strategies without warning in order to remain unpredictable. Always showing paper is a surefire way to lose at scissors-paper-stone.

Indeed, among the greatest contributions of all may be the tabular description that has come to be known as a game’s strategic form, according to Robert Aumann, a Nobel laureate and author of an especially good history in *The New Palgrave: A Dictionary of Economics* (Macmillan, 1987). Setting out the strategic form involves jotting down the problem as a matrix, with rows and columns corresponding to the pure strategies of players 1 and 2, and the possible outcomes of play expressed by a pair of numbers in each cell. The best-known of these matrices today is the game known as the prisoner’s dilemma; the games of chicken, ultimatum, and battle of the sexes are other familiar examples. Strategic form can be expressed mathematically, too, but the little 2×2 matrices powerfully convey the essence of game theory. Aumann wrote,
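The matrix idea is easy to demonstrate. Here is a minimal sketch in Python of the prisoner’s dilemma in strategic form, using the conventional textbook payoffs (years in prison, written as losses); the numbers and the helper function are illustrative assumptions, not drawn from Aumann’s article:

```python
# Strategic (matrix) form of the prisoner's dilemma.
# Rows are player 1's pure strategies, columns player 2's;
# each cell holds (payoff to 1, payoff to 2). The payoffs are the
# conventional textbook years-in-prison values, expressed as losses.
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (-1, -1),
    (C, D): (-3,  0),
    (D, C): ( 0, -3),
    (D, D): (-2, -2),
}

def best_response(player, other_strategy):
    """Pure strategy maximizing `player`'s payoff against a fixed opponent move."""
    if player == 1:
        return max([C, D], key=lambda s: payoffs[(s, other_strategy)][0])
    return max([C, D], key=lambda s: payoffs[(other_strategy, s)][1])

# Writing the matrix down makes the dilemma visible at a glance:
# defection is each player's best reply no matter what the other does,
# yet mutual defection (-2, -2) is worse for both than mutual cooperation.
assert best_response(1, C) == D and best_response(1, D) == D
assert best_response(2, C) == D and best_response(2, D) == D
print(payoffs[(D, D)])
```

As Aumann suggests below, the point of the tabular form is less computation than perspective: writing the matrix forces each player’s problem onto the page at once.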

In interactive situations there is a great temptation to think, “What should I do?” When one writes down the matrix, one is led to a different viewpoint, one that explicitly and automatically takes into account that the other players are also facing a decision problem.

This was the situation in 1928, when John von Neumann appeared on the scene. The theory of games had become a community after twenty years, though not quite yet a scientific community. Its prominent members included not just Lasker and Zermelo but the celebrated French mathematician Émile Borel, professor of statistics at the Sorbonne, and John Maynard Keynes, whose *Treatise on Probability* had appeared in 1921.

Born in 1903, in Budapest, to a well-connected Jewish Hungarian family, von Neumann is one of the more fabulous characters in twentieth-century science, a polymath so many-sided that he has defied biographers’ attempts to see him whole, though Norman Macrae, for many years deputy editor of *The Economist*, gave it a valiant journalistic try in 1992. (The redoubtable solo journalist William Poundstone published *Prisoner’s Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb*, which is better on the technical detail, the same year.) In *John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More*, Macrae wrote,

The most startling innovator among the pure mathematicians of the 1920s, he surged on to leave his mark on theoretical physics and then on dramatically applied physics, on meteorology, on biology, on economics, on deterrence to war – and, eventually became, more than any other individual, the creator of the modern digital computer and the most farsighted of those to put it to early use. He marked up nearly all his achievements while he was mainly engaged in something else.

The “something else” in which von Neumann was mainly engaged in 1927-29 was creating, practically single-handedly, the mathematical framework by which the apparently contradictory quantum mechanics of Werner Heisenberg, in Göttingen, and Erwin Schrödinger, in Zurich, would be understood, as described in some detail in *John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death*, by physicist-turned-historian Steve J. Heims. An enthusiastic *bon vivant*, though, von Neumann had always been drawn to games – not just roulette and baccarat, but games in which players are able to bluff or feint or otherwise misrepresent their strengths and weaknesses: chess, poker and bridge.

So in 1928, in “The Theory of Parlor Games,” delivered first in 1926 as a talk to the Göttingen Mathematical Society, he set out an argument that became the foundation on which game theory would be built. Although the “minimax theorem” was largely a statement of algebra, according to Heims, in order to prove it von Neumann borrowed from set theory and topology, the same mathematics with which he had been describing quantum mechanics.

Von Neumann did a little explaining before the mathematical notation began. His topic was how to play a game that depends on the actions of others so as to get the best possible result, as long as it is the kind of game in which one player’s win is the other’s loss. There is “hardly a situation in daily life in which the problem does not enter” in some degree; never mind that we cooperate effortlessly about most things. What emerged was the minimax theorem, a powerful proof at the end of a long chain of reasoning that, even supposing both players adopt “mixed,” meaning randomized, strategies, it is possible to identify a strategy that will protect you from a major loss, no matter what the other player does – and still leave the possibility of a windfall gain, if the opposition doesn’t play minimax herself.

The best plain-English illustration I know of the minimax theorem is to be found in Poundstone’s *Prisoner’s Dilemma*: one of those situations drawn from daily life. How do you get two quarreling kids to divide a piece of cake? Make a game of it. Give one the knife and let the other choose between the two pieces that result. Greed and mistrust will do the rest. The kid with the knife will cut the cake as close to half as he is able, to leave himself the largest possible piece of cake consistent with losing (the minimax); the other will choose the piece that seems to have one crumb more (the maximin). (Technically they make their choices in the same instant.) The expected outcome of such a game came to be known as its saddlepoint, as in a pass between two mountains – as high as possible for the player traveling up; as low as necessary for the other traveling down. Appropriate strategies for games of chance are only somewhat more complicated, it turns out.
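The saddlepoint logic can be checked mechanically. The sketch below computes the maximin and minimax values of a small zero-sum payoff matrix; the numbers are invented for illustration, not taken from von Neumann. When the two values coincide, the game has a saddlepoint in pure strategies:

```python
# Saddlepoint of a zero-sum game in pure strategies.
# Entries are payoffs to the row player (row maximizes, column minimizes).
A = [
    [3, 1, 4],
    [1, 0, 2],
    [5, 1, 6],
]

# Row player: for each row, assume the worst (the row minimum),
# then pick the row whose worst case is best -> the maximin.
maximin = max(min(row) for row in A)

# Column player: for each column, assume the worst (the column maximum),
# then pick the column whose worst case is smallest -> the minimax.
minimax = min(max(row[j] for row in A) for j in range(3))

# Here both equal 1 (row 1 or 3 against column 2): a saddlepoint,
# where neither player can do better by deviating unilaterally.
print(maximin, minimax)
```

Von Neumann’s theorem says that once mixed strategies are allowed, the two values coincide in *every* finite two-person zero-sum game, not just in convenient matrices like this one.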

Of the mathematics involved, the best discussion I’ve come across is a remarkable article by the Danish historian Tinne Hoff Kjeldsen, in the *Archive for the History of Exact Sciences* for 2001. Her painstaking account of the evolution of von Neumann’s proof illuminates the indirect path he took to reach the territory where this installment must eventually end up: “the mathematico-geometrical theory of linearity and convexity” – a region more than a little forbidding to those of us who long ago left behind our math.

A note to the reader about this breathless pace: Ordinarily this would be more than enough reading for a single sitting, but we must plunge on if we are to reach the beginning of the next installment, in 1967, on time next week. It’s like careening around Europe, fourteen cathedrals in nine days. All deliberate haste is necessary, though, if we are to return to the Panic of 2008 and end this serial by Labor Day.

So if you want to learn some game theory, take a course or two. Meanwhile, recognizing that we are falling behind, I am going to truncate the rest of this installment, often dramatically. If I ever have time, I may return to the task of making this installment a nice easy read. For now, here’s what happened next.

**Berlin and Vienna to Princeton**

Von Neumann moved to the newly created Institute for Advanced Study, in Princeton, in 1933; Albert Einstein and the Hungarian physicist Eugene Wigner had preceded him. He plunged ahead with his mathematical work. The standard explanation of why he returned to game theory after a lapse of a decade ascribes it to Oskar Morgenstern’s arrival at Princeton University’s department of economics in early 1938. (Morgenstern had been fired by the Nazis from his job as professor at the University of Vienna.) Historian Robert Leonard argues for a deeper reading, based on von Neumann’s correspondence. The reawakening had at least something to do with the political tumult in Europe and von Neumann’s last prewar trip there, in 1938. For economists born in the US who came of age in the 1930s – Samuelson, Friedman and, as we shall see, Kenneth Arrow – the unemployment and idle resources of the Great Depression were the central problem. For von Neumann, it was the world itself that was falling apart.

This helps explain why the book that emerged in 1944 from von Neumann and Morgenstern’s collaboration was “more than a little schizophrenic,” in the words, many years later, of game theorist Ken Binmore. The first half, dealing with two-person zero-sum games, would be tightly reasoned, the opportunities of players and the basis for their choices carefully spelled out – the 1928 minimax paper, with some elaborations and a more elegant proof, devised by von Neumann in 1937 for a different purpose. The second half, concerned with games of three or more players, abandoned the strict standards, including the search for unique solutions, in hopes of showing how players might unite in coalitions within stable classes of solutions, relying on a black box labeled “social norms” to bring about and maintain stability. Morgenstern drafted the introduction, motivating the book in economic problems. Von Neumann wrote parts one and two. It is on page 128 that the book steers into the realm of topological convexity. Thereafter economists interested in game theory would deal with vectors, matrices and hyperplanes.

Within the economics profession, the reception to *Theory of Games and Economic Behavior* was polite but restrained. Morgenstern irritated many; his colleagues were said to dislike him, and not merely because they couldn’t understand what was going on (“there was a certain aristocratic touch to Oskar,” explained a friend). Samuelson found him “slightly Napoleonic.” Jacob Viner, who had moved from the University of Chicago to the Institute for Advanced Study, was said to be particularly disapproving. If they can’t solve chess, he would say, and economics is so much more complicated than chess, then what conceivable good can it be?

With von Neumann himself, Samuelson had only one encounter, in 1945, at a lecture at Harvard. Von Neumann presented his model of general equilibrium from 1937; it involved, he asserted, a new kind of mathematics that had no relation to calculus as employed in economics. Samuelson recalled in his Nobel lecture, “Maximum Principles in Analytical Economics,”

I piped up from the back of the room that I thought it was not all that different from the concept we have in economics of the opportunity cost frontier, in which for specified amounts of all inputs and all but one output society seeks the maximum of the remaining output. Von Neumann replied at that lightning speed that was characteristic of him, “Would you bet a cigar on that?” I am ashamed to report that for once little David retired from the field with his tail between his legs. And yet some day when I pass through Saint Peter’s Gates I do think I have half a cigar still coming to me – only half because Von Neumann still had a valid point.

Samuelson was thinking at the time of the interpretation he had given in the *Foundations* of Lagrange multipliers, familiar from classical mechanics, as shadow prices – an operation with which a good deal of optimization could be done. Soon the story would shift to Chicago – but not before a surprising further development in Princeton.

**Coalitions to Individuals**

Everyone who has read Sylvia Nasar’s 1998 biography of John Nash, *A Beautiful Mind*, knows what happened next. Nash did what von Neumann had not: he found a way to generalize the minimax theorem to any number of players, and to all finite games, not just zero-sum ones. This would enable him to distinguish between cooperative and non-cooperative games – the latter being those in which players make their decisions independently of one another – and so put the second half of the book on the back burner for the next seventy-five years. Nasar makes a riveting story out of what happened when the 21-year-old graduate student went to try out his idea on von Neumann in October 1949.

Von Neumann was sitting at an enormous desk, looking more like a prosperous bank president than an academic in his expensive three-piece suit, silk tie and jaunty pocket handkerchief. He had the preoccupied air of a busy executive. At the time he was holding a dozen consultancies, “arguing the ear off Robert Oppenheimer” over the development of the H-bomb, and overseeing the construction and programming of two prototype computers. He gestured Nash to sit down. He knew who Nash was, of course, but seemed a bit puzzled by his visit.

He listened carefully, with his head cocked slightly to one side and his fingers tapping. Nash started to describe the proof he had in mind for equilibrium in games of more than two players. But before he had gotten out more than a few disjointed sentences, von Neumann interrupted him, jumped ahead to the yet unstated conclusion of Nash’s argument, and said abruptly, “That’s trivial, you know. That’s just a fixed-point theorem.”

Never mind that it was von Neumann himself who had employed a fixed-point theorem in his earlier paper on general equilibrium, in 1937. The topological fixed-point argument enabled Nash to show that in every finite game there was at least one equilibrium in which everyone was doing the best they could, given what the others were doing; no player could gain by changing strategy alone. In a letter to the historian Leonard many years after the fact, Nash described what he had done: “I was playing a non-cooperative game in relation to von Neumann, rather than simply seeking to join his coalition. And of course it was psychologically natural for him not to be entirely pleased by a rival theoretical approach.” To illustrate how hard it was to maintain a coalition against attempts to break it, he devised a game he called So Long Sucker. So it was that a young American genius from the backwoods of Appalachia put some analytic bite into von Neumann and Morgenstern’s agenda. Soon afterward, John Nash began a long shadowy time in the grip of schizophrenia.
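Nash’s definition lends itself to brute force in small games. The sketch below is a hypothetical illustration, using the textbook battle-of-the-sexes payoffs rather than anything from Nash’s paper: it checks every profile of pure strategies for the no-profitable-deviation property that defines a Nash equilibrium.

```python
from itertools import product

# Brute-force search for pure-strategy Nash equilibria in a two-player
# game: a profile where neither player gains by deviating alone.
# (Nash's theorem guarantees an equilibrium in mixed strategies; some
# games, like matching pennies, have none in pure strategies.)
# Payoffs are the textbook "battle of the sexes" numbers.
strategies = ["opera", "boxing"]
payoffs = {  # (row choice, col choice) -> (row payoff, col payoff)
    ("opera", "opera"):   (2, 1),
    ("opera", "boxing"):  (0, 0),
    ("boxing", "opera"):  (0, 0),
    ("boxing", "boxing"): (1, 2),
}

def is_nash(r, c):
    u1, u2 = payoffs[(r, c)]
    no_row_deviation = all(payoffs[(r2, c)][0] <= u1 for r2 in strategies)
    no_col_deviation = all(payoffs[(r, c2)][1] <= u2 for c2 in strategies)
    return no_row_deviation and no_col_deviation

# Both coordination outcomes survive; the mismatched profiles do not.
equilibria = [(r, c) for r, c in product(strategies, strategies) if is_nash(r, c)]
print(equilibria)
```

The multiplicity on display here – two equilibria, with the players disagreeing about which is better – is part of what kept game theorists busy for decades after Nash.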

What’s not clear from Nasar’s book is how Nash equilibrium had the effect it did on economics – not all at once but gradually. As late as 1990 – four years before Nash shared a Nobel Prize – the significance had yet to dawn on those who sought to explain economics to the lay public. Robert Heilbroner never did get around to game theory in *The Worldly Philosophers: The Lives, Times, and Ideas of the Great Economic Thinkers*, even by the seventh and last edition, in 1998. And Jürg Niehans gave Nash only half a page in *A History of Economic Theory: Classic Contributions, 1720-1980* (Johns Hopkins, 1994). So on the fiftieth anniversary of the transmittal of Nash’s seminal paper, Roger Myerson, of the University of Chicago, took to the pages of the *Journal of Economic Literature* to explain, in “Nash Equilibrium and the History of Economic Theory.” (Myerson shared a Nobel Prize in 2007 for work further integrating game theory into economics.)

What Nash did, says Myerson, was to teach economists to stay focused on the process of individual (non-cooperative) decision-making, even in a negotiation to collude. In the process, he broadened economics’ purview to include incentives of all sorts. Myerson wrote,

Before Nash, price theory was the one general analytical methodology available to economics. The power of price-theoretic analysis enabled economists to serve as highly valued guides in practical policy-making, to a degree that was not approached by scholars in any other area of social science. But even within the traditional scope of economics, price theory has serious limits. Bargaining situations where individuals have different information do not fit easily into standard price-theoretic terms. The internal organization of a firm is largely beyond the scope of price theory. Price theory can offer deep insights into the functioning and efficiency of a market system where clear and transferable property rights are assumed for all commodities, but price theory cannot be applied to study the defects of a nonprice command economy. In development economics, an exclusive methodological reliance on price theory can lead naturally to a focus on those aspects of the developing economy that can be formulated within the terms of price theory, such as savings rates and international terms of trade, with a relative neglect of other fundamental problems such as crime and corruption, which can undermine the system of property rights that price theory assumes.

The broader analytical perspective of noncooperative game theory has liberated practical economic analysis from these methodological restrictions. Methodological limitations no longer deter us from considering market and nonmarket systems on an equal footing, and from recognizing the essential interconnections between economic, social, and political institutions in economic development.

So Nash’s formulation of noncooperative game theory should be viewed as one of the great turning points in the long evolution of economics and social science.

Probably so. But there were others.

**Meanwhile, in Chicago**

Readers of the earlier episode, The Rivals, about Paul Samuelson and Milton Friedman, will remember the Cowles Commission, founded in 1932 with the help of Irving Fisher. Having met in Colorado Springs during the summers of the 1930s, the commission pulled back to Chicago in 1939. War brought a new director, Jacob Marschak, and a mission: build a model of the US economy using the latest econometric techniques. The arrival of Marschak changed Cowles utterly. He was a shrewd judge of talent.

In this saga of wartime and post-war economics, Cowles deserves a chapter by itself. It is not going to get it here, not now – not even the half chapter to accompany game theory that I started out to write. Instead, there is more, and worse, truncation. I’ll claw back what I can next week, when we come to Kenneth Arrow.

The commitment to simultaneous equations led to friction with Friedman after he arrived in 1946. As the Cowles group became more enthusiastic about their methods, they became more critical of those that had been employed by Wesley Clair Mitchell and his successors at the National Bureau of Economic Research. The result, in 1947, was “Measurement without Theory,” a Cowles critique. It meant more friction with Friedman, but that is not the story here.

The author of “Measurement without Theory” was Tjalling Koopmans, a Dutch statistician who had come to Cowles in 1944 after a year at Princeton and two years at the Combined Shipping Adjustment Board, a joint US-British agency. There he had worked on what had become known as the transportation problem – how to achieve a stated objective in ocean shipping at the lowest cost or, alternatively, to achieve as much as possible with a given set of resource limitations.
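A toy version of the transportation problem shows the flavor of what Koopmans was working on. The numbers below are invented for illustration, and where he and his successors would use linear programming, this sketch simply enumerates the feasible integer shipping plans of a two-port, two-destination instance:

```python
# A toy instance of the transportation problem: ship goods from two
# ports to two destinations at least total cost, meeting supplies and
# demands exactly. All numbers are invented; a realistic instance
# would be solved by linear programming, not enumeration.
supply = [3, 4]            # units available at ports A, B
demand = [5, 2]            # units required at destinations X, Y
cost = [[4, 6],            # cost[i][j] = cost per unit from port i to dest j
        [2, 3]]

best_plan, best_cost = None, float("inf")
# Once x00 (A -> X) is chosen, the row and column sums fix the rest.
for x00 in range(min(supply[0], demand[0]) + 1):
    x01 = supply[0] - x00          # port A's remainder goes to Y
    x10 = demand[0] - x00          # port B covers the rest of X's demand
    x11 = supply[1] - x10          # port B's remainder goes to Y
    if min(x01, x10, x11) < 0 or x01 + x11 != demand[1]:
        continue                   # infeasible plan
    plan = [[x00, x01], [x10, x11]]
    total = sum(cost[i][j] * plan[i][j] for i in range(2) for j in range(2))
    if total < best_cost:
        best_plan, best_cost = plan, total

print(best_plan, best_cost)
```

The optimum routes as much as possible through the cheap port B, which is exactly the kind of shadow-price logic that linked the transportation problem to the mathematics of games.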

An Air Force consultant working on the same problem, George Dantzig, came by to visit in early 1947, before going on to Princeton to see von Neumann. The parallels with the mathematics of game theory were quickly recognized, and Dantzig became a key figure in the development of new methods of planning – “programming,” in the artfully neutered language of the time. (Central government planning was all the rage in Europe.)

Koopmans organized a conference in Chicago in June 1949, “Activity Analysis of Production and Allocation,” which turned into a coming-out party for the new methods. “Activity analysis” became “linear programming,” or, more generally, “operations research.” The background is described with extraordinary care by Till Düppe and E. Roy Weintraub, historians of economics and mathematics, in *Finding Equilibrium: Arrow, Debreu, McKenzie and the Problem of Scientific Credit* (Princeton, 2014).

One more series of important developments was about to commence at Cowles – developments that are the subject of *Finding Equilibrium*. News of Nash’s existence theorem in game theory inspired a search in the early 1950s for a similar proof of the existence of general equilibrium. A race of sorts developed among three young economists – Kenneth Arrow, Gérard Debreu, and Lionel McKenzie – no less interesting than the earlier rivalry between von Neumann and Nash.

Why a race? Because with the ultimate refinement of the Invisible Hand that such a proof represented, economics would gain a template for judging real-world competition no less valuable than Nash equilibrium. Arrow-Debreu-McKenzie equilibrium was, in the phrase of theorist Andreu Mas-Colell, “the door that opened into the house of analysis.”

Now there were three overlapping specialty fields – operations research, game theory, and the theory of general economic equilibrium – all requiring the services of the new mathematics of convexity. Especially after Koopmans’s 1957 book, *Three Essays on the State of Economic Science*, progress was rapid. “So complete was this takeover that among its enthusiasts the use of calculus to explore problems of optimization soon came to be looked upon as an unnatural act,” editor Peter Newman wrote in *The New Palgrave: A Dictionary of Economics* (1987) – until the 1970s, he added, when enthusiasm for still newer methods of global analysis took over.

In Cambridge, Robert Dorfman, Paul Samuelson and Robert Solow (DOSSO in graduate-school parlance) collaborated on a text in the new methods – *Linear Programming and Economic Analysis*, which appeared in 1958. A sturdy text of mainstream economics, it included two short chapters near the end on game theory. In Chicago, Milton Friedman consolidated his hold on the department. The embattled factions left for their respective corners – game theorists to the West Coast, especially the Santa Monica clubhouse that was the RAND Corp.; Koopmans and others of the Cowles group to Yale.

So game theory remained something of an orphan in economics in the 1950s. Duncan Luce and Howard Raiffa wrote a pioneering text, *Games and Decisions* (1957). Martin Shubik and Thomas Schelling found ways to connect game theory to economics. But on the twentieth anniversary of the publication of *Theory of Games and Economic Behavior*, Samuelson wrote that, while it had added much, the book had “accomplished everything except what it started out to do – namely, revolutionize economic theory.”

Center stage was Cambridge, and Camelot.