The course in introductory computer science at Harvard College this year surpassed elementary economics in popularity. It has 818 enrollments, or nearly 12 percent of the undergraduate population, compared to 711 students in Economics 10. For most of the last thirty years, the economics course has been the catalog’s top offering.
The swing probably reflects the expert showmanship of computer scientist David Malan more than any underlying change in the relative importance of the fields. (See this eye-opening account of Malan’s entrepreneurial flair by Cordelia Mendez, and/or read this defense of it by Malan’s colleague Harry Lewis.) But the surge of interest in computers surely also reflects a desire on the part of undergraduates to better understand whatever happens next, especially in the sorts of fields they think they might like to enter.
It is a conviction among many computer scientists that when (not if) the generalizable knowledge representation problem is solved, a new wave of expert systems will quickly emerge, enabling robot software to displace human doctors, lawyers, bankers, teachers, pilots, and, of course, taxi drivers in performing a wide range of their customary tasks.
An especially good meditation on this uncertain future has just appeared. The Glass Cage: Automation and Us (Norton), by Nicholas Carr, stipulates that software has cut costs, decreased workloads and enhanced safety. But, warns Carr, it has also eroded skills, dulled perceptions, slowed reactions and produced a “glass cage” of complacency about automation – a cage that he aims to break.
A case in point. Carr’s book includes a wonderful account of the first appearance, almost exactly a hundred years ago, of a technology whose evolution since is of great interest to him. The scene was the Paris Air Show, June 18, 1914 – an event designed then, as now, to showcase the latest developments in aviation.
Piloting a Curtiss biplane that day was Lawrence Sperry, son of inventor Elmer Sperry. Flying with him was his mechanic, Emil Cachin. On the first low pass before the grandstand, Sperry held his hands aloft. Remarkable! The plane was flying itself.
On the second pass, Cachin had climbed out to stand alone on one wing. Again, no hands, despite the change in wind resistance. On the third pass, Sperry, too, had climbed out of the cockpit to stand on the other wing. Carr writes, “The crowd and the judges were dumbfounded.”
Beneath the vacant pilot’s seat was Elmer Sperry’s “gyroscopic stabilizer apparatus,” a pair of gyroscopes, installed horizontally and vertically and powered by the wind, the controllers of history’s first autopilot. “Sperry won the grand prize – fifty thousand francs – and the next day his face beamed from the front pages of newspapers across Europe.”
Fast-forward to Air France Flight 447, Rio de Janeiro to Paris, June 1, 2009. You remember the story. The Airbus A330 encountered a storm in mid-Atlantic in the middle of the night. The Pitot tubes, its air-speed sensors, iced up, the autopilot disengaged, and the co-pilot who took control, thinking he was going too slow, pulled back on the yoke and put the plane into a stall. Instead of reversing himself to pick up speed (or simply letting go to permit the plane to fly itself), he continued to try to climb. The captain sought to take control, but it was too late. The plane fell 30,000 feet into the ocean.
Carr rehearses the history of the design competition between Airbus and Boeing over the thirty years since computer-controlled “fly by wire” techniques (as opposed to traditional cables, pulleys and gears) were introduced. Airbus pursued a technology-centered approach to render its planes “pilot-proof,” designing software systems that in some cases overruled commonly made pilot errors. Boeing, in contrast, embraced computer control of airplane surfaces, but kept the aviator at the center of its systems. Significantly, it retained bulky old-fashioned front-mounted yokes in contrast to the smaller side-mounted game-controller-like devices with which Airbus pilots steered their planes. Carr writes:
Airbus makes magnificent planes. Some commercial pilots prefer them to Boeing’s jets, and the safety records of the two manufacturers are pretty much identical. But recent incidents reveal the shortcomings of Airbus’s technology-centered approach. Some aviation experts believe that the design of the Airbus cockpit played a part in the Air France disaster. The voice-recorder transcript revealed that the whole time the [co]pilot controlling the plane, Pierre-Cédric Bonin, was pulling back on his sidestick, his co-pilot [captain] David Robert, was oblivious to Bonin’s fateful mistake. In a Boeing cockpit, each pilot has a clear view of the other pilot’s yoke, and how it’s being handled. If that weren’t enough, the two yokes operate as a single unit. If one pilot pulls back on his yoke, the other pilot’s goes back too. Through both visual and haptic [tactile] cues, the pilots stay in synch. Airbus sidesticks, in contrast, are not in clear view, they work with much subtler motions, and they operate independently. It’s easy for a pilot to miss what his colleague is doing, particularly in emergencies when stress rises and focus narrows.
Design differences like these in particular products will be resolved over time, but the differences in philosophy that the story reveals are ubiquitous, and often important on a systemic scale. Carr concludes:
As computer systems and software applications come to play an ever larger role in shaping our lives and the world, we have an obligation to be more, not less, involved in decisions about their design and use – before technological momentum forecloses the options. We should be careful about what we make.
Thus the appeal of computer science, especially to those just starting out on their careers. Economics, too, has much to say about these matters, under the heading, at least at first, of recent work on the diffusion of general purpose technologies (steam, chemistry, electricity, computers, biotechnology). After that come the deeper social mysteries of the nature of work itself. Expect those enrollments to remain high. There will be plenty of opportunity teaching and writing about robotology.
2 responses to “Early Warnings”
Another nice early warning of where we’re heading.
Though, given that this is Harvard, I’d expect the dominating factor to be “Lots of money will be made in this area for the next few decades.”
In regard to automation and safety, as the quote says, “the safety records of the two manufacturers are pretty much identical.” A much more interesting case is the stunning built-in fragility of the late housing finance system: one could not have deliberately designed a system better to hide the risks involved, while creating risks that had never existed before (the people who wrote the mortgages held no financial risk regarding their quality, and it became possible for housing prices to crash everywhere in the country simultaneously).
“We must live with the fact, true throughout recorded history, that our artifacts are sometimes flawed and cause us to die in novel and unexpected ways, and we can only do our human best to minimize the problems.” — Kent Paul Dolan
Both economics and CS/Automation have a darkly attractive side of “domination through disregard”.
If I can define an economic system algebraically, and decide on my optimization variable, I can disregard the interests or humanity of the elements of that system so long as my variable is being optimized.
Similarly, being on the computer’s side of an expert system that makes a field of human endeavour obsolete is being on the winning side, and disregards the interests of the losing side.
I’m no ‘bleeding heart’; my background is in economics and computer science. But I wouldn’t be surprised if, for our Harvard friends, part of the attraction is the sense of power that comes from winning, especially winning by disregarding the ‘realness’ of the loser.