Among those who work for newspapers, the inevitability of bias is a widely acknowledged problem. News reporters bring all sorts of prior convictions to their work. The job of editors is to strive to correct for bias and minimize its effects.
Editors hire reliable and imaginative reporters (within parameters that they and their publishers set themselves), establish standards, promote the exemplary (and sideline those who in the course of their work become most heavily committed to a point of view), match factions against one another within their newsrooms, and otherwise seek to produce a report as close as possible to neutral with respect to the burning questions of the day — all the while encouraging the freedom of inquiry that is the essence of any journalistic enterprise.
Responsibility for the overall result inevitably falls on the top editor. At The New York Times, for example, executive editor Joseph Lelyveld said of one of his predecessors, “Abe [Rosenthal] would always say, with some justice, that you have to keep your hand on the tiller and steer to the right or it’ll drift off to the left.”
So what to make of a couple of economists who say they’ve devised “an objective measure of the slant of news,” one that lets them say with some certainty who’s liberal and how much? What to think when the first use of this yardstick shows the most blatant bias in newspapers to be that of the news pages of The Wall Street Journal — not the famously conservative editorial page, mind you, but the news pages? When the account appears, not on some watchdog Webpage but in the pages of a prestigious economic journal?
By its readers, and probably in the news business generally, the WSJ is considered the gold standard of fair and balanced coverage — not perfect, but closer than any other newspaper to the unattainable goal of absolute neutrality, and over a broader swathe of intrinsically political activity, to boot. I can’t prove this. I merely assert it. But I believe it to be the case.
Yet a pair of professors of political economy, Timothy Groseclose of the University of California at Los Angeles and Jeffrey Milyo of the University of Missouri, say that the WSJ’s news columns were “the most liberal” of the 20 major US newspapers, magazines and television programs that they measured. In almost all the rest, they found evidence of “a strong liberal bias.” And they have numbers to back it up — tables, regressions, an alternative hypothesis, a statistical model. (The paper can be found at http://www.polisci.ucla.edu/faculty/groseclose/Media.Bias.pdf.)
So one might reasonably ask, what’s going on here?
The fundamental premise of “A Measure of Media Bias” is that the news business can be approached the same way that journal managers, prize-givers and historians approach the scientific literature — through citation counts. “You are who you quote — and vice versa” is not an unusual notion to practitioners of a discipline in which influence routinely is gauged by citation counts. It is, however, a novel approach to the news.
Here’s what Groseclose and Milyo did: they started with a list of 200 prominent think-tanks that they found on the Web. Then they hired a bunch of college students to count the number of times that members of Congress mentioned those think tanks over a period of ten years. That part was not so hard because The Congressional Record is computer-searchable and free. (They omitted mentions that included ideological labels and also mentions that were critical.)
Next, reluctant to apply their own labels to think tanks, the authors let Americans for Democratic Action do it for them. That is, the ADA, a liberal group, long has rated members of Congress annually on the basis of their votes on 20 measures of great interest to liberals: a score of 100 for 20 liberal votes, 0 for none. By associating each favorable mention with the ADA rating of the member making it, and averaging the results, Groseclose and Milyo transferred the ADA ratings from individual members to the various think tanks (adjusted in certain complicated ways to reflect the changing composition of Congress).
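For readers who want to see the mechanics, here is a minimal sketch of that averaging step in Python. All of the mention data and the function name are invented for illustration, and the sketch omits the paper’s adjustments for the changing composition of Congress.

```python
# A toy illustration of the averaging step described above. All names and
# numbers here are invented; the paper's actual procedure also adjusts for
# the changing composition of Congress, which this sketch omits.

from collections import defaultdict

# Each favorable, label-free mention pairs the ADA score of the member of
# Congress who made it with the think tank mentioned (hypothetical data).
mentions = [
    (95, "Economic Policy Institute"),
    (85, "Economic Policy Institute"),
    (15, "Heritage Foundation"),
    (25, "Heritage Foundation"),
    (60, "Brookings Institution"),
    (45, "Brookings Institution"),
]

def impute_think_tank_scores(mentions):
    """Average the ADA scores of the members citing each think tank."""
    totals = defaultdict(lambda: [0.0, 0])  # tank -> [sum of scores, count]
    for ada_score, tank in mentions:
        totals[tank][0] += ada_score
        totals[tank][1] += 1
    return {tank: total / count for tank, (total, count) in totals.items()}

print(impute_think_tank_scores(mentions))
# {'Economic Policy Institute': 90.0, 'Heritage Foundation': 20.0,
#  'Brookings Institution': 52.5}
```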
So far so good — or at least mostly in line with expectations. The Economic Policy Institute and NAACP scored near the high end of the scale, with imputed ADA ratings of 80 and 82. The Brookings Institution (53.3) and the Carnegie Endowment for International Peace (51.9) were rated in the middle, where they seek to be seen. The Heritage Foundation (20.0) and Christian Coalition (22.6) were well to the right.
There were, in fact, a couple of interesting anomalies, readily identified and discussed by the authors. The ACLU’s unexpectedly middle-of-the-road ranking (49.8) could be explained by its opposition to campaign finance laws and consequent popularity among conservatives in Congress. RAND Corp.’s relatively liberal score (60.4) was at odds with its reputation as a source of non-partisan work. It could be explained by what the authors deemed something of a split personality — conservative authors’ work on classified military studies was less likely to be quoted than liberals’ declassified work on social problems.
Armed with their new think-tank scores, the authors repeated their cite-counting procedure with news organizations: that is, they counted the frequency with which the top fifty think tanks were mentioned in print or on the air. Here the sample periods varied widely and often were much shorter than the ten-year study of The Congressional Record, though the authors noted they had at least 300 citations from each news outlet.
Then came the really interesting part: call it the Tinker to Evers to Chance gambit, after the legendary double-play combination of the Chicago Cubs in the early years of the twentieth century. The authors took the ADA scores they had imputed to the think tanks (from the Congressional ratings) and transferred these scores to the news outlets, on the basis of the frequency with which the newspapers, magazines and television programs had mentioned the think tanks’ work. When they assigned an imaginary “average voter” an ADA score of 50, they had their results.
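Here, again as a rough sketch only, is the same kind of toy calculation for the second transfer. Groseclose and Milyo in fact fit a statistical model rather than take a simple weighted average; the citation-frequency-weighted mean below is meant only to convey the intuition, and every outlet name and count in it is made up.

```python
# A similarly stylized sketch of the transfer from think tanks to outlets.
# Groseclose and Milyo actually fit a statistical model; the citation-weighted
# average below only conveys the intuition. All counts are invented.

think_tank_scores = {  # imputed ADA scores, as in the previous step
    "Economic Policy Institute": 80.0,
    "Brookings Institution": 53.3,
    "Heritage Foundation": 20.0,
}

outlet_citations = {  # hypothetical mention counts per news outlet
    "Outlet A": {"Economic Policy Institute": 50,
                 "Brookings Institution": 40,
                 "Heritage Foundation": 10},
    "Outlet B": {"Economic Policy Institute": 10,
                 "Brookings Institution": 40,
                 "Heritage Foundation": 50},
}

def impute_outlet_score(citations, scores):
    """Citation-frequency-weighted average of think-tank scores."""
    total_mentions = sum(citations.values())
    weighted = sum(scores[tank] * n for tank, n in citations.items())
    return weighted / total_mentions

for outlet, citations in outlet_citations.items():
    print(outlet, round(impute_outlet_score(citations, think_tank_scores), 1))
# Outlet A 63.3  (leans on liberal-rated tanks, so it scores higher)
# Outlet B 39.3
```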
The Washington Times weighed in at an imputed ADA rating of 35, Fox News Special Report with Brit Hume at around 40. Next came NewsHour with Jim Lehrer, CNN NewsNight with Aaron Brown and ABC Good Morning America, all clustered at around 56. Time and Newsweek rated around 65, The New York Times and CBS identically at 73.7. And the WSJ registered a scale-topping 85.1, reflecting (apparently) the frequency with which those think tanks of which some Congressional liberals most approved were mentioned in its pages during the sample period.
So what does it signify when an academic study arrives at conclusions so different from the conventional wisdom among the news professionals who are the object of the study? Both versions can’t be right.
The WSJ’s instinctive reaction was to question the choice of think tanks as an appropriate instrument for measuring political dialogue. After the UCLA public relations office issued a press release promoting the study, a Dow Jones spokesman wrote in a memo to a widely followed news industry Webpage, “What are we to make of the validity of a list of important policy groups that doesn’t include, say, the Chamber of Commerce, the National Association of Manufacturers, the AFL-CIO or the Concord Coalition but that does include People for the Ethical Treatment of Animals?”
(Moreover, if a think-tank expert is cited as taking one side in an argument and a university professor or a politician is quoted taking the opposite view, only the think-tank mention goes into the measure of bias. The best newspapers pride themselves on how wide they throw their net in order to include all the interested parties to a debate. To imply that think tanks are where the real debate occurs amounts to a full employment policy for the authors’ students.)
(Indeed, four of the think tanks most frequently cited by Congress are identified by their ADA scores as right-wing — the National Taxpayers Union, the Heritage Foundation, Citizens Against Government Waste and the National Federation of Independent Business. Only the American Enterprise Institute makes the media’s top ten. The authors take this to be preliminary evidence that the media is more liberal than the Congress. I take it to show that Congress and the media are in very different businesses and that think tanks seek to influence them in very different ways — and that mapping one set of opinions onto another and another is less like Tinker to Evers to Chance than baseball to football to soccer. Sure, each game is played with a ball, but each has different rules.)
Also irritating to Dow Jones was the very brief sample of its coverage on which the authors based their results. “The reader of this report has to travel all the way to Table III on page 57 to discover that the researchers’ ‘study’ of the content of The Wall Street Journal covers exactly FOUR MONTHS in 2002, while the period examined for CBS News covers more than 12 years, and National Public Radio’s content is examined for more than 11 years.”
(The Washington Post and the Washington Times also were sampled for the same four months, while Time was sampled for a little less than two years, Fox News Special Report for five years, Newsweek for eight years and ABC News for nine years.)
Indeed, so brief was the four-month survey accorded the WSJ, the Post and The Washington Times that it seemed as if “they were simply thrown into the mix as an afterthought,” wrote the Dow Jones spokesperson. “Yet the researchers provide these findings with the same weight as all the others, without bothering to explain that in any meaningful way to the study’s readers.”
“Suffice it to say that ‘research’ of this variety would be unlikely to warrant a mention at all in any Wall Street Journal story.”
Yet there may be a story there someplace — if only because “A Measure of Media Bias” appears in the November issue of the Quarterly Journal of Economics. True, if the study were just another salvo from a media watchdog organization, it probably wouldn’t be news.
But the QJE is the oldest professional journal of economics in the English language. It is edited at Harvard University’s Department of Economics. And the authors have high hopes for what they have done. They write, “[T]he main goal of our research is simply to demonstrate that it is possible to create an objective measure of the slant of the news. Once this is done, as we hope we have demonstrated…, it is easy to raise a host of theoretical issues to which such a measure can be applied.”
Groseclose and Milyo make much of the fact that universities paid for their research — their own salaries, those of their research assistants and their substantial Lexis-Nexis bills. “No other organization or person helped us fund this research project,” they write at the outset. Yet both researchers tilt sharply to the right themselves. Groseclose has been a National Fellow of the Hoover Institution; Milyo a fellow of the Heritage Foundation. Their disclaimer brings to mind an old dictum associated with the late, great Laurence Stern of the Washington Post: “Punctiliousness in the matter of small debts is usually a sign of a developing scam.”
Then there is the QJE co-editor who shepherded the article into print, Harvard University economist Robert Barro, himself famously conservative. Barro says that he first heard the paper given at Stanford in 2004 and encouraged the authors to submit it to the QJE. Twice Barro wrote up the paper himself in the popular press — first in his monthly Business Week column (“The Liberal Media: It’s No Myth”), then six months later for The Weekly Standard (“Bias Beyond a Reasonable Doubt”). Meanwhile, “A Measure of Media Bias” went through the usual refereeing process, and was favorably reviewed “by several serious scholars.” A second QJE co-editor signed off on it, Barro says, though whether it was Edward Glaeser or Lawrence Katz is not clear. Both are professors at Harvard University. Once the article was accepted, the press releases started to roll.
“I made a number of suggestions,” says Barro, “one of which was to enlarge the sample, particularly to include the WSJ.” The WSJ had been omitted from the original survey, he explains, because it couldn’t be accessed through Lexis-Nexis. “Therefore generating the WSJ data was very time-consuming. For this reason the time period for the WSJ is smaller than for other outlets, but I don’t think this is a problem.” Besides, he adds, the authors replicated the “surprising leftwing rating for the news pages” of the WSJ in a sample for a second year, though those findings are apparently unpublished.
Then, too, there is the “anecdotal evidence” that the authors cite to support their findings. Fairly typical is this passage: “[S]ome anecdotal evidence agrees with the result. For instance, Reed Irvine and Cliff Kincaid note that ‘The Journal has had a long-standing separation between its conservative editorial pages and its liberal news pages.’ Paul Sperry, in an article entitled ‘Myth of the Conservative Wall Street Journal,’ notes that the news division of the Journal sometimes calls the editorial division ‘Nazis.’ ‘Fact is,’ Sperry writes, ‘the Journal’s news and editorial departments are as politically polarized as North and South Korea.’” With corroboration like this, who needs doubts? (Sperry’s article can be found here.)
There are many other conceptual problems with the study, most of them having to do with the effort to identify a neutral “center” in a time of dramatic polarization and rapid change. Groseclose and Milyo will get a good going over from fellow scholars in the months to come. (The blogs have been having a field day.)
No one should be surprised that the role of an independent press has begun to come under the lens of the economists, given the new interest in the institutions that stimulate growth and make democracy work. And, in truth, the thirty years of opposition between the Wall Street Journal’s editorial page and its news pages have the makings of a great natural experiment — not so much North and South Korea as religion and science.
It will, however, take stronger theorizing than citation counting to reveal its secrets. Groseclose and Milyo have measured something. I don’t know what it is, and I am pretty certain that they don’t know either. They seem more interested in proclaiming their findings than understanding them. It would be ironic if the main thing that the authors established beyond a reasonable doubt was the existence of a high degree of partisan zeal at the helm of the QJE.