Friday, February 25, 2005

Media Bias Quantified

QandO points me towards a new study of bias in the media: A Measure of Media Bias (download the whole thing here)

I've read the whole thing, although I'll admit I only skimmed over their mathematical models, and it seems like a pretty competent piece of social science. I'm sure people could have quibbles with some of the decisions they make in their approach, but it seems to me that they have gone about this task in a fair and conscientious manner. Their basic idea is to construct ADA (Americans for Democratic Action) scores for media outlets by noting which think tanks they use for support in news stories and comparing them to members of Congress who use the same sources for support. Therefore they are not defining bias in an absolute fashion, but in a comparative fashion. Thus they can say that if Newspaper X were a senator, it would be most like Ted Kennedy or Bill Frist.

I'm not gonna go over all of their methodological byways. The paper is littered with caveats and special considerations (everything from dealing with gerrymandered districts, to Mitch McConnell singlehandedly skewing the score of the ACLU), but their basic measures seem sound.
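To make the basic idea concrete, here's a toy cartoon of the approach. This is my own sketch, not the authors' model (they fit a formal statistical model to actual citation counts); the legislators, think tanks, citation data, and similarity measure below are all hypothetical:

```python
# Toy illustration: each actor is summarized by the mix of think tanks
# it cites; an outlet inherits the ADA score of the legislator whose
# citation mix it most resembles.
from collections import Counter

def citation_mix(citations):
    """Normalize a list of cited think tanks into a frequency vector."""
    counts = Counter(citations)
    total = sum(counts.values())
    return {tank: n / total for tank, n in counts.items()}

def similarity(mix_a, mix_b):
    """Overlap between two citation mixes (1.0 = identical)."""
    tanks = set(mix_a) | set(mix_b)
    return sum(min(mix_a.get(t, 0), mix_b.get(t, 0)) for t in tanks)

# Hypothetical data: (ADA score, think tanks cited) for two legislators.
legislators = {
    "Sen. A": (88.8, ["Brookings", "Brookings", "EPI", "Urban Institute"]),
    "Sen. B": (10.3, ["Heritage", "AEI", "AEI", "Cato"]),
}

def outlet_score(outlet_citations):
    """Assign the outlet the ADA score of the best-matching legislator."""
    mix = citation_mix(outlet_citations)
    best = max(legislators.values(),
               key=lambda leg: similarity(mix, citation_mix(leg[1])))
    return best[0]

print(outlet_score(["Brookings", "EPI", "Heritage", "Brookings"]))  # 88.8
```

The point of the cartoon is just the comparative logic: the outlet never gets an "absolute" bias score, only the score of whichever legislator it cites most like.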

The findings? They wind up defining the center as a score of 50.1 on the ADA scale (though they also construct an alternative "center" score of 54.0).

Next, we compute the difference of a media outlet’s score from 50.1 to judge how centrist it is. We list these results in Table 4. Most striking is that all but two of the outlets we examine are left of center. Even more striking is that if we use the more liberal definition of center (54.0)—the one constructed from congressional scores from 1975-94—it is still the case that eighteen of twenty outlets are left of center.


Fox News’ Special Report is approximately one point more centrist than ABC’s World News Tonight (with Peter Jennings) or NBC’s Nightly News (with Tom Brokaw). In neither case is the difference statistically significant. Given that Special Report is one hour long and the other two shows are a half-hour long, our measure implies that if a viewer watched all three shows each night, he or she would receive a nearly perfectly balanced version of the news. (In fact, it would be slanted slightly left by 0.4 ADA points.)

Special Report is approximately thirteen points more centrist than CBS Evening News (with Dan Rather). This difference is significant at the 99% confidence level. Also at 99% confidence levels, we can conclude that NBC Nightly News and ABC World News Tonight are more centrist than CBS Evening News.
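As a quick sanity check on that weighted-average arithmetic (my sketch, using the ADA scores from the chart below and assuming airtime is the right weight):

```python
# Time-weighted ADA score for a nightly diet of all three broadcasts.
# Scores come from the paper's chart; 50.1 is the authors' "center".
shows = [
    ("Fox Special Report", 39.7, 1.0),      # one hour
    ("ABC World News Tonight", 61.0, 0.5),  # half hour
    ("NBC Nightly News", 61.6, 0.5),        # half hour
]

total_time = sum(hours for _, _, hours in shows)
weighted_score = sum(score * hours for _, score, hours in shows) / total_time

center = 50.1
print(f"weighted ADA score: {weighted_score:.1f}")            # 50.5
print(f"slant vs. center:   {weighted_score - center:+.1f}")  # +0.4 (left)
```

Sure enough, the hour of Special Report almost exactly offsets the two half-hour network broadcasts, leaving the 0.4-point leftward residue they mention.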


Another implication of the scores concerns the New York Times. Although some claim that the liberal bias of the New York Times is balanced by the conservative bias of other outlets, such as the Washington Times or Fox News’ Special Report, this is not quite true. The New York Times is slightly more than twice as far from the center as Special Report. Consequently, to gain a balanced perspective, a news consumer would need to spend twice as much time watching Special Report as he or she spends reading the New York Times. Alternatively, to gain a balanced perspective, a reader would need to spend 50% more time reading the Washington Times than the New York Times.
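That "balanced perspective" arithmetic also falls straight out of distance-from-center. A rough check with the chart's rounded scores (my own calculation, not the authors'):

```python
# Distance from the authors' center (50.1) drives the "balanced diet"
# math: to offset outlet A with outlet B, spend dist(A)/dist(B) as much
# time on B as on A.
center = 50.1
scores = {
    "New York Times": 73.7,
    "Fox Special Report": 39.7,
    "Washington Times": 35.4,
}
dist = {name: abs(s - center) for name, s in scores.items()}

# NYT is a bit more than twice as far from center as Special Report...
print(dist["New York Times"] / dist["Fox Special Report"])  # ~2.27
# ...and the NYT/Washington Times ratio comes out around 1.6.
print(dist["New York Times"] / dist["Washington Times"])    # ~1.61
```

With these rounded chart scores the Washington Times ratio lands nearer 60% than the paper's "50% more"; presumably their figure uses unrounded estimates.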

The full chart looks like this:

Outlet -- Period of Observation -- ADA Score
ABC Good Morning America -- 6/27/97 - 6/26/03 -- 56.1
ABC World News Tonight -- 1/1/94 - 6/26/03 -- 61.0
CBS Early Show -- 11/1/99 - 6/26/03 -- 66.6
CBS Evening News -- 1/1/90 - 6/26/03 -- 73.7
CNN NewsNight with Aaron Brown -- 11/9/01 - 2/5/04 -- 56.0
Drudge Report -- 3/26/02 - 7/1/04 -- 60.4
Fox Special Report with Brit Hume -- 6/1/98 - 6/26/03 -- 39.7
LA Times -- 6/28/02 - 12/29/02 -- 70.0
NBC Nightly News -- 1/1/97 - 6/26/03 -- 61.6
NBC Today Show -- 6/27/97 - 6/26/03 -- 64.0
New York Times -- 7/1/01 - 5/1/02 -- 73.7
Newshour with Jim Lehrer -- 11/29/99 - 6/26/03 -- 55.8
Newsweek -- 6/27/95 - 6/26/03 -- 66.3
NPR Morning Edition -- 1/1/92 - 6/26/03 -- 66.3
Time Magazine -- 8/6/01 - 6/26/03 -- 65.4
U.S. News and World Report -- 6/27/95 - 6/26/03 -- 65.8
USA Today -- 1/1/02 - 9/1/02 -- 63.4
Wall Street Journal -- 1/1/02 - 5/1/02 -- 85.1
Washington Post -- 1/1/02 - 5/1/02 -- 66.6
Washington Times -- 1/1/02 - 5/1/02 -- 35.4

average 62.6
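For what it's worth, that quoted average checks out against the chart (a throwaway calculation with the rounded scores above):

```python
# Mean ADA score across the twenty outlets in the chart above.
scores = [56.1, 61.0, 66.6, 73.7, 56.0, 60.4, 39.7, 70.0, 61.6, 64.0,
          73.7, 55.8, 66.3, 66.3, 65.4, 65.8, 63.4, 85.1, 66.6, 35.4]

avg = sum(scores) / len(scores)
print(avg)  # matches the 62.6 quoted above (to one decimal place)
```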

Compare this with your favorite legislator:

Legislator Ave. Score

Maxine Waters (D.-Calif.) 99.6
Ted Kennedy (D.-Mass.) 88.8
John Kerry (D.-Mass.) 87.6
average Democrat 84.3
Tom Daschle (D.-S.D.) 80.9
Joe Lieberman (D-Ct.) 74.2
Constance Morella (R-Md.) 68.2
Ernest Hollings (D-S.C.) 63.7
John Breaux (D-La.) 59.5
Christopher Shays (R-Ct.) 54.6
Arlen Specter (R-Pa.) 51.3
James Leach (R-Iowa) 50.3
Howell Heflin (D-Ala.) 49.7
Tom Campbell (R-Ca.) 48.6
Sam Nunn (D-Ga.) 48.0
Dave McCurdy (D-Ok.) 46.9
Olympia Snowe (R-Me.) 43.0
Susan Collins (R-Me.) 39.3
Charlie Stenholm (D-Tex.) 36.1
Rick Lazio (R-N.Y.) 35.8
Tom Ridge (R-Pa.) 26.7
Nathan Deal (D-Ga.) 21.5
Joe Scarborough (R.-Fla.) 17.7
average Republican 16.1
John McCain (R.-Ariz.) 12.7
Bill Frist (R.-Tenn.) 10.3
Tom DeLay (R.-Tex.) 4.7

Maybe not conclusive, but damn interesting.


Anonymous said...

In my exceptionally quick read of this study, several things strike me as possible weaknesses that I don't think the authors addressed (again, they may have, and I might have missed it).

1) There may be a serious confound here in that what Senators and Congressmen said during debate may have come from the very media outlets the authors describe. If so, then the bias is not the media's, but Congress's. (Though it's fair to say that they may be citing them precisely because the media leans left; I have to think more about this problem.)

2) It's important to note, and I don't think the authors do this, that they are not measuring media bias as a whole, but only bias on a very select set of policy issues. Congress does not debate the same variety of issues that the media cover, and on which the media may display a bias. For example, capital punishment probably has not gotten a lot of attention on the Hill, and probably not much attention from think tanks, but the media cover it a lot. This does not suggest that the media aren't more liberal on capital punishment, only that this paper doesn't measure it. Its very design makes the study an examination of an extremely narrow slice of media behavior.

3) It may be that there is a built-in leftist tilt to their study simply from measuring everything against ADA scores. That is, by focusing on ADA scores, they are already heading in that direction. The best way to test that would be to use a conservative group's scoring on the same sample of stories. If the findings there are close to a mirror image of the findings they present, that would strengthen their argument. But if those findings are skewed and suggest a conservative bias, then their results could simply be a product of the measure they use and not a real description of media bias.

4) Their operative measure, the number of times a particular think tank is mentioned, is very problematic, as it does not really capture the quality or extent of the comment. For example, if I were a reporter introducing liberal bias into a story, consciously or not, I am probably more likely to pick quotes from the liberal think tank that are of higher quality than the ones I pick from the conservative think tank. I could pick the same number of quotes, or even more conservative quotes than liberal ones, but the quotes I pick could make the conservatives look like raving lunatics. That is not captured in their measure, and it could skew the findings as more liberal or more conservative; the point being that they just don't know.

Also, they don't measure column inches, which strikes me as a more precise measure than the number of times something is quoted (or the length of the citation from the Hill). A reporter could cite a liberal think tank three times for every citation of a conservative think tank, but the conservative quote could be twice the total length of the others combined. If that were the case, then the findings would skew liberal when that's probably not so. Again, this could change the findings in either direction, but the authors are not able to tell from the way their study is designed. This would also affect the comparisons among media outlets.

5) In addition, it's problematic to compare across media outlets because of the nature of reporting. Stories on the network news are likely to be shorter and less comprehensive than magazine or newspaper stories, so the number of mentions may be lower. They may go to only one source because of time limits (the choice of that source is, of course, subject to bias). Newspaper (and longer broadcast) stories have more opportunity to cite various sources, which could skew the findings.

6) I would be far more comfortable if their bias test were done over the exact same time frame for each media source. During different administrations, different policy-oriented groups and think tanks will get diverging amounts of attention from the media. Also, changes in party control of the House and Senate will affect what issues are addressed on the floor, and the opportunities politicians from each party have to speak. In addition, the issues the ADA scores for its measure change over time. It could be, though I doubt it, that these variations are exogenous to their model and would not affect it, but my strong guess is that they would almost necessarily impact it, and as such, the comparisons across media sources are likely invalid due to the varying time spans.

They may have tried to control for these changes in their model, but it would be difficult. Indeed, they may have addressed many of my concerns (in my quick reading I didn't see them do so, but I could have missed it).


The Iconic Midwesterner said...

My quick take on your points:

#1) Is this a chicken-and-egg problem? In other words, do members of Congress cite studies because the media cite them, or do the media cite studies because Congress cites them? I'm not sure that causal relationship is important for determining what they want to determine.

#2) Yes, their study is not issue specific. I think they tried to get at this by stressing that think tanks tend to be consistently left, center, or right across their policy positions. That's why they cut RAND into two sections (Military and Domestic): otherwise it made RAND look sort of schizophrenic. To the degree that think tanks are NOT ideologically consistent, it could be a problem.

#3) I think you are right. Using ADA scores alone might exaggerate the left bias. But you would have thought it would also have skewed the right-wing outlets further RIGHT, not toward the center. Isn't that right?

#4) I agree with you. I think they wanted to remove most subjective evaluations from the process. I'd have to re-read the section on how they coded their data to know how to respond to this criticism.

#5) They do address this point, at least in passing. But they are not comparing media outlets with each other as such; they are deriving an ADA score based on whom each outlet references. So I'm unsure if that is too big a problem for them.

#6) I think you are right here. I don't think they explain the differing time frames involved.