Beware of Flawed Polling

Originally posted at The Moderate Voice

The contrast in quality between the news and opinion divisions of The Wall St. Journal never ceases to amaze me.

While the news reporting has won award after award and has become the definitive source of news for the American business and financial communities, the opinion pages are characterized by what some would consider mendacity, methodological sloppiness or a genuine lack of understanding of the subject matter.

The 13 March 2009 op-ed by Doug Schoen and Scott Rasmussen is just another indication of why one should beware of anything written in those sections. Both authors are pollsters, and should therefore know better, but they provided no full cross-tabs, sampling information, fielding dates or questionnaires for the polls they cite, and their analysis meanders back and forth between different polls. It is impossible to take any of the claims in the article seriously on their own merits.


In my first piece for The Moderate Voice, I discussed the problems with Mark Penn’s holding back data products from his clients. By withholding that information, he was basically telling his clients to take him at his word. Journalistic best practices likewise involve making the data products available, usually in the form of a filled-in questionnaire. In print, where real estate is at a premium, full cross-tabs or a filled-in questionnaire are infeasible, so all we get is sampling and fielding information. It usually looks like this: “The New York Times/Gallup poll called 1,000 adults evenly distributed across the country on the evenings of X, Y and Z. Respondents were evenly distributed for gender and race.” That is the bare minimum of information required to report on a poll. In web journalism, hyperlinking abolishes the real estate problem: you can link to PDFs of the data you’re citing. In fact, most people do. It’s responsible journalism.

This is not just a minor issue of protocol; it is of great import to any poll or analysis. The means of verifying a poll are its sampling and fielding. Without a carefully constructed sample, questionnaire and fielding plan, the data in a poll are absolutely meaningless: the meaning of a poll lies precisely in these details. Any credible pollster engaging in bona fide research knows this, which is why we spend so much time and effort making sure that all of these things are designed to elicit the right information to answer the questions being asked. Any discussion of a poll or its meaning will involve these details, and anyone who has ever worked at a polling firm in any capacity knows it.

They say, “Polling data show that Mr. Obama’s approval rating is dropping and is below where George W. Bush was in an analogous period in 2001. Rasmussen Reports data shows that Mr. Obama’s net presidential approval rating — which is calculated by subtracting the number who strongly disapprove from the number who strongly approve — is just six, his lowest rating to date”, to which I ask, “Which data?” They haven’t provided any information about the polls, made the full tabs available or behaved as any responsible pollster briefing a client would. We’re basically being told to trust that these data exist and indicate what they report, without being able to look and see for ourselves. To put it bluntly, we don’t know who was selected for the poll or what they were asked. All we have is their word that the polls they cite:

1. exist,
2. ask neutrally worded questions,
3. use a scientifically constructed representative sample of the population,
4. were fielded evenly,
5. and actually say what Schoen and Rasmussen would have us believe that they do.
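For what it’s worth, the “net approval” metric they invoke is nothing more than a subtraction. A minimal sketch, using hypothetical percentages (the op-ed supplies none of the underlying numbers):

```python
def net_approval(strongly_approve: float, strongly_disapprove: float) -> float:
    """Net approval as Schoen and Rasmussen define it:
    strongly approve minus strongly disapprove."""
    return strongly_approve - strongly_disapprove

# Hypothetical figures for illustration only: 34% strongly approve,
# 28% strongly disapprove would yield the "just six" they report.
print(net_approval(34, 28))  # 6
```

The calculation is trivial; the point is that without the underlying tabs, readers cannot check even this one subtraction.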

How many polls are they citing? Are they examining the shifts in a rigorous time-series analysis? We don’t know, because they won’t tell us. We don’t know when these polls were fielded, either, or whether these were all different polls with completely different questions asked of entirely different people, or different waves of the same poll.

Then, in a completely astonishing move, they shift from analyzing these elusive Rasmussen data to analyzing equally elusive Gallup data (which they likewise fail to describe or cite), without explaining how they merged and reconciled two completely distinct datasets into one analysis.

To put this into perspective, imagine a scientist writing up his findings from two completely distinct and unrelated experiments without doing any work to show that the two are in fact neither distinct nor unrelated, and then moving back and forth between them at will, as portions of each may give him the findings he wants. Such a scientist would be laughed out of academia, yet when social scientists do so, they are given prime journalistic real estate: the opinion pages of The Wall St. Journal.

In a nutshell, this is the problem. Some might suggest that Schoen and Rasmussen have engaged in such methodological sloppiness that one can only imagine they were looking for anything they could find to try to prop up a controversial thesis. Their seemingly magpie approach to data analysis would embarrass any entry-level assistant analyst at any polling firm — but because most people are unfamiliar with the mechanics of polling, it has earned them headlines on the opinion pages of the WSJ.

The Real Significance of The CBS AIG Poll

Originally posted at The Moderate Voice.

A CBS News poll fielded over the weekend shows a surprising finding: while a majority of Americans disapprove of AIG handing out bonuses, believe that the government should do more to recover them and specifically give Obama low marks for handling this issue, there is no appreciable change in his overall job approval, and there is an increase in overall confidence in his ability to handle the economy. Treasury Secretary Tim Geithner has slightly lower numbers.


q1 Do you approve or disapprove of the way Barack Obama is handling his job as President?

                     *** Party ID ***
              Total   Rep   Dem   Ind   Mar09a
                %      %     %     %      %
Approve         64     35    87    60     62
Disapprove      20     50     2    17     24
DK/NA           16     15    11    23     14

q2 Do you approve or disapprove of the way Barack Obama is handling the economy?

              Total   Rep   Dem   Ind   Mar09a
Approve         61     29    84    59     56
Disapprove      29     59     9    29     33
DK/NA           10     12     7    12     11

q3 Do you approve or disapprove of the way Congress is handling its job?

              Total   Rep   Dem   Ind   Mar09a
Approve         30     22    43    23     30
Disapprove      56     72    40    62     56
DK/NA           14      6    17    15     14

q4 blank

q5 blank

q6 How much confidence do you have in Treasury Secretary Tim Geithner’s ability to handle the nation’s financial crisis – a lot, some, not much, or none at all?

Weighted (227, 354, 368)

              Total   Rep   Dem   Ind
A lot           13      4    21    10
Some            41     31    48    41
Not much        20     31    12    21
None at all     15     28     6    15
DK/NA           11      6    13    13

The most striking feature of these data is the partisanship that they indicate: in each case, the respondents giving low marks to the subject of the question are the Republicans.

Democrats approve of Obama’s handling of the economy at 84%, Independents at 59%. Democrats express confidence in Geithner (“a lot” plus “some”) at 69%, Independents at 51%. Republican respondents give Obama 29% on the economy and Geithner 35%.

While Independents remain the largest portion of the sample at 39%, they are breaking strongly in support of the administration. If one assumes that Democrats are likely to support Obama in high numbers no matter the circumstances, and that Republicans are likely to oppose him in high numbers no matter the circumstances, the view is still good for the administration. The number of Democrats and Independents who approve of Obama’s handling of the economy is (298 Democrats) + (218 Independents) = 516, or 54% of the weighted sample, without any Republicans. The number of Republicans and Independents who disapprove of Obama’s handling of the economy is (134 Republicans) + (107 Independents) = 241, or 25%, without any Democrats.
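This back-of-the-envelope arithmetic can be reproduced from the topline’s weighted party-ID counts (227 Republicans, 354 Democrats, 368 Independents) and the q2 percentages. A quick sketch — note that rounding each party count before summing gives figures a point or two off mine, since I worked from slightly more precise weights:

```python
# Weighted party-ID counts from the CBS topline.
weighted_n = {"Rep": 227, "Dem": 354, "Ind": 368}
# q2 ("handling the economy") percentages by party.
approve = {"Rep": 29, "Dem": 84, "Ind": 59}
disapprove = {"Rep": 59, "Dem": 9, "Ind": 29}

total = sum(weighted_n.values())  # 949 weighted respondents

# Democrats and Independents who approve, with no Republicans counted.
dem_ind_approvers = sum(round(weighted_n[p] * approve[p] / 100) for p in ("Dem", "Ind"))
# Republicans and Independents who disapprove, with no Democrats counted.
rep_ind_disapprovers = sum(round(weighted_n[p] * disapprove[p] / 100) for p in ("Rep", "Ind"))

print(dem_ind_approvers, round(100 * dem_ind_approvers / total))
print(rep_ind_disapprovers, round(100 * rep_ind_disapprovers / total))
```

Either way the shares come out to 54% approving and 25% disapproving, which is the point: a majority of the sample backs the administration on the economy before a single Republican is counted.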

So, although Obama has low numbers amongst Republicans, if our hypothesis holds, these were voters he was never likely to retain anyway. The strong numbers amongst Independents and Democrats, the two largest blocs surveyed, show him to be in a good position. The increase in approval, even while shedding Republican support, shows that Obama is holding the voters he was supposed to hold anyway and winning over the Independents. In other words, he is winning the argument. The AIG bonuses issue, while dominating the news for a few days, did not drag down his numbers or his trending support.

There are two questions missing from this poll that would have made the Congressional approval question more useful: approval of the Democrats in Congress and approval of the Republicans in Congress. Given the strong power of the minority to block action in the Congress, it’s impossible to determine the roots of Congressional approval without seeing how voters view Democrats and Republicans separately.

The important takeaway from this poll is that while voters are angry about the AIG bonuses, and think that the government should try to recover the money, they are still confident in Obama and Geithner. The dropping Republican numbers are predictable, and given the small number of people identifying as Republicans, of no worry to Obama. After all, he’s winning and growing his share with Democrats and Independents.

Dheeraj Chand has worked in Democratic polling since 2007. Prior to that, he was a political journalist, Democratic field operative and a high school debate coach.

Time For Mark Penn To Go

Originally posted to The Moderate Voice – I’ll be guest writing there on polling and polls for a while.

I’m a little late to the fight between Democratic pollsters Stan Greenberg and Mark Penn, but I thought it important to finish reading Greenberg’s book before I offered an opinion.

To those of us who work in polling, data or analytics, Greenberg’s book offers few new revelations but plenty of examples of how far outside the mainstream of best practices and ethics Mark Penn’s work really is. It is also a perfect illustration of the problematic culture of Democratic consultants and its deleterious effects on Democratic campaigns.

But first, a little context.

Stan Greenberg is best known as the political scientist who first described and predicted the party switch of working class whites from Democrats to Republicans, and as the pollster who guided Bill Clinton to a narrow plurality victory in 1992. Since then, he has founded the Democratic polling behemoth Greenberg Quinlan Rosner Research. Several of his employees have gone on to become prestigious Democratic pollsters in their own right, such as Diane Feldman and Celinda Lake. (Full disclosure: I have worked for both.) Furthermore, Greenberg is generally considered a strong contributor to the Democratic analytics community through programs like Democracy Corps.

Mark Penn is best known as the former CEO of Burson-Marsteller, and the head of their political division, Penn Schoen Berland and Associates. His most prominent political work was as pollster and chief strategist for the 2008 Hillary Clinton campaign and a short stint as a pollster for Gore/Lieberman 2000. Prior to that, he was one of the pollsters, working with Dick Morris, who helped Bill Clinton in 1996, focusing on voter “segmentation” and devising policies that would appeal to those segments. Aside from his work for the Clintons, he mostly did (and has returned to) commercial work.

During the 2008 Democratic primary, Penn became a household name among political junkies and other high-information voters for the depth and breadth of his controversial statements and behaviours. In a widely disseminated piece in The Atlantic, it was reported that Penn would withhold data from key players in the campaign and insist that they just trust him, as well as advising a morally reprehensible strategy of racial division against Senator Obama. He also attempted to sideline anyone who disagreed with his methods and conclusions, and frequently overstepped his portfolio in an attempt to take undue credit for campaign victories. He is, to say the least, a highly polarising figure.

In his book, Greenberg describes a prior instance of Penn withholding data — Blair’s reelection campaign in Britain — and then goes further to describe how many believe Penn falsifies data to back up his own preferred readings. In other words, if the data don’t say what Penn wants them to say, he simply makes enough changes to get the outcomes he wants. Less politely, he lies to his clients and jeopardises their chances for success. Penn’s response was that Greenberg was out of the loop on Blair’s campaign, and so he would have had no way of knowing whether or not Penn was withholding data.

In turn, Penn’s defenders point out that, while he didn’t provide the actual data products (such as completed questionnaires, daily partial reports, and a full cross-tabular analysis of each poll as it was completed), he did provide written reports and analyses of what he’d derived from these polls, and that this contribution was useful. That defense seems reasonable until you consider that it is a fundamental tenet of public opinion research that the clients own the data.

The job of the pollster is to collect the data, create legible products from them, provide reports on the meaning of the data and then house them for the client in a secure database — but ultimately, the data belong to the client. It is absolutely unconscionable for a pollster to refuse to provide data products upon completion, alongside his analyses of them. To further refuse to provide data products upon explicit request from the client, as Penn did in the Blair, Gore, and Clinton campaigns, moves beyond eccentricity or bad habit into the realm of malpractice and malfeasance.

In conversations, Penn’s defenders point out that Greenberg comes from academia, with its conventions on long-term research, validation and fact-checking, which is why Greenberg is so insistent on rigorously produced data products being given to the client. In contrast, Penn comes from the commercial world, which moves at breakneck speed and often doesn’t have the time to create all the data products that an academic like Greenberg would deliver. That defense is both false and irrelevant.

It is false because Penn would have required those very products in order to produce the reports and briefings he delivered to his clients, and also because the world of commercial opinion research, especially at the higher dollar levels, is far more rigorous than the world of political research. In other words, if it’s true that Penn’s methodology is shaped by his time in the commercial world, then he should be inundating the client with data products and raw data.

Even if we grant Penn’s defenders their assumptions about the nature of commercial public opinion research, that defense is irrelevant because Penn chose to work in the political sphere, where Greenberg’s methods are the norm. Accordingly, the arguments of Penn’s supporters make about as much sense as saying that it would be okay for him to travel to the United Kingdom and drive on the right side of the road because he was raised in America.

Such conflicts might seem trivial to people outside the analytics community. Outsiders may wonder what such conflicts between rival consultants have to do with their daily lives and whether or not this is just another inside-the-Beltway tempest in a teapot. While understandable, such views miss the larger point: the money that we give candidates and parties we support might be wasted and spent badly, which means that our preferred outcome, the victory of our candidates, is jeopardised. The import of such waste is heightened when one considers the fact that many campaigns receive matching funds from government and hence tax dollars could go to malefactors.

I wish it were possible to say Penn is an isolated case, a single bad apple, in the field of Democratic consultants, but he’s not. While I have no direct knowledge of other Democratic pollsters behaving like Penn apparently did, Amy Sullivan’s 2005 article for The Washington Monthly argues that Democratic consultants have frequently abused the trust of their clients. In her article, Sullivan describes how personal loyalty, a cliquish mentality, and perverse structural incentives combined to have the same unsuccessful consultants get re-hired, move upward, and be able to charge more for worthless services, after every careless loss.

The fact that people like Penn continued to be hired is part of what led to low confidence in Democrats for so long. After all, if we couldn’t trust candidates to staff a campaign with competent people, how could we trust them to staff government? Things have changed since Sullivan’s essay: the 2006 midterms saw a lot of these people run out of the consulting world, leaving the bona fide consultants in place and bringing in a world of new people and techniques. The 2008 campaigns continued that trend, and Penn is now a dinosaur trying to defend his record.

Polling provides a valuable service to candidates and elected officials. Without polling, the concerns of people who don’t have the time to visit or call the candidate’s office would go unheard. A poll is a way for a candidate to reach people he would otherwise never have a chance to talk to and hear what they think is important. If a candidate doesn’t make his campaign about helping the people he wants to serve, he doesn’t deserve to win. Unfortunately, people like Mark Penn might very well be turning this valuable service into something nefarious, with claims of secret data validating their strategies. The purging of such malefactors should continue.