CNU survey part 1

The CNU Center for Public Policy released part 1 of its recent survey of Virginia voters. The poll of 700 registered voters was conducted January 8-10 and has a margin of error of +/- 3.7%.

The poll found:

  • Virginia voters are more positive than negative about the state’s overall direction, with 48% saying that the Commonwealth is headed in the right direction and 27% saying it is headed in the wrong direction. In the October poll, these numbers were 50% and 24%, respectively. Democratic voters are more positive, with 60% agreeing that the state is headed in the right direction compared to 44% of Republicans.
  • Voters have positive views of the job Kaine is doing. Sixty percent of the voters rate Governor Tim Kaine positively, up from 55% in October. However, the number of voters who rate him negatively has also increased – from 25% in October to 29% in this poll. Seventy-nine percent of Democratic voters rate Kaine positively and only 14% rate him negatively. Republicans are almost evenly split in their assessment of the governor: 43% rate him positively while 42% rate him negatively.
  • The General Assembly gets positive ratings from 45% of the voters, up from 34% in October, while receiving negative ratings of 31%, down from 42% in October. Interestingly enough, only 41% of Democrats rate the GA positively while 51% of the Republican voters do. How these numbers change after the current session will be of great interest to me.
  • When it comes to local government, 55% of Virginia voters surveyed feel positive about it while 30% feel negative. Northern VA voters are the most positive about their local government (58%), followed by Richmond (57%), Roanoke/SW (55%), and Valley/Charlottesville (51%). Hampton Roads voters have the least positive response – only 50% – and by far the highest negative assessment – 38%. My take on this is that the Hampton Roads Transportation Authority, which was approved by most of the local governments over the wishes of many, is the reason.
  • Voters want the General Assembly to compromise and get things done, with 68% preferring compromise and only 25% saying they prefer GA members to stick to their beliefs. By party, 72% of Democrats and 64% of Republicans prefer compromise over gridlock. I hope the House and Senate leadership, in particular, are paying attention to this.
  • Voters’ top three priorities:
    • Change the law to stop people with a history of mental illness from purchasing guns – cited by 75% of those surveyed as the highest priority (Democrats 82%, Republicans 75%).
    • Require gun purchasers at gun shows to undergo the same background check required for guns purchased at gun shops – cited by 68% as the highest priority (Democrats 80%, Republicans 66%). Too bad this survey wasn’t released before they voted to kill HB745 (which incorporated HB592).
    • Crack down on businesses that employ illegal immigrants – cited by 54% as the highest priority (Democrats 41%, Republicans 68%).

Probably the most interesting part of the survey to me was the voters’ priorities. After all of the hype about abuser fees, they ranked sixth on the list. I wonder how much this had to do with the promises made during the last election cycle that the fees would be either modified to include out-of-state drivers or repealed. Banning smoking ranked eighth, just above payday lending reform. Sadly, changing the way legislative districts are drawn ranked eleventh – dead last – with only 15% of those surveyed (20% of Democrats and 11% of Republicans) identifying it as the highest priority. The Virginia Redistricting Coalition has some work to do in raising awareness of this issue.

Part 2 of this poll will be released Tuesday and will provide information on the state budget, immigration and redistricting.

UPDATE: The survey can be found on the web here.

6 thoughts on “CNU survey part 1”

  1. As a former survey designer, when reading other people’s polls, I always ask how the sample was taken. That boilerplate statement that the poll has “a margin of error of +/- 3.7%” is easy to slap on the cover letter, but where is the data about how the sample was gathered to back that up? How many of the math-averse members of the journalist community have the capability to examine the sampling procedure and sample size to determine if this was truly a random sample?

    For example, if the poll was conducted by phone, how does the poll designer compensate for the fact that many citizens screen out unfamiliar callers? The last poll I ran that gathered responses strictly by phone was terribly skewed by the fact that many of those in the upper income brackets would never answer the phone or would refuse to participate; consequently, our sample was heavily populated by the poor and unemployed, who, at the time, had yet to join the trend of using caller ID and call-blocking features. Merely selecting the phone numbers at random does not correct for the lack of upper-income, well-educated citizens in the sample.

    I do not say that the CNU and other polls are invalid. I only wish to point out that before the results are given credibility, journalists must be able to explain how the data was gathered and how the poll designer compensated for factors that skew the results.

  2. As a former survey designer, I’m sure you’re aware that you can go to the CNU Center for Public Policy’s website and find the news release for the poll, and that these news releases always explain the methodology of the survey and provide contact information for additional questions about methodology, results and interview requests in the final paragraph.

  3. I forgot to include a link to the online press release – I have updated the post to reflect that. I’m not one of the members of the “math-averse” community, though, and I have, in fact, looked at the survey methodology as well as the cross-tabs on the data.

  4. All the final paragraph says with respect to sources of data is: “…In addition to sampling error, the other potential sources of error include non-response, question wording, and interviewer error…”

    This does not explain how the surveyors (pollsters) compensated for the fact that those at the higher income and education levels screen out calls from “phone polls,” and have been doing so in such increasing numbers that the validity of phone-only polling must be questioned.

    It is no longer adequate for a polling company to pledge that the randomized sample provides 95% or better accuracy, since the population answering these random calls is skewed toward the lower-income and less-educated segments of the population.

    I have the utmost respect for CNU and especially for Dr. Kidd. All I wish to point out here is that the press and others who are presented with polling results should be aware that the results may be skewed, given a lack of compensation for the socio-economic clustering brought about by the use of phone-only polling.

  5. Vivian was nice enough to let me know about this discussion, so I thought I would respond to your comment because it goes to the heart of a serious question concerning modern telephone survey research. The bottom line is that as we go forward with telephonic surveys we will have to find ways to deal with the diminishing numbers of households who have “land line” telephones. The discipline has not yet settled on how best to do that…

    The primary attraction of telephone surveying is that it enables data to be collected cheaply and quickly (having said that, a state-wide survey is still very expensive). The primary potential problem, however, is with obtaining an adequately representative sample of the general population one purports to represent. This has to do with sampling, and how one decides to try to obtain a representative probability sample. There are several problems with obtaining a representative probability sample, but the two biggest are that a certain portion of the population does not have a telephone in their home (and this portion is getting larger as people go to cell phones and drop their land lines) and that a portion of the population has their numbers “blocked” from access.

    How do we do our survey? For each survey, we purchase a telephone list from a national vendor who specializes in up-to-date lists of telephone numbers drawn from various sources such as nationally published telephone listings, marketing databases, etc. These lists are entered into a Computer-Aided Telephone Interview (CATI) system. We use a random sample management module, which means that each number in the CATI system has the same chance of being drawn for the sample as any other number in the system. Regional quotas are employed to ensure adequate coverage of the various regions of Virginia. Finally, reported frequencies and crosstabs are weighted using a demographic profile (age, race, sex) to provide a sample that best represents the targeted population from which the sample is drawn. The proportions comprising the demographic profile are drawn from census estimates of the population.
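    The weighting step described above can be illustrated with a small sketch. This is not CNU’s actual procedure, and all of the group labels and percentages below are hypothetical; it only shows the general post-stratification idea: each group’s weight is its census share divided by its raw-sample share, so under-represented groups count for more and over-represented groups count for less.

    ```python
    # Hypothetical post-stratification weighting sketch.
    # weight(group) = census share of group / raw-sample share of group
    census_share = {"18-34": 0.30, "35-54": 0.38, "55+": 0.32}  # assumed census profile
    sample_share = {"18-34": 0.18, "35-54": 0.40, "55+": 0.42}  # assumed raw sample profile

    weights = {g: census_share[g] / sample_share[g] for g in census_share}

    # Each respondent's answers are then multiplied by their group's weight,
    # boosting under-represented groups (weight > 1) and shrinking
    # over-represented ones (weight < 1).
    for group, w in sorted(weights.items()):
        print(f"{group}: weight {w:.2f}")
    ```

    In this made-up example, the under-sampled 18-34 group gets a weight above 1 while the over-sampled 55+ group gets a weight below 1, which is the same correction, in miniature, that weighting by age, race and sex accomplishes.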

    Sampling error, or margin of error, simply refers to the percentage by which our survey results are likely to differ from the actual population values due to the size of the sample drawn. We seek a margin of error of less than 5%, which is why we draw a sample of 700. Our margin of error would be zero if we were able to interview every person in the population (all people in Virginia). As we note, our margin of error is a statistical estimate based upon a mathematical theorem, and there are other possible sources of error in survey research, such as a bad sample design.
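    For readers who want to check the arithmetic: the standard worst-case margin of error for a simple random sample at 95% confidence is z * sqrt(p(1-p)/n) with z = 1.96 and p = 0.5, and plugging in n = 700 reproduces the +/- 3.7% figure quoted for this poll. (This assumes a simple random sample; CNU’s weighting and quotas would change the effective margin somewhat.)

    ```python
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Worst-case margin of error at 95% confidence for a
        simple random sample of size n (p=0.5 maximizes p*(1-p))."""
        return z * math.sqrt(p * (1 - p) / n)

    moe = margin_of_error(700)
    print(f"+/- {moe * 100:.1f}%")  # prints "+/- 3.7%"
    ```

    The same formula shows why a sample of roughly 1,000 is the common target for a +/- 3% poll, and why halving the margin of error requires quadrupling the sample.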

  6. Thanks for the additional information about how you help compensate for call blockers and those without traditional phones, Dr. Kidd.

    This is the sort of information-based discussion that I would like to see on all blogs.

    Economical ways to compensate for these and other skewing factors have been on my mind for a while. It would not surprise me if some of the larger research firms already have developed math models to compensate for the screening and other non-participation factors.

    Since CNU is near the DoD Modeling and Simulation Center, I wonder if someone over there could help develop some compensation models that could adjust for the non-participant factors for future surveys?

Comments are closed.