The Truth About the DC Voucher Program

Published October 1, 2007

This June, the U.S. Department of Education’s first-year report on the impact of the nation’s first federally funded school voucher program, the DC Opportunity Scholarship Program (OSP), contained some very good news. Unfortunately, voucher supporters did not sufficiently understand it, and opponents mischaracterized it.

District of Columbia Del. Eleanor Holmes Norton, for example, declared, “Vouchers have received a failing grade.”

Media reports about the study seemed to take Norton’s point of view as gospel, but the truth may be far from it.

Created by Congress in 2004, the OSP is providing researchers an unprecedented opportunity to examine critical questions involving student achievement, safety, and overall parent and student satisfaction with a voucher program.

What distinguishes this evaluation from many previous efforts to determine vouchers’ impact is that students were admitted to the program through a random lottery. The result is a randomized experiment, giving researchers a rare chance to study school choice, and in particular a publicly funded voucher program, using the “gold standard” of research methods.

Hyperbolic Rhetoric

Like Norton, other voucher opponents seized on the findings of the first-year evaluation. For example, the interest group People for the American Way (PFAW) released a statement saying the “study lays to rest claims by voucher supporters that publicly funded school vouchers would improve academic achievement.”

Such hyperbolic rhetoric is thoroughly unjustified, as these pronouncements are at best premature and at worst a willful misreading of the study. On the other hand, voucher advocates haven’t done a very good job highlighting the successes found within the study.

Most have zeroed in on the report’s finding that parents given the opportunity to choose their child’s school were more satisfied than parents without choices. They have then rightly cautioned that after only one year it is too early to tell, suggesting a “wait and see” attitude toward achievement impacts.

Statistical Significance

At issue is the finding that at the end of the first year, using a voucher had “no statistically significant impact, positive or negative, on student reading or math achievement,” a statement found in the executive summary of the report and subsequently reprinted by The New York Times and The Washington Post.

Unfortunately, most observers are not familiar enough with statistical jargon to decipher the full report, and it is unlikely most took the time to read past the executive summary’s first few pages. If they had, a few important facts would have merited further discussion.

Reliable Gains

The first point worth noting is that the term “statistically significant” should not be thrown around so casually by reporters and interest groups.

In simple terms, it refers to a predetermined level of acceptable error. In this case, the federal government requires the level be set at 5 percent, which means the observed difference between voucher and traditional public school students must be large enough that it would arise by random chance no more than five times in 100.
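To make the convention concrete, here is a minimal sketch in Python, using entirely made-up score data, of the kind of two-group comparison the 5 percent standard governs; the group means, spreads, and sample sizes are illustrative assumptions, not figures from the study.

```python
# A minimal sketch, with made-up data, of a two-group significance test.
# This is NOT the study's actual analysis; all numbers are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical scale scores for voucher users and a comparison group.
voucher = rng.normal(loc=632, scale=40, size=200)
control = rng.normal(loc=625, scale=40, size=200)

t_stat, p_value = stats.ttest_ind(voucher, control, equal_var=False)

ALPHA = 0.05  # the federal 5 percent standard described above
print(f"p-value: {p_value:.3f}")
if p_value < ALPHA:
    print("Difference is statistically significant at the 5 percent level.")
else:
    print("Difference is not statistically significant at the 5 percent level.")
```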

It turns out that while the reading differences were not statistically significant, the gain voucher students showed in math had an error probability of 7 percent.

To put this in perspective, ask yourself the following: If the weather forecast said there was a 95 percent chance of rain, would you take your umbrella? What if it said there was a 93 percent chance? The difference between 5 and 7 percent does very little to reduce our confidence that a real, positive voucher effect is being observed, just as it does little to change our minds about grabbing the umbrella on the way out the door in the morning.

Unequal Treatment

It is wrong to treat the government’s 95 percent standard as an absolute threshold when the academic community commonly reports statistical findings using a lower threshold of 90 percent confidence, particularly when the research design is rigorous.
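As a simple illustration of how mechanical the distinction is, the sketch below checks the study’s reported 7 percent error probability for the math gains against both the federal 95 percent standard and the 90 percent threshold common in academic work:

```python
# Illustrative only: one p-value judged against two conventional thresholds.
p_value = 0.07  # the error probability reported for the math gains

for confidence, alpha in [(95, 0.05), (90, 0.10)]:
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"At the {confidence} percent standard (alpha = {alpha}): {verdict}")

# The same 7 percent figure misses the 5 percent bar but clears the 10 percent bar.
```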

To declare a program a failure because of such a small difference in confidence levels makes little sense. Unfortunately, this is an inherent danger in presenting complex statistical reports to policymakers and journalists.

Moreover, the study actually did find some positive effects for vouchers that more than cleared the statistical bar set by the government. When the authors looked at the two-thirds of the sample that entered the program the most prepared to learn, they found statistically significant (at the 97 percent confidence level) gains in math scores.

In other words, the two-thirds of the sample who, according to their higher baseline test scores, had the best chance to adjust to an accelerated learning environment outpaced their public school counterparts.

Rebound Effect

Such a finding is consistent with a multitude of studies that have shown it can take a while for children to rebound from the often negative effect of switching schools.

The fact that math gains were statistically significant, even at the federal government’s high standard, for the two-thirds of the sample that came into the program the most prepared is a very promising finding that ought to have voucher supporters dancing in the streets.

The folks at PFAW said they have “no expectation that a little thing like ‘facts’ will stand in the way of [voucher supporters’] anti-public school crusade.” Ironically, whether by a lack of understanding or willful ignorance, it appears they are guilty of their own charge.


Matthew Carr ([email protected]) is education policy director at the Buckeye Institute for Public Policy Solutions and a distinguished doctoral fellow at the University of Arkansas. Brent Riffel ([email protected]) is a deputy director in the Office for Education Policy, also at the University of Arkansas.


For more information …

The executive summary and full text of “Evaluation of the DC Opportunity Scholarship Program: Impacts After One Year,” issued in June 2007 by the U.S. Department of Education’s Institute of Education Sciences, are available through PolicyBot™, The Heartland Institute’s free online research database. Point your Web browser to http://www.policybot.org and search for documents #21959 (summary) and #21958 (full text).