Vouchers Improve Academic Outcomes

Published April 1, 2000

Since supporters of parental choice in education hold the moral high ground in the education reform debate, opponents have consistently attempted to shift the debate to secondary issues, such as cost and whether choice produces better outcomes.

Last July, for example, First Lady Hillary Rodham Clinton declared: “There is simply no evidence that vouchers improve student achievement.” But opponents cannot win this argument, either: There is too much evidence that vouchers improve student achievement.

Now that vouchers have been in use for several years in Milwaukee and Cleveland, it might seem a simple matter to compare the academic achievement of voucher students to that of students in public schools. Some of the earliest studies of Catholic schools did just that.

However, such comparisons must be done carefully to be valid. Student populations may differ in many respects, such as socioeconomic background, parental involvement, or innate ability of the students. Critics have long charged Catholic and other nonpublic schools with “skimming the cream,” drawing off the best and brightest students. If true, such “selection bias” would invalidate a simple comparison of academic outcomes.

Making the Analysis Objective

Education researchers must use sophisticated techniques to try to correct for differences in student populations. Regression analysis attempts to isolate the effects of school type by measuring the other factors that might contribute to academic performance, such as race, parents’ education, and family income. But even the most careful analysis can be questioned if the analyst failed to measure or account for other significant contributing factors.
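
As a rough illustration of what such an adjustment looks like in practice, here is a minimal sketch in Python. Nothing in it comes from the studies discussed in this article; the file name and column names (students.csv, test_score, choice_school, and so on) are hypothetical stand-ins.

    # A minimal sketch of the kind of regression described above: it
    # estimates the association between attending a choice school and
    # test scores while holding measured background factors constant.
    # The file name and all column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")  # one row per student (hypothetical data)

    model = smf.ols(
        "test_score ~ choice_school + family_income + parent_education + C(race)",
        data=df,
    ).fit()

    # The coefficient on choice_school is the estimated achievement
    # difference attributable to school type, after adjusting for the
    # covariates included in the formula.
    print(model.params["choice_school"])
    print(model.summary())

The limitation noted above shows up directly in this sketch: the estimate is only as credible as the list of covariates the analyst thought to include.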

In medicine, the “gold standard” for evaluating the effectiveness of a treatment is the randomized controlled trial, where patients are assigned at random to either a control or an experimental group. Rarely do social scientists have the opportunity to conduct randomized trials. But when it is possible, the “cream-skimming” criticism described above is eliminated, and the academic performance of students in alternative schools can be compared directly with that of students in public schools.
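
The contrast with the regression sketch above is worth making concrete. Under random assignment, no covariate adjustment is needed: a direct comparison of group averages estimates the effect. The sketch below, again hypothetical, assumes a file of lottery applicants with a won_lottery flag and a test_score column.

    # A minimal sketch of the analysis a lottery makes possible: because
    # assignment is random, the two groups are comparable on average, so
    # the difference in mean scores estimates the effect of the voucher.
    # The file name and column names are hypothetical.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("lottery_applicants.csv")

    winners = df.loc[df["won_lottery"] == 1, "test_score"]
    losers = df.loc[df["won_lottery"] == 0, "test_score"]

    effect = winners.mean() - losers.mean()
    t_stat, p_value = stats.ttest_ind(winners, losers, equal_var=False)

    print(f"Estimated effect of winning a voucher: {effect:.2f} points")
    print(f"p-value for the difference: {p_value:.3f}")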

Research Confirms Choice Is Better

Before vouchers, Catholic schools represented the main affordable education alternative for families of limited means. The substantial gap in test scores and graduation rates between Catholic school students and their public school counterparts attracted the attention of social scientists such as James Coleman, Derek Neal, and Kirk Johnson. Recognizing that selection bias was likely, their studies used regression analysis to limit the influence of family and socioeconomic differences.

Results supported the existence of a “Catholic school effect.” Catholic high school students gained knowledge at up to three times the rate of public school students (Coleman, 1982); 88 percent of Catholic students graduated, compared to 67 percent at public schools (Neal, 1997); and Catholic school students scored significantly higher on standardized tests, with the advantage growing from 6.5 percent in fourth grade to 8.2 percent in eighth grade (Johnson, 1999).

Local experiments with private vouchers added to the evidence on school choice outcomes. In 1997, the School Choice Scholarships Foundation offered half-tuition scholarships to low-income grade-school children in New York City to help them attend private schools. With 19 applicants for each available scholarship, the Foundation used a lottery to award the scholarships, creating a randomized trial.

Paul Peterson (1997) of Harvard University’s Program on Education Policy and Governance compared applicants who had received scholarships with applicants who hadn’t. He found the scholarship students averaged two percentile points higher on standardized tests after just one year, with fourth and fifth graders experiencing a six-point advantage.

The real debate among educational outcome researchers, however, revolves around publicly funded voucher programs. For example, when the current Milwaukee voucher program started six years ago, social scientists welcomed the opportunity to evaluate a randomized trial on a larger scale. However, for the first four years, a single researcher–John Witte, appointed by the Department of Public Instruction–controlled the Milwaukee data. His annual reports indicated that choice students showed no academic benefits over public school students.

But when the data finally were made public, independent analysis by Peterson and his colleagues showed significant gains in reading and math achievement for voucher students compared to applicants who weren’t awarded vouchers. Why the difference in results? Greene and colleagues (1996) found that Witte had used an inappropriate comparison group–all Milwaukee public school students–instead of a truly equivalent group with backgrounds and aptitudes similar to those of the students going into the program.

Nevertheless, voucher critics still use Witte’s initial flawed results to try to discredit vouchers–even though his results show that, at worst, vouchers produce the same outcomes as are produced by public schools.

In Cleveland, a similar controversy arose when the appointed analyst, Kim Metcalf, reported that voucher students were no better off than their public school peers after one year in the program. At the same time, Jay Greene and colleagues found enormous improvements in reading and math scores in selected Cleveland voucher schools. Metcalf’s second-year report painted a much brighter picture, as he found that voucher students were outperforming public school students by a significant margin.

While voucher supporters never had any doubt that school choice would improve outcomes, it’s reassuring when sound research backs up that confidence.


Outcomes in Publicly Funded Voucher Systems

Milwaukee: Eye of the Storm

  • John Witte–no significant difference in achievement relative to public schools
  • Greene, Peterson, and Du (1996)–modest gains in reading and math that rose to significant gains in years three and four.

Cleveland: Converging on Agreement?

  • Greene, Howell, and Peterson (1997)–studied HOPE schools and found huge gains in the first year (a 20 percent rise in reading scores and 30 percent in math).
  • Kim Metcalf (1999)–no gains in the first year, but the second-year report shows a 13 percent increase in language scores and a 10 percent increase in science.

Joy Kiviat, an economist, is research director for Citizens for Educational Freedom, a grassroots organization advocating parental freedom in education. This article is based on her presentation at CEF’s Fortieth Anniversary Celebration in St. Louis last October.