Schoolwide Programs Cheat Disadvantaged Kids

Published January 1, 2003

Despite research claims of dramatic improvements among disadvantaged children who took part in a schoolwide reform program, independent examination of actual test results shows no such improvement, with disadvantaged children performing poorly and often worse than their peers.

Since the research was used as the basis for shifting Title I funds to schoolwide intervention programs and away from individual disadvantaged children, this represents a major misdirection of school reform efforts.


Title I is the primary federal effort to reduce the learning gap and equalize opportunity by boosting the performance of disadvantaged children. Traditionally, these funds were given to schools to provide additional assistance directly to low-income students.

But Congress re-targeted Title I funding after top researchers reported that a program geared to comprehensively improving the school as a whole dramatically boosted test scores for the disadvantaged.

The new legislation allowed Title I funds to be used to help all students in a school, even where the disadvantaged are a minority of the student body. The U.S. Department of Education (DoEd) backed the shift in a variety of ways, from funding the dissemination of the research to pushing schools and states to adopt schoolwide programs over traditional interventions.

There was only one problem. Subsequent independent investigations revealed that all the key research on schoolwide improvement, which was produced by the co-developers and associates of the leading schoolwide improvement model, Success for All, was not scientifically valid. The disadvantaged students in the supposedly successful schools were in reality performing very poorly.

These findings have significant implications for federal reform efforts:

  • First, it appears much of the $100 billion in federal and state reform funds was misdirected toward an intervention approach that provides no help to disadvantaged children; in fact, learning gaps nationally widened during the 1990s, despite major increases in reform funding.
  • Second, in Title I schools, funds intended to help the disadvantaged disproportionately help advantaged students.
  • Third, DoEd did not perform due diligence over the research it funded, the practices it validated, and the programs it advocated.

Responding to these concerns is critical, since the No Child Left Behind legislation mandates the use of scientifically validated practices, and a federal “What Works Clearinghouse” is being established to define such practices.

How could Congress and the whole education profession change course based on results from questionable research? The short answer: the results were never independently verified. A small group of elite researchers with interrelated interests and potential conflicts of interest conducted and disseminated scientifically invalid research that seriously misdirected education reform efforts. The real victims of this misdirection are the needy and disadvantaged children the reforms are supposed to help.

Here’s how it happened.

Influential Studies

The original studies on schoolwide reform were done by Robert Slavin and Nancy Madden, who reported the success in Baltimore of the schoolwide Success for All program they had co-developed. Since Slavin is a highly regarded researcher and the studies were published in prestigious research journals, the studies had credibility. Then another researcher reported dramatic gains for Success for All schools in Memphis.

Those reported successes led Congress to consider changing the Title I law to favor the use of the schoolwide strategy. DoEd was asked to study the effectiveness of traditional versus schoolwide interventions. The study concluded schoolwide models were better and that Success for All was the best schoolwide model. The two researchers who conducted the DoEd study were in the center at Johns Hopkins University where Success for All had been developed; the wife of one of the researchers is an officer in the Success for All Foundation.

One of the two researchers from the DoEd study then headed another study commissioned by the five major national teacher and administrator professional organizations to determine which interventions were research-based. That study concluded there were only three research-based programs, and that the most validated was Success for All.

Slavin, co-developer of Success for All, also was co-director of a center at Johns Hopkins University, called the Center for Research on the Education of Students Placed At-Risk (CRESPAR), the sole national federally funded center for disseminating research about disadvantaged students. CRESPAR conducted another evaluation that concluded Success for All was the most research-validated of the schoolwide intervention programs.

Preponderance of Evidence

Convinced by the preponderance of evidence produced by this small group of researchers, Congress changed Title I to favor, though not mandate, the schoolwide approach. DoEd went further, actively promoting the schoolwide approach and discouraging the use of Title I funds to serve disadvantaged students directly.

By the late 1990s, virtually all of the discretionary new grants from DoEd and its Office of Educational Research and Improvement (OERI) went to researching, disseminating, and developing schoolwide reform models, with approximately one-third (roughly $62 million over a five-year period) going to the Johns Hopkins centers where Success for All was developed and to the Success for All Foundation.

Other organizations, including Edison Schools, adopted the Success for All program. The superintendent of Memphis was named Superintendent of the Year for the district's "gains" from the program, which were documented by a professor who had an interest in the distribution of the program.

In the Abbott school finance ruling, the New Jersey Supreme Court mandated the program for the state’s 277 high-poverty elementary schools, based solely on the advice of a University of Wisconsin professor who was a consultant to New American Schools, whose most prominent model was Success for All, according to Forbes magazine.

Unraveling Story

But the supposed research-based success story began to unravel when independent researchers examined the "success." When Richard Venezky of the University of Delaware looked at student records in Baltimore, he was "shocked and disappointed by how poorly the Success for All students had actually done." Projecting from Venezky's data, after five years in Success for All, students reached sixth grade reading the equivalent of three to four years below grade level.

Another independent analysis by the present author showed the Success for All schools in Memphis also had not gained in achievement as reported but had actually declined during 1998-99. More than half of them, some 24 schools, were ranked among the lowest 77 in reading in the state. When district staff confirmed the lack of progress in Success for All and other schoolwide schools, the new superintendent threw out all schoolwide reform programs.

Ineffective, but Still Funded

Additional independent research studies continued to surface, now involving more than 250 Success for All schools, showing the program to be ineffective or less effective than the districts' own initiatives. An evaluation of the comprehensive schoolwide strategy by the RAND Corporation also found no effects. An analysis by the present author showed how the methodology demonstrating the apparent success of the schoolwide program was scientifically invalid.

In spite of these findings, DoEd re-funded CRESPAR without competition and has provided Slavin with another multi-million dollar grant to carry out a further self-evaluation of his program. As a result, the same scientifically invalid research continues to be disseminated at taxpayer expense as evidence of “success.” For example, a November 10, 2002 article in The New York Times Sunday Magazine discussed a new study by a University of Wisconsin professor who once again concluded Success for All was one of only three research-validated intervention programs.

This latest study was funded by CRESPAR, and its author was until recently an employee of CRESPAR. In addition, the researcher who conducted two of the studies cited earlier was recently named project director of the new federal "What Works Clearinghouse."

Dr. Stanley Pogrow is a professor of education at the University of Arizona, where he specializes in school reform and improvement policy and practice. He is the developer of the HOTS (Higher Order Thinking Skills) program for Title I and LD students, and the pre-Algebra Supermath curriculum. His email address is [email protected].

For more information …

See Stanley Pogrow, “Success for All Does Not Produce Success for Students,” Phi Delta Kappan, September 2000, pages 67-80; available on the Internet at

Debra Viadero, “Whole-School Projects Show Mixed Results,” Education Week, November 7, 2001; available on the Internet at