Statutory language requiring the use of “evidence-based medicine” (EBM) in programs from Medicaid to workers’ compensation will reportedly be presented to the Colorado General Assembly in 2004.
EBM will be defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual injured persons entitled to benefits.” The individual clinical expertise of a “health care provider” will be “integrated” with a “covered person’s choice of care” and “efficacious interventions to maximize the quality and quantity of life for individual covered persons.”
Critics say that in practice such language subordinates patient wishes to treatment decisions made by an unknown “health care provider” applying an undefined standard for “efficacious intervention.” A patient’s wishes, they argue, will carry only as much weight as the unnamed, unaccountable, all-powerful integrator decides to give them.
Those in favor of compulsory evidence-based guidelines say imposing their standards on physicians will lower overall health care costs by improving clinical treatment. In their view, waste and unnecessary expenditures will be controlled by requiring physicians to conform to guidelines developed by committees after a review of the latest findings from randomized controlled trials, which use statistical controls to help sort out imperfect human perceptions of causality.
Some proponents put so much faith in trials that they confidently dismiss evidence gathered by means such as case reports and clinical observation. “If you find that [a] study was not randomized,” the authors of one book on practicing and teaching EBM advise readers, “we’d suggest that you stop reading it and go on to the next article.”
Those critical of EBM programs argue such mandates are unnecessary, because physicians rapidly adopt new treatments once the evidence in their favor is clear and convincing. They also say requiring the use of guidelines developed under EBM protocols gives too much power to small committees of experts, who are human and may make mistakes.
Questionable Data
Expensive and difficult to design, randomized controlled trials are often “underpowered,” meaning their sample sizes are too small to detect any but very large differences between treatments. Underpowered trials are more likely to find no difference between old and new treatments, an outcome that makes it easier to justify some kinds of rationing decisions.
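To see what that means in numbers, consider a rough sketch (a hypothetical calculation, not drawn from any of the studies discussed here) of how often a two-arm trial of a given size would detect a real difference between treatments, assuming a standard two-sample test:

# Illustrative sketch, not from the article: approximate power of a
# two-arm trial under a standard normal-theory two-sample test. The
# sample size and standardized effect sizes below are hypothetical.
from scipy.stats import norm

def power_two_sample(n_per_arm, effect_size, alpha=0.05):
    # effect_size is the difference between treatments in units of the
    # common standard deviation; alpha is the two-sided significance level.
    z_alpha = norm.ppf(1 - alpha / 2)                 # critical value
    z_effect = effect_size * (n_per_arm / 2) ** 0.5   # noncentrality term
    return norm.cdf(z_effect - z_alpha) + norm.cdf(-z_effect - z_alpha)

# A modest difference (effect size 0.2) with 100 patients per arm is
# detected less than a third of the time ...
print(round(power_two_sample(100, 0.2), 2))   # ~0.29
# ... while only a very large difference (0.8) is reliably detected.
print(round(power_two_sample(100, 0.8), 2))   # ~1.00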
Weaknesses in study design, some of which are not well understood until the results have been reported and picked over by the scholarly community, also cause problems. And because trials take so long to carry out, the treatments they test and the guidelines developed from them are often outpaced by medical progress.
The seven-year Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), carried out by the National Heart, Lung, and Blood Institute at a cost of $125 million, is a case in point. Designed to compare how well four drugs prevented death in people with high blood pressure, it found no difference between the effects of a simple diuretic and a newer, more expensive ACE inhibitor.
Though the Institute subsequently published U.S. guidelines stating that first-line treatment for people with high blood pressure should be a diuretic, alternative interpretations of the ALLHAT study pointed out that roughly one-third of the sample was African-American, that African-Americans are known to be relatively unresponsive to ACE inhibitors, and that all-cause mortality for Caucasians was lower in the ACE inhibitor group than in the diuretic group. One British scholar noted that the antiquated drug regimes specified for ALLHAT would “more commonly [be] found in a pharmaceutical museum than in our patients.”
Other scholars noted ALLHAT suffered from inadequate power, lower than projected sample size, and poor compliance. Michael A. Weber, a hypertension specialist at New York’s Downstate Medical Center, also pointed out, “although not presented in the ALLHAT report, the relative cost of drugs somehow became part of its published conclusion.”
In the United Kingdom, where the National Institute for Clinical Excellence (NICE) promulgates evidence-based guidelines for the National Health Service, cost savings have not materialized. NICE itself estimates that it added £575 million in costs to the National Health Service budget in its first two-and-a-half years of operation. Centralized control of clinical treatment also introduces a new ethical dilemma for physicians.
The Double Lie
As a 2000 editorial in the British Medical Journal put it, evidence-based programs that force centralized control on clinical medicine live a double lie.
The first lie is that EBM is not about rationing, because it supposedly doesn’t count as rationing if you are denying ineffective interventions.
The second and related lie is to “give the impression that if the evidence supports a treatment then it’s made available, and if it doesn’t it isn’t. In other words, the whole messy problem of deciding which interventions to make available can be decided with some data and a computer.”
The editorial concludes that to believe this is to ignore the fact that risk matters. In reality, deciding what constitutes cost-effective treatment for a given patient is an ethical judgment, not a mere technical problem.
Linda Gorman is director of the Independence Institute’s Health Care Policy Center. Her email address is [email protected].