We are all different. So why should health care providers want to treat us the same?
Consider the many varieties of people throughout the world. Some are tall. Some are short. Some are fat, others are thin. Some are old, others are young. They come in a variety of different colors and from all parts of the globe. With all of this variety, why should we be surprised when their health care problems cannot always be treated the same way?
Doctors know that people are different from one another. Treat one person’s pneumonia with penicillin, and you save his life. Treat another the same way, and he goes into anaphylactic shock and dies. Every day, doctors in practice see the many different manifestations of disease.
That’s why, as Dr. William Osler noted many years ago, it is the patient who gets the disease, not the disease that gets the patient. It is the patient whom we treat, not the disease.
As science progresses toward individually targeted, patient-specific therapies, government health care policy is moving in the opposite direction, treating patients as groups or herds. Comparative effectiveness research (CER) is but one more attempt to do this. CER would have you believe all patients can be treated the same way without considering their unique differences. It ignores reality.
Government-Driven ‘Solution’
The federal government has dedicated significant money to CER. The American Recovery and Reinvestment Act of 2009 provided $1.1 billion in seed money to propel comparative effectiveness research, and the Patient Protection and Affordable Care Act, signed by President Obama in 2010, established the Patient-Centered Outcomes Research Institute, a public-private nonprofit organization that will set priorities and direct the funding of comparative effectiveness research.
To obtain FDA approval to market a drug, a manufacturer must show the drug is not only safe but also efficacious. That requirement should not be confused with comparative effectiveness, which attempts to compare various drugs and treatments with one another. CER requires making value judgments, and value judgments are subjective; they are not science.
Consider, for example, the Ocular Hypertension Treatment Study, which compared treatment with non-treatment of elevated pressure in the eye, a condition that can lead to significant loss of sight. The study involved more than 1,000 patients. Ten percent of the untreated patients went on to develop visual field loss, versus 5 percent of those who were treated.
Which approach is better? Your answer, a value judgment, depends on where you are sitting. If you are sitting in the patient’s chair, treatment cuts your risk of going blind by 50 percent. If you are sitting in the third-party payer’s seat, however, you could just as easily argue against covering treatment, because even without it 90 percent of patients never develop visual field loss, and you would prefer not to pay for a mere 5 percentage point reduction in risk. After all, they’re not your eyes.
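For readers who want the arithmetic behind those two framings, here is a rough sketch using the rounded rates above (the number-needed-to-treat figure is derived from those rounded rates purely for illustration; it is not reported in the study itself):

\[
\text{relative risk reduction} = \frac{0.10 - 0.05}{0.10} = 50\%, \qquad
\text{absolute risk reduction} = 0.10 - 0.05 = 5 \text{ percentage points},
\]
\[
\text{number needed to treat} \approx \frac{1}{0.05} = 20.
\]

The patient hears the 50 percent figure; the payer hears that roughly 20 people must be treated to prevent one case of visual field loss. Both readings come from the same data, which is exactly why the choice between them is a value judgment rather than a scientific finding.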
Evaluation Limited to Costs
Such a value judgment becomes even more difficult to make when the two drugs being evaluated have markedly different means of administration. A testosterone injection is painful, whereas testosterone gel can be painlessly applied to the skin daily—but the gel is more expensive. Some drugs have to be taken orally three times a day, whereas a more expensive one is only given once a day. How does one factor in these differences and others when comparing treatments?
It is one thing to compare a drug against a placebo and detect a therapeutic difference between them. It is quite another to compare two drugs that both have a positive effect on a disease. Drug-versus-placebo differences are much larger than drug-versus-drug differences, so in the latter case much, much larger studies are needed to detect a difference at all.
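A standard back-of-the-envelope power calculation illustrates the point. For a trial comparing two response rates \(p_1\) and \(p_2\), the approximate number of patients needed in each arm is given by the usual two-proportion formula, shown here only as an illustration of how the numbers scale:

\[
n \;\approx\; \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2 \left[\,p_1(1-p_1) + p_2(1-p_2)\,\right]}{(p_1 - p_2)^2},
\]

where the z terms are the standard normal quantiles for the chosen significance level and power. Because the required enrollment grows with the inverse square of the difference being detected, halving the expected difference between two active drugs roughly quadruples the number of patients needed. That is why head-to-head trials are so much larger and costlier than placebo-controlled ones.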
Such studies will cost millions of dollars to perform, however, and those costs will ultimately show up in higher drug prices. Moreover, when such studies are underpowered or poorly designed, they may find no difference between the drugs other than price even when a real difference exists.
Potential Dangers of Cost Emphasis
It is likely, in fact, that cost will become the primary basis on which a drug is recommended under CER. Pressure might well be brought to bear on the physician, as happened with the cholesterol-lowering drug Baycol, to use the cheaper drug because it has been “proven” to differ only in price.
Yet a lack of evidence of a difference is not evidence of a lack of difference. Baycol was ultimately removed from the market because of safety issues. Before its withdrawal, at least one insurer pressured physicians to use Baycol in a step-therapy approach, thereby institutionalizing the error even though safer alternatives were available.
Political Science, Not Medical
The term “central planning” has negative connotations. Therefore, central planners constantly invent newer, more acceptable terms. “Comparative effectiveness” is but one of these.
On the surface it sounds almost reasonable. But CER will be used by third-party payers to override the physician’s best medical judgment. It will force physicians to use cheaper treatments because a payer says the two are comparable. It will force them to provide the same therapy to every patient without considering individual differences.
‘Stakeholders,’ not scientists, will set the agenda. And that is the ultimate problem: Comparative effectiveness is not based on medical science, but rather on political science.
Dr. Richard Dolinar is a senior fellow of The Heartland Institute and a clinical endocrinologist in Phoenix, Arizona.