In Part One of this three-part series, we covered the macro history of managed care. Now we turn to a more in-depth discussion of the evolution of health care since World War II, which will give us a better understanding of why health care costs got out of control, and why fee-for-service medicine was viewed as the cost culprit.
Fee-for-Service the Norm
Fee-for-service health care was the norm after WWII. When you visited the doctor or got a blood test, you were expected to pay. Outpatient care was routinely excluded from health insurance policies, which generally covered only hospitalizations. If you wanted insurance to pay for your care, you had to be in a hospital bed. From the beginning, health insurance encouraged use of the hospital rather than a less expensive alternative, outpatient care.
Insurance paid the doctor when you were hospitalized, but generally did not pay for visits to the doctor’s office. The emphasis on high-cost health care was established.
Over time, physician outpatient services began to be covered by insurance, but doctors continued to be paid each time they provided a service. This suited the public just fine: plenty of services were used at little personal cost. Health care providers, for their part, were happy to be paid each time they provided a service.
Expansion of health care services was dramatic, and patients were ready to take advantage of these services. A popular axiom was, “a bed built is a bed filled.” The number of graduating physicians increased rapidly, as did the number of nurses, therapists, and other health care professionals.
For at least 25 years, this expansion of the supply of health care services had few detractors. The doctors did their thing and got paid. The hospitals did their thing and got paid, too. Patients still paid little out of their own pockets. Everyone was happy.
Government Steps In
In 1965, Medicare was passed by Congress and signed into law. Now every American over the age of 65 had almost a blank check for health care. The health care system was unprepared for the explosion in demand for health care services.
As is true of many government programs, the up-front cost estimates for Medicare were far too low. Costs quickly outstripped appropriations.
Soon the impact of Medicaid, the companion program for the poor, was being felt by both federal and state treasuries. The cost monster had arrived with a vengeance. Some years ago, Richard Lamm, then governor of Colorado, called Medicaid the Pac-Man of the state budget: health care just chewed up the dollars.
This period also saw the full impact of the federal health care research budget and of private companies' investment in new medical technologies and drugs. Unfortunately, research in health care did not seem to produce the labor-saving innovations then appearing in most other industries. Instead, it produced new ways to treat diseases that previously could not be treated effectively. New equipment required new types of technical personnel to run the machines and deliver patient treatments.
New health professional categories were created. More and more costs were added to the system. More care was being provided for more diseases. It looked like we would have a cure for everything.
Fee-for-Service Blamed
In the 1970s, the nation began to get serious about health care costs. A finger was pointed at fee-for-service: If you were a health care provider paid for every service you provided, the argument went, you would quite naturally provide more services.
It was noted at the time that Mississippi had half as many surgeons per capita as California, and that Mississippi surgeons performed only half as many surgeries per capita. Yet Mississippians plainly were not dying in the streets for lack of surgery. Perhaps too much surgery was being performed by too many surgeons in California?
The way the health care system was organized began to come under attack. Most physicians practiced alone or in small groups of two or three. They were paid fee-for-service and essentially had no practice restrictions. There was no incentive to save money or practice conservative medicine. Hospitals had no incentives to save either. They were paid for all their costs or paid what they charged. Physicians and hospitals had no incentive to work together to save money.
The incentives were all pushing for increased health care expenditures. That raised questions about why health care was organized this way. Was there not a better way?
More Harm than Good?
Compounding the confusion, a contrarian movement emerged in the 1970s that decried the over-emphasis on costly health care services. Ivan Illich, in his book Medical Nemesis, wrote, “Medicine unquestionably injures more than it cures . . . [it] stripped patients of the tools to take care of themselves.”
Perhaps he was anticipating research not yet done, such as the 1993 Harvard study that concluded 1 million injuries per year were caused by the health care system itself, and that 150,000 people died each year from problems and mistakes induced by that system. The Harvard researchers concluded that too much health care was being delivered, and that health care services themselves were causing major problems.
Ethical questions were being raised. How much care is too much? It became clear we could deliver much more care than we needed or could afford. As patients, we had become dependent on health services, turning to the health care system to handle our lifestyle-induced health problems, rather than taking responsibility for them ourselves.
Enter Managed Care
If necessity is the mother of invention, then it was time to invent something new.
As noted in Part One of this series, examples already existed of health care delivery systems that offered a chance to save money without compromising quality. These forerunners of HMOs were attractive alternatives. The so-called Jackson Hole Group convened, and with the help of Dr. Paul Ellwood, the concept of the health maintenance organization (HMO) was formally developed.
An HMO replaces fee-for-service health care. The sum of money available to an HMO is fixed, so there is no incentive to provide unneeded services. The HMO is committed, by contract, to provide necessary care for its patients and to keep them healthy.
For a monthly fee, a person or family becomes an HMO member and is assigned to a primary care physician. This physician becomes responsible for the total care of each patient, including the decision to refer a patient to a specialist.
Most HMOs have established procedures, sometimes called protocols, to be followed when patients develop serious problems. Frequently, these protocols emphasize a conservative approach to caring for the patient rather than aggressive testing and treatment regimens. They tend to reduce the use of high-tech, high-cost services.
The overall cost of patient care in an HMO is thus likely to be lower than in insurance plans, usually called indemnity plans, that give patients almost unlimited choice under fee-for-service medicine.
Part Three in this series will identify why managed care provides hope for the future in the war against increasing health care costs.
Stuart A. Wesbury Jr., Ph.D. is professor emeritus in the School of Health Administration and Policy at Arizona State University’s College of Business. With Joseph L. Bast and Richard C. Rue, Wesbury is the coauthor of Why We Spend Too Much on Health Care (1992-1993). He can be reached by email at [email protected].