This belief has persisted in the face of growing doubts raised by academic studies and government assessments. Among the troubling findings: frequent underuse of treatments for major diseases; an alarmingly high rate of serious medical errors; and--as a 2001 Institute of Medicine (IOM) report put it--"a chasm" dividing the health care we have from the health care we could have.
Still, many Americans argue, where's the evidence that anyone else does better--or as well? In 2004, however, the mantra of United States superiority got a dose of reality.
For the first time in more than 30 years of research on health quality measurement, a multinational team carried out a country-to-country study using a common set of standards, or "quality indicators." The collaborators, including HSPH John H. Foster Professor of Health Policy and Management Arnold Epstein, compared health care in five nations of the developed world--the U.S., Australia, Canada, England, and New Zealand.
The 21 indicators (winnowed from an initial 1,000 that were relevant to at least one country) included such metrics as survival rates for several types of cancer, organ transplants, and heart attacks; success in prevention of infectious diseases and suicides; and the percentage of the population vaccinated against certain diseases or screened for specific cancers.
The results, published in the May/June 2004 issue of Health Affairs, might surprise many Americans. Neither the U.S. nor any other country was crowned overall health quality champion. Each of the five ranked best on some indicators and worst on others. Each country had something to teach the rest about delivering excellent medical care.
The range in quality among the countries was relatively narrow, but still striking. The U.S. scored highest in five-year survival rates for breast cancer (85.5 percent, vs. 75 percent--the lowest rate--in England) and also in the rate of cervical cancer screening. But its survival rates for kidney and liver transplants were the lowest. And the U.S. was the only country in which death rates from asthma were climbing.
Epstein, chair of the Department of Health Policy and Management at HSPH, was the report's senior author. He calls the five-country study a significant achievement, despite its limitations (many major causes of death and disability, such as diabetes, were not measured).
"Though these data are by no means comprehensive, this is an important demonstration that we can make meaningful comparisons of quality of care internationally," says Epstein. "It opens opportunities to someday have annual or biannual reports across many countries" that can reveal quality gaps. "Then you can drill down into these areas and look for explanations and ways to improve." Which, after all, is the ultimate goal.
The report raised conundrums aplenty. No particular financing system or medical care delivery structure--nationalized health care, or the U.S. patchwork of fee-for-service providers and HMOs--was consistently associated with high quality.
Nor did the amount of money spent correlate with quality rankings. This was particularly vexing for the U.S., which far outstrips the other nations with its $4,600 per person annual expenditure--13.9 percent of Gross National Product, compared with, for example, 7.6 percent in England. "It is difficult to conclude from these data that [the U.S.] is getting good value for its medical care dollar," the report's authors observed.
Demands for health quality monitoring have surged in the past decade. Employers, turning to managed care for relief from the upward cost spiral, want to know what they are getting for their money. The highlighting of medical errors has fueled consumer interest in quality report cards for providers and health care institutions. Quality measurement provides the feedback for correcting lapses.
Professional medical groups are going into action, aware that if they don't attend to quality issues, someone else--like insurers or the federal government--will do it for them. The American Society of Clinical Oncology (ASCO), responding to a 1999 IOM report that found widespread underuse of the best cancer care, quickly launched the National Initiative on Cancer Care Quality in collaboration with other professional societies to design a prototype of a national quality monitoring system.
As a first step, HSPH researchers led by Eric Schneider and Epstein, collaborating with scientists from RAND Health in California, gathered quality information on the treatment of breast and colorectal cancers in five U.S. metropolitan areas: Atlanta, Cleveland, Kansas City, Houston, and Los Angeles. To compile a list of patients with a cancer diagnosis, the researchers tapped hospital cancer registries participating in the American College of Surgeons' National Cancer Data Base; sought patient records (often scattered among several facilities); asked physicians' approval to interview cancer patients; and conducted telephone interviews of the willing survivors. Of 3,711 eligible survivors, they were able to interview 2,366, of whom 1,763 had records complete enough for study.
The quality measure was the extent to which a patient's care adhered to a set of established best practices--43 for breast cancer, 47 for colon cancer, and 18 that applied to both diseases.
For example: in 84 percent of cases, patients undergoing surgery for colon cancer had an evaluation of their lymph nodes (guidelines call for it in all such patients). Only 31 percent of women having a mastectomy were informed about breast-conservation surgery, according to their medical records (all should have been informed).
Overall, the quality of cancer care was quite good--surprisingly so, considering the critical IOM report five years before, Schneider reported at the annual ASCO meeting in June. While some of the study's limitations could have resulted in a rosier picture than is warranted, Schneider speculated that cancer care may be delivered more effectively than other types of medical care, in part because it is frequently delivered by teams. "Doctors are looking over each other's shoulders more routinely now," he said.
The study also yielded valuable--if daunting--lessons for doing this kind of research on a national, ongoing scale. The investigators encountered delays and barriers at almost every turn in the road. Hospital registries were slow to approve participation; consent rules barred researchers' access to records of deceased patients; some doctors wouldn't give permission to contact patients; and financially pressed hospitals weren't always willing to provide medical charts.
Moreover, quality measurement is expensive: the ASCO study cost more than $4 million, and a new, more comprehensive cancer care assessment by the National Cancer Institute is budgeted at ten times that. At this early point, it's hard to imagine translating these prototype studies into a nationwide system that would be practical and affordable.
"There's no way these studies can be done on a long-term basis," says Jane Weeks, a professor in Epstein's department and an oncologist at Dana-Farber Cancer Institute. "It's taken the smartest people in the field working for years to get this far."
Weeks adds, "The fundamental need here is a sustainable system to monitor and help institutions improve over time. I think this will come, but we're not there yet."
Richard Saltus has been a science and health reporter for the Boston Globe, the San Francisco Examiner, and the Associated Press.
illustration: Anne Hubbard