
The Burden of Uncertainty

by Emily Friedman

First published in Hospitals & Health Networks OnLine, April 6, 2010

We are seeing an enormous spurt of interest in "comparative effectiveness" research and other efforts to determine the best therapy for a given condition. Proponents say these initiatives will improve quality and save money; opponents say they will open the door to "cookbook medicine" and lowballing by payers. Who's right? And what are providers supposed to do while all this plays out?


Several years ago, I was visiting with a dear friend — a talented and highly respected surgeon. He told me about a patient whose fate still haunts him. The patient was an older man, an avid gardener who had practiced good health habits throughout his life. However, all those years in the garden had taken a toll on his knees, and he needed surgery to correct the problem.

The operation was a great success; the patient should have done very well and gotten back to gardening. Instead, everything started to go south. Despite every precaution having been taken, every intervention being timely, and every threat immediately discerned and addressed, the patient suffered complication after complication, culminating in the amputation of both of his legs. No one could fully explain what had happened.

When this man was being discharged from the hospital, my friend — distraught and guilt-stricken, racking his brain about what he could have done differently — went to say goodbye. He expected a tongue-lashing. Instead, from his wheelchair, the patient grabbed my friend's hands and said enthusiastically, "Thanks, Doc! You did a great job!" My friend probably would have felt better if the patient had heaved a grenade at him. He later concluded that because he had been so concerned and involved in the case and its aftermath, the patient knew he had done all he could. He told me, "There's something to be said for the quality of caring as well as the quality of care."

Comparative Effectiveness Research

That tale has been much on my mind lately, because I have been doing a good bit of writing and speaking about comparative effectiveness research (CER) and other initiatives designed to find out (what a concept!) which therapeutic approaches work best in health care.

Comparative effectiveness research is not rocket science, at least in theory; it simply moves quality-of-care efforts one step further, beyond "is it safe?" and "does it work?" to "which of these approaches is the most effective?" In other words, if a patient has chronic knee pain, should the knee be rested, exercised, treated with physical therapy or operated on — or should the patient just learn to live with it? The criterion is, or should be, what produces the best results.

This is a high-stakes game, obviously. If Drug A is found to be more effective for a given condition than Drug B, the makers of Drug B are not likely to be happy, especially if Drug A costs less and has fewer side effects — because at least some payers are likely to stop reimbursing for use of Drug B. And if bed rest or exercise proves to be just as effective as arthroscopic surgery for achy knees, a whole lot of orthopedic surgeons may be less than enchanted with the findings.

Of course, facing the demise of a lucrative product or procedure could make anyone nervous. But health care is rife with stories of supposedly safe and effective therapies that ended up being left by the wayside. Not too many people still practice bleeding with leeches. Gastric freezing as a treatment for many stomach ulcers was long ago replaced by a simple course of antibiotics, and most ulcer patients do not have to restrict their diets to milk and pudding for the rest of their lives. I had a prophylactic tonsillectomy when I was 7; my adenoids were also removed. Most children don't have to go through that now, and indeed, prevailing wisdom counsels against routine use of those procedures. Many of the health care uses to which radiation was put in the 1950s and 1960s proved to be not only ineffective, but also dangerous. Despite the best efforts of their manufacturers to cloud the issue, a variety of pharmaceutical products and medical devices have been found to produce less than optimal results.

So we learn, as well we should. And CER is really only the formalization of what responsible scientists and providers have been doing for centuries: trying to figure out what works best, and using that approach whenever possible.

However, support for this branch of research has much more momentum now than in the past. The federal economic stimulus bill that passed last year contained $1.1 billion in funding for CER, divided among three federal agencies. Many private entities, mostly insurers, engage in CER as well. Both the House and Senate health care "reform" bills (which may or may not exist by the time this is published) contain provisions for a federally funded center for this research. A freestanding bill seeking to establish the same sort of center has been introduced twice in the Senate in recent years, although it did not pass.

Why all the interest? It's pretty obvious. We waste a lot of money in health care, and there are those among us who think that it might be fun to find out what works and ditch the rest. Third-party payers are among those who are most interested in this endeavor, as one might imagine. Also, as quality-improvement efforts continue to pick up steam, this is a logical element of the campaign. Besides, whether it cramps some provider's or manufacturer's style or not, no patient should be subjected to a therapy that doesn't work, or that doesn't work as well as another therapy does.

The Effect on Payment

Most CER advocates stress that this research is not designed to influence payment decisions, but rather to provide "guidance" to patients, physicians and others in the determination of treatment choice. I think they're being just a teeny-weeny bit coy. If I were an insurer or a Medicare official, and I found out that a couple of Advil and a short walk every day would moderate knee pain as well as arthroscopy, with less risk to the patient, it would be irresponsible for me to continue to pay for scoping in such situations. In addition, quite frankly, it would save me a bundle. So, whatever CER fans say in public, I think it's a foregone conclusion that this research will profoundly influence future payment decisions. And in clear-cut cases, it probably should.

I'm using CER as an example, but there are many other initiatives afoot to try to find out what is really going on in health care — and whether, as has been postulated by many researchers, most of what we do has never been subjected to rigorous scientific inquiry, so we have no idea of whether it works, or for whom, or under what circumstances. In the absence of solid, clear-cut, reliable information, many physicians end up believing they must try everything — you know, throw it all against the wall and see what sticks. That's hardly a recipe for optimal patient care.

Needless to say, there are many providers who do not welcome this research. Some simply don't want to change their ways, just as physicians 150 years ago would often refuse to wash their hands between deliveries of newborns, despite the fact that antisepsis had been established as a life-saving approach and that patients of physicians who did wash their hands were far less likely to develop "childbed fever" than patients of those who did not.

Some, as I mentioned, fear for their future incomes and, possibly, the future of their specialty or subspecialty.

Still others are worried about clinical autonomy, about what they consider to be the right of the physician to practice medicine as "an art," not just a science. They raise the specter of "cookbook medicine," painting a picture of a regimented medical profession, forced by nonclinicians to practice medicine in only one way, regardless of circumstances or anomalies or doubts. Some say that many quality improvement efforts are simply a ploy by payers to save money.

And there have been instances when one does have to wonder, such as when a very large insurer announced that it would not pay for a certain prescription analgesic, because over-the-counter painkillers work just as well. On the one hand, from what I knew of the research, it was a justifiable decision; on the other hand, insurers don't have to pay for over-the-counter drugs, do they? And I bet they saved plenty by not paying for the more expensive prescription drug. Rarely is there only one motivation for any decision, especially when it comes to health policy and practice.

All of which gets us back to the gardener whose treatment, which by any existing standard was excellent, failed — but who accepted the dreadful result. And to the late Congressman John Murtha (D-Pa.), who underwent routine gallbladder surgery in January at a respected Navy hospital and ended up dead. And my friend Bruce, who developed a minor ear infection — which killed him. And so many others.

On the other hand, there are patients for whom everything seems to go wrong — the surgery was botched, the pharmacist missed a potential adverse drug interaction, the nurse didn't wash her hands, a terrible infection ensued — and they recover and go home, perfectly healthy.

The Cost of Not Knowing

The great British health policy analyst Rudolf Klein once wrote, "We have to recognize that much of medicine is about the management of uncertainty." Uncertainty on the part of physicians about which approach to use, uncertainty on the part of patients about which option to choose, uncertainty on the part of researchers about the most effective therapy. One recent example: The National Institutes of Health (NIH) received $400 million in economic stimulus money for CER in 2009. In February of this year, the NIH announced that a "landmark" clinical trial had compared two procedures that are used to prevent stroke in high-risk patients: carotid endarterectomy, a surgical procedure considered (according to the NIH) to be "the gold standard prevention treatment," and carotid artery stenting, which places a stent in the affected blood vessel.

The NIH's conclusion? The procedures are "equally safe and effective…. Physicians will now have more options in tailoring treatments for their patients at risk for stroke." Stenting produced "slightly" better results for patients younger than 70; surgery produced "slightly" better results for older patients. The average age of the patient study cohort was 69. The manufacturer of the stents paid for part of the cost of the research. Hmmm.

I'm not necessarily suggesting any sneaky business here; I'm just pointing out that a lot of money and a lot of research by respected professionals (the two study sites were the Mayo Clinic and the University of Medicine and Dentistry of New Jersey, after all) produced … uncertainty. Physicians who favor surgery can use it; physicians who favor stents can use them. Patients can choose either one. We still don't know.

As these research efforts move forward, I think there's going to be a lot more of this type of finding. The researchers may be satisfied, but there's little to guide individual clinicians other than keeping on top of research reports, along with gut instinct, knowing the patient well, and the patient's wishes. For every situation in which we know what to do — get a flu shot if you are in a high-risk group, immunize the kids, have that strangulated hernia repaired — there may well be one in which we will never figure out the best therapy. And in that case, it's quite likely that payers will choose to favor the less expensive one — and it would be difficult to argue with their choice.

The Individuality of Patients

Except for one thing: Patients aren't identical cardboard cutouts. They aren't all the same. In fact, none of them is exactly like anyone else (identical twins excepted). People whose physical characteristics appear to be the same react very differently to a given drug or treatment. People have allergies and sensitivities (I drank coffee until I was in my early 20s; then it began to make me violently ill, for some reason). People have comorbidities. People have weird metabolisms. Some people never get flu shots, yet don't catch the flu; others do, even if they get shots. And on and on.

The burden of uncertainty falls on all of us. As more research is conducted and more is learned about what should usually be done, we must be aware of issues such as atypical patients, anomalous subpopulations, and subtle factors that can make the scientifically optimal treatment the wrong treatment in some cases. At the same time, we cannot be seduced by recalcitrant physicians complaining about "cookbook medicine" and regimentation because they fear that they may have to change their ways or because their incomes are at risk. We must be very, very wary of the involvement in any phase of research by those with a financial stake, especially the manufacturers of pharmaceuticals and medical devices (given the bleak track record of their inappropriate influence on research). And if payers are going to tie reimbursement decisions to research findings, they had better do it for the right reasons. And the research on which they base those decisions had better be rock-solid.

As all of this plays out, hospital, medical group and health system leaders could be in a bit of a pickle. They may have made decisions to create new facilities, new programs and new ventures that might not match well with future research findings about the most effective therapies. If gene therapy and stem-cell initiatives prove to be better than oncology surgery, what then? If those pesky knee pains prove to be best treated with exercise, what about all the projected revenue from arthroscopy? If a breakthrough drug can prevent or cure Alzheimer's and other dementias, what's to be done with the memory-impairment unit that was just completed?

Although there are likely to be such disruptive situations, it is important to remember that if we insist on using the most effective therapy known, it will benefit us all — and, as a side benefit, it will keep predatory trial attorneys at bay.

Even though the concept is sometimes used as a smokescreen, I do believe that part of medicine is art; there is no substitute for a physician who has known and treated a patient for a long time and is sensitive to that patient's individuality, health status and preferences. But as Leonardo da Vinci so brilliantly demonstrated, art can be informed by science. There's a broad middle ground between straitjacketing physicians and allowing clinical care to be a free-for-all based on God knows what. Let's find that middle ground, and claim it for our clinicians and our patients.

Copyright ©2010 by Emily Friedman. All rights reserved.

Emily Friedman is an independent writer, speaker and health policy and ethics analyst based in Chicago. She is also a regular contributor to H&HN Weekly and a member of the Center for Healthcare Governance's Speakers Express service.



Hospitals & Health Networks welcomes your comment on this article. E-mail your comments to, fax them to H&HN Editor at (312) 422-4500, or mail them to Editor, Hospitals & Health Networks, Health Forum, One North Franklin, Chicago, IL 60606.
