ECRI Institute's 15th Annual Conference Report: Key Questions and Issues

Comparative Effectiveness of Health Interventions: Strategies to Change Policy and Practice
By Andrew Holtz, MPH

Executive Summary
Efforts to compare the effectiveness of healthcare interventions are as old as medicine itself, but with more clinical options than ever before and new products and services continually flowing into the market, questions about effectiveness, quality, and value have reached unparalleled intensity. Pending legislation and the platforms of several leading presidential candidates support creating a national center for comparative effectiveness research and dissemination.

Comparative effectiveness research may help mitigate increases in overall healthcare spending and, even more important, show how to get the most health value out of our investments. If comparative effectiveness research is not expanded and communicated, clinicians and consumers will face an ever-growing array of choices without adequate guidance.

On October 17 and 18, 2007, ECRI Institute’s 15th Annual Conference, Comparative Effectiveness of Health Interventions: Strategies to Change Policy and Practice, brought together more than 200 representatives from state and federal government agencies, device manufacturers, pharmaceutical companies, provider organizations, and health plans and insurers at the offices of Arnold & Porter, LLP, in Washington, DC, for a dialogue with 30 of the nation’s leading health policy experts and stakeholders. The discussion explored the opportunities and challenges of applying comparative effectiveness approaches to healthcare interventions.

If more comparative effectiveness work is performed, what is achievable? Pulling together available evidence and commissioning more trials would reduce the uncertainty facing those making clinical decisions and allocating resources. There will be ongoing tension between the demand for timely results and the desire for definitive findings. Redefining research questions to focus on conditions and populations, rather than just interventions, could make comparative effectiveness information more useful. Current efforts to understand how healthcare interventions work in common clinical applications offer models for broader implementation.

Would the fruits of comparative effectiveness research be used? There are many examples of research results not being applied by clinicians or not accepted by consumers, as well as political backlash against federal agencies that recommended unpopular changes in clinical practice. Proponents of comparative effectiveness research say objections and counter-arguments should be anticipated. Comparative effectiveness guidance will be resisted if it is seen as primarily a tool to control healthcare costs. On the other hand, it is more likely to be embraced if the information helps patients and providers raise the quality of healthcare and reap greater value. Nevertheless, the need to manage rising costs currently drives important comparative effectiveness efforts. For example, more than a dozen states use reports from the Drug Effectiveness Review Project and other sources to help them maximize healthcare services while living with balanced-budget requirements. The Centers for Medicare and Medicaid Services (CMS), as well as private benefit plan managers, are attempting to use comparative effectiveness results to guide their coverage decisions. Professional societies are using the information to educate providers, both voluntarily and as part of board-certification requirements. Consumers Union’s “Best Buy Drugs” reports and the National Breast Cancer Coalition’s patient advocacy training programs are examples of attempts to present comparative effectiveness information to general audiences.

Who will do comparative effectiveness research in a way that best ensures the results will be not only credible but applied? There is growing discussion of establishing a new national comparative effectiveness research entity that could build on, fund, and coordinate expanded efforts. While the structure of such an entity has yet to be defined, commonly cited models include the Institute of Medicine and the Federal Reserve. The entity would need to establish a reputation for unbiased analysis, and it would need to resist the kind of political pressure that hobbled some earlier institutions. In general, recommendations call for such an institute to be advisory, with the U.S. Food and Drug Administration (FDA), CMS, and other public and private institutions continuing to make approval and coverage decisions.

Throughout the conference, the discussion often touched on two key dialectics: whether cost-effectiveness should be an integral component of comparative effectiveness analyses and whether greater attention to comparative effectiveness would impair innovation.

The Need
“I can’t think of anything that we are discussing here in Washington or around the country that is more important than the issue of the need to move aggressively forward to develop the capacity of this country to do effective comparative research,” said Stuart Altman, Ph.D., Professor of National Health Policy, Heller Graduate School of Social Policy and Management, Brandeis University. Altman argued that the nation cannot afford healthcare that is not supported by evidence of sufficient benefit. “Should public funds be used to pay for services of limited or no value?” asked Altman. “And unfortunately we are paying for a lot of them.” He said that inevitable choices will be made either arbitrarily or with guidance from comparative effectiveness science.

The Director of the Congressional Budget Office, Peter Orszag, Ph.D., said rising healthcare spending is turning his organization into the “Congressional Health Office.” But he criticized the widespread belief that this trend is the inexorable result of an aging population. He noted that if Medicare and Medicaid held spending level for each beneficiary, overall budgets would rise only slowly. “The rate at which costs per beneficiary grow is much more important than the fact that we are going to have more beneficiaries in the two programs. And yet the vast majority of the discussion that occurs in the Washington Post, just to pick an easy target, has the relative mix backwards. It talks about Social Security, Medicare and Medicaid, it talks about the coming retirement of the Baby Boomers, and, oh, yes, healthcare.”

He noted that while the use of interventions supported by clear evidence of superior effectiveness is relatively consistent across the nation, where there is uncertainty about the effectiveness of an intervention, some regions have much higher utilization than others. For example, while hip fracture treatment is consistent across the nation, rates of certain back surgeries vary dramatically. “In the absence of that kind of research,” said Orszag, “medical norms in different parts of the country develop differently, and the more interventionist norms, which are not necessarily backed by specific evidence, generate higher costs without better health outcome. An obvious approach to addressing that is to build up the information base on what works and what doesn’t.”

Proponents of placing greater emphasis on comparative effectiveness do not focus solely on healthcare costs. “We hope that this actually bends the cost curve over the long run, but our motivation was not strictly on that basis,” said Mark E. Miller, Ph.D., Executive Director, Medicare Payment Advisory Commission (MedPAC). “Even if that didn’t bend the cost curve, we think that this line of research needs to be pursued” in order to get the greatest value from healthcare spending.

As decisions are made about what a ramped-up comparative effectiveness effort would look like, Carolyn Clancy, M.D., Director, Agency for Healthcare Research and Quality (AHRQ), urged continued focus on the ultimate objective: “The importance of never losing sight of why we are doing this in the first place, which is to give people the best possible information so they can make the kinds of choices they would make for themselves if they knew as much about medicine or healthcare or the interventions as we do.”

What Is Achievable?
Comparative effectiveness research may be desirable, perhaps indispensable, if the nation is to get the greatest value from healthcare spending. But what would a more aggressive initiative to produce new information about the comparative effectiveness of interventions be able to provide?

The effort would include more original trials, including head-to-head comparisons of similar interventions as well as of comprehensive management strategies for chronic conditions involving multiple comorbidities. Systematic reviews would collect and assess the findings of trials and other studies, both to define the state of the art in various healthcare domains and to set priorities for future investigations. There are both great opportunities and inherent limitations: a systematic effort could provide immediate gains simply by pulling together work that has already been done, and yet even a dramatically expanded research initiative would not provide simple, clear-cut answers to every important question about healthcare.

Indeed, the matter of determining the right questions is critical. Gail Wilensky, Ph.D., Senior Fellow, Project HOPE, said that rather than the traditional focus on interventions—for example, how one brand of coronary artery stent compares to another—comparative effectiveness analyses should look at broader issues, such as how to treat cardiovascular disease. And rather than look at how drugs or devices perform under ideal circumstances in uncomplicated cases, comparative effectiveness investigations will need to explore how interventions are actually used in order to determine what works best for which populations in what circumstances.

Wilensky and others said that in attempting to answer these questions, comparative effectiveness research cannot rely only on head-to-head comparisons using randomized controlled trials. These trials would take too long to provide all the needed evidence and would cost too much. In addition, blinded randomization is not always ethically appropriate or even possible in certain circumstances.

Current controversies over coronary artery stents and the diabetes drug Avandia (rosiglitazone) surfaced in several discussions of the limits of conventional randomized controlled trials. Janet Woodcock, M.D., Deputy Commissioner and Chief Medical Officer, FDA, pointed out that trials of Avandia have involved many thousands of patients, and yet there is persistent uncertainty about whether the drug increases a patient’s risk of heart attack and death. “I’m just trying to explain some of the boundaries here about how much knowledge you can actually gain and how heroic your efforts may have to be,” Woodcock said.

Yet during the same session, Donna-Bea Tillman, Ph.D., Director, Office of Device Evaluation, FDA Center for Devices and Radiological Health, pointed out that regulators make decisions about devices that rarely have the kind of blinded randomized trial evidence available for new drugs. “The device statute really allows us to accept a variety of clinical evidence, going from well-controlled clinical trials all the way down to robust human experience,” Tillman said.

Sharon Levine, M.D., Associate Executive Director, The Permanente Medical Group, Inc., used her organization’s experience with a total joint registry and a study of clinical events in patients taking COX-2 inhibitors to point out that new and useful knowledge can be produced even when a conventional controlled trial is not practical. She said the registries largely serve to generate hypotheses, but the alerts they produce do lead to further study and, ultimately, important improvements in clinical practice.

Hospitals may frame the definition of “comparative effectiveness” differently than some other constituencies. For example, Ascension Health, the nation’s largest not-for-profit hospital system, views comparative effectiveness through the lens of operations research applied in patient safety programs, not just which technologies are inherently most effective. Ascension set a goal of eliminating preventable deaths and then used existing medical records and reports to define the problem and identify corrective actions. Ascension Health Chief Medical Officer David B. Pryor, M.D., said the first data review indicated that throughout their system about 15% of the deaths of patients not admitted for end-of-life care were potentially preventable. “That’s not a model, that’s 900 preventable deaths a year; three a day. We used to start some meetings asking for a moment of silence for the three people who died in our system today who didn’t need to,” Dr. Pryor said. Rather than being stymied by all the ways hospital care might be improved, Ascension focused on eight quality measures that were important and yet did not create an excessive burden on staff. “Much to our surprise, I’m pleased to tell you that mortality declined 21% in our first year of work.” Pryor said one lesson of their experience is that it is possible to move ahead, even if you do not have all the answers.
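
Pryor’s figures hang together arithmetically. As a quick check of the quoted numbers (our back-of-the-envelope reconstruction, not part of the presentation):

\[
\frac{900\ \text{preventable deaths/year}}{365\ \text{days/year}} \approx 2.5\ \text{per day, rounded to ``three a day''}
\]
\[
\frac{900\ \text{preventable deaths}}{0.15} = 6{,}000\ \text{implied annual deaths among patients not admitted for end-of-life care}
\]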

These two examples of healthcare systems illustrate success and yet also point to a problem: the kinds of internal data-tracking systems used by The Permanente Medical Group and Ascension Health do not exist at a national level. Lynn Etheredge, Consultant, Health Insurance Reform Project, George Washington University, said that national databases of de-identified clinical data, perhaps modeled on the open databases maintained by the Human Genome Project, could dramatically accelerate comparative effectiveness research.

Observational studies, registries, and other methods of gathering insight into the relative benefits and harms of interventions will be needed. The strengths and weaknesses of these methods will then become matters of debate as policymakers and others attempt to apply the findings. Brian G. Firth, M.D., Ph.D., Worldwide Vice President, Health Affairs, Cordis Corporation, warned against the temptation to treat findings from registries as if they had been produced by trials. He cited the Swedish Coronary Angiography and Angioplasty Registry, in which short-term results prompted a sharp decline in the use of drug-eluting stents, yet longer-term data a year later did not point to an elevated risk of blood clots in patients.

As ECRI Institute President and CEO Jeffrey C. Lerner, Ph.D., noted, comparing new technologies to those already established can be like comparing a child to an adult. Nevertheless, if comparisons are not attempted, patients and providers will continue to face a wide gray zone where conventional evidence fails to provide guidance.

Would It Be Used?
As difficult as it can be, acquiring knowledge about the comparative effectiveness of healthcare interventions is only the first step in using evidence to make a difference in patient care and resource allocation; the results must then be applied. Healthcare is replete with examples of study results that changed practice only slowly, if at all. Troyen A. Brennan, M.D., J.D., M.P.H., Chief Medical Officer, Aetna, Inc., and others noted that it took more than three decades to act on evidence of the benefit of beta blockers for patients with cardiovascular disease. What factors are likely to influence the acceptance and use of comparative effectiveness research?

William Novelli, M.A., Chief Executive Officer, AARP, urged proponents of comparative effectiveness to think about who is likely to resist using the information. “Who would tell people, ‘You don’t want cookie-cutter medicine,’ or ‘This is really denying you freedom of choice’?” Novelli said public education about the utility of such information will take a long time and depend on the support of physicians.

The way comparative effectiveness research is framed is critical to how it is received. If it is seen as simply another attempt by payers to control costs, clinicians and consumers are likely to resist the concept. Consumers, patients, and providers will have to be convinced that comparative effectiveness research can improve health outcomes and the quality and value of their care. But that mindset is not yet widespread. Carolina Hinestrosa, M.A., M.P.H., Executive Vice President for Programs and Planning, National Breast Cancer Coalition (NBCC), was among those who said, “We have embraced a philosophy in this country of more must be better.” Only when both providers and patients accept the premise that sometimes more, or new, is not better, and that standard care or even less care may be safer and more effective, will they be likely to embrace the findings of comparative effectiveness work. NBCC worked against the grain in the 1990s, when many patients and oncologists promoted the use of high-dose chemotherapy for patients with metastatic breast cancer despite the complete absence of controlled studies. The technology proved counterproductive, causing great numbers of treatment-related deaths. Similarly, many patients gave up hormone-replacement therapy (HRT) and COX-2 inhibitors only reluctantly once evidence of their harms emerged.

Nevertheless, the public and others hunger for better information, as evidenced by consumers and institutions currently trying to use available data. 

States are one constituency now deploying comparative effectiveness research information because they need to balance their budgets and are under political pressure to both limit taxes and maximize services.  This imperative focuses attention on the value received for the healthcare dollars spent. “It really does drive state policymakers to a point where the pressure-relief valve is around maximizing value, and that’s where increasing comparative effectiveness is such a benefit,” said Mark Gibson, Deputy Director, Center for Evidence-Based Policy, Oregon Health and Science University (OHSU). Gibson is developing and implementing programs that provide systematic reviews of drugs and other healthcare interventions to state Medicaid agencies and other entities. He said that an open process about how research is conducted and how results and conclusions are derived has been vital to defending against criticism of comparative effectiveness reports. Public constituents, including industry, are invited to comment and provide data and information at the outset when new systematic reviews are planned. 

Of course, sharp criticism of and resistance to the findings of such reports persist. Carmen Hooker Odom, President of the Milbank Memorial Fund and until recently Secretary of the North Carolina Department of Health and Human Services, said that policymakers must be thick-skinned and find ways to navigate political obstacles. When tasked by the governor of North Carolina with trimming $70 million from Medicaid spending, Hooker Odom instituted a mandatory preferred drug list and supplemental rebates in the state’s Medicaid prescription drug plan. “The response was immediate, it was heavy, and it was all pretty negative,” she said. The legislature reversed her actions.

She said she then used reports from the OHSU Drug Effectiveness Review Project (DERP) to provide information to prescribers. The same legislature that had rebuked her earlier mandates endorsed educational outreach to physicians whose patients use multiple prescription drugs. For example, a physician who prescribes three or more psychotropic drugs to a child is now required to undergo a peer consultation in which alternatives, based on evidence-based criteria, are offered. “This new language is the first time in statute there appears the words ‘evidence-based criteria’ regarding the North Carolina Medicaid Prescription Drug Program,” Hooker Odom said.

The systematic reviews produced by DERP also provide the foundation for the “Best Buy Drugs” reports from Consumers Union. “What ‘Best Buy Drugs’ does is translate very complicated systematic reviews so consumers can understand what they mean,” said Gail Shearer, Director, Health Policy Analysis, Consumers Union. So far, 19 reports have been posted to a Web site (www.crbestbuydrugs.org), and printed versions are being published. While the language and content of the reports are simplified, Shearer said she was surprised to find that the overwhelming majority of Web users who downloaded reports chose the more in-depth versions (14 to 30 pages) over 2-page summaries. The information can affect patient choices: in collaboration with the Medco mail-order pharmacy, Consumers Union sent the summaries to about 1 million statin users. Shearer said there was a 4% shift to lower-cost statins, saving about $8 million.
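
Taken at face value, Shearer’s figures imply a modest per-patient saving. A back-of-the-envelope reconstruction (our arithmetic, assuming the 4% shift applies to the full mailing of roughly 1 million users):

\[
1{,}000{,}000 \times 0.04 = 40{,}000\ \text{patients switching to lower-cost statins}
\]
\[
\frac{\$8{,}000{,}000}{40{,}000\ \text{patients}} \approx \$200\ \text{saved per switching patient}
\]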

Typically, consumers are merely passive recipients of comparative effectiveness information. However, well-educated consumer advocacy groups can help shape practice and policy. Carolina Hinestrosa talked about the National Breast Cancer Coalition’s advocacy training programs that focus on teaching advocates how to understand clinical trial data—evidence. She said that the Coalition does not merely advocate for more treatment and research. “We said we need to ensure that treatment decisions and coverage are based on evidence and best practices.” In support of that principle, training programs that last up to five days teach members basic concepts of science and health-services research.

Employers and health plans are beginning to create benefits that drive treatment practices in the direction of what the available evidence supports. The central concerns of employers, according to Charles M. Yarborough, M.D., M.P.H., Director, Health and Wellness Medical Strategies, Lockheed Martin Corporation, include increasing productivity and reducing absenteeism. Yarborough referenced bariatric surgery for weight loss, an increasingly popular procedure in the working population. He said that even though evidence shows that surgery is effective for some types of patients, it is often used for the wrong patients at the wrong time. Evidence-based benefit plans do not deny coverage, but they require a physician-supervised weight-loss program before surgery.

Barry Straube, M.D., Director and Chief Clinical Officer, Office of Clinical Standards and Quality, CMS, said that more and more of the agency’s decisions are becoming nuanced to identify the specific patient populations and circumstances in which an intervention should be used to achieve optimal outcomes.  Such decisions are supported by both the available evidence and the opportunity to develop new evidence for interventions lacking sufficient evidence. “Rather than saying ‘No we won’t cover a service,’ we are now adding additional evidence gathering, which is known as ‘coverage with evidence development,’” he said. Often, that means establishing a registry to collect data, but it can also mean covering treatments under the auspices of a clinical trial. As an example, he said that rather than limit coverage of implantable cardioverter defibrillator devices, CMS opted to offer broader coverage while gathering data to eventually determine which subpopulations benefit from the devices.

While coverage decisions can influence patient access to certain healthcare services, other incentives, such as board certification, offer opportunities to educate physicians and modify their behavior. American Board of Internal Medicine (ABIM) certificates no longer last a lifetime; they expire after 10 years, creating opportunities for physician education and for measurement of practice patterns. Cary Sennett, M.D., Ph.D., ABIM Senior Vice President, Strategy and Clinical Analytics, explained how the board’s Practice Improvement Modules offer a framework for physicians to measure their own practices. “For many physicians, this is the first time that they’ve ever looked at the performance of their practice at a population-based level,” he said. Physicians sometimes find, for example, that glycemic control among their diabetes patients is not as good overall as they believed. The program then suggests establishing a registry or other system to better track and support these patients.

Fiona Wilmot, M.D., M.P.H., Director of Medical Policy, Pharmacy and Therapeutics and Transplant, Blue Shield of California, also offered an example of encouraging physician use of evidence through academic detailing and through technology assessment committees of physicians and pharmacists. But she noted substantial barriers to wider use of comparative effectiveness research: employers may think that health plans are being self-serving, providers worry about their incomes, and plan members may believe it is just another way to deny coverage. On balance, though, Wilmot said she sees no reason to stall the deployment of comparative effectiveness programs while the field continues to develop.

As a reminder that efforts to produce evidence-based information to inform practice can lead to the demise of the agencies producing the evidence, Roger Herdman, M.D., Director, National Cancer Policy Forum, Institute of Medicine, recalled the fate of the U.S. Office of Technology Assessment (OTA), which he headed, and of predecessor federally funded technology assessment organizations in which he played a role. He also recalled the experience of the U.S. Agency for Health Care Policy and Research (AHCPR), subsequently renamed the Agency for Healthcare Research and Quality. After AHCPR produced a systematic review that it then used to create clinical practice guidelines indicating that certain back pain surgeries appeared to be ineffective, professional societies opposing the findings lobbied Congress and succeeded in drastically cutting the agency’s funding. “It appears that if the game gets rough, a health technology [assessment] agency is the player most likely to be benched,” Herdman concluded.

AHRQ Director Carolyn Clancy, M.D., emphasized that only transparent processes can win public trust, and that it is difficult to portray the benefits and harms of interventions in a way that is accessible to patients when they need to make decisions. She stressed that producing comparative effectiveness research is only the beginning of the job. “If we are not focused on the demand side and how this information will be used, we’ll be building a better library,” Clancy said.

Who Would Do It?
Comparative effectiveness research is now being done by public agencies, private companies, and other organizations. Nonetheless, there is growing discussion about whether the nation would benefit not only from increasing the resources available, but also from centralizing the effort in some fashion. Legislation pending in Congress and proposals by presidential candidates call for the creation of a national center or institute to fund or perform comparative effectiveness research and disseminate the results.

In a June 2007 report, MedPAC recommended to Congress that a new independent entity be created, in part because comparative effectiveness research is currently scattered across separate agencies, including AHRQ, the National Institutes of Health, and the Department of Veterans Affairs, which have other priorities, too. “There is not one place where there is a concentrated effort, where systematically an agenda is set and information is disseminated,” said Mark E. Miller, Ph.D., Executive Director, MedPAC.

Yet even as supporters call for a new entity, they have yet to reach consensus on the details of its structure and funding. Wilensky said there is no perfect place to house the entity. “I like the notion of ‘close to, but not too close to government,’” she said. Several speakers drew analogies to the Institute of Medicine and the Federal Reserve. In general, the proposed entity was described as independent, yet appropriately responsive to stakeholders; key participants should be knowledgeable and representative, yet free from conflicts. Speakers recognized the difficult tensions that would pull at the institution.

Representatives of manufacturers raised concerns that an emphasis on independence and freedom from conflicts of interest would exclude participants with technical expertise. Brian G. Firth, M.D., Ph.D., Worldwide Vice President, Health Affairs, Cordis Corporation, criticized the findings of comparative effectiveness reports produced by independent academics. In response to a question from a pharmaceutical company employee, Miller said, “I am absolutely aware that there are people within the private sector and companies who have been leaders in innovation. There’s no question about it. The point would be: can that be brought into the process in a way where there is clearly not a conflict of interest?”

Mark Gibson explained how DERP has tried to strike the right balance. He said the process has changed in response to feedback from participants. “It’s also changed over time in response to feedback from the industry, and it’s also changed over time in response to feedback from advocacy groups across the country,” Gibson said. The DERP process is intended to remain open to comment, yet protected from influence. Proposed questions for new studies are posted for public comment, as are draft reports, which are revised before final reports are released. DERP holds conferences with industry and solicits data from companies, but DERP staff stand as a firewall between industry and researchers. DERP reports are advisory only; the states that participate make their own decisions about how to apply the findings.

Wilensky, Miller, and others said a national entity should follow that model, leaving approval and coverage decisions to FDA, CMS, and others. In the United Kingdom (U.K.), the National Institute for Health and Clinical Excellence (NICE) and National Health Service (NHS) Quality Improvement Scotland provide guidance to the NHS. David Steel, Ph.D., Chief Executive, NHS Quality Improvement Scotland, said the reports carry great weight, in part because politicians there prefer to defer to expert opinion. “Technically they are advice, rather than decisions, but in effect the documents and guidance that comes out of NICE and my organization on technology appraisals are mandatory,” said Steel.

Despite caution about the potential political ramifications of proposing a comparative effectiveness research structure based on a U.K. model, Steven D. Pearson, M.D., Director, Institute for Clinical and Economic Review, Harvard Medical School, noted that NICE has become deeply embedded in the NHS and has won broad political support there. “With all the caveats about the lessons that can be transferred across the pond, I still think it does have some lessons for us about how the value of procedure and the value of being explicit about certain things can provide political durability that I think would have shocked people.”

Fiona Wilmot, M.D., M.P.H., Director of Medical Policy, Pharmacy and Therapeutics and Transplant, Blue Shield of California, said the health plan distinguishes between the decision-making processes for coverage of devices and coverage of pharmaceuticals. Coverage decisions regarding devices are made by an internal committee of physicians, who review technology assessments from multiple sources. The sources include the California Technology Assessment Forum, which is housed within the Blue Shield Foundation, at arm’s length from the company; reports from committees of the Blue Cross Blue Shield Association are also part of the review process. For information on therapeutics, the plan relies on an internal committee of pharmacists, but coverage decisions are then made by an external committee of physicians and pharmacists who are not employed by Blue Shield of California, and the votes of this committee are binding on the plan. Wilmot said another difference between the processes is that device decisions are generally whether to provide coverage, while decisions about pharmaceuticals are generally where to place a particular drug on the plan’s formulary.

Proposals for funding a new national entity included a mix of direct appropriations, taxes or other contributions from private sources, and Medicare Trust Fund monies. The discussion focused on creating a funding structure that would insulate the entity from political influence.

Trust was the watchword of the discussion. While there were several proposals for how to devise a governing board or how to incorporate input from consumers, providers, and industry, the speakers generally said the details were secondary to the objective: creating an institution that can maintain broad support as an unbiased source of credible comparative effectiveness information and guidance.

POINT / COUNTERPOINT
The discussions focused on two issues: whether and how cost-effectiveness should be part of comparative effectiveness reviews, and what effect closer scrutiny may have on innovation. In each case, the discussion took the form of a dialectic.

Cost-Effectiveness

Point: Comparisons should look at clinical effectiveness only.
“My only concern is that the emotional stuff attached to cost-effectiveness could be used as a reason not to do any of the good work, develop better methods, better information, and so forth,” AHRQ Director Carolyn Clancy said. “I think making it understandable and not confusing, and not sounding like rationing; I don’t think we’ve gotten to a point where we can communicate that yet.”

In discussing a proposed new national center for comparative effectiveness research, Gail Wilensky, Senior Fellow, Project Hope, said cost-effectiveness work should be done somewhere else. “I agree with the importance of cost-effectiveness as information to be available, I just don’t want to have it in this clinical effectiveness center because I think it makes it too vulnerable politically.”

Mark Gibson, Deputy Director, Center for Evidence-Based Policy, Oregon Health and Science University, said that while rising healthcare costs are the main reason states support the systematic review work of the Drug Effectiveness Review Project, the researchers at his center do not consider cost. “The states don’t ask us to do cost analyses for them. So far, at the level the states are working, they’re pretty confident that they can figure out the cost issue. They are dealing with cost, but not in a research construct; they are just looking at their actual costs.”

Counterpoint: Comparisons without cost-effectiveness evaluations would be uninformative.
“I don’t think it’s only a clinical issue. And if it is a clinical issue today, it’s going to be a clinical and cost issue tomorrow, and I don’t shy away from that. Appropriateness requires both components,” said Stuart Altman of the Heller School for Social Policy and Management at Brandeis University.

Sharon Levine, Associate Executive Director, The Permanente Medical Group, Inc., said that patients would get a far different picture of their treatment alternatives if physicians included costs in discussions of clinically comparable options; but she noted that physicians have a long tradition of considering cost to be someone else’s responsibility. “I think this is a huge challenge in terms of getting the physician community—professionals—comfortable with their own role in actually being responsible for the affordability of healthcare.”

Steven D. Pearson, Director, Institute for Clinical and Economic Review, Harvard Medical School, said the Institute’s analyses put interventions on a grid, with one axis rating clinical effectiveness and the other assessing comparative value. He presented the case of intensity-modulated radiation therapy (IMRT) for prostate cancer: Medicare pays $42,000 for IMRT, compared to $10,000 for 3-dimensional conformal radiation therapy (3-D-CRT).

“No one really argues, much less is there evidence, that IMRT actually produces an improvement in disease-free survival,” Pearson said. The difference is a reduction in the risk of inflammation of the bowel from about 15% to about 3%. Using IMRT means spending an additional $300,000 to avoid one case of bowel inflammation.
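
The roughly $300,000 figure follows from the numbers Pearson quoted. A back-of-the-envelope reconstruction (our arithmetic and notation, not a slide from the presentation; \(\Delta C\) and \(\Delta R\) are shorthand for the incremental cost and the absolute risk reduction):

\[
\Delta C = \$42{,}000 - \$10{,}000 = \$32{,}000 \quad \text{(incremental cost per patient)}
\]
\[
\Delta R = 15\% - 3\% = 12\% \quad\Rightarrow\quad \text{number needed to treat} \approx \frac{1}{0.12} \approx 8.3
\]
\[
\text{cost per case of bowel inflammation avoided} \approx \$32{,}000 \times 8.3 \approx \$267{,}000,
\]

which rounds to roughly the $300,000 Pearson cited.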

Synthesis: A two-stage process
“Bring on comparative effectiveness, but don’t confuse it with cost-effectiveness, which creates mistrust. Let’s keep those processes separate and then by all means apply them both,” said John C. Lewin, M.D., Chief Executive Officer, American College of Cardiology.

Public confidence and trust can be nurtured if it is clear that clinical factors are evaluated first. Cost-effectiveness reviews should be distinct, and perhaps performed by separate entities.

Innovation

Point: Comparative effectiveness study requirements would stifle innovation.
Brian G. Firth, Cordis Corporation, said the increasing costs of doing studies to satisfy regulators and market demands already discourage smaller device companies from trying to commercialize their own products. “They want to take it to a certain point and be bought, because they simply cannot afford the back-end costs. That’s a very, very clear model, and I think policymakers should recognize this; it has implications in terms of innovation; it has implications in terms of competitiveness in the marketplace. This is a real phenomenon.” Firth also said that additional requirements for certain studies take resources away from other studies that might be more useful. 

Counterpoint: Comparative effectiveness will reveal whether new interventions are superior to standard practice.
“Can we do this without harming innovation?” Gail Wilensky asked. “These are legitimate issues, but I believe you can make a good argument. And the answer is, ‘Yes, you can go ahead and do this without harming innovation, if you provide the mechanisms so that people can get to market quickly.’” For instance, she said companies could agree to accept payment levels similar to those of existing treatment in order to gain quick access to the market, or they could gather clinical data about the relative value of the new product or service.

“Innovative industry needs to have sophisticated purchasers,” said James C. Robinson, Editor-in-Chief, Health Affairs. He used the example of the auto industry, in which purchasers push their suppliers to develop products that work well together. “On the biomedical side, the industry has traditionally enjoyed very unsophisticated purchasers. The physicians, of course, can be very sophisticated about the clinical dimension; but the people actually paying for it, the hospitals, the medical groups, and behind them the insurance companies, traditionally have not had a good understanding of the science, the value—clinical and economic—of what they are paying for.” As a result, he said, there is a lot of innovation in healthcare, but also unwarranted variation. As Stuart Altman noted, the public now pays for services that are of little or no value.

Synthesis: Paying for value will encourage innovations that make a difference
Steven D. Pearson said comparative effectiveness research will help purchasers support innovation that leads to improved health outcomes. “They want real innovation. They want real, quantum improvements in quality. They want people to think about value. And that doesn’t mean just the acquisition costs. Things can be very expensive up front, biotechnology, etc., but if modeling shows it will produce good value, that’s what health plans want.”

Closing Thoughts     
Given the limits of available evidence and the challenges of answering comprehensive questions about healthcare interventions, one attendee wondered whether comparative effectiveness efforts were merely harvesting the low-hanging fruit. “I don’t think we should underestimate how important it is to harvest the low-hanging fruit, because we absolutely learn from that,” said Mark Gibson. “It also begins to change the political calculus around how you can do this. When people see good object lessons of how you can increase value in healthcare by utilizing comparative research, they begin to get the idea.”

Sean Tunis, M.D., M.Sc., Director, Center for Medical Technology Policy, cautioned against waiting. He noted that intensity-modulated radiation therapy for prostate cancer is already in wide use despite a lack of evidence comparing it to other types of radiation treatment, while proton beam therapy is not yet as established. “Let’s at least make sure that five years from now we aren’t whining about the fact that we don’t have evidence on proton beam therapy. We’ve got to start somewhere.”

As ECRI Institute President and CEO Jeffrey C. Lerner noted, even though science is a messy business, every day people take that mess and translate it into the rules we use to guide our healthcare.


Comparative Effectiveness Resource Center
Please visit our CE Resource Center for proceedings and recordings from our past health policy conferences, plus links to stakeholder positions, federal legislation, and news.