Comparative Clinical Effectiveness and Cost-Effectiveness Research: Background, History, and Overview

October 15, 2007 (RL34208)

Summary

Comparative clinical effectiveness research has been discussed as a source of information for health care decision makers that may aid them in reaching evidence-based decisions. The premise that "what is newest is not always the best" is the core of the rationale behind comparative effectiveness research. Diverse governmental and non-governmental organizations have publicly expressed support for, and reservations about, comparative effectiveness research. Many bills that support comparative effectiveness research have been introduced in the 110th Congress, including S. 3, H.R. 2184, H.R. 3162 (CHAMP Act), and the Healthy Americans Act (S. 334 and H.R. 3163). Although comparative clinical effectiveness research is publicly supported in the abstract by many governmental and non-governmental entities, controversy lies in its practice and implementation.

Health technology assessment tools (e.g., comparative clinical effectiveness, cost-effectiveness, and cost-benefit analysis) have been used for decades in the United States. To determine whether and what type of research is needed, the scope and scale of current comparative effectiveness research efforts must be understood. This report summarizes research efforts that have been funded and conducted.

Both the Agency for Healthcare Research and Quality (AHRQ) and the National Institutes of Health (NIH) provide extramural research funding for health technology assessments. AHRQ's ongoing health technology assessment program includes the Centers for Education and Research on Therapeutics (CERTs), the Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) Program, Evidence-based Practice Centers (EPCs), and the Research Initiative in Clinical Economics (RICE). The Veterans Health Administration (VHA) and the Department of Defense (DOD) also have centers that conduct health technology assessments to help the agencies make formulary and pricing decisions. Health technology assessments by AHRQ's Medical Treatment Effectiveness Program (MEDTEP) and the Congressional Office of Technology Assessment (OTA) were terminated in 1995.

Some organizations that have used these assessments include the Academy of Managed Care Pharmacy (AMCP), Consumer Reports' Best Buy Drugs project, the DOD, for-profit firms (including consulting firms, private insurers, and pharmaceutical manufacturers), the Centers for Medicare and Medicaid Services (CMS), the Oregon Health Plan, and the VHA. Some other countries have given comparative clinical effectiveness and cost-effectiveness more explicit roles in their health care systems.

Proponents maintain that a new comparative clinical effectiveness research entity in the United States could increase the efficiency and coordination of research, boost the perceived independence and scientific integrity of the research, or generate research not currently being conducted. Realizing such anticipated gains could depend on many factors. This report will be updated upon legislative activity.


Comparative Clinical Effectiveness and Cost-Effectiveness Research: Background, History, and Overview

Comparative clinical effectiveness research has been discussed as an avenue for producing information to help health care decision makers, such as patients, providers, and public and private payers, reach informed, evidence-based decisions. Proponents maintain that such information would aid in using limited resources effectively and efficiently, and that it becomes even more necessary as resources become more limited, variation in medical practice patterns persists,1 and the rate of health care spending continues to rise.2 A recent Institute of Medicine (IOM) report on the future of drug safety notes that "what is newest is not always the best"3 and that decision makers need information on the comparative risks and benefits of treatments and services.

Diverse governmental and non-governmental organizations have publicly expressed their support or reservations about increased comparative clinical effectiveness research. The House Ways and Means Subcommittee on Health held a hearing on June 12, 2007, that discussed the creation of a public-private body that would oversee comparative clinical effectiveness research. The director of the Congressional Budget Office (CBO) concluded at the hearing that comparative clinical effectiveness research, combined with changes in payment incentives, "offers a promising mechanism for reducing health care costs to a significant degree over the long term while maintaining or improving the health of Americans," while emphasizing that significant cost savings from such research would not be seen for many years.4 The executive director of the Medicare Payment Advisory Commission (MedPAC) stated that "there is not enough credible, empirically based comparative-effectiveness information available to patients, providers, and payers to make informed treatment decisions."5 AARP, an advocacy group for persons aged 50 and older, mentioned the need for comparative effectiveness studies for pharmaceuticals as early as 2005 in its Solutions Statement to the White House Conference on Aging.6 America's Health Insurance Plans (AHIP), a trade association representing health insurance plans, has urged Congress to give the Centers for Medicare and Medicaid Services (CMS) the authority to use comparative effectiveness and cost-effectiveness information in its coverage and reimbursement decisions.7 Jack Rowe, the former chief executive officer of Aetna, has stated that employers are "dying" to have information on the comparative effectiveness of treatments.8

On the other hand, some have expressed reservations about funding increased comparative clinical effectiveness research. Scott Gottlieb, a researcher at the American Enterprise Institute for Public Policy Research (AEI), articulated problems he perceived with government-sponsored prescription drug research, including poor study design, lack of access to full study results, and bias in interpreting the results.9 Robert Goldberg, vice president of the Center for Medicine in the Public Interest, argued that comparative clinical effectiveness research ignores individual differences and is biased towards concluding that cheaper drugs are more effective.10

Related Legislation in the 110th Congress

A number of bills have been introduced that support comparative clinical effectiveness research, including several in the 110th Congress. S. 3, the Medicare Part D price negotiation bill sponsored by Senate Finance Committee Chairman Max Baucus, would require the Secretary of Health and Human Services (HHS) to develop a prioritized list of comparative effectiveness research studies.11

Representatives Tom Allen (D-ME) and Jo Ann Emerson (R-MO) co-sponsored H.R. 2184,12 which would establish a public-private funding mechanism for comparative clinical effectiveness research, overseen by an independent advisory board. The bill would establish a trust fund for the research that would receive $100 million in FY2008, $200 million in FY2009, and $900 million per year for FY2010-FY2012.

Similarly, H.R. 3162, the Children's Health and Medicare Protection (CHAMP) Act of 2007,13 would establish a public-private funding mechanism for comparative clinical effectiveness research, overseen by an independent commission, and a trust fund that would be appropriated at least $90 million in FY2008, $100 million in FY2009, $110 million in FY2010, and no more than $90 million in years thereafter. CBO has estimated that the information produced by the comparative effectiveness portion of the CHAMP Act would reduce total spending by public and private purchasers by $0.5 billion over 5 years and $6 billion over 10 years. Direct spending by the federal government was estimated to be reduced by $0.1 billion over 5 years and $1.3 billion over 10 years; thus, the majority of the savings from the research would be realized by private purchasers rather than by the federal government. The net federal expenditures from the comparative effectiveness portion of the CHAMP Act were estimated to be $0.5 billion over 5 years and $1.1 billion over 10 years. CBO assumed the savings would primarily be realized through changes in physicians' practice patterns and, to a lesser extent, changes in coverage rules.14

The Healthy Americans Act (S. 33415 and H.R. 316316) includes tax deduction, patent extension, and market exclusivity incentives for pharmaceutical and medical device manufacturers to conduct comparative clinical effectiveness research. The Josephine Butler United States Health Service Act, H.R. 3000, would create a National Health Board that includes a new institute, the National Institute of Evaluative Clinical Research. Among the Institute's responsibilities would be to identify the most effective methods of prevention, diagnosis, and treatment and to assist the National Health Board in establishing clinical practice guidelines. More details about these bills and others in the 109th and 110th Congresses are included in the Appendix.

To help inform the discussion surrounding comparative clinical effectiveness research, this report provides an overview and discusses past and current comparative clinical effectiveness research and other forms of technology assessment in the United States. This report also briefly discusses the use of technology assessment in the U.S. and other countries, and the potential role of a new comparative effectiveness research entity.

What Is Comparative Effectiveness Research?

Comparative effectiveness research is a term that has been defined in many different ways. All definitions agree that comparative effectiveness research compares the effectiveness of two or more health care services or treatments, and that it is one form of health technology assessment. It compares outcomes resulting from different treatments or services, and provides information about the relative effectiveness of treatments.

Additional specifics about the research and its definition are sources of contention. In particular:

  • Effectiveness—How should effectiveness be measured? Should the research compare only the effectiveness (the effect in routine clinical practice) or also the efficacy (the effect under optimal conditions) of treatments or services?
  • Costs—Should costs be included in the research? Should the costs be reported separately from the effectiveness results? Or should a cost-effectiveness ratio be the ultimate goal?

Effectiveness: How Should It Be Measured?

Effectiveness Is Complicated to Measure

Measuring the benefit or effectiveness is not a straightforward matter; which factors are included and how they are counted can greatly affect the results. Quantifying benefits and effectiveness often requires assumptions about the population benefitting from the treatment. For example, to what extent will the benefits from the treatment vary across the country and in different settings? More specifically, how should the benefits observed in a clinical trial be extrapolated to the rest of the population? Researchers have debated such questions over the years, resulting in an expert consensus17 concerning best research methods. These suggested methods include the practice of conducting sensitivity analyses to assess how study results would change if parameters, such as effectiveness, were measured differently.18
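
To make the idea of a sensitivity analysis concrete, the short sketch below varies a single effectiveness parameter and shows how the estimated benefit changes. It is a minimal illustration only; the population size, event rate, and relative risk reduction are hypothetical values, not figures from this report or from any particular study.

```python
# Minimal one-way sensitivity analysis sketch (all values hypothetical).
# The question posed above is how far a trial result can be extrapolated;
# varying the assumed real-world effect shows how sensitive the estimated
# benefit is to that assumption.

def events_avoided(population, baseline_event_rate, relative_risk_reduction):
    """Estimated adverse events avoided if the treatment's effect held in this population."""
    return population * baseline_event_rate * relative_risk_reduction

population = 1_000_000          # hypothetical treated population
baseline_event_rate = 0.05      # assume 5% of untreated patients experience the event

# Suppose a trial estimated a 30% relative risk reduction; test lower and
# higher values to reflect uncertainty about extrapolating to routine care.
for rrr in (0.15, 0.30, 0.45):
    avoided = events_avoided(population, baseline_event_rate, rrr)
    print(f"Assumed relative risk reduction {rrr:.0%}: about {avoided:,.0f} events avoided")
```

If a study's conclusion holds across the full range of plausible parameter values, the result is said to be robust to that assumption; if it flips, the choice of measurement matters for decision makers.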

Effectiveness Differs from Efficacy

A treatment's efficacy is the effect of the treatment under optimal conditions. A treatment's effectiveness is the effect of the treatment in routine clinical practice.19 For example, randomized clinical trials conducted for Food and Drug Administration (FDA) marketing approval typically aim to assess the relative safety and efficacy of a treatment so as to best determine the sole effect of the treatment, absent any other influential factors. Clinical trials for FDA approval also typically compare the efficacy of an investigational treatment to a placebo,20 rather than another treatment. Effectiveness research relaxes the strict exclusionary criteria that are typically required in such trials, in order to assess the treatment in the wide range of patients and environments in which the product is actually used.

Efficacy and effectiveness research results may differ because often in clinical practice, patients may have more than one illness, doses may vary, methods of administering the treatment may vary, and patients may simultaneously take treatments for multiple illnesses. Moreover, large segments of the potential patient population are often excluded from efficacy trials in order to achieve a more uniform study population.21 An often cited example is that many patients with high blood pressure also have diabetes; yet, efficacy clinical trials for high blood pressure treatments may not include patients with diabetes. As a result, any interaction between blood pressure and diabetes medications may not be known until the treatment is approved by the FDA and used in clinical practice. Also, different patients may respond to treatments differently due to physiologic differences, such as different metabolic rates of drugs and safety of anesthesia in surgical options.

Although conducted after FDA approval, post-marketing22 (also known as phase IV) studies are not necessarily effectiveness studies, and only rarely could be classified as comparative effectiveness studies. Post-marketing studies most often assess any ongoing safety concerns of one drug or device rather than the effectiveness of a product. Moreover, the few studies that compare two or more treatments often only assess the equivalence or superiority of the study sponsor's product, rather than the relative effectiveness of competing products.

Costs: Should They Be Included?

Costs Are Complicated to Measure

Costs are not always easy to define or measure. The total treatment costs may differ, sometimes dramatically, depending upon which perspective (e.g., patient, government payer, private insurer, society) is taken in the analysis, and which costs are included.23 As with the measurement of effectiveness, researchers have tried to resolve these issues through expert consensus of best research methods and the practice of conducting sensitivity analyses.

Inclusion Depends on Role in Health Care Decision Making

Much of the controversy surrounding whether costs should be included in comparative effectiveness research lies in the questions: when, how, and by whom will the research results be used to make decisions?24 The issue is most controversial if results that include costs are used to make insurance reimbursement, pricing, or coverage decisions. The inclusion of costs in research tends to not be as controversial when the results are not directly linked to medical and health policy decision making. One reason for the controversy is that policymakers may disagree about the way costs are measured or which costs are included in a research study.

Cost-effectiveness and Cost-benefit: Two Ways to Include Costs

Cost-effectiveness and cost-benefit analysis are two frequently used techniques for incorporating costs in the results of health technology assessments. The techniques compare the costs to the health benefits received from services or treatments. Both methods are used to help determine whether the additional health benefits of a service or treatment can justify the additional costs. Cost-effectiveness and cost-benefit analysis differ in how the health benefits are measured. In cost-benefit analysis, the health benefits are monetized, and the results are stated either in the form of a ratio or as the monetary difference between costs and benefits.25 In cost-effectiveness analysis, the health benefits are commonly measured in non-monetary units, such as life years (i.e., the additional years of life gained) or life years adjusted for quality (i.e., quality-adjusted life years, or QALYs), and the end product is usually a ratio of the costs and benefits (e.g., dollars/QALY). Also, cost-effectiveness analysis always compares an intervention with one or more alternatives, while cost-benefit analysis can be used to assess a single option (i.e., assessing whether the benefits are greater than the costs) or more than one option.
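
One simple way to see the arithmetic difference between the two approaches is sketched below. The costs, QALY values, and the dollar value assigned to a QALY are hypothetical numbers chosen only to show the calculations; they do not describe any actual treatments or any analysis cited in this report.

```python
# Illustrative contrast between cost-effectiveness and cost-benefit arithmetic
# (all figures hypothetical).

# Two treatment options: costs in dollars, health benefits in QALYs.
cost_a, qaly_a = 30_000.0, 4.0   # newer, costlier option
cost_b, qaly_b = 10_000.0, 3.5   # existing comparator

# Cost-effectiveness analysis: benefits stay in non-monetary units (QALYs),
# and the end product is a ratio of incremental cost to incremental benefit.
icer = (cost_a - cost_b) / (qaly_a - qaly_b)
print(f"Cost-effectiveness: ${icer:,.0f} per QALY gained")

# Cost-benefit analysis: benefits are first monetized. The dollar value
# assigned to a QALY is itself an assumption of the analysis.
dollars_per_qaly = 50_000.0
net_benefit_a = qaly_a * dollars_per_qaly - cost_a
net_benefit_b = qaly_b * dollars_per_qaly - cost_b
print(f"Cost-benefit (net): option A ${net_benefit_a:,.0f}, option B ${net_benefit_b:,.0f}")
```

In the cost-effectiveness version, the decision maker still has to judge whether roughly $40,000 per QALY gained is acceptable; in the cost-benefit version, that judgment is built into the dollar value placed on a QALY.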

What Research Is Needed?

Past and Current Research Efforts

In order to determine whether and what type of comparative effectiveness research is needed, the scope and scale of current comparative effectiveness research efforts must be understood. A survey of the published medical literature and a review of the historical and current research initiatives provide a limited summary of the scale and scope of the comparative effectiveness research that has been funded and conducted. This section provides such a survey of the literature and reviews health technology research initiatives in the U.S. The section also discusses how comparative effectiveness research has been used in the U.S. and other countries and the potential role of a new comparative effectiveness entity.

Research in the Published Medical Literature

The published medical literature is one source of information that may help assess the extent to which various types of entities are currently conducting comparative clinical effectiveness research. We conducted a search of the published medical literature that included all studies published in PubMed journals between January 2004 and August 2007 that compared the effectiveness of at least two treatments or services.26 The search was not intended to be exhaustive, but rather to summarize the information available from one large source of recently completed studies. It did not include studies that compared a treatment or service to a placebo, and studies conducted through initiatives discussed later in this report were excluded. Each study was categorized by the type of research entity that conducted the study, which was determined by the affiliation of the study's contact author. The categories were academic, private institute, pharmaceutical company, and government. A private institute was defined in this search as a for-profit or non-profit research group that was not based at a university or pharmaceutical company. Examples of private institutes include hospitals and private practices not affiliated with universities, such as the Kaiser Permanente Medical Center and the Black Hills Regional Eye Institute. Research conducted at Veterans Affairs (VA) hospitals was categorized as government research. The relative share of studies published by different types of entities is shown in Figure 1. The kinds of studies published by each type of entity are shown in Table 1.

The published studies primarily focused on treatments for mental health disorders and cardiovascular disease. Most comparative clinical effectiveness studies in the medical literature were produced by academic researchers.27 Researchers affiliated with pharmaceutical companies and the government have published relatively few comparative clinical effectiveness studies in the medical literature. Few studies focused on the comparative effectiveness of treatments in sub-populations, defined here as populations other than white middle-aged males (or females for diseases, such as ovarian cancer, that occur only in females). Sub-populations, such as children, the elderly, and non-white races, may respond to treatments differently due to physiologic differences, such as different metabolic rates of drugs and safety of anesthesia in surgical options; thus, research including sub-populations may help inform clinical practice. Also, few studies included patients with more than one disease (i.e., comorbidities28). Since nearly 60% of hospitalizations involve at least one comorbidity (i.e., two diseases) and over 33% involve two or more (i.e., at least three diseases),29 this type of research is generally held to help inform clinical decisions.

Figure 1. Share of Comparative Clinical Effectiveness Studies Published in the Medical Literature, by Each Type of Entity, January 2004-August 2007

Source: Congressional Research Service analysis from search of PubMed at the National Library of Medicine.

Notes: The shares are based upon counts of studies and are not weighted by significance, cost, or any other factors. The type of research entity that conducted the study was determined by the affiliation of the study's contact author.
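
Because the shares in Figure 1 are based on counts of studies, they follow directly from the study counts reported in Table 1 below. The short sketch reproduces that arithmetic; the counts are the ones shown in the table, and the computation is shown only to make the relationship between the figure and the table explicit.

```python
# Derive the entity shares underlying Figure 1 from the study counts in Table 1.
study_counts = {
    "Academic": 65,
    "Private": 34,
    "Pharmaceutical companies": 8,
    "Government": 6,
}

total = sum(study_counts.values())  # 113 studies in the search window
for entity, count in study_counts.items():
    print(f"{entity}: {count}/{total} = {count / total:.0%} of published studies")
```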

Table 1. Types of Studies Conducted by Each Research Entity, January 2004-August 2007

Type of Entity | Academic | Private | Pharmaceutical Companies | Government | Total
Number of studies | 65 | 34 | 8 | 6 | 113
Study topics:
  Cancer | 2% | 6% | 0% | 0% | 3%
  Cardiovascular disease | 17% | 20% | 12% | 0% | 17%
  Diabetes | 8% | 3% | 0% | 33% | 7%
  Digestive system | 2% | 3% | 0% | 0% | 2%
  Infectious diseases | 6% | 8% | 25% | 17% | 9%
  Mental health disorders | 22% | 11% | 39% | 50% | 21%
  Muscle, bone, and joints | 3% | 3% | 0% | 0% | 3%
  Ophthalmic disorders | 8% | 20% | 0% | 0% | 11%
  Pain management | 8% | 17% | 12% | 0% | 11%
  Pulmonary diseases | 9% | 3% | 0% | 0% | 5%
  Other | 15% | 6% | 12% | 0% | 11%
  Total | 100% | 100% | 100% | 100% | 100%
Included patients with more than one disease (comorbidities) | 6% | 6% | 0% | 0% | 5%
Included sub-populations | 18% | 6% | 12% | 0% | 13%

Source: Congressional Research Service analysis from search of PubMed at the National Library of Medicine.

Note: The type of research entity that conducted the study was determined by the affiliation of the study's contact author.

Federal Funding of Technology Assessments

Health technology assessment, including comparative clinical effectiveness, cost-effectiveness, and cost-benefit analysis, has been conducted for decades in the United States through both public and private initiatives.30 Some of these initiatives in the United States are ongoing, while others were terminated because of lack of funding. Further details about the initiatives can be found in the Appendix. The initiatives' timing and scale are summarized in Table 2.

The Agency for Healthcare Research and Quality (AHRQ) and the National Institutes of Health (NIH) are currently the largest federal funders of extramural health technology assessments. Rather than conducting the research at the agency, these executive-branch agencies within the Department of Health and Human Services (HHS) primarily provide funding for academic and private sector researchers. AHRQ in particular has several ongoing programs for health technology assessments, including the Centers for Education and Research on Therapeutics (CERTs), the Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) Program, Evidence-based Practice Centers (EPCs), and the Research Initiative in Clinical Economics (RICE). These centers and programs conduct technology assessments, comparative effectiveness research, pharmaceutical outcomes research, and economic valuations of health care services and treatments. Although some institutes at the NIH provide funding for health technology assessments, the NIH, unlike AHRQ, has not organized the research into centers or programs. Each of the AHRQ programs differs in its clinical focus, purpose, and types of technology assessments funded. For example, the EPCs are academic and private sector research centers that have five-year research contracts with AHRQ, while funding from RICE is primarily allocated through competitive research grants. Unlike AHRQ and the NIH, the Veterans Health Administration's (VHA) Pharmacy Benefits Management Strategic Healthcare Group (PBMSHG) and the Department of Defense (DOD) PharmacoEconomic Center (PEC) do not outsource their health technology assessments. Rather, the assessments are funded, conducted, and used by the respective agency to make formulary and pricing decisions. Moreover, the research funded by AHRQ and NIH is intended to be a public good and to aid all health care decision makers, while the research from the VHA and DOD centers is intended to aid decision makers at the respective agencies.

AHRQ previously sponsored research through its Medical Treatment Effectiveness Program (MEDTEP). Among other projects, the program funded the Patient Outcomes Research Teams (PORTs). Funding for the program was terminated in 1995 for many reasons, including criticisms of the quality of the PORT guidelines. The Congressional Office of Technology Assessment (OTA) also previously funded and conducted technology assessments. The OTA was a nonpartisan congressional agency that conducted health and non-health technology assessments for Congress, using in-house researchers as well as outside experts. It was disbanded in 1995, partly due to controversy over its technology assessments and partly for other reasons discussed in the Appendix.

Table 2. Other Federal Funding of Technology Assessments

Initiative | Years of Existence | Number of Technology Assessments (a) | Types of Technology Assessments | Notes
AHRQ programs:
  CERTs | 1999-present | 30-50 publications in medical journals per year; over 200 current projects | Health outcomes and cost-effectiveness research of drugs and devices | Administered through cooperative agreement between AHRQ and FDA; also conducted in partnership with private corporations; 10 centers are affiliated with academic institutions and the other is the HMO Research Network
  DEcIDE | 2004-present | 15 reports since inception | Health outcomes and comparative effectiveness research | Established by section 1013 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA); 9 centers are at universities and others are at Acumen, Brigham and Women's Hospital, Harvard Pilgrim Health Care, Outcome Sciences, and RTI International
  EPCs | 1997-present | 155 reports since inception | Research on effectiveness, cost, and safety of technologies, evidence reports, and research methods | 1 center supports USPSTF work; 3 centers conduct technology assessments for CMS; 10 centers are affiliated with academic institutions and the others are the BCBS TEC, the ECRI Institute, and the Rand Corporation
  MEDTEP | 1989-1995 | N/A | Health outcomes and cost-effectiveness research, guidelines development, database development, and methods of disseminating information | Funded PORTs; results of MEDTEP sponsored research contributed to decisions to decrease the agency's budget by 20% in 1995
  RICE | 2001-present | Over 40 grants for clinical economics research funded since inception | Clinical economics research, including cost-benefit, cost-effectiveness, and health outcomes research, and research methods | Funding primarily allocated through competitive research grants
DOD PEC | 1992-present | N/A | Evaluations of the cost and clinical outcomes of pharmaceuticals, clinical practice guidelines | Helps establish Tri-Service Drug Formulary list and the National Mail Order Pharmacy list; also works with the VHA
OTA | 1972-1995 | Approximately 50 reports per year | Any type of technology assessment requested | Disbanded as part of the budget reductions in the 104th Congress
VHA PBMSHG | 1995-present | N/A | Evaluations of the cost and clinical outcomes of pharmaceuticals, clinical practice guidelines, and drug monographs | Establishes the VA formulary, drug pricing, and contracts; also works with the DOD

Source: Congressional Research Service analysis.

Notes: N/A = Information not available; AHRQ = Agency for Healthcare Research and Quality; CERTs = Centers for Education and Research on Therapeutics; CMS = Centers for Medicare and Medicaid Services; DEcIDE = Developing Evidence to Inform Decisions about Effectiveness; DOD = Department of Defense; EPCs = Evidence-based Practice Centers; MEDTEP = Medical Treatment Effectiveness Program; OTA = Congressional Office of Technology Assessment; PBMSHG = Pharmacy Benefits Management Strategic Healthcare Group; PEC = PharmacoEconomic Center; PORTs = Patient Outcomes Research Teams; RICE = Research Initiative in Clinical Economics; USPSTF = U.S. Preventive Services Task Force; VA = Department of Veterans Affairs; VHA = Veterans Health Administration.

a. Number of studies from initiatives' websites as of August 2007.

Effect of Research on U.S. Policy and Practice

Health technology assessments can be used for many purposes, including aiding decisions by

  • insurers for coverage, drug formulary placement, and pricing of technologies;
  • health care providers (e.g., physicians, nurses) for improving clinical practice; and
  • consumers for making informed decisions.

Use of Technology Assessments in the United States

Over the past two decades, many organizations have tried to use existing technology assessments for health care decisions. Some organizations that have used these assessments include the Academy of Managed Care Pharmacy (AMCP), Consumer Reports' Best Buy Drugs project, the DOD PEC, for-profit firms (including consulting firms, private insurers, and pharmaceutical manufacturers), the Centers for Medicare and Medicaid Services (CMS), the Oregon Health Plan, and the VHA PBMSHG. The organizations' timing, target audience, and purpose are summarized in Table 3.

With the exception of Consumer Reports' Best Buy Drugs, these organizations have used technology assessments for insurers' coverage, formulary, and pricing decisions. The AMCP promulgated guidelines for pharmaceutical companies' submissions for formulary assessments by private health insurers. The guidelines suggest comparisons to other products and a model that predicts the costs and health outcomes with the product in the health insurance plan. Research suggests that many pharmaceutical companies have adopted the guidelines; a survey found that managed care organizations had dossiers for 40% of drugs under review for coverage.31 Best Buy Drugs is a non-profit project of Consumer Reports that combines comparative effectiveness information with drug pricing information to select its "Best Buy picks" for health care consumers and providers. The DOD PEC monitors drugs' use, cost, and pharmacoeconomics within the Military Health System. The PEC has been credited by the DOD with improving patient safety and decreasing costs.32 Both the Medicare program and the Oregon Health Plan encountered opposition when they tried to incorporate cost-effectiveness analyses into policy decisions. In response, the Oregon Health Plan modified its program so that coverage was not based strictly on cost-effectiveness and, since January 2006, Medicare has explicitly excluded treatment costs from its national coverage determinations.33 The VHA PBMSHG compares the effectiveness of drugs to produce clinical practice guidelines and drug monographs, and to establish the VA formulary, drug pricing, and contracts. Further details about the initiatives can be found in the Appendix.

Table 3. Large Initiatives to Use Existing Technology Assessments

Organization | Years Used | Target Audience | Purpose of Initiative | Notes
AMCP | 2001-present | Private insurance formulary decisions | To standardize dossier format for submitting information to private insurers |
Best Buy Drugs | 2004-present | All health care decision makers | To provide comparative effectiveness information to consumers and providers | Synthesizes DERP reports and combines reports with drug pricing information to select drugs that are "best buys"
CMS | Never used for NCDs; may be used for LCDs | Medicare coverage determinations | To aid coverage decisions | First proposed to use cost-effectiveness in 1989 for NCDs
DERP | 2001-present | Medicaid programs and other health care decision makers | To make available information regarding drugs' comparative effectiveness and safety | Originally sponsored by the Oregon legislature; used by Consumer Reports' Best Buy Drugs project
DOD PEC | 1992-present | DOD formulary, coverage, and pricing decisions | To improve the clinical, economic, and humanistic outcomes of drug therapy for military personnel | Established in response to rising DOD pharmaceutical expenditures
Oregon Health Plan | 1993-present (a) | Oregon Medicaid program | To select covered services in a judicious manner in order to expand the population of Oregonians covered by Medicaid |
USPSTF | 1984-present | Over 100 clinical evaluations | Compares methods of preventing diseases | Funded by AHRQ since 1998
VHA PBMSHG | 1995-present | VHA formulary, coverage, and pricing decisions | To encourage the appropriate use of medications |

Source: Congressional Research Service analysis.

Notes: AMCP = Academy of Managed Care Pharmacy; CMS = Centers for Medicare and Medicaid Services; DERP = Drug Effectiveness Review Project; DOD PEC = Department of Defense PharmacoEconomic Center; NCD = National Coverage Determination; LCD = Local Coverage Determination; VHA PBMSHG = Veterans Health Administration Pharmacy Benefits Management Strategic Healthcare Group.

a. The legislation responsible for the Oregon Health Plan was passed by the Oregon legislature in 1989. The program was granted a federal waiver in March 1993 and was implemented in 1994.

Translating Research into Clinical Practice

Depending on many factors, new information about treatments' safety or effectiveness may or may not change physicians' clinical practice. For example, one study suggested that simply the wording of a study's results may significantly influence whether physicians change which drugs they prescribe.34 Another study found that dissemination of educational materials alone was ineffective in changing physicians' prescribing habits. More active (and costly) methods, such as one-to-one educational outreach, multifaceted interventions, and participatory clinical guideline development, were found to be more effective.35 Other factors that are more difficult to change, such as the management structure of the private insurer employing the physician, were also found to influence prescribing habits.36 Researchers and policy makers have tested many methods for changing clinical practice, and optimal strategies have evolved over the years.37 Overall, changing clinical practice is not a simple or inexpensive process, and requires more than disseminating information and expecting individuals to comb through research studies and find ways to translate the findings into action.

Use of Technology Assessments by Other Governments

Comparative and cost-effectiveness analyses are given explicit roles in some other countries. In a 2001 survey of 11 OECD member countries that use technology assessment,38 such as comparative effectiveness or cost-effectiveness analysis, three countries (Belgium, Italy, and the Netherlands) reported that the goal of using technology assessment was cost containment, three (Belgium, the Netherlands, and Portugal) indicated global budgeting,39 and five (Australia, the Netherlands, Portugal, Sweden, and the U.K.) reported value for money. Ten of the countries reported that a federal agency is responsible for either processing or conducting the assessments. Three of the countries (Australia, Belgium, and France) would only appoint consultants with no links to pharmaceutical manufacturers, while three other countries (the Netherlands, Portugal, and Switzerland) would only appoint consultants with no links to the manufacturer of the drug under review. Three of the countries (Japan, Belgium,40 and the U.K.) noted that the assessment may be completed by the payer, while the other countries indicated that the assessment would only be completed by the product manufacturer. Belgium was the only country that reported that technology assessments reduced total drug expenditures. Italy and Portugal noted that it reduced unnecessary drug use, while Australia, Belgium, and Portugal indicated that it improved the cost-effectiveness of drug prescribing.41 None of the countries reported that lack of co-operation from pharmaceutical companies was an obstacle to obtaining improved results from technology assessment.

Three countries that are often cited as examples of governments using health technology assessments are the U.K., Australia, and Canada. The use of health technology assessments in these countries is summarized in Table 4. Further details about how these three countries use technology assessments can be found in the Appendix.

Table 4. Comparison of the Use of Technology Assessments by Other Governments

Country | Years of existence of technology assessment | Includes cost-effectiveness? | Assesses budgetary impact of product? | How do the results affect coverage or inclusion in a formulary? | Produces new primary data or analyzes existing data?
U.K. | 1999-present | Yes | No | Develops a negative list: NHS will not pay for treatments until NICE determines how, by whom, and when the treatment should be used | Existing data
Australia | PBAC: 1953-present (a) | Yes | Yes | Develops a positive list: only products with positive results can be added to the formulary | Existing data
Canada | CADTH: 1990-present (b) | Yes | No | No mandatory effect | Existing data

Source: Congressional Research Service analysis.

Notes: NHS = National Health Service; NICE = National Institute for Health and Clinical Excellence; PBAC = Pharmaceutical Benefits Advisory Committee; CADTH = Canadian Agency for Drugs and Technologies in Health.

a. Analysis of comparative effectiveness and cost-effectiveness became mandatory in PBAC in 1993.

b. The agency did not become a permanent entity until 1993.

The Potential Contribution of a New Research Entity

The multitude of current comparative effectiveness research entities in the U.S. raises the question: why would a new entity be needed to conduct more comparative clinical effectiveness research? Many entities are currently conducting comparative effectiveness research, but the results are not centralized, nor are they necessarily in a format that is easy for health care decision makers to use. Thus, one possible reason that could be offered in support of a new research entity is increased efficiency and coordination of research. The entity could also arguably improve researchers' independence and scientific integrity, or generate research on drugs, other health technologies, or services that is not currently being conducted.

Factors that May Influence Its Success

Many factors might influence whether the aforementioned potential gains from a new entity could be realized. Some of these influential factors might be determined when the entity is established. The ideal structure, funding source,42 mission, and authority of the entity may depend upon the intended use of the research. For example, if the entity aims to influence insurers' decisions, then input from these stakeholders may help achieve this goal. If the entity aims to change clinical practice, then decision makers may wish to plan how the new research will help achieve this goal. Additionally, if, as with the OTA, the intended audience is Congress, then there may be some benefit to establishing the entity as a congressional agency. There may also be some efficiency gains to having the entity work with existing agencies in the executive branch, such as AHRQ or NIH.

Other factors influential to the entity's success would more likely be determined by the entity's administration, including:

  • Which treatments would be studied?
  • Would different types of treatment options, such as drugs and surgery, be compared to each other?
  • Would the research methods include systematic reviews, decision models, and observational studies, as well as randomized trials?
  • Who would oversee and review the studies' methods, timing, and clinical endpoints?
  • Which researchers and what expertise would be required to conduct the studies?
  • How would the results be presented and used?
  • How would the political support for the entity be maintained?

The answers to these questions could have repercussions on many interested parties including physicians, patients, payers, manufacturers, researchers, and federal agencies.

For example, the timing of the research could influence the impact of the research, since the amount of information that is known about a treatment differs at different time periods. The issue is that conclusions may need to be modified when more information becomes available in a particular research area. On the one hand, waiting for perfect information on a treatment before conducting any analyses would help curb modifications, but any resulting conclusions may have a limited impact on improving clinical decision making. On the other hand, analyzing areas with imperfect information would be more likely to have a large impact on clinical decision making, but the conclusions may change once more information becomes available. A middle, but imperfect, option would be to re-evaluate conclusions as more information becomes available; while this option would allow all information to be incorporated into decisions in a timely manner, it would also increase the overall costs of technology assessments. A new entity might need to grapple with these types of decisions.

Appendix. Description of U.S. and Non-U.S. Comparative Effectiveness Initiatives and Legislation in the 109th and 110th Congresses

U.S. Initiatives

Academy of Managed Care Pharmacy

In 2001, the Academy of Managed Care Pharmacy (AMCP) promulgated guidelines, known as the AMCP Format for Formulary Submissions, for conducting formulary assessments, including economic evaluations.43 The purpose of the guidelines was to help ensure that any increased utilization of pharmaceuticals and vaccines was based on good scientific evidence and value. The guidelines encouraged pharmaceutical manufacturers to submit a dossier with clinical and economic data from published and unpublished studies, along with an economic model that predicts the costs and health outcomes with the product in the health plan. The dossier also includes a section on the comparative pharmacokinetic and pharmacologic data for other agents commonly used to treat the condition.44 The dossiers have become an industry standard and are used by the pharmacy and therapeutics (P&T) committees of managed care organizations and health insurers.

A recent survey found that managed care organizations had dossiers for 40% of drugs under review for coverage. Fifty-three percent of the dossiers received included budget-impact models and 39.3% included cost-effectiveness or cost-benefit analyses. Less than half of the economic models were deemed adequate by the P&T committees. Nearly two-thirds of survey respondents indicated that P&T committees modified the economic model on the dossier because pharmaceutical manufacturers did not make models directly applicable to the health plan's population.45

Agency for Healthcare Research and Quality

The agency, formerly known as the Agency for Health Care Policy and Research (AHCPR), was established in 1989 as an agency within the Department of HHS. The agency's statutory responsibilities included outcomes research and clinical practice guidelines development. The Medical Treatment Effectiveness Program (MEDTEP) was established in 1989 as part of AHCPR.46 MEDTEP funded effectiveness research, guideline development, database development, and methods of disseminating information.

In 1992, Congress directed AHCPR to incorporate cost-effectiveness information in its technology assessments and clinical practice guidelines. The use of cost-effectiveness and the development of the clinical practice guidelines generated controversy and criticism. Critics included IOM committees, the Government Accountability Office (GAO), the Physician Payment Review Commission (PPRC), the Congressional Office of Technology Assessment (OTA), and an influential lobbying group of orthopedic surgeons.47 The agency was also criticized for its role in the Clinton health care reform plan. In 1995, partially in response to the criticism and concerns, Congress sharply decreased the FY1997 budget for AHCPR by 20%, thereby ending the funding for MEDTEP. From 1995 to 1999, AHCPR received operating funds through annual appropriations.

The agency was reauthorized and renamed the Agency for Healthcare Research and Quality (AHRQ) in 1999, and now has many centers and programs that conduct inter-related research on health care treatments. These include the Centers for Education and Research on Therapeutics (CERTs), the Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) Program, the Evidence-based Practice Centers (EPCs), and the Research Initiative in Clinical Economics (RICE), which conduct pharmaceutical outcomes research, comparative effectiveness research, technology assessments, and economic valuations of health care services and treatments, respectively.

  • Centers for Education and Research on Therapeutics. AHRQ funds pharmaceutical outcomes research through the CERTs, which is a national demonstration program for education and research on the optimal use of drugs, biologicals, and medical devices. AHRQ was given the responsibility of administering the program in 1997 as part of the Food and Drug Administration Modernization Act (FDAMA; P.L. 105-115), and the first centers were funded in 1999. The program is administered as a cooperative agreement by AHRQ in consultation with the FDA. Some of the research is also conducted in partnership with private corporations, such as insurers or pharmaceutical manufacturers. The research compares the health risks, benefits, cost-effectiveness, economic implications, and interactions of treatments.48 Currently, 10 of the 11 centers are affiliated with academic institutions.
  • Developing Evidence to Inform Decisions about Effectiveness Program. Section 1013 of the MMA authorizes AHRQ to conduct and support research on outcomes, comparative clinical effectiveness, and appropriateness of pharmaceuticals, devices, and health care services. The section prohibits the Administrator of CMS from using the data produced under the section to withhold coverage of a prescription drug. Although the section authorized $50 million to be appropriated for the research in 2004, AHRQ has been appropriated $15 million each year for carrying out the research. AHRQ created the DEcIDE program to tackle the responsibilities described in section 1013. Like the EPCs, the DEcIDE centers are primarily based at universities. Unlike the EPCs, the DEcIDE centers do not examine the cost-effectiveness of technologies, but rather focus on health outcomes and comparative clinical effectiveness.49 As of August 2007, the agency has funded 15 projects that evaluate the comparative effectiveness of health care treatments.
  • Evidence-based Practice Centers Program. The purpose of the EPC program is to improve the quality, effectiveness, and appropriateness of health care through technology assessments, evidence reports, and research on the methods for systematic reviews. The reports inform public and private insurers' coverage decisions, and are used to develop quality measures, educational materials, guidelines, and research agendas. Cost-effectiveness analysis has been used as a research tool in some of the reports.50 Topics for technology assessments are nominated by AHRQ's non-federal partners51 and assessments tend to be completed in approximately 15 months.52 Thirteen EPCs were awarded five-year contracts with AHRQ in 2002. Three of the Centers (Duke University, ECRI, and Tufts University-New England Medical Center) specialize in technology assessments for CMS to inform national coverage decisions for the Medicare program and provide information to Medicare carriers.53 One center (Oregon Health & Science University), supports the work of the U.S. Preventive Services Task Force.54 Ten of the centers are affiliated with academic institutions and the remainder are private institutions. Ten are based in the U.S., and the other three are based in Canada. As of August 2007, 155 evidence reports had been published by the EPC program.
  • Research Initiative in Clinical Economics. Begun in 2001, RICE funds research on the cost-effectiveness, cost-benefit, and methods for estimating the value of health care interventions. Although focused on cost-effectiveness, this research has not sparked controversy. Unlike other initiatives, this research has not been used for clinical practice guidelines or coverage decisions, and thus has not been explicitly connected to policy recommendations or implementation.

Blue Cross Blue Shield Technology Evaluation Center

The Technology Evaluation Center (TEC) of the BlueCross BlueShield (BCBS) Association has been assessing the relative effectiveness and appropriateness of different technologies since 1985.55 AHRQ designated and funded the TEC as one of its first Evidence-based Practice Centers (EPCs) in 1997, and renewed the designation for an additional five years in 2002.56 The TEC relies upon medical and research employees to conduct the evaluations, under the guidance of its Medical Advisory Panel of clinical experts. The Center produced 17 evaluations in 2005, 14 in 2006, and 4 between January and August 2007. The Center's evaluations focus on the relative effectiveness of technologies, particularly with regard to the effect upon health outcomes, such as length of life, quality of life, and functional abilities. The TEC compares the effectiveness of pharmaceuticals, medical devices, and health services. Cost-effectiveness analyses are mentioned by the TEC as a potential type of special technology assessment the Center may undertake;57 however, since 2005, none of the publicly available assessments have analyzed the cost-effectiveness of technologies. All evaluations use the same criteria, approach for reviewing the evidence, and format for reporting the results. The evaluations are intended to be for informational purposes only and are not characterized as recommendations or guidelines. All completed evaluations are available for free on the Center's website, along with the list of technology evaluations in process.

Consumer Reports' Best Buy Drugs Project

Consumer Reports' Best Buy Drugs is a non-profit project of Consumer Reports that is primarily supported by educational grants.58 The project synthesizes DERP findings in order to provide comparative effectiveness information about drugs to health care consumers and providers, and selects "Best Buy picks" within drug classes; the most influential factor in the selection process is the drug's effectiveness. The summaries include information on effectiveness, safety, and price. The drug prices are national average cash prices.59 The summaries are available for free on the project's website, and are updated as new information becomes available.

DOD PharmacoEconomic Center

The Department of Defense (DOD) PharmacoEconomic Center (PEC) was established in 1992 in response to rising DOD pharmaceutical expenditures.60 Its mission is to "improve the clinical, economic, and humanistic outcomes of drug therapy in support of the readiness and managed healthcare missions of the Military Health System."61 The center performs cost-effectiveness analyses, works with the DOD P&T committee to establish the Tri-Service Drug Formulary list and the National Mail Order Pharmacy formulary list, provides drug treatment guidelines with the VHA, and monitors drugs' use, cost, and pharmacoeconomics within the Military Health System. Some of the evaluations by the PEC are publicly available. The PEC publishes a monthly newsletter called the PEC Update "to educate health care providers and other pharmacy benefit stakeholders about cost-effective drug therapy."62 PEC analyses have shown that the center has improved patient safety and decreased costs for the Military Health System.63

Drug Effectiveness Review Project

In 2001, the Oregon state legislature began commissioning, through the Oregon Medicaid program, Oregon Health & Science University (OHSU) to assess the comparative clinical effectiveness and safety of drugs in clinical practice. The initiative was named the Drug Effectiveness Review Project (DERP).64 OHSU was a logical location for the initiative since, at the time, it was an AHRQ-funded EPC; as such, it completed systematic literature reviews to produce evidence reports and technology assessments. The reviews were viewed as a way to equalize buyers' and sellers' information about heterogeneous products.65 Credibility and transparency of the research were viewed as crucial to the success of the project, and DERP's conflict of interest policy forbids its reviewers from having financial ties to the companies whose products they are evaluating.66

Currently, a team of researchers at OHSU coordinates the reviews with experts in the clinical area. The reviews incorporate a scientific literature review and assessments of the evidence on the effectiveness, safety, and adverse effects of the drugs nationwide, as well as in sub-populations. Cost information is explicitly excluded from the reviews. The reviews only compare drugs within a class, and do not compare the effectiveness of medical devices or health services. The primary audience for the DERP reviews is the state Medicaid programs, which use them to help inform Medicaid drug coverage decisions. Topics are chosen at biannual public meetings with the state pharmacy directors and medical directors from the funding Medicaid programs; the selection is based upon Medicaid expenditures,67 the potential usefulness of a review, and the availability of data. Once a drug class is chosen, a solicitation for information is sent to all pharmaceutical manufacturers with products in the class of interest. Any information provided to DERP by manufacturers is disclosed on request to any interested party.68 DERP also draws upon information from the published and unpublished scientific literature. DERP does not conduct new comparative effectiveness studies and only uses existing information. The possible outcomes of a DERP report are (1) no evidence of differences between drugs, (2) some differences under some circumstances, (3) unclear whether drugs differ, and (4) significant differences between drugs.69 The final reports do not make coverage, payment, or formulary recommendations. Rather, state decision making groups in the funding Medicaid programs use the DERP reports to help reach conclusions about coverage, payment, and formulary status; notably, states may not necessarily reach the same conclusion after reviewing the same DERP report.70 All DERP reports are available for free on the project's website.71 As of August 2007, 13 states (plus the Canadian Agency for Drugs and Technologies in Health) were participating in and financially supporting DERP, which has produced more than 30 reports since October 1, 2003.72

ECRI Institute

Some non-profit organizations, such as the ECRI Institute, also perform health care technology assessments and comparative effectiveness analyses. The clients of these organizations may range from private insurers to government agencies. For example, ECRI is one of AHRQ's EPCs and provides health technology assessments to the DOD TRICARE program, but it also counts the World Health Organization, hospitals, and private insurers as clients.73

For-Profit Firms

Many for-profit firms also perform technology assessments, including comparative effectiveness, cost-effectiveness, or pharmaceutical outcomes research, for their clients, such as pharmaceutical manufacturers and health insurers. Examples of such firms are Hayes Inc. and United BioSource Corporation (formerly known as MEDTAP). Other firms, such as McKesson's InterQual, produce clinical guidelines and decision support criteria for clients.74 Many pharmaceutical manufacturers and private health insurance plans also perform technology assessments in-house. Much of the information produced by such firms may be considered confidential client information and would not be available to the public.

Medicare

As dictated by statute, Medicare pays for medically reasonable and necessary services provided to elderly and disabled individuals. Fundamental questions concern what constitutes medically reasonable and necessary care and under what circumstances the program should cover that care. These determinations are made through two mechanisms: national coverage determinations (NCDs) and local coverage determinations (LCDs). As the terms imply, NCDs are coverage determinations made by CMS and applied across the nation, whereas LCDs are coverage determinations made by local contractors for the Medicare program and applied to limited geographic areas.

An Example of Opposition to Comparative Effectiveness-Based Reimbursement

In 2003, as part of its outpatient hospital rate-setting process, CMS asserted that two anti-anemia drugs, Aranesp and Procrit, were "functionally equivalent" and, as such, should be paid the same Medicare rate, subject to an appropriate conversion ratio. Previously, Medicare had established a specific payment rate for Procrit as a new technology "pass-through." When the FDA approved Aranesp in 2002, the issue of appropriate payment for these drugs arose. CMS justified the functional equivalence label under the rationale that "both products use the same biological mechanism to produce the same clinical result."75 The agency cited its authority under Section 1833(t)(2)(E) of the Social Security Act (SSA) to make an adjustment determined "necessary to ensure equitable payments."76 The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA; P.L. 108-173) amended Section 1833(t)(6) of the SSA to prohibit any additional application of "functional equivalence," with the exception of cases in which it had already been applied. As of January 2006, Medicare payments for Aranesp and Procrit are not based on any determination of "functional equivalence" but instead on the average sales price (ASP) plus 6%, the policy CMS generally uses for separately payable outpatient drugs under the outpatient prospective payment system.
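
As a simple illustration of the ASP-based payment arithmetic described above (the dollar amount is hypothetical and not an actual Medicare rate), the payment per billing unit is

    \text{payment} = \text{ASP} \times 1.06

so a drug with an ASP of $100 per billing unit would be paid approximately $106.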

Medicare has had an uneven history of incorporating the cost-effectiveness of treatments into its coverage determinations. In 1989, the Health Care Financing Administration (HCFA), the predecessor to CMS, issued a Federal Register notice proposing, for the first time, that cost-effectiveness information be included as a component of Medicare coverage decisions.77 The 1989 proposal met substantial opposition from professional and industry groups and, as a result, was never implemented; it was formally withdrawn in 1999.78 In 2000, Medicare issued new guidelines for determining coverage that included demonstrable medical benefit and "added value."79 The medical benefit was to be determined using "evidence-based medicine," which weighs the risks and benefits of a treatment by considering the reliability of the source of the evidence.80 Evidence could include experimental studies, expert opinion, informal studies, logical reasoning from biological knowledge, or clinical guidelines. The term "added value" was defined as adding greater benefit; if two technologies provided equal benefit, then only the lower-cost technology would be covered. Due to opposition to the notion of "added value," the 2000 notice of intent was withdrawn in 2003 and never implemented.81

Since January 2006, CMS has explicitly excluded treatment costs from its NCDs.82 In a 2006 coverage guidance, CMS stated "cost effectiveness is not a factor CMS considers in making national coverage determinations."83 On the same day, CMS issued a guidance on technology assessments that included the statement "cost is not a factor in our review or determination to cover a particular technology."84

A chapter in MedPAC's June 2006 report explored the methodological advantages and disadvantages of using cost-effectiveness analysis in Medicare.85 The chapter notes that the results (and conclusions) of cost-effectiveness studies may vary due to differences in studies' methods, differences in the clinical characteristics of patients, and the timing of the studies. Moreover, the chapter notes that some studies' methods were opaque. The chapter states that considering the clinical effectiveness and cost-effectiveness of treatments might increase the return on society's investment in health care, but that doing so will be most useful when results are comparable across studies and treatments.

CMS continues to use the "evidence-based medicine" approach to evaluate a treatment's health benefit, and studies from AHRQ's EPCs may be included as evidence.86 CMS also is gathering information about the effectiveness of treatments through its Coverage with Evidence Development (CED) program.87 Under CED, CMS may cover a treatment with the condition that providers and patients who use the treatment allow data to be collected about the patient, treatment, and health outcomes. This information would then be evaluated to ensure that the medical care is reasonable and necessary. CED may be required for treatments that (1) are in new drug classes with new mechanisms; (2) may be effective only for subpopulations; (3) may provide clinical benefit for off-label88 uses; or (4) may have substantial consequences for treating the wrong patients.

In contrast to NCDs, CMS believes its contractors have the authority to consider cost and cost-effectiveness in LCDs, specifically with regard to "least costly alternative" policies.89 Least costly alternative policies state that a Medicare contractor will not pay the additional cost of a more expensive item if a clinically comparable item costs less. Such policies have been proposed for nebulizers used for respiratory conditions such as asthma, emphysema, and chronic bronchitis, and have been implemented for Lupron and Zoladex, treatments for advanced prostate cancer.90
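
The logic of a least costly alternative policy can be stated compactly; the following is a stylized sketch of the general rule rather than any contractor's exact payment formula:

    \text{allowed amount} = \min\left(\text{fee for the billed item},\ \text{fee for the least costly clinically comparable alternative}\right)

In other words, the program's payment is capped at the rate for the cheapest item judged clinically comparable, regardless of which item was actually furnished.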

Office of Technology Assessment

Although the Congressional Office of Technology Assessment (OTA) was charged with serving as a nonpartisan source of information about scientific and technical issues for the legislative branch, its analyses were, at times, controversial.91 Part of the reason for the controversy was the agency's explicit inclusion of costs and cost-effectiveness in health technology assessments. Another part was that the work was perceived by some as not timely enough, duplicative of other agencies' efforts, and not necessarily useful to public programs.92 As a result of the controversy, the agency was disbanded in 1995, 23 years after its inception, as part of budget reductions in the 104th Congress. Prior to its disbandment, the OTA was the smallest congressional agency, with fewer than 200 employees and an annual budget of $22 million. The agency released approximately 50 reports each year.93

In creating the agency, Congress took great care to ensure and protect the agency's nonpartisanship and scientific integrity. The OTA was governed by the Technology Assessment Board (TAB), which was composed of six Senators and six Representatives, with equal representation from each party. The TAB appointed the Director of OTA for a six-year term, as well as an advisory council of 10 experts, including the Comptroller General and the Director of the Congressional Research Service. The TAB also reviewed all proposed studies as well as the final reports prior to release. The agency's studies did not recommend a single policy option, but rather laid out different options and projected the possible consequences of each.

The Chairman of any congressional committee, the TAB, and the Director of OTA had the authority to request any technology assessment from OTA; the assessments generally took 1-2 years to complete. Relevant stakeholders and experts were consulted on the reports to provide a diversity of viewpoints and to help shape and critique interim reports. Once they were approved by TAB, the final reports were publicly released.

Oregon Health Plan94

An objective of the Oregon Basic Health Services Act of 1989 was to expand the population covered by Medicaid to all Oregonians with incomes below 100% of the federal poverty level. The additional costs would be managed by covering fewer services. To expand the program in this manner, the Oregon Medicaid program was required to receive a federal waiver of Medicaid statutes.95 Even though increasing the income limits for Medicaid eligibility in this manner is credited with helping to reduce the percentage of uninsured in Oregon from 18% in 1992 to 11% in 1996, controversy surrounded the Oregon Medicaid program's process of selecting which services to cover. The Oregon Health Plan was the first large-scale public attempt to apply cost-effectiveness analysis to set priorities for medical services. The struggles of the program had ripple effects on the use of cost-effectiveness analysis in other settings.

The Act created the Oregon Health Services Commission and charged it with developing a list that ranked medical services by priority. The program initially used cost-effectiveness analysis to develop the prioritized list. The value of the health benefits was determined by assessing (1) Oregonians' community values regarding different treatments, gathered in town meetings; (2) Oregonians' ratings of the desirability of health states (i.e., health-related quality of life); and (3) medical professionals' judgments of the efficacy of different treatments.96 The resulting cost-effectiveness ratios were criticized by some as not reflecting societal values.97 Critics noted, for example, that capping teeth would have been ranked as a higher priority than life-saving surgery for appendicitis.98 Some researchers concluded that cost should not be considered in determining treatment priorities, while others blamed what they saw as counterintuitive rankings on the methods used to measure benefits.99 As a result of the controversy, the Oregon health commissioners re-ranked the treatments, which resulted in rankings that were not necessarily related to treatments' relative costs and benefits and were perceived to be more subjective.100 The revised list included 709 disease-treatment combinations, and the state Medicaid budget allowed the program to cover the costs of combinations 1 through 587.
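
A stylized version of the ranking rule the commission initially applied, offered here as a sketch of the general approach rather than the commission's exact formula, is: for each disease-treatment pair i, compute

    CE_i = \frac{\text{expected cost of treatment}_i}{\text{expected health benefit}_i}

where the benefit term combined the quality-of-life ratings and professional efficacy judgments described above; pairs were then ranked from the lowest ratio (best value) to the highest, and the coverage line was drawn where the Medicaid budget was exhausted.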

In August 1992, the Department of HHS rejected Oregon's waiver application due to concerns about possible discrimination against disabled people through the use of Oregonians' ratings of the desirability of health states; these concerns were supported by an analysis by the Congressional Office of Technology Assessment (OTA).101 In November 1992, the Oregon Medicaid program submitted a new list that did not use ratings of health states. As a result, the new list was not based on cost-effectiveness ratios and instead was perceived by some observers to be more the result of pressure from advocacy groups and the commissioners' judgments.102 The revised plan was approved in March 1993 and implemented in 1994.

The Oregon program's analysis was very different from many other cost-benefit, cost-effectiveness, and comparative effectiveness analyses because treatments for each disease were compared to treatments for entirely different diseases. Most other cost-benefit and cost-effectiveness analyses compare like-to-like—that is, only treatments for the same disease are compared to each other. For example, the costs and benefits of a treatment for colorectal cancer would not typically be compared to a treatment for heart disease, but the Oregon program explicitly compared treatments in this manner.

U.S. Preventive Services Task Force

The U.S. Preventive Services Task Force (USPSTF) was established in 1984 as an independent federal advisory committee under the U.S. Public Health Service and given the responsibility of developing clinical practice guidelines for primary care physicians.103 The USPSTF took what was perceived at the time to be a novel approach by basing the guidelines upon the quality and strength of clinical evidence, rather than simply upon "expert consensus." The task force published its first set of guidelines in 1989 and was reconvened in 1990 and 1998; the 1998 task force incorporated cost-effectiveness analysis in its reviews104 and was financially supported by AHRQ. The USPSTF has also sponsored some cost-effectiveness studies to better inform its clinical practice guidelines.105 The guidelines, in general, focus on the prevention of diseases and compare preventive methods. They do not explicitly compare different drugs or technologies; rather, they compare preventive approaches to a disease, such as screening, preventive medications, and surgical options.

Veterans Health Administration

The Pharmacy Benefits Management Strategic Healthcare Group (PBMSHG) was established within the Veterans Health Administration (VHA) in 1995 to improve the health status of veterans by encouraging the appropriate use of medications.106 To this end, the group compares and publishes analyses of the effectiveness of drugs in the same class and produces clinical practice guidelines and drug monographs, in addition to establishing the Department of Veterans Affairs' (VA) formulary, drug pricing, and contracts.107 Twenty-five drug class reviews were available on the group's website.108 However, the group does not publish the guidelines that it uses in its internally generated economic assessments.

Other Governments' Initiatives

Australia

The Pharmaceutical Benefits Scheme (PBS) is the Australian health care system's program for subsidizing the cost of outpatient prescription medicines for all Australian citizens who are residents.109 The objective of the PBS is "to provide timely access to medicines that Australians need, at a cost individuals and the community can afford."110 Approximately 80% of all prescription medicines in Australia, and more than 90% of outpatient drugs, are subsidized under the PBS. Drugs may be sold in Australia if they pass review by the Australian regulatory authority, which is analogous to the FDA, but patients pay the full cost of drugs that are not listed on the PBS. The PBS formulary includes more than 650 drugs (more than 2,500 items) and covers at least one drug for most medical conditions for which drug therapy is appropriate. To be included in the formulary, a drug must be evaluated by the Pharmaceutical Benefits Advisory Committee (PBAC), an independent panel of experts that recommends to the Minister of Health and Ageing whether a drug should be listed. The Pharmaceutical Benefits Pricing Authority (PBPA) then advises the Minister on negotiating an appropriate price for formulary drugs.111

The PBAC is required to consider the comparative clinical effectiveness and comparative cost-effectiveness of a drug relative to the therapy most likely to be replaced in practice, which may be another drug or a non-drug therapy.112 The existence of one drug on the formulary does not preclude the addition of a similar drug; the number of drugs available to treat a particular condition is not limited. A positive recommendation from the PBAC is a necessary but not sufficient condition for inclusion on the formulary. In other words, the Minister may only add drugs that have received a positive recommendation, but not all drugs that receive a positive recommendation are automatically added. The only circumstance in which a Minister has exercised the right not to add a positively recommended drug occurred in 2002, when the Minister elected not to list Viagra.113 This decision was reportedly made due to concern about the impact of erectile dysfunction treatments on the PBS budget. Accordingly, the Minister simultaneously removed Caverject, at the time the only drug on the PBS for the treatment of impotence.

To make a submission to the PBAC,114 sponsors (usually pharmaceutical manufacturers) are required to undertake a literature review, identify relevant trials, assess the quality of the trials, and aggregate the trial data.115 They are also required to perform a cost-minimization or cost-effectiveness analysis (which may include modeled analyses), the choice between which follows the PBAC's guidelines. The sponsor selects the initial price of its drug for the economic analysis and nominates a comparator drug. If the PBAC does not agree that the nominated comparator is appropriate, it may instruct the sponsor to use a different comparator.

The submitted trials may evaluate either the efficacy or effectiveness of the drug. Head-to-head randomized clinical trials, while preferred, are not mandatory. If such direct comparisons are not available, sponsors may submit two sets of randomized trials, or even non-randomized trials, that use the same reference drug. However, the guidelines note that non-randomized studies often over-estimate the benefit of an intervention, and "claims about the comparative clinical performance that are based solely on data from such sources will be treated with some scepticism."116 Applicant sponsors are required to demonstrate that differences between the comparison treatments are statistically significant as well as clinically important.117
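
When only trials against a common reference drug are available, the comparison is made indirectly. One standard approach, the adjusted (anchored) indirect comparison, is sketched below as an illustration; the PBAC guidelines do not necessarily prescribe this exact estimator. If some trials compare drug A with reference drug C, and other trials compare drug B with C, then on a suitable scale (for example, the log odds ratio) the indirect estimate of A versus B is

    \hat{d}_{AB} = \hat{d}_{AC} - \hat{d}_{BC}, \qquad \text{Var}(\hat{d}_{AB}) = \text{Var}(\hat{d}_{AC}) + \text{Var}(\hat{d}_{BC})

This preserves randomization within each set of trials but relies on the assumption that the trial populations and the use of the reference drug are sufficiently similar across the two sets.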

If a drug does not receive a positive recommendation by the PBAC, then the manufacturer may resubmit the application and provide additional data for the committee's consideration. If no additional data are available, the sponsor may seek an independent review. The independent review may only consider specific issues in dispute and cannot review the PBAC's overall recommendation.118

Canada

The Canadian Agency for Drugs and Technologies in Health (CADTH)119 provides assessments of the effectiveness and efficiency of drugs and health technologies to Canadian health decision makers.120 It is a non-profit organization that was established by the Canadian government on a trial basis in 1990 and became a permanent entity in 1993. The agency is funded on an annual basis by the Canadian government.121

One of CADTH's programs is the Canadian Expert Drug Advisory Committee (CEDAC), which makes recommendations to participating122 publicly financed drug insurance plans regarding inclusion of a new drug in formularies. CEDAC is an independent advisory committee that is accountable to the CADTH Board of Directors. Its recommendations are informed by drug evaluations from the Common Drug Review (CDR), which is an intergovernmental body established in 2003 to evaluate new chemical entities (NCEs) and drug combinations. A CDR evaluation includes the safety, clinical efficacy, therapeutic advantages and disadvantages, and the relative cost-effectiveness of a drug; it does not consider the net cost or budgetary impact of the drug.

Another of CADTH's programs is the Canadian Optimal Medication Prescribing and Utilization Service (COMPUS), which was launched in 2004 and is charged with identifying and promoting best clinical practices. COMPUS does not create new guidelines, but rather critiques and rates the evidence of guidelines produced by others. The most influential factors in determining COMPUS's research priorities are variations in practice, affected patient population size, availability of data on outcomes, and approval by Canadian deputy ministers of health.

United Kingdom

The National Institute for Clinical Excellence (NICE) was established in April 1999. In April 2005, it expanded its responsibilities to include guidance on the prevention of ill health and the promotion of good health and became the National Institute for Health and Clinical Excellence, retaining the acronym NICE.123 Its mission is to advise health care providers in England and Wales on how to use resources effectively and deliver the highest quality of care to National Health Service (NHS) patients.124 NICE is an independent organization that is funded by the Department of Health in England (with contributions from Wales, Scotland, and Northern Ireland) and accountable to the British Parliament. It was established as one of several "arm's-length bodies" within the NHS so as to politically insulate the organization and help it produce intellectually honest, high-quality guidance.125 The total CY2007 budget was £31 million for recurring programs, £4 million for non-recurring programs, and a separate £3.5 million for guidance on technologies.

Among other types of guidelines, NICE produces guidance on new and existing technologies and treatments based on their clinical effectiveness and cost-effectiveness; consideration of cost-effectiveness is required. The cost, utilization, uncertainty, and variation in use of a product are considered critical factors in selecting topics for the guidelines. As of May 2007, NICE had published 119 guidelines for new and existing technologies and 46 for treatments.126 The guidelines play a large role in drug cost reimbursement by the NHS and, as such, are important to pharmaceutical manufacturers. For example, some manufacturers have recently cut the prices of their products due to concern about potentially negative NICE guidelines.127 Investors and Wall Street analysts also tend to be concerned about negative NICE guidelines because of their potential impact on NHS coverage and manufacturers' revenues.128

NICE does not fund new primary research. Rather, its analyses are produced by academics who use existing research, gathered through systematic literature reviews, and statistical modeling to answer the questions posed by NICE. These academics are affiliated with either universities or professional organizations (Royal Colleges). Multidisciplinary committees review the technology and clinical assessments and consult with relevant stakeholders to produce determinations, guidelines, and public health guidance. Determinations can be appealed before NICE releases its final guidance directly to the NHS.

The process and methods are publicly available and accessible, and all guidance is subject to public consultation. Draft recommendations are subject to public appeals from stakeholders. The evidence on which the recommendations are made is publicly available, with the exception of companies' confidential information. NHS managers are expected to fund the mandatory implementation of technology appraisals no later than three months after the guidance is issued. It is recognized that the implementation of other types of guidelines may take longer than three months due to their greater scope.

The cost-effectiveness analysis includes only the costs from the perspective of the public decision maker, and the comparison treatment is the most commonly used alternative treatment. The analyses segment the patient populations by the value and clinical benefit added by the drug. As a result, the majority of the guidelines state whether a technology or treatment is effective, and specify the population in which the product is most cost-effective.
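
The summary measure conventionally reported in cost-effectiveness appraisals of this kind is the incremental cost-effectiveness ratio, shown here in its generic form as an illustration rather than as a description of any particular NICE appraisal:

    \text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}

where C denotes costs from the decision maker's perspective, E denotes health effects (commonly measured in quality-adjusted life years), and a lower ratio indicates that additional health is gained at lower incremental cost.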

NICE does not account for the budget impact or affordability of a new technology.129 Some researchers disagree with this separation and feel that NICE should prioritize its guidance within a fixed budget, or use some other method that helps contain NHS costs.130 Researchers and policymakers have also disagreed about the role cost-effectiveness should play in the NICE appraisal process.131

Table A-1. List of Bills Introduced in the 110th Congress to Conduct Comparative Clinical Effectiveness Research

Date Bill Introduced | Bill Name | Bill Number | Congressman Introducing Bill | Comparative Effectiveness Research the Primary Purpose of the Bill?
Jan. 4, 2007 | Medicare Prescription Drug Price Negotiation Act of 2007 | S. 3 | Sen. Harry Reid (D-NV) | No
Jan. 18, 2007; July 24, 2007 | Healthy Americans Act | S. 334; H.R. 3163 | Sen. Ron Wyden (D-OR); Rep. Brian Baird (D-WA) | No
Jan. 31, 2007 | Food and Drug Administration Safety Act of 2007 | H.R. 788 | Rep. John F. Tierney (D-MA) | No
May 7, 2007 | Enhanced Health Care Value for All Act of 2007 | H.R. 2184 | Rep. Tom Allen (D-ME) | Yes
July 11, 2007 | Josephine Butler United States Health Service Act | H.R. 3000 | Rep. Barbara Lee (D-CA) | No
July 24, 2007 | Children's Health and Medicare Protection Act of 2007 | H.R. 3162 | Rep. John D. Dingell (D-MI) | No

Table A-2. Description and Role of Comparative Effectiveness Research in Bills Introduced in the 110th Congress to Conduct Comparative Clinical Effectiveness Research

Bill Name

Description of Bill and Role of Comparative Effectiveness Research

Medicare Prescription Drug Price Negotiation Act of 2007
(S. 3)

Stated purpose of the bill: To amend part D of title XVIII of the Social Security Act to provide for fair prescription drug prices for Medicare beneficiaries.
Entities instructed to conduct comparative clinical effectiveness research: The Secretary of HHS would develop a list of studies.
Role of entities with regard to comparative clinical effectiveness research, as described in the bill:

  • Prescription drug plans would be required to take relevant comparative clinical effectiveness research into account when developing and reviewing their formularies.
  • The Secretary would be required to develop a prioritized list of studies that are needed.
  • The Secretary would be required to establish an advisory committee to provide advice on setting priorities for comparative clinical effectiveness research.

Healthy Americans Act
(S. 334; H.R. 3163)

Stated purpose of the bill: To provide affordable, guaranteed private health coverage that will make Americans healthier and can never be taken away.
Entity instructed to conduct comparative clinical effectiveness research: Pharmaceutical and medical device manufacturers.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • Pharmaceutical manufacturers would not be allowed to take a tax deduction for advertising or promotion expenses for 3 years after approval of a new drug application (NDA), unless the manufacturer began a comparative effectiveness study.
  • If a pharmaceutical manufacturer were to include evidence of the comparative effectiveness of the drug in the NDA, then the manufacturer would be entitled to the same 6 month market exclusivity extension from the FDA that the manufacturer would have received for conducting additional studies on the effectiveness of the drug in pediatric populations.
  • If a device manufacturer were to include evidence of the comparative effectiveness of the device in their application for premarket approval (PMA), then the manufacturer would be entitled to a 2 year patent extension from the U.S. Patent and Trademark Office.
  • If a device or pharmaceutical manufacturer did not include evidence of the comparative effectiveness of the product in the NDA, then the Secretary of HHS would be permitted to require all promotional material for the drug to include the following disclosure: 'This drug [device] has not been proven to be more effective than other drugs on the market for any condition or illness mentioned in this advertisement.'

Food and Drug Administration Safety Act of 2007
(H.R. 788)

Stated purpose of the bill: To amend the Federal Food, Drug, and Cosmetic Act.
Entity instructed to conduct comparative clinical effectiveness research: The Center for Postmarket Evaluation and Research for Drugs and Biologics—a new center within the FDA that is established by the Act—may instruct manufacturers of drugs and biologics to conduct the studies.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • The Director of the Center would be permitted to require manufacturers of drugs and biologics to conduct post-market studies, including comparative clinical effectiveness studies, as a condition for market approval or any time after the product is approved.
  • The Director would be required to review and publish a list of ongoing postmarketing studies, including comparative clinical effectiveness studies.

Enhanced Health Care Value for All Act of 2007
(H.R. 2184)

Stated purpose of the bill: To amend the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 to expand comparative effectiveness research and to increase funding for such research to improve the value of health care.
Entity instructed to conduct comparative clinical effectiveness research:
AHRQ and the Comparative Effectiveness Advisory Board—a new advisory board established by the Act.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • The Comparative Effectiveness Advisory Board would be charged with advising, organizing, identifying priorities, and making recommendations on comparative effectiveness research, and would be required to submit reports to Congress on its activities.
  • In addition to current comparative effectiveness research responsibilities designated in section 1013 of the MMA, AHRQ would be provided the authority to conduct clinical trials and educate health care providers about comparative effectiveness information.
  • The Act would establish the Health Care Comparative Effectiveness Research Trust Fund for carrying out comparative clinical effectiveness research. The Fund would receive contributions from the Medicare trust fund, private health insurance, and self-insured health plans.

Josephine Butler United States Health Service Act
(H.R. 3000)

Stated purpose of the bill: To establish a United States Health Service to provide high quality comprehensive health care for all Americans and to overcome the deficiencies in the present system of health care delivery.
Entity instructed to conduct comparative clinical effectiveness research: The National Institute of Evaluative Clinical Research—a new institute under the National Health Board, which is established by the Act.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill: The Institute would be required to identify the most effective methods of prevention, diagnosis, and treatment and assist the National Health Board in establishing clinical practice guidelines.

Children's Health and Medicare Protection Act of 2007
(H.R. 3162)

Stated purpose of the bill: To amend titles XVIII, XIX, and XXI of the Social Security Act to extend and improve the children's health insurance program, to improve beneficiary protections under the Medicare, Medicaid, and the CHIP program, and for other purposes.
Entity instructed to conduct comparative clinical effectiveness research:
The Center for Comparative Effectiveness Research, a new center established within AHRQ and overseen by the Comparative Effectiveness Research Commission, a new commission established by the Act.
Role of entities with regard to comparative clinical effectiveness research, as described in the bill:

  • The Center would be required to conduct, support, synthesize, disseminate, and help incorporate into practice clinical research and comparative effectiveness research, including the research authorized under section 1013 of the MMA.
  • The Center would be required to encourage the development of clinical registries and health care data networks.
  • The Center would be required to develop methodological standards.
  • The Research Commission would be required to determine national priorities; monitor the research funds; identify, review, and approve credible research methods and standards; support forums to increase stakeholder awareness and feedback; make recommendations for public data access quality and periodic reviews; appoint a clinical perspective advisory panel; review processes of the Center; provide guidance to health care providers and consumers; recommend strategies for disseminating findings; and submit reports to Congress.
  • The Act would establish the Comparative Effectiveness Research Trust Fund for carrying out comparative clinical effectiveness research. The Fund would receive contributions from the Medicare Trust Fund, private health insurance, and self-insured health plans.

Table A-3. List of Bills Introduced in the 109th Congress to Conduct Comparative Clinical Effectiveness Research

Date Bill Introduced | Bill Name | Bill Number | Congressman Introducing Bill | Comparative Effectiveness Research the Primary Purpose of the Bill?
Jan. 26, 2005 | Medical Innovation Prize Act of 2005 | H.R. 417 | Rep. Bernard Sanders (I-VT) | No
Feb. 28, 2005 | Fair Access to Clinical Trials (FACT) Act | S. 470 | Sen. Christopher J. Dodd (D-CT) | No
June 30, 2005 | | H.R. 3196 | Rep. Henry A. Waxman (D-CA) | 
April 27, 2005 | Food and Drug Administration Safety Act of 2005 | S. 930 | Sen. Charles E. Grassley (R-IA) | No
Nov. 11, 2005 | | H.R. 4429 | Rep. John F. Tierney (D-MA) | 
Sept. 8, 2005 | Medical Advertising Reform Act | H.R. 3696 | Rep. Sherrod Brown (D-OH) | No
Dec. 15, 2005 | National Innovation Act of 2005 | S. 2109 | Sen. John Ensign (R-NV) | No
Jan. 3, 2006 | National Innovation Act of 2006 | H.R. 4654 | Rep. Adam B. Schiff (D-CA) | 
July 25, 2006 | Vaccine Safety and Public Confidence Assurance Act of 2006 | H.R. 5887 | Rep. Dave Weldon (R-FL) | No
July 28, 2006 | Prescription Drug Comparative Effectiveness Act of 2006 | H.R. 5975 | Rep. Tom Allen (D-ME) | Yes

Table A-4. Description and Role of Comparative Effectiveness Research in Bills Introduced in the 109th Congress to Conduct Comparative Clinical Effectiveness Research

Bill Name

Description of Bill and Role of Comparative Effectiveness Research

Medical Innovation Prize Act of 2005
(H.R. 417)

Stated purpose of the bill: To provide incentives for investment in research and development for new medicines, to enhance access to new medicines, and for other purposes.
Entity instructed to conduct comparative clinical effectiveness research: Potential recipients of the Fund for Medical Innovation Prizes—a new fund established by the Act.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • The Board of Trustees for the Fund for Medical Innovation Prizes would be permitted to use the incremental therapeutic benefit of the innovation (either drug, biological product, or manufacturing process), as compared to other existing drugs, products, or processes, as one criterion for selecting recipients and determining prize payments.
  • The Board is composed of the Administrator of CMS, the Commissioner of FDA, the Director of NIH, and the Director of the Centers for Disease Control and Prevention (CDC), as well as representatives of the business sector, private medical research and development sector, and consumer and patient interest groups.

FACT Act
(S. 470;
H.R. 3196)

Stated purpose of the bill: To amend the Public Health Service Act to establish the scope of information required for the data bank on clinical trials of drugs, and for other purposes.
Entity instructed to conduct comparative clinical effectiveness research: AHRQ
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • The Secretary of HHS would be required to deposit in an account, dedicated to funding comparative clinical effectiveness under AHRQ, any fines or sanctions received from manufacturers who do not submit required clinical trial information to the data bank.
  • The Director of AHRQ is required to develop a priority list for the comparative effectiveness research, within six months of the enactment of the Act.

Food and Drug Administration Safety Act of 2005
(S. 930;
H.R. 4429)

Stated purpose of the bill: To amend the Federal Food, Drug, and Cosmetic Act.
Entity instructed to conduct comparative clinical effectiveness research: The Center for Postmarket Evaluation and Research—a new center within the FDA that is established by the Act—may instruct manufacturers of drugs and biologics to conduct the studies.
Role of entities with regard to comparative clinical effectiveness research, as described in the bill:

  • The Director of the Center would be permitted to require manufacturers of drugs and biologics to conduct post-market studies, including comparative clinical effectiveness studies, as a condition for market approval or any time after the product is approved.
  • The Director would be required to review and publish a list of ongoing postmarketing studies, including comparative clinical effectiveness studies.

Medical Advertising Reform Act
(H.R. 3696)

Stated purpose of the bill: To amend the Federal Food, Drug, and Cosmetic Act to require prior approval by the FDA of advertisements for prescription drugs and restricted medical devices, and for other purposes.
Entity instructed to conduct comparative clinical effectiveness research: The Secretary of HHS, acting through the Commissioner of the FDA.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • The Secretary would be required to submit a report proposing inclusion of information concerning comparative effectiveness and comparative cost-effectiveness of prescription drugs in promotional material.

National Innovation Act of 2005
(S. 2109;
H.R. 4654)

Stated purpose of the bill: To provide a national innovation initiative.
Entities instructed to conduct comparative clinical effectiveness research: The Secretary of HHS and the Secretary of Labor.
Role of entities with regard to comparative clinical effectiveness research, as described in the bill:

  • Secretaries would be required to include an assessment of the role "comparative clinical effectiveness research can play in improving quality, value, and efficiency throughout the United States healthcare system" within a study and report on catastrophic healthcare.

Vaccine Safety and Public Confidence Assurance Act of 2006
(H.R. 5887)

Stated purpose of the bill: To direct that vaccine safety monitoring and research focus on active surveillance, researching biological mechanisms for acute and chronic adverse events following vaccination, developing prevaccination screening methods within a framework that is free from actual and perceived biases, and developing a vaccine safety research agenda.
Entity instructed to conduct comparative clinical effectiveness research: The Agency for Vaccine Safety Evaluation—a new agency within the Office of the Secretary of HHS that is established by the Act—may award research grants.
Role of entity with regard to comparative clinical effectiveness research, as described in the bill:

  • The Director of Vaccine Safety Evaluation would be permitted to award grants to conduct comparative effectiveness studies of vaccines if more than one vaccine is licensed for the same disease.
  • If one vaccine is determined to be more effective than another for the same disease, then the Director would be required to make the determination available to the public.

Prescription Drug Comparative Effectiveness Act of 2006
(H.R. 5975)

Stated purpose of the bill: To require the AHRQ, in consultation with the Director of the NIH, to conduct research to develop valid scientific evidence regarding comparative clinical effectiveness, outcomes, and appropriateness of prescription drugs, medical devices, and procedures, and for other purposes.
Entities instructed to conduct comparative clinical effectiveness research:
AHRQ, the NIH, the FDA, and the Secretary of HHS
Role of entities with regard to comparative clinical effectiveness research, as described in the bill:

  • AHRQ comparative effectiveness research could include clinical research or systematic reviews.
  • AHRQ would be required to give particular consideration to supporting research on high volume, high cost, and high risk treatments.
  • The Secretary of HHS would be required to develop a coordinated plan for comparing drug safety with the AHRQ, the NIH, and the FDA.
  • The Director of AHRQ would be required to submit an annual report on the research progress and results to Congress and the Directors of several agencies, and make the research results public.
  • The Act would authorize $100 million to be appropriated for FY2007 for the purpose of carrying out these responsibilities.

Footnotes

1.

For more information, see Elliot S. Fisher et al., "The Implications of Regional Variations in Medicare Spending, Part 1: The Content, Quality, and Accessibility of Care," Annals of Internal Medicine, vol. 138, no. 4 (February 18, 2003), pp. 273-287.

2.

For example, see Earl P. Steinberg and Bryan R. Luce, "Evidence Based? Caveat Emptor!" Health Affairs, vol. 24, no. 1 (January/February 2005), pp. 80-92.

3.

For more information, see the Institute of Medicine, Committee on the Assessment of the U.S. Drug Safety System, "The Future of Drug Safety: Promoting and Protecting the Health of the Public," The National Academies Press, 2007.

4.

For more information, see Testimony of Peter R. Orszag, Director of Congressional Budget Office, before the House Committee on Ways and Means Subcommittee on Health, Research on the Comparative Effectiveness of Medical Treatments: Options for an Expanded Federal Role, 110th Cong., 1st sess., June 12, 2007.

5.

For more information, see Testimony of Mark E. Miller, Executive Director of the Medicare Payment Advisory Commission, before the House Committee on Ways and Means Subcommittee on Health, Producing Comparative-Effectiveness Information, 110th Cong., 1st sess., June 12, 2007.

6.

For more information, see AARP, AARP Solutions Statement: White House Conference on Aging, July 6, 2005, available at http://www.whcoa.gov/about/des_events_reports/aarpjuly6.pdf, accessed August 1, 2007.

7.

Information reported by Mike Lillis, "AHIP Seeks to Inject Comparative Effectiveness, Cost Data Into CMS Decisions," Inside Health Policy, April 19, 2007.

8.

For more information, see John Reichard, "Authoritative Center Urged for Comparing Value of Treatments," CQ HealthBeat, February 14, 2007.

9.

For more information, see Scott Gottlieb, "The War on Expensive Drugs," The Wall Street Journal, August 30, 2007, A11.

10.

For more information, see Robert Goldberg, "Medical Quackery," Washington Times, September 7, 2007.

11.

S. 3 was introduced on January 4, 2007, and passed by the Senate Finance Committee on April 12, 2007. The bill did not receive enough votes to invoke cloture on the Senate floor.

12.

H.R. 2184 was introduced in the House Committee on Ways and Means Subcommittee on Health on May 15, 2007.

13.

H.R. 3162 was passed by the House of Representatives on August 1, 2007.

14.

For more information, see letter from Peter R. Orszag, Director of the Congressional Budget Office, to Chairman Pete Stark, September 5, 2007.

15.

S. 334 was introduced in the Senate Finance Committee on January 18, 2007.

16.

H.R. 3163 was introduced in the House Energy and Commerce, House Education and Labor, House Ways and Means, and House Oversight and Government Reform Committees on July 24, 2007.

17.

For more information, see Marthe R. Gold et al., Cost-Effectiveness in Health and Medicine (Oxford, U.K.: Oxford University Press, 1996).

18.

For more information, see Anthony O'Hagan et al., "Incorporation of Uncertainty in Health Economic Modelling Studies," Pharmacoeconomics, vol. 23, no. 6 (2005), pp. 529-536.

19.

Efficacy and effectiveness may also be referred to as internal and external validity, respectively. Internal validity is measured when a causal relationship is demonstrated between the treatment and health outcome. External validity is measured when a relationship is demonstrated between the treatment and health outcome across different settings, patients, and procedures for administering the treatment. For more information about differences between efficacy and effectiveness research, see the Agency for Healthcare Research and Quality, Criteria for Distinguishing Effectiveness From Efficacy Trials in Systematic Reviews. Technical Review 12, AHRQ-06-0046, April 2006.

20.

A placebo is an inactive treatment given to satisfy a patient's expectation of treatment.

21.

For more information about exclusion criteria in clinical trials, see Harriette G. C. Van Spall et al. "Eligibility criteria of randomized controlled trials published in high-impact general medical journals: a systematic sampling review," JAMA, vol. 297, no. 11 (March 21, 2007), pp. 1233-1240.

22.

For more information, see the FDA's Introduction to Postmarketing Study Commitments, available at http://www.fda.gov/cder/pmc/, accessed April 24, 2007.

23.

For more information, see Alan M. Garber, "Cost-Effectiveness and Evidence Evaluation as Criteria for Coverage Policy," Health Affairs, (May 19, 2004), W4-284-296. Also see Kevin Frick, Steven Kymes, and Gretchen Jacobson, "Cost-Effectiveness: How Do We Avoid Making A Difficult Science Incomprehensible?" Ophthalmic Epidemiology, vol. 11, no. 5 (December 2004), pp. 331-335.

24.

The medical and health policy decision making process consists of layers of evidence. The safety of a treatment is typically the first layer, such that a treatment must be deemed safe before any other factors could be considered. The ordering of the other layers is debated. The efficacy (and effectiveness, if available) of the treatment may or may not be preceded by insurance coverage decisions. Decisions involving costs (i.e., pricing and insurance reimbursement) may or may not be preceded by coverage decisions. Technology assessments may occur at any point in the layers of decision making, since safety profile comparisons, clinical guidelines, comparative effectiveness, cost-effectiveness, or cost-benefit analyses may all be included under the technology assessment umbrella. For more discussion of the role of cost-effectiveness in coverage decisions, see Alan M. Garber, "Cost-Effectiveness and Evidence Evaluation as Criteria for Coverage Policy," Health Affairs, W4 (May 19, 2004), pp. 284-296.

25.

For more information, see Michael E. Drummond et al., Methods for the Economic Evaluation of Health Care Programmes, 3rd ed. (Oxford, U.K.: Oxford University Press, 2005).

26.

PubMed is a free digital archive of biomedical and life sciences journal literature at the National Institutes of Health and includes over 17 million citations. Participation by publishers in PubMed is voluntary.

27.

Notably, this result does not necessarily imply that academic researchers produce the most comparative clinical effectiveness studies, since academic researchers are much more likely than any other entity to publish their research in the medical literature.

28.

Comorbidities are defined as conditions that exist at the same time as the primary condition in the same patient. For more information, see National Center for Health Statistics (NCHS), "Comorbidities," available at http://www.cdc.gov/nchs/datawh/nchsdefs/comorbidities.htm, accessed August 26, 2007.

29.

For more information, see the Agency for Healthcare Research and Quality, "Hospitalization in the United States," HCUP Fact Book No. 6, (2002). Available at http://www.ahrq.gov/data/hcup/factbk6/factbk6a.htm, accessed August 26, 2007.

30.

As previously mentioned, the studies produced from these initiatives were excluded from the survey of the published medical literature.

31.

For more information, see Michael B. Nichol et al., "Opinions Regarding the Academy of Managed Care Pharmacy Dossier Submission Guidelines: Results of a Small Survey of Managed Care Organizations and Pharmaceutical Manufacturers," Journal of Managed Care Pharmacy, vol. 13, no. 4 (May 2007), pp. 360-371.

32.

For more information, see Department of Defense PharmacoEconomic Center: Estimated Cost Avoidance in DOD MTFs Due to National Pharmaceutical Contracts, Fiscal Years 1999-2002 (Fact Sheet), San Antonio, TX. Department of Defense PharmacoEconomic Center, 2004. Also see Department of Defense PharmacoEconomic Center; PDTS Factoids thru 31 December 2003 (Fact Sheet), San Antonio, TX. Department of Defense PharmacoEconomic Center. 2004.

33.

For more information, see Centers for Medicare and Medicaid Services, "Guidance for the Public, Industry, and CMS Staff: Factors CMS Considers in Opening a National Coverage Determination," April 11, 2006, available at http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=6, accessed August 10, 2007. See also Centers for Medicare and Medicaid Services, "Guidance for the Public, Industry, and CMS Staff: Factors CMS Considers in Commissioning External Technology Assessments," April 11, 2006, available at http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=7.

34.

For more information see Clifton R. Lacy et al., "Impact of Presentation of Research Results on Likelihood of Prescribing Medications to Patients with Left Ventricular Dysfunction," American Journal of Cardiology, vol. 87 (January 15, 2001), pp. 203-207.

35.

For more information, see Sallie-Anne Pearson et al., "Changing Medication Use in Managed Care: A Critical Review of the Available Evidence," American Journal of Managed Care, vol. 9, no. 11 (November 2003), pp. 715-731.

36.

For more information, see Sallie-Anne Pearson et al., ibid.

37.

For more information, see C. David Naylor, "The Complex World of Prescribing Behavior," JAMA, vol. 291, no. 1 (January 7, 2004), pp. 104-106. See also Donald M. Berwick, "Disseminating Innovations in Health Care," JAMA, vol. 289, no. 15 (April 16, 2003), pp. 1035-1040.

38.

For more information about the survey, see Michael Dickson, Jeremy Hurst, and Stephane Jacobzone, "Survey of Pharmacoeconomic Assessment Activity in Eleven Countries," OECD Health Working Papers, No. 4 (2003). Note that not all countries responded to every question.

39.

Global budgeting refers to allocating health systems' financial resources.

40.

Belgium reported the assessment may be completed by either the government or the product manufacturer.

41.

Note that only five of the surveyed countries responded to the latter two questions.

42.

Notably, both pharmaceutical manufacturers and the federal government have been accused of withholding the results of studies they funded. For more information, see Scott Gottlieb, "The War on Expensive Drugs," The Wall Street Journal, August 30, 2007, A11. See also Shankar Vedantam, "Journals Insist Drug Manufacturers Register All Trials; Editors Say That, Otherwise, Studies Will Not Be Published; Goal Is To Ferret Out Suppressed Data," Washington Post, September 9, 2004, p. A02. Also see Barry Meier, "Contracts Keep Drug Research Out Of Reach," New York Times, November 29, 2004, p. A01.

43.

For more information, see Dwight S. Fullerton, Debbie Atherly, and Sean D. Sullivan, "Showing Outcomes and Proving Value Brings Success," Managed Care Interface, vol. 14, no. 6 (2001), pp. 63-65.

44.

For more information, see Jack McCain, "System Helps P&T Committees Get Pharmacoeconomic Data They Need," Managed Care, April 2001, available at http://www.managedcaremag.com/archives/0104/0104.amcp.html, accessed August 15, 2007.

45.

For more information, see Michael B. Nichol et al., "Opinions Regarding the Academy of Managed Care Pharmacy Dossier Submission Guidelines: Results of a Small Survey of Managed Care Organizations and Pharmaceutical Manufacturers," Journal of Managed Care Pharmacy, vol. 13, no. 4 (May 2007), pp. 360-371.

46.

For more information about MEDTEP, see Claire W. Maklan, Richard Greene, and Mary A. Cummings, "Methodological Challenges and Innovations in Patient Outcomes Research," Medical Care, vol. 32, no. 7 (July, 1994), pp. JS13-JS21.

47.

For more information, see Bradford H. Gray, Michael K. Gusmano, and Sara R. Collins, "AHCPR and the Changing Politics of Health Services Research," Health Affairs, 2003, W283-307. See also testimony of Government Accountability Office (GAO) Director of Health Financing and Public Health Issues Sarah F. Jagger, in the U.S. Congress, House of Representatives, Committee on Ways and Means, Subcommittee on Health, Practice Guidelines: Overview of Agency for Health Care Policy and Research Efforts, 104th Cong., 1st sess., July 25, 1995, GAO/T-HEHS-95-221, available at http://archive.gao.gov/t2pbat1/154801.pdf. See also Physician Payment Review Commission, Annual Report to Congress (Washington: PPRC, 1995). See also U.S. Office of Technology Assessment, Identifying Health Technologies That Work: Searching for Evidence, Pub. No OTA-H-608 (Washington: U.S. Government Printing Office, September 1994).

48.

For more information about CERTs, see Agency for Healthcare Research and Quality (AHRQ), "Centers for Education and Research on Therapeutics Fact Sheet," available at http://www.ahrq.gov/clinic/certsovr.pdf, accessed August 15, 2007.

49.

For more information on DEcIDE centers, see Agency for Healthcare Research and Quality (AHRQ), "Overview of the DEcIDE Research Network," available at http://effectivehealthcare.ahrq.gov/decide/index.cfm, accessed August 15, 2007.

50.

For more information, see Mark Helfand, "Incorporating Information About Cost-Effectiveness Into Evidence-Based Decision Making," Medical Care, vol. 43, no. 7 suppl. (July 2005), pp. II-33 to II-43.

51.

For more information about the procedures for topics nominations, see AHRQ, "EPC Topic Nomination & Selection," available at http://www.ahrq.gov/clinic/epc/epctopicn.htm, accessed August 15, 2007.

52.

Agency for Healthcare Research and Quality (AHRQ), "EPC Topic Nomination & Selection," available at http://www.ahrq.gov/clinic/epc/epctopicn.htm, accessed August 15, 2007.

53.

For more information, see AHRQ, "Technology Assessments," available at http://www.ahrq.gov/clinic/techix.htm, accessed August 15, 2007.

54.

For more information, see Wilhelmine Miller, "Value-Based Coverage Policy in the United States and the United Kingdom: Different Paths to a Common Goal," National Health Policy Forum, November 29, 2006. See also AHRQ, "U.S. Preventive Services Task Force (USPSTF)," available at http://www.ahrq.gov/clinic/uspstfix.htm, accessed August 15, 2007.

55.

For more information, see BlueCross BlueShield Association, "Technology Evaluation Center;" available at http://www.bcbs.com/betterknowledge/tec/, accessed August 6, 2007.

56.

For more information, see Agency for Healthcare Research and Quality, "Blue Cross and Blue Shield Association, Technology Evaluation Center (TEC)," available at http://www.ahrq.gov/clinic/epc/bcbsatec.htm, accessed August 6, 2007.

57.

For more information, see BlueCross BlueShield Association, "What is the Technology Evaluation Center?" Available at http://www.bcbs.com/betterknowledge/tec/what-is-tec.html, accessed August 6, 2007.

58.

For more information, see Consumer Reports Best Buy Drugs at http://www.crbestbuydrugs.org/aboutus.shtml, accessed August 7, 2007.

59.

The project notes that, since prices vary widely across communities and the nation, the prices may not reflect what health care consumers pay at the pharmacy counter.

60.

For more information, see Kevin Ridderhoff and Daniel Remund, "The Department of Defense Pharmacy Benefit Management Program," Military Medicine, vol. 170, no. 4 (April 2005), pp. 302-304.

61.

For more information, see The Department of Defense Pharmacoeconomic Center at http://www.pec.ha.osd.mil/, accessed August 16, 2007.

62.

For more information, see Department of Defense Pharmacoeconomic Center at http://www.pec.ha.osd.mil/mission.htm, accessed August 28, 2007.

63.

For more information, see Department of Defense PharmacoEconomic Center: Estimated Cost Avoidance in DOD MTFs Due to National Pharmaceutical Contracts, Fiscal Years 1999-2002 (Fact Sheet), San Antonio, TX. Department of Defense PharmacoEconomic Center, 2004. Also see Department of Defense PharmacoEconomic Center; PDTS Factoids thru 31 December 2003 (Fact Sheet), San Antonio, TX. Department of Defense PharmacoEconomic Center. 2004.

64.

For more information on DERP, see Oregon Health & Science University, "Drug Effectiveness Review Project," available at http://www.ohsu.edu/drugeffectiveness/reports/final.cfm. See also Mark Gibson and John Santa, "The Drug Effectiveness Review Project: An Important Step Forward," Health Affairs, vol. 25 (June 6, 2006), pp. w272-w275. Also see Peter J. Neumann, "Emerging Lessons From the Drug Effectiveness Review Project," Health Affairs, vol. 25 (June 2006), pp. w262-w271.

65.

Information from a telephone conversation with John Santa, medical director of DERP at the Center for Evidence-based Policy at OHSU.

66.

The DERP communication, conflict of interest, oversight, and decision-making policies are available at http://www.ohsu.edu/drugeffectiveness/process/index.htm, accessed August 7, 2007.

67.

According to John Santa, medical director of DERP, the 30 drug classes with completed reviews represent 70%-80% of Medicaid prescription drug expenses.

68.

For more information on the DERP review process, see Mark Gibson and John Santa, "The Drug Effectiveness Review Project: An Important Step Forward," Health Affairs, vol. 25 (June 6, 2006), pp. w272-w275.

69.

For more information on the DERP review process, see Mark Gibson and John Santa, "The Drug Effectiveness Review Project: An Important Step Forward," Health Affairs, vol. 25 (June 6, 2006), pp. w272-w275.

70.

Information from a telephone conversation with John Santa, medical director of DERP at the Center for Evidence-based Policy at OHSU.

71.

For examples of summaries by AARP and Consumer Reports, see AARP, "Cost & Availability," available at http://www.aarp.org/health/comparedrugs/, and Consumer Reports, "Best Buy Drugs," available at http://www.crbestbuydrugs.org/aboutus_faqs.html.

72.

For more information, see Drug Effectiveness Review Project, "Description," available at http://www.ohsu.edu/drugeffectiveness/description/index.htm, accessed August 8, 2007.

73.

For more information, see AHRQ, "ECRI: Evidence-based Practice Center," available at http://www.ahrq.gov/clinic/epc/ecriepc.htm, accessed August 16, 2007.

74.

For more information, see InterQual History, available at http://www.interqual.com/IQSite/about/history.aspx, accessed August 28, 2007.

75.

For more information, see CMS, "Fact Sheet: Payment for Epogen, Procrit, and Aranesp Under The Medicare Outpatient Prospective Payment System (OPPS)," October 31, 2002.

76.

For more information, see CMS, "Fact Sheet: Payment for Epogen, Procrit, and Aranesp Under The Medicare Outpatient Prospective Payment System (OPPS)," October 31, 2002.

77.

See Centers for Medicare and Medicaid Services (CMS), "Criteria and Procedures for Making Medical Services Coverage Decisions that Relate to Health Care Technology," 54 Fed. Reg. 4302 (January 30, 1989). For more information, also see Wilhelmine Miller, "Value-Based Coverage Policy in the United States and the United Kingdom: Different Paths to a Common Goal," National Health Policy Forum, November 29, 2006.

78.

For more information, see CMS, "Procedures for Making National Coverage Decisions," Federal Register, vol. 64, no. 22 (April 27, 1999).

79.

For more information, see CMS, "Criteria for Making Coverage Decisions," Federal Register, vol. 65, no. 95 (May 16, 2000).

80.

For more information, see Sean R. Tunis and Jeffrey L. Kang, "Improvements in Medicare Coverage of New Technology," Health Affairs, vol. 20, no. 5 (September/October 2001), pp. 83-85.

81.

For more information, see Sean R. Tunis and Jeffrey L. Kang, "Improvements in Medicare Coverage of New Technology," Health Affairs, vol. 20, no. 5 (September/October 2001), pp. 83-85. Also see CMS, "Notice of Revised Process for Making National Coverage Determinations," Federal Register, vol. 68, no. 187 (September 26, 2003), pp. 55634-55641.

82.

An exception to this is the CMS decision to apply an "inherent reasonableness" payment policy for services other than physician services. For more information, see CMS, "Medicare Program; Application of Inherent Reasonableness Payment Policy to Medicare Part B Services (Other Than Physician Services)," 42 CFR Part 405, December 13, 2005.

83.

The coverage guidance is available at Centers for Medicare and Medicaid Services, "Guidance for the Public, Industry, and CMS Staff: Factors CMS Considers in Opening a National Coverage Determination," April 11, 2006; available at http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=6, accessed August 10, 2007.

84.

The guidance on technology assessments is available at Centers for Medicare and Medicaid Services, "Guidance for the Public, Industry, and CMS Staff: Factors CMS Considers in Commissioning External Technology Assessments," April 11, 2006; available at http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=7.

85.

For more information on the report, see Medicare Payment Advisory Commission, "Chapter 10: Medicare's Use of Clinical and Cost-effectiveness Information," Report to the Congress: Increasing the Value of Medicare, June 2006.

86.

For an example of an evaluation of the relationship between Medicare coverage and medical evidence, see Peter J. Neumann et al., "Medicare's National Coverage Decisions, 1999-2003: Quality of Evidence and Review Times," Health Affairs, vol. 24, no. 1 (January/February 2005), pp. 243-254.

87.

For more information, see Sean R. Tunis and Steven D. Pearson, "Coverage Options for Promising Technologies: Medicare's 'Coverage with Evidence Development'," Health Affairs, vol. 25, no. 5 (September/October 2006), pp. 1218-1230. See also CMS, "National Coverage Determinations with Data Collection as a Condition of Coverage: Coverage with Evidence Development," July 12, 2006. Available at http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=8, accessed September 9, 2007.

88.

Off-label refers to the use of treatments for a purpose other than one of the indications for which the product was approved by the FDA.

89.

Notably, there are many procedural differences between national coverage determinations and local coverage determinations, one of which is that local coverage determinations can be overturned by an administrative law judge. For more information on how local coverage determinations can be made, and least costly alternative policies, see CRS Report RS22495, Medicare Durable Medical Equipment: Proposed Payment Changes for Certain Inhalation Medications, by [author name scrubbed].

90.

For more information, see the Department of Health and Human Services, Office of Inspector General, Medicare Reimbursement for Lupron, January 2004, OEI-03-03-00250.

91.

For more information about OTA, see Woodrow Wilson School of Public and International Affairs, "The OTA Legacy," available at http://www.wws.princeton.edu/ota/, accessed August 14, 2007.

92.

For a more detailed history of the politics surrounding OTA, see Gregory C. Kunkle, "New Challenge or the Past Revisited? The Office of Technology Assessment in Historical Context," Technology in Society, vol. 17, no. 2 (1995), pp. 175-196. Available at http://www.wws.princeton.edu/ota/ns20/nchal_f.html, accessed August 28, 2007.

93.

For more information, see Warren E. Leary, "Congress's Science Agency Prepares to Close Its Doors," New York Times, September 24, 1995, p. 26. Available at http://www.wws.princeton.edu/ota/ns20/nyt95_f.html, accessed September 5, 2007.

94.

For a more detailed history of the Oregon Health Plan, see Jonathan Oberlander, "Health Reform Interrupted: The Unraveling of the Oregon Health Plan," Health Affairs, vol. 26, no. 1 (2007), pp. w96-w105.

95.

This was a Section 1115 Medicaid demonstration project.

96.

For more information, see Robert M. Kaplan, "Value Judgment in the Oregon Medicaid Experiment," Medical Care, vol. 32, no. 10, pp. 975-988.

97.

For more information, see David M. Eddy, "Oregon's Methods: Did Cost-Effectiveness Analysis Fail?" JAMA, vol. 266, no. 15 (October 16, 1991), pp. 2135-2141.

98.

For more information, see David C. Hadorn, "Setting Health Care Priorities in Oregon: Cost-Effectiveness Meets the Rule of Rescue," JAMA, vol. 265, no. 17 (May 1, 1991), pp. 2218-2225.

99.

For more information about the methods used for measuring health benefits, see Erik Nord, "Unjustified Use of the Quality of Well-Being Scale in Priority Setting in Oregon," Health Policy, vol. 24 (1993), pp. 45-93. See also David M. Eddy, "Oregon's Methods: Did Cost-Effectiveness Analysis Fail?" JAMA, vol. 266, no. 15 (October 16, 1991), pp. 2135-2141. For a discussion of why costs should not be included, see David C. Hadorn, "Setting Health Care Priorities in Oregon: Cost-Effectiveness Meets the Rule of Rescue," JAMA, vol. 265, no. 17 (May 1, 1991), pp. 2218-2225. See also Erik Nord, et al., "Who Cares About Cost? Does Economic Analysis Impose Or Reflect Social Values?" Health Policy, vol. 34 (1995), pp. 79-94.

100.

For more information, see David M. Eddy, "Oregon's Methods: Did Cost-Effectiveness Analysis Fail?" JAMA, vol. 266, no. 15 (October 16, 1991), pp. 2135-2141. For an example of an analysis of treatments' net benefit versus ranking, see Robert M. Kaplan, "Value Judgment in the Oregon Medicaid Experiment," Medical Care, vol. 32, no. 10, pp. 975-988.

101.

For more information about the OTA analysis, see U.S. Congress, Office of Technology Assessment, Evaluation of the Oregon Medicaid Proposal, OTA-H-531, May 1992. Available at http://www.wws.princeton.edu/ota/ns20/pubs_f.html. Last accessed September 6, 2007. For another analysis of the intersection between the Oregon Health Plan and the Americans with Disabilities Act (ADA), see [no authors listed], "The Oregon Health Care Proposal and the Americans with Disabilities Act," Harvard Law Review, vol. 106, no. 6 (April 1993), pp. 1296-1313. For a discussion of whether the rankings were biased against the disabled, see Paul T. Menzel, "Oregon's Denial: Disabilities and Quality of Life," The Hastings Center Report, vol. 22, no. 6 (November 1992), pp. 21-25. See also Alexander Morgan Capron, "Oregon's Disability: Principles or Politics?" The Hastings Center Report, vol. 22, no. 6 (November 1992), pp. 18-20.

102.

For more information, see Robert M. Kaplan, "Value Judgment in the Oregon Medicaid Experiment," Medical Care, vol. 32, no. 10, pp. 975-988. Also see James F. Blumstein, "The Oregon Experiment: The Role of Cost-Benefit Analysis In the Allocation of Medicaid Funds," Social Science and Medicine, vol. 45, no. 4, August 1997, pp. 545-554. See also Tammy O. Tengs et al., "Oregon's Medicaid Ranking and Cost-Effectiveness: Is There Any Relationship?" Medical Decision Making, vol. 16, no. 2, April-June 1996, pp. 99-107. For an example of a discussion of the ethics of the Oregon experience, see Norman Daniels, "Is the Oregon Rationing Plan Fair?" JAMA, vol. 265, no. 17 (May 1, 1991), pp. 2232-2235.

103.

For more information, see Steven H. Woolf and David Atkins, "The Evolving Role of Prevention in Health Care: Contributions of the U.S. Preventive Services Task Force," American Journal of Preventive Medicine, vol. 20, suppl. 3 (2001), pp. 13-20. See also AHRQ, "CERTs Overview," available at http://www.ahrq.gov/clinic/certsovr.htm, accessed August 15, 2007.

104.

For more information on the 1998 USPSTF, see Eileen Salinsky, "Clinical Preventive Services: When Is the Juice Worth the Squeeze?" National Health Policy Forum, Issue Brief No. 806, August 24, 2005, available at http://www.nhpf.org/pdfs_ib/IB806_ClinicalPrevServices_08-24-05.pdf. A list of services recommended by the USPSTF is available in the appendix of the issue brief.

105.

Ibid.

106.

For more information, see the Department of Veterans Affairs Pharmacy Benefits Management Strategic Healthcare Group, available at http://www.pbm.va.gov/default.aspx, accessed August 15, 2007.

107.

For more information, see the Veterans Affairs Pharmacy Benefits Management Strategic Healthcare Group, available at http://www.pbm.va.gov/DrugClassReviews.aspx, accessed August 15, 2007.

108.

Ibid.

109.

For more information, see Medicare Australia at http://www.medicare.gov.au/yourhealth/our_services/apbs.shtml, accessed August 15, 2007.

110.

Australian Pharmaceutical Advisory Council, National Medicines Policy (Canberra: Commonwealth Department of Health and Aged Care, 1999).

111.

Drugs with acceptable cost-effectiveness ratios are generally recommended for listing at the price selected by the manufacturer.

112.

For more information, see Australia Commonwealth Department of Health and Ageing, "2006 Guidelines for the Pharmaceutical Industry on Preparation of Submissions to the Pharmaceutical Benefits Advisory Committee," November 2006. Available at http://www.health.gov.au/internet/wcms/publishing.nsf/content/pbac_guidelines, accessed September 10, 2007.

113.

For more information, see Australian Government Department of Health and Ageing, "Government Rejects Viagra Listing on PBS," press release, February 13, 2002.

114.

Note that this description of the process and requirements applies only to brand-name drugs.

115.

For more information, see Australia Commonwealth Department of Health and Ageing, "2006 Guidelines for the Pharmaceutical Industry on Preparation of Submissions to the Pharmaceutical Benefits Advisory Committee," November 2006. Available at http://www.health.gov.au/internet/wcms/publishing.nsf/content/pbac_guidelines, accessed September 10, 2007.

116.

Ibid., p. 188.

117.

For more information, see Australia Commonwealth Department of Health and Ageing, "2006 Guidelines for the Pharmaceutical Industry on Preparation of Submissions to the Pharmaceutical Benefits Advisory Committee," November 2006. Available at http://www.health.gov.au/internet/wcms/publishing.nsf/content/pbac_guidelines, accessed September 10, 2007.

118.

For more information about independent reviews, see Australian Government, "Independent Review (PBS)." Available at http://www.independentreviewpbs.gov.au, accessed September 10, 2007.

119.

The Agency was known as the Canadian Coordinating Office for Health Technology Assessment (CCOHTA) until April 2006.

120.

For more information, see Valerie Paris and Elizabeth Docteur, "Pharmaceutical Pricing and Reimbursement Policies in Canada," OECD Working Papers, no. 24 (2006).

121.

For more information about CCOHTA and CADTH, see the Canadian Agency for Drugs and Technologies in Health, "CCOHTA to CADTH: Our History," available at http://www.cadth.ca/index.php/en/cadth/corporate-profile/history, accessed August 26, 2007.

122.

All Canadian territories and provinces, except for Quebec, participate.

123.

Information based on presentation by Andrea Sutcliffe, Deputy Chief Executive of NICE, May 21, 2007. See also Peter Littlejohns and Mike Kelly, "The Changing Face of NICE: The Same But Different," Lancet, vol. 366, no. 9488 (September 3-9, 2005), pp. 791-794.

124.

For more information, see Steven D. Pearson and Michael D. Rawlins, "Quality, Innovation, and Value for Money: NICE and the British National Health Service," JAMA, vol. 294, no. 20 (November 2005), pp. 2618-2622.

125.

For more information, see the U.K. Office of Public Sector Information, "The National Institute for Clinical Excellence (Establishment and Constitution) Order," Statutory Instrument 1999 No. 220. Available at http://www.opsi.gov.uk/si/si1999/19990220.htm, accessed August 13, 2007.

126.

NICE also produces guidance on the safety and efficacy of procedures, as well as guidance on public health interventions and programs; 216 guidelines on procedures and 5 on public health programs and interventions had been published as of May 2007.

127.

For examples and a discussion about the role of comparative effectiveness research in coverage decisions, see Jonathan Gardner, "Comparative Effectiveness Information: Would the U.S. Use It In a NICE Way?" Health Affairs Blog, June 12, 2007.

128.

For example, see Jacob Goldstein, "U.K. Gov't Could Stop Paying For Drug-Coated Stents," The Wall Street Journal, August 27, 2007.

129.

For more information, see Steven D. Pearson and Michael D. Rawlins, "Quality, Innovation, and Value for Money: NICE and the British National Health Service," JAMA, vol. 294, no. 20 (November 23/30, 2005), pp. 2618-2622.

130.

For more information, see Richard Cookson, David McDaid, and Alan Maynard, "Wrong SIGN, NICE Mess: Is National Guidance Distorting Allocation of Resources?" British Medical Journal, vol. 323 (September 29, 2001), pp. 743-745. See also Alan Maynard, Karen Bloor, and Nick Freemantle, "Challenges for the National Institute for Clinical Excellence," British Medical Journal, vol. 329 (July 24, 2004), pp. 227-229. Also see Anthony Culyer, et al., "Searching for A Threshold, Not Setting One: The Role of the National Institute for Health and Clinical Excellence," Journal of Health Services Research & Policy, vol. 12, no. 1 (January 2007), pp. 56-58.

131.

For more information, see Iestyn Williams, Stirling Bryan, and Shirley McIver, "How Should Cost-effectiveness Analysis Be Used in Health Technology Coverage Decisions? Evidence From the National Institute for Health and Clinical Excellence Approach," Journal of Health Services Research and Policy, vol. 12, no. 2 (2007), pp. 73-79.