Review

Strength of Study Evidence Examined by the FDA in Premarket Approval of Cardiovascular Devices

Sanket S. Dhruva, MD; Lisa A. Bero, PhD; Rita F. Redberg, MD, MSc

Author Affiliations: Department of Medicine (Drs Dhruva and Redberg); Department of Clinical Pharmacy, School of Pharmacy (Dr Bero); Institute for Health Policy Studies, School of Medicine (Dr Bero); and Division of Cardiology (Dr Redberg), University of California, San Francisco.


JAMA. 2009;302(24):2679-2685. doi:10.1001/jama.2009.1899.

Context Medical devices are common in clinical practice and have important effects on morbidity and mortality, yet there has not been a systematic examination of evidence used by the US Food and Drug Administration (FDA) for device approval.

Objectives To study premarket approval (PMA)—the most stringent FDA review process—of cardiovascular devices and to characterize the type and strength of evidence on which it is based.

Data Sources and Study Selection Systematic review of the 123 studies described in summaries of safety and effectiveness data for 78 PMAs for high-risk cardiovascular devices approved between January 2000 and December 2007.

Data Extraction Examination of the methodological characteristics considered essential to minimize confounding and bias, as well as the primary end points of the 123 studies supporting the PMAs.

Results Thirty-three of 123 studies (27%) used to support recent FDA approval of cardiovascular devices were randomized and 17 of 123 (14%) were blinded. Fifty-one of 78 PMAs (65%) were based on a single study. One hundred eleven of 213 primary end points (52%) were compared with controls and 34 of 111 controls (31%) were retrospective. One hundred eighty-seven of 213 primary end points (88%) were surrogate measures and 122 of 157 (78%) had a discrepancy between the number of patients enrolled in the study and the number analyzed.

Conclusion Premarket approval of cardiovascular devices by the FDA is often based on studies that lack adequate strength and may be prone to bias.


Cardiovascular devices are increasing in number, use, complexity, and cost.1,2 In 2008, at least 350 000 pacemakers, 140 000 implantable cardioverter-defibrillators, and 1 230 000 stents were implanted2 (Mike Weinstein, BS, J.P. Morgan Securities; written communication). Although there has been recent scrutiny of evidence used in the US Food and Drug Administration (FDA) drug approval process,3 less attention has been paid to the approval process for medical devices. Medical devices are less likely than drugs to have demonstrated clinical safety before they are marketed,4 and evidence shows that “review performance has begun to decline.”5

Devices are stratified by increasing risk for patients, as class I, II, and III, with stringency of the approval process corresponding to device risk.6 Class III devices, the highest risk, are defined as “usually those that support or sustain human life, are of substantial importance in preventing impairment of human health, or which present a potential, unreasonable risk of illness or injury.”7 The scientific and regulatory review process “to evaluate the safety and effectiveness of class III medical devices” is the premarket approval (PMA), “the most stringent type of device marketing application required by FDA.”8 The PMAs are required for “novel or high risk” devices.9 Class III devices account for 50 to 80 of the 8000 new medical devices marketed each year.4

As clinical use and insurance coverage often quickly follow FDA approval of a device,10 and as medical devices are increasingly being marketed directly to consumers after FDA approval,11,12 it is important to understand thoroughly the study data on which such approval is based. Further, the US Supreme Court's February 2008 decision in Riegel v Medtronic means that FDA approval of a device preempts consumers from suing because of problems with the safety or effectiveness of the device, making this approval a vital consumer protection safeguard.13 Given these issues, it is essential that the study evidence on which device approval is based is of high quality. Ideally, this evidence should consist of randomized, double-blind studies with adequate controls, sufficient duration, and thorough follow-up on prespecified primary end points without bias.14

After a device receives PMA, the FDA makes publicly available its approval order, labeling guidelines, and a summary of safety and effectiveness data (SSED). The SSED “is intended to present a reasoned, objective, and balanced critique of the scientific evidence which served as the basis of the decision to approve or deny the PMA.”15 To our knowledge, this study evidence has not been systematically examined. Therefore, the type and quality of study evidence for devices were analyzed, focusing on cardiovascular devices, because it was expected they would undergo the most stringent approval process given their far-reaching impact on morbidity and mortality and their increasing use.

Data Acquisition

On October 15, 2008, a search of the PMA database was performed16 using the parameters of Advisory Committee: Cardiovascular and Supplement Type: Originals Only. All PMAs with a date received between January 1, 2000, and December 31, 2007, were included and their SSEDs were downloaded (Figure). Data were abstracted from each SSED's “Summary of Clinical Studies” section and from the adverse events section if it described clinical study information presented in the “Summary of Clinical Studies.” We recorded each PMA's number, the device's trade name as it appears in the SSED, the applicant's name as it appears in the SSED, the applicant's name as it appears on the FDA Web site with a link to the SSED, the date the PMA was received by the FDA, and the FDA's decision date. Except when coding surrogate measures, 1 author (S.S.D.) classified all data, which were verified by at least 1 other author (L.A.B. or R.F.R. or both).

Figure. Flowchart of Number of Individual Studies Included in PMAs

PMAs indicates premarket approvals; SSEDs, summaries of safety and effectiveness data.

Number of Studies

All 127 studies listed under the “Summary of Clinical Studies” sections in the SSEDs were included in the analysis. Pooled studies whose data were not presented separately were counted as 1 study, which occurred 4 times and encompassed 8 studies (in P010041, P020040, P000007, and P030047). Therefore, we coded 123 studies. We coded fields of data pertaining to study characteristics considered essential for reducing bias and confounding, as well as several characteristics of primary end points (Box).

Box. Coding of Studies and Primary End Points

Coding for Each Study

Demographic data. Coded as “stated” for each mean age and standard deviation, percentage of male participants, and racial breakdown of participants with the number of patients who represented these data or as “not stated.” If stated, data were recorded in the form presented. If a study reported demographic data but not the number of participants for which these data were available, the data were recorded as not stated. In studies with retrospective controls where demographic data were provided on retrospective patients, they were included along with those of enrolled participants. In 1 instance where 2 studies shared the same control group, its demographics were included only once. In 1 case when median instead of mean age was reported, median was substituted.

Number of enrolled patients. Coded as “stated” or “not stated.” If stated, the number of enrolled patients was recorded. Lead-in, roll-in, or training cases were added to the number enrolled if their number was provided and they had been excluded from analysis.

Randomization. Coded as “yes” if all patients were randomized or “no” if not randomized or if not stated that a study was randomized. If randomization occurred at any time other than enrollment, such as after a period of nonrandomized training/lead-in/roll-in cases, coded as no.

Blinding. Coded as “yes” if double-blind or single-blind stated or “no” if not blinded or not stated that a study had blinding. The specific type of blinding (single or double) also was recorded.

Single-center or multicenter. Coded as “stated” or “not stated” if study was single-center or multicenter. If stated multicenter, the total number of sites was recorded if it was reported.

US study location. Coded as having “all,” “some,” or “none” of their sites in the United States or “not stated.” Studies stating “North American” sites without further localization were coded as not stated.

Coding for Each Primary End Point

Identification. Each primary end point, objective, or outcome (hereafter all are referred to as “end point” for simplicity) was identified. When a study did not explicitly state a primary end point but had 3 or fewer analyzed end points, all of these were designated as primary end points. If there was no discernible primary end point, or if more than 3 end points were analyzed without any being designated as primary, the study was recorded as having no primary end point.

Controls. Coded as “having any (active or retrospective) controls” or “not having controls.”

Retrospective controls. Coded as “having retrospective controls” or “not having retrospective controls.” All studies in which the premarket approval compared a primary end point with “historical controls” were coded as having retrospective controls.

Composite. Coded as “composite, more than 1 component” or “not composite, only 1 component.” Composites were further examined to determine if results for each component were provided. This was coded as “all,” “some,” or “none” of component results listed separately. End points measuring the total adverse events or complications were considered composites.

Surrogate. Coded as “surrogate” or “not surrogate” using the following definition: “a surrogate end point of a clinical trial is a laboratory measurement or a physical sign used as a substitute for a clinically meaningful end point that measures directly how a patient feels, functions, or survives. Changes induced by a therapy on a surrogate end point are expected to reflect changes in a clinically meaningful end point.”1,20 If a composite end point included a surrogate, it was classified as a surrogate end point. This field was coded by 2 authors, and disagreements were resolved by consensus in consultation with the third author.

Training/lead-in/roll-in patients excluded from analysis. Coded as “having lead-in, roll-in, or training cases” if their data were excluded from analysis of the primary end point or “not having such patients.”

Number analyzed. Coded as “stated” the number of patients analyzed for each primary end point or “not stated.” If stated, the number analyzed was recorded and compared with the number enrolled. In studies that reported primary end points as intent-to-treat and in other ways, we recorded the number of intent-to-treat patients. When data for retrospective controls were included in a primary end point analysis, these patients were added to the number analyzed.

Post hoc analysis. Coded as “post hoc analysis” if primary end points or study design were redefined at any point after study initiation, if new patients were added into the study after completion of initial enrollment, if prespecified controls were changed, if the initially designated statistical analysis of a primary end point was changed after study initiation, if the primary end point goal was not met but some post hoc data review justified device approval, or if the Food and Drug Administration deemed the study characteristics no longer applicable at the time of review and approved the device on qualitative merits—or coded as “no post hoc analysis.”

Interpretable. Coded as “interpretable” or “not interpretable.” If not interpretable, the reason for this was coded into 4 categories: no target goal, no statistical analysis, insufficient data, or no results.

Follow-up time. The length of time at which a primary end point analysis was performed was recorded. For primary end points analyzed for time to hemostasis, time to implantation, implant success, procedure success, hospital stay, or at the end of an intervention, “<24 hours” was recorded. When a range of times for follow-up was stated instead of a preplanned time, we recorded the longest follow-up duration. When the longest period was not mentioned, the cumulative follow-up time was divided by the number of patients analyzed for the primary end point to determine an average time of follow-up.

Studies With No Primary End Point

For studies in which no primary end point could be discerned, we coded as “not applicable” in the following fields: number of patients analyzed for primary end point, composite primary end point, surrogate primary end point, follow-up (time) at primary end point analysis, primary end point result interpretable, and post hoc end point analysis.

Data Analysis

For each category, data were tabulated across PMAs, studies, and primary end points. These summary data are presented as numbers (PMAs, studies, or primary end points); percentages of the whole category to which they refer; and mean, standard deviation, and range when applicable. In calculating the mean, standard deviation, and minimum number of end points, we excluded studies with no primary end points. In calculating the number of PMAs not reporting 1 or more US centers, we included studies that did not state their location. Median follow-up times were calculated based on subcategories of cardiovascular devices.
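
The tabulation described above involves only counts, percentages, and basic summary statistics. As a rough illustration of the kind of computation involved, the following sketch (in Python, with invented field names and values rather than the authors' actual coding sheet) shows how such summaries could be produced, including the stated exclusion of studies with no primary end point from the end point counts:

    import statistics

    # Hypothetical abstracted study records; field names and values are
    # illustrative only, not drawn from the actual SSEDs.
    studies = [
        {"randomized": True,  "blinded": True,  "n_endpoints": 3, "category": "stent",      "followup_days": 270},
        {"randomized": True,  "blinded": False, "n_endpoints": 2, "category": "stent",      "followup_days": 365},
        {"randomized": False, "blinded": False, "n_endpoints": 0, "category": "hemostasis", "followup_days": 1},
        {"randomized": False, "blinded": False, "n_endpoints": 1, "category": "graft",      "followup_days": 365},
    ]

    # Percentages are taken over the whole category (here, all coded studies).
    pct_randomized = 100 * sum(s["randomized"] for s in studies) / len(studies)

    # Mean, SD, and range of primary end points per study, excluding studies
    # with no primary end point, as described in the Data Analysis section.
    counts = [s["n_endpoints"] for s in studies if s["n_endpoints"] > 0]
    mean_ep, sd_ep = statistics.mean(counts), statistics.stdev(counts)
    ep_range = (min(counts), max(counts))

    # Median follow-up time, grouped by device subcategory.
    by_category = {}
    for s in studies:
        by_category.setdefault(s["category"], []).append(s["followup_days"])
    median_followup = {c: statistics.median(days) for c, days in by_category.items()}

    print(pct_randomized, mean_ep, sd_ep, ep_range, median_followup)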

Eighty-one cardiovascular PMAs were approved between January 1, 2000, and December 31, 2007. Two SSEDs (P040016, P020035) were not available on the FDA Web site. Two PMAs had identical SSEDs (P030039 and P010022), and those data were included only once. The remaining 78 PMAs, which constituted our analysis set, included 123 studies (eTable). With the exception of closure devices, all devices were either implanted or invasive during their use.

The mean (SD) number of studies stated in SSEDs supporting each PMA was 1.6 (0.9) (range, 1-5 studies). Of the 78 PMAs, 51 (65%) were supported by a single study. Of the 123 studies, only 98 (80%) reported the number of participants enrolled (mean [SD], 308 [284] participants) (Table 1). Both the number enrolled and the number of sites were provided for 80 studies (65%). The median number of patients enrolled per site was 13 (interquartile range, 8-21 patients).
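
The SSEDs do not spell out how the per-site figure was derived; assuming each study's enrollment was divided by its number of sites and the median and interquartile range were then taken across the 80 studies reporting both values, the calculation reduces to the following sketch (Python, with invented enrollment and site counts):

    import statistics

    # Hypothetical (patients enrolled, number of sites) pairs; real values would
    # come from the 80 studies reporting both figures.
    studies = [(308, 23), (150, 10), (48, 4), (620, 40), (96, 12), (210, 30)]

    per_site = sorted(n / sites for n, sites in studies)
    median_per_site = statistics.median(per_site)
    q1, _, q3 = statistics.quantiles(per_site, n=4)  # interquartile range spans q1 to q3

    print(round(median_per_site, 1), (round(q1, 1), round(q3, 1)))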

Demographic Data

Of 123 studies cataloged in SSEDs, mean age was stated in 87 (71%), enrollment by sex in 89 (72%), and enrollment by race in 11 (9%) (Table 1). The mean (SD) age was 62.7 (11.4) years; 66.9% of study participants were male; and 87% were white, 6% African American or black, 5% Hispanic or Latino, and 3% another race or ethnicity.

Strength of Study Design

Of 123 studies in SSEDs, 33 (27%) were randomized and 17 (14%) were blinded (Table 1). Some device groups had a higher proportion of randomized and blinded studies. For example, of the 24 studies for cardiac stents, 13 (54%) were randomized and 11 (46%) were blinded. One hundred ten studies were multicenter, although 20 (18%) did not specify the number of sites. Of studies stating the number of sites, the mean (SD) was 23 (17) sites (range, 1-80 sites).

Follow-up time varied by type of device; the longest median follow-up time for primary end point analysis was for intracardiac devices and endovascular grafts, both at 365 days, and the shortest was for hemostasis devices at 1 day (Table 1).

For SSEDs stating the number of patients enrolled and the number analyzed for each study, there was a discrepancy for mean age in 37 of 74 studies (50%), for number by sex in 37 of 78 studies (47%), and for number by race in 5 of 11 studies (45%).

Primary End Point Characteristics

Of 123 studies, 17 (14%) did not state a primary end point. There were a total of 213 primary end points and a mean (SD) of 2.0 (1.5) end points per study (range, 1-10 end points per study). Thirty of these end points (14%) were designated as primary by us under the coding rule described in the Box, all in cases in which the study did not explicitly state a primary end point.

Of the 213 primary end points, 111 (52%) were compared with controls and of these, 34 (31%) were retrospective controls. Studies without controls were compared with objective performance criteria, which specified safety and/or efficacy targets for the device. One hundred nineteen primary end points (56%) were composites (Table 2).

Table 2. Characteristics of Primary End Points and Data Analyses

Of 213 primary end points, most (187, or 88%) were surrogate end points. Examples of surrogate end points include target lesion revascularization for a coronary stent, primary patency for an endoprosthesis, and lead implant success for an electrophysiology device.

In the SSEDs, there were 157 primary end points for which both the number enrolled and analyzed were stated. Of these, 122 (78%) had a discrepancy between the number enrolled and those analyzed. One hundred thirteen discrepancies (93%) were that more patients were enrolled than analyzed; for these primary end points, a median of 50 patients was enrolled but not analyzed (range, 1-604 patients). These 113 primary end point discrepancies totaled 10 351 patient exclusions, 27% of the total enrolled. In 9 of 122 primary end points (7%), there was a greater number analyzed than enrolled. All of these were due to retrospective controls, meaning that patients from a previous study were included in the PMA analysis, leading to more patients being analyzed than enrolled. For these, the median discrepancy was 238 excess analyzed patients (range, 50-848 patients).
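
The discrepancy tallies above amount to classifying each primary end point by the sign of enrolled minus analyzed and summing the differences. A rough sketch of that bookkeeping follows (Python, with invented enrollment figures; it assumes the 27% denominator is enrollment summed across the affected end points, which the text does not state explicitly):

    import statistics

    # Hypothetical (enrolled, analyzed) pairs for primary end points reporting
    # both numbers; values are invented for illustration.
    endpoints = [(300, 250), (120, 120), (500, 420), (150, 388), (90, 61)]

    excluded = [e - a for e, a in endpoints if a < e]  # enrolled but not analyzed
    excess = [a - e for e, a in endpoints if a > e]    # analyzed > enrolled (retrospective controls)

    enrolled_in_affected = sum(e for e, a in endpoints if a < e)
    share_excluded = 100 * sum(excluded) / enrolled_in_affected

    print(len(excluded), len(excess), statistics.median(excluded),
          f"{share_excluded:.0f}% of enrollment excluded")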

Interpretation of Primary End Point Results

Of 213 primary end points in the SSEDs, the results of 32 (15%) were noninterpretable (Table 2). The most common reason, accounting for 25 of these end points (78%), was that no target goal for device performance was stated; in 1 instance, no results were stated. Forty primary end points (19%) had training, lead-in, or roll-in patients excluded from analyses, and 21 (10%) had a post hoc analysis of the primary end point (Table 2).

In some instances, end points were interpreted to meet their targets when they may have met only a part of them. In 1 PMA, for example, 107 of 226 patients (47.3%) had chronic success in the effectiveness analysis cohort (defined as “no recurrence of clinically relevant monomorphic ventricular tachycardia that were targeted at ablation”17), with a 95% lower confidence bound of 41.7%. The target end point on that same table in the SSED,17 however, is shown to be 50% chronic success with a 95% lower confidence bound of 40%. The SSED explains this discrepancy as follows: “The results demonstrate that the percentage of subjects achieving chronic success (47.3%, 95% lower confidence bound of 41.7%) met the protocol end point for chronic success. This is due to the fact that although the point estimate for chronic success was lower than the protocol end point, the 95% lower confidence bound of the estimate was higher than the protocol end point.”17
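
The SSED does not state how the 41.7% lower confidence bound was computed; a one-sided 95% normal-approximation bound, sketched below as a plausibility check, gives nearly the same value. Under the SSED's reading, the end point was judged met because this lower bound exceeded the 40% figure listed with the target, even though the 47.3% point estimate fell below the 50% target itself.

    from math import sqrt

    p_hat = 107 / 226                 # observed chronic success rate (~0.473)
    z = 1.645                         # one-sided 95% critical value
    lower = p_hat - z * sqrt(p_hat * (1 - p_hat) / 226)

    print(round(p_hat, 3), round(lower, 3))  # ~0.473 and ~0.418 (SSED reports 41.7%)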

PMA Analyses

Of 78 PMAs, 24 (31%) had at least 1 randomized study and 10 (13%) had at least 1 blinded study (Table 3). Four PMAs (5%) were supported by at least 2 blinded randomized studies. More than one-third (28 PMAs) did not report a study with at least 1 US center (Table 3). Seven PMAs had no primary end point for any study. Of the 71 PMAs with at least 1 primary end point, 16 (23%) used at least 1 retrospective control group and 16 (23%) based device approval at least in part on a post hoc analysis.

Table 3. Characteristics of Premarket Approvals

For example, 1 device was approved wholly on a post hoc analysis for a single subgroup studied in the single preapproval study discussed in the SSED. However, because this subgroup did not meet the prespecified target performance standard, the SSED stated the following: “CDRH [Center for Devices and Radiological Health] determined that the OPCs [objective performance criteria] designated for the overall study population should not be used to evaluate the safety and effectiveness of the device in the AVNRT [Atrioventricular Nodal Reentry Tachycardia] subgroup. As a result, the device system for the proposed indication was evaluated on its merits, as CDRH qualitatively considered the device risk-benefit profile.”18

The evidence presented in the SSEDs for FDA-approved cardiovascular device PMAs from 2000 through 2007 showed that the majority of studies are not blinded or randomized. Blinding for some devices such as left ventricular assist devices is not possible,1 but most cardiovascular device PMAs do not have even 1 blinded or 1 randomized study. Controls are used in a little more than half of studies, and the common use of retrospectively selected controls can introduce bias by allowing for the selection of control groups that favor the device. The vast majority of end points are surrogates, which may not be reliable predictors of actual patient benefit.19 Although surrogate outcomes are attractive because they decrease the time and costs required to do a study, they must be linked to a clinically meaningful end point to be valid.

Composite outcomes are also common, and in cardiovascular trials they have been shown to comprise individual end points that often vary in clinical significance and do not contribute equally to the composite measure.20 The frequent discrepancies between the number of enrolled patients and the number analyzed for primary end points, despite the short follow-up times, may introduce bias because patients with less favorable outcomes may be lost to follow-up, and safety concerns may underlie these missing data. The common practice of excluding data from the training/roll-in/lead-in period also introduces bias because it preferentially excludes patients in whom the device may not be associated with a favorable outcome. In other instances, even when the original study design was not biased, devices are approved on the basis of post hoc analyses of the data, which can introduce bias favoring the device. The PMA is the most rigorous device approval process, and strict standards for cardiovascular devices are expected given their far-reaching effects, permanent nature, and use in critically ill patients.

Study populations should be representative of the patient populations in which these devices will be used. The FDA guidelines stating that data from outside the United States be “applicable to the US population and US medical practice”21 are increasingly important as more trials are conducted internationally. In more than one-third of PMAs, however, we were not able to ascertain that even 1 study had been conducted in the United States. This results in uncertain generalizability of approved medical devices to the US population.22

There are several possible reasons the criteria on which FDA device approval is based appear to be less rigorous than those for drug approvals. First, device approvals are a more recent activity for the FDA, having begun in 1976 with the FDA Device Amendment,6 so the agency has less experience with devices than it does with drugs. Further, the last decade has brought a significant increase in the number and complexity of devices. In addition, on the FDA approval continuum, devices, which are almost always implanted, are between drugs, which have relatively strict criteria for approval, and new surgical operations, which do not require FDA approval.

The importance of the “seal of FDA approval” cannot be overstated. Many manufacturers immediately encourage widespread use of their devices based on FDA approval through direct-to-consumer advertising,11,12 detailing to physicians, and continuing medical education venues. An oft-repeated assertion by sponsors is that FDA approval is sufficient grounds for insurance coverage and rapid dissemination of new devices. This rapid diffusion encourages use beyond the available evidence and overutilization of the health care system.23 The findings in this study raise questions about the quality of data on which some cardiovascular device approvals are based.

There is a balance between getting new drugs and devices to market quickly and ensuring the evidence of benefit is sufficient before FDA approval and marketing. However, the bar for evidence of benefit should be higher for devices because they are implanted and cannot simply be discontinued, as drugs can. In addition, although devices can be lifesaving, they also have great potential for risk and adverse events. For example, after 268 000 implantations in 3 years after approval, the Medtronic Sprint Fidelis implantable cardioverter-defibrillator lead was found to have an increased risk of fracture.24 Further, despite the risks of using this device, lead removal is quite dangerous for patients.25

The importance of FDA device approval is magnified as it preempts consumer lawsuits on device safety. Drug approval by the FDA no longer guarantees preemption26; current legislation in Congress seeks to overturn this inconsistency and allow consumers’ lawsuits to become a part of the regulatory framework for devices.27 Postmarket surveillance is also weak protection because, although postmarket studies are sometimes required, manufacturers are not actively required to seek out device malfunctions, so device-related adverse events are substantially underreported.24,28,29 In addition, although FDA approval may address only a specific, narrow population and indication, physicians may use devices for unapproved indications.6 For example, Medicare data show that 69% of current drug-eluting stent use is “off-label.”30

All of these factors make it critical to public health that FDA device approval require sufficient high-quality evidence to support device safety and effectiveness, as determined in the PMA process. Yet, 65% of PMAs were based on a single study, which suggests that there may not be adequate evidence prior to FDA approval. Another option is to rely more on independent, systematic evidence-based assessments, although these will be hindered by the lack of rigorous clinical trial data and by postapproval disincentives to conducting such studies.31 For example, after FDA approval of a medical device, interventionalists often are reluctant to randomize patients to a medical control group.32

The FDA approval process is an important determinant of health care spending: when the FDA approves devices more quickly, spending on new devices increases.6 Given that health care spending in the United States was 16.2% of the gross domestic product in 200733 and is projected to increase to 31% by 2031, and that the rapid increase in health care spending is attributed principally to new technologies,34 rigorous outcome evaluation prior to approval is critical to increasing the value of US health care expenditures. Fewer than half of medical decisions are supported by firm evidence of effectiveness, and many incentives in the US health care system encourage use of expensive treatments and procedures unrelated to evidence of patient benefit.35 There is a new focus on comparative effectiveness research, which has been allocated $1.1 billion by the Obama administration.36 The success of comparative effectiveness research depends on the use of its principles in FDA reviews.37 Cost containment would likely occur if rigorous clinical effectiveness reviews were used for new drugs and technologies and spending were concentrated on devices shown to benefit patients.38 Cardiovascular and peripheral vascular disease rank second among the primary research areas designated by the Institute of Medicine.39 This study suggests that the FDA device approval process would benefit from such rigorous research, using meaningful clinical outcomes and valid, active (not historical) controls in randomized, blinded studies conducted in populations that reflect the US population in which the devices are intended for use.

A limitation of this study may be that the data source is primarily publicly available SSEDs. However, it is possible that at least some of the data missing from the SSEDs were also omitted from the proprietary reports to the FDA. The SSEDs should contain all data presented to the FDA. If sufficient data are not presented or are inconsistent, the SSED should be checked for thoroughness prior to making a decision and posting on the FDA Web site. Given that the specific stated purpose of SSEDs is to present the basis of the FDA's decision,15 the SSEDs should be a thorough and accurate compilation of the FDA's critique of evidence. Further, SSEDs are the only FDA-reviewed evidence available for clinicians, and they form the sole basis of data that can be used for systematic reviews and guideline development. This study reinforces the need for improved access to complete FDA reviews40 for both pharmaceutical and device data.

The emphasis at the FDA in the last 17 years since the Prescription Drug User Fee Act has been rapid approval of new drugs. This study suggests that the emphasis for the FDA in 2009 and beyond must be approvals based on research that meets rigorous scientific standards for evidence of benefit and lack of harm to patients. To uphold the FDA's mission of ensuring “safe and effective” medical devices, it is essential that high-quality studies and data are available.

Corresponding Author: Rita F. Redberg, MD, MSc, Division of Cardiology, Department of Medicine, 505 Parnassus Ave, Ste M-1180, San Francisco, CA 94143-0124 (redberg@medicine.ucsf.edu).

Author Contributions: Dr Redberg had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Dhruva, Bero, Redberg.

Acquisition of data: Dhruva, Redberg.

Analysis and interpretation of data: Dhruva, Bero, Redberg.

Drafting of the manuscript: Dhruva, Redberg.

Critical revision of the manuscript for important intellectual content: Dhruva, Bero, Redberg.

Study supervision: Redberg.

Financial Disclosures: Dr Redberg reported being a member of the FDA Circulatory System Devices Panel and a member of the California Technology Assessment Forum. No other disclosures were reported.

Additional Contributions: Mark Pletcher, MD, Department of Epidemiology and Biostatistics, UCSF, provided statistical assistance, which was supported by NIH/NCRR grant UL1 RR024131 to the UCSF Clinical and Translational Science Institute. Jeffrey Tice, MD, and Steven Schroeder, MD, Department of Medicine, UCSF, provided helpful comments, and Deborah Airo, BA, Department of Epidemiology and Biostatistics, UCSF, provided editorial assistance. The UCSF Pathway to Discovery in Health and Society also provided assistance. None received compensation for their contribution.



References

1. Muni NI, Zuckerman BD. The process of regulatory review for new cardiovascular devices. In: Antman EM, ed. Cardiovascular Therapeutics: A Companion to Braunwald's Heart Disease. 3rd ed. Philadelphia, PA: Elsevier; 2007.
2. Zhan C, Baine WB, Sedrakyan A, Steiner C. Cardiac device implantation in the United States from 1997 through 2004: a population-based analysis. J Gen Intern Med. 2007;23(suppl 1):13-19.
3. Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med. 2008;5(11):e217.
4. Feigal DW, Gardner SN, McClellan M. Ensuring safe and effective medical devices. N Engl J Med. 2003;348(3):191-192.
5. Medical Device User Fee and Modernization Act of 2002 frequently asked questions. US Food and Drug Administration. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/Overview/MedicalDeviceUserFeeandModernizationActMDUFMA/ucm109208.htm. Accessed July 22, 2009.
6. Maisel WH. Medical device regulation: an introduction for the practicing physician. Ann Intern Med. 2004;140(4):296-302.
7. Device classes: general and special controls. US Food and Drug Administration. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/Overview/GeneralandSpecialControls/default.htm. Accessed July 22, 2009.
8. Premarket Approval device advice: overview. US Food and Drug Administration. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/HowtoMarketYourDevice/PremarketSubmissions/PremarketApprovalPMA/default.htm. Accessed July 22, 2009.
9. Testimony on FDA's medical device program by Michael Friedman, MD, lead deputy commissioner, US Food and Drug Administration, before the House Committee on Commerce, Subcommittee on Health and the Environment [April 30, 1997]. http://www.dhhs.gov/asl/testify/t970430a.html. Accessed July 22, 2009.
10. Yock CA, Yock PG. The drug-eluting stent information gap. Am Heart Hosp J. 2004;2(1):21-25.
11. Boden WE, Diamond GA. DTCA for PTCA: crossing the line in consumer health education? N Engl J Med. 2008;358(21):2197-2200.
12. Mitka M. Direct-to-consumer advertising of medical devices under scrutiny. JAMA. 2008;300(17):1985-1986.
13. Gostin LO. The deregulatory effects of preempting tort litigation: FDA regulation of medical devices. JAMA. 2008;299(19):2313-2316.
14. Higgins JPT, Altman DG, eds. Assessing risk of bias in included studies. In: Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions. http://www.cochrane-handbook.org. Accessed July 22, 2009.
15. PMA application contents: summary of safety and effectiveness data. US Food and Drug Administration. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/HowtoMarketYourDevice/PremarketSubmissions/PremarketApprovalPMA/ucm050289.htm#ssed. Accessed July 22, 2009.
16. Premarket approval: Center for Devices and Radiological Health SuperSearch. US Food and Drug Administration. http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPMA/pma.cfm. Accessed December 1, 2009.
17. Summary of safety and effectiveness data: PMA No. 040036, NaviStar ThermoCool deflectable diagnostic/ablation catheter [August 2006]. http://www.accessdata.fda.gov/cdrh_docs/pdf4/P040036b.pdf. Accessed July 22, 2009.
18. Summary of safety and effectiveness data: PMA No. 020045, cardiac cryoablation catheter and console system. http://www.accessdata.fda.gov/cdrh_docs/pdf2/P020045b.pdf. Accessed November 8, 2009.
19. Fleming TR, DeMets DL. Surrogate end points in clinical trials: are we being misled? Ann Intern Med. 1996;125(7):605-613.
20. Lim E, Brown A, Helmy A, Mussa S, Altman DG. Composite outcomes in cardiovascular research: a survey of randomized trials. Ann Intern Med. 2008;149(9):612-617.
21. Food and drugs: premarket approval of medical devices: research conducted outside the United States. 21 CFR §814.15.
22. Glickman SW, McHutchinson JG, Peterson ED, et al. Ethical and scientific implications of the globalization of clinical research. N Engl J Med. 2009;360(8):816-823.
23. Emanuel EJ, Fuchs VR. The perfect storm of overutilization. JAMA. 2008;299(23):2789-2791.
24. Maisel WH. Semper fidelis: consumer protection for patients with implanted medical devices. N Engl J Med. 2008;358(10):985-987.
25. Medtronic recalls Sprint Fidelis cardiac leads: questions and answers for consumers. US Food and Drug Administration. http://www.fda.gov/ForConsumers/ConsumerUpdates/ucm103022.htm. Accessed November 22, 2009.
26. Wyeth v Levine, 555 US __ (2009).
27. Curfman GD, Morrissey S, Drazen JM. The Medical Device Safety Act of 2009. N Engl J Med. 2009;360(15):1550-1551.
28. Meier B. Maker of heart device kept flaw from doctors. The New York Times. May 24, 2005:1.
29. Maisel WH. Safety issues involving medical devices: implications of recent implantable cardioverter-defibrillator malfunctions. JAMA. 2005;294(8):955-958.
30. Douglas PS, Brennan JM, Anstrom KJ, et al. Clinical effectiveness of coronary stents in elderly persons: results from 262,700 Medicare patients in the American College of Cardiology–National Cardiovascular Data Registry. J Am Coll Cardiol. 2009;53(18):1629-1641.
31. Feldman MD, Petersen AJ, Karliner LS, Tice JA. Who is responsible for evaluating the safety and effectiveness of medical devices? the role of independent technology assessment. J Gen Intern Med. 2008;23(suppl 1):57-63.
32. Furlan AJ, Fisher M. Devices, drugs, and the Food and Drug Administration: increasing implications for ischemic stroke. Stroke. 2005;36(2):398-399.
33. National health expenditures: 2007 highlights. http://www.cms.hhs.gov/NationalHealthExpendData/downloads/highlights.pdf. Accessed July 22, 2009.
34. Technological change and the growth of health care spending: January 2008. Congressional Budget Office. http://www.cbo.gov/doc.cfm?index=8947. Accessed December 2, 2009.
35. Orszag PR, Ellis P. Addressing rising health care costs: a view from the Congressional Budget Office. N Engl J Med. 2007;357(19):1885-1887.
36. Comparative effectiveness research in the USA. Lancet. 2009;373(9665):694.
37. Alexander GC, Stafford RS. Does comparative effectiveness have a comparative edge? JAMA. 2009;301(23):2488-2490.
38. Mongan JJ, Ferris TG, Lee TH. Options for slowing the growth of health care costs. N Engl J Med. 2008;358(14):1509-1514.
39. Iglehart JK. Prioritizing comparative-effectiveness research: IOM recommendations. N Engl J Med. 2009;361(4):325-328.
40. O'Connor AB. The need for improved access to FDA reviews. JAMA. 2009;302(2):191-193.