Original Contribution

NIH Peer Review of Grant Applications for Clinical Research

Theodore A. Kotchen, MD; Teresa Lindquist, MS; Karl Malik, PhD; Ellie Ehrenfeld, PhD

Author Affiliations: Center for Scientific Review, National Institutes of Health, Bethesda, Md.


JAMA. 2004;291(7):836-843. doi:10.1001/jama.291.7.836.

Context Support of research to facilitate translation of scientific discoveries to the prevention and treatment of human disease is a high priority for the US National Institutes of Health (NIH). Nevertheless, a perception exists among clinical investigators that the NIH peer review process may discriminate against clinical research.

Objective To describe recent trends and outcomes of peer review of grant applications to NIH requesting support for clinical research.

Design and Setting Peer review outcomes of grant applications submitted to NIH by MDs were compared with those of non-MDs, and outcomes of applications involving inclusion of human subjects were compared with those not involving human subjects. Analyses were carried out using an inclusive definition of clinical research and after stratifying clinical research into specific categories.

Main Outcome Measures Median priority scores and funding rates.

Results Between 1997 and 2002, on average, 25.2% of total grant applications (ranging from 27 607 to 34 422 per year) were submitted by MDs, and 27.5% of awards (ranging from 8495 to 10 769 awards per year) were made to MDs. Median priority scores (239.0 vs 250.0) and funding rates (31.4% vs 29.1%) reviewed in 2 grant cycles in 2002 were more favorable for MDs than for non-MDs (P<.001). However, median priority scores (254.0 vs 244.0) and funding rates (23.9% vs 28.1%) were less favorable (P<.001) for R01 applications for clinical research (n = 7227 applications) than for nonclinical research (n = 10 209). This trend was most convincingly observed for clinical research categorized as mechanisms of disease (P = .006) or clinical trials and interventions (P = .001). Similar trends were observed for grant mechanisms other than R01. Concerns about safety and privacy of human subjects may have contributed to the less favorable outcomes of clinical research applications.

Conclusion Although physicians compete favorably in the peer review process, review outcomes are modestly less favorable for grant applications for clinical research than for laboratory research.


More than 2 decades ago, before he became the director of the National Institutes of Health (NIH), Wyngaarden1 expressed the concern that the clinical investigator was becoming an "endangered species." This concern was echoed 10 years later by Healy,2 before she became the director of NIH, as well as by many others.3-12 Between 1983 and 1998, the number of physician-scientists nationwide decreased by 22%.13

In 1996, in response to these concerns, Varmus, then director of NIH, impaneled a group of experienced clinical investigators and academic health center administrators to make recommendations that might guide the NIH toward policy changes to alleviate the concerns in the clinical research community.14 Several of the panel's recommendations have been implemented, including increased support of the General Clinical Research Center budget, expanded support of training in clinical research, and the establishment of NIH-sponsored educational debt relief programs for clinical investigators.15-18 The panel also recommended restructuring of NIH peer review groups so that patient-oriented grant applications would be evaluated by study sections in which at least half the grant applications involve patient-oriented research. This recommendation was rooted in an earlier observation by Williams et al,19 based on both priority scores and funding rates, that clinical grant applications do not fare as well in the review process when evaluated by study sections reviewing relatively few clinical applications.

The mandate to translate discoveries in the basic sciences into the clinical arena is widely acknowledged. Nevertheless, there is a continuing perception among clinical investigators that the NIH peer review process may discriminate against clinical research.11 The purpose of this analysis was to describe recent trends and outcomes of peer review for grant applications requesting support for clinical research.

Data for 1997-2001 were provided by the Office of Extramural Research at NIH. In addition, outcomes of reviews of all applications considered during 2 review cycles in 2002 (May and October councils) were analyzed. Comparisons of review outcomes and funding rates were made for the following: applications submitted by physicians vs nonphysicians; applications submitted by physicians with only the MD degree vs physicians with a combined MD/PhD degree; and applications for clinical vs nonclinical projects. A project was defined as clinical if the applicant checked yes on page 1 of the grant application in response to a query about involvement of human subjects (approximately 99% of clinical applications); a small number of additional applications assigned a human subjects code at the time of review were also classified as clinical. This broad definition was selected because it is based on available information. In addition, some data were analyzed for several subcategories of clinical research.

The Center for Scientific Review (CSR) manages the peer review process for approximately 70% of the grant applications submitted to NIH; the remainder are reviewed in peer review panels managed by the various funding institutes and centers at NIH. Similar trends were observed in separate analyses of grant reviews managed by CSR and grant reviews managed by the funding institutes and centers. Thus, unless otherwise specified, only results of pooled data from both CSR-managed and institute- or center-managed reviews are presented. Applications involving contracts (such as used for large, multicenter clinical trials) were not included in these analyses.

Analyses were conducted for all applications combined as well as separately for R01 applications (grant applications submitted by individual investigators), other R applications (academic research enhancement awards, exploratory/development grants, small grant program, small business awards), and applications in the K (clinical research training awards for junior and midcareer faculty), P (program projects and center grants), and F (predoctoral and postdoctoral fellowships) series. "Other" applications included C (research construction programs), D (training programs), G (resource programs), M01 (General Clinical Research Centers), S (research-related programs), T (training programs), and U (cooperative agreements) grant mechanisms.

Priority scores range from 100 (most favorable) to 500 (least favorable). Unscored applications are those applications that are considered to be in the lower 50% in the initial review and are reviewed but not discussed by the review group. For the purpose of determining a median priority score, unscored applications were assigned a dummy score of 501. To standardize scores among review groups for funding purposes, priority scores may be converted to percentile rankings (R01 applications only). This conversion is based on priority scores assigned to applications (including unscored applications) reviewed during the current plus the 2 previous review rounds.
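The scoring conventions described above can be sketched in a few lines. The following Python fragment is illustrative only: the function names and sample scores are invented, and the actual NIH percentile computation operates on a much larger pool spanning the current plus 2 previous review rounds.

```python
import statistics

UNSCORED_DUMMY = 501  # dummy score assigned to unscored (lower-50%) applications


def median_priority_score(scores):
    """Median priority score, with unscored applications (None) counted as 501."""
    filled = [s if s is not None else UNSCORED_DUMMY for s in scores]
    return statistics.median(filled)


def percentile_rank(score, reference_scores):
    """Percentage of applications in the reference pool (current plus 2 prior
    review rounds, unscored applications included) scoring at least as
    favorably, ie, as low or lower. Lower percentiles are better."""
    pool = [s if s is not None else UNSCORED_DUMMY for s in reference_scores]
    at_least_as_favorable = sum(1 for s in pool if s <= score)
    return 100.0 * at_least_as_favorable / len(pool)


# Illustrative pool: three scored applications and one unscored (None).
pool = [150, 240, 380, None]
print(median_priority_score(pool))  # -> 310.0 (median of 150, 240, 380, 501)
print(percentile_rank(240, pool))   # -> 50.0
```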

In a substudy, review outcomes were also compared across different types of clinical research, based in large part on the designations and definitions derived from a number of sources, including a report by Nathan,14 the Institute of Medicine,20 the NIH Director's Panel on Clinical Research,9 the Association of American Medical Colleges and American Medical Association,21 and the Agency for Healthcare Research and Quality.22 All 3599 R01 applications involving human subjects that were submitted to NIH for the October 2002 council were categorized into 1 of the following: (1) patient-oriented studies of mechanisms of human disease (bench to bedside); (2) clinical trials and other clinical interventions; (3) patient-oriented research focusing on development of new technologies; (4) epidemiological studies; (5) behavioral studies (including studies of normal human behavior); (6) health services research; and (7) use of deidentified human tissue. After a brief orientation, categorization of the applications was undertaken by 10 coders, most of whom were members of the review and referral staff at CSR.

To assess interrater reliability for the assignment of specific categories, each of 12 coders initially categorized the same 180 randomly selected applications (5%) in the database. A consensus category was defined as the category assigned to an application by the majority of the coders. Ten of the 12 initial coders concurred with the consensus 71% to 83% of the time. Two coders concurred only 64% and 67% of the time, and all results for these 2 coders were deleted. Their applications were reassigned to another of the coders. Overall, the remaining 10 coders were in perfect or near-perfect agreement on 68% of the 180 R01 applications classified into specific categories by all coders. Interrater agreement among the 10 coders who classified the same 180 applications into a specific category of clinical research was generally moderate, as determined by κ statistic values ranging from 0.36 for behavioral studies to 0.74 for clinical trials/other clinical interventions (mean κ = 0.57).23,24 Intraobserver reproducibility was assessed by including 26 duplicate application abstracts within the full data set categorized by each coder. Based on categories assigned to these 26 duplicates, average reproducibility of the 10 coders was 85.3% (SD, 7.1%; range, 73.1%-96.2%).
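The κ statistic referenced above corrects observed agreement for agreement expected by chance. As an illustration only, the sketch below computes Cohen's κ for two coders; the study itself involved 10 coders and category-specific κ values, which require pairwise or multirater extensions (eg, Fleiss κ).

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders on the same items:
    kappa = (observed - expected) / (1 - expected)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Expected agreement if both coders assigned categories independently
    # at their observed marginal frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)  # undefined if expected == 1


# Two hypothetical coders categorizing 4 applications:
print(cohens_kappa(["trial", "trial", "mech", "mech"],
                   ["trial", "trial", "mech", "mech"]))  # -> 1.0 (perfect)
print(cohens_kappa(["trial", "trial", "mech", "mech"],
                   ["trial", "mech", "trial", "mech"]))  # -> 0.0 (chance level)
```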

Additional analyses were carried out to determine the relationship between outcomes of reviews of clinical applications and the "density" of clinical applications reviewed in a particular study section. The CSR study sections were segregated into the following 4 groups, depending on the percentage of applications within each study section that were designated as clinical applications (grouped as 1%-25%, 26%-50%, 51%-75%, or ≥76%). For this analysis, applications assigned a human subjects exemption code E4 were considered nonclinical applications. This exemption applies to the use of deidentified pathological or diagnostic specimens, as well as review of existing deidentified data as the exclusive involvement of human subjects. Review outcomes for R01 applications for clinical projects were compared across these groups of study sections. Study sections that reviewed no clinical applications were excluded from this analysis.
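The density grouping described above amounts to bucketing each study section by the percentage of its reviewed applications that are clinical. A minimal sketch (the function name and group labels are ours, not CSR's):

```python
def density_group(pct_clinical):
    """Assign a study section to a clinical-density group based on the
    percentage of its reviewed applications designated as clinical
    (applications with an E4 exemption counted as nonclinical). Sections
    reviewing no clinical applications were excluded from the analysis."""
    if pct_clinical <= 0:
        return None
    if pct_clinical <= 25:
        return "1%-25%"
    if pct_clinical <= 50:
        return "26%-50%"
    if pct_clinical <= 75:
        return "51%-75%"
    return ">=76%"


print(density_group(40))  # -> 26%-50%
print(density_group(0))   # -> None (excluded)
```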

For 2-group comparisons, a 2 × 2 Yates corrected χ2 test was used to evaluate the statistical significance of group differences of percentages of unscored applications and percentages of funded applications. The significance of group differences of median priority scores was evaluated with the Mann-Whitney U test. Percentile scores and cumulative priority scores for R01 applications, comparing clinical and nonclinical applications, were evaluated with the Kolmogorov-Smirnov test. For comparisons involving more than 2 groups, statistical significance of differences of percentage unscored and percentage funded applications was evaluated with the Pearson χ2 test, and the significance of differences of median priority scores was evaluated with the Kruskal-Wallis test. Results were accepted as statistically significant at P<.05.
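The authors ran these tests in SPSS; as an illustration of two of them, the pure-Python sketch below computes the Yates-corrected χ² statistic for a 2 × 2 table and the Mann-Whitney U statistic. In practice one would use a statistics library such as scipy.stats, whose chi2_contingency and mannwhitneyu functions also return P values.

```python
def yates_chi2(a, b, c, d):
    """Yates continuity-corrected chi-square statistic for the 2x2 table
    [[a, b], [c, d]], eg, funded/unfunded counts for clinical vs nonclinical
    applications. The P value would come from a chi-square distribution
    with 1 df."""
    n = a + b + c + d
    numerator = n * (abs(a * d - b * c) - n / 2) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator


def mann_whitney_u(xs, ys):
    """One of the two Mann-Whitney U statistics: the number of (x, y) pairs
    with x < y, with ties counted as 0.5. Used to compare priority-score
    distributions between 2 groups."""
    return sum((x < y) + 0.5 * (x == y) for x in xs for y in ys)


# Illustrative numbers only:
print(round(yates_chi2(10, 20, 30, 40), 3))   # -> 0.446
print(mann_whitney_u([150, 240], [380, 501])) # -> 4 (all pairs favor group 1)
```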

When significant differences were observed in multiple-group comparisons, the significance of specific 2-group differences was evaluated with a 2 × 2 Yates χ2 test for percentage unscored and percentage funded applications, and the method of multiple comparisons was used to test specific 2-group differences of median priority scores.25 For these post hoc analyses, P<.01 was required for statistical significance. Nevertheless, all levels of statistical significance should be interpreted cautiously, both because of the number of statistical tests performed and because of appreciable differences in sample sizes among the various comparisons. Analyses were conducted using SPSS software, version 11 (SPSS Inc, Chicago, Ill).

Funding Trends for MD and Non-MD Applicants

The total number of grant applications submitted to NIH increased from 27 607 to 34 422 between fiscal years (October 1 through September 30) 1997 and 2002 (Table 1). The percentage of applications submitted by MDs remained relatively constant at approximately 25%, whereas 27.5% of the awards were to investigators with the MD degree. During this same period, the number of applications submitted by investigators with a combined MD/PhD degree doubled, whereas the number of applicants with only an MD degree increased by 10%. Overall, there was no difference in proportion of awards to applicants with only an MD degree compared with applicants with combined MD/PhD degrees.

Table 1. Number of Applications Submitted and Funding Rates for Fiscal Years 1997-2002
Funding Rates for Clinical Investigators Early in Their Careers

In 1995, 248 MDs and 511 non-MDs who had not received prior NIH funding received their first R01 award. Between 1996 and 2002, 63.3% of the MDs and 61.6% of the non-MDs received a subsequent R01 award (either a competitive renewal or an additional R01 award). Information was also obtained about subsequent funding rates for K08 awardees. The current version of the K08 award (Mentored Clinical Scientist Development Award) was introduced in 1995, and in 1996, 275 MDs received this award. Between 1999 and 2002, 168 of these MD K08 awardees (61%) submitted at least 1 R01 application, and 90 of these applicants (33% of the original K08 awardees and 54% of those who subsequently applied) received R01 funding. For comparison, during the same 4-year period, the percentages of funded initial R01 applications submitted by all MDs (n = 21 800) and by all non-MDs (n = 53 843) were 25.1% and 21.7%, respectively.

Rates of Resubmission and Funding of Amended Applications

Based on combined data from all applications submitted in 1998 and 1999, resubmission rates were separately determined for the following 4 categories of applicants: MDs conducting clinical research, MDs conducting nonclinical research, non-MDs conducting clinical research, and non-MDs conducting nonclinical research. On average, 37% of investigators who were eligible to submit A1 (first revision) and A2 (second revision) applications did so, and resubmission rates of unfunded applications did not differ among the 4 applicant categories. Considering the initial and revised submissions, overall, the percentage of applications funded was higher for MD than for non-MD applicants (46.2% vs 41.7%, respectively; P<.001) and was lower for clinical than for nonclinical applications (40.9% vs 44.1%, respectively; P<.001).

Outcomes of Peer Review for MDs vs Non-MDs

Table 2 compares outcomes of review between MD and non-MD investigators for the May and October 2002 councils. Twenty-six percent of all applications were submitted by MDs, and 55% of MD-submitted applications were in the R01 series. There were relatively small and inconsistent differences of review outcome between MDs and non-MDs. Considering all applications together, median priority scores were more favorable (P<.001) and a higher percentage of applications was funded (P<.001) for MD than for non-MD applicants. Furthermore, considering all applications together, median priority scores were more favorable (P = .004) for applicants with only the MD degree than for applicants with combined MD/PhD degrees (Table 2). Overall, there were no significant differences in percentage of applications funded, comparing MD and MD/PhD applicants.

Table 2. Outcomes of Peer Review for Different Types of Grant Applications*
Outcomes of Peer Review for Clinical vs Laboratory Research

Overall, 43% of all applications submitted for the 2 funding cycles in May and October 2002 (58% of MD applications and 38% of non-MD applications) were for projects that included human subjects. For all applications considered together, and separately in the R01, other R, K, and P series, median priority scores were significantly less favorable for projects that included human subjects than for projects that did not include human subjects (Table 2). Similarly, the median percentile score of clinical R01 applications (32.6 percentile) was less favorable (P<.001) than that for nonclinical R01 applications (30.2 percentile). Priority scores were consistently less favorable (P<.001) for clinical applications throughout the most favorable and, hence, potentially fundable scoring ranges (Figure 1). Considering all applications and applications in the R01 and other R series, significantly smaller percentages of applications were funded for projects with human subjects (P<.001). Smaller percentages of clinical applications were also funded in the F series (P = .01) and the P series (P = .06). These differences of review outcomes, comparing applications including or not including human subjects, were observed among both MD and non-MD investigators (data not shown).

Figure. Percentage of Clinical and Nonclinical R01 Applications, by Priority Score
Peer Review Outcomes Among Clinical Research Categories

All R01 applications in the October 2002 council for projects involving human subjects were stratified into 1 of 7 categories of clinical research. Overall, approximately 50% of applications involving human subjects were categorized as being either mechanisms of disease or clinical trials or interventions (Table 3). For each of these 2 categories, median priority scores were significantly less favorable and lower percentages of applications were funded than for nonclinical applications (P<.02). Similar but non–statistically significant trends were observed for each of the other categories of clinical research, except for research involving deidentified human tissue.

Table 3. Number of R01 Grant Applications and Review Outcomes for Subcategories of Clinical Research vs Nonclinical Research*

Comparing review outcomes between MD and non-MD applicants within each category of clinical research, the only significant difference observed was a more favorable median priority score for MD applicants studying mechanisms of human disease (243.5 for MDs vs 270.0 for non-MDs; P = .04); however, percentage of applications funded did not differ.

Applications With Human Subject Concerns vs Those Without

Human subject concerns may relate to safety, confidentiality, or the appropriate inclusion of women, minorities, and children. When concerns are identified at the time of review, reviewers are advised to reflect these concerns in their assignment of a priority score. Some clinical applications are exempt from human subjects regulations (eg, research conducted in an educational setting involving normal educational practices, research involving the collection of deidentified existing data, research and demonstration projects). Of all applications in the May and October 2002 councils involving human subjects, 19% had human subject concerns and 11% were exempt from regulations for human subjects. Considering all applications together, as well as each type of application, median priority scores were less favorable and the percentages of applications funded were lower for applications with human subject concerns than for those without concerns (Table 4). However, even among R01 applications with no human subject concerns, median priority scores were less favorable (P = .003) and a smaller percentage were funded (P<.001) than R01 applications not involving human subjects.

Table 4. Outcomes of Review for Grant Applications for Clinical Research, by Human Subject Concerns and Exemption Status*
"Density" of Clinical Applications Reviewed by Study Sections

Among clinical R01 applications in the May and October 2002 councils, there were no significant differences of median priority scores or percentage of applications funded when evaluated in study sections in which 1% to 25% of applications considered were clinical applications, compared with review outcomes in study sections reviewing higher densities of clinical applications (Table 5). In each of the 4 density groupings, median priority scores and funding rates were less favorable for clinical than for nonclinical applications (P<.05 for each). Additionally, within each of the 7 specific categories of clinical research (October 2002 council only), there were no significant differences in review outcomes for applications reviewed by study sections reviewing 1% to 25% clinical applications compared with outcomes in study sections reviewing larger percentages of clinical applications.

Table 5. Outcomes of Review of Clinical and Nonclinical R01 Applications Submitted by Both MDs and Non-MDs and Reviewed by Center for Scientific Review Study Sections, by Percentage of Clinical Applications per Study Section*

Consistent with previous reports dating back to 1972,14 physicians fare well in the peer review process. Between 1997 and 2002, the percentage of applications submitted by MDs was relatively constant at approximately 25% of all submissions, and 27% of awards were made to MDs. Similarly, based on both 1998-1999 data and data from the May and October 2002 councils, the overall percentage of applications funded was slightly higher for MD than for non-MD applicants. However, not all applications submitted by MDs are proposals for clinical research. Although the percentage of initial R01 applications funded for the relatively small number of MDs with prior K08 funding was higher than that for all MD and PhD applicants, the attrition rate of K08 awardees (almost 40% of K08 awardees failed to subsequently apply for R01 awards) may indicate that additional strategies will be required to facilitate the continued development of clinical investigators. Other recent analyses also indicate a relatively high attrition rate for NIH-funded clinical investigators.18

During 2 funding cycles in 2002, both median priority scores and percentage of funded grant applications tended to be less favorable for projects involving human subjects than for projects not involving human subjects. This difference was small but consistent and was observed for applications submitted by both MDs and non-MDs. However, defining clinical research solely on the basis of inclusion of human subjects encompasses a diversity of applications, including applications for studies in which the only clinical contact is use of human tissues or cells as well as other categories of exempt applications. Consequently, this definition of clinical research may underestimate the difference in review outcomes between clinical and laboratory research. Similarly, based on this same definition of clinical research, according to a 1994 Institute of Medicine report, between 1977 and 1991, priority scores for R01 applications were less favorable for clinical than for laboratory research.20

Identification of the type of clinical research could potentially provide a more accurate and targeted approach for tracking outcomes of peer review and funding levels. Based on results of a pilot study to stratify one funding cycle of R01 applications for clinical research into component subtypes, statistically significant less favorable outcomes were observed for the categories of mechanisms of disease and clinical trials/interventions. Similar trends were also noted for most of the other categories with smaller sample sizes. Although informative, these results should be interpreted cautiously. The study was based on a single grant review cycle. Stratification of clinical research into component subtypes proved to be a time-consuming and somewhat arbitrary process. Only a single category was allowed for each application, whereas different aims of a project might appropriately fit into different research categories. Concurrence of category assignments among observers was generally only moderate and, perhaps, could be improved by additional training of coders and further clarification of the operational definitions of each category. Nevertheless, these observations highlight the challenge in developing a reliable classification system for clinical research.

In response to mounting concerns about the adequacy of protection of research participants, at a regulatory level, increasing attention is being focused on safety and confidentiality of human subjects participating in research protocols.26-28 Beginning with applications submitted for the January 2001 council round, institutional review board (IRB) approval is no longer required prior to NIH peer review; previous NIH policy had been that IRB approval was required at the time of submission. Compared with the 19.1% of the applications with human subject concerns in the May and October 2002 councils, in the 3 councils preceding January 2001, human subject concerns were noted in 15.5% of the 17 249 pre–rule change applications (P<.001). These observations suggest that human subject concerns are not being adequately addressed in the preparation of clinical grant applications, and this problem may have been augmented by rescinding the requirement for IRB approval prior to NIH peer review. In the current analysis, although applications with human subject concerns may have received less favorable priority scores for other reasons, they did not fare as well as clinical applications without these concerns. Reviewers are instructed to take such concerns into account when assigning a priority score. Consequently, human subject concerns raised at the time of review may have contributed to, although do not totally explain, the less favorable review outcomes for clinical applications.

Based on a review of R01 grant applications during 2 review councils in 1994, Williams et al19 concluded that applications for patient-oriented research fared less well than laboratory-oriented research applications. In part, these different review outcomes were attributed to review of patient-oriented applications in study sections that primarily reviewed laboratory-oriented applications. In the current analysis, we did not observe a disadvantage for clinical applications in study sections whose assignments included 25% or fewer clinical applications. However, review outcomes were less favorable for clinical applications, even in study sections reviewing relatively high percentages of clinical applications. Although applications involving use of deidentified human tissue were considered nonclinical applications in this analysis, it is possible that the definition of clinical research used by Williams identified a different subset of applications than were analyzed, even in the specific categories segregated in the current study. Alternatively, review of clinical applications in study sections reviewing predominantly laboratory-oriented research may have been less of a problem in 2002 than in 1994.

A limitation of our study is that we did not analyze funding levels or award amounts for clinical research grants. Although earlier reports emphasize that the percentage of NIH extramural grant dollars devoted to clinical research depends on the inclusiveness of the definition of clinical research,5,14,18,29 the primary purpose of the current analysis was to describe outcomes of the scientific review of grant applications rather than to provide a comprehensive overview of extramural NIH funding for clinical research. However, between 1997 and 2002, the actual dollar award (on average) to MDs was approximately 30% greater than awards to PhD investigators, possibly because of the greater cost of clinical investigations. In addition, when making funding decisions about grant applications, NIH institutes may not rely exclusively on priority scores. Funding rates also depend on budgets and priorities of the individual institutes. Review outcomes and funding rates for contracts (a mechanism often used to fund large multicenter clinical trials) were not included in these analyses.

As recently described,18 CSR is undertaking a number of initiatives to ensure appropriate peer review of applications for clinical research. It appears that the greatest threat to clinical research, however, is the relatively small and shrinking pool of clinical investigators. Between 1972 and 1995, although the absolute number of grant applications submitted by physicians increased from approximately 4000 to 6000, the percentage of applications submitted by physicians decreased from 40% to 25% because of a considerably greater increase in the number of PhD applicants during this time.14 Between 1990 and 2000, the number of physicians engaged in research careers declined steadily (unpublished data based on American Medical Association surveys, American Medical Association Masterfile), and there has been a concomitant decline of interest in research careers among graduating medical students. Between 1989 and 1996, the percentage of graduating medical students expressing a strong interest in research as a career decreased from 14% to 10% (unpublished data, Association of American Medical Colleges). For the past several years, this number has been relatively constant at approximately 12%, or 1745 individuals, per year. In addition, between 1996 and 2002, there was a 28% decrease in the number of applicants to US medical schools.30 This small cadre of potential physician investigators and concern about adequate numbers of qualified students training for careers in medicine are issues that extend well beyond peer review.

In summary, our study results suggest that review outcomes do not differ appreciably for MD compared with non-MD applicants. During 2 funding cycles in 2002, applications involving human subjects tended to have less favorable median priority scores and less funding success than applications not involving human subjects. This unfavorable trend was most convincingly demonstrated for clinical research categorized as mechanisms of disease or clinical trials or interventions. Although applications with human subject concerns received poor priority scores, this did not account totally for the overall less favorable reviews and funding percentages of clinical applications. No consistent relationship was observed between review outcomes of clinical applications and the density of clinical applications per study section. Resolution of this last issue may require more prolonged tracking of specific categories of clinical research.

Figures

Figure. Percentage of Clinical and Nonclinical R01 Applications, by Priority Score

Tables

Table 1. Number of Applications Submitted and Funding Rates for Fiscal Years 1997-2002
Table 2. Outcomes of Peer Review for Different Types of Grant Applications*
Table 3. Number of R01 Grant Applications and Review Outcomes for Subcategories of Clinical Research vs Nonclinical Research*
Table 4. Outcomes of Review for Grant Applications for Clinical Research, by Human Subject Concerns and Exemption Status*
Table 5. Outcomes of Review of Clinical and Nonclinical R01 Applications Submitted by Both MDs and Non-MDs and Reviewed by Center for Scientific Review Study Sections, by Percentage of Clinical Applications per Study Section*

References

1. Wyngaarden JB. The clinical investigator as an endangered species. N Engl J Med. 1979;301:1254-1259.
2. Healy B. Innovators for the 21st century: will we face a crisis in biomedical-research brainpower? N Engl J Med. 1988;319:1058-1064.
3. Goldstein JL, Brown MS. The clinical investigator: bewitched, bothered, and bewildered—but still beloved. J Clin Invest. 1997;99:2803-2812.
4. Schechter AN. The crisis in clinical research: endangering the half-century National Institutes of Health consensus. JAMA. 1998;280:1440-1442.
5. Shine KL. Encouraging clinical research by physician scientists. JAMA. 1998;280:1442-1444.
6. Rosenberg LE. The physician-scientist: an essential—and fragile—link in the medical research chain. J Clin Invest. 1999;103:1621-1626.
7. Miller ED. Clinical investigators—the endangered species revisited. JAMA. 2001;286:845-846.
8. Campbell EG, Weissman JS, Moy E, Blumenthal D. Status of clinical research in academic health centers: views from the research leadership. JAMA. 2001;286:800-806.
9. The NIH Director's Panel on Clinical Research report to the Advisory Committee to the NIH Director, December 1997. Available at: http://www.nih.gov/news/crp/97report/index.htm. Accessed March 21, 2003.
10. Zemlo TR, Garrison HH, Partridge NC, Ley TJ. The physician-scientist: career issues and challenges at the year 2000. FASEB J. 2000;14:221-230.
11. Sung NS, Crowley WF, Genel M, et al. Central challenges facing the national clinical research enterprise. JAMA. 2003;289:1278-1287.
12. Neilson EG. The role of medical school admissions committees in the decline of physician-scientists. J Clin Invest. 2003;111:765-767.
13. Ley TJ, Rosenberg LE. Removing career obstacles for young physician-scientists—loan-repayment programs. N Engl J Med. 2002;346:368-373.
14. Nathan DG. Clinical research: perceptions, reality, and proposed solutions. JAMA. 1998;280:1427-1432.
15. Nathan DG. Careers in translational clinical research—historical perspectives, future challenges. JAMA. 2002;287:2424-2427.
16. Nathan DG, Varmus HE. The National Institutes of Health and clinical research: a progress report. Nat Med. 2000;6:1201-1204.
17. Nathan DG. Educational debt relief for clinical investigators—a vote of confidence. N Engl J Med. 2002;346:372-374.
18. Nathan DG, Wilson JD. Clinical research and the NIH—a report card. N Engl J Med. 2003;349:1860-1865.
19. Williams GH, Wara DW, Carbone P. Funding for patient-oriented research: critical strain on a fundamental linchpin. JAMA. 1997;278:227-231.
20. Kelley WN, Randolph MA. Careers in Clinical Research: Obstacles and Opportunities. Washington, DC: National Academy Press; 1994.
21. Association of American Medical Colleges and American Medical Association. Breaking the Scientific Bottleneck: Report of the Graylyn Consensus Development Conference. Washington, DC: Association of American Medical Colleges; 1990.
22. What Is AHRQ? Available at: http://www.ahrq.gov/about/whatis.htm. Accessed January 28, 2004.
23. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-174.
24. Fleiss JL. The measurement of inter-rater agreement. In: Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: John Wiley & Sons; 1981:212-236.
25. Siegel S, Castellan NJ. The case of k independent samples. In: Nonparametric Statistics for the Behavioral Sciences. 2nd ed. New York, NY: McGraw-Hill; 1988:190-223.
26. Shalala D. Protecting research subjects—what must be done. N Engl J Med. 2000;343:808-810.
27. Institute of Medicine. Preserving Public Trust: Accreditation and Human Research Participant Programs. Washington, DC: National Academy Press; 2001.
28. Steinbrook R. Improving protection for research subjects. N Engl J Med. 2002;346:1425-1430.
29. Taylor AE, Stubbs C, Singer DE, Curhan G, Crowley WF. An instrument for determining the amount of NIH support for clinical investigations at one academic health center. Acad Med. 2002;77:824-830.
30. AAMC Data Book: Statistical Information Related to Medical Education. Washington, DC: Association of American Medical Colleges; 2003.