Original Contribution

Medicine Residents' Understanding of the Biostatistics and Results in the Medical Literature

Donna M. Windish, MD, MPH; Stephen J. Huot, MD, PhD; Michael L. Green, MD, MSc

Author Affiliations: Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut.

JAMA. 2007;298(9):1010-1022. doi:10.1001/jama.298.9.1010.

Context Physicians depend on the medical literature to keep current with clinical information. Little is known about residents' ability to understand statistical methods or how to appropriately interpret research outcomes.

Objective To evaluate residents' understanding of biostatistics and interpretation of research results.

Design, Setting, and Participants Multiprogram cross-sectional survey of internal medicine residents.

Main Outcome Measure Percentage of questions correct on a biostatistics/study design multiple-choice knowledge test.

Results The survey was completed by 277 of 367 residents (75.5%) in 11 residency programs. The overall mean percentage correct on statistical knowledge and interpretation of results was 41.4% (95% confidence interval [CI], 39.7%-43.3%) vs 71.5% (95% CI, 57.5%-85.5%) for fellows and general medicine faculty with research training (P < .001). Higher scores in residents were associated with additional advanced degrees (50.0% [95% CI, 44.5%-55.5%] vs 40.1% [95% CI, 38.3%-42.0%]; P < .001); prior biostatistics training (45.2% [95% CI, 42.7%-47.8%] vs 37.9% [95% CI, 35.4%-40.3%]; P = .001); enrollment in a university-based training program (43.0% [95% CI, 41.0%-45.1%] vs 36.3% [95% CI, 32.6%-40.0%]; P = .002); and male sex (44.0% [95% CI, 41.4%-46.7%] vs 38.8% [95% CI, 36.4%-41.1%]; P = .004). On individual knowledge questions, 81.6% correctly interpreted a relative risk. Residents were less likely to know how to interpret an adjusted odds ratio from a multivariate regression analysis (37.4%) or the results of a Kaplan-Meier analysis (10.5%). Seventy-five percent indicated they did not understand all of the statistics they encountered in journal articles, but 95% felt it was important to understand these concepts to be an intelligent reader of the literature.

Conclusions Most residents in this study lacked the knowledge in biostatistics needed to interpret many of the results in published clinical research. Residency programs should include more effective biostatistics training in their curricula to successfully prepare residents for this important lifelong learning skill.

Physicians must keep current with clinical information to practice evidence-based medicine (EBM). In doing so, most prefer to seek evidence-based summaries, which give the clinical bottom line,1 or evidence-based practice guidelines.1-3 Resources that maintain these information summaries, however, currently cover only a limited number of common conditions.4 Thus, to answer many of their clinical questions, physicians need to access reports of original research. This requires the reader to critically appraise the design, conduct, and analysis of each study and subsequently interpret the results.

Several surveys in the 1980s demonstrated that practicing physicians, particularly those with no formal education in epidemiology and biostatistics, had a poor understanding of common statistical tests and limited ability to interpret study results.5-7 Many physicians likely have increased difficulty today because more complicated statistical methods are being reported in the medical literature.8 They may be able to understand the analysis and interpretation of results in only 21% of research articles.8

Educators have responded by increasing training in critical appraisal and biostatistics throughout the continuum of medical education. Many medical schools currently provide some formal teaching of basic statistical concepts.9 As part of the Accreditation Council for Graduate Medical Education's practice-based learning and improvement competency, residents must demonstrate ability in “locating, appraising, and assimilating evidence from scientific studies related to their patients' problems and apply knowledge of study designs and statistical methods to the appraisal of clinical studies.”10 Most residency programs address this competency through EBM curricula or journal clubs.11-13 In 2000, the majority of these programs included training in appraisal of studies and study conduct, but fewer specifically addressed the selection and interpretation of statistical tests.11,14 In addition, the majority of published assessments of residents' knowledge and skills in EBM were performed at single programs, were conducted in the context of determining the impact of a specific curriculum, evaluated critical appraisal skills more commonly than biostatistics, and found that residents scored well below EBM “experts” on evaluation instruments.15 We performed a multiprogram assessment of residents' biostatistics knowledge and interpretation of study results using a new instrument developed for this study.

Survey Development

We developed an instrument to reflect the statistical methods and results most commonly represented in contemporary research studies (Appendix). To that end, we reviewed all 239 original articles published from January to March of 2005 in each issue of 6 general medical journals (American Journal of Medicine, Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine) and summarized the frequency of the statistical methods described (Table 1). From this review, we developed questions that focused on identifying and interpreting the results of the most frequently occurring simple statistical methods (eg, χ2, t test, analysis of variance) and multivariate analyses (eg, Cox proportional hazards regression, multiple logistic regression).

Table 1. Statistical Methods Used in 239 Original Research Articles in 6 General Medical Journals, 2005
Survey Instrument

The survey (Appendix) contained 4 sets of questions: (1) 11 demographic questions that included age, sex, current training level, past training in biostatistics and EBM, and current journal-reading practices; (2) 5 attitude questions regarding statistics; (3) 4 confidence questions about interpreting and assessing statistical concepts; and (4) a 20-question biostatistics knowledge test that assessed understanding of statistical methods, study design, and interpretation of study results. Statistical attitudes and confidence questions were adapted from surveys on the Assessment Resource Tools for Improving Statistical Thinking (ARTIST) Web site, which is a resource for teaching statistical literacy, reasoning, and thinking.16 Attitudes regarding statistics were rated on a 5-point Likert scale. Confidence questions were assessed using a 5-point scale in which 1 indicated no confidence and 5 indicated complete confidence. The remaining 20 knowledge test questions addressed understanding of statistical techniques, study design, and interpretation of study results most commonly represented in our journal review. These questions were multiple-choice, clinically oriented with a case vignette, and required no calculations. Two questions were adapted from a study of Danish physicians' statistical knowledge.7 Seven questions were adapted from course materials used in statistics courses at the Johns Hopkins Bloomberg School of Public Health.17 The remaining questions were developed by one of the study authors (D.M.W.). The knowledge questions addressed research variable types, statistical methods, confidence intervals, P values, sensitivity and specificity, power and sample size, study design, and interpretation of study results.

Pilot Testing of Biostatistics Knowledge Test

The original test contained 22 knowledge questions and was pilot tested with 5 internal medicine faculty with advanced training in epidemiology and biostatistics and 12 primary care internal medicine residents at 1 residency program. Faculty reviewed the instrument for content validity, completed the test, and provided feedback. Residents completed the test and provided written and oral feedback. Four of the 5 faculty answered 21 of 22 questions correctly and 1 faculty member correctly answered 19 questions. This resulted in an overall mean score of 94%. Incorrect responses did not favor any particular question. Residents answered 53% of questions correctly. Based on feedback, 1 question was modified to improve clarity, 3 questions were eliminated to avoid duplicating similar concepts, and 1 question was added to further assess interpretation of results. Therefore, the final version of the test consisted of 20 questions.

Target Population and Survey Administration

We conducted an anonymous cross-sectional survey from February through July 2006 of 11 internal medicine residency programs in Connecticut, including 7 traditional internal medicine programs, 2 primary care medicine programs, 1 medicine/pediatrics program, and 1 medicine/preventive medicine program. We initially contacted all 15 internal medicine residency programs in Connecticut to ask for their participation in the study. All programs were successfully contacted and expressed interest. However, 3 programs could not accommodate the study because of scheduling conflicts, and 1 program was not included because its residents (medicine/pediatrics) were distributed to different training sites and therefore were not present at the conferences used for the survey.

Included residencies were both university affiliated (7 programs) and community based (4 programs). Residents at all postgraduate levels of training were invited to participate. Oral consent was obtained from each participant after providing a description of the survey's purpose. The survey was administered during the first 25 minutes of an inpatient noon conference lecture for current residents. After all questionnaires were collected, the remainder of the time was devoted to a seminar in statistical methods and interpretation of the literature. Four residency programs also allowed us to survey their entering intern classes during their orientations. To provide data for validity testing, an additional 10 faculty and fellows trained in clinical investigation also completed the final survey. The Yale University human investigation committee approved the study protocol.

Analysis

In addition to content validity, the psychometric properties of the 20-question knowledge test were assessed in two ways: internal consistency was measured with the Cronbach α, and discriminative validity was assessed by comparing the mean scores of residents with those of research-trained fellows and faculty using the t test.
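As an illustration of the internal-consistency measure, the Cronbach α can be sketched in Python (the study's analyses were run in Stata; the function name and the item data in the comments below are hypothetical):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a knowledge test.

    items: list of equal-length columns, one per test item
    (e.g., 0/1 scores for each respondent on that item).
    """
    k = len(items)
    item_variance_sum = sum(pvariance(col) for col in items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_variance_sum / pvariance(totals))
```

Perfectly correlated items yield α = 1, while unrelated items pull α toward 0; values above 0.8 are conventionally regarded as high internal consistency.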

The biostatistics knowledge test was scored by determining the percentage of questions correct, weighting each question equally. Missing values were counted as incorrect responses. The t test or a 1-way analysis of variance was used to compare survey scores by respondent characteristics. We calculated the percentage of residents who agreed or strongly agreed with each attitudinal question. We determined the percentage of respondents with fair to high confidence for each confidence question and the mean confidence score based on the sum of all 4 questions.
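The scoring rule described above, with equal weighting and missing values counted as incorrect, can be sketched as follows (the function name, answer key, and responses are hypothetical):

```python
def score_test(responses, answer_key):
    """Percentage of questions correct, each question weighted equally.

    Missing responses (None) are counted as incorrect, matching the
    scoring rule used for the biostatistics knowledge test.
    """
    correct = sum(1 for r, a in zip(responses, answer_key)
                  if r is not None and r == a)
    return 100.0 * correct / len(answer_key)
```

For example, 8 correct answers on the 20-question test, with 2 questions left blank, scores 40%.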

Correlation analyses were performed to test for multicollinearity between 3 sets of factors we hypothesized might be highly correlated (training outside of the United States and years since medical school graduation; training level and age; and past biostatistics training and past epidemiology training). Bivariate analyses were performed to identify factors that might be associated with knowledge scores. Candidate variables included sex, age, academic affiliation of residency program, advanced degrees, years since medical school graduation, training outside of the United States, current level of training, past biostatistics training, past epidemiology training, past EBM training, and currently reading medical journals. We also tested for effect modification for pairs of factors including past biostatistics training and past EBM training; past biostatistics training and past epidemiology training; and past biostatistics training and sex. The results of the correlation, bivariate, and effect modification analyses were used to determine which demographic variables to include in the multivariable model. Decisions to include factors in the multivariable regression analysis were based on the strength of correlated factors (r < 0.75) or a P value < .05 on bivariate analyses. Forward stepwise regression was subsequently used to identify which demographic factors were independently associated with biostatistics knowledge scores.
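The multicollinearity screen can be sketched as a pairwise Pearson correlation check against the r < 0.75 cutoff (a sketch only; the function names and the factor data in the test are hypothetical):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

def screen_pairs(factors, r_max=0.75):
    """Flag factor pairs whose correlation reaches the cutoff.

    factors: dict mapping factor name -> list of values.
    Returns the pairs with |r| >= r_max; of each flagged pair,
    only one factor would be carried into the multivariable model.
    """
    names = list(factors)
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if abs(pearson_r(factors[a], factors[b])) >= r_max]
```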

To adjust for multiple pairwise comparisons, a 2-sided level of statistical significance was set at P < .01 using a Bonferroni correction. With a sample size of 277 and a P value of .01, the study had 80% power to detect a 4.4% difference in mean knowledge scores. All analyses were performed using Stata release 8.2 (StataCorp, College Station, Texas).
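A Bonferroni correction divides the familywise significance level by the number of comparisons. The article reports only the resulting threshold (P < .01); the familywise α of .05 and the 5 comparisons below are an illustrative assumption consistent with that threshold, not figures stated in the text:

```python
def bonferroni_threshold(family_alpha, n_comparisons):
    # Per-comparison significance threshold under a Bonferroni correction:
    # a result is declared significant only if p < family_alpha / n_comparisons.
    return family_alpha / n_comparisons
```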

Training Program Characteristics

The 11 targeted training programs had 532 residents, with a mean of 53.6 trainees (range, 12-118).18,19 In comparison, the 388 internal medicine training programs in the United States have a total of 21 885 residents, with a mean of 56.4 trainees (range, 4-170) (P = .76 compared with targeted programs).20 The study programs had 41.9% women residents, compared with 42.1% nationally (P = .96), and 49.9% of residents with training outside of the United States vs 52.3% nationally (P = .51).19 Comparing targeted programs with all internal medicine programs, no statistically significant differences were seen for postgraduate year 1 trainees in mean duty hours per week (61.9 vs 65.2, P = .13), mean consecutive work hours (30 vs 27.5, P = .09), and mean number of days off per week (1.3 vs 1.2, P = .31).18 Targeted programs also did not differ in these characteristics from the remaining 4 Connecticut training programs.

Respondent Characteristics

Three hundred sixty-seven residents in the 11 targeted programs were on rotations that would make them available to attend their respective noon conferences on the day of the survey. Of these, 309 (84.2%) were in attendance. Of the total available residents, 277 (75.5%) completed the assessment. The response rate for individual programs ranged from 28.1% to 80%. No differences in response rates or attendance were seen based on sex, level of training, or past training outside of the United States. Table 2 lists the respondents' demographic characteristics. Approximately equal numbers of men and women were represented. Fifty-eight percent were enrolled in traditional internal medicine programs, 76.5% participated in university-based programs, 50.6% had some training outside of the United States, and 14.8% had advanced degrees. More than 68% of respondents had some training in biostatistics, with approximately 70% of this training occurring during medical school.

Table 2. Characteristics of the 277 Participants
Psychometric Properties of the Knowledge Test

The survey instrument had high internal consistency (Cronbach α = 0.81). Fellows and general medicine faculty with advanced training in biostatistics had a significantly higher score than residents (mean percentage correct, 71.5% [95% confidence interval {CI}, 57.5%-85.5%] vs 41.1% [95% CI, 39.7%-43.3%]; P < .001), indicating good discriminative validity.

Knowledge of Statistical Methods and Results

The overall mean resident knowledge score was 41.1% (SD, 15.2%; range, 10%-90%). Residents scored highest in recognition of double-blind studies (87.4% [95% CI, 83.5%-91.3%] answering correctly) and interpretation of relative risk (81.6% [95% CI, 77.0%-86.2%] answering correctly) (Table 3). They were least able to interpret the results of a Kaplan-Meier analysis, with 10.5% (95% CI, 6.9%-14.1%) answering correctly. Only 37.4% (95% CI, 31.9%-43.3%) understood how to interpret an adjusted odds ratio from a multivariate regression analysis, while 58.8% (95% CI, 53.0%-64.6%) could interpret the meaning of a P value.
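For readers reviewing the concepts tested, the crude relative risk and odds ratio from a hypothetical 2 × 2 table can be sketched as below (an adjusted odds ratio, by contrast, comes from a fitted regression model and cannot be read off a single table):

```python
def relative_risk(a, b, c, d):
    """2x2 table counts: a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    return risk_exposed / risk_unexposed

def odds_ratio(a, b, c, d):
    # Cross-product ratio from the same 2x2 table.
    return (a * d) / (b * c)
```

With hypothetical counts of 10/100 exposed and 5/100 unexposed patients experiencing the outcome, the relative risk is 2.0: the outcome is twice as likely in the exposed group.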

Table 3. Percentages of Correct Answers for the Knowledge-Based Questions
Factors Associated With Statistical Knowledge

Training outside of the United States had moderate correlation with years since medical school graduation (r = 0.59), as did past epidemiology training with past biostatistics training (r = 0.53). Training level had a fair correlation with age (r = 0.46). No effect modification was seen for the 3 sets of factors assessed. In bivariate analyses, differences in scores were seen based on residency program type, with medicine/pediatric residents scoring the highest (Table 4). Residents with advanced degrees performed better than those without advanced training (50.0% [95% CI, 44.5%-55.5%] vs 40.1% [95% CI, 38.3%-42.0%]; P < .001). Statistically significant higher scores were also seen in residents who were just entering residency, had prior biostatistics training, were enrolled in a university-based training program, and were men (Table 4).

Table 4. Knowledge Scores by Resident Characteristics

Using forward stepwise regression, 5 factors were found to be independently associated with knowledge scores (Table 4). An advanced degree was associated with an absolute increase of 9.2% in questions correct after adjustment for other factors (P < .001). Successive years since medical school graduation were associated with decreasing knowledge scores, with 11 years or more postgraduation associated with a 12.3% absolute decrease in score compared with less than 1 year postgraduation. Male sex, enrollment in a university-based program, and past biostatistics training were all associated with higher scores.

Attitudes and Confidence

The majority of residents agreed or strongly agreed that to be an intelligent reader of the literature it is necessary to know something about statistics (95%) and indicated they would like to learn more about statistics (77%). Seventy-five percent reported they did not understand all of the statistics they encountered in the literature, whereas only 15% felt that they do not trust statistics “because it is easy to lie.” More than 58% of respondents indicated that they use statistical information in forming opinions or when making decisions in medical care.

The mean confidence score in understanding certain statistical concepts was 11.4 (SD, 2.7) (maximum possible confidence score, 20). The majority of residents reported fair to complete confidence in understanding P values (88%). Fewer were confident in interpreting results of statistical methods used in research (68%), identifying factors influencing a study's power (55%), or assessing if a correct statistical procedure was used (38%).

Respondents with higher confidence in their statistical knowledge (a score higher than the mean confidence score) performed better on the knowledge questions than those with lower confidence (43.6% [95% CI, 40.8%-46.3%] vs 39.3% [95% CI, 37.0%-41.6%]; P = .02). Those who reported fair to high confidence in interpreting a P value were more likely to correctly interpret its meaning (62.8% [95% CI, 56.8%-67.2%] vs 38.2% [95% CI, 24.3%-51.7%]; P = .006). No differences were seen in a resident's ability to appropriately identify the correct statistical procedure used based on their confidence to do so.

In this multiprogram survey of internal medicine residents' confidence in, attitudes toward, and knowledge of statistical methods and interpretation of research results, 95% believed that it was important to understand these concepts to be an intelligent reader of the literature, yet three-fourths of residents acknowledged low confidence in understanding the statistics they encounter in the medical literature. This lack of confidence was validated by their low knowledge scores, in which on average only 8 of 20 questions were answered correctly. Although past instruction in biostatistics and advanced degrees were associated with better performance, knowledge scores appeared to decline with progression through training.

The poor knowledge in biostatistics and interpretation of study results among residents in our study likely reflects insufficient training. Nearly one-third of trainees indicated that they never received biostatistics teaching at any point in their career. When training did occur, the majority of instruction took place during undergraduate medical education and was not reinforced in residency. The most recent comprehensive survey of medical school biostatistics teaching was conducted in the 1990s and found that more than 90% of medical schools focused their biostatistics teaching in the preclinical years without later instruction and that the depth and breadth of this education varied greatly among schools.21 That review reported that familiar concepts such as P values, t tests, and χ2 analyses were frequently addressed (95%, 92%, and 88%, respectively), but advanced methods (such as Cox proportional hazards regression, multiple logistic regression, and Kaplan-Meier analyses) were not included in instruction.21 If biostatistics teaching has continued at the same level in recent years, it would not be surprising that only a small percentage of residents in our survey (10.5%-37.6%) understood the results and use of these analyses.

The correlates of differences in knowledge scores might have been expected. Residents with prior biostatistical training and those with advanced instruction through a master's or PhD degree scored better than their counterparts. More senior residents performed worse than junior residents, potentially reflecting loss of knowledge over time, lack of reinforcement, or both. Although fourth-year residents were an exception to this pattern, these residents were part of a single medicine/pediatrics program that outperformed all other training programs. The higher scores in university-based residency programs may reflect exposure to faculty with more biostatistical training or teaching experience. In a survey study, community faculty considered EBM less important, were less confident in their EBM knowledge, and demonstrated poorer EBM skills than full-time faculty.22

Although sex was associated with a difference in scores, this finding is not supported by other literature. Studies of evidence-based practice knowledge and skills rarely report analyses by sex. In 2 studies, investigators found no sex differences in critical appraisal skills among family physicians23 or in use of online evidence databases among public health practitioners.24 Six studies assessing the biostatistics and epidemiology knowledge of physicians and trainees did not conduct comparisons by sex.5-7,25-27 Furthermore, our result was not a confirmation of an a priori hypothesis and so should be interpreted with caution.

Our final regression model found 5 predictors of knowledge scores: advanced degrees, academic affiliation, prior biostatistics training, sex, and years since medical school graduation. The proportion of explained variation for the model was small, with R2 = 0.18. This likely reflects in part the low variance in resident scores.

Our results suggest the need for more effective training in biostatistics in residency education. Such training has proven difficult, with systematic reviews showing only limited effectiveness of many journal clubs and EBM curricula.14,28-32 Thus, it is not surprising that prior EBM experience, which in the past has not included biostatistics training,11,14 was not associated with higher scores in our multivariable analysis. Interactive, self-directed, and clinically instructional strategies seem to stand the best chance of success.33 Involvement in hypothesis-driven research during training that requires comprehensive reading of the literature may also enhance residents' knowledge and understanding.34

Faculty who are implementing biostatistics curricula can access several teaching resources. In internal medicine, the American College of Physicians' ACP Journal Club has presented a series of reports emphasizing basic study designs and statistics.35 CMAJ has published a series of EBM “teaching tips” for learners and teachers.36 A guide designed to help medical educators choose and interpret statistical tests when developing educational studies or when reading the medical literature is also available.37

Limitations of this study should be considered. First, while our instrument showed good content validity, internal consistency, and discriminative validity, these psychometric properties were not known in advance but were established in the current study. Second, our survey was purposely kept brief, thus limiting our ability to assess understanding of all biostatistical concepts and research results. Nonetheless, our questions focused on the most commonly used methods and results found in the contemporary literature. Third, we attempted to survey only those residents who were present at the time of their inpatient conference. Residents who did not attend, either by choice or by chance, might have scored differently. However, since we found no differences in demographic characteristics between responders and nonresponders, this is less likely. Fourth, our study was confined to internal medicine residents, limiting generalizability to other resident physicians. Nevertheless, we were able to assess multiple types of internal medicine training programs and found similar results.

Despite these limitations, this study also has several strengths. First, it was a multiprogram study that captured information on a wide range of internal medicine residents at different types of residency programs. Second, the residents in our survey, although limited to 1 state, possessed characteristics similar to all other trainees in internal medicine programs across the United States. Third, the 11 residency programs were similar in size and composition to the average US internal medicine program, and thus our study appears to be generalizable to internal medicine trainees and training programs in the United States.

Higher levels of statistical methods are being used in contemporary medical literature, but basic concepts, frequently occurring tests, and interpretation of results are not well understood by resident physicians. This inadequate preparation demonstrates lack of competence in meeting part of the Accreditation Council for Graduate Medical Education's practice-based learning and improvement requirement.10 If physicians cannot detect appropriate statistical analyses and accurately understand their results, the risk of incorrect interpretation may lead to erroneous applications of clinical research. Educators should reevaluate how this information is taught and reinforced in order to adequately prepare trainees for lifelong learning, and further research should examine the effectiveness of specific educational interventions.

Corresponding Author: Donna M. Windish, MD, MPH, Yale Primary Care Residency Program, 64 Robbins St, Waterbury, CT 06708 (donna.windish@yale.edu).

Author Contributions: Dr Windish had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Windish, Huot, Green.

Acquisition of data: Windish.

Analysis and interpretation of data: Windish, Green.

Drafting of the manuscript: Windish, Green.

Critical revision of the manuscript for important intellectual content: Windish, Huot, Green.

Statistical analysis: Windish, Green.

Administrative, technical, or material support: Huot.

Study supervision: Green.

Financial Disclosures: None reported.

References

1. McColl A, Smith H, White P, Field J. General practitioner's perceptions of the route to evidence based medicine: a questionnaire survey. BMJ. 1998;316(7128):361-365.
2. Young JM, Ward JE. Evidence-based medicine in general practice: beliefs and barriers among Australian GPs. J Eval Clin Pract. 2001;7(2):201-210.
3. Putnam W, Twohig PL, Burge FI, Jackson LA, Cox JL. A qualitative study of evidence in primary care: what the practitioners are saying. CMAJ. 2002;166(12):1525-1530.
4. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the “5S” evolution of information services for evidence-based healthcare decisions. Evid Based Med. 2006;11(6):162-164.
5. Berwick DM, Fineberg HV, Weinstein MC. When doctors meet numbers. Am J Med. 1981;71(6):991-998.
6. Weiss ST, Samet JM. An assessment of physician knowledge of epidemiology and biostatistics. J Med Educ. 1980;55(8):692-697.
7. Wulff HR, Andersen B, Brandenhoff P, Guttler F. What do doctors know about statistics? Stat Med. 1987;6(1):3-10.
8. Horton NJ, Switzer SS. Statistical methods in the journal. N Engl J Med. 2005;353(18):1977-1979.
9. Association of American Medical Colleges (AAMC). Curriculum directory. http://services.aamc.org/currdir/section4/start.cfm. Accessed April 14, 2007.
10. Accreditation Council for Graduate Medical Education (ACGME). Outcome project: enhancing residency education through outcomes assessment. http://www.acgme.org/Outcome/. Accessed January 12, 2007.
11. Green ML. Evidence-based medicine training in internal medicine residency programs: a national survey. J Gen Intern Med. 2000;15(2):129-133.
12. Dellavalle RP, Stegner DL, Deas AM, et al. Assessing evidence-based dermatology and evidence-based internal medicine curricula in US residency training programs: a national survey. Arch Dermatol. 2003;139(3):369-372.
13. Alguire PC. A review of journal clubs in postgraduate medical education. J Gen Intern Med. 1998;13(5):347-353.
14. Green ML. Graduate medical education training in clinical epidemiology, critical appraisal, and evidence-based medicine: a critical review of curricula. Acad Med. 1999;74(6):686-694.
15. Shaneyfelt T, Baum KD, Bell D, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116-1127.
16. Assessment Resource Tools for Improving Statistical Thinking (ARTIST) Web site. https://ore.gen.umn.edu/artist/index.html. Accessed January 9, 2007.
17. Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health. Course materials from Statistical Methods in Public Health II and III, 2003-2004 academic year. http://www.biostat.jhsph.edu/courses/bio622/index.html. Accessibility verified August 2, 2007.
18. American College of Physicians. Residency database search. http://www.acponline.org/residency/index.html?idxt. Accessed April 20, 2007.
19. American Medical Association. FREIDA online specialty statistics training statistics information. http://www.ama-assn.org/vapp/freida/spcstsc/0,1238,140,00.html. Accessed April 20, 2007.
20. American Medical Association. State-level data for accredited graduate medical education programs in the US: aggregate statistics on all resident physicians actively enrolled in graduate medical education during 2005-2006. http://www.ama-assn.org/ama/pub/category/3991.html#4. Accessed April 20, 2007.
21. Looney SW, Grady CS, Steiner RP. An update on biostatistics requirements in U.S. medical schools. Acad Med. 1998;73(1):92-94.
22. Beasley BW, Woolley DC. Evidence-based medicine knowledge, attitudes, and skills of community faculty. J Gen Intern Med. 2002;17(8):632-639.
23. Godwin M, Seguin R. Critical appraisal skills of family physicians in Ontario, Canada. BMC Med Educ. 2003;3:10.
24. Adily A, Westbrook J, Coiera E, Ward J. Use of on-line evidence databases by Australian public health practitioners. Med Inform Internet Med. 2004;29(2):127-136.
25. Estellat C, Faisy C, Colombet I, Chatellier G, Burnand B, Durieux P. French academic physicians had a poor knowledge of terms used in clinical epidemiology. J Clin Epidemiol. 2006;59(9):1009-1014.
26. Ambrosius WT, Manatunga AK. Intensive short courses in biostatistics for fellows and physicians. Stat Med. 2002;21(18):2739-2756.
27. Cheatham ML. A structured curriculum for improved resident education in statistics. Am Surg. 2000;66(6):585-588.
28. Coomarasamy A, Khan KS. What is the evidence that postgraduate teaching in evidence based medicine changes anything? a systematic review. BMJ. 2004;329(7473):1017.
29. Ebbert JO, Montori VM, Schultz HJ. The journal club in postgraduate medical education: a systematic review. Med Teach. 2001;23(5):455-461.
30. Taylor R, Reeves B, Ewings P, Binns S, Keast J, Mears R. A systematic review of the effectiveness of critical appraisal skills training for clinicians. Med Educ. 2000;34(2):120-125.
31. Parkes J, Hyde C, Deeks J, Milne R. Teaching critical appraisal skills in health care settings. Cochrane Database Syst Rev. 2001;(3):CD001270.
32. Norman GR, Shannon SI. Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ. 1998;158(2):177-181.
PubMed
Khan KS, Coomarasamy A. A hierarchy of effective teaching and learning to acquire competence in evidenced-based medicine.  BMC Med Educ. 2006;6:59
PubMed   |  Link to Article
Rogers LF. The “win-win” of research.  AJR Am J Roentgenol. 1999;172(4):877
PubMed   |  Link to Article
ACP Journal Club.  Evidence-based medicine for better patient care. http://www.acpjc.org/?hp. Accessed November 5, 2006
Wyer PC, Keitz S, Hatala R.  et al.  Tips for learning and teaching evidence-based medicine: introduction to the series.  CMAJ. 2004;171(4):347-348
PubMed   |  Link to Article
Windish DM, Diener-West M. A clinician-educator's roadmap to choosing and interpreting statistical tests.  J Gen Intern Med. 2006;21(6):656-660
PubMed   |  Link to Article


Tables

Table 1. Statistical Methods Used in 239 Original Research Articles in 6 General Medical Journals, 2005
Table 2. Characteristics of the 277 Participants
Table 3. Percentages of Correct Answers for the Knowledge-Based Questions
Table 4. Knowledge Scores by Resident Characteristics

References

1. McColl A, Smith H, White P, Field J. General practitioners' perceptions of the route to evidence based medicine: a questionnaire survey. BMJ. 1998;316(7128):361-365.
2. Young JM, Ward JE. Evidence-based medicine in general practice: beliefs and barriers among Australian GPs. J Eval Clin Pract. 2001;7(2):201-210.
3. Putnam W, Twohig PL, Burge FI, Jackson LA, Cox JL. A qualitative study of evidence in primary care: what the practitioners are saying. CMAJ. 2002;166(12):1525-1530.
4. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the "5S" evolution of information services for evidence-based healthcare decisions. Evid Based Med. 2006;11(6):162-164.
5. Berwick DM, Fineberg HV, Weinstein MC. When doctors meet numbers. Am J Med. 1981;71(6):991-998.
6. Weiss ST, Samet JM. An assessment of physician knowledge of epidemiology and biostatistics. J Med Educ. 1980;55(8):692-697.
7. Wulff HR, Andersen B, Brandenhoff P, Guttler F. What do doctors know about statistics? Stat Med. 1987;6(1):3-10.
8. Horton NJ, Switzer SS. Statistical methods in the journal. N Engl J Med. 2005;353(18):1977-1979.
9. Association of American Medical Colleges (AAMC). Curriculum directory. http://services.aamc.org/currdir/section4/start.cfm. Accessed April 14, 2007.
10. Accreditation Council for Graduate Medical Education (ACGME). Outcome project: enhancing residency education through outcomes assessment. http://www.acgme.org/Outcome/. Accessed January 12, 2007.
11. Green ML. Evidence-based medicine training in internal medicine residency programs: a national survey. J Gen Intern Med. 2000;15(2):129-133.
12. Dellavalle RP, Stegner DL, Deas AM, et al. Assessing evidence-based dermatology and evidence-based internal medicine curricula in US residency training programs: a national survey. Arch Dermatol. 2003;139(3):369-372.
13. Alguire PC. A review of journal clubs in postgraduate medical education. J Gen Intern Med. 1998;13(5):347-353.
14. Green ML. Graduate medical education training in clinical epidemiology, critical appraisal, and evidence-based medicine: a critical review of curricula. Acad Med. 1999;74(6):686-694.
15. Shaneyfelt T, Baum KD, Bell D, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116-1127.
16. Assessment Resource Tools for Improving Statistical Thinking (ARTIST) Web site. https://ore.gen.umn.edu/artist/index.html. Accessed January 9, 2007.
17. Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health. Course materials from Statistical Methods in Public Health II and III, 2003-2004 academic year. http://www.biostat.jhsph.edu/courses/bio622/index.html. Accessibility verified August 2, 2007.
18. American College of Physicians. Residency database search. http://www.acponline.org/residency/index.html?idxt. Accessed April 20, 2007.
19. American Medical Association. FREIDA online specialty training statistics information. http://www.ama-assn.org/vapp/freida/spcstsc/0,1238,140,00.html. Accessed April 20, 2007.
20. American Medical Association. State-level data for accredited graduate medical education programs in the US: aggregate statistics on all resident physicians actively enrolled in graduate medical education during 2005-2006. http://www.ama-assn.org/ama/pub/category/3991.html#4. Accessed April 20, 2007.
21. Looney SW, Grady CS, Steiner RP. An update on biostatistics requirements in U.S. medical schools. Acad Med. 1998;73(1):92-94.
22. Beasley BW, Woolley DC. Evidence-based medicine knowledge, attitudes, and skills of community faculty. J Gen Intern Med. 2002;17(8):632-639.
23. Godwin M, Seguin R. Critical appraisal skills of family physicians in Ontario, Canada. BMC Med Educ. 2003;3:10.
24. Adily A, Westbrook J, Coiera E, Ward J. Use of on-line evidence databases by Australian public health practitioners. Med Inform Internet Med. 2004;29(2):127-136.
25. Estellat C, Faisy C, Colombet I, Chatellier G, Burnand B, Durieux P. French academic physicians had a poor knowledge of terms used in clinical epidemiology. J Clin Epidemiol. 2006;59(9):1009-1014.
26. Ambrosius WT, Manatunga AK. Intensive short courses in biostatistics for fellows and physicians. Stat Med. 2002;21(18):2739-2756.
27. Cheatham ML. A structured curriculum for improved resident education in statistics. Am Surg. 2000;66(6):585-588.
28. Coomarasamy A, Khan KS. What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ. 2004;329(7473):1017.
29. Ebbert JO, Montori VM, Schultz HJ. The journal club in postgraduate medical education: a systematic review. Med Teach. 2001;23(5):455-461.
30. Taylor R, Reeves B, Ewings P, Binns S, Keast J, Mears R. A systematic review of the effectiveness of critical appraisal skills training for clinicians. Med Educ. 2000;34(2):120-125.
31. Parkes J, Hyde C, Deeks J, Milne R. Teaching critical appraisal skills in health care settings. Cochrane Database Syst Rev. 2001;(3):CD001270.
32. Norman GR, Shannon SI. Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ. 1998;158(2):177-181.
33. Khan KS, Coomarasamy A. A hierarchy of effective teaching and learning to acquire competence in evidenced-based medicine. BMC Med Educ. 2006;6:59.
34. Rogers LF. The "win-win" of research. AJR Am J Roentgenol. 1999;172(4):877.
35. ACP Journal Club. Evidence-based medicine for better patient care. http://www.acpjc.org/?hp. Accessed November 5, 2006.
36. Wyer PC, Keitz S, Hatala R, et al. Tips for learning and teaching evidence-based medicine: introduction to the series. CMAJ. 2004;171(4):347-348.
37. Windish DM, Diener-West M. A clinician-educator's roadmap to choosing and interpreting statistical tests. J Gen Intern Med. 2006;21(6):656-660.