Original Contribution

Comparison of the Instructional Efficacy of Internet-Based CME With Live Interactive CME Workshops: A Randomized Controlled Trial

Michael Fordis, MD; Jason E. King, PhD; Christie M. Ballantyne, MD; Peter H. Jones, MD; Katharine H. Schneider, MBA; Stephen J. Spann, MD; Stephen B. Greenberg, MD; Anthony J. Greisinger, PhD

Author Affiliations: Center for Collaborative and Interactive Technologies (Drs Fordis and King and Ms Schneider), Department of Pediatrics (Dr Fordis), Section of Atherosclerosis and Lipoprotein Research, Department of Medicine (Drs Ballantyne and Jones), Department of Family and Community Medicine (Drs King and Spann), and Department of Medicine (Dr Greenberg), Baylor College of Medicine, Houston, Tex; Center for Cardiovascular Disease Prevention, Methodist DeBakey Heart Center, Houston, Tex (Dr Ballantyne); and Kelsey Research Foundation and Kelsey-Seybold Clinic, Houston, Tex (Dr Greisinger).

JAMA. 2005;294(9):1043-1051. doi:10.1001/jama.294.9.1043.

Context Despite evidence that a variety of continuing medical education (CME) techniques can foster physician behavioral change, there have been no randomized trials comparing performance outcomes for physicians participating in Internet-based CME with physicians participating in a live CME intervention using approaches documented to be effective.

Objective To determine if Internet-based CME can produce changes comparable to those produced via live, small-group, interactive CME with respect to physician knowledge and behaviors that have an impact on patient care.

Design, Setting, and Participants Randomized controlled trial conducted from August 2001 to July 2002. Participants were 97 primary care physicians drawn from 21 practice sites in Houston, Tex, including 7 community health centers and 14 private group practices. A control group of 18 physicians from these same sites received no intervention.

Interventions Physicians were randomly assigned to an Internet-based CME intervention that could be completed in multiple sessions over 2 weeks, or to a single live, small-group, interactive CME workshop. Both incorporated similar multifaceted instructional approaches demonstrated to be effective in live settings. Content was based on the National Institutes of Health National Cholesterol Education Program—Adult Treatment Panel III guidelines.

Main Outcome Measures Knowledge was assessed immediately before the intervention, immediately after the intervention, and 12 weeks later. The percentage of high-risk patients who had appropriate lipid panel screening and pharmacotherapeutic treatment according to guidelines was documented with chart audits conducted over a 5-month period before intervention and a 5-month period after intervention.

Results Both interventions produced similar and significant immediate and 12-week knowledge gains, representing large increases in percentage of items correct (pretest to posttest: 31.0% [95% confidence interval {CI}, 27.0%-35.0%]; pretest to 12 weeks: 36.4% [95% CI, 32.2%-40.6%]; P<.001 for all comparisons). Chart audits revealed high baseline screening rates in all study groups (≥93%) with no significant postintervention change. However, the Internet-based intervention was associated with a significant increase in the percentage of high-risk patients treated with pharmacotherapeutics according to guidelines (preintervention, 85.3%; postintervention, 90.3%; P = .04).

Conclusions Appropriately designed, evidence-based online CME can produce objectively measured changes in behavior as well as sustained gains in knowledge that are comparable or superior to those realized from effective live activities.


The quality of US health care has come under scrutiny following a series of Institute of Medicine reports that addressed the linkage between quality and professional education.1-4 They included recommendations for improvements spanning the health professional education continuum. The continuing challenge for physicians to maintain care quality was further emphasized in a recent report positing an inverse relationship between physicians’ years in practice and the quality of care provided.5 While educational intervention is not a panacea for all problems with health care quality,6 there is a need to improve the effectiveness of continuing medical education (CME),7,8 the longest and arguably the most important component of the medical education continuum.7

Continuing medical education has been criticized for being episodic, delivered in isolation from patient care and the health care team, and produced in “one-size-fits-all” formats relying heavily on didactic presentations.7 However, meta-analyses indicate that CME can influence physician behavior and health care quality when appropriate methods are used.9-12

As an educational trend, use of the Web by physicians is growing rapidly. Between 78% and 85% of physicians use the Web,13,14 with the fastest growth among those aged 60 years and older.13 In 2003, Internet-based formats accounted for 12.5% of all CME,15 with an estimated 45% to 64% of physicians participating in such offerings.13,14 Between 1998 and 2003, the number of Internet-based CME activities increased over 700%, compared with a 38% growth in total CME activities.15,16 The number of physicians receiving credit for Internet-based CME increased 1400%, compared with a 64% increase for all CME programs.15,16

Internet-based CME is attractive for both practical and theoretical reasons. Compared with traditional live programs, online CME may offer greater flexibility in training times and sequencing, improved access by geographically dispersed learners, reduced travel expenses and time, and greater adaptability to individual learner styles.17 Educational resources can be delivered at or near the point of care or “just-in-time.”18 The Web’s interactive and multimedia capabilities provide opportunities for realistic problem solving, performing tasks in the context of clinical problems, engaging in case studies with exposure to authentic clinical learning settings, linking to other resources, and participating in social dialogue. These approaches are emphasized in adult learning theories such as constructivism,19 cognitive flexibility theory,20 andragogy,21 and situated learning,22 which are relevant to online CME design.23,24

Despite a growing body of literature on Web-based health professional education,25-27 few rigorous studies of Web-based CME have been reported. These have included self-controlled trials,28-30 controlled trials,31-33 and randomized controlled trials (RCTs).23,34-39 Eighty percent of controlled studies reported positive change in 1 or more of the nonbehavioral measures—knowledge, attitudes, confidence, and satisfaction.23,28-32,35-39 With the exception of 2 RCTs, these studies compared online CME with no intervention,28-33,35-37,39 potentially confounding learning gains due to educational method with those due solely to content exposure. The other 2 RCTs (with mixed results38 and negative results34) compared an Internet-based intervention with a control group receiving Web-based content as text, an educational approach that has not been shown to change physician behavior.

Two RCTs have examined objective measures of physician behavior.38,39 However, both trials involved randomization of physician offices rather than individual physicians, which may dilute observable behaviors by mixing results for participating and nonparticipating physicians. Only one of these studies38 compared Internet-based education with a control intervention, which was Web-based content as text.

No investigations have simultaneously randomized the participants and compared performance outcomes for physicians participating in Internet-based CME with physicians participating in a live CME intervention using approaches documented to be effective. Thus, the purposes of this study were to determine if online CME can produce (1) changes in physician knowledge comparable to those achievable with appropriately designed live interventions, and (2) changes in behavior that have an impact on patient care. The multifaceted CME approaches we used have been shown to enhance knowledge, attitudes, and other nonbehavioral measures, and, in some cases, to contribute to physician behavioral change.9,11,25,40

Experimental Design

An RCT was designed to compare live, small-group, interactive CME workshops with online CME related to cholesterol management. To control for external effects (such as the impact of site activities apart from the educational study), a control group with no educational intervention was included in assessing behaviors. The hypotheses tested were (1) phase 1: whether there is a difference between the online CME group and the live CME group in knowledge gains related to National Institutes of Health (NIH) cholesterol management guidelines; and (2) phase 2: whether there is a difference between the online CME group and the live CME and control groups in gains in objectively measured practice behaviors associated with implementing the NIH cholesterol management guidelines.

In phase 1, lipid management–related knowledge was measured over time to detect any changes between physicians randomly assigned to the live CME and online CME groups. In phase 2, behavioral impact related to lipid management was assessed via 5-month preintervention and postintervention patient chart reviews, including a control group of physicians who did not participate in either educational program. The study protocol was approved by the Baylor College of Medicine institutional review board. Physicians in the intervention groups provided written consent.

Study Participants

To be eligible for the study, physicians were required to work full-time or part-time in a primary care setting and have privileges at a community health center or private practice primary care clinic. Physicians were excluded if they were unwilling or unable to participate in either randomly assigned educational program. Comfort in using the Internet was not required for participation, and each physician in the online CME group was offered a computer with Internet access if needed.

To assess knowledge change, a recruitment goal of 42 physicians in each of the 2 intervention arms was determined based on power analyses assuming an effect size of δ = 0.75, typical of moderate to large effect sizes observed in educational studies,41 and a correlation between measures of r = 0.3. For purposes of oversampling, 103 physicians were assigned to an intervention (51 live CME and 52 online CME) (Figure 1). Recruitment took place between November 2001 and January 2002 using letters, presentations, and conferences at the practice sites. The educational interventions were conducted in January and February 2002. Continuing medical education credit was offered to those who completed either educational program, with a $750 honorarium for those who completed all data collection instruments. Randomization of participants to the live CME or online CME group, stratified by clinic type (public or private), was done using a pseudorandom number generator. The data analyst was blinded to the identification of participants.
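
A minimal sketch, in Python, of the stratified two-arm randomization described above; the participant roster, seed, and function name are hypothetical, since the study does not document its generator or allocation bookkeeping.

```python
# Hypothetical sketch of stratified randomization (not the study's actual code).
import random

def stratified_assign(participants, seed=2001):
    """Assign each (id, clinic_type) pair to 'live CME' or 'online CME',
    shuffling separately within the public and private strata."""
    rng = random.Random(seed)  # pseudorandom number generator, fixed seed
    strata, assignments = {"public": [], "private": []}, {}
    for pid, clinic_type in participants:
        strata[clinic_type].append(pid)
    for ids in strata.values():
        rng.shuffle(ids)
        half = len(ids) // 2  # split each stratum as evenly as possible
        for pid in ids[:half]:
            assignments[pid] = "live CME"
        for pid in ids[half:]:
            assignments[pid] = "online CME"
    return assignments

# Example with 6 hypothetical physicians
roster = [(1, "public"), (2, "public"), (3, "private"),
          (4, "private"), (5, "public"), (6, "private")]
print(stratified_assign(roster))
```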

Figure 1. Randomization Scheme and Participation Flow of the Continuing Medical Education (CME) Study Groups

Physicians represented 21 practice sites broadly distributed around the Houston area. Participating practices included 7 community health centers affiliated with Baylor College of Medicine (47 study physicians) and 14 clinics in a private multispecialty group practice not affiliated with Baylor College of Medicine (50 study physicians). Physician specialties included internal medicine (47), family practice (47), family practice and internal medicine (1), family practice and obstetrics and gynecology (1), and obstetrics and gynecology (1).

Forty physicians (20 from each intervention arm) were randomly selected for chart reviews from among the 49 live CME and 44 online CME participants who completed all coursework and follow-up activities. A nonintervention control group included 20 physicians randomly selected from the pool of primary care physicians who elected not to participate in the study. This sample size provided sufficient power to detect medium to large effect sizes (δ = 0.75) under the assumption of a high correlation between preintervention and postintervention measures (r = 0.75).42 A higher correlation was expected of the chart data because, being tied directly to behaviors, these assessments are less likely than knowledge tests to be influenced by measurement error. At least 25 patient charts per physician for the preintervention period and 25 per physician for the postintervention period had to meet inclusion criteria described below. Of 60 physicians selected, 54 (19 live CME, 17 online CME, 18 control) provided a sufficient number of patient charts.
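
The effect of the assumed correlation between repeated measures on required sample size can be illustrated with a generic normal-approximation formula for change scores; this is a textbook sketch, not the authors' documented calculation, so its output should not be expected to reproduce the study's exact recruitment targets.

```python
# Generic normal-approximation sample size for a standardized change score;
# an illustrative textbook formula, not the authors' documented procedure.
from scipy.stats import norm

def n_per_group(delta, r, alpha=0.05, power=0.80):
    """Approximate n to detect a standardized change `delta` when the
    pre/post measures correlate at `r`; the change-score variance is
    2 * (1 - r) in standard-deviation units."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 * 2 * (1 - r) / delta ** 2

# A higher assumed correlation implies a smaller required sample
print(round(n_per_group(0.75, 0.30)))  # phase 1 assumption (r = 0.3)
print(round(n_per_group(0.75, 0.75)))  # phase 2 assumption (r = 0.75)
```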

Educational Design and Content

Physicians in both the live CME and the online CME groups received identical evidence-based content and participated in similar but not identical instructional activities (Table 1). All were exposed to a didactic component (a live lecture for the live CME group and a choice of multimedia presentations for the online CME group), interactive cases with feedback, enabling tools and supporting resources (a risk assessment calculator, step-by-step clinical practice guides, a guidelines summary), and access to expert advice throughout the postintervention period (via telephone or e-mail for live CME participants and via e-mail and live Web conferencing for online CME participants). The live CME sessions were completed as the online CME intervention began, with 1-day overlap. Program content was drawn from the NIH National Cholesterol Education Program (NCEP) Guidelines—Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III—ATP III).43 In addition to focusing on primary prevention as in the original ATP guidelines44 and intensive treatment of patients with coronary heart disease (CHD) as in the ATP II guidelines,45 the ATP III guidelines emphasize intensive lipid-modifying pharmacotherapy in conjunction with lifestyle modification for persons having multiple risk factors, including diabetes.43

Table 1. Instructional Elements of Live CME and Online CME Interventions
Interventions

Physicians in the live CME group attended 1 of 5 identical 1½- to 2-hour small-group, interactive workshops with a mean attendance of 10 participants over a 10-day period. On arrival at the workshop, participants completed survey forms and preintervention assessments. A faculty member, a cardiologist with expertise in lipid management, gave a lecture that included opportunities for questions and answers. Following this presentation, the participants used enabling tools to work through cases with the faculty member as facilitator. At the end of the workshop, participants completed postintervention survey forms and knowledge assessments.

During a 2-week period, online CME participants completed, at their convenience, an online program that incorporated educational elements similar to those provided in the live CME workshops. Before beginning any educational activity, participants completed an online survey and preintervention assessment form that included the same items as the paper form completed by live CME participants. The online offering provided access to didactic presentations, interactive cases, and enabling tools. Participants could also send questions to faculty members via e-mail. The educational sections were presented in a fixed sequence analogous to that used in the live CME workshops. However, online CME users were free to revisit program sections as desired.

Multiple viewing options for the content accommodated different connectivity speeds and learning styles. Didactic presentations could be viewed with streaming media using video, audio, scrolling text, and synchronized slides; with audio and synchronized slides; or with text and slides alone. The online cases featured clinical data summaries sequentially updated as cases progressed, interactive clinical problem-solving tools (eg, an online risk-assessment calculator that mirrored the paper-based version), a step-by-step management reference guide, feedback on case management, and opportunities to forward questions to faculty at any time. Following completion of the learning activity, users completed the postintervention survey and knowledge assessment. Administration tools allowed monitoring of participant progress, including survey and test completion.

Access to faculty was provided to both groups throughout the postintervention study period. The live CME group could use e-mail (asynchronous interactivity) and telephone contacts (real-time interactivity) to reach faculty. The online CME group was also provided access through e-mail (asynchronous interactivity) and optional participation in a single 45-minute, live Web conference (real-time interactivity) that was offered several times approximately 1 month after the program. In addition to having 2-way communication with the online audience and being able to show slides, the Web instructor could pose a multiple-choice question, poll the audience, and share the distribution of responses with the learners.

Technical assistance was provided to online CME participants by e-mail, telephone, or in person if required. Twenty-nine requests involving software, audio difficulties, firewall problems, and forgotten passwords were received from 19 online CME participants. All were satisfactorily resolved through online or telephone support.

Data Collection

Data were collected from the live CME participants using paper instruments, while the online CME participants completed most instruments online. Both groups completed a preactivity demographic survey, knowledge tests, activity evaluation, and outcomes survey. The demographic survey assessed sex, age, and years of practice; practice characteristics; preferred CME format; computer capabilities; comfort in using the Internet; familiarity with and readiness to use the NCEP guidelines; and preferred intervention group assignment. Both groups completed and returned by mail a paper-based final survey and test 12 weeks after the intervention.

The knowledge test was developed and validated by content experts and pilot tested. The test, reduced through item analysis to 39 items, assessed guideline knowledge and use in areas including risk factors, risk category assessment, screening tests, CHD risk equivalents, therapeutic goals, therapeutic agents, and the metabolic syndrome. The test consisted of multiple-choice questions and case vignettes with fixed-choice responses and produced a Cronbach α measure of internal reliability of 0.79 when averaged across testing occasions. The instrument was administered before (pretest), immediately after (posttest 1), and 12 weeks after the intervention (posttest 2); while the content was the same each time, the item order varied. Both groups also rated their satisfaction with the course and its perceived value for their clinical practice.
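
For readers unfamiliar with the reliability index reported here, the following is a minimal sketch of the standard Cronbach α computation; the examinee-by-item score matrix is simulated, not study data.

```python
# Standard Cronbach alpha; the 0/1 score matrix is simulated
# (rows = examinees, columns = the 39 items), not study data.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(30, 1))   # shared examinee ability drives items
demo = (ability + rng.normal(size=(30, 39)) > 0).astype(int)
print(round(cronbach_alpha(demo), 2))
```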

Chart review data were gathered by personnel trained in review procedures and condition coding. At the community health centers, personnel affiliated with Baylor College of Medicine were responsible for chart reviews. At the private multispecialty clinics, individuals engaged in the organization’s quality improvement reviews and research studies were responsible for chart reviews. Charts were reviewed for 5-month periods before (August 1, 2001-December 31, 2001) and after (March 1, 2002-July 31, 2002) the interventions. For each selected physician, at least 25 patients meeting study criteria were randomly selected for review during each study period. Inclusion criteria were 2 visits with the physician, with the second visit occurring during the 5-month study window; and a diagnosis (International Classification of Diseases, Ninth Revision [ICD-9] code) consistent with CHD or diabetes mellitus, newly ranked in ATP III as a CHD risk equivalent. While patients could have been randomly selected for inclusion in both study periods if they met criteria, this occurred for only 9% to 12% of all patients per group, and the percentage did not differ significantly among groups.

Data were collected on patient sex, birth year, date of most recent lipid panel, ICD-9 code, and cholesterol-lowering drugs used in therapy. Data analyses were completed for 2768 charts (Figure 1). The percentages of patients screened and treated with pharmacotherapeutics according to guidelines were calculated for each physician in each review period. The use of screening percentage as an outcome was based on a large-scale chart audit for patients at high risk of CHD in which only 52% of patients had a lipid profile reported.46
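
A minimal sketch of how such per-physician, per-period percentages can be computed from chart-level records; the column names and values below are invented, as the study's actual data layout is not described.

```python
# Hypothetical chart-level records; column names and values are invented.
import pandas as pd

charts = pd.DataFrame({
    "physician": [1, 1, 1, 1, 2, 2, 2, 2],
    "period": ["pre", "pre", "post", "post", "pre", "pre", "post", "post"],
    "lipid_panel_on_chart": [1, 0, 1, 1, 1, 1, 1, 1],
    "treated_per_guidelines": [1, 1, 1, 1, 0, 1, 1, 1],
})

# Percentage screened and treated, per physician and review period
rates = 100 * charts.groupby(["physician", "period"])[
    ["lipid_panel_on_chart", "treated_per_guidelines"]].mean()
print(rates)
```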

To estimate interrater reliability, all reviewers were asked to rate a subset of charts. Generalized κ values47 were averaged across the dichotomously scored outcomes of primary interest (lipid screening, drug treatment) to yield mean κ = 0.83. Other outcomes were assessed along varying scales of measurement and summarized with the percent agreement index. When averaged across all outcomes, 94.7% agreement was found. A 10% sample of all charts was examined for accuracy. A total of 19 errors out of 5200 fields yielded an error rate less than 0.4%.
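
The generalized κ cited here is Fleiss' extension of κ to multiple raters.47 A minimal sketch of that computation, with a hypothetical rating matrix:

```python
# Fleiss' generalized kappa for multiple raters; the rating matrix is
# hypothetical (rows = charts; columns = rater counts per category).
import numpy as np

def fleiss_kappa(counts):
    """counts[i, j] = number of raters placing chart i in category j;
    every chart must be rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)   # category shares
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_exp = p_i.mean(), (p_j ** 2).sum()
    return (p_bar - p_exp) / (1 - p_exp)

# 5 charts, 3 raters, categories = (treated per guidelines, not treated)
print(round(fleiss_kappa([[3, 0], [2, 1], [3, 0], [0, 3], [3, 0]]), 2))
```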

Analysis

Knowledge test (phase 1) and chart review (phase 2) data were analyzed using repeated-measures analysis of variance (ANOVA), reporting the partial omega squared (ω²) effect size. For the knowledge test, scale scores were subjected to a 2 × 3 repeated-measures ANOVA having 1 between-subjects factor (live CME and online CME) and 1 within-subject factor (pretest, posttest 1, and posttest 2). Tests of Sidak-adjusted simple main effects were used for post hoc mean comparisons as needed. Orthogonal planned contrasts48,49 were formulated for the chart data to test the hypothesis about behaviors. Both data sets approximated the assumptions of these analyses closely but not perfectly. Nonparametric and robust analyses applied to the data yielded similar conclusions. Because of this and our intent of estimating and comparing means rather than medians or distributional shapes, we focused interpretation on the parametric results.
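
A minimal sketch of a 2 × 3 mixed-design ANOVA of this shape, using the third-party pingouin package as a stand-in for the SAS/SPSS analyses the authors ran; the simulated data and the partial ω² formula shown (one common textbook variant) are illustrative assumptions.

```python
# Simulated 2 x 3 mixed design analyzed with the third-party pingouin
# package; a stand-in for the authors' SAS/SPSS analyses, not their code.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for pid in range(40):
    group = "live" if pid < 20 else "online"
    base = rng.normal(45, 8)                     # pretest % correct
    for occasion, gain in [("pre", 0), ("post1", 31), ("post2", 36)]:
        rows.append({"id": pid, "group": group, "time": occasion,
                     "score": base + gain + rng.normal(0, 5)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="id", between="group")
print(aov[["Source", "F", "p-unc"]])

def partial_omega_sq(ss_effect, df_effect, ms_error, n_obs):
    """One common textbook formula for partial omega squared."""
    return (ss_effect - df_effect * ms_error) / (
        ss_effect + (n_obs - df_effect) * ms_error)

# Example with hypothetical values read off an ANOVA table
print(round(partial_omega_sq(5200.0, 2, 25.0, 120), 2))
```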

Additional analyses included examination of relationships between knowledge test outcomes and potential moderator variables including type of clinic, sex, year graduated from medical school, and other background characteristics; Web conference participation; study dropout; and satisfaction with the learning experience. Also considered were measures obtained at preintervention such as familiarity with prior and current guidelines, comfort with using the Internet, hesitancy in implementing the new guidelines, and frequency of attending live CME activities. For these analyses, relevant parametric methods (regression, repeated-measures ANOVA, analysis of covariance [ANCOVA]) and nonparametric methods (Wilcoxon-Mann-Whitney test, χ2 test of independence, γ measure of association) were used as needed based on data considerations. Holm’s modified Bonferroni correction was applied to control experiment-wise error (eg, in exploratory analyses).50,51 Analysis was based on intention to treat. Statistical significance was assessed at P<.05 throughout, and all analyses were conducted using SAS version 9.1 (SAS Institute, Cary, NC) and SPSS version 12.0 (SPSS Inc, Chicago, Ill).
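
Holm's step-down procedure is simple enough to sketch directly; the p values below are invented for illustration.

```python
# Holm's step-down Bonferroni procedure; the p values are invented.
def holm(p_values, alpha=0.05):
    """Test p values smallest-first against alpha / (m - rank); once a
    test fails, all remaining (larger) p values are retained."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] > alpha / (m - rank):
            break  # step down: stop at the first nonsignificant test
        reject[i] = True
    return reject

print(holm([0.010, 0.040, 0.030, 0.200]))  # [True, False, False, False]
```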

Randomization and Dropout Comparisons

Of 97 physicians (50 live CME, 47 online CME) who began an educational activity, 93 (49 live CME, 44 online CME) completed all learning activities, tests, and follow-up measures (Figure 1). The 4% attrition rate was well within accepted guidelines for instructional RCTs.52 Dropouts from the online CME group (n = 3) did not indicate less Internet comfort than nondropouts, suggesting that failure to complete the study was unrelated to computer skills.

Participant Characteristics

Participants randomized to the live CME and online CME groups in phase 1 of the study did not differ significantly with regard to practice characteristics, years in practice, self-rated knowledge, preferred CME formats, or prior use of or readiness/reluctance to use the NCEP guidelines (Table 2). The control group added in phase 2 was drawn from the same practice sites as participants in the interventions, with 16 (89%) of 18 practicing in sites with 1 or more live CME or online CME physicians and 2 (11%) practicing in sites with no other participating physicians. Control physicians were in primary care specialties (5 internal medicine, 13 family practice) and did not significantly differ from the intervention groups with respect to specialty, sex, or years since graduation from medical school.

Table 2. Baseline Characteristics of Participants in Live CME and Online CME Groups
Use of Online Program

The online CME participants spent a mean (SD) of 3.8 (2.0) hours accessing all components of the online program, including tests and surveys. The mean (SD) time spent accessing the educational components was 2.2 (0.8) hours. Though this was comparable to the total time spent by live CME participants (range: 1.5-2 hours), the online CME physicians spread their involvement over a median of 3 sessions (range: 1-9), each lasting a mean (SD) of 1.4 (0.9) hours. An effort was made to exclude time periods in which physicians were logged on but not actively engaged. However, since participants were not visually monitored, active viewing time may have been overestimated.

Use of Reinforcing Options

Live CME participants made very little use of either e-mail or telephone to contact faculty, with inquiries made by only 2 (4%) of the physicians. The online CME participants submitted no questions by e-mail. However, 85% signed on at some point during a Web conference.

Knowledge Test

Knowledge test score means for both study arms are shown in Figure 2. No interaction effect between group membership and test performance across time emerged, indicating similar preintervention to postintervention improvements for both groups. Regarding the group main effect, the online CME group scored slightly higher than the live CME group when averaged across all 3 testing occasions (4.8% additional items correct; 95% confidence interval [CI], 0.6%-9.0%; partial ω² = 0.01; P = .03). Knowledge retention at 12 weeks was comparable across both groups. There was also a statistically significant and large time main effect (partial ω² = 0.70, P<.001). For both groups combined, posttest 1 scores surpassed pretest levels, and posttest 2 scores surpassed both pretest and posttest 1 levels (P<.001). The sizes of the differences from pretest to posttest 1 and pretest to posttest 2 were large, representing increases in percentage of items correct of 31.0% (95% CI, 27.0%-35.0%) and 36.4% (95% CI, 32.2%-40.6%), respectively. Although the increase from posttest 1 to posttest 2 was statistically significant (P<.001), the percentage increase was only 5.4% (95% CI, 2.6%-8.2%).

Figure 2. Knowledge Test Mean Estimates

Error bars represent the 95% confidence interval (CI) for mean percentage correct on a 39-item knowledge test. A repeated-measures analysis of variance revealed a significant increase in percentage correct between groups (P = .03) and across time (P<.001), but no interaction between group and performance across time (P = .70). CME indicates continuing medical education.

Potential Moderator Variables

Physicians assigned to the online CME group were actually less comfortable in using the Internet than physicians assigned to the live CME group (“very comfortable” indicated by 40% vs 66% of participants, respectively; Wilcoxon-Mann-Whitney test, P = .009). Participants indicating less familiarity with the new guidelines at pretest achieved slightly greater learning gains: the increase in percentage of items correct was 19% for those very familiar, 33% for moderately familiar, 34% for slightly familiar, and 43% for not familiar (χ2 test of independence, P = .03). Women registered greater pretest to posttest gains than men (35% vs 27% increase; repeated-measures ANOVA, P = .01). No other strong relationships between test scores and background variables emerged.

Participant Satisfaction

All live CME participants and 94% of online CME participants rated the learning experience as “good” or “excellent.” Nonparametric correlations revealed no significant associations between course satisfaction and test performance. Nearly all of the 40 online CME participants (95%) who attended the live Web conference rated it as “useful” or “very useful” and indicated that it provided an opportunity to solidify guideline knowledge and obtain answers to questions.

Chart Review

For appropriate screening for lipid abnormalities, the online CME group did not differ significantly from the live CME and control groups (P = .24) (Table 3). The live CME group did not differ from the control group (P = .16).

Table 3. Patients Appropriately Screened for Dyslipidemia (Lipid Panel on Chart)*

Regarding drug treatment for patients at high risk (Table 4), there was a statistically significant though relatively small increase (5.0% [95% CI, 1.0%-9.1%]) in the percentage of patients appropriately treated by the online CME group when compared with the live CME and control groups (partial ω² = 0.16, P = .04). The live CME and control groups did not differ significantly in treatment of patients.
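
The planned-contrast logic here (online CME vs the pooled live CME and control groups) can be sketched as follows; the change scores are invented and do not reproduce Table 4.

```python
# One-degree-of-freedom planned contrast: online CME vs the pooled live
# CME and control groups; the change scores are invented, not Table 4 data.
import numpy as np
from scipy import stats

online = np.array([4.0, 6.5, 5.1, 3.8, 6.0])     # % point changes
live = np.array([0.5, -1.2, 1.0, 0.3, -0.6])
control = np.array([0.2, 0.8, -0.9, 0.1, 0.4])

groups, weights = [online, live, control], [2, -1, -1]
estimate = sum(w * g.mean() for w, g in zip(weights, groups))
# Pooled within-group variance (equal-variance assumption)
df = sum(len(g) - 1 for g in groups)
ms_error = sum(((g - g.mean()) ** 2).sum() for g in groups) / df
se = np.sqrt(ms_error * sum(w ** 2 / len(g) for w, g in zip(weights, groups)))
t = estimate / se
print(f"t = {t:.2f}, p = {2 * stats.t.sf(abs(t), df):.4f}")
```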

Table 4. Patients Appropriately Treated for Dyslipidemia (Lipid-Lowering Drug on Chart)*

Comment

To our knowledge, this study provides the first evidence at the individual physician level that Internet-based CME can produce objectively measured changes in behavior as well as gains in knowledge sustained over 12 weeks that are comparable or superior to those realized from effective live activities. Several factors indicate that these results may generalize to other physician populations. The participants were diverse and represented varying levels of experience and content knowledge. Similar numbers of men and women participated, and they were drawn from a range of practice settings in urban/suburban environments, including community health centers and a private, multispecialty group. The randomization process avoided the self-selection bias based on Internet comfort that is typically observed in studies of Internet use. Indeed, those randomized to the Internet-based intervention were significantly less comfortable with the Internet than those randomized to the live CME intervention.

While changes in knowledge and attitudes were comparable across both groups, only the online CME participants demonstrated behavioral change. Several factors could have contributed to this. First, online CME participants often completed the learning activity over several sessions, in contrast to a single, live interactive workshop. Repeated visits to the Web site may have provided additional reinforcement of learning. Second, performance may have been enhanced because the online CME participants were able to move about the Web site and structure their own learning—allocating time to each educational piece as desired—in keeping with current theories of adult learning.53

Third, whereas both study groups were given access to experts throughout the postintervention period, few participants in either group made use of learner-initiated modes of access (e-mail or telephone). However, 85% of the online CME participants signed on at some time during the scheduled option of the Web conference. This later exposure to an educational activity combined with the multisession use of the online materials may indicate an advantage of sequential reinforcement with Internet-based education, an option more challenging to achieve when using live education.

Several potential limitations of this study need to be considered. Because gains in knowledge were similar for both groups over time, it is reasonable to consider the possibility that cross-contamination occurred within those practices housing physicians in both study arms. However, because the live CME workshops primarily occurred before the beginning of the online CME interventions, and the pretest and posttest 1 measures for the live CME group were collected during the workshop, cross-contamination could not have affected the preintervention to postintervention gains for the live CME group. While the possibility of cross-contamination effects on the online CME group’s learning gains cannot be completely excluded, it is unlikely for 2 reasons. First, the knowledge assessment instrument was designed to be challenging and addressed a variety of domains relevant to the guidelines; any exchange of information between learners would need to address the complexity and comprehensiveness of the material covered. Second, 10 online CME physicians practicing at sites where there were no live CME participants achieved learning gains from pretest to posttest 1 (35.9% increase in percentage of correctly answered items) and from pretest to posttest 2 (38.7%) that were at least as great as those for 34 online CME physicians at mixed sites (27.8% and 34.5%, respectively).

Cross-contamination is also an unlikely explanation for the behavioral results. Physicians in all 3 arms demonstrated similar lipid screening behavior, and no change from baseline screening levels was observed following the intervention. While the absence of change cannot completely exclude the possibility of cross-contamination, improbable coincidental effects (decreases, increases, or both) of similar magnitudes in 2 or more groups would have been required to maintain the observed baselines. With respect to the use of appropriate drug therapy, improvement in treatment was observed only for the online CME group, not for the live CME and control groups, which did not differ significantly from each other.

Payment of honoraria for completing data collection instruments could potentially have influenced study results, giving individuals a greater incentive to participate. However, since the same honorarium was provided to participants in both intervention arms, this should have affected the groups in the same manner.

There were high baseline levels of screening and compliance in these practices, and it is reasonable to ask how this might occur prior to the knowledge gains demonstrated by the knowledge test. There are several possible explanations. During this time period there were separate but analogous quality improvement initiatives in the public and private clinics that might have contributed to the high levels. In the community health centers, these included incentives that focused on blood pressure, glycosylated hemoglobin levels, lipid levels, and other screening measures. The private multispecialty group practices had launched an initiative to inform physicians about abnormal low-density lipoprotein cholesterol levels in patients with diabetes mellitus. Also, the test, while validated for content, examined a much larger knowledge base regarding ATP III guidelines than could be investigated in the behavioral component. In particular, while the guidelines address patients at low, moderate, and high risk, as well as those with the metabolic syndrome, we studied only the impact on high-risk patients with CHD and diabetes mellitus. The measured behavioral performance (screening and treatment) for high-risk patients may differ from that for the general population of patients at risk for CHD. It is possible that these results would not hold for other risk groups. Had it been possible to study all risk categories, we might have observed a tighter linkage between preintervention knowledge levels and baseline performance.

While the objective of the study was to determine equivalence of the 2 approaches to CME, rather than superiority, we did find a 5% increase in compliance with treatment guidelines in high-risk patients among physicians in the online CME group, and no increase in the live CME group. The clinical significance of this magnitude of change needs to be considered. The change is modest, and it is not clear what impact it would have on patient health outcomes. At baseline, 84% to 87% of these physicians’ high-risk patients were receiving drug therapy as specified in the guidelines, and those not receiving treatment may have included patients who declined to use medications or were intolerant of all lipid-lowering medications. On the other hand, the 5% change represents about a third of the maximum possible improvement that could have occurred. With the high baseline rate, there may have been a ceiling effect, and it is possible that the absolute increase could be greater in a population in which the baseline treatment rate is lower. Additional research among populations with lower baseline screening and treatment rates is required to define more precisely the magnitude and clinical significance of the observed improvement and to explore approaches to optimize behavioral gain.

Arguments supporting the advantages of Internet-based education have typically emphasized the convenience, interactivity, flexible scheduling, widespread availability, and potentially low cost per learner from an educational perspective.17 Studies are needed to provide additional estimates of physician behavioral change—and ultimately of health care outcomes—before we can fully understand the relative costs and benefits of these technologies in education.

The landmark report Crossing the Quality Chasm noted the challenge of identifying ways to foster ongoing skill development for professionals already in practice.2 The results of our study provide evidence for additional value that can be realized from the expanded use of appropriately designed Internet-based CME in promoting health care quality.

Corresponding Author: Michael Fordis, MD, Center for Collaborative and Interactive Technologies, Baylor College of Medicine, One Baylor Plaza, BCM-MS155, Houston, TX 77030 (fordis@bcm.tmc.edu).

Author Contributions: Dr King had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Fordis, King, Ballantyne, Greenberg, Schneider, Spann.

Acquisition of data: Fordis, King, Ballantyne, Jones, Greisinger.

Analysis and interpretation of data: Fordis, King, Schneider, Greisinger.

Drafting of the manuscript: Fordis, King.

Critical revision of the manuscript for important intellectual content: Fordis, King, Ballantyne, Jones, Schneider, Spann, Greenberg, Greisinger.

Statistical analysis: King.

Obtained funding: Fordis, Ballantyne, Schneider.

Administrative, technical, or material support: Fordis, Ballantyne, Schneider, Spann, Greisinger.

Study supervision: Fordis, Greisinger.

Financial Disclosures: Drs King, Greenberg, Spann, and Greisinger and Ms Schneider have no financial relationships to disclose. Dr Fordis received grant support from AstraZeneca for the current study and has received grants from Merck and Merck/Schering-Plough. Dr Ballantyne has received grants and research support from AstraZeneca, diaDexus, Gene Logic, GlaxoSmithKline, Integrated Therapeutics, Kos, Merck, Novartis, Pfizer, Reliant, Sankyo Pharma, Schering-Plough, and Sanofi-Synthelabo; serves as a consultant for AstraZeneca, Bayer, Merck, Novartis, Pfizer, Reliant, Schering-Plough, and Sanofi-Synthelabo; and serves on the speakers’ bureau for AstraZeneca, Bristol-Myers Squibb, Kos, Merck, Novartis, Pfizer, Reliant, Sanofi-Synthelabo, and Schering-Plough. Dr Jones has research support from Abbott, AstraZeneca, Kos, and Pfizer and serves as a consultant to AstraZeneca and Abbott.

Funding/Support: This study was supported by a grant from AstraZeneca Pharmaceuticals.

Role of the Sponsor: The study design was developed by the authors, and as part of the grant submission process, the study design was reviewed and approved for support by the funding organization. The funding organization had no role in the conduct of the study; in the collection, management, analysis, and interpretation of data; or in the preparation, review, or approval of the manuscript.

Acknowledgment: We wish to express appreciation to Baylor College of Medicine employees Quentin Smith, MS, for his assistance in manuscript preparation and editing and Brenda Galena for her assistance in data entry.

References

1. Kohn LT, Corrigan J, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
3. Institute of Medicine, Committee on Health Professions Education Summit. Health Professions Education: A Bridge to Quality. Washington, DC: National Academies Press; 2003.
4. Institute of Medicine, Committee on Data Standards for Patient Safety. Patient Safety: Achieving a New Standard for Care. Washington, DC: National Academies Press; 2004.
5. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142:260-273.
6. Grol R, Wensing M. What drives change? barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(6 suppl):S57-S60.
7. Cohen J. A word from the president: transforming CME. Association of American Medical Colleges. October 2002. Available at: http://www.aamc.org/newsroom/reporter/oct02/word.htm. Accessed March 31, 2005.
8. Whitcomb ME. CME reform: an imperative for improving the quality of medical care. Acad Med. 2002;77:943-944.
9. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance: a systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700-705.
10. Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ. 1997;157:408-416.
11. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867-874.
12. Mazmanian PE, Davis DA. Continuing medical education and the physician as a learner: guide to the evidence. JAMA. 2002;288:1057-1060.
13. American Medical Association. 2002 AMA Study on Physicians' Use of the World Wide Web. Chicago, Ill: AMA Press; 2002.
14. Bennett NL, Casebeer LL, Kristofco RE, Strasser SM. Physicians' Internet information-seeking behaviors. J Contin Educ Health Prof. 2004;24:31-38.
15. Accreditation Council for Continuing Medical Education. ACCME Annual Report Data 2003. Accreditation Council for Continuing Medical Education; 2004. Available at: http://www.accme.org/dir_docs/doc_upload/97dd7a39-9746-4a5d-8c01-e56a9ffc0c8b_uploaddocument.pdf. Accessed July 19, 2005.
16. Accreditation Council for Continuing Medical Education. ACCME Annual Report Data 1998. Accreditation Council for Continuing Medical Education; 2000. Available at: http://www.accme.org/dir_docs/doc_upload/dc316660-2a48-46d4-916f-60334f7527ba_uploaddocument.pdf. Accessed July 19, 2005.
17. Horton WK. Designing Web-based Training: How to Teach Anyone Anything Anywhere Anytime. New York, NY: Wiley; 2000.
18. Barnes BE. Creating the practice-learning environment: using information technology to support a new model of continuing medical education. Acad Med. 1998;73:278-281.
19. Bednar A, Cunningham D, Duffy T, Perry J. Theory into practice: how do we link? In: Duffy T, Jonassen D, eds. Constructivism and the Technology of Instruction: A Conversation. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers; 1993:17-34.
20. Spiro R, Feltovich P, Jacobson M, Coulson R. Cognitive flexibility, constructivism, and hypertext: random access instruction for advanced knowledge acquisition in ill-structured domains. In: Duffy T, Jonassen D, eds. Constructivism and the Technology of Instruction: A Conversation. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers; 1993:57-75.
21. Knowles M. Self-directed Learning: A Guide for Learners and Teachers. Cambridge, England: Cambridge Book Co; 1988.
22. Kearsley G. Exploration in learning and instruction: the theory into practice database. Available at: http://tip.psychology.org/index.html. Accessed February 24, 2004.
23. Casebeer LL, Strasser SM, Spettell CM, et al. Designing tailored Web-based instruction to improve practicing physicians' preventive practices. J Med Internet Res. 2003;5:e20.
24. Sargeant J, Curran V, Jarvis-Selinger S, et al. Interactive on-line continuing medical education: physicians' perceptions and experiences. J Contin Educ Health Prof. 2004;24:227-236.
25. Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: sound educational method or hype? a review of the evaluation literature. Acad Med. 2002;77(10 suppl):S86-S93.
26. Wutoh R, Boren SA, Balas EA. eLearning: a review of Internet-based continuing medical education. J Contin Educ Health Prof. 2004;24:20-30.
27. Cobb SC. Internet continuing education for health care professionals: an integrative review. J Contin Educ Health Prof. 2004;24:171-180.
28. Kronz JD, Silberman MA, Allsbrook WC, Epstein JI. A Web-based tutorial improves practicing pathologists' Gleason grading of images of prostate carcinoma specimens obtained by needle biopsy: validation of a new medical education paradigm. Cancer. 2000;89:1818-1823.
29. Harris JM, Salasche SJ, Harris RB. Can Internet-based continuing medical education improve physicians' skin cancer knowledge and skills? J Gen Intern Med. 2001;16:50-56.
30. Pagnanelli G, Soyer HP, Argenziano G, et al. Diagnosis of pigmented skin lesions by dermoscopy: Web-based training improves diagnostic performance of non-experts. Br J Dermatol. 2003;148:698-702.
31. Curran VR, Hoekman T, Gulliver W, Landells I, Hatcher L. Web-based continuing medical education (I): field test of a hybrid computer-mediated instructional delivery system. J Contin Educ Health Prof. 2000;20:97-105.
32. Curran VR, Hoekman T, Gulliver W, Landells I, Hatcher L. Web-based continuing medical education (II): evaluation study of computer-mediated continuing medical education. J Contin Educ Health Prof. 2000;20:106-119.
33. Hinkka H, Kosunen E, Metsanoja R, Lammi UK, Kellokumpu-Lehtinen P. General practitioners' attitudes and ethical decisions in end-of-life care after a year of interactive Internet-based training. J Cancer Educ. 2002;17:12-18.
34. Chan DH, Leclair K, Kaczorowski J. Problem-based small-group learning via the Internet among community family physicians: a randomized controlled trial. MD Comput. 1999;16:54-58.
35. Harris JM Jr, Kutob RM, Surprenant ZJ, Maiuro RD, Delate TA. Can Internet-based education improve physician confidence in dealing with domestic violence? Fam Med. 2002;34:287-292.
36. Kemper KJ, Amata-Kynvi A, Sanghavi D, et al. Randomized trial of an Internet curriculum on herbs and other dietary supplements for health care professionals. Acad Med. 2002;77:882-889.
37. Gerbert B, Bronstone A, Maurer T, Berger T, McPhee SJ, Caspers N. The effectiveness of an Internet-based tutorial in improving primary care physicians' skin cancer triage skills. J Cancer Educ. 2002;17:7-11.
38. Allison JJ, Kiefe CI, Wall T, et al. Multicomponent Internet continuing medical education to promote chlamydia screening. Am J Prev Med. 2005;28:285-290.
39. Stewart M, Marshall JN, Ostbye T, et al. Effectiveness of case-based on-line learning of evidence-based practice guidelines. Fam Med. 2005;37:131-138.
40. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA. 1992;268:1111-1117.
41. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: L. Erlbaum Associates; 1988.
42. Dennis M, Lennox R, Foss M. Practical power analysis for substance abuse health services research. In: Bryant KJ, Windle MT, West SG, eds. The Science of Prevention: Methodological Advances From Alcohol and Substance Abuse Research. Washington, DC: American Psychological Association; 1997:367-404.
43. National Cholesterol Education Program, Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults. Third Report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III): Final Report. Washington, DC: National Cholesterol Education Program; 2002.
44. Report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults: The Expert Panel. Arch Intern Med. 1988;148:36-69.
45. Summary of the second report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel II). JAMA. 1993;269:3015-3023.
46. Massing MW, Foley KA, Sueta CA, et al. Trends in lipid management among patients with coronary artery disease: has diabetes received the attention it deserves? Diabetes Care. 2003;26:991-997.
47. Fleiss J. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76:378-382.
48. Thompson B. The importance of planned or focused comparisons in OVA research. Meas Eval Couns Dev. 1988;21:99-101.
49. Furr RM, Rosenthal R. Repeated-measures contrasts for "multiple-pattern" hypotheses. Psychol Methods. 2003;8:275-293.
50. Holm S. A simple sequentially rejective multiple testing procedure. Scand J Stat. 1979;6:65-70.
51. Olejnik S, Li J, Supattathum S, Huberty C. Multiple testing and statistical power with modified Bonferroni procedures. J Educ Behav Stat. 1997;22:389-406.
52. Coalition for Evidence-Based Policy. Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide. Washington, DC: US Dept of Education; April 2003. ERIC Document Research Service No. ED477483.
53. Smith-Gratto K. Strengthening learning on the Web: programmed instruction and constructivism. In: Abbey B, ed. Instructional and Cognitive Impacts of Web-based Education. Hershey, Pa: Idea Group Publishing; 2000:227-240.

Figures

Figure 1. Randomization Scheme and Participation Flow of the Continuing Medical Education (CME) Study Groups
Graphic Jump Location
Figure 2. Knowledge Test Mean Estimates
Graphic Jump Location

Error bars represent the 95% confidence interval (CI) for mean percentage correct on a 39-item knowledge test. A repeated-measures analysis of variance revealed a significant increase in percentage correct between groups (P = .03) and across time (P<.001), but no interaction between group and performance across time (P = .70). CME indicates continuing medical education.

Tables

Table Graphic Jump LocationTable 1. Instructional Elements of Live CME and Online CME Interventions
Table Graphic Jump LocationTable 2. Baseline Characteristics of Participants in Live CME and Online CME Groups
Table Graphic Jump LocationTable 3. Patients Appropriately Screened for Dyslipidemia (Lipid Panel on Chart)*
Table Graphic Jump LocationTable 4. Patients Appropriately Treated for Dyslipidemia (Lipid-Lowering Drug on Chart)*

References

Kohn LT, Corrigan J, Donaldson MS. To Err Is Human: Building a Safer Health SystemWashington, DC: National Academy Press; 2000
Institute of Medicine.  Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001
Institute of Medicine.  Committee on Health Professions Education Summit. Health Professions Education: A Bridge to Quality. Washington, DC: National Academies Press; 2003
Institute of Medicine.  Committee on Data Standards for Patient Safety. Patient Safety: Achieving a New Standard for Care. Washington, DC: National Academies Press; 2004
Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care.  Ann Intern Med. 2005;142:260-273
PubMed   |  Link to Article
Grol R, Wensing M. What drives change? barriers to and incentives for achieving evidence-based practice.  Med J Aust. 2004;180:(6 suppl)  S57-S60
PubMed
Cohen J. A word from the president: transforming CME. Association of American Medical Colleges. October 2002. Available at: http://www.aamc.org/newsroom/reporter/oct02/word.htm. Accessed March 31, 2005
Whitcomb ME. CME reform: an imperative for improving the quality of medical care.  Acad Med. 2002;77:943-944
PubMed   |  Link to Article
Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance: a systematic review of the effect of continuing medical education strategies.  JAMA. 1995;274:700-705
PubMed   |  Link to Article
Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines.  CMAJ. 1997;157:408-416
PubMed
Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes?  JAMA. 1999;282:867-874
PubMed   |  Link to Article
Mazmanian PE, Davis DA. Continuing medical education and the physician as a learner: guide to the evidence.  JAMA. 2002;288:1057-1060
PubMed   |  Link to Article
American Medical Association.  2002 AMA Study on Physicians' Use of the World Wide WebChicago, Ill: AMA Press; 2002
Bennett NL, Casebeer LL, Kristofco RE, Strasser SM. Physicians’ Internet information-seeking behaviors.  J Contin Educ Health Prof. 2004;24:31-38
PubMed   |  Link to Article
Accreditation Council for Continuing Medical Education.  ACCME Annual Report Data 2003. Accreditation Council for Continuing Medical Education; 2004. Available at: http://www.accme.org/dir_docs/doc_upload/97dd7a39-9746-4a5d-8c01-e56a9ffc0c8b_uploaddocument.pdf. Accessed July 19, 2005
Accreditation Council for Continuing Medical Education.  ACCME Annual Report Data 1998. Accreditation Council for Continuing Medical Education; 2000. Available at: http://www.accme.org/dir_docs/doc_upload/dc316660-2a48-46d4-916f-60334f7527ba_uploaddocument.pdf. Accessed July 19, 2005
Horton WK. Designing Web-based Training: How to Teach Anyone Anything Anywhere AnytimeNew York, NY: Wiley; 2000
Barnes BE. Creating the practice-learning environment: using information technology to support a new model of continuing medical education.  Acad Med. 1998;73:278-281
PubMed   |  Link to Article
Bednar A, Cunningham D, Duffy T, Perry J. Theory into practice: how do we link? In: Duffy T, Jonassen D, eds. Constructivism and the Technology of Instruction: A Conversation. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers; 1993:17-34
Spiro R, Feltovich P, Jacobson M, Coulson R. Cognitive flexibility, constructivism, and hypertext: random access instruction for advanced knowledge acquisition in ill-structured domains. In: Duffy T, Jonassen D, eds. Constructivism and the Technology of Instruction: A Conversation. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers; 1993:57-75
Knowles M. Self-directed Learning: A Guide for Learners and Teachers. Cambridge, England: Cambridge Book Co; 1988
Kearsley G. Exploration in learning and instruction: the theory into practice database. Available at: http://tip.psychology.org/index.html. Accessed February 24, 2004
Casebeer LL, Strasser SM, Spettell CM, et al. Designing tailored Web-based instruction to improve practicing physicians’ preventive practices. J Med Internet Res. 2003;5:e20
Sargeant J, Curran V, Jarvis-Selinger S, et al. Interactive on-line continuing medical education: physicians’ perceptions and experiences. J Contin Educ Health Prof. 2004;24:227-236
Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: sound educational method or hype? A review of the evaluation literature. Acad Med. 2002;77(10 suppl):S86-S93
Wutoh R, Boren SA, Balas EA. eLearning: a review of Internet-based continuing medical education.  J Contin Educ Health Prof. 2004;24:20-30
Cobb SC. Internet continuing education for health care professionals: an integrative review.  J Contin Educ Health Prof. 2004;24:171-180
Kronz JD, Silberman MA, Allsbrook WC, Epstein JI. A Web-based tutorial improves practicing pathologists’ Gleason grading of images of prostate carcinoma specimens obtained by needle biopsy: validation of a new medical education paradigm.  Cancer. 2000;89:1818-1823
Harris JM, Salasche SJ, Harris RB. Can Internet-based continuing medical education improve physicians’ skin cancer knowledge and skills?  J Gen Intern Med. 2001;16:50-56
Pagnanelli G, Soyer HP, Argenziano G, et al. Diagnosis of pigmented skin lesions by dermoscopy: Web-based training improves diagnostic performance of non-experts. Br J Dermatol. 2003;148:698-702
Curran VR, Hoekman T, Gulliver W, Landells I, Hatcher L. Web-based continuing medical education (I): field test of a hybrid computer-mediated instructional delivery system.  J Contin Educ Health Prof. 2000;20:97-105
Curran VR, Hoekman T, Gulliver W, Landells I, Hatcher L. Web-based continuing medical education (II): evaluation study of computer-mediated continuing medical education.  J Contin Educ Health Prof. 2000;20:106-119
Hinkka H, Kosunen E, Metsanoja R, Lammi UK, Kellokumpu-Lehtinen P. General practitioners’ attitudes and ethical decisions in end-of-life care after a year of interactive Internet-based training.  J Cancer Educ. 2002;17:12-18
Chan DH, Leclair K, Kaczorowski J. Problem-based small-group learning via the Internet among community family physicians: a randomized controlled trial.  MD Comput. 1999;16:54-58
Harris JM Jr, Kutob RM, Surprenant ZJ, Maiuro RD, Delate TA. Can Internet-based education improve physician confidence in dealing with domestic violence?  Fam Med. 2002;34:287-292
Kemper KJ, Amata-Kynvi A, Sanghavi D, et al. Randomized trial of an Internet curriculum on herbs and other dietary supplements for health care professionals. Acad Med. 2002;77:882-889
Gerbert B, Bronstone A, Maurer T, Berger T, McPhee SJ, Caspers N. The effectiveness of an Internet-based tutorial in improving primary care physicians’ skin cancer triage skills.  J Cancer Educ. 2002;17:7-11
Allison JJ, Kiefe CI, Wall T, et al. Multicomponent Internet continuing medical education to promote chlamydia screening. Am J Prev Med. 2005;28:285-290
Stewart M, Marshall JN, Ostbye T, et al. Effectiveness of case-based on-line learning of evidence-based practice guidelines. Fam Med. 2005;37:131-138
Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials.  JAMA. 1992;268:1111-1117
Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: L. Erlbaum Associates; 1988
Dennis M, Lennox R, Foss M. Practical power analysis for substance abuse health services research. In: Bryant KJ, Windle MT, West SG, eds. The Science of Prevention: Methodological Advances From Alcohol and Substance Abuse Research. Washington, DC: American Psychological Association; 1997:367-404
National Cholesterol Education Program.  Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults. Third Report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III): Final Report. Washington, DC: National Cholesterol Education Program; 2002
 Report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults: The Expert Panel.  Arch Intern Med. 1988;148:36-69
 Summary of the second report of the National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel II).  JAMA. 1993;269:3015-3023
Massing MW, Foley KA, Sueta CA, et al. Trends in lipid management among patients with coronary artery disease: has diabetes received the attention it deserves? Diabetes Care. 2003;26:991-997
Fleiss J. Measuring nominal scale agreement among many raters.  Psychol Bull. 1971;76:378-382
Thompson B. The importance of planned or focused comparisons in OVA research.  Meas Eval Couns Dev. 1988;21:99-101
Furr RM, Rosenthal R. Repeated-measures contrasts for “multiple-pattern” hypotheses.  Psychol Methods. 2003;8:275-293
Holm S. A simple sequentially rejective multiple testing procedure.  Scand J Stat. 1979;6:65-70
Olejnik S, Li J, Supattathum S, Huberty C. Multiple testing and statistical power with modified Bonferroni procedures. J Educ Behav Stat. 1997;22:389-406
Coalition for Evidence-Based Policy. Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide. Washington, DC: US Dept of Education; April 2003. ERIC Document Reproduction Service No. ED477483
Smith-Gratto K. Strengthening learning on the Web: programmed instruction and constructivism. In: Abbey B, ed. Instructional and Cognitive Impacts of Web-based Education. Hershey, Pa: Idea Group Publishing; 2000:227-240