Preliminary Communication

Second-Order Peer Review of the Medical Literature for Clinical Practitioners

R. Brian Haynes, MD, PhD; Chris Cotoi, MEng, MBA; Jennifer Holland, MLIS; Leslie Walters, BA; Nancy Wilczynski, MSc; Dawn Jedraszewski; James McKinlay, MSc; Richard Parrish, AD; K. Ann McKibbon, MLS, PhD; for the McMaster Premium Literature Service (PLUS) Project

Author Affiliations: Health Information Research Unit, Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario.

JAMA. 2006;295(15):1801-1808. doi:10.1001/jama.295.15.1801.

Context Most articles in clinical journals are not appropriate for direct application by individual clinicians.

Objective To create a second order of clinical peer review for journal articles to determine which articles are most relevant for specific clinical disciplines.

Design and Setting A 2-stage prospective observational study in which research staff reviewed all issues of more than 110 clinical journals (the number has varied slightly as journals were added or dropped, but has always exceeded 110) and selected each article that met critical appraisal criteria, from January 2003 through the present. Practicing physicians from around the world, excluding Northern Ontario, were recruited to the McMaster Online Rating of Evidence (MORE) system and registered as raters according to their clinical disciplines. An automated system assigned each qualifying article to raters in each pertinent clinical discipline and recorded their online assessments of the articles on 7-point scales (highest score, 7) of relevance and newsworthiness (defined as useful new information for physicians). Rated articles fed an online alerting service, the McMaster Premium Literature Service (PLUS). Physicians from Northern Ontario were invited to register with PLUS and then receive e-mail alerts about articles according to MORE system peer ratings for their own disciplines. Online access by PLUS users to PLUS alerts, raters' comments, article abstracts, and full-text journal articles was automatically recorded.

Main Outcome Measures Clinical rater recruitment and performance. Relevance and newsworthiness of journal articles to clinical practice in the discipline of the rating physician.

Results Through October 2005, MORE had 2139 clinical raters, and PLUS had 5892 articles with 45 462 relevance ratings and 44 724 newsworthiness ratings collected since 2003. On average, clinicians rated systematic review articles higher for relevance to practice than articles with original evidence and lower for useful new information. Primary care physicians rated articles lower than did specialists (P<.05). Of the 98 physicians who registered for PLUS, 88 (90%) used it on 3136 occasions during an 18-month test period.

Conclusions This demonstration project shows the feasibility and use of a post-publication clinical peer review system that differentiates published journal articles according to the interests of a broad range of clinical disciplines.


Clinical journals support several lines of research communication, including scientist-to-scientist (preliminary studies, the predominant mode), scientist-to-clinician (more definitive, ready-for-application original studies), clinician-to-clinician (review articles), and clinician-to-scientist communication (case reports and case series).1 These communication lines are not clearly differentiated in journals, and content that is ready for direct application by clinicians of a particular discipline appears infrequently, perhaps contributing to the difficulty that clinicians have in keeping up to date.2

Working with medical professional groups, we developed secondary journals and information services, such as ACP Journal Club and Evidence-Based Medicine, and their Web sites, to identify and disseminate the most definitive original studies and systematic reviews. For these secondary journals, research staff select, at the time of publication, all articles that meet explicit criteria for the critical appraisal of clinical research evidence, and a clinical editorial team then selects qualifying studies for the attention of a broad range of medical practitioners, including generalists and specialists.

Physicians reading these secondary publications must select the articles of most relevance to their own practice. To facilitate more exact matching of article content and relevance to individual clinical interests, we developed the McMaster Online Rating of Evidence (MORE) system. MORE collects and collates ratings from practicing clinicians for articles that have passed critical appraisal criteria (Box). MORE now supports the selection and reporting of articles for the secondary publications and also for new services downstream of MORE, including the McMaster Premium Literature Service (PLUS). Through PLUS, alerts about articles are sent to clinicians according to their self-stated discipline(s). In this article we describe the literature selection process and the MORE rating system, which activates a second-order peer review (peer review by clinicians for clinicians), as well as PLUS, a new service supported by MORE.

Box. Criteria for Review and Selection of Articles*

General

All English-language original and review articles in an issue of a candidate journal are considered for abstracting if they concern topics important to the clinical practice of internal medicine, general and family practice, surgery, psychiatry, pediatrics, or obstetrics and gynecology. Access to foreign-language journals is provided through the systematic reviews, especially those in the Cochrane Library, which summarizes articles from over 800 journals in several languages.

Original articles are classified by purpose (eg, treatment, diagnosis) and then passed or failed on methodologic criteria relevant to that purpose. In order to pass, all criteria for the relevant purpose category must be met (a minimal code sketch of this pass/fail filter appears after the Box).

Prevention or Treatment; Quality Improvement

  • Random allocation of participants to interventions

  • Outcome measures of known or probable clinical importance for 80% or more of the participants who entered the investigation

Diagnosis

  • Inclusion of a spectrum of participants, some (but not all) of whom have the disorder or condition of interest

  • Each participant must receive the new test and the diagnostic standard test

  • Either an objective diagnostic standard or a contemporary clinical diagnostic standard must be used with demonstrably reproducible criteria for any subjectively interpreted component

  • Interpretation of the test without knowledge of the diagnostic standard result

  • Interpretation of the diagnostic standard without knowledge of the test result

Prognosis

  • An inception cohort of participants, all initially free of the outcome of interest

  • Follow-up of 80% or more of patients until the occurrence of either a major study end point or the end of the study

Causation

  • Observations concerning the relation between exposures and putative clinical outcomes

  • Prospective data collection with clearly identified comparison group(s) for those at risk for the outcome of interest (in descending order of preference: randomized controlled trials, quasi-randomized controlled trials, nonrandomized controlled trials, cohort studies with case-by-case matching or statistical adjustment to create comparable groups, and nested case-control studies)

  • Masking of observers of outcomes to exposures (this criterion is assumed to be met if the outcome is objective)

Economics of Health Care Programs or Interventions

  • The economic question must compare alternative courses of action in real or hypothetical patients

  • The alternative diagnostic or therapeutic services or quality improvement strategies must be compared on the basis of both the outcomes they produce (effectiveness) and the resources they consume (costs)

  • Evidence of effectiveness must come from a study (or studies) that meets criteria for diagnosis, treatment, quality assurance, or review articles

  • Results should be presented in terms of the incremental or additional costs and outcomes incurred, and a sensitivity analysis should be done

Clinical Prediction Guides

  • The guide must be generated in 1 set of patients (training set) and validated in an independent set of real (not hypothetical) patients (test set), and must pertain to treatment, diagnosis, prognosis, or causation

Differential Diagnosis

  • A cohort of patients who present with a similar clinical problem, initially undiagnosed but reproducibly defined

  • Clinical setting is explicitly described

  • Ascertainment of diagnosis for 80% or more of patients using a reproducible diagnostic workup strategy and follow-up until patients are diagnosed; or follow-up of 1 month or more for acute disorders or for 1 year or more for chronic or relapsing disorders

Articles that are classified as a review of the literature are passed or failed on the following methodologic criteria. In order to pass, all criteria must be met.

Systematic Reviews

  • The clinical topic being reviewed must be clearly stated, with a description of how the evidence on this topic was researched, from what sources, and with what inclusion and exclusion criteria

  • More than 1 article included in the review must meet the previously noted criteria for treatment, diagnosis, prognosis, causation, quality improvement, or the economics of health care programs

*These criteria appear in each of these publications: ACP Journal Club, Evidence-Based Medicine, Evidence-Based Nursing, BMJ Updates+, and Medscape Best Evidence Alerts.
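
To make this gatekeeping step concrete, the following is a minimal sketch of how such a rule-based pass/fail filter might be represented in code. It is illustrative only: the Article fields, the two encoded purpose categories, and the predicates are simplified stand-ins for the full Box criteria, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Article:
    purpose: str                       # eg, "treatment" or "prognosis"
    randomized: bool = False
    important_outcomes: bool = False   # outcomes of known/probable clinical importance
    follow_up_fraction: float = 0.0    # proportion of entrants with outcome data

# Each purpose category maps to predicates that must ALL hold
# (two of the Box's categories shown; the rest follow the same pattern).
CRITERIA = {
    "treatment": [
        lambda a: a.randomized,                                     # random allocation
        lambda a: a.important_outcomes and a.follow_up_fraction >= 0.80,
    ],
    "prognosis": [
        lambda a: a.follow_up_fraction >= 0.80,                     # >=80% followed up
    ],
}

def passes_methods_filter(article: Article) -> bool:
    """Pass only if every criterion for the article's purpose category is met."""
    checks = CRITERIA.get(article.purpose)
    return checks is not None and all(check(article) for check in checks)

# A randomized trial with clinically important outcomes and 85% follow-up passes.
print(passes_methods_filter(Article("treatment", randomized=True,
                                    important_outcomes=True,
                                    follow_up_fraction=0.85)))  # True
```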

Literature Selection

Research staff trained in health research methodology read all original and review articles at the time of publication (from January 2003 to the present): over 28 000 articles per year, in over 1600 issues per year, from more than 110 clinical journals (the number has varied slightly as journals were added for regular review or dropped when they yielded fewer than 1 to 2 qualifying articles over a 6- to 12-month period, but has always exceeded 110). The articles read spanned a broad range of medical disciplines (http://bmjupdates.mcmaster.ca/journalslist.asp), including primary care, emergency medicine, internal medicine and its subspecialties, general surgery, obstetrics, gynecology, pediatrics, psychiatry, and nursing. Journals were selected based on suggestions by librarians, clinicians, editors, and editorial staff; Science Citation Index (SCI) impact factors; systematic examination of the contents of each selected journal for at least 6 months; and ongoing yield of articles meeting basic criteria for assessing the quality of studies concerning the cause, course, prediction, diagnosis, prognosis, prevention, and treatment of medical disorders (Box). For example, the criteria for a treatment study are that it must be a randomized trial reporting clinical outcomes, with at least 80% follow-up of participants. Of the 500 or more articles reviewed each week, only about 40 pass the methods filter, yielding between 2000 and 3000 articles each year with the strongest methods for clinical attention. The literature selection process is highly reproducible, based on previously reported calibration studies.3 Briefly, after a 1-year period of extensive calibration exercises, 6 research staff took part in a blinded interrater reliability study and attained chance-corrected agreement coefficients (κ) over 0.80 for all classification areas (ie, article purpose and methods).

More than 400 journal titles have been assessed since 1991 and, based on the number of articles meeting criteria, the top journals form the core set reviewed in producing 3 evidence-based journals: ACP Journal Club, Evidence-Based Medicine, and Evidence-Based Nursing. Journal yield is reviewed annually, and journals with very low yield (<1-2 articles/year) are dropped. New journals are added if nominated or if needed to expand the scope of disciplines covered, and if they contain articles that meet our criteria based on review of at least 6 journal issues. A detailed description of the literature selection process has been previously published.4

Raters

We recruited practicing clinicians to rate the usefulness of articles that met our appraisal criteria. The purpose of the rating process was to determine the clinical disciplines for which the topic of each qualifying article was best suited. We estimated that about 1300 raters would be needed to provide adequate discipline coverage and response times. This estimate was based on 40 articles per week passing critical appraisal criteria (the methods filter), an average of 3 disciplines assigned to each passing article, the desire to obtain at least 3 ratings per discipline per article, and the expectation that raters would want to rate only 1 to 2 articles per month (40 articles × 3 disciplines × 3 ratings × 52 weeks ÷ 12 months ÷ 1-2 articles per rater per month = 780-1560 raters). Raters were recruited according to our target user groups, including primary care physicians and selected specialties (Table 1). Raters were recruited by e-mail, Web site notices, newsletters, and editorials in evidence-based journals.5 Selection criteria included: (1) physician in independent clinical practice in 1 or more of the disciplines covered by the service; (2) reliable Internet access; and (3) willingness to respond quickly (within 2 working days) to requests for ratings. Incentives for raters included access to pre-appraised articles in their discipline(s), continuing medical education credits (15 minutes per article rated), comparison of their ratings with other ratings for the same article, and access to the highest-rated articles in their discipline. Raters registered the maximum frequency with which they could be sent articles for rating.
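
Because this staffing estimate is plain arithmetic, a short Python check can reproduce it from the figures stated above (the variable names are ours):

```python
# Reproduce the published staffing arithmetic; every input comes from the text.
articles_per_week = 40          # articles passing the methods filter each week
disciplines_per_article = 3     # average disciplines assigned per passing article
ratings_per_discipline = 3      # minimum ratings wanted per discipline

print(articles_per_week * 52)   # ~2080 qualifying articles/year (text: 2000-3000)

requests_per_month = (articles_per_week * disciplines_per_article
                      * ratings_per_discipline * 52 / 12)
print(requests_per_month)       # 1560.0 rating requests per month

# Each rater accepts only 1 to 2 articles per month, so:
print(requests_per_month / 2, requests_per_month / 1)  # 780.0 1560.0 raters
```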

Table 1. McMaster Online Rating of Evidence (MORE) Raters by Discipline*
MORE System Overview

A system was required to handle the administrative functioning of the rating process. This led to the development of an Internet-based interface supporting the administrative functions of posting articles that had passed critical appraisal criteria, assigning articles to raters, collecting ratings and comments for each article, and transferring rated articles to the PLUS online delivery system. For each selected article, the citation, abstract, and Medical Subject Headings (MeSH) entered a database via an online request to PubMed for the MEDLINE record, using the article's unique PubMed identification number. The retrieved information was parsed into discrete data fields (eg, title, authors, abstract, MeSH).
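
The paper does not specify which PubMed interface was used at the time; the following is a minimal sketch of the same retrieve-and-parse step against today's NCBI E-utilities efetch endpoint, pulling the title, abstract, and MeSH terms out of the returned XML. The function name and example call are ours:

```python
import urllib.request
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_medline_record(pmid: str) -> dict:
    """Retrieve one MEDLINE record by PubMed ID and parse it into discrete fields."""
    url = f"{EFETCH}?db=pubmed&id={pmid}&retmode=xml"
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    article = root.find(".//MedlineCitation/Article")  # None if the PMID is unknown
    return {
        "title": article.findtext("ArticleTitle"),
        "abstract": " ".join(t.text or "" for t in article.findall(".//AbstractText")),
        "mesh": [d.text for d in root.findall(".//MeshHeading/DescriptorName")],
    }

# Example (any valid PubMed ID works here; this one is illustrative).
record = fetch_medline_record("12345678")
print(record["title"], record["mesh"][:3])
```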

The weekly operational cycle of moving articles from the literature selection phase (the hand search described previously) into the MORE system and then readying them for release via PLUS appears in Figure 1. For articles meeting criteria during the literature selection phase, research staff from the Health Information Research Unit assigned topic indexing and clinical discipline codes. All entries were checked by a physician (R.B.H.) and then added to the MORE database. E-mail notices of articles to be rated were sent to 3 or 4 practicing physicians (depending on the number of available raters) in each pertinent clinical discipline. Thus, an article on diabetes would be rated separately by 3 to 4 family physicians, 3 to 4 internists, and 3 to 4 endocrinologists. Raters logged into the MORE system directly by clicking on the hyperlinked article title in their notification e-mail. Each rater's identification and password were embedded in the link; if these were determined valid after decryption, the rater was logged onto the MORE system.
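
The assignment step could be sketched as follows; the per-discipline rater pools, the round-robin policy, and all names here are assumptions for illustration (the paper does not detail the scheduler, and the real system also honored each rater's registered maximum rating frequency):

```python
import itertools

# Hypothetical rater pools per discipline (identifiers are placeholders).
raters_by_discipline = {
    "family medicine": ["fm1", "fm2", "fm3", "fm4", "fm5"],
    "internal medicine": ["im1", "im2", "im3", "im4"],
    "endocrinology": ["endo1", "endo2", "endo3"],
}
# A round-robin cursor over each pool spreads the rating load evenly.
cycles = {d: itertools.cycle(pool) for d, pool in raters_by_discipline.items()}

def assign_raters(disciplines: list[str], per_discipline: int = 3) -> dict[str, list[str]]:
    """Pick 3 to 4 raters from each pertinent discipline for one article."""
    return {d: [next(cycles[d]) for _ in range(per_discipline)] for d in disciplines}

# An article on diabetes goes separately to family physicians, internists,
# and endocrinologists, as described in the text.
print(assign_raters(["family medicine", "internal medicine", "endocrinology"]))
```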

Figure 1. Quality Appraisal, Peer Rating, and Literature Distribution Processes

Abbreviations: ID, identification; CAP, critical appraisal process; MORE, McMaster Online Rating of Evidence; PLUS, McMaster Premium Literature Service.

Raters reviewed their assigned full-text articles online and rated them on two 7-point scales (highest score, 7) (Figure 2). The first scale, relevance, concerned the extent to which the article was pertinent to practice in the rater's clinical discipline. If relevance was rated at least 3, the rater completed a second 7-point scale on the extent to which the article's content represented news, that is, something that clinicians in the rater's discipline were unlikely to know (which we labeled newsworthiness). Optionally, raters also provided up to 1000 characters of free-text comments. Raters' comments that appeared libelous, blasphemous, or thoughtless in the opinion of the MORE staff were not used. Any information that might identify individual raters was excluded.

Figure 2. Online Rating Scales Used by Raters

Abbreviations: GP, general practice; FP, family practice. Reproduced with permission from McMaster University.

When at least 3 raters in a clinical discipline had rated an article, the ratings were averaged within that discipline. Articles with mean ratings of 3 of 7 or more on each scale for at least 1 discipline were transferred to the PLUS database, and an e-mail alert was sent to PLUS participants registered in that discipline if the ratings met the cutoff set by the user (the default was 5 for each scale). Other disciplines' ratings were added as separate lines to the article record as they became available, and registrants in those disciplines were alerted accordingly. Articles with mean scores of less than 3 of 7 on either scale for all disciplines assigned to the article were transferred to the quarantine database. Articles in the quarantine database had therefore passed the methods filter but were judged by practicing clinicians to be neither relevant nor informative, and were not made available to users (Figure 1).
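
These routing rules amount to a small decision procedure. Here is a minimal sketch under the thresholds stated above; the function, the input shape (discipline mapped to a list of relevance/newsworthiness pairs), and the example ratings are our own illustration, not the production code:

```python
from statistics import mean

def route_article(ratings: dict[str, list[tuple[int, int]]], user_cutoff: int = 5):
    """ratings maps discipline -> list of (relevance, newsworthiness) pairs.

    Returns ("plus", [disciplines whose means meet the user's cutoff])
    or ("quarantine", []) when no discipline reaches mean >= 3 on both scales."""
    released, alert = False, []
    for discipline, pairs in ratings.items():
        if len(pairs) < 3:
            continue  # wait until at least 3 raters have responded
        rel = mean(r for r, _ in pairs)
        news = mean(n for _, n in pairs)
        if rel >= 3 and news >= 3:
            released = True  # at least 1 discipline releases the article to PLUS
            if rel >= user_cutoff and news >= user_cutoff:
                alert.append(discipline)  # e-mail alert for this discipline
    return ("plus", alert) if released else ("quarantine", [])

# Example: strong ratings for cardiology, weak for general practice.
print(route_article({
    "cardiology": [(6, 6), (7, 5), (6, 6)],
    "general practice": [(2, 2), (3, 1), (2, 3)],
}))  # ('plus', ['cardiology'])
```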

PLUS Administrative Overview

Once in the PLUS database, article records were manually assigned links to their full text in licensed online journals or open-access journals, as available. Links to MedlinePlus pages on drug prescribing and information for patients were created as available. A link to the PubMed abstract was derived from the article's PubMed identification number and stored in the article record. Access was then provided for registered physician users via PLUS (a free-access version of this service is now available as BMJUpdates+ [http://www.bmjupdates.com]).

Updates and corrections to each PLUS article's citation, abstract, and indexing data occurred automatically every 2 weeks by repeating the request to PubMed. The PLUS system's reporting function identified aberrant data entry with respect to full-text links and patient and drug prescribing links. When alerts were sent to users from the PLUS system, all links displayed in the Web interface were tested for functionality and checked for proper pairing with the citation data.
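
The article does not describe the link checker itself; what follows is a minimal sketch of one plausible approach, verifying stored links with HTTP HEAD requests before alerts go out. The record shape and URLs are illustrative only:

```python
import urllib.request

def link_is_alive(url: str, timeout: float = 10.0) -> bool:
    """HEAD request: a cheap check that a stored link still resolves.

    Note: some servers reject HEAD; a production checker would fall back to GET."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:  # covers URLError, timeouts, connection failures
        return False

# Hypothetical article record with stored links, checked before alerting.
record = {"pmid": "12345678",
          "links": ["https://pubmed.ncbi.nlm.nih.gov/12345678/"]}
dead = [u for u in record["links"] if not link_is_alive(u)]
print("broken links:", dead)
```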

Organizing and Evaluating PLUS

MORE was established to feed PLUS, an online service that provided physicians with discipline-specific alerts and a search engine for the database of accumulated alerts. An ongoing evaluation of McMaster PLUS is measuring use of, and satisfaction with, the service in a randomized clinical trial launched in April 2004. PLUS was created as an adjunct to the Northern Ontario Virtual Library digital library service and was provided free to physician trial participants in Northern Ontario. The Northern Ontario Virtual Library included licensed collections from Ovid Technologies Inc (selected full-text journals, texts, and Evidence-Based Medicine Reviews) and Stat!Ref (a collection of medical texts). The majority of PLUS article citations included links to their Ovid full-text articles or to open-access full-text articles via their publisher. Beginning in November 2003, physicians in Northern Ontario were recruited to participate in the trial through Northern Ontario Virtual Library Web site notices and e-mail, fax, and mail invitations. Following online consent, participants registered according to their clinical discipline(s) and, beginning in April 2004, were alerted by e-mail to the titles of articles rated through the MORE system for these disciplines. Participants specified the frequency of alerts and adjusted the rating cutoffs according to their preferences, within the range of 4 to 7 on each of the 2 scales. They could register for as many disciplines as they wished, including general practice/family medicine with primary care special interests (anesthesia, mental health, obstetrics, and emergency medicine); internal medicine and its individual subspecialties; obstetrics; gynecology; general surgery; pediatrics; and psychiatry. All user accesses were monitored: linking from article titles in e-mail alerts or from search engine hits to the article record in the PLUS database (which included the article's full citation, mean ratings by clinical discipline, and links), and whether the user subsequently accessed links to raters' comments, the PubMed abstract, or, if available, the full-text article, drug information and prescribing pages in MedlinePlus, or the ACP Journal Club synopsis.

Evaluation of PLUS was reviewed and approved by the McMaster University Hamilton Health Sciences Research Ethics Board.

MORE Recruitment and Ratings

Rater recruitment for MORE began in 2002, exceeded 1300 in October 2003, and reached 2139 by October 2005. A total of 1042 raters (48.7%) indicated a primary care discipline (at least 1 of general practice, family practice, emergency medicine, or primary care internal medicine), but many indicated multiple interests (Table 1). Just over 57% resided in North America and 25% in the European Union, with all continents represented (Table 2). Approximately 74% of individuals registered at the end of October 2005 had rated an article, and the number of articles rated ranged from 1 to 98. The number of raters for an article ranged from 3 (for an article with only 1 discipline) to 43 (for an article with multiple disciplines). Raters who did not respond to repeated requests to rate the literature were retired (286 by October 2005; 13.4% of the rating pool). To date, we have not compared raters who responded to rating requests with raters who did not.

Table 2. Geographic Distribution of Raters

For articles published in 2004 (the most recent complete year), ratings (Figure 2) had a mean score of 5.42 for relevance to practice and 4.65 for useful new information (newsworthiness) (Table 3). On average, primary care physicians rated articles lower than specialists for both relevance and new information (Table 3). Systematic review articles were rated higher than original articles for relevance, but lower for useful new information (newsworthiness) (Table 3).

Table 3. Mean Clinical Ratings From the McMaster Online Rating of Evidence (MORE) for Articles Passing Critical Appraisal Criteria During 2004

The distribution of ratings for articles for 2004 (Table 4) showed that 24.9% of articles that met critical appraisal criteria were rated at 6 or above for relevance, and fewer (6.2%) were rated at 6 or above for useful new information (newsworthiness). Thus, a primary care user of the PLUS system who had set the cutoffs for sending alerts at 7 of 7 for relevance and 6 of 7 for useful new information (newsworthiness) would have been sent e-mail notification of 40 articles published in 2004 (0.14% of the 28 737 articles with abstracts screened that year). Just 40 (1.8%) of the articles passing critical appraisal for 2004 received mean ratings of less than 3, the cutoff for the quarantine database.

Table 4. Distribution of Ratings of Articles Published in 2004
PLUS Usage

From November 2003 to April 2004, 98 physicians working in Northern Ontario were recruited to receive alerts from PLUS. From April 2004 through September 2005, alerts were sent to at least 1 participant for 2698 newly processed articles, for a total of 49 353 alerts. Thus, alerts about a given article were sent to about 18 participants on average, according to their registered clinical discipline(s). Participants were alerted by e-mail about 6 to 1931 articles per month, a mean of 28.7 per month (95% CI, 24.7-32.7). During the 18-month trial period, 88 participants (90%) used the service, logging into PLUS 3136 times. Of these logins, 1784 (56.9%) were to seek information and 1352 (43.1%) were in response to article alerts. Just over 30% of participants used the service at least twice per month. Participants responded to a mean of 14.7% of e-mail alerts (95% CI, 10.6%-18.9%), with a range per participant of 0.1% to 85.4%. During this period, participants accessed 3137 article records (full citation, ratings, comments, and links), 2018 article abstracts, 742 full-text articles, and 317 rater comments.

A chilling view of the Internet era warns that high-quality information will inevitably be overwhelmed by low-quality information because low-quality information costs less to produce and the marginal cost of distributing information on the Internet is close to zero.6 Recent studies show that several barriers block the acquisition and use of high-quality information in health care. Physicians' daily routines afford them little time to search and review new medical information.7-9 Physicians admit that they lack the skills required to navigate literature databases9-11 and to properly appraise medical literature.8,11,12

To try to help overcome these problems, we created a system that centralizes the basic processes of critical appraisal and clinical relevance rating and then channels articles to physicians according to their practice disciplines. For 28 737 original and review articles published in over 110 journals in 2004, we identified 2237 (7.8%) articles for clinical attention, with just a small percentage of these at the highest level of interest for a given clinical discipline. The clinical peer rating system revealed differences in the perceptions of primary care physicians and specialists concerning articles, including articles of potential interest to both groups, as well as differences in ratings for original articles compared with review articles. Of the 98 physicians who registered for PLUS, 10 (10%) did not make use of it. Some of the nonresponse was likely due to technical problems including firewalls, junk mail filters, slow dial-up Internet access, and plain text e-mail defaults (the alert links required that e-mail clients permit Hypertext Markup Language formatting), but we have not formally assessed this. Participants who responded to e-mail alerts (which contained only the titles of articles) clicked through to article records in the PLUS database with ratings only about 15% of the time, and clicked through to article abstracts much more often than to full-text articles. Article ratings for both primary care physicians and specialists were correlated with access rates (data not shown), suggesting that this information influenced the decisions of users concerning which articles to review.

The PLUS system includes rated articles from the beginning of 2003, has ongoing input of articles, and various user interfaces are currently being evaluated in a series of randomized trials. The scope of PLUS is expanding to include more clinical disciplines, including selected subspecialties of surgery and nursing disciplines. We continue to recruit raters for all disciplines. MORE ratings are used to help select content for evidence-based journals, an evidence-based textbook of internal medicine (http://pier.acponline.org/index.html), and 2 free alerting services (www.bmjupdates.com and https://profreg.medscape.com/px/newsletter.do). The direct cost of developing the MORE rating system, including software and recruiting, was approximately Can $180 000 and was supported by grants (see the Funding/Support section); the ongoing annual cost of about Can $80 000 is fully supported by the sponsors of the evidence-based products the system now feeds. Raters' time is contributed voluntarily but rewarded in kind (see the Raters section). Our ultimate aim is to provide high-quality, clinically rated articles from MORE for all clinicians interested in keeping up to date with important new health care knowledge and for all health care publications purporting to be evidence-based. By sharing the costs across many publications, we may be able to compete with lower-quality information on the Internet, challenging Coiera's bleak vision.6

Several limitations must be considered for our demonstration project to date. Our preliminary evaluation was pitched at the level of feasibility and usability testing, and the data are observational. The raters were volunteers, and their ratings may not be representative of those of most physicians. Articles being rated were not masked for authors and journal, and all corresponding editorials were sent along with the article being rated. The potential effect of this on the ratings has not been assessed; however, we thought that this information should be taken into account by raters. Our rating scales may not be optimal in their reliability, in their ability to discriminate among the interests of various clinical disciplines, or in conveying useful information to users. These matters warrant further study, but our preliminary data suggest that the scales do discriminate among disciplines and study types and influence users' decisions about which articles to read.

The PLUS service was piloted in Northern Ontario because of the availability of project funds: the Ministry of Health and Long-term Care in Ontario wanted to build information access for physicians in Northern Ontario to enhance opportunities for continuing education and to prepare for a new medical school. This was auspicious for our purposes because the region is served by a digital health library, the Northern Ontario Virtual Library, which permitted linking of alerts to their full-text articles. We do not know how adaptable this experience is to other locations, including settings with higher population density. Whether participants learned and applied the information they reviewed is the subject of ongoing evaluation of the PLUS system; no assessment of effects on clinician performance or patient outcomes is currently under way. Thus, little basis exists at present for judging whether the differences in article ratings by discipline, or by type of article (original vs review), are important, especially in terms of the overall objective of improving practitioner performance and patient care.

Nevertheless, clinical ratings may be useful in facilitating continuing education activities by identifying the most sound, relevant, and sought-after new studies and reviews, alerting clinicians to articles describing critically appraised studies deemed of relevance and interest by their peers.

Corresponding Author: R. Brian Haynes, MD, PhD, Department of Clinical Epidemiology and Biostatistics, McMaster University Faculty of Health Sciences, 1200 Main St W, Room 2C10b, Hamilton, Ontario L8N 3Z5, Canada (bhaynes@mcmaster.ca).

Author Contributions: Dr Haynes had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Haynes, Cotoi, Holland, Wilczynski, McKibbon.

Acquisition of data: Haynes, Cotoi, Holland, Walters, Jedraszewski, McKinlay, Parrish, McKibbon.

Analysis and interpretation of data: Haynes, McKinlay.

Drafting of the manuscript: Haynes, Cotoi, Holland, Wilczynski, Parrish.

Critical revision of the manuscript for important intellectual content: Haynes, Walters, Wilczynski, Jedraszewski, McKinlay, McKibbon.

Statistical analysis: Haynes, McKinlay.

Obtained funding: Haynes.

Administrative, technical, or material support: Cotoi, Holland, Walters, Jedraszewski, Parrish, McKibbon.

Study supervision: Haynes, Wilczynski.

Financial Disclosures: Dr Haynes did not receive remuneration for his roles in developing MORE or PLUS. He is the Editor of ACP Journal Club, for which he receives remuneration from the American College of Physicians, and Co-Editor of Evidence-Based Medicine, for which he receives remuneration from the BMJ Publishing Group. He is project director for BMJUpdates+ and Medscape Best Evidence Alerts, for which he also receives remuneration. All of the coauthors have been employed in part by contracts between the funders or the in-kind supporters of the project, as named above, and McMaster University, which holds the intellectual property rights for MORE and PLUS.

Funding/Support: Ontario Ministry of Health and Long-term Care (funds supported the development of the MORE rating system and PLUS system); Canadian Institutes of Health Research (funds supported the evaluation of the rating and literature service). In-kind supporters: the American College of Physicians and the BMJ Publishing Group (both supported the initial critical appraisal process); Ovid Technologies Inc (provided means for the investigators to create links from PLUS to Ovid's full-text journal articles and Evidence-Based Medicine Reviews). PLUS was created as an adjunct to the Northern Ontario Virtual Library digital library service and was provided free to physician trial participants in Northern Ontario, with development funding from the Ontario Ministry of Health and Long-term Care and trial funding from the Canadian Institutes of Health Research.

Role of the Sponsors: Funders and supporters had no role in the design, execution, analysis, or interpretation of the study, or preparation or approval of the manuscript.

Previous Presentation: Presented at the International Congress on Peer Review and Biomedical Publication; September 16, 2005; Chicago, Ill.

Members of the Premium Literature Service Project: Dr Haynes (principal investigator), Ms Wilczynski (project manager), Mr Cotoi (chief software developer), Ms Holland (informatics specialist), Mr McKinlay (data analyst), Ms Jedraszewski (database manager), Ms Walters (research assistant), Dr McKibbon (information scientist), C. Walker-Dilks (research associate), A. Eady (research associate), S. Marks (research associate), S. Werre (research associate), M. Kastner (research associate), S. Wong (research associate), N. Bordignon (editorial assistant), L. Gunderman (editorial assistant), N. Brown (information technology support), and R. Parrish (software developer).

Collaborators: Northwestern Ontario Medical Program, Northeastern Ontario Medical Education Corporation, and Northern Ontario Virtual Library (all now integrated into the Northern Ontario School of Medicine).

References

1. Haynes RB. Loose connections between peer reviewed clinical journals and clinical practice. Ann Intern Med. 1990;113:724-728.
2. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142:260-273.
3. Wilczynski NL, McKibbon KA, Haynes RB. Enhancing retrieval of best evidence for health care from bibliographic databases: calibration of the hand search of the literature. Medinfo. 2001;10:390-393.
4. McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med. 2004;2:33.
5. Haynes RB. A win-win proposition: help us to build better evidence-based information systems for you. ACP Journal Club. 2004;140:A13.
6. Coiera E. Information economics and the Internet. J Am Med Inform Assoc. 2000;7:215-221.
7. McColl A, Smith H, White P, Field J. General practitioners' perceptions of the route to evidence based medicine: a questionnaire survey. BMJ. 1998;316:361-365.
8. Young JM, Ward JE. Evidence-based medicine in general practice: beliefs and barriers among Australian GPs. J Eval Clin Pract. 2001;7:201-210.
9. Ely JW, Osheroff JA, Ebell MH, et al. Obstacles to answering doctors' questions about patient care with evidence: qualitative study. BMJ. 2002;324:1-7.
10. Wilson P, Droogan J, Glanville J, Watt I, Hardman G. Access to the evidence base from general practice: a survey of general practice staff in the Northern and Yorkshire Region. Qual Health Care. 2001;10:83-89.
11. McAlister FA, Graham I, Karr GW, Laupacis A. Evidence-based medicine and the practicing clinician. J Gen Intern Med. 1999;14:236-242.
12. Putnam W, Twohig PL, Burge FI, Jackson LA, Cox JL. A qualitative study of evidence in primary care: what the practitioners are saying. CMAJ. 2002;166:1525-1530.