Commentary

The Journal Impact Factor Denominator: Defining Citable (Counted) Items

Marie E. McVeigh, MS; Stephen J. Mann

Author Affiliations: Bibliographic Policy, Journal Citation Reports, Thomson Reuters, Philadelphia, Pennsylvania.


JAMA. 2009;302(10):1107-1109. doi:10.1001/jama.2009.1301.

Over its 30-year history, the journal impact factor has been the subject of much discussion and debate.1 From its first release in 1975, bibliometricians and library scientists discussed its value and its vagaries. In the last decade, discussion has shifted to the way in which impact factor data are used. In an environment eager for objective measures of productivity, relevance, and research value, the impact factor has been applied broadly and indiscriminately.2,3 The impact factor has gone from being a measure of a journal's citation influence in the broader literature to a surrogate that assesses the scholarly value of work published in that journal. These misappropriated metrics have been used to assess individual researchers, institutions, and departments.

Such evaluation practices put pressure on scholars to publish in high-impact journals. To continue attracting manuscripts and authors, editors and publishers are pressured to ensure a high impact factor for their journal.4 From a discussion of uses and merits and improvements and expansions of the impact factor, interest has turned to using publishing and editorial practices to influence, alter, and even manipulate the impact factor.5

The impact factor is calculated by considering all citations in 1 year to a journal's content published in the prior 2 years, divided by the number of substantive, scholarly items published in that journal in those same 2 years.1 The impact factor is a ratio of journal citations to the number of substantive, scholarly articles published by that journal, but not the mathematical average of the citations to all of the journal's content. Although the calculation treats each journal as a whole rather than as a simple count of all published items, the numerator and denominator of the impact factor ratio have different definitions. The numerator is the number of all citations to the journal from 1 year to the previous 2 years. Any citation to any item represents an acknowledgment of the journal. However, the denominator is the number of substantive, scholarly articles most likely to be cited.
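Stated as a formula, with the 2002 impact factor discussed below as the example, the calculation described above is:

\[
\text{IF}_{2002} \;=\; \frac{\text{citations received in 2002 by all items the journal published in 2000 and 2001}}{\text{citable items (articles, reviews, proceedings papers) the journal published in 2000 and 2001}}
\]

The asymmetry noted above is visible in this ratio: the numerator counts citations to items of any type, whereas the denominator counts only the citable items.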

The items counted in the denominator of the impact factor are identifiable in the Web of Science database by having the index field document type set as “Article,” “Review,” or “Proceedings Paper” (a specialized subset of the article document type). These document types identify the scholarly contribution of the journal to the literature and are counted as “citable items” in the denominator of the impact factor. A journal accepted for coverage in the Thomson Reuters citation database6 is reviewed by experts who consider the bibliographic and bibliometric characteristics of all item types published by that journal, in the context of the journal's other materials, the subject, and the database as a whole. This journal-specific analysis identifies the journal sections, subsections, or both that contain materials likely to be considered scholarly works, and which therefore have the potential to be cited. Among the features considered during this review process are the following (an illustrative sketch of how such features might be combined appears after the list).

Descriptive article titles: Most primary research and scholarly items highlight their content with highly descriptive titles denoting the subject of the study. This is also true of some, but not all, reviews.

Named author(s) with author address: This information typically indicates that authors are affiliated with an academic, research, or other institution as opposed to paid contributors (eg, a staff writer or reporter).

Abstract or summary: Abstracts are another common feature of research and scholarly items; they summarize the content and conclusions presented in the body of the item and guide readers regarding the relevance of the item to their needs.

Article length: Items shorter than half a page are unlikely to make a significant scholarly contribution; lengthy works are more suggestive of scholarly effort.

Data content: Whether it presents primary research data or data collected and summarized from 1 or more prior works, an item is more likely to be of interest to scholars in the field if a figure or table was produced specifically for it.

Cited references: An item that cites prior or concurrent works extensively typically does so to provide evidentiary support for the author's hypothesis or a novel synthesis of previous works. The presence of cited references indicates a level of participation in the academic discourse and scholarly development of the subject.

Cited reference density: This is defined as the number of cited references per text page of the item. A high density of references suggests a significant work of scholarship.
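As a purely illustrative sketch, the features above can be imagined as signals combined by a simple rule. This is not how the Journal Citation Reports review actually works (it is carried out by experts at the level of journal sections, not by an algorithm), and the field names, thresholds, and the 4-of-6 rule below are hypothetical:

```python
# Hypothetical illustration only: combines the features described in the text
# into a rule-of-thumb classifier. Thresholds and weights are invented; the
# real review is an expert, journal-section-level judgment, not an algorithm.
from dataclasses import dataclass

@dataclass
class Item:
    has_descriptive_title: bool
    has_named_authors_with_address: bool
    has_abstract: bool
    pages: float                      # length in printed pages
    has_original_figure_or_table: bool
    cited_references: int

def looks_citable(item: Item) -> bool:
    """Treat an item as 'citable' when most of the scholarly signals are present."""
    if item.pages < 0.5:              # items under half a page rarely qualify
        return False
    signals = [
        item.has_descriptive_title,
        item.has_named_authors_with_address,
        item.has_abstract,
        item.has_original_figure_or_table,
        item.cited_references > 0,                   # participates in the literature
        item.cited_references / item.pages >= 5,     # reference density per page
    ]
    return sum(signals) >= 4          # no single feature is definitive

# A 2-page report with an abstract, a figure, and 30 references counts as citable;
# a quarter-page news note with no references does not.
print(looks_citable(Item(True, True, True, 2.0, True, 30)))     # True
print(looks_citable(Item(False, False, False, 0.25, False, 0))) # False
```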

While no one of these characteristics is definitive, taken in concert, these features have proven accurate in identifying items that are later cited and that contribute to the impact factor. As an example of the effectiveness of the current approach used to identify citable items, we identified 1.1 million items published in 2000, and indexed for the Science Citation Index-Expanded and/or the Social Sciences Citation Index (ie, the 2 citation databases with journals that appear in the Journal Citation Reports6 listings). For each item, we recorded the number of citations accrued in each year from 2000 through 2007. The 8-year time window allowed the capture of a large proportion of total lifetime citation of the items. This count of citations included scholarly citations and citations in academic correspondence, such as a reply to a letter to the editor that cites the original letter. A total of 793 395 (72.8%) of the items from 2000 were indexed as articles or reviews, and these citable items accounted for 97.6% of all citations accrued in the 8 years since their publication (Table).
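The tally behind these percentages can be sketched as follows, assuming a flat export with one row per item; the file name and column names (doc_type, cites_2000 through cites_2007) are hypothetical stand-ins, not the actual Web of Science schema:

```python
# Minimal sketch of the 2000-2007 citation tally described in the text.
# Assumes a hypothetical CSV export: one row per item published in 2000,
# with its indexed document type and citation counts for each year.
import csv
from collections import defaultdict

CITABLE_TYPES = {"Article", "Review", "Proceedings Paper"}

n_items = 0
n_citable = 0
citations = defaultdict(int)   # citations 2000-2007, split by citable vs other

with open("items_2000.csv", newline="") as f:
    for row in csv.DictReader(f):
        n_items += 1
        group = "citable" if row["doc_type"] in CITABLE_TYPES else "other"
        if group == "citable":
            n_citable += 1
        citations[group] += sum(int(row[f"cites_{year}"]) for year in range(2000, 2008))

total = citations["citable"] + citations["other"]
print(f"citable items: {n_citable / n_items:.1%} of items, "
      f"{citations['citable'] / total:.1%} of all citations accrued 2000-2007")
```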

Table. Distribution of Citable (Countable) Content and Citations for Items Published in 2000 and 2005

Journals with a simple array of published content, such as the Annual Review of Medicine, show that 100% of accrued citations are to citable items. Other journals, such as the BMJ, JAMA, Lancet, and the New England Journal of Medicine (NEJM), have more complex content but show a similar pattern, with the majority of citations accruing to the citable (counted) items. The diverse content increases the relative proportion of citations to items indexed as not citable (not counted). Although these journals have a large number of discrete sections and vary in presentation and size, the proportionate distribution of citations to citable items is consistent.

To focus on the contribution of published items to the citation count for the impact factor, only citations from works published in 2002 to items published in 2000 were considered; these citations represent one part of the numerator of the 2002 impact factor, and the citable items of 2000 form the corresponding part of its denominator. Although fewer items contributed citations to the 2002 impact factor (42.1% of published items) than were cited within 8 years after publication (65.9% of published items), the concentration of citations to citable items was even more marked; 97.2% of citations in 2002 were to items indexed as citable items in 2000 (Table).

Because many changes to these journals have taken place in recent years, we considered another example using all items published in 2005. Four of 5 journals decreased both the total number of items and the number of citable items published from 2000 to 2005. One journal (NEJM) increased both total items and citable items, but had a net decrease in the percentage of citable items relative to all content (Table). For all journals, the percentage of citations in 2007 (the impact factor numerator) to the citable items published in 2005 (the impact factor denominator) was consistent with the values observed in the 2002 impact factor data. At the macro level of all indexed items and all journals, the percentage of citations associated with citable items did not change from 2002 to 2007. For both analysis years, citable items collected 97.2% of the total citation activity in the second year following their publication. Four of the 5 journals showed an overall increase in the percentage of impact factor citations associated with citable items from 2002 to 2007, whereas 1 journal (NEJM) showed a slight decrease (from 88.8% to 87.4%), but increased the total number of citations (from 9921 to 14 172).

The demonstration that a journal's citation impact is determined primarily by citations to items defined as citable is consistent with a report by Golubic et al,7 which also found that most of the materials cited in an impact factor numerator were designated as articles or reviews. Golubic et al7 also noted that the majority of items indexed as articles or reviews (ie, counted in the denominator of the impact factor) contained original data, defined as “previously unpublished research results presented in numerical or graphical form.” Although Golubic et al7 argued that the varying contribution of not citable materials to the impact factor was problematic, the impact factor, as a journal-level measure, accommodates the notion that not citable items also contribute to a journal's influence in the literature, because citations to any item reflect use of the journal. Because the impact factor then adjusts this citation count according to the number of items most likely to attract citations because of their scholarly content, it is not a mathematical average of citations per item1,3 but rather reflects the journal's participation in and influence on the scholarly literature.

The impact factor has had success and utility as a journal metric8 due to its concentration on a simple calculation based on data that are fully visible in the Web of Science.6 These examples based on citable (counted) items indexed in 2000 and 2005 suggest that the current approach for identification of citable items in the impact factor denominator is accurate and consistent.

Corresponding Author: Marie E. McVeigh, MS, Thomson Reuters, 3501 Market St, Philadelphia, PA 19104 (marie.mcveigh@thomsonreuters.com).

Financial Disclosures: Ms McVeigh is an employee of Thomson Reuters, the publisher of the Journal Citation Reports. At the time of the preparation of the manuscript, Stephen J. Mann was employed by Thomson Reuters as a research intern.

Additional Contributions: We thank James Testa, vice president, Editorial Development and Publisher Relations, Thomson Reuters, for helpful review and discussion of the manuscript. Mr Testa was not compensated financially for his contribution.

References

Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90-93.
Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):498-502.
Lawrence PA. The mismeasurement of science. Curr Biol. 2007;17(15):R583-R585.
Pringle J. Trends in the use of ISI Citation databases for evaluation. Learn Publ. 2008;21:85-91.
Falagas ME, Alexiou VG. The top-ten in journal impact factor manipulation. Arch Immunol Ther Exp (Warsz). 2008;56(4):223-226.
Golubic R, Rudes M, Kovacic N, Marusic M, Marusic A. Calculating impact factor: how bibliographic classification of journal items affects the impact factor of large and small journals. Sci Eng Ethics. 2008;14(1):41-49.
Bensman SJ. Garfield and the impact factor. Annu Rev Inform Sci Tech. 2007;41:93-155.
