Peer Review Congress

Can the Accuracy of Abstracts Be Improved by Providing Specific Instructions? A Randomized Controlled Trial

Roy M. Pitkin, MD; Mary Ann Branagan

From Obstetrics & Gynecology, Los Angeles, Calif. Ms Branagan is now with Chest, Northbrook, Ill.


JAMA. 1998;280(3):267-269. doi:10.1001/jama.280.3.267.

Context.— The most-read section of a research article is the abstract, and therefore it is especially important that the abstract be accurate.

Objective.— To test the hypothesis that providing authors with specific instructions about abstract accuracy will result in improved accuracy.

Design.— Randomized controlled trial of an educational intervention specifying 3 types of common defects in abstracts of articles that had been reviewed and were being returned to the authors with an invitation to revise.

Main Outcome Measure.— Proportion of abstracts containing 1 or more of the following defects: inconsistency in data between abstract and body of manuscript (text, tables, and figures), data or other information given in abstract but not in body, and/or conclusions not justified by information in the abstract.

Results.— Of 250 manuscripts randomized, 13 were never revised and 34 were lost to follow-up, leaving a final comparison between 89 in the intervention group and 114 in the control group. Abstracts were defective in 25 (28%) and 30 (26%) cases, respectively (P=.78). Among 55 defective abstracts, 28 (51%) had inconsistencies, 16 (29%) contained data not present in the body, 8 (15%) had both types of defects, and 3 (5%) contained unjustified conclusions.

Conclusions.— Defects in abstracts, particularly inconsistencies between abstract and body and the presentation of data in abstract but not in body, occur frequently. Specific instructions to authors who are revising their manuscripts are ineffective in lowering this rate. Journals should include in their editing processes specific and detailed attention to abstracts.


THE ABSTRACT is by far the most widely read part of a research article. Much of the time it will be the only part that is read. In view of its importance, the accuracy of information provided by the abstract is critical.1 The purpose of this study was to test the hypothesis that specific instructions about abstract accuracy, provided to authors when manuscripts are being revised, will reduce the number of defective abstracts.

The study was conducted in the main editorial office of Obstetrics & Gynecology, a monthly medical specialty journal. Annual submissions total 1600 to 1700, and the acceptance rate is 25% to 29%. All submissions undergo outside review, and all potentially acceptable papers are screened by a statistician. This journal uses a 4-part structured abstract for research reports (objective, methods, results, and conclusion).

The study population consisted of manuscripts reporting original research that were returned to the authors after review with an invitation to revise between August 12, 1994, and December 5, 1995. For purposes of this study, when an eligible manuscript was to be sent back to the corresponding author, a clerk either did or did not include a printed sheet of instructions on preparation of the abstract.

The instruction sheet, which constituted the intervention, stated the importance of preparing an accurate abstract and identified 3 common types of errors: (1) inconsistency between data in the abstract and the body of the manuscript, (2) data or other information in the abstract not found in the body, and (3) conclusions in the abstract not based on information presented in the abstract. The author was urged to check his or her own abstract carefully for these defects.

The assignment to the intervention or control group was made from a computer-generated list of random numbers. To maintain allocation concealment, the clerk making the assignment was not otherwise involved in the study, and the assignment records were kept in her desk until the study was completed and the code was broken.
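
The article does not reproduce the randomization procedure itself. As a minimal illustrative sketch of how such a computer-generated 1:1 assignment list could be produced (the seed value and group labels below are hypothetical, not the journal's actual system):

# Minimal sketch of a simple 1:1 computer-generated assignment list.
# Seed and labels are hypothetical; shown only to illustrate the idea.
import random

random.seed(1994)  # hypothetical seed, used only for reproducibility
assignments = ["intervention" if random.random() < 0.5 else "control"
               for _ in range(250)]
print(assignments[:10])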

When a revision was received, it was evaluated in our routine manner, including checking for completeness and adequacy of responses by an editorial associate, who then copyedited the manuscript. As part of this process, the editorial associate scrutinized the abstract, verifying every datum or other piece of information in the abstract against the text, tables, and figures. Any inconsistencies between abstract and body, any data or information in the abstract but not in the body, and any conclusions not based on information in the abstract were identified. If the editor confirmed that the abstract contained 1 or more such discrepancies, making it necessary either to contact the author for resolution or to return the manuscript for additional revision, the abstract was considered defective for purposes of this study.

The editorial associate and the editor were masked with respect to assignment to intervention or control group, and there were no instances in which the author's response unmasked the assignment. At the conclusion of the study, the assignment code was broken, and the data were tabulated by a different staff member, the managing editor.

Experience indicated that at least a quarter of abstracts are defective in 1 or more of the ways described. Based on an assumed reduction from 25% to 10% with the intervention, and assuming an α of .05 and a power of .80, we projected a sample size of 100 in each arm. Because of anticipated losses, we enrolled 250 manuscripts. The proportions of defective abstracts in the instructed and uninstructed groups were compared by χ2 test.
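
As a rough check, the standard formula for comparing 2 independent proportions reproduces the projected figure of about 100 manuscripts per arm under the stated assumptions. The following Python sketch is an illustration, not the authors' actual calculation:

# Illustrative check of the design assumptions: defect rates of 25% vs 10%,
# two-sided alpha = .05, power = .80, standard two-proportion formula.
from math import sqrt
from scipy.stats import norm

p1, p2 = 0.25, 0.10
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)   # about 1.96
z_b = norm.ppf(power)           # about 0.84
p_bar = (p1 + p2) / 2
n = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
     z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
print(n)  # about 99.5, ie, roughly 100 manuscripts in each arm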

Of 250 manuscripts enrolled, 119 were assigned to receive the instruction sheet and 131 to the uninstructed or control group. Thirteen manuscripts were withdrawn (ie, a revision was not returned) and 34 were otherwise lost to follow-up analysis, leaving for final analysis 89 in the intervention group and 114 controls (Figure 1).

At least 1 of the defect types was identified in 25 instructed abstracts (28%; 95% confidence interval [CI], 19%-37%) and in 30 uninstructed abstracts (26%; 95% CI, 18%-34%), a nonsignificant difference (P=.78). With respect to the specific type of defect found, 28 of the 55 defective abstracts (51%; 95% CI, 38%-64%) had inconsistencies with the body of the manuscript, 16 (29%; 95% CI, 17%-41%) contained data or other information not found in the body, 8 (15%; 95% CI, 10%-20%) had both types of defects, and only 3 (5%; 95% CI, 3%-7%) contained inappropriate or unjustified conclusions. There were no apparent differences between the intervention and control groups with respect to the type of defect found.
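
The primary comparison can be reproduced approximately from the counts reported above (25 of 89 vs 30 of 114). The following Python sketch, an illustration rather than the authors' analysis, applies a χ2 test without continuity correction and Wald 95% confidence intervals for each proportion:

# Illustrative reanalysis of the reported counts.
from math import sqrt
from scipy.stats import chi2_contingency, norm

table = [[25, 89 - 25],    # instructed: defective, not defective
         [30, 114 - 30]]   # uninstructed
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 2))         # about 0.78, matching the reported P value

z = norm.ppf(0.975)
for defective, n in [(25, 89), (30, 114)]:
    prop = defective / n
    half = z * sqrt(prop * (1 - prop) / n)
    print(f"{prop:.0%} (95% CI, {prop - half:.0%}-{prop + half:.0%})")
    # roughly 28% (19%-37%) and 26% (18%-34%)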

The proportion of manuscripts withdrawn or otherwise lost to analysis was large, and the distribution between the 2 groups was unequal. Of the 13 withdrawals, 4 had been assigned to the instructed group and 9 to the uninstructed group. Of the 34 otherwise unavailable, 26 had been assigned to the intervention group and 8 to the control group. We recalculated the results with the assumption that none of the 30 withdrawn or unavailable manuscripts assigned to the intervention would have been returned with defective abstracts, and that 9 (53%) of the 17 unavailable or withdrawn manuscripts assigned to the control group would have had defective abstracts. Under these highly unlikely conditions, the number of defective abstracts would be 25 (21%; 95% CI, 14%-28%) of 119 in the instructed group and 39 (30%; 95% CI, 22%-38%) of 131 in the uninstructed group. This difference is still not statistically significant (χ2=2.12; P<.15).
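
A similar illustrative sketch checks this worst-case reanalysis: with a continuity-corrected χ2 test on 25 of 119 vs 39 of 131, the statistic comes out near the reported 2.12 and the P value near .15 (this is a check under our reading of the reported counts, not the authors' computation):

# Illustrative check of the worst-case reanalysis.
from scipy.stats import chi2_contingency

table = [[25, 119 - 25],   # instructed: defective, not defective
         [39, 131 - 39]]   # uninstructed
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(round(chi2, 2), round(p, 2))  # roughly 2.1 and 0.15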

The abstract of a research article, it could reasonably be argued, is the most important part of the article. It is far more likely to be read than any other section of the report. The ubiquitous availability and widespread use of automated literature search mechanisms, which provide an (often truncated) abstract, have only increased this likelihood.

Given the importance of the abstract, there has been surprisingly little research into its accuracy. Narine and colleagues2 analyzed the quality of abstracts of original research reports published in 1989 in the Canadian Medical Association Journal, using an instrument developed specifically for the study. A number of deficiencies were identified, but none related specifically either to the consistency of data between the abstract and the body or to the basis of the data presented in the abstract. Roberts and associates3 examined the effects of peer review and editing on the readability of articles from the Annals of Internal Medicine; they found evidence of a modest increase in the readability of both text and abstract, but accuracy was not addressed. Goodman and associates4 compared the effects of peer review and editing on manuscript quality in the same journal; only 1 of 34 items involved the abstract, and it was a very general assessment.

The major development involving abstracts during the last decade has been the introduction of structured abstracts.5 Although there have been objections to the structured format, based on length and aesthetic concerns, it has been adopted widely in one form or another, and there seems to be general acceptance that it is more informative than the unstructured variety. Some evidence exists that both quality6 and understanding7 may be improved by structured abstracts. However, there is little reason to suspect that requiring a structure will lessen the types of discrepancies and omissions we assessed.

The present study was designed to test the effectiveness of an educational intervention against the types of defects found in abstracts of papers undergoing revision. We found that the proportion of abstracts with at least 1 of the 3 defects was 26% to 28% and, disappointingly, was not affected by the simple intervention tested. The most common defect, present alone or in combination in 17% of all manuscripts and 65% of defective ones, was inconsistency between abstract and body.

To determine whether the types of defects we identified are regularly recognized by journals and dealt with during the copyediting process, much as errors in grammar and syntax are, we did a small study analyzing published abstracts in 4 journals: American Journal of Obstetrics and Gynecology (July 1996 issue), Pediatrics (October 1996 issue), JAMA (July 26, August 2, and August 9, 1995, issues), and The New England Journal of Medicine (August 29, September 5, and September 12, 1996, issues). The results of this analysis of published articles are summarized in Table 1. Surprisingly, the proportion of published abstracts that we found defective was as high as or higher than what we observed in abstracts that had been reviewed and revised but were not yet in final copyedited form. This was a small sample that was not scientifically derived, but the findings suggest that journals do not, as part of their regular copyediting procedures, scrutinize abstracts for these types of problems.

Table 1. Abstract Defects in Published Articles

We conclude that inconsistencies in data between abstract and body and reporting of data and other information solely in the abstract are relatively common and that a simple educational intervention directed to the author is ineffective in reducing that frequency. Until effective interventions are devised, journals and publishers should incorporate into their copyediting procedures the practice of detailed and specific verification of all data and other information in the abstract.


References

1. Pitkin RM. The importance of the abstract. Obstet Gynecol. 1987;70:267.
2. Narine L, Yee DS, Einarson TR, Ilersich AL. Quality of abstracts of original research articles in CMAJ in 1989. CMAJ. 1991;144:449-453.
3. Roberts JC, Fletcher RH, Fletcher SW. Effects of peer review and editing on the readability of articles published in Annals of Internal Medicine. JAMA. 1994;272:119-121.
4. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med. 1994;121:11-21.
5. Ad Hoc Working Group for Critical Appraisal of the Medical Literature. A proposal for more informative abstracts of clinical articles. Ann Intern Med. 1987;106:598-604.
6. Taddio A, Pain T, Fassos FF, Boon H, Ilersich AL, Einarson TR. Quality of unstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal, and the Journal of the American Medical Association. CMAJ. 1994;150:1611-1615.
7. Hartley J, Benjamin M. An evaluation of structured abstracts in journals published by the British Psychological Society. Br J Educ Psychol. In press.