Original Contribution

Do Quality Improvement Organizations Improve the Quality of Hospital Care for Medicare Beneficiaries?

Claire Snyder, PhD; Gerard Anderson, PhD

Author Affiliations: Johns Hopkins Bloomberg School of Public Health, Baltimore, Md.

JAMA. 2005;293(23):2900-2907. doi:10.1001/jama.293.23.2900.

Context Quality improvement organizations (QIOs) are charged with improving the quality of medical care for Medicare beneficiaries.

Objective To explore whether the quality of hospital care for Medicare beneficiaries improves more in hospitals that voluntarily participate with Medicare's QIOs than in nonparticipating hospitals.

Design, Setting, and Data Data from 4 QIOs charged with improving the quality of care in 5 states (Maryland, Nevada, New York, Utah, and Washington) and the District of Columbia were used. Hospitals participate with the QIOs on quality improvement on a voluntary basis. A retrospective study was conducted comparing improvement in the quality of care of patients in hospitals that actively participated with the QIOs vs hospitals that did not. The medical records of approximately 750 Medicare beneficiaries per state in each of 5 clinical areas (atrial fibrillation, acute myocardial infarction, heart failure, pneumonia, and stroke) were abstracted at baseline (1998) and follow-up (2000-2001).

Main Outcome Measure Fifteen quality indicators associated with improved outcomes in the prevention or treatment of the 5 clinical areas were used as quality of care measures. These 15 indicators were specifically targeted by the QIOs for quality improvement during the study period.

Results Hospitals that voluntarily participate with the QIOs are more likely to be larger than nonparticipating hospitals (P<.05). At baseline, there were statistically significant (P<.05) differences between participating and nonparticipating hospitals on 5 of 15 quality indicators, with participating hospitals performing better on 3 of 5. There was no statistically significant difference in change from baseline to follow-up between participating and nonparticipating hospitals on 14 of 15 quality indicators. The one exception was that participating hospitals improved more on the pneumonia immunization indicator than nonparticipating hospitals (P = .005).

Conclusion Hospitals that participate with the QIO program are not more likely to show improvement on quality indicators than hospitals that do not participate.

Since its inception in 1965, the Medicare program has been concerned that Medicare beneficiaries receive appropriate and efficiently provided medical care. Initially, professional standards review organizations and, later, peer review organizations focused on identifying quality problems in the treatment of Medicare beneficiaries.1 Evaluations criticized these organizations for overly emphasizing cost containment, creating adversarial relationships with providers, and conducting quality reviews of unclear validity and effectiveness.1-4 A 1990 Institute of Medicine (IOM) report noted these and other problems with the peer review organizations, including their unproven effect on improving the quality of care for Medicare beneficiaries.5

In 1992, using the IOM report for guidance, the Medicare program shifted its quality efforts to partnering with hospitals to improve the quality of care overall in addition to the regulatory case reviews.6,7 Medicare currently contracts with quality improvement organizations (QIOs) and allocates approximately $200 million annually for quality improvement.8 Quality improvement organizations work with hospitals on quality improvement in a variety of ways, including providing educational materials, using data collection and feedback to track performance on quality indicators, and assisting hospitals in implementing systems changes (eg, standing orders, clinical pathways). In dollar terms, the QIOs are the federal government’s largest initiative for improving the quality of care.9

Despite this significant financial investment and the questionable impact of previous quality assurance efforts, the QIOs' effectiveness at improving the quality of care has not been rigorously evaluated. Jencks et al8,10 have published 2 articles tracking improvements in the quality of care for Medicare beneficiaries using quality process indicators adopted by Medicare and the QIOs. Their studies suggested widespread improvement on the indicators; however, they provided little evidence that the improvement can be specifically attributed to the QIOs because no concurrent control group was included in the analyses. Many other quality initiatives (eg, the National Committee for Quality Assurance, the Joint Commission on Accreditation of Healthcare Organizations, and internal hospital quality improvement efforts) were operating simultaneously. Analysis using only a historical control makes it difficult to attribute improvements specifically to the QIOs' activities, particularly since the QIOs were focusing on many of the same indicators as these other organizations.

This study addresses this methodological weakness by using a concurrent control group. Because hospitals participate with the QIOs on quality improvement on a voluntary basis, it is possible to compare the improvement in hospitals that voluntarily partnered with the QIOs with the improvement in hospitals that did not voluntarily partner with the QIOs.

Objectives

Our study had 2 objectives: (1) to explore characteristics of hospitals that voluntarily participate with the QIOs vs those that do not and (2) to determine whether hospitals voluntarily participating with the QIOs improve the quality of care for Medicare beneficiaries more than nonparticipating hospitals.

Design

This was a retrospective comparative study that assessed performance of hospitals on 15 quality of care indicators using data from a cross-sectional sample of Medicare beneficiary medical records abstracted in 1998 (baseline) and a separate sample of medical records abstracted in 2000-2001 (follow-up).8,10 These 15 indicators were the focus of the QIOs’ improvement efforts in the inpatient setting during the study period. The improvement in the performance on these 15 quality indicators by hospitals that actively participated with the QIOs was compared with the improvement by hospitals that did not participate with the QIOs.

Medical records were assigned to the “participating” and “nonparticipating” groups based on information obtained from the QIOs. Using records kept by the QIOs and reported to Medicare during the study period regarding the QIOs’ collaborations with individual hospitals, the QIOs classified the hospitals using 4 nonmutually exclusive categories: (1) hospital did not participate with the QIO at all; (2) hospital expressed an interest in participating with the QIO but did not track performance using data or implement systems changes; (3) hospital used data collected by itself or by the QIO for quality performance tracking as a result of working with the QIO; and (4) hospital implemented systems changes as a result of working with the QIO.

While there is no single definition of “active” participation, based on consultations with internal experts at the QIOs, a hospital was considered as “actively” participating with the QIOs in the primary analysis if it either (1) used data collected by itself or by the QIO for quality performance tracking as a result of working with the QIO or (2) implemented systems changes (eg, standing orders, critical pathways, chart reminders, and the like) as a result of working with the QIO. A hospital was classified as (1) “participating” if it performed either function at any point during the study period and (2) “not actively participating” if it performed neither of these activities. To test the sensitivity of the study findings to alternative definitions of “active” participation, 4 different definitions of hospital participation were analyzed: (1) participation defined as hospitals that only used data to track quality performance; (2) participation defined as hospitals that only implemented systems changes; (3) participation defined as hospitals that both used data to track quality performance and implemented systems changes; and (4) nonparticipation defined as hospitals that did not participate with the QIO at all (ie, excludes hospitals that expressed an interest in participating from the control group).
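
For illustration only, the primary definition of "active" participation can be expressed as a simple rule. The sketch below uses hypothetical flags; in the study itself, this classification was performed by the QIOs from their own collaboration records.

```python
# A minimal sketch of the primary participation definition; the flags
# are hypothetical stand-ins for the QIOs' collaboration records.
def classify_participation(used_data: bool, implemented_changes: bool) -> str:
    """A hospital is 'participating' if, at any point during the study
    period, it used data for quality performance tracking or implemented
    systems changes as a result of working with the QIO."""
    if used_data or implemented_changes:
        return "participating"
    return "not actively participating"
```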

Hospital participation was classified for each of the 5 clinical areas separately because a hospital may choose to work with the QIO on improving care in one clinical area but not another. Because the atrial fibrillation and stroke indicators were combined in previous analyses,8,10 hospital participation for these 2 clinical areas was assigned jointly. Data on hospital bed size using 3 categories (1-100, 101-250, ≥251 beds) and profit status using 2 categories (not-for-profit/government, for-profit) were also provided by the QIOs.

Effectiveness Measures

Quality of care was assessed using 15 dichotomous process indicators of quality care associated with improved outcomes for treatment and prevention in 5 clinical areas prevalent among Medicare beneficiaries: atrial fibrillation, acute myocardial infarction, heart failure, pneumonia, and stroke (Table 1). Two other inpatient indicators (time to thrombolytic therapy and time to angioplasty in acute myocardial infarction) are not included in this analysis or in the analyses by Jencks et al8,10 because they did not have sufficient sample sizes due to the limited number of eligible cases. Also, the heart failure indicator is a combination of 2 separate indicators (evaluation of left ventricular ejection fraction [LVEF]; angiotensin-converting enzyme inhibitor prescribed at discharge for patients with LVEF <40%). This combined indicator has also been used in other analyses.11 (For more information on the development and testing of these indicators, see Jencks et al.8)
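
As an illustration of how a combined dichotomous indicator of this kind can be scored, the sketch below encodes one plausible combination rule. The exact combined definition used in the study is given by Jencks et al,8 and the parameters here are hypothetical chart-abstraction flags.

```python
# One plausible scoring rule for the combined heart failure indicator,
# for illustration only (see Jencks et al, reference 8, for the actual
# definition). All parameters are hypothetical chart-abstraction flags.
def heart_failure_indicator_met(lvef_evaluated: bool,
                                lvef_below_40: bool,
                                acei_at_discharge: bool) -> bool:
    if not lvef_evaluated:
        return False              # LVEF must be evaluated in all cases
    if lvef_below_40:
        return acei_at_discharge  # ACE inhibitor required if LVEF <40%
    return True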

Table 1. Quality Indicators by Clinical Area and Sample Sizes
Data Sources and Variables

All QIOs were invited to participate in this study. Four QIOs with responsibility for 5 states (Maryland, New York, Nevada, Utah, and Washington) and the District of Columbia were both willing to participate in the study and able to provide the data required to conduct the analyses. The QIOs provided exactly the same data used by Jencks et al.8,10 Specifically, approximately 700 to 800 medical records per state per clinical area were abstracted at baseline and follow-up as part of the Jencks et al studies8,10 to provide state-level measures of performance on the quality indicators. The medical records selected for abstraction were systematically sampled from a random starting point after sorting by age, race, sex, and hospital.8 The baseline sample of records was collected in 1998 (prior to the start of the QIOs’ contract cycle that began in 1999), and the follow-up sample of records was collected in 2000-2001 (toward the end of the QIOs’ contract cycle that ended in 2002) for the studies by Jencks et al.8,10 These medical records were abstracted by 2 Medicare contractors.
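
The general technique of systematic sampling from a random starting point can be sketched as follows. The record field names are hypothetical; the actual sampling was performed by Medicare contractors for the Jencks et al studies.

```python
import random

# Sketch of systematic sampling from a random start, as described in
# the text: sort the records, then take every k-th record beginning
# at a randomly chosen offset. Field names are hypothetical.
def systematic_sample(records, n):
    assert len(records) >= n
    ordered = sorted(records, key=lambda r: (r["age"], r["race"],
                                             r["sex"], r["hospital"]))
    k = len(ordered) // n        # sampling interval
    start = random.randrange(k)  # random starting point in [0, k)
    return ordered[start::k][:n]
```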

For each record, Jencks et al8,10 used algorithms to determine whether patients were eligible for a given quality indicator based on guidelines and contraindications and whether eligible patients received the care outlined by the quality indicator. Table 1 reports both the total number of records abstracted for each clinical area and the number of patients eligible for each of the 15 quality indicators. The data set includes only Medicare beneficiaries enrolled in the traditional fee-for-service plans (approximately 85% of beneficiaries during this time period) because the quality of Medicare managed care is tracked separately.

Data on patient age (continuous), sex, and race (white vs nonwhite) were provided by the QIOs. These variables were included because of their association with the quality of care delivered. The race variable was derived from Social Security Administration files for which persons self-identify their race using predefined categories. For the purposes of this study, race was collapsed into 2 categories to maintain patient confidentiality. Overall, 82% of the sample was classified as white and 18% was classified as nonwhite.

Statistical Analyses

The analysis of the first question, to determine whether hospital characteristics (bed size, profit status) are associated with participation with the QIOs, used χ2 tests. This analysis was performed at the hospital level for each clinical area separately, with each unique hospital identifier from baseline representing a single observation. The bed size and profit status of participating hospitals were compared with nonparticipating hospitals. Data from all 5 states and the District of Columbia were combined, and then analyses were conducted on a state-by-state basis. Also, the baseline and follow-up performance on the 15 quality indicators of participating and nonparticipating hospitals in the data set combining all states and the District of Columbia were compared using χ2 tests in unadjusted analyses and using logistic regression adjusting for both hospital and patient characteristics.
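
For readers unfamiliar with the test, a hospital-level χ2 comparison of this kind can be sketched in a few lines. The counts below are invented for illustration (the real distributions appear in Table 2), and the authors ran the actual analysis in SAS.

```python
from scipy.stats import chi2_contingency

# Hypothetical 3x2 contingency table: rows are bed-size categories
# (1-100, 101-250, >=251); columns are counts of participating vs
# nonparticipating hospitals. Counts are invented for illustration.
table = [[40, 60],
         [55, 35],
         [70, 20]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.3f}")
```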

The analysis of the second question, whether hospitals participating with the QIOs improved the quality of care more than hospitals not participating with the QIOs, used logistic regression models. The analyses were conducted aggregating all medical records from participating hospitals into one group and all medical records from nonparticipating hospitals into the comparison group. Sample sizes were insufficient for reliable estimation of individual hospital performance. Separate models were constructed for each of the 15 dichotomous quality indicators using patient-level data. The outcome of performance of the quality indicator among eligible patients (yes/no) was modeled as a function of participation with the QIOs (yes/no), period (baseline/follow-up), and an interaction between participation and period. The coefficient for the interaction term in this model indicates whether participating hospitals improved more or less than nonparticipating hospitals from baseline to follow-up. These models were analyzed adjusting for patient age, sex, and race, and hospital bed size and profit status.
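
A minimal sketch of this model, using hypothetical variable names and synthetic data (the authors fit the actual models in SAS), is shown below; the participating:period interaction is the quantity of interest.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic patient-level data for one quality indicator; every value
# and variable name here is hypothetical.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "met": rng.integers(0, 2, n),            # indicator performed (1/0)
    "participating": rng.integers(0, 2, n),  # participating hospital (1/0)
    "period": rng.integers(0, 2, n),         # 0 = baseline, 1 = follow-up
    "age": rng.normal(75, 8, n),
    "female": rng.integers(0, 2, n),
    "white": rng.integers(0, 2, n),
    "bed_size": rng.choice(["1-100", "101-250", ">=251"], n),
    "for_profit": rng.integers(0, 2, n),
})

# 'participating * period' expands to both main effects plus their
# interaction; the interaction coefficient tests whether participating
# hospitals improved more than nonparticipating hospitals over time.
model = smf.logit("met ~ participating * period + age + female + white"
                  " + C(bed_size) + for_profit", data=df).fit()
print(model.summary())
```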

Generalized estimating equations12 were also tested because logistic regression assumes independence of observations, whereas these data sets could include more than 1 patient per hospital, leading to potential clustering. However, the generalized estimating equation analyses produced similar results and are, therefore, not reported.
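
Continuing the sketch above, a generalized estimating equation version that clusters records by hospital might look like this; the hospital_id column and the exchangeable correlation structure are illustrative assumptions.

```python
import statsmodels.api as sm

# Add a hypothetical hospital identifier to the synthetic data from the
# previous sketch and refit with GEE, clustering records on hospital.
df["hospital_id"] = rng.integers(0, 120, n)
gee = smf.gee("met ~ participating * period + age + female + white"
              " + C(bed_size) + for_profit",
              groups="hospital_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```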

The initial analysis was conducted combining the data from all 5 states and the District of Columbia; the same analyses were also conducted on a state-by-state basis. State-level analyses were considered exploratory due to small sample sizes.

Because this study used separate cross-sectional samples of medical records for baseline and follow-up, there may be some differences in which hospitals were sampled at the 2 time points (ie, a hospital may appear in the baseline sample but not the follow-up sample and vice versa). A sensitivity analysis was conducted using only hospitals included in both the baseline and follow-up data sets. Analysis of this subset reached similar study conclusions, probably because 98% of patient records were sampled from hospitals included in both the baseline and follow-up data sets. As a result, only the full data set results are presented here.

All tests were conducted at the P<.05 level of significance. This is a liberal threshold for this study because a .05 significance level would be expected to produce a statistically significant result in approximately 1 of 20 tests by chance alone, and 15 indicators were tested. SAS software, version 9.00 (SAS Institute Inc, Cary, NC) was used for all analyses.
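
To make the multiple-comparison concern concrete: under independence, 15 tests at α = .05 would be expected to yield about 0.75 false positives, and the chance of at least one false positive is roughly 54%. A two-line check of this arithmetic (illustrative only):

```python
alpha, m = 0.05, 15
print(f"Expected false positives: {alpha * m:.2f}")          # 0.75
print(f"P(>=1 false positive): {1 - (1 - alpha) ** m:.2f}")  # ~0.54
```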

The initial data collection did not require informed consent because the data were collected by the Medicare program for administrative purposes.8 Prior to these data being shared with the study authors, all identifying information was removed. The Committee on Human Research of the Johns Hopkins Bloomberg School of Public Health reviewed this study and determined that it qualified as exempt because the data sets used included no identifiable information.

Across the 5 clinical areas, between 56% and 69% of hospitals participated with the QIOs using the primary definition of active participation. There were statistically significant differences in the characteristics of hospitals that participated with the QIOs vs those that did not (Table 2). Across all clinical areas, nonparticipating hospitals were more likely to be smaller and for-profit. The findings related to differences in bed size persisted in state-by-state analyses, but the findings related to differences in profit status varied from state to state.

Table 2. Characteristics of Participating and Nonparticipating Hospitals at Baseline

There were also statistically significant differences in baseline performance on 5 of 15 quality indicators, with the participating hospitals performing better at baseline on 3 of the 5. At follow-up, there were statistically significant differences on 4 of 15 quality indicators, with participating hospitals performing better on all 4. In unadjusted analyses, participating hospitals improved on 13 of 15 indicators and nonparticipating hospitals improved on 12 of 15 indicators (Table 3).

Table 3. Performance at Baseline and Follow-up of Participating and Nonparticipating Hospitals

Participating hospitals showed statistically significantly greater improvement than nonparticipating hospitals, controlling for hospital and patient characteristics, on only 1 of 15 indicators (Table 4). The one indicator showing greater improvement in participating hospitals was patient screened for or given pneumococcal vaccine (P = .005). For the remaining 14 indicators, none of which reached statistical significance, the participating hospitals improved more on 8 and the nonparticipating hospitals improved more on 6. There was no trend, by either clinical area or baseline performance, in which indicators the participating hospitals improved more on.

Table 4. Odds Ratios of Improvement: Baseline to Follow-up in Participating vs Nonparticipating Hospitals on Quality Measures*

The sensitivity analyses that used alternative definitions of hospital participation had little impact on the findings. There were never more than 2 indicators statistically significantly different between groups. Some of these statistically significant differences showed greater improvement among participating hospitals and some showed greater improvement among nonparticipating hospitals.

The same pattern was found in the exploratory analyses conducted at the state level. In each of the 5 states and the District of Columbia, there were never more than 2 statistically significant differences by participation, and there was no consistent trend of one group improving more than the other. Participating hospitals improved more than nonparticipating hospitals on between 5 and 8 indicators but improved less on between 6 and 9 indicators (some indicators were not analyzable due to small sample sizes). The one exception is the state of Washington, where participating hospitals improved more than nonparticipating hospitals on 11 of 13 indicators, although none of the differences were statistically significant. Whether these differences represent a true effect of that specific QIO should be explored further, but this is not possible with the existing data.

The IOM is currently evaluating the QIO program, including operations, program evaluations, and whether other entities could perform the QIOs’ functions.13 The findings from this study do not support the hypothesis that the QIO program improves the quality of care for Medicare beneficiaries in the inpatient setting.

The QIO program was initiated because evaluations of previous Medicare quality efforts did not find clear evidence of their effectiveness.14,15 Evaluations of the QIO program suggest that it leads to improvement in the quality of care, but these evaluations have significant methodological flaws. One study found that providing participating hospitals with performance data was associated with improved care processes.16 However, this study focused on only a small percentage (4%) of quality improvement projects and had no control group. The Cooperative Cardiovascular Project assessed improvements in care for acute myocardial infarction following implementation of a demonstration project in 4 pilot states.17,18 Significant improvements were found in the pilot states, but evidence from the study suggests the nation as a whole also improved during this time. The one national study examined change across all states and all clinical areas and found improvement over time.8,10 It is important to note that this statewide improvement over time was what the QIOs were contracted to accomplish. However, because this national study used only historical controls to calculate improvement, it is difficult, if not impossible, to determine whether the improvement found was directly related to the QIOs.

The current study was undertaken to incorporate a control group but has some of its own limitations. This study focuses on inpatient care only and does not evaluate the QIOs’ efforts in the outpatient setting. Also, the baseline and follow-up medical records were not necessarily sampled from the same hospitals. However, 98% of patient records sampled were from hospitals included in both the baseline and follow-up data sets, and analysis of the subset of overlapping hospitals did not alter the study’s overall conclusions.

Medicare was unable to provide the national-level data, so individual QIOs had to be recruited for participation. Only 4 QIOs representing 5 states and the District of Columbia participated in this study. While these QIOs are from different regions of the country and the 4 QIOs represent a range of average performance on the quality measures based on the analyses by Jencks et al,10 care should be taken in generalizing these findings across all QIOs, especially given the high proportion of positive but statistically insignificant results from Washington State.

Defining hospital participation was one of the greatest challenges in this study; however, sensitivity analyses that varied the definition of participation did not significantly affect the findings. Also, the classification of hospital participation by the QIOs is subject to bias. While records were kept by the QIOs and reported to Medicare during the study period to track hospital participation, the validity of these data is difficult to evaluate retrospectively. Further, at that time, the QIOs had an incentive to attribute hospitals' quality improvement activities to their own efforts. This potential misclassification could lead to biased estimation of the QIOs' impact. Spillover effects of QIO interventions to nonparticipating hospitals and the conduct of some QIO interventions statewide could also contribute to biased estimates.

This study has a relatively short period between baseline (1998) and follow-up (2000-2001) for demonstrating change. The analysis by Jencks et al10 showed that change did occur over this time period, but some hospitals classified as participating with the QIOs may not have fully implemented their QIO-related quality improvement interventions at the time of follow-up data collection. Because the QIO contract period did not end until 2002, the follow-up data may not reflect the full extent of the QIO interventions.

While these data are several years old, they are from the most recently completed QIO contract cycle. The seventh contract cycle will end in 2005, after which evaluations of the QIOs' impact in the nursing home and outpatient settings using prospectively identified participants may become available.

Because hospitals participate with the QIOs on a voluntary basis and because QIOs specifically target certain hospitals for intervention, there may be important differences between participating and nonparticipating hospitals other than their participation that could affect their ability to improve care, thus leading to selection bias. Whether the differences in hospital characteristics and baseline performance found here reflect other important differences in participating vs nonparticipating hospitals that may have affected their ability to improve the quality of care requires further research. Hospital bed size and profit status, 2 important factors associated with the quality of care,19,20 were controlled for. Because hospitals had to be nonidentifiable in the data set, additional information on hospital characteristics could not be obtained.

While this study does not definitively answer the question of whether the QIOs improve the quality of care for Medicare beneficiaries, the findings suggest that the improvement demonstrated over time by Jencks et al8,10 using these inpatient quality indicators cannot be attributed to the QIOs. Additional efforts to assess and improve the QIOs’ effectiveness may be needed. The current IOM assessment of the QIO program provides an opportunity to evaluate the program further and recommend program modifications as needed.13

Corresponding Author: Claire Snyder, PhD, Johns Hopkins Bloomberg School of Public Health, 624 N Broadway, Sixth Floor, Baltimore, MD 21205-1901 (claire.snyder@alumni.duke.edu).

Author Contributions: Dr Snyder had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Snyder, Anderson.

Acquisition of data: Snyder, Anderson.

Analysis and interpretation of data: Snyder, Anderson.

Drafting of the manuscript: Snyder.

Critical revision of the manuscript for important intellectual content: Anderson.

Statistical analysis: Snyder.

Obtained funding: Snyder.

Study supervision: Anderson.

Financial Disclosures: None reported.

Funding/Support: This project was supported by grants 1 R36 HS014509 and T32 HS 00029 from the Agency for Healthcare Research and Quality.

Role of the Sponsor: The conclusions presented are solely those of the authors and do not represent those of Delmarva Foundation, Qualis Health, IPRO, HealthInsight, or the Centers for Medicare and Medicaid Services. However, these groups were provided 30 days to review the manuscript prior to its submission. The funding organization had no participation in the study other than supporting Dr Snyder’s doctoral education and dissertation research.

Acknowledgment: The authors acknowledge the assistance of Delmarva Foundation for Medical Care, Inc, Qualis Health, IPRO, HealthInsight, and the Centers for Medicare and Medicaid Services in providing data that made this research possible. Special thanks to Matthew E. Fitzgerald, DrPH, and Mariana Albert Lesher, MS, from Delmarva Foundation, Sharon Eloranta, MD, and Greg Baumgardner, MS, from Qualis Health, Anthony Shih, MD, MPH, and Ti-Kuang Lee, ScM, from IPRO, and Michael Silver, MPH, and Emily Sim, MStat, from HealthInsight. Finally, many thanks to dissertation committee members Leon Gordis, MD, DrPH, Giovanni Parmigiani, PhD, and Donald Steinwachs, PhD.

References

1. Sprague L. Contracting for Quality: Medicare's Quality Improvement Organizations. Washington, DC: George Washington University; June 3, 2002. NHPF Issue Brief No. 774.
2. Dans PE, Weiner JP, Otter SE. Peer review organizations: promises and potential pitfalls. N Engl J Med. 1985;313:1131-1137.
3. Smits HL. The PSRO in perspective. N Engl J Med. 1981;305:253-259.
4. Rubin HR, Rogers WH, Kahn KL, Rubenstein LV, Brook RH. Watching the doctor-watchers: how well do peer review organization methods detect hospital care quality problems? JAMA. 1992;267:2349-2354.
5. Institute of Medicine. Medicare: A Strategy for Quality Assurance. Washington, DC: National Academy Press; 1990.
6. Jencks SF, Wilensky GR. The Health Care Quality Improvement Initiative: a new approach to quality assurance in Medicare. JAMA. 1992;268:900-903.
7. Jencks SF. Changing health care practices in Medicare's Health Care Quality Improvement Program. Jt Comm J Qual Improv. 1995;21:343-347.
8. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284:1670-1676.
9. Institute of Medicine. Leadership by Example. Washington, DC: National Academy Press; 2002.
10. Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. JAMA. 2003;289:305-312.
11. Silver MP, Geis MS, Bateman KA. Improving health care systems performance: a human factors approach. Am J Med Qual. 2004;19:93-102.
12. Liang KY, Zeger SL. Regression analysis for correlated data. Annu Rev Public Health. 1993;14:43-68.
13. Institute of Medicine. Redesigning Health Insurance Benefits, Payment and Performance Improvement Programs. Available at: http://www.iom.edu/project.asp?id=19805. Accessed October 27, 2004.
14. Rubenstein LV, Kahn KL, Reinisch EJ, et al. Changes in quality of care for five diseases measured by implicit review, 1981 to 1986. JAMA. 1990;264:1974-1979.
15. Keeler EB, Kahn KL, Draper D, et al. Changes in sickness at admission following the introduction of the prospective payment system. JAMA. 1990;264:1962-1968.
16. Cleves MA, Weiner JP, Cohen W, et al. Assessing HCFA's Health Care Quality Improvement Program. Jt Comm J Qual Improv. 1997;23:550-560.
17. Ellerbeck EF, Jencks SF, Radford MJ, et al. Quality of care for Medicare patients with acute myocardial infarction: a four-state pilot study from the Cooperative Cardiovascular Project. JAMA. 1995;273:1509-1514.
18. Marciniak TA, Ellerbeck EF, Radford MJ, et al. Improving the quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA. 1998;279:1351-1357.
19. Luft HS, Hunt SS, Maerki SC. The volume-outcome relationship: practice-makes-perfect or selective-referral patterns? Health Serv Res. 1987;22:157-182.
20. Devereaux PJ, Choi PTL, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. CMAJ. 2002;166:1399-1406.

