The JAMA Forum

If You Can’t Measure Performance, Can You Improve It?

Robert A. Berenson, MD1
Author Affiliations
1Institute fellow at the Urban Institute. An internist who practiced for twenty years, he has served in various government positions, including Assistant Director of the White House Domestic Policy Staff under President Carter, Director of the Center for Health Plans and Providers in the Centers for Medicare & Medicaid Services in the Clinton Administration, and Vice Chair of the Medicare Payment Advisory Commission. He graduated from Brandeis University and received his MD from the Mount Sinai School of Medicine.
JAMA. 2016;315(7):645-646. doi:10.1001/jama.2016.0767.
Published online January 13, 2016

“If you can’t measure it, you can’t manage it” is an often-quoted admonition commonly attributed to the late W. Edwards Deming, a leader in the field of quality improvement. Some well-respected health policy experts have adopted as a truism a popular variation of the Deming quote—“if something cannot be measured, it cannot be improved”—and point to the recent enactment of the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) as a confirmation of “the broadening societal embrace” of this concept (http://bit.ly/1Fvg96E).

Figure: Robert A. Berenson, MD (Urban Institute)

The problem is that Deming actually wrote, “It is wrong to suppose that if you can’t measure it, you can’t manage it—a costly myth” (my emphasis added)—the exact opposite (http://bit.ly/1Ps40PZ). Deming consistently cautioned against requiring measurement to guide management decisions, observing that the most important data needed to manage often are unknown and unknowable.

Critics of policy makers’ infatuation with reliance on performance measures to support public reporting and “pay for performance” rewards or penalties to clinicians and health care facilities offer a quotation attributed to their own heavy hitter, Albert Einstein: “Not everything that can be counted counts, and not everything that counts can be counted.” If you Google this quote, you will find dozens of images of the learned professor linked with this quotation (http://bit.ly/1PmfMBO), but there is a problem here also: sociologist William Bruce Cameron apparently penned this in 1963 (http://bit.ly/1JnsBt3), years after Einstein’s death.

So much for “evidence-based policymaking”—we can’t even get quotations right. No wonder there is such disagreement over the effect of Obamacare.

MANY ROUTES TO IMPROVEMENT

The requirement for measurement as essential to management and improvement is a fallacy, not a self-evident truth, and it is not supported by Deming, other management experts, or common sense. There are many routes to improvement, such as doing things better based on experience and example, as well as on evidence from research studies.

Surely public reporting of performance has changed medical culture for the better, leading to a growing acceptance that the quality of clinical practice does not depend on the unmeasurable “art of medicine.” Comparative public reporting using meaningful and accurate measures has led to quality improvements, as clinicians and hospitals reflect on their own comparative performance and seek to improve their public standing. Examples include improved hospital care for patients experiencing heart attacks (http://bit.ly/1nqdOUO) and improved renal dialysis (http://1.usa.gov/1SAOvvG). In most clinical areas, however, we lack readily available measures to use as valid benchmarks to assess performance.

Not deterred, however, last year a rarely bipartisan Congress passed the MACRA legislation. Its core element was repealing the unsustainable sustainable growth rate mechanism threatening huge payment cuts to physicians caring for Medicare patients. The law called for development of “value based” payment approaches that would pay for quality and cost outcomes, rather than just for the myriad services physicians provide or order, whether or not the services are needed or well performed. “Paying for value, not volume” has become the slogan du jour, itself assuming a mostly unchallenged position in health policy circles.

Now comes the hard part: actually achieving greater value, rather than fashioning an increasingly complex, intrusive, and likely doomed attempt to measure value.

After MACRA’s Merit-Based Incentive Payment System (MIPS) (http://1.usa.gov/1Nox2i8) is fully phased in early in the next decade, physicians caring for Medicare patients under MIPS stand to lose up to 9% of their Medicare payments or conceivably gain 27%, based on their performance on measures of quality, their use of health care resources, the extent to which they have implemented electronic health records, and their participation in quality improvement activities.

MIPS is an outgrowth of a decade of smaller pay-for-reporting and pay-for-performance programs. Realizing that physicians basically ignored the small rewards and penalties limited to 2% of Medicare physician payments, Congress raised the financial stakes enormously, making sure physicians pay attention—an approach that brings to mind the Catskills-era quip, “The food here is terrible, and the portions are too small.”

Improving physician performance on particularly significant health problems amenable to accurate measurement, such as controlling blood pressure in the millions of patients with inadequately controlled hypertension, would be a worthy application of a few measures. But in MACRA, Congress has a different purpose. Within a few years, MIPS will publish a performance scorecard for each physician participating in Medicare.

But a few random and often unreliable measures of performance can provide a highly misleading snapshot of any physician’s value (http://bit.ly/1cU6jtK). So it’s no surprise that only about half of physicians participate (http://go.cms.gov/1Ku2UC6).

MACRA’s bipartisan consensus included the House GOP Doctors Caucus, 17 of whose 18 members voted for legislation requiring the Centers for Medicare & Medicaid Services to rank the country’s physicians based on its calculation of their value. Having government rate physicians would be a step too far even if we had important and valid measures of physician performance.

A BAD IDEA?

Practical challenges aside, pay for performance for health professionals may simply be a bad idea. Behavioral economists find that tangible rewards can undermine motivation for tasks that are intrinsically interesting or rewarding. Furthermore, such rewards have their strongest negative impact when they are perceived as being large, controlling, contingent on very specific task performance (http://bit.ly/1OB5Lx9), or associated with surveillance, deadlines, or threats, as with MIPS (http://bit.ly/1qhAzql).

Another major problem with the current preoccupation with measurement as the central route to improvement is the assumption that if a quality problem isn’t being measured, it basically doesn’t exist. A prime example is diagnosis errors. Recently, an Institute of Medicine (IOM) committee, on which I was a member, issued Improving Diagnosis in Health Care, documenting serious errors of diagnosis in 5% to 15% of interactions with the health care system (http://bit.ly/23ikpAZ).

As the report emphasizes, we cannot now measure the accuracy of diagnoses, which means MIPS scores will not include performance on this core physician competency. Still, the IOM committee proposed numerous improvement strategies. These include development of programs providing immediate feedback to erring clinicians from patients and other health professionals when a serious misdiagnosis occurs (making errors memorable if not measurable), greater attention in medical education to the cognitive bias that commonly clouds clinicians’ judgment, improved systems to ensure that abnormal test results are promptly communicated to patients and diagnostic team members, and giving patients direct access to their medical records so they can introduce relevant missing information and correct the misinformation that is common in clinical records.

These and other IOM recommendations represent better practices that might dramatically improve diagnostic accuracy, relying not on performance measures but on adopting better work processes and focused education. Measures would help, but substantial progress can be made regardless.

The overarching concern is that under MIPS and similar programs, physicians will focus on the money while their intrinsic motivation to make accurate, timely diagnoses as a core responsibility will be crowded out. If so, the worthwhile recommendations in the IOM report will likely sit on the shelf, gathering dust, thanks to the misguided supposition that “if you can’t measure it, you can’t manage it.”

ARTICLE INFORMATION

Corresponding Author: Robert A. Berenson, MD (RBerenson@urban.org).

Published online: January 13, 2016, at http://newsatjama.jama.com/category/the-jama-forum/.

Disclaimer: Each entry in The JAMA Forum expresses the opinions of the author but does not necessarily reflect the views or opinions of JAMA, the editorial staff, or the American Medical Association.

Additional Information: Information about The JAMA Forum is available at http://newsatjama.jama.com/about/. Information about disclosures of potential conflicts of interest may be found at http://newsatjama.jama.com/jama-forum-disclosures/.
