March 2020, Volume XXXIII, No 12
Health Care Quality Reporting
Re-evaluating “performance” measurement
Measuring and reporting in health care has gone through three phases corresponding roughly to the 1990s, 2000s, and 2010s. During the 1990s, policymakers claimed “report cards” on the quality of clinics, hospitals, and insurance companies should be published so that “consumers” could avoid the bad actors and patronize the good ones. The doomed hospital mortality report card (dubbed the “hospital death list”), published for a few years in the early 1990s by the former Health Care Financing Administration (now CMS), and the useless report card on Minnesota insurance companies published by the Minnesota Health Data Institute in 1995, are examples.
In the early 2000s, by which time it was obvious report cards had accomplished little because “consumers” didn’t use them, Minnesota and federal policymakers decided that if report cards were not going to shift market share from the bad actors to the good, then payers (insurance companies, government programs, and self-insured employers) should punish and reward doctors and hospitals directly with bonuses and financial penalties based on report card scores. This new use of report cards was dubbed “pay for performance” (P4P) circa 2003. By the late 2000s, quality scores on report cards were being arbitrarily smooshed together with cost scores to create composite scores that allegedly measured “value.” With a few exceptions (Consumer Reports, CMS’s website), insurance companies were now off the hook; in the new millennium, P4P would apply only to “providers” (doctors, hospitals, nursing homes, etc.).
By the late 2010s, the proliferation of P4P and “value-based purchasing” schemes had created a backlash. Critics raised three prominent objections to the rising tide of “performance” reporting: 1) it is inaccurate, which unfairly punishes providers who treat sicker and poorer patients and, conversely, unfairly rewards providers who treat wealthier and healthier patients; 2) it imposes high costs on providers; and 3) it aggravates physician burnout.
Today, even former proponents of report cards and P4P are asking whether the costs exceed the benefits. They clearly do. There are multiple reasons for this, the single most important of which is that measurement is grossly inaccurate. Now is the time for Minnesota and federal policymakers to look back over the last three decades and re-evaluate “performance” reporting. Here in Minnesota, the Minnesota Department of Health (MDH) should lead the way.
Minnesota’s teachable moment
The 2017 law authorized MDH to review its Statewide Quality Reporting and Measurement System (SQRMS), a program authorized by legislation enacted in 2008 and implemented in 2010. The 2017 law was so vaguely worded that it gave MDH discretion to do what it thinks best. Here are the instructions for MDH, such as they are, in the 2017 law: “[D]evelop a measurement framework that identifies the most important elements for assessing the quality of care, articulates statewide quality improvement goals, ensures clinical relevance, fosters alignment with other measurement efforts, and defines the roles of stakeholders.” (Minnesota Laws 2017, Chapter 6, Article 4, Section 3). There was additional language requiring MDH to reduce the total number of measures, but that was it. The law offered no useful information on the problem the Legislature wanted solved, nor on the solution.
The 2008 law was equally vague. It instructed MDH to create a “standardized set” of quality measures for Minnesota “health care providers” (Minnesota Statutes, Section 62U.02). It also said the measures should be used to punish and reward providers, and that MDH should “risk adjust” provider rewards and penalties to reflect the health status of their “populations.” But that’s essentially all the Legislature said. Like the 2017 law, the 2008 law offered no definition of the problem nor any information on the solution, that is, on how MDH was supposed to measure accurately the quality of tens of thousands of services offered by Minnesota’s 143 hospitals and 25,000 doctors.
The reports of the two health care commissions published early in 2008 (the Health Care Transformation Task Force and the Legislative Commission on Health Care Access), both of which urged the Legislature to authorize systemwide quality and cost measurement, were equally vague and baffling. Neither commission identified the problem it wanted solved, and neither offered any details on its proposed solution. Rather than define the problem, the commissions offered sweeping complaints in the most abstract terms possible, such as, “The quality of health care is uneven...” (p. v, Transformation Task Force) and, “The current payment structure is episode driven....” (p. 57, Legislative Commission). Rather than describe solutions, the commissions offered exhortations such as, “We need to come together as a community to agree on what constitutes high quality care” (p. 5, Transformation Task Force) and aspirations such as “Individuals should be empowered with information on the quality and cost of care....” (p. 57, Legislative Commission).
This information vacuum would make it difficult for any agency to select “performance” measures for SQRMS (MDH currently uses 29), evaluate those measures, and create the “framework” the Legislature asked for in 2017. MDH acknowledged this difficulty in their February 2019 interim report to the Legislature (https://tinyurl.com/mp-framework). “The Quality Reporting System has not been paired with an explicit quality improvement strategy or related goals,” they wrote. “As a result, we at MDH do not have firm criteria for adding and removing measures, and we do not have a good sense for whether measures are impactful....” (p. 7). If MDH doesn’t know what SQRMS’ goals are or whether SQRMS is working, you won’t be surprised to learn that the “stakeholders” MDH has invited to help create the “framework” appear to be equally clueless. In the same report, MDH stated that the “stakeholders” they have interviewed “agree that there needs to be a clear sense of why a Minnesota-specific measurement system is ... needed....” Obviously, they lack that “sense” now.
MDH is in this quandary because several commissions and several sessions of the Legislature have engaged in very sloppy policymaking. Those policymakers recommended solutions to problems they defined in only the crudest and most abstract terms; their “solutions” consisted of evidence-free aspirations for “performance measurement” rather than detailed, evidence-based programs; and they dumped their evidence-free aspirations on MDH with the unrealistic expectation that MDH would somehow translate their vaguely articulated aspirations into a useful program. MDH should now do what the Legislature and multiple commissions have refused to do since 2008: squarely address the question, Why do we need SQRMS and programs like it? They should begin by clearly defining the problem they think a measurement scheme like SQRMS can solve, and then present a detailed, evidence-based description of a measurement scheme that will solve or at least ameliorate the alleged problem without making other problems worse. If MDH finds they cannot do that, they should say so.
Defining the problem: “Variation” is not a diagnosis
The 2008 commissions and the 2008 Legislature did an awful job of defining the problem they wanted MDH to address. The closest they came to defining a problem was their evidence-free assertion that “variation in quality” was the primary cause of high health care costs, and that the variation was due to factors under the control of doctors and hospitals. The 2008 Transformation Task Force report offered this example to illustrate the “variation” problem: “[T]he percentage of diabetics receiving optimal care ranges from 1% to 20% across Minnesota clinics.” (p. 4) The task force cited a 2007 report by MN Community Measurement (MNCM) for that statistic. MNCM’s report defined “optimal diabetes care” as care received by a diabetic patient who met all five of these criteria: their hemoglobin A1c was less than 7 percent (today the threshold is less than 8); their blood pressure was less than 130/80 mmHg (today it is less than 140/90); their LDL-cholesterol was less than 100 mg/dl (today the criterion is “taking a statin”); they were taking aspirin daily (if they were between ages 41 and 75); and they didn’t use tobacco.
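To make the arithmetic of such a composite concrete, here is a minimal sketch of the all-or-nothing logic, using the 2007 thresholds described above (the class, field names, and functions are illustrative inventions, not MNCM’s actual specification):

```python
from dataclasses import dataclass

@dataclass
class DiabeticPatient:
    a1c_percent: float    # hemoglobin A1c, in percent
    systolic_bp: int      # blood pressure, mmHg
    diastolic_bp: int
    ldl_mg_dl: float      # LDL-cholesterol, mg/dl
    daily_aspirin: bool
    age: int
    uses_tobacco: bool

def received_optimal_care(p: DiabeticPatient) -> bool:
    """All five 2007 criteria must hold; meeting four of five counts for nothing."""
    return (
        p.a1c_percent < 7.0                                # today: < 8
        and p.systolic_bp < 130 and p.diastolic_bp < 80    # today: < 140/90
        and p.ldl_mg_dl < 100                              # today: "taking a statin"
        and (p.daily_aspirin or not (41 <= p.age <= 75))   # aspirin required only at ages 41-75
        and not p.uses_tobacco
    )

def clinic_score(patients: list[DiabeticPatient]) -> float:
    """A clinic's rate is simply the share of its diabetics meeting all five criteria."""
    return sum(received_optimal_care(p) for p in patients) / len(patients)
```

Nothing in this arithmetic separates the physician’s contribution from the patient’s income, insurance design, or other circumstances; the score simply bundles them together.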
You don’t have to have a PhD in anything to know that these are not measures of “physician quality.” Those outcomes are the result of multiple factors, only one of which is physician expertise. Dozens of other factors outside physician control contribute to those outcomes, including patient income, literacy, willingness and ability to exercise, access to transportation, and whether their insurance requires high out-of-pocket payments for medications. MNCM’s “optimal diabetes care” measure is no more a quality measure than precinct-level crime rates are a measure of the quality of the police departments in those precincts. Crime rates reflect the impact of numerous factors outside the control of the police.
Accuracy: The cornerstone of the new “framework”
MDH should interpret the word “framework” in the 2017 law to mean a set of principles that guide the Legislature and other policymakers in all future decisions about how to improve the health of Minnesotans. Note I did not say “all decisions about how to improve quality.” Because so many factors that affect health are outside the control of the medical sector, it’s a huge mistake to assume that improving health and improving the quality of clinics and hospitals are synonymous.
The Legislature should recognize that the single most important principle of the new “framework” should be accuracy. The accuracy principle must take precedence over all other measurement criteria for the simple and obvious reason that feedback of any sort is useless if it is not accurate. In fact, inaccurate feedback can be worse than useless if it leads to harmful consequences, such as punishment of safety-net hospitals or addiction clinics. And yet one looks in vain for the word “accuracy” in nearly all legislation, commission reports, regulations, and commentary on measurement schemes.
It is very difficult to explain this casual attitude toward accuracy. Policymakers understand, presumably, that providers are not omnipotent, but they have convinced themselves it’s possible to “risk adjust” scores to account for factors providers have no control over. The new framework should explicitly state that providers are not omnipotent, that the vast majority of “performance” measures in use today are grossly inaccurate, and that it is either financially or technically impossible to make them more accurate. To anticipate the cries of outrage from “performance” measurement proponents, the new framework should also explicitly reject the folklore that Minnesota’s health care system can only be improved with “performance” reporting. It should lay out a more rational approach to improving the health of Minnesotans that abandons crude, static measurement at 30,000 feet and instead relies primarily on targeted solutions designed to address carefully defined problems.
The new framework could begin with a statement like this: “Policymakers should not assume that deficits in the health of Minnesotans are caused by factors controllable by health care professionals, but should instead do research on a problem-by-problem basis to determine the most likely causes of the problem.” The framework should urge policymakers to follow this decision tree: 1) define the health deficit precisely; 2) determine, through research, its most likely causes; 3) if those causes are substantially under the control of doctors and hospitals, consider interventions aimed at providers; and 4) if those causes lie outside provider control, address them directly rather than issuing report cards on providers.
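The same logic can be sketched in a few lines of code (the function and category names below are hypothetical, written only to make the branching explicit):

```python
from enum import Enum, auto

class LikelyCause(Enum):
    PROVIDER_CONTROLLED = auto()  # e.g., clinicians unaware of treatment guidelines
    EXTERNAL = auto()             # e.g., drug prices, patients' out-of-pocket costs

def choose_intervention(problem: str, cause: LikelyCause) -> str:
    """Walk the decision tree: research the cause first, then match the remedy to it."""
    if cause is LikelyCause.PROVIDER_CONTROLLED:
        # Only here is a provider-focused intervention even worth considering.
        return f"Consider a provider-focused intervention for: {problem}"
    # A cause outside provider control calls for a direct policy fix,
    # not annual report cards on clinics.
    return f"Address the external cause directly for: {problem}"

print(choose_intervention("too few diabetics with A1c below 8 percent", LikelyCause.EXTERNAL))
```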
Thus, to take an example of an issue currently in the news, if the Legislature decides it wants to increase the percentage of diabetics who have A1c levels below 8 percent, it should ask MDH first to determine whether doctors really don’t know anything about A1c levels, or whether the problem lies elsewhere, for example, with the high price of insulin and/or high out-of-pocket costs for insulin. If MDH determines the problem isn’t caused by physician stupidity, but is rather caused largely or in part by the high cost of insulin, a factor physicians can obviously do nothing about, the Legislature will know it should do something to make insulin more affordable. Conversely, the Legislature will know it should not ask MDH to issue annual report cards on the percentage of clinics’ diabetics who have their blood sugar levels under control.
Proponents of “performance” measurement will no doubt oppose the decision tree described above on the ground that it’s possible to adjust scores for factors providers have no control over. I will discuss this myth in a separate article next month.
Kip Sullivan, JD