Is MIPS really doing what it is supposed to do? Research suggests that it is not.
How well does Medicare's Merit-based Incentive Payment System (MIPS) measure the quality of medical care physicians deliver? According to a recent study, not very well.
Introduced in 2017 to replace three prior quality measurement programs, MIPS aims to improve patient care by financially rewarding or penalizing physicians based on their performance on particular “process” and “outcome” metrics in four key areas: quality, cost, improvement activities, and promoting interoperability.
The six metrics that participating physicians choose to report on must include one outcome indicator, such as a hospital admission for a particular disease or condition. Currently, MIPS is the biggest value-based payment program in the country.
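A final MIPS score is a weighted composite of the four performance categories described above. The sketch below illustrates the general idea only; the weights shown are hypothetical placeholders, since the actual category weights are set by CMS and change from year to year.

```python
# Illustrative sketch of a weighted composite score across the four MIPS
# performance categories. The weights are HYPOTHETICAL placeholders --
# actual MIPS category weights are set by CMS and vary by program year.

WEIGHTS = {
    "quality": 0.45,
    "cost": 0.15,
    "improvement_activities": 0.15,
    "promoting_interoperability": 0.25,
}

def composite_score(category_scores):
    """Weighted sum of per-category scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

example = {
    "quality": 80.0,
    "cost": 60.0,
    "improvement_activities": 100.0,
    "promoting_interoperability": 70.0,
}
print(composite_score(example))  # 0.45*80 + 0.15*60 + 0.15*100 + 0.25*70 = 77.5
```

The key point is simply that a single headline number is built from heterogeneous inputs, which is part of why it can diverge from performance on any individual measure.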
For the study, researchers analyzed Medicare claims and administrative data for 3.4 million patients who saw roughly 80,000 primary care physicians in 2019. They compared the physicians’ overall MIPS scores with their performance on five process measures, including breast cancer screening, tobacco screening, and diabetic eye exams, and on six outcome measures, including ED visits and hospitalizations.
The findings showed no consistent relationship between performance on these measures and final MIPS scores. For instance, physicians with low MIPS scores performed meaningfully worse than high-scoring physicians on three of the five process measures examined, but somewhat better on the other two.
On the outcome measures, low-scoring physicians performed significantly worse than high-scoring physicians on all-cause hospitalizations per 1,000 patients, but significantly better on ED visits per 1,000 patients. Likewise, 21% of physicians with high MIPS scores had composite outcomes in the worst quintile, while 19% of physicians with low MIPS scores performed in the top quintile for composite outcomes.
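The quintile comparison above can be made concrete with a minimal sketch. The function and toy data below are illustrative only, not the study's actual methodology: they show how a physician's composite outcome rate might be assigned to a quintile among peers, with lower adverse-outcome rates assumed to be better.

```python
# Hypothetical sketch of assigning composite-outcome quintiles, as in the
# comparison described above. Data and names are illustrative, not from
# the study.

def quintile(value, peer_values):
    """Return the quintile (1 = best .. 5 = worst) of `value` among
    `peer_values`, assuming lower composite-outcome rates are better."""
    rank = sum(v < value for v in peer_values)  # count of strictly better peers
    return min(rank * 5 // len(peer_values) + 1, 5)

# Toy composite outcome rates (e.g., adverse events per 1,000 patients)
rates = [10, 12, 15, 18, 22, 25, 30, 34, 40, 45]

print(quintile(10, rates))  # lowest rate -> quintile 1 (best)
print(quintile(45, rates))  # highest rate -> quintile 5 (worst)
```

With grouping like this, one can then cross-tabulate quintile membership against high versus low MIPS scores, which is how a figure like “21% of high scorers landed in the worst outcome quintile” is produced.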
The findings suggest that the MIPS program’s accuracy in identifying high- versus low-performing providers is really no better than chance.
The authors offer several interpretations of these findings. Because physicians are free to select which metrics they report, meaningful comparisons are difficult. Many program metrics, as other research has shown, are invalid or of dubious validity and thus may not be linked to better outcomes. And high scores may simply reflect a practice’s capacity for data collection, analysis, and reporting rather than higher-quality medical care.
They note that the last interpretation is supported by the finding that physicians with low MIPS scores were more likely to work in small, independent practices, even though their clinical outcomes were frequently comparable to those of physicians in large, system-affiliated practices with high MIPS scores.
The study was published in JAMA on December 6. https://jamanetwork.com/journals/jama/article-abstract/2799153