Do Health Systems Respond to the Quality of Their Competitors?

Publication
Article
The American Journal of Managed Care, April 2019
Volume 25
Issue 4

The authors determined whether Minnesota health systems responded to competitors’ publicly reported performance. Low performers fell further behind high performers, suggesting that reporting was not associated with quality competition.

ABSTRACT

Objectives: Some large employers and healthcare analysts have advocated for retail competition that relies on providers competing on performance metrics to improve care quality. Using publicly available performance measures, we determined whether health systems increased the quality of diabetes care provided by their clinics based on performance relative to competitors.

Study Design: Our analysis examined publicly reported performance measures of diabetes care from 2006 to 2013 for clinics in Minnesota health systems.

Methods: We obtained data for 654 clinics, of which 572 publicly reported diabetes care performance. Because some clinics did not report performance, we estimated a Heckman selection model. First, we predicted whether or not clinics reported performance. Second, we estimated the effect of relative performance (a clinic’s performance minus the mean performance of clinics in competing health systems) on clinic performance using the results of the reporting model to control for selection into the sample of reporting clinics.

Results: Although diabetes care performance improved during our study, health systems did not differentially improve the diabetes care performance of their clinics performing worse than clinics in competing systems. This indicates a divergence between high-performing and low-performing clinics that does not appear to be attributable to risk selection.

Conclusions: Publicly reporting quality information did not incentivize health systems to increase the performance of their clinics with lower performance than competitors, as would be expected under retail competition. Our results do not support strategies that rely on competition on publicly reported performance measures to improve quality in diabetes care management.

Am J Manag Care. 2019;25(4):e104-e110

Takeaway Points

Our results suggest that from 2006 to 2013, health systems in Minnesota did not compete on publicly reported diabetes care measures as envisioned under retail competition.

  • Health systems did not differentially improve the diabetes care quality of their clinics performing worse than those in competing systems. Low-performing clinics fell further behind high-performing clinics.
  • A variety of reasons, including a lack of consumer awareness of publicly reported performance measures, may dissuade low-performing health systems from focusing on quality competition.
  • These results do not support strategies of competition on public performance measures as a means of achieving quality gains in diabetes care management.

Many economists hold the view, first articulated by Kenneth Arrow, that competitive models are limited in their ability to describe healthcare markets. A central reason is imperfect information about the outcomes, or quality, tied to specific services.1 Not only do patients typically lack information when selecting providers or treatments, but information asymmetries also exist across providers, who may be unaware of how their quality compares with that of competitors.2 These features make it difficult to compensate providers based on value and therefore discourage them from competing on quality.3

Some large employers and analysts have advocated for increased retail competition to control medical care costs and improve quality of care. As summarized by Galvin and Milstein, “…providing consumers with compelling performance data and increasing their responsibility for the costs of care will slow the increase in health care expenditures and motivate clinicians to improve the quality and efficiency of their care.”4 Providers presumably would attempt to increase their performance relative to competitors to attract new patients, or retain existing ones, and to receive preferential treatment in health plan benefit designs that increase access to patients. Low-performing providers would be encouraged to catch up to high-performing providers, quality variation across providers would decrease, and quality throughout a market would rise.

Attempts to address information asymmetries by increasing the availability of performance information may not always result in a competitive effect.3 Providers may instead devote resources to other strategies effective in increasing revenues and patient flows. They may invest in new service lines or acquire physician practices to improve their bargaining position with payers. Some providers may attempt to improve performance by attracting healthier patients or avoiding patients who may be difficult to treat and contribute to lower performance. Furthermore, if providers believe that consumers do not use publicly available comparative quality information, such as provider report cards, when choosing physicians or hospitals, they may be less likely to base quality improvement decisions on the performance of competitors.

This study addresses whether health systems increased the quality of their clinics in response to their reported performance relative to competitors. Successful retail competition presumably would narrow the gap between low-performing and high-performing clinics.

METHODS

Study Setting

The health systems and associated clinics in our study are located in Minnesota, a state dominated by a relatively small number of nonprofit integrated delivery systems.5 Minnesota Community Measurement (MNCM), a voluntary stakeholder collaborative, began an annual public reporting program for diabetes care at the clinic level starting with 2006 performance (reported in 2007). A 2008 Minnesota statute mandated reporting on a standardized set of quality measures,6 which ultimately included the diabetes care metrics reported by MNCM beginning with 2009 performance; however, there is no apparent penalty for not reporting. We identified 654 clinics (from 184 health systems and independent clinics) that offered diabetes care between 2006 and 2013, of which 572 reported their performance in at least 1 year. Diabetes performance measures have been publicly available in Minnesota for longer than in any other geographic area and, in a study involving 14 communities, Minnesota had the second-highest level of awareness of diabetes performance measures in 2012.7

Modeling Clinic Performance

We assumed that decisions regarding quality improvement, including whether or not a clinic submits reports, are made by health systems. This assumption is supported by the observation that most clinics within a given health system began reporting in the same year (eAppendix A Table 1 [eAppendices available at ajmc.com]). Nevertheless, individual clinic characteristics other than competition measures may influence performance. Therefore, we took into account both the health system’s competitive environment and individual clinic attributes.

Public reporting often is voluntary, and even mandated reporting may not result in 100% compliance. Estimating clinic performance using only clinics that submitted reports may lead to bias because of unobserved factors associated with both reporting and performance. To address this issue, we employed a Heckman selection model. In the first stage, we predicted clinic reporting status, allowing it to depend on prior-year reporting status, competitive environment, clinic characteristics, and the performance year.

In the second stage, we predicted clinic performance using a framework similar to that of Kolstad, which estimated how the performance of surgeons changed after they obtained information about competitors through report cards.2 This framework determines how providers respond to their relative performance—in our case, how much better (or worse) a clinic is compared with competitors—while controlling for patient volume to capture the response associated with patient demand. For both relative performance and patient volume, we used prior-year (ie, lagged) measures to reflect the information available at the time (eg, clinics had 2008 performance data in 2009) and to allow time to react to demand changes. As in the first-stage reporting model, performance was allowed to vary by competitive attributes, clinic characteristics, and performance year. We used the results of the reporting model to control for selection into the sample of reporting clinics. (eAppendix B provides a mathematical exposition.)
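For orientation, the sketch below lays out the generic structure of such a two-stage selection model in standard notation. It is a textbook formulation with our variable groupings filled in, not a reproduction of the exact specification given in eAppendix B.

```latex
% Stage 1 (selection): does clinic i report in year t?
% Z_{it} collects prior-year reporting status, competitive environment,
% clinic characteristics, and performance-year effects.
R_{it} = \mathbf{1}\left[\, Z_{it}\gamma + u_{it} > 0 \,\right]

% Stage 2 (outcome), estimated on reporting clinics, where Y_{it} is the
% clinic's ODC score, X_{it} collects lagged relative performance, lagged
% patient volume, competitive attributes, and year effects, and \alpha_i
% is a clinic fixed effect:
Y_{it} = \alpha_i + X_{it}\beta + \rho\,\lambda\!\left(Z_{it}\hat{\gamma}\right) + \varepsilon_{it},
\qquad
\lambda(z) = \frac{\phi(z)}{\Phi(z)}

% \lambda is the inverse Mills ratio from the first stage, included to
% control for selection into the sample of reporting clinics.
```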

Market segmentation could affect performance and, therefore, our results. Some clinics may attract healthier patients or avoid difficult-to-treat ones to achieve higher performance that then would be attributable to changes in patient population rather than quality improvement. For example, some Medicaid patients are less adherent to medications, which could lead to worse clinical performance.8 To examine market segmentation, we re-estimated the model using patient volume as the dependent variable to determine whether volume differentially changed by clinics’ relative performance. If clinics of either relatively high or low performance differentially avoid difficult-to-treat patients whom they perceive will contribute to lower performance, then the results of this sensitivity analysis likely would find those clinics managing fewer Medicaid patients. If patients are not shifting between relatively high-performing and low-performing clinics, then it is unlikely that market segmentation influences our results.

Data and Measures

Our measure of performance is the Optimal Diabetes Care (ODC) score. The ODC score is the percentage of patients (aged 18-75 years) with diabetes who simultaneously achieve 5 treatment goals: (1) glycated hemoglobin (A1C) less than 7% (<8% starting with 2009 performance); (2) blood pressure less than 130/80 mm Hg (<140/80 mm Hg starting with 2010 performance); (3) low-density lipoprotein cholesterol less than 100 mg/dL; (4) daily aspirin use, unless contraindicated (includes only patients with ischemic vascular disease starting with 2009 performance); and (5) documented tobacco-free status.
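To make the composite concrete, the following minimal sketch computes clinic-level ODC scores from patient-level indicators. The column names and data are hypothetical, the thresholds shown are the post-2010 definitions, and the measure’s eligibility and exclusion rules are omitted.

```python
import pandas as pd

# Hypothetical patient-level records: one row per patient with diabetes (aged 18-75)
patients = pd.DataFrame({
    "clinic_id":    [1, 1, 1, 2, 2],
    "a1c":          [6.9, 8.4, 7.5, 6.2, 7.9],  # glycated hemoglobin, %
    "sbp":          [128, 150, 135, 122, 138],  # systolic blood pressure, mm Hg
    "dbp":          [78, 92, 79, 70, 76],       # diastolic blood pressure, mm Hg
    "ldl":          [95, 120, 88, 101, 99],     # LDL cholesterol, mg/dL
    "aspirin_met":  [True, False, True, True, True],   # daily aspirin use, or contraindicated/not indicated
    "tobacco_free": [True, True, False, True, True],   # documented tobacco-free status
})

# A patient counts toward the composite only if all 5 goals are met simultaneously
# (post-2010 thresholds: A1C < 8%, blood pressure < 140/80 mm Hg, LDL < 100 mg/dL)
patients["optimal"] = (
    (patients["a1c"] < 8.0)
    & (patients["sbp"] < 140) & (patients["dbp"] < 80)
    & (patients["ldl"] < 100)
    & patients["aspirin_met"]
    & patients["tobacco_free"]
)

# Clinic ODC score = percentage of its patients with diabetes meeting all 5 goals
odc_score = patients.groupby("clinic_id")["optimal"].mean().mul(100)
print(odc_score)  # clinic 1: 33.3, clinic 2: 50.0
```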

We constructed indicators of relative performance and competitive environment. First, we calculated the mean ODC score of clinics in competing health systems. Each urban and rural health system’s competitors consisted of all clinics in the other systems within 5 miles and 25 miles, respectively, of any one of the system’s clinics. To measure relative performance, we subtracted each clinic’s competitor ODC measure (ie, mean ODC score of clinics in competing health systems) from its own ODC score. We created quintiles for this measure separately for urban and rural settings. Because we cannot measure relative performance for nonreporting clinics, we used the competitor ODC measure by itself in the reporting model. eAppendix C provides further explanation and descriptive statistics for these measures. We also created measures for the percentage of clinics in competing health systems that submitted reports and the number of competing systems to additionally control for competitive environment.
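The sketch below illustrates this construction with hypothetical data. Each clinic’s competitor set is taken as given (in the study it is defined by the 5-mile urban and 25-mile rural radii around the system’s clinics), the competitor ODC measure is the mean score over that set, and relative performance is the clinic’s own score minus that mean, cut into quintiles.

```python
import pandas as pd

# Hypothetical clinic-year records; "competitors" lists clinics in competing
# health systems within the relevant distance threshold (precomputed here)
clinics = pd.DataFrame({
    "clinic_id":   [1, 2, 3, 4, 5],
    "system_id":   ["A", "A", "B", "C", "C"],
    "odc_score":   [42.0, 35.0, 50.0, 30.0, 44.0],
    "competitors": [[3, 4, 5], [3, 4, 5], [1, 2, 4, 5], [1, 2, 3], [1, 2, 3]],
})

score_by_id = clinics.set_index("clinic_id")["odc_score"]

# Competitor ODC measure: mean score of clinics in competing health systems
clinics["competitor_odc"] = clinics["competitors"].apply(
    lambda ids: score_by_id.loc[ids].mean()
)

# Relative performance: own score minus competitor mean (lagged 1 year in the model)
clinics["relative_perf"] = clinics["odc_score"] - clinics["competitor_odc"]

# Quintiles of relative performance (computed separately for urban and rural
# clinics in the study; pooled here only for brevity)
clinics["rel_perf_quintile"] = pd.qcut(clinics["relative_perf"], 5, labels=False) + 1
print(clinics[["clinic_id", "relative_perf", "rel_perf_quintile"]])
```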

The MNCM data include the annual number of patients with diabetes by clinic, which we used to control for responses associated with patient demand in the performance model. Beginning in 2009, these data include the number of patients enrolled in Medicare, Minnesota Health Care Programs (MHCP) (ie, Medicaid and other programs for low-income families and individuals), and private insurance, which we used to examine market segmentation.

We constructed explanatory variables for the reporting model from available single-year data sources. These measures are excluded from the performance model, in which we employed clinic fixed effects. We used 2009 American Community Survey 5-year estimates to create indicators of each clinic’s potential patient population. These indicators include the mean age of the population and the proportion of the population on any type of public assistance within 5 miles and 25 miles of each urban and rural clinic, respectively. We used a 2012 licensure data set to determine the number of physicians, percentage of specialists, mean physician age, and percentage of female physicians at each clinic. We determined federally qualified health center (FQHC) status and affiliation with a critical access hospital (CAH) through Web searches.

Estimation

We estimated separate urban and rural models to account for location-driven differences in competition. In the first stage, we estimated reporting status using a probit regression. Our data suggest that many smaller clinics, and clinics perceived to have difficult-to-treat patient populations, were more likely to have delayed reporting until the mandate. Therefore, we interacted the 2009 performance year indicator with stand-alone clinic status, number of physicians, FQHC status, CAH affiliation, and the proportion of the potential patient population on public assistance. Because the mandate had a large influence on reporting decisions, we present average marginal effects on the probability of reporting over 3 periods: premandate (2007-2008), first mandate year (2009), and postmandate (2010-2013). In the second stage, we employed a fixed-effects model to estimate clinic performance that includes the inverse Mills ratio obtained from the first-stage estimation. Because the fixed-effects model required at least 2 observations per clinic, we excluded any clinic that reported in only 1 year, including all clinics that began reporting with 2013 performance, resulting in an estimation sample of 288 urban and 244 rural clinics. We present average marginal effects of each explanatory variable on clinic performance.
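As a rough sketch of how a two-stage estimation of this kind can be set up, the code below fits a probit reporting model, computes the inverse Mills ratio, and fits a fixed-effects performance model on reporting clinic-years. All variable names are placeholders, and the sketch omits the urban/rural split, the 2009 interaction terms, and the marginal-effects calculations reported in the article.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_stage(df: pd.DataFrame):
    """df: clinic-year panel with hypothetical columns
    reported (0/1), odc_score, clinic_id, year, lag_reported,
    competitor_odc, n_physicians, pct_public_assist,
    lag_rel_perf_quintile, lag_volume."""

    # Stage 1: probit model of whether the clinic reports in a given year
    z_cols = ["lag_reported", "competitor_odc", "n_physicians", "pct_public_assist"]
    year_dummies = pd.get_dummies(df["year"], prefix="yr", drop_first=True)
    Z = sm.add_constant(year_dummies.join(df[z_cols]).astype(float))
    probit = sm.Probit(df["reported"].astype(float), Z).fit(disp=0)

    # Inverse Mills ratio evaluated at the fitted selection index Z*gamma_hat
    index = probit.fittedvalues
    df = df.assign(imr=norm.pdf(index) / norm.cdf(index))

    # Stage 2: performance model on reporting clinic-years, with clinic fixed
    # effects (dummies), year effects, lagged relative-performance quintiles,
    # lagged patient volume, and the inverse Mills ratio as a selection control
    rep = df[df["reported"] == 1].copy()
    X = pd.get_dummies(
        rep[["lag_rel_perf_quintile", "clinic_id", "year"]].astype(str),
        drop_first=True,
    )
    X = sm.add_constant(X.join(rep[["lag_volume", "imr"]]).astype(float))
    outcome = sm.OLS(rep["odc_score"].astype(float), X).fit(
        cov_type="cluster", cov_kwds={"groups": rep["clinic_id"]}
    )
    return probit, outcome
```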

RESULTS

Descriptive Statistics

Of 654 clinics providing diabetes care, 572 (87.5%) reported at least once between performance years 2006 and 2013. Urban clinics were more likely to be early reporters than rural clinics (Figure): For 2006, 32.1% of urban clinics and 14.8% of rural clinics reported their performance. The number of clinics reporting increased through performance year 2009, the year reporting became mandated, and then leveled off. By 2013, approximately 80% of clinics reported. Those not reporting were smaller, independent clinics that often had a higher percentage of specialists than reporting clinics (Table 1). Of clinics that never reported, more than half were stand-alone clinics.

Although a large improvement in publicly reported performance occurred between 2008 and 2010 (Figure), previous research using these data found that this increase is mainly attributable to changes in the definitions of measures for A1C, blood pressure, and daily aspirin, which made it easier for clinics to achieve the performance goal.9 Adjusting for these definition changes, performance improved modestly over the study period.9

Decision to Report

We present average marginal effects of the reporting model in Table 2 (coefficients in eAppendix A Table 2). Overall, reporting was highly persistent. Urban clinics whose health systems faced higher-performing competitors were less likely to report than urban clinics with lower-performing competitors. Prior to the mandate, each 1-percentage-point increase in the mean ODC score of clinics in competing health systems was associated with a 0.74-percentage-point (95% CI, 0.01-1.47) decrease in the probability of reporting for urban clinics. Most clinics faced competitor performance within 5 percentage points of the sample mean of the competitor performance measure, implying that, relative to a clinic facing average competition, competitor performance typically shifted the probability of reporting by less than 4 percentage points (5 × 0.74 ≈ 3.7). This effect diminished over the study period, although it remained significant.

Diabetes Care Performance

Table 3 presents the average marginal effects for the clinic performance model. These average effects apply to all clinics regardless of whether or not they reported (see eAppendix A Table 3 for effects conditional on reporting). Although, on average, clinics improved over time, responses to competitor performance imply divergence between high-performing and low-performing clinics. Clinics that had performed much better than competitors in the prior year improved their performance in the following year more than clinics that had performed similarly to competitors, on average, by 1.90 (95% CI, 1.35-2.61) percentage points in urban areas and 1.35 (95% CI, 0.61-1.94) percentage points in rural areas. The divergence was greatest in urban areas, where clinics that had performed much better than competitors improved their ODC scores by 2.99 (95% CI, 1.96-4.05) percentage points more than clinics that had performed slightly worse than their competitors, and by 4.06 (95% CI, 2.54-5.96) percentage points more than clinics that had performed much worse than their competitors. These results imply that relatively high-performing clinics were improving faster than low-performing clinics.

We found no significant effect of patient volume on performance, suggesting that clinics did not increase their performance in response to greater or fewer patients in the prior year. The coefficient on the inverse Mills ratio (eAppendix A Table 3) for urban clinics was 0.33 (95% CI, 0.03-0.62), implying that reporting clinics had higher performance than nonreporting clinics. The inverse Mills ratio was not significant for rural clinics.

Market Segmentation

We found significant associations between patient volume and relative performance only in urban areas (Table 4). Among all payers, clinics that had performed much better than competitors gained, on average, 16.1 (95% CI, 7.2-26.4) patients with diabetes compared with clinics that had performed similarly to competitors, and a similar number of patients was gained by clinics performing slightly better than their competitors. Analyzing volume by payer (from 2009 onward), gains in volume were attributable only to privately insured patients. These results imply that high-performing clinics were attracting privately insured patients—a likely intended outcome of public reporting efforts aiming to shift patients to higher-quality clinics. If these patients were relatively healthy, then they potentially contributed to higher performance scores. However, neither relatively high-performing nor low-performing clinics differentially avoided MHCP patients, suggesting that the divergence in performance between clinics is not attributable to market segmentation of this more difficult-to-treat population.

DISCUSSION

We examined whether health systems respond to the performance of their competitors—a behavior expected under retail competition that could lead to quality improvements. Although diabetes care performance improved in Minnesota clinics during our study, clinics that outperformed competitors subsequently improved more than clinics that had performed worse than competitors, indicating a divergence between high-performing and low-performing clinics. This result suggests that public reporting did not incentivize health systems to improve their low-performing clinics in response to competing against high-performing clinics in other systems.

Our results differ from those of Kolstad, who found that surgeons improved their mortality rate if they were performing worse than expected after the introduction of report cards.2 However, differences between surgical mortality and diabetes outcomes likely limit the comparability of these findings. Compared with individual surgeons, health systems and their associated clinics may also have access to a variety of alternatives to increase revenues or patient flows when faced with publicly reporting performance. For example, they may acquire physician practices or invest in new service lines to attract patients, methods that are unlikely to be available to individual physicians.

Public reporting may encourage some health systems to take their first steps toward improving diabetes care quality, but these systems may lack the resources needed to develop sophisticated strategies focused on retail competition. Smith and colleagues found that several physician groups had little focus on diabetes care performance prior to reporting as part of the Wisconsin Collaborative for Healthcare Quality.10 When these physician groups started reporting, they tended to implement simple quality improvement strategies, in contrast to the multiple-intervention strategies of higher-performing physician groups. In our study, many clinics that were independent or belonged to smaller systems did not begin reporting until it was mandated. Clinics that began reporting with the mandate scored 10 to 30 percentage points lower on the ODC score’s individual measures relative to clinics that began reporting earlier.9

Initial quality improvement efforts take time to execute because they may include improvements in health information technology, changes in office procedures, and recruitment of quality improvement champions, among other steps. In this study, some health systems may have been undertaking these steps without yet realizing large gains in quality. These smaller-system and stand-alone clinics also may lack the resources to implement the specific interventions needed to compete on quality.11

Some health systems may not believe that public reporting ameliorates imperfect information between consumers and providers. A relatively small percentage of patients use public quality measures,7 and the evidence that public reports influence demand is mixed.12 High-performing clinics differentially attracted privately insured patients in our study, although these effects are likely small in terms of total revenue, considering that patients with diabetes represent only a fraction of all patients. In addition, neither MHCP nor Medicare patients shifted from low-performing to high-performing clinics, reinforcing concerns about the usefulness of public reporting for publicly insured patients, including the Medicare Payment Advisory Commission’s contention regarding quality reporting in the Merit-based Incentive Payment System.13 Health systems may believe that the current level of consumer awareness and engagement is below the threshold needed to make investments in quality competition preferable to other uses of quality improvement resources. This explanation is supported by the evaluation of the Robert Wood Johnson Foundation’s Aligning Forces for Quality (AF4Q) initiative, in which MNCM participated. After 4 to 8 years of participation in AF4Q, community coalition leaders generally “…did not believe that the ‘competitive market strategy’…would improve provider quality or efficiency. In their experience, too few consumers sought out and used the information in this way…”14

One alternative to attracting patients and improving quality is mergers and acquisitions that allow for horizontal and vertical integration. During this study, larger health systems acquired several smaller systems and stand-alone clinics. These acquisitions were likely mutually beneficial, as they increased the market share of the larger health systems and improved performance at the acquired clinics.15

Limitations

Competitor performance may have been mismeasured. Health systems may not view some clinics in their market areas as competitors; for example, a large integrated delivery system may not treat small independent clinics as competitors. However, in our study, a handful of large health systems dominate each market, and stand-alone clinics make up only 13% of clinics.

A complete model of competition ideally would incorporate both price and quality information in relation to competitive responses. Although health systems may have attempted to adjust prices to gain bargaining power, MNCM did not report total cost measures until after the conclusion of this study.16 Health systems also would have had little reason to adjust quality in response to competitor pricing, because consumers could not base their decisions on prices that were not publicly available.

The substantial presence in this study market of large health systems may raise questions about generalizability to areas where smaller health systems are more dominant. However, recent trends have shown an increase in mergers and acquisitions throughout the United States, making vertically integrated health systems and concentrated markets increasingly the norm.17-19

CONCLUSIONS

Unique aspects of the healthcare market make it difficult to reward and incentivize quality improvement as envisioned in the competitive market paradigm.1 Even when market information asymmetries were addressed through public reporting, we find that health systems did not compete on quality as proponents of retail competition intended. Although public reporting may incentivize quality gains in diabetes care management through other mechanisms, relying on it to promote retail competition among physicians on performance measures is unlikely to be an effective strategy.

Author Affiliations: Division of Health Policy and Management, University of Minnesota School of Public Health (DJC, JBC), Minneapolis, MN; Department of Health Management and Policy, University of Michigan School of Public Health (JSM), Ann Arbor, MI; Children’s Minnesota Research Institute (MDF), Minneapolis, MN.

Source of Funding: This research was funded in part by a grant from Pennsylvania State University as part of the evaluation of the Robert Wood Johnson Foundation’s Aligning Forces for Quality initiative. The funding sources had no role in the design and conduct of the study; collection, analysis, and interpretation of data; or preparation of the manuscript.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (DJC, JBC, JSM, MDF); acquisition of data (DJC, JBC, MDF); analysis and interpretation of data (DJC, JBC, JSM); drafting of the manuscript (DJC, JBC); critical revision of the manuscript for important intellectual content (DJC, JBC, JSM, MDF); statistical analysis (DJC, JSM); and obtaining funding (JBC); administrative, technical, or logistic support (JBC).

Address Correspondence to: Daniel J. Crespin, PhD, Division of Health Policy and Management, University of Minnesota School of Public Health, 420 Delaware St SE, MMC 729, Minneapolis, MN 55455. Email: daniel.crespin@gmail.com.

REFERENCES

1. Arrow KJ. Uncertainty and the welfare economics of medical care. Am Econ Rev. 1963;53(5):941-973.

2. Kolstad JT. Information and quality when motivation is intrinsic: evidence from surgeon report cards. Am Econ Rev. 2013;103(7):2875-2910.

3. Town R, Wholey DR, Kralewski J, Dowd B. Assessing the influence of incentives on physicians and medical groups. Med Care Res Rev. 2004;61(suppl 3):80S-118S. doi: 10.1177/1077558704267507.

4. Galvin R, Milstein A. Large employers’ new strategies in health care. N Engl J Med. 2002;347(12):939-942. doi: 10.1056/NEJMsb012850.

5. Christianson JB, Carlin CS, Warrick LH. The dynamics of community health care consolidation: acquisition of physician practices. Milbank Q. 2014;92(3):542-567. doi: 10.1111/1468-0009.12077.

6. Minn Stat ch 62U, §62U.02 1-5. revisor.mn.gov/statutes/cite/62U.02. Accessed March 8, 2019.

7. Scanlon DP, Shi Y, Bhandari N, Christianson JB. Are healthcare quality “report cards” reaching consumers? awareness in the chronically ill population. Am J Manag Care. 2015;21(3):236-244.

8. Foster SA, Foley KA, Meadows ES, et al. Adherence and persistence with teriparatide among patients with commercial, Medicare, and Medicaid insurance. Osteoporos Int. 2011;22(2):551-557. doi: 10.1007/s00198-010-1297-z.

9. McCullough JS, Crespin DJ, Abraham JM, Christianson JB, Finch M. Public reporting and the evolution of diabetes quality. Int J Health Econ Manag. 2015;15(1):127-138. doi: 10.1007/s10754-015-9167-z.

10. Smith MA, Wright A, Queram C, Lamb GC. Public reporting helped drive quality improvement in outpatient diabetes care among Wisconsin physician groups. Health Aff (Millwood). 2012;31(3):570-577. doi: 10.1377/hlthaff.2011.0853.

11. Wolfson D, Bernabeo E, Leas B, Sofaer S, Pawlson G, Pillittere D. Quality improvement in small office settings: an examination of successful practices. BMC Fam Pract. 2009;10:14. doi: 10.1186/1471-2296-10-14.

12. Mukamel DB, Haeder SF, Weimer DL. Top-down and bottom-up approaches to health care quality: the impacts of regulation and report cards. Annu Rev Public Health. 2014;35:477-497. doi: 10.1146/annurev-publhealth-082313-115826.

13. Medicare Payment Advisory Commission. Report to the Congress: Medicare Payment Policy. Washington, DC: MedPAC; March 2017. medpac.gov/docs/default-source/reports/mar17_entirereport.pdf. Accessed July 23, 2018.

14. Christianson JB, Shaw BW, Greene J, Scanlon DP. Reporting provider performance: what can be learned from the experience of multi-stakeholder community coalitions? Am J Manag Care. 2016;22(suppl 12):S382-S392.

15. Crespin DJ, Christianson JB, McCullough JS, Finch MD. Health system consolidation and diabetes care performance at ambulatory clinics. Health Serv Res. 2016;51(5):1772-1795. doi: 10.1111/1475-6773.12450.

16. 2014 Total Cost of Care Report: Comparison of Medical Group Costs Across Minnesota. Minneapolis, MN: Minnesota Community Measurement; 2015. mncm.org/wp-content/uploads/2015/06/2014-TCOC-Report-Final.pdf. Accessed September 6, 2017.

17. Baker LC, Bundorf MK, Kessler DP. Vertical integration: hospital ownership of physician practices is associated with higher prices and spending. Health Aff (Millwood). 2014;33(5):756-763. doi: 10.1377/hlthaff.2013.1279.

18. Kirchhoff SM. Physician Practices: Background, Organization, and Market Consolidation. Washington, DC: Congressional Research Service; 2013. https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?article=2007&context=key_workplace. Accessed September 6, 2017.

19. Fulton BD. Health care market concentration trends in the United States: evidence and policy responses. Health Aff (Millwood). 2017;36(9):1530-1538. doi: 10.1377/hlthaff.2017.0556.
