This study identifies practices and perceptions around public reporting of “roll-up” measures of healthcare quality to consumers.
ABSTRACT
Objectives: To understand the views of prominent organizations in the field of healthcare quality on the topic of reporting roll-up measures that combine indicators of multiple, often disparate, dimensions of care to consumers.
Study Design: This study used a semi-structured, qualitative interview design.
Methods: We conducted 30- to 60-minute semi-structured telephone interviews with representatives of 10 organizations that sponsor public healthcare quality reports and 3 national alliances representing multiple stakeholder groups. We conducted a thematic analysis of interview transcriptions to identify common issues and concerns related to reporting roll-up measures.
Results: Among sponsors reporting roll-up measures, current practices for calculating and reporting these measures are diverse. The main perceived benefit of reporting roll-up measures is that they simplify large amounts of complex information for consumers. The main concern is the potential for consumers to misunderstand the measures and what associated roll-up scores communicate about provider performance. Report sponsors and national alliances feel that more guidance and research on the methods for producing and reporting scores for roll-up measures are needed.
Conclusions: The results of the interviews elucidate the need for research focused on construction and reporting of roll-up measures. Studies are needed to determine if roll-up measures are indeed perceived by consumers as being less complex and easier to understand.
Takeaway Points
Some organizations that sponsor public reports of healthcare quality present roll-up measures that combine indicators of multiple, often disparate, dimensions of care. We found that:
- Current practices for calculating and reporting roll-up measures are diverse.
- The main perceived benefit of reporting roll-up measures is that they simplify large amounts of complex information for consumers.
- The main concern is the potential for consumers to misunderstand the measures and what roll-up scores communicate about provider performance.
- Report sponsors and national alliances feel that more guidance and research on the methods for producing and reporting roll-up scores are needed.
A major objective of public reports on healthcare quality is to enable consumers to make well-informed choices about their care. To meet this objective, report sponsors often use strategies intended to reduce the cognitive burden of healthcare quality data and improve consumers’ comprehension and use of the information.1,2 One strategy is the provision of “roll-up” measures (ie, summary measures of healthcare quality that combine indicators of multiple, often disparate, dimensions of care). For example, a roll-up measure of patient experience with primary care could aggregate composite measures of patient-provider communication, access to care, and interactions with office staff. At a higher level of aggregation, a roll-up measure of overall hospital quality could combine measures of patient experience, clinical processes, and patient outcomes. The most prominent use of roll-up measures within healthcare is CMS’ Five-Star Quality Rating System, which assigns an overall rating of 1 to 5 stars to providers and plans based on multiple quality dimensions.3 Roll-up measures can also aggregate component measures at intermediate levels, such as within a single data source or a single domain of quality.
Publicly reporting roll-up measures offers advantages to consumers insofar as the roll-ups are created scientifically and grounded in high-quality research. These advantages include offering consumers a summary of multiple dimensions of care and, in many cases, a measure of healthcare quality that has superior reliability.4 However, this strategy is not without controversy: roll-up measures may obscure important nuances of quality and not align with consumer preferences or information needs. Also, the calculation of roll-up scores relies on assumptions about the relative importance of each component measure.5-10
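To make the role of these weighting assumptions concrete, the following sketch shows one simple way a roll-up score and an associated star rating could be computed from component scores. The component names, values, weights, and star cut points are hypothetical illustrations only; they do not represent the methodology of CMS or of any report sponsor discussed in this study.

```python
# Hypothetical illustration of a roll-up score as a weighted average of
# component quality scores (each assumed to be on a 0-100 scale), followed
# by conversion to a 1-5 star rating. Component names, values, weights, and
# star cut points are invented for illustration; this is not CMS methodology.

def roll_up_score(components: dict, weights: dict) -> float:
    """Return the weighted average of component scores."""
    total_weight = sum(weights.values())
    return sum(components[name] * weights[name] for name in components) / total_weight

def to_stars(score: float, cut_points=(55, 65, 75, 85)) -> int:
    """Map a 0-100 roll-up score to a 1-5 star rating using fixed cut points."""
    return 1 + sum(score >= cut for cut in cut_points)

hospital = {"patient_experience": 90.0, "clinical_process": 92.0, "outcomes": 48.0}

# The same hospital can receive different ratings under different assumptions
# about the relative importance of each component.
equal_weights = {"patient_experience": 1, "clinical_process": 1, "outcomes": 1}
outcome_heavy = {"patient_experience": 1, "clinical_process": 1, "outcomes": 3}

for label, weights in [("equal weights", equal_weights), ("outcome-heavy", outcome_heavy)]:
    score = roll_up_score(hospital, weights)
    print(f"{label}: score={score:.1f}, stars={to_stars(score)}")
```

Under the illustrative cut points above, the same component scores yield 4 stars with equal weights but 3 stars when outcomes are weighted more heavily, showing how the choice of weights, and not only underlying performance, can change what consumers see.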
We aimed to understand how prominent organizations in the field of healthcare quality view the reporting of roll-up measures and obtain examples of current approaches to reporting roll-up measures. To that end, this study gathered information from representatives of report sponsors and national healthcare alliances on current practices and positive and negative perceptions with regard to creating and reporting roll-up measures to consumers. We sought to identify these organizations’ concerns given the current interest in reporting these measures to consumers and to determine the need for additional evidence to address these issues.
METHODS
In this qualitative study, we conducted 30- to 60-minute semi-structured telephone interviews with representatives from 10 organizations that publish comparative reports on healthcare quality (Table 1) and 3 national alliances that bring together multiple stakeholder groups. These alliances—the AQA (formerly the Ambulatory Care Quality Alliance), Consumer-Purchaser Alliance, and the National Quality Forum—were strategically selected for their expertise in healthcare quality measurement and reporting.
Interviews with report sponsors explored their perceptions of the benefits and drawbacks of reporting roll-up measures, the factors that influenced whether they report these measures, and for those that do report, how they calculate and report these measures to consumers. Interviews with the national alliances focused on whether the reporting of roll-up measures had been discussed within the organization, perceptions of the most important issues or concerns, the types of stakeholders involved in the discussion, and whether they created position statements or recommendations on reporting roll-up measures. Our qualitative approach enabled participants to describe their perceptions and experiences in their own words and was well-suited to eliciting participants’ diverse perspectives.11-13
We used thematic analysis to extract concepts that emerged from multiple interviews.14,15 Each interview transcription was reviewed by the first author (who also conducted several interviews with report sponsors and all 3 interviews with national alliances). The author extracted key information related to roll-up measure construction and reporting, identified consistent themes across the interviews, and produced short memos describing themes and the evidence for each. After clarifying assumptions and reviewing the themes with the other authors, the author updated the memos to reflect a consensus around the main themes extracted from the data. Key themes were confirmed through an independent review of transcriptions by the fourth author, who did not conduct any of the interviews.
RESULTS
Diversity of Roll-up Measure Reporting Practices
Among sponsors reporting roll-up measures at the time of the interviews, practices for calculating and reporting roll-up measures were diverse. Practices varied across sponsors in how roll-up measures for different provider types were constructed and labeled (Table 2).16 For example, the Wisconsin Department of Employee Trust Funds differentially weighted and combined multiple Consumer Assessment of Healthcare Providers and Systems (CAHPS) and Healthcare Effectiveness Data and Information Set (HEDIS) indicators into a roll-up measure called the “Overall Quality Score” for health plans. In contrast, the Pacific Business Group on Health used an equal-weighting approach to combine several HEDIS measures of medical group performance into the measure, “Medical Group Provides Recommended Care.”
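Because CAHPS survey ratings and HEDIS rates are reported on different scales, combining them into a single score generally requires first placing the indicators on a common scale. The sketch below illustrates one common approach, standardizing each indicator across the plans being compared and then applying either equal or differential weights; the plan names, scores, and weights are hypothetical and do not represent either sponsor’s actual measures or methodology.

```python
# Hypothetical illustration: placing disparate indicators (a 0-10 CAHPS global
# rating and 0-1 HEDIS rates) on a common scale before rolling them up.
# Plan names, scores, and weights are invented; this does not depict the
# methodology of any report sponsor discussed in this study.
from statistics import mean, pstdev

plans = {
    "Plan A": {"cahps_rating": 8.6, "hedis_screening": 0.72, "hedis_immunization": 0.81},
    "Plan B": {"cahps_rating": 7.9, "hedis_screening": 0.88, "hedis_immunization": 0.77},
    "Plan C": {"cahps_rating": 8.2, "hedis_screening": 0.69, "hedis_immunization": 0.90},
}

def standardize(plans):
    """Convert each indicator to a z-score across the plans being compared."""
    indicators = list(next(iter(plans.values())))
    stats = {i: (mean(p[i] for p in plans.values()), pstdev(p[i] for p in plans.values()))
             for i in indicators}
    return {name: {i: (scores[i] - stats[i][0]) / stats[i][1] for i in indicators}
            for name, scores in plans.items()}

def roll_up(z_scores, weights):
    """Compute a weighted average of standardized indicators for each plan."""
    total = sum(weights.values())
    return {name: sum(z[i] * weights[i] for i in weights) / total
            for name, z in z_scores.items()}

z = standardize(plans)
equal_weights = {"cahps_rating": 1, "hedis_screening": 1, "hedis_immunization": 1}
differential_weights = {"cahps_rating": 2, "hedis_screening": 1, "hedis_immunization": 1}

print("Equal weighting:       ", roll_up(z, equal_weights))
print("Differential weighting:", roll_up(z, differential_weights))
```

Standardizing first keeps an indicator reported on a 0-10 scale from dominating indicators reported as rates simply because of its larger numeric range; the weighting step then encodes explicit judgments about relative importance.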
In the absence of well-established methodological criteria, report sponsors based their decisions about how to calculate and label roll-up scores on information from a variety of sources, including research literature on reporting practices, technical expert guidance, information about other report sponsors’ practices, and the results of psychometric analyses. It is unclear exactly how they incorporated these sources of information into their choices about roll-up measure construction. Some participants reported testing measure labels with consumers or working with advisory groups to construct roll-up measures. All advisory groups included a diverse mix of stakeholders, such as purchasers, providers, state medical associations, and patient advocates.
Simplification of Information
The main perceived benefit of reporting roll-up measures is the simplification of large amounts of complex information for consumers. Nearly all participants pointed out that roll-up measures make it easier for consumers to understand and use the information provided in quality reports. For example, one report sponsor representative said, “It’s a huge win for consumers to be able to go in and not have to read through 100 technical measures about hospitals. Even in plain language, 100 measures is a lot of stuff to expect people to go through.” Representatives of the national alliances suggested multiple ways in which reporting roll-up measures could reduce complexity for consumers. One representative asserted that processing roll-up measures requires less effort on the part of the consumer; another argued that reporting these measures allows consumers to gain a comprehensive view of care. In addition to reducing complexity, a third representative suggested that the reporting of roll-up measures could draw attention to the availability of comparative quality information, saying, “…to a large extent, the complexity of existing public reporting has meant that people don’t pay any attention and still ask the neighbors next door, ‘Who do you like?’…so you have to get it to the level that garners attention….”
However, participants recognized that consumers vary in their information needs: some may want simplified information, whereas others desire detail (eg, the ability to drill down to scores for the measures that compose the roll-up measures). According to one national alliance representative, high-profile consumer groups argue that when consumers are shown roll-up measures, they should also have the option to view all component measures.
Understanding Roll-up Measures
The main concern about reporting roll-up measures is the potential for consumers to misunderstand the measures and what they communicate about provider quality. As one participant noted, if the weighting of individual scores in roll-up measures does not correspond to the relative importance of different aspects of care to consumers, consumers may draw incorrect conclusions about provider performance. Another participant reported that his organization opted against reporting a particular roll-up measure because of uncertainty about how to appropriately weight the component measures. Several report sponsors shared the concern about a potential mismatch between roll-up measures and the salience of individual measures to consumers. One sponsor suggested giving consumers a way to roll up information according to their preferences via an interactive online report.
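One way to picture the interactive approach that sponsor described is sketched below: a hypothetical report feature that converts a consumer’s own importance ratings into weights and recomputes the roll-up accordingly. The clinic names, component scores, and 1-to-5 importance scale are assumptions for illustration, not any sponsor’s actual tool.

```python
# Hypothetical sketch of an interactive "roll up by your own priorities" report
# feature. Clinic names, component scores, and the 1-5 importance scale are
# invented for illustration; no report sponsor's actual tool is depicted.

PROVIDER_SCORES = {
    "Clinic North": {"communication": 88, "access": 62, "office_staff": 90},
    "Clinic South": {"communication": 74, "access": 88, "office_staff": 75},
}

def personalized_roll_up(scores: dict, importance: dict) -> float:
    """Weight each component score by the consumer's stated importance (1-5)."""
    total = sum(importance.values())
    return sum(scores[m] * importance[m] for m in scores) / total

equal_importance = {"communication": 1, "access": 1, "office_staff": 1}
# A consumer who cares most about getting appointments quickly:
access_focused = {"communication": 3, "access": 5, "office_staff": 1}

for label, importance in [("equal importance", equal_importance),
                          ("access-focused consumer", access_focused)]:
    ranked = sorted(PROVIDER_SCORES,
                    key=lambda name: personalized_roll_up(PROVIDER_SCORES[name], importance),
                    reverse=True)
    print(label, [(name, round(personalized_roll_up(PROVIDER_SCORES[name], importance), 1))
                  for name in ranked])
```

In this sketch, Clinic North ranks first under equal importance, whereas the access-focused consumer sees Clinic South ranked first, illustrating how letting consumers set the weights can change which provider a roll-up measure favors.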
Several other participants expressed concerns about consumers’ potential misunderstanding of roll-up measures and what they communicate. For example, consumers “might not read the fine print” and, therefore, might miss information about how the roll-up measures were calculated. They may also misunderstand a roll-up measure labeled as “overall quality,” as such a label may imply that all of the important aspects of quality have been measured and are thus represented by the score. In addition, these measures may obscure areas in which providers perform especially well or especially poorly. One report sponsor noted, “That’s always a risk—a person walking away with a generalized sense of performance, but it doesn’t encapsulate the variation of performance of the measures within the composite.”
Guidance and Research
More guidance and research on the methods for producing and reporting scores for roll-up measures are needed. Several participants expressed a need for more evidence-based guidance on the construction and use of roll-up measures. In the words of one national alliance representative: “I think we could use more science of what logically hangs together, whether there are some things that should never be broken apart, whether there are some things that logically should always hang together.”
DISCUSSION
We conducted interviews with representatives of report sponsors and national alliances to document attitudes and practices around the calculation and reporting of roll-up measures to healthcare consumers. We identified diverse practices in the construction of roll-up scores; however, our study is limited by its small sample of strategically selected organizations, and we cannot determine the prevalence of roll-up score reporting. The interviews revealed a desire for more guidance and research on the methods for producing and reporting scores for roll-up measures. These findings point to the need for a research agenda focused on developing standards for the construction, calculation, and reporting of roll-up measures that are rooted in empirical data on how the constructs assessed by various performance measures relate to each other and to desired health outcomes.
One key topic for exploration is the appropriate level of aggregation for roll-up measures. Roll-ups could aggregate measures from a single source (eg, the CAHPS Clinician & Group Survey) or a single domain (eg, clinical quality, patient experience). Alternatively, a single overall roll-up measure, such as the star ratings provided through the CMS Five-Star Rating System, could encompass multiple domains of quality. Although this type of measure may maximize simplicity for consumers, constructing such a score requires strong and nontrivial assumptions about how best to reliably and accurately use a single indicator to communicate the multidimensional construct of healthcare quality.
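To make these levels of aggregation concrete, the sketch below rolls hypothetical component measures up to domain scores and then rolls the domain scores up into a single overall score. The measure names, values, and use of simple equal-weight averaging are illustrative assumptions, not the CMS star rating methodology.

```python
# Hypothetical two-level aggregation: component measures -> domain scores -> overall score.
# Measure names, values, and equal weighting are illustrative assumptions only;
# this is not the CMS Five-Star methodology.

components = {
    "patient_experience": {"communication": 84, "access": 71, "office_staff": 88},
    "clinical_process":   {"screening_rate": 76, "immunization_rate": 90},
    "outcomes":           {"readmission_score": 63, "complication_score": 70},
}

def domain_scores(components):
    """First stage: average the component measures within each domain."""
    return {domain: sum(vals.values()) / len(vals) for domain, vals in components.items()}

def overall_score(domains):
    """Second stage: average the domain scores into a single overall roll-up."""
    return sum(domains.values()) / len(domains)

domains = domain_scores(components)
print(domains)                 # domain-level roll-ups (single-domain reporting)
print(overall_score(domains))  # overall roll-up spanning multiple domains
```

Each additional stage of averaging discards more within-domain detail, which is the trade-off between the simplicity of a single overall score and the assumptions it requires.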
Although participants perceived that reporting roll-up scores to consumers helps simplify the complex array of healthcare quality information presented in reports, they were concerned about the potential for consumers to misunderstand the aspects of quality captured by roll-up measures and how providers perform on discrete aspects of quality. Whether these beliefs are well-founded is a question worth investigating. Some report sponsors conduct research on roll-up measure construction,17-19 but the body of published research on consumer use of, and responses to, roll-up measures is minimal.
Consumers are used to seeing roll-ups in other realms (eg, Consumer Reports product reviews), but reporting roll-up measures of healthcare quality is relatively new. It is unclear whether consumers find these measures easier to understand than the component measures from which they are constructed, how they interpret what a roll-up score signifies, how viewing roll-up measures affects their choices of plans and providers, or whether roll-up measures simplify available data to the point where differences in performance no longer seem meaningful. Finally, little is known about whether roll-ups might deter crucial forms of consumer learning about performance metrics and provider variation in quality.
Consumer interpretation and use of roll-up measures likely vary depending on the level of aggregation of scores. The benefits and drawbacks of other stakeholders’ use of roll-up measures (eg, providers’ potential use of roll-ups for quality improvement purposes) were not documented here, given our focus on report sponsors’ expectations about consumers’ use of roll-up scores; however, these perspectives are also important to understand. Because the reporting of roll-up measures is underway at federal agencies and among some report sponsors, it is critical to better understand how consumers perceive and respond to such measures.
CONCLUSIONS
We found that report sponsors’ practices for calculating and reporting these measures are diverse. Report sponsors and national alliances that represent stakeholder groups affected by quality reporting feel that roll-up measures simplify large amounts of complex information for consumers, but run the risk of misinterpretation. The results of the study elucidate the need for research focused on the construction and reporting of roll-up measures, as well as consumer interpretations of roll-up measures.

Author Affiliations: RAND Corporation (JLC, SCM, MLF, AMP, GM), Pittsburgh, PA; Severyn Group (LR), Ashburn, VA; University of Wisconsin (RG), Madison, WI; Yale University (MS), New Haven, CT; Shaller Consulting Group (DS), Stillwater, MN.
Source of Funding: This paper was supported by 2 cooperative agreements (2U18HS016980 and 1U18HS016978) from AHRQ to RAND Corporation and Yale University, respectively.
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JLC, RG, GM, SCM, AMP, LR, DS, MS); acquisition of data (JLC, RG, GM, SCM, LR); analysis and interpretation of data (JLC, MLF, RG, GM, SCM, LR, DS, MS); drafting of the manuscript (JLC, MLF, AMP, MS); critical revision of the manuscript for important intellectual content (JLC, MLF, RG, GM, SCM, AMP, LR, DS); statistical analysis (JLC, MLF).
Address Correspondence to: Jennifer L. Cerully, PhD, RAND Corporation, 4570 Fifth Ave, Ste 600, Pittsburgh, PA 15213. E-mail: jcerully@rand.org.

REFERENCES
1. Schlesinger M, Kanouse DE, Rybowski L, Martino SC, Shaller D. Consumer response to patient experience measures in complex information environments. Med Care. 2012;50(suppl):S56-S64. doi: 10.1097/MLR.0b013e31826c84e1.
2. Hibbard JH, Slovic P, Peters E, Finucane ML. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res. 2002;37(2):291-313.
3. Hospital Compare star ratings fact sheet. CMS website. https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2015-Fact-sheets-items/2015-04-16.html. Published April 16, 2015. Accessed October 20, 2015.
4. Zaslavsky AM, Shaul JA, Zaborski LB, Cioffi MJ, Cleary PD. Combining health plan performance indicators into simpler composite measures. Health Care Financ Rev. 2002;23(4):101-115.
5. Romano P, Hussey P, Ritley D. Selecting quality and resource use measures: a decision guide for community quality collaboratives. Agency for Healthcare Research and Quality website. https://www.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/perfmeasguide/index.html. Published May 2010. Accessed January 26, 2016.
6. Martsolf GR, Scanlon DP, Christianson JB. Multistakeholder perspectives on composite measures of ambulatory care quality: a qualitative descriptive study. Med Care Res Rev. 2013;70(4):434-448. doi: 10.1177/1077558713485134.
7. Boyce T, Dixon A, Fasolo B, Reutskaja E. Choosing a high quality hospital: the role of nudges, scorecard design and information. The King’s Fund website. https://www.kingsfund.org.uk/sites/files/kf/field/field_publication_file/Choosing-high-quality-hospital-role-report-Tammy-Boyce-Anna-Dixon-November2010.pdf. Published November 2010. Accessed January 26, 2016.
8. Guiding principles for public reporting of provider performance. Association of American Medical Colleges website. https://www.aamc.org/download/370236/data/guidingprinciplesforpublicreporting.pdf. Published March 2014. Accessed May 27, 2015.
9. Orlowski JM. Re: AAMC comments on the measure selection for Hospital Compare Star Ratings TEP Report. Association of American Medical Colleges website. https://www.aamc.org/download/425936/data/aamccommentletteroncmsstarratingstep.pdf. Published February 25, 2015. Accessed May 27, 2015.
10. Thompson A. Re: comments on the Hospital Star Ratings methodology report. American Hospital Association website. http://www.aha.org/advocacy-issues/letter/2015/150914-cl-StarRatings.pdf. Published September 14, 2015. Accessed October 20, 2015.
11. Pope C, Mays N. Qualitative research: reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ. 1995;311(6996):42-45.
12. Morgan DL, ed. Successful Focus Groups: Advancing the State of the Art. Newbury Park, CA: SAGE Publications; 1993.
13. O’Brien K. Improving survey questionnaires through focus groups. In: Morgan DL, ed. Successful Focus Groups: Advancing the State of the Art. Newbury Park, CA: SAGE Publications; 1993: 105-118.
14. Boyatzis RE. Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks, CA: SAGE Publications; 1998.
15. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101.
16. Five-Star Quality Rating System. CMS website. www.cms.gov/Medicare/Provider-Enrollment-and-Certification/CertificationandComplianc/FSQRS.html. Updated March 22, 2017. Accessed March 3, 2014.
17. Quality of patient care star ratings methodology. CMS website. www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HomeHealthQualityInits/Downloads/Quality-of-Patient-Care-Star-Ratings-Methodology-Report-updated-5-11-15.pdf. Published May 5, 2015. Accessed January 22, 2016.
18. Home health star ratings. CMS website. www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-instruments/HomeHealthQualityInits/HHQIHomeHealthStarRatings.html. Updated January 4, 2017. Accessed May 11, 2017.
19. Medicare 2016 Part C & D star rating technical notes: first plan preview: draft. CMS website. www.cms.gov/Medicare/Prescription-Drug-Coverage/PrescriptionDrugCovGenIn/Downloads/2016-Technical-Notes-Preview-1-v2015_08_05.pdf. Published August 5, 2015. Accessed January 22, 2016. 