
How Pooling Fragmented Healthcare Encounter Data Affects Hospital Profiling

Publication
Article
The American Journal of Managed Care, February 2015
Volume 21
Issue 2

Incomplete records of patient history can bias hospital profiling. Completing health records for Medicare-covered patients in VA hospitals resulted in modest changes in hospital performance.

ABSTRACT

Objectives

People receiving healthcare from multiple payers (eg, Medicare and the Veterans Health Administration [VA]) have fragmented health records. How the use of more complete data affects hospital profiling has not been examined.

Study Design

Retrospective cohort study.

Methods

We examined 30-day mortality following acute myocardial infarction at 104 VA hospitals for veterans 66 years and older from 2006 through 2010 who were also Medicare beneficiaries. Using VA-only data versus combined VA/Medicare data, we calculated 2 risk-standardized mortality rates (RSMRs): 1 based on observed mortality (O/E) and the other from CMS’ Hospital Compare program, based on model-predicted mortality (P/E). We also categorized hospital outlier status based on RSMR relative to overall VA mortality: average, better than average, and worse than average. We tested whether hospitals whose patients received more of their care through Medicare would look relatively better when those data were included in risk adjustment, rather than relying on VA data alone.

Results

Thirty-day mortality was 14.8%. Adding Medicare data caused both RSMR measures to increase significantly in about half the hospitals and to decrease in the other half. O/E RSMR increased in 53 hospitals, on average, by 2.2%, and decreased in 51 hospitals by –2.6%. P/E RSMR increased, on average, by 1.2% in 56 hospitals, and decreased in the others by –1.3%. Outlier designation changed for 4 hospitals using the O/E measure, but for no hospitals using the P/E measure.

Conclusions

VA hospitals vary in their patients’ use of Medicare-covered care and completeness of health records based on VA data alone. Using combined VA/Medicare data provides modestly different hospital profiles compared with those using VA-alone data.

Am J Manag Care. 2015;21(2):129-138

  • Evaluation of hospital performance in terms of risk-adjusted outcomes (eg, 30-day mortality rate) can be biased if patient health records are incomplete due to fragmentation of data across multiple healthcare systems.
  • For Medicare-covered veterans admitted to Veterans Health Administration (VA) hospitals for acute myocardial infarction, we found that using VA-only data substantially undercounts patient comorbidity compared with combined VA-Medicare data.
  • Using combined VA-Medicare data, instead of VA-only data, resulted in a modest change in relative hospital performance based on risk-adjusted 30-day mortality as the outcome.

Evaluating hospital performance based on patient outcomes, such as complications, readmissions, and mortality, has become a mainstay of ongoing healthcare quality improvement initiatives in the United States.1-3 Hospital performance measures have traditionally been used for monitoring outcomes of high-risk patients, and more recently, for public reporting and determining financial incentives.1,4,5 The CMS Hospital Compare program evaluates nearly all US acute care hospitals based on their patients’ risk of 30-day mortality and readmission following an admission for acute myocardial infarction (AMI), heart failure, or pneumonia.6

Ideally, such evaluations would rely on detailed clinical information on patient morbidity and severity at admission. However, due to issues with the completeness, accessibility, and comparability of such data, many profiling programs, including Hospital Compare, use administrative discharge data (“billing” or “encounter” records).7-9 A potential shortcoming, the consequences of which have not been previously explored, is that some individuals receive substantial care in multiple systems, making the data in any single system incomplete.10,11 Dual or triple eligibility for Medicare, Medicaid, and Veterans Health Administration (VA) healthcare; changes in Medicaid coverage; and switches into and among private managed care plans are common sources of “fragmentation” of patient data into payer-specific silos.12-15 This problem of incompleteness can also arise when hospital profiling is based only on records of patient care obtained at that hospital. The 2 most common current strategies are: 1) to use the data at hand, ignoring its incompleteness; and 2) to exclude patients with dual coverage. Neither is ideal.16,17

When patients use different systems for distinct medical problems, single data source assessments can miss important differences in patient risk that could bias performance measures; in this case, pooled data should add consequential new clinical information.10,11 If hospitals vary substantially in how much of their patients’ data is unobserved, single-source hospital profiles could disadvantage facilities whose patients’ data are particularly incomplete.

To explore this, we examined hospital profiling in the VA, the largest integrated healthcare provider system in the United States, which offers comprehensive healthcare to 7.9 million enrollees (2010).18 The VA’s highly integrated, comprehensive, and systematized healthcare information system has been used extensively to evaluate patient outcomes and hospital performance.17,19 However, VA data can still be missing important diagnoses, since 77% of veterans have dual or multiple coverage, involving Medicare (51%), Medicaid (7%), TRICARE (16%), or commercial insurance (29%).12-14,18,20-22 How might evaluations of VA hospital performance based on VA data alone differ from those based on more complete data? We compared VA-only-based profiles with combined VA/Medicare-based profiles for veterans 66 years and older receiving Medicare’s fee-for-service (FFS) benefit; this cohort receives the vast majority of its healthcare within these 2 systems.18 We selected 30-day mortality for patients admitted for AMI to evaluate VA hospitals, as it remains a key performance criterion for stakeholder groups nationally.6,23-25 We examined the effects under 2 widely used evaluation methods, including CMS’ Hospital Compare, to facilitate comparability and enhance relevance.

METHODS

Study data for 2006 to 2010 were obtained from the VA and Medicare administrative inpatient and outpatient files. We used VA patient treatment, outpatient clinical, and vital status files, and Medicare FFS beneficiary, inpatient/skilled nursing facility, carrier, and outpatient files. Our goal was to compare VA hospitals on their AMI admissions (found in the VA inpatient file), based on their risk-adjusted 30-day survival. Because “risk” is calculated from diagnoses recorded on claims incurred during the 365 days prior to admission, we obtained Medicare FFS data during this period (to add to VA data) to create “complete” risk profiles. We applied the protocol adopted by CMS Hospital Compare and endorsed by the National Quality Forum to identify the study cohort and calculate risk.6

Study Cohort

Using VA acute inpatient discharge data for fiscal years 2006 to 2010, we identified all admissions, henceforth termed “index admissions,” for patients 66 years or older with the principal diagnosis of AMI (all International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] 410.xx codes, except 410.x2).6 We only retained hospitalizations for veterans who were both continuously enrolled in Medicare FFS and never enrolled in a VA hospice program during the 12 months preceding the admission date. We additionally excluded admissions that: a) were transfers from another acute care hospital; b) resulted in discharges against medical advice or discharges alive on the same or next day following admission; or c) had missing data on key measures. For patients with multiple, otherwise eligible admissions in a year, we randomly selected 1 for our study sample so as to avoid survival bias.6 Following VA Hospital Compare methodology, we excluded admissions from hospitals with fewer than 25 otherwise eligible admissions during the 5-year study period.
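For concreteness, the diagnosis rule that defines cohort entry can be expressed as a small filter. This is an illustrative sketch only; it assumes 5-character, undotted ICD-9-CM codes as commonly stored in claims data.

```python
def is_ami_principal_dx(code: str) -> bool:
    """Sketch of the AMI cohort rule: ICD-9-CM 410.xx, excluding 410.x2.

    Assumes 5-character undotted codes (eg, "41071"); a fifth digit of
    "2" marks a subsequent episode of care, which the protocol excludes.
    """
    return code.startswith("410") and len(code) == 5 and code[4] != "2"

# Example: "41071" (initial episode) qualifies; "41072" (subsequent) does not.
```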

Patient Outcome and Risk Factors

We examined patient death within 30 days of the index admission date. Risk factors consisted of demographics and comorbid conditions identified using ICD-9-CM diagnosis codes in the index admission record and in records of inpatient discharges and outpatient visits during the 12 months preceding the index admission date.6 Following CMS Hospital Compare protocols, we excluded selected secondary diagnosis codes in the index record identified as potential complications of the admission itself.6 The diagnosis codes were then classified using DxCG condition categories.26,27 To examine the impact of combining patient data from Medicare records, we produced 2 sets of risk factors: 1 based on VA data alone and the other based on combined VA and Medicare data.

Medicare Utilization Measures

To quantify patient- and hospital-level differences in Medicare utilization, we identified all acute inpatient care and outpatient visits in the 12 months preceding the index admission, separately in VA and Medicare data. For each index admission, we defined 2 measures: 1) a categorical grouping of the relative volume of Medicare use: a) no Medicare-covered inpatient or outpatient care (“none”), b) at least some, but less than 25% of outpatient visits covered by Medicare (“moderate”), and c) at least 25% of outpatient visits covered by Medicare (“high”); and 2) the proportion (%) of a patient’s outpatient visits that had been covered by Medicare. Using the latter measure, we grouped hospitals into tertiles of increasing proportion of Medicare-covered outpatient use.
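A minimal sketch of the patient-level grouping just described follows (the function and its inputs are hypothetical; visit and stay counts are assumed to have been tallied from the 12-month lookback):

```python
def medicare_use_category(va_outpt_visits: int,
                          medicare_outpt_visits: int,
                          medicare_inpt_stays: int) -> str:
    """Classify a patient's relative Medicare use as none/moderate/high."""
    if medicare_outpt_visits == 0 and medicare_inpt_stays == 0:
        return "none"  # all care obtained in the VA
    total = va_outpt_visits + medicare_outpt_visits
    medicare_share = medicare_outpt_visits / total if total else 0.0
    # "high" if Medicare covered at least 25% of outpatient visits
    return "high" if medicare_share >= 0.25 else "moderate"
```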

Risk Adjustment Models

To obtain the weight associated with each risk factor and the adjusted discharge-level predicted probability of 30-day mortality, we estimated: a) a logistic regression model (GLM); and b) a hierarchical logistic regression model (HLM). In both, the log-odds of the dichotomous outcome of 30-day mortality was specified as a linear function of patient risk factors; the HLM also included an unobserved hospital effect.4,28,29 We reported odds ratios (ORs) associated with each risk factor. For both models, overall model fit was evaluated by the area under the receiver operating characteristic curve (C statistic), the percentage of outcome variation explained, and the observed outcome rate in the lowest and highest deciles of predicted probability of death.30 We calculated 2 predicted numbers of deaths for each hospital: the first, denoted E, is the expected number of deaths at that hospital assuming average VA care, adjusted for its patients’ risk (from the GLM and HLM models); the second, denoted P, is the expected number of deaths at that hospital accounting for both patient risk and the hospital effect (from the HLM model).30
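In symbols, the HLM takes a standard random-intercept form (a sketch using generic notation, not necessarily the authors’ exact parameterization):

$$\operatorname{logit} \Pr(Y_{ij} = 1) = \mu + \omega_j + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij}, \qquad \omega_j \sim N(0, \tau^2),$$

where $Y_{ij}$ indicates death within 30 days for discharge $i$ at hospital $j$, $\mathbf{x}_{ij}$ collects the patient risk factors, and $\omega_j$ is the hospital effect; the GLM omits $\omega_j$. Summing a hospital’s predicted probabilities with $\omega_j$ set to 0 yields $E$ (patient risk under average VA care), while retaining the estimated $\omega_j$ yields $P$.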

As a potential measure of the incremental patient risk captured in Medicare data, we estimated a separate hierarchical logistic regression model in which we added a patient discharge-level categorical indicator of Medicare use as a covariate.

Risk-Standardized Mortality Rates (RSMRs)

Hospital performance was expressed in terms of risk-standardized mortality rates (RSMRs).28,30 Two measures of RSMR are commonly used for hospital profiling: a) the traditional measure, based on the ratio of the observed number of hospital deaths (O) to E;28,30 and b) the Hospital Compare measure, based on the ratio of P to E.28,29 Each ratio is multiplied by the overall observed death rate across all hospitals to obtain the O/E RSMR and P/E RSMR estimates, respectively. Since the 2 RSMR measures behave differently in profiling, primarily because the P/E ratio “shrinks” estimates toward an overall average, we separately examined the impact of adding Medicare data when profiling hospitals using each measure.28
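Concretely, letting $\bar{r}$ denote the overall observed 30-day mortality rate (14.8% in this cohort), the 2 measures for hospital $j$ are:

$$\mathrm{RSMR}^{O/E}_j = \frac{O_j}{E_j}\,\bar{r}, \qquad \mathrm{RSMR}^{P/E}_j = \frac{P_j}{E_j}\,\bar{r}.$$

Because $P_j$ is derived from the hierarchical model, the $P/E$ ratio is shrunken toward 1, which pulls hospitals (especially small ones) toward the overall average.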

Analysis of Impact of Pooling Medicare Data

The direct impact of adding Medicare data is through finding risk factors not documented in VA records. Accordingly, we first calculated the prevalence of individual risk factors in VA data alone and in combined VA/Medicare data. Changes in risk factor prevalence require recalibrating the mortality model, leading to new risk weights. We refer to the changes in E associated with changing risk weights as an indirect effect. We estimated the direct, indirect, and overall changes in RSMR from adding Medicare data. The direct effect was measured by the change in RSMRs due to the change in risk prevalence, holding risk weights unchanged, while the indirect effect was measured by the change due to the change in risk weights, keeping risk prevalence unchanged.
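One way to formalize the decomposition (our notation, not the authors’): write $E_j(\pi, w)$ for hospital $j$’s expected deaths computed under risk-factor prevalence $\pi$ and risk weights $w$, with subscript 0 denoting VA-only data and 1 denoting combined data. Then

$$\underbrace{E_j(\pi_1, w_0) - E_j(\pi_0, w_0)}_{\text{direct effect}} + \underbrace{E_j(\pi_1, w_1) - E_j(\pi_1, w_0)}_{\text{indirect effect}} = E_j(\pi_1, w_1) - E_j(\pi_0, w_0),$$

so the 2 effects sum to the overall change in $E_j$, which in turn drives the overall change in RSMR.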

Importantly, when a risk prediction model is calibrated to the data, the sum of the expected probabilities equals the observed mortality; therefore, the sums of the expected probabilities of mortality from the models with and without Medicare data are equal. Thus, if adding Medicare data causes the expected probabilities in some hospitals to increase, they must decrease in others. Given this, we reported the overall change in RSMRs separately for hospitals that experienced an increase and for those that experienced a decrease in RSMR.
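In symbols, a logistic model with an intercept fitted by maximum likelihood is calibrated in aggregate:

$$\sum_i \hat{p}_i = \sum_i y_i,$$

so the VA-only and combined models predict the same total number of deaths, and adding Medicare data can only redistribute expected deaths across hospitals.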

Our core measures of the impact of adding Medicare data are the absolute and relative (%) overall RSMR change. We calculated 95% CIs for these, defined as the (2.5th, 97.5th) percentile range of the RSMR change, from 1000 bootstrap resamplings, each stratified by hospital.31,32 A second outcome of interest is change in outlier status, defined by the RSMR bootstrap CI lying either entirely above (“worse than average”) or entirely below (“better than average”) the VA national average mortality rate. We examined changes in outlier status after adding Medicare data. Further, to evaluate associations with the extent of Medicare utilization, we estimated RSMR and outlier-status changes for hospitals within tertiles of Medicare utilization.
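A minimal sketch of the hospital-stratified percentile bootstrap follows (in Python rather than the Stata the authors used; `df` and `rsmr_fn` are hypothetical placeholders):

```python
import numpy as np
import pandas as pd

def stratified_bootstrap_ci(df: pd.DataFrame, rsmr_fn,
                            n_boot: int = 1000, seed: int = 0):
    """95% percentile CI for an RSMR statistic, resampling within hospitals.

    `df` holds one row per index admission with a "hospital" column;
    `rsmr_fn` recomputes the statistic of interest (eg, a hospital's
    RSMR change) on a resampled cohort.
    """
    rng = np.random.default_rng(seed)
    groups = [g for _, g in df.groupby("hospital")]
    stats = []
    for _ in range(n_boot):
        # Resample discharges with replacement within each hospital,
        # preserving every hospital's sample size.
        resampled = pd.concat(
            g.sample(n=len(g), replace=True,
                     random_state=int(rng.integers(2**31)))
            for g in groups
        )
        stats.append(rsmr_fn(resampled))
    return tuple(np.percentile(stats, [2.5, 97.5]))
```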

All analyses were performed using Stata version 11.1 (StataCorp, College Station, Texas).33 This study was approved by the Boston VA Healthcare System Institutional Review Board.

Sensitivity Analysis

We used all discharges for both estimation and prediction in order to ensure the largest sample size for each hospital when obtaining RSMR CIs. However, predictions on the same data used to estimate the model may overestimate model predictive power. To examine the extent of this overestimation, we divided discharges into 2 halves by randomly allocating discharges within each hospital, and calculated model performance after using risk weights estimated from each half to obtain predictions on the other. We then compared the performance measures from this approach with our initial estimates.
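A sketch of this split-sample check (column names and the use of scikit-learn are our assumptions, not the authors’ implementation):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def split_half_cstat(df: pd.DataFrame, risk_cols: list,
                     outcome: str = "death30", seed: int = 0) -> float:
    """Cross-fitted C statistic: fit on one random half of each hospital's
    discharges, predict on the other half, and pool the predictions."""
    # Random within-hospital split: shuffle rows, then alternate assignment.
    half = (df.sample(frac=1, random_state=seed)
              .groupby("hospital").cumcount() % 2).reindex(df.index)
    preds = pd.Series(np.nan, index=df.index)
    for h in (0, 1):
        train, test = df[half == h], df[half == 1 - h]
        model = LogisticRegression(max_iter=1000)
        model.fit(train[risk_cols], train[outcome])
        preds.loc[test.index] = model.predict_proba(test[risk_cols])[:, 1]
    return roc_auc_score(df[outcome], preds)
```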

RESULTS

We identified 28,700 AMI discharges for patients aged 66 years or older from 138 VA hospitals in the inpatient VA administrative records from 2006 to 2010. The aforementioned exclusions resulted in a final study population of 11,373 index admissions at 104 hospitals. Of the discharges excluded, the main reasons were noncontinuous enrollment in Medicare FFS during the 12 months prior to index admission (45%), live discharge on the same/next day (11%), and transfer from another acute care hospital (10%) (see eAppendix Table 1, available at www.ajmc.com). The mean observed 30-day mortality rate during the years 2005 to 2009 was 14.8% (Table 1).

Medicare Utilization

Although all patients were enrolled in Medicare, 51% received no Medicare-covered inpatient or outpatient care (ie, all their care was obtained in the VA); another 24% and 25% were in the “moderate” and “high” user categories of Medicare-covered outpatient visits, respectively. The Medicare-covered share of all outpatient visits varied considerably across hospitals, from 8% to 47%. In addition, the proportion of patients in the high Medicare-user category averaged 26% (standard deviation = 8%) (Table 1, section B).

Impact of Pooling Medicare Data on Direct and Indirect Effects

Table 2 details the 2 ways in which adding Medicare data can affect the risk associated with 30-day mortality: first, the direct effect of higher prevalence of risk factors, and second, the indirect effect on the risk weights associated with the risk factors. The direct effect increased the prevalence of all 25 risk factors, with 13 increasing by over 20%. The impact of the indirect effect of change in risk weights was mixed: 5 coefficients changed no more than 3%, 12 coefficients dropped by at least 4%, and 3 increased by at least 4%. Decreases in risk weights occurred more often than increases. This is not surprising: since, as noted, the sum of the expected deaths equals the observed mortality, if disease prevalence increases, weights must, on average, decrease. Adding Medicare data did not improve the overall performance of the risk adjustment model, as measured by the C statistic, or the ability to distinguish differences in observed mortality across deciles of predicted risk (Table 2). Re-estimated model fit measures using the split-sample method described earlier revealed no significant decline in the C statistic from 0.74 (full sample) to 0.72 (split sample; see eAppendix Table 2).

Medicare Utilization and Risk of 30-Day Mortality

We included the extent of patient Medicare utilization as a covariate in a risk prediction model with other risk factors (Table 3). When we estimated this model with risk prevalence identified in VA-only data, there was a higher risk of 30-day mortality for high-Medicare-use patients (OR, 1.21; 95% CI, 1.07-1.38). When we re-estimated this model using risk factors whose prevalence was identified in either VA or Medicare data, the risk associated with high Medicare use was no longer statistically significant (OR, 0.98; 95% CI, 0.85-1.11).

RSMR Change From Adding Medicare Data

The impact of adding Medicare data was qualitatively similar for the 2 RSMR measures (Table 4). As expected, with about half of hospitals experiencing an increase and the other half a decrease, there was virtually no overall change in O/E and P/E RSMRs. In the 53 hospitals for which the O/E RSMR increased, the average change in RSMR was 2.2% (95% CI, 1.6%-2.8%). Among the 51 hospitals with a decrease in O/E RSMR, the average change was –2.6% (95% CI, –3.3% to –1.9%); corresponding changes in P/E RSMR were smaller. The average change was 1.2% (95% CI, 0.0%-2.7%) and –1.3% (95% CI, –2.9% to –0.3%) among the 56 and 48 hospitals where P/E RSMRs increased and decreased, respectively.

After adding Medicare data, 4 hospitals changed outlier status based on the O/E RSMR and no hospitals changed outlier status based on the P/E RSMR (Table 4). Examination of the 4 hospitals with changed outlier status indicated that adding Medicare data shifted the RSMR 95% CI just enough to narrowly exclude the national VA mortality rate of 14.8%, thereby reclassifying each hospital from average to worse than average (eAppendix Table 1 and eAppendix Figure 1).

Table 5 reports the RSMR change for hospitals grouped by tertiles of Medicare-share of total outpatient care. For both RSMR measures, we found no significant differences across tertiles, separately among hospitals that experienced an RSMR increase or decrease.

Decomposition of the overall RSMR change into direct and indirect effects indicated similar trends for the O/E and P/E RSMRs. As expected, the direct effect decreased the RSMR, while the indirect effect increased it (eAppendix Figures 2a and 2b).

DISCUSSION

Veterans 66 years and older seeking care in VA hospitals differed considerably in their use of non-VA providers, leading to differential amounts of data fragmentation. To inform methods for improved hospital profiling, we compared profiles of VA hospitals based on 30-day mortality for AMI admissions using VA-only versus combined VA/Medicare data. Results from 2 commonly used profiling measures, the traditional (O/E) and CMS Hospital Compare (P/E) measures of RSMRs, indicated modest RSMR change. Roughly equal numbers of hospitals experienced an increase or a decrease in RSMR, with average increases of 2.2% (O/E) and 1.2% (P/E) and average decreases of –2.6% and –1.3%, respectively. The magnitude of RSMR change was not associated with hospital differences in the Medicare share of outpatient care. In terms of hospital outlier status, based on the O/E RSMR measure, only 4 of the 104 hospitals were reclassified, all resulting from small RSMR changes for hospitals at the threshold borderline; none were reclassified based on the P/E RSMR. Thus, in our study of profiling using O/E or P/E RSMR measures, evaluating VA hospitals without using their patients’ Medicare data to more fully specify their “risk” led to only modest differences in performance assessments.

To better understand how adding Medicare data affects profiling, we decomposed the final RSMR change into 2 components: first, the direct effect of the higher risk-factor prevalence from comorbidities identified only in Medicare data; and second, the indirect effect on risk weights from the re-estimated risk adjustment model using the newly identified risk factors. The traditional and Hospital Compare RSMR measures were similar in their sensitivity to both the direct and indirect effects, with the former lowering and the latter increasing RSMRs significantly across all hospitals.

Although profiling of VA hospitals based on patient outcomes among AMI patients is common, the effect of data fragmentation on these assessments has not been previously examined.6,17,34 For instance, 2 recent studies estimated RSMR for elderly veterans using similar risk adjustment models, 1 using only VA data,34 and the other using combined VA/Medicare data.6 Findings from our study favor using VA/Medicare data, reinforcing a similar conclusion from other studies that found significant differences in the risk prevalence identified in the 2 data sources.10,11,35

We also examined whether elderly veterans with relatively higher use of Medicare-covered care had higher or lower risk than that captured in VA administrative data. Consistent with previous studies,10,11 we found that veterans with higher use of Medicare-covered care were sicker. In quantifying this additional comorbidity, we estimated that high-Medicare-use veterans experienced 21% higher mortality (95% CI, 7%-38%) compared with non-Medicare-using veterans after adjusting for risk identified in VA data. This difference disappeared once Medicare data were included. Therefore, hospital profiling without using Medicare data would result in this unobserved patient risk being wrongly characterized as higher mortality arising from relatively worse hospital quality.

We found that using combined VA/non-VA data adds substantial morbidity information, suggesting more favorable profiles for hospitals whose patients have higher Medicare use than those of other VA hospitals, although the observed size of this difference was modest and not statistically significant. Broadly, assessments based on complete data seem fairer, yet the time required to prepare combined files is a concern. Policy makers have to weigh the trade-off between completeness and timeliness.

With respect to the generalizability of the study findings, we noted 2 issues. First, data fragmentation can arise in different contexts. As in this study, dual insurance coverage can lead to fragmentation of patient data among veterans, active military service members (TRICARE and commercial coverage), and low-income elderly individuals with dual Medicare and Medicaid coverage. Since receipt of care from multiple providers is common within and outside the VA, data fragmentation can also arise when all patient data are not consolidated within a given healthcare system or between health systems. Registry data relating to a disease or treatment are limited to patient data at participating hospitals. Additionally, individual hospitals or hospital networks evaluate performance metrics based only on in-house patient data.36,37

Second, gains from combining fragmented data could vary depending on the context. In this study, there was significant change in risk prevalence, but not in risk-adjusted mortality. In practice, even a modest change in a metric can lead to a change in performance classification, with associated potential implications for incentive payments or penalties.38 Gains for quality measurement from combining data can also be high when fragmented data capture important health service use, such as mental health services in Medicaid data or pharmacy use in VA data.

Limitations

First, we treated the healthcare system where care is received, VA or the private sector, as exogenous to patient mortality and hospital performance assessment.39-41 For example, our combined model gives equal weight to diagnoses found in either the VA or Medicare data, although the fact that a veteran sought care outside the VA might itself indicate greater severity.11 Second, because we only had access to Medicare data in addition to VA data, we limited our study to the elderly (66 years or older); in practice, VA hospital profiling should be based on outcomes for all patients treated. Our intent was to quantify the impact of data fragmentation among patients for whom we have complete data. Further, our finding of modest impact suggests that had we included all patients in profiling hospitals, the impact of adding Medicare data would also have been modest.

CONCLUSIONS

We found that Medicare data on care received by elderly veterans outside the VA adds information on patient risk and that VA hospital profiling is modestly affected by whether administrative data from private sector care are included. These findings have salience for other dual-care settings, including Medicare and Medicaid, Medicare FFS and managed care, and Department of Defense TRICARE and the private sector.

Acknowledgments

This research has been funded by a VA HSR&D grant (IIR 08-351, A. Hanchate, PI). Dr Hanchate had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs, Boston University, or the National Institutes of Health.

Author Affiliations: Center for Healthcare Organization and Implementation Research, VA Boston Healthcare System (ADH, HA, AKR, KLS, MS, PS), Boston, MA; Section of General Internal Medicine (ADH, AB), Department of Surgery (AKR), School of Medicine, Department of Health Policy and Management, School of Public Health (AB), and Department of Operations and Technology Management, School of Management (MS), Boston University, Boston, MA; Department of Quantitative Health Sciences, University of Massachusetts Medical School (ASA), Worcester, MA; Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center (AB), Bedford, MA; Emory University School of Medicine (ASF), Atlanta, GA; South Texas Veterans Health Care System (MJVP), San Antonio, TX; Department of Epidemiology and Biostatistics, University of Texas Health Science Center (MJVP), San Antonio, TX.

Source of Funding: Veterans Health Administration Health Services Research & Development. The research reported in this publication was supported by the National Center for Advancing Translational Sciences of the NIH under award number UL1TR000161 (Arlene Ash).

Author Disclosures: The authors report no conflicts of interest.

Authorship Information: Concept and design (ASA, HA, AB, AH, MJP, AR, MS); acquisition of data (HA, ASF, AH, AR, KLS, PS); analysis and interpretation of data (ASA, HA, AB, ASF, AH, MJP, KLS, MS); drafting of the manuscript (ASA, HA, AB, AH, KLS, MS, PS); critical revision of the manuscript for important intellectual content (AB, ASF, AH, MJP, AR, MS); statistical analysis (HA, ASF, AH, MS); obtaining funding (AH, PS); administrative, technical, or logistic support (AH, PS); and supervision (AH, PS).

Address correspondence to: Amresh D. Hanchate, PhD, Section of General Internal Medicine, Boston University School of Medicine, 801 Massachusetts Ave, Boston, MA 02118. E-mail: hanchate@bu.edu.

REFERENCES

1. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111-123.

2. Department of Health and Human Services. National strategy for quality improvement in health care. Washington, DC; 2011.

3. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood). 2007;26(1):75-85.

4. Krumholz HM, Brindis RG, Brush JE, et al; American Heart Association; Quality of Care and Outcomes Research Interdisciplinary Writing Group; Council on Epidemiology and Prevention; Stroke Council; American College of Cardiology Foundation. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462.

5. Boutwell A. Time to get serious about hospital readmissions. Health Affairs website. www.healthaffairs.org/blog/2012/10/10/time-to-get-serious-about-hospital-readmissions/. Published 2012. Accessed January 30, 2015.

6. Bernheim SM, Wang Y, Grady JN, et al. 2011 Measures Maintenance Technical Report: Acute Myocardial Infarction, Heart Failure, and Pneumonia 30-Day Risk-Standardized Mortality Measures. Published 2011.

7. Shahian DM, Silverstein T, Lovett AF, Wolf RE, Normand SL. Comparison of clinical and administrative data sources for hospital coronary artery bypass graft surgery report cards. Circulation. 2007;115(12):1518-1527.

8. Fry DE, Pine MB, Jordan HS, Hoaglin DC, Jones B, Meimban R. The hazards of using administrative data to measure surgical quality. Am Surg. 2006;72(11):1031-1037; discussion 1061-1069,1133-1148.

9. Pizer SD, Gardner JA. Is fragmented financing bad for your health? Inquiry. 2011;48(2):109-122.

10. Byrne MM, Kuebeler M, Pietz K, Petersen LA. Effect of using information from only one system for dually eligible health care users. Med Care. 2006;44(8):768-773.

11. Rosen AK, Gardner J, Montez M, Loveland S, Hendricks A. Dual-system use: are there implications for risk adjustment and quality assessment? Am J Med Qual. 2005;20(4):182-194.

12. Weeks WB, Bott DM, Bazos DA, et al. Veterans Health Administration patients’ use of the private sector for coronary revascularization in New York: opportunities to improve outcomes by directing care to high-performance hospitals. Med Care. 2006;44(6):519-526.

13. Wright SM, Petersen LA, Lamkin RP, Daley J. Increasing use of Medicare services by veterans with acute myocardial infarction. Med Care. 1999;37(6):529-537.

14. Liu CF, Manning WG, Burgess JF Jr, et al. Reliance on Veterans Affairs outpatient care by Medicare-eligible veterans. Med Care. 2011;49(10):911-917.

15. Adler-Milstein J, Jha AK. Healthcare’s “big data” challenge. Am J Manag Care. 2013;19(7):537-538.

16. Love D, Custer W, Miller P. All-Payer Claims Databases: State Initiatives to Improve Health Care Transparency. The Commonwealth Fund; 2010.

17. Render ML, Almenoff PL, Christianson A, et al. A hybrid Centers for Medicaid and Medicare service mortality model in 3 diagnoses. Med Care. 2012;50(6):520-526.

18. Office of the Assistant Deputy Under Secretary for Health for Policy and Planning. 2011 Survey of Veteran Enrollees’ Health and Reliance Upon VA: Department of Veterans Affairs; 2011.

19. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. N Engl J Med. 2003;348(22):2218-2227.

20. Fleming C, Fisher ES, Chang CH, Bubolz TA, Malenka DJ. Studying outcomes and hospital utilization in the elderly: the advantages of a merged data base for Medicare and Veterans Affairs hospitals. Med Care. 1992;30(5):377-391.

21. Gardner JA, Hendricks AM, Wolfsfeld L. Technical documentation: Medicaid enrollment, utilization and outcomes for VA patients, v1.0; 2008.

22. Trivedi AN, Grebla RC, Jiang L, Yoon J, Mor V, Kizer KW. Duplicate federal payments for dual enrollees in Medicare Advantage plans and the Veterans Affairs health care system. JAMA. 2012;308(1):67-72.

23. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Do “America’s best hospitals” perform better for acute myocardial infarction? N Engl J Med. 1999;340(4):286-292.

24. National Quality Forum. National Voluntary Consensus Standards for Patient Outcomes: A Consensus Report. Washington, DC: NQF; 2011.

25. Agency for Healthcare Research and Quality. Guide to Inpatient Quality Indicators: Quality of Care in Hospitals - Volume, Mortality and Utilization. Rockville, MD: AHRQ; 2011.

26. DxCG Inc. DxCG RiskSmart: Clinical Classifications Guide. Boston, MA: DxCG; 2011.

27. Yale New Haven Health Services Corporation/Center for Outcomes Research and Evaluation. 2010 Condition Category—ICD-9-CM Crosswalks: Acute Myocardial Infarction. 2013. QualityNet website. https://www.qualitynet.org/dcs/contentserver?c=page&pagename=QnetPublic%2FPage%2FQnetTier4&cid=1182785083979. Accessed September 22, 2010.

28. The COPSS-CMS White Paper Committee. Statistical issues in assessing hospital performance: commissioned by the Committee of Presidents of Statistical Societies. Published November 28, 2011. Revised January 27, 2012.

29. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.

30. Ash AS, Shwartz M, Pekoz EA, Hanchate AD. Comparing outcomes across providers. In: Iezzoni L, ed. Risk Adjustment for Measuring Health Care Outcomes. 3rd ed. Chicago, IL: Health Administration Press; 2012.

31. Efron B, Tibshirani R. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science. 1986;1(1):54-75.

32. Poi BP. From the help desk: some bootstrapping techniques. The Stata Journal. 2004;4(3):312-328.

33. StataCorp. Stata Statistical Software: Release 12. College Station, TX: StataCorp LP; 2012.

34. Ross JS, Maynard C, Krumholz HM, et al. Use of administrative claims models to assess 30-day mortality among Veterans Health Administration hospitals. Med Care. 2010;48(7):652-658.

35. Burgess JF Jr, Maciejewski ML, Bryson CL, et al. Importance of health system context for evaluating utilization patterns across systems. Health Econ. 2011;20(2):239-251.

36. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232-239.

37. Nasir K, Lin Z, Bueno H, et al. Is same-hospital readmission rate a good surrogate for all-hospital readmission rate? Med Care. 2010;48(5):477-481.

38. Joynt KE, Jha AK. A path forward on Medicare readmissions. N Engl J Med. 2013;368(13):1175-1177.

39. Shen Y, Hendricks A, Wang F, Gardner J, Kazis LE. The impact of private insurance coverage on veterans’ use of VA care: insurance and selection effects. Health Serv Res. 2008;43(1, pt 1):267-286.

40. Pizer SD, Prentice JC. Time is money: outpatient waiting times and health insurance choices of elderly veterans in the United States. J Health Econ. 2011;30(4):626-636.

41. Maciejewski ML, Liu CF, Kavee AL, Olsen MK. How price responsive is the demand for specialty care? Health Econ. 2012;21(8):902-912.
