
How Patients Distinguish Between Clinical and Administrative Predictive Models in Health Care

Publication: Article
The American Journal of Managed Care, January 2024, Volume 30, Issue 1, Pages 31-37

Patients are less comfortable with predictive models used for health care administration compared with those used in clinical practice, signaling misalignment between patient comfort, policy, and practice.

ABSTRACT

Objectives: To understand patient perceptions of specific applications of predictive models in health care.

Study Design: Original, cross-sectional national survey.

Methods: We conducted a national online survey of US adults with the National Opinion Research Center from November to December 2021. Measures of internal consistency were used to identify how patients differentiate between clinical and administrative predictive models. Multivariable logistic regressions were used to identify relationships between comfort with various types of predictive models and patient demographics, perceptions of privacy protections, and experiences in the health care system.

Results: A total of 1541 respondents completed the survey. After excluding observations with missing data for the variables of interest, the final analytic sample was 1488. We found that patients differentiate between clinical and administrative predictive models. Comfort with prediction of bill payment and missed appointments was especially low (21.6% and 36.6%, respectively). Comfort was higher with clinical predictive models, such as predicting stroke in an emergency (55.8%). Experiences of discrimination were significant negative predictors of comfort with administrative predictive models. Health system transparency around privacy policies was a significant positive predictor of comfort with both clinical and administrative predictive models.

Conclusions: Patients are more comfortable with clinical applications of predictive models than administrative ones. Privacy protections and transparency about how health care systems protect patient data may facilitate patient comfort with these technologies. However, larger inequities and negative experiences in health care remain important for how patients perceive administrative applications of prediction.

Am J Manag Care. 2024;30(1):31-37. https://doi.org/10.37765/ajmc.2024.89484

_____

Takeaway Points

Predictive models are proliferating in health care with the rise of artificial intelligence and other advanced data analytic approaches. Efforts to ensure patient safety in this context are focused on clinical applications, such as diagnosis and treatment. However, this focus is misaligned with patient comfort, which is lowest for the administrative applications that currently receive no oversight.

  • Patients are more comfortable with clinical model applications than administrative ones.
  • Transparent privacy policies positively predict, and experiences of discrimination negatively predict, patient comfort with predictive technologies.
  • Oversight of nonclinical prediction in the US health care system is necessary.

_____

Predictive models are used in the health care system for both clinical decision-making and analysis of administrative or operational data.1,2 These models use large amounts of historical patient data to make predictions or produce risk scores related to a variety of diagnoses, outcomes, and behaviors.3 For example, predictive models are applied to anticipate the onset of sepsis, cardiac events, and kidney disease progression.4-6 Empirical analysis of predictive models is typically focused on these types of clinical applications.7-10 However, predictive technologies are also applied to administrative or operational functions in health care to predict health service utilization, staffing needs, and patients’ missed appointments,11,12 often with the goal of lowering costs and targeting resource utilization.13

The current regulatory framework for software in health care focuses on risks to health and safety. Predictive models that fall on the clinical and critical (ie, high-risk) end of the spectrum are most likely to be regulated by the FDA.14 A model used to diagnose a life-threatening condition or prescribe a high-risk treatment, for example, would receive the most oversight.14 On the other end of the spectrum, models that support administrative functions or management are generally unregulated. The 21st Century Cures Act, for example, explicitly excludes models used for health care management from regulation as medical devices unless the FDA sees a likelihood of serious health consequences related to their use.15 Under the FDA’s current definitions of risk, then, administrative models receive no oversight.13

However, a growing body of literature indicates that administrative models present risks to patients by building barriers to care or inappropriately restricting access to beneficial programs and resources.13,16 These barriers are unaddressed by current policy. Given the generally lax regulatory approach at the federal level,17 the exclusion of administrative predictive models from oversight may have additional negative implications for patient safety and quality of care that health systems will need to account for as they continue to use predictive tools.

In the absence of comprehensive regulation, industry self-governance has been proposed. In this proposed approach, the health care system would develop best practices and self-monitor for patient safety, quality, and accuracy.18 Multiple efforts are underway to establish fairness guidelines and measures of bias to encourage the industry to adopt standards.19,20 However, there is no consensus on whether or how administrative applications would be managed. Despite routine calls for the design of patient-centered systems, there is also a lack of robust evidence on patient perspectives, trust in these tools, and concerns about their effects.21

Previous examples of data use in health care demonstrate that policy that is unresponsive to privacy and trust concerns has negative consequences for patient trust and engagement.22 For example, the revelation that Ascension Health was sharing patient data with Google led to public outcry and mistrust.22 Similarly, the realization that patient data were being commercialized by researchers at Memorial Sloan Kettering Cancer Center through an agreement with Paige.AI, Inc led to legal review and systemwide revisions of conflict-of-interest policies after significant public attention.23 Although these commercial partnerships and uses of patient data were not illegal, they caused concern about patient trust and privacy. Similarly, applications of prediction that violate patient comfort or expectations may pose a threat to patient trust or engagement in data-driven health care.

In addition to public revelations of commercial data sharing, patient experiences in the health care system may also shape the quality of the data-driven care they receive. For example, prior work has identified that patients who have experienced discrimination while seeking care are more likely to withhold information from providers than patients who have not experienced discrimination.24 When patient data are systematically missing due to violations of trust or discrimination, the data available on those patients are of poorer quality and can produce information technologies that do not perform as well for these patients. This dynamic of worse data and resulting lower performance of the tools built with those data has been termed exclusion cycles.25 In these exclusion cycles, patient trust is violated and patients may withhold their data. The data quality in their medical record is then comparatively poorer, resulting in lower-quality predictive outputs. Because predictive models rely on high volumes of patient data, these exclusion cycles can have increasingly negative impacts on patients who are already excluded, discriminated against, and marginalized.

Objective

This study analyzes patient comfort with prediction to understand whether the current policy distinctions between clinical and administrative models align with patient perspectives. It also analyzes predictors of comfort with these models to empirically identify systematic variation. The research questions are:

  1. Does public comfort differ between clinical and administrative predictive models?
  2. What are the predictors of comfort with clinical and administrative predictive models?

METHODS

This study uses data from a cross-sectional national survey of English-speaking US adults. The survey was fielded with the National Opinion Research Center (NORC) AmeriSpeak Panel from mid-November to December 2021. A total of 1541 participants completed the survey; the sample included oversamples of African American respondents, Hispanic respondents, and respondents earning less than 200% of the federal poverty level to ensure adequate representation of groups typically underrepresented in research. Following cognitive interviews (n = 17), the survey was pretested using MTurk (n = 550) and pilot tested with a sample of AmeriSpeak panel participants (n = 150). This process ensured that the definitions and survey questions were understandable to the public. After excluding observations with missing data for the variables of interest, the final analytic sample size was 1488.

Participants were given straightforward definitions of key terms such as health care provider, available as hover-over text each time the term appeared in the survey (eAppendix [available at ajmc.com]). Participants also viewed a short explanatory video describing how health information is used in the health care system. As described elsewhere, the video has been reviewed by experts in the field, tested, and used in multiple previous surveys.24,26,27 Predictive models were defined and described in a short paragraph (Flesch-Kincaid score, 8.7) immediately preceding the survey questions about predictive models (eAppendix). Accessibility and clarity of the content were confirmed through the cognitive interview and piloting process.
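As a side note, a Flesch-Kincaid score like the one reported above can be computed with the third-party textstat package. This is illustrative only; the article does not describe its tooling, and the paragraph in the sketch is placeholder wording, not the survey's actual text.

```python
# Illustrative only: a Flesch-Kincaid grade can be computed with the
# third-party textstat package. The text below is placeholder wording,
# not the survey's actual paragraph.
import textstat

description = (
    "Predictive models use information from many patients' records to "
    "estimate the chance of an outcome, such as getting a disease or "
    "missing an appointment."
)
print(textstat.flesch_kincaid_grade(description))  # grade-level readability
```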

This study was determined to be exempt by the University of Michigan Institutional Review Board. Participants were compensated for their time according to standard NORC remuneration policies.

Measures

The outcome measure for this analysis is comfort with specific applications of predictive models, spanning regulated and unregulated categories of predictive models (Table 1). Respondents indicated their comfort level on a 4-point scale (1, not comfortable, to 4, very comfortable) with each of the 6 predictive model applications listed on the survey. Model applications were displayed in random order.

The independent variables of interest included patient experiences of discrimination in the health care system. This measure, used in multiple previous surveys, is adapted from the Williams major and everyday discrimination measures.28,29 To account for perceptions of the health system, a measure of perceived clarity of privacy policies was also included as an independent variable of interest. For this measure, participants responded on a 4-point scale (1, not true, to 4, very true) to indicate their agreement with the statement, “The privacy policies of my health care system are clear to me.”

Covariates include self-reported age in years, a binary measure of sex (male, female), and race/ethnicity (multiracial, Hispanic, non-Hispanic Asian, Black, White, and other). Respondents reported their annual household income and personal education level (no high school diploma, high school or equivalent, some college, bachelor’s degree or more). Health-related independent variables include health insurance status (insured/uninsured), health care utilization in the past 12 months, self-reported health status (poor to very good), previous cancer diagnosis, and experiences of discrimination in the health care system (yes/no).

Analysis

Correlations between comfort with all 6 predictive model applications were calculated. Measures of internal consistency (Cronbach α) were also calculated to identify the degree to which participant responses indicated that clinical and administrative categories of predictive models were valid. Paired t tests were used to analyze whether there were differences in mean comfort between the predictive model categories. Comfort with the 2 categories of predictive models (clinical and administrative) was then used to create composite measures for additional analysis.
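For readers who want to see the mechanics, the sketch below computes Cronbach α and a paired t test on simulated 4-point ratings. It is illustrative only, not the authors' code: the responses are random, and the item groupings simply mirror the model categories described above.

```python
# Illustrative sketch only (not the authors' code): Cronbach alpha and a
# paired t test on simulated 4-point comfort ratings. Item groupings mirror
# the clinical and administrative categories described in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1488  # analytic sample size reported in the article

# Simulated ratings on the survey's 4-point scale (1 = not comfortable,
# 4 = very comfortable); real responses would come from the survey data.
clinical = rng.integers(1, 5, size=(n, 4))        # colon cancer, kidney, sepsis, stroke
administrative = rng.integers(1, 5, size=(n, 2))  # payment, missed appointments

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of item total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"alpha, clinical items:       {cronbach_alpha(clinical):.2f}")
print(f"alpha, administrative items: {cronbach_alpha(administrative):.2f}")

# Within-respondent comparison of mean comfort across the two categories.
t_stat, p_val = stats.ttest_rel(clinical.mean(axis=1), administrative.mean(axis=1))
print(f"paired t = {t_stat:.2f}, p = {p_val:.3f}")
```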

Multivariable logistic regression models were run on the composite measures of comfort with clinical and administrative models (see eAppendix for details). Independent variables of interest were experiences of discrimination and clarity of system privacy policies. Covariates included self-reported demographics (sex, race/ethnicity, age), education, income, health care utilization, self-reported health status, insurance, and previous cancer diagnosis (Table 2).
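A minimal sketch of this regression setup follows, assuming a statsmodels formula interface and simulated data. The variable names are hypothetical stand-ins for the survey measures, and the covariate list is abbreviated relative to Table 2.

```python
# A minimal sketch of the multivariable logistic regression described above,
# using statsmodels on simulated data. Variable names are hypothetical
# stand-ins for the survey measures; the covariate list is abbreviated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1488
df = pd.DataFrame({
    "comfort_admin": rng.integers(0, 2, n),   # binary composite outcome
    "discrimination": rng.integers(0, 2, n),  # experienced discrimination (yes/no)
    "privacy_clear": rng.integers(1, 5, n),   # clarity of privacy policies (1-4)
    "female": rng.integers(0, 2, n),
    "age_group": rng.choice(["18-29", "30-44", "45-59", "60+"], n),
    "insured": rng.integers(0, 2, n),
})

model = smf.logit(
    "comfort_admin ~ discrimination + privacy_clear + female"
    " + C(age_group) + insured",
    data=df,
).fit(disp=0)

# Exponentiated coefficients give odds ratios of the kind reported in Table 3.
print(pd.DataFrame({"OR": np.exp(model.params), "p": model.pvalues}).round(3))
```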

RESULTS

The sample was 49.7% female. Most respondents had either some college education (46.4%) or a bachelor’s degree or more (33.8%). Half of respondents reported at least $50,000 in annual household income (50.1%), and most reported having health insurance (93.2%). Insured adults were slightly overrepresented in this sample (national estimate, 91.7%), as were those earning less than $75,000 per year. Generally, the sample reflects the demographics of the US adult population, with the additional exceptions of marital status and home ownership: participants in this survey were less likely to be married and more likely to rent their homes than the national population. For more details on how the sample reflects and deviates from the general US adult population according to the US Census Bureau Current Population Survey results, see the eAppendix.

Research Question 1: Does Public Comfort Differ Between Clinical and Administrative Predictive Models?

As depicted in the Figure, discomfort with prediction of bill payment and missed appointments was very high (78.4% and 63.4%, respectively). This contrasts with participants’ comfort with clinical predictive models. Comfort was highest with models used to diagnose stroke in an emergency (55.8%). Comfort with other clinical predictions was approximately 50%, with 50.6% of participants reporting comfort with prediction of sepsis and 52.3% reporting comfort with prediction of colon cancer. Comfort with prediction for kidney transplant eligibility was 46.5%.

Correlations between comfort with the predictive model types were calculated. Correlations were high between the clinical model types (r = 0.59-0.73). Administrative model types were also correlated (r = 0.55). For full correlation matrix results, see the eAppendix. Cronbach α was calculated to determine whether internal consistency of these measures was high enough to construct composite measures. The Cronbach α was 0.89 for clinical model types (colon cancer, kidney transplant, sepsis, and stroke) and 0.71 for the administrative model types (payment and missed appointments). Both scores are above the acceptability threshold of 0.65 and indicate that composite measures are internally consistent.

Mean comfort with clinical model types was 2.55 (range, 1-4). For administrative models, mean comfort was 1.96. Results of a paired t test indicated that these means are significantly different from each other (P < .001), demonstrating that public comfort is different for clinical models compared with administrative ones.

Research Question 2: What Are the Predictors of Comfort With Clinical and Administrative Predictive Models?

To identify individual-level predictors of comfort, bivariable and multivariable logistic regressions were conducted with (1) binary measures of comfort with each individual model type and (2) binary composite measures of comfort with administrative and clinical models (see eAppendix for full bivariable results). Dichotomized measures of comfort were created in which responses of not comfortable and somewhat comfortable with a model were coded as 0 and responses of fairly and very comfortable were coded as 1. Composite measures were then calculated from the binary measures within each model category (clinical and administrative). These composite measures were top coded so that participants who indicated comfort with all predictive models composing the category were coded as 1; all other participants were coded as 0. In this way, only participants who indicated comfort with all 4 clinical models (colon cancer, kidney transplant, sepsis, stroke) were considered comfortable on the composite measure. The same approach was used for the administrative models (payment and missed appointments), as shown in the sketch below.
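The dichotomization and top coding are simple to express in code. This toy example uses the administrative category, with illustrative column names (the article does not give variable names).

```python
# Toy illustration of the dichotomization and top coding described above,
# using the administrative category; column names are illustrative, as the
# article does not give variable names.
import pandas as pd

# 4-point ratings: not (1) / somewhat (2) -> 0; fairly (3) / very (4) -> 1
ratings = pd.DataFrame({
    "payment":     [1, 4, 3, 2],
    "missed_appt": [2, 3, 4, 1],
})
binary = (ratings >= 3).astype(int)

# Top-coded composite: 1 only if comfortable with every model in the category.
binary["admin_composite"] = binary.all(axis=1).astype(int)
print(binary)
```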

Comfort with clinical and administrative models was regressed on the independent variables of interest and all covariates (Table 3). These analyses identified that both independent variables of interest (experiences of discrimination and transparent privacy policies) were significant predictors of comfort with both clinical and administrative models. Sex, age, and health insurance coverage were significantly associated with comfort with clinical models but not with administrative models.

Multivariable logistic regressions were run for each individual predictive model application (see eAppendix for full results). The perceived clarity of health system privacy policies was a significant positive predictor of comfort with every individual predictive model application. Experience of discrimination was a significant negative predictor of comfort with the administrative models and the kidney transplant eligibility prediction model but not with the other clinical predictive models. Having health insurance was significantly and positively associated with comfort with sepsis, stroke, and colon cancer prediction models.

Multivariable logistic regressions were also run with the composite measures of comfort as the dependent variables (Table 3). In these multivariable regressions, experiences of discrimination were a negative predictor of comfort with administrative models (OR, 0.48; P = .001). Transparent privacy policies remained significant positive predictors of comfort with both clinical (OR, 2.35; P < .001) and administrative models (OR, 1.68; P < .001). As was observed in the bivariable logistic regressions, sex (female: OR, 0.74; P = .01 vs male) and age (45-59 years: OR, 0.51; P = .01 vs 18-29 years) were significantly negatively predictive of comfort with clinical models.

DISCUSSION

This study analyzes public comfort with prediction to understand whether current policy distinctions between clinical and administrative applications align with patient perspectives. Patients are more comfortable with clinical predictive models than with administrative models. We found that transparency of privacy policies and experiences of discrimination are important predictors of comfort. In multivariable logistic regression, patients who have not experienced discrimination while seeking health care are more likely to report comfort with administrative predictive models than patients who have.

These results identify that public discomfort with administrative models is significantly higher than discomfort with clinical models, suggesting that current policy is misaligned with patient perspectives. Clinical models, with which patients report higher comfort, receive oversight, whereas administrative models do not. It is possible that the public is more comfortable with clinical prediction because they perceive or assume that clinicians’ expertise protects them from potential harms.30 Patients may also feel more qualified to express discomfort or concern about administrative models than about models requiring clinical expertise. For example, a patient may not feel they have the expertise to identify equity or bias issues related to an oncology predictive model. They do, however, have experience with their hospital’s billing and scheduling processes. Their experience trying to schedule an appointment with a specialist or attempting to contact a representative about a billing error may make the potential risks of these models more salient.31 Patients have also indicated that they expect clinicians to understand and manage artificial intelligence.30 For clinical predictive models, this assumption may engender comfort, whereas administrative predictions are removed from clinician engagement and the assumed protection it offers.

This analysis identifies that patients are least comfortable with the prediction of bill payment. Prior work has indicated that patients tend to be uncomfortable with commercial interests in health care.26 Aggressive debt collection practices and disclosure of commercialization of health data have elicited strong negative reactions among patients and the public.32,33 The expectation that patients behave as consumers can contradict aspects of cultural expectations around the role of medicine.34 Thus, the observation of discomfort with prediction of payment in this analysis is expected.

Predictors of comfort differ for clinical and administrative prediction models, with one exception. Perception that one’s health system has transparent privacy policies is positively predictive of comfort with both model categories. Experiences of discrimination are predictive of comfort with administrative models only. Prior literature indicates that patients with cancer or those with a family history of cancer may be more willing to share their health data and exhibit more enthusiasm for tools such as precision medicine.35 However, this analysis does not identify that patients with cancer or with a family history of cancer are more comfortable with the use of prediction. With the exception of sex and age for comfort with clinical predictions, self-reported demographic characteristics are not significantly associated with comfort.

Limitations

The data analyzed here are cross-sectional, limiting the inferences that can be made about predictors of comfort with prediction. For example, it is not possible to identify longitudinal relationships between experiences of discrimination and comfort with predictive models. Additionally, although experiences of discrimination are relevant, they capture only interpersonal discrimination rather than the structural inequities that inform how patients experience the health care system. Although self-reported racial or ethnic identity can be understood as an indicator of exposure to structural racism, it is an imperfect measure. Future work will explore additional structural inequities and measures of structural racism in relation to comfort with prediction. Future work should also be more inclusive in addressing gender, rather than self-reported sex, and should explicitly include the perspectives of individuals not represented in our sample, such as Native American and Alaska Native respondents. Qualitative work will also be necessary to identify why and how violations of patient comfort may affect patient behaviors such as withholding information from providers or seeking care.

CONCLUSIONS

This analysis provides empirical evidence of misalignment between public comfort with predictive models and current regulation, with implications for federal policy as well as health system practice. The models that patients are least comfortable with fall outside the FDA’s oversight framework. Policies that respond to these concerns at the health system and federal policy levels will thus be important as predictive tools proliferate.

Given federal prioritization of engendering trust and confidence in artificial intelligence and similar tools among the public, the results of this study are particularly important. Prediction has multiple implications for patients that are not overtly clinical but stand to affect patients across the health care system. These issues of access, equity, and quality are inextricably linked to the clinical care that patients receive. Regulatory frameworks that seriously engage with administrative or managerial predictive modeling are needed to make the system of prediction more patient centered and equitable.

Author Affiliations: Division of Health Policy and Management, University of Minnesota School of Public Health (PN), Minneapolis, MN; Division of Clinical Informatics and Digital Transformation, Department of Medicine, University of California, San Francisco (JA-M), San Francisco, CA; Department of Learning Health Sciences, Michigan Medicine (JP), Ann Arbor, MI.

Source of Funding: National Institutes of Health 5R01EB030492-02.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (PN, JA-M, JP); acquisition of data (PN); analysis and interpretation of data (PN, JP); drafting of the manuscript (PN); critical revision of the manuscript for important intellectual content (JA-M, JP); statistical analysis (PN); obtaining funding (JP); and supervision (JA-M, JP).

Address Correspondence to: Paige Nong, PhD, Division of Health Policy and Management, University of Minnesota School of Public Health, 516 Delaware St SE, Minneapolis, MN 55455. Email: Nong0016@umn.edu.

REFERENCES

1. Apathy NC, Holmgren AJ, Adler-Milstein J. A decade post-HITECH: critical access hospitals have electronic health records but struggle to keep up with other advanced functions. J Am Med Inform Assoc. 2021;28(9):1947-1954. doi:10.1093/jamia/ocab102

2. Healthcare’s most wired: national trends 2021. College of Healthcare Information Management Executives. November 2021. Accessed April 7, 2023. https://chimecentral.org/wp-content/uploads/2021/11/Digital-Health-Most-Wired_National-Trends-2021.pdf

3. Waljee AK, Higgins PDR, Singal AG. A primer on predictive models. Clin Transl Gastroenterol. 2014;5(1):e44. doi:10.1038/ctg.2013.19

4. Nemati S, Holder A, Razmi F, Stanley MD, Clifford GD, Buchman TG. An interpretable machine learning model for accurate prediction of sepsis in the ICU. Crit Care Med. 2018;46(4):547-553. doi:10.1097/CCM.0000000000002936

5. Niederer SA, Lumens J, Trayanova NA. Computational models in cardiology. Nat Rev Cardiol. 2019;16(2):100-111. doi:10.1038/s41569-018-0104-y

6. Singh K, Valley TS, Tang S, et al. Evaluating a widely implemented proprietary deterioration index model among hospitalized patients with COVID-19. Ann Am Thorac Soc. 2021;18(7):1129-1137. doi:10.1513/AnnalsATS.202006-698OC

7. Beaulieu-Jones BK, Yuan W, Brat GA, et al. Machine learning for patient risk stratification: standing on, or looking over, the shoulders of clinicians? NPJ Digit Med. 2021;4(1):62. doi:10.1038/s41746-021-00426-3

8. Brenner SK, Kaushal R, Grinspan Z, et al. Effects of health information technology on patient outcomes: a systematic review. J Am Med Inform Assoc. 2016;23(5):1016-1036. doi:10.1093/jamia/ocv138

9. Middleton B, Sittig DF, Wright A. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb Med Inform. 2016;(suppl 1):S103-S116. doi:10.15265/IYS-2016-s034

10. Sendak M, Elish M, Gao M, et al. “The human body is a black box”: supporting clinical decision-making with deep learning. ArXiv. Preprint posted online December 7, 2019. doi:10.48550/arXiv.1911.08089

11. Ding X, Gellad ZF, Mather C III, et al. Designing risk prediction models for ambulatory no-shows across different specialties and clinics. J Am Med Inform Assoc. 2018;25(8):924-930. doi:10.1093/jamia/ocy002

12. Futoma J, Morris J, Lucas J. A comparison of models for predicting early hospital readmissions. J Biomed Inform. 2015;56:229-238. doi:10.1016/j.jbi.2015.05.016

13. Murray SG, Wachter RM, Cucina RJ. Discrimination by artificial intelligence in a commercial electronic health record—a case study. Health Affairs Forefront. January 31, 2020. Accessed February 28, 2022. https://www.healthaffairs.org/do/10.1377/forefront.20200128.626576/full/

14. Clinical decision support software: guidance for industry and Food and Drug Administration staff. FDA. September 28, 2022. Accessed April 7, 2023. https://www.fda.gov/media/109618/download

15. Price WN II, Sachs RE, Eisenberg RS. New innovation models in medical AI. Wash Univ Law Rev. 2022;99(4):1121-1173. doi:10.2139/ssrn.3783879

16. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342

17. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148-1154. doi:10.1377/hlthaff.2014.0352

18. Roski J, Maier EJ, Vigilante K, Kane EA, Matheny ME. Enhancing trust in AI through industry self-governance. J Am Med Inform Assoc. 2021;28(7):1582-1590. doi:10.1093/jamia/ocab065

19. Bedoya AD, Economou-Zavlanos NJ, Goldstein BA, et al. A framework for the oversight and local deployment of safe and high-quality prediction models. J Am Med Inform Assoc. 2022;29(9):1631-1636. doi:10.1093/jamia/ocac078

20. Smith J. Setting the agenda: an informatics-led policy framework for adaptive CDS. J Am Med Inform Assoc. 2020;27(12):1831-1833. doi:10.1093/jamia/ocaa239

21. CDS Innovation Collaborative. Agency for Healthcare Research and Quality. Accessed November 10, 2022. https://cds.ahrq.gov/cdsic

22. Wachter RM, Cassel CK. Sharing health care data with digital giants: overcoming obstacles and reaping benefits while protecting patients. JAMA. 2020;323(6):507-508. doi:10.1001/jama.2019.21215

23. Dyer O. Memorial Sloan Kettering Cancer Center tightens conflict of interest rules after scandals. BMJ. 2019;365:l1762. doi:10.1136/bmj.l1762

24. Nong P, Williamson A, Anthony D, Platt J, Kardia S. Discrimination, trust, and withholding information from providers: implications for missing data and inequity. SSM Popul Health. 2022;18:101092. doi:10.1016/j.ssmph.2022.101092

25. Bracic A, Callier SL, Price WN II. Exclusion cycles: reinforcing disparities in medicine. Science. 2022;377(6611):1158-1160. doi:10.1126/science.abo2788

26. Trinidad MG, Platt J, Kardia SLR. The public’s comfort with sharing health data with third-party commercial companies. Humanit Soc Sci Commun. 2020;7(1):149. doi:10.1057/s41599-020-00641-5

27. Spector-Bagdady K, Trinidad G, Kardia S, et al. Reported interest in notification regarding use of health information and biospecimens. JAMA. 2022;328(5):474-476. doi:10.1001/jama.2022.9740

28. Krieger N, Smith K, Naishadham D, Hartman C, Barbeau EM. Experiences of discrimination: validity and reliability of a self-report measure for population health research on racism and health. Soc Sci Med. 2005;61(7):1576-1596. doi:10.1016/j.socscimed.2005.03.006

29. Nong P, Raj M, Creary M, Kardia SLR, Platt JE. Patient-reported experiences of discrimination in the US health care system. JAMA Netw Open. 2020;3(12):e2029650. doi:10.1001/jamanetworkopen.2020.29650

30. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4(1):140. doi:10.1038/s41746-021-00509-1

31. Anderson RT, Camacho FT, Balkrishnan R. Willing to wait?: the influence of patient wait time on satisfaction with primary care. BMC Health Serv Res. 2007;7:31. doi:10.1186/1472-6963-7-31

32. O’Toole TP, Arbelaez JJ, Lawrence RS; Baltimore Community Health Consortium. Medical debt and aggressive debt restitution practices: predatory billing among the urban poor. J Gen Intern Med. 2004;19(7):772-778. doi:10.1111/j.1525-1497.2004.30099.x

33. Ornstein C, Thomas K. Sloan Kettering’s cozy deal with start-up ignites a new uproar. ProPublica. September 20, 2018. Accessed September 6, 2022. https://bit.ly/3TBwJyh

34. Khullar D. Building trust in health care—why, where, and how. JAMA. 2019;322(6):507-509. doi:10.1001/jama.2019.4892

35. Grande D, Asch DA, Wan F, Bradbury AR, Jagsi R, Mitra N. Are patients with cancer less willing to share their health information? privacy, sensitivity, and social purpose. J Oncol Pract. 2015;11(5):378-383. doi:10.1200/JOP.2015.004820
