
Emergency Department Risk Model: Timely Identification of Patients for Outpatient Care Coordination

Publication: The American Journal of Managed Care, May 2024, Volume 30, Issue 5, Pages e147-e156

The authors created a machine learning–based model to identify patients with major depressive disorder in the primary care setting at high risk of frequent emergency department visits, enabling prioritization for a care coordination program.

ABSTRACT

Objective: Major depressive disorder (MDD) is linked to a 61% increased risk of emergency department (ED) visits and frequent ED usage. Collaborative care management (CoCM) models target MDD treatment in primary care, but how best to prioritize patients for CoCM to prevent frequent ED utilization remains unclear. This study aimed to develop and validate a risk identification model to proactively detect patients with MDD in CoCM at high risk of frequent (≥ 3) ED visits.

Study Design: This retrospective cohort study utilized electronic health records from Mayo Clinic’s primary care system to develop and validate a machine learning–based risk identification model. The model predicts the likelihood of frequent ED visits among patients with MDD within a 12-month period.

Methods: Data were collected from Mayo Clinic’s primary care system between May 1, 2006, and December 19, 2018. Risk identification models were developed and validated using machine learning classifiers to estimate frequent ED visit risks over 12 months. The Shapley Additive Explanations model identified variables driving frequent ED visits.

Results: The patient population had a mean (SD) age of 39.78 (16.66) years, with 30.3% being male and 6.1% experiencing frequent ED visits. The best-performing algorithm (elastic-net logistic regression) achieved an area under the curve of 0.79 (95% CI, 0.74-0.84), a sensitivity of 0.71 (95% CI, 0.57-0.82), and a specificity of 0.76 (95% CI, 0.64-0.85) in the development data set. In the validation data set, the best-performing algorithm (random forest) achieved an area under the curve of 0.79, a sensitivity of 0.83, and a specificity of 0.61. Significant variables included male gender, prior frequent ED visits, high Patient Health Questionnaire-9 score, low education level, unemployment, and use of multiple medications.

Conclusions: The risk identification model has potential for clinical application in triaging primary care patients with MDD in CoCM, aiming to reduce future ED utilization.

Am J Manag Care. 2024;30(5):e147-e156. https://doi.org/10.37765/ajmc.2024.89542

_____

Takeaway Points

In this study, we present a machine learning model to identify patients with major depressive disorder at high risk of frequent emergency department visits, enabling them to be prioritized for collaborative care management of depression in primary care settings.

  • The model helps prioritize patients in primary care settings for collaborative care management of depression to prevent excessive emergency department utilization.
  • The risk identification model can be used by nonclinical decision makers to optimize resource allocation and improve patient outcomes.
  • Incorporating the model into everyday practice and policy decisions may lead to reduced health care costs by lowering the frequency of emergency department visits.
  • The approach supports proactive and targeted management of primary care patients with depression.

_____

From 2006 to 2014, emergency department (ED) visits for adults with major depressive disorder (MDD) increased by 25.9%, a rate significantly higher than the 14.8% increase observed in total ED visits during the same period.1 Even among patients not presenting to the ED for a mental health condition, the presence of MDD as a comorbidity increases the rate of ED visits by up to 61%, with depression severity linked to repeated visits.2 Because EDs are increasingly struggling with overcrowding and boarding of patients needing psychiatric care, efforts to reduce overcrowding often target patients currently in the ED, with less attention paid to prevention.

Most patients with primary MDD present in primary care,3 where long-term relationships enable early interventions to prevent emergency service use. Collaborative care management (CoCM) of depression has been increasingly implemented in primary care, showing improved clinical effectiveness and cost-effectiveness across diverse settings and populations.3,4 CoCM primarily focuses on depression response/remission, usually using the Patient Health Questionnaire-9 (PHQ-9). Entry criteria typically include a score of 10 or more on the PHQ-9 and willingness to participate. However, with rising depression prevalence, these programs lack specific ways to prioritize needier patients to prevent higher-cost care. Ideally, practices should use their own patient data to identify opportunities for preventing ED use.

In 2008, the Division of Integrated Behavioral Health (IBH) at Mayo Clinic in Rochester, Minnesota, implemented the CoCM model for adult depression in primary care. As a part of that effort, a patient-centered registry was built and equipped with tools to systematically record and summarize patient progress with depression treatment in a transparent and actionable manner.4,5

Machine learning (ML) algorithms have proven to be an effective tool for development of risk identification models in various health care domains to identify patients at risk of negative outcomes such as hospital readmissions.6 However, they have been underutilized in primary care for characterizing and prioritizing patients within CoCM. In this study, we aimed to develop and test risk identification ML algorithms for estimating the risk of frequent ED visits for adult patients with MDD presenting in primary care, potentially allowing the CoCM team to develop objective criteria for prioritization within the CoCM program.

METHODS

Study Setting and Design

This study was approved by the Mayo Clinic Institutional Review Board and conforms to the Strengthening the Reporting of Observational Studies in Epidemiology statement.7

Study Population and Identifying Study Cohort

We used the Mayo Clinic IBH registry data collected between May 1, 2006, and December 19, 2018, for this study. The primary criteria for enrolling patients in the CoCM intervention and for inclusion in the IBH registry were being 18 years or older, having a PHQ-9 score8 of 10 or greater, and having no history of bipolar disorder. See eAppendix A (eAppendices available at ajmc.com) for the study flow diagram. All patients who died during the outcome window (n = 4) were excluded.

Some enrolled patients were further evaluated for the presence and tracking of bipolar symptoms, anxiety symptoms, and alcohol use disorder using the Mood Disorder Questionnaire9 (MDQ), Generalized Anxiety Disorder 7-item scale (GAD-7),10 and Alcohol Use Disorders Identification Test (AUDIT),11 respectively. We decided to incorporate the data from these questionnaires where available, primarily because these data were frequently present for patients who consented to participate in the CoCM intervention.

Data in the IBH registry were further cross-referenced with the patients’ clinical data in the Mayo Clinic electronic health record (EHR) system. To be included in the study cohort, patients had to meet the following criteria: (1) a medical history of at least 2 years and (2) a minimum of 2 recorded PHQ-9 scores. The index date was defined as the date when a PHQ-9 score was recorded, provided that the patient already had at least 1 year of documented medical history and a minimum of 1 PHQ-9 score during that year, not including the one recorded on the index date. The year leading up to this index date was termed the observation window. Additionally, patients were required to have an ensuing year of medical history after the index date, which we refer to as the outcome window. eAppendix B illustrates the observation and outcome windows. We required 2 PHQ-9 scores to narrow our focus to individuals whose health care providers had enough concern about depression to screen them twice. Given that the central theme of our research revolves around depression, we aimed to focus on those most likely to be dealing with this condition.
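A rough sketch of this windowing logic is below (in R with dplyr/lubridate; this is not the authors' code, and the table and column names such as phq9, first_visit, and patient_id are hypothetical). The earliest qualifying PHQ-9 date per patient is taken here purely for illustration.

```r
library(dplyr)
library(lubridate)

# phq9:    one row per recorded PHQ-9 score (assumed columns: patient_id, phq9_date)
# history: first documented encounter per patient (assumed columns: patient_id, first_visit)
index_dates <- phq9 %>%
  inner_join(history, by = "patient_id") %>%
  group_by(patient_id) %>%
  arrange(phq9_date, .by_group = TRUE) %>%
  mutate(
    # count of earlier PHQ-9 scores falling within the preceding year
    prior_phq9_1y = sapply(seq_along(phq9_date),
                           function(i) sum(phq9_date < phq9_date[i] &
                                           phq9_date >= phq9_date[i] - years(1))),
    enough_history = phq9_date - years(1) >= first_visit    # >= 1 year of documented history
  ) %>%
  filter(enough_history, prior_phq9_1y >= 1) %>%
  slice_min(phq9_date, n = 1, with_ties = FALSE) %>%        # earliest qualifying date, for illustration
  ungroup() %>%
  transmute(patient_id,
            index_date = phq9_date,
            obs_start  = index_date - years(1),             # observation window start
            out_end    = index_date + years(1))             # outcome window end
```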

Measures

Outcome variable: Frequent ED visits. The term frequent ED visits denotes repeated ED visits within a set period, such as 12 months.12 Although there is no standard definition of the number of visits characterizing frequent users, such patients are often seen as high utilizers of health care, cycling through EDs, inpatient care, and other services without effectively managing their illnesses. In this study, we defined frequent ED visits as 3 or more ED visits in the outcome window. Although this threshold is lower than some definitions in the literature, it yielded 474 patients, or 6.1% of the 7762 patients in the study cohort. See eAppendix C for the distribution of ED visits in the study cohort.
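A minimal sketch (again with hypothetical table and column names) of deriving this binary outcome from ED visit records, counting visits that fall in each patient's 12-month outcome window and flagging 3 or more:

```r
library(dplyr)

# ed_visits: one row per ED visit (assumed columns: patient_id, ed_date)
# cohort:    one row per patient (assumed columns: patient_id, index_date)
outcome <- cohort %>%
  left_join(ed_visits, by = "patient_id") %>%
  mutate(in_window = !is.na(ed_date) &
                     ed_date >  index_date &
                     ed_date <= index_date + 365) %>%        # 12-month outcome window
  group_by(patient_id, index_date) %>%
  summarise(ed_visits_12m = sum(in_window), .groups = "drop") %>%
  mutate(frequent_ed = as.integer(ed_visits_12m >= 3))       # >= 3 visits = frequent user
```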

Predictor variables in observation window. During our observation window, we collected the following:

  1. Demographics: Information included patient gender, marital status, race, education, and employment.
  2. Mental health assessments: Metrics from questionnaires (PHQ-9, MDQ, GAD-7, and AUDIT) included counts, mean scores, and the score of the most recent completed questionnaire.
  3. Socioeconomic status (SES): This was derived from the HOUSES index. Higher quartiles denote a higher SES (further details in eAppendix D).
  4. History of ED visits: Data were collected on the count of visits, total time span of these visits, and most recent visit prior to the index date.
  5. Prescribed medications: Names of prescribed medications were standardized using a fuzzy string matching algorithm13 (an illustrative matching sketch appears at the end of this subsection) and then categorized within the Mayo Clinic classification system (MayoPharm), noting both the presence and count of each medication.
  6. Diagnostic codes: Diagnosis codes were recorded using International Statistical Classification of Diseases, Tenth Revision; Systematized Nomenclature of Medicine—Clinical Terms; and International Classification of Diseases, Ninth Revision (ICD-9). These codes were standardized to ICD-9 and subsequently summarized using 2 classification systems: Clinical Classifications Software (CCS) and ambulatory care–sensitive conditions (ACSCs).

The CCS includes 285 mutually exclusive categories of diagnosis that facilitate risk adjustment for research.14 The ACSCs include a list of comorbidities that are typically used as a measure of health system performance in the US, reflecting missed opportunities to address conditions that might have been managed in primary care.15,16 Comorbidities in ACSCs can be aggregated into 3 main categories: acute, chronic, and vaccine-preventable conditions (further referred to as aggregated ACSCs). We used these 3 categories to construct 3 indicator variables denoting whether the patient had a condition in each category. ACSCs generally do not include mental disorders. However, Sarmento et al17 showed that an ACSC list with mental disorders could identify a higher number of hospitalizations in Portugal. Additionally, the impact of comorbid mental health conditions on ED utilization is well documented.18,19 Therefore, we augmented the original version of ACSCs with a list of mental disorder morbidities: borderline personality disorder, posttraumatic stress disorder, schizophrenia, anxiety, depression, obsessive-compulsive disorder, psychosis, alcohol use disorder, attention-deficit disorder, bipolar disorder, somatic symptom disorder, and substance use disorder.

For the study, we compared the performance of models trained using the following 4 comorbidity groups: (1) ACSCs, (2) ACSCs and mental disorder comorbidities, (3) aggregated ACSCs and mental disorder comorbidities (further referred to as lumped), and (4) CCS.
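The fuzzy string matching algorithm used to standardize medication names (predictor 5 above) is not specified in the article; the sketch below uses Jaro-Winkler distance from the stringdist package purely as an illustration, with a made-up dictionary and cutoff.

```r
library(stringdist)

# Hypothetical canonical drug names (the real dictionary would come from MayoPharm)
drug_dictionary <- c("sertraline", "fluoxetine", "bupropion", "escitalopram")

raw_names <- c("Sertralene 50mg", "fluoxetne", "Buproprion XL")

# Lowercase, strip dose/formulation tokens, then map each name to the closest
# dictionary entry within a Jaro-Winkler distance of 0.15 (illustrative cutoff)
clean <- gsub("\\s+(\\d+.*|xl)$", "", tolower(raw_names))
idx   <- amatch(clean, drug_dictionary, method = "jw", maxDist = 0.15)
standardized <- drug_dictionary[idx]   # NA where no sufficiently close match exists
```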

Handling Missing Values

Variables with no more than 35% missing observations were imputed using the random forest imputation method missForest (see eAppendix E for more information).20 A sensitivity analysis was performed with all missing values dropped.
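The article names the missForest method20 for this step; a minimal sketch of how it is typically called is below (the data frame name predictors is hypothetical).

```r
library(missForest)

# Drop variables with more than 35% missing observations before imputation
miss_frac  <- colMeans(is.na(predictors))
predictors <- predictors[, miss_frac <= 0.35]

# Random-forest-based imputation of the remaining mixed-type variables
set.seed(2024)
imp <- missForest(predictors, ntree = 100)
predictors_imputed <- imp$ximp        # imputed data set; imp$OOBerror gives out-of-bag error
```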

ML Prediction Algorithms

We trained 3 ML algorithms—elastic-net logistic regression (GLMnet), random forest (RF), and gradient boosting machine (GBM)—to predict frequent ED visits. See eAppendix F for the details of the algorithms.
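A minimal sketch of fitting the 3 classifiers: glmnet and gbm are named in the Statistical Analysis section, and reference 27 points to the ranger implementation of RF, so those packages are used here. Hyperparameter values are placeholders rather than the study's tuned values, and the names dev and frequent_ed are hypothetical.

```r
library(glmnet)
library(ranger)
library(gbm)

# dev$frequent_ed is assumed coded 0/1 (numeric) for glmnet and gbm
x <- model.matrix(frequent_ed ~ ., data = dev)[, -1]   # numeric design matrix, intercept dropped
y <- dev$frequent_ed

# Elastic-net logistic regression (alpha between 0 = ridge and 1 = lasso)
fit_glmnet <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)

# Random forest with class-probability estimates (ranger needs a factor outcome)
dev_rf <- transform(dev, frequent_ed = factor(frequent_ed, labels = c("no", "yes")))
fit_rf <- ranger(frequent_ed ~ ., data = dev_rf, probability = TRUE, num.trees = 500)

# Gradient boosting machine
fit_gbm <- gbm(frequent_ed ~ ., data = dev, distribution = "bernoulli",
               n.trees = 1000, interaction.depth = 3, shrinkage = 0.01)
```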

Training and Validation

We partitioned the complete sample of 7762 patients into development and validation subsets according to whether the index date was before or on (development cohort) or after (validation cohort) December 31, 2014. ML models were trained using the development group with a 10-fold cross-validation strategy. We also tuned the model parameters using grid search within the 10-fold cross-validation and selected the best parameters according to the highest area under the receiver operating characteristic (ROC) curve over the 10 repeated runs. Then we evaluated the final models on the validation group (refer to eAppendix G for information on the potential risk of data leakage using this method). See Figure 1 for an illustration of this procedure. We also used a traditional 80/20 train-test split approach to train and evaluate the performance of ML classifiers. See eAppendix H for the details.
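One way to reproduce the temporal split and the 10-fold cross-validated grid search is via the caret package; caret is not named by the authors, so this wrapper and the tuning grid are assumptions for illustration (the outcome is coded as a no/yes factor so that class probabilities and AUC can be computed).

```r
library(caret)

dev <- subset(cohort, index_date <= as.Date("2014-12-31"))   # development cohort
val <- subset(cohort, index_date >  as.Date("2014-12-31"))   # validation cohort

ctrl <- trainControl(method = "cv", number = 10,
                     classProbs = TRUE, summaryFunction = twoClassSummary)

# Grid search over elastic-net penalties, selecting by cross-validated AUC ("ROC")
fit <- train(frequent_ed ~ ., data = dev, method = "glmnet",
             metric = "ROC", trControl = ctrl,
             tuneGrid = expand.grid(alpha  = c(0.1, 0.5, 0.9),
                                    lambda = 10^seq(-4, -1, length.out = 10)))

# Final model evaluated once on the held-out validation cohort
val_prob <- predict(fit, newdata = val, type = "prob")[, "yes"]
```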

Performance Measures

To measure the performance of our trained ML models, we used various metrics, including accuracy, ROC curve, area under the curve (AUC), cumulative gain curve, Gini index,21 sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and decision curve,22,23 across all possible classification thresholds.24 The benefit-harm relationship of making clinical decisions based on an ML model is illustrated by the decision curve, in which the model is compared against 2 default clinical decisions: intervene on all patients (“all”) or no intervention (“none”).
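A sketch of computing AUC, sensitivity/specificity at a fixed threshold, and decision-curve net benefit on the validation cohort follows; the pROC package and the variable names val_prob and y_val are assumptions, and the net-benefit formula follows the decision curve analysis framework of Vickers and Elkin.23

```r
library(pROC)

# Assumes val_prob (predicted probabilities) and y_val (observed 0/1 outcome)
roc_obj <- roc(y_val, val_prob)
auc(roc_obj)                                           # area under the ROC curve

# Sensitivity and specificity at an illustrative probability threshold of 0.5
pred <- as.integer(val_prob >= 0.5)
sens <- sum(pred == 1 & y_val == 1) / sum(y_val == 1)
spec <- sum(pred == 0 & y_val == 0) / sum(y_val == 0)

# Net benefit of acting on the model at threshold probability pt,
# compared with the "treat all" and "treat none" defaults
net_benefit <- function(pt, prob, y) {
  tp <- sum(prob >= pt & y == 1)
  fp <- sum(prob >= pt & y == 0)
  tp / length(y) - fp / length(y) * pt / (1 - pt)
}
thresholds <- seq(0.05, 0.50, by = 0.05)
nb_model <- sapply(thresholds, net_benefit, prob = val_prob, y = y_val)
nb_all   <- sapply(thresholds, function(pt) mean(y_val) - (1 - mean(y_val)) * pt / (1 - pt))
nb_none  <- rep(0, length(thresholds))
```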

Statistical Analysis

All data analyses were performed using R 3.6.0 (R Foundation for Statistical Computing), with the packages glmnet,25 GBM,26 RF,27 and fastshap.28 We reported descriptive statistics of patients. For each ML model, we reported performance metrics including mean and 95% CI for the 10-fold cross-validations and point estimates of the final models on the validation cohort. For the best-performing ML model, we also reported the mean Shapley Additive Explanations (SHAP) values for the top 30 variables, which help explain the model’s predictions (see eAppendix I for more information).
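A sketch of the SHAP computation with fastshap28 for the RF model is below; the prediction wrapper, the nsim value, and the object names (which follow the earlier sketches) are assumptions, not the authors' exact settings.

```r
library(fastshap)
library(ranger)

# Wrapper returning the predicted probability of frequent ED visits from the ranger model
pred_fun <- function(object, newdata) {
  predict(object, data = newdata)$predictions[, "yes"]
}

X <- dev_rf[, setdiff(names(dev_rf), "frequent_ed")]   # features only, outcome removed

set.seed(2024)
shap <- explain(fit_rf, X = X, pred_wrapper = pred_fun, nsim = 50)

# Rank variables by mean absolute SHAP value and keep the top 30
top30 <- sort(colMeans(abs(as.matrix(shap))), decreasing = TRUE)[1:30]
```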

RESULTS

Baseline Characteristics

Of 9932 patients with MDD, 7762 met the inclusion criteria, with 5914 in the development cohort and 1823 in the validation cohort. The mean age, proportion of men, and proportion of White patients were 39.78 years, 30.3%, and 91.2%, respectively (Table 1 [part A and part B]). All patients were 18 years and older and had a primary care clinician at Mayo Clinic, and most resided in Olmsted County, Minnesota. Approximately 16.49% reported a PHQ-9 score of 20 or higher in the observation window, and 33.94% experienced thoughts of death or self-harm (according to question 9 of the PHQ-9), with 3.45% reporting a severity level of 3. Nearly 50% completed a GAD-7 questionnaire within their 1 year of treatment before the index date (observation window), and 18.90% completed an AUDIT assessment. For the HOUSES index, 36.15% of patients with frequent ED visits were assigned to quartile 1 (lowest SES) compared with 22.44% for those without frequent ED visits. Patients with frequent ED visits also had a higher number of comorbidities within the observation window along with a greater number of prescribed medications. Regarding comorbid mental disorders, 56.71% of patients had depression, followed by 14.26% with anxiety, 2.62% with alcohol use disorder, and 2.58% with substance use disorder (see eAppendix J).

Missing Values

A total of 13 variables, comprising mostly the mental health questionnaires (PHQ-9, MDQ, GAD-7, and AUDIT), had at least 1 missing observation (eAppendix E). After dropping variables with more than 35% missing observations, only 3 variables with missing data remained, ranging from less than 0.1% to 30.19% missing. At this point, only 12.7% of patients were missing data for 1 or more of the variables that remained in our study.

Predicting Frequent ED Visits

Table 2 presents the performance metrics (accuracy, AUC, sensitivity, specificity, PPV, and NPV) for predicting frequent ED visits using the 10-fold cross-validated models on the development cohort and the final models on the validation cohort. All 3 ML models (GLMnet, RF, and GBM) on the 4 comorbidity measures demonstrated comparable AUC performances. However, GLMnet showed slightly better performance, with an AUC of 0.79 (95% CI, 0.74-0.84) for ACSCs, ACSCs and mental disorder comorbidities, and lumped comorbidity and an AUC of 0.77 (95% CI, 0.70-0.84) for the CCS comorbidity measure.

In the validation data set, the RF classifier had the highest performance, with AUC values between 0.77 and 0.79, compared with GLMnet (AUC, 0.73-0.79) and GBM (AUC, 0.75-0.78). Additionally, the RF classifier provided more consistent results on the development (AUC, 0.77-0.79) data set compared with the other classifiers.

The sensitivity (0.69-0.72) and specificity (0.72-0.78) measures of the classifiers were also comparable across different comorbidity measures in the development data set. The RF classifier provided the highest sensitivity (0.71-0.72), and the GBM offered the highest specificity (0.75-0.78) across comorbidity measures. In the validation data set, the RF classifier delivered the best sensitivity (0.83) for CCS and specificity (0.83) for ACSC comorbidity measures.

Figure 2 shows the ROC, cumulative gain, decision curve, sensitivity, and PPV curves for the 4 comorbidity groups based on the RF model predicting frequent ED visits on validation cohorts. We used the RF model because its results on both development and validation data sets were more consistent compared with other models. As this figure shows, all 4 comorbidity groups had comparable performances across the different performance metrics.

The results of the sensitivity analysis in which we dropped all missing values (complete-case data) showed poorer-performing ML models compared with the models trained on the imputed data. The sensitivity analysis also demonstrated substantial variation between the development and validation cohorts (see eAppendix K for the results of the sensitivity analysis on both cohorts).

Characterizing Frequent ED Visits

Figure 3 (A) displays the SHAP values for the top 30 variables from the RF model based on the lumped comorbidity group influencing frequent ED visits. These 30 variables were categorized into 5 groups: variables related to ED visits; AUDIT, PHQ-9, and GAD-7 questionnaires; demographic information; medications; and comorbidities (Figure 3 [B]). On average, factors such as a history of frequent ED visits, high mean scores on the PHQ-9 completed in the observation window, high PHQ-9 score at the index date, high score on question 9 of the PHQ-9 at the index date, being male, being Black/African American, having low levels of education (no high school degree or some college/associate degree), being unemployed or disabled, using multiple (> 1) medications, and having any avoidable ACSCs were among the most important predictors of frequent ED visits.

Age was an informative variable but not a strong risk factor because older patients had both high and low risk of frequent ED visits. Conversely, having fewer AUDIT questionnaires recorded, being employed, having a bachelor’s degree, and being married were associated with lower risk of frequent ED visits. However, these trends do not apply universally. Although some variables generally increased risk for most patients (eg, values for the total number of medications used skewed positive), the same variables may confer greater or lower risk for individual patients (each unique point on the graph in Figure 3 [A]).

Contrary to expectations, the HOUSES index (SES) was not among the top 30 variables. This might be due to collinearity with other variables such as low education level or employment status in the study cohort. A comprehensive analysis of the impact of the top 30 variables on frequent ED visits can be found in eAppendix I.

DISCUSSION

This study took a proactive approach to developing a prediction model for estimating the risk of frequent ED visits for patients with MDD in primary care, with the goal of providing a tool for those triaging which primary care patients to target for care coordination. Given the growing efforts to address mental health issues in primary care settings,29,30 ways to identify patients at higher risk of health care utilization are needed.

We developed and evaluated 3 ML risk models using a patient cohort in the IBH registry (implemented as part of CoCM for patients with MDD in primary care). All risk models had good discrimination capabilities on the development (AUC, 0.77-0.79) and validation (AUC, 0.73-0.79) cohorts. Further, we showed that summarizing the diagnosis data using different comorbidity measures (ACSCs, ACSCs with mental disorders, lumped, and CCS) did not significantly affect the performance of risk models.

We followed a robust model development process to prevent model bias and overfitting and performed validation on a temporally later cohort to assess model generalizability to future observations, thus mimicking how the model would be used in practice. We demonstrated that the RF model was more consistent, and thus more reliable, than the other ML classifiers across different comorbidity categories and can be translated into clinical practice to help with the triage of patients with MDD for the CoCM intervention in primary care.

Using this risk model, we characterized patients who are most likely to have frequent ED visits. In line with previous studies, patients in our population with a history of frequent ED visits,31,32 having ACSCs33,34 that could be categorized as avoidable (eg, dental conditions), using multiple medications,35,36 with a high initial PHQ-9 score (≥ 15) at index date,2,34 with a high score on question 9 of the PHQ-9 (screens for the presence and duration of suicide ideation), and who are Black/African American37 are at high risk for repeated ED visits. On the other hand, age and alcohol use disorder were weakly associated with frequent ED visits in this population.38 Specifically, some older patients and patients with depressive/alcohol use disorder conditions experienced frequent ED visits, but others did not. These findings are not consistent with the literature indicating that age39 and alcohol use disorder40,41 are strong predictors of frequent ED visits and may have occurred due to the presence of confounding variables such as employment status. In line with previous studies, demographic variables, specifically being married,42 being employed, and having a higher level of education,32 were protective factors for frequent ED visits.

There are increasing efforts to add more variables to electronic data systems, such as genomic information and more details on social determinants of health. There are also increasing examples of algorithms being embedded in EHR systems to alert health care providers to pay more attention to a pattern that the computer recognizes and to adjust the care provided. Differences in patient populations and social determinants of health may require health care organizations to adapt these approaches to local data to improve the ability to target the neediest primary care patients. There is also much need to explore interventions that will work to prevent future ED use, but such work starts with an improved ability to distinguish which of many patients with MDD are at highest risk. The primary care setting and the CoCM model offer the advantage of ongoing and repeated contacts with patients at risk and are ideal for testing prevention of higher-cost health care utilization. Future research will involve broadening the range of predictor variables to enhance predictive accuracy, particularly focusing on the influence of social determinants of health. We also intend to compare the performance of the risk identification model with clinician evaluations. By doing so, we aim to validate our model and integrate the expertise of physicians, further enhancing patient care. Moreover, we will examine clinical pathways for incorporating algorithms into patient management and conduct small-scale tests of interventions aimed at reducing risk.

Study Limitations

Our study population comprised primary care patient populations at Mayo Clinic, a multistate, integrated health care delivery system; thus, an external validation study would be needed to ensure generalizability. Only structured clinical data from the IBH registry and EHR system were used for model training; however, other data or variables may have impacted the outcomes. Specifically, our study lacked patient health insurance data, which can influence health care outcomes. This omission could restrict the model’s generalizability in varied health care settings. We also did not have access to the time of day of ED visits in our analysis. This exclusion could limit the model’s sensitivity to trends influenced by visit timing, such as resource availability during nighttime hours. Additionally, we did not incorporate laboratory data into the model due to significant missing values for the study cohort. Furthermore, we lacked access to the history of inpatient and outpatient care, which might have influenced the predictive power of our model. In follow-up analyses, we intend to integrate these variables into the model. Another limitation of our study is the threshold set for frequent ED visits at 3 or more visits within the outcome window. This definition, although derived from our data set’s distribution, might differ from commonly used standards in the literature. As such, it could impact the comparability of our findings with studies employing different criteria.

In the future, we plan to test the predictors of frequent ED visits for ways we might adapt our CoCM interventions to target ED utilization. We also plan to investigate the utility of unstructured sources of data such as images, signals, or clinical notes and other ML algorithms such as deep learning to improve performance of our model. Finally, although the SHAP values provide a means to interpret any ML model, the values themselves do not provide causal knowledge and should be interpreted as associations.

CONCLUSIONS

We developed a robust risk identification model for proactive identification of patients with MDD at risk of frequent ED visits using routinely generated, easily accessible data in primary care. This model has the potential to be translated into clinical practice and to help prioritize patients with MDD for active care coordination to reduce ED risk. Larger and more heterogeneous patient populations and data types are needed to enable the universal applicability of our risk identification model for frequent ED visits.

Acknowledgments

This work was made possible by funding from the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery.

Author Affiliations: Department of Psychiatry and Psychology (MZ, MDW, KBA, WBL), Department of Pediatric and Adolescent Medicine (CIW), Department of Artificial Intelligence and Informatics (CN), and Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery (SP, CN), Mayo Clinic, Rochester, MN; School of Nursing, Columbia University Irving Medical Center (MZ), New York, NY.

Source of Funding: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (MZ, MDW, KBA, WBL, CN); acquisition of data (MZ, MDW, CIW, CN); analysis and interpretation of data (MZ, MDW, KBA, CIW, SP, CN); drafting of the manuscript (MZ, MDW, KBA, SP, CN); critical revision of the manuscript for important intellectual content (MZ, MDW, KBA, CIW, WBL, CN); statistical analysis (MZ, SP, CN); obtaining funding (CN); administrative, technical, or logistic support (MZ, WBL, CN); and supervision (MZ, MDW, CN).

Address Correspondence to: Maryam Zolnoori, PhD, School of Nursing, Columbia University Irving Medical Center, 390 Fort Washington Ave, New York, NY 10033. Email: mz2825@cumc.columbia.edu.

REFERENCES

1. Ballou S, Mitsuhashi S, Sankin LS, et al. Emergency department visits for depression in the United States from 2006 to 2014. Gen Hosp Psychiatry. 2019;59:14-19. doi:10.1016/j.genhosppsych.2019.04.015

2. Beiser DG, Ward CE, Vu M, Laiteerapong N, Gibbons RD. Depression in emergency department patients and association with health care utilization. Acad Emerg Med. 2019;26(8):878-888. doi:10.1111/acem.13726

3. Unützer J, Park M. Strategies to improve the management of depression in primary care. Prim Care. 2012;39(2):415-431. doi:10.1016/j.pop.2012.03.010

4. Unützer J, Harbin H, Schoenbaum M, Druss B. The collaborative care model: an approach for integrating physical and mental health care in Medicaid health homes. Center for Health Care Strategies and Mathematica Policy Research brief. May 2013. Accessed March 1, 2022. https://www.chcs.org/media/HH_IRC_Collaborative_Care_Model__052113_2.pdf

5. Zolnoori M, Williams MD, Leasure WB, Angstman KB, Ngufor C. A systematic framework for analyzing observation data in patient-centered registries: case study for patients with depression. JMIR Res Protoc. 2020;9(10):e18366. doi:10.2196/18366

6. Huang Y, Talwar A, Chatterjee S, Aparasu RR. Application of machine learning in predicting hospital readmissions: a scoping review of the literature. BMC Med Res Methodol. 2021;21(1):96. doi:10.1186/s12874-021-01284-z

7. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Bull World Health Organ. 2007;85(11):867-872. doi:10.2471/blt.07.045120

8. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613. doi:10.1046/j.1525-1497.2001.016009606.x

9. Miller CJ, Klugman J, Berv DA, Rosenquist KJ, Ghaemi SN. Sensitivity and specificity of the Mood Disorder Questionnaire for detecting bipolar disorder. J Affect Disord. 2004;81(2):167-171. doi:10.1016/S0165-0327(03)00156-3

10. Spitzer RL, Kroenke K, Williams JB, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092-1097. doi:10.1001/archinte.166.10.1092

11. Saunders JB, Aasland OG, Babor TF, de la Fuente JR, Grant M. Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption-II. Addiction. 1993;88(6):791-804. doi:10.1111/j.1360-0443.1993.tb02093.x

12. Pham JC, Bayram JD, Moss DK. Characteristics of frequent users of three hospital emergency departments. Agency for Healthcare Research and Quality. Updated July 2017. Accessed April 1, 2022. https://www.ahrq.gov/patient-safety/settings/emergency-dept/frequent-use.html

13. Singla N, Garg D. String matching algorithms and their applicability in various applications. Int J Soft Comput Eng. 2012;1(6):218-222.

14. Salsabili M, Kiogou S, Adam TJ. The evaluation of clinical classifications software using the National Inpatient Sample Database. AMIA Jt Summits Transl Sci Proc. 2020;2020:542-551.

15. Poblano Verástegui O, Torres-Arreola LDP, Flores-Hernández S, Nevarez Sida A, Saturno Hernández PJ. Avoidable hospitalization trends from ambulatory care-sensitive conditions in the public health system in México. Front Public Health. 2022;9:765318. doi:10.3389/fpubh.2021.765318

16. Hodgson K, Deeny SR, Steventon A. Ambulatory care-sensitive conditions: their potential uses and limitations. BMJ Qual Saf. 2019;28(6):429-433. doi:10.1136/bmjqs-2018-008820

17. Sarmento J, Rocha JVM, Santana R. Defining ambulatory care sensitive conditions for adults in Portugal. BMC Health Serv Res. 2020;20(1):754. doi:10.1186/s12913-020-05620-9

18. Sporinova B, Manns B, Tonelli M, et al. Association of mental health disorders with health care utilization and costs among adults with chronic disease. JAMA Netw Open. 2019;2(8):e199910. doi:10.1001/jamanetworkopen.2019.9910

19. Bergamo C, Juarez-Colunga E, Capp R. Association of mental health disorders and Medicaid with ED admissions for ambulatory care–sensitive conditions. Am J Emerg Med. 2016;34(5):820-824. doi:10.1016/j.ajem.2016.01.023

20. Stekhoven DJ, Bühlmann P. MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics. 2012;28(1):112-118. doi:10.1093/bioinformatics/btr597

21. Greene HJ, Milne GR. Assessing model performance: the Gini statistic and its standard error. J Database Mark Cust Strategy Manag. 2010;17:36-48. doi:10.1057/dbm.2010.2

22. Vickers AJ, Cronin AM, Elkin EB, Gonen M. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers. BMC Med Inform Decis Mak. 2008;8:53. doi:10.1186/1472-6947-8-53

23. Vickers AJ, Elkin EB. Decision curve analysis: a novel method for evaluating prediction models. Med Decis Making. 2006;26(6):565-574. doi:10.1177/0272989X06295361

24. Song B, Zhang G, Zhu W, Liang Z. ROC operating point selection for classification of imbalanced data with application to computer-aided polyp detection in CT colonography. Int J Comput Assist Radiol Surg. 2014;9(1):79-89. doi:10.1007/s11548-013-0913-8

25. Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010;33(1):1-22.

26. Ridgeway G. Generalized boosted models: a guide to the gbm package. Comprehensive R Archive Network. January 10, 2024. Accessed April 17, 2024. https://cran.r-project.org/web/packages/gbm/vignettes/gbm.pdf

27. Wright MN, Ziegler A. ranger: a fast implementation of random forests for high dimensional data in C++ and R. ArXiv. Preprint posted online August 18, 2015. Updated May 17, 2018. doi:10.48550/arXiv.1508.04409

28. Greenwell B. Package ‘fastshap.’ Comprehensive R Archive Network. February 22, 2024. Accessed April 17, 2024. https://cloud.r-project.org/web/packages/fastshap/fastshap.pdf

29. Archer J, Bower P, Gilbody S, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev. 2012;10:CD006525. doi:10.1002/14651858.CD006525.pub2

30. Dendukuri N, McCusker J, Belzile E. The identification of seniors at risk screening tool: further evidence of concurrent and predictive validity. J Am Geriatr Soc. 2004;52(2):290-296. doi:10.1111/j.1532-5415.2004.52073.x

31. Slankamenac K, Heidelberger R, Keller DI. Prediction of recurrent emergency department visits in patients with mental disorders. Front Psychiatry. 2020;11:48. doi:10.3389/fpsyt.2020.00048

32. Krieg C, Hudon C, Chouinard MC, Dufour I. Individual predictors of frequent emergency department use: a scoping review. BMC Health Serv Res. 2016;16(1):594. doi:10.1186/s12913-016-1852-1

33. Johnson PJ, Ghildayal N, Ward AC, Westgard BC, Boland LL, Hokanson JS. Disparities in potentially avoidable emergency department (ED) care: ED visits for ambulatory care sensitive conditions. Med Care. 2012;50(12):1020-1028. doi:10.1097/MLR.0b013e318270bad4

34. Helmer DA, Dwibedi N, Rowneki M, et al. Mental health conditions and hospitalizations for ambulatory care sensitive conditions among veterans with diabetes. Am Health Drug Benefits. 2020;13(2):61-71.

35. Allin S, Rudoler D, Laporte A. Does increased medication use among seniors increase risk of hospitalization and emergency department visits? Health Serv Res. 2017;52(4):1550-1569. doi:10.1111/1475-6773.12560

36. Shehab N, Lovegrove MC, Geller AI, Rose KO, Weidle NJ, Budnitz DS. US emergency department visits for outpatient adverse drug events, 2013-2014. JAMA. 2016;316(20):2115-2125. doi:10.1001/jama.2016.16201

37. Karaca Z, Wong HS. Racial disparity in duration of patient visits to the emergency department: teaching versus non-teaching hospitals. West J Emerg Med. 2013;14(5):529-541. doi:10.5811/westjem.2013.3.12671

38. Beiser DG, Ward CE, Vu M, Laiteerapong N, Gibbons RD. Depression in emergency department patients and association with health care utilization. Acad Emerg Med. 2019;26(8):878-888. doi:10.1111/acem.13726

39. Lee JH, Park GJ, Kim SC, Kim H, Lee SW. Characteristics of frequent adult emergency department users: a Korean tertiary hospital observational study. Medicine (Baltimore). 2020;99(18):e20123. doi:10.1097/MD.0000000000020123

40. Dark T, Flynn HA, Rust G, Kinsell H, Harman JS. Epidemiology of emergency department visits for anxiety in the United States: 2009-2011. Psychiatr Serv. 2017;68(3):238-244. doi:10.1176/appi.ps.201600148

41. Blow FC, Walton MA, Murray R, et al. Intervention attendance among emergency department patients with alcohol- and drug-use disorders. J Stud Alcohol Drugs. 2010;71(5):713-719. doi:10.15288/jsad.2010.71.713

42. Pandey KR, Yang F, Cagney KA, Smieliauskas F, Meltzer DO, Ruhnke GW. The impact of marital status on health care utilization among Medicare beneficiaries. Medicine (Baltimore). 2019;98(12):e14871. doi:10.1097/MD.0000000000014871
