
E-Consult Implementation: Lessons Learned Using Consolidated Framework for Implementation Research

The American Journal of Managed Care, December 2015, Volume 21, Issue 12

This paper identified 4 factors associated with implementation success of e-consults in 8 VA medical centers, with implications for implementing similar health IT initiatives elsewhere.

ABSTRACT

Objectives: In 2011, the Veterans Health Administration (VHA) implemented electronic consults (e-consults) as an alternative to in-person specialty visits to improve access and reduce travel for veterans. We conducted an evaluation to understand variation in the use of the new e-consult mechanism and the causes of variable implementation, guided by the Consolidated Framework for Implementation Research (CFIR).

Study Design: Qualitative case studies of 3 high- and 5 low-implementation e-consult pilot sites. Participants included e-consult site leaders, primary care providers, specialists, and support staff identified using a modified snowball sample.

Methods: We used a 3-step approach. First, we conducted a structured survey of e-consult site leaders, based on the CFIR, to identify key constructs. We then conducted open-ended interviews with all participants, focused on those key constructs. Finally, we produced structured, site-level ratings of CFIR constructs and compared them between high- and low-implementation sites.

Results: Site leaders identified 14 initial constructs. We conducted 37 interviews, from which 4 CFIR constructs distinguished high-implementation e-consult sites: compatibility, networks and communications, training, and access to knowledge and information. For example, illustrating compatibility, a specialist at a high-implementation site reported that the site changed the order of consult options so that all specialties listed e-consults first, to maintain consistency. High-implementation sites also exhibited greater agreement on constructs.

Conclusions: By using the CFIR to analyze our results, we facilitate future synthesis with other findings and better identify patterns of implementation determinants common across settings.

Am J Manag Care. 2015;21(12):e640-e647

Take-Away Points

Our research identified implementation factors that distinguished medical centers that were more successful at implementing a health information technology initiative, electronic consults (e-consults), from those that were less successful. These factors and their implications for implementing new health information technology programs include:

  • Compatibility: design the initiative to fit existing work processes.
  • Networks and communications: assess the degree of communication among participants; attend to indications of poor communication.
  • Training resources: expend effort on training.
  • Access to knowledge and information: establish key contacts who are easily accessible to program participants.

In 2010, the Secretary for the Department of Veterans Affairs (VA) identified improving access to care as a top priority.1 The Veterans Health Administration (VHA) had been collecting and analyzing data on wait times for more than a decade, and observational studies found associations between wait times and poorer short- and long-term quality indicators.2 Research also highlighted challenges faced by veterans in rural communities and by female veterans, with travel demands and transportation difficulties sometimes exacerbated by veterans’ functional status, resulting in delayed or forgone care.3,4

Technology was seen as part of the solution by offering alternate ways to access care.5 Research suggested telehealth interventions could improve access, including speeding time to treatment while achieving results similar to in-person visits in terms of patient satisfaction and experience of care.6 Simultaneously, there were concerns about implementation of new technologies introducing problems such as privacy and confidentiality vulnerabilities and disruption to clinic work flow.7

In 2011, the VHA implemented specialty care electronic consults (e-consults) at 15 pilot sites. E-consults offer primary care providers (PCPs) the option to obtain specialty care expertise by submitting patient consults via the VHA’s electronic health record (EHR)8,9; specialists then respond with advice and/or a recommendation on whether the veteran should be seen in person. E-consults have been implemented in other healthcare systems as well.10-13 If implemented effectively, e-consults should improve specialty care access and reduce travel for veterans.
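For concreteness, the following minimal sketch models the e-consult exchange described above. The field names and the Disposition type are illustrative assumptions, not the VHA EHR's actual schema.

```python
# Illustrative model of the e-consult exchange; all names are assumptions,
# not the VHA EHR's actual schema.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Disposition(Enum):
    ADVICE_ONLY = auto()            # question resolved within the record
    IN_PERSON_RECOMMENDED = auto()  # specialist recommends a face-to-face visit

@dataclass
class EConsult:
    patient_id: str
    specialty: str
    question: str                   # the PCP's clinical question
    specialist_response: str = ""
    disposition: Optional[Disposition] = None

# A PCP submits the consult via the EHR; the specialist later responds with
# advice and/or a recommendation on whether the veteran should be seen in person.
consult = EConsult("pt-001", "endocrinology", "Guidance on insulin titration?")
consult.specialist_response = "Increase basal dose per attached protocol."
consult.disposition = Disposition.ADVICE_ONLY
```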

The VHA’s Office of Specialty Care Transformation (OSCT), which was responsible for overseeing the dissemination of e-consults, requested assistance in identifying the challenges associated with implementation to facilitate further dissemination. Thus, the Specialty Care Evaluation Center was created to evaluate e-consult implementation. We used the Consolidated Framework for Implementation Research (CFIR) to identify those factors that facilitated or hindered e-consult implementation among pilot sites. The CFIR consolidates and standardizes definitions of implementation factors, thereby providing a pragmatic structure for identifying potential influences on implementation and comparing findings across sites and studies.14,15 The CFIR is composed of 5 domains: intervention characteristics, outer setting, inner setting, characteristics of individuals involved in implementation, and the process of implementation.14 Thirty-seven constructs characterize these domains. The objective of this study was to use the CFIR to identify and compare implementation factors across sites, in an effort to learn from their experiences.

METHODS

A post-implementation interpretive evaluation16 was conducted using semi-structured, key informant interviews with structured ratings of CFIR constructs. The unit of analysis was the site; we included 8 of 15 pilot sites (geographic-site/specialty combinations), selected for variation in overall e-consult implementation rates, measured as the ratio of e-consults to all consults for the specialties of interest. Three e-consult sites were randomly selected from the 7 sites in the top half of e-consult implementation rates, and 5 were selected from the 8 sites in the bottom half (53% of sites interviewed). E-consult volume data were assessed from the beginning of the pilot period to initial site selection, May 2011 to February 2012.
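As an illustration of this site-selection step, here is a minimal sketch assuming one implementation rate per site; the site names, rates, and random seed are invented and do not reflect the study's data.

```python
# Hypothetical sketch of the site-selection logic described above; rates and
# site names are invented, not the study's data.
import random

random.seed(42)  # illustrative; the paper does not describe its randomization

# e-consult implementation rate = e-consults / all consults (May 2011-Feb 2012)
rates = {f"site_{i:02d}": random.uniform(0.01, 0.30) for i in range(1, 16)}

ranked = sorted(rates, key=rates.get, reverse=True)
top_half, bottom_half = ranked[:7], ranked[7:]  # 7 high- and 8 low-rate sites

selected = random.sample(top_half, 3) + random.sample(bottom_half, 5)
print(f"Selected {len(selected)}/15 sites ({len(selected)/15:.0%})")  # 8/15 = 53%
```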

A modified snowball sample was used to recruit participants, beginning with local site leaders and directors from both primary care and specialty care; e-consult programs straddle multiple clinical divisions, so some sites had multiple leaders. Interview participants were asked to identify specialists, PCPs, and support staff (nurse practitioners, pharmacists, and medical support assistants) engaged in the e-consult initiative. The rationale for conducting interviews at a small but purposefully selected sample of sites was to obtain an in-depth understanding of the differences in the contexts in which implementation occurred, and how these differences might be related to implementation success.17-19

Data and Analysis

To identify a subset of high-probability CFIR constructs, we first conducted a Web-based survey (eAppendix A [eAppendices available at www.ajmc.com]) of e-consult pilot site leaders, asking them to rate the relevance of CFIR constructs to e-consult implementation. The survey was returned by all 21 e-consult site leaders. Of the 37 CFIR constructs, 14 were rated as important or very important by at least 90% of participants (Table 1). An interview guide (eAppendix B) was developed around those constructs and updated iteratively, a standard accepted practice in qualitative evaluations.20,21 Prior to conducting the interviews, analysts participated in 2 in-person, 2-day CFIR qualitative analysis training meetings, which included conducting CFIR ratings, group debriefings, and discussions of ratings. Interviews were conducted by telephone by an interviewer and a note-taker and were digitally recorded. Interview pairs reviewed and clarified interview notes after each interview, referring to recordings as needed. The pairs then independently coded interview notes from each participant according to CFIR constructs, ensuring that notes were consistent with the definitions of the constructs.
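The screening rule amounts to a simple threshold filter. A minimal sketch under assumed response counts (construct names follow the CFIR; the counts are invented):

```python
# Construct-screening sketch: keep constructs rated important/very important
# by at least 90% of the 21 site leaders. Counts here are invented.
N_LEADERS = 21

endorsements = {  # construct -> no. of leaders rating it important/very important
    "compatibility": 20,
    "networks_and_communications": 19,
    "available_resources": 21,
    "leadership_engagement": 15,  # below threshold -> excluded from the guide
    # ... remaining CFIR constructs
}

threshold = 0.90 * N_LEADERS  # 18.9, so 19 or more endorsements are required
key_constructs = sorted(c for c, n in endorsements.items() if n >= threshold)
print(key_constructs)  # the study retained 14 constructs at this step
```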

Following coding of the interview responses, the pairs rated the influence of each construct in the organization (positive or negative) and the magnitude or strength of its influence22 (−2, −1, 0, +1, +2) using established criteria (Table 2). Pairs distinguished constructs that were not specifically mentioned (missing) from those with ambiguous or neutral effects (rated 0). Following independent coding, pairs convened via phone or in person to resolve discrepancies and reach consensus, based on consensual qualitative research methods.20,21 Using ratings across participants and participants’ roles (some participants’ responses were weighted more heavily than others), pairs derived an overall rating for each construct at each site and noted whether there was significant variability for a construct (a difference of at least 2 points across 2 or more participants). Assigning ratings to the qualitative interview data in this way allows for a systematic, rapid comparison of findings across sites.23 A matrix of ratings for all constructs across sites was developed and used to examine the extent to which constructs were more likely to be rated as negative or zero/mixed among sites with a low volume of e-consults and more likely to be rated as positive at sites with a high volume.14
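The site-level rating matrix and the cross-site comparison can be expressed compactly. Below is a hedged sketch with invented ratings (the study's actual ratings appear in Table 4); the variability flag and the "distinguishing construct" rule are one possible operationalization of the criteria just described:

```python
# Site-level CFIR rating matrix sketch; ratings lie in {-2, -1, 0, +1, +2},
# and None marks a construct that was not mentioned (missing, not neutral).
# All values below are invented for illustration.
site_ratings = {
    "high_A": {"compatibility": +2, "training": +1},
    "high_B": {"compatibility": +1, "training": +2},
    "low_A":  {"compatibility": -1, "training": 0},
    "low_B":  {"compatibility": -2, "training": None},
}

def high_variability(participant_ratings):
    """Flag significant variability: ratings at least 2 points apart
    across 2 or more participants."""
    vals = [r for r in participant_ratings if r is not None]
    return len(vals) >= 2 and max(vals) - min(vals) >= 2

def distinguishes(construct, high_sites, low_sites):
    """True if the construct is rated positive at the high-volume sites and
    negative, neutral, or mixed (i.e., not positive) at the low-volume sites."""
    highs = [v for s in high_sites if (v := site_ratings[s].get(construct)) is not None]
    lows = [v for s in low_sites if (v := site_ratings[s].get(construct)) is not None]
    return bool(highs and lows) and min(highs) > 0 and max(lows) <= 0

print(distinguishes("compatibility", ["high_A", "high_B"], ["low_A", "low_B"]))  # True
print(high_variability([+2, -1, 0]))  # True: spread of 3 points
```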

RESULTS

Thirty-seven interviews were completed with participants across 8 sites (Table 3). At all sites, a minimum of 3 people were interviewed, including e-consult site leader(s). In site-level CFIR ratings, 3 CFIR constructs had negative ratings in both low- and high-volume sites: design quality and packaging (perceptions of how the intervention is bundled and presented), leadership engagement, and goals and feedback, suggesting that these might be areas of concern for the VHA. Nevertheless, the high-volume sites were able to overcome these challenges. Specifically, 4 CFIR constructs had more positive ratings at high-volume sites and more negative, neutral, or mixed ratings at low-volume sites, suggesting they might be critical implementation determinants: 1) compatibility, 2) networks and communications, 3) available resources (specifically training), and 4) access to knowledge and information. Differences between the low- and high-volume sites for each of these constructs are described below, and more examples are provided in Table 4.

Compatibility. Compatibility refers to the degree of tangible fit between the meaning and values attached to the intervention and individuals’ own norms, values, and perceived risks and needs, as well as how the intervention fits with existing work flows and systems.14 Participants’ opinions on compatibility varied at low-volume sites. Some PCPs perceived e-consults as adding to their workload and were not happy with the transfer of responsibility for certain tasks: “I feel like they’ve tried to transfer a lot of the work and basically [are] making the PCP a clerical person to collect and collate and put all this data together.” Others at low-volume sites saw the potential for e-consults to make a difference, but were frustrated with the need to account for numbers of e-consults: “Here’s what drives me nuts. We have always done e-consults here. We just didn’t call them e-consults…Then suddenly someone gave them a name—e-consults—and someone decided we could measure them. Had to change the process to deliver advice so they could get counted…[the] number of e-consults isn’t the be-all, end-all. [We] do [e-consults] to decrease visits. Appropriate to measure prompt access, not number [of e-consults].”

For high-volume sites, e-consults were more consistently described as good for work flow by streamlining existing consult processes. One e-consult site leader from a high-volume site thought the process was very efficient: “I love it; I think it’s fantastic. There are many times things come up and I would like opinions on and get notes in [the] chart but I don’t think the provider needs to see the patient. I can do it when I have time to organize my time and thoughts…Most of the time this is faster than [a] face-to-face appointment.”

Examining further the differences in context between the low- and high-volume sites that might account for differences in perception of the compatibility of e-consults with existing processes, we found that the high-volume sites incorporated e-consults in ways that improved efficiency of operations, whereas the low-volume sites did not. Specifically, high-volume sites spent considerable time and effort tailoring the EHR templates so they could be completed easily and quickly. One site hired a pharmacist to handle the additional workload needed to generate and follow up on e-consults. In contrast, low-volume sites did not take extra steps to facilitate implementation.

Networks and communications. The networks and communications construct refers to the nature and quality of webs of social networks and of formal and informal communications within an organization.14 With e-consults, specialists must reach out to PCPs to engage them, so it is important that good networks and communications exist to facilitate this engagement. Most low-volume sites noted that there was little to no communication around implementation of e-consults. One specialist at a low-volume site said variability in communication within their Patient-Aligned Care Team (PACT; the VHA’s version of patient-centered medical home) created a barrier to implementation of e-consults: “…meaning, there’s many, many different ways these PACT teams communicate with each other, and because they’re ultimately responsible for implementing the e-consults, I would say the greatest barrier is the way they communicate. By sticky notes, phone calls, CPRS [EHR], I would say the variability [among] those methods is our largest barrier to successfully implement these, because [the] system needs to take into account that variability and there’s a significant amount.”

In contrast, PCPs at high-volume sites noted the existence of good communication and relationships between PCPs and specialists: “There’s been a lot of consultation between the e-consult team and us, so we are happy with the product we have… I think there’s a better spirit of collegiality from e-consults too.” The same PCP added, “…specialists are accommodating, easily approachable, stop by and talk to us… We are at an advantage, even though we are [a] large [medical center, because] there is a good relationship between PCP and specialist… Everyone is all on the same floor, so [there is] definitely good communication between PCPs and specialists.” Another high-volume site participant noted that communication with e-consults was vital to successful implementation: “Everyone’s been very helpful especially in neurosurgery, especially in communicating and I think that’s the biggest key.”

Available resources: training. Training is a sub-construct of available resources, focusing on whether the training provided has been helpful. Our results suggested that training is most effective when it includes one-on-one, hands-on demonstrations; such training was less common at low-volume sites. One participant wished there was “an education component, educating providers about how to follow this new pathway,” and noted that, “I didn’t get any training whatsoever. Should have been initial standard training, not just [an] implementation guide; that would have been very beneficial.” In contrast, a high-volume site participant noted that training was crucial, “the key thing to getting this [e-consults] implemented.” Another high-volume site participant said it was instrumental that a specialist was hired specifically to implement e-consults, and that under the specialist’s leadership, several trainings were conducted with PCPs to familiarize them with the program.

Access to knowledge and information. Access to knowledge and information refers to the ease of access to knowledge and information about e-consults in relation to work tasks.14 At low-volume sites, concerns were expressed with acquiring access to information at local and national levels. Some participants felt they were provided with very little assistance with implementation, and that processes were confusing. One PCP said, “[I’m] not sure who [the] e-consult coordinator is now. [I] felt like I was left on my own.” Another noted, “It took 6 months from wanting to start [e-consults] to launching because of misinformation we were getting from both what Central Office wants on forms and what we could link and how to do this and getting time and attention from CAC [Information Technology Services].” At high-volume sites, participants felt there was a clear point of contact for obtaining information. One specialist noted their point of contact was “a really good source of information, and constructive in putting me in contact with other diabetes specialists in the country, and supporting the efforts to learn from our colleagues…[I’ve] been able to grow in unique ways because of his guidance.”

DISCUSSION

Interviews and structured site-level ratings were used to identify a subset of 4 CFIR constructs that may be critical factors for implementing an e-consult initiative. A closer review of the interview responses offers suggestions for why the low- and high-volume sites implemented the initiative differently, which helps to explain why they differed with respect to their perceptions of compatibility, networks and communications, training, and access to knowledge and information. In essence, the high-volume sites invested more time, energy, and resources in implementing the program than did the low-volume sites. This is not a surprising or very informative conclusion by itself, but the benefit of our approach lies in the identification of the specific areas (specific CFIR constructs) in which time, energy, and resources should be expended to achieve the best results. Specifically, the high-volume sites devoted greater effort to developing mechanisms to make e-consults more efficient for staff (ie, developing easy-to-use templates and designating staff to help with the additional workload generated by the consults); investing in training; and designating a point person to answer questions and provide information about the program. These efforts helped achieve positive results despite challenges with leadership engagement, limited materials received for implementation, and poor feedback on the status of goals or implementation.

Although the quality of networks and communications also differed between the low- and high-volume sites, it is more difficult to understand the reason for this difference and, in turn, to make specific recommendations for how sites interested in implementing e-consults might address it. What we can say is that sites with good networks and communications between PCPs and specialists have a better chance of succeeding in implementing a program, such as e-consults, that requires coordination across departments. Sites at which communication is a problem should consider this a red flag before implementing similar programs.

While we cannot conclude that implementation success was determined by these factors, or that these same factors would be important in other sites or initiatives, the findings about the potential role of specific contextual factors were helpful to the VHA OSCT and were used to generate recommendations in subsequent implementation guides for e-consults and for other initiatives focused on improving access to specialty care.

Qualitative data collection and analysis are common in implementation studies because of the wealth of information that can be obtained from in-depth interviews compared with closed-ended survey questions. However, there is a need to use a common terminology and to code qualitative data in a way that facilitates analysis not only across cases within a single study, but also across studies. We hope this paper illustrates the utility of the study’s methods for these purposes, encompassing both quality improvement (QI) and research. Although attention was paid to ensuring qualitative rigor by means of a clear and transparent data collection/analysis protocol, reflexivity, and peer debriefing, this protocol could be further improved with the use of a true consensual approach and/or with triangulation via multiple methods to validate the analyses.24,25 Regardless of the specific methods used to increase the reliability of the coding process, applying the widely accepted terminology of the CFIR to analyze differences between low- and high-implementation sites allows findings from other small-sample, qualitative QI studies to begin to be combined. Meta-analyses of these data can then be conducted to improve generalizability and add to our understanding of the most important factors affecting implementation success.

Limitations

One limitation of this study is the limited generalizability of the findings beyond the participating sites and beyond e-consults. However, the purpose of this study was not to contribute to general, context-independent conclusions. Instead, the purpose was to obtain context-dependent knowledge, to help program leaders better understand the important role of context in implementation. In addition, while the research team took many steps to ensure rigorous qualitative analysis when applying the CFIR, these steps cannot eliminate the risk of researcher subjectivity, which is inherent to all qualitative analysis. However, the steps applied in the analysis give us confidence in the reliability of the ordinal CFIR ratings and that the rating process could be replicated and generate similar results. Nevertheless, because of the rapid analytic approach used, we did not have time to follow a true consensual research approach,15,26 which we recommend be used whenever possible to help reduce bias.

Another limitation is that not all CFIR constructs were considered in the coding process, as analysts focused on the shortened list of constructs, informed by the survey completed by clinical leads prior to the interviews. In addition, not all sites had sufficient data to assign a code or rating to each construct from the shortened list; thus, data on some constructs were missing for some sites.

CONCLUSIONS

The veteran population the VA serves may benefit from VHA healthcare providers’ use of e-consults to improve access to, and quality of, patient care. Our study identified 4 critical implementation factors: compatibility, networks and communications, available resources (specifically training), and access to knowledge and information, on which future VHA medical centers can focus to successfully implement e-consults and similar telehealth initiatives. Furthermore, we observed that sites that devoted effort to making e-consults more efficient for staff, invested in training, and designated a point person to answer questions and provide information about the program were more successful in implementing e-consults than were other sites. These results have important policy implications: successfully implemented initiatives like e-consults may lead to improved patient care and shorter wait times, and they can spare patients the time and travel of specialty care visits that can instead be addressed electronically. Further, our study demonstrates the importance of rigorous evaluation frameworks such as the CFIR for fully understanding the implementation processes of such initiatives, both within and outside of the VA.

Acknowledgments

The authors represent the VHA Specialty Care Transformation Initiative Evaluation Center, and greatly appreciate and thank many other evaluation center members who contributed to data collection and analysis. The authors also wish to thank Tabitha Metreger for scheduling and coordination; Jeffrey Todd-Stemberg for obtaining data from the Corporate Data Warehouse; Omar Cardenas at VA Central Office for providing a variety of information on the e-consult initiative; and Rachel Orlando for assistance in submitting the manuscript. The authors are extremely grateful to all of the VHA clinicians and staff at the e-consult sites who generously shared their experiences and insights.

Author Affiliations: VA Eastern Colorado Health Care System (LMH, CB, PMH), Denver, CO; VA Puget Sound Health Care System (GS, CDH), Seattle, WA; Louis Stokes Cleveland VA Medical Center (DA, LDS, SK), Cleveland, OH; Office of Specialty Care and Specialty Care Transformation (SK), Washington, DC; VA Ann Arbor Health Care System (JL), Ann Arbor, MI.

Source of Funding: This work was supported by the US Department of Veterans Affairs, Office of Specialty Care Transformation and Office of Health Services Research, and undertaken by the Specialty Care Transformation Initiative Evaluation Center. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

Author Disclosures: Drs Helfrich, Stevenson, Kirsh, and Ho, and Ms Haverhals are employees of VA (the evaluation in this study was of VA specialty-care initiatives). Dr Helfrich also has received VA grants. Dr Battaglia works for VHA in nursing/research. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article. Findings from this paper were presented in a poster session at the Society for General Internal Medicine Conference in Denver, Colorado, on April 25, 2013.

Authorship Information: Concept and design (LMH, CDH, CB, DA, SK, JL); acquisition of data (LMH, GS, CDH, LDS, SK); analysis and interpretation of data (LMH, GS, CDH, DA, SK, PMH, JL); drafting of the manuscript (LMH, GS, CDH, CB, LDS, SK, JL); critical revision of the manuscript for important intellectual content (LMH, CDH, CB, DA, LDS, SK, PMH, JL); statistical analysis (LMH); provision of patients or study materials (LMH); obtaining funding (CDH, DA, SK, PMH); administrative, technical, or logistic support (LMH); and supervision (JL).

Address correspondence to: Leah M. Haverhals, MA, Health Research Specialist, VA Eastern Colorado Health Care System, 1055 Clermont St, Research A151, Denver, CO 80220. E-mail: leah.haverhals@va.gov.

REFERENCES

1. Fortney J, Kaboli P, Eisen S. Improving access to VA care. J Gen Intern Med. 2011;26(suppl 2):621-622.

2. Pizer SD, Prentice JC. What are the consequences of waiting for health care in the veteran population? J Gen Intern Med. 2011;26(suppl 2):676-682.

3. Buzza C, Ono SS, Turvey C, et al. Distance is relative: unpacking a principal barrier in rural healthcare. J Gen Intern Med. 2011;26(suppl 2):648-654.

4. Washington DL, Bean-Mayberry B, Hamilton AB, Cordasco KM, Yano EM. Women veterans’ healthcare delivery preferences and use by military era: findings from the National Survey of Women Veterans. J Gen Intern Med. 2013;28(2):571-576.

5. Fortney JC, Burgess JF Jr, Bosworth HB, Booth BM, Kaboli PJ. A re-conceptualization of access for 21st century healthcare. J Gen Intern Med. 2011;26(suppl 2):639-647.

6. Kehle S, Greer N, Rutks I, Wilt T. Interventions to improve veterans’ access to care: a systematic review of the literature. J Gen Intern Med. 2011;26(suppl 2):689-696.

7. Kvedar JC, Nesbitt T, Kvedar JG, Darkins A. E-patient connectivity and the near term future. J Gen Intern Med. 2011;26(suppl 2):636-638.

8. Rosland AM, Nelson K, Sun H, et al. The patient-centered medical home in the Veterans Health Administration. Am J Manag Care. 2013;19(7):e263-e272.

9. Wild K, Tanner C, Kaye J, et al. Electronic consults to facilitate specialty dementia assessment and care. Alzheimers Dement. 2012;8(suppl 4):231.

10. Pap SA, Lach E, Upton J. Telemedicine in plastic surgery: e-consult the attending surgeon. Plast Reconstr Surg. 2002;110(2):452-456.

11. Salvo M, Nigro SC, Ward D. Pharmacist-generated electronic consults to improve hypertension management in a multisite health centre: pilot study. Inform Prim Care. 2012;20(3):181-184.

12. Angstman KB, Rohrer JE, Adamson SC, Chaudhry R. Impact of e-consults on return visits of primary care patients. Health Care Manag (Frederick). 2009;28(3):253-257.

13. Ackerman S, Intinarelli G, Gleason N, et al. “Have you thought about sending that as an e-consult?”: primary care providers’ experiences with electronic consultations at an academic medical center. J Gen Intern Med. 2014;29(suppl 1):15.

14. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

15. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51.

16. Stetler CB, Legro MW, Wallace CM, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(suppl 2):S1-S8.

17. Flyvbjerg B. Five misunderstandings about case-study research. Qualitative Inquiry. 2006;12(2):219-245.

18. Yin RK. Applications of Case Study Research, Volume 34. 2nd ed. Thousand Oaks, CA: Sage; 1993.

19. Yin RK. Case Study Research: Design and Methods (Applied Social Research Methods Series, Volume 5). 2nd ed. Thousand Oaks, CA: Sage; 1994.

20. Hill CE, Knox S, Thompson BJ, Williams EN, Hess SA, Ladany N. Consensual qualitative research: an update. J Couns Psychol. 2005;52(2):196-205.

21. Hill CE, Thompson BJ, Williams EN. A guide to conducting consensual qualitative research. The Counseling Psychologist. 1997;25(4):517-572.

22. Averill JB. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qual Health Res. 2002;12(6):855-866.

23. Rihoux B, Ragin CC. Why compare? why configurational comparative methods? In: Rihoux B, Ragin CC, eds. Configurational Comparative Methods. Thousand Oaks, CA: Sage; 2009.

24. Gilson L, Hanson K, Sheikh K, Agyepong IA, Ssengooba F, Bennett S. Building the field of health policy and systems research: social science matters. PLoS Med. 2011;8(8):e1001079.

25. Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ. 2000;320(7226):50-52.

26. Damschroder LJ, Goodrich DE, Robinson CH, Fletcher CE, Lowery JC. A systematic exploration of differences in contextual factors related to implementing the MOVE! weight management program in VA: a mixed methods study. BMC Health Serv Res. 2011;11:248.
