Why Human Oversight Matters When Implementing AI in Diabetes Care: Marry Vuong, PharmD, BCPPS

Commentary
Video

Artificial intelligence–powered insulin dosing and virtual coaching offer faster answers and convenience, but accuracy and oversight are key to building trust.

Artificial intelligence (AI) is transforming how people manage diabetes, offering quick answers and personalized support through insulin dosing tools and virtual coaching, explains Marry Vuong, PharmD, BCPPS, chief of clinical operations, Perfecting Peds. But although these innovations promise greater convenience and efficiency, experts say the technology must be paired with trained oversight and patient-specific context to ensure safe, reliable care.

This transcript was lightly edited for clarity; captions were auto-generated.

Transcript

With AI-powered insulin dosing and virtual coaching gaining traction, how do you see these digital solutions improving diabetes medicine adherence?

I think it's awesome. We love AI, and I think it's great, to a certain point. It's great for finding shortcuts and quick answers. From a consumer's perspective, it's great because you don't have to call and wait to find an answer, and you don't have to ask someone directly.

But you have to be careful, because these models have to be trained so that they are as accurate as possible. It's a double-edged sword: great for convenience and quick answers, but buyer beware, because they can often make mistakes.

With the dose changes and all the technologies being brought out for these populations, I think it is awesome. But we should definitely be a little careful and always have someone double-check.

In my perfect world, the AI would generate it, and then a licensed professional would double-check it before it went off to the patient.

Are you cautious of AI letting details slip through the cracks or misinterpreting data due to research bias?

I think it's not so much details slipping through the cracks. We use AI a lot to try to generate quick answers, and I'll go through different bots to see what they derive; I have noticed some differences based on the sources.

I do like to see where their sources are. I'll ask, “What are your citations?” And sometimes, when I reverse search the citation, it won’t necessarily be the right citation. You have to be very specific with your questions, and you also have to train your AI model.

If you have a favorite one that you normally use, talk to it a lot. I'll give it my background, my voice, and the resources I like to use. Learning how to ask a question, then edit the question, and then continue asking more questions to get the right answer is valuable, and your AI model will continue to learn who you are.

I can't give up the one I have in my browser because it knows me. It knows everything in my voice. If I ask it for something, it knows my favorite databases, it knows how I like to answer things, what reading level I like my answers generated in—and it's just completely customized.

But it doesn't come that way out of the box. You have to really train it and ask it lots of questions so it knows how to answer them.
