
Contributor: When AI Sees, Yet Cannot Judge—Why Health Care Must Remain Human-Led

Commentary

Although artificial intelligence (AI) is rapidly becoming a routine tool in health care, the human element of care remains essential.

AI can now see more than a clinician and act faster than a team. The temptation is clear: if the system generates the insight and recommends the action, why retain the human? But there is a fundamental problem. Care is not only about what works; it is about what is right. Each time we remove the human, we also remove something that only a human can carry.

There are 5 reasons why full automation cannot be the goal in health care.

Andy Truscott | Image credit: Accenture

1. Health care risk is not technical risk

AI’s advantage is its lack of sentience. It detects patterns without distraction and acts without fatigue. It reduces technical errors such as missed diagnoses, inconsistent documentation, and delayed escalation.

Yet it cannot carry moral risk. It cannot weigh harm that is ethical, relational, or situational. Health care involves judgment about what should happen, not simply what can.

Failures in clinical alerting, escalation models, and triage prioritization have demonstrated that high-accuracy systems can still cause significant harm when context is absent or assumptions go unchallenged.

2. Context is not always in the data

AI operates on the inputs it receives. Clinicians respond to what is present. A patient’s tone, posture, silence, or glance may redirect a clinical course. Technologies exist to parse vocal inflection, blink rates, and ambient behavior, and although the insights they offer can be astonishing, they remain generalized. Clinical context often lies in the unsaid and the unseen, in traits unique to the patient. It remains perceptible and discernible only to human intuition and resists codification.

When context is invisible to the model, it is excluded from the decision.

3. Trust in systems depends on transparency; trust in people depends on relationships

People trust automation when it is explainable and consistent. They accept systems governed by visible rules. They do not trust black-box logic to make personal decisions.

Patients want to be seen. They want to know that someone is accountable. That trust does not derive from interface design. It stems from clinical presence. The World Health Organization’s Ethics and Governance of Artificial Intelligence for Health affirms that explainability and accountability are not optional design features. They are ethical imperatives.

4. Failure still needs a human face

When harm occurs, someone must explain the decision. Systems cannot answer the question, “Why was this done to me?” Machines cannot apologize. Models cannot testify. Someone must own the choice. Accountability requires human presence. Governance frameworks such as the Global Strategy on Digital Health 2020–2025 and the NIST AI Risk Management Framework emphasize the need for oversight where outcomes are irreversible and human dignity is at stake.

5. Automation is not neutral: it must be governed

AI expands to fill the process. Without boundaries, it enters domains where failure remains silent until it is public. Some functions may be safely delegated. Others must remain human-led by design. That distinction must be deliberate. It cannot be left to drift.

Global standards such as ISO/IEC 42001 and the OECD AI Principles call for governance structures that assign responsibility in contexts of moral consequence.

Who carries the ethical burden?

If machines cannot carry moral weight, someone else must. That burden does not disappear. It transfers. Clinical leaders, boards, and system designers must not only ask, “What can we automate?” They must also ask, “What responsibility are we accepting by doing so?”

Delegating a decision to AI does not eliminate its ethical complexity. It simply moves it upstream to those who permitted the delegation. When harm occurs, the system is not accountable. The approver is.

We must govern not only for technical failure but for moral accountability. That means defining ownership, conducting ethical review of automation decisions, and building the capacity to intervene when machine logic outpaces human sense.

What leaders must do now

1. Rebuild workflows around decision time, not tool access

Clinical safety lives in the margin between insight and action. Protect that space. If the workflow compresses it, redesign the workflow.

2. Be honest about where humans are essential and where they are not

Not every decision needs a human. Some must have one. That distinction is strategic and must be drawn deliberately.

3. Train clinicians to use AI and to challenge it

The skill is not technical fluency. It is knowing when to accept, when to pause, and when to say no.

4. Treat time as a clinical asset

Time is not a cost center. It is where care occurs. Design systems that preserve it, especially when stakes are high and complexity is irreducible.

These are not aspirational principles. They are operational imperatives. AI is reshaping what we see, how we act, and how fast we must respond. It is accelerating care delivery while introducing complexity that outpaces adaptation.

If we remove the space for judgment, we remove judgment itself. What remains is automated throughput, resembling care yet absent its core.

Clinical judgment does not occur in milliseconds. It occurs in moments. If we automate those moments out of existence, we will not modernize health care. We will dismantle the conditions under which care is possible.
