Document Type

Article

Publication Date

2025

Abstract

As medical AI begins to mature as a health-care tool, the task of governance grows increasingly important. Ensuring that medical AI works, works where it's used, and works for the patient in the moment is a challenging, multifaceted task. Some of this governance can be centralized, in review by FDA or by national accreditation labs, for instance. Some must be local, performed by the hospital or health system about to deploy the product in its own unique environment. But a large amount of governance is left to the individual provider in the room: the human in the loop who presumably knows the patient and the health-system environment, and who can ensure that the AI system is used safely and effectively. This is a hefty burden, and a growing body of empirical research shows that physicians and other providers are poorly prepared to carry it. How should policymakers and industry leaders develop performance standards that account for the variability of humans in the loop and the variation among the situations they will face? The notion that final responsibility belongs to the physician poorly reflects the reality of modern medical technology and practice. Policymakers will need to come to grips with this new reality if they aim to ensure that safe, effective AI is accessible to patients across the entire spectrum of the health-care system.

Comments

Originally published as W. Nicholson Price II, Clinicians in the Loop of Medical AI, 74 Emory L. J. 1265 (2025). Available at: https://scholarlycommons.law.emory.edu/elj/vol74/iss5/6

