Document Type

Article

Publication Date

10-2019

Abstract

Artificial intelligence (AI) is quickly making inroads into medical practice, especially in forms that rely on machine learning, with a mix of hope and hype. Multiple AI-based products have now been approved or cleared by the US Food and Drug Administration (FDA), and health systems and hospitals are increasingly deploying AI-based systems. For example, medical AI can support clinical decisions, such as recommending drugs or dosages or interpreting radiological images. One key difference from most traditional clinical decision support software is that some medical AI may communicate results or recommendations to the care team without being able to communicate the underlying reasons for those results. Medical AI may be trained in inappropriate environments, using imperfect techniques, or on incomplete data. Even when algorithms are trained as well as possible, they may, for example, miss a tumor in a radiological image or suggest the incorrect dose for a drug or an inappropriate drug. Sometimes, patients will be injured as a result. In this Viewpoint, we discuss when a physician could likely be held liable under current law when using medical AI.

Comments

Published as Price, W. Nicholson, II, Sara Gerke, and I. Glenn Cohen. "Potential Liability for Physicians Using Artificial Intelligence." JAMA: The Journal of the American Medical Association 322, no. 18 (2019): 1765-1766. DOI: 10.1001/jama.2019.15064

DOI

https://doi.org/10.1001/jama.2019.15064

