Document Type
Article
Publication Date
2024
Abstract
When medical AI systems fail, who should be responsible, and how? We argue that several features of medical AI complicate the application of existing tort doctrines and render them ineffective at creating incentives for its safe and effective use. In addition to complexity and opacity, the problem of contextual bias, where medical AI systems vary substantially in performance from place to place, hampers traditional doctrines. We suggest instead the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed for adaptation and monitoring. If that information is unavailable, liability should shift from hospitals to the developers who keep it secret.
Recommended Citation
Price, W. Nicholson, II, and I. Glenn Cohen. "Locating Liability for Medical AI." DePaul Law Review 73, no. 2 (2024): 339.
Included in
Artificial Intelligence and Robotics Commons, Bioethics and Medical Ethics Commons, Health Law and Policy Commons, Science and Technology Law Commons, Torts Commons
Comments
Digital Commons@DePaul © 2024. Originally published as W. N. Price II & I. G. Cohen, Locating Liability for Medical AI, 73 DePaul L. Rev. 339 (2024). Available at: https://via.library.depaul.edu/law-review/vol73/iss2/8