Abstract

Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic retinopathy. But AI doesn't always work, it doesn't work for everyone, and it doesn't work in every context. AI is likely to behave differently in the well-resourced hospitals where it is developed than in the poorly resourced frontline health environments where it might well make the biggest difference for patient care. To complicate matters further, AI is unlikely to go through the centralized review and validation process that other medical technologies, like drugs and most medical devices, undergo. Even if it did, ensuring high-quality performance across a wide variety of settings, including poorly resourced ones, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, in which quality evaluation and oversight take place in the settings of application, with policy assistance in developing local capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk.

Disciplines

Health Law and Policy | Law and Economics | Science and Technology Law

Date of this Version

March 7, 2022
