The interaction of artificial intelligence (“AI”) and health privacy is a two-way street, and both directions are problematic. This Article makes two main points. First, the advent of artificial intelligence weakens the legal protections for health privacy by rendering deidentification less reliable and by enabling the inference of health information from unprotected data sources. Second, the legal rules that protect health privacy nonetheless detrimentally impact the development of AI used in the health system by introducing multiple sources of bias: the collection and sharing of data by a small set of entities, the process of data collection while following privacy rules, and the use of non-health data to infer health information. The result is an unfortunate anti-synergy: privacy protections are weak and illusory, yet the rules meant to protect privacy hinder other socially valuable goals. This state of affairs creates biases in health AI, privileges commercial research over academic research, and is ill-suited to either improve health care or protect patients’ privacy. The ongoing dysfunction calls for a new bargain between patients and the health system about the uses of patient data.
Health Law and Policy | Law and Economics | Science and Technology Law
Working Paper Citation
Price, W. Nicholson II, "Problematic Interactions between AI and Health Privacy" (2021). Law & Economics Working Papers. 205.