Abstract

AI systems have become increasingly integrated into our everyday lives, and harms caused by these systems have graduated from raising hypothetical ethical concerns to questions of actual legal liability. Civil liability schemes are generally designed to address harms caused by humans; thus, it may be tempting to analogize new types of harms caused by AI systems to familiar harms caused by humans in order to justify commandeering existing human-centered legal tools to assess AI liability. However, the analogy is inappropriate and misrepresents salient legal differences in how harms are committed by humans and AI systems. Thus, “as is often the case when analogical reasoning cannot justifiably stretch extant law to address novel legal questions raised by a new technology, new law is needed.”

First, I will discuss the legally salient difference between human and AI decision-making. Second, I will highlight two specific AI harms – autonomous vehicle product liability harms and predictive privacy harms – for which the analogy to human liability is insufficient. Finally, I will propose a new legal tool that may remedy the deficiencies of applying human liability schemes to AI harms.
