Abstract

The increasing prominence of artificial intelligence (AI) systems in daily life and the evolving capacity of these systems to process data and act without human input raise important legal and ethical concerns. This article identifies three primary AI actors in the value chain (innovators, providers, and users) and three primary types of AI (automation, augmentation, and autonomy). It then considers responsibility in AI innovation from two perspectives: (i) strict liability claims arising out of the development, commercialization, and use of products with built-in AI capabilities (designated herein as “AI artifacts”); and (ii) an original research study on the ethical practices of developers and managers creating AI systems and AI artifacts.

The ethical perspective is important because, at the moment, the law is poised to fall behind technological reality, if it has not done so already. Considering the liability issues in tandem with ethical perspectives yields a more nuanced assessment of the likely consequences and adverse impacts of AI innovation. This joint legal and ethical analysis should inform both companies thinking about their own liability and ways to limit it, and policymakers considering AI regulation ex ante.