Abstract

Political campaigns have always attracted significant attention, and politicians have often been the subjects of controversial, even outlandish, discourse. In the last several years, however, the risk of deception has drastically increased due to the rise of “deepfakes.” Now practically anyone can create audiovisual media that are both highly believable and highly damaging to a candidate. The threat deepfakes pose to our elections has prompted several states and Congress to seek legislative remedies that ensure recourse for victims and hold bad actors liable. These recent deepfake laws are vulnerable to attack on two fronts. First, they may unconstitutionally infringe on deepfake creators’ First Amendment rights. Second, they may fail to protect adequately against the most harmful deepfakes. This Note proposes a new approach to regulating deepfakes. By delineating a “foreseeable harm” standard, built on a totality-of-the-circumstances test rather than a patchwork of discrete elements, this Note addresses both concerns. A foreseeable harm standard is not only effective, workable, and constitutionally sound; it is also grounded in existing tort law. Moreover, a recent Supreme Court decision on false statements and the First Amendment, United States v. Alvarez, lends further support to such a standard. Adopting this standard would combat the looming threat of politically oriented deepfakes while preserving the constitutional right to free speech.
