Abstract

Before police perform a search or seizure, they typically must meet the probable cause or reasonable suspicion standard. Moreover, even if they meet the appropriate standard, their evidence must be individualized to the suspect and cannot rely on purely probabilistic inferences. Scholars and courts have long defended the distinction between individualized and purely probabilistic evidence, but existing theories of individualization fail to articulate principles that are descriptively accurate or normatively desirable. They overlook the only benefit that the individualization requirement can offer: reducing hassle. Hassle measures the chance that an innocent person will experience a search or seizure. Because some investigation methods meet the relevant suspicion standards but nevertheless impose too many stops and searches on the innocent, courts must have a lever independent from the suspicion standard to constrain the scope of criminal investigations. The individualization requirement has unwittingly performed this function, but not in an optimal way. Individualization has kept hassle low by entrenching old methods of investigation. Because courts designate practices as individualized when they are costly (for example, gumshoe methods) or lucky (for example, tips), the requirement has confined law enforcement to practices that cannot scale. New investigation methods such as facial-recognition software and pattern-based data mining, by contrast, can scale up law-enforcement activities very quickly. Although these innovations have the potential to increase the accuracy of stops and searches, they may also increase the total number of innocent individuals searched because of the innovations’ speed and cost-effectiveness. By reforming individualization to minimize hassle, courts can enable law-enforcement innovations that are fairer and more accurate than traditional police investigations without increasing burdens on the innocent.
