Abstract
Artificial Intelligence (AI) is being used by governments across the world to enforce regulatory mandates, adjudicate benefits and privileges, predict and analyze risks, and much more. Although this has significant potential to increase efficiency and responsiveness, it also poses risks to transparency and government accountability and threatens to amplify discrimination and bias. It is therefore crucial that these AI systems be overseen and regulated effectively. This essay argues against the model that current regulatory frameworks are adopting to govern AI use: one resembling product liability.
Through the lens of the European Union's AI Act and AI Liability Directive, it highlights the unsuitability of the product liability framework for public administration: liability attaches only to the developer of the AI, allowing governments to escape liability by (falsely) distancing themselves from the AI's functioning; governments are incentivized to outsource AI rather than develop it themselves; and redress mechanisms for AI-induced harm are difficult to navigate. This essay asks that we treat AI in government not as a product but as a system, and that governments shoulder some of the responsibility for AI harm: through a joint liability regime in which both the AI developer and the government user share responsibility, and by holding government users accountable when they deploy a system in a way not originally intended.
Recommended Citation
Shruti Trikanad, Regulating AI Beyond Product Liability, 31 Mich. Tech. L. Rev. 341 (2025).
Available at: https://repository.law.umich.edu/mtlr/vol31/iss2/4