Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within just a few decades, however, the private sector has seen a wild proliferation of AI systems, many of them more powerful and penetrating than anticipated. In many cases, AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. Yet proprietary algorithmic systems can be technically complex, legally shielded as trade secrets, and managerially invisible to outsiders, creating an opacity that hinders oversight. Accordingly, many AI-based services and products have proven invasive, manipulative, and biased, eroding data privacy rules, human rights, and democratic norms in modern society.

The emergence of AI systems has thus generated a deep tension between algorithmic secrecy and data privacy. In today’s policy debate, however, algorithmic transparency in the privacy context remains managerially disregarded, commercially evaded, and legally unactualized. This Note is the first to illustrate how regulators should rethink data privacy strategies through the interplay of human rights, algorithmic disclosures, and whistleblowing systems. As the world increasingly looks to the European Union’s (EU) data protection law, the General Data Protection Regulation (GDPR), as a regulatory frame of reference, this Note assesses the effectiveness of the GDPR’s response to the data protection issues raised by opaque AI systems. Based on a case study of Google’s AI applications and privacy disclosures, it demonstrates that even the EU fails to enforce its data protection rules against the problems caused by algorithmic opacity.

This Note argues that because algorithmic opacity has become a primary barrier to oversight and enforcement, regulators in the EU, the United States, and elsewhere should not overprotect the secrecy of every aspect of AI applications that implicate public concerns. Rather, policymakers should consider imposing a duty of algorithmic disclosure, through sustainability reporting and whistleblower protection, on firms deploying AI, in order to maximize effective enforcement of data privacy laws, human rights, and other democratic values.


Copyright © 2022 Sylvia Lu. DOI:

Originally published as Lu, Sylvia. "Data Privacy, Human Rights, and Algorithmic Opacity." California Law Review 110, no. 6 (2022): 2087-2147.