New technologies allow users to communicate ideas to a broad audience easily and quickly, affecting both the way ideas are interpreted and their credibility. Any social network user can simply click "share" or "retweet" to republish an existing post automatically, exposing the message to a wide audience. This dissemination of ideas can raise public awareness of important issues and bring about social, political, and economic change.

Yet digital sharing also provides vast opportunities to spread false rumors, defamation, and fake news stories at the thoughtless click of a button. The spreading of falsehoods can severely harm the reputation of victims, erode democracy, and infringe on the public interest. Holding the original publisher accountable and collecting damages from that publisher offers only limited redress, since the harmful expression can continue to spread. How should the law respond to this phenomenon, and who should be held accountable?

Drawing on multidisciplinary social science scholarship from network theory and cognitive psychology, this Article describes how falsehoods spread on social networks, the different motivations for disseminating them, the gravity of the harm they can inflict, and the likelihood of correcting false information once it has been distributed in this setting. The Article also describes the top-down influence of social media platform intermediaries, which enhance dissemination by exploiting users' cognitive biases and creating social cues that encourage users to share information. Understanding how falsehoods spread is a first step toward providing a framework for meeting this challenge.

The Article argues that it is high time to rethink intermediary duties and obligations regarding the dissemination of falsehoods. It examines a new perspective for mitigating the harm that such dissemination causes: harnessing social network intermediaries to meet this challenge beginning at the stage of platform design. To that end, the Article proposes innovative solutions for mitigating the careless, irresponsible sharing of false rumors.

The first solution focuses on a platform's accountability for influencing user decision-making processes: "nudges" can discourage users from thoughtlessly sharing falsehoods and promote accountability ex ante. The second solution focuses on enabling effective ex post removal of falsehoods, defamation, and fake news stories from all profiles and locations to which they have spread. Shaping user choices and designing platforms are value-laden activities that reflect the platform's particular set of preferences and should not be taken for granted. This Article therefore proposes ways to incentivize intermediaries to adopt these solutions and mitigate the harm generated by the spreading of falsehoods. Finally, the Article addresses the limitations of the proposed solutions, yet concludes that they are nonetheless more effective than current legal practices.