The article is here; the introduction:
As articulated by Justice Brandeis in Whitney v. California (1927), a fundamental assumption of First Amendment jurisprudence is that the best remedy for potentially harmful speech, including false and misleading speech, is “more speech, not enforced silence.” This extended Oliver Wendell Holmes’s “free exchange of ideas” discourse model, in which the ultimate good is achieved when people are free to exchange ideas in a market without fear of punishment from the government (Nunziato 2018). However, the idea that an unregulated marketplace of ideas leads to the greatest public good has been increasingly challenged as our politics have become more divisive, polarized, and burdened by conspiracy theories that can spread unhindered through online networks (e.g., Sunstein 2021).
The January 6 Capitol riot provides the most egregious example of this current state of affairs: Supporters of the sitting president, believing conspiracy theories about a stolen election (many of which were broadcast through social media), attacked the Capitol to disrupt the certification of the 2020 election. Of course, this is not an isolated incident: believers in conspiracy theories have been linked to numerous cases of social harm. Supporters of the conspiracy theory-laden QAnon movement have engaged in harassment, kidnapping, domestic terrorism, and murder (Bump 2019). Those who believe in COVID-19 conspiracy theories, of which there are many, reject social distancing, masking, and vaccination (Romer and Jamieson 2020), allowing the virus to spread unhindered. If conspiracy theories push people to take violent or otherwise harmful actions, doesn’t the government have a responsibility to prevent such harm by limiting the reach of conspiracy theories?
It is clear that conspiracy theories (and other equally dubious ideas) are subject to existing jurisprudential doctrine regarding defamation, imminent wrongful action, threats, and misrepresentations (Han 2017, 178). Indeed, one could argue with relative ease at least this much: some conspiracy theories serve no purpose in contributing to the marketplace of ideas, promoting healthy democracy, or aiding in the search for truth, and any personal or social harm resulting from such conspiracy theories outweighs the merits of their protection. But as with all other forms of speech, circumstances matter and, under current legal frameworks, only particular conspiracy theories – those that fall into one of the low-value speech categories listed above – will be denied constitutional protection. The result is that most conspiracy theories, even those that are intentional lies, will constitute protected speech.
Anxiety about the role that conspiracy theories have played in recent illegal and normatively undesirable actions such as those described above has led some legal scholars to argue that these theories should receive less protection under the First Amendment than they currently receive (Sunstein 2021; Han 2017; Hay 2019; Waldman 2017; Schroeder 2019; Thorson and Stohler 2017). Their argument is that existing doctrine is antiquated and ill-suited to alleviating increasingly dire social ills in an era when ideas can travel farther and faster than ever before.
This also appears to be the position of many policy makers (e.g., Klobuchar 2022). In recent years, the President of the United States and members of Congress have publicly pressured social media companies over the promotion of conspiracy theories (and other dubious ideas) on their platforms, demanding that these companies take “further steps” and accusing them of “killing people” (Bose and Culliford 2021). Congress has held hearings on the scope of online conspiracy theories, resulting in a series of proposals at the national and state levels to curb this type of potentially harmful speech through content moderation and legal sanctions (Walker 2020; Riggleman 2020; Heilweil 2020). For example, Sen. Amy Klobuchar, D-MN, sponsored a bill that would eliminate the protections afforded by Section 230 of the Communications Decency Act if health misinformation, as defined by the Department of Health and Human Services, were algorithmically promoted by a platform (MacCarthy 2021).
In this article, we argue that, from a normative perspective, laws limiting the spread of conspiracy theories should be permissible only if two conditions are met: 1) “conspiracy theory” can be specifically defined, and individual ideas can be classified as conspiracy theories with a minimum of error; 2) the causal impact of conspiracy theories on illicit or otherwise dangerous behavior can be demonstrated empirically. Satisfying the first condition prevents the limitation of speech based exclusively on the ideology of the ideas expressed (i.e., viewpoint discrimination); satisfying the second condition ensures that there is a reasonable social interest in restricting the speech.
Drawing on an interdisciplinary body of literature on the basic nature, epistemology, and correlates of belief in conspiracy theories, we demonstrate that neither condition can be met. Indeed, the concise definition of “conspiracy theory” is hampered by age-old epistemological dilemmas; the accurate categorization of ideas as conspiracy theories is impeded by a combination of definitional challenges and human psychology; and researchers’ ability to explain or predict illicit, dangerous behavior using conspiracy theory communication or belief is extremely weak.
Furthermore, we question the premises underlying the desire to build a new legal framework to address conspiracy theories. Specifically, we argue that conspiracy theories pose no greater problems today than in the past, that social media and other new communications technologies have not ushered in an increase in conspiracy theories, and that the dangers arising from conspiracy theories materialize chiefly when political leaders, rather than private citizens, traffic in them.
Finally, we argue that conspiracy theories often possess the qualities of protected speech; that is, they can promote, and historically have promoted, democracy and the search for truth. Not only does this evidence preclude the construction of a new legal framework designed to limit conspiratorial speech, but it shows how other proposals along these lines would capriciously censor ideas based on viewpoint, produce a severe chilling effect, ensnare more speech than intended, and do little to prevent the harms they target.
The post on the Journal of Free Speech Law article “What is the damage?,” by Profs. Adam Enders and Joseph Uscinski, appeared first on Reason.com.