Supreme Court rejects stricter liability for social media companies

Originally published in the Summer 2023 issue of California Publisher.

By Jason M. Shepard

Social media companies and online free speech advocates are breathing a sigh of relief after the U.S. Supreme Court rejected two efforts this term to impose greater legal liability for user-generated content posted online.

In two unanimous decisions issued in May, the Supreme Court rejected claims that social media companies should be held secondarily liable for aiding and abetting terrorist attacks because of extremist content posted on their platforms.

“With this decision, free speech online lives to fight another day,” Patrick Toomey, deputy director of ACLU’s National Security Project, said in a statement. “Twitter and other apps are home to an immense amount of protected speech, and it would be devastating if those platforms resorted to censorship to avoid a deluge of lawsuits over their users’ posts. Today’s decisions should be commended for recognizing that the rules we apply to the internet should foster free expression, not suppress it.”

Both cases were brought by families of individuals killed in terrorist attacks coordinated by the Islamic State of Iraq and Syria, or ISIS.

In Twitter v. Taamneh, U.S. relatives of Nawras Alassaf sued Facebook, Twitter and Google following a 2017 terrorist attack at a nightclub in Istanbul, Turkey, that killed 39 people and injured 69 others.

In Gonzalez v. Google, the family of Nohemi Gonzalez sued Google after she was killed in a 2015 terrorist attack in Paris that killed 130 people. Gonzalez was a 23-year-old California State University, Long Beach student studying abroad when she was killed.

In both cases, the families alleged that social media companies were complicit in promoting terrorist ideology by allowing ISIS messages to be posted and promoted on their platforms.

The plaintiffs alleged that the social media companies aided and abetted ISIS's efforts to recruit new terrorists and raise funds for terrorism, including through the promotion of terrorist content by algorithms, and that the platforms knew of these uses and failed to stop them.

The cases reached the Supreme Court on appeal from the Ninth Circuit Court of Appeals, based in San Francisco. In the Twitter case, the Ninth Circuit had allowed the case to move forward, while in the Gonzalez case, the Ninth Circuit ruled the plaintiffs were precluded from suing based on Section 230 of the Communications Decency Act, a landmark law providing websites with broad legal immunity for content posted by others.

The Court chose Twitter v. Taamneh as the vehicle for a detailed 30-page decision written by Justice Clarence Thomas. The Gonzalez v. Google ruling was a brief three-page decision, also unanimous, that largely referenced the Twitter decision.

Under the Antiterrorism Act and other federal laws, plaintiffs injured by international terrorism can seek damages from individuals and entities that "aided and abetted" or provided "substantial assistance" to terrorist acts. A 2016 law clarified the ability to seek damages under a theory of "secondary liability" for acts of terrorism committed by designated foreign terrorist organizations.

Among the key questions in Twitter v. Taamneh was whether Twitter and other platforms aided and abetted terrorists, which, the Court explained, under common-law principles required the plaintiffs to prove that the social media companies engaged in "a conscious, voluntary and culpable participation in another's wrongdoing."

The plaintiffs alleged in court records that the social media companies “aided and abetted ISIS by knowingly allowing ISIS and its supporters to use their platforms and benefit from their ‘recommendation’ algorithms, enabling ISIS to connect with the broader public, fundraise, and radicalize new recruits. And, in the process, defendants allegedly have profited from the advertisements placed on ISIS’ tweets, posts, and videos.”

While terrorists used Twitter to communicate, much of the Court's analysis focused on how the platforms' hands-off approach to content moderation made it difficult to prove they aided and abetted ISIS.

Justice Thomas said the social media platforms treated ISIS the same as "their billion-plus other users: arm's length, passive and largely indifferent."

To rule otherwise, Justice Thomas said, would open the door for liability for any communications provider “merely for knowing that the wrongdoers were using its services and failing to stop them.”

"To be sure, it might be that bad actors like ISIS are able to use platforms like defendants' for illegal — and sometimes terrible — ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large," Justice Thomas wrote.

The Court also noted that the shooter in the Istanbul nightclub attack never used social media himself, making the nexus between the attack and the platforms' liability even more tenuous.

Based on the Twitter holding, the Court said in Gonzalez that it did not need to decide whether Section 230 precluded liability, separate from the aiding-and-abetting rationale.

The unanimous decisions suggest the Court is leery of creating broad new legal liabilities for social media companies as they balance content moderation and editorial discretion on their sites.

The cases could be important precedents as the Court is likely to face social media cases that raise broader First Amendment questions.

For example, laws passed in Texas and Florida regulating content on social media are making their way through the lower courts. Lawmakers in both states branded their bills as protecting political conservatives from censorship and mandating political balance on social media.

The Texas law bans large social media companies from blocking users based on content they post, while the Florida law prohibits blocking candidates for political office and requires companies to publish and follow rules for blocking content and users.

Judges have stayed both laws while they are under appeal, and the cases could reach the Supreme Court next term. The Texas case is NetChoice v. Paxton; the Florida case is NetChoice, LLC v. Moody.

In California, a lawsuit filed in April challenges the constitutionality of AB 587, a law passed in September 2022 requiring social media companies to publish their content moderation policies and provide reports to the government regarding hate speech, disinformation, harassment and extremism on their sites.

“California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country,” Governor Gavin Newsom said in a signing statement. “Californians deserve to know how these platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day.”

But it’s not clear the law can withstand First Amendment scrutiny. A lawsuit filed in April by several plaintiffs in the Central District of California argues the law violates the First Amendment as “viewpoint discriminatory and designed to chill expression of lawful speech the State of California disapproves of.” The case is Minds, Inc. v. Bonta.

The recent Supreme Court cases may also increase pressure for Congress to reform Section 230, the federal law that gives websites broad immunity from liability for user-generated content.

Others, however, say the cases underscore why Section 230 remains an important law protecting free speech on social media platforms.

"While tech companies still need to do far better at policing heinous content on their sites, gutting Section 230 is not the solution," Senator Ron Wyden (D-Oregon), one of the original authors of Section 230, said in a statement. "I urge Congress to focus on things that will truly address abusive practices by tech companies, including passing a strong consumer privacy law, reining in unethical data brokers and tackling harmful design elements in ways that don't make it harder for users to speak or receive information."

Jason M. Shepard, Ph.D., is a media law scholar and professor and chair of the Department of Communications at California State University, Fullerton. Contact him at jshepard@fullerton.edu or on Twitter at @jasonmshepard.
