Case over misinformation on social media tests limits of government ‘jawboning’
Originally published in the Winter 2024 issue of California Publisher.
By Jason M. Shepard
Misinformation can spread like wildfire on social media, sometimes with dangerous effects. Can the government do much about it?
The U.S. Supreme Court this term will hear a case testing the limits of “jawboning,” when government officials attempt to persuade or cajole private companies to moderate content in the absence of legal orders.
The case is Murthy v. Missouri, in which the Court will review a ruling by the Fifth Circuit Court of Appeals that found government officials impermissibly coerced social media platforms into censoring content protected by the First Amendment.
The government’s defense is that it was simply pointing out to social media companies the dangerous false content that was spreading online in violation of the social media’s own content moderation policies.
Examples of online misinformation and extremism seem to be everywhere. Following the October 7 Hamas terrorist attack against Israel, experts documented how false claims and deceptively edited videos fueled extremist speech and racist and anti-Semitic hate crimes.
The “demand for an intimate view of the war has created ample openings for disinformation peddlers, conspiracy theorists and propaganda artists — malign influences that regulators and researchers now warn pose a dangerous threat to public debates about the war,” CNN reported.
“Getting information from social media is likely to lead to you being severely disinformed,” Imran Ahmed, founder and CEO of the social media watchdog group Center for Countering Digital Hate, told CNN.
Falsehoods on social media are nothing new. An early example of real-world harm from online conspiracy theories was the case of “Pizzagate,” when a man in 2016 fired a shotgun in the Comet Ping Pong pizza shop in Washington, D.C., believing he was exposing a child sex ring run by supporters of presidential candidate Hillary Clinton.
And in dozens of cases in which individuals have been sentenced for crimes related to the attack on the U.S. Capitol on January 6, 2021, defendants blamed false election conspiracies fueled by President Donald Trump and his supporters to justify their actions.
In his latest book, “Liar in a Crowded Theater,” law professor Jeff Kosseff traces the history of regulations of false speech and defends traditional First Amendment rationales that call for skepticism of regulation, including theories related to marketplace of ideas, democratic citizenship, and the philosophy of uncertainty.
Instead of banning false speech under the First Amendment, Kosseff proposes solutions that give targets of false speech greater ability to respond, supporting online platforms in content moderation, holding people accountable for how they respond to false speech, increasing media literacy skills, and restoring public trust in institutions such as government agencies and the media.
One would think the government actions under review in the Murthy case fit squarely within Kosseff’s proposals, allowing the government to take actions to mitigate harmful misinformation short of censorship.
But as the Murthy case shows, it’s not so simple. The stakes are high, and the issues are complex.
The Murthy case (originally filed in the lower courts as Missouri v. Biden) focuses on complaints about removed content on topics such as the COVID-19 lab-leak theory, pandemic lockdowns, vaccine effects, election fraud and the Hunter Biden laptop story.
Four individuals, a conservative website and the states of Missouri and Louisiana filed the lawsuit in U.S. district court in Louisiana. They alleged that government officials in the Biden administration, including those in the White House, the Surgeon General’s office, the CDC and the FBI, improperly pressured Twitter, Facebook, YouTube and Google to remove or downgrade posts and stories by conservative speakers.
In July, a U.S. district court issued a sweeping injunction against the Biden administration, finding that “unrelenting pressure” from government officials likely “had the intended result of suppressing millions of protected free speech postings by American citizens.” The judge ruled that the government’s actions constituted viewpoint discrimination against conservative speakers.
Notably, the judge’s injunction banned government officials from even communicating with social media companies about content moderation practices.
In September, the appellate court limited but affirmed some central findings of the district court.
In a 74-page ruling, the Fifth Circuit scrutinized communications between government officials and social media companies, including emails demanding to take down flagged content “ASAP” and “remove [an] account immediately.”
Social media companies responded to the requests “with total compliance,” the appellate court ruled. The government, however, said in filings that social media companies rejected requests as often as they agreed to them.
Government officials have the right to speak about public issues. Legal precedents make it clear the government “can speak for itself, which includes the right to advocate and defend its own policies.”
But when officials take action to induce private parties into violating the First Amendment, such actions may be viewed as improper state action.
“On the one hand there is persuasion, and on the other is coercion and significant encouragement,” the appellate court ruled.
Under current precedents, the former is permissible; the latter is not.
To find significant encouragement, a court must find the government exercised “active, meaningful control over the private party’s decision.”
To find coercion, a court must find the government compelled a private party’s decision. Lower courts have used a nuanced four-part test to determine whether government actions “can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.”
While acknowledging that officials “have an interest in engaging with social media companies, including on issues such as misinformation and election interference,” the Fifth Circuit concluded that the government likely “coerced the platforms into direct action via urgent, uncompromising demands to moderate content” with express or implied threats of adverse consequences if they failed to comply, in violation of the First Amendment.
In October, the Supreme Court lifted the temporary restraining order against the government, and it granted review of the Fifth Circuit’s ruling, for likely oral arguments in the spring and a ruling before the term ends in June.
The U.S. Solicitor General criticized the appellate court’s ruling on multiple fronts, saying its holding would compromise private companies’ ability to make their own content decisions and set new limits on government speech.
The government also said the appellate court overstated the government’s actions, noting that many of its communications were public statements pointing out inaccuracies in viral posts. “The court did not identify any threat, implicit or explicit, of adverse consequences for noncompliance,” the Solicitor General wrote in a brief.
“This is an immensely important case,” said Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, in a statement.
“The First Amendment has long been understood to prohibit the government from coercing bookstores and other speech intermediaries to suppress speech, but the Supreme Court hasn’t had occasion to apply this rule in the context of social media. Even outside that context, it’s said very little about how lower courts should distinguish permissible persuasion from unconstitutional coercion,” Jaffer said. “These are momentous, thorny issues, and how the Court resolves them will have broad implications for the digital public sphere.”
The Murthy case is but one of several cases involving social media regulations under review by the U.S. Supreme Court this term. The Court is also hearing cases about whether government officials can block citizens from their social media accounts and whether Texas and Florida can stop social media companies from blocking users or enforcing their own content moderation rules.
Jason M. Shepard, Ph.D., is a media law scholar, and professor and chair of the Department of Communications at California State University, Fullerton. Contact him at jshepard@fullerton.edu or Twitter at @jasonmshepard.