Content moderation online sparks review of Section 230
Originally published in California Publisher, Summer 2020.
By Jason Shepard
New legal battles lie ahead as social media companies face a backlash for blocking the disinformation, hate speech, threats and incitements to violence that proliferate on their networks.
President Donald Trump is calling for new regulations to limit content moderation by social media companies. He and others accuse the tech giants of having anti-conservative bias.
“Online platforms are engaging in selective censorship that is harming our national discourse,” Trump said in an executive order he signed in May titled “Preventing Online Censorship.”
“The growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology,” Trump wrote.
“Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.”
Trump’s executive order comes after social media companies cracked down on his posts, removing some that violated policies against hate symbols and incitement to violence and labeling others as harmful disinformation, including posts about the census and election integrity.
In July, Facebook removed a video of a press conference hosted by the Tea Party Patriots that featured doctors making false claims about the coronavirus; the video had garnered 17 million views in eight hours.
After years of criticism for allowing harmful speech to flourish on their platforms, Facebook, Twitter and Google are expanding their lists of prohibited content. Twitter lists 15 categories of rules and prohibited content. Facebook has five broad categories of rules about prohibited content, each with multiple subcategories.
The companies are spending millions to remove content that violates their rules and policies. Facebook pays about 15,000 content moderators, reports say. Google-owned YouTube reported removing 6.1 million videos in the first quarter of 2020.
Experts say false information about COVID-19 spreading on social media has had dire effects on public health efforts, and that disinformation campaigns calculated to influence elections will only worsen in the lead-up to the November election. Social media companies are cracking down on both.
Trump’s executive order is unlikely to have any immediate effects. It calls on the Federal Communications Commission, the Federal Trade Commission and the Attorney General to propose new rules.
Lawsuits challenging the constitutionality of the executive order have already been filed.
The law at the center of the debate over social media policies is Section 230 of the Communications Decency Act, written by then-Reps. Chris Cox (R-CA) and Ron Wyden (D-OR) and enacted in 1996.
Section 230 provides broad immunity from legal liability for internet service providers, including website hosts, for content posted by others.
Congress passed the law in the early days of the internet, after courts began to hold internet service providers legally liable for the content posted by others. Lawmakers feared that internet innovation would suffer if tech companies could be sued for the actions of their users.
Section 230 has allowed websites that host third-party content to flourish without fear of lawsuits over content that might be harassing, libelous, or offensive, for example. The law protects companies from liability even when they edit and moderate content according to their own standards, so long as they act in “good faith.”
“The story of Section 230 is the story of American free speech in the Internet age,” writes professor Jeff Kosseff in his book “The Twenty-Six Words That Created the Internet,” which he calls a biography of Section 230.
But nearly 25 years later, Section 230 is under intense scrutiny. In a world where digital communications are ubiquitous, does Section 230 provide too much legal protection for internet companies?
Both President Trump and Democratic presidential candidate Joe Biden have criticized the law. “Revoke 230!” Trump posted online after Twitter labeled one of his tweets about protests over the killing of George Floyd as “glorifying violence.”
Biden, meanwhile, has said Section 230 should be “revoked” for companies like Facebook that spread falsehoods and violate people’s privacy.
Section 230 has recently seen some narrowing. In 2018, Congress passed and the president signed legislation that limited Section 230 protections for websites that facilitate sex trafficking, after lawmakers found that the law had shielded websites like Backpage.com from legal liability for advertisements for sex trafficking, including of children.
Now, some lawmakers are calling for other limits to Section 230.
In June 2019, Sen. Josh Hawley (R-MO) introduced the “Ending Support for Internet Censorship Act,” which would require interactive computer services with more than 30 million U.S. users or 300 million users globally to have “politically neutral” content moderation policies in order to qualify for Section 230 protections.
In June, four Republican senators called on the FCC to create more specific guidelines for internet companies to qualify for Section 230 protections. Their letter suggests that once social media companies begin to exercise editorial judgment and moderation, they could lose protections under Section 230.
“The unequal treatment of different points of view across social media presents a mounting threat to free speech,” wrote Sens. Marco Rubio (R-FL), Kelly Loeffler (R-GA), Kevin Cramer (R-ND) and Josh Hawley (R-MO). “It is time to take a fresh look at Section 230 and to interpret the vague standard of ‘good faith’ with specific guidelines and direction.”
Wyden, the co-author of Section 230, worries that changes will undermine free speech online.
“Without Section 230, sites would have strong incentives to go one of two ways: either sharply limit what users can post, so as to avoid being sued, or to stop moderating entirely,” Sen. Wyden wrote in June. Wyden said that individuals who posted stories on social media about sexual misconduct or police abuse might have been censored if internet companies feared legal liability for user content.
“But without 230, the people without power — people leading the movements like #MeToo and Black Lives Matter — would find it harder to challenge the big corporations and powerful institutions. Without 230, I believe that not a single #MeToo post would have been allowed on moderated sites.”
Republican FCC Commissioner Brendan Carr wrote in Newsweek that conservatives should push for several reforms: requiring internet companies to disclose their policies for blocking, prioritizing, and discriminating against content; increasing Federal Trade Commission scrutiny of unfair or deceptive practices by internet businesses; and encouraging internet companies to explore ideas such as “bias filters” that users can turn on and off.
Content moderation raises all sorts of challenges. Who should decide what we read and write online? Drawing lines between fact and harmful fiction can be difficult. And what’s offensive to some may not be offensive to others. As Supreme Court Justice John Marshall Harlan II famously wrote in the 1971 decision Cohen v. California, upholding the First Amendment right of a man to wear a jacket emblazoned with “Fuck the Draft” in the Los Angeles courthouse, “One man’s vulgarity is another’s lyric.”
The First Amendment prohibits the government from abridging the freedoms of speech and press. Social media companies are private entities, and courts have ruled that citizens don’t have a First Amendment right of access to their platforms.
In May, the federal appeals court in the District of Columbia upheld the dismissal of a lawsuit that Freedom Watch and Laura Loomer filed against Twitter and other platforms over Loomer’s permanent ban for posts Twitter said were anti-Muslim. Loomer is one of several high-profile conservative provocateurs banned from Twitter, along with Alex Jones, Roger Stone and Milo Yiannopoulos.
“Freedom Watch’s First Amendment claim fails because it does not adequately allege that the Platforms can violate the First Amendment,” the appellate court wrote. “In general, the First Amendment ‘prohibits only governmental abridgement of speech.’”
The case is consistent with a Ninth Circuit Court of Appeals decision in February that affirmed the dismissal of a case filed against Google by PragerU, a conservative media company that posts videos by founder Dennis Prager on YouTube. YouTube had labeled several of PragerU’s videos as “restricted,” limiting their audience reach.
“Despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment,” the appeals court wrote.
Social media companies don’t fit neatly into the traditional categories of communications companies. They are both network providers, like phone companies, and content providers, like traditional publishers, raising difficult questions about when they are responsible for the content published on their networks. Section 230 has given them broad rights to exercise editorial judgment on their platforms without fear of legal liability.
Last October, Facebook founder Mark Zuckerberg gave a speech at Georgetown University in which he discussed how the internet has changed communications. He acknowledged the dark side of social media, but he also worried about increasing calls for content restrictions.
“Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves,” Zuckerberg said.
“I personally believe this is more dangerous for democracy over the long term than almost any speech,” Zuckerberg said.