Twenty-six words tucked into a 1996 law overhauling telecommunications have allowed companies like Facebook, Twitter and Google to grow into the giants they are today.
A case the U.S. Supreme Court heard Tuesday, Gonzalez v. Google, challenges the reach of this law, asking whether tech companies can be held liable for the material posted on their platforms.
Justices will decide whether the family of an American college student killed in a terror attack in Paris can sue Google, which owns YouTube, over claims that the video platform’s recommendation algorithm helped extremists spread their message.
They seemed unlikely to side with the family, but indicated they were wary of Google’s claims that the law gives it and other companies immunity from such lawsuits.
A second case being heard Wednesday, Twitter v. Taamneh, also focuses on liability, though on different grounds. That case involves the family members of a man killed in an Istanbul nightclub attack for which the Islamic State group claimed responsibility.
The family accuses Twitter, Facebook and YouTube parent Google of assisting in the growth of IS by recommending extremist content through their algorithms. The platforms argue that they can’t be sued because they did not knowingly or substantially assist in the attack.
The outcomes of these cases could reshape the internet as we know it. Section 230 won’t be easily dismantled. But if it is, online speech could be drastically transformed.
WHAT IS SECTION 230?
If a news site falsely calls you a swindler, you can sue the publisher for libel. But if someone posts that on Facebook, you can’t sue the company — just the person who posted it.
That’s thanks to Section 230 of the 1996 Communications Decency Act, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That legal phrase shields companies that can host trillions of messages from being sued into oblivion by anyone who feels wronged by something someone else has posted — whether their complaint is legitimate or not.
Politicians on both sides of the aisle have argued, for different reasons, that Twitter, Facebook and other social media platforms have abused that protection and should lose their immunity — or at least have to earn it by satisfying requirements set by the government.
Section 230 also allows social platforms to moderate their services by removing posts that, for instance, are obscene or violate the services’ own standards, so long as they are acting in “good faith.”
WHERE DID SECTION 230 COME FROM?
The measure’s history dates back to the 1950s, when bookstore owners were being held liable for selling books containing “obscenity,” which is not protected by the First Amendment. One case eventually made it to the Supreme Court, which held that it created a “chilling effect” to hold someone liable for someone else’s content.
That meant plaintiffs had to prove that bookstore owners knew they were selling obscene books, said Jeff Kosseff, the author of “The Twenty-Six Words That Created the Internet,” a book about Section 230.
Fast-forward a few decades to when the commercial internet was taking off with services like CompuServe and Prodigy. Both offered online forums, but CompuServe chose not to moderate its forums, while Prodigy, seeking a family-friendly image, did.
CompuServe was sued over that, and the case was dismissed. Prodigy, however, got in trouble. The judge in their case ruled that “they exercised editorial control — so you’re more like a newspaper than a newsstand,” Kosseff said.
That didn’t sit well with politicians, who worried that outcome would discourage newly forming internet companies from moderating at all. And Section 230 was born.
“Today it protects both from liability for user posts as well as liability for any claims for moderating content,” Kosseff said.
WHAT HAPPENS IF SECTION 230 GOES AWAY?
“The primary thing we do on the internet is we talk to each other. It might be email, it might be social media, might be message boards, but we talk to each other. And a lot of those conversations are enabled by Section 230, which says that whoever’s allowing us to talk to each other isn’t liable for our conversations,” said Eric Goldman, a professor at Santa Clara University specializing in internet law. “The Supreme Court could easily disturb or eliminate that basic proposition and say that the people allowing us to talk to each other are liable for those conversations. At which point they won’t allow us to talk to each other anymore.”
There are two possible outcomes. Platforms might get more cautious, as Craigslist did following the 2018 passage of a sex-trafficking law that carved out an exception to Section 230 for material that “promotes or facilitates prostitution.” Craigslist quickly removed its “personals” section altogether, even though it wasn’t intended to facilitate sex work. The company didn’t want to take any chances.
“If platforms were not immune under the law, then they would not risk the legal liability that could come with hosting Donald Trump’s lies, defamation, and threats,” said Kate Ruane, former senior legislative counsel for the American Civil Liberties Union who now works for PEN America.
Another possibility: Facebook, Twitter, YouTube and other platforms could abandon moderation altogether and let the lowest common denominator prevail.
Such unmonitored services could easily end up dominated by trolls, like 8chan, a site that was infamous for graphic and extremist content.
Any change to Section 230 is likely to have ripple effects on online speech around the globe.
“The rest of the world is cracking down on the internet even faster than the U.S.,” Goldman said. “So we’re a step behind the rest of the world in terms of censoring the internet. And the question is whether we can even hold out on our own.”