Kagan: Florida social media law seems like “classic First Amendment violation”

The Supreme Court of the United States in Washington, DC, in May 2023. Credit: Getty Images | NurPhoto

The US Supreme Court today heard oral arguments on Florida and Texas state laws that impose limits on how social media companies can moderate user-generated content.

The Florida law prohibits large social media sites like Facebook and Twitter (aka X) from banning politicians, and says they must “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” The Texas statute prohibits large social media companies from moderating posts based on a user’s “viewpoint.” The laws were supported by Republican officials from 20 other states.

The tech industry says both laws violate the companies’ First Amendment right to use editorial discretion in deciding what kinds of user-generated content to allow on their platforms, and how to present that content. The Supreme Court will decide whether the laws can be enforced while the industry lawsuits against Florida and Texas continue in lower courts.

How the Supreme Court rules at this stage of these two cases could give one side or the other a significant advantage as the litigation continues. Paul Clement, a lawyer for Big Tech trade group NetChoice, today urged the justices to reject the idea that content moderation conducted by private companies is censorship.

“I really do think that censorship is only something that the government can do to you,” Clement said. “And if it’s not the government, you really shouldn’t label it ‘censorship.’ It’s just a category mistake.”

Companies use editorial discretion to make websites useful for users and advertisers, he said, arguing that content moderation is an expressive activity protected by the First Amendment.

Justice Kagan talks anti-vaxxers, insurrectionists

Henry Whitaker, Florida’s solicitor general, said that social media platforms marketed themselves as neutral forums for free speech but now claim to be “editors of their users’ speech, rather like a newspaper.”

“They contend that they possess a broad First Amendment right to censor anything they host on their sites, even when doing so contradicts their own representations to consumers,” he said. Social media platforms should not be allowed to censor speech any more than phone companies are allowed to, he argued.

Contending that social networks don’t really act as editors, he said that “it is a strange kind of editor that does not actually look at the material” before it is posted. He also said that “upwards of 99 percent of what goes on the platforms is basically passed through without review.”

Justice Elena Kagan replied, “But that 1 percent seems to have gotten some people extremely angry.” Describing the platforms’ moderation practices, she said the 1 percent of content that is moderated is “like, ‘we don’t want anti-vaxxers on our site or we don’t want insurrectionists on our site.’ I mean, that’s what motivated these laws, isn’t it? And that’s what’s getting people upset about them is that other people have different views about what it means to provide misinformation as to voting and things like that.”

Later, Kagan said, “I’m taking as a given that YouTube or Facebook or whatever has expressive views. There are particular kinds of expression defined by content that they don’t want anywhere near their site.”

Pointing to moderation of hate speech, bullying, and misinformation about voting and public health, Kagan asked, "Why isn't that a classic First Amendment violation for the state to come in and say, 'we're not going to allow you to enforce those sorts of restrictions'?"

Whitaker urged Kagan to “look at the objective activity being regulated, namely censoring and deplatforming, and ask whether that expresses a message. Because they [the social networks] host so much content, an objective observer is not going to readily attribute any particular piece of content that appears on their site to some decision to either refrain from or to censor or deplatform.”

Thomas: Who speaks when an algorithm moderates?

Justice Clarence Thomas expressed doubts about whether content moderation conveys an editorial message. “Tell me again what the expressive conduct is that, for example, YouTube engages in when it or Twitter deplatforms someone. What is the expressive conduct and to whom is it being communicated?” Thomas asked.

Clement said the platforms “are sending a message to that person and to their broader audience that that material” isn’t allowed. As a result, users are “not going to see material that violates the terms of use. They’re not going to see a bunch of material that glorifies terrorism. They’re not going to see a bunch of material that glorifies suicide,” Clement said.

Thomas asked who is doing the “speaking” when an algorithm performs content moderation, particularly when “it’s a deep-learning algorithm which teaches itself and has very little human intervention.”

“So who’s speaking then, the algorithm or the person?” Thomas asked.

Clement said that Facebook and YouTube are “speaking, because they’re the ones that are using these devices to run their editorial discretion across these massive volumes.” The need to use algorithms to automate moderation demonstrates “the volume of material on these sites, which just shows you the volume of editorial discretion,” he said.
