Are Social-Media Companies Ready for Another January 6?

In January, Donald Trump laid out in stark terms what consequences await America if charges against him for conspiring to overturn the 2020 election wind up interfering with his presidential victory in 2024. “It’ll be bedlam in the country,” he told reporters after an appeals-court hearing. When a reporter began to ask whether he would rule out violence from his supporters, Trump walked away.

This would be a shocking display from a presidential candidate—except the presidential candidate was Donald Trump. In the three years since the January 6 insurrection, when Trump supporters went to the U.S. Capitol armed with zip ties, tasers, and guns, echoing his false claims that the 2020 election had been stolen, Trump has repeatedly hinted at the possibility of further political violence. He has also come to embrace the rioters. In tandem, there has been a rise in threats against public officials. In August, Reuters reported that political violence in the United States is seeing its biggest and most sustained rise since the 1970s. And a January report from the nonpartisan Brennan Center for Justice indicated that more than 40 percent of state legislators have “experienced threats or attacks within the past three years.”

What if January 6 was only the beginning? Trump has a long history of inflated language, but his threats raise the possibility of even more extreme acts should he lose the election or should he be convicted of any of the 91 criminal charges against him. As my colleague Adrienne LaFrance wrote last year, “Officials at the highest levels of the military and in the White House believe that the United States will see an increase in violent attacks as the 2024 presidential election draws nearer.”

Any institution that holds the power to stave off violence has real reason to be doing everything it can to prepare for the worst. This includes tech companies, whose platforms played pivotal roles in the attack on the Capitol. According to a draft congressional report released by The Washington Post, companies such as Twitter and Facebook failed to curtail the spread of extremist content ahead of the insurrection, despite being warned that bad actors were using their sites to organize. Thousands of pages of internal documents reviewed by The Atlantic show that Facebook’s own employees complained about the company’s complicity in the violence. (Facebook has disputed this characterization, saying, in part, “The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them.”)

I asked 13 different tech companies how they are preparing for potential violence around the election. In response, I got minimal information, if any at all: Only seven of the companies I reached out to even attempted an answer. (Those seven, for the record, were Meta, Google, TikTok, Twitch, Parler, Telegram, and Discord.) Emails to Truth Social, the platform Trump founded, and Gab, which is used by members of the far right, bounced back, while X (formerly Twitter) sent its standard auto reply. 4chan, the site notorious for its users’ racist and misogynistic one-upmanship, did not respond to my request for comment. Neither did Reddit, which famously banned its once-popular r/The_Donald forum, or Rumble, a right-wing video site known for its affiliation with Donald Trump Jr.

The seven companies that replied each pointed me to their community guidelines. Some emphasized how much they have invested in ongoing content-moderation efforts. Google, Meta, and TikTok seemed eager to detail related policies on issues such as counterterrorism and political ads, many of which have been in place for years. But even this information fell short of explaining what exactly would happen were another January 6–type event to unfold in real time.

In a recent Senate hearing, Meta CEO Mark Zuckerberg indicated that the company spent about $5 billion on “safety and security” in 2023. It is impossible to know what those billions actually bought, and it’s unclear whether Meta plans to spend a similar amount this year.

Another example: Parler, a platform popular with conservatives that Apple temporarily removed from its App Store following January 6 after people used it to post calls for violence, sent me a statement from its chief marketing officer, Elise Pierotti, that read in part: “Parler’s crisis response plans ensure quick and effective action in response to emerging threats, reinforcing our commitment to user safety and a healthy online environment.” The company, which has claimed it sent the FBI information about threats to the Capitol ahead of January 6, did not offer any further detail about how it might plan for a violent event around the November elections. Telegram, likewise, sent over a short statement that said moderators “diligently” enforce its terms of service, but stopped short of detailing a plan.

The people who study social media, elections, and extremism repeatedly told me that platforms should be doing more to prevent violence. Here are six standout suggestions.


1. Enforce existing content-moderation policies.

The January 6 committee’s unpublished report found that “shoddy content moderation and opaque, inconsistent policies” contributed to the events of that day more than algorithms, which are often blamed for circulating dangerous posts. A report published last month by NYU’s Stern Center for Business and Human Rights suggested that tech companies have backslid on their commitments to election integrity, both laying off trust-and-safety workers and loosening policies. For example, last year, YouTube rescinded its policy of removing content that includes misinformation about the 2020 election results (or any past election, for that matter).

In this respect, tech platforms have a transparency problem. “Many of them are going to tell you, ‘Here are all of our policies,’” Yaël Eisenstat, a senior fellow at Cybersecurity for Democracy, an academic project focused on studying how information travels through online networks, told me. Indeed, all seven of the companies that got back to me touted their guidelines, which categorically ban violent content. But “a policy is only as good as its enforcement,” Eisenstat said. It’s easy to know when a policy has failed, because you can point to whatever catastrophic outcome has resulted. How do you know when a company’s trust-and-safety team is doing a good job? “You don’t,” she added, noting that social-media companies are not compelled by the U.S. government to make information about these efforts public.

2. Add more moderation resources.

To assist with the first recommendation, platforms can invest in their trust-and-safety teams. The NYU report recommended doubling or even tripling the size of content-moderation teams, and bringing the work in-house rather than outsourcing it, as is common practice. Experts I spoke with were concerned about recent layoffs across the tech industry: Since the 2020 election, Elon Musk has decimated the teams devoted to trust and safety at X, while Google, Meta, and Twitch all reportedly laid off various safety professionals last year.

Beyond human investments, companies can also develop more sophisticated automated moderation technology to help monitor their gargantuan platforms. Twitch, Discord, TikTok, Google, and Meta all use automated tools to help with content moderation. Meta has started training large language models on its community guidelines so that the models can potentially help determine whether a piece of content runs afoul of its policies. Recent advances in AI cut both ways, however: the same technology makes it easier for bad actors to produce dangerous content, which led the authors of the NYU report to flag AI as another threat in the next election cycle.
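To make the idea concrete, here is a minimal sketch, in Python, of how a language model can be pointed at policy categories to triage posts. It uses an open-source zero-shot classifier; the labels, threshold, and triage helper are invented for illustration and are not drawn from any company's actual system.

```python
# A minimal, illustrative sketch of automated policy triage: an off-the-shelf
# zero-shot classifier scores a post against hypothetical policy categories and
# flags confident matches for human review. This is not any platform's real system.
from transformers import pipeline  # pip install transformers torch

# Hypothetical categories standing in for real community guidelines.
POLICY_LABELS = [
    "incitement to violence",
    "election misinformation",
    "harassment",
    "benign",
]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")


def triage(post: str, threshold: float = 0.7) -> dict:
    """Score a post against the policy labels; escalate confident violations."""
    result = classifier(post, candidate_labels=POLICY_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return {
        "post": post,
        "top_label": top_label,
        "score": round(top_score, 3),
        "needs_human_review": top_label != "benign" and top_score >= threshold,
    }


if __name__ == "__main__":
    print(triage("Meet at the statehouse tomorrow and bring your gear."))
```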

Representatives for Google, TikTok, Meta, and Discord emphasized that they still have robust trust-and-safety efforts. But when asked how many trust-and-safety workers had been laid off at their respective companies since the 2020 election, no one directly answered my question. TikTok and Meta each say they have about 40,000 people working in this area globally—a figure that Meta claims is larger than its 2020 total—but this includes outsourced workers. (For that reason, Paul Barrett, one of the authors of the NYU report, called this statistic “completely misleading” and argued that companies should employ their moderators directly.) Discord, which laid off 17 percent of its employees in January, said that the ratio of people working in trust and safety—more than 15 percent—hasn’t changed.

3. Consider “pre-bunking.”

Cynthia Miller-Idriss, a sociologist at American University who runs the Polarization and Extremism Research & Innovation Lab (or PERIL for short), compared content moderation to a Band-Aid: It’s something that “stems the flow from the injury or prevents infection from spreading, but doesn’t actually prevent the injury from occurring and doesn’t actually heal.” For a more preventive approach, she argued for large-scale public-information campaigns warning voters about how they might be duped come election season—a process known as “pre-bunking.” This could take the form of short videos that run in the ad spot before, say, a YouTube video.

Some of these platforms do offer quality election-related information within their apps, but no one described any major public pre-bunking campaign scheduled in the U.S. between now and November. TikTok does have a “US Elections Center” that operates in partnership with the nonprofit Democracy Works, and both YouTube and Meta are making similar efforts. TikTok has also, along with Meta and Google, run pre-bunking campaigns for elections in Europe.

4. Redesign platforms.

Ahead of the election, experts also told me, platforms could consider design tweaks such as putting warnings on certain posts, or even massive feed overhauls to throttle what Eisenstat called “frictionless virality”—preventing runaway posts with bad information. Short of getting rid of algorithmic feeds entirely, platforms can add smaller features to discourage the spread of bad info, like little pop-ups that ask a user “Are you sure you want to share?” Similar product nudges have been shown to help reduce bullying on Instagram.
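As a rough illustration of that kind of friction (not any platform's actual code), the following Python sketch shows a reshare handler that interposes an "Are you sure you want to share?" prompt when a post carries a warning label; the Post class, label names, and handle_reshare function are all hypothetical.

```python
# A hypothetical sketch of "share friction": if a post carries a warning label,
# the first reshare attempt returns a confirmation prompt instead of resharing.
# None of these names correspond to a real platform's API.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    text: str
    labels: set = field(default_factory=set)  # e.g., {"election-misinfo"}


def handle_reshare(post: Post, user_confirmed: bool = False) -> dict:
    """Complete the reshare, or pause and ask the user to confirm first."""
    if post.labels and not user_confirmed:
        return {
            "status": "needs_confirmation",
            "prompt": "This post has been flagged by fact-checkers. "
                      "Are you sure you want to share?",
        }
    return {"status": "shared", "post_id": post.post_id}


if __name__ == "__main__":
    flagged = Post("p1", "The election was stolen!", {"election-misinfo"})
    print(handle_reshare(flagged))                       # returns the prompt
    print(handle_reshare(flagged, user_confirmed=True))  # shares after confirmation
```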

5. Plan for the gray areas.

Technology companies sometimes monitor previously identified dangerous organizations more closely because they have a history of violence. But not every perpetrator of violence belongs to a formal group. Organized groups such as the Proud Boys played a substantial role in the insurrection on January 6, but so did many random people who “may not have shown up ready to commit violence,” Brian Fishman, who formerly led Facebook’s policy work on dangerous organizations, pointed out. He believes that platforms should start thinking now about what policies they need to put in place to monitor these less formalized groups.

6. Work together to stop the flow of extremist content.

Experts suggested that companies should coordinate with one another on these issues. Problems that happen on one network can easily pop up on another. Bad actors sometimes even work cross-platform, Fishman noted. “What we’ve seen is organized groups intent on violence understand that the larger platforms are creating challenges for them to operate,” he said. These groups will move their operations elsewhere, he said, using the bigger networks both to manipulate the public at large and to “draw potential recruits into those more closed spaces.” To combat this, social-media platforms need to be communicating among themselves. For example, Meta, Google, TikTok, and X all signed an accord last month pledging to work together to combat the deceptive use of AI in elections.


All of these actions may serve as checks, but they stop short of fundamentally restructuring these apps to deprioritize scale. Critics argue that part of what makes these platforms dangerous is their size, and that fixing social media may require reworking the web to be less centralized. Of course, this goes against the business imperative to grow. And in any case, technologies that aren’t built for scale can also be used to plan violence—the telephone, for example.

We know that the risk of political violence is real. Eight months remain until November. Platforms ought to spend them wisely.



