Twitter is scaling back its content moderation tools ahead of the midterm elections

Twitter Inc., the social network being overhauled by new owner Elon Musk, has frozen some employees’ access to internal tools used for content moderation and other policy enforcement, limiting the staff’s ability to curb misinformation ahead of a major U.S. election.

Most people working in Twitter’s Trust and Safety organization currently cannot alter or penalize accounts that violate rules against misleading information, offensive posts and hate speech, except for the most high-profile violations that could cause real-world harm, according to people familiar with the matter. Those cases were prioritized for manual enforcement, they said.

Read more: Inside Twitter’s Chaotic First Weekend Under Elon Musk

People on call to enforce Twitter’s policies during Brazil’s presidential election had access to the internal tools on Sunday, but only in a limited capacity, according to two of the people. The company still relies on automated enforcement technology and third-party contractors, according to one person, although higher-profile violations are typically handled by Twitter employees.

San Francisco-based Twitter declined to comment on the new limits placed on its content moderation tools.

Twitter staff use dashboards, known as agent tools, to take actions such as banning or suspending an account deemed to have violated the company’s policies. Policy violations can either be flagged by other Twitter users or detected automatically, but taking action on them requires human input and access to the dashboard tools. Those tools have been suspended since last week, the people said.

This restriction is part of a broader plan to freeze Twitter’s software code so employees don’t push changes to the app during the transition to new ownership. Usually that level of access is granted to a group of people numbering in the hundreds, and it was initially reduced to about 15 people last week, according to two of the people, who asked not to be named discussing internal decisions. Musk completed his $44 billion deal to take the company private on October 27.

Read more: Elon Musk owns Twitter. See how the platform could change

Limited content moderation has raised concerns among employees on Twitter’s Trust and Safety team, who believe the company will be short-handed in enforcing policies ahead of the Nov. 8 U.S. midterm elections. Trust and Safety employees are often tasked with enforcing Twitter’s misinformation and civic integrity policies — many of the same policies that former President Donald Trump routinely violated before and after the 2020 election, the company said at the time.

Other employees said they were concerned about whether Twitter would restore access to data for researchers and academics and how it would deal with foreign influence operations under Musk’s leadership.

On Friday and Saturday, Bloomberg reported an increase in hate speech on Twitter. This included a 1,700% rise in racial slur use on the platform, which at its peak appeared 215 times every five minutes, according to data from Dataminr, an official Twitter partner that has access to the entire platform. The Trust and Safety team did not have access to enforce Twitter’s moderation policies during that time, two of the people said.

Read more: As Elon Musk buys Twitter, the Right celebrates

Yoel Roth, Twitter’s head of safety and integrity, posted a series of tweets on Monday about the rise in offensive posts, saying that very few people are seeing the content in question. “Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter. We’ve made measurable progress, removing more than 1,500 accounts and reducing impressions on this content to nearly zero,” Roth wrote. “We’re primarily dealing with a focused, short-term trolling campaign.”

Musk tweeted last week that he had made “no changes to Twitter’s content moderation policies” so far, though he has also said publicly that he believes the company’s rules are too restrictive and has called himself a free speech absolutist.

Internally, employees say, Musk has raised questions about some of the policies and zeroed in on some specific rules he wants the team to consider. The first is Twitter’s general misinformation policy, which penalizes posts that include falsehoods about topics such as election results and Covid-19. Musk wants the policy to be more specific, according to people familiar with the matter.

Read more: What you need to know about Trump’s Twitter ban now that Elon Musk owns the platform

Musk also asked the team to review Twitter’s hateful conduct policy, according to the people, specifically a section that says users can be punished for the “targeted misgendering or deadnaming of transgender individuals.”

In either case, it’s unclear whether Musk wants the policies rewritten or the restrictions lifted entirely.
