Facing growing pressure to fight disinformation ahead of the 2020 presidential election, Twitter said it would label or remove tweets sharing doctored videos and photos, sometimes referred to as deepfakes, that seek to mislead users.
Under the new policy, Twitter users cannot “deceptively” share altered videos and photos that are “likely to cause harm,” the company said Tuesday.
“We’ve seen people try to distort conversations with altered media or fabricated media, not just on Twitter, but across the internet,” Del Harvey, Twitter’s vice president of trust and safety, said. “We want to make sure we can address any instance where media has been altered or fabricated and shared on Twitter.”
Video clips deceptively altered to discredit or embarrass political figures, such as those targeting House Speaker Nancy Pelosi and Democratic presidential candidate Joe Biden, would be labeled, the company said Tuesday.
These clips are part of a new wave of dirty political tricks raising broad concerns about the role of digital manipulation in swaying voter sentiment in the U.S.
“Our goal is really to provide people with more context around certain types of media they come across on Twitter and to ensure they are able to make informed decisions around what they are seeing,” Harvey said.
In deciding what, if any, action to take on a video clip, Twitter will determine whether the clip has been manipulated, if it is being shared in a deceptive manner and if it’s likely to cause serious harm or threaten public safety, Yoel Roth, Twitter’s head of site integrity, said.
Twitter will then decide to let the tweet stand, put a label on the tweet to give users additional context about it or require the tweet be deleted, he said.
“In situations where synthetic and manipulated media are shared in a misleading manner but don’t rise to the level of causing harm, our action is going to be to provide additional context on the tweet itself,” he said. “In most cases, this means we would apply a label to the tweet and provide a link to additional explanation or clarification as available, such as a landing page with more context.”
Over time, Twitter will consider other options such as showing a warning to people before they retweet or like a tweet, reducing the visibility of a tweet or preventing it from being recommended, Roth added.
The changes to Twitter’s rules come as social media platforms develop new policies to combat this insidious new form of digital trickery. As President Trump’s social media bullhorn of choice, Twitter plays an increasingly central role in American political life.
Tech giants have been wrestling with how to combat disinformation that seeks to deceive and sway voters after Russian interference and the spread of fabricated news stories influenced public opinion during the 2016 presidential election. This content is expected to increase sharply in the months before the November election.
Facebook said last month it would ban videos in posts and ads that are manipulated using artificial intelligence. That policy does not cover most fake videos, which are misleadingly edited with conventional tools rather than AI.
Twitter previously banned political ads entirely but has largely refrained from taking action against tweets from major political figures.
Falsehoods spreading on Twitter in an election year have drawn increased scrutiny over video clips involving Pelosi and Biden.
One edited clip from Fox Business, which spliced together moments from a 20-minute news conference and made it appear as if Pelosi was stumbling over her words, was shared by President Trump last May when the pair were locked in a public feud.
The other clip, which was edited to make it appear as if Pelosi was slurring her words, was shared by allies of the president.
“Since the video was significantly and deceptively altered, we would label it under this policy,” Roth said. “Depending on what the tweet sharing that video says, we might choose to remove specific tweets.”
Last month, an obscure Twitter account with a history of spreading conspiracy theories about Jeffrey Epstein shared a video from one of former Vice President Biden’s campaign events that was misleadingly edited to make it seem as if he was making racist remarks. The video clip was amplified by conservative Twitter users.
Twitter said the new deepfake policy would not be retroactively applied to the Pelosi and Biden videos.