It took Facebook more than a day to downgrade a doctored video making House Speaker Nancy Pelosi look like she was slurring her words — and the video itself remains on the site, with copycats proliferating.
A group called Politics WatchDog posted the manipulated video of Pelosi — which was slowed down to give the misimpression she was speaking in an impaired fashion at a think-tank event — at 1:29 p.m. Eastern time on May 22.
But it wasn’t until after 9 p.m. on May 23, some 32 hours later, that Facebook began suppressing the video, after one of its fact-checking partners, LeadStories, published a fact-check. Another Facebook partner, Politifact, didn’t post its own fact-check until the following morning, May 24.
One reason for the delay: The fact-checkers had to do their own reporting — finding audio and digital forensics experts who could verify that the video had been manipulated.
The flap over the Pelosi video reveals the limits of Facebook’s third-party fact-checking system in the battle against misinformation heading into the 2020 election cycle — and as the company faces increasing scrutiny in Washington, including calls for it to be broken up from Massachusetts Democratic Sen. Elizabeth Warren and others.
The social media giant has taken a hands-off approach to policing the veracity of content on the site, instead partnering with independent organizations that have become the company’s first and main line of defense against misinformation.
But fact-checking a post or video takes precious time, during which rumors and misinformation can continue to spread at internet speeds.
Facebook works only with fact-checkers that are part of the International Fact-Checking Network, a global coalition of vetted fact-checkers founded by the Poynter Institute, a leading journalism think tank based in Florida. Other members of the network include The Washington Post’s fact-checking arm, as well as the Associated Press and Factcheck.org, IFCN director Baybars Orsek said in an interview.
When one of Facebook’s fact-checking partners rates a post or video on the platform as false, that rating automatically triggers a change in how Facebook’s algorithm handles the content, Facebook, Politifact and LeadStories all confirmed to CNN on Friday. Demoted content appears less frequently in users’ news feeds. Facebook also notifies users who share or have shared that content that it has been rated false, the company said in a statement.
“Once we publish [a fact-check], right away I go into the Facebook tool and I match the fact-check with the offending post,” Katie Sanders, managing editor of Politifact, told CNN. “The way it’s supposed to work is it’s supposed to de-amplify the reach of the post.”
Yet the altered Pelosi video remains available on Facebook, the company said, because it does not violate the platform’s community standards. There is no rule on Facebook saying content posted there must be true or accurate.
The video now appears with a message to users indicating the post has been flagged and directing them to multiple fact-checks.
On Friday, Facebook defended its handling of the video to CNN’s Anderson Cooper.
“I think the suggestion there is that we haven’t taken action, and that’s not right,” said Monika Bickert, vice president for product policy and counterterrorism. “We have acted … anybody who is seeing this video in News Feed, anyone who is going to share it with somebody else, anybody who has shared it in the past — they are being alerted that this video is false.”
Bickert added that the company’s partnership with fact-checkers strikes a critical balance for Facebook users, one that preserves their ability to make “informed choices about what to believe.”
But some critics said Facebook must be more proactive about fighting misinformation, and that delegating to fact-checkers barely qualifies as taking action.
“Bickert doubles down on AC360 over and over about how Facebook ‘took action,'” said Jason Kint, CEO of Digital Content Next, an association representing digital publishers. “[But] taking action doesn’t mean ‘half a day later due to automated technology receiving information from third parties.'”
As the doctored video rocketed across social media, tech platforms also had to grapple with copycat accounts repackaging the video into new, distinct uploads. LeadStories counted as many as 17 distinct copies across Facebook, YouTube and Twitter.
Google quickly removed the video from YouTube, while Twitter has repeatedly said it has nothing to share on the matter. Facebook said that when a fact-checker flags a post for demotion, the company applies the same treatment to copycat posts.
“Speed is critical to this system,” the company said on Friday, “and we continue to improve our response.”
But Facebook’s approach is still constrained by its reliance on outside groups to manually verify content of questionable veracity, even as the volume of fake content is expected to grow.
Policymakers say the rudimentary changes to the original Pelosi clip foreshadow how damaging advances in content manipulation technology could become to democratic discourse.
“It has been clear for some time that the large platforms do not have clear policies or procedures in place to address viral misinformation like this,” Virginia Democratic Sen. Mark Warner, a frequent critic of Silicon Valley, told CNN on Friday. “Viral misinformation is pushed today by simple Photoshop and video editing techniques, but new technologies are going to make this a heck of a lot worse.”
He added: “We have a serious problem on our hands, with technologists developing and releasing tools that will have profoundly destabilizing effects.”
Hawaii Democratic Sen. Brian Schatz was more blunt in a Twitter message Friday: “Facebook is very responsive to my office when I want to talk about federal legislation and suddenly get marbles in their mouths when we ask them about dealing with a fake video. It’s not that they cannot solve this; it’s that they refuse to do what is necessary.”