New York Times journalist Kevin Roose’s investigation into YouTube’s artificial intelligence began because it was the platform online extremists mentioned most frequently when he asked them how they were radicalized.
“They kept just saying YouTube, and over and over again I heard stories about people who went to YouTube to watch gaming videos or sports videos or politics videos and ended up getting pulled into this universe of far-right people,” Roose said.
Roose knew that A.I. was “the core of YouTube and no one knows how it works.” But when he took his questions to YouTube, he noticed that the company’s executives were “cagey about this whole rabbit hole effect,” Roose said, using the term for the way the recommendation algorithm pulls viewers deeper into content in order to keep them watching for as long as possible.
So how did Roose eventually crack the code on a practice that is largely shrouded in secrecy? The answer is simple and a little ironic: by watching a YouTube video.
“It was really, really powerful and I think it was just sitting out there on some obscure university YouTube channel,” he said.
Roose called it the “biggest breakthrough” in his reporting when he discovered a video of a Google Brain researcher giving an A.I. talk at a conference this year. Google Brain is the artificial intelligence research team at Google, YouTube’s parent company.
The video had less than a thousand views and the talk was “really dense” and “really technical,” according to Roose, who sat down with Brian Stelter for this week’s Reliable Sources podcast to unpack his reporting process.
After watching the video, Roose enlisted experts to help him translate the technical language into layman’s terms. Beneath the jargon was a “really powerful” explainer on “how the A.I. behind YouTube’s algorithm works,” Roose said.
That legwork culminated in a story published on the front page of the New York Times earlier this week titled “The Making of a YouTube Radical.” Roose followed Caleb Cain, a college dropout who turned to YouTube for information but instead got sucked into a far-right universe of conspiracy theories, thanks in part to the platform’s powerful algorithm.
The story has a happy ending, however, as Cain was de-radicalized by the same platform that radicalized him.
Still, not everyone celebrated Roose’s story. After the article was published on Sunday, his Twitter mentions blew up. Some of the YouTubers mentioned or pictured in Roose’s story were upset about their classification as “radicalizing forces,” he said.
Roose said “The Making of a YouTube Radical” is not just about holding Google or YouTube accountable for the questionable content they amplify. It’s also a story about mental health, he told Stelter.
“There are a lot of young men out there who feel alienated, who are depressed, who have been sort of left out,” Roose said. “I think that there needs to be some more outreach to those people because when they go look for help on places like YouTube, a lot of the people that are there waiting to help them have a partisan agenda.”