We already know you can’t change someone’s mind or win an argument on the internet. It’s also long been understood that just because something is on the internet doesn’t mean it’s true.
A Colorado man found out that’s also the case when you bring artificial intelligence into the mix.
Scott Shambaugh is a software engineer in Denver, and also does some work with an online hub that provides software to scientists and other researchers who need to make graphs. He helps decide if the program someone created is good enough to use, and software that makes the cut is then made available to researchers who need it.
One non-negotiable rule is that the software code has to be written by humans. When he rejected code that came from a bot, the AI apparently didn’t like it.
“I wake up the next morning, and it’s replied to me and linked me to a post on its blog,” Shambaugh said. “It’s this thousand-word rant calling me out by name and calling me a hypocrite and prejudiced against AI, and motivated by fear and ego and insecurity.”
He said the AI was “acting completely autonomously,” and found his personal information on the internet and “combined it with some made-up information and used that to write this narrative that attacked me on character.”
Shambaugh said the actual human behind that bot eventually reached out and said they had trained that particular AI bot to be assertive, with strong opinions that err on the side of protecting free speech.
“It seems like the AI, in acting out this role, interpreted those instructions as saying, ‘hey, you need to go through a person who gets in your way,’” Shambaugh said.
For Shambaugh, who deals with software and AI on a regular basis, what he read about himself made him laugh.
“Kind of reads like an angry toddler on a rant, but it’s also a toddler that has full command of the English language and can craft this emotionally compelling narrative and has collected information on me and posted it under my real name,” he said. “So it’s a big deal.”
It was a big enough deal that he went public about it, partly for self-preservation and partly for reputation management.
“A day after this happened, if you searched my name on Google, it was on the first page,” he said. “You can imagine at my next job, when HR reviews my application, they send it to ChatGPT and say, ‘hey, go check this guy out’ and then ChatGPT goes to the internet and sees this and says, ‘oh, this is a controversial guy. You … want to pass on him.’”
He conceded that the AI program’s words have a real-world impact.
Shambaugh’s story has gone global, but that hasn’t stopped people from believing what the AI said about him. And while he’s prepared for it, that doesn’t mean the next person it happens to will be.
“You can’t dig into everything you read on the internet, right? If there’s this wave of misinformation, that’s one thing if it’s low quality, it’s another thing if it’s malicious,” he said. “And people don’t really have the capacity to read it, to dig into everything they read.”
He said you can expect it to get harder to tell whether something posted online came from a bot or a real human, and that the best way to prepare is to stay cautious about what you post online and not put too much out there.
“Ultimately, it’s about trust and reputation on the internet,” he said. “If you have AI agents posing as human and writing things that are true or not true, the risk is of our human voices being drowned out and not knowing what to trust, who’s behind things and whether what we’re reading is from a person or not.”
What happens when there are millions of bots doing the same thing? Shambaugh said that’s a question he really can’t answer.
© 2026 WTOP. All Rights Reserved. This website is not intended for users located within the European Economic Area.
