A recent consumer survey by Transaction Network Services, a telecommunications service provider based in Reston, Virginia, found two-thirds of Americans are concerned about artificial intelligence-generated “deepfake” robocalls.
These scam callers can use AI to mimic the voice of a victim’s friend or family member to gather personal information or ask for money.
The AI technology to do that doesn’t take much.
“It takes less than 10 seconds of video that could be scraped from a popular social media app that could be used to acquire the voice that gives enough information to do a replica of that voice,” said John Haraburda, director of product management at TNS.
The AI robocall scams TNS has seen include calls that try to acquire personal information and calls that solicit money directly.
TNS reports 68% of those it surveyed say they never answer a phone call from an unknown number, but robocall scammers can mask their calls with caller ID numbers that are familiar — an act known as “spoofing.”
Wireless carriers already mark known scam callers as spam (often shown as “Spam Risk” on caller ID). Under mandates from the Federal Communications Commission, carriers are now implementing the final pieces of a caller-authentication framework that attests to the trustworthiness of calling parties, as required by the TRACED Act of 2019.
One of those pieces is aimed specifically at exposing spoofed phone numbers.
“So that you can tell the difference between a number you know is your child’s, but you could see if it was legitimately generated by your child’s cellphone versus a spoof number from a third party,” Haraburda said.
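The authentication framework the TRACED Act mandates is known as STIR/SHAKEN: the originating carrier signs each call with a cryptographic token (a PASSporT, carried as a JWT in the SIP Identity header) whose “attest” claim rates how confident the carrier is that the caller is authorized to use the displayed number. As a rough sketch only (the token below is fabricated, and real verification also validates the ES256 signature against the carrier’s certificate), reading that claim looks like this:

```python
import base64
import json

def attestation_level(passport_jwt: str) -> str:
    """Extract the SHAKEN 'attest' claim from a PASSporT JWT payload.

    'A' = full attestation (carrier verified the caller owns the number),
    'B' = customer known but number not verified,
    'C' = call was merely gatewayed from elsewhere.
    """
    payload_b64 = passport_jwt.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("attest", "none")

def b64url(obj) -> str:
    """Encode a dict as unpadded base64url JSON (JWT segment format)."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

# Fabricated sample token (header.payload.signature) for illustration.
fake_jwt = ".".join([
    b64url({"alg": "ES256", "typ": "passport", "ppt": "shaken"}),
    b64url({"attest": "A",
            "orig": {"tn": "15551230000"},
            "dest": {"tn": ["15551234567"]},
            "iat": 1700000000,
            "origid": "example-uuid"}),
    "fake-signature",
])

print(attestation_level(fake_jwt))  # -> A
```

A spoofed call from a third party would arrive with a lower attestation (“B” or “C”) or no valid token at all, which is what lets the network distinguish a number legitimately generated by your child’s phone from a forged copy of it.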
TNS also recommends that consumers test the legitimacy of such calls by asking for a key word or a piece of information that only the real caller would know, and by insisting on calling back before agreeing to anything.
Congress is taking more aggressive action by calling on the Federal Trade Commission to investigate AI-related scams targeting senior consumers.