How Consumers Can Protect Themselves Against AI Voice Cloning Attacks

Losses related to fraud and scams have increased by 30% over the last year, according to Federal Trade Commission data. The FBI reports that cybercrime losses totaled more than $10.2 billion in 2022 alone.

To help counter these crimes, the financial services sector is adopting voice recognition biometrics, technology that enables companies to authenticate an individual by their unique vocal characteristics.

Problems, however, are arising. With generative AI, bad actors can clone a voice from a short audio clip, then use the synthetic voice to fraudulently open accounts or gain access to existing ones.

Keep reading to find out what voice recognition software is all about, how AI figures into the picture and — most important — what you can do to protect yourself against voice cloning attacks that could compromise your finances.

How Voice Recognition Software Works

Technology companies such as VoicePIN have developed software that lets users record a voice sample, which the system stores as a voiceprint in a database. A voiceprint can be broken down into frequency patterns and behavioral attributes, making each print unique to the individual.
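
To illustrate the basic idea, a verification system might reduce each recording to a numeric “embedding” of those vocal features and compare new samples against the print stored at enrollment. The sketch below is a simplified illustration, not VoicePIN’s actual method: the embedding model is assumed to exist upstream, and the function names and matching threshold are hypothetical.

```python
import numpy as np

# Minimal sketch of voiceprint matching, assuming a hypothetical upstream
# speaker-recognition model (not shown) has already converted each audio
# sample into a fixed-length embedding vector of acoustic features.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how closely two voice embeddings match (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, sample: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """Accept the caller only if the new sample is close enough to the
    voiceprint stored at enrollment. The threshold is illustrative."""
    return cosine_similarity(enrolled, sample) >= threshold

# Example with made-up embeddings: a genuine caller's sample sits close
# to the enrolled print, so verification succeeds.
enrolled_print = np.random.default_rng(0).normal(size=192)
new_sample = enrolled_print + np.random.default_rng(1).normal(scale=0.1, size=192)
print(verify_speaker(enrolled_print, new_sample))  # True for a close match
```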

Financial services companies use these voiceprints so people can conduct business quickly and conveniently over the phone.

Customers can access their accounts by giving verbal commands. Users might say, “Tell me my balance” or “Transfer $100 from my checking account and deposit it into my savings account,” rather than inputting passwords or tokens.

The software can even detect angry or aggravated tones; in that case, the caller may be routed to a live customer service representative.


Where Generative AI Comes In

Generative AI is a type of artificial intelligence that produces content such as written text, images and audio. It teases out the patterns and structures of existing data to produce new, original content.

And that’s where security issues come in.

While traditional identity theft is on the rise, banks also face a larger and more complicated threat: synthetic identity fraud, says Ajay Amlani, president and head of Americas at iProov, a London-based company that helps organizations maximize online security and protect user privacy.

Synthetic identity fraud creates an entirely new “person” by mixing stolen, fictitious or manipulated personally identifiable information.

“Criminals can use generative AI tools to create realistic synthetic imagery of people who don’t exist,” Amlani says. “The use of AI-generated synthetic imagery is a hugely powerful tool in boosting the success of synthetic identity fraud.”


How Consumers Can Protect Their Accounts Against AI Voice Attacks

To make it more difficult for fraudsters to access accounts, Amlani says, consumers can significantly increase their security by setting up multifactor authentication.

This security structure is a layered approach in which the user presents a combination of credentials to verify their identity before logging in to an account. For example, in addition to entering a complicated password, you might also have to enter a single-use, time-sensitive code sent to your cellphone via text.
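
For readers curious how such time-sensitive codes can work, the sketch below implements the standard time-based one-time password algorithm (TOTP, RFC 6238) used by many authenticator apps, using only Python’s standard library. It is a minimal illustration; codes a bank sends by text message may be generated differently.

```python
import base64, hashlib, hmac, struct, time

# Minimal sketch of a time-based one-time password (TOTP, RFC 6238), the
# mechanism behind many "enter the 6-digit code" second factors.

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step               # changes every 30 seconds
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the server and the user's device derive the same short-lived code
# from a shared secret, so a stolen password alone is not enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret, base32-encoded
```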

Consumers may also want to reconsider using voiceprints to access their accounts and communicate with their financial institutions, especially when alternatives are offered.

“If given the choice, opting for facial biometric verification over vocal biometric verification or a one-time (text) code is the safest, most effective way to protect you from fraudsters utilizing deep fakes to hack into accounts,” Amlani says.


Voices Are Among the Easiest Biometrics to Clone

Voice cloning attacks currently involve a highly manual process, says James E. Lee, chief operating officer of the Identity Theft Resource Center, a U.S. nonprofit that provides identity crime victim assistance and education. Fraudsters tend to target a single victim, obtain a voice sample and convert it into a natural-sounding message, or use a real-time cloning app.

Lee says voices are among the easiest biometrics to clone and use in some form of impersonation fraud. “That makes them less than ideal to use in granting access to a system or data without additional layers of security to verify a person is literally who they say they are,” he adds.

As generative AI quickly advances and becomes more sophisticated, it is getting harder to verify a customer’s identity remotely. If a company can’t determine whether an individual is live and authenticating in real time, its biometric systems are left vulnerable to spoofing and other biometric attacks.

“To combat the combination of deepfakes and digital injection attacks, financial service institutions need a science-based, multifaceted approach that leverages the creation of a one-time biometric,” says Amlani, explaining that it can be done with a more advanced form of facial recognition technology.

By illuminating the individual’s face with a one-time sequence of colors that can’t be replayed or manipulated synthetically, the system can confirm that a live person is present at the moment of authentication.
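
The following is a simplified sketch of that challenge-response idea, not iProov’s proprietary system: the server issues a random color sequence that has never been used before, so a prerecorded or injected video cannot match it. All names and logic here are hypothetical illustrations.

```python
import hmac, secrets

# Simplified sketch of the challenge-response idea behind a "one-time
# biometric." The actual commercial technology is proprietary; this only
# illustrates why a fresh, unpredictable challenge defeats replays.

COLORS = ["red", "green", "blue", "yellow"]

def issue_challenge(length: int = 8) -> list[str]:
    """Server picks an unpredictable color sequence to flash on screen."""
    return [secrets.choice(COLORS) for _ in range(length)]

def verify_liveness(challenge: list[str], observed: list[str]) -> bool:
    """A live face reflects the flashed colors in order; a video recorded
    earlier cannot match a sequence that did not exist yet. A constant-time
    comparison avoids leaking information through response timing."""
    return hmac.compare_digest(",".join(challenge), ",".join(observed))

challenge = issue_challenge()
print(verify_liveness(challenge, challenge))    # live capture matches
print(verify_liveness(challenge, ["red"] * 8))  # replayed video fails
```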


How Consumers Can Protect Themselves Against AI Voice Cloning Attacks originally appeared on usnews.com
