In 2017, people across the U.S. picked up their phones to answer what seemed like a normal call. The caller would sometimes start with a giggle or feign a bad connection. “Can you hear me?” they’d ask. If you answered “yes,” the FCC warned, your personal information might be at risk. The scammers didn’t care whether you could actually hear them; they were trying to secretly record you saying “yes,” snagging vocal consent that could later be used to trick banks, credit card companies, and other institutions.
Two years later, there is still no hard evidence that the so-called “Yes” or “Can You Hear Me” scam has claimed any victims, but the U.S. government has nonetheless taken steps to crack down on phone scams and stem the tide of nearly one billion phishing calls.
The crackdown comes just in time, too, as voice phishing scams are on the rise. This tax season, you may have received a threatening call from someone claiming to be an IRS agent; since 2013, that scam has cost more than 12,000 people a collective $63 million. Another common scam call, one my own mother has received, appears to come from Apple Support and warns of a supposed iCloud security breach. It’s not from Apple; it’s from hackers who want victims to authorize fraudulent charges.
All of these scams are frightening enough.
But as technology advances, a new threat is likely to emerge: scammers are almost certainly devising ways to steal our identities by recording our voices and using them against us. Although the technology to fake voice, text, and video isn’t new, machine learning and related advances may allow these scams to proliferate at an unprecedented scale.
Here’s what you need to know about this new generation of scamming.
Deepfakes Are Already Here – And Voice Is Probably Next
In April, a nonprofit called Malaria No More released a commercial featuring David Beckham. The star athlete appears to speak nine languages, thanks to technology from a UK company called Synthesia, whose software dubbed Beckham’s voice into the computer-generated translations.
The commercial is so convincing that it’s prompting fears that before long, AI-driven “deepfakes” will be able to convincingly imitate anyone’s voice or appearance. Today, it’s easy to graft a fake face onto a video or generate thousands of fake texts. Although it’s been tougher to fake a convincing voice, technology is rapidly improving. In 2018, a Chinese tech company called Baidu unveiled an AI algorithm that can mimic someone’s voice based on just 3.7 seconds of audio. Suddenly, the “Yes” scam takes on frightening new significance.
Scammers are using increasingly sneaky ways to get us talking, too. In 2017, the FTC estimated that Americans lost $328 million to “the grandparent scam,” in which a scammer impersonates a grandchild in trouble and plays on grandparents’ fears to extract money. Now imagine if hackers were recording these victims’ voices the whole time. Those recordings could be fed to AI programs, training them to replicate our voices.
Running Phone Scams Will Only Get Easier
As people embrace voice-enabled devices, voice is becoming a growing target for hacking and scamming. A recent HubSpot survey found that 52 percent of respondents currently use a voice assistant such as Siri, Alexa, or Google Home, and an additional 27 percent plan to purchase one by the end of 2019. Among people who already own a voice assistant, nearly one-quarter use it to make purchases.
Voice search is quickly replacing typed queries, too: Gartner predicts that voice will account for 30 percent of all online searches by 2020. As habits shift, more and more people will rely on voice-enabled devices to complete sensitive tasks such as shopping, banking, and accessing private health information.
Voice assistants are already vulnerable to being hacked. An experiment by CNET showed just how easy it is to impersonate someone’s voice to make purchases through their voice assistant. But the real threat comes with scammers’ growing ability to commit phone-based fraud at scale. It’s one thing for your roommate to order GrubHub from your Alexa; it’s quite another when scammers can attack thousands of people with the click of a button.
When operating at scale, scammers don’t have to fool every person they call. Instead, they play a numbers game: the more voice recordings they gather, the better their chances of impersonating us when calling our credit card companies, banks, doctors, or anyone else.
Meanwhile, scammers are already spoofing phone numbers to trick us into answering their calls. Neighbor spoofing makes a scam call appear to come from a familiar area code – sometimes even from the real phone number of an individual or business, as in the Apple Support scam. If scammers combine neighbor spoofing with faked voices, it will become harder than ever to trust phone communication.
Individuals Have Few Good Options for Protection Against Phone Scams
Email phishing was once a relatively unknown scam, but people have since learned to be cautious, verifying senders’ addresses before clicking. Voice phishing will likely follow a similar trajectory. We’ll need new ways to verify whether the voices we hear over the phone are legitimate – and we’ll need to ensure that our own voices aren’t used without our consent.
Personally, I’ve already started changing my habits to protect myself from unwittingly being recorded. I don’t say my name when picking up the phone, which prevents scammers from obtaining a recording of how I greet people. When I get calls from an unknown number, I don’t engage in conversation until the caller reveals their identity.
It’s admittedly an imperfect solution. I frequently hand out my business card, and the calls I receive from unknown numbers are often potential business partners who I’m actively interested in speaking with. Although they might find my cold phone greeting off-putting, the small amount of friction is worth the peace of mind that comes from knowing I’m taking steps to protect myself from unwanted voice impersonation.
Until we have a solution that stops voice phishing once and for all, we’ll have to strike a tenuous balance between answering the phone professionally and protecting ourselves in an AI-powered world where any form of identity can be faked.
Companies Must Take Proactive Security Measures
While voice phishing presents obvious threats to consumers’ security, it will impact companies, too. After all, how do scammers get our phone numbers in the first place? Usually by purchasing personal data from companies – or, frequently, by stealing it.
The government is taking steps to protect Americans from emerging forms of identity theft. The rise of deepfake technology has prompted Congress to request a formal report from the Director of National Intelligence, and the Pentagon’s DARPA – the research agency whose ARPANET project laid the groundwork for the modern internet – is developing safeguards against deepfakes. Additionally, a new phone number authentication protocol known as STIR/SHAKEN (Secure Telephone Identity Revisited/Signature-based Handling of Asserted information using toKENs) promises to put an end to neighbor spoofing: participating carriers cryptographically sign each call’s caller ID, and the receiving carrier verifies that signature before trusting the displayed number.
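For the technically curious, here’s a rough idea of how that works. Under STIR/SHAKEN, the originating carrier attaches a signed token (a “PASSporT,” defined in RFC 8225 and RFC 8588) to each call, and the terminating carrier checks the signature before trusting the number on your screen. The Python sketch below mimics that round trip using the PyJWT and cryptography libraries; the phone numbers, certificate URL, and locally generated key are illustrative stand-ins, since real deployments use carrier certificates issued under the STI governance system.

```python
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for the originating carrier's signing key. In a real deployment
# this would be a carrier certificate issued under STI governance.
private_key = ec.generate_private_key(ec.SECP256R1())

# SHAKEN claims (RFC 8588). "attest" is the carrier's confidence level:
# "A" = caller verified to own the number, "B" = customer known but the
# number unverified, "C" = carrier only knows where the call entered
# its network.
claims = {
    "attest": "A",
    "orig": {"tn": "14045551234"},    # calling number (illustrative)
    "dest": {"tn": ["14045556789"]},  # called number (illustrative)
    "iat": int(time.time()),
    "origid": "de305d54-75b4-431b-adb2-eb6b9e546014",  # per-call UUID
}
headers = {
    "ppt": "shaken",
    "typ": "passport",
    "x5u": "https://cert.example.com/sti.pem",  # illustrative cert URL
}

# Originating carrier: sign the claims with its private key (ES256).
priv_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
token = jwt.encode(claims, priv_pem, algorithm="ES256", headers=headers)

# Terminating carrier: fetch the signer's public key (via the x5u cert in
# practice) and verify the signature before trusting the caller ID.
pub_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
verified = jwt.decode(token, pub_pem, algorithms=["ES256"])
print(f"Attestation {verified['attest']} for caller {verified['orig']['tn']}")
```

The key detail is the “attest” claim: an “A” means the carrier vouches that the caller is entitled to use that number, while a “C” means it can only say where the call entered its network – so a spoofed number can no longer ride along unchallenged.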
But it’s time for companies to step up, too. Companies should transparently disclose their relationships with third-party vendors and obtain customers’ consent before sharing data, two steps that would go a long way toward reassuring the 71 percent of Americans who worry about how their personal data is used. Improving data management and tightening security will help keep customers’ phone numbers out of harm’s way. Finally, companies have a responsibility to immediately report any phishing attempts to the FCC, especially when scammers are impersonating the company itself.
As voice and phone scams become more sophisticated, companies will need to lead the way, investing in new security measures to keep customers safe. But until I’m confident that I’m not being secretly recorded by voice phishers, I’m going to keep waiting a beat before speaking to an unknown caller. After all, you never know who might be listening.