AI Could Escalate New Type Of Voice Phishing Cyber Attacks | Cyber Security Hub

Cyber Security | November 3, 2019

While many cyber security professionals have been looking at (and even investing in) the potential benefits of utilizing artificial intelligence (AI) technology across many different business functions, earlier this week the Israel National Cyber Directorate (INCD) issued a warning about a new type of cyber attack that leverages AI to impersonate senior enterprise executives. Attackers use the impersonated voice to instruct company employees to perform transactions, including money transfers, and to carry out other malicious activity on the network.

The INCD's operational center has recently received reports of this type of cyber attack. While business email compromise (BEC) fraud often relies on social engineering to make an attack more effective, this new method escalates the attack by using AI-based software to place voice phishing calls that impersonate senior executives.

The attacking software learns to mimic the voice of a designated person and holds a conversation with an employee on behalf of the CEO. It was also reported that programs now exist that, after listening to 20 minutes of a particular voice, can speak anything the user types in that learned voice.

The Potential AI Voice Threat Implications

Dr. Rebecca Wynn, Head of Information Security and Data Protection Officer for Matrix Medical Network, cautions, “It is absolutely a threat to watch and very dangerous.” She explains that staff must be trained to recognize instructions from their managers or senior leaders that fall outside normal requests and processes, and that a procedure must be in place for verifying those requests without fear of reprimand.

“Experts have certainly been warning for the past two or three years about the dangerous side of artificial intelligence, namely that agile cyber criminals could use it to extend their reach significantly,” says CNBC cyber security reporter Kate Fazzini, author of the recently released book “Kingdom of Lies: Unnerving Adventures in the World of Cybercrime.”

Fazzini adds, “Using voice impersonations to mimic executives on the phone has obvious implications for wire fraud schemes, which rely on a criminal’s ability to convince an employee that his or her top executive is sending instructions for a wire. Most law enforcement agencies recommend ‘voice verifying’ these wires to ensure they are coming from a legitimate source. Criminals have already demonstrated they can spoof and intercept calls, and adding the executive ‘voice’ may override even these safeguards.”

How To Protect The Enterprise From AI Voice Attack

According to the INCD, enterprises that fall prey to such fraud could suffer significant economic damage. In its announcement, the INCD also issued suggestions for taking precautions and raising awareness within organizations — such as training employees, paying attention to deviations from organizational processes, verifying instructions and using technological means to prevent misuse of email.

See Related: “The Phishing Phenomenon: How To Keep Your Head Above Water”

Similarly, Wynn advises enterprises: “Just because it comes from the CEO, CFO, COO, CIO, etc., shouldn’t cause the staff to rush when it goes outside the company’s processes and procedures for such requests. The days of the C-level bypassing policies have to stop. This is paramount when instructions are given to the company’s staff members to perform transactions such as money transfers, as well as making a change to the company’s network.”

“Verify, verify, verify. One way to do this is if you receive an email or telephone call with such requests, immediately call the designated corporate number to verify the request and ask for a follow-up email or whatever your policy states should be done,” Wynn adds. “Never sanction someone who does a second verification to ensure that it was a legitimate and sanctioned corporate request.”
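The out-of-band verification Wynn describes can be captured in policy logic. The sketch below is purely illustrative — the request types, roles and directory numbers are hypothetical, not drawn from any real product or standard — but it shows the key principle: the call-back number must come from an independently maintained corporate directory, never from the inbound message itself.

```python
# Hypothetical sketch of an out-of-band verification policy.
# All names, numbers and rules here are illustrative assumptions.

SENSITIVE_REQUESTS = {"wire_transfer", "network_change", "credential_reset"}

# Directory of verified contact numbers, maintained independently of any
# inbound email or call — never trust a number supplied by the requester.
CORPORATE_DIRECTORY = {
    "ceo": "+1-555-0100",
    "cfo": "+1-555-0101",
}


def verification_steps(request_type: str, claimed_role: str) -> list:
    """Return the out-of-band checks to perform before acting on a request."""
    if request_type not in SENSITIVE_REQUESTS:
        return []  # routine requests follow normal process
    number = CORPORATE_DIRECTORY.get(claimed_role, "the main switchboard")
    return [
        f"call {number} (from the directory, not the inbound message)",
        "request a written follow-up through the documented channel",
        "log the verification attempt for audit",
    ]
```

For example, a wire-transfer request claiming to come from the CEO would require a call-back to the directory number, a written follow-up and an audit log entry, while a routine status request would trigger no extra steps.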