From the explosive popularity of large language models (LLMs) like OpenAI’s ChatGPT to the sweeping commercial adoption of generative AI and machine learning, the AI revolution is picking up momentum. While these technological advances could usher in unprecedented efficiency and creativity, they have also introduced serious new risks in the form of AI-powered cyber threats.
One of the most urgent threats arises from AI-generated phishing. There has never been a technology better suited for phishing attacks than generative AI, as it allows cybercriminals to create highly targeted and believable phishing content at scale. From LLM-composed phishing messages to deepfakes, we are entering a period in which it will be far more difficult to distinguish fraud from legitimate content. This means it will be easier for cybercriminals to deceive and manipulate employees with social engineering attacks.
At a time when cybercriminals have powerful tools to launch phishing attacks, cybersecurity awareness training (CSAT) is vital. However, the nature of that training must evolve along with the shifting cyber threat landscape. Resources like simulated phishing have a proven record of preparing employees to resist one of the most common cyberattacks, but they must be updated for the AI era. Let’s look at how CISOs and other security leaders can prepare employees to resist the onslaught of AI-powered phishing attacks.
AI Will Make Phishing Attacks More Dangerous
Phishing is a superweapon in cybercriminals’ social engineering arsenal. According to IBM, phishing was the most common and second-most expensive initial attack vector in 2023, costing companies an average of $4.76 million per breach. Phishing is particularly dangerous because it’s the most reliable way for cybercriminals to gain initial access.
AI has already begun to make phishing attacks much more effective. Phishing is all about tricking victims into clicking a link and either downloading malware or surrendering account credentials, so hackers want their messages to be as believable as possible. While many phishing campaigns send a huge volume of fraudulent messages in the hope that a handful will succeed, AI drastically improves cybercriminals’ ability to launch spear phishing attacks: it can sift through huge quantities of data to craft customized phishing messages with a much higher success rate than standard large-scale campaigns.
Because many attacks are launched from abroad by non-native speakers, employees have often been able to detect malicious content through grammatical or spelling errors. But GPT-4 has demonstrated strong performance across 26 languages, which allows cybercriminals to produce compelling phishing content for a much larger pool of victims. AI won’t just help cybercriminals create more convincing phishing messages. It will also help them identify victims by searching publicly available information and data on the dark web.
The number of phishing emails has surged over the past year, and there’s no doubt AI has been a significant force multiplier. This means companies have to adjust their cybersecurity awareness training programs to account for the rapidly emerging threat posed by AI.
How Companies Can Thwart AI-Powered Phishing
Nearly three-quarters of data breaches involve the human element, according to Verizon’s Data Breach Investigations Report. As the most common initial attack vector, phishing is especially reliant on tricking employees into sharing sensitive information or providing direct access. These schemes work because they exploit a long list of psychological vulnerabilities, like fear, obedience, or curiosity. With the help of AI, cybercriminals will be able to leverage these vulnerabilities more effectively.
As AI phishing attacks become more targeted, security leaders need to build their awareness training programs around employees’ specific behavioral profiles. Each employee has unique psychological characteristics, and training must be capable of identifying those characteristics and providing personalized content that reinforces behavioral strengths while addressing weaknesses. An essential element of a successful cybersecurity awareness training program is adaptability: educational content and assessments have to be built around real-world cyberattacks and evolving tactics like the use of AI. Training programs must also adapt to the workforce’s needs, from an employee’s psychological profile to their knowledge level and learning style.
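To make the idea concrete, here is a minimal sketch, in Python, of how an adaptive training platform might represent a behavioral profile and update it after each simulated attack. The category names, initial scores, and learning rate are hypothetical illustrations, not a description of any specific product.

```python
from dataclasses import dataclass, field

# Hypothetical psychological categories a training platform might track.
CATEGORIES = ("authority", "urgency", "curiosity", "reward")

@dataclass
class BehavioralProfile:
    """Per-employee susceptibility scores: 0.0 (resilient) to 1.0 (at risk)."""
    scores: dict = field(default_factory=lambda: {c: 0.5 for c in CATEGORIES})

    def update(self, category: str, failed: bool, rate: float = 0.2) -> None:
        """Move a score toward 1.0 after a failed simulation, toward 0.0 after a pass."""
        target = 1.0 if failed else 0.0
        self.scores[category] += rate * (target - self.scores[category])

    def weakest_category(self) -> str:
        """The vulnerability this employee most needs training on."""
        return max(self.scores, key=self.scores.get)

# Example: an employee clicks a simulated "urgent request" email.
profile = BehavioralProfile()
profile.update("urgency", failed=True)      # score rises: 0.5 -> 0.6
profile.update("curiosity", failed=False)   # score falls: 0.5 -> 0.4
print(profile.weakest_category())           # -> "urgency"
```

A real platform would track far richer signals, but even this simple feedback loop captures the core idea: every simulation result makes the next round of training more personal.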
Generative AI-powered phishing attacks aren’t our only problem. Cybercriminals are also going to use more deepfakes to trick people. To appreciate how potent deepfakes can be, imagine receiving a phone call in the cloned voice of a loved one asking for urgent help, or of your CFO demanding an immediate wire transfer. These are powerful reminders that awareness training must undergo a fundamental change in the AI era. Old detection methods must be replaced, and employees should always be encouraged to question who they’re talking to and why they’re being asked to do something.
The deployment of sophisticated techniques like deepfakes and generative AI-composed phishing messages is all the more reason companies need to revamp their CSAT platforms to keep pace with AI-generated cyber threats.
Phishing Simulations Ensure Accountability and Adaptability
Accountability is a central aspect of cybersecurity awareness training. As investments in cybersecurity rise, CISOs and other security leaders must demonstrate that these resources are being put to good use. They can do this by ensuring that employees are learning what they need to know and are capable of thwarting real-world cyberattacks.
Simulated phishing allows companies to evaluate employees’ ability to identify and prevent the most common types of cyberattacks. This helps security leaders track employees’ progress, reinforce what they’re learning, and determine where the organization is most vulnerable. According to IBM, employee training is one of the top mitigating factors in reducing the total cost of a data breach, outpacing encryption, threat intelligence, and AI-driven insights. But training must stay on top of the latest threats to remain effective. This is why simulated phishing should build adaptive behavioral profiles for all employees and provide personalized instruction based on their skill levels and specific vulnerabilities.
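As an illustration of that organizational view, the following sketch aggregates simulation outcomes into per-department failure rates. The data shape, department names, and lure categories are invented for the example; a real platform would pull this from its simulation logs.

```python
from collections import defaultdict

# Hypothetical simulation log: (department, lure category, employee passed?).
results = [
    ("finance",     "authority", False),
    ("finance",     "urgency",   True),
    ("finance",     "authority", False),
    ("engineering", "curiosity", True),
    ("engineering", "curiosity", False),
]

def failure_rates(log):
    """Per-department, per-category failure rates across simulations."""
    totals, failures = defaultdict(int), defaultdict(int)
    for dept, category, passed in log:
        totals[(dept, category)] += 1
        failures[(dept, category)] += not passed
    return {key: failures[key] / totals[key] for key in totals}

# The highest rates flag where the organization is most vulnerable.
for (dept, category), rate in sorted(failure_rates(results).items(),
                                     key=lambda kv: -kv[1]):
    print(f"{dept:12s} {category:10s} {rate:.0%}")
```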
When cybercriminals launch phishing attacks by posing as authority figures, they’re exploiting victims’ fear, obedience, and sense of urgency. Other victims are more likely to fall for scams that offer a reward, like student loan forgiveness or an investment scheme. Simulated phishing should account for the divergent personality traits that put employees at risk for different types of social engineering attacks, while simultaneously offering guidance on how to resist AI-powered cyberattacks across the board. Security leaders can use simulated phishing to close potential security gaps, hold themselves accountable, and improve information retention.
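Building on the same idea, a simulation program might map each lure template to the trait it exploits and serve each employee the template targeting their weakest trait. The template names and susceptibility scores below are hypothetical.

```python
# Hypothetical mapping of simulation templates to the psychological
# vulnerability each one exploits (all template names invented).
TEMPLATES = {
    "ceo_wire_transfer":      "authority",
    "password_expiry_notice": "urgency",
    "package_tracking_link":  "curiosity",
    "loan_forgiveness_offer": "reward",
}

def next_simulation(scores: dict) -> str:
    """Pick the template that exercises the employee's weakest trait."""
    weakest = max(scores, key=scores.get)
    return next(t for t, c in TEMPLATES.items() if c == weakest)

# An employee most susceptible to authority-based lures is served the
# simulated CEO wire-transfer email next.
print(next_simulation({"authority": 0.8, "urgency": 0.4,
                       "curiosity": 0.3, "reward": 0.5}))  # ceo_wire_transfer
```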
Security leaders must recognize that AI can help cybercriminals evade spam filters and produce more convincing messages. It helps hackers use victims’ public and private information against them, and it allows bad actors to exploit psychological vulnerabilities at scale. When employees understand these tactics and consistently confront them in simulations, they will be equipped to defend their company against the most cutting-edge social engineering attacks.