Hackers have a major milestone to celebrate this year as the phishing attack turns 30 years old.
Phishing attacks can be traced back to the 1990s, coinciding with the popularity of America Online (AOL). For those too young to remember, AOL was an early internet service provider that revolutionized the world. It also enabled users to create an email address with an “@aol.com” domain.
In 1995, a group of hackers exploited the platform’s large user base and common email domain name to send mass emails to AOL users, luring them into sending money or providing credentials and Personally Identifiable Information (PII). This became known as “phishing”. As it turns out, not everyone on the ‘Buddy List’ was really a friend.
The phishing scheme has evolved considerably over the last three decades. Just as the popularity of the AOL platform allowed hackers to disseminate phishing emails widely, variations of the scheme have followed the development and adoption of new technologies. Smishing uses SMS text messages to attack victims. Spear phishing targets individuals within an organization specifically to obtain their login credentials. Whaling refers to phishing attacks targeting high-value individuals within an organization, such as top executives.
Gone deep-sea phishing
The introduction of generative AI (GenAI) has major implications for phishing. GenAI enables anyone to instantaneously generate content of their choosing. Long gone are the days of one person sending the same email to millions of people.
GenAI applications lower the entry barrier, enabling high volumes of unique content and increasing the sophistication of socially engineered attacks. Social engineering is employed to manipulate victims psychologically and emotionally as part of a broader scheme.
Deep-sea phishing is the latest threat in the evolution of phishing attacks. This method uses the power of GenAI to blend deepfake videos, audio, and images, thereby enhancing socially engineered phishing attacks. It is projected that deepfake files will reach eight million by the end of 2025, compared to about 500,000 in 2023.
Access to AI video-generation tools, in particular, has risen substantially this year. Don’t let the nickname fool you: AI-generated videos are sometimes dismissed as “slop” because they tend to be devoid of meaningful content, but they can still be visually convincing. As deepfake-generating tools become more sophisticated and accessible, it is inevitable that they will be adopted to create more intricate and convincing phishing schemes.
Hook, line and sinker
The stakes are high for individuals and organizations alike. In early 2025, a woman was persuaded to give away nearly $1 million to a fraudster who claimed to be Brad Pitt in an elaborate catfishing scam involving deepfake photos and social media accounts. Scams like these occur frequently, and victims can be convinced to part with their life savings by inducements of romance or financial gain.
Reports are emerging of deepfake videos of executives being used to coerce subordinate employees into transferring vast sums to fraudsters. But organizations risk more than money. Deep-sea phishing represents the next wave of cybersecurity threats: an employee may balk at transferring funds during a Zoom call but be more willing to hand over a password or confidential information, which can lead to the theft of customer data, including health information, and to ransom demands.
A single cybersecurity incident can inflict substantial financial and reputational harm. In fact, a single incident in 2024 cost a company nearly $3 billion and impacted over 100 million of its customers.
We’re going to need a bigger boat
How can individuals and organizations adapt to the changing nature of phishing attempts? One essential strategy is to remain vigilant. Individuals facing impostors who pose as family members in urgent need would be wise to agree in advance on a secret word or phrase. Asking simple questions about recently shared in-person activities can also be highly effective. For example: What did we eat for dinner last night?
For organizations, phishing training programs will need to be updated to prepare employees for highly personalized messaging. The emphasis should be that an unusual request should not be trusted simply because it appears, on video, to come from someone they report to.
The good news is that AI toolkits aren’t just being used by fraudsters. Solutions are being developed to detect deepfakes, whether through the evaluation of the content directly or the analysis of metadata and audit logs.
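As one illustration of the audit-log side of that approach, the check below is a minimal, hypothetical sketch: it flags requests that pair a spoofable channel (video or voice, the channels deepfakes can imitate) with a large transfer or a sensitive ask, so they can be routed for out-of-band verification. All field names, channels, and thresholds here are assumptions for illustration, not any specific product’s schema.

```python
# Hypothetical sketch: flagging high-risk requests in an audit log.
# Field names ("channel", "amount_usd", "request_type") and the threshold
# are illustrative assumptions only.

HIGH_RISK_CHANNELS = {"video_call", "voice_call"}  # channels a deepfake can spoof
AMOUNT_THRESHOLD_USD = 10_000

def flag_suspicious(requests):
    """Return requests combining a spoofable channel with a large transfer
    or a credential/data demand, so they can be verified out of band
    (e.g., a callback on a known-good number)."""
    flagged = []
    for req in requests:
        risky_channel = req["channel"] in HIGH_RISK_CHANNELS
        large_amount = req.get("amount_usd", 0) >= AMOUNT_THRESHOLD_USD
        sensitive_ask = req.get("request_type") in {"credentials", "customer_data"}
        if risky_channel and (large_amount or sensitive_ask):
            flagged.append(req)
    return flagged

log = [
    {"requester": "ceo@example.com", "channel": "video_call",
     "request_type": "wire_transfer", "amount_usd": 250_000},
    {"requester": "it@example.com", "channel": "ticket",
     "request_type": "password_reset"},
    {"requester": "cfo@example.com", "channel": "voice_call",
     "request_type": "credentials"},
]

for item in flag_suspicious(log):
    print(item["requester"], item["channel"])
```

The point of the sketch is the design choice: rather than trying to judge whether a face or voice is genuine, it treats the request’s context (channel plus stakes) as the signal and pushes anything risky to a second, independent verification step.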
There is no time to wait. In 2026 and beyond, deep-sea phishing attacks are expected to rise like the tide.
