Navigating the rising tide of AI attacks

Daniel Shepherd | 04/09/2025

Over the last few years, few would disagree that artificial intelligence (AI) has emerged as a powerful tool for both business and cyber security, offering innovative solutions to complex challenges in the digital landscape.

AI’s ability to process vast amounts of data at incredible speed has made it indispensable for identifying and mitigating threats, streamlining responses and enhancing operational efficiency. 

However, while AI is reshaping the cyber security landscape, it is not without limitations and risks.

The potential of AI in cyber security

AI can empower organizations to respond more effectively to threats, minimize disruptions and protect valuable assets.

AI's strength is its ability to detect and respond to threats with speed and precision. Advanced algorithms can analyze vast datasets in real time, identifying anomalies that may indicate malicious activity. By correlating patterns across systems, AI enables early detection of potential breaches, often before they can cause significant harm.

AI is also invaluable for automating repetitive tasks, such as decoding malware scripts or identifying suspicious IP addresses, which frees up security professionals to focus on more strategic concerns. What's more, AI enhances reporting capabilities, ensuring post-incident analyses are thorough and data-driven and reducing the likelihood of human error in these high-pressure situations.
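To make the idea of flagging suspicious IP addresses concrete, the sketch below reduces anomaly detection to its simplest statistical form: scoring each IP's request volume against the overall distribution. This is purely illustrative; the function name, threshold, and data are invented here, and production systems use far richer features and learned models than a single z-score.

```python
from statistics import mean, stdev

def flag_anomalous_ips(request_counts, threshold=3.0):
    """Flag IPs whose request volume deviates sharply from the norm.

    request_counts: mapping of IP address -> number of requests seen.
    An IP is flagged when its z-score exceeds the given threshold.
    """
    counts = list(request_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # all IPs behave identically; nothing stands out
        return []
    return [ip for ip, count in request_counts.items()
            if (count - mu) / sigma > threshold]

# Typical traffic from 50 internal hosts, plus one IP hammering the service.
traffic = {f"10.0.0.{i}": 100 + i for i in range(50)}
traffic["203.0.113.7"] = 5000
print(flag_anomalous_ips(traffic))  # → ['203.0.113.7']
```

Even this toy version captures the core trade-off the article alludes to: a tighter threshold catches subtler attacks but produces more false positives for human analysts to triage.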

Challenges and limitations of AI

Despite its clear potential, AI in cyber security brings inherent challenges. One major concern is the risk of overreliance on AI systems. While these tools excel at analyzing data, they depend on the quality and breadth of their training datasets. Poorly trained AI can misinterpret situations, leading to errors that could compromise security efforts. Human oversight remains crucial to validate AI-generated insights and ensure their applicability to real-world scenarios.

There is also the pressing issue of adversarial use. Cyber criminals are increasingly leveraging AI for malicious purposes, such as creating highly convincing phishing attacks or deploying deepfake technologies to deceive targets. The resulting race between attackers and defenders highlights the need for continuous innovation and vigilance in applying AI responsibly.

Today, phishing emails can be generated with minimal effort through AI tools, enabling the creation of polished, personalized phishing content at scale. This shift has made human involvement in such operations largely unnecessary, increasing both the volume and the sophistication of phishing campaigns.

Adversarial AI in action

One example of the rise of adversarial AI involved a sophisticated cyber group that operated a fake organization called the International Pentest Company to recruit unsuspecting individuals. This pseudo-company advertised legitimate-seeming job roles for translators, copywriters and communication specialists, particularly targeting individuals in Ukraine and Russia. Many believed they were working for a genuine penetration testing company, only to later discover they were aiding illegitimate cyber attacks.

The company paid real salaries to its employees, who were tasked with crafting phishing emails that appeared highly legitimate. These emails were instrumental in large-scale attacks, such as the infamous Carbon Act incident. 

Another example of malicious use of AI is voice mimicry, which has emerged as a powerful tool for cyber criminals. A widely reported incident last year involved an attacker mimicking a young woman’s voice to extort money from her mother. The criminal convinced the mother her daughter had been kidnapped and demanded a ransom, though the daughter was, in fact, safe and unaware of the situation.

Such attacks have become increasingly common, particularly in Eastern Europe. 
Cyber criminals exploit voice samples from messaging apps like Telegram or WhatsApp, where they can access voice recordings. With as little as 10 to 20 seconds of audio, criminals can create convincing voice replicas. These mimicked voices are then used to target friends or family, often requesting money under false pretenses such as an emergency or blocked bank account. The technology’s accessibility and reliance on personal connections make these attacks highly effective, posing a growing threat globally.

A further example is where an Asian company fell victim to a sophisticated cyber attack involving the use of deepfake technology to impersonate its chief security officer (CSO). The attackers managed to deceive a senior employee during a virtual meeting, leading to the fraudulent transfer of US$25 million to a foreign account. This case represents one of the largest publicly reported financial frauds involving deepfake technology.

The implications of such attacks are manifold. They exploit the inherent trust employees place in their leadership, underscoring the need for additional verification processes.

The case also demonstrates how attackers can employ cutting-edge AI technologies, such as deepfakes, to orchestrate complex and high-stakes financial crimes. It underlines the importance of implementing robust multi-factor authentication protocols and of training employees to recognize potential signs of deepfake manipulation.

Ethical considerations are another critical factor. As AI becomes more embedded in decision-making processes, organizations must address concerns about transparency and accountability. Ensuring AI systems operate within ethical and legal frameworks is essential for building trust and avoiding unintended consequences.

Striking the right balance

The transformative potential of AI is undeniable, but its role in cyber security must be approached with care. Organizations should view AI as a powerful ally that enhances their capabilities, rather than as a standalone solution. Integrating AI thoughtfully, with robust oversight and clear boundaries, ensures it complements rather than replaces human expertise.

While AI is an enabler of more robust cyber security, it cannot substitute for the nuanced judgment and adaptability that human professionals bring to the field. By leveraging AI where it adds value and maintaining human oversight to address its limitations, organizations can maximize the benefits of this technology while mitigating its risks.

As increasingly sophisticated attackers adopt the technology to supercharge their own operations, defenders must take a similar approach to stay ahead in the cyber security game of cat and mouse.

As cyber threats continue to evolve, the key to effective cyber security lies in balance. AI must be part of a broader strategy that combines technological innovation with human insight, ensuring resilience and adaptability in the face of an ever-changing threat landscape.
