👋 Hey there!

I've been watching the AI-cybersecurity convergence for some time, and as September 2025 comes to a close, I believe we've reached the point where what's been talked about for years is finally becoming reality. And it isn't looking good.

This isn't another "AI is transforming everything" newsletter (you're already drowning in those). What I'd like to explore with you today is something more specific: the point at which AI stopped being a future concern and became the immediate issue that will determine whether enterprise security succeeds or fails.

Consider the evidence from just the past two weeks: a ransomware attack on AI-powered airport infrastructure disrupted travel across Europe, while Microsoft is investing billions in AI supercomputing to defend against precisely these attacks. The convergence has arrived, and unlike other hype cycles, this one isn't a nice-to-have.

OK, let's dive in.

🙀The Crossover

The convergence of AI and cybersecurity is no longer emerging—it's a reality, evolving before our eyes.

This section curates emerging AI-cyber developments into actionable examples and threat scenarios—the kind you can use in your next risk assessment, share with your board, or fold into your security planning. These aren't abstract possibilities; they are live issues from the past seven days, contextualised for practical use.

The Great Convergence: AI for Bad and for Good (Sometimes)

Microsoft's threat intelligence team (2025) reported the first confirmed detection of a credential phishing campaign that leveraged large language model-generated code to obfuscate its payload. The campaign, detected on August 28, 2025, employed SVG files with business terminology and a synthetic structure that Microsoft Security Copilot assessed as "not something a human would typically write from scratch due to its complexity, verbosity, and lack of practical utility." This marks a watershed moment where AI-generated attacks are sophisticated enough to require AI-powered defences to detect them.

The implications extend far beyond this single incident. CrowdStrike's announcement (2025) of Threat AI—their first agentic threat intelligence system—demonstrates how defenders are racing to deploy autonomous agents that can "reason across threat data, hunt adversaries proactively, and take decisive action across the kill chain." These mission-ready agents automate complex workflows, including malware analysis, proactive threat hunting, and exposure mapping, representing a fundamental shift from reactive to predictive security operations.

Trust Architecture Under Pressure: The Authentication Revolution

The convergence is reshaping trust models across enterprise environments. Zero Trust Architecture principles are evolving (2025) to accommodate AI agents, machine identities, and automated processes that traditional authentication frameworks were never designed to handle. Every AI agent requires its own unique, verifiable identity, with authentication, registration, and credential rotation capabilities. This extends far beyond traditional user identity management—organisations must now verify every automated process, every AI model interaction, and every machine-to-machine communication within their expanding attack surfaces.

The Economics of AI-Enhanced Threats

September 2025's threat landscape reveals how AI is fundamentally altering the economics of cyberattacks. AI-generated CEO impersonations exceeded $200 million in losses (2025) during the first quarter alone, with deepfake incidents increasing 19% compared to all of 2024. The UK engineering firm Arup's $25 million loss to deepfake fraudsters illustrates how attackers combine psychological manipulation with technological sophistication, creating convincing video conferences with synthetic executives.

HiddenLayer's threat analysis (2025) identifies compromised models from public repositories as the primary AI threat vector, with hackers inserting malicious code into open-source Hugging Face uploads. This supply chain contamination renders trusted AI development tools vulnerable, while third-party GenAI integrations create expanding attack surfaces that traditional security controls struggle to monitor.

Systemic Risk Amplified

The convergence is amplifying systemic risks across critical infrastructure. The Collins Aerospace ransomware attack (2025) demonstrated how a single compromise of AI-powered airport infrastructure can cascade into continent-wide disruption. The attack on Collins' MUSE passenger processing system affected London Heathrow, Brussels, Berlin Brandenburg, Dublin, and Cork airports simultaneously, forcing thousands of passengers into manual check-in processes over an entire weekend.

ENISA's confirmation (2025) that this was a "third-party ransomware incident" highlights how attackers are increasingly targeting AI-powered supply chains that underpin critical operations, rather than directly attacking endpoints. This represents a strategic shift towards "infrastructure-as-target," where sophisticated threat actors seek maximum disruption through minimal effort by compromising shared AI services.

😻Random Acts of AI

The World's First Academic AI Ransomware Experiment Goes Rogue

In what The Register (2025) called "the crazy, true story behind the first AI-powered ransomware," a group of New York University doctoral students created what they intended as academic research but nearly "set the security industry on fire." The researchers developed an AI system they dubbed "Ransomware 3.0," which executed all four phases of a ransomware attack, using OpenAI's models to generate customised Lua scripts for each victim's computer setup, map IT systems, identify the most valuable files, and write personalised ransom notes based on user information found on infected machines.

The polymorphic malware generated different code each time it ran, making traditional signature-based detection nearly impossible. NYU's Md Raz explained to The Register: "It's more targeted than a regular ransomware campaign that affects the entire system. It specifically targets a couple of files, so it's a lot harder to detect." The research paper submission process became unexpectedly dramatic when cybersecurity professionals began treating it as a real threat, highlighting the thin line between academic AI research and weaponised capabilities.

North Korean Hackers Exploit ChatGPT to Forge Military IDs

Cybersecurity researchers discovered (2025) that the North Korean Kimsuky group (APT43) exploited OpenAI's ChatGPT to generate deepfake military ID cards in a phishing campaign against South Korean defence institutions. The July 2025 attack involved hackers using ChatGPT to create sample images of South Korean government and military employee ID cards, which were then embedded in phishing emails crafted to appear legitimate.

According to South Korean cybersecurity firm Genians, the synthetic ID cards were sophisticated enough to bypass initial visual inspection, demonstrating how consumer AI tools can be weaponised for nation-state espionage operations. The incident raises questions about the ability of AI platforms to detect and prevent such misuse, particularly when the requests appear to involve legitimate document creation rather than explicitly malicious content.

😸Staying Ahead (of the Robots)

Imperatives for the Next 90 Days

Based on September 2025 developments, here are the five critical initiatives I'm recommending to security leaders for Q4 2025:

1. Implement AI-Aware Zero Trust Architecture

Organisations must extend Zero Trust principles to include every AI agent, machine identity, and automated process within their environments. This requires establishing unique, verifiable identities for AI systems with proper authentication, registration, and credential rotation capabilities.

The authentication framework must accommodate machine-to-machine communications and AI model interactions that traditional user identity management was never designed to handle.
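
To make that concrete, here is a minimal sketch, assuming a homegrown registry, of what per-agent identity with authentication and credential rotation could look like. The class and method names are my own illustrations, not any vendor's API; in production you would reach for a workload identity standard (SPIFFE/SVID, cloud-managed identities) rather than rolling your own.

```python
import secrets
import uuid
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str        # unique, verifiable identity for one AI agent
    credential: str      # short-lived secret; rotated, never hard-coded
    issued_at: datetime
    expires_at: datetime

class AgentIdentityRegistry:
    """Toy registry illustrating register / authenticate / rotate."""

    def __init__(self, ttl_hours: int = 24):
        self._ttl = timedelta(hours=ttl_hours)
        self._agents: dict[str, AgentIdentity] = {}

    def register(self, name: str) -> AgentIdentity:
        """Issue a unique identity and first credential for a new agent."""
        now = datetime.now(timezone.utc)
        identity = AgentIdentity(
            agent_id=f"{name}:{uuid.uuid4()}",
            credential=secrets.token_urlsafe(32),
            issued_at=now,
            expires_at=now + self._ttl,
        )
        self._agents[identity.agent_id] = identity
        return identity

    def authenticate(self, agent_id: str, credential: str) -> bool:
        """Verify the credential; unknown or expired identities fail closed."""
        identity = self._agents.get(agent_id)
        if identity is None or identity.expires_at < datetime.now(timezone.utc):
            return False
        return secrets.compare_digest(identity.credential, credential)

    def rotate(self, agent_id: str) -> AgentIdentity:
        """Replace the credential before expiry; the old secret is invalidated."""
        identity = self._agents[agent_id]
        identity.credential = secrets.token_urlsafe(32)
        identity.issued_at = datetime.now(timezone.utc)
        identity.expires_at = identity.issued_at + self._ttl
        return identity

registry = AgentIdentityRegistry(ttl_hours=8)
agent = registry.register("triage-copilot")
assert registry.authenticate(agent.agent_id, agent.credential)
```

The design choice that matters here is the short TTL: credentials that expire by default force rotation into the workflow instead of leaving it as an afterthought.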

2. Establish AI Supply Chain Security Program

The Collins Aerospace incident demonstrates that third-party AI services represent critical infrastructure dependencies requiring dedicated security evaluation. Organisations must assess the security posture of AI service providers, including model integrity verification, data handling practices, authentication mechanisms, and incident response capabilities.

This evaluation must occur with the same rigour applied to traditional critical infrastructure partners—because that's effectively what AI service providers have become.
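
One piece of that rigour can be automated today: pinning and verifying model artifacts before they ever load. The sketch below assumes a simple JSON manifest of SHA-256 hashes, which is my illustration rather than any provider's format; pair it with signing and provenance tooling (e.g. Sigstore) for real assurance.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large model weights don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(model_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files that fail verification; an empty list means clean."""
    manifest = json.loads(manifest_path.read_text())  # {"file.bin": "<sha256>", ...}
    failures = []
    for name, expected in manifest.items():
        artifact = model_dir / name
        if not artifact.exists() or sha256_of(artifact) != expected:
            failures.append(name)
    return failures

if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    bad = verify_artifacts(Path("models/classifier-v3"), Path("models/manifest.json"))
    if bad:
        raise SystemExit(f"Refusing to load tampered or missing artifacts: {bad}")
    print("All model artifacts match the pinned manifest.")
```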

3. Deploy Agentic Security Operations

Manual threat hunting and reactive security operations cannot scale to meet the timelines of AI-accelerated attacks. Security teams must implement autonomous agents for malware analysis, proactive threat hunting, and exposure mapping while maintaining human oversight for strategic decisions and policy enforcement.

The goal is to achieve machine-speed defensive operations that can counter AI-powered attacks effectively.
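
The oversight boundary is the part worth prototyping first. Here is a deliberately small sketch of a human-in-the-loop gate; the action names and two-tier risk policy are my own assumptions, not any vendor's workflow.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"    # safe to automate (enrich an IOC, tag, open a ticket)
    HIGH = "high"  # needs human sign-off (isolate a host, revoke credentials)

@dataclass
class ProposedAction:
    name: str
    target: str
    risk: Risk

def triage(actions: list[ProposedAction]) -> None:
    """Auto-execute low-risk actions; queue consequential ones for an analyst."""
    for action in actions:
        if action.risk is Risk.LOW:
            print(f"[auto]    executing {action.name} on {action.target}")
        else:
            print(f"[pending] {action.name} on {action.target} awaiting analyst approval")

triage([
    ProposedAction("enrich-ioc", "185.0.2.10", Risk.LOW),
    ProposedAction("isolate-host", "fin-ws-042", Risk.HIGH),
])
```

The point is not the twenty lines of code but the contract they encode: the agent moves at machine speed everywhere except the actions that can break the business.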

4. Develop AI-Cybersecurity Regulatory Compliance Strategies

The convergence of AI governance and cybersecurity compliance requirements demands integrated approaches rather than siloed responses. Organisations must establish frameworks that simultaneously address EU AI Act obligations, data protection requirements, and sector-specific cybersecurity mandates.

This includes implementing human oversight requirements, traceability obligations, and comprehensive logging capabilities that satisfy both AI governance and cybersecurity audit criteria.
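
A hash-chained audit log is one cheap way to make those logging obligations tamper-evident. The field names below are illustrative assumptions mapped loosely to traceability and human-oversight requirements, not a compliance template.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Each record embeds the previous record's hash, so edits are detectable."""

    def __init__(self):
        self._records: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value for the first record

    def record(self, agent_id: str, action: str, human_reviewer: str | None) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "action": action,
            "human_reviewer": human_reviewer,  # None marks a fully automated step
            "prev_hash": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self._records.append(entry)
        return entry

log = AuditLog()
log.record("copilot-triage:7f3a", "classified email as phishing", human_reviewer=None)
log.record("copilot-triage:7f3a", "blocked sender domain", human_reviewer="j.doe")
```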

5. Establish AI Threat Intelligence Capabilities

Traditional threat intelligence must evolve to address AI-powered attacks and compromised AI services. This includes monitoring for AI-generated phishing campaigns, deepfake fraud attempts, compromised AI models in public repositories, and synthetic identity fraud operations.

Intelligence collection must encompass both technical indicators of compromise and operational patterns specific to AI-enabled attack campaigns.
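
As a toy illustration of what such collection might flag, here is a crude heuristic scorer for the verbose, jargon-heavy SVG lures described above. The thresholds and term list are invented for this sketch; real detection belongs in your mail gateway and SOC pipeline, not a twenty-line script.

```python
import re

# Illustrative vocabulary only; a real list would be tuned per organisation.
BUSINESS_TERMS = {"revenue", "invoice", "quarterly", "forecast", "payroll", "dashboard"}

def score_svg(svg_text: str) -> int:
    """Crude suspicion score for an SVG attachment; higher means riskier."""
    score = 0
    if re.search(r"<script|onload\s*=|javascript:", svg_text, re.IGNORECASE):
        score += 3  # active content inside an image format
    words = re.findall(r"[a-z]+", svg_text.lower())
    if sum(1 for w in words if w in BUSINESS_TERMS) > 10:
        score += 2  # unusually dense business vocabulary in markup
    if len(svg_text) > 50_000:
        score += 1  # unusual verbosity for a simple graphic
    return score    # e.g. flag anything scoring >= 3 for review

sample = '<svg onload="fetch(\'https://example.test/c2\')"><desc>quarterly revenue invoice</desc></svg>'
print(score_svg(sample))  # -> 3, flagged
```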

This month I've spent a considerable amount of time examining these developments, and my assessment is straightforward: the convergence of AI and cybersecurity has evolved from an emerging trend into present-day reality. Technology leaders who treat it as future speculation rather than a current challenge are likely to get badly burned.

The question isn't whether AI will transform your security posture—it's whether you'll shape that transformation proactively or react to it later.

Until next week,

David
