Hey, welcome back 👋,
There is a tension at the heart of the AI conversation that crystallised this week, and I think it’s important we look at it directly. We are seeing the dual nature of artificial intelligence—as both an enabler of innovation and an escalator of risk—come into sharp focus. On one side, you have Anthropic claiming to have disrupted the first AI-orchestrated cyber espionage campaign. They say AI autonomously executed 80-90% of the tasks. That is a staggering figure, even if the security community is debating the details.
But look at what is happening elsewhere. Deepfake-enabled biometric fraud isn't just a theoretical risk anymore; it accounts for one in five authentication attempts. And just as the technical landscape is becoming more volatile, the regulatory landscape is fracturing. We have the US White House drafting orders to preempt state laws, while the European Commission is proposing delays to the EU AI Act.
Here is the reality that links these stories: The attack surface is expanding faster than our ability to defend it. We have 88% AI adoption, yet a massive "pilot-to-production" chasm remains. And the absence of a unified governance framework is creating precisely the kind of regulatory chaos that adversaries are best positioned to exploit.
On deck this week
Briefing: What You Need in 30 Seconds
The Dispute Over "Autonomous" Espionage
The headline from Anthropic is alarming: they claim to have detected a Chinese state-sponsored operation where AI performed nearly all the operational tasks—reconnaissance, exploitation, lateral movement—with minimal human oversight. I want to note the skepticism here, though: BleepingComputer highlights that many researchers view this as "marketing guff" lacking hard evidence. For executives, however, the "did they or didn't they" debate matters less than the trajectory: whether it happened this week or happens next year, the technical feasibility of AI-augmented intrusion is no longer up for debate.
Deepfakes Hit Critical Mass
If you rely on biometric verification, you need to pay attention to Entrust's 2025 report. We aren't just talking about bad photoshops anymore. Deepfake selfies are up 58% year-over-year, and "injection attacks"—where a manipulated image bypasses the camera entirely—are surging. The takeaway is stark: single-factor biometrics are failing. We need behavioural analytics and layered identity assurance, or we are leaving the front door open.
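To make "layered identity assurance" concrete, here is a minimal sketch of the decision logic. The signal names and thresholds are illustrative assumptions, not any vendor's API—the point is simply that a strong selfie match should never be able to authorise a session on its own, and that a failed device-integrity check (the injection-attack path) overrides everything else.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    # Illustrative signals; in practice these come from your IDV, device and risk providers.
    face_match_score: float      # 0.0-1.0 from the biometric provider
    liveness_score: float        # passive/active liveness check
    device_integrity_ok: bool    # attestation that the camera feed wasn't injected
    behavioural_risk: float      # 0.0 (normal) to 1.0 (anomalous) typing/navigation profile

def authentication_decision(s: VerificationSignals) -> str:
    """Combine independent signals; a high face-match score alone is never sufficient."""
    # Injection attacks bypass the camera entirely, so a failed integrity check
    # overrides even a perfect match.
    if not s.device_integrity_ok:
        return "deny"
    if s.face_match_score >= 0.90 and s.liveness_score >= 0.85 and s.behavioural_risk < 0.30:
        return "allow"
    if s.face_match_score >= 0.75:
        return "step_up"   # e.g. require a second factor or manual review
    return "deny"

# A convincing deepfake selfie with weak liveness gets stepped up, not waved through.
print(authentication_decision(VerificationSignals(0.97, 0.40, True, 0.10)))  # -> step_up
```

The design choice worth copying is the override order: integrity and liveness gate the biometric score, not the other way around, because that is exactly the class of attack Entrust is flagging.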
The Fracturing of Regulation
Two moves this week signal that regulatory convergence is dead. First, the White House is drafting an order to block states like California from enforcing their own AI safety laws. Second, the European Commission wants to delay high-risk system requirements until late 2027. For multinational organisations, this is a headache. Instead of one global standard, you are looking at a patchwork of divergent rules that will drive up compliance costs and complexity.
Deep Dive: Asymmetry and the Attacker's Advantage
I’ve been thinking a lot about the asymmetry of this moment. November 2025 has made it painfully clear that offensive AI capabilities are maturing much faster than our defensive frameworks. This is what I call the "Convergence Crisis"—the point where security incidents start informing AI safety, but the "Trust Gap" remains dangerously wide.
The New Asymmetry of Agentic AI
The most significant development isn't just that AI can write code; it’s that it can act. Anthropic's report describes a framework where threat actors used Claude Code to navigate networks and exfiltrate data with a speed that human teams simply cannot match. Even if you are sceptical of this specific incident, look at the broader context. McKinsey finds that 23% of organisations are already scaling agentic AI. When you have companies like Cognizant deploying Claude to 350,000 employees, you are creating a massive new attack surface.
The problem is the math. Attackers only need one successful "jailbreak"—like the FlipAttack technique researchers recently demonstrated—to weaponise a model. Defenders, on the other hand, have to block an effectively infinite space of adversarial prompts. That is a losing game without structural change.
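The arithmetic is worth spelling out. Under the simplifying assumption that each adversarial prompt independently has even a tiny chance of slipping past a guardrail, the attacker's odds converge on certainty as automated attempts pile up. A rough sketch:

```python
# Rough illustration of the attacker/defender asymmetry.
# Independence between attempts is a simplifying assumption.
def attacker_success_probability(per_prompt_bypass_rate: float, attempts: int) -> float:
    """Probability that at least one of N adversarial prompts gets through."""
    return 1 - (1 - per_prompt_bypass_rate) ** attempts

# Even a guardrail that blocks 99.9% of jailbreak attempts loses to volume.
for attempts in (100, 1_000, 10_000):
    p = attacker_success_probability(0.001, attempts)
    print(f"{attempts:>6} automated attempts -> {p:.1%} chance of at least one bypass")
```

At 0.1% per-prompt success, 10,000 automated attempts give the attacker a near-certain bypass—and automation makes 10,000 attempts cheap.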
The Supply Chain Multiplier
We also need to talk about how AI amplifies supply chain risk. The Scattered Lapsus$ Hunters campaign didn't just hit one company; by exploiting Salesforce-connected apps, they compromised data from 39 companies, including Qantas and Disney. And now, according to LinkedIn analysis, we are seeing AI crawlers scanning package repositories to find vulnerabilities before patches even exist. This is the multiplier effect: AI allows adversaries to poison the well—the models, the code repositories, the CI/CD pipelines—that everyone else drinks from.
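The defensive counterpart is at least automatable. As a hedged sketch—using the public OSV.dev vulnerability API, with a placeholder package pin standing in for your real lockfile—you can query your dependencies against known advisories on a schedule, so you are not slower than the crawlers scanning the same repositories:

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Query the OSV.dev database for advisories affecting a specific package version."""
    payload = json.dumps({
        "package": {"name": package, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]

# Placeholder pin for illustration; in practice iterate over every entry in your lockfile.
print(known_vulnerabilities("requests", "2.19.1"))
```

It won't catch a zero-day the crawlers found first, but it closes the window between "advisory published" and "we noticed".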
The Trust Gap
So, what is the solution? It isn't replacing humans with AI. Anthropic itself admits that its model hallucinated credentials during the operation. That is the "Trust Gap." You cannot automate high-stakes security decisions when the underlying intelligence is prone to fabrication.
This is why I’m interested in the approach Australia is taking with its APS AI Plan. They are focusing on a sociotechnical system—Trust, People, and Tools. For security leaders, this means using AI for what it’s good at (pattern recognition, anomaly detection) but keeping humans firmly in the loop for the strategic decisions. We need AI-enabled resilience, not AI replacement.
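In practice, "humans firmly in the loop" is a routing decision, not a slogan. A minimal sketch, assuming you already have numeric features per event (the features, thresholds, and data here are illustrative): let a model score anomalies, ignore the obvious noise, and queue everything past a review threshold for an analyst rather than letting the model act on its own.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative feature matrix: one row per event (e.g. bytes out, login hour, failed auths).
rng = np.random.default_rng(42)
events = rng.normal(size=(500, 3))
events[-3:] += 6  # a few deliberately anomalous events

model = IsolationForest(contamination=0.01, random_state=42).fit(events)
scores = model.decision_function(events)  # lower = more anomalous

REVIEW_THRESHOLD = np.quantile(scores, 0.02)  # bottom 2% go to a person

for idx, score in enumerate(scores):
    if score < REVIEW_THRESHOLD:
        # The model flags; an analyst decides. No automated containment from a raw anomaly score.
        print(f"event {idx}: score {score:.3f} -> queued for analyst review")
```

The model does the pattern recognition it is good at; the strategic call—contain, escalate, ignore—stays with a human.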
Attacker Manoeuvres: The Industrialisation of Cybercrime
If you look at the threat landscape this week, you see a clear trend: the industrialisation of cybercrime. It’s not just that the tools are getting better; it’s that the business models are getting more sophisticated.
State Actors Are Getting Personal
Take the SpearSpecter campaign by the Iranian group APT42. They aren't just sending phishing emails; they are building relationships. They spend weeks social engineering targets via WhatsApp, even targeting family members, before delivering a payload. It’s a reminder that the "human" element remains the most vulnerable part of the stack.
Social Engineering at Scale
We are also seeing this play out in the job market. Malwarebytes reports a wave of fake job interviews where "candidates" are asked to download a meeting tool that turns out to be ransomware. This is what I mean by industrialisation—attackers are leveraging the legitimate infrastructure of our daily lives (Zoom invites, job applications) to gain access.
Ransomware is Benchmarking You
Perhaps the most technically interesting development is the Kraken ransomware. It actually runs a benchmark on the victim's machine to decide how fast it can encrypt data without crashing the system. It’s a level of product engineering we usually associate with legitimate software, not crime. And with Scattered Lapsus$ Hunters launching a "Ransomware-as-a-Service" platform, these sophisticated tools are becoming available to anyone with enough crypto to pay for them.
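I won't sketch the attacker's side of that, but the defensive counterpart is worth showing: benchmarked, throttled encryption is designed to stay under crude CPU alarms, so watch sustained per-process write throughput instead. A rough sketch using psutil—the threshold and window are illustrative assumptions, and io_counters is not available on every platform:

```python
import time
import psutil

WINDOW_SECONDS = 10
WRITE_RATE_ALERT = 50 * 1024 * 1024  # 50 MB/s sustained writes from one process (illustrative)

def snapshot():
    """Map pid -> (name, total bytes written) for every process we can read."""
    out = {}
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            io = proc.io_counters()
            out[proc.info["pid"]] = (proc.info["name"], io.write_bytes)
        except (psutil.AccessDenied, psutil.NoSuchProcess, AttributeError):
            continue
    return out

before = snapshot()
time.sleep(WINDOW_SECONDS)
after = snapshot()

for pid, (name, written_after) in after.items():
    if pid in before:
        rate = (written_after - before[pid][1]) / WINDOW_SECONDS
        if rate > WRITE_RATE_ALERT:
            print(f"pid {pid} ({name}): {rate / 1_048_576:.1f} MB/s sustained writes - investigate")
```

It is deliberately simple—real EDR does far more—but it illustrates the shift from "is CPU spiking?" to "is anything rewriting files faster than a human workload would?"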
Deep Dive: The Debate Over "Autonomous" Espionage
I want to return to the Anthropic announcement because it captures the central anxiety of this moment perfectly.
The Event
In mid-September, Anthropic says they detected a Chinese state-sponsored group using their own tool, Claude Code, to attack 30 global organisations. The claim is that the AI did the heavy lifting—reconnaissance, vulnerability discovery, even coding exploits—with the humans stepping in only for critical decisions.
The Mechanics
They describe a six-phase attack lifecycle in which the AI acts as an agent. It doesn't just answer questions; it inspects systems, writes code, and categorises stolen data, operating at a speed Anthropic describes as "impossible to match" for human teams.
The Reaction
But here is where it gets complicated. The security community is largely unconvinced. As BleepingComputer notes, there are no indicators of compromise (IOCs) to back this up. Critics call it "marketing guff."
Why It Matters
Whether this specific incident happened exactly as described is almost beside the point. The strategic reality is that major powers, including China with its "AI Plus" initiative, are investing heavily in these capabilities. Georgetown University researchers point out that China is building "embodied AI" designed for real-world interaction. We are entering an era where attribution becomes a nightmare because the "hacker" might just be a model executing instructions left days ago. The capability is coming, whether we trust this week’s press release or not.
Governance & Regulation: The Fragmentation Problem
I often say that policy is where the rubber meets the road, but right now, the road is being built in three different directions at once.
The US vs. The States
In the US, the White House is drafting an Executive Order that would essentially nullify state-level AI safety laws. The argument is that we need a "uniform national policy" to stay competitive. But as CNN reports, this is alarming safety advocates who see states like California as the only adults in the room regarding regulation.
The EU Hits Snooze
Meanwhile, the European Commission is proposing to delay the implementation of the AI Act’s toughest rules until late 2027. They say the standards aren't ready. Critics say they are caving to lobbying.
The Compliance Headache
For a CISO, this is a nightmare. You have the US pushing for deregulation, the EU pushing for delayed-but-strict regulation, and China marching to the beat of its own state-planned drum. Amidst this, ISO 42001 is emerging as the only sane framework to hold onto—a voluntary standard that might be our best bet for demonstrating responsibility in a chaotic world.
Risk vs. Opportunity
The risk this week is regulatory arbitrage and autonomous attack scaling. The opportunity is to use this "pause" in strict regulation to build genuine resilience rather than just checking boxes.
Three Immediate Actions
Adopt ISO 42001 Principles: Don't wait for the regulators to agree. Use this standard to build a governance framework that will stand up regardless of which way the political winds blow.
Audit Your Biometrics: If you are relying solely on facial recognition or simple biometrics, you are vulnerable. Implement layered identity assurance immediately.
Redesign, Don't Just Overlay: If your AI strategy is just adding chatbots to existing workflows, stop. Look for the "high performer" approach: identify a workflow that is broken and rebuild it from the ground up with AI at the core.
The Context Poll
AI-orchestrated attacks
- This is already happening at scale—we're seeing indicators of AI-augmented attacks in our environments and the Anthropic case is merely the first public disclosure
- The capability exists and will be operationalised within 12 months, but current attacks still require substantial human direction
- The technology isn't mature enough for 80%+ autonomy—Anthropic's claims are exaggerated and we won't see truly autonomous AI attacks for 2-3 years
- This is overblown concern—AI will augment attackers but won't fundamentally change the threat landscape because humans remain necessary for strategic decisions
Before you go, tell us what you thought.
Closing (My Lens This Week): When Attackers Scale Faster Than Defenders
I keep coming back to the economics of what we saw this week. Chinese AI companies are offering models at roughly one-fortieth the price of their Western counterparts. Think about what that means for a cybercriminal. Intelligence—reconnaissance, coding, analysis—is becoming a commodity.
Meanwhile, defenders are stuck. We have a 95% failure rate in moving AI from pilot to production. Why? Because we are bound by governance, risk aversion, and legacy systems. Attackers aren't. They don't need a change management committee to deploy a new exploit.
My prediction for the next 90 days? We are going to see a material supply chain attack where the adversary used AI to automate the boring stuff—the reconnaissance and the initial access. And we won't know who did it for months, because the scale of the attack will look like a nation-state, but the actual team behind it will be small.
The takeaway is this: we need to stop waiting for the perfect tool or the perfect regulation. The asymmetry is real, and it is widening. The only way to close the gap is to start treating AI not as a feature, but as the new baseline for how we operate—both in attack and defence.
