Convergence Crisis (#14)

Anthropic claims AI is hacking autonomously. Regulators are fighting over who gets to make the rules. We are entering a moment where risk scales faster than defence.

David Hawks

22 January 2026

Hey, welcome back 👋

Coming up

  • What you need in 30 Seconds
  • The Dispute Over "Autonomous" Espionage
  • Deepfakes Hit Critical Mass
  • The Fracturing of Regulation
  • Deep Dive: Asymmetry and the attacker's advantage
  • Attacker Manoeuvres: The Industrialisation of Cybercrime
  • The Debate Over "Autonomous" Espionage
  • Governance & Regulation: The Fragmentation Problem
  • The Context Poll
  • This Week’s Standouts

Welcome back ✨

There is a tension at the heart of the AI conversation that crystallised this week, and I think it’s important we look at it directly. We are seeing the dual nature of artificial intelligence—as both an enabler of innovation and an escalator of risk—come into sharp focus. On one side, you have Anthropic claiming to have disrupted the first AI-orchestrated cyber espionage campaign. They say AI autonomously executed 80-90% of the tasks. That is a staggering figure, even if the security community is debating the details.

But look at what is happening elsewhere. Deepfake-enabled biometric fraud isn't just a theoretical risk anymore; it now accounts for one in five authentication attempts. And just as the technical landscape is becoming more volatile, the regulatory landscape is fracturing: the US White House is drafting orders to preempt state laws, while the European Commission is proposing delays to the EU AI Act.

Here is the reality that links these stories: the attack surface is expanding faster than our ability to defend it. We have 88% AI adoption, yet a massive "pilot-to-production" chasm remains. And the absence of a unified governance framework is creating precisely the kind of regulatory chaos that adversaries are best positioned to exploit.

What you need in 30 Seconds

The Dispute Over "Autonomous" Espionage

  • The headline from Anthropic is alarming: they claim to have detected a Chinese state-sponsored operation where AI performed nearly all the operational tasks—reconnaissance, exploitation, lateral movement—with minimal human oversight. I want to note the scepticism here, though. BleepingComputer highlights that many researchers view this as "marketing guff" lacking hard evidence. But for executives, the "did they or didn't they" debate matters less than the trajectory: whether it happened this week or happens next year, the technical feasibility of AI-augmented intrusion is no longer up for debate.

Deepfakes Hit Critical Mass

  • If you rely on biometric verification, you need to pay attention to Entrust's 2025 report. We aren't just talking about bad Photoshop anymore. Deepfake selfies are up 58% year-over-year, and "injection attacks"—where a manipulated image bypasses the camera entirely—are surging. The takeaway is stark: single-factor biometrics are failing. We need behavioural analytics and layered identity assurance, or we are leaving the front door open. (A rough sketch of what "layered" can look like in practice follows below.)
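To make that concrete, here is a minimal sketch of layered identity assurance: a biometric match is corroborated by liveness, device, and behavioural signals before access is granted. Every field name and threshold below is invented for illustration; none of it comes from Entrust's report or any particular product.

```python
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    # All fields and thresholds are hypothetical, for illustration only.
    face_match_score: float  # 0..1 from the biometric engine
    liveness_score: float    # 0..1 anti-spoofing / injection detection
    device_known: bool       # has this device passed prior checks?
    behaviour_score: float   # 0..1 typing/navigation vs. user baseline

def assurance_decision(s: IdentitySignals) -> str:
    """Layered check: a strong selfie alone never grants access."""
    if s.face_match_score < 0.90:
        return "deny"
    # A near-perfect face match with failed liveness is the classic
    # injection-attack pattern: treat it as hostile, not as a retry.
    if s.liveness_score < 0.80:
        return "deny"
    # Biometrics passed, but corroborating signals are weak: step up
    # to an out-of-band factor rather than letting the selfie decide.
    if not s.device_known or s.behaviour_score < 0.60:
        return "step_up_auth"
    return "allow"

# A flawless "selfie" that fails liveness is denied outright.
print(assurance_decision(IdentitySignals(0.99, 0.40, True, 0.95)))  # deny
```

The design choice worth copying is the ordering: no single signal can authorise on its own, so the biometric score only ever narrows the decision, it never completes it.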

The Fracturing of Regulation

  • Two moves this week signal that regulatory convergence is dead. First, the White House is drafting an order to block states like California from enforcing their own AI safety laws. Second, the European Commission wants to delay high-risk system requirements until late 2027. For multinational organisations, this is a headache. Instead of one global standard, you are looking at a patchwork of divergent rules that will drive up compliance costs and complexity.

Deep Dive: Asymmetry and the attacker's advantage

I’ve been thinking a lot about the asymmetry of this moment. November 2025 has made it painfully clear that offensive AI capabilities are maturing much faster than our defensive frameworks. This is what I call the "Convergence Crisis"—the point where security incidents start informing AI safety, but the "Trust Gap" remains dangerously wide.

The New Asymmetry of Agentic AI

The most significant development isn't just that AI can write code; it’s that it can act. Anthropic's report describes a framework where threat actors used Claude Code to navigate networks and exfiltrate data at a speed that human teams simply cannot match. Even if you are sceptical of this specific incident, look at the broader context. McKinsey finds that 23% of organisations are already scaling agentic AI. When companies like Cognizant deploy Claude to 350,000 employees, they are creating a massive new attack surface.

The problem is the maths. Attackers only need one successful "jailbreak"—like the FlipAttack technique researchers recently demonstrated—to weaponise a model. Defenders, on the other hand, have to block an effectively unbounded stream of adversarial prompts. That is a losing game without structural change; the quick calculation below shows how fast it gets away from you.
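Here is the back-of-the-envelope version of that losing game. Assume, purely for illustration, that a guardrail blocks any single adversarial prompt 99.9% of the time and that attempts are independent; the attacker's chance of at least one bypass across n automated attempts is then 1 - (1 - p)^n:

```python
# Toy model of the jailbreak asymmetry. p_bypass and the attempt
# counts are invented numbers; the shape of the curve is the point.

def attacker_success(p_bypass: float, attempts: int) -> float:
    """P(at least one bypass) = 1 - (1 - p)^n, independent attempts."""
    return 1.0 - (1.0 - p_bypass) ** attempts

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} attempts -> {attacker_success(0.001, n):.2%}")

# Roughly: 100 attempts -> ~9.5%; 10,000 -> ~99.995%; a million -> certain.
```

The model is crude (real attempts aren't independent, and defenders adapt), but it captures why per-prompt filtering alone cannot be the whole defence once automation makes n nearly free.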

The Supply Chain Multiplier

We also need to talk about how AI amplifies supply chain risk. The Scattered Lapsus$ Hunters campaign didn't just hit one company; by exploiting Salesforce-connected apps, they compromised data from 39 companies, including Qantas and Disney. And now, according to LinkedIn analysis, we are seeing AI crawlers scanning package repositories to find vulnerabilities before patches even exist. This is the multiplier effect: AI allows adversaries to poison the well—the models, the code repositories, the CI/CD pipelines—that everyone else drinks from.

The Trust Gap

So, what is the solution? It isn't replacing humans with AI. Anthropic admits that their own model hallucinated credentials during the attack simulation. That is the "Trust Gap." You cannot automate high-stakes security decisions when the underlying intelligence is prone to fabrication.

This is why I’m interested in the approach Australia is taking with its APS AI Plan. They are focusing on a sociotechnical system—Trust, People, and Tools. For security leaders, this means using AI for what it’s good at (pattern recognition, anomaly detection) while keeping humans firmly in the loop for the strategic decisions. We need AI-enabled resilience, not AI replacement. The sketch below shows the shape of that division of labour.
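As a sketch of what "humans firmly in the loop" might mean in code, consider a triage gate where the model scores and routes alerts but destructive actions always queue for a person. The fields, thresholds, and action names are placeholders of my own invention; the structure is the point, not the specifics.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    anomaly_score: float    # 0..1, produced by the detection model
    suggested_action: str   # e.g. "isolate_host", "log_only"

# Actions with real blast radius always require human sign-off,
# no matter how confident the model is.
DESTRUCTIVE = {"isolate_host", "block_ip", "revoke_credentials"}

@dataclass
class TriageGate:
    human_queue: list = field(default_factory=list)
    auto_queue: list = field(default_factory=list)

    def route(self, alert: Alert) -> None:
        if alert.suggested_action in DESTRUCTIVE or alert.anomaly_score >= 0.8:
            self.human_queue.append(alert)   # a person decides
        else:
            self.auto_queue.append(alert)    # enrich, log, watch

gate = TriageGate()
gate.route(Alert("10.0.0.7", 0.95, "isolate_host"))  # -> human
gate.route(Alert("10.0.0.9", 0.30, "log_only"))      # -> auto
print(len(gate.human_queue), len(gate.auto_queue))   # 1 1
```

The AI still does the pattern recognition; the gate just makes "human in the loop" a property of the pipeline rather than a line in a policy document.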

Attacker Manoeuvres: The Industrialisation of Cybercrime

If you look at the threat landscape this week, you see a clear trend: the industrialisation of cybercrime. It’s not just that the tools are getting better; it’s that the business models are getting more sophisticated.

State Actors Are Getting Personal

Take the SpearSpecter campaign by the Iranian group APT42. They aren't just sending phishing emails; they are building relationships. They spend weeks social engineering targets via WhatsApp, even targeting family members, before delivering a payload. It’s a reminder that the "human" element remains the most vulnerable part of the stack.

Social Engineering at Scale

We are also seeing this play out in the job market. Malwarebytes reports a wave of fake job interviews where "candidates" are asked to download a meeting tool that turns out to be ransomware. This is what I mean by industrialisation—attackers are leveraging the legitimate infrastructure of our daily lives (Zoom invites, job applications) to gain access.

Ransomware Is Benchmarking You

Perhaps the most technically interesting development is the Kraken ransomware. It actually runs a benchmark on the victim's machine to decide how fast it can encrypt data without crashing the system. It’s a level of product engineering we usually associate with legitimate software, not crime. And with Scattered Lapsus$ Hunters launching a "Ransomware-as-a-Service" platform, these sophisticated tools are becoming available to anyone with enough crypto to pay for them.

The Debate Over "Autonomous" Espionage

I want to return to the Anthropic announcement, because it captures the central anxiety of this moment perfectly.

The Event

In mid-September, Anthropic says they detected a Chinese state-sponsored group using their own tool, Claude Code, to attack 30 global organisations. The claim is that the AI did the heavy lifting—reconnaissance, vulnerability discovery, even coding exploits—with humans stepping in only for critical decisions.

The Mechanics

They describe a six-phase attack lifecycle in which the AI acts as an agent. It doesn't just answer questions; it inspects systems, writes code, and categorises stolen data, operating at a speed humans find "impossible to match".

The Reaction

But here is where it gets complicated. The security community is largely unconvinced. As BleepingComputer notes, there are no indicators of compromise (IOCs) to back this up. Critics call it "marketing guff."

Why It Matters

Whether this specific incident happened exactly as described is almost beside the point. The strategic reality is that major powers, including China with its "AI Plus" initiative, are investing heavily in these capabilities. Georgetown University researchers point out that China is building "embodied AI" designed for real-world interaction. We are entering an era where attribution becomes a nightmare, because the "hacker" might just be a model executing instructions left days ago. The capability is coming, whether we trust this week’s press release or not.

Governance & Regulation: The Fragmentation Problem

I often say that policy is where the rubber meets the road, but right now the road is being built in three different directions at once.

The US vs. The States

In the US, the White House is drafting an Executive Order that would essentially nullify state-level AI safety laws. The argument is that a "uniform national policy" is needed to stay competitive. But as CNN reports, this is alarming safety advocates, who see states like California as the only adults in the room on regulation.

The EU Hits Snooze

Meanwhile, the European Commission is proposing to delay the implementation of the AI Act’s toughest rules until late 2027. They say the standards aren't ready. Critics say they are caving to lobbying.

The Compliance Headache

For a CISO, this is a nightmare. You have the US pushing for deregulation, the EU pushing for delayed-but-strict regulation, and China marching to the beat of its own state-planned drum. Amidst this, ISO 42001 is emerging as the only sane framework to hold onto—a voluntary standard that may be our best bet for demonstrating responsibility in a chaotic world.

Risk vs. Opportunity

The risk this week is regulatory arbitrage and autonomous attack scaling. The opportunity is to use this "pause" in strict regulation to build genuine resilience rather than just checking boxes.

Three Immediate Actions

  1. Adopt ISO 42001 Principles: Don't wait for the regulators to agree. Use this standard to build a governance framework that will stand up regardless of which way the political winds blow.

  2. Audit Your Biometrics: If you are relying solely on facial recognition or simple biometrics, you are vulnerable. Implement layered identity assurance immediately.

  3. Redesign, Don't Just Overlay: If your AI strategy is just adding chatbots to existing workflows, stop. Look for the "high performer" approach: identify a workflow that is broken and rebuild it from the ground up with AI at the core.

The Context Poll

How concerned are you about the emergence of autonomous AI-driven cyberattacks?

  • A. Immediate and material threat. This changes defence priorities now.
  • B. Emerging but containable. Worth monitoring closely.
  • C. Early hype. Limited real-world impact for now.
  • D. Unsure. Need more evidence and examples.


What really stood out this week

I keep coming back to the economics of what we saw this week. Chinese AI companies are offering models at roughly one-fortieth the price of their Western counterparts. Think about what that means for a cybercriminal. Intelligence—reconnaissance, coding, analysis—is becoming a commodity.

Meanwhile, defenders are stuck. We have a 95% failure rate in moving AI from pilot to production. Why? Because we are bound by governance, risk aversion, and legacy systems. Attackers aren't. They don't need a change management committee to deploy a new exploit.

My prediction for the next 90 days? We are going to see a material supply chain attack where the adversary used AI to automate the boring stuff—the reconnaissance and the initial access. And we won't know who did it for months, because the scale of the attack will look like a nation-state, but the actual team behind it will be small.

The takeaway is this: we need to stop waiting for the perfect tool or the perfect regulation. The asymmetry is real, and it is widening. The only way to close the gap is to start treating AI not as a feature, but as the new baseline for how we operate—both in attack and defence.

That's it. See you next week!

David

Before you go...

Tell us what you thought of this edition
  • 😀😀😀😀 Seamless alignment between AI & human insight!
  • 😀😀😕 Good stuff - more like this…
  • 😕😕 Interesting - but didn’t fully click.
  • 🥲 Not for me - and here’s why.

