Anthropic’s recent report exposed AI agents conducting autonomous cyber attacks at machine speed—and it proves that static access controls can’t defend against threats that move faster than humans can respond.

The game has changed. The era of AI-powered cyberattacks is no longer a theoretical exercise for roundtable discussions. It’s here.
In a recent, eye-opening report, Anthropic detailed the disruption of a sophisticated cyber espionage campaign by a state-sponsored group, GTG-1002. This wasn’t just another case of an attacker using an AI model for advice on how to write code; this was a fundamental shift.
The threat actor used Anthropic’s own Claude Code model as an autonomous attack agent. Anthropic states that the threat actor was able to use AI to perform “80-90% of the campaign, with human intervention required only sporadically.” In other words, the AI agents independently carried out almost all of the tactical operations, including reconnaissance, vulnerability discovery, lateral movement, credential harvesting, and data exfiltration, with minimal human oversight.
Operating at “physically impossible request rates,” the AI conducted a campaign at a speed and scale that is simply beyond human capability. This new class of attacker doesn’t just think faster; it acts faster. And it’s aimed squarely at the single biggest vulnerability in the modern enterprise: standing access.
The Anthropic report shows us an AI agent operating with a human supervisor, not the other way around. The human operators provided the strategic target, but the AI agent did the tactical work. It used an “autonomous attack framework” built on open standards like the Model Context Protocol (MCP) to “autonomously discover internal services” and “internal APIs”.
This autonomous agent was devastatingly effective for one simple reason: our security models are static. They are built on “always-on” permissions.
When an AI agent can test thousands of harvested credentials against hundreds of internal APIs at machine speed, it’s an automated assault on a static defense. This attack proves the core problem is twofold: our permissions are standing and always-on, and our responses move at human speed.
That mismatch is staggering. You cannot fight a machine-speed attack with a human-speed defense. You have to fight fire with fire.
To defeat an autonomous, continuous attacker, you need an autonomous, continuous defense. SGNL’s Continuous Identity platform was built for this reality. It directly counters the methodology used by the GTG-1002 group by automating and accelerating every stage of the defense lifecycle.
Even if an AI agent steals a token or certificate, that credential grants no access by default. Access is only authorized at the moment of request, based on real-time business and security context.
For example, SGNL can enforce a policy that states: “A user (or an agent acting on their behalf) can only access a production system if they are on-call in PagerDuty, have an active, high-priority ticket in ServiceNow, AND are connecting from a compliant device verified by CrowdStrike.”
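A policy like this amounts to a conjunction of live context checks. The sketch below is illustrative Python, not SGNL’s actual policy language: the PagerDuty, ServiceNow, and CrowdStrike lookups are stand-in booleans, and the names are hypothetical.

```python
# Hypothetical sketch of a context-based access policy. The three fields
# stand in for real-time lookups against PagerDuty, ServiceNow, and
# CrowdStrike; none of this reflects SGNL's actual implementation.
from dataclasses import dataclass


@dataclass
class AccessContext:
    on_call: bool           # is the user on-call in PagerDuty?
    active_ticket: bool     # do they hold an active, high-priority ServiceNow ticket?
    device_compliant: bool  # is their device verified compliant by CrowdStrike?


def authorize_production_access(ctx: AccessContext) -> bool:
    """Grant access only when every real-time condition holds."""
    return ctx.on_call and ctx.active_ticket and ctx.device_compliant


# A stolen token alone changes nothing: if any live condition fails,
# the request is denied at the moment it is made.
print(authorize_production_access(AccessContext(True, True, True)))   # allowed
print(authorize_production_access(AccessContext(True, True, False)))  # denied
```

Because the decision is computed from current context rather than stored entitlements, there is no standing grant for an attacker to harvest.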
This approach starves the AI agent of the very footholds it needs to spread. The attack surface shrinks from “always” to “never, unless explicitly justified for a specific task, right now.”
This isn’t just a one-time check at login. SGNL makes a granular, real-time authorization decision for every request the AI agent makes.
Instead of just asking, “Is this token valid?” SGNL asks, “Based on the user’s current context, is this AI agent justified in accessing this specific database endpoint or this specific API function at this exact second?” This intercepts the attack during its autonomous execution, blocking its ability to probe and move laterally.
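The difference between the two questions can be sketched as a toy model: the function, parameters, and endpoint names below are hypothetical, and this is a simplified illustration of per-request evaluation, not SGNL’s API.

```python
# Toy model of per-request authorization. In the legacy model, a valid
# token is sufficient; in the continuous model, every request also
# re-checks live justification for the specific endpoint. All names here
# are illustrative.
def evaluate_request(token_valid: bool, context_ok: bool, endpoint: str) -> str:
    if not token_valid:
        return "deny: invalid token"
    if not context_ok:
        # The token may be stolen-but-valid; without current business
        # context, the request is still refused.
        return f"deny: no current justification for {endpoint}"
    return f"allow: {endpoint}"


# A machine-speed probe of many endpoints fails on every single request
# that lacks live justification, even with a valid credential in hand.
for ep in ["/billing/api", "/users/export", "/internal/search"]:
    print(evaluate_request(token_valid=True, context_ok=False, endpoint=ep))
```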
When a tool like CrowdStrike or your SIEM detects the AI’s anomalous, high-frequency activity, it doesn’t just create a ticket for a human to review. It fires an immediate, automated CAEP (Continuous Access Evaluation Profile) signal.
SGNL receives that signal and automatically triggers remediation: active sessions are terminated, access tokens are revoked, and privileges are removed. This entire process happens in seconds.
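CAEP events travel as security event tokens under the OpenID Shared Signals Framework. The sketch below shows, in heavily simplified form, how a receiver might map a “session revoked” event to remediation: the payload shape is abbreviated, and the action strings stand in for real session termination and token revocation.

```python
# Simplified sketch of acting on a CAEP "session revoked" event.
# The event-type URI comes from the OpenID CAEP specification; the
# subject format and remediation actions are abbreviated placeholders.
CAEP_SESSION_REVOKED = (
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked"
)


def handle_security_event(event: dict, sessions: dict) -> list[str]:
    """Terminate sessions and revoke tokens for the event's subject."""
    actions = []
    for event_type, payload in event.get("events", {}).items():
        if event_type == CAEP_SESSION_REVOKED:
            subject = payload["subject"]["sub"]  # simplified subject identifier
            for session_id in sessions.pop(subject, []):
                actions.append(f"terminated session {session_id}")
            actions.append(f"revoked tokens for {subject}")
    return actions


event = {"events": {CAEP_SESSION_REVOKED: {"subject": {"sub": "svc-agent-1"}}}}
sessions = {"svc-agent-1": ["sess-a", "sess-b"]}
for action in handle_security_event(event, sessions):
    print(action)
```

The key property is that no human sits between detection and remediation: the signal itself drives the revocation.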
This is the only effective response. You must fight a machine-speed attack with machine-speed remediation.
The Anthropic report is a clear warning shot. The age of autonomous AI attacks is here, and it is ruthlessly effective against security models built on static, standing access.
The only viable defense is a continuous one.
Your security can no longer be a series of disconnected, periodic checks. It must be a living, adaptive system that evaluates context in real-time, shrinks the attack surface to zero by default, and responds to machine-speed threats with machine-speed automation.
This is what SGNL’s Continuous Identity platform delivers.
See how SGNL can defend your organization from the next generation of AI threats. Request a demo today.
Want the latest identity-first security topics and trends delivered to your inbox? Helpful, insightful content, no fluff.