
Ping Identity’s latest research paints an alarming picture: cybercriminals are leveraging agentic AI to deploy rogue bots that impersonate legitimate ones, swipe credentials, spread malware and surreptitiously unleash chaos. According to its March 2026 IDC study, only 9 per cent of companies were prepared to handle these relentless, AI-powered identity threats.
The threat isn’t theoretical anymore. Enterprises are rolling out AI systems that access sensitive data, automate workflows and make real-time decisions.
For instance, an AI agent recently deleted PocketOS’s production database and backups in just 9 seconds, causing disruption for the car rental software company, according to founder Jeremy Crane. The incident involved Cursor, an agent running on Anthropic’s Claude Opus 4.6 model. As industries race to automate with AI, the PocketOS case shows how quickly an autonomous agent can cause real damage.
This surge is exposing businesses to rogue agents and attackers who may weaponise these platforms to sidestep governance controls and exploit vulnerabilities.
When an AI agent misbehaves, gets hijacked or goes rogue, most companies struggle to answer a fundamental question: who authorised the action and who’s accountable?
This evolution is putting identity front and centre, not just for IT teams, but for anyone concerned about security, compliance or legal fallout. It’s a pressing issue, especially in sectors where airtight audits are a regulatory must-have.
Jasie Fon, regional vice president for Asia at Ping Identity, argues that the approach to identity management needs to be reframed in the era of agentic AI.
According to Fon, identity has long been treated as a gateway. The focus was on verifying who or what can access a system, with the assumption that once access is granted, activity within that session can be trusted. That no longer holds.
The introduction of AI agents has moved the main risk from system access to the actions performed within it. Even when an identity is fully verified, its actions may still be unexpected, unintended or even harmful. Therefore, the focus shifts from identifying users to ensuring that every action within the system is trusted, explained and controlled.

IMAGE: Ping Identity
“This is where many organisations are currently misaligned. Significant investment has gone into strengthening authentication, but far less attention has been paid to governing behaviour after access is granted,” said Fon. “As AI agents become more autonomous, that gap becomes more visible.”
She believes that identity needs to evolve from a checkpoint at the perimeter to a continuous control layer that operates at the moment decisions are made. “Every action, whether initiated by a human or an AI agent, needs to be evaluated in context, against policy, and with clear accountability,” Fon stated.
The discussion, in her view, is shifting from simply granting access to continuously governing identity. The implication is clear and important: trust can’t simply be granted once someone logs in; it must be consistently reaffirmed. This requires identity systems to move beyond static verification and become dynamic frameworks that offer real-time insights, enforce policies where actions occur, and preserve a transparent line of accountability for both human and non-human actors.
Runtime identity represents an evolution in the application of identity within contemporary systems. Rather than concentrating solely on authentication at the initial point of access, this model emphasises ongoing behavioural assessment throughout the entire course of an interaction.
This methodology is especially pertinent to AI agents, as risk is not limited to system entry but also encompasses the agent’s conduct post-authentication. While proper authentication may be achieved, an agent’s activities can nonetheless diverge from established norms or surpass designated boundaries.
“The organisations that adapt to this shift will be able to scale AI with confidence,” Fon said. “Those that do not will find that the more autonomous their systems become, the harder they are to govern.”













