When shadow AI turns sinister: Why the next breach will come from within

[Image generated by Deeptech Times using Google Gemini]

The security landscape is shifting fast, and it’s not just the cybercriminals outside the firewall causing headaches. Increasingly, organisations are waking up to the reality that insider threats, thanks to the unchecked adoption of AI, are now more dangerous than ever. 

According to the latest Exabeam research, 64 per cent of cybersecurity pros say insiders are a greater risk than hackers outside the organisation. Over half (53 per cent) have seen a spike in insider incidents over the past year, and a similar number expect the trend to worsen. 

The anxiety is most acute in the Middle East, where 70 per cent of respondents identify internal actors as the top threat, but APAC including Japan isn’t far off: 60 per cent of organisations saw measurable increases in insider incidents over the past year, and 53 per cent now view insiders as a greater risk than external actors.

AI supercharges insider threats

AI isn’t just accelerating innovation; it’s arming insiders with smarter, stealthier tools. Exabeam’s data reveals that 76 per cent of organisations have already experienced unauthorised use of GenAI by employees. The consequences? 93 per cent of those surveyed see AI making insider attacks more effective. The biggest worries? AI-powered phishing (27 per cent) and unsanctioned GenAI tools (22 per cent).

Yuval Fernbach, VP and CTO at JFrog, puts it bluntly: “Recognising and mitigating the risks of shadow AI is becoming a critical priority for CIOs and CISOs who must strike a balance between innovating while maintaining security. Organisations should follow proven software development practices by creating developer-friendly workflows with strong security and robust governance.”

Security’s Achilles’ heel

The C-suite and security teams aren’t on the same page. Exabeam notes that 74 per cent of security professionals think their executives underestimate insider risk – a sentiment that skyrockets to 88 per cent in the Middle East. This gap in perception is more than a morale issue; it’s a roadblock to the investment, focus and strategic planning needed to stop the threat from within.

What’s holding companies back? Data privacy resistance (20 per cent), lack of visibility (16 per cent) and contextual blind spots (13 per cent) are the biggest hurdles. Add in alert fatigue and siloed security tools, and it’s no wonder organisations are struggling to catch insider threats before it’s too late.

Visibility and alignment are key

Exabeam’s advice? Double down on behavioural analytics and contextual insights to spot the subtle, sophisticated risks that legacy tools miss. But technology alone isn’t enough. Leadership needs to get in sync with security teams to build mature, effective insider threat programmes.

Even with programmes in place, many organisations lack the tools and focus for truly effective detection. 

JFrog’s research found that nearly half (49 per cent) don’t have a handle on machine learning models used within their apps, leaving them exposed to security risks and compliance nightmares. Without proper oversight, ML models can slip through the cracks, opening the door to hidden threats.

More than two-thirds of businesses can’t reliably track open-source packages with ML dependencies, creating major blind spots. When third-party code brings in indirect ML components, vulnerabilities can sneak in undetected.

Scanning helps, but it’s not a silver bullet: 79 per cent have some kind of AI or ML scanning in place, but these tools aren’t mature. False-positive rates can hit 96 per cent, especially for models on public repositories like Hugging Face, overwhelming teams and masking real threats.

The stakes are high. Industry research shows that organisations with rampant shadow AI see more sensitive data compromised in breaches, including PII (65 per cent) and intellectual property (40 per cent). 

APAC including Japan is especially alert, with 69 per cent expecting insider threats to climb in the next year, per Exabeam. Over half (53 per cent) see insiders, whether malicious or compromised, as a bigger risk than outsiders. GenAI is a top culprit, making attacks faster, stealthier and harder to catch.

“AI has added a layer of speed and subtlety to insider activity that traditional defences weren’t built to detect,” says Kevin Kirkwood, CISO at Exabeam. “Security teams are deploying AI to detect these evolving threats, but without strong governance or clear oversight, it’s a race they’re struggling to win. This paradigm shift requires a fundamentally new approach to insider threat defence.”

The bottom line: as AI, identity misuse and poor behavioural visibility fuel a new breed of insider threats, the winners will be those who bridge the gap between boardroom priorities and operational reality. True progress means moving past box-ticking compliance and adopting contextual, AI-savvy strategies that distinguish between human and machine-driven risk and promote collaboration across the business.

Closing the shadow AI threat gap

Solving the shadow AI problem isn’t just about tightening policies. It takes engaged leadership, cross-team collaboration and governance that can keep up with the breakneck pace of AI innovation. The goal: faster threat detection, quicker response and adaptive strategies to face whatever tomorrow’s insiders throw your way.

JFrog’s Shadow AI Detection tool is a step in this direction, automatically cataloguing internal AI models and external API gateways, giving security teams the oversight needed to keep both compliance and innovation on track.

The message for 2026: security isn’t mature unless leadership and operations are on the same page. In the age of shadow AI, that alignment could make all the difference.
