AI and identity: Rebuilding trust in a world of smarter adversaries

By Mark Dallmeier and Edwardcher Montreal 

In the span of just a few years, AI has transformed from a futuristic buzzword into a core engine of business innovation and operational effectiveness. From automating routine tasks to enhancing decision making, AI’s impact is undeniable.

But alongside its promise, AI has also reshaped the threat landscape. The very technology that drives efficiency and insight is now being weaponised to undermine identity: the foundational element of secure digital and physical access. 

The new trust challenge

Traditional trust signals such as signatures, photo IDs, static passwords and multi-factor authentication were once sufficient to prove identity. But in an age where generative AI can mimic faces, voices and writing styles convincingly within seconds, these signals are no longer reliable. 

Attackers can now use AI to:

  • Generate deepfake photos, audio and video that look and sound authentic. 
  • Clone communication patterns and behavioural cues to impersonate employees or executives. 
  • Replicate websites and login portals with such fidelity that even trained users can be fooled. 
  • Launch autonomous attack chains that find and exploit vulnerabilities without direct human involvement. 

These aren’t isolated or theoretical threats. They are real and increasing in frequency and sophistication. When trust signals can be generated or manipulated at scale by machines, organisations can no longer rely on passive or perception-based methods of identity proofing. 

Trust has evolved

Despite these challenges, trust itself isn't gone; it has evolved. What was once implicit must now be engineered intentionally. We must anchor trust in verifiable proof that cannot be forged by AI, while adapting our security architectures to operate at the speed and scale of modern threats.

This means moving beyond traditional authentication toward cryptographically rooted, hardware-backed identity models that are resistant to deepfakes, impersonation and automated attacks. Standards such as FIDO and platforms like HID’s Crescendo ecosystem exemplify this shift by providing strong, phishing-resistant authentication that binds identity to devices and keys that cannot be spoofed.
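The property that makes FIDO-style authentication phishing-resistant is origin binding: the authenticator signs a server-issued challenge together with the origin the browser actually reports, so a look-alike domain cannot replay the credential against the real site. A minimal sketch of that idea, using an HMAC with a device-bound secret as a dependency-free stand-in for the real public-key signature (all names here are illustrative, not a FIDO or HID API):

```python
import hashlib
import hmac
import secrets

# Stand-in for a key that never leaves the hardware authenticator.
# Real FIDO uses an asymmetric keypair; HMAC keeps this sketch self-contained.
device_secret = secrets.token_bytes(32)

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """Authenticator signs the challenge *and* the origin the browser sees."""
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def verify_assertion(challenge: bytes, expected_origin: str, signature: bytes) -> bool:
    """Server checks the signature against the origin it issued the challenge for."""
    expected = hmac.new(device_secret, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = secrets.token_bytes(16)

# Legitimate login: origins match, so verification succeeds.
ok = verify_assertion(challenge, "https://example.com",
                      sign_assertion(challenge, "https://example.com"))

# Phishing site: the browser reports the attacker's origin, so the
# signature cannot verify for the real site, however convincing the page looks.
phished = verify_assertion(challenge, "https://example.com",
                           sign_assertion(challenge, "https://examp1e.com"))
```

Because the origin is part of what is signed, a pixel-perfect clone of a login portal gains nothing: the credential simply will not validate anywhere except the genuine site.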

AI as a workforce member, not just a tool

As AI systems become embedded in workflows, making decisions, accessing data and interacting with users, often autonomously, organisations are coming to a critical realisation: AI must be treated not just as another technology but as an identity in its own right.

This demands that AI agents be governed with the same rigour we apply to people:

  • Define explicit permissions for what each AI can and cannot access. 
  • Monitor AI activity as you would a human user.
  • Audit outputs and decisions to ensure accountability and traceability. 

When AI is treated like an identity with boundaries and oversight, trust becomes manageable again.
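The three requirements above, explicit permissions, monitoring and auditable decisions, can be sketched as a simple guard around every resource an agent touches. This is an illustrative pattern, not a specific product's API; the names are hypothetical:

```python
from datetime import datetime, timezone

class AgentIdentity:
    """Treat an AI agent like a human user: scoped permissions plus an audit trail."""

    def __init__(self, name: str, allowed_resources: set[str]):
        self.name = name
        self.allowed = set(allowed_resources)  # explicit allowlist, nothing implicit
        self.audit_log = []                    # every attempt recorded, granted or not

    def access(self, resource: str) -> str:
        granted = resource in self.allowed
        self.audit_log.append({
            "agent": self.name,
            "resource": resource,
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        if not granted:
            raise PermissionError(f"{self.name} may not access {resource}")
        return f"{resource}:contents"  # placeholder for the real data fetch

agent = AgentIdentity("invoice-summariser", {"invoices_db"})
agent.access("invoices_db")        # permitted, and logged
try:
    agent.access("payroll_db")     # denied, but still logged for the auditors
except PermissionError:
    pass
```

The key design choice is that denials are logged as faithfully as grants: the audit trail shows not only what the agent did, but what it attempted.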

Avoid the numbness trap

One of the biggest risks today is numbness: the sense of inevitability that comes after years of headlines about breaches and cyberattacks. This desensitisation can lead to outdated assumptions about risk and delayed action. 

Organisations can avoid this trap by:

  • Basing decisions on objective risk analysis. 
  • Viewing AI through a business lens. 
  • Staying informed and vigilant about emerging threats and technologies. 
  • Treating governance as mandatory. 
  • Anchoring identity in strong, hardware-backed authentication. 

This mindset shift isn’t just about defence — it’s about enabling resilience, agility, and confidence as technologies evolve.

Five principles for trust-centric identity in the AI age

Building durable trust requires a mindset shift.

Five principles stand out as essential:

  1. Understand your current state: Know where AI already influences your systems, workflows or vendors
  2. Apply governance that treats AI as a participant: Give AI systems access rules, oversight and identity controls
  3. Engage users early: Successful adoption requires users to understand the value and feel confident with new tooling
  4. Encourage experimentation: Teams that embrace AI responsibly will gain competitive advantage
  5. Build identity for continuous acceleration: Authentication must be adaptive, hardware-rooted and resistant to phishing and impersonation

The path forward: Converged authentication

One of the most effective strategies today is converged authentication — unifying identity across physical access, digital systems and workforce credentials. 

A converged model helps organisations:

  • Provide one trusted credential for both physical and digital access. 
  • Apply consistent, context-aware policies across environments. 
  • Reduce administrative overhead. 
  • Strengthen defences against impersonation. 
  • Improve user experience through unified journeys. 
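"Context-aware policies across environments" can be made concrete with a small sketch: the same credential is evaluated against rules that differ by target, a datacentre door versus a finance application. The policy values and names below are invented for illustration:

```python
# One credential, two environments, policy decided by context (target + time).
# Assurance levels and hours are hypothetical examples, not a real standard.
POLICIES = {
    "datacentre_door": {"min_assurance": 2, "hours": range(7, 19)},   # business hours only
    "finance_app":     {"min_assurance": 3, "hours": range(0, 24)},   # any hour, higher bar
}

def authorise(credential: dict, target: str, hour: int) -> bool:
    """Apply the target's policy to a single converged credential."""
    policy = POLICIES[target]
    return (credential["assurance_level"] >= policy["min_assurance"]
            and hour in policy["hours"])

badge = {"holder": "j.tan", "assurance_level": 3}  # one credential for door and app

print(authorise(badge, "datacentre_door", hour=9))    # inside hours: granted
print(authorise(badge, "datacentre_door", hour=22))   # outside hours: denied
print(authorise(badge, "finance_app", hour=22))       # app allows any hour: granted
```

Because the policy, not the credential, carries the context, administrators tune one rule set per environment instead of issuing and revoking separate badges, passwords and tokens.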

In the age of AI, trust is not lost. It has been redefined. The signals we once depended on are no longer sufficient. But by grounding identity in cryptographic proof, governing AI with discipline, and embracing converged authentication, organisations can rebuild trust for the challenges ahead.

Mark Dallmeier is chief revenue officer at Envoy Data

Edwardcher Montreal is principal solutions architect, identity and access management, consumer authentication solutions at HID
