Beyond the numbers: Sumsub cautions against the sophistication shift driving AI-powered fraud across APAC


At first glance, global fraud statistics in 2025 suggest progress. Many industries report stable or even declining volumes of attempted fraud compared to previous years. But beneath this surface lies a far more sinister trend.

According to Sumsub’s Identity Fraud Report 2025-2026, the digital fraud landscape has undergone a structural transformation with a staggering 180 per cent surge in sophisticated, multi-layered attacks, even as total fraud volume remains flat. What looks like stability is in fact a deception.

In her interview with Deeptech Times, Penny Chai, vice president for APAC at Sumsub, cautions that many organisations are misreading the threat environment entirely. 

The industry’s long-standing obsession with counting attacks, treating fraud like rainfall that can be measured in volume, is now dangerously outdated. 

“One of the biggest misconceptions is that stable percentages mean improved security,” she explains. “When companies focus on volume alone, they develop a false sense of safety. The real shift is in quality, not quantity.” 

Chai describes this evolution as the “sophistication shift”, in which fraud is no longer driven by scale or brute force but by precision, patience and professionalisation. What began in 2024 as the rise of fraud-as-a-service marketplaces, where criminals sold ready-made attack kits, has now matured into industrialised, AI-orchestrated operations capable of bypassing even advanced defences.

AI industrialises fraud

AI now sits at the centre of the modern criminal ecosystem. While early fraud attempts leveraged AI primarily for automation, today AI is being used far more aggressively and creatively: to synthesise identities, generate hyper-realistic deepfake videos and audio, and even to build autonomous bots that can penetrate systems without human intervention.

“Synthetic personal data now account for around 15.7 per cent of all fraud attempts,” Chai notes, placing them among the fastest-growing forms of attack. Deepfakes have recorded triple-digit growth for three consecutive years across markets such as Singapore, Thailand, Malaysia and Hong Kong, with no signs of slowing down. 

Behind these attacks are emerging AI fraud agents – bots trained to independently conduct complex orchestration, manipulate context and environment signals, as well as mimic authentic behaviour to infiltrate digital platforms more reliably than any human criminal could. 

What emerges is not random criminality but a structured adversarial model with strategy, testing, iteration and deployment cycles that increasingly resemble the discipline of startup innovation.

Why APAC is ground zero

The report reveals that APAC recorded 16.4 per cent year-on-year growth in fraud, the second highest globally. 

To the untrained eye, this may seem inexplicable, especially given that the region includes some of the world’s most advanced digital economies. For Chai, however, the explanation is both simple and sobering: APAC’s diversity has become its vulnerability.

The region spans markets ranging from early-stage digital adopters to highly regulated financial hubs. This fragmentation across digital maturity, regulation, infrastructure and consumer behaviour creates a fertile testing ground for illicit innovation. 

Fraudsters run what Chai calls “black market testbeds”: they deploy new tactics first in countries with rising digital adoption such as Malaysia or Pakistan, then scale them into vibrant social commerce ecosystems like Indonesia and the Philippines, and finally unleash the most refined attacks on developed markets such as Singapore, Hong Kong, Australia and India.

“They experiment where barriers are lower, validate in highly participatory digital cultures, and then push the most sophisticated attacks into mature financial systems where the potential payoff is the greatest,” she explains. 

One of the report’s most striking data points is the 17 per cent fraud acceptance rate among approved applicants in Cambodia, where organised crime compounds have gained notoriety as physical and digital infrastructure for syndicates. 

Demand for technical talent in scam networks is high, and regions rich in engineering skills have inadvertently become incubators for criminal innovation.

Penny Chai, vice president for APAC, Sumsub
IMAGE: Sumsub

The dawn of payment method fraud 

The most dramatic shift identified in the report is that payment method fraud now surpasses ID document fraud, with a fraud rate of 6.6 per cent. This is not an isolated anomaly but a signal that criminals are no longer focused merely on account creation or identity bypassing. Instead, they are embedding themselves into transactional flows to achieve instant monetisation.

Where once fraudsters attacked the front door, they now aim to slip through as legitimate users to exploit digital wallet infrastructure, card credentials and cross-border payment rails. Fraud-as-a-service kits are equipped with prebuilt scripts and automation instructions that enable attackers to manipulate the entire transaction journey.

With digital payment volumes accelerating, particularly across Southeast Asia’s booming financial and crypto ecosystems, the incentives are substantial. “Fraudsters want to get inside the payment layer because it provides immediate revenue,” Chai notes. Once successful, attack strategies are rapidly recycled across multiple platforms.

This exposes a critical weakness across fintech and banking infrastructure: security systems still concentrate their strongest controls at onboarding, assuming the greatest risk lies at the beginning of the user journey. Today, the most dangerous risks unfold long after accounts are created, often when behavioural changes or financial activity spike.

Why traditional fortification is failing

Legacy fraud systems built around document verification, static rules and manual review are collapsing under the sophistication of new attacks. 

Fraudsters are now manipulating telemetry data (tampering with device signals, geolocation, network routing and camera metadata) to undermine biometric and behavioural integrity. Virtual machines, sensors and proxy networks are weaponised to mimic normal interaction. SDK manipulation can now neutralise device fingerprinting entirely.

“We caught many fraud rings in 2025 because they reused the same background behind deepfake videos,” Chai recalls. “Now they manipulate the telemetry instead, spoofing the tools we use to identify them. It requires an engineer to build these systems. They are not simple manipulations anymore.” 

The trust crisis and the industries at the bottom

The report reveals consumers now give traditional banks the highest trust scores, while sectors such as crypto, dating, gaming and social media sit at the bottom of public confidence. 

Banking has earned trust not through advertising but through regulatory discipline, transparency and the proactive communication of security safeguards. By contrast, many social and digital-commerce platforms do not articulate how they verify users, do not demonstrate visible compliance measures and do not intervene until law enforcement demands it.

Chai argues that trust must now be built visibly and continuously. Users must know not only that platforms are safe, but how they are safe. In her view, trust is no longer a brand attribute; it is a security outcome.

“Shared responsibility is essential,” she notes. “Almost half of consumers believe companies and governments should shoulder responsibility for safety. But we can see that many social platforms have not yet embraced that role.” 

The future of fraud will not be defined by higher volume but by escalating intelligence. Attackers are moving faster than regulators, faster than compliance policies and faster than legacy security infrastructure. 

The only credible path forward is an architecture that continuously verifies identity, behaviour and context: an approach where AI is used not defensively but offensively, where ecosystem intelligence is shared rather than siloed, and where fraud prevention is treated as a business-critical discipline rather than a compliance checkbox.

As Chai reflects, this is no longer a contest between companies and criminals. It is a battle between AI and AI, and the organisations that fail to adapt will not realise their vulnerability until they wake up to catastrophic damage. As fraud becomes industrialised, only businesses that act now can shape a safer digital future. 
