
By Sunny Rao
AI adoption is accelerating at breakneck speed, with Gartner predicting that 90 per cent of new business software applications worldwide will include embedded ML models or services by 2027, and that AI agents will augment or automate 50 per cent of business decisions.
Industry research also shows 42 per cent of companies in Southeast Asia are already deploying AI agents, in industries ranging from financial services to manufacturing.
While agentic AI holds immense potential for helping companies unlock new efficiencies and competitive advantages, challenges still exist around data provenance, AI governance and security.
Despite these hurdles, executives remain convinced of AI’s long-term potential and feel increasing pressure to act. When innovation is under pressure to move faster, shortcuts and risk increase, making it vital to build systems where trust and speed go hand in hand.
When speed outruns governance, AI creates more risks than efficiencies
The rapid integration of AI across development pipelines has created major governance challenges for organisations.
For example, developers and data science teams frequently integrate open-source AI models from Hugging Face or services directly from providers such as Anthropic, OpenAI and Google without organisational oversight. This ungoverned activity, often referred to as shadow AI, creates dangerous blind spots that leave enterprises vulnerable to compliance violations, data leaks and supply chain attacks.
Left unchecked, automation in software delivery isn’t innovation; it’s simply risk disguised as progress. Without intentional safeguards built into every stage of development, we’re setting ourselves up for the next big failure story: a rogue release, unchecked vulnerability or dependency bug.
“Control by design” is how we avoid that. It’s not a new process or another compliance layer. It’s a mindset that bakes trust and accountability into the way we architect DevOps processes or software automation itself.
Regulation is shifting from principles to enforcement
As governments across the EU, U.S., U.K. and APAC implement new regulations focused on software provenance, accountability and system resilience, organisations need the right systems and processes in place to protect against unforeseen risks, gain control, ensure compliance and build resilience in the fast-evolving AI landscape.
Singapore, for example, updated its Model AI Governance Framework in 2024 to account for GenAI and outline nine dimensions of trusted AI, from accountability to transparency to robustness. Additionally, under the EU Artificial Intelligence Act, enterprises that are importing AI systems into the EU must retain certain key documents for 10 years.
As open-source software has become the backbone of digital infrastructure, its supply chains are increasingly targeted. Attackers are embedding malicious code in widely used libraries, knowing these components will be trusted and adopted at scale.
Thus, governments are guiding organisations towards establishing software bills of materials (SBOMs) to create greater visibility into the origin, intent and evolution of every piece of software. But don’t think of this as an obstacle. Rather it’s an opportunity. Building explainability into systems means you’ll be ready when auditors (or customers, for that matter) ask the inevitable question: why did your system make that choice?
Building control by design for responsible AI
One of the key themes that consistently emerges from my conversations with tech leaders across the region is: how can I utilise AI to accelerate innovation and remain competitive, while staying compliant and managing risk?
Many of us think creating process controls means slowing down, getting a sign-off, or asking for a second pair of eyes. That doesn’t work at machine speed. What companies need in today’s AI era are embedded control points that operate invisibly, automatically and continuously.
To address this, we need to shift left: treat AI models, datasets and agents as first-class citizens in the software supply chain, and move from reactive safety nets to proactive architectures. That means embedding them into your build pipelines with the same rigour and traceability we apply to code and binaries. Think of it as creating an AI SBOM: a full map of what’s running, where it came from and what’s changed over time.
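As an illustrative sketch of the idea (the function and field names below are hypothetical, not a standard SBOM schema), an AI SBOM entry can be as simple as recording each model artifact’s origin, version and content hash at build time:

```python
import hashlib
import json

def sbom_entry(name: str, source: str, version: str, artifact: bytes) -> dict:
    """Record one model artifact for an AI SBOM: where it came from,
    which version it is, and a content hash so later audits can
    verify exactly what was deployed."""
    return {
        "name": name,
        "source": source,   # e.g. the registry or provider it was pulled from
        "version": version,
        "sha256": hashlib.sha256(artifact).hexdigest(),
    }

# Toy example: register a model artifact as it enters the build pipeline.
entry = sbom_entry("sentiment-model", "huggingface.co/example", "1.2.0", b"model-weights")
print(json.dumps(entry, indent=2))
```

In a real pipeline the same record would be emitted for every model, dataset and agent a build pulls in, so the full map accumulates automatically rather than being assembled after the fact.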
1. Create a single source of truth
Start by unifying delivery and compliance. The moment security tools and workflows live in silos, you lose visibility and with it, accountability. When everything runs through a single pipeline, you can capture signed model artifacts, runtime logs, dependency graphs, and build an end-to-end audit trail you can actually trust. This is about creating a system of record that gives teams and regulators confidence in what’s being deployed.
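One way to picture such a system of record (a minimal sketch; the class and field names are my own, not any particular product’s API) is an append-only audit trail in which every record hashes its predecessor, so tampering with any entry breaks the chain:

```python
import hashlib
import json

class AuditTrail:
    """Append-only audit log: each record stores a hash of its content
    plus the previous record's hash, making silent edits detectable."""

    def __init__(self):
        self.records = []
        self._prev = "0" * 64  # genesis marker

    def append(self, event: dict) -> dict:
        record = {"event": event, "prev": self._prev}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev = record["hash"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Re-walk the chain; any modified or reordered record fails."""
        prev = "0" * 64
        for r in self.records:
            payload = json.dumps({"event": r["event"], "prev": r["prev"]},
                                 sort_keys=True).encode()
            if r["prev"] != prev or r["hash"] != hashlib.sha256(payload).hexdigest():
                return False
            prev = r["hash"]
        return True

trail = AuditTrail()
trail.append({"action": "deploy", "artifact": "model-1.2.0"})
trail.append({"action": "rollback", "artifact": "model-1.1.0"})
print(trail.verify())  # True
```

Production systems would add real signatures and durable storage, but the principle is the same: the audit trail is only trustworthy if it cannot be quietly rewritten.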
2. Bake software governance into both tools and culture
Governance frameworks are important. But unless they’re enforced at build, deploy and runtime, they stay on paper. Policy-as-code lets you operationalise those frameworks in the places developers already work.
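A toy illustration of policy-as-code (the rules, manifest fields and allow-list below are hypothetical, not a real policy engine’s syntax): express each governance rule as a check over the build manifest, and fail the pipeline on any violation:

```python
# Hypothetical allow-list of approved model sources.
APPROVED_SOURCES = {"internal-registry", "huggingface.co"}

def check_policies(manifest: dict) -> list[str]:
    """Evaluate governance rules as code against a build manifest;
    return the list of violations (empty means the build may proceed)."""
    violations = []
    if not manifest.get("signed"):
        violations.append("artifact is not signed")
    for model in manifest.get("models", []):
        if model.get("source") not in APPROVED_SOURCES:
            violations.append(f"model '{model.get('name')}' comes from an unapproved source")
    if manifest.get("critical_vulns", 0) > 0:
        violations.append("build contains unresolved critical vulnerabilities")
    return violations

# A compliant manifest passes; an unsigned, shadow-AI import does not.
good = {"signed": True, "critical_vulns": 0,
        "models": [{"name": "ranker", "source": "internal-registry"}]}
bad = {"signed": False,
       "models": [{"name": "ranker", "source": "random-mirror"}]}
print(check_policies(good))  # []
print(check_policies(bad))
```

Because the rules live in code, they run at build, deploy and runtime in the places developers already work, rather than sitting in a policy document nobody reads.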
An MIT study found that 95 per cent of enterprise GenAI pilots failed to create meaningful revenue impact. The problem wasn’t the model. It was brittle workflows, poor integration, and friction between teams. The successful 5 per cent? They built AI into high-value workflows, with clear ownership, automation and guardrails.
In my experience, tooling and culture must evolve and move together. You can’t retrofit governance after the fact. It needs to be part of a company’s software delivery DNA.
3. Pair human oversight with smart automation
As software complexity grows, so do the blind spots. Every release, every model update introduces change and risk. What works is a layered approach: humans for judgment, AI for pattern detection and scale. And critically, this oversight isn’t static. The goal is to continually identify governance gaps and automation opportunities, so the system keeps getting stronger.
Think about how fraud prevention works in banking. Machines flag anomalies. People investigate them. The same principle now applies to software supply chains and AI systems, only the scale is even greater, and the stakes arguably higher.
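A minimal sketch of that division of labour (the threshold rule and field names are illustrative, not a production detector): automation flags the outliers, and only the flagged items reach a human reviewer:

```python
import statistics

def flag_for_review(releases: list[dict], factor: float = 3.0) -> list[str]:
    """Automated first pass: flag any release whose change size is far
    above the median of recent releases. Humans investigate only the
    flagged ones instead of reviewing every release by hand."""
    median_size = statistics.median(r["files_changed"] for r in releases)
    return [r["id"] for r in releases if r["files_changed"] > factor * median_size]

releases = [
    {"id": "r101", "files_changed": 5},
    {"id": "r102", "files_changed": 7},
    {"id": "r103", "files_changed": 6},
    {"id": "r104", "files_changed": 120},  # unusually large change: routed to a human
]
print(flag_for_review(releases))  # ['r104']
```

The specific signal matters less than the loop: machines narrow thousands of events down to a handful, humans apply judgment to those, and what they learn feeds back into better rules.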
Don’t build for today. Build for the next decade
In boardrooms, “governance” often gets lumped together with “red tape”. But scalable governance isn’t bureaucracy. It’s responsible architecture. It’s what separates the companies that harness AI safely and effectively from those that ultimately open themselves up to increased risk.
In short, we simply cannot afford to think of trust or security as an element that gets “bolted on” – almost an afterthought in the last stages of software development. Trust must be a core principle of software design: embedded throughout the way data flows, the way agents interact, and the way systems make decisions.
The “AI industrial revolution” of software design that we’re living in is not just about having AI write code faster. It’s about redesigning processes to enable companies to utilise AI responsibly at scale. Those that master the balance will set the pace for the rest of the industry.