Humans at the core: Why ST Engineering believes cyber defence in the AI Age begins with people

Middle: Goh Eng Choon, president for cyber, ST Engineering
Image generated by Deeptech Times using Google Gemini

As AI becomes central to cybersecurity, it’s easy to assume that automation alone can outpace digital threats. Yet, according to Goh Eng Choon, president for cyber at ST Engineering, the most sophisticated defences can still fail if humans are written out of the equation.

“Even as we move into the age of generative and agentic AI, humans must remain in control. The goal is not to replace people but to address their weaknesses, by combining human instinct and ethical judgment with the speed and precision of AI,” he told Deeptech Times at GovWare 2025.

That philosophy of keeping humans at the core underpins ST Engineering’s approach to developing next-generation cybersecurity solutions. In a world where AI now defends and attacks at machine speed, the company is doubling down on one timeless truth: people still matter most.

A shifting AI landscape, from assistive to agentic

AI is hardly new to cybersecurity. For years, machine learning has powered predictive threat models, anomaly detection, and automation of repetitive security operations. What changed, as Goh explained, was the advent of GenAI and agentic AI, which gave rise to systems capable of not just identifying threats but autonomously creating, adapting and executing responses.

“With traditional AI, humans remained firmly in control,” he said. “But with GenAI, there’s a shift toward delegating more decision making to machines. The danger lies in assuming AI can be left entirely unsupervised.”

Goh likened the transition to raising a capable but unpredictable child: one that learns, adapts and evolves beyond its initial programming. Without continuous human engagement, these systems risk “collapsing in on themselves” as self-reinforcing learning loops create unreliable or biased models.

From his vantage point, the solution is symbiosis. “Humans need to interact with AI throughout its deployment lifecycle. It’s not about blind trust. It’s about building confidence through transparency and feedback.”

The perils of deskilling

One of Goh’s most compelling observations is the paradox of convenience. As AI systems grow more capable, human operators risk losing critical skills – a phenomenon he calls ‘deskilling’.

“Think of how we used to remember phone numbers,” he quipped. “Now our smartphones remember them for us. The same risk exists in cybersecurity. If analysts grow too dependent on AI, they stop learning, and that’s dangerous.”

To counter this, ST Engineering designs AI systems that promote constructive dialogue between human and machine. Analysts are encouraged to challenge AI-generated recommendations, ask ‘why’ and ‘how’, and even require AI to justify its reasoning. “That process builds trust,” Goh explained. “It ensures AI doesn’t become a black box but a transparent partner.”

Training the human chain: Developers, operators and users

If humans are to stay at the core, they must be empowered, not only technologically but educationally. Goh outlined a three-tiered human ecosystem ST Engineering is investing in:

  1. Developers, who must embed safety and guardrails into AI design from day one.
  2. System owners, such as CIOs and CISOs, who must understand AI’s limitations and governance boundaries.
  3. End users, who must recognise AI’s imperfections and serve as first responders when systems misbehave.

Each group, according to him, plays a distinct role in sustaining trust. “Developers build the integrity, operators manage the accountability, and users ensure vigilance.”

ST Engineering’s training approach doesn’t stop at knowledge transfer. It also extends to soft skills: empathy, communication and ethical reasoning. “Cybersecurity isn’t just a technical discipline anymore,” Goh highlighted. “Developers now need to understand how people think and how AI decisions affect them.”

Guarding against the AI-driven adversary

While defenders experiment with agentic AI, so do attackers. The rise of AI-powered malware, capable of morphing its behaviour autonomously, has transformed cyber warfare into an arms race of speed and sophistication.

“Attackers can now use AI to generate thousands of malware variants in minutes,” Goh said. “They’re not bound by ethics or accountability. That’s why defenders must also use AI to predict, simulate and neutralise threats faster than ever before.”

ST Engineering’s strategy is to “fight fire with fire”, deploying AI to detect adversarial behaviour in real time while ensuring that all models are themselves protected from prompt injection, data poisoning and model corruption.

“AI systems are vulnerable too,” he warned. “You must build cybersecurity for AI, not just with AI.”

Rethinking governance: Building collective trust

Technology alone cannot create trust. Governance must keep pace. Yet Goh acknowledges that regulators face an uphill battle.

“In the past, technology evolved slowly enough for policymakers to understand it fully before setting rules,” he said. “Now, AI moves faster than any government can legislate.”

His solution is co-creation of governance frameworks. Regulators, industry players and technology developers must collaborate early and continuously to develop adaptable, evolving policies. “AI governance can’t be static,” Goh insisted. “It must evolve with the technology it governs.”

He cited examples like medical institutions experimenting with “AI-free days” – designated periods where professionals work without AI to maintain human competency. “Perhaps cybersecurity operations centres could benefit from similar exercises. A day without AI keeps your instincts sharp.”

Preparing for quantum’s shadow

Beyond AI, Goh flagged the quantum computing threat as a looming frontier of concern. Attackers are already stockpiling encrypted data, hoping to decrypt it once quantum capabilities mature.

“Awareness is growing, but action is uneven,” he said. “Unless regulators mandate migration to quantum-safe encryption, most organisations will delay.”

ST Engineering is advocating for gradual, low-disruption adoption of post-quantum cryptography to help enterprises test and integrate defences without overhauling entire systems. “It’s an arms race of a different kind,” he added, “but one we must prepare for before quantum becomes mainstream.”

Expanding the cyber ecosystem

ST Engineering’s cyber division is extending its reach beyond large enterprises and government clients to include SMEs, a sector often overlooked yet increasingly targeted.

To close that gap, the company has launched a one-stop cybersecurity platform offering low-touch, cost-effective protection and rapid incident response support. “We don’t just sell them tools,” Goh emphasised. “We help them recover, investigate, and even navigate legal and compliance challenges.”

On a global scale, ST Engineering is expanding its cyber footprint across APAC and the Middle East, supported by offices in the U.S. and Europe. The company’s portfolio spans secure hardware, AI-protected cloud services, and a unique “cyber wargame platform” — an immersive environment for testing both human and AI responses under simulated attack conditions.

“AI is a living entity,” he said. “You can’t test it once and be done. Continuous testing of both AI and humans is essential.”

The future of cyber resilience: Augmented humanity

For Goh, the future of cybersecurity isn’t about choosing between human or machine. It’s about engineering the relationship between them.

“The question isn’t when AI will surpass human defenders,” he said. “It’s how fast we can evolve our partnership with it.”

In the end, ST Engineering’s cyber vision is less about building smarter machines and more about building smarter humans – ones equipped to lead, question and collaborate with AI in the defence of digital trust. As the battlefield of cyberwarfare grows ever more intelligent, in ST Engineering’s playbook, the ultimate firewall is still human.
