As one of Asia’s leading cybersecurity providers, ST Engineering’s cyber business area has solidified its reputation as a trusted partner in safeguarding digital ecosystems. Headquartered in Singapore, the company delivers a wide array of cybersecurity solutions tailored to the unique demands of government agencies, critical infrastructure operators, and commercial enterprises across the region.
To uncover how this innovative powerhouse is staying ahead of emerging threats and industry trends, we spoke with Benjamin Goh, senior vice president, deputy head, and CTO for deep cybersecurity capabilities at ST Engineering, on how the firm is leveraging AI to fortify critical systems against sophisticated cyberattacks while ensuring regulatory compliance and operational resilience.
From combating AI-driven threats to pioneering industry-leading practices, ST Engineering is not just addressing today’s challenges but setting the benchmark for the future of cybersecurity in Asia.
How do you see AI transforming the cybersecurity landscape, especially in response to adversaries that are also leveraging AI for attacks?
When ChatGPT first emerged, we saw the capabilities of large language models (LLMs) on full public display. From that point, it became evident that AI had reached a level of maturity suitable for widespread use.
Initially, we believed this advancement would benefit attackers more than defenders, as AI-enabled tools began facilitating the creation of deepfakes, disinformation campaigns, and highly convincing phishing emails. Cybercriminals were quick to harness the capabilities of AI and LLMs, leveraging them almost immediately and with minimal barriers to entry.
We knew we had to act immediately. However, LLMs were not specifically designed for cybersecurity applications. This led us to invest in integrating AI into our cybersecurity solutions so that our customers, who are cybersecurity professionals themselves, could use it to enhance their efficiency and effectiveness. We refer to this approach as “AI for cybersecurity.”
However, like any tool, AI can be a double-edged sword. While it offers significant benefits for performing tasks, adversaries can also exploit and target it. This is where “cybersecurity for AI” becomes critical, ensuring the protection and integrity of AI systems themselves.
When did ST Engineering start offering AI-powered cybersecurity solutions?
We started offering AI-powered cybersecurity innovations in 2023, led by our two signature offerings: Adaptive and Intelligent Cyber Monitoring of OT Systems (AICYMO) and Cyber Co-Pilot.
AICYMO leverages AI and machine learning to create intelligent detectors for operational technology (OT) systems.
By integrating data from multiple sources within OT systems, AICYMO uses advanced AI capabilities to monitor and detect activities across both plant-level operations and cyber components. This comprehensive approach provides visibility into both the cyber and physical layers of a system, bridging a crucial gap: differentiating between cybersecurity incidents and physical faults.
Our Cyber Co-Pilot solution gathers intelligence from diverse sources to provide context and insights comparable to those of tier 3 analysts. It also offers operational recommendations that enable tier 1 analysts to perform advanced tasks, such as threat hunting and investigation, effectively bridging the gap between entry-level and advanced expertise.
Think of it as an LLM designed specifically for security operations centre (SOC) analysts. Their work is often exhausting due to the high volume of repetitive tasks and the risk of burnout. Our goal was to assist these analysts in performing their jobs more effectively by helping them find answers to queries and even generating Python or Splunk scripts to aid in event analysis and incident management.
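To illustrate the kind of helper script such a co-pilot might generate for an analyst, here is a minimal Python sketch that flags source IPs with repeated failed logins. The log format, field names, and threshold are hypothetical, chosen purely for illustration; they do not reflect Cyber Co-Pilot's actual output.

```python
import re
from collections import Counter

# Hypothetical auth log lines, for illustration only
LOG_LINES = [
    "2024-05-01T09:12:01 sshd: Failed password for admin from 203.0.113.7",
    "2024-05-01T09:12:03 sshd: Failed password for admin from 203.0.113.7",
    "2024-05-01T09:12:05 sshd: Failed password for root from 203.0.113.7",
    "2024-05-01T09:13:10 sshd: Accepted password for alice from 198.51.100.4",
    "2024-05-01T09:14:00 sshd: Failed password for bob from 198.51.100.4",
]

# Match the source IP of a failed password attempt
FAILED_RE = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(lines, threshold=3):
    """Return source IPs with at least `threshold` failed logins."""
    counts = Counter(
        m.group(1) for line in lines if (m := FAILED_RE.search(line))
    )
    return [ip for ip, n in counts.items() if n >= threshold]

print(flag_brute_force(LOG_LINES))  # ['203.0.113.7']
```

A co-pilot's value in this scenario is less the code itself than removing the friction of writing such one-off triage scripts repeatedly under time pressure.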
How can enterprises effectively strike a balance between deploying AI solutions and maintaining human oversight to ensure both operational efficiency and ethical decision making in cybersecurity?
I believe there’s a socially accepted notion that AI will always serve as an assistant or “co-pilot” to humans. We often say that AI can assist in various tasks, but ultimately, humans must make the final decision. I believe this is deeply influenced by societal norms. For instance, if a self-driving car were to hit someone, it would make headlines. Yet, when human drivers cause accidents—something that happens daily—it rarely becomes sensational news. Society tends to accept human error, but we are far less forgiving when AI makes mistakes.
This mindset makes it challenging for companies to market AI solutions where AI is solely responsible for decision making. After all, no AI can be right 100 per cent of the time. While AI is expected to make decisions autonomously in the future, there is still a lingering fear that AI could become uncontrollable, akin to dystopian depictions like Skynet in the Terminator films. However, AI is developed by humans, and as long as societal norms dictate that humans must be in control, even the most advanced AI systems may not be fully adopted.
Ultimately, the adoption of AI depends not just on its capabilities but also on what society is willing to accept. We may not yet be ready for AI to completely replace human decision making. Instead, we often prefer a system where AI provides recommendations, but a human still has to give the final approval—for instance, when AI suggests a stock purchase or recommends a strategic target, a human is expected to confirm the action.
How are small and medium enterprises (SMEs) coping with the increasing complexity of modern cyber threats, such as deepfakes and other AI-driven attacks, given their typically limited resources and expertise in cybersecurity?
The digital age has created a significant disparity in cybersecurity capabilities between large enterprises and smaller businesses—a divide often referred to as the cyber resiliency gap.
Smaller companies typically lack the financial and technical resources required to establish robust, self-sustained cybersecurity systems. The widespread shortage of skilled cybersecurity professionals compounds this issue, making it even more challenging for these businesses to secure their digital assets effectively.
In contrast, larger organisations with greater budgets and access to talent can invest in cutting-edge cybersecurity solutions, giving them a significant advantage in building a more resilient cyber defence framework.
This disparity leaves many SMEs vulnerable to cyber threats, as they often cannot afford or justify the investment required for enterprise-grade cybersecurity infrastructure.
To bridge this gap, managed services have emerged as a viable solution. Managed security service providers (MSSPs) offer outsourced cybersecurity expertise and infrastructure to organisations that cannot build such capabilities in-house. For instance, ST Engineering provides managed services designed to cater to businesses of all sizes.
We understand that not every company can afford to build and maintain its own cybersecurity systems. For those cases, it often makes more sense to subscribe to a managed service than to invest heavily in in-house solutions. Our services are available to both SMEs and large enterprises alike.
In a way, we don’t specifically target SMEs or large enterprises. Instead, we provide a standardised service with transparent pricing. Companies that find value in our offerings are welcome to subscribe, regardless of their size.
This approach ensures that smaller businesses, which may not have the resources to hire dedicated cybersecurity professionals or deploy full-scale SOCs, can still access advanced threat detection and incident response capabilities. For SMEs, we offer managed SOC services to help them monitor and respond to cyber incidents, ensuring they are not left vulnerable due to resource constraints.
As cyber threats continue to evolve, ensuring that businesses of all sizes have access to effective protection remains a priority. By fostering inclusivity through managed services, the industry is making strides toward narrowing the cyber resiliency gap and empowering all organisations to defend against ever-growing cyber risks.
For CIOs or CISOs who are planning to integrate AI into their cybersecurity infrastructure, what topline tips or advice would you give them, especially those who are sceptical about AI?
Before introducing any new tool into your environment, you should first ask yourself whether you truly need it. If you do, then consider the purpose: What specific problem or need is this tool addressing? Avoid chasing trends without a clear justification.
Once the CIO confirms that there is a solid business case for integrating AI, the CISO should step in to evaluate the security implications.
As a cybersecurity expert, the CISO must identify the potential threats associated with AI adoption and their mitigations. The risks related to AI are well documented. Basic risks include model poisoning and data bias.
More nuanced concerns involve privacy, ethics, and the challenge of explainability and transparency—especially when AI is used for critical decision making.
To address these challenges, it’s crucial to develop strategies for enhancing explainability and managing risks related to model and data security.
CISOs need to assess how AI will be implemented within their environment and determine how to effectively manage these risks. While solutions for these issues are available, it is essential that CISOs carefully evaluate the need for AI before integrating it into their systems. Understand its purpose and implications thoroughly before making a decision.