By Sharat Nautiyal
Make no mistake: GenAI is reshaping the cybersecurity landscape yet again.
Right now, countries across the APAC region are facing a growing volume and velocity of cyber breaches, many of which exploit the vulnerabilities introduced by increasing enterprise adoption of GenAI tools.
We are also seeing a surge in citizens finding their private medical or financial data compromised. In some cases, such as the recent Live Nation-Ticketmaster breach, hackers have then put customers' sensitive data, including that of customers in Singapore, up for sale on the dark web. IBM's Cost of a Data Breach Report found that the global average cost of a data breach in 2023 reached US$4.45 million.
That said, governments throughout APAC have also recognised the strategic importance of AI and have devised national strategies and supporting policy frameworks to help navigate potential AI risks while also helping organisations harness the technology's undeniable ability to do good.
Finding a needle in a stack of needles: the current security landscape
While it is encouraging to see governments putting these types of guard rails in place to ensure the responsible and ethical use of AI, we can’t turn a blind eye to the fact that threat actors are increasingly targeting the new attack surfaces created by GenAI adoption.
Large language models (LLMs) have access to proprietary corporate data and can not only make decisions but also act on them. When attackers gain control of GenAI tools through identity attacks, they inherit that same access.
Catching an attack before any damage is done within today's modern hybrid network is a bit like finding a single needle in a stack of needles, and threat actors know this. Hybrid attacks can start with anyone or anything and move anywhere, at any time, at speed, to disrupt business operations at scale. They can do this even when every possible preventative measure is in place.
For an organisation to be resilient in this environment, it comes down to the security operations centre's (SOC's) capability and competence in finding that needle sooner rather than later. For this, SOC teams need the 3Cs: coverage, clarity, and control.
Calling it as it is – the problem with perimeter security
Vectra’s recent State of Threat Detection report highlights the current attack landscape:
- 71 per cent of organisations think they’re already compromised but don’t know it yet
- 90 per cent are unable to keep pace with the number of alerts coming in
- SOCs receive an average of 4,400 alerts per day, 83 per cent of which are false positives
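To put those figures in context, a quick back-of-the-envelope calculation (illustrative arithmetic derived from the numbers above, not taken from the report itself) shows how little analyst attention is left for genuine threats:

```python
# Back-of-the-envelope arithmetic using the figures cited above (illustrative only)
alerts_per_day = 4400
false_positive_rate = 0.83

false_positives = alerts_per_day * false_positive_rate   # alerts that waste analyst time
real_alerts = alerts_per_day - false_positives           # alerts that may need real attention

print(round(false_positives))  # roughly 3,652 false alarms every day
print(round(real_alerts))      # roughly 748 alerts that might matter
```

Even before triage begins, more than four out of five alerts are noise.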
IBM research has also found that 67 per cent of SOC analysts say that, despite all the advances in technology, processes and people, time to detect has not improved in the last two years. Nearly half say it is worse than it was two years ago, and two-thirds of security leaders believe an incident could have been stopped if their team had more capability.
We put this down to the fact that many security decision makers are still relying on the age-old strategy of protecting the perimeter. We must be able to detect more than what is coming onto the network: you can have every inbound threat covered, but what about attackers already inside the network, moving laterally or sending malicious traffic out of it?
Humans will always make mistakes, whether they are technical developers, cybersecurity defenders, or just regular employees. We need to invest in visibility and security awareness, to improve security controls and get better at figuring out how AI and GenAI can protect us now and into the future.
The mindset of building a castle and a moat to protect against outside threats needs to change.
How AI can help with the fundamentals of visibility and awareness
One of the most important aspects of security is good visibility. No matter how sophisticated your solution, if you do not have visibility into, and situational awareness of, your network and applications, then attackers will gain the upper hand. I believe that at least 70 per cent of attacks can be stopped with good visibility.
Visibility aids in situational awareness, and this is where AI can be a masterful assistant to SOC teams.
In the era of AI-based attacks, the only way to fight AI is with AI. An AI-enhanced solution can monitor all activity happening in a network, understand what users typically do, and know what data is being sent out of the organisation. With these fundamental pillars in place, many attacks, both simple and sophisticated, can be stopped.
In this age of GenAI, we are just at level one. As the AI race between the three large providers, OpenAI, Microsoft and Google, plays out, the sophistication and capability of these systems will grow. Security teams must understand both how to leverage these tools and how to approach their security from the ground up.
When it comes to the modern hybrid enterprise, hybrid attackers are rendering traditional approaches to threat detection and response inefficient and ineffective. There is a need to eliminate siloed threat detection and response in the increasingly common hybrid attack landscape, and the answer lies in AI. AI-driven solutions can cut through the noise, bringing clarity in protecting against cyber-attacks quickly and at scale.
The control piece comes in with the SOC analysts. With attack signal intelligence at their disposal, instead of spending up to two and a half hours investigating a threat that ends up not being real, they are more likely to spend less than an hour digging into an entity that has been given a higher urgency score.
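As a rough illustration of that idea (a toy sketch only, not Vectra's actual scoring; the entity names and urgency values here are hypothetical), an urgency-score triage queue simply reorders the analyst's work so the most pressing entity is investigated first:

```python
# Toy urgency-score triage queue (illustrative; not a real product's scoring logic)
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Entity:
    # heapq is a min-heap, so we negate the urgency score to pop the
    # highest-urgency entity first; only sort_key participates in ordering.
    sort_key: float = field(init=False, repr=False)
    name: str = field(compare=False)
    urgency: float = field(compare=False)  # hypothetical 0-100 urgency score

    def __post_init__(self):
        self.sort_key = -self.urgency

def triage(entities):
    """Return entity names ordered from highest to lowest urgency."""
    heap = list(entities)
    heapq.heapify(heap)
    return [heapq.heappop(heap).name for _ in range(len(heap))]

alerts = [
    Entity("file-server-03", 22.0),     # low-level anomaly
    Entity("domain-admin-jdoe", 91.5),  # privileged account behaving oddly
    Entity("vpn-gateway", 67.0),        # suspicious external access
]
print(triage(alerts))  # highest-urgency entity comes out first
```

The point is not the data structure but the workflow change: analysts start from a ranked queue of entities rather than a flat stream of thousands of raw alerts.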
The role of AI and the need to evolve security teams
By extending the capacity of SOC experts, AI bridges the gap between talent shortages and helps to improve productivity. At the heart of the matter, AI helps SOCs maximise talent by enabling a shift from a detection mindset to a signal mindset.
AI takes the ambiguity out of the engineer's and analyst's day-to-day so that they can focus on what matters, whether that is onboarding and training staff or enhancing their own skills. Zero-day exploits are a good example of how AI can assist security and how we can set our teams up for maximum value: with AI shouldering the manual load, combined data science and security research teams can sift through vast amounts of data and quickly identify the important information. The vulnerability can then be patched or removed to stop attackers in their tracks.
AI finds the problem, we deal with it
If we as security leaders take a step back and focus on a robust system that looks at attacker behaviours and uses AI where relevant, we can build a very smart brain inside an organisation's network. This smart brain, fed with data and learning every second from what it sees in your network, can be our answer to defending against the unknown.
As we work to evolve security within our organisations and use AI to augment SOCs, we must be open to change. It may make sense, sooner rather than later, for our security teams to have a senior AI role that deeply understands how the technology can impact and benefit the business, or to encourage a chief data officer to work closely with the CISO to maximise the use of AI for security.
Sharat Nautiyal is director of security engineering for Asia Pacific and Japan at Vectra AI, a cybersecurity company specialising in AI-powered threat detection and response solutions