AI can detect cyberattacks more quickly and prevent insider hacks: Securonix

ILLUSTRATION: Unsplash

As AI makes it easier for hackers to fire off new cyberattacks, it is imperative for organisations to also take up new AI-powered tools to fend them off in a perennial arms race, says cybersecurity vendor Securonix.

The “good guys” need the right tools to process increasingly large amounts of data to find clues to a potential cyberattack, so they can respond quickly and remediate any intrusions more effectively, said Haggai Polak, chief product officer of Securonix.

“Some customers see a million transactions a second so finding indicators of a compromise (of security systems) is like finding a needle in a haystack,” he added.

“Analysts need an alert context, so they can prioritise and investigate the right alerts,” he told Deeptech Times in an interview last week.

Securonix’s security information and event management (SIEM) tools, which scour the logs of devices connected to a corporate network for hints of a cyberattack, now use AI to intelligently put together various clues to form a clearer picture.

For example, a file being changed on a laptop may not be informative on its own, but if there is, say, a login from another department, or an identity change and transmission of the file later, then a picture of an attack emerges.
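The idea of combining individually weak signals into one alert can be sketched roughly as follows. This is an illustrative toy, not Securonix's actual algorithm: the event types, weights and threshold are all assumptions chosen for the example.

```python
from datetime import datetime, timedelta

# Hypothetical signal weights: each event type is weak on its own,
# but several of them close together paint a picture of an attack.
WEIGHTS = {
    "file_modified": 1,       # common and usually benign by itself
    "cross_dept_login": 3,    # login from another department
    "identity_change": 4,
    "file_transmitted": 4,
}

def correlate(events, window=timedelta(hours=1), threshold=8):
    """Return True if events within `window` of each other cross the threshold."""
    events = sorted(events, key=lambda e: e["time"])
    for i, first in enumerate(events):
        score, seen = 0, set()
        for e in events[i:]:
            if e["time"] - first["time"] > window:
                break
            if e["type"] not in seen:   # count each signal type once per window
                seen.add(e["type"])
                score += WEIGHTS.get(e["type"], 0)
        if score >= threshold:
            return True
    return False

t0 = datetime(2024, 1, 1, 9, 0)
benign = [{"type": "file_modified", "time": t0}]
attack = benign + [
    {"type": "cross_dept_login", "time": t0 + timedelta(minutes=5)},
    {"type": "identity_change", "time": t0 + timedelta(minutes=10)},
    {"type": "file_transmitted", "time": t0 + timedelta(minutes=20)},
]
print(correlate(benign))  # False: one weak signal is not enough
print(correlate(attack))  # True: the combined events cross the threshold
```

A lone file change scores too low to alert, while the same change followed by an unusual login, an identity change and a transmission does, which is the context the article describes.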

By learning and putting the data points together, Securonix has been able to uncover potential cyberattacks targeting its clients. Up to 80 per cent of the threats uncovered by its AI-powered tool are worth investigating, it says.

By presenting the most important threats to analysts to investigate, the AI-enabled tool lets analysts zoom in and work more efficiently. It can also quickly analyse a situation and escalate a crucial alert to a more experienced analyst for faster remediation.

In other words, AI is a force multiplier. With it, organisations are able to do more to beef up their cyber defences at a time when there is a shortage in infocomm talent.

Just as important is the use of AI to detect potential insider jobs that could lead to data breaches or backdoors for a cyberattack.

Here, Securonix makes use of AI to sift through volumes of what it calls psycho-linguistics data to detect employee or contractor sentiment that could indicate an intention to commit an offence. Again, context is key here.

For example, if an employee downloads 10GB of data from an unusual domain, then searches for information on how to send information to the Dark Web, Securonix’s AI tools could flag a potential data breach that could happen soon.

From here, the organisation could either warn the employee or counsel them to avoid committing a potential offence, which could include exfiltrating trade secrets or exposing customer data.

Does this sound like the controversial pre-crime technology in the sci-fi movie and book Minority Report?

Haggai Polak, chief product officer of Securonix, at a media roundtable in Singapore
PHOTO: Deeptech Times

Securonix says the alerts can be anonymised so only authorised users such as senior executives will be able to unmask an employee’s name.

“Similar solutions have been doing this for 20 years,” explained Polak. “What’s unique with psycho linguistics is that it uses GenAI and LLMs (large language models) to understand and improve accuracy of text analysis.”

Previously, detection engines were often based on keywords, so a red-flag phrase such as “money laundering” might be missed if it was misspelt. Today, LLMs can recognise slight misspellings and detect potential fraud.
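The gap between the two approaches can be illustrated with a crude sketch. Here simple edit-distance matching stands in for the LLM-based text analysis the article describes (an assumption for illustration only); the red-flag list and cutoff are likewise made up.

```python
import difflib

RED_FLAGS = ["money laundering", "wire fraud"]

def keyword_match(text):
    """Old-style engine: exact substring match misses misspellings."""
    return any(flag in text.lower() for flag in RED_FLAGS)

def fuzzy_match(text, cutoff=0.85):
    """Tolerant matching: slide a phrase-sized window over the text and
    compare it to each red-flag phrase by similarity ratio."""
    words = text.lower().split()
    for flag in RED_FLAGS:
        n = len(flag.split())
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            if difflib.SequenceMatcher(None, window, flag).ratio() >= cutoff:
                return True
    return False

msg = "plan for the mony laundring step next week"
print(keyword_match(msg))  # False: the misspelt phrase slips past keywords
print(fuzzy_match(msg))    # True: the near-match is still flagged
```

An LLM goes further than edit distance, since it can also catch paraphrases with no characters in common, but the sketch shows why exact keyword engines miss what tolerant matching catches.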

Would this not generate issues, say, if an employee sues an organisation and asks to open the AI “black box” used to determine an intention to commit a crime?

Ultimately, sensitive decisions still have to be made by humans, said Polak. AI can provide the context and, say, a probability that an offence might be committed, but a human has to decide what to do with that information, he added.

Far from being worried about such solutions, companies have been seeking them out. Securonix counts American corporate giants AT&T and GE as its clients, so its tools are already on the frontline today to fight AI-powered cyberattacks.

“Malicious actors are using AI today and they have grown with AI’s help to generate malicious files, create spam emails and improve social engineering,” said Polak. “The only way organisations can catch up is by using AI themselves.”
