AI in security: A boon or a bane?


By Ramesh Songukrishnasamy

The security industry is locked in a never-ending game of cat-and-mouse: As security professionals develop new solutions to protect people, places and things, malicious actors adapt and counter with equally sophisticated techniques.

This ongoing challenge requires the security industry to constantly innovate and refine its approaches to stay a step ahead of evolving threats.

Now, with AI fundamentally changing how people perform their daily tasks, the ante is even higher.

For the security industry, AI brings many advantages and challenges, but its analytics capabilities are the low-hanging fruit for enhancing identity management. Identity analytics can use AI to pore over data from a wide range of sources and rapidly surface trends, patterns and anomalies that are not visible to the human eye.
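To make that concrete, the sketch below shows one way such identity analytics might look in practice: an unsupervised model (scikit-learn's IsolationForest, used here purely as an illustration) scores a handful of made-up access events and flags the outliers. The feature names and figures are illustrative assumptions, not drawn from any HID product or dataset.

```python
# Minimal sketch: flagging anomalous access events with an unsupervised model.
# Assumes scikit-learn and pandas; the features and values are illustrative.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical identity/access events aggregated from several sources
events = pd.DataFrame({
    "hour_of_day":        [9, 10, 9, 11, 3, 10, 9, 23],
    "badge_reads_per_hr": [4, 5, 3, 6, 40, 5, 4, 35],
    "failed_logins":      [0, 1, 0, 0, 7, 0, 1, 9],
})

# Fit an isolation forest on the event features and score each row;
# contamination is the assumed fraction of anomalous events.
model = IsolationForest(contamination=0.25, random_state=42)
events["anomaly"] = model.fit_predict(events)  # -1 = anomaly, 1 = normal

print(events[events["anomaly"] == -1])
```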

In fact, according to HID’s 2024 State of Security Report, which included 2,600 end users and industry partners (installers, integrators, and original equipment manufacturers) from across the globe, 35 per cent of end users reported they will be testing or implementing some AI capability in the next three to five years.

Use case bonanza
Promising use cases include embedding advanced AI machine-learning capabilities into products and offerings, or applying AI analytics to identify gaps in internal business processes, whether to optimise performance in areas such as customer service and technical support or to help detect issues before they become costly problems. According to the survey, just 22 per cent of end users say they are using AI to optimise the accuracy of threat detection and prediction in their security programmes. But of those who are, data analytics is the biggest use case, cited by 44 per cent of them.

Additionally, large language models (LLMs) such as Google’s Gemini or OpenAI’s ChatGPT offer a revolutionary approach to user experience, adding a localised, more nuanced dimension. These models personalise the user experience by quickly learning to adapt to local customs and preferences.

Imagine a global company where customers can access information directly, in their native language, without needing translation. LLMs make this possible by allowing users to express themselves naturally, and the system adapts seamlessly. This not only empowers global customers but also paves the way for advanced AI tools like chatbots and copilots.

To that end, chatbots and copilots are becoming ever more advanced. Unlike the robotic-sounding models of a decade ago, today’s chatbots can mimic human conversation with a personable touch. These AI tools help internal and external users find information faster.

For example, a customer seeking technical assistance can use a chatbot, while a support professional can leverage a copilot as a collaborator to help them complete tasks and gather relevant data and resources.
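As a rough illustration of that flow, the sketch below wires a support chatbot to a general-purpose LLM and instructs it to answer in whatever language the customer writes in. It assumes the OpenAI Python client and an API key in the environment; the model name and prompts are placeholders, not a description of any particular vendor's product.

```python
# Minimal sketch of a support chatbot that answers in the user's own language.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def support_reply(user_message: str) -> str:
    """Return a support answer in whatever language the user wrote in."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a technical support assistant. "
                        "Always reply in the same language as the user."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# A Spanish-language question gets a Spanish-language answer, with no separate translation step.
print(support_reply("Mi lector de tarjetas no enciende. ¿Qué debo revisar?"))
```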

The role of AI in developing new user interfaces has also become a promising area for many industries and applications. Instead of navigating complex menus, users can now interact with devices simply by telling them what they want. This shift simplifies device usage and provides a more intuitive user experience.

Biometrics and AI
But perhaps the most visible application of AI is biometrics. This technology uses machine learning to identify individuals through facial recognition and fingerprint analysis, and to detect spoofing attempts, all of which depend on sophisticated algorithms.

A more sophisticated biometric approach is behavioural modelling, which leverages machine learning to identify and analyse behavioural and transaction patterns, enabling proactive detection of anomalies and potential threats. Machine learning plays a crucial role here, allowing the security system to learn and adapt to individual baselines.
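A toy version of that baselining idea is sketched below: a user's historical transaction amounts define a simple statistical norm, and anything that strays too far from it is flagged for review. The data and the three-sigma threshold are illustrative assumptions, not a production recipe.

```python
# Minimal sketch of per-user behavioural baselining: learn a simple statistical
# baseline from a user's past transaction amounts, then flag departures from it.
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Return (mean, standard deviation) of a user's historical amounts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(amount: float, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag a transaction more than k standard deviations from the user's norm."""
    mean, std = baseline
    return abs(amount - mean) > k * std

# A user who normally spends 20-60 suddenly makes a 900 transaction.
history = [25.0, 40.0, 32.0, 55.0, 28.0, 47.0, 38.0, 60.0]
baseline = build_baseline(history)
print(is_anomalous(45.0, baseline))   # False: within the learned baseline
print(is_anomalous(900.0, baseline))  # True: proactive flag for review
```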

Furthermore, embedding AI and machine learning directly on edge devices (devices located closer to where the data is collected, such as a security camera with facial recognition capabilities at airport gates) facilitates real-time anomaly detection and a more efficient response to threats. This shift from reactive to proactive security represents a significant advancement in the field.
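The sketch below illustrates that edge pattern in the simplest possible terms: score each frame locally on the device and transmit only the anomalies. The frame source, scoring function and threshold are stand-ins assumed purely for illustration.

```python
# Minimal sketch of the edge pattern described above: score events on the device
# and send only the anomalies, rather than streaming everything to a server.
import random
import time

ANOMALY_THRESHOLD = 0.9  # assumed cut-off for "send an alert"

def capture_frame() -> dict:
    """Stand-in for a camera frame plus metadata captured at a gate."""
    return {"gate": "B12", "timestamp": time.time(), "pixels": None}

def score_frame(frame: dict) -> float:
    """Stand-in for an on-device model; returns an anomaly score in [0, 1]."""
    return random.random()

def send_alert(frame: dict, score: float) -> None:
    """Only anomalous events leave the device, keeping bandwidth and latency low."""
    print(f"ALERT gate={frame['gate']} score={score:.2f}")

for _ in range(20):  # in practice this loop runs continuously on the device
    frame = capture_frame()
    score = score_frame(frame)
    if score > ANOMALY_THRESHOLD:
        send_alert(frame, score)
```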

Many in the security arena are already heading this way, according to the survey: in addition to analytics, 11 per cent said they are using AI-enabled RFID devices, 15 per cent are using AI-enabled biometrics, and 18 per cent have AI supporting their physical security solutions.

People have come to appreciate the convenience of authentication that can happen literally in the blink of an eye. And as the HID report indicates, the next few years will see more biometric solutions integrated into our daily lives, with half of the respondents flagging biometrics as an area of top interest. Additionally, 39 per cent of installers and integrators said some of their customers are using fingerprint or palm print, and 30 per cent said some are using facial recognition.

Challenges and threats 
But because any technology is available to all players, including bad actors, there are also several challenges to face. And because this is an area where the technology is changing rapidly, it is important to monitor it constantly and respond when new threat models arise.

AI relies on vast amounts of data, and that data presents a significant challenge: bias. When data carries biases, conclusions and outcomes in security systems can be skewed, opening the door for malicious actors to exploit those biases and bypass security measures. In that sense, AI outputs should be used as a guide, not a definitive result.

Beyond addressing data bias, robust and ethical data governance practices are essential.

How do we ensure that data is not misused? How do we collect it? How do we maintain it? How and when do we discard it? And when we find anomalies, how do we go back and adjust the model so that it continues to evolve accurately?

This means being meticulous about data sourcing, ensuring clear and defined purposes for data collection, and maintaining transparency with data subjects. 

Further complicating the landscape is the evolving regulatory environment. Regulations addressing data privacy and AI use are still emerging worldwide, but they vary significantly by region. This creates a complex situation for security professionals who must navigate these regulations in different markets. Not being able to predict future regulations adds another layer of complexity. 

As it works to secure people, places and things, the security industry is ripe for the AI revolution. This does not mean AI will eliminate people and jobs; rather, it will act as a powerful tool that helps people be more productive, reduce errors and identify risks before they occur.

While AI will undoubtedly permeate the security industry, its implementation will likely be strategic. For tasks requiring human judgment, interaction, or a higher ROI, traditional security practices may remain in place. For the other tasks and use cases discussed in this article, AI will drive efficiencies and open a new world of possibilities and challenges. It’s the game of cat and mouse on steroids.

Ramesh Songukrishnasamy is senior vice president and chief technology officer at HID
