Data is the new battleground: Rebuilding security and trust for the AI and quantum future

Todd Moore, global vice president for data security products, Thales
Image generated by Deeptech Times using Google Gemini

As AI systems become deeply embedded in critical industries, the conversation around data security and trust has never been more pertinent. From GenAI to quantum computing, the technologies shaping tomorrow’s industries are transforming how we protect data today.

We sat down with Todd Moore, global vice president for data security products at Thales, on the sidelines of GovWare 2025, to gather his insights on how a data-centric approach can help organisations balance compliance with innovation, and how embedding trust and accountability into AI ecosystems builds resilience against tomorrow’s quantum threats and digital sovereignty challenges.

We’ve heard a lot about Industry 4.0. Tell us more about your definition of Industry 5.0. What is different?

Industry 4.0 was all about machines communicating with other machines. From a data perspective, this created a lot of threats and vulnerabilities. Industry 5.0 brings the human element back into the loop, making resilience, ethical oversight and safety core elements.

When we think about data flowing from a lot of different places, whether it’s from people or machines, you still require a human in the loop to validate and verify the data, and to ensure that the right decisions are made. That’s how we’re defining Industry 5.0. With Industry 5.0, cybersecurity is the foundation of the machine as compared to 4.0 where it was more of an add-on.  

With the advent of AI and quantum computing, what impact do these emerging technologies have on this industrial revolution?

From a data security perspective, GenAI came onto the scene a few years ago. Along with it came an explosion of unstructured data. Across almost any industry metric, 80 per cent of data is unstructured and 20 per cent is structured.

By structured data I mean databases; unstructured data covers things like files, emails and chat logs. GenAI introduces a lot of unstructured data through its summarisations and responses to prompts.

When we think about agentic AI, one analogy I like to use is my teenage son who loves football. When he goes to football practice and comes home, he would leave a trail of clothes, books and homework from the front door to his room. 

Agentic AI does the same thing. When such AI systems work on our behalf, they carry out a series of interactions to answer the questions we’ve asked. In doing so, they drop unstructured data and pockets of intermediate results that are not being protected at all.

With this explosion of data, a threat surface is created. Critical data that needs to be protected pops up in places like SaaS applications, public cloud and file shares. This is a huge disruption to our industry. With AI coming into the scene, CISOs are having a hard time controlling all this unstructured data.

Quantum computing adds another layer of disruption. When you bring both AI and quantum computing together, you’ve really created a monster because now you have AI generating more data. While such technologies allow us to accelerate things and do more, it places our current cryptography and security mechanisms at greater risk. 

You mentioned the threats quantum computing poses to traditional cryptographic protections. What new tools or architectural changes are required to achieve “full visibility” in this era?

Standardisation bodies around the world, like the National Institute of Standards and Technology (NIST) in the U.S., have created post-quantum resistant algorithms. With that, I think crypto agility is important – the ability to implement these post-quantum resistant algorithms and to adopt new ones as they become available. Digital signature standards also need to evolve to keep up with the post-quantum world.

From a visibility perspective, there is the concept of crypto discovery. Regulated industries such as healthcare, government and financial services are already leading the way, looking into their infrastructure and getting visibility into which standard protocols and data are going to be impacted.

This often starts with inventory and risk prioritisation: identifying which data is beyond retention. Some are also re-encrypting data using post-quantum resistant algorithms. I’d say about 80 per cent of the regulated industries we’ve been dealing with have already started this visibility work. Non-regulated industries, such as gaming, retail and manufacturing, have not taken an interest in post-quantum just yet. If a quantum computer arrived tomorrow, these industries could face significant risk.

What hybrid or interim strategies would you recommend organisations adopt to balance protection and practicality in the transition to quantum-resilient security?

From a hybrid perspective, put a framework in place today that is quantum safe. Start by taking control of new data as it is generated, deploying quantum-safe practices immediately. As you go forward, work on recharacterising existing data. Start with the new data before going back to the old.
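The interim strategy above – quantum-safe handling for all new data from day one, with legacy data upgraded opportunistically – can be sketched as a migrate-on-read pattern. The sketch below is a hypothetical illustration, not Thales's implementation; the scheme labels and class names are placeholders I have introduced.

```python
class MigratingStore:
    """Illustrative store: new writes are quantum-safe immediately,
    legacy records are recharacterised lazily the next time they are read."""

    QUANTUM_SAFE = "pq-hybrid-v1"  # placeholder label for the new scheme
    LEGACY = "classical-v1"        # placeholder label for the old scheme

    def __init__(self):
        self._records = {}  # key -> (scheme, payload)

    def put(self, key, payload):
        # All new data takes the quantum-safe path from day one.
        self._records[key] = (self.QUANTUM_SAFE, payload)

    def get(self, key):
        scheme, payload = self._records[key]
        if scheme == self.LEGACY:
            # Recharacterise old data opportunistically on access; a real
            # system would decrypt under the old scheme and re-encrypt
            # under the new one here.
            self._records[key] = (self.QUANTUM_SAFE, payload)
        return payload

    def scheme_of(self, key):
        return self._records[key][0]
```

The design choice this illustrates is that migration cost is spread across normal reads instead of requiring a risky big-bang re-encryption of the entire estate.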

Tell us more about the threats posed by quantum computing. How will this shift the priorities of a data-centric security model, especially in terms of key management, encryption at rest and in transit, and data access controls?

Quantum computers are going to bring a lot of good to the world. They can help us with things like weather forecasting and pharmaceuticals. But there’s also a malicious side, where they have the power to attack asymmetric cryptography. Every time we buy something online or access our bank applications, those internet transactions can be intercepted, and data can be easily decrypted and stolen in a post-quantum world.

Think about how banks communicate – every transfer from one bank to another is protected by encryption keys. If an attacker were to gain access to those keys, it’s like getting the keys to the kingdom. Transactions could be redirected and information could be stolen, causing massive disruption. The data being encrypted is one thing; the real risk is getting access to the keys. Once you have the key, you can decrypt anything, whether that data is at rest or in motion.
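The keys-to-the-kingdom point above is often illustrated with envelope encryption: each record is encrypted under its own data key, and the data keys are wrapped under a master key, so whoever holds the master key can unwrap every data key and decrypt everything. The sketch below is purely illustrative – the XOR keystream is a toy stand-in for a real cipher and key-wrap scheme, not secure cryptography.

```python
import hashlib
import hmac
import secrets


def _xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy deterministic keystream XOR. A stand-in only; NOT secure."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hmac.new(key, counter.to_bytes(8, "big"),
                               hashlib.sha256).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


def encrypt_record(master_key: bytes, plaintext: bytes):
    data_key = secrets.token_bytes(32)            # fresh key per record
    ciphertext = _xor_stream(data_key, plaintext)
    # The "envelope": the data key itself is wrapped under the master key,
    # so the master key is the single point whose compromise exposes all.
    wrapped_key = _xor_stream(master_key, data_key)
    return wrapped_key, ciphertext


def decrypt_record(master_key: bytes, wrapped_key: bytes,
                   ciphertext: bytes) -> bytes:
    data_key = _xor_stream(master_key, wrapped_key)  # unwrap first
    return _xor_stream(data_key, ciphertext)
```

This is why key management, not the bulk ciphertext, shifts to the centre of a data-centric model: without the master key the wrapped data keys are useless, and with it everything opens.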

From your vantage point, how should different parties like standards bodies, industry groups and regulators work together to foster interoperability, shared visibility and collective defence?

I think that’s a huge blind spot for the world right now – working together. Everyone is developing their own standards and regulations. When it comes to AI systems, without common regulations and compliance, the ability to govern across different systems globally will never be successful. Regulation plays a key part in ensuring that we hold each other accountable.

Using the U.S. as an example, we’ve got 50 states coming up with their own privacy and AI laws without any federal oversight. There is constant infighting over what is needed for innovation.

Without a common standard, we’re going to see AI and quantum play out differently across the globe. Going forward, we need some convergence in how we manage these systems. That will be core to our success.
