GenAI is here: Are you a showstopper or enabler?

ChatGPT and similar GenAI apps and tools are “leaky”: they have little or no data privacy and security protection.

ChatGPT and other generative AI chatbots and tools will continue to accelerate beyond 2023. They can optimise workflows and processes, uncover new strategies, augment datasets, identify new products and services, and perform many more tasks that have yet to be discovered. 

But it is not all sunshine and roses, as South Korean conglomerate Samsung learnt. On three occasions in April this year, employees accidentally leaked top-secret corporate data by sharing confidential information while using ChatGPT at work. In one case, an employee shared source code in a bid to find a solution to a bug. 

ChatGPT is a machine learning platform. Information shared with the chatbot is stored on the servers of OpenAI – the developer of ChatGPT – and can be used to improve the model unless users opt out. 

Following this “conversational AI leak”, Samsung cracked down on ChatGPT, restricting the use of the chatbot and similar tools on company-issued devices as well as on personal devices running on its internal networks. 

Samsung’s case demonstrates the promise and peril of GenAI. Touted as a boon to productivity and creativity, it also raises myriad security, privacy, legal and ethical concerns.

Discussions swirling around its promise and peril are unprecedented because the technology is still in its nascent stage and comes at a crucial time when organisations are pursuing digital transformation. The grim reality is that they cannot wait for the technology to mature, for waiting could cost them their competitiveness. Organisations can choose to be a showstopper or an enabler of GenAI. 

So how can organisations harness the technology to enable growth?

GenAI is all over the mass media. It is dominating business discussions and even family dinners, and challenging technology regulators. This talking up of AI is pressuring organisations to adopt it, when the focus should be on selecting the right technology to digitally transform their operations. 

Emeritus Professor Steven Miller of SMU made this point succinctly at a recent data privacy workshop. Do not be fixated on the word AI, he advised. Organisations should select the technology best suited to solving their problems. Whether it is GenAI or not, the same considerations for privacy and security must apply.  

Since data is the new oil that lubricates operations, the first step must be to ensure that an organisation's crown-jewel data is safeguarded against leaks and theft. This is necessary to protect organisations from reputational harm and cyber attacks, said seasoned tech practitioner turned academic Prof Alex Siow of the School of Computing, National University of Singapore (NUS). 

There are other steps organisations can take for a proactive stance on GenAI: 

1. Embedding fairness, ethics, accountability and transparency in GenAI: To develop safe AI models and applications, these four factors must be baked in to safeguard all stakeholders. Developers and in-house tech teams building apps must keep them in mind throughout. 

2. Developing guardrails: This is a must-have to guide employees on the appropriate and responsible use of GenAI, so that they know where, when and how they can use the technology. The guardrails should identify the parts of the workflow where GenAI can be deployed and highlight the types of prompts employees can use without exposing sensitive corporate or personal details.

3. Setting up a human in the loop: It is important for a person to be part of the GenAI workflow to ensure that the right prompts are used and that the responses are correct, fair and accurate. 

4. Checking the data privacy and security policies and terms of use: This gives a good understanding of the type of data the software will share with third parties and whether this sharing is fit for purpose. 

5. Running a pilot: Embarking on a small pilot will surface issues and challenges early. Users can then tweak the project to optimise results. 

At data governance consultancy Straits Interactive, CEO Kevin Shepherdson tested the efficacy of ChatGPT on his company’s work-related tasks. After a month, he found that ChatGPT saved 26 hours of staff time on work like generating marketing and sales content, assisting with human resource tasks, and drafting and reviewing legal documents and policies. From this experience, he knows where best to use the technology. 

Looking ahead 

Data privacy and security are only two key concerns surrounding GenAI. There are others, such as fabricated information, new malware, impersonation and unethical use of the technology – themes I will discuss in another article. 

The chatbots and tools will get better with new versions. Released less than a year ago, GenAI already has over 100 million users. From the hundreds of millions of prompts generated each day, the chatbots are learning the relationships between words and phrases so that they can generate even better responses.

Engineers and computer scientists at Google, Microsoft, OpenAI and other big tech companies, as well as scientists and researchers from universities and research agencies, are working hard to overcome the perils of GenAI and deliver innovations that could address some of the weaknesses in current systems.

GenAI will see use in more economic sectors. Governments are paving the way with increased R&D spending to produce new innovations that organisations can harness. Expect, too, greater public-private sector collaboration to maximise productivity and enhance innovation in areas ranging from medicine and healthcare to defence and manufacturing. 

Then there is the question of bias in the current large language models (LLMs) that power the chatbots. AI expert Dr Tan Geok Leng, CEO and co-founder of Singapore-based Aida Technologies, pointed out that the large language models for GenAI are trained on data from Western countries such as the US and those in the European Union. 

Asian information is not well represented in current GenAI training data. It is good news that AI Singapore is involved in developing a Southeast Asian LLM so that the region is fairly and accurately represented in GenAI models. 

Regulations are coming. In June, Europe took the lead, setting the world’s first rules on how companies can use artificial intelligence, including GenAI. In July, China passed the GenAI Measures, which cover a wide range of issues including data protection, non-discrimination, bias and the quality of training data. Singapore is not looking to regulate AI but is strict on data privacy and security. 

These issues may seem overwhelming for users and organisations to consider before using GenAI. But they should not be a deterrent. It is better to grasp the GenAI bull by its horns and be an enabler than a showstopper. 
