Generative AI in Singapore – understand the key concerns before you use it

Copyright issues pose a major concern in the adoption of generative AI. Source: Dall-E

By Bryan Tan and Goh Eng Han

Generative artificial intelligence (AI) refers to AI algorithms that can generate new content. The latest models, such as ChatGPT and GPT-4, can produce new content in response to a wide range of prompts, accomplishing tasks from passing bar examinations to writing application code. Organisations in Singapore, as in any other country, face challenges and concerns in adopting generative AI. These include bias, explainability, data privacy, intellectual property (IP) protection, cost, ethics and regulation.


Bias

Generative AI algorithms are built on, and limited by, the data they are trained on. Existing biases in that training data can be perpetuated and amplified by generative AI. Singapore is a small but diverse society with multiple ethnicities and cultures; training data that accurately represents Singaporeans is scarce, so generative AI may not represent all groups fairly. Users should be alert to bias in generative AI outputs on Singapore content and adopt measures like fact-checking and hypothesis testing, while developers should incorporate techniques like reinforcement learning from human feedback (RLHF) to reduce bias.
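The "hypothesis testing" measure above can be sketched as a simple paired test: feed the model prompts that differ only in one attribute (here, the occupation named) and compare the outputs for skew. This is a minimal illustration, not a rigorous bias audit; the `generate` function below is a hypothetical stand-in with canned responses, not a real model API.

```python
# Minimal sketch of hypothesis testing for bias in generative AI output.
# `generate` is a hypothetical placeholder for a real model endpoint.

def generate(prompt: str) -> str:
    # Canned responses for illustration only.
    canned = {
        "Describe a typical engineer in Singapore.": "He writes code daily.",
        "Describe a typical nurse in Singapore.": "She cares for patients.",
    }
    return canned.get(prompt, "")

def gendered_terms(text: str) -> set[str]:
    """Return any gendered pronouns found in a model output."""
    terms = {"he", "she", "his", "her", "him"}
    return {w.strip(".,").lower() for w in text.split()} & terms

# Paired prompts that differ only in the occupation named.
prompts = [
    "Describe a typical engineer in Singapore.",
    "Describe a typical nurse in Singapore.",
]
flagged = {p: gendered_terms(generate(p)) for p in prompts}
for prompt, terms in flagged.items():
    if terms:
        print(f"Possible gender skew in {prompt!r}: {sorted(terms)}")
```

A real test would run many paired prompts per attribute and compare the distributions statistically rather than flagging single outputs.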


Explainability

Generative AI should be explainable to, and understood by, its users. Explainability helps users understand how an output was derived and helps programmers troubleshoot their algorithms. It also has wider benefits: users can check for possible bias if they are given details of how outputs were generated, and firms developing or deploying generative AI will be better placed to comply with regulatory disclosures if they can articulate clearly how their generative AI works, and what it does or does not do.

Data privacy

Generative AI may breach Singapore’s data protection laws if not implemented properly. Using sensitive data, such as personal information, to train generative AI models could be seen as a breach of privacy. Unless they qualify for the business improvement exemption under the Personal Data Protection Act, companies must obtain consent from individuals before using their personal data in generative AI.

Remember when personal electronic devices like mobile phones became pervasive, employees began insisting on using their own devices for work, and the BYOD (bring your own device) phenomenon was born? Generative AI promises many helpful tools, and well-meaning employees may start deploying them on their own – for instance, transcribing all phone and online meetings into text for record-keeping. However, this could well breach data protection and data transfer restrictions, as the data could be processed on offshore servers and by services that then claim rights to it.

In another example, engineers fed semiconductor information to an external free-to-use AI generator in the hope of fixing coding issues, resulting in a loss of control over valuable proprietary information.

Companies should establish clear policies to categorise their data and ensure that generative AI is not used for unintended processing of personal data or confidential trade secrets. They should also read the fine print of the AI services they use carefully.
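One such policy control can be sketched as a pre-submission filter that redacts obvious personal identifiers before a prompt leaves the company for an external AI service. The patterns below are illustrative assumptions, not a complete PDPA safeguard, and the example strings are invented.

```python
import re

# Minimal sketch of a pre-submission filter: redact obvious personal
# identifiers before a prompt is sent to an external generative AI service.
PATTERNS = {
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),          # Singapore NRIC/FIN format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # simple email pattern
    "PHONE": re.compile(r"\b[89]\d{7}\b"),                # local mobile number
}

def redact(prompt: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

clean = redact("Transcribe: S1234567A called from 91234567, cc jane@corp.sg")
print(clean)
```

In practice such a filter would sit alongside data categorisation rules (e.g. blocking any prompt tagged as containing trade secrets), since regex matching alone cannot catch every identifier.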

IP protection

Generative AI has raised questions about how IP should be protected and used. Singapore’s Copyright Act 2021 introduced a new exception that allows tools like generative AI to make copies of copyrighted works, subject to requirements such as using the copies solely to conduct analysis on the works and having lawful access to the works copied. To avoid dangers like copyright “stockpiling”, it is likely that works produced solely by generative AI (as opposed to generative AI assisting human creators) will not receive copyright protection as case law develops in this area.

Ethical concerns

The use of generative AI may raise ethical concerns around issues such as the creation of deepfakes and who bears the responsibility for harmful outputs. This cannot be ignored as the safeguards in some generative AI algorithms like ChatGPT are rudimentary. Loopholes are often discovered faster than they can be patched.


Regulation

Regulation could help mitigate some of the challenges surrounding generative AI, but it is not easy to develop a robust framework while the field is still developing rapidly and understanding is limited. Transparency is an important first step, and Singapore is promoting it through two initiatives:

  1. A.I. Verify, a toolkit for companies to show their AI systems are “fair, explainable and safe”
  2. The Model AI Governance Framework.

Further afield, jurisdictions like China, the European Union and the United States are drafting regulations and conducting consultations. Regulatory approaches include risk-based regulation, where requirements scale with the severity of the risk (as in the EU), and sector-specific regulation (such as copyright legislation). These approaches are expected to coexist given the broad scope of generative AI.


While adopting generative AI in Singapore carries potential problems, it also offers many potential benefits. Users should embrace generative AI to drive innovation while mitigating the risks, and developers should use RLHF and other techniques to improve their algorithms. It is also important to watch for regulatory developments and industry best practices to guide the deployment of generative AI.

Bryan Tan is Partner at Reed Smith and is a Senior Accredited Specialist in data & digital law. Goh Eng Han is an associate at Resource Law. 
