Singapore prepares personal data protection guidelines as AI raises new challenges

PHOTO: Mojahid Mottakin from Unsplash

Singapore has been at the forefront of understanding the impact of AI and trying to shape it, where possible, for positive outcomes.

Just last month, a group of technology firms, including heavyweights such as IBM, Microsoft and Google, came together with Singapore's infocomm regulator to form a foundation to build what they believe would be trustworthy AI.

This AI Verify Foundation is looking to use an open-source toolkit, called AI Verify, that is aimed at testing AI models and recording process checks in a transparent manner.

It is expected to help businesses and consumers check if an AI system is consistent with AI governance principles from the European Union, Organisation for Economic Co-operation and Development (OECD) and Singapore and share the results openly.

For now, however, the toolkit cannot yet test generative AI or large language models (LLMs), such as those behind OpenAI's ChatGPT and Google's Bard, though an effort is under way to include them.

This open-source model also isn't meant to prescribe ethical standards or guarantee that an AI system tested will be completely free of risks or biases.

“AI is fast becoming a general-purpose technology that is applied in a wide array of sectors and use cases,” said Singapore’s Minister for Communications and Information, Josephine Teo, earlier this month, when asked about AI governance.

“We cannot adopt a one-size-fits-all approach to regulate it, nor can we anticipate every risk out there,” she told Parliament.

This rather “light-touch” approach in Singapore differs from more proactive regulation elsewhere. In the EU, a proposed AI Act clearly defines the risks that AI would bring and, as a result, what is allowed and banned.

For example, using AI to scan and recognise a person’s face in real time isn’t allowed, though a delayed form of identification can be used by the police to prosecute serious crimes.

Potentially, too, companies building AI models may have to disclose when content is created by generative AI using other human creators’ original artwork.

While Singapore hasn’t clearly categorised AI risks or stated what is allowed, it does intend to issue guidelines on one area – the collection and use of personal data in AI systems for decision making, predictions or recommendations.

These guidelines will be issued later this year under the existing Personal Data Protection Act (PDPA), which governs how consumer data is used by private sector entities.

Unlike AI Verify, which is a voluntary, open-source effort involving the industry to a large extent, these data protection guidelines from the government would spell out what personal data can be used for AI in the Republic, and how.

Will existing data protection regulations be enhanced to address the new risks that AI can bring, such as the use of consumer data for training AI models?

Or will these new guidelines be based on existing ones that already govern how personal data is collected by companies today, say, through their sign-up processes?

Consider one example: If a retailer uses its customer data to train an AI to personalise its service, would the customer need to opt in or opt out for that? What if the data doesn’t result in a superior experience?

Bryan Tan, a partner at the law firm Reed Smith, noted that the data protection rules have been amended in recent years to allow companies to collect and use customer data as long as it benefits the customer.

This could enable companies to use customer data for their AI models in future if the process improves the experience for the customer, for example, he added.

One consideration for the government regulator is that Singapore is a small market that is a price taker rather than a price setter in AI governance.

Rather than introduce new AI principles afresh, the current approach is to “take the world where it is, rather than where it hopes the world to be”, wrote Josh Lee, the managing director for Asia-Pacific at the non-profit Future of Privacy Forum.

The Singapore government also sees AI as a key strategic enabler to develop its economy and improve quality of life, he noted.

How AI pans out around the world will also influence how Singapore regulates the new technology.

Speaking in Parliament earlier this month, Teo brought up the analogy of automobiles in their early years, when people were not sure where the dangers of the new technology lay.

Eventually, when the risks became clearer, safety was enhanced in the form of seat belts and airbags to protect passengers, she noted.

AI is still very much in its early development, she contended, adding that regulators here have to be plugged into what is happening around the world to eventually shape Singapore’s own regulatory measures.
