Ensure data integrity, set up governance committees before starting on GenAI projects, says AI expert  


A US lawyer was reprimanded and fined in June this year for submitting a legal brief that cited non-existent court cases. He had used ChatGPT to find case examples and was unaware that the tool had fabricated them.

While the lawyer was penalised, ChatGPT and the large language model behind it were not. 

This raises the question: why was ChatGPT not subject to penalties? 

Data science and AI expert Dr David Hardoon highlighted that this is a complex challenge for policymakers who want to regulate Generative AI (GenAI) and tools built on it, such as ChatGPT.

Technology has always been a double-edged sword, capable of both good and harm, said Dr Hardoon, chief executive of Aboitiz Data Innovation and chief data science and AI officer of UnionBank Philippines. 

Addressing the challenge of how to use the technology, he advised enterprises interested in deploying GenAI to first build a foundational understanding of it, its capabilities and its potential pitfalls. 

Data integrity and governance critical for GenAI performance: Dr David Hardoon, chief executive officer of Aboitiz Data Innovation and chief data science and AI officer of UnionBank Philippines.

“AI is about knowledge, you can’t regulate knowledge, so regulation has to be contextual, how it is used in a real-world scenario,” said Dr Hardoon, who is based in Singapore.

To get a better understanding of the data used to train a tool like ChatGPT, business leaders and software developers should ensure the integrity of that data. “This is the first step: has the data been collected according to best practices, who has the right to dive into the code to validate this?” he said in a recent interview with Deeptech Times. 
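As an illustration of what such a first-step check might look like in practice, the short Python sketch below verifies a dataset against an expected schema, a recorded file hash and a missing-value tolerance. The file name, column names and thresholds here are illustrative assumptions, not anything prescribed by Dr Hardoon or UnionBank.

```python
# A minimal sketch of a data-integrity check, under assumed schema and thresholds.
import hashlib
import pandas as pd

EXPECTED_COLUMNS = {"customer_id", "age", "income", "loan_approved"}  # assumed schema
MAX_MISSING_RATIO = 0.01  # assumed tolerance for missing values


def sha256_of(path: str) -> str:
    """Hash the raw file so its provenance can be recorded and re-verified."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def check_integrity(path: str, recorded_hash: str | None = None) -> list[str]:
    """Return a list of integrity issues found in the dataset."""
    issues = []
    if recorded_hash and sha256_of(path) != recorded_hash:
        issues.append("file hash does not match recorded provenance hash")

    df = pd.read_csv(path)

    missing_cols = EXPECTED_COLUMNS - set(df.columns)
    if missing_cols:
        issues.append(f"missing expected columns: {sorted(missing_cols)}")

    missing_ratio = df.isna().mean().max()
    if missing_ratio > MAX_MISSING_RATIO:
        issues.append(f"missing-value ratio {missing_ratio:.2%} exceeds tolerance")

    if df.duplicated().any():
        issues.append(f"{int(df.duplicated().sum())} duplicate rows found")

    return issues


if __name__ == "__main__":
    for problem in check_integrity("training_data.csv"):  # hypothetical file
        print("INTEGRITY ISSUE:", problem)
```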

Citing UnionBank's experience, he said he had set up a governance committee to examine data distributions for bias. “This is a hygiene step which must be done before any decision can be made about data management. With this knowledge, we can proceed to build a governance framework that suits our purpose.”
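One simple way to examine a data distribution for bias, in the spirit of the hygiene step he describes, is to compare outcome rates across groups of a sensitive attribute. The sketch below applies the common four-fifths (disparate impact) rule of thumb; the dataset and column names are assumptions for illustration only, not UnionBank's actual process.

```python
# A sketch of checking a data distribution for bias across groups.
import pandas as pd


def group_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group, e.g. loan approvals by gender."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 means parity)."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Assumed to contain 'gender' and a 0/1 'loan_approved' column.
    df = pd.read_csv("training_data.csv")
    rates = group_rates(df, "gender", "loan_approved")
    print(rates)

    ratio = disparate_impact(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # common four-fifths rule of thumb
        print("Potential bias: flag for the governance committee to review.")
```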

He further stressed the principle of justifiability at the development level, emphasising the need to question actions at every stage. Data protection officers (DPOs) also play a crucial role: developers justify their actions to the DPO, who in turn justifies them to consumers.

At a country level, governance should remain agnostic and necessitate the cooperation of industries, he added. 

Industries such as aviation and healthcare already have frameworks of knowledge and methodologies for mitigating potential harm. Policymakers can layer new AI guidelines onto these to enhance and reinforce existing regulations. 

He also referenced the guidelines on the responsible use of AI and data analytics issued by the Monetary Authority of Singapore (MAS) in 2019. The guidelines set out a framework for responsible AI based on four principles: fairness, ethics, accountability and transparency (FEAT). 

Although initially drawn up for the financial services industry, the principles can be applied universally.

He did, however, caution that the regulatory environment needs to keep up with rapid technological change to ensure that rules and innovations stay aligned. 

On AI governance, he noted that it extends beyond data privacy and protection to cover the data that forms an organisation's corporate crown jewels, critical to its survival. 

A new supervisory agency may be needed, one in which the governance regulator works with the vertical industries. “Who should be the regulator here? I believe it should be a cross-industry entity focussed on protecting consumers.”

Regarding businesses deploying AI, he believes the conversation should never start with how to use GenAI. “Begin the conversation by considering what problems need to be addressed, what opportunities can be seized and what outcomes are desired. Then the relevant technologies such as AI, GenAI and data science will be considered tools to achieve these goals.”

Ultimately, he pointed out that AI is an abstract tool.

“The premise of AI is about creating foresight, not hindsight. It offers a possibility to do something, but you have to quantify it. So enterprises should view AI as a form of knowledge, necessitating investments in the platforms and skills,” he added. 
