A third-party platform can curb excesses of generative AI

To get to responsible AI, foundation models need to be verified and validated against their stated outcomes.

Generative AI is the current “hotness” in the tech industry. Many people are having great fun with generative AI chatbots, from conversing with a virtual assistant and generating poetry to drawing up holiday itineraries and planning birthday parties.

The interesting and fun bits, however, have been overshadowed by reports of incorrect information provided by the chatbots. Last month, a New York lawyer cited fake cases generated by ChatGPT in a legal brief filed in federal court and may face sanctions as a result.

As more chatbots are used, more incorrect information and misinformation will emerge. It is better to rein this in early rather than later.

Making AI safer is complex. One way is to license generative AI companies. Licensing would give governments oversight of what AI tools are built and how they are deployed, which in turn would guide the development of responsible AI.

The greater challenge, however, is that some companies may simply ignore the licensing requirements.

Another way is to monitor the data centres where generative AI foundation models reside. Since the models require huge amounts of compute, data centres could be mandated to report when a generative AI provider or user consumes more than a certain level of compute resources. But hardware improvements could reduce the computing resources needed and undercut this effort, and monitoring would require international cooperation to be effective.

With the rapid advancement of generative AI, the enforcement of licences and the monitoring of data centres would quickly become onerous and unfeasible.

A further approach to responsible AI is to evaluate the foundation models themselves. This is a demanding task because of the fast-growing number and variety of foundation models used to train generative AI chatbots.

ChatGPT, for instance, is trained on a foundation model with about 175 billion parameters. Other models are trained at a fraction of this scale because they are meant for niche applications; some may contain only company-specific information, while others may be industry-based.

Additionally, different chatbot providers operationalise the technology in their own ways. There will be chatbots with different ways of holding conversations and creating images, all of which carry considerable potential for bias.

Testing the underlying code for responsible AI

For evaluation to work, there need to be testing methods within an agreed framework that is acceptable to both developers and users.

A neutral third party can serve this purpose. It would provide a platform for collaboration and idea sharing, and a place where standards, frameworks and best practices can be developed to ensure that the new technology is used responsibly and in a trusted manner.

The setting up of AI Verify and the AI Verify Foundation by the Infocomm Media Development Authority (IMDA) fits this purpose.

AI Verify is an AI governance testing framework and software toolkit unveiled last year to help companies demonstrate responsible AI in an objective and verifiable manner. It is a validation system that enables developers to test their applications against expected impact, reveal potential biases, and check for accuracy, fairness and security. More than 50 local and multinational companies, including UBS, Hitachi, Singapore Airlines and IBM, have expressed interest in working with AI Verify, which has open-sourced its tools to drive adoption.
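To give a concrete sense of what this kind of validation involves, the sketch below shows two of the simplest checks a tester might run on a model's binary decisions: overall accuracy and a demographic parity gap between two groups. This is a minimal, hypothetical illustration only; it does not use the AI Verify toolkit's actual API, and the metric choices, function names and toy data are assumptions made for the example.

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    A gap near 0 suggests the model treats both groups similarly on this
    narrow criterion; a large gap flags a potential bias to investigate.
    """
    rate_a = np.mean(y_pred[group == 0])
    rate_b = np.mean(y_pred[group == 1])
    return float(abs(rate_a - rate_b))

if __name__ == "__main__":
    # Toy data standing in for a model's binary decisions on a test set.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # ground-truth labels
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model predictions
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical demographic split

    print(f"accuracy: {accuracy(y_true, y_pred):.2f}")
    print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

Real toolkits go much further, covering robustness, explainability and security tests, but the basic pattern is the same: agreed metrics computed on a model's outputs so that claims of responsible AI can be verified rather than merely asserted.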

The Foundation, which was recently announced during the Asia Tech Summit, aims to harness the collective power and contributions of the global open source community to develop AI testing tools for responsible AI. 

What is encouraging is that the AI Verify Foundation is a public-private collaboration comprising policymakers and industry. Its seven premier members, Aicadium, Google, IBM, IMDA, Microsoft, Red Hat and Salesforce, will collectively set the strategic direction and development roadmap of AI Verify. The Foundation currently has more than 60 general members.

To be effective, its single voice needs to speak louder and reach further. The Foundation should therefore invite other governments and organisations to join it.

In the absence of global governance of generative AI, the work of AI Verify and its Foundation is meaningful for industry and the economy. An added benefit is that Singapore will be recognised as a thought leader in this space.
