Regulating generative AI is a tough call, as the perils of bias and prejudice emerging from the technology are of growing concern to governments and societies.
“The way to regulate GenAI is not through strict laws, but coming up with a consensus and a set of norms that are agreed to and abided by industry and regulators,” said Paul Burton, general manager of IBM APAC.
“What we really need is public education of GenAI so that everyone understands the norms of behaviour needed to develop these applications,” he told Deeptech Times recently.
To gain a more holistic view of GenAI, business leaders and organisations must also better understand data science and the process of building large language models (LLMs), he added.
The technology is still young and developing at a rapid pace, he reminded, adding that both the tech sector and businesses are still grappling with how best to harness it.
Last week, AI summits held in the United States and the United Kingdom called for international sharing of best AI practices, unveiled rules for the research and deployment of AI across industries, and set up AI safety agencies. Many governments were represented at the UK summit, which Singapore Prime Minister Lee Hsien Loong attended virtually on Nov 2. He said the Republic takes a practical, risk-based approach to AI development and deployment, and highlighted the importance of including diverse stakeholders in the conversation and in collaboration on AI safety.
IBM’s Burton pointed out that there is great interest across all industry sectors in harnessing GenAI, from building training curricula to augmenting information retrieval from procurement systems.
The ChatGPT chatbot, released a year ago, did the business and tech sectors a favour by creating buzz and excitement, quickly building understanding of the new technology among consumers and business executives.
The tech industry is piggybacking on this buzz, creating solutions, products and services to address the needs of business sectors, he said, adding that organisations have also quickly trialled GenAI projects. However, they have been held back from widely deploying GenAI by the hallucinations, inaccuracies and other harms generated by the chatbots.
The critical factor in winning over organisations, he believed, is showing that the data sets used can be free of bias, prejudice and inaccuracy. Organisations can achieve this by creating their own corporate data sets.
IBM is building its own foundation models to train chatbots, he added. The models are built from the ground up with a focus on using data that is “clean”, unbiased and accurate.
Organisations, he suggested, can leverage IBM’s foundation models by enhancing them with corporate data. “They need to gather as much corporate data as possible and make sure that it is clean. Then they can add it to our foundation model. This process can be finetuned further together. It is an iterative process.”
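The data-cleaning step Burton describes can be illustrated with a minimal sketch. This is not IBM’s actual pipeline; the function name, thresholds and sample snippets below are hypothetical, and real preparation of a corporate corpus for fine-tuning would involve far more (bias screening, de-identification, fact-checking). The sketch only shows the flavour of the first pass: normalising text, dropping fragments too short to be useful, and removing duplicates.

```python
import hashlib
import re

def clean_corpus(documents, min_words=5):
    """First-pass cleanup of raw corporate text snippets before fine-tuning:
    collapse messy whitespace, drop trivially short fragments, and remove
    duplicates (compared case-insensitively after normalisation)."""
    seen = set()
    cleaned = []
    for doc in documents:
        text = re.sub(r"\s+", " ", doc).strip()    # normalise whitespace
        if len(text.split()) < min_words:          # too short to be useful
            continue
        digest = hashlib.sha256(text.lower().encode()).hexdigest()
        if digest in seen:                         # duplicate content
            continue
        seen.add(digest)
        cleaned.append(text)
    return cleaned

# Hypothetical corporate snippets: one duplicate, one trivial fragment.
corpus = [
    "Invoice approval requires two  signatures from the finance team.",
    "invoice approval requires two signatures from the finance team.",
    "OK",
    "Procurement requests above S$10,000 must go through open tender.",
]
print(clean_corpus(corpus))  # keeps only the two distinct, substantive lines
```

In a real deployment, the surviving documents would then be fed into the fine-tuning step of the iterative loop Burton describes, with results reviewed and the filters tightened on each pass.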
GenAI chatbots automate manual and repetitive tasks, freeing up people for more complex and creative work. They can help managers summarise lengthy reports into one page of highlights and help software engineers quickly write code. Housewives, students and scientists can use them to brainstorm new recipes, fresh ideas and new drug structures.
In the 12 months since ChatGPT was released, the GenAI market has exploded. Research firm IDC forecasts that businesses across the Asia-Pacific region outside Japan will spend US$46 billion on AI products and services by 2026.