Will trust gap hinder Singapore’s global AI hub ambition?

From left: Sujith Abraham, vice president and general manager for ASEAN, Salesforce, Peter Doolan, executive vice president and Slack chief customer officer, Salesforce, Laurence Liew, director for AI innovation, AI Singapore, and Mac Munsayac, head of customer experience at Philippine Airlines

While AI has the potential to offer significant benefits, its adoption is heavily dependent on the level of trust it garners from users and organisations. Efforts to improve transparency, ensure ethical use, enhance security, and build robust regulatory frameworks are key to fostering trust and driving the wider adoption of AI innovations.

Salesforce’s new AI Trust Quotient reveals significant trust issues with AI in Singapore. Nearly half of workers in Singapore find it challenging to get desired outcomes from AI, and 40 per cent distrust the data used to train AI systems.

The study, which surveys 545 full-time workers in Singapore and nearly 6,000 globally, also finds that over half (58 per cent) of workers in Singapore worry that humans will lose control over AI, and an alarming 94 per cent currently do not trust AI to function without human oversight.

This trust gap is a clear obstacle to AI adoption: among workers who do not trust AI, 95 per cent are reluctant to use it.

IMAGE: Deeptech Times

Without reliable data and human oversight, the trust gap is expected to widen, preventing businesses from fully benefiting from their AI deployments.

Sujith Abraham, senior vice president and general manager for ASEAN at Salesforce, who moderated a panel discussion on trust issues surrounding AI in Singapore last week, said AI adoption had to be supported by keeping humans in the driver’s seat, with data a critical part of the puzzle. Businesses were sitting on a lot of siloed data that had to be unified to produce AI output that workers could trust and adopt.

Singapore has been placing big bets on AI and was among the first few nations to publish an AI plan in 2019. Its AI strategy in harnessing AI to transform businesses and empower workers is ambitious and comprehensive in areas spanning funding and investment, policies and initiatives, education and training, and infrastructure development. 

To support the National AI Strategy 2.0 and to further catalyse AI activities, more than S$1 billion has been committed to AI compute, talent and industry development over the next five years.

Despite its small size, Singapore trails only the U.S. and China, the world’s two largest economies, in AI rankings.

Nurturing a skilled AI workforce in Singapore is a key mandate of AI Singapore (AISG), which was set up to enhance Singapore’s AI capabilities and competitiveness as a global hub for innovation. 

Laurence Liew, director for AI innovation at AISG, who was among the panellists, said building trust in AI was crucial for successful adoption and required a multi-faceted approach.

He pointed to initiatives such as the AI Apprenticeship Programme (AIAP) and LearnAI, both spearheaded by AISG, which focused on developing a skilled and responsible AI workforce, while programmes like the 100 Experiments (100E) ensured AI solutions were implemented with a human-centric approach. By prioritising data quality, transparency and human oversight, trust in AI could be fostered to unlock transformative potential for businesses and society.

To nurture talent, Singapore aims to triple its pool of AI practitioners to 15,000 by training locals and hiring from overseas. The group includes data scientists, machine learning scientists and engineers, who form the backbone of translating AI into real-world applications.

“There are many grants and initiatives available to companies and individuals looking to build AI skills, but the responsibility for learning lies with individuals. AI won’t replace you at your job, but you might get replaced by someone who uses AI,” said Liew. 

Trust is certainly a critical currency between AI and its users. However, closing the trust gap takes more than data quality and integrity alone. Other considerations are just as critical. These include:

  • Educating users on the capabilities, limitations and proper use of AI systems, and being honest about the potential risks associated with AI technologies
  • Designing AI systems to augment human capabilities rather than replace them, emphasising collaboration and ensuring a level of human oversight, especially in critical decision-making processes
  • Developing and adhering to ethical guidelines for AI development and deployment
  • Continuously monitoring and mitigating biases in AI systems to ensure fair and equitable outcomes
  • Implementing strong data privacy and cybersecurity measures to protect user information and safeguard against threats
  • Ensuring AI systems comply with relevant regulations and standards
  • Incorporating user feedback into the design and improvement of AI systems
