It’s on your PC desktop with a Windows update, and it’s being used by banks to speak to customers so that their staff are not inundated with calls.
While generative AI has captured the public imagination this past year, less “interactive” AI models that do not communicate directly with humans are gaining traction as well, riding on the attention drawn by their more famous cousins.
In Singapore clinics, for example, deep-learning AI is being used to help diagnose blindness caused by diabetes, thanks to a technology developed locally.
Today, 23 polyclinics are using the Selena+ system to detect potentially sight-threatening eye conditions by analysing images of patients’ eyes.
It has helped cut the screening workload by up to 50 per cent, with patient results now ready in minutes instead of hours or days.
This way, healthcare professionals spend less time analysing images and more time providing direct care to patients in an ageing population.
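The Selena+ team has not published its internals, but the pattern it illustrates, a model triaging scans so that only doubtful cases reach a clinician, can be sketched. Below is a minimal Python sketch under that assumption; the stand-in classifier, the “no disease” class index and the threshold are all illustrative, not the actual system.

```python
# Hypothetical sketch of an automated eye-screening triage step.
# A real system like Selena+ would load weights trained on retinal
# images; here a generic pretrained classifier stands in, and the
# "no disease" class index and threshold are illustrative only.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def needs_human_review(image_path: str, threshold: float = 0.9) -> bool:
    """Flag an eye image for a human grader unless it is clearly negative."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    # Treat class 0 as "no disease" purely for this sketch.
    return probs[0, 0].item() < threshold
```

The time savings come from the triage pattern itself: clearly negative scans are cleared in seconds, and human graders see only the flagged remainder.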
Selena+ was a ground-up effort, with the technology emerging from a “grand challenge” to solve pressing issues through AI, said Sutowo Wong, director of data analytics at Singapore’s Ministry of Health.
In the past year, he noted, AI has gained the attention of senior management, thanks to generative AI tools such as ChatGPT. This, in turn, has helped drive a more strategic direction for AI in general, he said at an IBM industry event last month.
From healthcare to computer programming, AI efforts in general have seen a boost in the past 12 months, thanks to the growing clout of generative AI.
Notably, AI adoption is expected to deliver a rapid 25 per cent efficiency gain in the next two years, with Asia-Pacific being particularly quick in its take-up of the new technology, according to a study released last week by MIT Technology Review.
Fearing they would fall behind, organisations are pushing for more AI tools to be incorporated into their everyday work. And technology vendors have rushed to deliver new capabilities at breakneck speed.
This week, for example, IBM launched watsonx Code Assistant, a generative AI-powered assistant that helps enterprise developers and IT operators code more quickly and more accurately using natural language prompts.
It helps businesses automate IT tasks, such as configuring their networks and deploying software code throughout their IT infrastructure. Plus, for those still running mainframes, a generative AI assistant can now help translate the old-school COBOL programming language into the more modern Java.
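The assistant’s interface is not detailed here, but the basic interaction is a natural-language instruction wrapped around source code. A minimal sketch of that prompt shape follows; the COBOL snippet and the Java shown in comments are invented for illustration, not actual watsonx output.

```python
# Illustrative only: the shape of a prompt a code-translation assistant
# might consume. The COBOL below and the Java in the comments are made
# up for this sketch, not output from watsonx Code Assistant.
cobol_source = """\
       COMPUTE NEW-BALANCE = OLD-BALANCE + INTEREST.
       IF NEW-BALANCE > CREDIT-LIMIT
           PERFORM FLAG-ACCOUNT.
"""

prompt = (
    "Translate this COBOL paragraph into an idiomatic Java method, "
    "preserving the business logic:\n\n" + cobol_source
)

# A plausible response from such an assistant:
#   void applyInterest(Account acct) {
#       acct.setBalance(acct.getOldBalance() + acct.getInterest());
#       if (acct.getBalance() > acct.getCreditLimit()) flagAccount(acct);
#   }
print(prompt)
```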
Of course, how well AI works depends on how smart it is, which in turn depends on the training it gets and the data on which that training is based.
Foundation models are critical for AI’s success, said Dr Kareem Yusuf, senior vice president for product management and growth at IBM Software, at the IBM industry event last month.
Teaching an AI, he explained, was like teaching a child the alphabet. Once it understands that, the AI can similarly be taught to read, write an essay and debate on stage, he added. “You can tune the different uses once the base foundation is done.”
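In code, that “alphabet first, essays later” idea corresponds to reusing one pretrained base for several downstream tasks. A minimal sketch using the open-source Hugging Face transformers library (not IBM’s stack; the model name and label counts are illustrative):

```python
# Sketch of "tune the different uses once the base foundation is done":
# one pretrained foundation, two task-specific heads. Each head would
# then be fine-tuned on its own, much smaller, labelled dataset.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "bert-base-uncased"  # the "alphabet" stage: general language knowledge
tokenizer = AutoTokenizer.from_pretrained(BASE)  # shared across all tasks

# Same foundation, different uses: a sentiment grader and a topic tagger.
sentiment_head = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)
topic_head = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=20)
# Fine-tuning either head touches far less data than pretraining the base did.
```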
Herein lies a big issue as well. Many businesses, despite having digitalised their operations in years past, have much of their data fragmented across various places. Creating a data lake is no easy task, even for large, tech-savvy enterprises, say experts.
Some technology companies have advocated open-source data to help plug these gaps with what is publicly available.
Google, for example, is calling for publicly available data across borders to be better organised so it can be found and used more efficiently. Currently, much of that data is shared in differing formats and on differing timelines.
Through its Data Commons effort, it has been standardising and processing thousands of data sets from publicly available, reliable sources, ranging from the United Nations’ Intergovernmental Panel on Climate Change to the Brazilian Institute of Geography and Statistics.
Large language models (LLMs) that pick up the data can link to the original source data, so the information is not generated by the LLM itself. This could help improve accuracy, according to the online search giant.
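That linking mechanism can be sketched simply: the figure and its citation are looked up from a curated store, and the model only phrases the sentence around them. In the toy Python sketch below, a small dictionary stands in for Data Commons; the value and URL are illustrative placeholders.

```python
# Toy sketch of grounding: the answer's figure and its citation come
# from a curated store (standing in for Data Commons), not from the
# LLM itself. The value and URL below are illustrative placeholders.
CURATED = {
    ("Brazil", "population"): ("about 203 million (2022 census)",
                               "https://www.ibge.gov.br"),
}

def grounded_answer(place: str, variable: str) -> str:
    value, source = CURATED[(place, variable)]
    # An LLM would only phrase the sentence around the retrieved fact.
    return f"The {variable} of {place} is {value} (source: {source})."

print(grounded_answer("Brazil", "population"))
```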
The Google effort is a work in progress, the company acknowledges, and the world could be very different by the time it manages to collate a large-enough set of data.
Many businesses now narrow their AI’s datasets to internal ones, so the context is more relevant to their uses, such as a chatbot answering customer questions.
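A common way to narrow that context is retrieval: fetch the most relevant internal document and hand only that to the model. A toy sketch follows, with simple keyword overlap standing in for the embedding search that production systems typically use.

```python
# Toy sketch of narrowing a chatbot to internal data: retrieve the most
# relevant company document, then build the model's prompt around it.
# Keyword overlap stands in here for a real embedding search.
INTERNAL_DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Premium accounts include 24/7 phone support.",
]

def retrieve(question: str) -> str:
    q_words = set(question.lower().split())
    return max(INTERNAL_DOCS, key=lambda d: len(q_words & set(d.lower().split())))

question = "How long do refunds take"
prompt = (
    "Answer using only the internal context below.\n\n"
    f"Context: {retrieve(question)}\n\nQuestion: {question}"
)
# The prompt, not the open web, now bounds what the chatbot can say.
print(prompt)
```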
Yet others are closing off access to AIs altogether, walling off their private data so that competitors do not gain an edge by learning from it.
This includes what may have been previously considered public information. The New York Times, for example, has considered suing OpenAI, the creators of ChatGPT, to protect its intellectual property rights.
So, even as AI continues to become more prevalent in businesses in the coming months, it faces a steep challenge ahead in gathering the very fuel it needs to keep growing – data. Without the right data in the machine, AI cannot be smart enough to carry out new tasks expected of it.