Singapore has been actively positioning itself as a global AI hub through a combination of strategic initiatives, investments and policies.
Besides investing in robust infrastructure to support AI deployment, the island state has been proactive in developing a balanced regulatory framework to govern AI use, while encouraging close collaboration between the public and private sectors to foster innovation.
Tech giants including Alibaba, Google and Microsoft have set up AI operations and regional headquarters in Singapore.
Gcore, a global infrastructure solutions provider, recently introduced a suite of products called Edge AI to help enterprises build and deploy AI applications at scale with low-latency inference at the network edge. The company is looking to expand its operations across Asia Pacific (APAC), leveraging Singapore's emerging status as an AI hub to drive innovation and growth regionally.
Co-founder and CEO Andre Reitenbach and chief revenue officer Fabrice Moizan were in Singapore to introduce the Edge AI suite and affirm the company's commitment to helping regional enterprises address AI adoption challenges, particularly gaps in data science and IT expertise, compute scale, performance, and data security.
By deploying AI models and algorithms directly onto local edge devices, such as sensors or IoT devices, where data is generated, rather than relying on centralised cloud infrastructure, enterprises can reap significant advantages, including lower latency, improved privacy and reduced data transmission costs.
Edge AI is driving new efficiencies and business outcomes in almost every sector, from powering autonomous driving and smart manufacturing to creating AI-powered instruments used in healthcare and virtual assistants in retail.
According to IDC, by 2025, 75 per cent of enterprise data in APAC will be generated and processed at the edge, outside traditional data centres and the cloud.
AI training is the process of teaching a model to draw conclusions or make predictions from data; AI inference is the stage that follows, in which the trained model is applied to new data. The better trained an AI model is, the better its inferences will be.
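The split between the two stages can be sketched with a toy example (using scikit-learn purely as an illustration; the model and data here are hypothetical and not related to any Gcore product):

```python
# Minimal sketch of the training/inference split, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labelled dataset standing in for real enterprise data.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training: the model learns patterns from labelled examples.
# This is compute-heavy and typically happens in a data centre.
model = LogisticRegression().fit(X_train, y_train)

# Inference: the trained model predicts on unseen inputs.
# This is the lightweight step that edge solutions move closer to users.
predictions = model.predict(X_test)
print(len(predictions))
```

In an edge deployment, only the trained model is shipped to the edge node; each user request then triggers the cheap `predict` step near the user, which is where the latency savings come from.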
Currently in beta and free to use, Gcore's Inference at the Edge solution enables enterprises to deploy trained AI models on edge inference nodes.
By bringing AI models closer to end users, the technology ensures ultra-fast response times and optimised performance. The solution is built on Gcore's global network of more than 180 edge points of presence (PoPs) powered by NVIDIA L40S GPUs, and particularly benefits latency-sensitive, real-time applications, including GenAI and object recognition.
The company also announced that it has raised US$60 million in Series A funding to help fuel its global expansion, its first external raise since it was founded.
“We are on the cusp of an AI revolution that will transform how companies operate. Gcore is perfectly positioned to connect the world to AI, anywhere and anytime, by delivering innovative AI, cloud, and edge solutions. The growing demand for AI infrastructure from enterprises and SMBs alike highlights the importance of this significant investment. We are thrilled by the support of investors like Wargaming, Constructor Capital, and Han River Partners as we enhance our extensive network of AI servers and reinforce the powerful edge services we offer,” said Reitenbach.