At first, OpenAI’s chief executive was ousted in a shocking move. Then, as the company faced an open revolt from employees this week, Sam Altman agreed to join Microsoft after unsuccessful talks with the board that had kicked him out.
Days later, in yet another twist, he returned triumphantly to OpenAI with the backing of Microsoft, its key investor, and a new board.
That wasn’t the end. News soon emerged of a breakthrough project at OpenAI that could potentially threaten humanity, one the board had reportedly been warned about before it fired Altman.
Yes, those are the words of a Reuters report, not a conspiracy theory website or a science fiction novel.
For those following the news at the leading AI company, it has been hard to make sense of what just happened, despite the gravity of the consequences.
As a week of upheaval draws to a close, much remains unknown about the bitter battle for leadership and, more importantly, about whether OpenAI’s work could yet have the most drastic of impacts on people everywhere.
For most observers, OpenAI is the company behind the ChatGPT sensation, which has paved the way for widespread adoption of generative AI and accelerated the use of AI in general, all in the space of a year or so.
Yet, one worrying aspect of this week’s developments at OpenAI is its work towards artificial general intelligence (AGI), the type of AI that is able to comprehend and reason like a human being.
Its Q* project could be a breakthrough towards AGI, according to the wire service’s sources.
In firing the CEO, were the board members worried that OpenAI was pursuing AGI, often deemed the sort of intelligence that would surpass human intelligence, without proper guardrails in place?
Whatever their motivation, it is clear that Microsoft, which reportedly holds a 49 per cent stake in OpenAI’s for-profit arm and has been its biggest backer of late, is the biggest winner this week.
Not only does it have OpenAI leading the way, ahead of rivals such as Google and Meta, Microsoft can also count on a trusted leadership team that it has now clearly backed.
The only worry, of course, is that few people outside OpenAI know how close it is to a breakthrough such as AGI and what safeguards it has in place.
Lest this sound like fearmongering, none other than Altman himself, along with more than 350 leaders from AI companies such as Google and Anthropic, signed a letter this year warning about the existential risk that AI poses to humanity.
Many others have called for stronger regulation to manage the societal changes already under way, before even more powerful AI technologies, such as AGI, become widespread.
“For all its potential use cases, generative AI also carries heavy risks, not the least of which are data privacy concerns,” said Andy Ng, managing director for the Asia South and Pacific region at Veritas Technologies, a data storage management company.
“Organisations that fail to put proper guardrails in place to stop employees from potentially breaching existing privacy regulations through the inappropriate use of generative AI tools are playing a dangerous game with potential detrimental impact,” he added.
“Right now, most regulatory bodies are focused on how existing data privacy laws apply to generative AI, but as the technology continues to evolve, expect generative AI-specific legislation in 2024 that applies rules directly to these tools and the data used to train them,” he noted.
Fraud is also likely to become more common with AI. In Hong Kong, for example, six people were arrested for using AI to fabricate images for loan scams targeting money lenders and banks, said Frederic Ho, vice-president for Asia-Pacific at Jumio, a mobile payment and identity verification company.
“This case marks the first instance where law enforcement in Hong Kong has made arrests linked to deepfake technology, and unfortunately, similar incidents will continue to occur across the region,” he noted. “Easy access to AI has empowered fraudsters.”
Data privacy and cybersecurity are only two obvious worries. Another big worry is the concentration of power in a handful of leading AI companies, such as OpenAI, Google and Anthropic, which are all based in the United States.
Related to this is the “black box” or opaque way that AI is developed and often deployed by the biggest companies.
Just as outsiders could only watch this week’s drama at OpenAI unfold from afar, the underlying models on which the most trusted AI tools are built remain largely hidden from view.
“A LLM [large language model] like GPT-4 can be updated over time based on data and feedback from users as well as design changes,” noted the authors of a Stanford University and UC Berkeley research paper on ChatGPT this year.
“However, it is currently opaque when and how GPT-3.5 and GPT-4 are updated, and it is unclear how each update affects the behavior of these LLMs,” they added.
Their warning: “These unknowns makes [sic] it challenging to stably integrate LLMs into larger workflows: if LLM’s response to a prompt (e.g. its accuracy or formatting) suddenly changes, this might break the downstream pipeline. It also makes it challenging, if not impossible, to reproduce results from the “same” LLM.”
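To make the researchers’ point concrete, here is a minimal, hypothetical sketch in Python of the kind of downstream pipeline they describe. The call_llm function is a stand-in for any hosted chat-completion API, not a real client, and the checks simply surface the silent breakage they warn about when a model’s output format drifts.

```python
# A minimal sketch of the "downstream pipeline" problem: code that assumes a
# fixed output format from an LLM can silently break when the model behind the
# "same" name is updated. `call_llm` is a hypothetical placeholder, not a real
# API client; it returns a canned reply so the sketch runs on its own.

import json


def call_llm(prompt: str, model: str = "gpt-4") -> str:
    # Imagine this forwards `prompt` to a hosted LLM pinned to `model`.
    return '{"sentiment": "positive", "confidence": 0.92}'


def classify_review(review: str) -> dict:
    prompt = (
        "Classify the sentiment of this review and reply ONLY with JSON "
        'shaped like {"sentiment": ..., "confidence": ...}:\n' + review
    )
    raw = call_llm(prompt)
    try:
        result = json.loads(raw)  # breaks if the model starts adding prose
    except json.JSONDecodeError as err:
        # Surface format drift loudly instead of passing bad data downstream.
        raise RuntimeError(f"LLM output format changed: {raw!r}") from err
    if set(result) != {"sentiment", "confidence"}:
        raise RuntimeError(f"Unexpected keys in LLM output: {sorted(result)}")
    return result


if __name__ == "__main__":
    print(classify_review("The battery lasts all day and the screen is gorgeous."))
```

In practice, teams try to blunt this by pinning a dated model snapshot where the provider offers one and by running checks like the one above as regression tests after each update, although that only detects drift; it cannot prevent it.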
The time may soon come when AI reaches a crossroads, as all groundbreaking technologies do. The World Wide Web of the 1990s needed secure encryption to make safe transactions possible; mobile phones a decade later needed a new touch interface to take off as the smart devices they are today.
Will leading AI companies such as OpenAI now focus on fixing their models and making them less opaque, thus improving their accuracy and building trust?
Or will their next big thing, an AGI, prove to be the society-altering tool that many have feared, unleashed without guardrails set up in advance?
Unfortunately, this week’s events offered only vague hints, as a smart-sounding chatbot so often does, without really committing to a meaningful answer.