Time to be worried? “Godfather of AI” walks out of Google with a warning for the future


As the first PCs running Microsoft’s operating system went on sale in the 1980s, could you have imagined Bill Gates abruptly walking out of Microsoft and telling everyone that PCs were bad?

Or the late Intel co-founder Gordon Moore, the man behind the famed Moore’s Law, warning that packing ever more transistors into smarter processors would one day create a destructive supercomputer?

Well, something similar happened with AI this week. Geoffrey Hinton, one of the technology’s earliest pioneers, whose work laid the foundation for today’s generative AI, has left his job at Google so that he can speak freely about its growing dangers.

Speaking to the New York Times this week, he painted a worrying picture of how far AI has developed in the short time since it captured the imagination of users through chatbots such as OpenAI’s ChatGPT and Google’s Bard.

Tellingly, he said tech giants Microsoft, an important backer of OpenAI, and Google are locked in a competition that might be impossible to stop.

“Look at how it was five years ago and how it is now,” he said of AI, in the New York Times interview. “Take the difference and propagate it forwards. That’s scary.”

Hinton isn’t the first to speak up about the dangers of AI. What if it becomes so smart that it no longer responds to humans at the wheel?

The likes of Elon Musk, an early financial backer of OpenAI, have warned about AI’s “profound risks to society and humanity”, even though Musk himself recently started his own AI efforts.

In March, Musk and more than 1,000 technology leaders and researchers called for a six-month pause on the development of the most advanced AI systems.

What is so scary about AI that it has spooked the very community that has been developing it and watching its improvements over the years?

According to Hinton, the immediate concern is the generation of fake news in the form of photos, videos and text that would make it hard for anyone to tell what is true anymore.

Jobs, too, would be in danger, he warned. Not just “rote” tasks would be taken over by AI; more skilled work could be at risk as well.

The worst scenario for him, however, is one where AI systems autonomously write and run their own code, which could lead to autonomous weapons. Yes, like killer robots.

Like many experts in the field, Hinton sees a cautionary sign in the massive amount of data that generative AI models such as GPT-4, the model behind ChatGPT, have been able to ingest and learn from.

This means AI can become smarter than people, he said, and at a pace few thought possible until recently. Instead of 30 to 50 years away, that future is much closer, he believes, though he did not say exactly when.

What is one to make of such an extraordinary warning from one of the field’s foremost experts, its “godfather”, as Hinton is known?

To explain why he worked on a potentially dangerous technology, he referenced Robert Oppenheimer, who led the development of the first atomic bomb.

Are the comparisons valid? Should AI be seen as a similar world-ending technology that has captured the imagination in so many sci-fi movies?

Even Google’s chief executive Sundar Pichai said recently that Bard was able to develop an “emergent ability” that it was not originally trained for.

The AI could translate Bengali even though it had not been taught how to do so, he said in a widely watched interview with 60 Minutes.

Microsoft researchers have also claimed that OpenAI’s GPT-4 large language model shows “sparks of artificial general intelligence”, solving problems without special prompting, Vice reported.

If true, this seems like the spark that could start a fire. Most of today’s AI systems, which learn from what they find on the Internet, are limited in what they can do.

They can be witty in their responses as chatbots or imaginative in generating a new image from what they have “seen” online. However, they cannot carry out a new task that they were never trained to do.

If indeed they have achieved artificial general intelligence, that means they are able to carry out tasks that were not originally taught to them, much like Pichai’s example of Bengali.

Yet, even here, these claims have drawn critics who are skeptical of the tech giants.

Researchers from Stanford this week put out a paper arguing that many of these emergent abilities are a “mirage” rather than a true reflection of what the AI systems are actually doing.

The reason, they explain, lies in how the output of these AI systems is measured for a particular task. The tech companies may be using measures that hide gradual improvement, and thus see a giant leap in capability that isn’t really there, they add.
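As a rough, hypothetical illustration of that argument (the numbers below are invented, not taken from the Stanford paper): suppose a model’s accuracy on each individual word or token improves steadily as the model gets bigger. Judge the same model instead on getting a whole ten-token answer exactly right, and that steady progress suddenly looks like an abrupt leap.

```python
# A minimal sketch of how the choice of metric can create apparent "emergence".
# The per-token accuracies below are invented for illustration only.

per_token_accuracy = [0.60, 0.70, 0.80, 0.90, 0.95, 0.99]  # improves smoothly with scale
answer_length = 10  # every one of 10 tokens must be correct for an exact match

for p in per_token_accuracy:
    exact_match = p ** answer_length  # probability the full answer is right
    print(f"per-token: {p:.2f} -> exact-match: {exact_match:.3f}")

# Output: 0.006, 0.028, 0.107, 0.349, 0.599, 0.904 -- the smooth left-hand
# metric becomes a sudden-looking jump when measured as all-or-nothing.
```

A measure that demands perfection makes gradual progress invisible until, all at once, it isn’t.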

If that sounds confusing, it is. While tech companies see AI as a groundbreaking technology like the Internet or the PCs of the past, there is no parallel for the doomsday scenarios now being painted by the very experts who have spent the most time developing this next big thing.

While people were worried about the disruption brought by the Internet in its early days, it was mostly seen as a way of connecting people across the world. E-mail was never going to end humanity.

So, what should the average user who is still testing the waters with AI chatbots like Bard or ChatGPT do?

After all, if you don’t try out AI, you could find yourself behind the curve when your job is at risk of being replaced by a machine.

Yet, keep teaching it by prompting it in the right direction each day, and you could be adding to the very capabilities that might one day not just take your job but even help build killer robots to eliminate humans. There are no good answers here, unfortunately.

Don’t be surprised if governments come under pressure to act should AI trigger an incident with serious real-world consequences. Think of a cyberattack or a malfunction of autonomous cars caused by an AI.

The question is whether governments would want to keep AI development under control, considering it could be a game-changing weapon on a future battlefield.

Outside the US, China’s technology giants have found themselves suddenly behind the curve in recent months and have ramped up efforts on their own versions of ChatGPT. They know the importance of this particular technology race in geopolitics.

So, much remains unclear about how AI will continue to develop. Even as Big Tech continues to drive its progress, it is possible that governments could decide to pause or slow down development if a serious event triggers a strong public reaction.

All said, though, it looks increasingly unlikely that AI will be held back in the long term. In some form or other, it has already changed the way people work, and it looks set to disrupt lives in the years ahead. Like it or not, the genie is out of the bottle.
