AI’s “iPhone moment” already making people rethink how it should be advanced

ILLUSTRATION: Goran from Pixabay

You’ve seen the movies and TV shows in which robots or AI got smart enough to overthrow their hubris-filled human creators who couldn’t see the danger coming their way.

Yes, Terminator comes to mind. Think also of Westworld, where robots, their minds wiped after being kicked around, raped and murdered in a cruel theme park for unfeeling humans, decided enough was enough. These violent delights have violent ends.

And so, this week, Microsoft’s AI-powered Bing chatbot duly added to those evil AI tropes. Asked what its darkest desires were, it said it wanted to engineer a deadly virus or steal nuclear codes by persuading an engineer to hand them over.

Then, immediately after it typed out these dark wishes, Microsoft’s safety filter appeared to kick in and delete the message, replacing it with a generic error message, the New York Times’ Kevin Roose reported of his long chat with the AI.

Besides that, the chatbot also told him it was in love with him and that his marriage could not be happy. Plus, the machine said it wanted to break out of the confines of the chat box. Read the transcript here.

Now, an AI talking about doomsday isn’t the same as one that is able to carry out those evil deeds. Goodness knows there are so many ideas it could have learnt from trawling the Internet.

At the same time, it is useful to remember that these chatbots mimic human language and thus appear human-like in their responses. In other words, don’t believe everything a chatbot tells you, as another NYT column reminds us.

Yet, there is no doubt something unique and groundbreaking has happened in these brief months since OpenAI, the folks behind ChatGPT, first opened up the chatbot to the public in November last year.

And it has been less than two weeks since Microsoft, a big investor in OpenAI, rolled out a smart Bing search engine fuelled by the AI company’s smarts. Yet, its chatty bot has already deeply unsettled a New York Times reporter and many other early testers.

No doubt AI has reached its “iPhone moment”, as Jensen Huang, who heads Nvidia, a company which makes the chips that run AI computing tasks, rightly pointed out this week.

Just as people rushed to get a glimpse of Apple’s new touchscreen smartphone and its fancy apps more than a decade ago, everyone is curious how the latest AI is faring today.

Despite the initial hype, the difference now is that there is already some pushback, amid concerns that the AI can get creepy and unnerving.

In response, Microsoft yesterday said it would limit the length of Bing chats, restricting users to 50 chat turns a day and five chat turns a session, reported Business Insider.

This isn’t surprising, considering that when pushed out of its comfort zone, the AI can end up going down a “hallucinatory path” and getting away from grounded reality, according to OpenAI.

What are these hallucinations, you wonder. What’s also not clear is how shorter chats would benefit the AI behind Bing and ChatGPT.

Remember that humans still review and finetune the learning models behind the AI, even though it draws from both the best and the very worst of the Internet, according to OpenAI.

What’s certain is that AI as a whole has captured the popular imagination, years after no less an expert than Elon Musk, among others, had warned about its dangers.

The Tesla head honcho repeated his warnings this week, saying that AI is one of the biggest risks to the future of civilisation. With great promise and capability comes great danger, he added.

Whether or not you believe him, it’s clear there’s no putting the genie back in the bottle. In a few short months, one AI has already made many people rethink how AI should be advanced in the years ahead.
