Hearing proponents of AI speak today, it is easy to imagine it as a “black box” into which you can throw your problems and expect answers to be spat back out in no time.
Want to run a marketing campaign? No need to do a field study of your target audience – ask AI to analyse data on people’s digital or Web habits and identify patterns that reveal their preferences.
Need to manage the calls to your customer hotline? Set up an AI voice assistant to route them to the right service staff, so you need fewer people to handle these customers.
The common thread here is the idea that you don’t need humans – or at least you’d need fewer humans – to do a job thanks to the magical AI tool you just discovered.
What’s easily forgotten is the human toil that’s needed to make the AI possible in the first place.
Amid the technology’s hype, what’s rarely mentioned are the people who build the AI models, or those who sort the gold from the garbage in the data ingested during training. That work carries a real cost.
For an example, look no further than OpenAI’s ChatGPT. Contractors in Kenya report that they have been traumatised by efforts to screen out descriptions of violence and sexual abuse during the run-up to its launch.
The low-paid workers in East Africa were engaged to prevent the chatbot technology from spitting out offensive or grotesque statements, the Wall Street Journal reported earlier this week.
While the main focus on AI’s impact has been on the result of its usage – the loss of jobs or misinformation, for example – the conversation should also include the human toll behind the creation and management of AI.
Think also of those whose works have been taken by AI to be re-generated into sometimes different and yet often still recognisable forms that border on plagiarism.
In the short time that generative AI tools such as ChatGPT or Midjourney have taken text and images and morphed them into new creations for the masses, human creators have worried about the lack of recognition for their efforts.
Earlier this month, comedian Sarah Silverman joined a class-action lawsuit in the United States against OpenAI and another against Meta accusing the tech giants of copyright infringement.
The plaintiffs contend that the companies made copies of the authors’ works without permission by scraping illegal shadow libraries that contain the texts of thousands of books, the New York Times reported.
No, you can’t copyright an idea or a thought, but if the companies are proven to have trained their AI on copyrighted content stored illegally online, then there’s a problem.
The high-profile case, which includes two other authors, Christopher Golden and Richard Kadrey, will be closely watched for the impact it could have on how tech companies train their AI models in future.
However it turns out, there has to be more effort to reduce, or at least to recognise, the human cost of building the next great generative AI. This has to be part of the ethical consideration.
The idea that an AI is so smart that it can instantly conjure up solutions for everyday issues, from planning your trip to writing an e-mail to your boss, obscures the fact that faceless people have put unrewarding and possibly even traumatising effort into making this intelligence happen.
As AI is adopted on a mass scale, it becomes critical that the labour of those who make the machines possible is counted as a cost of using the new technology.
Just as carbon emissions are part of the calculus for data centres and the digital services people cannot do without today, the human toll involved in making AI should be in the technology industry’s sustainability and ethical considerations. Despite seeming otherwise, AI is not cost-free to create and use.