One of the first jobs that people envision AIs replacing today is copywriting. You know, the job done by those pesky, irritable folks who are obsessed with some small details like commas and hyphens and are often prone to creative outbursts.
Like many creative types, copywriters rely on putting together content in a way that catches the eye of the fickle media consumer, whether a reader of a newspaper, a blog or a newsletter. Their job is to capture the reader’s attention.
Now, it’s clear AI can do this job pretty well. Or can it?
Well, see for yourself: here’s an article that I “commissioned” ChatGPT to write for me. The topic? Jobs that AI can’t take over from humans. Write it for the New York Times, I told the machine.
What was turned in, in under a minute, was a nice compilation of the various jobs for which humans are indeed required.
Think of social work, for example, the article pointed out, where empathy is required. Consider the arts, like music and even writing, where subjective decision-making goes beyond the capabilities of AI.
In all, it was a competent job. No, actually, a very good one that could pass for a piece written by an industry expert.
It is also a little bland, because it sounds somewhat “corporate”, as we sometimes say about copy that is not quite sparkling. Think of it as good champagne that is a bit past its best, with the fizz gone.
What if you ask the AI to be less formal? I told ChatGPT to rework the article and it came up with an intro like this:
Hey there, folks! With all the buzz around artificial intelligence (AI) taking over jobs, it’s easy to get worried about the future of work. But fear not! There are some things that AI just can’t do, and that’s where humans come in. Let’s dive into what makes us truly irreplaceable in the workforce of the future. – ChatGPT
Okay, a little contrived, though that might just be okay for some folks who don’t mind the over-enthusiastic tone. Flat champagne, this isn’t; but it’s like a cheap Moscato that’s too sweet and fizzes over the top.
Alright, I have to throw in a couple of wine analogies to show that this article is indeed being written by a human. For now, it seems ChatGPT struggles to provide the same spark in an article that keeps you reading. It’s still very much informational.
When it comes to persuading you to change your mind, say, on a piece of legislation or to follow a political cause, it is strangely unconvincing.
Often, it tries to balance two sides of an argument but doesn’t land many convincing points. (I’m talking about articles here, not memes or fake news, which AI can generate easily.)
In the news business, the copy that ChatGPT seems to generate is best described as “very editable”. That is, it can be improved by a lot.
I’ve been using ChatGPT with GPT-4 (the paid version) for the past few weeks and its best use has been to help me generate ideas when I’m stuck with writer’s block.
It also helps me string together parts of a writeup that deal with factual detail. For example, I might write the main point for a paragraph, say, the rise in broadband prices, then let ChatGPT fill in the remaining sentences that deal with the actual data.
I have to check this data against the source, of course. Plus, I have to confirm that what the AI generates isn’t just a lift-and-copy from another news source, which it can do.
Throw it a media release, for example, and it would basically rework the same stuff with different turns of phrase. Pretty smart, but not there yet, because it still doesn’t seem to understand that a news article needs to strip out all the PR stuff and appear independent.
For example, I avoid describing a company as “industry leading” because everyone is a leader in media releases. I throw out words like “announce” because I’d just go straight into the announcement, say, a price increase or new product in the market.
These small intricacies, it seems, are ones ChatGPT still needs training to sort out. Given a media release, it keeps coming up with the same annoying PR-ish terms that I tell it to remove.
As many others before me have said, ChatGPT seems an able assistant, which is exactly how AI companies like OpenAI, the firm behind it, are positioning these tools.
It’s like an intern whom you need to give specific instructions to, who then needs guidance to rework the article over several versions before it is fit to publish.
When I was a rookie at the newsroom years ago, this was what editors did with my terrible copy as well. Watch and learn, I’d be told, as a senior editor took my work and rejigged everything.
Can AI do the same? I’m convinced it can. Those smart turns of phrase? It will learn them over time. Using a more convincing tone? Of course, it can mimic the best writers out there in the future.
Already, an AI-generated picture has gone on to win a photography award. I’d be surprised if an AI-generated article doesn’t win a journalism prize one day, provided it is vetted and checked by humans.
Therein lies the big lesson here as well. Like any computer program or algorithm, the concept of garbage in/garbage out applies.
Feed the AI lousy data and it has little to work with. I’ve tried asking ChatGPT to stretch an article to, say, 1,000 words when I had fed it insufficient data.
It ended up regurgitating the same text with a few small changes but could not get to the desired length.
I had to tell it where to go to find more data. I had to tell it what to include, and then where the additional information fit best among the other paragraphs. I was still at the wheel.
For folks worried about losing their jobs to AI, the fear is justified, because tools such as ChatGPT will automate away many forms of writing, such as brochures, blogs and website copy. Already, people have lost their jobs directly to AI.
However, AI is not trusted entirely or with everything, certainly not with the most important things, like news articles that cannot be merely 99 per cent accurate (they have to be 100 per cent).
If you wish to write a hard-hitting opinion piece seeking to change public opinion, you need a human looking through the copy. It is aimed at human readers, after all.
Plus, AI is only as good as what is fed to it. The likes of ChatGPT are generative AI, in that they generate new content based on what we give them. How much of it is truly new content, though?
What happens when we start consuming the stuff they generate and then throw that back into the machine? Yes, a dangerous feedback loop occurs. Garbage in, garbage out.
To be sure, the so-called content pollution of recent years, when everyone could put out a blog (like this one) or a video, will soon explode with even more AI-generated stuff. Some of it will be good, much of it not so.
There’s an analogy here in the much-disrupted media business. In the 2000s, the Internet came and killed many small-town newspapers in the United States, and even some large city papers, but those with scale and credibility, such as the New York Times, have continued to thrive in recent years.
The news outlets that didn’t survive often realised too late that they couldn’t just continue to disseminate information as a gatekeeper – the Internet had opened the floodgates.
Similarly, AI has burst the dam open again now and made everyone a semi-competent writer. It will continue to force humans to iterate and improve to stay ahead.
Yet, this is also a great opportunity for content that is rewarding, meaningful and makes a real difference. In other words, the value is in being the wheat that stands out from the chaff.