The headlines have been pretty striking. ChatGPT, an AI tool that can produce natural-sounding text, can be pointed at almost any problem that requires words. Without attending a single class, the software passed a final exam from Wharton's MBA program. It came close to passing the bar exam. And it powered an endless Seinfeld parody that streamed on Twitch for more than a month.
Next stop: content marketing? The claims have been bold. Entrepreneur magazine suggests ChatGPT will revolutionize routine tasks such as writing blog posts and email blasts. New AI products have already sprung up, promising to write "months of social media content in minutes." And the technology's boosters expect it to take over more strategic work as well, including market analysis and training new employees.
But how much of a revolution is ChatGPT — and what are the trade-offs that it brings?
The idea of automatic content is far from new. Computer-generated copy had its first major milestone in 1984, with the publication of the first book written by a computer. In the mid-1990s, the creators of Microsoft's PowerPoint added an "AutoContent" wizard, a feature whose name began partly as a joke. The current generation of tools kicked off in the 2010s, when algorithms first started to produce simple news stories about stock reports and sports events.
The promise, and the real concerns, have been there all along, and those who have studied the current iteration of the technology are far from convinced about its reliability. In a recent guest essay in The New York Times, Noam Chomsky, the country's best-known living linguist, points out that these machines "differ profoundly from how humans reason and use language," which can lead to jarring mistakes and misplaced assumptions.
Are you ready to have AI write your marketing content? See if you agree with the following checklist:
You don't need accuracy. Dr. Alan Thompson, the author of Bright and one of the leading experts on AI, has a two-word answer to whether ChatGPT is reliable: "Not really." Time and again, the content generated by this technology has contained outright errors. The most embarrassing was probably when Google's Bard chatbot gave a factually wrong answer in its very first, very public demonstration. A similar thing happened when Microsoft's Bing AI rolled out.
If mistakes are happening in those high-profile cases, chances are they'll appear in your content, too. When a student turned in a ChatGPT essay for a philosophy class at Furman University, professor Darren Hick summed up his take on the AI-generated copy in a few words: while the piece was "well-written," it "made no sense" and was "just flatly wrong."
You don’t care about search. More than half of all website traffic comes from organic search, and search engine optimization is a first step for any current content program. But there’s reason to doubt whether Google and other engines will treat AI-generated content the same. The sites have long-standing rules that devalue automatically generated content, and as recently as last year said that they regard it as “spam.”
While Google soft-pedaled this ruling in January, saying that it won't penalize AI content in Google Search as long as it's "useful," black-hat actors churning out cheap, search-friendly AI content could change that in a moment.
You don't mind offending your readers. Remember that month-long Seinfeld stream? It was shut down after the show went off the rails, making jokes at the expense of transgender people. After the much-lauded launch of Microsoft's Bing, a technology columnist from The New York Times found the AI declaring that it loved him, the kind of behavior that has led other journalists to call it creepy.
AI mirrors the speech and thought patterns it has learned from all corners of the internet, and those patterns likely include bad behavior and human bias, a problem AI also faces in other fields, including medicine.
You're OK with a lawsuit here and there. Lawyers are sharpening their pencils over a coming wave of claims about who "owns" AI-generated content. In the field of AI-generated images, this battle has already started. Stock image behemoth Getty Images has sued Stability AI, a leading generative AI company, for training its software on Getty's images without a license. Artists and others who feel their work has been taken are also preparing class-action suits.
Am I saying that ChatGPT and other AI-driven tools should be an absolute no for your marketing team? Not at all. Given the right guardrails, these tools can help transform the way you and your agencies generate content. In my next article, I'll focus on the other side of the coin: how working with generative AI can boost creativity, inspire new thinking and help build your brand.