Should You Use AI for Copywriting? It’s Riskier Than You Might Realize

20 January, 2023

The new ChatGPT is impressive. In fact, it’s so impressive that many people are saying it could spell the end of education, journalism, creative writing, coding, medicine, law, and just about any other writing- and information-based field you can think of. This doomsday talk is probably a bit premature. AI applications are fun, but they’re also flawed. If you need reliable content for your business, you should avoid using AI. When it comes to writing, human talent still outperforms AI.

What Is ChatGPT?

ChatGPT is an AI-powered chatbot recently released to the public as a free-to-use (at least during the initial research preview) tool. It’s not the only AI-powered chatbot out there, but it’s the one that has been receiving the most attention recently.

You ask the chatbot questions and it provides answers in conversational English. With the right prompts, it can write everything from academic essays to poems. It can even write code.

It’s impressive, but it’s not perfect.

The Internet Connection Problem

ChatGPT is not connected to the internet, and its training data ends in 2021, so it has little reliable information about anything that happened after that. If you want information on breaking news and emerging trends, ChatGPT is not the solution.

Perhaps this problem could be solved with a more sophisticated chatbot that is connected to the internet. Unfortunately, that might lead to bigger problems. There’s a lot of great content online, but there’s also a lot of garbage. A human with a sense of morality and social consciousness can differentiate between good information and hateful bigotry, but AI might not make that distinction.

Just look at what happened when Microsoft unveiled a chatbot called Tay. According to The Verge, Twitter users managed to corrupt the chatbot in less than 24 hours. Tay learned from those users – and a lot of what it learned was misogynistic and racist. After a few colorful remarks, Tay was taken down.

Chatbots Are Convincing Liars

Information Age published an article about ChatGPT written using ChatGPT. A commentary published by CNET points out that the article is very simple. The chatbot also fabricated quotes and attributed those fake quotes to a real researcher – not something you want in your articles.

Interestingly, Futurism says CNET has used AI to write some articles and that these articles included “a series of boneheaded errors.” The AI calculated interest incorrectly, described loan payments inaccurately, and made a false statement about certificates of deposit.

This is all concerning. Anyone reading these articles to learn about finances could be led astray. If you publish misinformation, people may decide your company is untrustworthy.

The makers of ChatGPT fully acknowledge that the chatbot may produce incorrect answers. The problem is it makes this misinformation sound incredibly convincing. If you’re not an expert on the topic in question, you might believe the confident-sounding chatbot knows exactly what it’s talking about.

Chatbots can also lie by omission. An answer may be technically true yet misleading, leaving out key facts that make the information useless or even dangerous.

Chatbots don’t typically provide sources. If you want to fact-check the chatbot – and you absolutely should – you’ll need to do the research yourself. You’ll also need to add sources if you want your audience to take you seriously. This process can be more time-consuming than writing the article from scratch yourself.

However, lack of credibility is just the beginning of AI’s problems.

The Writing Is Mediocre at Best

New AI-powered chatbots like ChatGPT can produce writing that sounds human. That might not be good enough.

An article published in The Atlantic looks at the chatbot’s output and concludes that “ChatGPT Is Dumber Than You Think.” The chatbot produced an uninspired cover letter for one job and then produced an identical letter for a totally different job. It also produced flawed poetry and inaccurate answers.

Gizmodo put ChatGPT to the test and tried to use it to write a Gizmodo article. The chatbot produced unsatisfactory results: it provided incorrect information and didn’t provide enough detail on critical points.

Despite all the talk about ChatGPT spelling the end for education, ChatGPT has also failed to impress professors. According to Futurism, one professor said the chatbot just rehashed existing information without questioning the data or creating a new argument. Another professor said an essay created by ChatGPT would earn an F.

Modern AI-powered chatbots can create grammatically correct sentences that flow reasonably well – but good writing involves a lot more than that. Good writing requires creativity, insight, and emotion. It relies upon rigorous research and reliable references. At least for the time being, this level of writing looks to be far beyond AI’s capabilities.

Your Search Engine Rankings Could Suffer

Let’s say you don’t care about things like quality writing and accurate facts as long as your website ranks. Using AI for copywriting could still hurt you.

According to Search Engine Journal, a representative from Google says content automatically generated using AI writing tools is considered spam under Google’s webmaster guidelines. Using AI-generated content could therefore result in a manual penalty.

Of course, before a search engine can penalize you for using AI-generated writing, it needs to identify AI-generated writing. AI-detection tools already exist and more are in the works. Fast Company lists three AI-detection tools that are available now or will be available soon. Additionally, text generated with ChatGPT may include a special type of watermark in the future to identify the text as coming from the chatbot.

Your Company’s Reputation Could Also Suffer

Google might not be the only one to penalize your company for using AI.

The backlash against AI-generated content has been fierce. Much of it has involved AI-generated art, which began making waves online before ChatGPT went public. According to Gizmodo, Tor used an AI-generated image on a book cover and, after inciting rage on Twitter, issued an apology. Futurism likewise reported on the backlash against an AI image generator that created stylized portraits. Many people are passionately opposed to using AI for writing and art – activities that have traditionally been viewed as intrinsically human – and companies that ignore this may see their reputations tarnished as a result.

Your Data Might Not Be Secure

To create an article using a chatbot, you need to enter a question or prompt. If you’re trying to produce an article about your company, this prompt may include details about your company.

These prompts may not be private.

When you use ChatGPT, AI trainers may review your conversations. You can’t delete specific prompts if you decide you don’t want them in the database anymore.

When you hire a human copywriter, you can make it clear that certain pieces of information are confidential by using a nondisclosure agreement. With chatbots, this may not be an option.

Warning: This Article Was Written by a Human

At Inbound Insurance Marketing, we embrace many kinds of technology. But when it comes to writing and design, we never take shortcuts – in fact, 100% of our content is written and designed by creative, experienced, insurance-knowledgeable humans. If you want well-researched, insightful, and engaging content, we suggest you stick with human writers, too.

AI is a fun technology with some practical applications, but its many flaws and shortcomings mean that using it for copywriting could backfire big time – especially in the insurance industry, where the words you choose matter.

What type of (human-generated) content do you need? Use the Content Road Map.