What happened when I asked ChatGPT to write this column

By Patrice Lewis

I write a lot of magazine articles, primarily on rural-themed subjects. This past week, I had an article due for which I needed to interview a few people. For this reason, I decided to submit a HARO query.

HARO stands for “Help a Reporter Out,” and it’s a service that connects journalists to sources. I joined years ago when the business was in its infancy, and it’s always served me very well.

The way it works is I formulate a query (“Looking for people with expertise in xyz”) and include a list of five or six specific points I want the respondent to address, then send the query to HARO. They blast it out to a million subscribers in one of their twice-a-day email alerts. Anyone with the expertise I’m seeking submits an answer, which is then forwarded to me.


I’ve used HARO dozens of times, most notably back when I was writing for a craft magazine when it was a convenient method to connect with artists and crafters outside my sphere of influence. Responses were invariably full of cheerful spelling mistakes and questionable grammar, but undoubtedly the respondents possessed the expertise I sought. I’ve “cyber-met” some fascinating and talented people through HARO queries.

Anyway, that’s a long-winded way of explaining what HARO is, and why I’m experienced in submitting the types of queries that will garner me precisely the information I need for what I’m writing. Crucially, however, I haven’t sent a HARO query in several years.

Last week I sent in a HARO query for the rural-themed subject I was researching, and shortly thereafter I received six replies. Except … here’s the thing: Five of them were AI-generated.

How could I tell?

Well, two of the five had almost identical formats. They started as follows:

“I am [name deleted], Director of [company deleted], a digital marketing agency. With a passion for web development and design, my focus on usability and user experience has been instrumental in [company name’s] success. Here is my suggestion for your query…”

And: “I am [name deleted], Co-Founder and Managing Director of [company deleted], with 20 years of IT experience, a Master’s in Networking, and many industry certifications. Simplifying IT for reliable business tools and faster repair. Here is my suggestion for your query…”

The suggestions were then some short little blah-blah pablum that had very little to do with the subject matter I was seeking. Neither bothered to address the specific questions I put in my query, and instead wrote a three-line piece of nonsense. Not only that, but notice the high-tech credentials these responses spouted. Last I checked, high-tech industry gurus seldom bother to reply to the humble rural-themed subject I was writing about.

The three other fake HARO replies seemed genuine at first, since they took pains to answer the specific questions I posed … until I started comparing them. Each reply was almost identical to the others. In other words, the respondents had run my questions through ChatGPT or some other AI source and simply copied and pasted the answers. Notably, the responses were in perfect English, with flawless spelling and grammar.

The one remaining respondent was, indeed, genuine. He had a cheerful number of spelling and grammatical mistakes in his lengthy reply, and was unquestionably an expert in the subject I was addressing. He was great.

After this experience with artificial intelligence, I’ll admit I was miffed. I write both fiction (inspirational romance novels) and nonfiction (primarily magazine articles as well as this column). How long, I wondered, until artificial intelligence replaces genuine writers?

So I got curious. I opened up ChatGPT and typed in a prompt: “Write a 500-word synopsis of an inspirational romance novel, including characters’ goals, motivations, and conflicts.” It spat back a plot full of pablum. I sharpened the prompt a few times, giving it more detailed instructions, and still wasn’t impressed.

Next I tried writing this column with artificial intelligence. On my computer, I keep a running file of hundreds of unwritten columns in which I tuck relevant links and information as I come across them. One such file concerns the advisory report “Our Epidemic of Loneliness and Isolation” and its potential effect on individual privacy.

So into ChatGPT I typed the following prompt: “Write a 1,000-word opinion column on why the advisory report ‘Our Epidemic of Loneliness and Isolation’ is a one-size-fits-all solution and an excuse to pack people into cities. Most important, how the report doesn’t factor in how introverts will object to being forcibly socialized.”

It spat back the dullest drivel imaginable, spinning my prompt into seven paragraphs of absolute nonsense. It drew no conclusions, it provided no insight, it didn’t even offer an opinion. It was just … drivel.

I’m not an editor, but I work with a lot of them. I reached out to a few and asked whether they’d received any submissions that were clearly AI-generated.

One editor replied: “I don’t think I’ve received any submissions that were AI-generated. However, I have received a couple of emails from companies that offered AI-generated content.”

These editors see anywhere from dozens to hundreds of queries and submissions each month, and doubtless are far more skilled than I am at recognizing something written by artificial intelligence. Pablum is pablum, after all.

And, of course, the bot is (in)famous for the amount of false information it can spit out with utter confidence, such as the name of one of America’s four female presidents, Luci Baines Johnson, who served 1973-77.

Steven Pinker, Johnstone Family Professor of Psychology at Harvard, said in an interview: “We’re dealing with an alien intelligence that’s capable of astonishing feats, but not in the manner of the human mind. … Fear of new technologies is always driven by scenarios of the worst that can happen, without anticipating the countermeasures that would arise in the real world. … [J]ournalists have already stopped using the gimmick of having GPT write their columns about GPT because readers are onto it.”

Still, as a writer, I’m braced for all that to change. AI is in its infancy, but it’s learning. Already it’s wreaking havoc in other spheres (criminal justice, law, journalism, education, the military, cybersecurity, etc.).

It’s a brave new world out there, folks.



Patrice Lewis

Patrice Lewis is a WND editor and weekly columnist, and the author of "The Simplicity Primer: 365 Ideas for Making Life More Livable." Visit her blog at www.rural-revolution.com.

