Is AI a writer’s friend or foe? Both and neither, perhaps. Generative AI seems to be creating an increasingly stressful environment for writers of all kinds. Whether you’re a student grafting away at essays, a copywriter producing social media content, a professional blogger, journalist, or scriptwriter, the spectre of AI looms in the shadows and may be threatening your livelihood one way or another.
First there was widespread concern about ChatGPT stealing writers’ jobs. Now, in 2024, the anxiety is that at the click of a button a variety of GPTs can generate articles, blog posts, and social media content that previously took a human being hours to research and draft. The reality is that these texts are rarely fit for purpose: the prose is often flat, dry, and repetitive, and the source material inaccurate. Research indicates that entry-level freelance writing and coding jobs have been impacted by AI, but the demand for writers with niche expertise and creative talent remains.
However, as I write, human writers and prompt engineers are being employed to improve these outputs by training AI in advanced reasoning and language processing techniques. These task-based jobs are on the rise as AI developers seek to humanize their models’ outputs.
While many professional writers subscribe to GPTs for research purposes, few if any would use their outputs as content. More scholarly GPTs behave like targeted conversational search engines, free of sponsored ad content, that can help stimulate the creative juices and get a writer’s ideas flowing. As yet, however, they can’t replace human fact-checking capabilities or make the imaginative leaps and humorous inferences needed to hook readers and engage audiences.
But writers beware! With the ever-increasing sophistication of generative GPTs, we have a new menace to contend with. AI detector tools are now being deployed by clients to screen our work and ensure that anything we submit is 100% human.
I’ve heard various explanations for this. One is that Google doesn’t like AI content and that SEO rankings suffer when it’s detected. In fact, this only applies to very spammy, keyword-stuffed content chiefly designed to manipulate search rankings; otherwise the claim doesn’t hold. Another is that GPTs don’t base their answers on reliable sources. Fair enough, but sources can be checked, as any source found anywhere should be anyway.
Finally, and most importantly, AI-generated content cannot be copyrighted. So if a client purchases your writing in exchange for its copyright, they must ensure your work is free of AI-generated text for legal reasons; otherwise they can’t publish it safely.
Yet more and more clients are using detector tools that generate false positives. Using Grammarly to edit? Watch out: it’s AI. Copying and pasting between drafts, or between Word and Google Docs? Ill-advised; pasted text can trigger an alert. Using PerfectIt to proofread? Risky: it too is an AI tool. The problem is that these editing and proofreading tools can trip AI detectors even when the content is 100% human.
There’s a plethora of articles by disgruntled students and writers who stand accused of using AI to produce their content. Research has demonstrated that detector tools are often inaccurate; some have flagged pre-AI writing as GPT-generated, including the Bible and the US Constitution!
Today, writers not only have to submit content to a raft of grammar, spelling, and plagiarism checkers, but also to AI detectors, even though the companies that produce these tools admit they’re deeply flawed.
I’ve been experimenting by submitting the same content to a variety of AI detector tools and getting vastly different results. Originality AI claims to be the most rigorous detector tool, yet it admits in its own blog that using Grammarly, or cutting and pasting between documents, can flag up AI involvement. Rewritten, paraphrased content derived from a chat with a GPT? Also no good. All these activities can trigger this particular AI detection tool, as can unusual formats such as listicles, bullet points, and short, clipped sentences. Don’t summarise article sections to enhance readability either: the repetition might trigger a flag.
Fear not, though, dear writers: there is a solution. Welcome to the humanizer! Yes, when you subscribe to an AI detection tool that suspects you’re a robot, you will be swiftly invited to add on the humanizer (which IS a robot that claims to make you sound less robotic) for an extra fee. You can only trial humanizers for free on about 300 words. My recent trial introduced grammar errors, redundancy, slang, and other poor word choices. The idea seems to be that mediocre writing can trick the detectors because it’s more likely to look human, whereas smooth, consistent, error-free text is more likely to get flagged as a bot’s. Check out this article in the Guardian, also linked above, where a lecturer claims his brilliant student received a false positive for precisely that.
Who benefits from this fiasco? Writers? No. Editors? Maybe. Tech firms? Definitely. Let me explain.
Writers are increasingly having to prove they are human to AI robots deployed by the editors who screen their work. While this is important, especially as AI text cannot be copyrighted, it also risks promoting an adversarial relationship between staff on the same team and slowing down the production process. In large content agencies in particular, editors often earn hourly rates, so they’re more likely to intervene and demand more from writers: if a writer is very good, editors have less to do and earn less. In these situations, the editor is incentivized to find fault, as they get paid more for each round of revisions. In other scenarios, an editor is paid per article or per word count, so the less intervention needed, the better off they are.
I was told recently ‘not to worry: our editors know how to tweak content to avoid detection’ when I had written the content from scratch. However, I’d been asked to produce a listicle, a format with a high risk of flagging AI involvement because it is ChatGPT’s preferred one. Sure enough, over 50% of the content got flagged as possibly created by AI. The suggestions made by two editors were cringeworthy to read: they introduced wordy, redundant phrasing and flowery language that sacrificed clarity to claim humanity. Perhaps they’d used a humanizer, to be fair.
Also, some neurodivergent writers have complained that their communication style (direct, detailed, meticulously grammatical and factual) is being penalised. Third-person, neutral, formal writing also falls under suspicion, and is more likely to be used by non-native English speakers: a double DEI alert. Yet sometimes this is exactly what a brief demands and a client wants.
So, in 2024, a well-equipped writer’s armoury should include a word-processing app, an external grammar-checking and editing app, a plagiarism checker, an AI detector, and a humanizer. Each tool is available on subscription for a modest monthly fee.
As Max Loel writes for The National Writers’ Union:
“Originality claims its tool is 97.5% accurate, but this figure does not stand up to real-world scrutiny. They tell our clients that AI-flagged content will get penalized in search results, which Google outright refuted. Plus, there is dishonesty in creating a problem out of thin air and turning around to sell the solution to those most impacted.”
Who benefits? I think the answer’s clear.