Avoid sending AI content to journalists
I’m happily ensconced in Roma in Mexico City, where I have finally managed to organise access to the city’s bike rental scheme (think London’s so-called Boris Bikes, but at half the price for an annual pass). The scheme doesn’t accept international cards — cue asking a kind new Mexican friend for help. Anyhow, I’m thrilled I can now pedal around the city on two wheels.
In journalism-related news, I wanted to throw a spotlight on AI. You may or may not have seen my recent BBC article on ‘The people refusing to use AI’, which went a bit crazy. One of the main case studies, Sabine Zetteler, and I received emails from people around the world. It clearly resonated. I even had a teenager from the US emailing me about a study she’d done across her school on pupils’ views on AI.
As a journalist, nothing gives me more pleasure than people feeling so passionate about an article I’ve written that they’ve felt compelled to get in touch. Well, maybe tacos and cycling. It led to the article being picked up by other titles worldwide, and also by my beloved Radio 5 Live. Now, I’ll stop navel-gazing, but it brings me onto the subject of people using AI to generate comment and copy for magazines and newspapers.
Honestly, as someone said in my article, we need to make sure we’re not outsourcing our ability to think. To problem solve. To have our own thoughts. Who wants to read a magazine or newspaper where it’s all just regurgitated content from the internet pulled together by ChatGPT? Is this the world we want? Ethics and concerns over our brain cells plummeting aside, I can assure you that journalists are already receiving such twaddle and are calling it out. And many are starting to adjust their editorial guidelines to state that they don’t want to receive such content:
Katie McQuater, head of editorial at Research Live, posted on LinkedIn to say the organisation had made a brief amendment to its editorial guidelines:
“'Research Live does not publish editorial content generated by artificial intelligence. If we suspect editorial content has been directly generated using AI, it will not be published.'
We want YOUR thoughts and ideas, not those generated by LLMs.
It shouldn't need to be said, but in the space we cover, accuracy is critical - plus, if we're publishing an opinion piece, we want YOUR opinion, not machine-generated 'content' formed from someone else's.”
Many journalists can tell if an email, comment, or opinion piece has been spat out by AI. It doesn’t sound human. We want your authentic voice. We want comment that is accurate, not potentially false information regurgitated by an LLM.
In response to Katie’s post, John McCarthy, opinion editor at The Drum, said: “Cards on the table. I'm getting a lot of opinion in that seems to have been written manually then gets a final AI edit pass from the writer or PR by the looks of things. I can forgive on tighter deadlines. I'd rather the odd typo than the GPT edit job but c'est la vie. Ultimately, we can tell a mile off whether something has any substance whatsoever in it or not really easily. Suppose it's much easier to tell with opinion when 800 words come back without stating a single opinion. Even when people prompt it to be punchy, all I get is brand social tone cringe. Sparkly nothing.” He later added: “To be honest, it's getting tiring. I need to be less forgiving on them.”

Of course, we journalists know that it’s not always the CEO who has provided comment. Perhaps a PR has written it, or the CEO has said it and a PR has edited it. At least then there is a human involved in writing good copy. Stick to that. Stick to someone who has good writing skills, rather than outsourcing to a machine. It could mean the difference between a journalist never using you or your agency again. Is it worth it?