5 Comments

Alastair

Hi Andy,

Interesting article to say the least. It is quite concerning how much published text is just AI content.

Though I will add, all the AI checking programs are dross. They will regularly say with great confidence that works published in the '90s are "AI text - High". They also miss things that actually are AI (I've done this test with my ebook library and text I've pumped out from various APIs). OpenAI gave up on trying to make a service for spotting AI text, even though I'm sure it would have been lucrative, simply because they couldn't get it to work accurately.

The slightly frightening outcome of this is that we just won't be able to tell. Human writing will also become more like AI writing the more of it we read, compounding the effect.

New World here we come

Joseph P. Duchesne

Thank you for your thoughtful article. It is important for us, and for the church at large, to wrestle with these issues. I know I have, and I continue to do so on my own Substack.

To me, using AI to brainstorm isn't a problem per se, as long as the person has actually wrestled with the topic they are presenting. There is no substitute for reading and doing research on a topic. This is especially true when writing a book. Otherwise, as recent research has shown, not only will we not remember what we wrote, we will actually harm our ability to do so in the future.

I, for one, am seriously concerned that we have ethical conversations around AI and its usage. It is just too easy to take shortcuts.

A sad consequence of bad ethics is, as you mention, a deep suspicion of authors who use AI, which gets in the way of being enriched by their writing.

Let us not throw out all AI use. It has its place but it must be used ethically.

3 more comments...
