The Side Syndrome: Why We Can't Think Clearly About AI
Read enough Wendell Berry, and you might reject AI for the sake of nature.
Read enough Tristan Harris, and you might reject AI for the sake of safety.
Read enough Jacques Ellul, and you might reject AI for the sake of telos.
Read enough Gary Marcus, and you might reject AI for the sake of the future.
Read enough Neil Postman, and you might reject AI for the sake of humanity.
Or at least that’s how it seems to work. There appear to be two sides in the AI debate: for and against. And the authors above all push you toward the anti-AI side. Right?
The truth is much more complicated.
One of the issues with humanity is our anti-nuance impulse. We want to be for or against nearly everything. Let’s call this our Side Syndrome. The Side Syndrome is the human bias toward picking a side rather than recognizing issues as multi-faceted.
You see the Side Syndrome in all of life. In sports, for example, people don’t stay neutral. We want one side to win (“our” side), and another to lose. I remember enjoying fantasy sports precisely because it made me more interested in watching. I had something to root for.
Because of human nature, we seek meaning. All meaning is story-driven. And so we are constantly getting caught up in narratives, for better and worse. Join the bandwagon, and if the Blue Jays win, you win. When they lose, you lose. And you feel it.
The extreme rise of sports betting taps into the same impulse. Pick the right side. Be part of a larger story. Their victory becomes your victory. This is Side Syndrome, from relatively inconsequential decisions (cheering for the Blue Jays) to life-destroying gambling addictions.
A couple of days ago, I spoke with one of the elders at my church about this, and he said he never picks a side when he watches sports. Rather, he enjoys the game. It is an interesting concept, isn’t it? What if what you enjoyed most about something was not reducible to a binary result (win or lose), and not even a number (the number of strikeouts)? What if you were impressed by the skill, the athletic feats, or the courage of jumping straight into a teammate for the sake of making a catch, at the risk of losing the game?!
What if you were more impressed by people than metrics?

Of course it’s not wrong to cheer for your local sports team. But the elder at my church is unusual in avoiding Side Syndrome. The same human impulse applies to many spheres. Politics is treated as a simple matter of left versus right rather than a more nuanced assessment of policy. Debates over Christian schooling versus homeschooling are treated as a matter of one side or the other. And of course, with technology, we like our technology critics to pick a side.
Some of the biggest Wendell Berry fans I know are currently using LLMs daily, and see no contradiction.
Tristan Harris, despite his calls to slow the development of AI, also promotes his colleague’s work in building a generative AI tool to decipher the language of animals.
Gary Marcus, one of the loudest critics of AI labs, still thinks AI should be developed.
Neil Postman found ways to celebrate the “reign of hygiene” in our technological age, and so he might also celebrate the mRNA vaccines which were created with the help of AI.1 None of these technology critics can simply be placed on “one side” or the other.
At this point, I am convinced that “AI” is too nebulous and abstract a term to evaluate holistically. And beyond that, I’m not convinced “for or against” is a helpful line of questioning in the first place, as if the balance can somehow be weighed. Again, this goes back to an orientation toward metrics (“weigh this on a scale”) versus an orientation toward humanity, godliness, and glory.
As such, Jacques Ellul is helpful. He did not assess technologies in isolation but assessed the society in which they were embedded. We need to move beyond a simple “how good” or “how bad” each technology is, and toward the difficult questions of what it means to be human, why we exist, and how technology shapes our worldviews.
Similarly, I have been reading Brent Waters, who says, “Artifacts cannot, on their own, both perform the instrumental tasks of fabricating a world and its inhabitants and provide the content of what is desired in such fabrication. Rather, we must turn our attention to the underlying values that drive technological development in accordance with certain desires and aspirations stemming from those values.”2 This makes sense to me. It’s not just about AI and what it’s saying. It’s about what’s behind our urge to use AI in the first place, as just one example.
Of course, this is going to require a lot of care and effort. Most of all, it will require critical thinking. Who has the time to do all that? Perhaps we should just ask ChatGPT instead.
I recently spoke at Bluewater Church in Sarnia across three sessions for their Fall Conference. Last year they had Wes Huff fill this role. The pastor told me I might get the “Bluewater Bump” and get on Joe Rogan too, just like Wes Huff. We’ll see about that! Hah! Here are the YouTube links to each session:
Here are some recent podcast episodes I have hosted:
Thinking About The Faith (Links: Website | Apple | Spotify):
Q43: What Are the Sacraments or Ordinances?
Q42: How is the Word of God to be Read and Heard?
Q41: What is the Lord’s Prayer?
Q40: What Should We Pray?
Q39: With What Attitude Should We Pray?
Q38: What is Prayer?
What Would Jesus Tech (Links: Website | Apple | Spotify | YouTube):
Becoming Gods: Theosis, Telos, and Transhumanism, with Wyatt Graham
The AI Bubble: Reality or Hype?
TGC’s AI Christian Benchmark, with Michael Graham
Altman, Musk, and Belief in God
On Postman’s support of technologies that brought about hygiene and reduced disease, see Technopoly, page 12. On mRNA vaccines, see https://en.wikipedia.org/wiki/MRNA_vaccine.
Brent Waters, Christian Moral Theology in the Emerging Technoculture: From Posthuman Back to Human (Routledge, 2014), 17.