This Post is About More Than AI, Human Agency, and Click-Bait Headlines
In Praise of Human Thought
Titles, headlines, and rubrics are intended to trenchantly summarize the sentences and paragraphs that unfurl beneath them. Authors, editors, and publishers favor provocative titles contrived to attract the interest of readers.
Some content, however, defies facile summation, resisting the snappy headline or superficial title. These tensions typically arise when you grapple with a complex topic or an inherently ambiguous narrative. There are times when the intricacies and nuances of discourse simply say no to simplicity. No matter how hard you try, you can’t make the irregularly shaped peg fit into the prefabricated round hole. In those circumstances, the headline’s insistence on simplicity does a disservice to what follows. You could go with a more complicated title or headline, but then you would lose the provocative hook. This, my friends, is what we call an editorial dilemma.
This post, and the topic it addresses, risks being caught on the horns of that dilemma.
I’m not sure misery loves company, but I’m certainly not alone in trying to reconcile the integrity of complex subject matter with the editorial insistence on a catchy title. It happens all the time. A perfect example appeared recently in Wired. The article in question has this title: A Philosopher Released an Acclaimed Book About Digital Manipulation. The Author Ended Up Being AI. That’s a provocative title for the article that follows, but the substance of the article is far more intricate and, frankly, intriguing than its title suggests. The title is misleading; in fishing for controversy, it misses the mark.
I strongly encourage you to read the article. It’s a fascinating piece, relating the tale of an Italian essayist and philosopher, Andrea Colamedici, who concocted the nom de plume of Jianwei Xun. Colamedici wrote his book using AI as a research foil. With the assistance of AI, Xun (Colamedici) explored and interrogated ideas that were ultimately proposed, challenged, refined, and presented in the book, Hypnocracy: Trump, Musk, and the New Architecture of Reality.
Disappointed a Few People
Readers were disappointed to learn that the purported author did not exist and that AI was involved in the book’s creation. But the story wasn’t that simple.
The Wired article includes an interview with Colamedici, who explains the motivation for the experiment that produced the book:
First of all, I teach prompt thinking at the European Institute of Design and I lead a research project on artificial intelligence and thought systems at the University of Foggia. Working with my students, I realized that they were using ChatGPT in the worst possible way: to copy from it. I observed that they were losing an understanding of life by relying on AI, which is alarming, because we live in an era where we have access to an ocean of knowledge, but we don’t know what to do with it. I’d often warn them: “You can get good grades, even build a great career using ChatGPT to cheat, but you’ll become empty.” I have trained professors from several Italian universities and many ask me: “When can I stop learning how to use ChatGPT?” The answer is never. It’s not about completing an education in AI, but about how you learn when using it.
We must keep our curiosity alive while using this tool correctly and teaching it to work how we want it to. It all starts from a crucial distinction: There is information that makes you passive, that erodes your ability to think over time, and there is information that challenges you, that makes you smarter by pushing you beyond your limits. This is how we should use AI: as an interlocutor that helps us think differently. Otherwise, we won’t understand that these tools are designed by big tech companies that impose a certain ideology. They choose the data, the connections among it, and, above all, they treat us as customers to be satisfied. If we use AI this way, it will only confirm our biases. We will think we are right, but in reality we will not be thinking; we will be digitally embraced. We can’t afford this numbness. This was the starting point of the book. The second challenge was how to describe what is happening now. For Gilles Deleuze, philosophy is the ability to create concepts, and today we need new ones to understand our reality. Without them, we are lost. Just look at Trump’s Gaza video—generated by AI—or the provocations of figures like Musk. Without solid conceptual tools, we are shipwrecked. A good philosopher creates concepts that are like keys allowing us to understand the world.
Later, Colamedici explains how he used AI, pointedly emphasizing that AI did not do the writing:
I want to clarify that AI didn’t write the essay. Yes, I used artificial intelligence, but not in a conventional way. I developed a method that I teach at the European Institute of Design, based on creating opposition. It’s a way of thinking and using machine learning in an antagonistic way. I didn’t ask the machine to write for me; instead, I generated ideas and then used GPT and Claude to critique them, to give me perspectives on what I had written.
Everything written in the book is mine. Artificial intelligence is a tool that we must learn to use, because if we misuse it—and “misuse” includes treating it as a sort of oracle, asking it to “tell me the answer to the world’s questions; explain to me why I exist”—then we lose our ability to think. We become stupid. Nam June Paik, a great artist of the 1990s, said: “I use technology in order to hate it properly.” And that is what we must do: understand it, because if we don’t, it will use us. AI will become the tool that big tech uses to control us and manipulate us. We must learn to use these tools correctly; otherwise, we’ll be facing a serious problem.
Platforming: Not What You Think
The interview includes a discussion of the risks inherent in anthropomorphizing AI, and, conversely, in “platforming” people, described as an insidious dehumanization resulting from our impersonal immersion in a sea of bots.
Later, there’s a discussion regarding the ontology of AI — short answer, it’s a tool that we should use consciously and purposefully — where the following observation is made:
AI is a human tool, no question. It’s a product of our past, a type of collective consciousness that we created and that helps us understand why we are here. But here’s a paradox: While AI can tell us the weather, recite verses from ancient poets, or suggest possible solutions to problems, it can never help us understand the meaning of life.
The mistake is in asking AI to “tell me why I exist.” The better approach is to tell it, “I’ve been reflecting on the meaning of life. I’ve read Sartre, who says that there is no predetermined meaning, but that we construct one. What other thinkers from other cultures would you recommend I look at in order to broaden my understanding?”
I have long believed, as prior writings in this space make clear, that AI — whether in its rudimentary form of genAI or in its more evolved incarnation of artificial general intelligence (AGI) — is a tool that we can and should use. But we should use it actively, not passively. The technology should serve us, not the other way around.
I use AI, when appropriate, as a research tool to accelerate and expand investigations into areas and domains where I want to deepen my knowledge and understanding. I don’t want AI to tell me how to live or what to think. I can and should do my own thinking. Thinking is an essential attribute of being human; thinking, fortified by the right information and a well-calibrated ethical compass, can help lead us to a better, brighter future.
Stepping carefully down from my soapbox, I urge you again to read the Wired article.
Meanwhile, I’m struggling to compose an appropriate title for this post; I’ll probably fail to strike the right balance, erring one way or the other. But it’s better to try, giving it real thought, than not to have tried or thought at all. In the interest of subtlety, I’ll leave the corollary unsaid.