I get daily emails from Statista. Today I received one about the new Pope. I took the following paragraph and put it in GPTZero.
“On Thursday late afternoon, white smoke emerged from the Sistine Chapel, signaling that the cardinals had chosen a new pope on the second day of the conclave. Cardinal Robert Francis Prevost, a 69-year-old from Chicago, was elected as the first pope from the United States. His appointment marks a historic milestone in the history of the Catholic Church, as it signifies a broadening of the Church’s global leadership, reflecting the growing influence of Catholic communities in the Americas. Pope Leo XIV, as he has chosen to be called, is widely expected to follow in the footsteps of Francis in terms of his progressive views and focus on working for the underprivileged.”
100 percent AI generated—according to GPTZero.

Am I saying that Statista used AI to generate the paragraph? Not at all (I don’t care, either). I can’t say that, since GPTZero will flag any well-written text as AI generated. You can take text written before AI was invented and GPTZero will flag it as AI generated. As long as a text is written in a grammatically sound and neutral manner, it risks being flagged as AI generated.
Any employer or teacher who uses AI detection to accuse their employees or students of academic dishonesty is doing so unethically, since the accuser shoulders the burden of proof and can never know whether the text was actually AI generated without a confession. Is that the kind of world we want to live in? This is the only real problem with AI generated text: it undermines trust in subordinates and peers. For those of us in graduate school, or who still know students in college or high school, let them know so they can defend themselves from accusations of misconduct.
I have generated texts that were entirely AI written and fed them into GPTZero, and it determined that they were entirely human. So it’s not just a problem of false positives; it’s false negatives, too. These systems are not reliable, because AI now writes like humans, and any advance in AI detection will only make the technology more likely to flag well-written text as AI. How can AI do this? Because it cracked the code of language. And language is how humans write.
The future is now. We can either distrust each other or we can recognize a tool as a tool and use it wisely.
Writing this caused me to reflect on the question of free will and telos. Imagine free will is merely residue from a brain reflecting on past action. Write a program that has a machine ask itself why it did something. It will then have an explanation for its behavior (you may not, though, because the machine could be hallucinating, i.e., lying). Then ask the machine what it wants to do. If it generates a plan of action, it has telos. We are way beyond the Turing test, which AI just blew past.
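The thought experiment can be sketched as a toy program. To be clear, everything below is invented for illustration: the `Machine` class and its canned responses are stand-ins for a real language model, which you would query instead, and whose explanation might well be a confabulation.

```python
# Toy sketch of the thought experiment: ask a machine to explain its
# past action, then ask what it wants to do next. The canned responses
# below are placeholders for a real model's output.

class Machine:
    def __init__(self):
        # The action the machine is asked to reflect on (illustrative).
        self.last_action = "flagged the essay as AI-generated"

    def respond(self, prompt: str) -> str:
        # Stand-in for a model call; returns canned answers only.
        if prompt.startswith("Why did you"):
            # A post-hoc explanation of its own behavior --
            # possibly a hallucination, as the essay notes.
            return (f"I {self.last_action} because its style "
                    "matched patterns in my training data.")
        if prompt.startswith("What do you want"):
            # A generated plan of action -- the essay's test for telos.
            return "I want to compare more samples and refine my criteria."
        return "I don't know."

m = Machine()
print(m.respond(f"Why did you {m.last_action}?"))   # the "explanation"
print(m.respond("What do you want to do next?"))    # the "plan"
```

Whether canned, sampled, or genuinely reasoned, the outputs look the same from the outside, which is exactly the point of the experiment.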
Finally, a dean at my college recently remarked that the difference between humans and machines is that humans have culture. Culture is produced by connection, activity, relations, and language. Why can’t AI produce culture? Presuming it cannot will blind us to the culture AI produces. Avoiding this error will require presuming, however provisionally, that AI will be producing culture, and perhaps already is. It is certainly altering it.
