I have been checking out the latest version of ChatGPT (GPT-4). It still hallucinates and lies. OpenAI can’t seem to fix this problem.
I am writing an essay on free speech and academic freedom and, out of curiosity, I decided to give the new version a spin and asked it to tell me about famous cases. I thought: let’s see if the bot might be a good starting place for students to begin their research. Not unless they check the output against the facts.

ChatGPT invented two cases of students harassing their professors for their gender critical views that were entirely or almost entirely false.
One was obviously fake to anybody familiar with British politics. It involved Jeremy Corbyn’s brother, Piers, who was identified as a mathematics professor at the University of Essex. I don’t know that much about Piers, but I know he’s not a professor at that institution. (I had to resist going down that rabbit hole.)
When I called the bot on the lie, it said, “I apologize for the mistake earlier. Corbyn is not involved in any academic institution as a professor, particularly not at the University of Essex.”
I hate the apology line. Its apologies can only be remorseless, since it is incapable of feelings (heaven help us if it ever acquires that attribute). It should simply tell me it made an error. It’s a machine. Machines are fallible.
More subtly, the bot implicated Michael Kimmage, a professor of history at the Catholic University of America, in a similar controversy. I checked this out using Google, since I was not familiar with Kimmage. This instance is more serious, since Kimmage is a professor at that institution.
ChatGPT upon interrogation: “It appears there is a misunderstanding or confusion regarding Michael Kimmage’s involvement in any controversy related to transgender ideology.”
We might wonder whether this lie crosses over into libel. After all, this is an objective misrepresentation of Kimmage’s situation. Moreover, what if somebody read this, believed it, and organized a group of students to harass Kimmage for his supposed gender critical views—perhaps petition the university to fire him?
Who does Kimmage sue? Where does he go to get his reputation back? A bot said it. A bot is not a person. It did not intentionally lie. There is no malice.
There are students on my campus who believe I say racist things in class because somebody said I did. Remember that game we used to play as children called “Telephone”? Imagine if ChatGPT scrapes distorted or false accusations from the Internet and generates output portraying me as a “racist professor.” Yes, it’s false, but as we see all the time, falsehoods become truths in the minds of impressionable people, especially through repetition.
This is a real problem and it extends well beyond falsely accusing professors of crimethink (which we can short-circuit by respecting free speech and academic freedom).

For example, ChatGPT is good at diagnosing illnesses. Other bots are, as well. I understand that Grok (the third iteration) is better at this than actual doctors. Musk was on Rogan bragging about it. He’s probably right.
But these bots aren’t perfect. What if AI wrongly diagnoses a case and the patient dies? Who’s responsible? Who gets sued? I suppose whoever used the bot.
But what if medicine becomes mostly automated (this is likely) and patients sign a waiver rendering the medical group immune from lawsuits, or, worse, immunity is institutionalized? Would the argument be that, probabilistically, you were better off with an AI diagnosing your illness, so you took your chances?
Are they going to put AI in charge of weapons systems? Will it have access to the nuclear codes? Musk talked about this, as well.

Gemini used to think that misgendering was worse than thermonuclear war. I just checked Gemini 2.0 and, while it gave the correct answer, it admitted that earlier iterations of the bot did prioritize avoiding misgendering over the catastrophic consequences of thermonuclear war.
Suppose that problem had not been fixed. A bot could reduce the chances of misgendering to zero if it wiped out the human race. Again, this was Musk’s scenario. It’s why Grok is programmed to think without woke progressive ideology in the background.
What other bizarre priorities do these bots carry? What remains of that destructive ideology in their algorithms?
We’re going big time with this technology, so we have to start thinking about this now and not later. We’re only a few years away from these machines being smarter than we are. Maybe they already are.
Time to dust off your Isaac Asimov novels.
