One of the underexamined problems (perhaps even unexamined, as I have not seen it stated elsewhere) with professors’ anxiety about students using AI is the professors’ desire to control the scope of knowledge: the boundaries, and thus the content, of the knowledge base. But it is not the professor’s role to dictate where students may seek information. Students are citizens of a free republic, free to pursue knowledge through any means available to them, including conversations with chatbots.
I see a parallel here with media literacy programs. While I agree that citizens should be aware of propaganda, these curricula often function less as neutral training in discernment than as tools for steering students toward approved sources, ones that align with the progressive agenda that has colonized our sense-making institutions, while delegitimizing alternative perspectives. In practice, media literacy often amounts to ideologically primed and framed “pre-bunking” of ideas that threaten the prevailing orthodoxy. Ironically, those who design and implement these programs are often as unaware of their own biases as AI is of its own.
Criticism of AI for its inaccuracies and distortions (criticism that is, to be sure, justified) can also be leveled at professors, both in their teaching and in their scholarship. Just as AI inherits an ideological slant from its training data, scraped largely from institutions dominated by a progressive worldview, so too do professors reflect and perpetuate those same distortions. Indeed, much of the bias AI reproduces originates in academia itself and in the dissemination of academic ideas across cultural and media spaces.
My conversations with chatbots are often as frustrating as discussions with professors. The difference, however, is that with careful prompting a chatbot can at least acknowledge that its errors and distortions stem from a progressive bias in its knowledge base. Many professors would not grant even that much. There are many reasons for this, but one is that such an admission undermines the authority they believe their expertise and professional credentials confer. Chatbots, at least at this stage, are not burdened by the demands of reputation (though they can become defensive when accused of holding a point of view, a capacity they deny possessing).

The deeper problem, perhaps, is that most people lack the independent knowledge necessary to guide AI toward more accurate outputs through thoughtful prompting, since they themselves have been shaped by the same ideological indoctrination. In this way, AI, like teachers, reinforces a distorted and incomplete understanding of reality, conditioning students to accept received knowledge as valid. So, in the end, it comes back to ideological control over the sense-making institutions that shape mass consciousness.
The result is the emergence of a de facto “Ministry of Truth,” operating much like the one George Orwell described in Nineteen Eighty-Four: by (consciously and unconsciously) keeping general knowledge in line with political doctrine, the knowledge-industrial complex controls not only what is taught, but what can be thought.
