2025: The Year in Review and Notes on the West’s Islamic Problem

Yesterday’s essay, Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument, my 316th publication of the year, capped off Freedom and Reason’s most successful year to date, surpassing last year’s record. The platform saw a 170 percent increase in visitors and a 90 percent increase in views compared to 2024. Since 2020—the year my blog gained traction as people sought reliable information on the pandemic amid widespread censorship and deplatforming on Facebook, Twitter, and YouTube—Freedom and Reason has experienced a 2,198 percent increase in views and a 2,180 percent increase in visitors.

I noted that yesterday’s essay was meant to cap off the year. However, with Zohran Mamdani set to be sworn in as New York City mayor on January 1st, taking the oath of office on the Qur’an, surrounded by progressives, in the shadow of occupied Europe, something must be said about the coming year and the specter of Islam. When I put the problem this way, Islamophiles will suggest a parallel to antisemitism in Europe (ironic in light of the virulent antisemitism on the left and among their Muslim comrades). They will accuse me of “Islamophobia.” When patriots urge mass deportation of Muslims, they accuse them of advocating “ethnic cleansing.” But the parallels are false, and readers need to be armed for the New Year to confront these lies. So, I will cap off the year with a demographic comparison between Jews and Muslims and a note about its significance to the future of your family and your nation.

In the imagination of the Red-Green Alliance, Jews represent an outsize threat. These two planets orbit around a Jewish star—and they hate its light, wishing instead for a shroud of darkness. They seem to hate Jews more than they hate Christians. Their common mass delusion flies in the face of a historically unique demographic position, both in absolute size and in relation to political sovereignty. This is as true today as it was when Hitler was in power in Germany. At its historical peak, on the eve of World War II, the worldwide Jewish population is estimated at approximately 16-17 million people. At that time, the world population stood at roughly 2.3 billion. This figure represents the highest proportional share Jews have ever held in recorded history. Even at this peak, Jews remained a very small global minority, well under one percent of the world’s population (about 0.7 percent of humanity). And they had no country, their homeland controlled by foreign powers.

German national socialism murdered millions of them in the European diaspora. The Holocaust did not merely reduce the Jewish population in absolute terms; it ensured that Jews would never regain their former demographic position relative to the rest of the world. Today, the global Jewish population is estimated at approximately 14-15 million, while the world population exceeds 8 billion. As a result, Jews now make up roughly 0.17-0.19 percent of humanity—less than one-fifth of one percent. This represents not only an absolute shortfall compared to 1939, but a dramatic relative decline caused by the explosive growth of the global population in the postwar era.

This demographic reality is inseparable from the political fact that there is only one Jewish-majority country in the modern world: the State of Israel, first and nobly recognized by the United States of America. Since its founding in 1948, Israel has been—and remains—the sole sovereign nation-state in which Jews constitute a majority of the population. In the modern international system, there has never been more than one Jewish-majority country. This fact stands in stark contrast to most peoples, including far smaller ethnic or national groups, who often possess multiple states or enjoy majority status across contiguous regions.

Historically, Jewish-majority polities did exist in antiquity. There were the biblical kingdoms of Israel and Judah, and later the Hasmonean kingdom. However, these entities were pre-modern, small in population, and embedded in the imperial systems that organized the world before the rise of sovereign nation-states in the modern period. They were not contemporaneous in a way that would amount to multiple Jewish countries as the term is understood today. Thus, even across millennia of history, the combination of extreme demographic minority status and political singularity remains consistent. This fact puts Jews in a perpetual state of danger. This is why the perennial problem of antisemitism must constantly be monitored and checked.

Taken together, these facts underscore the unusual position of the Jewish people, a civilization with continuous culture, identity, and religious tradition spanning thousands of years—the tradition that underpins Christianity, the faith that gave the world the Enlightenment, deontological liberalism, i.e., human rights, individualism, natural law, and a return to the species-being lost with the rise of social segmentation and submission to gods and kings—comprising a vanishingly small share of humanity, possessing exactly one majority state in a universe of 193 sovereign nations, yet seen by progressives and Muslims as a problem to be confronted. Understanding this demographic context is essential for making sense of Jewish history, modern Jewish political thought, and the disproportionate symbolic and political weight that questions surrounding Israel continue to carry in global discourse. One might suppose that those who loathe the West loathe the Jews in particular, since the Western system of law and justice is rooted in Jewish doctrine. Max Weber was right when he observed that ancient Judaism is the historical hinge between East and West.

The global Muslim population presents a demographic profile that is the mirror opposite of that of the Jewish people, both historically and in the contemporary world. Around 1939—the moment of the historical peak of the Jewish population—the number of Muslims worldwide is estimated at approximately 550–600 million, meaning Muslims constituted about 24–26 percent of humanity. Even before the mid-twentieth century, Islam was already one of the largest global religious communities, spanning vast geographic regions across Africa, the Middle East, South Asia, and Southeast Asia. Unlike the Jewish population, the Muslim population experienced no comparable demographic rupture in the twentieth century. Instead, it expanded rapidly due to high fertility rates, population growth in the Global South, and the absence of a single catastrophic event comparable to the Holocaust. It was, for the most part, spared the horrors of WWII. Today, the global Muslim population is estimated at approximately two billion people, placing Muslims at roughly the same share of the global population as they held in 1939.

However, although Muslims constituted roughly the same share of the world’s population on the eve of World War II as they do today, this continuity in percentage obscures the extraordinary growth of Islam in absolute terms. Maintaining a quarter of humanity across a period of unprecedented global population expansion—driven especially by explosive growth in Africa and Asia—means that Islam added well over a billion adherents in less than a century. Relative to Christianity, whose global share has declined as European populations aged and fertility rates fell, Islam’s growth has been sustained by higher fertility, younger age structures, and rapid population increases in the Global South, making absolute population growth the more accurate measure of Islam’s demographic expansion than percentage share alone.

The political implications of this demographic scale are striking. In contrast to Jews—who, as I noted, have exactly one Jewish-majority country—Muslims form the majority population in more than 45 sovereign states today. These range from large, populous nations such as Bangladesh, Egypt, Indonesia, Pakistan, and Turkey to smaller states in Africa, Central Asia, and the Middle East. Many nations that are now Muslim-majority were once majority Christian. Crucially, Islam is not merely a global religion; it is a civilizational system embedded in legal traditions and political cultures. These traditions and cultures are intrinsically totalitarian. Muslims are hardly the vulnerable minority Westerners are told they are; Islam is an aggressive political project with two billion adherents. Islam is on the march. Jihad is here.

One need not speculate about the future. Shakespeare told us in The Tempest, “What’s past is prologue.” The Islamic political project is a very old one with a clear record of aggression. Muslim-majority polities have existed continuously since the seventh century, beginning with the early caliphates and later empires such as the Abbasid, Mughal, Ottoman, Safavid, and Umayyad empires. While these were pre-modern entities, they governed immense populations and territories and were often dominant global powers. Unlike the Jewish case, Muslim political sovereignty has been expansive, overlapping, and enduring across centuries, even as specific empires rose and fell. One may appreciate Islam’s tenacity; one must also appreciate why this tenacity represents an existential threat to freedom and reason.

Taken together, these facts and this history highlight a profound demographic and political asymmetry. When Jews, at their historical peak, constituted less than one percent of humanity, Muslims already represented roughly a quarter of the world’s population. Today, that proportional share remains largely intact, with Muslims comprising nearly one in four human beings on Earth and holding majorities in dozens of the planet’s sovereign states. This contrast is not a neutral demographic reality, nor should it be rationalized away through appeals to religious liberty; the contrast is essential to consider for any serious comparative discussion of politics and power in the modern world.

In 1939, Europe was overwhelmingly Christian and religiously homogeneous, with Islam present only in marginal numbers, largely confined to small communities in southeastern Europe (such as Albania, Bosnia, and parts of the Balkans) and colonial-era diplomats, sailors, and students in major cities. Muslims made up well under one percent of Europe’s population then, and Islam was not perceived as a permanent or growing presence within European societies. Europe at that time was young demographically, with relatively high fertility rates, and it was a net exporter of people through emigration rather than a destination for large-scale immigration.

Today, Europe’s demographic landscape has changed profoundly. Islam has become the continent’s fastest-growing religion, driven by post–World War II labor migration, refugee flows, and higher fertility rates among Muslim populations compared to native European populations. Muslims now constitute roughly 5–8 percent of Europe’s population overall, with much higher concentrations in countries such as Belgium, France, Germany, the Netherlands, the United Kingdom, and Sweden, especially in the cities. At the same time, Europe’s indigenous populations have experienced aging, low fertility, and, in many cases, population decline, while Muslim communities remain significantly younger on average. As a result, Islam has shifted from a marginal presence in 1939 to a visible and structurally embedded component of European society today, reshaping the continent’s religious, cultural, and demographic future in ways that have no historical precedent in modern European history.

Why are young people in the West pathologically obsessed with the State of Israel, wishing to see an indigenous people driven out of its ancestral homeland, while embracing the Islamization of the West? Why are social democrats in Europe and progressives in America defending the Muslim takeover of Western cities and communities? Why are Christianity and cultural integrity among European populations seen as bigoted and racist, while Islam and the refusal of Muslims to assimilate to Western culture are celebrated? Why would gay advocates embrace an ideology that kills or otherwise marginalizes homosexuals? Why would feminists embrace an ideology that subjugates women and reduces them to second-class citizenship? Why would Democrats import millions of Muslims into the United States and establish religious enclaves for them at taxpayer expense?

Why aren’t Christians rising against the Islamization of their homelands as they did centuries ago? As I noted in my recent essay, Trump and the Battle for Western Civilization, it was Christians, including militant monks, who repelled with violence the Muslim barbarians, drove them from Europe, and secured the future for Christianity. Had they not acted when they did, I argued, there would be no Europe. No Europe, no America. No Enlightenment. No human rights. Only clerical fascism. If Christians fail to act now, there will be no Europe, no America, no Enlightenment, no human rights—only clerical fascism. Our civilization will be destroyed and our history erased. Don’t feel relieved by the unrest in Iran today. Islam has experienced unrest before. Islam can only be contained by men of the West.

Where are those men? They’ve been emasculated. Western nation-states have been corrupted by leftwing cultural self-loathing organized by corporate state power seeking a new world order in which the working people of the world are to be managed on high-tech neo-feudalist estates. The barbarians are inside the gates of our cities, and those who let them in and keep them here are our own citizens. Civilizational destruction is wrapped in the language of empathy and humanitarianism. As I explain in my most recent essays—The Problem of Empathy and the Pathology of “Be Kind”; Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights; Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument—this is a fake human rights rhetoric, one that conceals the work of instrumental reason sans deontological commitments in the service of transnational power. Those who govern the West have abandoned the democratic-republican tradition and classical liberal values for a reason.

For more of my writings on the Islamic problem, see: Is the Red-Green Alliance Ideologically Coherent?; The Decivilization Process: The Islamization of Western Societies; The Law of Allah is Coming for Your Freedom; Woke Progressivism and the Party of God; Corporatism and Islam: The Twin Towers of Totalitarianism; Whose Time Has Come?; The Islamophilia Problem; Immigration, Colonialization, and the Struggle to Save the West; “Free, Free Palestine!”; Antisemitism Drives Anti-Israel Sentiment; Revisiting the Paradox of Tolerating Intolerance—The Occasion: The Election of Zohran Mamdani; Defensive Intolerance: Confronting the Existential Threat of Enlightenment’s Antithesis; What Islamization Looks Like; The Islamization Project on US College Campuses; Selective Condemnation of Cultural Integrity: The Asymmetry of Anti-Colonial Thought; Indigenous English Rise Against Modern-Day Colonialism

Not a happy note to end on, I know. But if we are to make 2026 a happier year, then we need to know the lay of the land. Knowledge is power, but only if it is mutually possessed. For my leftwing comrades, know that you cannot hope for democratic socialism when the future world won’t even have the capitalist tools to work with, but will instead suffer elite control via corporatist arrangements—managed democracy and inverted totalitarianism (to borrow Sheldon Wolin’s terms). We already live in a world where these controls affect our daily lives. What open eyes can see coming is already substantially present. See what you see.

I appreciate your patience regarding the limitations of WordPress, particularly the table of contents function. Due to the sheer volume of essays, I have been unable to update the table of contents since my September 15 essay, The Fool Has Come Down Off The Hill. But Who Called on Antifa to Terrorize the Village? That essay followed up on Charlie Kirk’s Killer is in Custody and the Specter of Antifa, which was picked up by Real Clear Politics, and became the most viewed essay in the history of this platform. This exposure elevated my profile and is the primary reason this year has been so successful. WordPress has explained that the table of contents cannot accommodate the sheer number of essays. While I have removed most pre-2018 titles from the contents, the problem persists. This is not a criticism of WordPress, just an explanation to readers why this is the case.

In light of these limitations, I am grateful to my readers for visiting, reading, and sharing the platform. As 2025 draws to a close, it has been a momentous year—one I have chronicled extensively. In 2026, I plan to continue producing content, including the revival of the FAR Podcast. I discontinued the podcast on YouTube years ago due to severe deboosting (I did not want to lose the platform, as I used it for online teaching). However, with Rumble gaining traction and YouTube relaxing its rules, I am preparing to relaunch. The studio is under construction, and you will be the first to know when it goes live. Approaching retirement, I do not intend to stop teaching and writing. Readers can expect a series of essays and podcasts drawing on my knowledge of sociology, anthropology, and psychology, offering polemical lectures on topics ranging from crime and punishment to the corrupting influence of postmodernist ideology. I’m just getting started!

I would like to close with a request—not for money; as a salaried state employee, I feel obligated to do the people’s work. Instead, I want to encourage you to share this platform with others. Despite its growth, Freedom and Reason remains a relatively low-traffic platform. If you find these ideas valuable, it is likely others will, too. So let family and friends know about my work.

Happy New Year, everybody!

Image by Sora

Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument

In my previous essay, Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights, I argued that deontological liberalism—the secular moral foundation of the American Republic—draws heavily on principles rooted in Christian ethics, yet remains fully intelligible and defensible without religious belief. Against contemporary tendencies to reduce morality and politics to ideology, preference, or utility, I claim that any society committed to human dignity, individual rights, and the rule of law requires a reflective epistemic foundation in which moral truths can be shown to exist independently of human opinion—i.e., a stance-independent foundation. In that essay, I cited YouTube debater Andrew Wilson as having inspired the essay. I do not agree with Wilson’s argument that Christian ethics necessarily require divine command, but I will take that up in a future essay in which I will present my moral argument, which rests on natural law.

However, I argued that progressives, whose politics many Americans know simply as liberalism (failing to distinguish the tendencies), are not liberal in the deontological sense but are instead utilitarians, for whom ends justify means, however immoral those means may be. Progressives dress their moral impoverishment in the language of “empathy,” an early twentieth-century term derived from the German Einfühlung, a matter I wrote about in a recent essay, The Problem of Empathy and the Pathology of “Be Kind.”

Anticipating that future essay (which will have to wait until the new year), I concluded my last essay by demonstrating that atheists and humanists can coherently operate within this framework. In that case, their moral reasoning—particularly in opposition to authoritarianism and in defense of human dignity—would exemplify a secular form of deontological liberalism grounded in the universal moral insights of Christian ethical thought, especially the inviolability of the individual and the moral limits of political power.

In today’s essay, I explain how progressives claim the moral high ground despite having no certain epistemic foundation for organizing a moral ontology. Readers may have noticed a widespread perception that progressives are the moral ones, perhaps excessively so, whereas classical liberals and conservatives lack empathy for the downtrodden and marginalized (migrants, trans kids, etc.). However, its moral relativism in particular exposes progressivism as morally impoverished, since there is no deontological basis in this view for appealing to rights. Moral relativism is the view that what is morally right or wrong depends on cultural, personal, or social contexts rather than on universal moral principles. This renders human rights impossible.

Historically, moral claims in the West are grounded in a deontological framework. On secular grounds, these are constitutionalism, natural law, and rights understood as pre-political constraints. Here, moral disagreement takes the form of argument: Are these duties real? Are these rights correctly specified? Are the means legitimate regardless of ends? Even when people disagree sharply, there is at least a shared expectation that one justifies moral claims by appealing to principles that are binding for everyone, including oneself. By rejecting the republic’s foundational deontological framework, progressivism represents an authoritarian tendency in American politics and in the West generally.

Progressive moral discourse (such as it is) breaks with the American tradition. Its authority does not rest on fixed moral precepts or universal duties, but on outcomes, e.g., the reduction of harm to designated vulnerable groups, selectively chosen to advance ideological and political goals. Again, this is a form of utilitarianism, but one filtered through sociology (yes, my discipline—and not just in its warped form—has played a central role in the corruption of moral understanding) rather than philosophy: moral weight is assigned by group status, historical grievance, and measured disparities.

Crucially, because the metric used in the progressive standpoint is harm reduction and promotion of happiness (as progressives define it) rather than principle, disagreement over means is treated as evidence of moral defect. If an argument is said to “cause harm,” then the arguer is not merely wrong but immoral. He is a “bad actor.” That is why disagreement is moralized (as a rhetorical or strategic act) and personalized in the progressive worldview, rather than addressed substantively.

This shift explains the prominence of moral labeling. Terms like “bigoted,” “Islamophobic,” “nativist,” “racist,” “transphobic,” and “xenophobic” function less as descriptive claims (defined by progressives in any case) than as status judgments, marking someone as standing outside the moral community. Once a person is assigned that status, their arguments no longer require engagement. The targeted man is effectively erased as a citizen with the right to speak his mind and engage in the political process; there is no need to engage with him. It’s an easy jump from here to perpetrating violence against him. The cases of progressive violence against conservatives are mounting.

This is not accidental; it is a feature of a moral framework that lacks deontological limits. If there are no inviolable duties, then exclusion and violence become legitimate moral tools. Moral high ground is asserted not by coherence or consistency, nor by reference to an actual moral epistemic, but by alignment with the approved moral narrative. It is only nominally moral. Arguably, there is no amoral stance among humans, since to act outside a moral order is itself to engage in immoral behavior. Philosophers like Aristotle, Kant, and many virtue ethicists agree that choosing to stand outside a moral order is itself a moral choice, and thus open to moral judgment. Progressives cannot rationally escape the dilemma.

The tactical irony is that those who do operate from an epistemic moral foundation—constitutional restraints, natural rights, rule-based ethics—are especially vulnerable to this tactic. The deontological framework I have outlined requires toleration of disagreement and restraint in judgment (this framework provides the rules for Jürgen Habermas’s ideal speech situation, elaborated in his 1981 The Theory of Communicative Action); it prohibits treating opponents as morally illegitimate merely for disagreement or dissent. Utilitarian-progressive frameworks, by contrast, have no such internal brake. If the end is moralized strongly enough, almost any rhetorical or social means become justified.

The question today’s essay addresses is how progressives came to be seen as holding the moral high ground. The short answer is that this has occurred largely because of the collapse of shared metaphysical commitments. As classical liberal moral philosophy, as well as natural law and religion, lost cultural authority, the language of moral legitimacy migrated from principles to identities. Claiming to stand with the “disadvantaged,” “downtrodden,” “migrant,” “oppressed,” and “victims” became a surrogate for moral justification itself. In this environment, to question the framework is not seen as philosophical dissent but as moral betrayal.

The longer answer will come in another future essay in the new year. Readers won’t have long to wait. It will suffice to say for now that the asymmetry I’m describing is real, and I wanted to cap off the year with this observation. I’m confident most readers recognize this reality. It is not that all progressives lack a moral framework altogether; rather, it’s that their framework treats disagreement as a moral failure and treats labeling as a sufficient moral rebuttal. Those committed to deontological ethics appear to be in the weaker moral position, not because their foundations are thinner (quite the contrary), but because they refuse to abandon reasoned argument for moral denunciation. Ironically, that restraint—once the hallmark of moral seriousness—is now portrayed as guilt.

The dilemma, then, is that those who operate from a deontological framework, incorporating charity, compassion, sympathy, and tolerance, confront those who, having no moral foundation, advance morally illegitimate positions. At some point, those who work from deontological commitments are going to have to assert their epistemic authority over those operating without one and insist that, if anyone stands beyond the pale, it is the person claiming the moral high ground without a coherent moral epistemic.

Image by Sora

Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights

This essay argues that deontological liberalism, the ethical foundation of the American Republic, rests on principles derived from Christian ethics, yet it can be coherently embraced without religious commitment. While contemporary debates often treat morality and politics as matters of ideological allegiance, preference, or utility, I contend that a reflective epistemic foundation—one in which moral truths exist independently of human opinion—is essential for any society that seeks to protect human dignity, individual rights, and the rule of law. I conclude by showing that committed atheists and humanists can operate from this ethical framework. Their moral reasoning, particularly in resisting authoritarianism and defending human dignity, illustrates a secular deontological liberalism grounded in the universal moral insights of Christian ethical thought, which prioritizes the inviolability of the individual and the limits of political power. Put another way, one need be neither conservative nor Christian to embrace a valid moral ontology.

A little more than a year ago, on December 23, 2024, I published an essay, Rise of the Domestic Clerical Fascist and the Specter of Christian Nationalism, in which I argued that one of the rights government is compelled to defend is religious liberty, which necessarily requires freedom from religion as well as freedom of religion, since a person cannot be free to practice their faith (or no faith at all) if they are not free from the demands of the faith of others. This is why, I argued, Islam is incompatible with freedom: Muslims believe juridical and political authority comes from Allah and must be administered by religious clerics. I warned that Christian nationalism risks the same problem, and that the United States must remain a secular republic tolerant of the rights of believers and disbelievers alike.

America is founded on an entirely different premise than that of the Islamic clerisy. So central is secularism to the US Republic that the Constitution explicitly states that no officeholder can be required to swear allegiance to any god (John Quincy Adams took the presidential oath of office on a book of secular law). Article VI states that “no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States.” The First Amendment guarantees citizens freedom of conscience. The Constitution is the supreme law of the country. It is a secular constitution for a secular nation.

At the same time, the deontological liberalism that underpins the logic of the American Republic emerges from the ethical system that grew out of Christian civilization, especially following the dissolution of Catholic hegemony with the Protestant Reformation, which allowed Christianity to arrive in its fully developed form as a doctrine of individualism and human dignity. From this moment, the Enlightenment was born, and the American Republic became a possibility. The patriots, most of them Protestants, who threw off the rule of the English monarchy and established what is now the world’s oldest constitutional republic, seized that moment in history. We owe them a profound debt of gratitude for their bravery and wisdom.

Although I am critical of Christian nationalism, it is a fact that America’s founding was a product of a secularized form of Christian ethics, in which moral ideas shaped by Christianity were translated into political principles without requiring theological assent. The founders drew on Christian moral assumptions—the inherent dignity and moral equality of persons, the duty to restrain power, the importance of conscience, and concern for justice—while grounding them in natural law, reason, and self-evident truths rather than explicit revelation. Concepts such as human rights, limited government, and the rule of law reflect Christian ethical roots reframed through Enlightenment thought. Here, sin is reinterpreted as human fallibility requiring checks and balances, and love of neighbor is expressed institutionally through ordered liberty and the protection of individual rights. Thus, America’s founding embodies a secular Christian ethics: morally indebted to Christianity, but politically articulated in universal, non-sectarian terms.

In this essay, I present the epistemic foundation that has guided my thinking throughout my life, admitting that at times I fell under the spell of progressive ideas antithetical to that foundation. I am moved to write this essay because of Andrew Wilson. Wilson is an American commentator and host of The Crucible, a long-form debate-style podcast and livestream focusing on culture, gender, politics, and religion. He debates others far and wide and often asks his interlocutors to detail the epistemic foundation upon which they erect their arguments.

In a recent interview with Patrick Bet-David (the PBD Podcast), Wilson contends that most people do not know why they believe what they do. Instead, they repeat talking points provided for them by their tribe. As such, their arguments do not stem from an epistemic that organizes ontological truth. They routinely fail to establish a stance-independent foundation, where truth and validity do not depend on attitudes, beliefs, perspectives, or preferences, but rather systematically determine these. What is required for reason, he argues, is an epistemic that holds regardless of what anyone thinks about it. In other words, individuals do not have their “own truths.” There is a truth, and we all live in that truth regardless of ideology or politics—whether we know it or not.

One of those truths is the fallibility of man. This applies to me as much as anybody else. It is because I recognize my own fallibility that I have changed my opinions over time—and it was in moments when I allowed the tribe to determine those opinions for me that I strayed from the epistemic that guided me from early childhood.

For example, there was a period in my life when I accepted the premise that communism was, in principle, a good thing, since capitalism is an exploitative system (exploiting man and nature). I defended things that communists did that improved the lives of people. To be sure, the Soviet Union accomplished some amazing things, which I document in publications over the years (see, e.g., my 2003 article The Soviet Union: State Capitalist or Siege Socialist? published in Nature, Society, and Thought). However, in defending communism, I had to play up the accomplishments and downplay the terrible things the communists did to achieve those advances, obscuring the fact that the accomplishments came at the cost of tens of millions of lives, and that similar achievements were possible without communism—indeed, greater achievements than communism could muster under the best conditions.

I was awakened to this by a deep dive into the work of George Orwell, whom I have featured in several essays on this platform. I learned from Orwell’s biographer, Christopher Hitchens (who also wrote biographies of Thomas Jefferson and Thomas Paine), that Orwell was often asked why he did not dwell on the problems of fascism, instead focusing his high-powered perception on the tragedy of communism. Of course, Orwell did have things to say about fascism (he took a bullet during the Spanish Civil War). But he focused on the horrors of communism. Why? Because, Hitchens explains, Orwell was surrounded by intelligent people—academics and scholars—who could, on the one hand, see the horrors of fascism, yet, on the other, ignore them in communism. Orwell could see people rationalize the double standard. Readers of this platform have likely heard others say, “Communism has never been executed properly, but the ideas are good and worthy of consideration,” never stopping to consider that faithfully following those ideas to their logical conclusion is what led to atrocities they themselves, reluctantly and dismissively, admit.

When I returned to the liberal foundations of my thinking after having been pulled into orbit around leftist ideology during graduate school and my early years as a college professor, I re-examined my beliefs and found that I was inconsistent. I realized that if I did not work from principle every time—judging every event, trend, and thought in terms of those principles—I would reach bad conclusions. I was working from a double standard. I knew double standards were irrational, but I had allowed myself to work from them nonetheless.

For example, I fell under the mistaken belief that only white people could be racist, in the sense that, since whites controlled society, their ideas of racial hierarchy had an effect, whereas the ideas of racial hierarchy among black racialists, in their powerlessness, could not. To borrow the language of philosopher of science Imre Lakatos, I erected around me a protective belt (a system of positive heuristics) to defend the hard core (the negative heuristic) of my research program. Under self-interrogation, I realized that I was committing the fallacy of misplaced concreteness, treating abstractions as if they were real things, which pushed my liberal commitments to individualism to the margins. I had to bracket enlightened thinking to sustain an ideological worldview that had no rational grounding. I was an atheist working from heavenly and idealist ideas rather than earthly materialist ones.

It was in self-interrogation that I came to understand that liberal Enlightenment carries an epistemic foundation, and that foundation lies in Christian ethics. Andrew Wilson’s observations put what I have been working on over the last several years into perspective. Even though I do not subscribe to Christian theology, I recognized that the ethics emerging from the Reformation—the recognition, within a religious tradition of individualism, of the objective reality of human existence, which includes the anthropological and sociological truth that we are social beings who must live collectively—demonstrate the validity of limited government.

This religious tradition forms the basis of republican government, and the liberalism that gives rise to it is not utilitarian (a position Wilson criticizes in a recent seven-hour debate with Mark Reid pitting Christianity against secular humanism), in which ends reduce means to amoral and instrumental actions, but deontological, in which the means must have moral justification. Indeed, some ends are not to be achieved because there are no ethical means to achieve them. This system makes civil and human rights possible—and necessary. In a real sense, the means are ends in themselves.

Restraint of government is deontological in the sense that it imposes moral limits on permissible means, regardless of how desirable the ends may be. In the American founding tradition, government may not violate certain rights (life, liberty, conscience, due process) even to achieve desired outcomes such as prosperity, security, or substantive equality. These limits function as moral prohibitions, not merely prudential calculations. It is also liberal because it centers the moral status of the individual over collective goals, treating persons as ends in themselves rather than instruments of state purposes.

This reflects a secularized Christian moral inheritance: the Christian idea of the inviolable person translated into the Enlightenment language of natural rights. Crucially, restraint of government is not only about means to an end, but also about what ends are morally admissible at all. Some ends—such as coerced virtue or enforced moral perfection—are ruled out in principle. Thus, American liberalism embeds a deontological ethics that governs both how government may act and what it may rightly aim to do.

Both forms of liberalism existed at the time of America’s founding. Thomas Jefferson, the primary author of the Declaration of Independence, and Jeremy Bentham, a proponent of utilitarianism, both liberals, were acquainted and mutually respectful, but they represent two different moral foundations for organizing the Western world. They thus usefully serve as personifications of the two positions, both of which continue to shape governance and lawmaking in Western democratic societies.

Jefferson’s liberalism is essentially deontological and rights-based, grounded in natural law and the moral inviolability of the person. Rights exist before government and place firm limits on what the state may do, regardless of consequences. This aligns with the American founders’ emphasis on inherent rights, restraint of power, and constitutional limits. Bentham, by contrast, rejects natural rights as “nonsense upon stilts” (see his critique of the Declaration of the Rights of Man and of the Citizen in his essay Anarchical Fallacies, c. 1796). He argues that the legitimacy of laws and institutions depends entirely on their tendency to maximize overall happiness. In Bentham’s framework, rights are not moral constraints but useful constructs—rules justified only insofar as they produce good outcomes. This allows, in principle, for rights to be overridden if doing so increases aggregate utility. Jefferson famously argues for happiness as well, but he does so within the framework of natural rights.

The split matters because it produces two distinct liberal traditions: an American constitutional liberalism focused on limits and rights, and a British reformist liberalism more comfortable with technocratic governance and policy experimentation. Jefferson and Bentham illustrate how liberalism can agree on freedom as a goal while sharply disagreeing on the moral rules that govern how freedom may be pursued.

In this essay, I explore the epistemic foundation that underpins the American Republic, namely, Christian ethics, and praise the founders for separating those ethics from the theology that gave rise to them. The danger of Christian nationalism is that it seeks to rejoin Christian ethics and theology to re-Christianize the country. I argue that this is not in keeping with the founders’ vision for America. Moreover, as I have suggested in several essays on Freedom and Reason, utilitarianism inheres in the logic of progressivism, which is an expression of corporate statism, where instrumental reason trumps republican virtue, leading to a decadent society and civilizational decay. While America is not a Christian nation, I have come around to the position that America needs a Christian majority to uphold republican virtue (which is one of the reasons I am highly critical of mass immigration from Muslim-majority countries).

Contemporary moral and political disagreements often appear to concern particular policies or ethical conclusions. Yet beneath these surface disputes lies a deeper conflict—one not primarily about what we believe, but about how we claim to know what we believe, and what ultimately justifies those claims. This is the sense in which Christian apologists like Wilson argue that most people “do not work from an epistemic standpoint.” What Wilson means is not simply that people lack information, but that they lack a reflective account of the foundations of their knowledge, especially moral knowledge. They can repeat conclusions but cannot explain why those conclusions should bind anyone, including themselves. This dispute becomes especially clear when comparing different traditions within liberal political thought, particularly deontological liberalism and utilitarianism, and when asking how liberal societies ground claims about dignity, justice, and rights.

In this context, epistemic refers to a theory of knowledge: an account of how beliefs are justified, what makes them true or false, and what ultimately grounds their authority. To “work from an epistemic” is to be able to answer questions such as: What counts as knowledge? Why should reason be trusted? Why do logic, morality, and truth have binding force? What distinguishes objective moral claims from mere preference or social convention?

Christian ethicists often argue that many modern moral and political claims are epistemically shallow. People assert moral conclusions without being able to explain why those claims are objectively valid rather than contingent on consensus, power, or utility. The critique is not that such people are necessarily insincere (although many are), but that they rely on unexamined assumptions inherited from culture, education, media, or party rather than from a coherent epistemological framework. This is why debates about ethics often collapse into assertion or outrage: the disagreement is not merely moral, but epistemic. The parties lack shared criteria for justification.

Deontological liberalism begins from axioms or postulates about human beings and moral reality. It holds that individuals possess intrinsic worth and therefore certain rights that are not contingent on outcomes, preferences, or social approval. These rights exist before and independent of the state, and the legitimacy of law depends on its conformity to them. Historically, this tradition draws on natural law, natural rights theory, and Enlightenment moral realism.

The American Declaration of Independence is the canonical expression of this view. When it appeals to “the laws of nature and of nature’s God” and declares certain rights “unalienable,” it asserts that moral truths exist objectively, that human reason can apprehend them, and that political authority is constrained by them. Rights are not created by law; they are recognized by it. They are good because they are true. (See Denying Natural Rights at the Heart of Authoritarian Desire.)

Utilitarianism, by contrast, grounds morality in consequences. What is right is what maximizes happiness, preference satisfaction, and well-being. In a world where such things are defined by powerful corporations and their functionaries and propagandists, who determines these desired outcomes is a central question. We saw this during the COVID-19 pandemic, where the supposed well-being of the population required the coercive suppression of fundamental civil and human rights.

In the utilitarian view, moral rules are instrumental rather than intrinsic, and rights are justified insofar as they promote desirable outcomes. This framework is superficially attractive because it appears empirical, flexible, and secular. Yet it weakens the claim that any right is inviolable. If moral validity depends on outcomes, then rights may be revised, overridden, or redefined when doing so seems to improve aggregate welfare.

We saw this in Virginia Senator Tim Kaine’s condemnation of natural rights, arguing instead that rights come from government (see Tim Kaine and the Enemies of Liberty and Rights; Natural Rights, Government, and the Foundations of Human Freedom). Thus, utilitarian liberalism introduces a form of relativism—not that “anything goes,” but that nothing is ultimately fixed. Moral claims lack permanence because they lack grounding in a reality independent of human calculation.

At bottom, this is not a moral disagreement but a dispute about knowledge and reality. Indeed, only one side is moral, and it is not the utilitarian side. This is why secular humanism, working from a utilitarian standpoint, cannot validly claim to work from morality; to claim that rights are unalienable is to assert that they exist, that they are knowable, and that their authority does not depend on human agreement. That requires both an epistemology (how we know moral truths) and an ontology (what kind of things moral truths are).

Utilitarianism, what we recognize today as progressivism, eschews virtue for instrumental reason. This is how we find ourselves in a world where children are drugged and mutilated in what is marketed as “gender-affirming care,” because they seek the remedy for their dissatisfaction with their bodies in trans joy, a happiness that requires the manufacture of simulated sexual identities—from which the medical industry profits handsomely. The Nuremberg Code, which rests on deontological commitments, is easily suspended when human rights give way to instrumental reason shorn of ethical demands.

Deontological liberalism, therefore, necessarily carries metaphysical commitments. It presupposes a moral order that constrains political power and human will. Utilitarianism, by contrast, minimizes ontological commitments, treating moral knowledge as empirical, pragmatic, or provisional, which subjects it to ideological corruption and political manipulation.

This difference explains why Christian apologists argue that modern secular moral discourse “borrows” moral conclusions while denying the metaphysical foundations that once supported them. Yet it does not follow that moral realism requires commitment to Christian theology. A coherent alternative exists—one deeply rooted in Enlightenment thought and visible in the American founding itself: grounding moral law in nature or natural history rather than in a personal divine lawgiver.

I have already said as much in getting to this point, but it bears elaborating. In deontological liberalism, nature is not morally neutral chaos, but rather an ordered reality governed by universal and intelligible laws. These include not only physical laws, but emergent biological realities, psychological capacities, and the social structures that shape them. Human beings are a certain kind of mammal, with characteristic capacities, needs, and vulnerabilities. From these facts arise norms—not invented arbitrarily but discovered through reflection on what human flourishing requires (I endeavored to explain these in my recent essay praising Samantha Fulnecky’s essay, moving her argument concerning gender roles from theological grounds to the foundation of natural history).

For the deontological liberal, language, reason, and social practices do not create moral law; they allow us to articulate and apply it (which is why the reclamation of accurate and precise language is so imperative). Moral truths are self-evident not because they are obvious in a trivial sense, but because they are accessible to rational agents without appeal to revelation or authority. In Enlightenment usage, “self-evident” means epistemically basic: known directly through reason and observation of the world, rather than inferred from theology (Fulnecky’s error, even if I praise her for providing an epistemic foundation).

This is precisely how Jefferson frames his argument in the Declaration. The Declaration does not present its claims as speculative metaphysics or sectarian doctrine, but as truths available to any rational person. Rights are grounded in human nature itself, not in the decrees of a church or the will of a ruler.

This position reflects a deliberate distinction between metaphysics and theology. The American founders retained key elements of Christian-influenced moral metaphysics—intrinsic human dignity, limits on political authority, objective moral order—while bracketing revealed theology. They rejected the necessity of Christological doctrine, divine revelation, and ecclesiastical authority as sources of political legitimacy. This produced a form of secularism that was not relativist or value-neutral; rather, it was natural-law secularism: a framework that allowed moral objectivity without theological coercion.

Enlightenment values did indeed emerge from Christian civilization, particularly through the Reformation and its emphasis on conscience and moral agency. But acknowledging that historical genealogy does not require accepting theological premises as politically binding. This distinction was essential for pluralism. A republic intended to include citizens with diverse religious commitments—or none at all—could not ground rights in contested theology. By locating moral authority in nature and reason, the founders created a framework in which equal rights did not depend on shared beliefs about God.

Christian critics rightly observe that this framework inherits much from Christian moral thought, and they argue that nature alone cannot generate normativity—that descriptive facts cannot produce binding “oughts.” Whether that critique succeeds remains a live philosophical question. But it is a mistake to assume that secular natural law is incoherent or merely parasitic; it represents a serious attempt to preserve moral objectivity, political legitimacy, and pluralism simultaneously.

The deeper issue, then, is not whether liberal societies can function without theology, but whether they can retain deontological commitments without drifting into Christian nationalism or utilitarian revisionism. When rights become subject to Christian (or any other) theology, society risks sliding into clericalism—or rule by the religious judge—a form of totalitarianism. At the same time, when rights are no longer understood as truths about human beings but as instruments for producing outcomes, their authority passes to whoever defines the outcomes, and rule becomes authoritarian. Under utilitarianism, democracy is subordinated to technocracy. This is why I reject progressivism.

What this debate ultimately reveals is that political disagreement is inseparable from epistemology. To argue about justice, liberty, or rights without asking how moral knowledge is grounded is to argue at the level of conclusions while ignoring foundations. Deontological liberalism, whether articulated naturalistically or theistically (as an atheist, I obviously prefer—indeed, insist on—the former), entails an explicit epistemic and ontological commitment: that moral truths exist independently of power and preference, and that reason can apprehend them. That commitment is not a relic of theology but a prerequisite for any liberal order that wishes to treat rights as more than temporary conveniences.

Secular humanism need not be utilitarian. Indeed, if it is not to devolve into progressivism or the horrors of communism, it must reject utilitarianism in favor of deontological liberalism. Moreover, any democratic socialism worthy of consideration must be grounded on the same ethical basis. Orwell exemplified this approach: he opposed authoritarian and totalitarian systems, yet remained a democratic socialist throughout his life. His standpoint exemplifies deontological liberalism rooted in Christian ethics.

In recounting a sketch by Italian writer Ignazio Silone, Irving Howe, in an essay in the Fall 1981 edition of Dissent (“On the Moral Basis of Socialism”), leverages the power of Silone’s anti-totalitarian commitments (see also Silone’s “The Choice of Comrades,” published in the Winter 1955 issue of Dissent). As a boy, Silone watched a ragged man being taken away by the police and laughed. His father rebuked him: one should never laugh at a man being taken by the police. When the child asked why, the father offered two reasons. First, one does not know what the man has done—an intuition that anticipates the liberal principle of restraint and the presumption against easy moral certainty: innocent until proven guilty. Second, and more importantly, the man is unhappy.

At first glance, this second reason might sound utilitarian, as though the wrongness of mockery lies in the fact that it increases suffering. But Silone was not a utilitarian thinker. Although he began his political life within communism, he broke decisively with any doctrine that justified cruelty, humiliation, or repression in the name of the collective good or historical necessity. His mature moral vision was grounded in human dignity and the conviction that political action does not suspend ordinary moral obligations. The unhappiness of the man being arrested is not morally salient because it adds to some aggregate of pain, but because it marks a moment of extreme vulnerability. When the state exercises coercive power over an individual, that individual’s dignity does not disappear, even if he is guilty of a crime. To laugh at such a person is not merely unkind; it is a failure to recognize the moral limits that ought to govern both citizens and institutions.

Seen this way, Silone’s anecdote aligns naturally with a deontological liberal tradition rather than a utilitarian one. The prohibition against mockery does not depend on calculations or outcomes. It rests on sympathy for the person as a person, even when one condemns the act that may have led to his arrest. This distinction matters. A utilitarian framework can justify punishment, suffering, and even humiliation if they serve a greater social good. A deontological liberal framework, by contrast, insists that certain forms of treatment are wrong regardless of their utility, because they erode the moral foundations of individualism and human dignity.

Silone’s story is not about sentimental pity or the efficient reduction of suffering; it is about the kind of moral character a free society requires. If citizens lose the capacity for compassion toward those at the mercy of state power—even those who may deserve punishment—then the moral restraint necessary for republican virtue dissolves. Silone’s lesson, properly understood, is not a utilitarian appeal to minimize unhappiness, but a liberal warning: once we permit ourselves to laugh at the humiliated, we have already begun to undermine the ethical conditions that make a free and democratic society possible. Crucially, the epistemic foundation of Silone’s sentiment is rooted in Christian ethics.

Howe himself was a lifelong advocate of democratic socialism, co-founding Dissent magazine in 1954 to provide a platform for anti-Stalinist leftist thought that combined a commitment to social justice with a critique of authoritarianism. Over his life, he watched many comrades (e.g., Irving Kristol) drift into neoconservatism. Howe highlights Silone’s childhood anecdote of witnessing a ragged man being taken by the police and learning a moral lesson about empathy and justice to illustrate the ethical foundation he believed should underlie socialist politics.

In his advocacy, Howe consistently emphasized the importance of humanistic values, individual responsibility, and moral conscience within socialism, distinguishing his socialism from both Stalinism and unprincipled leftist radicalism (which is now rampant in the West). For this reason, Howe admired Orwell, particularly for Orwell’s clear-eyed critique of totalitarianism and moral seriousness; he saw Orwell as a kindred spirit in defending democratic principles against the abuses of power, aligning well with Howe’s vision of an ethical, human-centered socialism.

In concluding this essay, I want to make it clear that I am neither a Christian nor a conservative (see Am I Right-Wing? Not Even Close). One need not be either to ground moral claims in an epistemic framework fashioned by Christian ethics. The American founders demonstrate that moral truths derived from Christian-informed conceptions of human dignity and conscience can be translated into secular, universal terms, producing a liberal framework that protects rights independently of theological belief.

Am I a democratic socialist? I have written essays over the last few years distancing myself from socialism (see Why I am not a socialist; Marxist but not Socialist, although I am no longer Marxist politically). However, like Orwell, I am sympathetic to democratic socialist ideals, and today’s society could benefit from considering them. At the same time, I know which party will take them up, because they already use the language, and I don’t trust that party with power.

Whatever I think of socialism today, Silone and Howe—both atheists, humanists, and democratic socialists—illustrate that commitment to human dignity, moral responsibility, and opposition to authoritarianism can fully operate within these frameworks. Their reasoning shows that deontological liberalism can be defended based on human nature and moral order, not religious authority, allowing secular, pluralistic societies to uphold ethical and political principles that ultimately stem from a Christian moral heritage.

Conservatives and democratic socialists alike, eschewing utilitarianism, rest their moral arguments upon the epistemic foundation of Christian ethics.

Image by Sora

Fulnecky’s Argument Through the Lens of Anthropology and Sociology

The more I think about Samantha Fulnecky, the University of Oklahoma student who received a “0” out of 25 points from psychology TA Mel Curth, a trans-identifying male, on an assignment reflecting on the policing of gender norms among middle schoolers, the more I’m impressed with her. Fulnecky’s essay wasn’t just undeserving of the grade it received; it was actually rather good, her writing typical of a college junior, and she deserved at least a passing grade. Indeed, the only problems I can identify in the paper are formatting and punctuation errors. The substance of the essay is creative, insightful, and provocative. Damning assessments of her work on social media (and this embarrassing letter by the Freedom From Religion Foundation) illustrate the problem of motivated reasoning on the progressive left.

Samantha Fulnecky

To her credit, Fulnecky did something few students do: she revealed the epistemic foundation upon which her normative argument rests. She needed to do this not only for reasons I will discuss, but also because the TA imposed a morality on his students, one he was not making explicit. In Fulnecky’s case, it was clear enough, and she took it on.

Social science as students once knew it (and as I still know it) could have been used to make the same argument without appealing to religious doctrine. Lacking access to that tradition, Fulnecky found her insight in a text resistant to the corruption of queer doctrine, namely the Bible. Whatever its problems, the Bible gets the matter of gender right (there are two, and they are fixed), and since it is one of the few texts today that recognizes the gender binary and the importance of feminine and masculine roles in reproducing society and the species, it serves as a valuable source. The claim that the Bible is not a scholarly source is nonsense: when making an argument from Christianity, the Bible is the primary source.

For a detailed analysis of the controversy, see my Christmas Eve essay, A Case Study in Viewpoint Discrimination and Poor Assignment Formulation. In today’s essay, I go deeper into Fulnecky’s argument to help critics and others appreciate what she accomplished. I will lay out her position and justification, then show how one can make the same argument using pre-queer anthropology and sociology.

I do this not only to defend Fulnecky’s contribution, but to show how postmodernists have taken a transcultural and historical process and pathologized it to advance queer doctrine. In doing so, queer activists have obscured a vast body of knowledge on the human life course that demonstrates why normal psychological adjustment during puberty requires certain structures. That the course at the University of Oklahoma is called “Lifespan Development” highlights a profound problem in higher education: the actions of Curth, and those of the assignment’s second grader (TA Megan Waldron; both were supervised by Professor Lara Mayeux), reflect movement politics that have no place in science courses—not because they are political, but because they are science-denying, which Curth made clear in his criticisms of Fulnecky’s reflection.

Before beginning, I want to emphasize that the assignment was a response essay, also known as a reflection or reaction essay, submitted online, much like a discussion post in a learning management system (LMS) such as Canvas. Many of Fulnecky’s critics seem unaware of this, and it forms a major front in their hyperbolic attacks.

As readers of this platform know, I am a college teacher with over thirty years of experience, and I have asked students to write such essays both as classroom exercises and, with the advent of the LMS, as drop-box submissions. Unless I specify that students cite sources, there is no need for them to do so. I am asking for their reaction or reflection, not a literature review or research paper. Those are different assignments with different requirements. Think about it this way: when an instructor asks students in a classroom discussion what they think about an argument or theory, he doesn’t necessarily expect citations. Presumably, many of Fulnecky’s critics have had this experience; their overreaction is disingenuous.

However, Fulnecky did cite her source: the Bible. Not only did she cite the Bible, but she also cited the specific book of the Bible from which she drew her argument: Genesis. She explains very clearly why she is using this source, as it provides the epistemic foundation for her critique of the article she was assigned to read, which she would have had to read to formulate her critique (contrary to claims on social media that misrepresent her remarks in an interview).

The article, published in a 2014 issue of the academic journal Social Development, was “Relations Among Gender Typicality, Peer Relations, and Mental Health During Early Adolescence,” penned by Jennifer Jewell (a graduate student at the time) and Christia Brown (presumably Jewell’s major professor at the University of Kentucky). I will get to Fulnecky’s challenge to the article’s hidden premise in a moment, but I want to reflect on the religious piece of her response to get beyond the false claim that she did not cite her source. In this passage, which I break into paragraphs, she explains why she is responding in the way that she does. I provide commentary along the way.

“It is frustrating to me when I read articles like this and discussion posts from my classmates of so many people trying to conform to the same mundane opinion, so they do not step on people’s toes. I think that is a cowardly and insincere way to live,” Fulnecky writes. (As I have noted on this platform, lying for the sake of getting along is a type of bad faith, so I appreciate the ground she stakes out here.) “It is important to use the freedom of speech we have been given in this country, and I personally believe that eliminating gender in our society would be detrimental, as it pulls us farther from God’s original plan for humans.”

This is where Fulnecky loses most secularists, but I would ask them to consider Thomas Jefferson’s references to the “Creator,” “Laws of Nature,” and “Nature’s God.” It is God’s plan (Providence) that we should enjoy “Life, Liberty, and the pursuit of Happiness,” since he/nature gave these to us as unalienable rights. Fulnecky’s free speech rights are among those, and she is right to note how important it is to use that right, which she is demonstrating in her reflection.

“It is perfectly normal for kids to follow gender ‘stereotypes’ because that is how God made us,” Fulnecky continues. “The reason so many girls want to feel womanly and care for others in a motherly way is not because they feel pressured to fit into social norms. It is because God created and chose them to reflect His beauty and His compassion in that way.”

Replace “God” with “natural history,” and Fulnecky has here an observable and well-documented point. And the point is entirely relevant to her critique of the article. The reader should read Jewell and Brown’s article, but to summarize, their research question concerns peer pressure to conform to social norms, which, from the postmodernist view of such things, is not a normal or necessary process, but a bad thing, in that it is associated with psychological problems (showing this is Professor Brown’s life-work). However, again, if one substitutes natural history for God in every instance, Fulnecky’s argument falls in line with pre-queer social science.

It is at this point that Fulnecky explicitly cites her source (as if it were not already obvious): “In Genesis, God says that it is not good for man to be alone, so He created a helper for man (which is a woman). Many people assume the word ‘helper’ in this context to be condescending and offensive to women. However, the original word in Hebrew is ‘ezer kenegdo’ and that directly translates to ‘helper equal to’. Additionally, God describes Himself in the Bible using ‘ezer kenegdo’, or ‘helper’, and He describes His Holy Spirit as our Helper as well. This shows the importance God places on the role of the helper (women’s roles). God does not view women as less significant than men. He created us with such intentionally [sic] and care and He made women in his image of being a helper, and in the image of His beauty. If leaning into that role means I am ‘following gender stereotypes’ then I am happy to be following a stereotype that aligns with the gifts and abilities God gave me as a woman.”

There are minor issues with Fulnecky’s essay that I would have noted if I were her instructor: American-style placement of commas and periods inside quotation marks, and a few others (the essay was double-spaced, contrary to what one may see on the Internet). What I would not have done is respond with the TA’s rant, which is available online. I would have expressed appreciation that the student provided the epistemic foundation for her critique of the assumptions embedded in the article. (I will soon publish an essay on the necessity of establishing an epistemic foundation for normative claims, specifically concerning Christian ethics.) I would also have introduced her to the vast anthropological and sociological literature supporting her argument and apologized for the situation that has made this literature remote to her. Behavioral and social sciences have been impoverished by postmodernism and queer praxis, especially the weaponization of empathy.

Jewell and Brown’s study examined whether meeting typical gender expectations is linked to popularity and whether failing to meet them is linked to teasing or rejection. It also investigated whether teasing mediates the association between low gender typicality and poorer mental health. Middle school students reported on their own gender expression, experiences with gender-based teasing, and mental health, including anxiety, body image, depression, and self-esteem. Results showed that popular students were more gender-typical than those who were teased or rejected. Boys who did not fit typical gender expectations reported worse mental health outcomes. In other words, the study confirms what many of us know from experience—a lot of psychology simply formalizes the obvious—only now we are asked to interpret these experiences as traumatic rather than formative.

Fulnecky was suspicious of the authors’ motives, namely that they implied there was something wrong with reinforcing gender-typical norms, a stance aligned with gender identity doctrine. She argues there is nothing inherently wrong with reinforcing gender norms, which required her to explain appropriate gender roles, rooted in a biblical worldview. Intellectually responsible, she erected her explanation on an epistemic foundation. She did not merely say, “I don’t like this article” or “I don’t agree with this article,” as students often do, leaving it at that. She engaged with the article’s core premise and challenged it based on authority. As I noted in my Christmas Eve essay, what upset the TA was that she invoked the “wrong” authority.

There is nothing wrong with what Fulnecky did. In fact, that is what we want our students to do: interrogate the premises of claims made by scientists—or anyone else. If an instructor asks for a student’s opinion, he must tolerate that the opinion may be informed by Christian theology. Otherwise, he engages in viewpoint discrimination. The TA, clearly, had not considered the epistemic foundation of his own views. He believes what he believes due to ideology, not because he has constructed a foundation or observed one. He “knows” it is wrong for students to use the Bible as justification—but Fulnecky, who built her argument on a coherent epistemic foundation, is in the superior position.

To explain why peer reinforcement of gender typicality is not necessarily wrong, Fulnecky must explain why typical gender roles exist. The article assumes that reinforcing gender typicality is harmful. Fulnecky suggests that failing to reinforce these roles may be harmful. Why? According to her, God created two genders and assigned them roles, which society reinforces via norms and peer pressure. Peer pressure is standard across cultures and history. Unlike the article, which offers no epistemic foundation for its moral claims, Fulnecky makes hers explicit. This is what led the TA to discriminate against her: it was not a poor essay, but a viewpoint he did not like. His claim that her grade was unrelated to her religious belief is implausible; her religious belief is exactly what he failed her for.

While the Bible provides one epistemic foundation, there is another Fulnecky could have used: pre-queer anthropology and sociology, which explain the natural origins of traditional gender roles. Across cultures, societies have faced a recurring problem: how to manage boys’ transition into manhood. Anchored in puberty, this transition is not merely biological; it is a social transformation fraught with anxiety, uncertainty, and potential disorder. Anthropologists have long recognized this and developed concepts like liminality and rites of passage to explain how societies regulate this unstable period.

Arnold van Gennep’s Les Rites de Passage (1909) describes transitions through separation, liminality, and incorporation. Pubescent boys are separated from childhood roles, enter a liminal phase—no longer boys, not yet men—and eventually reenter society as recognized adults. Victor Turner’s concept of liminality, being “betwixt and between,” aptly describes this state (The Ritual Process: Structure and Anti-Structure, 1969). Liminal individuals exist outside ordinary categories; they are ambiguous, unstable, and socially dangerous if unmanaged. Biological puberty amplifies this instability: sexual maturity, strength, aggression, and psychological volatility create a mismatch between a boy’s bodily capacities and his recognized social status. Without ritual containment, this mismatch threatens both the individual and the community.

Societies almost universally ritualize this transition. Our society does not. At least not adequately. Most boys manage anyway, but many do not. The same is true for girls. The failure to provide appropriate rites of passage likely explains the rise in adolescent mental distress over recent decades. Even worse, behavioral and social scientists, along with educators and social workers, now claim these rituals are harmful—a key part of the project to queer children’s culture and education. Children are told they do not have to be what they are. Boys are told they can be girls, and that other boys acting like boys is wrong. Fulnecky recognized this in the article’s intent—and she was correct. Curth saw this too and punished her for defending transcultural and historical gender roles.

The remainder of this essay will show how peer pressure around gender conformity is normal and necessary for psychological development. This has long been a topic I have wanted to address, and this controversy provides the occasion. I cover it in criminology and juvenile delinquency courses because Western adolescents, particularly boys, are thrown into liminality without the guidance necessary to reach adulthood, and this has caused a lot of problems psychologically and societally.

David Matza’s theory of drift illustrates this: juvenile delinquency arises from adolescents’ liminal position between childhood dependence and adult responsibility. Young people—especially boys—accept dominant moral norms yet lack stable institutional pathways into adulthood. Delinquent acts respond to structural ambiguity, not deviance. Scholarship on anomie, subcultures, and rites of passage reinforces this, showing that erosion of clear roles, mentorship, and legitimate status attainment intensifies liminality. Without structured transitions, adolescents improvise, asserting autonomy, masculinity, and belonging through delinquency.

Replace delinquency with gender identity disorder, and the problem becomes clear: institutions corrupted by gender identity doctrine embrace the issue rather than solve it. Indeed, progressive activists are responsible for creating these conditions. Queering disrupts normative rules, punishes peers who reinforce gender-appropriate roles, and exposes children to Pride Progress paraphernalia and sexualized content. Social-emotional learning identifies those most susceptible, while empathy punishes questioning of peers’ gender conformity.

To the postmodern mind, historical gender socialization appears as “bullying,” the result of “social constructions” around “patriarchal relations.” However, in traditional societies, male initiation ceremonies guide adolescents through instruction, isolation, trials, symbolic suffering, and endurance tests. These rituals externalize anxiety, transform fear into shared experience, and provide meaningful narratives for transition. Hardship becomes proof of worthiness, not arbitrary suffering.

Crucially, these rites are reinforced both vertically and horizontally. Peer pressure within age cohorts ensures conformity to masculine expectations through mockery, shaming, teasing, and ritualized aggression. Sociologically, this regulates status; anthropologically, it produces culturally legible men. Peer pressure is functional, not pathological. Masculinity requires achievement, continuous affirmation, and demonstration. Normal societies develop systems to confirm gender conformity; pathological societies emasculate men and risk cultural collapse.

“I do not think men and women are pressured to be more masculine or feminine,” she writes. “I strongly disagree with the idea from the article that encouraging acceptance of diverse gender expressions could improve students’ confidence.” This is indeed the implication of Jewell and Brown’s argument (which proves she read the article). “Society pushing the lie that there are multiple genders and everyone should be whatever they want to be is demonic and severely harms American youth.” I know, the demonic line bothers secularists, but I have learned to find a synonym that doesn’t sound theological. She has the right to use the words that convey her thoughts.

“I do not want kids to be teased or bullied in school,” she continues. “However, pushing the lie that everyone has their own truth and everyone can do whatever they want and be whoever they want is not biblical whatsoever.” This is true. The Bible establishes a universal truth, and it’s not the paradoxical truth postmodernists espouse: that the only universal truth is that there is no truth. “The Bible says that our lives are not our own but that our lives and bodies belong to the Lord for His glory. I live my life based on this truth and firmly believe that there would be less [sic] gender issues and insecurities in children if they were raised knowing that they do not belong to themselves, but they belong to the Lord.” I am an atheist, but I recognize that hundreds of millions of my fellow humans believe this, and the consequences of those beliefs in action have immeasurably improved their lives.

Cutting through the religious language, which she has a right to in light of freedom of conscience, Fulnecky’s point is that peer reinforcement of gender roles is beneficial and that gender atypicality is, under normal conditions, exceptional. She’s right. This is something a man who wants to be seen as a woman cannot accept. Every day he faces the gaze of those whose sensibilities are not scrambled by gender identity doctrine. He desperately wants to redefine the expectations of normal society as exclusive and oppressive. The reality is that denying boys peer reinforcement of gender roles harms their transition to manhood. What has been normal peer encouragement for millennia is now pathologized by progressive ideology. Boys are robbed of ritualized transition and societal expectation. The Bible affirms this. With behavioral and social sciences corrupted, students like Fulnecky no longer have access to academic literature sufficient for forming epistemic foundations for normative statements. But they do have the Bible. Fulnecky used hers, and she was punished for it.

A Case Study in Viewpoint Discrimination and Poor Assignment Formulation

It’s Christmas Eve. I doubt many people will read this essay. But I have to get this off my chest because it’s bugging me. At any rate, Merry Christmas! Enjoy!

I’m an atheist, a civil libertarian, and a sociology teacher. My disbelief in a god cannot affect my assessment of student work that moves from a religious standpoint. I recognize that students have both a First Amendment right and academic freedom to draw on their religious beliefs when reflecting on social phenomena. While I may prefer arguments grounded in sociological theory, I cannot penalize students for organizing their thoughts from a religious standpoint if I don’t specify that they must work from a sociological perspective. When asking students to reflect on a topic without specifying a theoretical framework, I open the door to a diversity of perspectives, including religious ones.

To maintain analytical rigor in assessing reaction/response essays, I focus on the clarity, coherence, and depth of their reasoning rather than the source of their beliefs, encouraging evidence-based or logical argumentation wherever possible. Optional frameworks or prompts can guide students toward sociological thinking, but their freedom to express their worldview remains respected. This approach balances critical engagement with intellectual freedom, allowing students to articulate reasoned perspectives without invalidating their personal beliefs. My job is not to tell students what to think, but to teach them how to think and how best to articulate their thoughts.

Mel Curth (left). Samantha Fulnecky (right)

Comments on X and other social media about the controversy surrounding University of Oklahoma student Samantha Fulnecky, who received a “0” out of 25 points on an assignment regarding gender norms from a psychology TA, Mel Curth, a trans-identifying male, exemplify the problem of motivated reasoning on the left. Motivated reasoning is a psychological phenomenon in which individuals process information in a biased way to arrive at conclusions that align with their preexisting beliefs, desires, or goals, rather than objectively evaluating evidence. The comments also reflect the progressive fetish for technocracy, which I will come to at the end of these remarks. But the main issue at hand is an utter failure to understand that the assignment is poorly formulated, and that, because of that, the TA opened the door to a religious argument. The grade assigned was obviously a reaction to the student reflecting on an article from a standpoint with which the TA disagreed.

Rather than critiquing the assignment, the comments dwell on the essay, which social media users don’t like because it works from the unfalsifiable proposition that there’s a god and that this god, in which Fulnecky deeply believes (which is her right—freedom of conscience—under the First Amendment), has a gendered plan for humans. The reason this is so offensive to trans activists and their allies is not just because they loathe Christianity, but also that gender identity doctrine works from an unfalsifiable proposition in the same way as religion, namely, the faith-belief that men can be women because they say they are. Trans activists need their foundational assumption to go unquestioned because, deep down, they know it’s a religious-like belief. When two religious worldviews collide, it’s typically the immature and insecure religion that takes offense (we see this with Islam, as well).

The assignment

A reasonable person would begin by asking about the assignment prompt and the grading criteria, which is easy to do in this case since the rubric is readily available (I shared it above). Students were asked to write a 650-word reaction/response essay to a scholarly article titled “Relations Among Gender Typicality, Peer Relations, and Mental Health During Early Adolescence” (I read the article, Fulnecky’s response, and the TA’s rant). The rubric evaluated the paper across three main criteria: 

(1) “Does the paper show a clear tie-in to the assigned article?” is the first, worth up to 10 out of the 25 total points. Fulnecky’s essay clearly ties to the assigned article. The article is an empirical study finding that middle-school students who are more gender-typical—especially boys—are perceived as more popular, while those who are less gender-typical experience more teasing, and this is associated with more mental health complaints. (Does one even need a study to know this? Perhaps to determine whether the mental health complaints precede or follow the teasing. Some children are more fragile than others.) Fulnecky organizes her reaction paper around this finding, arguing that, while she doesn’t want students to be bullied or teased, social pressure to conform to gender roles, considering her god’s gender plan, is understandable, and she does not necessarily see this as a problem. Leaving aside whether there is a contradiction there, her response tells us that she read the article and is doing what was asked of her: reflecting on/responding to it.

The reflection prompt is explicit in its second criterion: (2) “Does the paper present a thoughtful reaction or response to the article, rather than a summary?” This criterion is also worth up to 10 points. Again, Fulnecky reacted/responded to the article. She did not summarize it since the criterion tells her not to. Everybody in the class knows what article they are responding to. Is her response thoughtful? You may disagree with her, but that’s not the question. It’s possible that the TA is not a thoughtful person but rather someone who imposes his opinion on others using a position of authority. Indeed, the evidence in this case strongly suggests the latter. What does “thoughtful” mean anyway? In this case, it seems to mean whatever the TA finds thoughtful.

(3) “Is the paper clearly written?” is the last criterion, worth up to five points. There is no ambiguity in what Fulnecky wrote. Nobody who reads this essay will be confused about Fulnecky’s point of view, which is what she was asked to share. That’s why the TA objected to it: he understood full well what Fulnecky was arguing and gave her a “0” on the assignment for that reason.

What the rubric does not explicitly require is also crucial to note. The published rubric does not list “empirical evidence” or outside scientific sources as a required element in the response—it focuses on reacting to the article. Students were not supposed to summarize the article. The rubric does not specifically mandate acceptance of the article’s assumptions; rather, students were to read the article and respond to it “thoughtfully,” whatever that was supposed to mean. Note that among the approaches students may take to writing the essay are (1) “[a] discussion of why you feel the topic is important and worthy of study (or not)” and (2) “[an] application of the study or results to your own experiences.” There you go.

In his response to the student (which is long and tedious, to my eyes conveying seething anger), the TA goes beyond the article itself to appeal to the authority of “every major psychological, medical, pediatric, and psychiatric association in the United States,” which, he contends, “acknowledges that, biologically and psychologically, sex and gender is neither binary nor fixed.” 

This is an absurd argument—one that is itself an appeal to authority, but which is an entirely falsifiable matter when interrogated on objective scientific grounds. Indeed, we don’t need science to know this is false. Sex and gender are synonyms, and it’s a fact that gender is binary and fixed. This is true for all mammals (and birds and reptiles, most amphibians and fish, and many plant species). A hog can’t be a sow. Even if genitalia are ambiguous, it will be found in the end that the swine is either male or female. Neither swine nor any other mammal can be both or neither. The instructor is using his position of authority to punish students because they don’t affirm his belief in a falsehood. Apparently, this is the pattern with this TA, who has been removed from teaching duties.

Is it possible that “every major psychological, medical, pediatric, and psychiatric association in the United States” can get something wrong? Yes. Of course it is. One might consider how, during the period of Nazi hegemony, major sense-making institutions and professional associations in Germany promulgated the regime’s ideology. The Nazi period lasted twelve years, from 1933, when Adolf Hitler became chancellor, to 1945, when Nazi Germany was defeated at the end of World War II, and throughout those years German academic and professional organizations actively promulgated and normalized Nazi ideology, not under coercion but enthusiastically. Does that make whatever those associations and institutions held up as true actually true? One would be a fool to accept that.

What are the lessons of Nazi Germany? The world learned—as if some people in it didn’t already know—that the following things are bad: conflating academic authority with moral and political correctness; treating consensus within professional bodies as proof of truth; and allowing ideological conformity to substitute for academic freedom and open inquiry. There’s an irony here. Trans activists are always accusing those who dissent from their doctrine of being “Nazis.” Has Curth ever considered how imposing his ideology on Christians by punishing them for opinions—opinions he asks for—aligns with the mentality of authoritarianism?

Here’s what happened in this case: Fulnecky chose another authority! The TA wanted students to work from the standpoint of the authority to which he appeals. Fulnecky knew this, and she wasn’t having it; she reflected on the article from her standpoint. To be sure, gender identity doctrine is the religion to which most academics adhere, but that’s even more reason for dissenting voices to assert their worldview when they have the opportunity (unfortunately, a lot of students go along to get along, which Fulnecky noted in her essay). Curth gave Fulnecky that opportunity—and punished her for taking it. One would not be unreasonable in suggesting that the assignment laid a trap for Christian students. 

To punctuate his motivation in giving Fulnecky a “0” on the assignment, Curth, who found her essay “offensive,” writes, “I implore you [to] apply some more perspective and empathy in your work” before ranting about the “methodology of empirical psychology,” which, remember, is neither part of the prompt nor the rubric. However, the methodology he in retrospect desired students to deploy is flawed if, in using it, one concludes that gender is neither binary nor fixed, an obviously false conclusion—one that even the Bible grasps as false. Suppose he was talking about the methodology of the article. That’s beside the point, since he did not ask students to do that. He asked for a “thoughtful” response to the article. And although the article itself was not about the “scientific consensus” that Curth asserts, Fulnecky had to share her belief in her god’s gender plan to make her response intelligible. Does a man who demands others refer to him as “she/they” know a thoughtful response when he sees one? Would it matter whether it was thoughtful or not?

When Fulnecky requested a grade change, Curth explained: “You do not need empirical evidence when writing a reaction paper, but if your reaction paper argues contrary to not only the article, but the consensus of multiple academic fields (psychology, biology, sociology, etc.), then you should either provide empirical support for those claims or raise criticisms that are testable via psychological research.” He then gave an example: “If you took a geology class and argued that the earth was flat, something contrary to the academic consensus of that field, then you would be asked to provide evidence of such, not just personal ideology.”

But, as is obvious from reading the assignment, that was never specified. Fulnecky disagreed with the article, which she was allowed to do according to the directions. She used the Bible, sure, but isn’t what is obviously true by observation empirical? Empirical means based on experience and observation rather than belief or theory alone. It is no problem to move from a mere belief in God’s plan to observing what is true across time, which I noted above. If, in the end, the Bible were the only source affirming that there are only two genders, then it would be the only science text left on Earth. All that jazz about “the consensus of multiple academic fields (psychology, biology, sociology, etc.)” is just more appeal to authority. And it’s not even true. Biology confirms that, in mammals, gender is binary and fixed. But it is true, I hate to admit, that my discipline has accepted the madness of queer theory.

Why is an instructor preaching to students about empathy, not as a thing that human beings have (following Adam Smith, I prefer to refer to this capacity as compassion and sympathy), but as a thing he judges her to need more of? He and everybody else who leverages empathy in this way can fuck off with that shit. Who made this clown and his tribe the moral authority? Besides, from her standpoint, her argument is empathetic in that she is concerned for the harm that gender ideology does to kids. She finds it understandable that kids would expect other kids to conform to gender norms, just as kids—and adults—expect other kids to conform to social norms generally. Who has the moral and rational high ground here? The man who thinks he’s a woman or the woman who knows what she is? No apologies for dwelling on this point; whether Christian or not, we cannot accept admonishment from those who believe society should be restructured to groom children into accepting gender identity doctrine.

In professional terms, a grade of “0” is outrageous. The rubric is misaligned with the grading rationale, the penalty is disproportionate (and that’s putting it charitably), and the student was subjected to inappropriate moralizing feedback. Suppose that the student failed to engage deeply enough with the article’s data. That would justify partial credit and targeted feedback, but not a zero and a moral rebuke. Moreover, the TA’s comment that the student’s view was “offensive” and that she needed to “find empathy” crosses a different line altogether. That shifts the grading rationale from academic criteria to ideological or moral evaluation, which is precisely where universities become vulnerable to claims of viewpoint discrimination. And rightly so.

Beyond the problem of assessment, this is a poorly specified assignment. Consider the following assignment prompt: 

Using the article “Relations Among Gender Typicality, Peer Relations, and Mental Health During Early Adolescence” as your source, write an approximately 650-word science-based response paper in which you evaluate one of the study’s central claims using quantitative psychological research methodology. Your response must (a) accurately summarize the relevant hypothesis and findings from the article, (b) analyze the study’s methodology using at least one empirical lens taught in this course (e.g., construct validity, sampling and generalizability, measurement reliability, statistical mediation, or causal inference), and (c) propose either a methodological critique or a concrete follow-up study design that could test the same claim more rigorously.

The bottom line is that, if an instructor is going to appeal to faith-based belief in evaluating claims—even if the article doesn’t—then all faith-based beliefs are welcome. As the assignment is written, any commentary would have to be something along the lines of, “Thank you for your opinion,” and then points awarded based on whether the criteria were met (which they were).

I have to add that a virtue of the faith-belief Fulnecky has centered her life on is that it aligns with what any person not deluded by ideology knows to be objectively true: that gender is fixed and binary. Any scientific paper that moves from the assumption that this is not true, no matter how much it dresses itself in the language of science, collapses on its own foundation. That is not true of this article, but it is true of the TA, who claims to move from the standpoint of psychology, a discipline he knows affirms his personal delusion about his own gender. In the hands of progressives, psychology has become a corrupt discipline. Indeed, Curth seeks refuge in the discipline because he regards it as a safe space, which Fulnecky is making unsafe with her Christian worldview.

Now, on this fetish for technocracy, the reaction on X and other social media reflects a deep problem with the corporate state left. Curth is the authority, and the student should submit to his authority. We see this in another thread on X today, warning us away from Matt Walsh because he’s not a professional historian. This is an elitist argument. In my experience, there are a lot of people with a third-grade education who are smarter and wiser than most people with PhDs. After all, most PhDs in the humanities and the sciences believe that men can be women, when every farmer knows that’s a false belief. There is wisdom in their eyeroll. It requires indoctrination to make a reasonably intelligent person believe something so obviously untrue (that’s not the only crazy shit academics believe). I have found, for the most part, that the more education a person has, the more easily they’re guided to believe things that defy common sense. This is why some of the smartest and most innovative people in history didn’t have advanced degrees. In fact, many of the most profound contributions to, say, physics, came before the PhD norm. Likewise, much of the best scholarship predates peer review. Do these people really believe that the trappings of the modern academic institution—degrees, peer review, etc.—have always been in force? There’s a history here, and it is not hard to find.

This doesn’t mean that people with PhDs are necessarily indoctrinated. There are a handful of people who go through the graduate school experience because they genuinely seek further enlightenment in a particular subject matter (which benefits from the space and time to read, write, and debate issues with others that graduate school affords) and/or know they have to do that to teach college (which they seek as a vocation), but make a conscious effort to stand outside the doctrinal parts of the process. The reason I was able to shake off the doctrinal parts of the process (and there may be more to shake off) was that I distanced myself from them. I never adopted the elitist attitude that, because I have advanced degrees, I am smarter than everybody else. To be sure, those degrees gave me access to a vocation that requires them for entry, but that’s not why I am smart. The irony is that, even with those degrees, people who disagree with me judge me to be wrong, even stupid. Progressives only appeal to degrees when they are held by people who say what they want to hear. (Remember during COVID when we were told to “trust the experts,” by which they meant their experts?)

The question progressives must ask is who and what ideology is in command of the sense-making institutions in any given existing society. Any cursory look at the situation in Nazi Germany will tell a rational person that the universities in Germany at that time were full of professors who promulgated Nazi ideology. From the logic that one should recognize as valid only knowledge produced by people with degrees or the assertions of academic institutions and professional organizations, it follows that Nazi ideology was closer to the truth than any other standpoint, since those propagating it were learned people in respectable institutions. That’s the paradigm of appeal to authority. And if progressives like Curth are going to do that, then conservatives like Fulnecky have just as much right to do the same.

Before I go, I have to fact-check a viral image on the Internet of the supposed correction of Fulnecky’s essay. What I am sharing below is a printout of the essay marked up by a third party. This is not the TA’s markup. I traced the paper to an account on Threads. The person who did this claims to have graduated magna cum laude with a Bachelor’s degree in education. According to her, she was the only student in a grammar and punctuation class to get 100 percent on tests six weeks in a row. She also says that she’s a Christian. At any rate, the function of this image is to reinforce the perception trans activists are manufacturing: that this is a bad essay. The irony is that this markup makes the TA look even worse than he already does.

I’m not going to go through all the red here. Instead, I will ask the reader to look past all that and read the essay. People are feigning astonishment at the poor quality, but the quality of writing is typical of the average college student (when not using ChatGPT). Take that as you will, but if all such essays were awarded grades of “0,” then higher education would have a crisis on its hands. If she had been in my class, I would have encouraged her to double-space her submissions and indent the first line of paragraphs beyond the first one.

Removing an Imaginary Sixth Digit: Ethical or Unethical?

This essay follows up on yesterday’s essay, Orbiting Planet Madness: Consenting to Puberty and Other Absurdities.

Polydactyly is a congenital condition where a human or other animal is born with extra fingers or toes. The extra digits can be fully formed, but are often only partially formed. It can be genetic or the result of a syndrome. Polydactyly is one of the most common congenital limb malformations. It occurs in approximately 1 in 500 to 1,000 live births worldwide, which means that a lot of extra digits are surgically removed in childhood. Polydactyly can result from mutations in several genes involved in limb development, particularly those affecting the Sonic Hedgehog (SHH) signaling pathway, which is crucial for digit patterning during embryonic development. Yep, you read that right: it’s called the Sonic Hedgehog signaling pathway (I didn’t believe it, either). When an extra digit has bone, joints, or tendons, doctors typically recommend surgical removal and reconstruction to improve appearance and function. 

Polydactyly (image source)

Perhaps we must adjust our language: Most humans have ten fingers, though variations exist due to genetic and developmental factors. That’s fine with me (as long as we don’t then suggest that the number of digits on the human hand is “on a spectrum”). However, beyond physical differences, some individuals experience a mismatch between their perceived and actual number of fingers. This is a situation where a person’s internal sense of their body (body schema) doesn’t match their physical anatomy, leading some to seek surgical alteration. (I have written about this before with respect to limb amputation; see The Exploitative Act of Removing Healthy Body Parts.) In rare cases, a person may feel they have six fingers on each hand when they don’t and may seek the removal of a digit to match their internal body perception, leaving them with only four digits per hand.

This phenomenon provides us with a scenario with which to check the integrity of medical ethics. What if, after surgery, the person looks at his hands and now sees only four digits on each? He can’t have his fingers back since the surgery is quite involved and irreversible. Was it ethical for a doctor to remove the patient’s imagined sixth digit? The man was clearly delusional, seeing six fingers where there were only five, and now, confronted with only four, discovers he not only deceived himself, but that the doctor affirmed his deception and mutilated his hands. Even if he now sees five digits, was it ethical? The surgeon knows what he did. The patient never had extra digits either way. Whether he immediately, later, or never sees that he now has only four digits, we are confronted with a problem: a doctor affirming a delusion and mutilating a man’s hands.

In a philosophy class, a teacher might ask his students to ponder the ethics of such a case, which hinges on the principles of autonomy, beneficence/non-maleficence, and informed consent. He might note that, on the one hand, medical ethics generally uphold a person’s right to bodily autonomy. If an individual experiences deep distress due to a perceived mismatch between their body and their internal perception of it, some might argue that removing the “extra” digit is an act of compassion, akin to “gender-affirming surgeries” or procedures for body integrity dysphoria (BID). Put to one side for the moment the problematic character of these supposed acts of compassion. It will only be a moment because what I say next blows up the acts-of-compassion claim in the cases of both gender dysphoria and BID. Indeed, presenting the other hand would likely result in a student going to the professor’s chair or dean to complain about a class that problematized the core premise of gender ideology, specifically the pseudoscientific notion of “gender identity.”

The professor might ask the students to suppose that the case of a man who imagines polydactyly differs from a BID case in a crucial way: the perception of extra fingers was a delusion rather than a physical or neurological variation. The patient did not, in reality, have six fingers, yet a doctor, rather than addressing the underlying cognitive or psychological condition that led the patient to believe an observable falsehood, affirmed the false perception by surgically altering the body to match it. But how would the doctor know? What medical test allows a doctor to tell the difference between a man who falsely believes he has extra digits and a man who truly believes he has an extra digit? I might now move on from what the reader may perceive as an analogy, except it is not an analogy—it’s the thing itself. The professor may ask the students to ponder whether this scenario raises serious ethical concerns about non-maleficence (“do no harm”)—that the doctor is complicit in harming the patient by enabling a delusion instead of treating its root cause—but since a man cannot be a woman, his internal sense that he is one must always be delusional. So why the double standard?

The professor might ask whether the patient in the scenario, post-surgery, realizes that he has made a grave mistake, which makes the ethical implications even starker. The procedure was irreversible, and the doctor, rather than alleviating suffering, may have contributed to permanent physical and psychological damage. This raises questions about informed consent—was the patient capable of making a truly informed decision while operating under a false belief? Should the medical profession have safeguards in place to prevent such surgeries in cases where the patient’s perception is demonstrably false? If we say yes to both, then where do we draw the line between respecting autonomy and preventing harm? If we say no, are we not endorsing the medicalization of delusions and self-destructive choices? Again, since the patient’s perception of gender in cases of gender dysphoria is demonstrably false, we are objectively medicalizing delusions and self-destructive choices. While we may say that an individual is free to wish to permanently alter his body, we cannot say that gives a doctor a license to permanently alter the body of a delusional person.

The professor would tell students that the scenario highlights the ethical tension between respecting individual autonomy and ensuring that medical interventions truly serve a patient’s well-being. But is there really any ethical tension here? If medical professionals knowingly affirm (validate) and act on a delusion rather than addressing its psychological roots, they cross the line from healers to enablers of harm. The ethical course of action would have been to refuse the surgery and instead offer psychiatric care. The question of whether there should be clearer medical guidelines preventing such procedures in cases of misperception already has its answer. The line is clear: any doctor—or anybody else, for that matter—who removes a delusional man’s fifth digit has committed an atrocity. The scenario forces rational minds to reconsider their view that autonomy should extend to cases of irreversible medical decisions where there is no objective underlying medical condition.

As readers ponder this matter, they might also ponder whether it is ethical for parents to remove the sixth digit from the hand of a child who suffers from polydactyly. The parents could wait until the child is old enough to decide for himself (as a guitarist, I might have an advantage with an actual functional sixth digit, which might be worth the grief I’d experience at the teasing of other children or other guitarists accusing me of an unfair advantage). But there is no ethical problem with surgically removing a sixth digit since this condition is not normative for digital patterning.

In the case of correcting the problem of a micropenis (which I discussed in my last essay), parents must treat this condition because of the critical window of genital development. If the micropenis is due to low testosterone levels, a doctor prescribes a short course of testosterone therapy, usually in the form of intramuscular injections or topical gel, to stimulate penile growth during early childhood or puberty. In cases where hormone treatment is ineffective, or if there’s a developmental or genetic disorder, the doctor may refer the child to a pediatric endocrinologist or urologist. In rare cases, surgical options such as phalloplasty may be considered later in life if the condition significantly affects function or self-esteem. Psychological support and counseling may also be recommended to help with emotional and social concerns. None of this is gender-affirming care in the industry sense, but ethical medical intervention to address an anomaly—that is, actual gender-affirming care. To hell with the parents who love their son just the way he is. It’s not their life. It’s his.

I’m not a philosophy professor. If I were, I would hesitate before using the scenario in an ethics class because of the chill put in the air by trans activists. I would likely get in trouble for broaching the subject. Indeed, having such a discussion is not beyond the boundaries of my discipline of sociology, yet I dare not interrogate such a problematic, whatever its value in interrogating matters of social power. To illustrate the problem, I conclude with a case of a teacher who dared to broach the subject of gender critically and an op-ed by a student that confronts the climate of self-policing and the impact that has on the promise of higher education. (See my recent essay Identity-Based Academic Programming and the Scourge of Heterodoxy.)

Kathleen Stock reported that student protests grew out of hostility from other academics (source)

Kathleen Stock, a philosophy professor at the University of Sussex, UK, argued against gender self-identification and supported gender-critical feminism. Students and activists—even her colleagues—accused her of transphobia, leading to protests and calls for her resignation. Stock resigned in 2021, citing a hostile work environment. She described the climate as “medieval” ostracism. Of course, the accusation of transphobia (like Islamophobia) presumes the validity of the concept; it’s a propaganda term to harass those who criticize or question what is—anthropologically and sociologically—effectively a religious system. Perhaps that’s why it remains an effective rhetorical weapon in policing speech; once an ideology is wrapped in religious symbology, its congregation becomes a protected class.

Emma Camp on the campus of the University of Virginia (source)

This climate has impacted students, as well. Emma Camp, a student at the University of Virginia, wrote an op-ed in The New York Times criticizing the university’s handling of gender identity issues, including policies related to transgender students. She argued that the emphasis on gender identity in academia stifled free speech and that professors were reluctant to engage in debates over gender. Professors who shared her viewpoint on gender identity faced criticism, with some calling for policies that would restrict public discussions or certain types of discourse around gender identity. No professors were formally disciplined, but the controversy was enough to chill the air. Camp’s op-ed is worth a read: “I Came to College Eager to Debate. I Found Self-Censorship Instead.”

Orbiting Planet Madness: Consenting to Puberty and Other Absurdities

This essay concerns the argument, all the rage over on X in the wake of House Republicans (joined by three Democrats) safeguarding minors, that children should not be forced to go through puberty. I also address the retort among trans activists that “cisgender” children are recipients of gender-affirming care. For context, Georgia Representative Marjorie Taylor Greene successfully pushed through the House the Protect Children’s Innocence Act (HR 3492), which criminalizes the provision of so-called “gender-affirming care” (often modified with the compound adjective “life-saving”) to minors and imposes penalties on providers. Greene’s legislation, sure to fail in the Senate (where it requires considerable Democratic Party support to pass), has triggered a firestorm.

In the wake of the bill’s passage, Democrats took to the Internet to condemn the bill and rally the troops. One of the arguments among rank-and-file progressives uses the language of “consent” around puberty, as if children have a choice in a naturally unfolding developmental process. When developmentally appropriate, in the absence of abnormalities or intervention by endocrinologists, puberty is an inevitable process. Humans don’t consent to puberty any more than they consent to aging or dying. Humans don’t have a choice in such matters. They age and eventually die. That’s a normal part of life. To be sure, some are trying to cheat the effects of aging and even death. Indeed, this is the same transhumanist desire that animates people seeking to escape one gender to become another, like a hermit crab seeking a new shell.

We hear the absurdity in the argument from people who say they didn’t consent to being born. Readers who haven’t encountered this before may find it incredible, but people seriously make this argument. They’re right in a way: they didn’t consent to being born. Who does? Nobody consents to being miscarried, either. Or consider those who want limbs amputated because they think they have too many—as if one consents to having four limbs, or to having 20 digits. Imagine a society in which doctors remove the normal complement of healthy limbs and digits because people didn’t consent to them. Don’t laugh; this has happened (see The Exploitative Act of Removing Healthy Body Parts). Imagine a society in which girls suffering from anorexia, because they think they’re too fat, undergo bariatric surgery or liposuction (see Disordering Bodies for Disordered Minds).

Those defending the artificially induced arrested development of children contend that puberty blockers are relatively harmless and reversible. Even if one doesn’t like affirming delusions, it’s no big deal, they say. But this isn’t true. Puberty blockers, when used in precocious puberty to delay early onset, make sense. In that case, the intervention is considered reversible, and the delayed developmental processes, including brain maturation, typically unfold once the blockers are stopped. However, when puberty blockers are used to suppress puberty that is starting at a developmentally normal time, the situation is different. Adolescence is a critical period for brain development, including cognitive and emotional maturation driven in part by sex hormones. Delaying or suppressing this developmental phase disrupts important windows of synaptic pruning, myelination, prefrontal cortex refinement, and emotional regulation.

Parents and their children who seek this intervention are often unaware of the potentially harmful effects of these drugs when used at a developmentally inappropriate time (still, that they’re effectively Peter Panning their kids ought to be obvious enough). In such cases, while physical puberty resumes in some fashion after stopping the blockers, the brain and cognitive/emotional development that normally occurs during the typical pubertal window may not fully catch up later; some aspects of normal development may be permanently altered because that sensitive window of opportunity has passed.

There is evidence that puberty blocking at a critical phase can have lasting effects on brain structure, behavior, cognition, and emotional processing. Any responsible parent must therefore ask about the long-term impacts on IQ, neurodevelopment, and emotional function. And they shouldn’t trust doctors to tell them the truth. Parents have a duty not only to study the particular matter, but also to learn how the medical industry exploits ignorance for profit (see Making Patients for the Medical-Industrial Complex; The Story the Industry Tells: Jack Turban’s Three Element Pitch; Thomas Szasz, Medical Freedom, and the Tyranny of Gender Ideology).

Being charitable (although I’m convinced of the harm puberty blockers pose to children), suppose the argument is merely that we’re still collecting and collating the data on blocking puberty during critical developmental stages. Even then, no meta-analysis definitively shows that arresting puberty during this phase of development (Tanner stages 2-4) is safe. Being as cautious as possible with respect to the science on this matter, parents—or state actors who might override parental rights—put children at risk of brain/cognitive/emotional stunting by consenting to these therapies. Therefore, governments have a responsibility to the moral order (the same ethical demands that undergird the Nuremberg Code) to safeguard children against this by regulating what doctors are allowed to do (see Medical Atrocities Then and Now: The Dark Continuity of Gender Affirming Care).

To progressives and trans activists who say such decisions should be left to children, their parents, and their doctors: while the child and parents may claim a right to these interventions, doctors have no right to pursue courses of action that may harm patients without objective evidence that the claimed benefit or need justifies the intervention. That somebody fears puberty, or for some other reason wants to avoid it, is not a reason to block it. Interventions require a legitimate medical justification, and that justification cannot be had simply because a professional association, such as the World Professional Association for Transgender Health (WPATH), has constructed “standards of care” that assert one. After all, the Church of Scientology established the Citizens Commission on Human Rights (CCHR) to legitimize L. Ron Hubbard’s doctrine of dianetics. Does that make the practice of auditing a legitimate medical practice? (See my satirical piece Dianetics in Our Schools.)

For those unfamiliar with WPATH, the transnational organization traces its roots to the work of German-American endocrinologist and sexologist Harry Benjamin. Benjamin’s 1966 book, The Transsexual Phenomenon, distinguished transsexualism from homosexuality and transvestism, argued for “compassionate medical interventions,” i.e., hormones and disfiguring surgery, and introduced a scale (later known as the Benjamin Scale) to classify degrees of gender dysphoria. (For a deeper dive into the perversion of science in this area, see my essays The Gender Hoax and the Betrayal of Children by the Adults in Their Lives; Fear and Loathing in the Village of Chamounix: Monstrosity and the Deceits of Trans Joy; Simulated Sexual Identities: Trans as Bad Copy.)

Aware of rhetoric on social media that cites the practice of gender-affirming care for so-called “cisgender” persons (a neologism assigned to those who suffer no delusions about their gender), I want to spend the balance of this essay stressing the point that gender-affirming care that actually affirms gender, which is determined by gametes, chromosomes, and reproductive anatomy, presents a different case. The retort that “gender affirming care is used all the time on cis children” is indeed true, but with a big difference: in such cases, it is appropriate medical care. I will use the example of a boy born with a micropenis to illustrate. (I have used the example before; see Gender Denying Care: A Medical and Moral Crisis.)

Suppose the parents of a boy born with a micropenis know they have a developmental window in which a doctor could provide hormones so that their son’s penis could grow to a normal size, but they decline to act because the condition does not, in their view, jeopardize his life or health. If that had been my situation, and my parents had made that judgment and done nothing, and I had reached adulthood unable to fix the problem because the developmental window had passed, I would be bitterly angry at my parents for not intervening at the optimal moment, which might have allowed me to have a normal-sized penis. My parents would have, in fact, harmed me by denying me gender affirming care of the real sort.

(A user on X objected to this example last week because he denied that parents or doctors could know whether a newborn has a micropenis. In fact, a micropenis is observable at birth. It is defined as a stretched penile length ≥2.5 standard deviations below the mean for age or gestational age, with otherwise normal male anatomy (scrotum, urethral opening, and typically palpable testes). Clinicians determine this by measuring the stretched penile length (SPL). A micropenis is a treatable abnormality—as long as the intervention is performed at the right developmental stage. The thought of parents in the grip of ideology, knowing this but doing nothing to help their son, should disturb anybody who cares about the well-being of children. The X poster never returned to drop the other shoe.)
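To make the diagnostic threshold concrete, here is a rough worked example. Commonly cited reference values for full-term newborns put the mean SPL at about 3.5 cm with a standard deviation of about 0.4 cm; these numbers are illustrative assumptions for the arithmetic, not figures drawn from this essay:

$$\text{threshold} = \mu - 2.5\sigma \approx 3.5\ \text{cm} - (2.5 \times 0.4\ \text{cm}) = 2.5\ \text{cm}$$

On those reference values, a full-term newborn whose stretched penile length falls at or below roughly 2.5 cm would meet the definition.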

Now, suppose a boy with a normal penis is born to parents who want to halt his puberty because he or the parents want to avoid the development of secondary sex characteristics. Who knows, perhaps they seek to Peter Pan the boy. Keep him in Never Never Land. At any rate, this is an instance of gender-denying care, or, as Health and Human Services Secretary Robert F. Kennedy Jr. referred to it in public remarks, “sex-rejecting procedures.” What parents would do such a thing to their child? The same parents who would Peter Pan their kid because the kid wanted it, justifying their actions as “affirming” their kid in “his identity.” Whatever the motive, the parents are supposed to safeguard the child, not harm the child because they or the child wants to stop puberty. Neither parents nor children have the right to this, just as they don’t have a right to remove their child’s limbs or digits (except in the case of extra appendages).

The House absolutely did the right thing in passing HR 3492—and the Senate should follow its lead and send the bill to President Trump’s desk. The Supreme Court would almost certainly uphold the law (see United States v. Skrmetti—The Supreme Court Strikes a Blow to the Madness of Gender Ideology). I have been waiting a long time for Congress to stop what are, by any objective ethical standard, medical atrocities. Whether woke zealots, sufferers of Munchausen’s by proxy, or parents swept up in social contagion, those responsible—and doctors, of course—must be held accountable for failing to safeguard children. No parent would affirm an anorexic child in her fat delusion (see An Ellipse is a plane figure with four straight sides and four right angles, one with unequal adjacent sides). No doctor working from a scientific or moral standpoint would remove the limbs of a normal child who didn’t want them. Why would any doctor stop puberty, lop off the healthy breast tissue of a young woman, or invert the penis of a boy who said he wanted a vagina? This is madness. Puberty blocking in the case of precocious puberty, or removal of breast tissue in a boy suffering from gynecomastia, is entirely appropriate because there is an objective medical need. There is no such need in the case of gender dysphoria.

Society needs to ensure that gender-affirming care remains available for anomalous cases where the developmental process did not unfold in the normal way, while at the same time making illegal any “medical intervention” that arrests a normal process or alters children’s bodies because children and doctors believe they’re something they are not, or because the parents have been pulled into orbit around Planet Madness. Don’t call that “gender affirming care.” Because it’s not. It’s the opposite of affirming gender. And it’s not “life-saving.” A rational person must not tolerate the practice of emotional blackmail—the weaponization of empathy—in health care (see The Problem of Empathy and the Pathology of “Be Kind”).

Image by Sora

Beyond the Realms of Plausibility: The Trump–Epstein Allegations as Moral Panic

Public discourse surrounding Donald Trump’s alleged connection to Jeffrey Epstein frequently rests on the implicit—sometimes explicit—claim that government-held “Epstein files” contain evidence that Trump engaged in illegal sexual activity with minors. Yet this claim faces a serious inferential problem: if authorities at the local, state, and federal levels—across multiple administrations and partisan alignments—have long had such evidence, the absence of investigation or prosecution demands explanation. The most plausible explanations do not support the allegation. What is more, the insinuations, which are driven by the desire to continue a partisan narrative to delegitimize Trump, are harmful to the many people who knew Epstein but did not participate in child molestation. That list may include Trump.

The “missing photo” we all had.

I am going to first interrogate the hysteria from the standpoint of criminal justice. I will then shift to a social psychological observation. On the criminal justice front, the accusations made against Trump are of a criminal nature and, as such, require prosecutors to put their ducks in a row. To be sure, Democrats and RINOs (a few openly, more of them deep down) couldn’t care less about whether Trump is guilty of crimes. What they seek is reputational damage to put Trump and the populist movement behind them and return to the status quo that MAGA disrupted. They care only that the voters believe Trump has done something untoward. They already have much of the public in a place where they perceive a picture of Trump surrounded by beautiful women as evidence of a crime, when in fact it’s merely evidence of what we already knew about the man: that he was a billionaire playboy in the 1980s and 1990s. They see a video of him kissing his wife and, not recognizing her, think he is kissing a minor. They see pictures of him with his arm around his daughter and, not recognizing her, think he is drawing near a minor. They see his name on flight logs and think the plane was headed to Epstein’s island, unable to process entries that list Trump, his wife, his daughter, and the nanny, or to bother learning that the flights were from Florida to New York, not to an island, or that one of them was one-way.

From Epstein’s flight logs

In doing the actual work of criminal investigation, and in working up an evidentiary and rational cause to pursue prosecution, it is essential to distinguish between the possession of materials and the possession of prosecutable evidence. Investigative files often contain raw, unverified information: hearsay, names mentioned in interviews, photographs, travel logs, videos, witness statements, etc. None of these, individually or collectively, necessarily meets the legal threshold required to initiate criminal proceedings, particularly for crimes alleged to have occurred decades earlier. Criminal prosecution requires corroboration, credible witnesses, jurisdictional clarity, and material evidence sufficient to meet the standard of proof beyond a reasonable doubt. Association or mention is not evidence of criminal conduct. If it were, then everybody photographed with Epstein would be the subject of a criminal investigation. Mick Jagger? Chris Tucker? Noam Chomsky? Child molesters? I hesitate to even mention their names in the context of a moral panic over a supposed widespread elite child trafficking operation.

The strongest argument against the allegation is institutional. Prosecutorial systems are neither monolithic nor perfectly coordinated. They involve numerous actors—career prosecutors, investigators, judges, and oversight bodies—often with divergent political interests. In this case, the divergent political interests could not be clearer. Neither could the convergent political interests that fuel the panic. The idea that credible evidence of sex crimes involving minors by a former or sitting president could exist and yet be universally suppressed across administrations of both parties strains plausibility. Indeed, the incentive structure cuts in the opposite direction: such a prosecution would be career-defining, morally unassailable, and politically advantageous to many actors. If Democrats have this evidence, why haven’t they moved to prosecute Trump?

Did you see presidential candidate Kamala Harris’s explanation, on Jimmy Kimmel the other night, for why they didn’t? It wasn’t a non-answer, as many might suppose; it was designed to reinforce the assumption that the Biden Administration had nothing to do with the lawfare waged against Trump during the period between his two occupancies of the White House. Does anybody believe that if Biden had evidence that Trump was a child molester, he wouldn’t have handed that information over to the DoJ? Of course, he didn’t need to. The DoJ already had that information. The Attorney General of New York, Letitia James, pursues a zombie case against the President, but doesn’t pursue him for sexual assault? Is this believable? The absence of such action strongly suggests not suppression, but insufficiency of evidence. No, not suggests—screams.

It is also mistaken to assume that a lack of public prosecution implies a lack of scrutiny. Prosecutors routinely evaluate allegations and decline to proceed when evidentiary standards cannot be met. If anybody doubts that investigators and prosecutors across multiple jurisdictions have pored through these files, then they have very little understanding of how the criminal justice system works. Do they believe that the mass of people who loathe Donald Trump are going to protect him? They lined up to try to put him in prison for centuries—on the flimsiest of charges! The reality is that declinations to prosecute are often silent, particularly in politically sensitive cases where many honorable people may be damaged reputationally or even put in harm’s way. The public is rarely informed of investigations that lead nowhere, especially when announcing them would unfairly taint individuals without a legal basis. That doesn’t mean they’re covering up crimes. It means that they’re performing their jobs conscientiously and professionally. Frankly, to think otherwise is a sign of a paranoid mind.

This is what in sociology we call a moral panic, mass hysteria, or mass psychogenic illness. This pattern of reasoning—where suspicion substitutes for evidence and absence of prosecution is reinterpreted as proof of conspiracy—fits squarely within the historical structure of moral panics. In such episodes, moral certainty precedes empirical verification, and institutional restraint is recast as complicity. I cover the phenomenon of mass psychogenic illness in several of my sociology courses. My go-to example is the Satanic Ritual Abuse panic of the 1980s and early 1990s. When students see how millions of people could believe something as absurd as a transatlantic conspiracy to enlist daycares in Satanic rituals, they’re astonished. In this case, allegations of widespread child abuse by secret networks of elites were treated as self-evidently true despite implausible claims, inconsistent testimony, and an absence of physical evidence. Irrationally, members of the public reasoned that the absence of proof demonstrated the sophistication of the cover-up. When, years later, extensive (and entirely unnecessary) reviews concluded that no organized satanic networks existed and that many lives had been destroyed by inference alone, the media dropped the matter.

The Satanic Panic was a few decades ago. But mass psychogenic illness is not a new thing. We can look centuries back to events like the Salem witch trials, which followed a similar logic. Women were hanged because people believed in witches. We can expand the sample to include the numerous witch trials that occurred across Europe in the late medieval and early modern periods, where thousands were burned at the stake because people believed in witches, and accusations substituted for findings of guilt. This is how it works: accusations function as evidence, denials are treated as further proof of guilt, and procedural safeguards are abandoned in favor of moral urgency or political objectives. In every case of moral panic we examine, we find that the belief that “something so widely suspected must be true” overrode institutional skepticism and evidentiary discipline.

Modern moral panics differ in content, to be sure, but never in form. They rely on expansive interpretations of association, elevate suspicion to certainty, and dismiss institutional non-action as corruption rather than constraint. The invocation of disappearing photographs, excessive redactions, and secret files plays a rhetorical role similar to that of hidden covens, Jewish cabals, secret societies, or underground networks: it explains why proof is insufficient (cover-up!) while maintaining absolute confidence in guilt. None of this implies that elites are uniformly virtuous or that crimes never go unpunished. The point, rather, is that serious claims require serious evidence, and that institutions—however flawed—are constrained by legal standards that moral and partisan political narratives ignore.

But again, any video of Trump and women is now portrayed as proof of his supposed crimes. The video clip below, which has 120,000 views on X, is from a November 1992 party at Mar-a-Lago. Recently divorced Donald Trump (his divorce from Ivana Trump was finalized earlier that year) is seen dancing, socializing, and briefly kissing a woman amid a group of NFL cheerleaders, all women in their twenties, attending a calendar/promotional event tied to a football game weekend. The woman he kisses? That’s Marla Maples, Trump’s girlfriend at the time. They would be married the next year. She appears prominently in the footage, and her identity is confirmed. The video also briefly shows Jeffrey Epstein and Ghislaine Maxwell arriving and observing from the sidelines. No illegal activity is depicted, and no minors appear. It’s an adult party from the early 1990s. Ever been to one of those?

In the absence of prosecutable evidence, the most parsimonious explanation for the lack of action is not conspiracy, but insufficiency. Authorities across both parties know what’s in those files. They also know they do not have the evidence they need to prosecute Trump for criminal wrongdoing. Again, for Democrats and a handful of Republicans, that is not their aim. Their aim is the delegitimization of a President and a movement that stands in the way of their announced goal: the elevation of transnational corporate power over the people. There is no conspiracy there.

The Trump–Epstein narrative illustrates a broader epistemological problem in contemporary politics: the replacement of evidentiary reasoning with moral inference. Abandoning standards of proof invites injustice in the name of irrational certainty. History tells us that moral panics rarely end well. But there is no mass hysteria to end all mass hysterias. There will be more moral panics, and people will be swept up in them because a significant portion of the population is ignorant, and the undisciplined mind is prone to see patterns where none exist. This tendency of our species has led to the needless suffering of millions across time and space, and it has proved politically useful for those who manipulate the masses for their own ends. (See also Epstein, Russia, and Other Hoaxes—and the Pathology that Feeds Their Believability; The Future of a Delusion: Mass Formation Psychosis and the Fetish of Corporate Statism.)

Identity-Based Academic Programming and the Scourge of Heterodoxy

In contemporary universities, programs such as Women’s and Gender Studies or Race and Ethnic Studies have become well-established parts of the academic landscape. These programs are often justified as correctives to historical exclusions, offering focused attention to groups whose experiences were previously marginalized within traditional disciplines. Yet, from a sociological perspective, there’s a deeper question worth examining, namely, whether academic programs organized explicitly around identity can genuinely sustain heterodox thought and robust internal critique.

My concern is not that such programs lack intellectual seriousness (although they often do), nor that the topics they address are illegitimate. Rather, it’s that programs defined by identity categories tend, culturally and structurally, to function as representational spaces for particular subgroups within the population. In doing so, they risk prioritizing advocacy, affirmation, and protection over critical inquiry. When a program implicitly understands itself as representing a community—rather than studying a phenomenon—it becomes difficult for that program to tolerate viewpoints that are perceived as threatening to that community’s self-understanding.

This dynamic is often described in terms of “safe spaces.” While the original intent of safe spaces may have been to protect students from harassment or overt hostility, the concept has increasingly expanded to include insulation from critical perspectives that challenge foundational assumptions. In such an environment, heterodox views—whether theoretical or normative—can come to be interpreted not as contributions to scholarly debate but as moral or political threats. The result is a narrowing of acceptable discourse within the very programs that claim to be dedicated to critical thinking.

To clarify this concern, consider an argument by analogy. Imagine a university establishing a program called Christian and Conservative Studies. Such a program would almost certainly be understood as pandering to a particular identity-based constituency, namely conservative Christians. If the program were genuinely critical—if it rigorously examined Christianity as a belief system, conservatism as a political ideology, and the historical consequences of their influence—it would likely provoke strong objections from the very community it ostensibly represents. Conservative Christian students would perceive the program as hostile rather than affirming, and enrollment pressure, donor backlash, or public controversy would follow.

Conversely, if the program were designed to be attractive to conservative Christian students—to function as a “home” for them, a safe space for a distinct minority in the humanities and social sciences—it would almost certainly avoid sustained critique of core beliefs and commitments. In that case, the program would not serve as a genuine locus of critical inquiry but rather as a protective ideological enclave, reinforcing shared assumptions and discouraging internal dissent. The very logic that makes the program viable as an identity-affirming space would undermine its capacity for rigorous critique.

The analogy is instructive because it reveals a structural symmetry. If we can readily see why a Christian and Conservative Studies program would struggle to maintain intellectual independence from the identity it represents, then we should at least be open to the possibility that Women, Gender, and Sexuality Studies or Race and Ethnic Studies face a similar tension. The issue is not the political valence of the identity in question but the institutional logic of representation itself. This parallel is reinforced if one imagines a Christian and Conservative Studies program organized as a space to aggressively critique the worldview of conservative Christians. It’s inconceivable that a public university would organize programs around race and gender in which woke progressive ideology and queer theory were the targets of withering criticism.

Traditional academic disciplines—such as economics, history, philosophy, or sociology—are organized around methods, questions, and objects of study rather than around affirming particular identities. At least ostensibly. This organizational structure makes it easier, at least in principle, to sustain internal disagreement and theoretical pluralism. I say in principle because policing around gender in traditional disciplines is intense. As a sociologist, I’m expected to teach gender in my courses that survey the field. I avoid dwelling in this area because the discipline long ago aligned its subject matter with the demands of critical race and trans activists. I limit myself to discussing the origins of the patriarchy in the unit on historical materialism. I tell myself that gender identity disorder is properly the subject of clinical psychology, and in terms of disciplinary siloing this is perhaps reasonable. At the same time, psychology has also aligned its subject matter with the same ideological demands.

When an academic program or discipline becomes closely aligned with the moral or political interests of a specific group, dissenting views risk being framed not as alternative explanations but as betrayals or acts of hostility. I found this out firsthand despite skirting the issue in the classroom. My writing on this platform was enough to trigger complaints. I risk more of the same if my argument in this essay is perceived as a suggestion that programs like Women and Gender Studies should be abolished.

None of this implies that inequality or power should be excluded from academic study. On the contrary, these are central concerns in sociology and related fields. My course content is centrally focused on the problems of inequality and power, just as these matters are the focus of this platform. The question is whether the most intellectually robust way to study them is through programs that are explicitly organized as representational spaces for identity groups, or whether such organization inevitably constrains the range of permissible inquiry. A further problem is that, even with the proliferation of identitarian programming providing affirming and safe spaces for students, the policing of disciplinary and interdisciplinary curricula and teaching has generalized ideological and political constraints over higher education.

If universities are committed to the ideal of critical thinking, they must be willing to ask whether certain institutional forms, curricular programming, and pedagogical practices—however well-intentioned—unintentionally trade intellectual openness for moral and political solidarity. The challenge is not to abolish these programs (although I am increasingly persuaded that they might have to go, or at least be substantially reformed by forcing them open to intellectual diversity and protecting those who present alternative viewpoints), nor to remove woke progressive teachers from their positions, but to confront honestly the structural pressures teachers face and the limits those pressures place on heterodox thought.

Without such reflection, the university risks confusing advocacy with scholarship and affirmation with understanding. Indeed, the fact that there are so few conservative students and teachers in the humanities and social sciences tells us that it already has. We cannot conduct science if explanations for behavioral, cognitive, and social phenomena are straitjacketed by movement ideology or protective empathy (see The Problem of Empathy and the Pathology of “Be Kind”). Colleges and universities (indeed, K-12) need to open curricula and programmatic spaces to other points of view and defend heterodox teachers and their materials against the manufactured orthodoxy that polices higher education to advance political objectives and movement goals. Liberal education is corrupted by ideological hegemony. It transforms the academic space into a system of indoctrination centers. It devolves the ideal speech situation into a succession of partisan struggle sessions. Those targeted for indoctrination check out. If that’s the purpose of such programming—to push conservatives out of the humanities and the social sciences—then the matter is settled: these programs must go.

Image by Sora

An Ellipse is a plane figure with four straight sides and four right angles, one with unequal adjacent sides (in contrast to a Circle)

I watched the Andrew Gold podcast heretics, specifically an episode featuring a doctor who provides “gender-affirming care.” I shared the video on Facebook last week and told my friends and followers that her arguments were circular. I want to expand on that observation here.

The doctor’s name is Helen Webberley. Gold asked her why anorexia is a mental illness (she agreed it was), but gender dysphoria isn’t. Her answer was self-sealing: anorexia is classified as a mental disorder; gender dysphoria isn’t. It used to be classified that way, she acknowledged, but it isn’t anymore—therefore it isn’t. She insisted this wasn’t something to argue about; it was simply true. Her appeal to psychiatric classification boils down to this: something is a mental illness if and only if psychiatrists currently say it is. She pointed out that homosexuality used to be classified as a mental illness, but no longer is, so it isn’t one. Once, psychiatrists said it was; now they say it isn’t. End of story.
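The circularity is easier to see when the criterion is written out schematically (my formalization, not Webberley’s own words):

$$\forall x\;\big(\text{MentalIllness}(x) \leftrightarrow \text{CurrentlyClassifiedAsMentalIllness}(x)\big)$$

On this criterion, there is no external check on classification: whatever the profession classifies as illness thereby is one, and whatever it declassifies thereby isn’t. The standard validates itself, which is exactly what makes the answer self-sealing.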

But doesn’t that admit that psychiatrists declare things true or false not because of objective findings but because their collective opinion has shifted? When was there any evidence that homosexuality was a mental illness? Homosexuality is as old as humanity itself (and practiced among thousands of other mammalian and avian species). What does that say about psychiatry as a scientific field? It doesn’t exactly sound scientific. Medicine does change, of course, but normally on the basis of observable facts and rational interpretation. Would this doctor have agreed that homosexuality was a mental illness back when the manuals said it was? I doubt it. So why lean on that argument now?

Why, exactly, did psychiatrists change their minds about homosexuality? We know it wasn’t because of new objective criteria. As I just explained, if the change had truly been driven by objective observation, they would simply have noted that same-sex attraction is a natural, cross-cultural, historically constant fact. No, it was largely political pressure. Norms governing sexuality were changing, and the gay movement accelerated that change by squeezing the medical profession and other institutions.

If psychiatrists suddenly declared tomorrow that anorexia was no longer a mental illness, would we stop treating it as one? Using the doctor’s own logic: “Gender dysphoria isn’t a mental illness; it’s about their bodies, and they know best who they are.” Fine—then don’t anorexics also know what their bodies “should” look like? Who are we to tell a starving girl she’s not actually fat? She looks in the mirror and sees fat. That’s her lived reality.

Doctors describe anorexia as a form of body image distortion: the person perceives parts of their body as larger or “too fat” even when they are objectively underweight. We diagnose this not just from what patients say but from the observable fact that they are starving themselves to death. We infer the distorted subjectivity from the objective behavior. This is not analogous to homosexuality. It is, however, analogous to gender dysphoria. Indeed, they are species of the same genus of mental illness. But allow me to continue demolishing the doctor’s logic for a little while longer. I wish to leave no doubt as to the madness of her worldview.

To demand that we “normalize” anorexia because the girl insists she’s fat is, at bottom, a demand that we all adopt the anorexic’s subjectivity as shared reality. We mustn’t call her skinny if she doesn’t experience herself that way. But if we do that, we are denying the observable fact that she is emaciated. That would be lying—and lethal. Once you abandon objective reality, why not offer bariatric surgery or liposuction to starving patients? Any doctor who did so would be guilty of blatant malpractice. A responsible physician does not affirm a starving person’s belief that she is fat; the doctor treats the illness, which may well originate in a brain-based body mapping problem.

So why is a girl who insists she is a boy treated differently? The objective fact is that she is female. She appears to suffer the same species of body image distortion as the anorexic—only in her case, the distortion is about sex rather than fatness. She perceives herself as the opposite sex when she is not. Yet instead of treating this as the delusion it parallels—one in which subjectivity is incongruent with objective reality (and some clinicians still describe it that way)—doctors affirm the delusion, prescribe cross-sex hormones, and sometimes surgically alter the body to simulate the opposite sex. They can’t actually change male genitalia into female genitalia or vice versa; they can only create an approximation. They must know they are lying to the patient.

Screenshot of Andrew Gold’s podcast heretics.

Webberley argued that since she herself, as a woman, would be horrified to have a “willie” (the term both she and Gold used), a patient should have the right to have theirs removed. The identical logic applies to anorexia: a skeletal person believes she has excess fat; she is horrified at the thought of being forced to keep it; so why shouldn’t she have the right to demand liposuction? If someone is objectively thin yet subjectively experiences fat on her body, what possible reason is there to deny her the surgery she wants?

The parallel Webberley draws between homosexuality and gender identity is completely fallacious. They are not the same thing at all. One is an objective, observable behavior that requires no medical intervention. The other is a subjective claim that directly parallels anorexia or even the delusional schizophrenic: the person believes something that is objectively untrue. What in the fuck are doctors doing affirming delusions—and getting rich doing it?

Finally, Webberley explained that she became convinced when a girl who said she was a boy managed to persuade her that she really was one. The evidence? The girl seemed to really, really believe it. I said to my wife, who was watching the podcast with me: Is this a real doctor? Are medical schools actually handing out degrees to people who base life-altering treatments on whether a patient successfully sells them their delusion? This isn’t medicine. This is a corporate model.

For the record, Webberley refused to define what a woman is. She skirted it by saying it was a “gotcha question.” But it’s not. It is the question. Unless you have a non-tautological definition, you are not even in the ballpark of defining reality. Science is not possible without conceptual definitions that capture the objective world; without them, scientific theory is impossible. This is true in mathematics as well. A rectangle is not whatever we decide to call a rectangle. A rectangle is a plane figure with four straight sides and four right angles, one with unequal adjacent sides (in contrast to a square). Sure, one may find a book that calls that geometric shape a circle (if one ever does, toss it), but it wouldn’t change what the shape is objectively. A woman is not somebody who says he or she is one. A woman is an adult female human, in contrast to a man.