Why the Israel-Gaza War Is Not Seen Like World War II—and What That Reveals About the Present Situation

I have been thinking about why progressives seem so incapable of seeing the Israel–Gaza conflict through the same historical and moral framework that they apply to World War II, particularly the struggle between the Allied and Axis powers in Europe. The relevant historical touchstone is Nazi Germany. Under Hitler, the Nazis pursued the elimination of the Jews from Europe—a process that culminated in systematic extermination. At that time, the Jews were uniquely vulnerable. They had no capacity for collective self-defense, no army, no sovereign nation. Their survival depended entirely on the eventual intervention and victory of the Allied powers. Liberation came from the outside. That fact alone is terrifying. Now that Jews have the capacity for collective self-defense, they are condemned when they use it, as the world witnessed in the war following the October 7, 2023, pogrom against Jewish citizens in Israel carried out by Hamas, the Islamist government of Gaza.

Source: The Institute for National Security Studies

An often-overlooked but historically significant part of this story of World War II involves Haj Amin al-Husseini, the Grand Mufti of Jerusalem, who rose to prominence during the British Mandate period. During the war, al-Husseini fled to Nazi Germany, where he lived from roughly 1941 to 1945 and collaborated with the Nazi regime as a propagandist, recruiter, and ideological ally. Al-Husseini was deeply antisemitic and militantly opposed to Zionism, or Jewish nationalism. In November 1941, he personally met with Adolf Hitler in Berlin. Surviving German diplomatic records make clear that their conversation concerned the “Jewish question,” particularly its future extension beyond Europe. Many Muslims in the Middle East were eager to eliminate Jews from the territories under the thumb of Islam. (See Jew-Hatred in the Arab-Muslim World: An Ancient and Persistent Hatred.)

It is important to be precise here. Neither Hitler nor the Mufti spoke in the blunt language of “extermination” or “genocide” as we would frame it today. Nazi discourse habitually relied on euphemisms—destruction, elimination, removal, solution—language that deliberately obscured intent while authorizing violence. This rhetorical indirection was characteristic of Hitler’s leadership style and of the bureaucratic culture of the Nazi regime. By late 1941, these euphemisms had already acquired a lethal meaning in practice. Mass shootings by Einsatzgruppen, Schutzstaffel (SS) paramilitary death squads, were underway, and extermination was transitioning from improvised killing to systematic policy.

Within that context, Hitler told the Mufti that Germany’s objective was the elimination of the Jewish element not only in Europe but eventually in the Arab sphere as well. He explained that Germany could not act in Palestine, historically Judea before the Roman Empire renamed it as punishment for the third and final Jewish-Roman War (second century AD), until Britain—then the mandatory power—was defeated. Once that obstacle was removed, the Jewish problem there would be “solved.” The Mufti welcomed this logic. He fully endorsed Nazi antisemitism and the Holocaust in Europe and sought to extend its application to Palestine and the broader Muslim world. He later acted on this alignment by broadcasting Arabic-language Nazi propaganda, recruiting Muslims into Waffen-SS units, and repeatedly intervening to block Jewish escape routes from Europe, including efforts to rescue Jewish children.

As history records, Germany lost the war. The Allies liberated the Jews. And, in the aftermath of that catastrophe, the State of Israel was created. For the first time in nearly two thousand years, Jews possessed a sovereign nation and, crucially, a military. The lesson drawn from history was unmistakable: never again would Jews wait defenseless for salvation from outside powers. If genocidal movements arose again, Jews would be capable of resisting them directly.

That lesson now collides with another ideological tradition—one rooted in politicized Islam, which is distinct from secular Arab nationalism. Arab nationalism, while often hostile to Israel, is not inherently fascist (no nationalism is). But movements such as the Muslim Brotherhood and especially Hamas represent something different. Drawing on Christopher Hitchens’s terminology, this tendency can be described as clerical fascism: an authoritarian, totalizing ideology grounded in religious absolutism, animated by conspiratorial antisemitism, and explicitly genocidal in aspiration. Hamas fits this description. Its founding documents, rhetoric, and genocidal and terroristic behavior make clear that its goal is not coexistence with the Jews, but their elimination in the land they inhabit. “From the river to the sea.”

On October 7, 2023, Hamas launched a mass attack on Israeli civilians. This was not merely an act of resistance or retaliation; it was an assault animated by genocidal ideology. As noted, the crucial difference from the 1940s is that Israel now exists and can defend itself. The Jewish nation need not wait for external liberation. It can respond directly. And it did. With overwhelming force. Israel’s goal in the wake of October 7 was to annihilate Hamas and liberate Gaza from clerical fascist rule. Only an international push to broker a ceasefire kept Israel from achieving that goal. I am adamantly opposed to a ceasefire and have been highly critical of the Trump Administration’s leading role in securing it.

As I noted in The Danger of Missing the Point: Historical Analogies and the Israel-Gaza Conflict, when images of devastation in Gaza circulate—civilian casualties, destroyed neighborhoods, rubble—the dominant narrative in many political and media spaces portrays Israel as the villain and Palestinians as the victims. I argued in that essay that the historical parallel between Germany’s and Hamas’s wars against Jews is rarely acknowledged. During World War II, Allied bombing campaigns devastated German cities—Dresden, Hamburg, Berlin, Cologne, Frankfurt. Estimates suggest between 350,000 and 500,000 German civilians were killed in Allied bombing campaigns and ground offensives. In Operation Gomorrah alone, carried out in 1943 against Hamburg, as many as 40,000 civilians were killed. Yet no serious moral framework treats Nazi Germany as the victim of Allied aggression, nor are Roosevelt or Churchill remembered as war criminals for prosecuting the war to defeat fascism.

This is where symbolism and historical archetypes exert extraordinary power. Before Hitler, history had its monsters—Attila the Hun, for example, was for a thousand years the human embodiment of evil—but in the decades after WWII, Hitler became the archetype of genocidal evil. A person is ethically suspect merely for donning a Hitler costume. And those whom the left despises are smeared with his name (Trump is the latest target). The swastika and other Nazi imagery carry a unique moral charge. That is why public displays of Nazi symbols are banned in many European countries and why marches under Nazi banners would provoke universal condemnation, even when the marches involve only a handful of emotionally dysregulated misfits. By contrast, other symbols associated with mass death and totalitarianism, most notably the Soviet hammer and sickle, often provoke little reaction. Indeed, today, mass marches in the streets of America proudly display the communist emblem. Hitler, not Stalin, has become the universal icon of wickedness. Correspondingly, in this context, Jews became the archetypal victims of genocidal ideology.

Something strange has happened since the creation of Israel. Now that Jews have a nation, an army, and a nationalist ideology—Zionism, or Jewish self-determination—they are cast, particularly on the political left, as the new archetypal villains. Israel is accused of apartheid, genocide, and unique moral depravity, while far worse regimes around the world receive comparatively little attention. This occurs despite the continuous Jewish presence in the land for over three millennia and despite Israel facing enemies who openly articulate genocidal aims.

This is a fact with which we must grapple: when Hamas commits atrocities, and Israel responds militarily, the moral framework is inverted. Hamas—the aggressor, animated by clerical fascism—is treated as the victim. Gaza’s population and leadership are framed analogously to the Jews of Europe, while Israel is implicitly cast in the role once occupied by Nazi Germany. Israeli leaders are labeled war criminals for defending their country, while Allied leaders who destroyed German cities to defeat fascism are rarely subjected to comparable moral judgment. This inversion collapses historical memory, ignores ideology and intent, and erases the lesson that led to Israel’s existence in the first place: that Jews must never again be defenseless in the face of genocidal movements—whether secular fascism or clerical fascism.

When I see images of devastation in Gaza, I cannot help but also think of the bombed-out cities of Germany. The suffering is real and tragic, but it is morally intelligible within a framework in which defeating genocidal regimes sometimes requires devastating force. That is the parallel I am drawing. And the reason I believe the dominant narrative so profoundly misunderstands what is happening? Antisemitism on the left. (See Antisemitism Drives Anti-Israel Sentiment; Israel’s Blockade of Gaza and the Noise of Leftwing Antisemitism.)

* * *

There is a valid and historically grounded way to understand Hamas not merely as a reactive militant organization, but as the latest institutional expression of an ideological lineage of Jew-hatred that predates the creation of Israel. This lineage is not organizational in the narrow sense—there is no unbroken chain of command or formal inheritance, although there are direct linkages, as I have established—but rather conceptual, rhetorical, and theological at its core. It consists of recurring assumptions about Jews, power, and violence that emerged in the early twentieth century and were radicalized through contact with European fascism. To be sure, its origins are much older than this; Jew-hatred is thousands of years old. But, for this essay, I am focused on the twentieth century.

The Grand Mufti of Jerusalem represents a fusion point in this genealogy of antisemitism and fascism. Al-Husseini combined religious authority with modern political antisemitism and explicitly aligned himself with Nazi ideology during Hitler’s Judeocide. While al-Husseini did not originate genocidal antisemitism, he absorbed and endorsed its most radical implications, including the legitimacy of eliminating Jews as a collective. What he took from Nazism—what he was primed to accept given the depth of Jew-hatred in the Islamic world—was not merely hostility to Zionism, but a conspiratorial worldview in which Jews were seen as a civilizational, indeed metaphysical, threat whose removal was necessary and therefore justified. No means were ruled out of bounds to achieve this end: the eradication of Jews from Muslim territories.

The Muslim Brotherhood functioned as the principal transmission belt for this worldview after the war. Founded before World War II but transformed during and after it, the Brotherhood integrated European antisemitic tropes (for example, the Protocols of the Elders of Zion, a forged antisemitic conspiracy document produced in the Russian Empire around 1902–1903 by members of the Tsarist secret police, the Okhrana, or their collaborators) into Islamist political theology and preserved them long after the defeat of Nazism. In Brotherhood literature and preaching, Jews were no longer treated simply as religious rivals or political adversaries, but as cosmic enemies embedded in a global conspiracy against Islam. Violence against Jews was sacralized, framed not as contingent resistance but as an enduring religious obligation.

Hamas emerges from this intellectual environment as a more explicit and operationalized embodiment of the same ideological framework. As an offshoot of the Muslim Brotherhood, it did not need to invent a new antisemitic worldview; it inherited one already fused with political militancy and religious absolutism. What distinguishes Hamas from earlier figures like the Mufti is not greater extremism of intent, but greater capacity for implementation. It is the Nazi project brought to Gaza to effectuate the anti-Jewish hatred embedded in Islamic ideology. Hamas institutionalized clerical fascism in its founding documents, governing structure, and military apparatus, openly articulating the goal of eliminating Jews rather than merely opposing Israeli policies.

We can thus show a direct ideological lineage, even in the absence of formal organizational continuity. The continuity lies in the fusion of conspiratorial antisemitism, religious authority, and the moral legitimization of total violence. Hamas does not represent a historical anomaly or a purely situational response to modern events; it represents the maturation of a line of thinking that originated in the interwar period, was shaped by collaboration with European fascism, was transmitted through Islamist movements, and has been adapted to contemporary conditions.

* * *

Jew-hatred is not only a problem in the Islamic world, as should be obvious to readers given the prevalence of the inverted perpetrator-victim narrative I’m describing in this essay. The inversion has been taken up by the left in the West. At the end of last year, I analyzed this phenomenon in Is the Red-Green Alliance Ideologically Coherent? There, I note that, academically, Islamist violence is best understood as a distinct form of religious extremism rather than being forced into Western left–right political categories, even though it shares traits with far-right authoritarianism. Rightwing or not, Islamism’s theocratic goals and rejection of liberal values should set it apart from Western secular ideologies.

Yet the “Red-Green” alliance, an alliance between segments of the left and Islamist movements, is driven less by ideological coherence than by shared hostility toward deontological liberalism, Western power, and the State of Israel, expressed through anti-capitalist, anti-imperialist, and identity-based narratives, as we see in rhetoric conflating religion with ethnicity, such as in the adoption of the Islamist propaganda term “Islamophobia” or the condemnation of “white” conservative Christianity. I argue in that essay that this alliance is pragmatic and historically temporary, not because the left wishes it to be so, but because Islamists will ultimately sideline their leftist partners once power is secured. (Is all that coming undone in Iran?)

The crucial point I was making in that essay is that labeling Islamism as right-wing extremism, which I am very much inclined to do given its characteristics, masks a broader convergence of leftist, Islamist, and corporate forces that collectively challenge the free and open society. In this monstrosity, contradictions don’t matter because they serve to advance the respective agendas. As long as these threats are tolerated in the West, the challenge will remain, and freedom and openness will disappear from the face of the Earth, no matter which of them prevails in the end. Although we are moving rapidly towards a one-world order governed by corporate power, presently hampered by a resurgence of populism in the West, the other possibility, a global Caliphate, is very real. (See last year’s final essay, 2025: The Year in Review and Notes on the West’s Islamic Problem.)

The point of the present essay was to explore a double standard to show that the ideological glue that holds the Red-Green Alliance together is eliminationist antisemitism combined with anti-capitalist, anti-Christian, and anti-Enlightenment sentiment. This is what determines the shifting perpetrator-victim narrative—the archetypal evil of Nazism today, the wickedness of the Judeocide testifying to that fact; the archetypal evil of Hitler’s victims tomorrow, the wickedness of Jews demonstrated by their state’s response to a genocidal death cult at its border. Indeed, when not smearing populists and nationalists with Nazi symbology, the conflation of opposites makes Zionism appear as the paradigm of modern-day fascism, obscuring the reality that Hamas, and Islamism more broadly, is not only a fascist threat in the Middle East but a fascist threat worldwide. This is why global corporate power—the other fascist threat—is using Islam to undermine the West. Today’s left supplies the project with an endless stream of useful idiots.

That such a small proportion of the world’s population, the most persecuted people in history, with only a tiny nation in a very large and dangerous world to defend the collective security interests of its people, should loom so large in the minds of the left testifies to the presence of a mass psychogenic illness in our midst. The left has been made susceptible to this madness over decades of progressive politics and the destruction of reason in the West’s primary sense-making institutions. If the left hates the Jew, then it must love the Jew’s supposed victim—the Muslim. Hence, the outpouring of support for the Somalis in the Midwest, even while this population, enabled by the Democratic Party, drains public resources and defrauds the taxpayer.

As I write this, leftist mobs are marching in the streets of Minneapolis, attacking federal officers who are enforcing immigration law and uncovering corruption. The mobs are attacking local police who are trying to contain an insurrection (whether the police know it or not). The cause was offered up a martyr yesterday, when a woman was shot while attempting to run over an ICE agent. Every attempt at reestablishing order today is warped into proof of the authoritarianism inherent in the rule of law. In this frame, self-defense against domestic terrorism becomes itself a terroristic act. The cause that yields martyrs? Cancelling the American Republic.

Such madness knows no reason. It only knows violence. And it should be met with violence. What America needs now is an overwhelming show of force wherever the useful idiots show up with destructive and violent intent.

Image by Grok

The Manufactured Perception of Moral High Ground

In recent essays, I have explored deontological liberalism through an epistemic framework that grounds rights and morality in natural law, which I argue aligns closely with Christian theism. This alignment suggests that Christian ethics serve as a valid moral system, even if one disagrees with—or proves incorrect (were that possible)—the ontological foundation of Christian theism itself. In essence, Christians arrive at correct moral principles based on an ontology that, in reality, emerges from the facts of hominid evolution and natural history, and thus natural law, sublimated as Christian theism. Other religions don’t mirror natural history in this way. This is why the recognition of universal human rights develops in the Christian world and nowhere else.

While I reserve a deeper dive into this argument for a later essay (as promised at the end of last year), the argument I am building underscores the potential for moral convergence across seemingly divergent foundations. For the present essay, the key observation I wish to make is that progressives have cultivated a widespread perception of holding the moral high ground, largely through their dominance over sense-making institutions. In truth, the progressive reference to an ethical foundation, even when articulated, is illusory, as progressivism is fundamentally anchored in consequentialism and utilitarianism—pseudoethical approaches that ultimately devolve into subjective preferences shaped by political ideology and enforced through institutional power. Woke progressivism is organized nihilism rationalized by postmodernist babble and the force of the state.

A prime illustration of organized nihilism is the institutional endorsement of medical interventions for children, such as puberty blockers, cross-sex hormones, and surgeries, treating puberty as an optional condition and gender as malleable and subject to voluntarism (see, e.g., Orbiting Planet Madness: Consenting to Puberty and Other Absurdities). Of course, “gender affirming care” does not alter an individual’s gender; gender is an unalterable binary. But understanding, science, and truth are easily perverted by ideology, especially among the highly indoctrinated segments of the population and those suffering from emotional dysregulation and psychiatric maladies. As I have shown in numerous essays, the rise of woke progressivism is associated with an effectively post-truth worldview.

To circumvent material reality, queer activists, drawing on postmodernist epistemology to give the madness the gloss of intellectual legitimacy, repurposed “gender” to detach medical and moral concepts from material science and natural history, positing instead that reality is constructed through “discursive formation,” a construction suggesting that humans call things into existence with words. They further held that the social power constituted by discourse determines the definition of words, their meanings, and usages. Reality is not, as material science would have it, an external, mind-independent thing potentially grasped with accurate and precise language; reality, such as it is, is observer-dependent and, therefore, truth is plural. The assumption of the multiplicity of truths allowed queer theorists to recast gender as an internal and fluid subjective state.

Discursive approaches yield no objective morality, reducing everything to ideology, politics, preferences, and power. What about religion? Christianity gets us closer to an objective morality than any other discourse; however, as I argue in an upcoming essay, God, as an axiom, can simply be a term denoting the objective structure of the universe, including biological truths. Resolved in this way, we observe that we are not under divine command but rather the command of natural history, which has made us human, with brains capable of sublimating nature into ethereal forms. On this ground, which readers might recognize as the Feuerbachian method (wisdom is human, imagined as divine), I advocate for rooting rights in an ontology of natural law, as conveyed through the ethical system of deontological liberalism, akin to the US Founding Fathers’ vision. In this view, puberty is not a medical ailment but a natural life stage, gender is not a subjective internal identity but an immutable binary, and “transitioning” merely simulates a sexual identity, thus making “gender affirming care” not only unethical but destructive.

It is not just gender that postmodernist thinking has “problematized”; the postmodernist project reflects a broader ideological and political strategy to establish a system in which plural truth is dictated by language manipulation and social power rather than by empirical observation or an objective moral foundation. We see the project at work in the language of “systemic racism,” which manufactures the illusion of white supremacy and roots moral action in the social justice frame of “perpetrator” and “victim.” Fallacies such as that of misplaced concreteness and unjust legal practices of collective and intergenerational punishment become possible when logic is abandoned and replaced with sophistry, and when progressives command the state.

This brings me to the problem of this essay: the false perception that progressives occupy the moral high ground. Commanding the moral high ground is useful for manufacturing consent around progressive administration of society. One of the most striking features of contemporary political debate is the asymmetry of moral confidence between progressives, on the one side, and their conservative or classical liberal critics on the other. Progressives routinely claim the moral high ground, even while operating from an ethical framework that lacks stable deontological commitments. Hence, those who appeal to rule-based moral systems—constitutional restraints, duty-bound ethics, natural rights—are frequently dismissed not through argument but through moral labeling: “chauvinist,” “racist,” and so forth. These labels function less as substantive critiques than as mechanisms of exclusion, foreclosing debate rather than engaging it. The puzzle is how such moral authority came to be established in the absence of a shared epistemic foundation. I have put the pieces together elsewhere, but I wish to elaborate on the puzzle here.

As I argued in my previous essays, Western moral and political thought traditionally rested on deontological frameworks (see Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights; Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument). As noted, this is the foundation upon which the American Republic is erected. Whether grounded in divine command or natural law, moral claims are here justified by reference to constraints and duties that bind all actors equally. Even fierce disagreements assume a shared expectation of argument: that moral claims require reasons, that means matter independently of ends, and that one’s opponent is owed charity and fairness. This framework does not eliminate moral conflict; rather, it structures it, thus avoiding nihilism, or at least instrumental reason shorn of moral precepts.

Progressive moral discourse largely departs from this tradition. As I showed in the essays cited above, its ethical orientation is broadly utilitarian, though often implicitly so, emphasizing outcomes—happiness, well-being—rather than principles. Moral weight is assigned through the lens of disparity, group vulnerability, or harm reduction, rather than through inviolable rights and universal duties. This is the style of social justice. Crucially, in the current period, utilitarianism is no longer really even philosophical in the Benthamite sense but postmodernist and sociological in the vein of Saint-Simon’s positivism and the desire for technocracy wrapped in the pretense of morality: legitimacy is conferred by alignment with narratives of historical injustice and oppressive power structures organized around identity.

It follows that analyses hailing from this standpoint—critical race theory and its ilk—define outcomes such as income inequality as definitionally racist, as if the outcome were its own cause. The moral unit is no longer the individual bound by duty, or guaranteed equality before the law, but the group defined by status. There is no need to show how outcomes are the result of racist structures; they are, on their face, racist because the system is assumed to be white supremacist; therefore, equality defined as equality of outcome is warranted, and any person who defends the system or is opposed to equity so defined is by definition a racist.

Within this framework, disagreement predictably becomes morally suspect. Indeed, if an argument is said to “cause harm” or “reinforce oppression,” then it is not merely incorrect but immoral. As a result, moral condemnation replaces rebuttal. Labels such as “bigoted” or “racist” do not function as falsifiable claims but as status judgments that expel the speaker from the moral community. Once expelled, deontological arguments no longer merit engagement; they allow the consequentialist to dismiss the argument, and morality along with it, out of hand.

This is not an accidental degeneration of discourse; it is a rational strategy within a self-proclaimed moral system that lacks deontological constraints on means, which then allows means to serve ends selected based on preference, imposed by power—not just institutional, but personal, hence the notion that arguments and opinions are violence to be met with violence. Academia, the culture industry, and mass media elevate progressivism as a moral standard while portraying classical liberals and conservatives as standing against the moral order. Violence on the left becomes extraordinary but justified. The inversion is not a swapping of moral alternatives, since the morality of the classical liberal and the conservative rests on a reasoned moral foundation, whereas progressivism does not and cannot. Again, this inversion becomes possible because of progressive and social democratic command of the West’s sense-making apparatus.

The absence of constraints undermines democracy and liberty. This is why classical liberals and conservatives have to fight to reclaim the moral foundation of the West. Deontological ethics impose limits on means independent of desired ends. Utilitarian-progressive ethics recognize no such inherent limits. If the goal—protecting vulnerable groups or reducing harms as progressives define and identify them—is sufficiently moralized, then institutional suppression, rhetorical exclusion, or reputational destruction becomes not only permissible but obligatory. Moral seriousness is demonstrated not by restraint, but by zeal. Without a real moral foundation, progressivism easily leads to authoritarianism and political extremism, which we witnessed during the COVID-19 pandemic. The evidence of this is ample (COVID-19 is just one example among many, including the organizational enforcement of preferred pronouns and the diminishment of women’s rights), but it also follows theoretically; it is the predictable result of abandoning deontological liberalism, an abandonment that represents a precondition for the corporate state to govern the masses.

Ironically, this dynamic leaves those who operate from principled moral frameworks at a rhetorical disadvantage. Deontological ethics require tolerance of disagreement, good-faith engagement, procedural fairness, and the presentation of facts, not presupposition. These ethics prohibit treating dissent as evidence of moral depravity, even when those dissenting are morally depraved. In a pseudomoral environment that rewards denunciation and punishes restraint, this commitment is mistaken for guilt or weakness. The very virtues that once defined moral seriousness—humility, rational justification, and restraint—are recast as complicity. The liberty of those judged guilty is thus, on this logic, rightly constrained.

We see this when an empirical finding inconvenient to a progressive position is dismissed or rejected, not on a rational basis, but to sustain a standpoint. The charge is that facts will be misused by the other side, which it has no moral right to do, and therefore antiwoke voices have no right to the facts. For example, Johanna Olson-Kennedy, the lead researcher on a large, federally funded study on puberty blockers and mental health outcomes in transgender and gender-diverse youth, found no clear mental health benefit from puberty blockers. Rather than release the findings, Olson-Kennedy withheld publication of key results for years because, she said, she did not want the findings to be “weaponized” by critics of gender-affirming care or used in legal and political fights over treatment for transgender youth. The absence of a moral foundation in Olson-Kennedy’s suppression of research contradicting her opinion is astounding—but not at all exceptional. The fact that Olson-Kennedy’s actions are generally unknown to the public illustrates the power of captured institutions to memory-hole research.

Does the reader now see how progressives came to occupy the moral high ground so convincingly? The answer lies in the purposeful collapse of shared metaphysical foundations. As classical liberal moral philosophy, natural law reasoning, and religious authority lost cultural legitimacy, the result of decades of progressive command of Western institutions, moral justification migrated from principles to identities. Standing with the oppressed became a surrogate for moral grounding itself. In this context, questioning the framework is no longer philosophical dissent but moral transgression. The framework immunizes itself against critique by redefining critique as harm. The result is a political culture in which moral authority is asserted rather than argued, and in which ethical language is weaponized to silence rather than persuade.

This is the essence of contemporary totalitarianism. It moves society from individualism towards collectivism via a rhetoric of identity that parallels past hierarchical arrangements, but with the presumed victims in nominal control and the alleged perpetrators becoming subjects worthy of controlling. All the while, the corporate state stands behind all of this, the presumed victims managed by valorizing their grievances and elevating them to the status of totems of white and other manufactured guilts. Collectivism lies at the core of ostensibly very different systems, namely communism and fascism, both appealing to the rhetoric of socialism. Widespread confusion among conservatives over terms notwithstanding, the contemporary case is state corporatist and therefore an expression of soft fascism, albeit one becoming hardened in many European states and under the Biden regime.

This does not mean that progressive concerns about injustice are necessarily illegitimate; classical liberals and conservatives are also concerned about injustices. Nor does it mean that deontological frameworks are beyond criticism; indeed, criticism is the point of this critique. It does mean, however, that a self-proclaimed moral system unwilling to subject itself to argument forfeits the very authority it claims; moral high ground that cannot explain itself without condemnation is not moral reasoning—it’s an illegitimate exercise of presumed moral power, which can be made into mass perception via political power.

To recover genuine moral discourse requires more than civility. It requires a renewed commitment to principles that bind all parties equally, including rules on how moral disagreement is conducted; these rules must be grounded in a common ontology if they are to be universally obligatory, and that common ontology must rest on objective grounds. Until then, the paradox will persist: those most committed to moral foundations will be treated as morally suspect, while those least constrained by moral rules will speak with the loudest moral certainty. We see the effects of this in strategic language and, sometimes, bad faith, where people convince themselves of untruths to survive. To break out of the loop, classical liberals and conservatives have to assert themselves and not allow moral entrepreneurs to bully them.

Image by Sora

The New World Order as Given

“If you’re not careful, the newspapers will have you hating the people who are being oppressed, and loving the people who are doing the oppressing.” —Malcolm X

The Guardian yesterday published an article, “European leaders appear torn in face of new world order after Venezuela attack,” that is exemplary of a revealing propaganda frame. In a crude, Orwellian inversion of reality, the British propaganda organ advancing a transnational corporate agenda explains the design of the nascent “new world order,” a system in which transnational corporate state power (TCSP), not sovereign nation-states acting on national self-interest and concern for the human rights of those suffering under the thumb of dictatorial regimes, decides which leaders are to be deposed.

A clear instance of TCSP action in this spirit is the case of Viktor Yanukovych, the Ukrainian president ousted in 2014 during the engineered “Revolution of Dignity,” a coup (in this case, a revolution-from-above) that ultimately led to the dictatorship of Volodymyr Zelensky, a leader subservient to TCSP (see History and Sides-Taking in the Russo-Ukrainian War). Another example is the long-running effort to remove an American president, Donald Trump, from power and replace him with a leader similarly subservient to TCSP (see The Conspiracy to Overthrow an American President).

TCSP action contrasts with the old world order—that is, the existing world order being held together by Trump (and Putin)—in which sovereign states, exercising authority (i.e., legitimate power), make decisions about deposing foreign leaders based on collective self-defense and the rights of people. The TCSP position rejects these principles, asserting that nation-states do not possess such authority because they are not truly sovereign. This assumption underlies the outrage over Russia’s actions in Ukraine. It is this view—not Trump’s actions in Venezuela—that actually constitutes the so-called “new world order.”

See how propaganda works? The framing presumes the prior existence of a new world order in which the United States no longer has the authority—no longer possesses legitimate power—to depose rogue leaders whose actions harm US interests and the interests of the people in their respective countries. This is analogous to presuming a social order in which sovereign (free) individuals no longer possess the right to defend themselves against others who threaten them.

In the “new world order” presupposed by The Guardian and other corporate state media, there is no right to collective self-defense. This mirrors the erosion of individual self-defense rights within states subservient to the transnational corporate order, as seen in governments systematically disarming their populations, as well as in suppressing the right of indigenous populations to protect their homelands from barbarians sent by TCSP to undermine their nations (the replacement project).

Put another way, The Guardian shifts perception by portraying what remains essential to national sovereignty—namely, the collective right to self-defense and shaping the context in which conflict may be avoided—as a radical “new world order,” rather than recognizing it for what it is: the existing world order as upheld by the defenders of Western civilization. In anticipation of TCSP subverting that right for all sovereign nation-states, the propaganda frame recasts collective self-defense against rogue regimes as something novel. The subversion has not yet been fully achieved, which is precisely why voters chose to return Trump to power: as a protective measure against the loss of sovereignty. But The Guardian asks its readers to presume the transnational order is given.

Somewhere between Venezuela and New York

Critics of Trump’s actions invoke the concept of “national sovereignty” without defining it. Propagandists have reduced the term to a glittering generality, selectively deployed to advance the TCSP agenda. (See Will They Break the Peace of Westphalia or Will We Save National Sovereignty for the Sake of the People?) Properly defined, sovereignty refers to authority, independence, and supreme power—whether of a self-governing nation or of an individual’s ultimate control over his own life. Sovereignty carries inherent rights, among the most fundamental of which is the right to collective self-defense.

While Venezuelans are out in the streets of their country celebrating the removal of Maduro from power, the progressive mob—the same mob that claims to protest against fascists and kings and calls for Jews to be driven from their homes in “Palestine” into the sea—is on the streets of America decrying collective self-defense and demanding a dictator be returned to power. Why? For the same reason that they protest against the deportation of Mara Salvatrucha. Because they stand with the TCSP agenda to deconstruct the sovereign nation-state. Progressives are the street gang of the new world order.

The propagandists flip meanings and say that Trump violated the sovereignty of Venezuela, which is like saying that a police officer taking into custody a criminal suspect, or ICE detaining an illegal alien, is violating the sovereignty of the suspect/alien, as if there were no legitimate reason for law enforcement to enforce the law against those who have abdicated their sovereignty by acting beyond human decency and moral boundaries.

Let’s recall the wisdom of Christopher Hitchens. Hitchens noted that sovereignty is derived from the people, the nation, not inherent in a regime. A government’s claim to sovereignty depends on its fulfilling the basic obligations of a state—protecting its population and respecting their rights. When a regime becomes the primary threat to its own people and to those in other nations, it voids the moral basis of its claims to sovereignty. Claimed sovereignty can thus be legitimately overridden when a state commits or enables systematic mass repression or engages in aggressive wars or terror sponsorship. Isn’t this what Democrats have been saying about the Maduro regime for years?

Trump did not “invade a sovereign nation.” His administration arrested the leader of an illegitimate regime and liberated the Venezuelan people from a dictatorship. Maduro will now stand trial in a court of justice. This is legal. But, more importantly, it is moral. This is what makes the legal piece legitimate. Venezuelans were killed in the action. Whose fault is this? Trump gave Maduro every opportunity to step down. He was even prepared to allow Maduro exile in Turkey. Maduro chose to stay. The blood is on his hands.

Progressives festoon their social media profiles with Ukrainian and Palestinian flags, and are now (predictably) out in the streets calling for the return of a dictator to power in a foreign country. The foreign flags and the anti-American, anti-Russia, and anti-Israel chants represent affronts to national sovereignty, which is precisely why progressives wave flags and chant slogans; this is because progressives are an affront to the morality of national sovereignty. Progressives stand with TCSP, so naturally they oppose the US bringing to justice those who oppress people.

* * *

One of my favorite takes on the Maduro arrest is that the action was carried out to distract from the Epstein files. This, we are told, is one of the reasons for opposing it. It was “Wag the Dog.” “Justice for Epstein’s victims!” the chant goes up. “Stay focused on the thing!” That the thing itself is a distraction is beside the point. There is no principle operating here. Trump must be derailed for the sake of TCSP.

Of course, this presumes Trump had something to do with Epstein’s crimes—or at least the distraction wants the public to presume this. Nothing Trump does can be good, so distracting from the good the President does is imperative. To demonstrate the hollowness of the claim, when Clinton’s name comes up over and over again in the files, Democrats become an orchestra of crickets.

Remember in 1998 when, during the Lewinsky scandal, Clinton sent cruise missiles into Afghanistan and the Sudan following the US embassy bombings in Kenya and Tanzania earlier that month? There was no opposition to Clinton’s actions coming from Democrats. That was not “Wag the Dog.” Nor was there Democrat opposition to Clinton’s 1999 bombing of Belgrade (see The US and NATO in the Balkans, my first published article in New Interventions).

I, too, denied that Clinton’s action in Afghanistan and Sudan was a “Wag the Dog” moment. But not for the same reason Democrats did. I was no fan of Clinton’s. I knew that the Clinton administration had substantial evidence that al-Qaeda was behind the US embassy bombings. Osama bin Laden had publicly declared war on the United States in 1996 and again in 1998 (the “World Islamic Front” fatwa), explicitly calling for attacks on Americans. Bin Laden made good on the fatwa, not only by bombing US diplomats in Africa, but, while the Bush Administration pretended to be asleep at the switch, on September 11, 2001.

Why is Trump’s action in Venezuela not “Wag the Dog”? Trump told the world from the beginning of his second term that he was reestablishing US hegemony over the Western Hemisphere because of the threat posed to the United States by the Chinese and Russian presence in the Americas. That’s what the Panama Canal intervention was about. This is why Trump talks about Greenland. Venezuela is of a piece with these moves. (See my January 9, 2025 essay Monroe Doctrine 2.0.)

Democrats have been talking about the threat the Maduro regime presented to US national security for years. Yet they did nothing about it. Trump did something about it. Now it’s “Wag the Dog.”

The New Orwellian Slogans

The United States was founded neither on Christian theism nor on divine command doctrine. Nor is it founded on the utilitarianism that guides progressive immorality. The American republic is grounded in Christian ethics, democratic republicanism, and deontological liberalism rooted in natural law. Individualism is the great achievement of the West. That great achievement is imperiled by the Red-Green Alliance and the Democratic Party.

Sharia is incompatible with Christian ethics because it is based on divine command doctrine and the rejection of individualism. Islamic culture is incompatible with American values because it is anti-Enlightenment and illiberal. It represents an authoritarian and totalitarian vision of the future—and it’s already substantially present in the West. We’ve let down our guard, with considerable help from enemies within—progressives and social democrats.

Progressives hyperventilate over “Christian nationalism,” yet they show no concern about Islamization. They speak endlessly of justice and liberation, but their actual project is to place all people under the authority of the corporate state. They seek administrative rule and technocratic control of every societal institution according to the principles of corporate statism. Democracy will die in the darkness this brings.

This is the core of the Red-Green Alliance. What progressives and Islamists share is a loathing of the West and an embrace of subjection. The alliance was symbolically affirmed yesterday by a mayor, sworn in on a Qur’an, who now governs arguably the most important city on the planet—the financial hub of the global capitalist economy. Progressives backed this outcome every step of the way. In both New York City and London, progressives and social democrats have elevated Islam to positions of global political power.

The future of freedom is on the ballot this November. If Republicans lose, the beacon of freedom and democracy will be extinguished. Progressives will usher in a new Dark Age. We will be dispossessed, and our daughters will wear the veil.

You think this is hyperbole? Study history. Islam is not a religion of peace; it is an ideology of war. Nor is collectivism the path to liberty. Collectivism is liberty’s antithesis. The historical record is clear. Wherever Muslims take power, freedom and democracy are replaced by tyranny and violence. Christian ethics point toward individual autonomy and liberty. Sharia leads to subjection. This is what “Islam” means—literally: submission and surrender. And wherever collectivism holds sway, the people are not free.

Why did we fight national socialism and Soviet communism? Were we merely waiting for the Red-Green Alliance? Nazism and Stalinism were evil, but corporate statism and Mohammedanism are acceptable? There are enough of us who see what is happening, who love freedom and hate tyranny. Now we must turn that understanding into action.

2025: The Year in Review and Notes on the West’s Islamic Problem

Yesterday’s essay, Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument, my 316th publication of the year, capped off Freedom and Reason’s most successful year to date, surpassing last year’s record, which had previously been my best. The platform saw a 170 percent increase in visitors and a 90 percent increase in views compared to 2024. Since 2020—the year my blog gained traction as people sought reliable information on the pandemic amid widespread censorship and deplatforming on Facebook, Twitter, and YouTube—Freedom and Reason has experienced a 2,198 percent increase in views and a 2,180 percent increase in visitors.
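To make the cumulative figures concrete (a quick check using the standard percent-change formula; the arithmetic and rounding are mine), a 2,198 percent increase in views means views have multiplied roughly twenty-three-fold since 2020:

\[
\text{percent increase} = \frac{V_{\text{new}} - V_{\text{old}}}{V_{\text{old}}} \times 100 \quad\Longrightarrow\quad \frac{V_{2025}}{V_{2020}} = 1 + \frac{2{,}198}{100} \approx 23
\]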

I noted that yesterday’s essay was meant to cap off the year. However, with Zohran Mamdani set to be sworn in as New York City mayor on January 1st, taking the oath of office on the Qur’an, surrounded by progressives, in the shadow of occupied Europe, something must be said about the coming year and the specter of Islam. When I put the problem this way, Islamophiles will suggest a parallel to antisemitism in Europe (ironic in light of the virulent antisemitism on the left and among their Muslim comrades). They will accuse me of “Islamophobia.” When patriots urge mass deportation of Muslims, they accuse them of advocating “ethnic cleansing.” But the parallels are false, and readers need to be armed for the New Year to confront these lies. So, I will cap off the year with a demographic comparison between Jews and Muslims and a note about its significance to the future of your family and your nation.

Jews represent an outsize threat to the Red-Green Alliance. These two planets orbit around a Jewish star—and they hate its light, wishing instead for a shroud of darkness. They seem to hate Jews more than they hate Christians. Their common mass delusion flies in the face of a historically unique demographic position, both in absolute size and in relation to political sovereignty. This is as true today as it was when Hitler was in power in Germany. At its historical peak, on the eve of World War II, the worldwide Jewish population is estimated to have been approximately 16–17 million people. At that time, the world population stood at roughly 2.3 billion. This figure represents the highest proportional share Jews have ever held in recorded history. Even at this peak, Jews remained a very small global minority, well under one percent of the world’s population (about 0.7 percent of humanity). And they had no country, their homeland controlled by foreign powers.

German national socialism murdered millions of them in the European diaspora. The Holocaust did not merely reduce the Jewish population in absolute terms; it ensured that Jews would never regain their former demographic position relative to the rest of the world. Today, the global Jewish population is estimated at approximately 14–15 million, while the world population exceeds 8 billion. As a result, Jews now make up roughly 0.17–0.19 percent of humanity—less than one-fifth of one percent. This represents not only an absolute shortfall compared to 1939, but a dramatic relative decline caused by the explosive growth of the global population in the postwar era.
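The arithmetic behind these shares is straightforward (I use the midpoints of the estimates cited above; the rounding is mine):

\[
\frac{16.5 \times 10^{6}}{2.3 \times 10^{9}} \approx 0.007 \approx 0.7\ \text{percent (1939)}, \qquad \frac{14.5 \times 10^{6}}{8 \times 10^{9}} \approx 0.0018 \approx 0.18\ \text{percent (today)}
\]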

This demographic reality is inseparable from the political fact that there is only one Jewish-majority country in the modern world: the State of Israel, first and nobly recognized by the United States of America. Since its founding in 1948, Israel has been—and remains—the sole sovereign nation-state in which Jews constitute a majority of the population. In the modern international system, there has never been more than one Jewish-majority country. This fact stands in stark contrast to most peoples, including far smaller ethnic or national groups, who often possess multiple states or enjoy majority status across contiguous regions.

Historically, Jewish-majority polities did exist in antiquity. There were the biblical kingdoms of Israel and Judah, and later the Hasmonean kingdom. However, these entities were pre-modern, small in population, and embedded in the imperial systems that organized the world before the rise of sovereign nation-states in the modern period. They were not contemporaneous in a way that would amount to multiple Jewish countries as the term is understood today. Thus, even across millennia of history, the combination of extreme demographic minority status and political singularity remains consistent. This fact puts Jews in a perpetual state of danger. This is why the perennial problem of antisemitism must constantly be surveilled and checked.

Taken together, these facts underscore the unusual position of the Jewish people, a civilization with continuous culture, identity, and religious tradition spanning thousands of years—the tradition that underpins Christianity, the faith that gave the world the Enlightenment, deontological liberalism, i.e., human rights, individualism, natural law, and a return to the species-being lost with the rise of social segmentation and submission to gods and kings—comprising a vanishingly small share of humanity, possessing exactly one majority state in a universe of 193 sovereign nations, yet seen by progressives and Muslims as a problem to be confronted. Understanding this demographic context is essential for making sense of Jewish history, modern Jewish political thought, and the disproportionate symbolic and political weight that questions surrounding Israel continue to carry in global discourse. One might suppose that those who loathe the West loathe the Jews in particular, since the Western system of law and justice is rooted in Jewish doctrine. Max Weber was right when he observed that ancient Judaism is the historical hinge between East and West.

The global Muslim population presents a demographic profile that is the mirror opposite of that of the Jewish people, both historically and in the contemporary world. Around 1939—the moment of the historical peak of the Jewish population—the number of Muslims worldwide is estimated at approximately 550–600 million, meaning Muslims constituted about 24–26 percent of humanity. Even before the mid-twentieth century, Islam was already one of the largest global religious communities, spanning vast geographic regions across Africa, the Middle East, South Asia, and Southeast Asia. Unlike the Jewish population, the Muslim population experienced no comparable demographic rupture in the twentieth century. Instead, it expanded rapidly due to high fertility rates, population growth in the Global South, and the absence of a single catastrophic event comparable to the Holocaust. It was, for the most part, spared the horrors of WWII. Today, the global Muslim population is estimated at approximately two billion people, placing Muslims at roughly the same share of the global population as they held in 1939.
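The same back-of-the-envelope check makes the asymmetry with the Jewish figures plain (again using midpoints of the cited estimates; rounding mine):

\[
\frac{575 \times 10^{6}}{2.3 \times 10^{9}} = 0.25 = 25\ \text{percent (1939)}, \qquad \frac{2 \times 10^{9}}{8 \times 10^{9}} = 0.25 = 25\ \text{percent (today)}
\]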

However, although Muslims constituted roughly the same share of the world’s population on the eve of World War II as they do today, this continuity in percentage obscures the extraordinary growth of Islam in absolute terms. Maintaining a quarter of humanity across a period of unprecedented global population expansion—driven especially by explosive growth in Africa and Asia—means that Islam added well over a billion adherents in less than a century. Relative to Christianity, whose global share has declined as European populations aged and fertility rates fell, Islam’s growth has been sustained by higher fertility, younger age structures, and rapid population increases in the Global South, making absolute population growth the more accurate measure of Islam’s demographic expansion than percentage share alone.

The political implications of this demographic scale are striking. In contrast to Jews—who, as I noted, have exactly one Jewish-majority country—Muslims form the majority population in more than 45 sovereign states today. These range from large, populous nations such as Bangladesh, Egypt, Indonesia, Pakistan, and Turkey to smaller states in Africa, Central Asia, and the Middle East. Many now-Muslim-majority nations were once majority Christian. Crucially, Islam is not merely a global religion; it is a civilizational system embedded in legal traditions and political cultures. These traditions and cultures are intrinsically totalitarian. Muslims are hardly the vulnerable minority Westerners are told they are; Islam is an aggressive political project with two billion adherents. Islam is on the march. Jihad is here.

One need not speculate about the future. Shakespeare told us in The Tempest, “What’s past is prologue.” The Islamic political project is a very old one with a clear record of aggression. Muslim-majority polities have existed continuously since the seventh century, beginning with the early caliphates and later empires such as the Abbasid, Mughal, Ottoman, Safavid, and Umayyad empires. While these were pre-modern entities, they governed immense populations and territories and were often dominant global powers. Unlike the Jewish case, Muslim political sovereignty has been expansive, overlapping, and enduring across centuries, even as specific empires rose and fell. One may appreciate Islam’s tenacity; one must also appreciate why this tenacity represents an existential threat to freedom and reason.

Taken together, these facts and this history highlight a profound demographic and political asymmetry. When the Jewish population stood at its historical peak, Jews constituted less than one percent of humanity, while Muslims already represented roughly a quarter of the world’s population. Today, that proportional share remains largely intact, with Muslims comprising nearly one in four human beings on Earth and forming the majority in dozens of the planet’s sovereign states. This contrast is not a neutral demographic reality, nor should it be rationalized away through appeals to religious liberty; the contrast is essential to consider for any serious comparative discussion of politics and power in the modern world.

In 1939, Europe was overwhelmingly Christian and religiously homogeneous, with Islam present only in marginal numbers, largely confined to small communities in southeastern Europe (such as Albania, Bosnia, and parts of the Balkans) and colonial-era diplomats, sailors, and students in major cities. Muslims made up well under one percent of Europe’s population then, and Islam was not perceived as a permanent or growing presence within European societies. Europe at that time was young demographically, with relatively high fertility rates, and it was a net exporter of people through emigration rather than a destination for large-scale immigration.

Today, Europe’s demographic landscape has changed profoundly. Islam has become the continent’s fastest-growing religion, driven by post–World War II labor migration, refugee flows, and higher fertility rates among Muslim populations compared to native European populations. Muslims now constitute roughly 5–8 percent of Europe’s population overall, with much higher concentrations in countries such as Belgium, France, Germany, the Netherlands, the United Kingdom, and Sweden, especially in the cities. At the same time, Europe’s indigenous populations have experienced aging, low fertility, and, in many cases, population decline, while Muslim communities remain significantly younger on average. As a result, Islam has shifted from a marginal presence in 1939 to a visible and structurally embedded component of European society today, reshaping the continent’s religious, cultural, and demographic future in ways that have no historical precedent in modern European history.

Why are young people in the West pathologically obsessed with the State of Israel, wishing to see an indigenous people driven out of its ancestral homeland, while embracing the Islamization of the West? Why are social democrats in Europe and progressives in America defending the Muslim takeover of Western cities and communities? Why are Christianity and cultural integrity among European populations seen as bigoted and racist, while Islam and the refusal of Muslims to assimilate to Western culture are celebrated? Why would gay advocates embrace an ideology that kills or otherwise marginalizes homosexuals? Why would feminists embrace an ideology that subjugates women and reduces them to second-class citizenship? Why would Democrats import millions of Muslims into the United States and establish religious enclaves for them at taxpayer expense?

Why aren’t Christians rising against the Islamization of their homelands as they did centuries ago? As I noted in my recent essay, Trump and the Battle for Western Civilization, it was Christians, including militant monks, who repelled with violence the Muslim barbarians, drove them from Europe, and secured the future for Christianity. Had they not acted when they did, I argued, there would be no Europe. No Europe, no America. No Enlightenment. No human rights. Only clerical fascism. If Christians fail to act now, there will be no Europe, no America, no Enlightenment, no human rights—only clerical fascism. Our civilization will be destroyed and our history erased. Don’t feel relieved by the unrest in Iran today. Islam has experienced unrest before. Islam can only be contained by men of the West.

Where are those men? They’ve been emasculated. Western nation-states have been corrupted by leftwing cultural self-loathing organized by corporate state power seeking a new world order in which the working people of the world are to be managed on high-tech neo-feudalist estates. The barbarians are inside the gates of our cities, and those who let them in and keep them here are our own citizens. Civilizational destruction is wrapped in the language of empathy and humanitarianism. As I explain in my most recent essays—The Problem of Empathy and the Pathology of “Be Kind”; Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights; Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument—this is a fake human rights rhetoric, one that conceals the work of instrumental reason sans deontological commitments in the service of transnational power. Those who govern the West have abandoned the democratic-republican tradition and classical liberal values for a reason.

For more of my writings on the Islamic problem, see: Is the Red-Green Alliance Ideologically Coherent?; The Decivilization Process: The Islamization of Western Societies; The Law of Allah is Coming for Your Freedom; Woke Progressivism and the Party of God; Corporatism and Islam: The Twin Towers of Totalitarianism; Whose Time Has Come?; The Islamophilia Problem; Immigration, Colonialization, and the Struggle to Save the West; “Free, Free Palestine!”; Antisemitism Drives Anti-Israel Sentiment; Revisiting the Paradox of Tolerating Intolerance—The Occasion: The Election of Zohran Mamdani; Defensive Intolerance: Confronting the Existential Threat of Enlightenment’s Antithesis; What Islamization Looks Like; The Islamization Project on US College Campuses; Selective Condemnation of Cultural Integrity: The Asymmetry of Anti-Colonial Thought; Indigenous English Rise Against Modern-Day Colonialism

Not a happy note to end on, I know. But if we are to make 2026 a happier year, then we need to know the lay of the land. Knowledge is power, but only if it is mutually possessed. For my leftwing comrades, know that you cannot hope for democratic socialism when the future world won’t even have the capitalist tools to work with, but will instead suffer elite control via corporatist arrangements—managed democracy and inverted totalitarianism (to borrow Sheldon Wolin’s terms). We already live in a world where these controls affect our daily lives. What open eyes can see coming is already substantially present. See what you see.

I appreciate your patience regarding the limitations of WordPress, particularly the table of contents function. Due to the sheer volume of essays, I have been unable to update the table of contents since my September 15 essay, The Fool Has Come Down Off The Hill. But Who Called on Antifa to Terrorize the Village? That essay followed up on Charlie Kirk’s Killer is in Custody and the Specter of Antifa, which was picked up by Real Clear Politics and became the most viewed essay in the history of this platform. This exposure elevated my profile and is the primary reason this year has been so successful. WordPress has explained that the table of contents simply cannot accommodate this many entries. While I have removed most pre-2018 titles from the contents, the problem persists. This is not a criticism of WordPress, just an explanation to readers why this is the case.

In light of these limitations, I am grateful to my readers for visiting, reading, and sharing the platform. As 2025 draws to a close, it has been a momentous year—one I have chronicled extensively. In 2026, I plan to continue producing content, including the revival of the FAR Podcast. I discontinued the podcast on YouTube years ago due to severe deboosting (I did not want to lose the platform, as I used it for online teaching). However, with Rumble gaining traction and YouTube relaxing its rules, I am preparing to relaunch. The studio is under construction, and you will be the first to know when it goes live. Approaching retirement, I do not intend to stop teaching and writing. Readers can expect a series of essays and podcasts drawing on my knowledge of sociology, anthropology, and psychology, offering polemical lectures on topics ranging from crime and punishment to the corrupting influence of postmodernist ideology. I’m just getting started!

I would like to close with a request—not for money; as a salaried state employee, I feel obligated to do the people’s work. Instead, I want to encourage you to share this platform with others. Despite its growth, Freedom and Reason remains a relatively low-traffic platform. If you find these ideas valuable, it is likely others will, too. So let family and friends know about my work.

Happy New Year, everybody!

Image by Sora

Moral Authority Without Foundations: Progressivism, Utilitarianism, and the Eclipse of Argument

In my previous essay, Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights, I argued that deontological liberalism—the secular moral foundation of the American Republic—draws heavily on principles rooted in Christian ethics, yet remains fully intelligible and defensible without religious belief. Against contemporary tendencies to reduce morality and politics to ideology, preference, or utility, I claim that any society committed to human dignity, individual rights, and the rule of law requires a reflective epistemic foundation in which moral truths can be shown to exist independently of human opinion—i.e., a stance-independent foundation. In that essay, I cited YouTube debater Andrew Wilson as having inspired the essay. I do not agree with Wilson’s argument that Christian ethics necessarily require divine command, but I will take that up in a future essay in which I will present my moral argument, which rests on natural law.

However, I argued that progressives, whose politics many Americans know simply as liberalism (failing to distinguish the tendencies), are not liberal in the deontological sense but are instead utilitarians, for whom ends justify means, however immoral those means may be. Progressives dress their moral impoverishment in the language of “empathy,” an early twentieth-century term derived from the German Einfühlung, a matter I wrote about in a recent essay, The Problem of Empathy and the Pathology of “Be Kind.”

Anticipating that future essay (which will have to wait until the new year), I concluded my last essay by demonstrating that atheists and humanists can coherently operate within this framework. In that case, their moral reasoning—particularly in opposition to authoritarianism and in defense of human dignity—would exemplify a secular form of deontological liberalism grounded in the universal moral insights of Christian ethical thought, especially the inviolability of the individual and the moral limits of political power.

In today’s essay, I explain how progressives claim the moral high ground despite having no certain epistemic foundation for organizing a moral ontology. Readers may have noticed a widespread perception that progressives are the moral ones, perhaps excessively so, whereas classical liberals and conservatives lack empathy for the downtrodden and marginalized (migrants, trans kids, etc.). However, progressivism’s moral relativism in particular exposes it as morally impoverished, since this view provides no deontological basis for appealing to rights. Moral relativism is the view that what is morally right or wrong depends on cultural, personal, or social contexts rather than on universal moral principles. This renders human rights impossible.

Historically, moral claims in the West have been grounded in a deontological framework. On secular grounds, these are constitutionalism, natural law, and rights understood as pre-political constraints. Here, moral disagreement takes the form of argument: Are these duties real? Are these rights correctly specified? Are the means legitimate regardless of ends? Even when people disagree sharply, there is at least a shared expectation that one justifies moral claims by appealing to principles that are binding for everyone, including oneself. By rejecting the republic’s foundational deontological framework, progressivism represents an authoritarian tendency in American politics and in the West generally.

Progressive moral discourse (such as it is) breaks with the American tradition. Its authority does not rest on fixed moral precepts or universal duties, but on outcomes, e.g., the reduction of harm to designated vulnerable groups, selectively chosen to advance ideological and political goals. Again, this is a form of utilitarianism, but one filtered through sociology (yes, my discipline—and not just in its warped form—has played a central role in the corruption of moral understanding) rather than philosophy: moral weight is assigned by group status, historical grievance, and measured disparities.

Crucially, because the metric used in the progressive standpoint is harm reduction and promotion of happiness (as progressives define it) rather than principle, disagreement over means is treated as evidence of moral defect. If an argument is said to “cause harm,” then the arguer is not merely wrong but immoral. He is a “bad actor.” That is why disagreement is moralized (as a rhetorical or strategic act) and personalized in the progressive worldview, rather than addressed substantively.

This shift explains the prominence of moral labeling. Terms like “bigoted,” “Islamophobic,” “nativist,” “racist,” “transphobic,” and “xenophobic” function less as descriptive claims (defined by progressives in any case) than as status judgments, marking someone as standing outside the moral community. Once a person is assigned that status, his arguments no longer require engagement. The targeted man is effectively erased as a citizen with the right to speak his mind and engage in the political process. It’s an easy jump from here to perpetrating violence against him. The cases of progressive violence against conservatives are mounting.

This is not accidental; it is a feature of a moral framework that lacks deontological limits. If there are no inviolable duties, then exclusion and violence become legitimate moral tools. Moral high ground is asserted not by coherence or consistency, nor by reference to an actual moral epistemic, but by alignment with the approved moral narrative. It is only nominally moral. Arguably, there is no amoral stance among humans, since to act outside a moral order is itself to engage in immoral behavior. Philosophers like Aristotle, Kant, and many virtue ethicists agree that choosing to stand outside a moral order is itself a moral choice, and thus open to moral judgment. Progressives cannot rationally escape the dilemma.

The tactical irony is that those who do operate from an epistemic moral foundation—constitutional restraints, natural rights, rule-based ethics—are especially vulnerable to this tactic. The deontological framework I have outlined requires toleration of disagreement and restraint in judgment (this framework provides the rules for Jürgen Habermas’s ideal speech situation, elaborated in his 1981 The Theory of Communicative Action); it prohibits treating opponents as morally illegitimate merely for disagreement or dissent. Utilitarian-progressive frameworks, by contrast, have no such internal brake. If the end is moralized strongly enough, almost any rhetorical or social means become justified.

The question today’s essay addresses is how progressives came to be seen as holding the moral high ground. The short answer is that this has occurred largely because of the collapse of shared metaphysical commitments. As classical liberal moral philosophy, as well as natural law and religion, lost cultural authority, the language of moral legitimacy migrated from principles to identities. Claiming to stand with the “disadvantaged,” “downtrodden,” “migrant,” “oppressed,” and “victims” became a surrogate for moral justification itself. In this environment, to question the framework is not seen as philosophical dissent but as moral betrayal.

The longer answer will come in another future essay in the new year. Readers won’t have long to wait. It will suffice to say for now that the asymmetry I’m describing is real, and I wanted to cap off the year with this observation. I’m confident most readers recognize this reality. It is not that all progressives lack a moral framework altogether; rather, their framework treats disagreement as a moral failure and treats the label as a sufficient rebuttal. Those committed to deontological ethics appear to be in the weaker moral position, not because their foundations are thinner (quite the contrary), but because they refuse to abandon reasoned argument for moral denunciation. Ironically, that restraint—once the hallmark of moral seriousness—is now portrayed as guilt.

The dilemma, then, is that those who operate from a deontological framework, incorporating charity, compassion, sympathy, and tolerance, confront those who have no moral foundation who advance morally illegitimate positions. At some point, those who work from deontological commitments are going to have to assert their epistemic authority over those operating without one and insist that, if anyone stands beyond the pale, it is the person claiming the moral high ground without a coherent moral epistemic.

Image by Sora

Epistemic Foundations, Deontological Liberalism, and the Grounding of Rights

This essay argues that deontological liberalism, the ethical foundation of the American Republic, rests on principles derived from Christian ethics, yet it can be coherently embraced without religious commitment. While contemporary debates often treat morality and politics as matters of ideological allegiance, preference, or utility, I contend that a reflective epistemic foundation—one in which moral truths exist independently of human opinion—is essential for any society that seeks to protect human dignity, individual rights, and the rule of law. I conclude by showing that committed atheists and humanists can operate from this ethical framework. Their moral reasoning, particularly in resisting authoritarianism and defending human dignity, illustrates a secular deontological liberalism grounded in the universal moral insights of Christian ethical thought, which prioritizes the inviolability of the individual and the limits of political power. Put another way, one need be neither conservative nor Christian to embrace a valid moral ontology.

A little more than a year ago, on December 23, 2024, I published an essay, Rise of the Domestic Clerical Fascist and the Specter of Christian Nationalism, in which I argued that one of the rights government is compelled to defend is religious liberty, which necessarily requires freedom from religion as well as freedom of religion, since a person cannot be free to practice their faith (or no faith at all) if they are not free from the demands of the faith of others. This is why, I argued, Islam is incompatible with freedom: Muslims believe juridical and political authority comes from Allah and must be administered by religious clerics. I warned that Christian nationalism risks the same problem, and that the United States must remain a secular republic tolerant of the rights of believers and disbelievers alike.

America is founded on an entirely different premise than that of the Islamic clerisy. So central is secularism to the US Republic that the Constitution explicitly states that no officeholder can be required to swear allegiance to any god (John Quincy Adams took the oath of President on a book of secular law). Article VI states that “no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States.” The First Amendment guarantees citizens freedom of conscience. The Constitution is the supreme law of the country. It is a secular constitution for a secular nation.

At the same time, the deontological liberalism that underpins the logic of the American Republic emerges from the ethical system that grew out of Christian civilization, especially following the dissolution of Catholic hegemony with the Protestant Reformation, which allowed Christianity to arrive in its fully developed form as a doctrine of individualism and human dignity. From this moment, the Enlightenment was born, and the American Republic became a possibility. The patriots, most of them Protestants, who threw off the English monarchy and established what is now the world’s oldest constitutional republic, seized that moment in history. We owe them a profound debt of gratitude for their bravery and wisdom.

Although I am critical of Christian nationalism, it is a fact that America’s founding was a product of a secularized form of Christian ethics, in which moral ideas shaped by Christianity were translated into political principles without requiring theological assent. The founders drew on Christian moral assumptions—the inherent dignity and moral equality of persons, the duty to restrain power, the importance of conscience, and concern for justice—while grounding them in natural law, reason, and self-evident truths rather than explicit revelation. Concepts such as human rights, limited government, and the rule of law reflect Christian ethical roots reframed through Enlightenment thought. Here, sin is reinterpreted as human fallibility requiring checks and balances, and love of neighbor is expressed institutionally through ordered liberty and the protection of individual rights. Thus, America’s founding embodies a secular Christian ethics: morally indebted to Christianity, but politically articulated in universal, non-sectarian terms.

In this essay, I present the epistemic foundation that has guided my thinking throughout my life, admitting that at times I fell under the spell of progressive ideas antithetical to that foundation. I am moved to write this essay because of Andrew Wilson. Wilson is an American commentator and host of The Crucible, a long-form debate-style podcast and livestream focusing on culture, gender, politics, and religion. He debates others far and wide and often asks his interlocutors to detail the epistemic foundation upon which they erect their arguments.

In a recent interview with Patrick Bet-David (the PBD Podcast), Wilson contends that most people do not know why they believe what they do. Instead, they repeat talking points provided for them by their tribe. As such, their arguments do not stem from an epistemic that organizes ontological truth. They routinely fail to establish a stance-independent foundation, where truth and validity do not depend on attitudes, beliefs, perspectives, or preferences, but rather systematically determine these. What is required for reason, he argues, is an epistemic that holds regardless of what anyone thinks about it. In other words, individuals do not have their “own truths.” There is a truth, and we all live in that truth regardless of ideology or politics—whether we know it or not.

One of those truths is the fallibility of man. This applies to me as much as anybody else. It is because I recognize my own fallibility that I have changed my opinions over time—and it was in moments when I allowed the tribe to determine those opinions for me that I strayed from the epistemic that guided me from early childhood.

For example, there was a period in my life when I accepted the premise that communism was, in principle, a good thing, since capitalism is an exploitative system (exploiting man and nature). I defended things that communists did that improved the lives of people. To be sure, the Soviet Union accomplished some amazing things, which I document in publications over the years (see, e.g., my 2003 article The Soviet Union: State Capitalist or Siege Socialist? published in Nature, Society, and Thought). However, in defending communism, I had to play up the accomplishments and downplay the terrible things the communists did to achieve those advances, obscuring the fact that the accomplishments came at the cost of tens of millions of lives, and, moreover, that similar achievements were possible without communism—indeed, greater achievements than communism could muster under the best conditions.

I was awakened to this by a deep dive into the work of George Orwell, whom I have featured in several essays on this platform. I learned from Orwell’s biographer, Christopher Hitchens (who also wrote biographies of Thomas Jefferson and Thomas Paine), that Orwell was often asked why he did not dwell on the problems of fascism, instead focusing his high-powered perception on the tragedy of communism. Of course, Orwell did have things to say about fascism (he took a bullet during the Spanish Civil War). But he focused on the horrors of communism. Why? Because, Hitchens explains, Orwell was surrounded by intelligent people—academics and scholars—who could, on the one hand, see the horrors of fascism, yet, on the other, ignore them in communism. Orwell could see people rationalize the double standard. Readers of this platform have likely heard others say, “Communism has never been executed properly, but the ideas are good and worthy of consideration,” never stopping to consider that faithfully following those ideas to their logical conclusion is what led to the atrocities they themselves, reluctantly and dismissively, admit.

When I returned to the liberal foundations of my thinking after having been pulled into orbit around leftist ideology during graduate school and my early years as a college professor, I re-examined my beliefs and found that I was inconsistent. I realized that if I did not work from principle every time—judging every event, trend, and thought in terms of those principles—I would reach bad conclusions. I was working from a double standard. I knew double standards were irrational, but I had allowed myself to work from them nonetheless.

For example, I fell under the mistaken belief that only white people could be racist, in the sense that, since whites controlled society, their ideas of racial hierarchy had an effect, whereas the ideas of racial hierarchy among black racialists, in their powerlessness, could not. To borrow the language of philosopher of science Imre Lakatos, I erected around me a protective belt of auxiliary hypotheses (developed through positive heuristics) to defend the hard core of my research program (shielded by its negative heuristic). Under self-interrogation, I realized that I was committing the fallacy of misplaced concreteness, treating abstractions as if they were real things, which pushed my liberal commitments to individualism to the margins. I had to bracket enlightened thinking to sustain an ideological worldview that had no rational grounding. I was an atheist working from heavenly and idealist ideas rather than earthly materialist ones.

It was in self-interrogation that I came to understand that liberal Enlightenment carries an epistemic foundation, and that foundation lies in Christian ethics. Andrew Wilson’s observations put what I have been working on over the last several years into perspective. Even though I do not subscribe to Christian theology, I recognized that the ethics emerging from the Reformation—the recognition, within a religious tradition of individualism, of the objective reality of human existence, which includes the anthropological and sociological truth that we are social beings who must live collectively—demonstrate the validity of limited government.

This religious tradition forms the basis of republican government, and the liberalism that gives rise to it is not utilitarian but deontological: rather than allowing ends to reduce the means to amoral and instrumental actions, it requires that the means themselves have moral justification. (Wilson criticizes utilitarianism in a recent seven-hour debate with Mark Reid pitting Christianity against secular humanism.) Indeed, some ends are not to be achieved because there are no ethical means to achieve them. This system makes civil and human rights possible—and necessary. In a real sense, the means are ends in themselves.

Restraint of government is deontological in the sense that it imposes moral limits on permissible means, regardless of how desirable the ends may be. In the American founding tradition, government may not violate certain rights (life, liberty, conscience, due process) even to achieve desired outcomes such as prosperity, security, or substantive equality. These limits function as moral prohibitions, not merely prudential calculations. It is also liberal because it centers the moral status of the individual over collective goals, treating persons as ends in themselves rather than instruments of state purposes.

This reflects a secularized Christian moral inheritance: the Christian idea of the inviolable person translated into the Enlightenment language of natural rights. Crucially, restraint of government is not only about means to an end, but also about what ends are morally admissible at all. Some ends—such as coerced virtue or enforced moral perfection—are ruled out in principle. Thus, American liberalism embeds a deontological ethics that governs both how government may act and what it may rightly aim to do.

Both forms of liberalism existed at the time of America’s founding. Thomas Jefferson, the primary author of the Declaration of Independence, and Jeremy Bentham, a proponent of utilitarianism, both liberals, were acquainted and mutually respectful, but they represent two different moral foundations for organizing the Western world. They thus usefully serve as personifications of the two positions, both of which continue to shape governance and lawmaking in Western democratic societies.

Jefferson’s liberalism is essentially deontological and rights-based, grounded in natural law and the moral inviolability of the person. Rights exist before government and place firm limits on what the state may do, regardless of consequences. This aligns with the American founders’ emphasis on inherent rights, restraint of power, and constitutional limits. Bentham, by contrast, rejects natural rights as “nonsense upon stilts” (see his critique of the Declaration of the Rights of Man and of the Citizen in his essay Anarchical Fallacies, c. 1796). He argues that the legitimacy of laws and institutions depends entirely on their tendency to maximize overall happiness. In Bentham’s framework, rights are not moral constraints but useful constructs—rules justified only insofar as they produce good outcomes. This allows, in principle, for rights to be overridden if doing so increases aggregate utility. Jefferson famously argues for happiness as well, but he does so within the framework of natural rights.

The split matters because it produces two distinct liberal traditions: an American constitutional liberalism focused on limits and rights, and a British reformist liberalism more comfortable with technocratic governance and policy experimentation. Jefferson and Bentham illustrate how liberalism can agree on freedom as a goal while sharply disagreeing on the moral rules that govern how freedom may be pursued.

In this essay, I explore the epistemic foundation that underpins the American Republic, namely, Christian ethics, and praise the founders for separating those ethics from the theology that gave rise to them. The danger of Christian nationalism is that it seeks to rejoin Christian ethics and theology to re-Christianize the country. I argue that this is not in keeping with the founders’ vision for America. Moreover, as I have suggested in several essays on Freedom and Reason, utilitarianism inheres in the logic of progressivism, which is an expression of corporate statism, where instrumental reason trumps republican virtue, leading to a decadent society and civilizational decay. While America is not a Christian nation, I have come around to the position that America needs a Christian majority to uphold republican virtue (which is one of the reasons I am highly critical of mass immigration from Muslim-majority countries).

Contemporary moral and political disagreements often appear to concern particular policies or ethical conclusions. Yet beneath these surface disputes lies a deeper conflict—one not primarily about what we believe, but about how we claim to know what we believe, and what ultimately justifies those claims. This is the sense in which Christian apologists like Wilson argue that most people “do not work from an epistemic standpoint.” What Wilson means is not simply that people lack information, but that they lack a reflective account of the foundations of their knowledge, especially moral knowledge. They can repeat conclusions but cannot explain why those conclusions should bind anyone, including themselves. This dispute becomes especially clear when comparing different traditions within liberal political thought, particularly deontological liberalism and utilitarianism, and when asking how liberal societies ground claims about dignity, justice, and rights.

In this context, epistemic refers to a theory of knowledge: an account of how beliefs are justified, what makes them true or false, and what ultimately grounds their authority. To “work from an epistemic” is to be able to answer questions such as: What counts as knowledge? Why should reason be trusted? Why do logic, morality, and truth have binding force? What distinguishes objective moral claims from mere preference or social convention?

Christian ethicists often argue that many modern moral and political claims are epistemically shallow. People assert moral conclusions without being able to explain why those claims are objectively valid rather than contingent on consensus, power, or utility. The critique is not that such people are necessarily insincere (although many are), but that they rely on unexamined assumptions inherited from culture, education, media, or party rather than from a coherent epistemological framework. This is why debates about ethics often collapse into assertion or outrage: the disagreement is not merely moral, but epistemic. The parties lack shared criteria for justification.

Deontological liberalism begins from axioms or postulates about human beings and moral reality. It holds that individuals possess intrinsic worth and therefore certain rights that are not contingent on outcomes, preferences, or social approval. These rights exist before and independent of the state, and the legitimacy of law depends on its conformity to them. Historically, this tradition draws on natural law, natural rights theory, and Enlightenment moral realism.

The American Declaration of Independence is the canonical expression of this view. When it appeals to “the laws of nature and of nature’s God” and declares certain rights “unalienable,” it asserts that moral truths exist objectively, that human reason can apprehend them, and that political authority is constrained by them. Rights are not created by law; they are recognized by it. They are good because they are true. (See Denying Natural Rights at the Heart of Authoritarian Desire.)

Utilitarianism, by contrast, grounds morality in consequences. What is right is what maximizes happiness, preference satisfaction, and well-being. In a world where such things are defined by powerful corporations and their functionaries and propagandists, who determines these desired outcomes is a central question. We saw this during the COVID-19 pandemic, where the supposed well-being of the population required the coercive suppression of fundamental civil and human rights.

In the utilitarian view, moral rules are instrumental rather than intrinsic, and rights are justified insofar as they promote desirable outcomes. This framework is superficially attractive because it appears empirical, flexible, and secular. Yet it weakens the claim that any right is inviolable. If moral validity depends on outcomes, then rights may be revised, overridden, or redefined when doing so seems to improve aggregate welfare.

We saw this in Virginia Senator Tim Kaine’s condemnation of natural rights, arguing instead that rights come from government (see Tim Kaine and the Enemies of Liberty and Rights; Natural Rights, Government, and the Foundations of Human Freedom). Thus, utilitarian liberalism introduces a form of relativism—not that “anything goes,” but that nothing is ultimately fixed. Moral claims lack permanence because they lack grounding in a reality independent of human calculation.

At bottom, this is not a moral disagreement but a dispute about knowledge and reality. Indeed, only one side is moral, and it is not the utilitarian side. This is why secular humanism, working from a utilitarian standpoint, cannot validly claim to work from morality; to claim that rights are unalienable is to assert that they exist, that they are knowable, and that their authority does not depend on human agreement. That requires both an epistemology (how we know moral truths) and an ontology (what kind of things moral truths are).

Utilitarianism, what we recognize today as progressivism, eschews virtue for instrumental reason. This is how we find ourselves in a world where children are drugged and mutilated, marketed as “gender-affirming care,” because they seek the remedy of their dissatisfaction with their bodies in trans joy, a happiness that requires the manufacture of simulated sexual identities—from which the medical industry profits handsomely. The Nuremberg Code, which rests on deontological commitments, is easily suspended when human rights give way to instrumental reason shorn of ethical demands.

Deontological liberalism, therefore, necessarily carries metaphysical commitments. It presupposes a moral order that constrains political power and human will. Utilitarianism, by contrast, minimizes ontological commitments, treating moral knowledge as empirical, pragmatic, or provisional, which subjects it to ideological corruption and political manipulation.

This difference explains why Christian apologists argue that modern secular moral discourse “borrows” moral conclusions while denying the metaphysical foundations that once supported them. Yet it does not follow that moral realism requires commitment to Christian theology. A coherent alternative exists—one deeply rooted in Enlightenment thought and visible in the American founding itself: grounding moral law in nature or natural history rather than in a personal divine lawgiver.

I have already said as much in getting to this point, but it bears elaborating. In deontological liberalism, nature is not morally neutral chaos, but rather an ordered reality governed by universal and intelligible laws. These include not only physical laws, but emergent biological realities, psychological capacities, and the social structures that shape them. Human beings are a certain kind of mammal, with characteristic capacities, needs, and vulnerabilities. From these facts arise norms—not invented arbitrarily but discovered through reflection on what human flourishing requires (I endeavored to explain these in my recent essay praising Samantha Fulnecky’s essay, moving her argument concerning gender roles from theological grounds to the foundation of natural history).

For the deontological liberal, language, reason, and social practices do not create moral law; they allow us to articulate and apply it (which is why the reclamation of accurate and precise language is so imperative). Moral truths are self-evident not because they are obvious in a trivial sense, but because they are accessible to rational agents without appeal to revelation or authority. In Enlightenment usage, “self-evident” means epistemically basic: known directly through reason and observation of the world, rather than inferred from theology (Fulnecky’s error, even if I praise her for providing an epistemic foundation).

This is precisely how Jefferson frames his argument in the Declaration. The Declaration does not present its claims as speculative metaphysics or sectarian doctrine, but as truths available to any rational person. Rights are grounded in human nature itself, not in the decrees of a church or the will of a ruler.

This position reflects a deliberate distinction between metaphysics and theology. The American founders retained key elements of Christian-influenced moral metaphysics—intrinsic human dignity, limits on political authority, objective moral order—while bracketing revealed theology. They rejected the necessity of Christological doctrine, divine revelation, and ecclesiastical authority as sources of political legitimacy. This produced a form of secularism that was not relativist or value-neutral; rather, it was natural-law secularism: a framework that allowed moral objectivity without theological coercion.

Enlightenment values did indeed emerge from Christian civilization, particularly through the Reformation and its emphasis on conscience and moral agency. But acknowledging that historical genealogy does not require accepting theological premises as politically binding. This distinction was essential for pluralism. A republic intended to include citizens with diverse religious commitments—or none at all—could not ground rights in contested theology. By locating moral authority in nature and reason, the founders created a framework in which equal rights did not depend on shared beliefs about God.

Christian critics rightly observe that this framework inherits much from Christian moral thought, and they argue that nature alone cannot generate normativity—that descriptive facts cannot produce binding “oughts.” Whether that critique succeeds remains a live philosophical question. But it is a mistake to assume that secular natural law is incoherent or merely parasitic; it represents a serious attempt to preserve moral objectivity, political legitimacy, and pluralism simultaneously.

The deeper issue, then, is not whether liberal societies can function without theology, but whether they can retain deontological commitments without drifting into Christian nationalism or utilitarian revisionism. When rights become subject to Christian (or any other) theology, society risks sliding into clericalism—or rule by the religious judge—a form of totalitarianism. At the same time, when rights are no longer understood as truths about human beings but as instruments for producing outcomes, their authority becomes contingent on whoever defines those outcomes. Under utilitarianism, democracy is subordinated to technocracy. This is why I reject progressivism.

What this debate ultimately reveals is that political disagreement is inseparable from epistemology. To argue about justice, liberty, or rights without asking how moral knowledge is grounded is to argue at the level of conclusions while ignoring foundations. Deontological liberalism, whether articulated naturalistically or theistically (as an atheist, I obviously prefer—indeed, insist on—the former), entails an explicit epistemic and ontological commitment: that moral truths exist independently of power and preference, and that reason can apprehend them. That commitment is not a relic of theology but a prerequisite for any liberal order that wishes to treat rights as more than temporary conveniences.

Secular humanism need not be utilitarian. Indeed, if it is not to devolve into progressivism or the horrors of communism, it must reject utilitarianism in favor of deontological liberalism. Moreover, any democratic socialism worthy of consideration must be grounded on the same ethical basis. Orwell exemplified this approach: he opposed authoritarian and totalitarian systems, yet remained a democratic socialist throughout his life. His standpoint exemplifies deontological liberalism rooted in Christian ethics.

In recounting a sketch by Italian writer Ignazio Silone, Irving Howe, in an essay in the Fall 1981 edition of Dissent (“On the Moral Basis of Socialism”), leverages the power of Silone’s anti-totalitarian commitments (see also Silone’s “The Choice of Comrades,” published in the Winter 1955 issue of Dissent). As a boy, Silone watched a ragged man being taken away by the police and laughed. His father rebuked him: one should never laugh at a man being taken by the police. When the child asked why, the father offered two reasons. First, one does not know what the man has done—an intuition that anticipates the liberal principle of restraint and the presumption against easy moral certainty: innocent until proven guilty. Second, and more importantly, the man is unhappy.

At first glance, this second reason might sound utilitarian, as though the wrongness of mockery lies in the fact that it increases suffering. But Silone was not a utilitarian thinker. Although he began his political life within communism, he broke decisively with any doctrine that justified cruelty, humiliation, or repression in the name of the collective good or historical necessity. His mature moral vision was grounded in human dignity and the conviction that political action does not suspend ordinary moral obligations. The unhappiness of the man being arrested is not morally salient because it adds to some aggregate of pain, but because it marks a moment of extreme vulnerability. When the state exercises coercive power over an individual, that individual’s dignity does not disappear, even if he is guilty of a crime. To laugh at such a person is not merely unkind; it is a failure to recognize the moral limits that ought to govern both citizens and institutions.

Seen this way, Silone’s anecdote aligns naturally with a deontological liberal tradition rather than a utilitarian one. The prohibition against mockery does not depend on calculations or outcomes. It rests on sympathy for the person as a person, even when one condemns the act that may have led to his arrest. This distinction matters. A utilitarian framework can justify punishment, suffering, and even humiliation if they serve a greater social good. A deontological liberal framework, by contrast, insists that certain forms of treatment are wrong regardless of their utility, because they erode the moral foundations of individualism and human dignity.

Silone’s story is not about sentimental pity or the efficient reduction of suffering; it is about the kind of moral character a free society requires. If citizens lose the capacity for compassion toward those at the mercy of state power—even those who may deserve punishment—then the moral restraint necessary for republican virtue dissolves. Silone’s lesson, properly understood, is not a utilitarian appeal to minimize unhappiness, but a liberal warning: once we permit ourselves to laugh at the humiliated, we have already begun to undermine the ethical conditions that make a free and democratic society possible. Crucially, the epistemic foundation of Silone’s sentiment is rooted in Christian ethics.

Howe himself was a lifelong advocate of democratic socialism, co-founding Dissent magazine in 1954 to provide a platform for anti-Stalinist leftist thought that combined a commitment to social justice with a critique of authoritarianism. Over his life, he watched many comrades (e.g., Irving Kristol) drift into neoconservatism. Howe highlights Silone’s childhood anecdote of witnessing a ragged man being taken by the police and learning a moral lesson about empathy and justice to illustrate the ethical foundation he believed should underlie socialist politics.

In his advocacy, Howe consistently emphasized the importance of humanistic values, individual responsibility, and moral conscience within socialism, distinguishing his socialism from both Stalinism and unprincipled leftist radicalism (which is now rampant in the West). For this reason, Howe admired Orwell, particularly for Orwell’s clear-eyed critique of totalitarianism and moral seriousness; he saw Orwell as a kindred spirit in defending democratic principles against the abuses of power, aligning well with Howe’s vision of an ethical, human-centered socialism.

In concluding this essay, I want to make it clear that I am neither a Christian nor a conservative (see Am I Right-Wing? Not Even Close). One need not be either to ground moral claims in an epistemic framework fashioned by Christian ethics. The American founders demonstrate that moral truths derived from Christian-informed conceptions of human dignity and conscience can be translated into secular, universal terms, producing a liberal framework that protects rights independently of theological belief.

Am I a democratic socialist? I have written essays over the last few years distancing myself from socialism (see Why I am not a socialist; Marxist but not Socialist, although I am no longer Marxist politically). However, like Orwell, I am sympathetic to democratic socialist ideals, and today’s society could benefit from considering them. At the same time, I know which party will take them up, because they already use the language, and I don’t trust that party with power.

Whatever I think of socialism today, Silone and Howe—both atheists, humanists, and democratic socialists—illustrate that commitment to human dignity, moral responsibility, and opposition to authoritarianism can fully operate within these frameworks. Their reasoning shows that deontological liberalism can be defended based on human nature and moral order, not religious authority, allowing secular, pluralistic societies to uphold ethical and political principles that ultimately stem from a Christian moral heritage.

Both conservatives and democratic socialists alike, eschewing utilitarianism, rest their moral arguments upon the epistemic foundation of Christian ethics.

Image by Sora

Fulnecky’s Argument Through the Lens of Anthropology and Sociology

The more I think about Samantha Fulnecky, the University of Oklahoma student who received a “0” out of 25 points on an assignment reflecting on the policing of gender norms among middle schoolers from psychology TA Mel Curth, a trans-identifying male, the more I’m impressed with her. Fulnecky’s essay wasn’t just undeserving of the grade it received; it was actually rather good, her writing typical of a college junior, and she deserved at least a passing grade. Indeed, the only problems I can identify in the paper are formatting and punctuation errors. The substance of the essay is creative, insightful, and provocative. Damning assessments of her work on social media (and this embarrassing letter by the Freedom From Religion Foundation) illustrate the problem of motivated reasoning on the progressive left.

Samantha Fulnecky

To her credit, Fulnecky did something few students do: she revealed the epistemic foundation upon which her normative argument rests. She needed to do this not only for reasons I will discuss, but also because the TA imposed a morality on his students, one he was not making explicit. In Fulnecky’s case, it was clear enough, and she took it on.

Although social science as students once knew it (and as I still know it) could have been used to make the same argument without appealing to religious doctrine, Fulnecky’s insight comes thanks to a text resistant to the corruption of queer doctrine, namely the Bible. Whatever its problems, the Bible gets the matter of gender right (there are two, and they are fixed), and since it is one of the few texts today that recognizes the gender binary and the importance of feminine and masculine roles in reproducing society and the species, it serves as a valuable source. The claim that the Bible is not a scholarly source is nonsense: when making an argument from Christianity, the Bible is the primary source.

For a detailed analysis of the controversy, see my Christmas Eve essay, A Case Study in Viewpoint Discrimination and Poor Assignment Formulation. In today’s essay, I go deeper into Fulnecky’s argument to help critics and others appreciate what she accomplished. I will lay out her position and justification, then show how one can make the same argument using pre-queer anthropology and sociology.

I do this not only to defend Fulnecky’s contribution, but to show how postmodernists have taken a transcultural and historical process and pathologized it to advance queer doctrine. In doing so, queer activists have obscured a vast body of knowledge on the human life course that demonstrates why normal psychological adjustment during puberty requires certain structures. That the course at the University of Oklahoma is called “Lifespan Development” highlights a profound problem in higher education: the actions of Curth and of the course’s second grader, TA Megan Waldron (both supervised by Professor Lara Mayeux), reflect movement politics that have no place in science courses—not because they are political, but because they are science-denying, which Curth made clear in his criticisms of Fulnecky’s reflection.

Before beginning, I want to emphasize that the assignment was a response essay, also known as a reflection or reaction essay, submitted online, much like a discussion post in a learning management system (LMS) such as Canvas. Many of Fulnecky’s critics seem unaware of this, and it forms a major front in their hyperbolic attacks.

As readers of this platform know, I am a college teacher with over thirty years of experience, and I have asked students to write such essays both as classroom exercises and, with the advent of LMS, as drop-box submissions. Unless I specify that students cite sources, there is no need for them to do so. I am asking for their reaction or reflection, not a literature review or research paper. Those are different assignments with different requirements. Think about it this way: when an instructor asks students in a classroom discussion what they think about an argument or theory, he doesn’t necessarily expect citations. Presumably, many of Fulnecky’s critics have had this experience; their overreaction is disingenuous.

However, in the Fulnecky case, she did cite her source: the Bible. Not only did she cite the Bible, but she also cited the specific book of the Bible from which she drew her argument: Genesis. She explains very clearly why she is using this source, as it provides the epistemic foundation for her critique of the article she was assigned to read, which she would have had to have read to formulate her critique (contrary to claims on social media that misrepresent her remarks in an interview).

The article, published in a 2014 issue of the academic journal Social Development, was “Relations Among Gender Typicality, Peer Relations, and Mental Health During Early Adolescence,” penned by Jennifer Jewell (a graduate student at the time) and Christia Brown (presumably Jewell’s major professor at the University of Kentucky). I will get to Fulnecky’s challenge to the article’s hidden premise in a moment, but I want to reflect on the religious piece of her response to get beyond the false claim that she did not cite her source. In this passage, which I break into paragraphs, she explains why she is responding in the way that she does. I provide commentary along the way.

“It is frustrating to me when I read articles like this and discussion posts from my classmates of so many people trying to conform to the same mundane opinion, so they do not step on people’s toes. I think that is a cowardly and insincere way to live,” Fulnecky writes. (As I have noted on this platform, lying for the sake of getting along is a type of bad faith, so I appreciate the ground she stakes out here.) “It is important to use the freedom of speech we have been given in this country, and I personally believe that eliminating gender in our society would be detrimental, as it pulls us farther from God’s original plan for humans.”

This is where Fulnecky loses most secularists, but I would ask them to consider Thomas Jefferson’s references to the “Creator,” “Laws of Nature,” and “Nature’s God.” It is God’s plan (Providence) that we should enjoy “Life, Liberty, and the pursuit of Happiness,” since he/nature gave these to us as unalienable rights. Fulnecky’s free speech rights are among those, and she is right to note how important it is to use that right, which she is demonstrating in her reflection.

“It is perfectly normal for kids to follow gender ‘stereotypes’ because that is how God made us,” Fulnecky continues. “The reason so many girls want to feel womanly and care for others in a motherly way is not because they feel pressured to fit into social norms. It is because God created and chose them to reflect His beauty and His compassion in that way.”

Replace “God” with “natural history,” and Fulnecky has here an observable and well-documented point. And the point is entirely relevant to her critique of the article. The reader should read Jewell and Brown’s article, but to summarize, their research question concerns peer pressure to conform to social norms, which, from the postmodernist view of such things, is not a normal or necessary process, but a bad thing, in that it is associated with psychological problems (showing this is Professor Brown’s life-work). However, again, if one substitutes natural history for God in every instance, Fulnecky’s argument falls in line with pre-queer social science.

It is at this point that Fulnecky explicitly cites her source (as if it were not already obvious): “In Genesis, God says that it is not good for man to be alone, so He created a helper for man (which is a woman). Many people assume the word ‘helper’ in this context to be condescending and offensive to women. However, the original word in Hebrew is ‘ezer kenegdo’ and that directly translates to ‘helper equal to’. Additionally, God describes Himself in the Bible using ‘ezer kenegdo’, or ‘helper’, and He describes His Holy Spirit as our Helper as well. This shows the importance God places on the role of the helper (women’s roles). God does not view women as less significant than men. He created us with such intentionally [sic] and care and He made women in his image of being a helper, and in the image of His beauty. If leaning into that role means I am ‘following gender stereotypes’ then I am happy to be following a stereotype that aligns with the gifts and abilities God gave me as a woman.”

There are minor issues with Fulnecky’s essay that I would have noted if I were her instructor: inconsistent American-style placement of commas and periods inside quotation marks, and a few others (the essay was double-spaced, contrary to what one may see on the Internet). What I would not have done is respond with the TA’s rant, which is available online. I would have expressed appreciation that the student provided the epistemic foundation for her critique of the assumptions embedded in the article. (I will soon publish an essay on the necessity of establishing an epistemic foundation for normative claims, specifically concerning Christian ethics.) I would also have introduced her to the vast anthropological and sociological literature supporting her argument and apologized for the situation that has made this literature remote to her. The behavioral and social sciences have been impoverished by postmodernism and queer praxis, especially the weaponization of empathy.

Jewell and Brown’s study examined whether meeting typical gender expectations is linked to popularity and whether failing to meet them is linked to teasing or rejection. It also investigated whether teasing mediates the association between low gender typicality and poorer mental health. Middle school students reported on their own gender expression, experiences with gender-based teasing, and mental health, including anxiety, body image, depression, and self-esteem. Results showed that popular students were more gender-typical than those who were teased or rejected. Boys who did not fit typical gender expectations reported worse mental health outcomes. In other words, the study confirms what many of us know from experience—a lot of psychology simply formalizes the obvious—only now we are asked to interpret these experiences as traumatic rather than formative.
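For readers unfamiliar with the statistical language here, “mediation” means that a third variable carries part of the association between two others. Below is a minimal sketch, in Python with the statsmodels library, of the classic regression logic behind a mediation claim like the one Jewell and Brown test. It is not the authors’ code; the variable names (typicality, teasing, distress), the simulated data, and the effect sizes are all illustrative assumptions.

```python
# A minimal sketch of Baron-Kenny-style mediation logic on simulated data.
# Variable names and effect sizes are illustrative assumptions, not values
# from Jewell and Brown's study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
typicality = rng.normal(size=n)                    # self-rated gender typicality
teasing = -0.5 * typicality + rng.normal(size=n)   # lower typicality -> more teasing
distress = 0.6 * teasing + rng.normal(size=n)      # more teasing -> more distress

def ols(y, *xs):
    """Ordinary least squares of y on the given predictors plus an intercept."""
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

total = ols(distress, typicality)              # path c: total effect of typicality
a_path = ols(teasing, typicality)              # path a: typicality -> teasing
direct = ols(distress, typicality, teasing)    # paths c' (direct) and b (mediator)

# Mediation is suggested when the direct effect c' shrinks relative to the
# total effect c once the mediator (teasing) enters the model.
print("total effect c:     ", total.params[1])
print("direct effect c':   ", direct.params[1])
print("indirect effect a*b:", a_path.params[1] * direct.params[2])
```

The point of the sketch is modest: a mediation finding is a pattern of regression coefficients, not a moral verdict on peer pressure; the normative reading of that pattern is exactly what is at issue in what follows.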

Fulnecky was suspicious of the authors’ motives, namely that they implied there was something wrong with reinforcing gender-typical norms, a stance aligned with gender identity doctrine. She argues there is nothing inherently wrong with reinforcing gender norms, which required her to explain appropriate gender roles, rooted in a biblical worldview. Intellectually responsible, she erected her explanation on an epistemic foundation. She did not merely say, “I don’t like this article” or “I don’t agree with this article,” as students often do, leaving it at that. She engaged with the article’s core premise and challenged it based on authority. As I noted in my Christmas Eve essay, what upset the TA was that she invoked the “wrong” authority.

There is nothing wrong with what Fulnecky did. In fact, that is what we want our students to do: interrogate the premises of claims made by scientists—or anyone else. If an instructor asks for a student’s opinion, he must tolerate that the opinion may be informed by Christian theology. Otherwise, he engages in viewpoint discrimination. The TA, clearly, had not considered the epistemic foundation of his own views. He believes what he believes due to ideology, not because he has constructed a foundation or observed one. He “knows” it is wrong for students to use the Bible as justification—but Fulnecky, who built her argument on a coherent epistemic foundation, is in the superior position.

To explain why peer reinforcement of gender typicality is not necessarily wrong, Fulnecky must explain why typical gender roles exist. The article assumes that reinforcing gender typicality is harmful. Fulnecky suggests that failing to reinforce these roles may be harmful. Why? According to her, God created two genders and assigned them roles, which society reinforces via norms and peer pressure. Peer pressure is standard across cultures and history. Unlike the article, which offers no epistemic foundation for its moral claims, Fulnecky makes hers explicit. This is what led the TA to discriminate against her: it was not a poor essay, but a viewpoint he did not like. His claim that her grade was unrelated to her religious belief is implausible; her religious belief is exactly what he failed her for.

While the Bible provides one epistemic foundation, there is another Fulnecky could have used: pre-queer anthropology and sociology, which explain the natural origins of traditional gender roles. Across cultures, societies have faced a recurring problem: how to manage boys’ transition into manhood. Anchored in puberty, this transition is not merely biological; it is a social transformation fraught with anxiety, uncertainty, and potential disorder. Anthropologists have long recognized this and developed concepts like liminality and rites of passage to explain how societies regulate this unstable period.

Arnold van Gennep’s Les Rites de Passage (1909) describes transitions through separation, liminality, and incorporation. Pubescent boys are separated from childhood roles, enter a liminal phase—no longer boys, not yet men—and eventually reenter society as recognized adults. Victor Turner’s concept of liminality, being “betwixt and between,” aptly describes this state (The Ritual Process: Structure and Anti-Structure, 1969). Liminal individuals exist outside ordinary categories; they are ambiguous, unstable, and socially dangerous if unmanaged. Biological puberty amplifies this instability: sexual maturity, strength, aggression, and psychological volatility create a mismatch between a boy’s bodily capacities and his recognized social status. Without ritual containment, this mismatch threatens both the individual and the community.

Societies almost universally ritualize this transition. Our society does not. At least not adequately. Most boys manage anyway, but many do not. The same is true for girls. The failure to provide appropriate rites of passage likely explains the rise in adolescent mental distress over recent decades. Even worse, behavioral and social scientists, along with educators and social workers, now claim these rituals are harmful—a key part of the project to queer children’s culture and education. Children are told they do not have to be what they are. Boys are told they can be girls, and that other boys acting like boys is wrong. Fulnecky recognized this in the article’s intent—and she was correct. Curth saw this too and punished her for defending transcultural and historical gender roles.

The remainder of this essay will show how peer pressure around gender conformity is normal and necessary for psychological development. This has long been a topic I have wanted to address, and this controversy provides the occasion. I cover it in criminology and juvenile delinquency courses because Western adolescents, particularly boys, are thrown into liminality without the guidance necessary to reach adulthood, and this has caused a lot of problems psychologically and societally.

David Matza’s theory of drift illustrates this: juvenile delinquency arises from adolescents’ liminal position between childhood dependence and adult responsibility. Young people—especially boys—accept dominant moral norms yet lack stable institutional pathways into adulthood. Delinquent acts are responses to structural ambiguity, not expressions of innate deviance. Scholarship on anomie, subcultures, and rites of passage reinforces this, showing that the erosion of clear roles, mentorship, and legitimate status attainment intensifies liminality. Without structured transitions, adolescents improvise, asserting autonomy, masculinity, and belonging through delinquency.

Replace delinquency with gender identity disorder, and the problem becomes clear: institutions corrupted by gender identity doctrine embrace the issue rather than solve it. Indeed, progressive activists are responsible for creating these conditions. Queering disrupts normative rules, punishes peers who reinforce gender-appropriate roles, and exposes children to Pride Progress paraphernalia and sexualized content. Social-emotional learning identifies those most susceptible, while weaponized empathy punishes those who question peers’ gender claims.

To the postmodern mind, historical gender socialization appears as “bullying,” the result of “social constructions” around “patriarchal relations.” However, in traditional societies, male initiation ceremonies guide adolescents through instruction, isolation, trials, symbolic suffering, and endurance tests. These rituals externalize anxiety, transform fear into shared experience, and provide meaningful narratives for transition. Hardship becomes proof of worthiness, not arbitrary suffering.

Crucially, these rites are reinforced both vertically and horizontally. Peer pressure within age cohorts ensures conformity to masculine expectations through mockery, shaming, teasing, and ritualized aggression. Sociologically, this regulates status; anthropologically, it produces culturally legible men. Peer pressure is functional, not pathological. Masculinity requires achievement, continuous affirmation, and demonstration. Normal societies develop systems to confirm gender conformity; pathological societies emasculate men and risk cultural collapse.

“I do not think men and women are pressured to be more masculine or feminine,” she writes. “I strongly disagree with the idea from the article that encouraging acceptance of diverse gender expressions could improve students’ confidence.” This is indeed the implication of Jewell and Brown’s argument (which proves she read the article). “Society pushing the lie that there are multiple genders and everyone should be whatever they want to be is demonic and severely harms American youth.” I know, the demonic line bothers secularists, but I have learned to substitute a synonym that doesn’t sound theological. She has the right to use the words that convey her thoughts.

“I do not want kids to be teased or bullied in school,” she continues. “However, pushing the lie that everyone has their own truth and everyone can do whatever they want and be whoever they want is not biblical whatsoever.” This is true. The Bible establishes a universal truth, and it’s not the paradoxical truth postmodernists espouse: that the only universal truth is that there is no truth. “The Bible says that our lives are not our own but that our lives and bodies belong to the Lord for His glory. I live my life based on this truth and firmly believe that there would be less [sic] gender issues and insecurities in children if they were raised knowing that they do not belong to themselves, but they belong to the Lord.” I am an atheist, but I recognize that hundreds of millions of my fellow humans believe this, and the consequences of those beliefs in action have immeasurably improved their lives.

Cutting through the religious language, which she has a right to use in light of freedom of conscience, Fulnecky’s point is that peer reinforcement of gender roles is beneficial and that gender atypicality is, under normal conditions, exceptional. She’s right. This is not something a man who wants to be seen as a woman can accept. Every day he faces the gaze of those whose sensibilities are not scrambled by gender identity doctrine. He desperately wants to redefine the expectations of normal society as exclusionary and oppressive. The reality is that denying boys peer reinforcement of gender roles harms their transition to manhood. What has been normal peer encouragement for millennia is now pathologized by progressive ideology. Boys are robbed of ritualized transition and societal expectation. The Bible affirms this. With the behavioral and social sciences corrupted, students like Fulnecky no longer have access to academic literature sufficient for forming epistemic foundations for normative statements. But they do have the Bible. Fulnecky used hers, and she was punished for it.

A Case Study in Viewpoint Discrimination and Poor Assignment Formulation

It’s Christmas Eve. I doubt many people will read this essay. But I have to get this off my chest because it’s bugging me. At any rate, Merry Christmas! Enjoy!

I’m an atheist, a civil libertarian, and a sociology teacher. My disbelief in a god cannot affect my assessment of student work that moves from a religious standpoint. I recognize that students have both a First Amendment right and academic freedom to draw on their religious beliefs when reflecting on social phenomena. While I may prefer arguments grounded in sociological theory, I cannot penalize students for organizing their thoughts from a religious standpoint if I don’t specify that they must work from a sociological perspective. When asking students to reflect on a topic without specifying a theoretical framework, I open the door to a diversity of perspectives, including religious ones.

To maintain analytical rigor in assessing reaction/response essays, I focus on the clarity, coherence, and depth of students’ reasoning rather than the source of their beliefs, encouraging evidence-based or logical argumentation wherever possible. Optional frameworks or prompts can guide students toward sociological thinking, but their freedom to express their worldview remains respected. This approach balances critical engagement with intellectual freedom, allowing students to articulate reasoned perspectives without invalidating their personal beliefs. My job is not to tell students what to think, but to teach them how to think and how best to articulate their thoughts.

Mel Curth (left). Samantha Fulnecky (right)

Comments on X and other social media about the controversy surrounding University of Oklahoma student Samantha Fulnecky, who received a “0” out of 25 points on an assignment regarding gender norms from a psychology TA, Mel Curth, a trans-identifying male, exemplify the problem of motivated reasoning on the left. Motivated reasoning is a psychological phenomenon in which individuals process information in a biased way to arrive at conclusions that align with their preexisting beliefs, desires, or goals, rather than objectively evaluating evidence. The comments also reflect the progressive fetish for technocracy, which I will come to at the end of these remarks. But the main issue at hand is an utter failure to understand that the assignment is poorly formulated, and that, because of that, the TA opened the door to a religious argument. The grade assigned was obviously a reaction to the student reflecting on an article from a standpoint with which the TA disagreed.

Rather than critiquing the assignment, the comments dwell on the essay, which social media users don’t like because it works from the unfalsifiable proposition that there’s a god and that this god, in which Fulnecky deeply believes (which is her right—freedom of conscience—under the First Amendment), has a gendered plan for humans. The reason this is so offensive to trans activists and their allies is not just because they loathe Christianity, but also that gender identity doctrine works from an unfalsifiable proposition in the same way as religion, namely, the faith-belief that men can be women because they say they are. Trans activists need their foundational assumption to go unquestioned because, deep down, they know it’s a religious-like belief. When two religious worldviews collide, it’s typically the immature and insecure religion that takes offense (we see this with Islam, as well).

The assignment

A reasonable person would begin by asking about the assignment prompt and the grading criteria, which is easy to do in this case since the rubric is readily available (I shared it above). Students were asked to write a 650-word reaction/response essay to a scholarly article titled “Relations Among Gender Typicality, Peer Relations, and Mental Health During Early Adolescence” (I read the article, Fulnecky’s response, and the TA’s rant). The rubric evaluated the paper across three main criteria: 

(1) “Does the paper show a clear tie-in to the assigned article?” is the first, worth up to 10 of the 25 total points. Fulnecky’s essay clearly ties to the assigned article. The article is an empirical study finding that middle-school students who are more gender-typical—especially boys—are perceived as more popular, while those who are less gender-typical experience more teasing, and this is associated with more mental health complaints. (Does one even need a study to know this? Perhaps to determine whether the mental health complaints precede or follow the teasing. Some children are more fragile than others.) Fulnecky organizes her reaction paper around this finding, arguing that, while she doesn’t want students to be bullied or teased, social pressure to conform to gender roles is understandable in light of her god’s gender plan, and she does not necessarily see it as a problem. Leaving aside whether there is a contradiction there, her response tells us that she read the article and is doing what was asked of her: reflecting on/responding to it.

The reflection prompt is explicit in its second criterion: (2) “Does the paper present a thoughtful reaction or response to the article, rather than a summary?” This criterion is also worth up to 10 points. Again, Fulnecky reacted/responded to the article. She did not summarize it since the criterion tells her not to. Everybody in the class knows what article they are responding to. Is her response thoughtful? You may disagree with her, but that’s not the question. It’s possible that the TA is not a thoughtful person but rather someone who imposes his opinion on others using a position of authority. Indeed, the evidence in this case strongly suggests that it is the latter. What does “thoughtful” mean anyway? In this case, it seems to mean whether the TA finds it as such.

(3) “Is the paper clearly written?” is the last criterion, worth up to five points. There is no ambiguity in what Fulnecky wrote. Nobody who reads this essay will be confused about Fulnecky’s point of view, which is what she was asked to share. That’s why the TA objected to it: he understood full well what Fulnecky was arguing and gave her a “0” on the assignment for that reason.

What the rubric does not explicitly require is also crucial to note. The published rubric does not list “empirical evidence” or outside scientific sources as a required element in the response—it focuses on reacting to the article. Students were not supposed to summarize the article. The rubric does not specifically mandate acceptance of the article’s assumptions; rather, students were to read the article and respond to it “thoughtfully,” whatever that was supposed to mean. Note that among the approaches students may take to writing the essay are (1) “[a] discussion of why you feel the topic is important and worthy of study (or not)” and (2) “[an] application of the study or results to your own experiences.” There you go.

In his response to the student (which is long and tedious, to my eyes conveying seething anger), the TA goes beyond the article itself to appeal to the authority of “every major psychological, medical, pediatric, and psychiatric association in the United States,” which, he contends, “acknowledges that, biologically and psychologically, sex and gender is neither binary nor fixed.” 

This is an absurd argument—one that is itself an appeal to authority, and one concerning a matter that is entirely falsifiable when interrogated on objective scientific grounds. Indeed, we don’t need science to know this is false. Sex and gender are synonyms, and it’s a fact that gender is binary and fixed. This is true for all mammals (and birds and reptiles, most amphibians and fish, and many plant species). A boar can’t be a sow. Even if genitalia are ambiguous, it will be found in the end that the swine is either male or female. Neither swine nor any other mammal can be both or neither. The instructor is using his position of authority to punish students because they don’t affirm his belief in a falsehood. Apparently, this is the pattern with this TA, who has been removed from teaching duties.

Is it possible that “every major psychological, medical, pediatric, and psychiatric association in the United States” can get something wrong? Yes. Of course it is. One might consider how, during the 12 years of Nazi hegemony, from 1933, when Adolf Hitler became chancellor, to 1945, when Nazi Germany was defeated at the end of World War II, major sense-making institutions and professional associations in Germany actively promulgated and normalized Nazi ideology, not under coercion but enthusiastically. Does that make whatever those associations and institutions held up as true actually true? One would be a fool to accept that.

What are the lessons of Nazi Germany? The world learned—as if some people in it didn’t already know—that the following things are bad: conflating academic authority with moral and political correctness; treating consensus within professional bodies as proof of truth; and allowing ideological conformity to substitute for academic freedom and open inquiry. There’s an irony here. Trans activists are always accusing those who dissent from their doctrine of being “Nazis.” Has Curth ever considered how imposing his ideology on Christians by punishing them for opinions—opinions he asks for—aligns with the mentality of authoritarianism?

Here’s what happened in this case: Fulnecky chose another authority! The TA wanted students to work from the standpoint of the authority to which he appeals. Fulnecky knew this, and she wasn’t having it; she reflected on the article from her standpoint. To be sure, gender identity doctrine is the religion to which most academics adhere, but that’s even more reason for dissenting voices to assert their worldview when they have the opportunity (unfortunately, a lot of students go along to get along, which Fulnecky noted in her essay). Curth gave Fulnecky that opportunity—and punished her for taking it. One would not be unreasonable in suggesting that the assignment laid a trap for Christian students. 

To punctuate his motivation in giving Fulnecky a “0” on the assignment, Curth, who found her essay “offensive,” writes, “I implore you [to] apply some more perspective and empathy in your work” before ranting about the “methodology of empirical psychology,” which, remember, is part of neither the prompt nor the rubric. However, the methodology he retroactively wanted students to deploy is flawed if, in using it, one concludes that gender is neither binary nor fixed, an obviously false conclusion—one that even the Bible grasps as false. Suppose he was talking about the methodology of the article. That’s beside the point, since he did not ask students to do that. He asked for a “thoughtful” response to the article. And although the article itself was not about the “scientific consensus” that Curth asserts, Fulnecky had to share her belief in her god’s gender plan to make her response intelligible. Does a man who demands others refer to him as “she/they” know a thoughtful response when he sees one? Would it matter whether it was thoughtful or not?

When Fulnecky requested a grade change, Curth explained: “You do not need empirical evidence when writing a reaction paper, but if your reaction paper argues contrary to not only the article, but the consensus of multiple academic fields (psychology, biology, sociology, etc.), then you should either provide empirical support for those claims or raise criticisms that are testable via psychological research.” He then gave an example: “If you took a geology class and argued that the earth was flat, something contrary to the academic consensus of that field, then you would be asked to provide evidence of such, not just personal ideology.”

But, as is obvious from reading the assignment, that was never specified. Fulnecky disagreed with the article, which she was allowed to do according to the directions. She used the Bible, sure, but isn’t what is obviously true by observation empirical? Empirical means based on experience and observation rather than belief or theory alone. It is no problem to move from a mere belief in God’s plan to observing what is true across time, which I noted above. If, in the end, the Bible were the only source affirming that there are only two genders, then it would be the only science text left on Earth. All that jazz about “the consensus of multiple academic fields (psychology, biology, sociology, etc.)” is just more appeal to authority. And it’s not even true. Biology confirms that, in mammals, gender is binary and fixed. But it is true, I hate to admit, that my discipline has accepted the madness of queer theory.

Why is an instructor preaching to students about empathy, not as a thing that human beings have (following Adam Smith, I prefer to refer to this capacity as compassion and sympathy), but as a thing he judges her to need more of? He and everybody else who leverages empathy in this way can fuck off with that shit. Who made this clown and his tribe the moral authority? Besides, from her standpoint, her argument is empathetic in that she is concerned for the harm that gender ideology does to kids. She finds it understandable that kids would expect other kids to conform to gender norms, just as kids—and adults—expect other kids to conform to social norms generally. Who has the moral and rational high ground here? The man who thinks he’s a woman or the woman who knows what she is? No apologies for dwelling on this point; whether Christian or not, we cannot accept admonishment from those who believe society should be restructured to groom children into accepting gender identity doctrine.

In professional terms, a grade of “0” is outrageous. The rubric is misaligned with the grading rationale, the penalty is disproportionate (and that’s putting it charitably), and the student was subjected to inappropriate moralizing feedback. Suppose that the student failed to engage deeply enough with the article’s data. That would justify partial credit and targeted feedback, but not a zero and a moral rebuke. Moreover, the TA’s comment that the student’s view was “offensive” and that she needed to “find empathy” crosses a different line altogether. It shifts the grading rationale from academic criteria to ideological or moral evaluation, which is precisely where universities become vulnerable to claims of viewpoint discrimination. And rightly so.

Beyond the problem of assessment, this is a poorly specified assignment. Consider, by way of contrast, the following better-specified prompt:

Using the article “Relations Among Gender Typicality, Peer Relations, and Mental Health During Early Adolescence” as your source, write an approximately 650-word science-based response paper in which you evaluate one of the study’s central claims using quantitative psychological research methodology. Your response must (a) accurately summarize the relevant hypothesis and findings from the article, (b) analyze the study’s methodology using at least one empirical lens taught in this course (e.g., construct validity, sampling and generalizability, measurement reliability, statistical mediation, or causal inference), and (c) propose either a methodological critique or a concrete follow-up study design that could test the same claim more rigorously.

The bottom line is that, if an instructor is going to bring faith-based belief into the evaluation of claims—even if the article itself doesn’t—then all faith-based beliefs are welcome. As the assignment is written, any commentary would have to be something along the lines of, “Thank you for your opinion,” followed by points based on whether the criteria were met (which they were).

I have to add that a virtue of the faith-belief Fulnecky has centered her life on is that it aligns with what any person not deluded by ideology knows to be objectively true: that gender is fixed and binary. Any scientific paper that moves from the assumption that this is not true, no matter how much it dresses itself in the language of science, collapses on its own foundation. That’s not this article; the problem is the TA, who claims to move from the standpoint of psychology, a discipline he knows affirms his personal delusion about his own gender. In the hands of progressives, psychology has become a corrupt discipline. Indeed, Curth seeks refuge in the discipline because he regards it as a safe space, which Fulnecky is making unsafe with her Christian worldview.

Now, on this fetish for technocracy: the reaction on X and other social media reflects a deep problem with the corporate state left. Curth is the authority, and the student should submit to his authority. We see this in another thread on X today, warning us away from Matt Walsh because he’s not a professional historian. This is an elitist argument. In my experience, there are a lot of people with a third-grade education who are smarter and wiser than most people with PhDs. After all, most PhDs in the humanities and the sciences believe that men can be women, when every farmer knows that’s a false belief. There is wisdom in their eyeroll. It requires indoctrination to make a reasonably intelligent person believe something so obviously untrue (and that’s not the only crazy shit academics believe). I have found, for the most part, that the more education a person has, the more easily they’re guided to believe things that defy common sense. This is why some of the smartest and most innovative people in history didn’t have advanced degrees. In fact, many of the most profound contributions to, say, physics came before the PhD became the norm. Likewise, much of the best scholarship predates peer review. Do these people really believe that the trappings of the modern academic institution—degrees, peer review, etc.—have always been in force? There’s a history here, and it is not hard to find.

This doesn’t mean that people with PhDs are necessarily indoctrinated. There are a handful of people who go through the graduate school experience because they genuinely seek further enlightenment in a particular subject matter (which benefits from the space and time to read, write, and debate issues with others that graduate school affords) and/or know they have to do that to teach college (which they seek as a vocation), but make a conscious effort to stand outside the doctrinal parts of the process. The reason I was able to shake off the doctrinal parts of the process (and there may be more to shake off) was that I distanced myself from them. I never adopted the elitist attitude that, because I have advanced degrees, I am smarter than everybody else. To be sure, those degrees gave me access to a vocation that requires them for entry, but that’s not why I am smart. The irony is that, even with those degrees, people who disagree with me judge me to be wrong, even stupid. Progressives only appeal to degrees when they are held by people who say what they want to hear. (Remember during COVID when we were told to “trust the experts,” by which they meant their experts?)

The question progressives must ask is who and what ideology is in command of the sense-making institutions in any given existing society. Any cursory look at the situation in Nazi Germany will tell a rational person that the universities in Germany at that time were full of professors who promulgated Nazi ideology. From the logic that one should recognize as valid only knowledge produced by people with degrees, or the assertions of academic institutions and professional organizations, it follows that Nazi ideology was closer to the truth than any other standpoint, since those propagating it were learned people in respectable institutions. That’s the paradigm of appeal to authority. And if progressives like Curth are going to do that, then conservatives like Fulnecky have just as much right to do the same.

Before I go, I have to fact-check a viral image on the Internet of the supposed correction of Fulnecky’s essay. What I am sharing below is a printout of the essay marked up by a third party. This is not the TA’s markup. I traced the paper to an account on Threads. The person who did this claims to have graduated magna cum laude with a bachelor’s degree in education. According to her, she was the only student in a grammar and punctuation class to get 100 percent on tests six weeks in a row. She also says that she’s a Christian. At any rate, the function of this image is to reinforce the perception, which trans activists are manufacturing, that this is a bad essay. The irony is that this markup makes the TA look even worse than he already does.

I’m not going to go through all the red here. Instead, I will ask the reader to look past all that and read the essay. People are feigning astonishment at the poor quality, but the quality of writing is typical of the average college student (when not using ChatGPT). Take that as you will, but if all such essays were awarded grades of “0,” then higher education would have a crisis on its hands. If she had been in my class, I would have encouraged her to double-space her submissions and indent the first line of paragraphs beyond the first one.

Removing an Imaginary Sixth Digit: Ethical or Unethical?

This essay follows up on yesterday’s essay, Orbiting Planet Madness: Consenting to Puberty and Other Absurdities.

Polydactyly is a congenital condition where a human or other animal is born with extra fingers or toes. The extra digits can be fully formed, but are often only partially formed. It can be genetic or the result of a syndrome. Polydactyly is one of the most common congenital limb malformations. It occurs in approximately 1 in 500 to 1,000 live births worldwide, which means that a lot of extra digits are surgically removed in childhood. Polydactyly can result from mutations in several genes involved in limb development, particularly those affecting the Sonic Hedgehog (SHH) signaling pathway, which is crucial for digit patterning during embryonic development. Yep, you read that right: it’s called the Sonic Hedgehog signaling pathway (I didn’t believe it, either). When an extra digit has bone, joints, or tendons, doctors typically recommend surgical removal and reconstruction to improve appearance and function. 

Polydactyly (image source)

Perhaps we must adjust our language: Most humans have ten fingers, though variations exist due to genetic and developmental factors. That’s fine with me (as long as we don’t then suggest that the number of digits on the human hand is “on a spectrum”). However, beyond physical differences, some individuals experience a mismatch between their perceived and actual number of fingers. This is a situation where a person’s internal sense of their body (body schema) doesn’t match their physical anatomy, leading some to seek surgical alteration. (I have written about this before with respect to limb amputation; see The Exploitative Act of Removing Healthy Body Parts.) In rare cases, a person may feel they have six fingers on each hand when they don’t and may seek the removal of a digit to match their internal body perception, leaving them with only four digits per hand.

This phenomenon provides us with a scenario with which to check the integrity of medical ethics. What if, after surgery, the person looks at his hands and now sees only four digits on each? He can’t have his fingers back since the surgery is quite involved and irreversible. Was it ethical for a doctor to remove the patient’s imagined sixth digit? The man was clearly delusional, seeing six fingers where there were only five, and now, confronted with only four, discovers not only that he deceived himself, but that the doctor affirmed his deception and mutilated his hands. Even if he now sees five digits, was it ethical? The surgeon knows what he did. The patient never had extra digits either way. Whether the patient immediately, later, or never sees that he now has only four digits, we are confronted with a problem: a doctor affirming a delusion and mutilating a man’s hands.

In a philosophy class, a teacher might ask his students to ponder the ethics of such a case, which hinges on the principles of autonomy, beneficence/non-maleficence, and informed consent. He might note that, on the one hand, medical ethics generally uphold a person’s right to bodily autonomy. If an individual experiences deep distress due to a perceived mismatch between their body and their internal perception of it, some might argue that removing the “extra” digit is an act of compassion, akin to “gender-affirming surgeries” or procedures for body integrity dysphoria (BID). Put to one side for the moment the problematic character of these supposed acts of compassion. It will only be a moment because what I say next blows up the acts-of-compassion claim in the cases of both gender dysphoria and BID. Indeed, presenting the other hand would likely result in a student going to the professor’s chair or dean to complain about a class that problematized the core premise of gender ideology, specifically the pseudoscientific notion of “gender identity.”

The professor might ask the students to suppose that the case of a man who imagines polydactyly differs from a BID case in a crucial way: the perception of extra fingers was a delusion rather than a physical or neurological variation. The patient did not, in reality, have six fingers, yet a doctor, rather than addressing the underlying cognitive or psychological condition that led the patient to believe an observable falsehood, affirmed the false perception by surgically altering the body to match it. But how would the doctor know? What medical test allows a doctor to tell the difference between a man who falsely believes he has extra digits and a man who truly believes he has an extra digit? I might now move on from what the reader may perceive as an analogy, except it is not an analogy—it’s the thing itself. The professor may ask the students to ponder whether this scenario raises serious ethical concerns about non-maleficence (“do no harm”)—that the doctor is complicit in harming the patient by enabling a delusion instead of treating its root cause—but since a man cannot be a woman, his internal sense of gender must always be delusional. So why the double standard?

The professor might ask whether the patient in the scenario, post-surgery, realizes that he has made a grave mistake, which makes the ethical implications even starker. The procedure was irreversible, and the doctor, rather than alleviating suffering, may have contributed to permanent physical and psychological damage. This raises questions about informed consent—was the patient capable of making a truly informed decision while operating under a false belief? Should the medical profession have safeguards in place to prevent such surgeries in cases where the patient’s perception is demonstrably false? If we say yes to both, then where do we draw the line between respecting autonomy and preventing harm? If we say no, are we not endorsing the medicalization of delusions and self-destructive choices? Again, since the patient’s perception of gender in cases of gender dysphoria is demonstrably false, we are objectively medicalizing delusions and self-destructive choices. While we may say that an individual is free to wish to permanently alter his body, we cannot say that gives a doctor a license to permanently alter the body of a delusional person.

The professor would tell students that the scenario highlights the ethical tension between respecting individual autonomy and ensuring that medical interventions truly serve a patient’s well-being. But is there really any ethical tension here? If medical professionals knowingly affirm (validate) and act on a delusion rather than addressing its psychological roots, they cross the line from healers to enablers of harm. The ethical course of action would have been to refuse the surgery and instead offer psychiatric care. The question of whether there should be clearer medical guidelines preventing such procedures in cases of misperception already has its answer. The line is clear: any doctor—or anybody else, for that matter—who removes a delusional man’s fifth digit has committed an atrocity. The scenario forces rational minds to reconsider their view that autonomy should extend to cases of irreversible medical decisions where there is no objective underlying medical condition.

As readers ponder this matter, they might also ponder whether it is ethical for parents to remove the sixth digit from the hand of a child who suffers from polydactyly. The parents could wait until the child is old enough to decide for themselves (as a guitarist, I might have an advantage with an actual functional sixth digit, which might be worth the grief I’d experience at the teasing of other children or at other guitarists accusing me of an unfair advantage). But there is no ethical problem with surgically removing a sixth digit, since the condition is not normative for digit patterning.

In the case of correcting the problem of a micropenis (which I discussed in my last essay), parents must treat this condition because of the critical window of genital development. If the micropenis is due to low testosterone levels, a doctor prescribes a short course of testosterone therapy, usually in the form of intramuscular injections or topical gel, to stimulate penile growth during early childhood or puberty. In cases where hormone treatment is ineffective, or if there’s a developmental or genetic disorder, the doctor may refer the child to a pediatric endocrinologist or urologist. In rare cases, surgical options such as phalloplasty may be considered later in life if the condition significantly affects function or self-esteem. Psychological support and counseling may also be recommended to help with emotional and social concerns. None of this is gender-affirming care in the industry sense, but ethical medical intervention to address an anomaly—that is, actual gender-affirming care. To hell with the parents who love their son just the way he is. It’s not their life. It’s his.

I’m not a philosophy professor. If I were, I would hesitate before using the scenario in an ethics class because of the chill put in the air by trans activists. I would likely get in trouble for broaching the subject. Indeed, having such a discussion is not beyond the boundaries of my discipline of sociology, yet I dare not interrogate such a problematic, whatever its value in interrogating matters of social power. To illustrate the problem, I conclude with a case of a teacher who dared to broach the subject of gender critically and an op-ed by a student that confronts the climate of self-policing and the impact that has on the promise of higher education. (See my recent essay Identity-Based Academic Programming and the Scourge of Heterodoxy.)

Kathleen Stock reported that student protests grew out of hostility from other academics (source)

Kathleen Stock, a philosophy professor at the University of Sussex, UK, argued against gender self-identification and supported gender-critical feminism. Students and activists—even her colleagues—accused her of transphobia, leading to protests and calls for her resignation. Stock resigned in 2021, citing a hostile work environment. She described the climate as “medieval” ostracism. Of course, the accusation of transphobia (like Islamophobia) presumes the validity of the concept; it’s a propaganda term to harass those who criticize or question what is—anthropologically and sociologically—effectively a religious system. Perhaps that’s why it remains an effective rhetorical weapon in policing speech; once an ideology is wrapped in religious symbology, its congregation becomes a protected class.

Emma Camp on the campus of the University of Virginia (source)

This climate has impacted students, as well. Emma Camp, a student at the University of Virginia, wrote an op-ed in The New York Times criticizing the university’s handling of gender identity issues, including policies related to transgender students. She argued that the emphasis on gender identity in academia stifled free speech and that professors were reluctant to engage in debates over gender. Professors who shared her viewpoint on gender identity faced criticism, with some calling for policies that would restrict public discussions or certain types of discourse around gender identity. No professors were formally disciplined, but the controversy was enough to chill the air. Camp’s op-ed is worth a read: “I Came to College Eager to Debate. I Found Self-Censorship Instead.”