Revisiting the Paradox of Tolerating Intolerance—The Occasion: The Election of Zohran Mamdani

I am not the first person to address this subject. Most famously, Karl Popper, in his 1945 book The Open Society and Its Enemies, argued that a tolerant society cannot afford unlimited tolerance. If a society tolerates the intolerant without limit, those intolerant forces can destroy tolerance itself. On this basis, Popper contended that a tolerant society has the right—and even the duty—to be intolerant of intolerance, especially when it threatens democratic institutions or the rights of others. He was concerned with totalitarian ideologies in general, but the context of his writing was heavily shaped by both fascism and communism. His point is not to suppress dissent lightly, but to protect the framework that allows open debate and freedom from being overrun by forces that would abolish them. A critic might say that Popper’s argument is a paradox. But is it?

I addressed Popper’s apparent paradox in late February of this year in Weaponizing Free Speech or Weaponizing Speech Codes? (See also my February 2021 essay The Noisy and Destructive Children of Herbert Marcuse and my August 2019 Project Censored article Defending the Digital Commons: A Left-Libertarian Critique of Speech and Censorship in the Virtual Public Square.) What prompted that essay on Popper was Vice President JD Vance’s speech at the Munich Security Conference, criticizing European nations for their restrictions on free speech and association. There, I concerned myself with the suppression of populist nationalist parties in Germany and the larger problem of censorship in Europe.

In the present essay, which follows up on one of Sunday’s essays (Defensive Intolerance: Confronting the Existential Threat of Enlightenment’s Antithesis), I turn my attention to the ascension of Muslims to political office in the West, most obviously the victory of Zohran Mamdani in New York City’s mayoral race (which parallels the election of Sadiq Khan as mayor of London in 2016). I am drawn back into this subject not only because of this development but also because of a video I recently watched, which I posted on X on Friday (see below). Ayaan Hirsi Ali makes a strong argument concerning the tolerance of intolerance and how failing to keep democracy safe from totalitarian actors is a form of what Gad Saad, a professor at the John Molson School of Business at Concordia University in Canada, calls “suicidal empathy.”

Suppose a society values religious tolerance and guarantees every individual the freedom to follow their own faith. We don’t have to suppose this, of course, since this is the situation in the United States with the First Amendment in our Bill of Rights—at least in our finer moments. Yet, even in light of the power of that amendment, a question arises when a religious (or other ideological) group that rejects pluralism seeks political power. On the surface, barring such a group from governing might seem to contradict the principle of religious freedom. How can a tolerant society justify restricting a religion (or ideology) when it claims to respect the right of all individuals to believe as they choose?

Following from this, those who call for restricting a religion based on its rejection of religious pluralism are often accused of religious intolerance and smeared as bigots. But are they? Religious pluralism distinguishes between private belief and the political or personal capacity to impose that belief on others. Surely that distinction matters. Under the terms of religious tolerance, each person remains free to worship, practice, and organize according to their faith, provided they respect the same rights for others. No religion may demand that others obey its doctrines or attempt to enforce its rules across society. However, obtaining the political—or claiming the personal—capacity to impose beliefs on others is something quite apart from religious liberty. In the case of Islam, a totalitarian project that either subjugates non-Muslims or kills them (I will leave it to the reader to decide which is worse), the problem is, more accurately, the negation of religious liberty.

By separating personal faith from political authority, the free society ensures that belief itself is never suppressed while preventing coercion or domination by some over others. At the same time, it must preserve that arrangement by ensuring that religious groups seeking to undermine it are barred from political power. I would argue, if we agreed that this were necessary, that this restriction cannot rely on a guarantee from that group that they will respect pluralism if they obtain power; it must be based on an understanding of the doctrine of the religion itself. If the doctrine leaves no room for pluralism, then adherents of that doctrine are disqualified from holding office. The adherents cannot be trusted because the doctrine that moves them is totalitarian.

To put this another way, when a religious group explicitly rejects pluralism and seeks to impose its will on others, the society may justifiably limit its political power. This may include barring it from governing, enforcing laws, or controlling institutions in ways that would undermine religious freedom. The restriction is not on private belief, but on acquiring the capacity to destroy tolerance itself. In this way, the society preserves both freedom and pluralism: individuals can freely follow their religion, while no religion is allowed to use that freedom to eliminate the freedom of others. Nothing is taken from the person except his access to the means to take away the religious liberty of others.

The First Amendment is not the only obstacle in defending tolerance from subversion by the intolerant. By explicitly prohibiting any religious test for public office, a principle articulated in Article VI, Clause 3, which declares that “no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States,” the US Constitution makes it more difficult to keep from office members of a totalitarian political movement that moves under the cover of religion. Indeed, this is arguably the most daunting obstacle, since it concerns the specific political right we are discussing. The Constitution does place qualifications on those seeking office, including age restrictions and the requirement that any candidate running for President of the United States must be a natural-born citizen. But it appears to disallow any restriction on the basis of religious affiliation.

This provision was remarkable in 1787, when most nations and even many American colonies imposed religious requirements on public officials, often restricting officeholding to Protestants. But it could not have anticipated the Islamization project. Indeed, it was beyond the framers’ imagination to envision Muslims as potential officeholders. What concerned them was the policing of Christian sects or deism, which was relatively common among educated elites, including some of the Founders. Many of the colonial-era religious tests and oaths were explicitly or implicitly designed to exclude Catholics from holding public office or exercising full civil rights. By rejecting such tests, the framers established that the federal government would be secular in character, open to individuals of any or no faith.

Later interpretation and the Fourteenth Amendment extended this protection to state and local offices as well. While voters remain free to consider a candidate’s religion in their private judgment, the government itself cannot impose or enforce religious qualifications. However, religious tests remained on the books for decades in several states, namely those requiring affirmation of a Supreme Being. The Supreme Court only struck these down in 1961, in Torcaso v. Watkins, ruling that states could not require a declaration of belief in God as a condition for public office. Beyond religion, certain federal and state oaths required officials and teachers to swear they were not members of the Communist Party or any “subversive organization.” These loyalty oaths were also gradually struck down as violations of free speech and associational rights (e.g., Keyishian v. Board of Regents, 1967).

Together with the First Amendment’s guarantees of religious liberty and the prohibition of an established religion, the no religious test clause forms a cornerstone of America’s commitment to freedom of conscience and the separation of church and state. However, today we confront a totalitarian movement that uses foundational law to establish a political-religious regime that perverts that foundation. This is not an abstract concern. As I write these words, this is happening across the world.

Mayor-Elect of New York City Zohran Mamdani (Source)

Today, there are around fifty Muslim-majority countries. Fewer than half of these countries are formally secular, that is, maintaining arrangements that separate religion from government or law. The rest are governed in whole or in part by Sharia. Indeed, even in the formally secular Muslim-majority states, the secular arrangement is more nominal than substantive. Moreover, apart from government oppression, those who do not subscribe to Islam must deal with the extra-legal actions of Muslim proselytizers.

Muslims are now colonizing the West. Before 1970, there were fewer than 250,000 Muslims in the United States. Today, the number is approaching 3.5 million, concentrated in major Blue Cities in the Midwest and Northeast. In Europe, the Muslim share of the population is projected to reach 9 percent by 2030. It already exceeds 8 percent in France and Sweden.

What is driving this growth? Immigration (especially post‑1960s), higher fertility among Muslim immigrant populations, and a younger age structure. However, the proportion of Muslims need not be large to shift politics. Islam, which already enjoys the support of European elites, joins with progressive and social democratic forces to multiply its power—what is known as the Red-Green Alliance. The Muslim population of New York City is today around 10 percent of the city’s total, or roughly 850,000. A Muslim was just elected mayor of that city. In Greater London, approximately 15 percent of the population identifies as Muslim. A Muslim was elected mayor of that city in 2016 and is still serving. The effect, there as in other European countries and Canada, is the spread of Sharia councils and tribunals. Populations in the West are already partially governed by Sharia. (See Whose Time Has Come?)

The question of whether the West should allow the Islamization of its countries is an either/or proposition. There is no neutral position one may take on the question. Jihadism, the politics of sowing the seeds of Islam everywhere, is a militant doctrine advocating the establishment of Islamic-style governance through violent action. The Ummah is a central concept in Islam that refers to the global community of Muslims—those who share a common faith in Allah and follow the teachings of the Prophet Muhammad. The term literally means “community” or “nation.” But it carries a deeper moral, religious, and social meaning that transcends ethnicity, geography, or political boundaries. In its most profound sense, the Ummah is the collective body of believers united by their submission (islām) to God. The Qur’an uses the term to describe not just a sociological grouping but a divinely guided community.

In the late nineteenth and early twentieth centuries, Muslim intellectuals revived the concept of the Ummah as a rallying cry for unity among Muslims across colonial and cultural divides. Pan-Islamism seeks to reawaken a sense of shared religious and civilizational identity that transcends ethnic and territorial boundaries. The goal is to reestablish and spread across the world the Caliphate, the Islamic system of governance that represents the unity and leadership of the Ummah under a single ruler known as the Caliph, or khalīfah rasūl Allāh, meaning “successor to the Messenger of God.” For jihadists, the Ummah is not merely a theological concept but a political ideal—a means to unify the Muslim world and spread Islam globally.

In this respect, Islamism is like fascism, seeking to subject every man, woman, and child to totalitarian control. More than similar: Islam is a species of the thing itself. Islam is a clerical fascist project. Either one condemns fascism in whatever form it takes or he supports it, even if that support takes the form of indifference or silence. One cannot be neutral on the matter.

In my earlier treatment of Popper, the issue at hand was censorship of offensive ideas in Europe, not the question of political office. However, there have been steps by Germany to challenge the status of the Alternative für Deutschland (AfD) under its laws for protecting the constitutional order, although no formal party ban has yet been effected. Germany has legal mechanisms to ban parties that promote fascism or seek to undermine the democratic order. This is rooted in the post-World War II Basic Law (Grundgesetz), specifically designed to prevent the rise of totalitarian movements like the Nazis. The BfV, Germany’s domestic intelligence agency, has formally classified the AfD as a “right‑wing extremist” organization. This is an attempt to portray popular democratic forces in Germany as fascist in order to suppress them. AfD is not a threat to democracy, but to corporate statism and technocratic control. Globalists are using the principle of defending democracy from intolerance to suppress the populist uprising against globalization. Democrats in the US endeavor to manufacture the same perception about popular democracy, portraying Donald Trump and MAGA as fascists.

However, as noted, Islam is the thing in itself. One need not perform Orwellian meaning inversions to warp words into their opposites. There is no peaceful movement here that requires conjuring to transform it into a totalitarian monster. Why Islam is the religion favored by progressives and social democrats across the trans-Atlantic order is rather obvious: corporate statism, the instantiation of fascism in the twenty-first century, finds in Islam a useful form of totalitarianism—useful because it disorders the nation-state through demographic recomposition and cultural disintegration, and because it disrupts worker solidarity. The hubris of the transnational elite leads them to believe they can harness the force of Islam. But, as history tells us, they are playing with fire. (See Corporatism and Islam: The Twin Towers of Totalitarianism.)

I acknowledged in one of the essays cited at the top of the present one that libertarianism is itself an ideological framework with assumptions about the state’s proper role in society, but I stressed that the libertarian standpoint is not a singular truth; rather, it affirms the singular truth that authoritarianism negates the free and open society. I said in that essay that one cannot simultaneously proclaim support for a free and open society, on the one hand, and then, on the other, restrict arguments, ideas, opinions, and assembly. I still believe this. But on the question of public office, which I had not considered that deeply, I am not sure it is a contradiction to proclaim support for a free and open society while erecting barriers to office for those who advance a totalitarian ideology—not an ideology the corporate elite say is totalitarian, but one that is on its face totalitarian, one with a long history of showing its face.

So my argument that, from a libertarian standpoint, any attempt to police speech—even speech advocating authoritarianism—is itself authoritarian still holds. The principle of free speech holds only if it applies to all viewpoints, including those some find abhorrent. People are free to believe and say what they will. The matter at hand is not whether Muslims have a right to practice an abhorrent religion. Rather, the question is whether non-Muslims have a right not to be treated as second-class citizens in their own country. Not to sound trite, but one cannot have freedom of religion unless one first has freedom from religion. So we must consider whether it violates our principles to safeguard the West from this form of totalitarianism, and if we agree that it doesn’t compromise those principles, then erect the necessary structures that render our society immune to this problem. We need to move quickly. We don’t have much time. Did I already tell you: New York City just elected a Muslim mayor?

The Overrepresentation of Black Communities in Crime Statistics and the Source of Denialism on the Progressive Left


In a recent video I shared on Facebook, Will Johnson confronts several older white progressives at an anti-Trump rally. These individuals struggle to articulate their reasons for disliking Donald Trump. One of their reasons is opposition to Trump’s efforts to reduce crime in American cities, violence that disproportionately affects black people. This may seem strange at first, but there is a reason for it—and it goes to the heart of the epistemological problem that corrupts the rank and file on the Democratic side of the aisle.

The encounter resonates with themes from my criminal justice class, where I recently reviewed crime statistics. In that session, I examined overall trends alongside three key demographic factors: age, gender, and race. I emphasized to my students that we would spend several days unpacking the underlying causes of these numbers, as I anticipated concerns from progressive students about the stark overrepresentation of black individuals in serious crime data. This got me in trouble with a dean a few years ago, so the concern is not imagined.

To illustrate, consider homicide statistics: More than half of homicide victims in the United States are black. Their perpetrators are predominantly black men. Black men constitute only about 6 percent of the US population, yet they are responsible for between 45 percent and 50 percent of homicides. Rhetorically, I pose the question: Is this disparity inherent to the concept of race itself? The answer is no. Instead, I tell them, geographic and social context shape the statistics: these murders overwhelmingly occur in urban areas, specifically in black-majority neighborhoods within cities.

To explain this pattern, I draw on established criminological frameworks, such as social disorganization theory, differential opportunity theory, and subcultural theory. These models highlight how structural factors—community instability and lack of resources—foster environments conducive to crime. A historical dimension underpins this: the Great Migration of black Americans from 1910 to 1970. During this period, roughly half of the black population left the rural South for cities in the Northeast, Midwest, and West. Within the South itself, blacks migrated from rural areas to the cities. By 1970, the majority of black Americans, who had once lived primarily in rural communities and small towns, where children grew up in intact families, were concentrated in inner-city ghettos across the United States, where the black family disintegrated.

This is not a partisan observation but an empirical one: these cities have long been governed by Democrats. These are Blue Cities. The progressives in the video I shared—Democrats themselves—falter when the black man asks why their party seems indifferent to the situation in black communities. They offer no substantive response.

Their silence raises a troubling dilemma: either Democrats deliberately seek to subjugate black people through policies that promote poor labor force attachment, welfare dependency, family disintegration, and crime, or they suppress discussion of these issues to avoid exposing how their own policies—from globalization (offshoring and mass immigration) to welfare programs—have devastated black communities. Make no mistake: ghettoization and family disintegration lie at the heart of this crisis. Through these mechanisms, black populations have been systematically idled and marginalized, maintained in a state of economic and social subjection.

Image by Grok

Who bears responsibility for this? It’s not the MAGA movement or Republicans. Historically, Republicans freed the slaves and, during Reconstruction, attempted to restructure society to prevent black subjugation. Republicans do not control the major cities of America. The post-slavery oppression of black people was engineered by Democrats.

A deeper truth emerges here: even if we set aside the notion that some Democrats would actively desire this outcome, the reality is that they do not genuinely aim to help black communities but rather maintain them in impoverished inner-city areas. If Democrats answered the questions honestly, they would have to acknowledge the horrors inflicted by progressive social policies on the very people they claim to represent.

Progressives are terrified of this recognition, not only by those outside their ideological bubble, but by themselves. Such an admission would shatter their faith in the Democratic Party. Many have been raised from birth to identify as Democrats and to despise Republicans. Their worldview hinges on a binary premise: Democrats are good, Republicans are bad; Democrats are liberators, Republicans are oppressors; progressives are anti-racist, conservatives are racist. To confront this as a falsehood would mean dismantling their entire identity. They would leave the tribe. They fear the judgment of their peers: “What happened to you? You used to be a progressive Democrat. How can you align with Republicans now?” This is the power of political identity—it is a pseudoidentity that disorders clear reason for the sake of ideology.

This is why reasoning with such individuals is so often futile. They rationalize the destruction of black communities as a charitable act to be enforced by the government—a view that is 180 degrees from reality.

The corporate state media plays a massive role in perpetuating this distortion. As the propaganda arm of transnational corporate elites and globalists who are disorganizing the working class to disorder the nation, the media promotes an ideology that, if people were rational and open to facts, would attract virtually no adherents.

I suspect this is why progressives identify with Islam. In Islam, if one leaves the faith, he risks death. For progressives, excommunication substitutes for death. This is why I have been subject to harassment by progressives. It is not my opinions themselves that provoke it, but what progressives perceive as my betrayal of the tribe.

There are several examples of my betrayal. There are few issues that I’ve changed my mind on so dramatically in the face of logic and evidence as gun control. I was spectacularly wrong. I was spectacularly wrong on the trans issue, too, but that was because I didn’t know what I was talking about. On the question of guns, I have no excuse. I also have no excuse for past arguments on matters of crime and race. I should have known better. The problem was tribal. I was never fully progressive, but I was pulled into the orbit of that worldview for many years. Leaving that orbit was liberating.

It’s instructive that my views on guns, queer theory, and race and crime started to crumble at the same time. That’s what really frightens progressives to the point that they refuse to engage in conversation. They’re afraid that if they change their mind on one thing, other articles of faith will start to fall, to the point where they risk losing their worldview, which is precious to them not because it’s a rational standpoint, but because it’s an emotionally satisfying epistemology—and because they don’t want to alienate the tribe where all their friends dwell.

Why a Man of the Left Would Support Right-Wing Populism

Have you watched Frontline’s The Rise of Germany’s New Right, posted a few days ago? You should (see link below). As you watch (if you watch), be aware of what shapes PBS’s framing of right-wing populism: a corporate progressive bias that portrays the populist movement across Europe and North America as the politics of extremism. Yet tens of millions support populist politics, whereas outlets like PBS represent the interests of only a small number of elites and functionaries.

As I have noted on this platform (see, e.g., Am I Rightwing? Not Even Close), all my life I’ve identified as a person on the left. I still do. Yet, over the last several years, I’ve found myself supporting movements such as Brexit, MAGA, the Sweden Democrats, and Germany’s Alternative für Deutschland (AfD)—all of which are routinely portrayed as far right. This shift isn’t the result of a sudden change in my core values, but rather a reaction to what has happened to the left itself over the past fifteen or so years, and indeed to longer historical developments beneath the surface. Moreover, I haven’t so much “found” myself sympathetic to the right-wing position as I have determined that it is the right position for a man of the left.

Over time, the cultural and epistemic framing of our sense-making institutions—academia, media, entertainment, and other organs of public meaning—has shifted steadily leftward. This transformation has redefined the ideological landscape, creating the perception that the political center has moved rightward when in fact it is the left that has migrated further left. However, this leftward tilt is an odd one—can it really even be described as left?—in that it has not deepened its commitment to the ideas of the Old Left, but rather fused New Left sensibilities (critical theory, postmodernism, queer theory) with corporate state power. 

Elon Musk is seen on a large screen as Alice Weidel, co-leader of Germany’s AfD party, addresses an election campaign rally in Halle, eastern Germany, on January 25. 

As globalization advanced in the background over the last several decades, gaining momentum under President Bill Clinton—although for the most part prepared by his predecessor George H. W. Bush (who gushed over the New World Order)—it generated conditions of rising inequality and widespread resentment. NAFTA, completed and signed by Bush in 1992 (with Clinton securing congressional approval), and the Uruguay Round of GATT—which led to the creation of the World Trade Organization—sacrificed US sovereignty to transnational corporate power. The coup de grâce was Clinton paving the way for China’s entry into the WTO.

For the most part, the media, already shifting leftward in the 1970s, framed globalization as an unalloyed good. The left, instead of addressing the material roots of this discontent as it had traditionally, turned increasingly toward identity politics, heightening group antagonisms and fragmenting social cohesion. This deepened the angst of a nation devastated by globalization.

As a result of these developments, many who saw themselves as left-liberal began to feel politically homeless. Conservatives, embracing many of the tenets of liberalism, became seen as allies. The emergence of alternative media allowed people to step outside the hegemonic control of traditional outlets and to see these developments with greater clarity. Conservatives themselves began shifting away from the neoconservatism and neoliberalism that had corrupted the Republican Party. Lincoln’s Republican Party was making a comeback.

In light of these developments, the rise of right-wing populism appears less as a descent into extremism and more as a form of democratic resistance and the reclamation of democratic republicanism—a popular attempt to push back against a globalist order eroding national sovereignty and the Westphalian system of independent states in favor of a transnational, corporate, and technocratic regime. Liberals and conservatives could see that America’s decline was a managed one, and they forged an alliance to turn things around. This is how you get a Trump Administration with Robert F. Kennedy, Jr., as Secretary of Health and Human Services.

Unsurprisingly, the institutions dominated by progressive and social-democratic sentiment portray movements like the AfD in Germany and MAGA in the US as dangerous or regressive. From their vantage point, any assertion of national identity or popular sovereignty represents a threat to the globalist project. Predictably, they reinforce these narratives by linking right-populist movements and personalities to Russia or other perceived external enemies, as can be seen in the PBS documentary.

Across Europe—and indeed, across the Western world—the same pattern is unfolding: ordinary citizens, alienated by the elite consensus and frustrated with the failures—or more accurately the design—of globalization, are embracing political movements that the left condemns as “far right.” Yet these movements are not fundamentally about hatred or reaction; they are, in many respects, an expression of resistance to a global system that has stripped people of agency and their democratic voice.

Narcissistic Vulnerability and The Dunning-Kruger Effect: The Psychological Roots of Resentment of Competence

This essay, which concerns the emotional and psychological burden that attends the Dunning-Kruger effect, continues my ongoing examination of cognitive errors and the rise of mental illness in America, especially among young Americans on the left. Before I get to the substance of today’s offering, I want to briefly review past writings on these matters.

I have written about cognitive dissonance and motivated reasoning in various essays. See, e.g., Living with Difficult Truths is Hard. How to Avoid the Error of Cognitive Dissonance; Why People Resist Reason: Understanding Belief, Bias, and Self-Deception; When Thinking Becomes Unthinkable: Motivated Reasoning and the Memory Hole; The Fainting Man: What Kennedy and Trump Were Doing. I first wrote about the problem of cognitive dissonance back in 2007 in Cognitive Dissonance and Its Resolutions to frame a critique of Mike Adams, a conservative criminal justice professor who was treated poorly by the University of North Carolina Wilmington. By 2020, the university had pressured Adams into early retirement. Shortly afterwards, Adams committed suicide.

I have written numerous essays on personality disorders, which the DSM classifies as Cluster B. I have explored how these function at the individual level and at scale. See, e.g., Explaining the Rise in Mental Illness in the West; Understanding Antifa: Eric Hoffer, the True Believer, and the Footsoldiers of the Authoritarian Left; Chaos, Crisis, Control—Narcissistic Collapse at Scale; Concentrated Crazy: A Note on the Prevalence of Cluster B Personality Disorder; RDS and the Demand for Affirmation; Living at the Borderline—You are Free to Repeat After Me; From Delusion to Illusion: Transitioning Disordered Personalities into Valid Identities. I have also explored mass psychogenic illness in several essays: The Future of a Delusion: Mass Formation Psychosis and the Fetish of Corporate Statism; A Fact-Proof Screen: Black Lives Matter and Hoffer’s True Believer; Why Aren’t We Talking More About Social Contagion?

Now I turn to the Dunning–Kruger effect, first identified by social psychologists David Dunning and Justin Kruger in 1999 in an article in the Journal of Personality and Social Psychology. The authors describe a cognitive bias in which individuals with low ability or knowledge in a given domain overestimate their competence. A corollary of this effect is that those with greater ability sometimes underestimate their competence; competent individuals assume their understanding is incomplete and that knowledge production requires openness and skepticism. As a consequence, those who know the least about a subject are at the same time the least equipped to perceive their own ignorance. Dunning and Kruger describe this as a “double burden” of incompetence and unawareness. For those with fragile egos, this is an unpleasant spot to be in.

David Dunning and Justin Kruger (1999), Journal of Personality and Social Psychology (image by Sora)

Empirical support for the effect Dunning and Kruger describe is found in experiments conducted across several domains, including logic and, interestingly, humor. In the realm of civil and rational argumentation, this is often seen in the penchant among confident but otherwise ignorant persons to engage in sophistry rather than reason (in extreme cases, harassment, intimidation, and even violence). Subsequent research has replicated similar findings in fields such as academic achievement and political knowledge. (To begin one’s journey through the literature, one may go here, here, and here. The last of these may be of some help to teachers who encounter this effect and its attendant psychological burden in their students.)

Thus, beyond its original formulation as an error of metacognition, the Dunning–Kruger effect points to deeper psychological and social dynamics. When people overestimate their understanding of the world, they often do so not merely out of ignorance, but out of a need to maintain a positive and stable self-concept in the face of ego threats. They are triggered by those they perceive as smarter than they are. This reaction is most likely when there is an ideological or political disagreement and the afflicted wishes to believe his opponent did not arrive at his conclusions rationally. To presume otherwise might force an examination of his own beliefs, to which he is stubbornly committed for tribal reasons.

Those with fragile egos are especially upset when a person they look up to demonstrates openness by changing his opinion on an article of faith associated with several other articles perceived to be constituents of a unified ideological worldview. Here, acknowledging that one is wrong about one thing exposes one to the threatening possibility of being wrong about a great many things, including what is perceived as the core assumption holding the standpoint together. This is why fragile egos are so rigid in their thinking (Eric Hoffer captures this trait in The True Believer).

Epistemic overconfidence, that is, the overestimation of one’s grasp of complex biological or social realities, leads individuals who hold strong but poorly informed opinions to dismiss or resent experts and, more broadly, those they suspect are smarter than they are across many domains, as evidenced by greater professional accomplishment and a demonstrated proficiency in applying knowledge and method to other areas. Whatever one thinks of academics with advanced degrees, obtaining those degrees and publishing in peer-reviewed journals indicates that, for the most part, they can think through problems carefully and avoid arriving at conclusions without sufficient evidence, a capacity to which, gatekeeping aside, the publication of their findings and the appeal to those findings by other scholars attests. However much the fragile ego might disagree with the conclusions of a scholar’s work, he is still confronted with a quality of mind he does not himself possess—and he resents this. The defensiveness of those who belittle and downgrade demonstrated proficiency in knowledge production serves a psychological function: it preserves the illusion of competence in the face of evidence to the contrary.

Such emotional factors as envy and jealousy thus mark the psychological burdens of insecurity. This is painfully obvious to those who encounter such persons, but it is not always fully recognized by the afflicted, who keep that recognition at bay by lashing out at those they perceive as their antagonists—their betters—those whose existence reminds them of their incompetence.

An important piece of this is the penchant among shallow thinkers to engage in social comparison, noted in 1954 by Leon Festinger, best known for his theory of cognitive dissonance (a not-unrelated phenomenon). Festinger posits that many individuals assess their own worth by comparing themselves to others. Following from this, when an individual with a fragile ego perceives another person as more knowledgeable or insightful, even if unconsciously, it threatens his distorted sense of self. To protect against this threat, his mind generates feelings of contempt and resentment toward the more competent individual. Such a reaction acts as a psychological buffer, transforming admiration—which would require humility, absent in the narcissist (which I’m coming to)—into hostility, which is an attempt to reclaim or preserve ego integrity.

This mixture of insecurity and resentment often evolves into a fixation. A person who subconsciously recognizes his inferiority in understanding becomes preoccupied with the individual who exposes that gap. The result is an obsession—part envy, part hostility, part dependence. The dependence piece is crucial, as this is the persistent source of compulsion to ruminate over the situation. The individual revolves around the target of his resentment. He may seek to discredit the more knowledgeable person, undermine his authority, prove him wrong (albeit with no substance), or humiliate him, typically by projecting his insecurity onto the target of his emotional pain, all while seeking validation from him. He wants to get a rise out of the person because of a pathological need to be acknowledged. Submerged in his ontological insecurity, he desperately wants to have an effect against which to check his significance.

Image by Grok

Psychologically, this pattern aligns with what is known as narcissistic vulnerability (which is not divorced from grandiosity)—a fragile form of self-esteem that vacillates between feelings of shame and superiority. The vulnerable narcissist is highly defensive, prone to envy, and frequently interprets the accomplishments of others as threats to his own self-worth. This defensiveness serves a protective psychological function, helping to preserve an illusion of competence and maintain a positive self-concept in the face of failure and self-doubt about his own adequacy.

His dismissal of, or resentment towards, those he perceives as more accomplished and smarter than he is often leads him to engage in passive-aggressive behavior, shielding his fragile self-image while lashing out. He feels the need to belittle the target of his envy, but because he knows or fears that he cannot compete openly, he resorts to indirect expressions of hostility. Instead of directly stating disagreement, he expresses his angst through covert action. One sees this in anonymous letter writing or, more conveniently, in social media accounts with obscure names, used to follow and, on the sly, harass those who are smarter than they are. Such persons are often afraid of publicly revealing their identity, since this would also reveal that they lack the smarts they so desperately wish they had. Thus, the effect becomes intertwined with defense mechanisms that protect the sufferer against the pain of public recognition (which is outsized in his mind). The pattern is not merely one of individual delusion but of social friction—an elevation of self-assured ignorance that hides behind the many opportunities to disguise one’s identity made possible by a highly technological world.

Therefore, the Dunning–Kruger effect is not only a statement about cognitive error but, more importantly, I think, also about emotional fragility and the human need to preserve self-worth without corresponding accomplishment. I want to be careful about attributing this to human nature generally, though. The need is felt not by everyone, but by those who deep down know they are inadequate but cannot, because of their narcissism, acknowledge or defer to those who aren’t. While its empirical form describes an error of judgment (and some of the literature on this front is obnoxious in its obsession with rationalizing appeal to the authority of consensus and expertise), its deeper significance lies in how people, out of their own ignorance and insecurities, respond emotionally to the uncomfortable presence of those who do, in fact, understand the world better, and who thus remind them of their own inadequacies.

At its most profound level, then, the effect reveals the tension between ego and humility—that is, the difficulty of admitting what one does not know, and the emotional burden of encountering those who do. It would be one thing if they suffered privately, but those suffering from fragile egos sometimes burden others with their self-loathing. At that point, the effect becomes pathological, and clinical intervention is indicated.

Gavin Newsom’s Invocation of Christian Doctrine to Justify Big, Intrusive Government is Wrong and Cynical

Gavin Newsom, a man who slept with his campaign manager’s wife and has done nothing about the tens of thousands of people living on the sidewalks of California cities, told Jake Tapper today that it was un-Christian to reduce the size and scope of the welfare state. One may believe food and medical assistance to the poor is one of the roles the government should perform, but that belief does not follow from Christian teachings. There is nothing in Christian doctrine that recognizes a role for government to play in meeting the needs of the poor and vulnerable. 

Newsom’s argument that expansive social programs reflect Christian values of care and compassion for the less fortunate is a common one among proponents of big government, advanced to justify the size and scope of the state apparatus. While it is true that acts of generosity and mercy are central to Christian ethics, the question is not whether these acts are good, but whether scripture or Christian tradition supports their administration through the coercive power of the state.

When one turns to the Gospels, one indeed finds there injunctions to give to the poor, to feed the hungry, and to clothe the naked. But these are directed at individuals, not political authorities. Charity, in the Christian understanding, is a voluntary act of love—an expression of free will guided by compassion and faith. Coerced giving, by contrast, may provide material benefit but lacks the moral dimension that makes charity a virtue.

Indeed, the recent insistence on empathy—feeling what another person feels by putting yourself in their place—and, in previous decades, on sympathy—feeling compassion without necessarily sharing another’s point of view—while concerned with emotional responses, is not really part of the moral commitment central to Christian teachings. Rather, the commitment is love, or agape, the highest form of charity. This involves self-giving action for the good of the other. It is rooted in free will, not emotion. If a man wishes to be kind, he may be so; the government cannot compel his kindness.

Moreover, Christianity emphasizes not only moral agency but also personal responsibility. The Good Samaritan is commended not because he voted for a policy compelling others to give aid but because he chose to stop and help a stranger in need. The spiritual merit lies in the act freely given. Paul writes in 2 Corinthians 9:7, “Each one must give as he has decided in his heart, not reluctantly or under compulsion, for God loves a cheerful giver.”

California Governor Gavin Newsom

The Gospels are very clear about this: the Christian ethic of charity is fundamentally incompatible with the idea that government should enforce redistribution. The state operates by compulsion—through taxation and regulation—whereas Christian charity is rooted in conscience, liberty, and love. Newsom is arguing that Christian charity can be achieved through state coercion. Nothing could be further from the truth of scripture.

This is not to say that Christians should oppose all forms of government assistance, to be sure; rather, it is to say that they should recognize the distinction between charity and state-enforced redistribution. The former benefits both giver and receiver; the latter, while (perhaps) alleviating material hardship, erodes personal responsibility and the communal and spiritual bonds between individuals. As the Robin Trower lyric in his 1974 “Too Rolling Stoned” goes: “Takers get the honey/Givers sing the blues.”

Historically, it has often been Christians themselves—through churches, missions, and voluntary associations—who have cared for the poor most effectively, motivated not by law (nor by empathy) but by love. In fact, Christians give much more to charity compared to secular individuals. The Gospel model of charity is deeply personal and moral, not bureaucratic or political, and the personal and moral desire to give is much stronger among Christians than in other groups.

Finally, whatever one’s view on Christian charity or the place of big, intrusive government, the United States is by design a secular state. It is not guided by Christian doctrine but by the will of a people who express many faiths—or no faith at all—with the rights of the individual, among these conscience and property, protected from the tyranny of the majority.

Newsom’s invocation of Christianity can have no purchase here—not only because his interpretation is wrong (really, cheap political rhetoric designed to manufacture the appearance of hypocrisy on the other side), but because religion is walled off from the formation and implementation of law and policy. Indeed, it is the principle of religious liberty that lies behind the strength and vitality of Christianity in America.

James Madison, in an 1832 letter written to Henry Lee, reflected on the meaning and intent of the First Amendment, particularly the Establishment Clause. “The tendency to a usurpation on one side or the other, or to a corrupting coalition or alliance between them,” he wrote, “will be best guarded against by an entire abstinence of the Government from interference in any way whatever, beyond the necessity of preserving public order, and protecting each sect against trespasses on its legal rights by others.”

Madison believed that government involvement in religion inevitably corrupts and weakens faith. He saw in the situation of Europe historical evidence that state-established churches led to loss of spiritual vitality, religious stagnation, and persecution. He was convinced that religion flourishes most when left free from government control or support. He argued that state aid to religion undermines its authenticity and vigor. Separation was therefore not only good for the preservation of secular society, but good for religion, because it preserved its independence, integrity, and character of voluntarism.

Capitalism, Socialism, and Communism: Avoiding Sophistry By Defining One’s Terms

I will do this from the Marxist standpoint so that even the true believer may hear.

Capitalism is an economic system based on private ownership and control of the means of production, primarily by capitalists and the petty bourgeoisie. The majority of people, lacking direct access to productive property, must sell their labor for the wages they need to meet their needs and wants. Welfare provisions, social safety nets, or other public functions do not alter the fundamental structure of capitalism—wage labor and private capital remain central. Indeed, such provisions and nets are designed to keep the socialist revolutionary at bay. Fire departments are not socialist institutions; neither are war departments.

Socialism is a system in which the means of production are, at least in principle, collectively owned and controlled—either directly by workers in their enterprises or indirectly through a state and an army of bureaucrats that purports to act in the interests of the working class (a situation of which one should always be suspicious). The goal of socialism is to subordinate capital to labor and to distribute the social product/surplus more equitably. Do I need to mention that the character of state control and whether it truly represents worker interests is historically and theoretically contested? Looking at unfolding events, I suppose I do.

Image by Sora

Communism is a classless, stateless form of social organization in which property is held in common and production is guided by the principle “from each according to his ability, to each according to his need,” a slogan popularized by Marx that is remarkably similar in spirit to the story of Ananias and Sapphira, which appears in Acts 5:1–11 (a story with which I will conclude this essay). Under this imagined situation, coercive institutions of the state become unnecessary because social relations are based on cooperation rather than exploitation, much as they were in the original human society (gatherers and hunters), but on a higher technological plane.

Though theoretically harmonious, the realization of such a system would in practice require a level of social coordination and technocratic control—and a nonorganic moral consensus—that makes it inherently utopian (as in “no place”). It is, therefore, a fanciful paradigm of what Thomas Sowell calls the “unconstrained vision of human nature,” that is, the belief that humanity has no intrinsic nature and is therefore infinitely malleable in the hands of those who would design the perfect society.

If readers aren’t sure they’re understanding what I am conveying here, then I recommend B.F. Skinner’s Beyond Freedom and Dignity and the utopia he describes there. Or maybe one requires a beating over the head with it, in which case I will assign George Orwell’s Nineteen Eighty-Four. The point is that communism, even in its most optimistic conception, presupposes a coordinating function, i.e., managerial control (what Engels called the “administration of things”), which in practice necessarily requires coercion and hierarchy. But, then, so does state socialism, a fact to which all really-existing socialist systems attest.

I conclude with the fates of Ananias and Sapphira in the New Testament. Making this connection got me in trouble at Middle Tennessee Christian School, which I attended for ninth grade. In Bible Study, we covered the Book of Acts in-depth. Familiar with the book as a child, the semester-long deep dive into scripture, along with my knowledge of Karl Marx, caused me to see a connection I had not seen before.

A bit of background: The first Christian community was founded in Jerusalem by Jesus’s original apostles—Peter, James, John, and the others—shortly after Jesus’s death, decades before the stories of Jesus and the first church were written down. The Book of Acts, traditionally attributed to Paul’s companion Luke, was written around 80–90 CE. Acts is a theological history that preserves genuine early traditions while shaping them into a narrative of the church’s growth.

The story goes like this:

In the earliest days of the Christian community in Jerusalem, the believers lived in unity, sharing their possessions so that no one lacked anything; those who owned land or houses would sell them and bring the proceeds to the apostles, who distributed resources according to each person’s need. This was the first Christian church.

A wealthy man named Ananias and his wife Sapphira sold a piece of property but secretly kept part of the money for themselves while pretending to donate the full amount. When Ananias presented the gift, Peter confronted him for lying not just to the community but to God, and Ananias suddenly collapsed and died. Later, when Sapphira arrived, unaware of what had happened, she repeated the falsehood and likewise fell down dead. (My own view is that things went down differently, a bit more human agency involved, but that’s neither here nor there, I suppose.)

When I told the teacher and my classmates the connection I perceived between communism and the first Christian church, the teacher explained that such an arrangement would be fine as long as Jesus were the dictator. I was then sent out of the room to sit and reflect upon my thoughtcrime on a bench in the hall.

I was wrong, of course. The story of Ananias and Sapphira may seem to contradict the Christian emphasis on individual freedom, conscience, and salvation by personal faith. However, the focus is not on denying personal freedom or property rights, but on deception. In the narrative, Peter makes clear that the couple was under no obligation to sell their land or donate the full proceeds; their property remained their own. The problem was not withholding money but lying about it, betraying the trust of a community that was, at that early stage, practicing voluntary communal sharing under unique circumstances (remember, followers expected Christ’s imminent return).

Thus, as I would later come to understand, rather than prescribing communal ownership for all Christians, the passage serves as a moral warning about hypocrisy during a formative and spiritually intense moment in the life of the church. Early Christianity affirms individual conscience, personal faith, and voluntary generosity, but it also stresses honesty and responsibility within the community. The story highlights the seriousness of violating that trust, not a rejection of individual freedom.

It would have been useful for the teacher to have corrected my error so that students could learn about individualism and voluntarism in Christianity. Instead, he appeared to lack an understanding of Christianity sufficient to take advantage of a teachable moment, and my error—and his ignorance—resulted in students hearing that day that the first Christian church was an instantiation of communism. Whatever else the first church was, it wasn’t that; communism in Marx’s conception is the stage of human development that socialism prepares, and socialism is not a voluntary condition.

Defensive Intolerance: Confronting the Existential Threat of Enlightenment’s Antithesis

Today, I turn my attention to the ascent of Muslims to political office in the West. The occasion is the victory of Muslim and socialist Zohran Mamdani in New York City’s mayoral race. Mamdani’s win echoes the 2016 election of Sadiq Khan—former chair of the Fabian Society—as mayor of London. Khan’s election occurred the same year a majority of Britons voted to detach from the pan-European superstate that had long shaped their political destiny (and really still does). Mamdani’s rise comes in a year when the populist forces that fueled Brexit and Donald Trump’s first term have once again propelled Trump into the presidency, this time with a clearer mandate. The gulf between cosmopolitan and working-class sensibilities could not be starker—on both sides of the Atlantic.

While self-described democratic socialists are busy on social media trying to rationalize socialism (“Socialism is the fire department saving your house”), many among them are leaning into the new ecumenism that, on the occasion of Trump’s 2017 executive order restricting travel from several Muslim-majority countries, found them staging large-scale protests at airports and in public squares across the United States. Demonstrators held signs denouncing the ban and expressing solidarity with Muslims, invoking religious unity, quoting the Statue of Liberty (Emma Lazarus’s poem “The New Colossus,” penned in 1883 to help raise funds for the statue’s pedestal, affixed to the inside of the pedestal in 1903), or declaring “I Am Muslim Too.” (Perhaps in time, the Islamophiles will get their wish.)

The practice of ecumenism (or ecumenicalism) was originally founded on the idea of cooperation, dialogue, and unity among Christian denominations. The aim was to heal divisions among Christians so that the faithful could better witness to their shared faith and work together in the world. Over time, ecumenism came to represent a shared commitment to interfaith cooperation and social justice, leading many Christians to adopt the doctrine of inclusivity. Ecumenical types welcomed non-Christian faiths, including Islam and Judaism, into the dialogue, which, according to the inclusivity doctrine, with respect to Islam, meant tolerating the intolerable.

Image by Grok

The modern ecumenical movement thus professes a commitment to the liberal values of pluralism and tolerance, albeit warped by a progressive twist, one that Canadian psychologist Gad Saad identifies as parasitic, namely cultural relativism, that is, the suspension of moral and political judgment of the thought and actions of those from other places. What follows from relativism is the expectation that reasonable people adopt a neutral gaze and see all religions as compatible and equal, and therefore as at least to be tolerated—better yet, admired. One must adopt this view if one wishes not to be seen as a bigot or a chauvinist. You know, an Islamophobe.

Assuming compatibility and equality of religious belief obscures the reality that religions can be, and in fact are, antithetical to the universal interests of humankind—the rights to autonomy, conscience, publishing, and so forth. I do not here mean mere incompatibility, but antithetical in a way that makes peaceful coexistence or synthesis impossible, with one faith demanding the submission of all others based on a doctrine of offensive intolerance. In this situation, either the antithesis is allowed to negate the thesis by subverting pluralism or tolerance to attain positions of commanding power, or the thesis precludes its annihilation by restricting the power and presence of the antithesis. This is defensive intolerance: the thesis requires the latter to preserve the arrangements that make religious liberty possible. This stance depends on an awareness that the threat is existential; to do anything else is a perversion of tolerance, what Saad describes as suicidal empathy.

Let’s work through this apparent paradox dialectically. Suppose a religion that supports the separation of church and state and promotes the idea of individualism—that is, the secular ethic of religious liberty—and suppose support for this arrangement and idea exists at the doctrinal level. This is the religious thesis. In the West, the thesis is Christianity, which, in its historical development, is simultaneously the thesis of the Enlightenment. Indeed, the Enlightenment is the result of a long struggle within Christianity to become a rational faith, which required the fracturing of Catholic hegemony, brought about by the emergence of Protestantism. So impactful was Protestantism and the idea of individualism, already present in the womb of Christian doctrine, that it moderated the collectivist orientation of Catholicism. Here, ecumenism served a purpose by bringing Christians together around the rational premise embedded in scripture. The Founders of the American Republic were ecumenical in this way. They established the United States as the nationalist exemplar of secularism, putting the rights of man beyond any particular religious doctrine by locating them in human nature.

Suppose now the antithesis of this establishment. Rather than supporting religion-state separation and the ethic of individualism, the antithesis advances a collectivist doctrine, one that seeks total control over a population by converting, marginalizing, or eradicating those of other faiths by acquiring—including through democratic means—the political power necessary to assert its dominance over a society. Today’s religious antithesis is Islam, a term that literally means “submission” and “surrender,” and its advance everywhere terminates in the eventual elimination of secularism and individual liberty. The oppression and killing of apostates, heretics, and infidels, as well as the subordination of women, follow. The faith is not a civil and rational one, but violent and irrational, incapable of liberalism because it is intrinsically illiberal. As such, it is a species of clerical fascism. Alongside the secular totalitarianism of China and the rise of transnational corporate statism, Islam is the greatest threat to human freedom in the world.

In such a situation, the religious tolerance exhibited by Christian ecumenism, born of the Enlightenment sensibilities that the struggle for rational Christianity gave rise to, endangers the very arrangement that makes ecumenism possible. Put another way, progressive ecumenism brings people to tolerate a religion that rejects the foundation of religious liberty and the secular regime of tolerance. The empathy here invites the destruction of the empathetic. It is not only in this case that the religious antithesis opposes the religious thesis; more broadly, the greater thesis that includes both rational Christianity and the secular order it founded—upon which it depends to keep its faith free—is under threat of extinction by the presence of the antithesis.

I want to emphasize the point that what allowed the Christian thesis to thrive was its own gradual development of religious pluralism over centuries, a process only possible because there was a doctrinal basis for individualism. There is no such ethic in Islam. Islam seeks to replace Christian doctrine with its diametric opposite. Therefore, no coexistence or synthesis is possible because both the thesis and its antithesis rest on entirely antithetical standpoints. Imposing one requires enslaving the other; saving one requires excluding the other. And since Christianity rests on a rational foundation conducive to human freedom, the choice is clear. This is no small problem to be worked out in dialogue. This is a civilizational matter.

I can hear the antisemite: “What about Judaism?” A man accused of antisemitism, Karl Marx, in his 1843 essay “On the Jewish Question”—one of his earliest major writings, following his “Critique of Hegel’s Philosophy of Right” (both published in 1843)—makes a provocative argument about what he calls “practical Judaism,” by which he means the social logic of egoistic individual rights, exchange, money, and political emancipation as it functions in modern bourgeois society. He contrasts this with “rational” or “theoretical” Christianity, i.e., Protestantism, which he argues the Reformation transformed by stripping away its otherworldly (and I would add collectivist) orientation and aligning it with economic rationalization, self-interest, and worldly pursuits.

In Marx’s view, the Reformation completed the secularization of Christian Europe by bringing Christian moral life into line with the practical, this-worldly ethos he associates with bourgeois society. Thus, Marx suggests that the Protestant Reformation dissolved religious obstacles to capitalism by reorienting Christian life toward the rational, disciplined, individual-centered, materially engaged conduct that already characterized Jewish practical life. The result is that modern Christian society becomes market-driven, secular, and structured by the logic of capital—not that Judaism triumphs theologically (Judaism is not a proselytizing religion anyway), but that capitalism imposes its own universal epistemic.

Comparing Judaism with Islam betrays a profound ignorance of the respective faiths. Christians and Jews are indeed People of the Book. Over against them, the Islamization of a society cancels that epistemic and substitutes for it its own doctrine of submission and surrender. Existing law—in the West, founded on rational and secular principles, however corrupted these are in places and at times by ideology—is replaced by Sharia, rooted in the belief that all law derives from Allah’s will, and that interpretations of that design are properly delegated to a religious clerisy that instructs those charged with securing the order of things, an order conveyed to Allah’s messenger Muhammad by the archangel Gabriel, whom he claimed to have encountered alone in a cave. It would be one thing if Muhammad’s revelations affirmed the ethics of Christianity and Judaism. They don’t. They negate them.

Muhammad does not envision the world that Christianity supposes. Rational Christian doctrine is rooted in the idea that each person stands in a direct, personal relationship with God and is personally accountable for accepting or rejecting salvation. On this view, the separation of church and state is not merely a political arrangement but the natural outgrowth of a faith that emphasizes voluntary belief—since genuine conversion, according to Christian teaching, must be freely chosen rather than coerced. Indeed, periods of forced conversion or state-imposed orthodoxy are historical aberrations rather than logical consequences of Christian principles. Christian individualism and a secular political order complement one another by ensuring that the decision to follow Christ remains an act of personal conviction rather than compulsion.

Muhammad’s god envisions the opposite: a world in which every person either converts to the Muslim faith or is subordinated to it, a situation that leads to a progressive purging of Christians, Jews, Hindus, and so forth, either by formal, i.e., legal and state action, or informal methods, including harassment, intimidation, and extralegal violence. There is no earthly consequence for leaving the Christian faith; one need only accept Jesus into one’s life to avoid hellfire. Islam punishes apostasy by execution. The overall message of Christianity is one of forgiveness, grace, and repentance that extends to all people, regardless of the specific sins with which they struggle. In Islam, gay men are put to death.

Despite the desire to be tolerant of other religions, one cannot be so tolerant that one robs oneself of one’s dignity and freedom and, possibly, one’s life (yet we see progressive women trying on the veil). Indeed, self-defense is a foundational human right for a reason—survival—and that right extends to communities and nations: there is a moral obligation to protect the innocent and vulnerable from danger and harm. Islam’s offensive intolerance must therefore be met with defensive intolerance, since failing to do so imperils the very foundation of Western civilization. Its presence makes us all unsafe.

Those who appeal to childish religious tolerance (although it is not often actually naïve but strategic) ought to be asked to consider whether their tolerance of fascism should extend any further than allowing individuals to hold such beliefs. To be sure, we cannot (or at least should not) police opinion, but we can address conduct and take action against it. Assuming that the attribution of fascism is an accurate and not an ideologically or politically convenient one (such as the false attribution of fascism to populists and nationalists by globalists), should fascists be allowed to ascend to political office in America? Should fascists from other parts of the world be allowed to migrate to America and deepen fascism here? Would we not wonder why secular leaders and their rank and file welcome the bearers of fascism to America?

If the answers to these questions are “no,” “no,” and “yes” respectively, then it is only a matter of knowing what the thing itself is and applying the same standard.

The Fainting Man: What Kennedy and Trump Were Doing

Those who were present at the news conference where the man (not Gordon Findlay, the global brand director for Novo Nordisk) fainted reported that, since three doctors were tending to the man, it was not imperative that Kennedy, who is not a doctor, tend to him as well.

A rational person understands that not everybody responding to a health crisis has to behave in the same way. But in this age, on one side at least, rationality is embedded only in the system, not in minds—and the behavior that results is irrational.

On planet earth, what Kennedy did, besides making room (my initial interpretation, since he was not the only one to leave the camera’s gaze—something made all the more obvious by the handlers shooing reporters from the room, which observers can plainly see), was retrieve a chair. If you watch the sequence of events, the man first started falling, and Kennedy thought he just needed to sit. Kennedy also got a wet towel for the man, a common thing to do when an individual faints.

So, no, Kennedy wasn’t fleeing the scene. He was responding rationally to the situation. He’s known as a steady man.

Trump watches doctors tend to stricken man

Why people wrongly interpreted Kennedy’s actions, just as they wrongly framed Trump’s response (which was one of concern, if you don’t rely on a single conveniently curated still image), comes down to a cognitive bias known as “motivated reasoning,” in which one’s perception of a situation is shaped by an ideological or partisan worldview, often without charity—in this case, a worldview eager in every instance to attribute untoward motives to Kennedy and Trump. Those afflicted cannot see it any other way. And the corporate state media leaves them to it.

Concerning Trump’s actions that day: “I wanted to speak to the wife to let her know what was happening, but also comfort her,” Dr. Oz reported. “The president saw me in the corner and said, ‘Who are you talking to?’ I said, sort of sheepishly: I was talking to the wife. And he said, ‘Give me that phone.’”

Corporatism and Islam: The Twin Towers of Totalitarianism

Yesterday, I drafted an essay concerning the passing of Dick Cheney, a warmonger who held numerous positions in government over the last half-century, most recently as Vice President under George W. Bush (2001–2009), son of George H. W. Bush, the former CIA director and appointed minder of President Ronald Reagan. I hope to publish that essay tomorrow, but the election of Zohran Mamdani as Mayor of New York City pushes it back.

Mamdani is the man on the right

I will, however, provide readers with a bit of a preview because it is relevant here: My forthcoming essay will illustrate why one must avoid dwelling on personalities and instead understand world-historical developments and the dynamics driving them. I critique Thomas Carlyle’s influential “great man” theory of history. Recalling Georg Hegel’s concept of the “world-historical figure,” Carlyle contended that the actions of extraordinary individuals primarily shape the course of history; society advances because of rare figures whose courage, moral force, and vision enable them to influence the destiny of nations and civilizations. For Carlyle, historical change reflects human greatness, with the masses and social structures playing subordinate roles. History is, in his words, “the biography of great men.”

Although I emphasize the role of social structures in shaping historical events and the personalities that appear in our history books, there is something to Carlyle’s thesis. Muhammad, born in Mecca around 570 CE, regarded in Islam as the final prophet, received from the Archangel Gabriel God’s message to humanity. His revelations were later compiled into the Qur’an. Muhammad authorized and personally led numerous military expeditions after establishing Islam in Medina, and over the next decade, the Muslim polity expanded through warfare and the submission of other tribes. By the time of his death in 632, most of the Arabian Peninsula had come under his political and military authority. The regime was cruel and unforgiving. The case of Muhammad is certainly illustrative of the force of personality.

The ambitions and cruelties of Islam did not end there. The attempted Islamic conquest of Europe began soon after the rise of Islam, in the seventh century, as Arab forces expanded rapidly across the Mediterranean. Muslim armies entered Europe through two main routes: across the Strait of Gibraltar into Spain and through Anatolia and the Balkans toward Constantinople. Frankish forces halted further northward advance at the Battle of Tours in 732, but Muslims controlled the Iberian Peninsula for centuries, while Islam spread east and south across the planet.

Muslim rule in Spain ended in 1492, when the Catholic monarchs Ferdinand of Aragon and Isabella of Castile captured Granada, the last Muslim stronghold on the Iberian Peninsula. This event concluded nearly eight centuries of Islamic presence that had begun with the Umayyad conquest in the eighth century. The fall of Granada marked the completion of the Reconquista, the long Christian effort to reclaim territory from Muslim rule, and it coincided with Spain’s emergence as a unified Christian kingdom and global power. Without the bravery of the European Christians, the American Republic would not exist.

Muslim expansion into eastern Europe was largely halted by the strength of the Byzantine Empire, which successfully defended Constantinople during two major Arab sieges, preventing a direct route into the Balkans. Difficult terrain, long supply lines, and the rise of powerful regional forces created rather intractable military and political barriers. These factors together limited further advance and kept Islamic rule from extending deeply into eastern and central Europe during the early centuries of Islamic expansion.

However, Muslims could not be content with controlling much of the world east and south of European Christian civilization (for a map of Islam’s present reach, see my recent article Whose Time Has Come?). In the ensuing centuries, Islamic imperialists recalibrated their strategy of attack, most recently by invading Europe by invitation and preying on the misplaced humanitarianism of a people plagued by generational guilt and self-loathing.

Who ushered in the barbarians? Europe was betrayed by its own leaders. Transnationalists, bent on disempowering the indigenous peoples of Europe by changing the demographic composition of that continent and amalgamating nations in a single superstate governed by corporate power, threw open the Western gate to Islam. Today, Muslims enjoy ethnoreligious enclaves across Europe, and have successfully elected representatives of a totalitarian political movement to government. London’s mayor, Sir Sadiq Khan, is a Muslim. Like New York City, London is a world city, a hub of transnational corporate power.

Tragically, the United States is not immune to this development, namely the Islamization of the West, as mass immigration from Muslim-majority countries over the last several decades has established ethnoreligious enclaves here—not just on the East Coast, but in America’s heartland. And, as in the United Kingdom, this has resulted in the election of Muslims to public office, most recently, as we witnessed last night, in the financial capital of the corporate world order.

As an atheist, I dread the slide of the Christian West into clerical fascism. As critical as I have been over the decades of Christianity, the followers of that religion have been good to me. The historical record indicates that Muslims won’t be so kind. More than its record, there is a problem intrinsic to this ideology. Jesus preached peace and gave his life for his followers. The Founders of the American Republic, most of them Christians, established the paradigm of secular government. A Christian extremist, if he holds true to the teachings, can only be more like the teacher. The wristband he wears—“What would Jesus do?”—carries a rather benign slogan. However, the Muslim extremist is a man who strives to be more like Muhammad. He bears a different slogan—the slogan of a warlord—and he wears it on his forehead. He comes demanding surrender and submission.

That’s the difference doctrine makes. Not all religions are the same, and naive religious tolerance is a manifestation of suicidal empathy—and a strategy for conquest.

Reclaiming Work: Economic Nationalism and the Limits (and Perils) of Welfare

For much of my life, I have approached economic systems from a Marxist perspective, recognizing that capitalist relations are inherently exploitative. I still recognize the exploitative character of the capitalist mode of production—and I don’t think classical liberals would disagree (indeed, in many ways, Marx was a classical liberal, accepting John Locke’s labor theory of value). At the same time, I understand that modes of production before capitalism—feudalism, ancient class-based societies, and even some agrarian systems—were also exploitative. Apart from hunter-gatherer societies, which primarily involve subsistence-level labor, every class-divided society imposes forms of exploitation. Nevertheless, in any society beyond mere subsistence (and even in subsistence societies, where distribution is egalitarian, work is required), people must produce value through their labor and exchange it for the things they need to sustain themselves.

Capitalism, despite its exploitative nature, has historically enabled an unprecedented development of productive forces, greatly improving the overall quality of life. Yet the pursuit of free trade, made possible by capital portability and labor mobility, has introduced significant challenges. Policies favoring globalization, offshoring, and mass immigration have, in many cases, de-industrialized local economies and increased structural unemployment—what Marx called the “industrial reserve army.” Workers face declining wages, displacement, and marginalization by foreign labor, both abroad and domestically, as well as by automation and bureaucratic rationality. The emergence of artificial intelligence and advanced robotics will only worsen these problems, and the end of work is a very real and dire eventuality (we may have to move to some form of communism, whether we like it or not).

Image by Sora

The welfare state arises in this context as a mechanism, at least ostensibly, to support those marginalized by these economic shifts. While I firmly believe in taking care of the unemployed, underemployed, and otherwise disadvantaged, the expansion of welfare can entrench dependence. Moreover, as implied, this dependence may not always be inadvertent. To fund these programs, resources are drawn from productive members of society, creating a cycle in which the exploited labor force must also sustain the very system that compensates for exploitative policies. This dependency can have profound social consequences, including family disintegration, idleness, and the emergence of a managed “culture of poverty” under technocratic administration.

Although I critique capitalism’s exploitative tendencies, I also recognize that it will persist and bring considerable benefits to human civilization. Even the very poor live better lives than the poor did in the past. But are they freer? As an individualist who values personal freedom and autonomy, I oppose a welfare state that fosters dependency and expands government intrusiveness. Instead, I support policies that promote economic independence: protectionist measures, tariffs, strong national borders, and the cultivation of domestic industry—often called economic nationalism. By fostering a domestic labor market insulated from global wage competition, economic nationalism can raise wages, increase employment, and shrink the need for intrusive government. This was the vision at America’s founding, the American System, and it propelled a nation from agrarian peripheral status in the global capitalist economy to the most technologically advanced civilization in human history and world hegemon.

A contemporary illustration of this problem is the SNAP (food stamp) program (see Oh SNAP! Democrats’ Antics Raise Consciousness About the Consequences of Free Trade and Progressive Social Policy). In 1969, SNAP covered approximately 1.4 percent of the population; today, it encompasses roughly 42 million people, about one in eight Americans. While the expansion reflects some genuine social need, it also signals the risk of allowing welfare programs to replace economic and social policies that create prosperity at home and promote independence. SNAP was expanded not only to ameliorate the effects of globalization but also to pull a larger proportion of the population under government control. My argument is not against aiding those genuinely in need, but against allowing the state to substitute for policies that empower individuals through meaningful, well-compensated work. Economic and social systems should instead cultivate dignity, independence, and the opportunity for productive engagement. This is why I describe my politics as populist and nationalist and have aligned with the Trump wing of the Republican Party.

Progressives, who defend neoliberal and social welfare policies that undermine American labor and the nuclear family and subject the population to corporate and technocratic control, often attack populists as lacking empathy for those who are suffering. By doing so, they frame their argument in moral terms, portraying those who call for small, unintrusive government as heartless. However, morally, my argument is rooted in compassion and concern for human flourishing. Critics misrepresent the stance I have adopted as a lack of care for the vulnerable, but the core of my position emphasizes a desire to see people live dignified and self-sufficient lives. True compassion, in this view, is not merely the provision of aid but the creation of conditions under which people can thrive, achieve meaningful work, and participate fully in society.

It is also important to acknowledge that among those dependent on the welfare system are many able-bodied individuals who could work. While globalization has undeniably altered Americans’ life-chances and created real economic challenges, it does not eliminate the possibility of meaningful work. Human striving—the desire to work, create, and provide for oneself and one’s family—is a fundamental aspect of human dignity. Opportunities still exist. Historically, even during the Great Depression, when unemployment was widespread and opportunities scarce, people still sought work to sustain themselves and their families. Welfare dependency has diminished this drive, substituting state support for personal initiative, eroding the ethic of self-reliance, and degrading human freedom—too often producing demoralization, which in turn generates crime and violence. While the state can and should support those genuinely unable to work, it should not supplant the human pursuit of achievement, independence, and purpose in the face of economic adversity.

Restricting access to welfare programs, while politically controversial, can, over time, encourage individuals to reenter the labor market and take advantage of the jobs that do exist. Job-seeking also signals to capitalists, policymakers, and the public that workers desire an economic course that restores domestic employment and raises wages. Since unemployment statistics measure those actively seeking work, restricting welfare will raise the official unemployment rate by sending people out looking for work. This increase conveys to policymakers and the public that current economic and social policies contribute to labor market challenges and limit opportunities for meaningful employment. By making visible the gap between available jobs and the need for more positions and higher wages, these statistics provide feedback that can drive policy change.

Encouraging labor market participation thus both empowers individuals and communicates a democratic demand for reforms that restore economic independence, higher wages, and a more self-reliant society. Individual initiative thereby becomes a form of politics: by striving to work and provide for oneself, citizens communicate a demand for economic conditions that promote opportunity and self-sufficiency. Although opponents may frame welfare restrictions as unsympathetic or harsh, we must insist that such measures can, in fact, be the most compassionate course of action because they signal a demand for action. By fostering independence, human dignity, and engagement in meaningful labor—and by compelling elites to address societal unrest rather than channel it into projects that further globalization—these policies ultimately benefit both individuals and society, creating conditions in which people can flourish rather than languish in dependence.

The push for expanding welfare programs, especially when it cloaks itself in the language of empathy and humanitarianism and portrays opposition to big, intrusive government as heartless, is therefore a barrier that populists must overcome. Progressives’ misplaced humanitarianism, or, in societal terms, suicidal empathy, masks the long-term consequences of current policies and discourages critical evaluation. Even progressives face peril, as the present course risks undermining Western civilization and replacing capitalism with a form of neo-feudalism. By framing welfare expansion as an act of moral superiority, proponents of free trade (opponents of tariffs) have normalized and perpetuated the economic and social disruptions caused by globalization, offshoring, and the erosion of domestic labor markets, to the detriment of most, except perhaps the power elite. The vast welfare state serves as a tool to sustain these systemic forces, cushioning the population from their consequences while disincentivizing self-reliance and independence; far from being purely compassionate, this approach prioritizes ideological and economic goals over the long-term well-being and dignity of the very individuals it claims to help.

While my critique of the expansion of the welfare state might strike some as neoclassical, it is worth noting Friedrich Hayek’s nuanced position on welfare. Hayek recognized that large, intrusive government undermines individual freedom and is prone to the inefficiencies of central planning. Yet he argued that a compassionate society must provide for the aged, the disabled, and others genuinely unable to work. In this light, limited social welfare can coexist with a system that encourages self-reliance and initiative, provided it is narrowly targeted and does not create widespread dependency. Amartya Sen, in Development as Freedom, similarly advocates a minimalist welfare approach that ensures a safety net for the genuinely vulnerable without supplanting individual striving or labor market participation.

Finally, I have considered whether my argument can be situated within a Marxist framework without abandoning Marx’s critique of capitalism. Marx emphasized that exploitation arises when workers do not fully control the value of their labor and are subject to alienating conditions. From this perspective, policies that create dependency on welfare, rather than promoting productive labor, sustain labor’s alienation. At the same time, ensuring basic provision for those genuinely unable to work aligns with Marx’s concern for human dignity and material well-being. It is possible to maintain a Marxist critique of capitalist exploitation while advocating for policies that cultivate independence and self-sufficiency, seeing these not as a rejection of Marxism but as a pragmatic application of its principles to modern economic realities. Scholars such as Michael Lebowitz, in Beyond Capital: Marx’s Political Economy of the Working Class, emphasize the need for structures that promote worker agency—ideas that resonate with the balance I advocate. At the same time, Lebowitz envisions this in the context of industrial democracy and collective worker control over production, the possibility of which is highly unlikely given the concentrated power of elites in a transnational corporate system.

My bottom line is that, given the persistence of capitalism, it is preferable to embrace a small-government, liberal-capitalist framework with targeted social provisions, rather than allow an ever-expanding welfare state to entrench dependency and push society toward a form of serfdom, as Hayek warned. In this sense, one can maintain a Marxist critique of exploitation while pragmatically recognizing the benefits of capitalist development (Marx himself was impressed by capitalism’s dynamic). I am reminded of Christopher Hitchens, who publicly renounced socialism while still identifying as a Marxist, arguing that the capitalist revolution is not yet complete and that its ongoing unfolding promises greater affluence and human well-being through advances such as the progressive elimination of disease and the expansion of material prosperity. While it is unclear whether Hitchens would have endorsed globalization and free trade, his position resonates with my own: that it is possible to retain a critical, class-conscious perspective while advocating policies that maximize human flourishing within the existing capitalist framework, promoting independence, dignity, and meaningful work rather than dependency. The task before us is to shape it to the advantage of the working class, and, in the context of the international system, the advantage of the American worker.