Islamist violence refers to acts of terrorism or extremism motivated by Islamist ideologies, such as those associated with groups like al-Qaeda, Hamas, or ISIS, which seek to impose strict interpretations of Islamic law on the world. This form of violence does not fit neatly into the traditional left-wing versus right-wing political spectrum as it is typically understood in Western political analysis. Instead, it is more accurately treated as a distinct category of religious or ideological extremism. But that has not stopped politicians from hiding it behind the rhetoric of right-wing terrorism, or left-wing activists from seeing in Muslims an ally in their struggle against the free and open society.
The analytical distinction exists because Islamism is grounded in theocratic objectives—establishing governance based on religious authority—rather than in secular political ideologies, such as ethnic nationalism, liberalism, or socialism. To be sure, Islamists suggest their struggle is ethnic when they accuse their opponents of “Islamophobia.” That construction is something of a self-reveal, however, since it inadvertently invites the public to see Islam as an ideology, not an ethnicity. Apparently aware of this, propagandists have more recently substituted “anti-Muslim” for “Islamophobia.” But the swap doesn’t work well—at least for those whose brains are in gear. A Muslim is an adherent of the Islamic faith.
Perhaps applying conventional left-right labels to Islam obscures more than it clarifies. Islamism shares characteristics with far-right ideologies: a strong emphasis on authoritarian governance, moral conservatism, opposition to secular liberal values such as gay rights, gender equality, and pluralism, and a demand for traditional social hierarchies. Islam’s promotion of patriarchal structures and rejection of modern liberal norms does resemble far-right conservatism. To be sure, left-wing ideologies can be authoritarian, and often are, but they do not, for the most part, contain the same content as Islamism.
Some commentators have assumed that Islamist violence is grouped under “right-wing extremism.” In the wake of the Bondi Beach massacre, Australia’s prime minister, Anthony Albanese, made statements regarding the rise of right-wing extremism as a security threat in Australia. Although this is a rhetorical claim rather than a standard or widely accepted academic practice, it arguably follows from what I described above. Of course, Albanese’s motive is to marginalise populist-nationalist forces on the move across the world, decried as far-right actors.
Islamism has, at times, intersected tactically with left-wing themes, particularly through shared opposition to capitalism and Western imperialism. We saw this in Iran during the Islamic revolution in the late 1970s—with disastrous results. As the Iranian case shows, these overlaps are pragmatic rather than ideological and do not reflect a genuine alignment with left-wing political theory. Moreover, the virulent antisemitism associated with Islamist terrorism is shared by left- and right-wing ideologies beyond the Islamic space. I have written extensively on the rise of antisemitism among left-wing activists in the West. More recently, a strange affinity with Islam has emerged in the antisemitism expressed by prominent voices on the Christian right, for example, Tucker Carlson, Nick Fuentes, and Candace Owens.
I will leave the matter of right-wing antisemitism to a future essay and focus on the Red-Green affinity for the balance of this essay. Red ideology is characterized by atheistic materialism, class struggle, and opposition to capitalism (while embracing corporatism), while Green ideology emphasizes theocratic rule and opposition to secularism. Despite their ideological contradictions, both share a common objective: challenging Western cultural, economic, and political dominance. This is often framed through narratives of “oppressors” versus “the oppressed,” with conflicts such as Israel–Palestine portrayed as examples of “white settler colonialism.”
We see the alliance most concretely in communist and socialist groups supporting the Palestinian “resistance” movement, in Islamist leaders—such as Iran’s ayatollahs—employing anti-US and anti-capitalist rhetoric that resonates with leftist audiences (Michel Foucault was a fan), and in instances in urban Western politics where left-wing Muslims have attained leadership roles, Zohran Mamdani of New York and Sadiq Khan of London being the most obvious examples. Together, these examples illustrate how ideological cooperation can occur despite deep philosophical differences. The glue holding the coalition together: loathing of Jews, liberalism, and whiteness.
Tactically, leftist movements have historically relied on cultural Marxist and postmodernist discourse, disruptive protests, and identity politics, while Islamist movements prioritize jihad and the mobilization of the religious ummah. In the Red-Green alliance, these approaches converge in coordinated activism against shared enemies—such as “imperialism,” or “Zionism”—employing multicultural and identity-based frameworks to promote mutually reinforcing objectives.
Critics of the left (including some on the left) argue that leftist actors are naïve about the long-term goals of Islamist movements, particularly the risk of Islamist dominance after revolutionary success. They warn that Islamist groups and leaders strategically exploit leftist platforms and institutions to pursue broader objectives, foremost among them establishing a global caliphate. Historically, this alliance is temporary, with leftist groups marginalized or eliminated once Islamist factions consolidate power, as in the post-revolutionary purges in Iran.
In the study of political violence, Islamist attacks are frequently analyzed as a separate category, in part because of their unique motivations and, in many cases, their comparatively high lethality on a global scale. While forcing Islamist violence into a simple left-right framework oversimplifies its religious foundation and ideological distinctiveness, Islam’s presence in left-wing politics is a concrete reality. We see the alliance of anarchists, communists, socialists, and Islamists not because the former agree with everything Islamists believe, but because they share with Islamists a loathing of the Enlightenment and liberalism. Both Red and Green sides seek to replace the free and open society with a totalitarian order.
Is the Red-Green Alliance ideologically coherent? In terms of objectives, yes. That one finds it odd that left-wing actors work alongside an ideology that would, in the end, subjugate them and exterminate some among them is rather beside the point. To be sure, slogans such as “Queers for Palestine” are opportunities to point out the contradiction. However, characterizing Islam as right-wing extremism obscures the triple threat to Western civilization, the third threat being the corporate state project operating the Red-Green alliance. While we make explicit the contradictions, we also need to expose the reason why so much energy is spent glossing over them.
Roy Bhaskar was a British philosopher best known as the founder of critical realism, a philosophy of science and social theory that aims to bridge the gap between positivism and interpretivism. Critical realism holds that reality exists independently of our knowledge of it; however, our knowledge of reality is always fallible, theory-laden, and socially conditioned. The standpoint is “critical” because it critiques simplistic views of science and society, and “realist” because it insists that real structures and mechanisms exist whether or not we observe them. We do not believe that the world exists because we are observing it. What would explain the existence of a world before the emergence of human brains capable of interpreting the remains of that past world? The evolutionary process that produced thinking heads would have to precede the thinking head. Things do not cease to exist because a man who knew them dies. Many never consider him, yet he existed.
Bhaskar rightly rejected the idea that causation is merely regular patterns (constant conjunctions) of events (the Humean habit of expectation); instead, outcomes depend on context and interacting, really-existing causes. Causation results from generative mechanisms, which operate in open systems, including in social life. Thus, Bhaskar extended his ideas beyond natural science to human society, where he argued that social structures (e.g., capitalism, bureaucracy) are real and pre-existing, yet reproduced or transformed by human action. This is known as the Transformational Model of Social Activity (TMSA). Critical realism thus offers a middle ground between structural determinism, where structures are said to determine everything, and voluntarism, in which individuals freely choose their actions. At first blush, I found this a very attractive position.
Roy Bhaskar
I first encountered Bhaskar’s work in the mid-1990s during my master’s program. A good friend of mine gave me a copy of A Realist Theory of Science, which graduate programs around the country were treating as a serious alternative to both positivism and postmodernism. As I was skeptical of both of these epistemological frames, I welcomed Bhaskar’s book. I joined a listserv (a mail-distribution tool, popular in the early days of the Internet) devoted to his work and began engaging with Bhaskar devotees around the world. I wanted to see how they used Bhaskar as a counterpoint to postmodernism and social constructionism. I was troubled by the practice, in each case in its own way, of elevating epistemology to an ontological position, which suggested to me something of a return to absolute idealism. At the same time, in the ecosystem of graduate school, I could not deny that these ideas exercised a pull on me, and while I was more impressed by the symbolic interactionists (George Herbert Mead, for example), the phenomenologists fascinated me.
Many of the sociology professors in my master’s program were steeped in the social constructionism of Peter Berger and Thomas Luckmann (especially their 1966 The Social Construction of Reality, which for them was something of a secular bible), which was rooted in the phenomenology of Edmund Husserl through the interpretation of Alfred Schutz (who synthesized Husserl with Max Weber’s interpretivist sociology). Although my professors did not explicitly identify as postmodernists, I increasingly grew to suspect that their thinking drifted in that direction—particularly in the tendency in their lectures to collapse ontology into epistemology. A materialist my entire intellectual life, I have never found that move convincing; postmodernism always struck me as an evasion rather than an advance.
Bhaskar’s appeal, at least initially, was clear. In his 1975 A Realist Theory of Science, he mounted a powerful argument for scientific realism that directly challenged both empiricism and idealism. His distinction between the real, the actual, and the empirical is compelling. The world, on this account, contains real structures and causal mechanisms that exist independently of our knowledge of them. Events may or may not occur depending on whether those mechanisms are activated, and our observations capture only a limited slice of that deeper reality. This was a direct rejection of the idea—central to postmodernism—that reality is constituted by discourse or knowledge claims. Bhaskar was explicit that ontology could not be reduced to epistemology. In other words, reality stood outside cultural and historical constraints, even if culture and history constrained human ability to comprehend that reality. In the end, reality pushed back, imposing its objective and mind-independent presence on those who observe it.
Where my doubts about Bhaskar’s work emerged was not in his philosophy of natural science, but in his treatment of social reality. In later works, especially his 1979 The Possibility of Naturalism, Bhaskar argued that social structures are real and causally efficacious, but also concept-dependent in a way that natural kinds are not. Society, in his view, exists only insofar as it is reproduced through human activity. He famously described social structures as both the conditions for and the outcomes of human practices. Formally, this position rejects postmodernism; Bhaskar repeatedly insisted that social structures are not reducible to beliefs or discourse. Capitalism constrains individuals regardless of whether they understand it; language governs speech even when speakers are unaware of its rules. In this sense, social structures possess real causal powers independent of individual consciousness.
And yet, there remained in his work for me a tension. Bhaskar’s insistence that social structures exist only through their reproduction in practice introduces a form of concept-dependence that sits uneasily with a robust materialism. While he does not collapse ontology into epistemology, he does tether social ontology more tightly to human conceptual activity than many materialists would accept, foremost Karl Marx (whom I will come to at the end of this essay). This is precisely where sociological social constructionists might find space to appropriate Bhaskar while blunting the realist edge of his argument. My concern was that Bhaskar’s account of social reality leaves itself open to misinterpretation—not because it is incoherent, but because it concedes too much to the idea that social existence is constitutively tied to meaning and practice rather than being fully grounded in material relations that persist regardless of belief. A harder realism would insist that once social structures are instantiated—class relations, economic systems, legal institutions—they exert causal force independently of how they are understood or narrated, and not merely insofar as they are conceptually reproduced.
In the Bhaskar listserv, I pursued a three-pronged strategy to suss all this out. First, I wanted to challenge what I saw as an uncritical admiration of Bhaskar that treated him as a kind of philosophical guru rather than as a thinker whose arguments required scrutiny. Second, I attempted to steel-man the strongest possible version of social constructionism by accepting its axioms and following them to their logical conclusions, then offering them up to scrutiny by the Bhaskar devotees. This involved deliberately collapsing ontology into epistemology—not because I believed this was correct, but because I wanted to see whether the position could sustain itself without contradiction. I did this while avoiding the problem of solipsism. Third, I treated the exercise as a test of my developing rhetorical skill: how persuasively could I advance a position I ultimately rejected among a group of scholars and students who should be able to rebut me?
What surprised me was how often interlocutors failed to recognize this as an immanent critique. Many assumed I was expressing a deeply held conviction rather than probing the internal logic of their assumptions about the concept-dependent piece of Bhaskar’s argument. To be charitable, this reaction is understandable. But the result was revealing. When pushed to its limits, their understanding of Bhaskar’s position often lacked the conceptual resources to respond coherently, precisely because the distinction between what exists and what is known had already been surrendered. I could find nobody who, advancing Bhaskar’s argument, could dismantle my arguments from that standpoint. Of course, I was a young sociologist who may have had a higher opinion of my project than it deserved. Maybe it sounded incoherent to others. But even when I explained what I was doing, I was made to feel more like a troll than a good-faith interlocutor. I didn’t intend it that way, but I understand why others took it that way. It could also be that the fully steel-manned social constructionist position sounds incoherent—and, in retrospect, it does.
All that said, Bhaskar remains an important figure in the development of my thought. He successfully rescued scientific realism from idealism and positivism. He never fully disentangled social ontology from conceptual dependence, and as my understanding of the world progressed, I came to understand that this is hard to do without denying human agency. Admittedly, this is probably a humanist concern apart from science; at the same time, human beings are capable of resisting and transforming the structures around them, and this can be the subject of scientific inquiry. All that notwithstanding, for someone committed to a thoroughgoing materialism, residual entanglement remained a problem. At the same time, perhaps this is where Bhaskar could be embraced positively by sociologists whose intellectual instincts were drifting toward postmodernism. Time would prove my interest in such a project moot; in the following decades, sociology would give itself over almost completely to postmodernism. At that point, Bhaskar would be lost on them.
I still find Bhaskar’s arguments a powerful way of understanding human reality. In his favor, Bhaskar distinguishes concept-dependence from conceptual awareness more sharply than my initial critique acknowledged (as I remember it). For Bhaskar, social structures are activity-dependent, not belief-dependent. They require practices, not understandings, to persist. Capitalism does not exist because people believe in it—but because they engage in commodity exchange, wage labor, etc., whether or not they conceptualize these activities correctly.
However, while Marx does not claim that social structures are material in the same way rocks or tables are material (he is not a crude physicalist), he explicitly rejects the idea that social structures are merely practice-dependent in the way Bhaskar suggests. For Marx, material relations are embedded in productive activity, enforced through coercion, law, property, and violence, inscribed not only in infrastructure and institutions, but in bodily necessity. As he stated in his 1859 preface to A Contribution to the Critique of Political Economy, “The mode of production of material life conditions the social, political, and intellectual life process in general.” Workers do not reproduce capitalism because they recognize capitalism, or even because they intend to reproduce it; they reproduce it because they must eat.
That’s a more robust materialism than Bhaskar’s. For Bhaskar, social structures are real, but they exist only through their reproduction in practice. For Marx, reproduction is forced, not enacted; social structures are real because material life is organized by them; and practice is constrained by pre-existing material relations. Marx does not deny agency; rather, he sharply limits it. In his 1852 The Eighteenth Brumaire of Louis Bonaparte, Marx writes, “Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.” Social structures are material relations with objective force. Treating them as non-material risks idealism in another name. Marx is hostile to philosophy that treats meaning as foundational rather than derivative, practice as constitutive rather than constrained, and relations as conceptual rather than material. Social structures are material relations—more real than our concepts about them.
I opened ChatGPT on my home computer and asked it several empirical questions about lethal violence. The statistics it returned surprised me a little in light of how politically correct OpenAI can be. But facts are facts.
I begin with school shootings. High-poverty, urban, majority-minority schools, which comprise about 20 percent of schools, have an approximate rate of lethal violence of 5–15 per 1,000 students. Low- and medium-poverty, suburban, predominantly white schools, which comprise around 50 percent of schools, have an approximate rate of 0.5–1 per 1,000 students. Rural schools with mixed demographics, roughly 30 percent of the total, have a rate of 0.5–2 per 1,000 students.
Implications: 1. Perception vs. reality: National attention emphasizes rare, high-profile attacks in wealthier communities. 2. Exposure: Most students affected by school shootings live in high-violence urban neighborhoods. 3. Policy relevance: Efforts focusing solely on guns in schools miss community-based violence associated with most incidents in disadvantaged areas. Question to ask: Why does the media ignore school shootings in impoverished inner-city neighborhoods? Is it because the people residing there are black and Hispanic?
Orlando Harris was killed in a shootout with police after opening fire in his St. Louis high school in 2022
Around 50–55 percent of lethal police shootings involve white males (roughly 93–95 percent of those shot by police are male). Question to ask: Why does the public believe most people shot by police are black? Is it because the media does not report that most people who are shot by the police are white? Why is that not an important fact to report?
Among ethnic and racial groups, blacks account for most homicides in America. Moreover, most homicide victims are black. For interracial homicide, whites are far more likely to be shot by blacks than the other way around. Question to ask: Why does the media not report these facts?
We’re told that gun violence is the number one killer of children. “Children” in these statistics refers to individuals ages 0–19. Are 18- and 19-year-olds children? If only children 14 years of age and younger are counted, then accidents (car crashes, drowning, falls) are the leading cause of death among children. For ages 15–19, firearms (homicide ~55–60 percent plus suicide ~35–40 percent) are the leading cause of death. Where do most gun deaths for those ages 15–19 occur? High-poverty, urban neighborhoods. Teens in these neighborhoods are 5–10 times more likely to be killed by guns than teens in low-poverty areas. Question to ask: Why doesn’t the media report this?
Governance and structural inequality are associated with all these patterns. Why would the media be reluctant to report this? Could it be that lethal violence occurs predominantly in cities governed by Democrats, and that the structural inequality lying at the root of these problems is maintained by corporate-friendly policies—offshoring production, mass immigration, and the management of redundant populations, disproportionately black, historically directed to socially disorganized inner-city neighborhoods?
Yes. And this, too: The media and Democrat politicians push an anti-gun rights narrative. Rather than inform the public about where and why gun violence is a problem, which means focusing on human agency and the social conditions that benefit the rich and powerful, they make it appear that the mere presence of guns explains high rates of lethal violence in America. This serves the interests of the corporate state, which seeks a disarmed population to advance the agenda of total control over the population.
The data are very clear on the question of guns and violence (see The Law and Order President and His Detractors—Who’s Right?; Lying With Statistics; Once More, for the People in the Back: It’s Not Guns; Guns and Control). In states with high rates of gun homicide and violence, those lawfully possessing guns, disproportionately whites, are underrepresented in gun violence. The solution to gun violence is not gun control. Indeed, if the presence of guns does not explain variability in homicide rates, and it doesn’t, then gun control measures are not merely unnecessary, but they make citizens less safe. And as I alluded to above, this is intentional.
Progressives (selectively, as readers will see) advocate identity-based frameworks that analyze historical actors—especially Europeans in the Americas—as collective agents whose actions are interpreted through categories such as ethnicity, race, and structural power. They use this frame to describe the history of America as an act of colonization.
This description is accurate. Colonization involves the movement of people from a foreign territory into a new land with the goal of settlement, resource use, and the transformation of local society. Colonization establishes institutions, farms, towns, and social structures to sustain the settlers and assert long-term control. The English colonization of North America in the seventeenth century illustrates this process well. English settlers not only claimed land but also created enduring communities and governance systems rooted in those long established in their countries of origin. What emerged from the process was a new cultural and social order. This is how America was possible.
One can lament colonization and the effects, negative and positive, it has on indigenous populations. However, once long established, those born into the new cultural and social order are native to it. For indigenous peoples, the moment to prevent the establishment of settler colonies and the displacement of populations already there is while it is occurring. In the case of the United States, the process is complete. The country is a multiracial, primarily Christian society organized as a secular republic, and those native to it have a right to cultural integrity and national identity on this basis.
Yet when contemporary migration produces dense cultural enclaves and visible cultural and social transformation in Western countries, for example, the Muslim enclaves that have formed across Europe and in places in North America, such as in Dearborn, Michigan, and Minneapolis, Minnesota, criticism of colonization is not merely set aside. Instead of interpreting these developments through the same group-level lenses used to describe the history of European settlement, the discourse shifts to a language of cultural relativism, individual rights, and religious liberty. The native population is not supposed to regard migrants in terms of ethnic identification and tribal affinity. If they do, they’re smeared as racists and xenophobes.
I am not arguing that the ethnic enclave in the West is analogous to towns established by colonizers; rather, I am saying that it is the thing itself. The West is being colonized. If progressives insist that demographic change, disruption to native cultural sensibilities and traditions, and institutional transformation count as colonization when practiced by historically powerful groups, then intellectual consistency demands a clear explanation for why the same dynamics we are experiencing today must be described in different terms.
There is no clear explanation. Thus, we are confronted by a double standard. For some reason, the native peoples of Europe and North America are deemed racist and xenophobic for seeking to assert cultural integrity and national identity, and on that basis to restrict migration; non-Western cultures are entitled to do the same without being smeared in the same way. Indeed, resistance to Western settler colonialism is lauded; meanwhile, not only is Western resistance to mass migration from the Middle East and North Africa (MENA) characterized as racist, but white European mass migration to MENA countries would be characterized the same way.
Prime Minister of Rhodesia Ian Smith, shown here at a 1965 press conference. His government gave way to black nationalist Robert Mugabe and the Patriotic Front in 1980.
To illustrate this double standard, consider a hypothetical scenario in which millions of white Europeans—fleeing climate challenges, cultural shifts in their home countries, or economic stagnation—begin migrating en masse to a Sub-Saharan African country like Kenya. Today, less than one percent of the Kenyan population is of white European descent. These hypothetical settlers, drawn from nations like France, Germany, and Poland, arrive with capital and a shared ethnic identity, establishing dense enclaves in urban centers such as Nairobi and Mombasa, as well as in rural areas rich in arable land.
The white European migrants build churches, schools teaching European languages and curricula, businesses prioritizing their networks, and even gated communities that replicate suburban European lifestyles. Over time, these groups advocate for policy changes: dual-language signage, relaxed land ownership laws to facilitate further settlement, and so on. Birth rates among the migrants outpace the local population, leading to demographic shifts where white Europeans comprise 20-30 percent of certain regions, influencing local elections and cultural norms.
Would the native Kenyan population—predominantly black Africans with diverse ethnic groups like the Kalenjin, Kikuyu, Luhya, and Maasai—react with alarm? One imagines so. Community leaders would likely organize protests against “cultural erosion,” citing the influx as a threat to control over resources, indigenous languages, and traditions. One would expect that they would demand stricter immigration controls, deportation of undocumented settlers, or even quotas on European-owned land to preserve national identity.
Such resistance would likely be framed by local voices as a defense against neo-colonialism, echoing historical grievances from British rule. Progressive commentators in the West and globally would applaud or at least sympathize with this stance, portraying resistance as righteous anti-imperialism. One can picture media outlets running headlines like “Kenyans Fight Back Against European Encroachment,” drawing parallels to anti-apartheid struggles or decolonization movements. Any European migrant complaints about “anti-white racism” would be dismissed as tone-deaf entitlement, rooted in historical privilege. Progressives would emphasize the collective rights of the indigenous population to maintain sovereignty, arguing that unchecked migration risks repeating the harms of past colonization.
Now, reverse the scenario: Suppose millions of black African Muslims from Sub-Saharan countries migrate to a European nation like France, forming enclaves in cities such as Marseille or Paris. If native French citizens—predominantly white Europeans—voice similar concerns about cultural integrity, demographic change, or institutional shifts (e.g., calls for halal options in schools or mosque construction by those coming from Muslim-majority countries like Senegal or Gambia), they would be swiftly labeled racist, Islamophobic, or xenophobic by the same progressive voices who condemn white European migration to Sub-Saharan Africa.
When one turns the dynamic the other way around, the discourse pivots to cultural relativism and individual rights: freedom of movement, the moral imperative of diversity, and religious liberty. Collective interpretations of the migration as “colonization” are rejected as bigoted fearmongering, even as the dynamics mirror historical European settlement patterns. This 180-degree reversal reveals the inconsistency: Non-Western natives are granted legitimacy in asserting group-based resistance, while Western natives are expected to dissolve their cultural boundaries in the name of progress.
Protesters in London, in 1968, demonstrated against Prime Minister Ian Smith’s resistance to black majority rule in an independent Rhodesia. Does anybody think these same protesters, if alive today, would protest the Islamization of that city? It’s hard to imagine.
In light of the double standard, two things must be kept in mind. First, the hypocrisy, wrapped in the selective CRT language of the “perpetrator-victim” dynamic, has propagandistic value: it disorders our conceptual vocabulary to avoid conclusions that are politically disadvantageous (for the transnationalist project) about how societies are reshaped by large, sustained population movements. Second, since the welcoming attitude of so many millions of Europeans towards migrants is hard to imagine as naturally occurring, socializing people to harm themselves required concentrated power and a concerted plan. Two obvious and necessary questions follow: Who possesses that power? And how did they prepare so many Westerners to welcome their own destruction?
Here’s how: In post-colonial scholarship, the term “colonial collaborators” refers to local individuals, groups, or elites within a colonized society who cooperated with the external power driving colonization. The colonizing force commandeers the sense-making apparatus—education, mass media, popular cultural production—and socializes the young and the cognitively vulnerable to embrace cultural relativism—the false attitude that all cultures are morally equal, except those of a uniquely evil civilization—and to think poorly of themselves for wanting to keep their culture and societies European. This is the work of progressivism. The American progressive and his social democratic comrades in Europe are the colonial collaborators. But they won’t tell you that. Instead, they will present themselves as your moral betters. That so many people agree with them tells us how far down the road we have travelled to post-Western civilization.
We are told to “be kind.” What lies behind this demand is the idea of “empathy.” The word empathy is only about a century old. Still, the moral expectation that surrounds it today has grown into one of the defining cultural norms of the early twenty-first century. Empathy is widely treated as an unquestioned virtue. It is a therapeutic discourse, pitched as a paradigm of interpersonal morality. It has become a central pillar of education. Yet philosophers, psychologists, and social critics have increasingly argued that empathy, as presently understood, often distorts moral judgment rather than clarifying it. Even if nobody else were forming this critique, I would; empathy paves the road to civilizational demise.
Before The Wealth of Nations (1776), there was The Theory of Moral Sentiments (1759)
To understand this debate, it is helpful to begin with Adam Smith’s classic eighteenth-century notion of “sympathy,” a concept frequently but inaccurately described as a forerunner of modern empathy. I begin the semester in my Freedom and Social Control course with a presentation of Adam Smith and his “moral sentiments” thesis, presented in his 1759 The Theory of Moral Sentiments. Smith uses sympathy to refer not to emotional fusion with another person’s feelings and self-understanding (the core of the empathy construct), but to a process of imaginative moral evaluation.
When we sympathize with someone, Smith argues, we attempt to view his situation not as he does (his “first-order” reality, as the anthropologists say) but as an “impartial spectator” would. We try to understand how the man’s circumstances would appear to a reasonable observer, not simply to assume or absorb his emotional state. For Smith, sympathy serves the function of judgment; it mediates between compassion for others and our responsibility to uphold moral standards, while remaining firmly grounded in the real world. One may have compassion for a mentally ill man who believes he is a wolf, for example, but a rational person does not attempt to empathize with his delusion. It is pitiable that he believes this, but he is not a wolf.
Modern empathy operates by a different logic. In contemporary psychology, empathy usually means some combination of two things: cognitive empathy, the capacity to understand another’s perspective, and affective empathy, the ability to feel what another person feels, or empathic emotional mirroring. It is this latter form—absorbing or mirroring others’ emotions—that has fueled the moral prestige of empathy in contemporary culture. This is a problem not only because the emotions of others may motivate destructive behavior, and are therefore unworthy of sympathy, but also because the observer’s own emotions may lead him to feel affinity with those harboring destructive emotions, or to misunderstand those emotions by projecting his own sentiments onto the other person or group.
One major criticism is that empathy tends to override or distort judgment. Affective empathy has a “spotlight” quality: it focuses moral attention on the person whose feelings we inhabit (or at least attempt to), often at the expense of broader principles or other people who may be affected. For example, our wolf-man may indeed be a sympathetic figure, but we do not excuse a murder in which he has killed and eaten part of his victim. In his 2016 Against Empathy: The Case for Rational Compassion, Yale psychologist Paul Bloom argues that empathy can lead to moral blind spots, selective compassion, and sentimental favoritism. When the priority is to feel what another person feels, the result may be to excuse harmful behavior, overlook responsibility, or fail to evaluate actions in a principled way. The impartial spectator gives way to an emotional partner whose perspective dominates our moral response.
A second concern is that empathy is easily exploited. Those acting destructively may invoke empathy as a shield against accountability: “If you understood my feelings, you wouldn’t judge me.” (One can easily see Gresham Sykes and David Matza identifying this as an item in an expanded list of their techniques of neutralization.) Emotional identification becomes a moral cudgel, turning judgment or boundary-setting into a supposed failure of kindness. If this sounds psychopathic, there is good reason for the impression. This dynamic—the weaponization of empathy—erodes the ability of individuals and communities to insist on standards of conduct. It can paralyze intervention when people harm themselves or others, because empathizing with the person’s subjective experience begins to feel like a moral imperative that prohibits firm action.
This is how a paranoid schizophrenic with a history of violent behavior is set free by an empathetic judge to prey on more victims: the judge has empathy for his mental illness, to the detriment of the public, which expects the justice system to protect them from those who may harm them. For example, on August 22, 2025, aboard a Lynx Blue Line light-rail train in Charlotte, North Carolina, Decarlos Brown Jr., a 34-year-old man diagnosed with schizophrenia and known to have a long history of arrests and violent behavior, stabbed to death Iryna Zarutska, a 23-year-old Ukrainian refugee. In January of that year, Judge Teresa Stokes, a magistrate in Mecklenburg County, had released Brown with no bail or bond.
Ask yourself why a queer activist who hates his culture camps out at Harvard to harass Jewish students and defend an Islamic death cult that would murder him if it had a chance. It is not because he used sympathy to understand what members of Hamas think and want; rather, it is because he, via empathy, has projected his own sentiments into the hearts of Islamists and comes to believe they think like him and want the same things. Because of his irrational worldview, he is only half right. Hamas does hate Western culture and wants to destroy it, just as he does. But Hamas also hates him and wants to kill him—indeed, to wipe queer people (along with Jews) from the face of the Earth. Empathy, compounded by his cultural self-loathing, causes him to subject himself to potential destruction.
A third criticism thus naturally follows: empathy disarms necessary self-defense. When kindness is interpreted as imagining oneself into the perspective of those who would do us harm—whether personal or collective—it can weaken the resolve to resist or intervene. The moral pressure to understand someone “from their point of view” can undermine the instinct to protect oneself or to uphold the norms that sustain civil life. Compassion urges us to help those who are struggling; sympathy helps us judge their motives, situation, and understandings; empathy asks us to surrender our own well-being or standards to accommodate another’s emotions—often self-destructive sentiments we project into the subject of our empathy.
We see this in the pleadings of a woman who defends the man who batters her; she loves him, and so she assumes that he loves her, and on that basis begs the arresting officers to let him go. She wishes the man were not violent, but she tacitly accepts that as part of the arrangement because she has substituted affective empathy for reason. Because she believes he thinks as she does, she believes that his violent proclivities are a problem to be resolved with kindness. But he is not like her. He does not love her. He wishes instead to control her, possess her, and use her existence as a means for his violent expression. There is no love to be found there, whatever he says—or she believes. If not her, then another woman will experience his rage.
A fourth problem is that empathy is neither neutral nor universal. Humans empathize selectively—often with those who resemble us, flatter us, or appear vulnerable. Empathy reinforces tribal boundaries rather than transcends them. Flattery seduces us into relationships that can be toxic and damaging. And appearances are easily manufactured. Compassion, by contrast, can be extended impartially to strangers and even to those we do not particularly like. But it does not mean we blind ourselves to who they are and what they want. Compassion has its limits, too. (Same with tolerance.)
Somalis are not leaving their home country for Minneapolis (many by way of Green Bay, Wisconsin) because they want the freedoms native Minnesotans enjoy (such as they are). They are coming to Minnesota to change the culture there by spreading Sharia. As with other Muslim groups, they come bearing an alien culture that is incompatible with American norms and values. Muslims do not see the world the way Americans do; rather, they see the world as divided between those who submit to Islam and those who don’t—the infidels. With rare exception (Ayaan Hirsi Ali, for instance), Somalis wish to remake America in the image of Somalia. They have selected Minneapolis because the city government there welcomes them—a government that also seeks to remake America in a different image.
When the Somalis and their progressive enablers in Green Bay and Minneapolis demand empathy for the plight of Somalis, they would have us assume that Somalis are just like the native-born American who loves his country and its freedom. But the progressive himself does not love his country. Nor does he love its freedom. He sees America as an illegitimate entity, one founded in white supremacy and other oppressions. His worldview is a woke world of “perpetrators” and “victims,” and the perpetrators look like him, so he loathes himself. Whether he sees it or not, in Islam, there is a logic of authoritarian control he wishes to wield himself. He identifies with the Muslims because he sees himself in them, and, moreover, through them, he can escape his white guilt, which his ideology has taught him to acutely feel. In effect, he is weaponizing empathy in a project of managed decline of America. We know this when we listen to the hatred progressives express for their country, its culture, and history; for, if the progressive loved his country, he would not wish to see it undermined by those with whom he shares no intrinsic interests.
These tensions explain the rise and the limits of the “be kind” ethos. The phrase appears generous and humane, but it is vague and selectively applied, to situations determined by those who command our sense-making institutions. In other words, “be kind” functions as a moral command to validate emotional experience uncritically, to conveniently avoid judging someone’s behavior, or to treat the boundaries progressives seek to transgress as forms of cruelty (seen, for example, in the selective transgression of guardrails protecting children from sexual exploitation). In this sense, kindness becomes conflated with targeted unconditional empathy. Whether or however targeted, eschewing sympathy and suppressing the “impartial spectator” does not result in moral clarity; instead, it creates a therapeutic culture in which selected emotional expression is morally sacrosanct, while judgment—once guided by Smith’s impartial spectator—is treated as suspect or oppressive. Empathy is not a relative of compassion or sympathy; it is their opposite.
Gender identity doctrine is the paradigm. As I explained in a previous essay about the high-profile debate between Andrew Gold and Helen Webberley, we do not treat anorexia empathetically; we treat it sympathetically. The condition is a mental illness to be treated because we have compassion for those afflicted with this species of body dysmorphia. Webberley herself admitted that anorexia is a terrible thing. At the same time, gender dysphoria is treated not as a mental illness but as a situation where a man with the gendered soul of a woman requires transitioning to his “true” or “authentic” self. Rather than tell the man that he is mentally ill, which the compassionate doctor would, we are told to “be kind” and affirm the man’s delusion. One must take his emotional standpoint as truth. Otherwise, if we seek to protect the man (or the boy), we become an “oppressor.” To escape the label, some support the madness, and they rationalize it as kindness.
Identifying the problem with empathy is not an argument for callousness, nor is it a rejection of understanding others’ experiences. All this can be had with compassion and sympathy. Rather, it is a recognition that empathy, especially in its affective form, does not necessarily lead to good moral outcomes. Indeed, it can lead to civilizational destruction and medical atrocities. Judgment and responsibility remain essential for any just and rational society. Empathy warps the essentials. Empathy derails freedom and reason.
Adam Smith understood this. Sympathy, for him, was never about surrendering to another’s feelings; it was about seeing the other’s situation clearly and holding it up to an objective moral lens. Through what lens shall we see objective morality? We can debate the matter, but systems that demand society submit to the will of Allah, or to institutional practices driven by the material interests of a medical industry whose practices harm children, are straightaway precluded from the discussion; such systems by design negate compassion, fairness, judgment, responsibility, and sympathy. They are dehumanizing and totalitarian ideologies. Proponents cover these moral failures by substituting for them the construct of empathy.
The modern elevation of empathy to a near-absolute moral value has obscured the distinction implicit in Smith’s work and explicit in the work of modern critics of the construct. For a recent treatment of the matter, see Jesse Prinz’s “Against Empathy” (Southern Journal of Philosophy, 2011), which argues that empathy is not necessary for moral judgment and can be counterproductive, as it is a vicarious emotion that often leads to biased or partial responses. Prinz’s critique aligns with modern arguments, such as those by Bloom, that offer compassion—caring about others’ welfare without emotional over-identification—as a more rational and proportionate guide for public policy and large-scale ethical decision-making. For a short version of Bloom’s argument, see his “Empathy and Its Discontents,” published in Trends in Cognitive Sciences (2017). There, he highlights that empathy is biased and narrow in scope, and that it can even motivate aggression, cruelty, and large-scale group violence. Most importantly, he shows how empathy can become tribalized: we empathize intensely with “our own,” which in turn fuels hostility and dehumanization toward outsiders.
Thus, while cognitive empathy can foster connection and insight, empathy—especially in its affective form—can also distort judgment, encourage manipulation, inhibit intervention, and weaken the capacity of communities and individuals to defend themselves from those who seek to harm them. A rational moral framework recognizes the importance of compassion and understanding—but also the necessity of accountability, boundaries, and the impartial spectator that Smith saw as indispensable to moral life.
I tell my students in my criminal justice courses this when I explain the operation of a rational justice system. In such a system, we do not judge the actions of the murder defendant from his emotional state and worldview; we do not judge him against himself. Nor do we project our own feelings onto him. We judge the defendant based on the model of the “rational actor.” We ask: What would a rational actor have done in this situation? If we can’t presume the rational actor, then self-work is required.
This error is how the men on a jury can acquit a man of murder because they experience the jealousy he experienced when they put themselves in his shoes. Smith’s concept of sympathy involves imaginatively placing oneself in another’s circumstances, but crucially, this must not involve a projection of one’s own raw emotions or biases—it must be filtered through the “impartial spectator” to ensure objective moral evaluation. Projecting personal sentiments, like a jury’s own jealousy, without this impartial check, distorts judgment, leading to improper approbation or the excusing of actions that lack propriety. This process emphasizes evaluating the fitness of passions to their objects, rather than merging emotionally in a way that compromises reason.
This is how a black man who stabs a white man is acquitted by a jury (a real case from a few days ago) because, in his astonishment at having been stabbed, the white man calls the black man a racial slur. Because of jurors’ empathy towards blacks as a class, the victim’s use of a slur negates the harm caused by the man who stabbed him. The jurors have lost their capacity to sympathize with the stabbing victim and instead empathetically identify with the perpetrator. They ask themselves how they would feel hearing the slur directed at them. They have utterly lost any sense of perspective and fairness.
To be sure, Smith describes sympathy as a projective imagination: “By the imagination we place ourselves in his situation,” he writes, “we enter as it were into his body, and become in some measure the same person with him.” However, he stresses that true moral approval requires the impartial spectator’s perspective, which corrects for partiality or self-deception. “We can never survey our own sentiments and motives,” he argues, “unless we remove ourselves” from the situation to view it objectively. We must, he insists, “endeavor to view them as at a certain distance from us.” The impartial spectator acts as a rational standard, judging whether passions like jealousy are appropriate or proportionate, not whether they match the observer’s personal feelings.
In cases of strong passions such as jealousy or revenge, Smith notes that sympathy is limited if the emotion is excessive or self-interested: “The furious behavior of an angry man is more likely to exasperate us against himself than against his enemies,” because the impartial spectator would not fully concur with unchecked rage. Smith explicitly discusses how jealousy can prevent proper sympathy: “A sentiment of envy commonly prevents us from heartily sympathizing.”
This is what causes a progressive to celebrate the assassination of a CEO of a healthcare corporation, in the same way that communist sympathizers find no horror in the Bolsheviks murdering the Tsar’s family in 1918. The French aristocracy had it coming in the Reign of Terror. Smith warns that passions like fear, jealousy, and resentment can drive tyrannical actions. (If readers are interested in Smith’s original work, which is in the public domain, see The Theory of Moral Sentiments.)
In my jury example, a group projecting their own jealousy would fail this impartiality, potentially acquitting based on shared bias rather than evaluating the act’s propriety (likewise with the racial slur case)—precisely what Smith seeks to avoid by insisting on the spectator’s detached view. The distinction between Smith’s sympathy (as impartial, evaluative imagination) and affective empathy (as distorting emotional projection) allows us to see how the demand for empathy derails justice rather than manifests it. While Smith doesn’t directly address modern legal concepts like actus reus (guilty act) and mens rea (guilty mind) that I tie to my lecture on the matter, his framework supports prioritizing rational judgment over personal emotional alignment in moral and, thus, by extension, judicial decisions.
In the final analysis, empathy is conditioning the population to suspend reason and instead respond emotionally and based on their own distorted understanding of the world. Empathy is regressive, making people morally childish so that they can be more easily led by the nose—and scolded for disobedience. The empathy project wants us to identify with the perspective of those who seek to harm us. The project asks us to “be kind,” requesting that we each advance the project if we are to be seen as properly moral actors. This is so that project leaders can achieve their aims without resistance. We can know this because there are consequences for those who are not kind. Those who weaponize empathy are therefore not really asking us to “be kind.” They’re telling us to tolerate ideas and behaviors that undermine our interests and safety—indeed, that diminish our nation and Western civilization. They’re conscripting us in a war against ourselves.
I’m sure I’ve left out many others. These are smears routinely hurled at conservatives. As a free-speech advocate, I have no objection to people using any of these words; I use plenty of them myself. My point in listing them is simple: there is no consequence for doing so. I cannot think of a single smear routinely directed at conservatives that has been euphemized into the childish “[capital letter]-word” construction.
I would, however, get in trouble for using words that do receive that treatment. Take one example: “retard.” This word has become somewhat safe to say now. Still, I can’t count how many times I’ve seen the sanctimonious “R-word” formulation, accompanied by ritual condemnation of anyone who refuses to adopt the approved progressive kindergarten locution.
Yet, as I’ve pointed out before, “idiot,” “imbecile,” and “moron” were once formal medical classifications in psychology and psychiatry during the late nineteenth and early twentieth centuries. They described levels of intellectual functioning and adaptive ability—i.e., degrees of what was then clinically termed mental retardation: an “idiot” had the most severe impairment—the lowest level of intellectual functioning; an “imbecile” had moderate impairment; a “moron” had mild impairment.
The Three Stooges
These terms were gradually abandoned between the 1950s and 1970s precisely because they had become common insults (thanks in part to the Three Stooges and Warner Brothers). They were linguistically and semantically bleached through repeated pejorative use—a process known as the euphemism treadmill (or the language cycle of harm). When a word is heard often enough as an insult, it eventually feels ordinary.
Sometimes a bad word is even reclaimed and repurposed. Calling someone “queer,” for instance, once felt visceral. Today, it’s an affirmative identity and the name of an entire academic discipline, complete with departments and degree programs in “Queer Studies.”
All of this illustrates a larger truth: which words we’re allowed to say—and who is allowed to say them—are windows into power. This isn’t only (or even primarily) about formal punishment through laws or institutional policy. Mostly, it operates through subtler, informal social controls. One lowers one’s voice when uttering a forbidden word because one fears what will happen if one doesn’t: being labeled, harassed, ostracized, or even subjected to violence.
Because progressivism is the dominant worldview in virtually all sense-making institutions—corporate HR, academia, entertainment, media, tech—no progressive will face formal or informal consequences for deploying any of the slurs I listed against conservatives (or, for that matter, against liberals). That fact tells us who actually holds power over acceptable speech. Conservatives—and even many liberals—appear to have almost none. In the spring of 2024, students at the institution where I teach drew up a petition to get me fired for, among other things, using a racial slur, even though I was not using the word in a derogatory manner. Yet they smeared me as a “racist” and “transphobe.” Nobody defended me against these smears. I don’t care that they didn’t; it proves my point.
The good news, as I argued in the essay I published on my platform Saturday, is that words only have the power we collectively grant them. If we refuse to be afraid of them and use them as we see fit, the speech police may eventually grow tired of trying to punish us. Perhaps their authoritarian and illiberal actions will delegitimize their speech codes and the practice of thought control. At the very least, over time, frequent use will once again semantically bleach the offending terms, stripping them of their sting—just as happened with “idiot,” “imbecile,” “moron,” and, in a different way, “queer.”
Nick Fuentes has now appeared on the Tucker Carlson, Steven Crowder, and Piers Morgan shows. The conservative world is fractured over whether conservatives should defend such appearances in the name of open dialogue. Kevin Roberts, president of the Heritage Foundation, has drawn fire for defending Carlson’s interview with Fuentes. For progressives, whether it’s appropriate to defend Carlson is not really the question. The real question is whether Fuentes and his ilk should ever be platformed at all.
The flashpoint for conservatives, as usual, is Israel. Fuentes identifies with the “America First” tradition and argues that Zionists wield excessive influence over US foreign policy. I disagree with Fuentes on a range of issues, including his views on Israel, but giving him a platform is in keeping with the free speech tradition that, among other things, makes America a model for the world to emulate. We know that progressives don’t believe in this proud tradition. But conservatives? I thought they had taken a liberal turn in the face of progressive authoritarianism. Defending Carlson should be reflexive. That it’s not is troubling.
Screenshot of Nick Fuentes speaking with Tucker Carlson
As a man in his sixties who has followed politics all his life, I remember a time when—even for highly controversial figures and ideas—the value of open dialogue was broadly recognized as a core feature of a free society. Freedom of speech and the airing of opposing views were not merely tolerated but actively encouraged, especially by those who saw themselves as defenders of democracy as a republican proposition (that is, anti-majoritarian). This was before the rise of what is now called “safetyism”—the idea that certain viewpoints are so dangerous they must not be publicly aired at all. Whatever the legitimacy of its stated goal of protecting people from harmful art, images, and works, safetyism immediately raises a deeper question: who decides which ideas are “too dangerous” to be heard—elites, or the public itself?
Whether American society is more open today than it was in previous generations is an easy question to answer. From roughly the 1950s through the 1970s, it was markedly more open. I recognize that there was censorship during this period and especially before it (e.g., the Hays Code—formally, the Motion Picture Production Code—the dominant movie censorship and content-regulation system in the United States from the early 1930s to the late 1960s). America cycles between periods in which disagreeable expression is more or less tolerated. From the late 1960s through the 1970s, expression in America was free-wheeling. It was not to last.
The establishment of the Parents Music Resource Center (PMRC) in 1985—founded by a group of politically connected women (including Tipper Gore, Susan Baker, Pam Howar, and Sally Nevius) to increase parental awareness of explicit content in popular music—was hardly the first manufactured panic over supposedly harmful expression. But neither was it a one-off; it was the harbinger of a re-emerging censorship regime: the modern speech-code movement. These codes accompanied the growing hegemony of antiracist ideology, feminist theory, and multiculturalism. By 1990, well over a hundred U.S. colleges had formal speech codes regulating “offensive” or “demeaning” speech, and such codes spread across American institutions, public and private.
Social media platforms were established within an already embedded culture of safetyism. At first, however, they emphasized free expression as their primary value proposition. Then, around 2013–2015, the idea of “trust and safety” emerged, and within only a few years, it came to govern content moderation along ideological and political lines. The result was stifling. While there has been a partial return to openness in recent years—especially since Elon Musk purchased Twitter, rebranded it as X, and allowed formerly deplatformed figures to reenter the social media space, prompting other platforms to follow suit—the backlash over Fuentes’s high-profile appearances demonstrates how incomplete that recovery remains. The habits of deplatforming and no-platforming still prevail in public institutions and popular sensibilities.
George Lincoln Rockwell (center), Head of the American Nazi Party, at Black Muslim Meeting, Washington, DC, 1960 (Photo credit: Eve Arnold)
The case of George Lincoln Rockwell, founder of the American Nazi Party, illustrates this shift with striking clarity. Rockwell was an avowed neo-Nazi who openly celebrated Adolf Hitler and promoted explicit racial hostility. Yet in the 1950s and 1960s, he appeared on national television, spoke on university campuses, and debated prominent public intellectuals. To younger observers, socialized in the context of today’s platforming norms, Rockwell’s media presence would feel shocking; many contemporary figures who are far less extreme face far greater institutional barriers. Whatever one thinks of Fuentes, he is no George Lincoln Rockwell. His adoration of Hitler is a fascination with pomp and circumstance and a “great man” sense of history (he admires Josef Stalin for similar reasons). Rockwell was hardcore. He was the real deal.
In Rockwell’s era, mainstream hosts such as David Susskind were willing—even eager—to confront extremist voices in public. Susskind invited Rockwell onto his show despite openly despising him, arguing that the public had a right to see such figures in the clear light of day. To be sure, Carlson and Morgan were willing to confront Fuentes publicly (even if Carlson chose to do so only after interviewing him). But Rockwell’s television appearances were not met with the pearl-clutching that surrounds Fuentes’s far milder exposure. Indeed, Rockwell’s campus appearance at Brown University was defended not only by civil libertarians but by administrators who regarded free expression as a core academic principle. Given the panic over Charlie Kirk’s presence on university campuses, it’s obvious that tolerance for disagreeable ideas has sharply eroded among a great many college students. Students protested Rockwell, too; but in the face of those protests, the prevailing assumption was not that students must be shielded from offensive or dangerous ideas, but that a democratic society proves its strength by allowing such speech to occur.
Back when college students appreciated platforming controversial figures—even Nazis. https://t.co/Yk3mo2o7yJ
It wasn’t only Susskind who took on controversial figures on broadcast television. The broader media environment of the mid-twentieth century reflected a tradition of openness and the free exchange of ideas. Mike Wallace’s pioneering interview program in the late 1950s routinely hosted radical figures. Wallace’s aggressive questioning and willingness to push the boundaries of public debate reflected a deeply held assumption of the era: that open exposure, not suppression, was the best antidote to hateful or dangerous ideas. William F. Buckley, the host of Firing Line, was also willing to bring on controversial figures. Broadcasters and university leaders trusted the public to be resilient. They believed that extremism lost its power when stripped of mystery and confronted in public. In this view, the danger lay not in allowing extremists to speak but in hiding them—and thereby granting them a forbidden allure.
Today’s norms are fundamentally different; the question remains whether these norms are organic or manufactured—and whether that matters. While legal protections for speech remain largely intact, our dominant institutions have become profoundly risk-averse. To put the matter bluntly, our sense-making institutions have taken on an authoritarian and elitist character, treating the public less as citizens to be informed than as a population to be managed. Platforming is no longer viewed as a means of exposing and defeating bad ideas, but as an implicit endorsement—or at least that is the perception that has been cultivated. Digital platforms, media firms, and universities increasingly operate on the premise that exposure itself constitutes harm. In a fragmented online environment, where clips circulate without context and algorithms amplify outrage, institutions suggest that exposure to ideas is too dangerous and that suppression is safer—and they claim the authority to decide which ideas and personalities are worthy of suppression.
As a result, individuals whose views fall far short of Rockwell’s—whether merely heterodox or sharply conservative (labels that presume progressivism as orthodoxy)—often face barriers to public participation that did not exist half a century ago. The contrast between Rockwell and contemporary figures like Fuentes makes this unmistakable. In today’s climate, Fuentes is systematically excluded from mainstream media, deplatformed by major tech companies (X just reinstated his account, which was banned in the summer of 2021), and treated as beyond the pale even in spaces historically devoted to dissent. Whether one approves of his views is beside the point. The institutional response reveals how tightly the boundaries of acceptable discourse have narrowed; Fuentes, alongside other figures such as Alex Jones, remains banned on YouTube.
Some will argue that this comparison does not establish a simple story of decline but merely reflects changing philosophies of openness. The argument goes like this: mid-century America tolerated a broader range of public speech because it believed democratic resilience required exposure rather than insulation; contemporary society prefers a protective model, one that prioritizes harm reduction and the prevention of inadvertent legitimation. This is true as a description of prevailing hegemony. But openness in a democracy is not a principle that can be endlessly redefined. A society is either open or it is not. The earlier era feared authoritarian suppression and trusted sunlight as the best disinfectant. That is the definition of an open society. The present era fears misinformation, radicalization, and destabilization—and therefore seeks to manage public discourse by restricting it. By definition, that is a closed society. Its explicit aim is to prevent certain ideas from ever being openly discussed. Presuming the majority wants it that way does not change the fact.
When modern defenders of censorship confront the challenge that America was freer in the past, they dissolve the question into competing definitions of freedom. However, if freedom means a broad arena of public debate in which even the most odious views may be confronted openly, then mid-twentieth-century America was, in this crucial respect, a freer society. If freedom is redefined as a social environment carefully shielded from destabilizing or hateful ideas, then no-platforming—whether through rules or shaming—becomes a method of protection. But this latter conception does not match the historical meaning of freedom in Western societies. No-platforming does not protect freedom from within—it replaces freedom with managed thought.
Put another way, the censorship regime echoes George Orwell’s starkest warnings: it risks creating a society in which the appearance of safety and knowledge substitutes for genuine understanding. Recall the slogans in Nineteen Eighty-Four: “Freedom is Slavery.” “Ignorance is Strength.” Invoking Orwell’s slogans may sound dramatic; Airstrip One is not our reality. But the substance of his argument prevails: without access to information and the right to be an informed participant, we remain essentially ignorant, subject to the control of those who dictate speech and shape thought. To be kept in the dark by those who want to keep us there is an essential element in servitude. And while we can still hear Fuentes if we want to, some wish we couldn’t, and it’s worth worrying about them before they prevail. Because if we don’t, then they will.
Since I have a formal area of expertise in political economy as part of my PhD credentials, I can help Congresswoman Alexandria Ocasio-Cortez with something I recently learned she said in a 2018 interview with Margaret Hoover on Firing Line. Ocasio-Cortez stated that, when the United States was founded, it did not operate on a capitalist economy. “Capitalism has not always existed in the world,” she said. “When this country started, we were not a capitalist—we did not operate on a capitalist economy.” The first part is true enough. The second part is entirely false.
Alexandria Ocasio-Cortez appears on PBS’s Firing Line
Putting the matter charitably, Ocasio-Cortez profoundly misunderstands both economic history and the conceptual frameworks used by scholars of capitalism’s long development. While popular historical memory and political discourse often treat capitalism as synonymous with industrial capitalism, the major traditions of classical political economy—as well as Marxist, neo-Marxist, and world-systems scholarship—define capitalism as a mode of production and exchange that long predates America’s founding. Agrarian capitalism and merchant capitalism are not stages preceding capitalism but expressions of capitalism in its developmental phases. By the late eighteenth century, when the United States emerged, the transatlantic world was already deeply embedded in capitalist relations, both legally and structurally.
World-systems theory offers one of the clearest accounts of this long historical view (the longue durée, as the Annales School calls it). Immanuel Wallerstein argued that capitalism consolidated as a world-system during the “long sixteenth century,” roughly 1450 to 1640. This period saw the emergence of a Europe-centered network of production and trade characterized by core–periphery relations and market-oriented forms of labor control and exploitation.
For Wallerstein and others in this tradition, capitalism is not merely a national economic system but a world structure of accumulation and exchange. Within that framework, mercantilism, colonial resource extraction, coerced plantation labor, and early wage labor are all unmistakably capitalist. The system was already centuries old by 1776, the year Adam Smith described its logic in the Wealth of Nations and the Continental Congress passed the Declaration of Independence. (For a primer on these matters, see Ronald Chilcote’s 1984 Theories of Development and Underdevelopment. Although fewer than 180 pages, it manages to present a comprehensive review of capitalist history and the various theories developed to understand it.)
Legal historians have similarly emphasized capitalism’s deep medieval roots. In his 1977 Law and the Rise of Capitalism, Michael Tigar argues that the legal architecture of capitalism—the commodification of labor obligations, contract doctrine, merchant law, and private property norms—developed from the early Middle Ages onward, a period of eight centuries. As feudal society matured, the growing prominence of towns, trade guilds, merchant capital, and money rents created institutional and juridical conditions conducive to capital accumulation. By the fifteenth century, capitalism was not merely an emergent possibility but an increasingly visible reality woven into European social and economic life.
Marxist historiography reinforces this view. Karl Marx found capitalism developing in the womb of feudal society, emerging through long-term shifts in property relations, labor control, and market dependence. Far from appearing suddenly in the eighteenth or nineteenth centuries, capitalism unfolded immanently from contradictions within feudalism itself. The steady commutation of feudal labor dues into money rents, the rise of agrarian capitalism in England, the expansion of commodity markets—all signaled the internal maturation of capitalist relations long before industrialization. Thus, for Marx and the generations of scholars who followed him, capitalism is a process, not an event or a structure—and the process in all its foundational elements was in place well before America’s founding. Indeed, it was a central cause of America’s founding.
Before continuing, I want to head off at the pass any resort to the social constructionist dodge, in which things come into existence only when power names them. It is true that, before the mid-nineteenth century, people described the system with different terms. However, even before the modern name for the system came about, those who described the system recognized it for what it was.
Enlightenment writers speak of the “commercial society,” emphasizing markets, trade, and profit-seeking. Smith famously describes Britain in these terms. Others refer to the “mercantile system,” highlighting the state-directed, trade-oriented economic order of the transatlantic world. In political debates, in Britain and the early United States, observers write about the “moneyed interest” or the “monetary system,” calling attention to the power of financiers. Early nineteenth-century commentators describe a “system of wage labor” or “free labor,” capturing the dependence of the system on labor markets. Although historians now use the term “agrarian capitalism,” we read in the texts of past historians about “enclosures,” “improvement,” “money rents,” and “tenant farming”—all features of rural capitalist transformation. In short, before the label capitalism existed, the same underlying system was identified through the language of commerce, industry, money, trade, and wage labor.
As I have explained in previous essays, cultural and ideological transformations played an important role. Max Weber’s famous analysis of the Protestant Reformation—beginning in 1517—illustrates how religious innovation helped rationalize economic behavior and weaken medieval constraints on accumulation. While Weber’s thesis does not claim that the Reformation created capitalism (he is often misunderstood in this respect), it shows how Calvinist ethics intensified tendencies already underway, giving moral and psychological sanction—the “spirit of capitalism”—to the disciplined, rationalized pursuit of economic gain. In this respect, the Reformation acted as an accelerator of capitalist development rather than its point of origin. Sharp readers will note that, to theorize that Calvinism acted in such a way, one must presume the system already existed. Indeed, in many ways (and Marx would insist this was the case), the emerging capitalist system created the conditions that prefigured the Protestant Reformation.
The Founders of the United States understood that they operated within a capitalist system, and the nation’s founding documents reflect this awareness. The Constitution embeds core principles of capitalist political economy: strong protections for commercial relations, enforceable contracts, private property, and mechanisms to secure credit, money, and interstate markets. The Contract Clause, the Takings Clause, and the Commerce Clause all function as legal infrastructure for a society organized around private ownership and capital accumulation. The Constitution was a conscious and deliberate effort to legitimize and stabilize market relations. By establishing a national framework that ensured rational legal rules for investment, facilitated commercial expansion, and protected property rights, the Founders created a constitutional order designed not to facilitate the future rise of capitalism—or some other system Ocasio-Cortez imagines capitalism corrupted—but to perpetuate and strengthen a capitalist system they already took for granted.
I agree with critics of the congresswoman that she is not up to the task of representing her constituents (I will avoid commenting here about why they would continually re-elect her). Still, we cannot excuse Ocasio-Cortez’s ignorance of the history of capitalism. She and her supporters tout her 2011 bachelor’s degree from Boston University, where she double majored in Economics and International Relations. Her bona fides are invoked to claim that she is specially qualified to speak on such matters. Yet she doesn’t know that the scholarly traditions she would have encountered in her coursework converge on a clear conclusion: by the time the United States was founded, capitalism was not only present but entrenched in the Atlantic world.
We cannot blame this on her instructors. She would have learned in lectures that colonial commodity production, merchant finance, the plantation complex, the transatlantic slave trade, land speculation, and the increasingly global division of labor were all expressions of a capitalist world-system already in full operation. To describe early America as something other than capitalist is incompatible with the conceptual frameworks of classical and neoclassical economics, world-systems theory (which the congresswoman would have learned about in international relations), Marxist and neo-Marxist theory (one should expect a socialist to know at least that), and standard histories of law and property. I will give the teachers at Boston University the benefit of the doubt; they would have taught students that the United States’ founding institutions, commercial practices, and social relations were already embedded in a centuries-old capitalist order.
Yet, this is not a simple matter of not paying attention in class or natural variation in the range of native intelligence. There’s an ideological and political problem for Ocasio-Cortez and her Democratic colleagues in admitting that the American Republic was founded to guarantee capitalist relations for its citizens. Democratic socialism is difficult to reconcile with a legal system designed with such assurances. The institutional architecture of the United States was built to safeguard private property and support market exchange, and thus limit the state’s ability to interfere extensively in economic life. While democratic socialism seeks to expand public ownership, socialize key sectors, and subordinate market outcomes to collective decision-making, the American framework assumes the primacy of private capital and uses constitutional constraints to preserve it. For any democratic socialist policies to work, they must operate within—rather than replace—a constitutional order fundamentally designed to reproduce the capitalist mode of production. I no longer believe that is possible beyond a very limited extent.
Being a charitable person, I checked to see whether Ocasio-Cortez ever corrected her comment. I could find no evidence that she ever did. This is troubling enough (she is quite incurious). More troubling is the possibility that she suspects she is wrong but doesn’t care to know whether she is because the fiction that capitalism represents a corruption of the American Republic rather than its foundation is ideologically useful. By remaining ignorant, the congresswoman can advocate for democratic socialism while at the same time feigning support for the Constitution and the Republic’s founding ideals that her politics contradict. To be sure, she doesn’t believe in the American System; she is, after all, a progressive, a functionary of corporate statism, which is an expression of late capitalism brought about by the hegemony of transnational corporate power, which is as much a danger to capitalism as the socialist policies Ocasio-Cortez and her ilk pretend to understand. We must therefore conclude that her alliance with corporate state power is about replacing democratic republicanism with an administrative apparatus she presumes will see in her a useful bureaucrat. It already has, hasn’t it?
“The world’s fundamental political unit is and will remain the nation-state. It is natural and just that all nations put their interests first and guard their sovereignty. The world works best when nations prioritize their interests. The United States will put our own interests first and, in our relations with other nations, encourage them to prioritize their own interests as well. We stand for the sovereign rights of nations, against the sovereignty-sapping incursions of the most intrusive transnational organizations, and for reforming those institutions so that they assist rather than hinder individual sovereignty and further American interests.” –National Security Strategy of the United States of America, November 2025
The Trump administration’s National Security Strategy (NSS) articulates a fundamental shift away from the post–Cold War bipartisan consensus that assumed increasing economic interdependence and the diffusion of political authority beyond nation-states—in a word, globalization—would generate peace and prosperity. The Trump strategy rejects those premises. Instead, it grounds US national security in four core commitments: (1) protecting the American homeland; (2) promoting American prosperity; (3) projecting peace through strength; (4) advancing US influence in a world of sovereign nations.
Where previous administrations—Democrat and Republican—have treated globalization and transnational governance as inevitable, the Trump NSS exposes this rhetoric of “inevitability” as a choice that carries costs. It argues that globalist frameworks have empowered non-accountable actors (corporations, ideological networks, and international institutions) at the expense of citizens, workers, and national sovereignty. Rather than viewing geopolitical competition as outdated, the NSS asserts that great-power rivalry has returned, and that the United States must defend its cultural and economic vitality against those powers (China, Russia) and diffuse transnational systems that undermine democratic self-determination.
Readers need to be able to deconstruct the rhetoric of inevitability coming from corporate state propagandists. Framing globalization and transnational governance as “inevitable” performs a specific kind of ideological work: it recasts contested political projects as objective historical destiny. By transforming deliberate strategies into impersonal processes, such rhetoric depoliticizes choices that would otherwise demand public justification, while recoding dissent as resistance to reality rather than disagreement over ambitions and aims. It obscures elite machinations by portraying the people as backwards and naïve. Inevitability language thus functions as a legitimating device, shielding ambitious institutional agendas from democratic contestation by presenting them as the unavoidable tide of history rather than as risks knowingly assumed. It is a profoundly anti-democratic rhetoric.
Embedded within Trump’s broader argument is a pointed assessment of Europe’s situation—an analysis that is predictably receiving widespread criticism in the corporate state media but that represents one of the most philosophically coherent portions of the memo. If readers don’t have time to read the entire memo, they will profit from reading section “C. Promoting European Greatness,” which begins on page 25 of the memorandum. The United States is early down the path Europe has been on for quite a while. I want to focus on that section next. I conclude this essay with a note about the political character of nationalism.
The NSS portrays Europe as confronting a convergence of pressures: (1) demographic decline combined with unmanaged migration flows; (2) cultural and civilizational uncertainty, where traditional identities are increasingly viewed as illegitimate; (3) transnational governance structures (e.g., EU bureaucracy) that insulate major decisions from the democratic will of individual national electorates; (4) economic stagnation outside of small competitive zones, producing frustration among working classes; (5) political fragmentation, with populist parties rising in response to elite indifference.
The memo describes these factors as contributing to what it terms “civilization erasure”—the weakening of Europe’s inherited civilizational foundations: accountable democratic institutions, cultural continuity, national self-determination, rule of law, and stable borders. The NSS argues that the strength of the West historically comes not from supranational structures, but from sovereign nation-states cooperating freely. Europe’s classical nations—Britain, France, Germany, Hungary, Italy, Poland—are the engines of advanced political systems, economic and technological innovation, and humanism and science. When those nations lose sovereignty, Europe loses its vigor—and, I will add, elites and barbarians alike become emboldened. Thus, the report frames the European Union’s push toward deeper integration as strategically risky: it consolidates authority in distant institutions while weakening the democratic legitimacy of national governments. This creates a vacuum in which public discontent grows, fueling social fragmentation.
The NSS explicitly asserts that cultural integrity is a component of national strength. When a society loses its confidence—when its historical narratives are delegitimized or portrayed as inherently oppressive—it becomes less coherent and less capable of defending its interests. Europe, in this framing, is experiencing a loss of faith in its civilizational inheritance, a diminished capacity to regulate migration according to national priorities, a cosmopolitan and elite culture suspicious of national identity itself, and an erosion of social trust necessary for political stability. By calling this “civilization erasure,” the memo is arguing that Europe risks dissolving the very preconditions of democratic sovereignty. As I have long argued, the preservation of Western institutions and the rule of law requires a common culture rooted in Enlightenment principles, values that emerge from European Christian civilization.
The NSS argues that a strong, self-confident Europe is essential for US security. A fragmented or demoralized Europe becomes a weak partner. The document thus encourages reaffirmation of national sovereignty, respect for distinct European cultures and histories, a shift toward secure borders and democratic accountability, and partnerships with governments willing to defend these principles. This section of the memo is a call for renewal, urging Europe to reclaim the civilizational confidence that once made it a central pillar of the West. I could not agree more with the Trump Administration’s assessment here, and those who regularly read my work know that I have made this argument on this platform for years. Indeed, this is the theme of Freedom and Reason: A Path Through Late Capitalism. Nationalism is the path. Otherwise, globalism will replace capitalism with a transnational corporate state order, which will not carry over the Enlightenment freedoms—conscience, expression, and individualism. This is what I have identified as the New Fascism.
Whenever I tell a leftist that I am a nationalist, they assume that I am a right-winger. This is why I frequently remind readers of a crucial point: nationalism has been mislabeled as exclusively right-wing for propaganda purposes; historically, nationalism has been left-wing, including anti-colonial sovereignty movements, socialist nationalism (not nationalist socialism, i.e., fascism, which is a form of corporate statism with globalist ambitions), and working-class movements that fought for sovereignty against empires and oligarchies.
A left-leaning defense of nationalism can be articulated in several key ways. Progressives often champion democratic participation and fair economic systems, typically couching their rhetoric in “worker rights.” But these desires are impossible to secure when decision-making migrates upward into opaque transnational bodies that lack direct democratic accountability. That progressives support administrative-technocratic rule and transnational governance structures betrays their left-wing rhetoric. When multinational/transnational corporations can relocate production globally, open borders to mass migration, bypass environmental and labor rules, and pressure governments through capital mobility, workers lose leverage. When corporate power can command the rules of speech, democracy fades. National sovereignty—including borders, democratic control of economic rules, policymaking, and industrial policy—is essential for protecting working-class interests.
A key insight shared by both populist left and populist right is that globalization has redistributed power upward, not outward. Corporations have benefited from offshoring production and open borders, which have allowed them to take advantage of cheap foreign labor and drive down the wages of workers in the West. Financial institutions have benefited from deregulated capital flows, amassing vast concentrations of wealth in ever fewer hands. Managerial elites have benefited from cosmopolitan mobility. Working people in all nations—American, British, French, German, Italian, Swedish—bear the costs. The NSS’s critique aligns with this understanding: transnationalism has facilitated elite integration while undermining local democratic communities.
So, on the occasion of Trump’s NSS memo, I return to the argument I have been making for years: the traditional left/right divide is no longer the primary axis of politics. The new divide is, on one side, populism-nationalism, emphasizing borders, cultural continuity or integrity, democratic control, and economic fairness for citizens; the other side, progressivism-globalism, which emphasizes borderless labor markets, transnational governance, and the moral obsolescence of nation-states, portends the demise of democracy and human freedom. Understanding the real bifurcation points is how one explains the rise of populist and nationalist movements across Europe and the United States. These movements include both left-wing (in the liberal sense) and right-wing (conservative/traditionalists) variants, but they share a common concern: the disempowerment of ordinary citizens by global systems they did not vote for and cannot influence. The struggle is not between capitalists and the working class but between corporate elites and the masses.
We need to return to and reinvigorate populist nationalism as a bulwark against the New Fascism. A healthy nationalism—civic, democratic, pluralistic—anchors political legitimacy in the people, limits the power of corporate and transnational bureaucracies, fosters solidarity and mutual obligation, protects local cultures and traditions, and ensures borders reflect the interests of citizens, not economic elites. This is the form of nationalism compatible with left-wing commitments to democratic participation, labor, and the general welfare. One may wonder why self-described leftists (socialists, etc.) are not populists, but the fact that they oppose populism tells us that they are not really on the left, but instead on the side of corporate state power. How that happened is a topic I have addressed in several essays on this platform. But the fact that it happened is incontrovertible.
The Trump administration’s National Security Strategy memo should therefore be seen not as a backward-looking or xenophobic document, but as a coherent defense of sovereign democratic nations in an era of transnational dominance. Its analysis of Europe—particularly the argument concerning cultural confidence and the concept of civilization erasure—reflects growing popular concerns across the Western world. The grand strategic debate of the twenty-first century is no longer left versus right. It’s populism versus progressivism, democratic nationalism versus globalist managerialism. Europe and the United States are witnessing parallel movements because ordinary citizens sense that the social contract has been renegotiated without their consent.
A left-leaning, pro-worker, civic nationalism provides a compelling counter-vision: one rooted in the belief that nations remain the only institutions strong enough to defend democratic participation, protect workers, regulate markets, and preserve cultural continuity against the homogenizing forces of global capital. In that sense, the NSS’s argument about Europe is not merely an assessment of foreign policy—it is part of a larger global realignment, one in which cultural integrity, democratic republicanism, and national sovereignty are the fronts in a struggle to save Western civilization.
In monotheism, the devil possesses only the power that God permits. Analogously, words—criticism, insults, and labels—possess only the power we grant them. (See Sacred Words—Presumed and Actual Power.)
This raises a fundamental question: who, then, acts as the god in the governance of language? Will it be the people themselves—the authors of history—or an authority that assumes the role of commissar, disciplining and punishing individuals for their words? And who, precisely, would that commissar be? Is there anyone you would trust with such power? Put more plainly: who do you wish to be your master—yourself, or someone else?
When we internalize negative words, they may wound us; when we acknowledge them without allowing them to define us, they lose their force. The power of language is not intrinsic but relational—it arises from the interaction between speaker and listener. If we wish to nullify hurtful words, we need only refuse to grant them authority over us. A mature person has developed the capacity for tolerance. In a free and open society, this choice belongs to the individual. In a totalitarian society, it is made by someone else.
With power behind them, those with authoritarian ambitions teach people to fear words by making them psychologically fragile—by persuading them that they are incapable of deciding for themselves how language should affect them. Children are no longer taught “sticks and stones may break my bones…”; instead, they are taught to experience speech itself as a form of violence—but only the words that those who are teaching them identify as such. To a significant degree, those in charge of children are government agents. Teachers, for example. Many teachers, working from curricula and pedagogical practices designed to this end, make children fragile.
This has been true for a long time. More recently, the public has learned that physical violence is a just means of suppressing speech that offends certain people. We see this, for example, in the case of a white woman who calls a black man a derogatory name at a traffic dispute. The black man renders her unconscious with a fist—a homicidal act praised by progressives on social media. Those who glorify the man’s actions say she deserves the violence he visits on her. We see the same thing with transactivists, who target with harassment, intimidation, and violence those who refuse to affirm their delusions (namely, lesbians and proponents of women’s rights). When they aren’t using violence, the transactivists are demanding institutions punish those who refuse to misgender them.
The selection of words to be restricted, or those that justify violence, is determined by particular groups. The rules are windows to power.
I’ve been called an Islamophobe, a racist, and a transphobe. Yet I choose not to be affected by those words (although I am harmed by those who attempt to discipline or punish me for my opinions). But even if I were offended by these words, nobody would think that those who smear me with them are justifiably disciplined, punished, or subject to violence. Such selectivity tells us who controls the narrative. They are not those who believe words are just that: words. They are those who treat some words as worthy of punishment, even violence.
When an authority laces words with coercive power, the result is not freedom and openness but an illiberal, authoritarian condition that suppresses both. The policing of speech, therefore, exposes a problem that liberals—those committed to a free, open, and tolerant society—must confront. If they don’t, they will find themselves living under conditions of unfreedom.
Years ago, I said that our defense of free speech must become obnoxious. A colleague who understood me pinned it to her office door. I stand by what I said. Be obnoxious. Disobey the thought police. Don’t stand for government and public institutions telling you what to say; they don’t believe in liberty. They are the enemies of freedom. We must be in command of words if we are to freely express our conscience, which is our right. Without that, we are not individuals.
But no one has the right to freedom from offense. It’s an individual’s choice to be offended. No one should expect that the government will police words that some individuals find offensive. Their stunted capacity to tolerate the expressions and opinions of others, whatever the cause of their immaturity, is neither the fault nor the responsibility of those who use words. Offense-taking is on the offense-taker.