What Explains Trump Derangement Syndrome? Ignorance of Background Assumptions in Worldview

“Insanity in individuals is something rare—but in groups, parties, nations, and epochs, it is the rule.” —Friedrich Nietzsche, Beyond Good and Evil (1886)

Nietzsche used madness as a metaphor for the irrationality of collective movements, herd behavior, or mass delusion. Today’s woke progressivism around culture, gender, and race is the paradigm (see Explaining the Rise in Mental Illness in the West). Its derangements command far too much power. These derangements find their expression in Trump Derangement Syndrome (TDS).

A bumper sticker

TDS describes strong emotional reactions to two-time US President Donald Trump. Characteristic of this disorder are irrational thoughts and extreme behavior, specifically overreaction to Trump’s actions, statements, or policies, while dismissing facts and eschewing logical reasoning. The condition is marked not only by a pathological obsession with Trump but also by a predictable cluster of attitudes. Those with TDS are likely to trust mainstream public health messaging, for example, during the COVID-19 pandemic (lockdowns, masking, vaccine mandates), support trans rights, advocate for immigrant protections, or endorse ideas associated with critical race theory. (See The Future of a Delusion: Mass Formation Psychosis and the Fetish of Corporate Statism.)

This essay argues that TDS can be best understood as a clash of background assumptions that shape worldviews. People do not process political events or leaders in a vacuum; their perceptions are filtered through deeply held beliefs and values about culture, gender, government, media, morality, race, and society. Ignorant of these underlying frameworks, observers operating from within a particular worldview react emotionally, often hysterically, to Trump’s presence, pronouncements, and policies, because they lack the deeper understanding necessary to form a rational argument. Instead of logical argumentation, reflex leads to mocking Trump’s intelligence, manner of speaking, and physique (the latter betraying their rhetoric of body positivity). By making explicit these often-unspoken assumptions, this essay explains why reactions to Trump have been so polarized and why mutual understanding between opposing sides has been so difficult to achieve.

It’s crucial to do this because the dominant sensemaking institutions—academia, the corporate media, the culture industry, and the Democratic Party—depend on popular ignorance to advance the transnational project. Behind the strategies globalists use to disorganize the population—historical revisionism, multiculturalism, racial and ethnic antagonism, and radical gender ideology—is the project to dismantle national sovereignty for the sake of transnational corporate and financial powers. By incorporating a mass of the population into the progressive worldview, elites can produce mass hysteria when it is functional to their ends. TDS is the paradigm of deep propaganda work.

Image generated by Sora

In the modern world, there exist two competing narratives about how people ought to organize their economic, political, and social lives. The first of these is the system born out of the Peace of Westphalia and later embodied in the American System—a system of sovereign nation-states, each jealous of its independence, cautious of foreign entanglements, but free to cooperate through alliances when necessary (see Will They Break the Peace of Westphalia or Will We Save National Sovereignty for the Sake of the People?).

This was the vision of Alexander Hamilton, carried forward by Henry Clay, and later defended by Abraham Lincoln (see With Reciprocal Tariffs, Trump Triggers the Globalists; Tariffs, Trade, and the Future of the American Worker; Why the Globalists Don’t Want Tariffs. Why the American Worker Needs them; History as Ideology: The Myth that the Democrats Became the Party of Lincoln). It is grounded in classical liberal principles of free enterprise, individualism, and republican governance. Economically, it finds expression in national industrial development, protective tariffs, and policies designed to secure the independence of citizens from foreign domination. In this vision, the sovereignty of the people is inseparable from the sovereignty of the nation.

Opposed to this stands the second vision: the transnational order, rooted in the technocratic speculations of French philosopher Henri de Saint-Simon (the derangements of French philosophy inform much of the progressive worldview) and nourished by European intellectual currents. Here sovereignty is not preserved but sacrificed—subsumed into larger, unelected, bureaucratic, and corporate arrangements that dictate to nations and their peoples.

This is the ideology of progressivism, an ideology that clothes itself in humanitarian rhetoric but ultimately strips free peoples of their independence in favor of managerial elites. Its institutional forms are the European Union, the IMF, the WTO, and countless other transnational organizations that presume to legislate without the consent of the governed. (See Taking Back Our Country from the Globalists; Protectionism in the Face of Transnationalism: The Necessity of Tariffs in the Era of Capital Mobility; Marx the Accelerationist: Free Trade and the Radical Case for Protectionism.)

Its cultural forms are multiculturalism, first articulated by Horace Kallen as cultural pluralism in the early twentieth century, which gradually erodes the shared civic identity upon which true self-government depends, and the inversion of the historic racial hierarchy (which a truly liberal person seeks to dismantle altogether). Its economics are free trade without limit, mass immigration without assimilation, and the wholesale transfer of industry to foreign shores.

What is too often missed in the heat of contemporary debate is that the progressive adherents to this second narrative are largely unconscious of the architecture of their worldview. They live inside it as fish in water, operating from unexamined assumptions about “global interdependence,” “diversity,” and “inevitability.” Thus, when they encounter a figure like Trump, their opposition is almost entirely superficial: they dislike his manner, his bluntness, his appearance, his defiance of polite technocratic norms. Rarely do they engage his policies at the level of ideas, for to do so would expose the fact that Trump, like Hamilton, Clay, and Lincoln before him, stands within the older and truer American tradition—the tradition of national independence, protective tariffs, and a government that serves its citizens rather than distant managers.

The irony is that the progressive worldview, in its zeal to appear cosmopolitan and humanitarian, aligns with the very Democratic Party that once stood for slave democracy, free trade, and later Jim Crow segregation. Meanwhile, the Republican Party, in its origin, was born as a protest against the economic and political degradation of that Democratic vision. It sought to restore the American System, to defend national industry, and to protect the working man from being undercut by cheap labor and cheap imports. To ignore this continuity is to misread both the present and the past.

As noted above, Nietzsche famously remarked that insanity in the individual is rare, but in groups, parties, and ideologies, it is the rule. Progressivism, with its hollow cosmopolitanism and technocratic faith, is precisely such a madness—a system that promises liberation while delivering dependence, that preaches diversity while eroding unity, that invokes democracy while undermining sovereignty. Against the madness stands the sober realism of the American System, which insists that free people can only remain free if they control their own borders, their own industries, and their own political institutions. This is not a relic of the nineteenth century but the perennial truth of rational self-government. (See Populism and Nationalism; Progressivism Hasn’t Been Betrayed—It’s Been Installed; Richard Grossman on Corporate Law and Lore.)

Human Nature and the Limits of Tolerance: When Relativism Becomes Nihilism

I’m an atheist, but I recognize that not all religions are the same. Some are far more harmful than others. But we are told that religion is relative to a culture and that it’s wrong to judge another’s culture. We’re called bigots and xenophobes if we do.

Have you seen the meme below? It’s the blunt truth. But there is an error. What one sees here is indeed about control. But it is also about religion. The religion of Islam. If you see this image and have trouble bringing yourself to judge that religion, then you must do better. What’s holding you back is some degree of cultural relativism.

The doctrine of cultural relativism has been one of the worst ideas to ever emerge from big heads in Western civilization. It’s a concept that has shielded barbaric practices from critique and placed oppressive traditions beyond the reach of morality and reason.

Meme currently circulating on social media

To clarify, cultural relativism is the idea that the beliefs, practices, and values of others should be understood based on their culture rather than judged by the standards of another. On the surface, this seems reasonable. But the problem arises when understanding becomes an excuse for moral abdication.

Should we judge the culture of Nazi Germany based on the standards of National Socialism? It doesn’t take very long to see how reckless the demands of cultural relativism truly are. By the same logic, we could excuse slavery in the American South, foot-binding in China, or apartheid in South Africa simply because those societies once endorsed them.

This idea of cultural relativism is basic to anthropology and sociology, two disciplines in which I was professionally socialized (I teach in a sociology and anthropology program at a state university). Every introductory sociology text aims to condition students to believe in the inherent goodness of cultural relativism. Students are trained to think that the suspension of moral judgment is not just intellectually sophisticated but also morally virtuous. It does this while constantly dragging the West. My conservative students object. I am not a conservative, but I agree with them.

What I am conveying in these remarks is heresy in my profession. But I have never been comfortable with the doctrine of cultural relativism. And, as you may have picked up on, I am instinctively a heretic.

More broadly, we were all taught—and today’s youth still are—to believe and promote intercultural and interfaith tolerance (the spirit of ecumenicalism) and avoid the sin of ethnocentrism (or chauvinism), defined as the tendency to see one’s own culture as superior or as the default. This moral reflex—to recoil from judgment—is so strong that even when faced with clear injustice, many people will remain silent for fear of being labeled intolerant.

To further clarify the matter, there are two main types of cultural relativism.

Descriptive cultural relativism refers to the observation that different cultures have different moral codes and social norms. This is obvious and unproblematic. Who wouldn’t acknowledge the fact that, in the Islamic realm, women are subjects of male domination? Women wear burqas, their bodies and movements controlled, their voices silenced. They are second-class citizens by design.

If you asked some of these women whether they approve of this arrangement, many would likely say yes. But their affirmation cannot be taken at face value. They have been socialized from birth to see obedience as virtue, and fear of reprisal makes dissent dangerous. The deeper question, the objective question, is whether such subjugation is good or justifiable. Whether an individual says yes or no does not determine the objective moral status of the practice.

This observation necessarily takes us to the second type: moral or normative relativism. This is the belief that no culture’s morality or values are inherently superior to another’s. In other words, right and wrong are culturally dependent. (There is an inherent racism suggested here, but I will leave that to the side for now.) The upshot: We’re taught that, if a society considers a practice moral, then it is moral within that society.

See the problem? If we accept the premise of universal human rights, scientifically determinable on a close examination of species-being (and science—uncorrupted by ideology—is universal), then some cultures are, in fact, superior by default. We can put it this way: if normative systems meet Abraham Maslow’s hierarchy of needs, then they are superior. By the same token, if they don’t, they are inferior.

Consider the example of female genital mutilation (FGM). In many patriarchal societies, the clitoris is removed, depriving women of sexual pleasure. The cultural rationale is that women should be chaste, obedient, and free of sexual desire. Men, of course, have also been subject to genital mutilation through circumcision, which in some cases dulls sexual pleasure. But every man knows that sexual desire is not entirely under conscious control. Erections happen, sometimes involuntarily, because we are animals with biological drives. And while societies have long tried to suppress this “animality,” the biological fact remains: pleasure is part of our evolutionary design.

That is why science—empirical, objective, and universal—provides a standard by which we can judge these practices. Evolutionary biology tells us that sexual pleasure is not purposeless. It serves a function. Pleasure reinforces the behaviors necessary for reproduction and the propagation of the species. Practices that mutilate or repress this function violate something deeply rooted in human nature.

Perhaps this example is more persuasive than the one about women’s rights in general. The first example, about women in burqas, is too easily dismissed by those who have internalized cultural relativism. They may rationalize women’s subjugation as a “different but equal” arrangement. But when confronted with the physical mutilation of a child’s body—when confronted with irreversible harm—the relativist dodge becomes harder to sustain.

And yet, even this example reveals something troubling: the legacy of patriarchy in our own culture. The resistance to recognizing women’s freedom as an objective good suggests that many still unconsciously accept a hierarchy where women’s suffering can be rationalized as culturally legitimate.

And there’s this: The example of female genital mutilation, and male genital mutilation, leads naturally to contemporary debates about transgenderism and so-called “gender-affirming care” (GAC). Here we see how a superior culture’s moral order can be corrupted by ideology and, in our case, by greed as well. Children are put on puberty blockers, cross-sex hormones, or undergo surgeries with lifelong consequences, all in the name of affirmation. We are told that questioning this is hateful. Is it hateful to ask whether permanently altering a child’s body, often before full cognitive maturity, meets any objective moral or biological standard?

The moral paralysis of relativism is at work in this case, as well. Those of us who oppose genital mutilation are accused of imposing our morality on others. But it is not our morality we are imposing. It’s the morality of universal human rights. It is wrong to mutilate the genitalia of children, to sterilize them, to rob them of their ability to act as sexual beings. We object to acts of dehumanization. We are criticized for describing the barbaric acts of GAC as mutilation.

Yet GAC is no different than FGM—except that it’s framed as a compassionate act in Western societies. FGM is practiced mainly in parts of Africa, the Middle East, and Asia, but it also occurs in immigrant and diaspora communities around the world, including in Europe, North America, and Australia. In Africa, FGM is most common in countries across East Africa (Eritrea, Ethiopia, Kenya, Somalia, Sudan); West Africa (Gambia, Mali, Nigeria, Sierra Leone); and parts of North Africa, such as Egypt and Djibouti. In the Middle East, the practice is found in countries like Iraq (particularly in the Kurdistan region), Oman, Yemen, and in some communities within the United Arab Emirates. In Asia, FGM is reported in countries such as India (especially among Bohra Muslim communities), Indonesia, Malaysia, Pakistan, and parts of Thailand. Due to migration, cases are also found in many Western countries, including the United States, Canada, the United Kingdom, Australia, and across Europe.

According to UNICEF and the World Health Organization (WHO), more than 230 million girls and women worldwide have undergone FGM, and an estimated 4.3 million girls are at risk of being subjected to the practice each year. Most procedures are carried out on girls between infancy and the age of 15, often before they reach puberty, making it a major global health and human rights concern.

There was a great outcry over this many years ago, but over the past several years, media coverage of FGM has noticeably diminished. This decline reflects broader cultural and political shifts. In the 1990s and early 2000s, FGM was more frequently covered in mainstream outlets, often framed as a clear-cut human rights issue. Since then, cultural relativism and a deepening of multiculturalism have encouraged more cautious discussions around practices associated with specific cultural or religious communities.

Increasing sensitivity toward Muslim communities, particularly in the post-9/11 era (why sympathy for Islam followed 9/11 is a curious phenomenon), has made public discourse around issues perceived as tied to Islam more delicate. Fear of fueling stereotypes, Islamophobia, or xenophobia has led media outlets and commentators to downplay or avoid extensive coverage of the issue, especially when cases emerge in immigrant communities. Readers may recall the 2017 case in Michigan, where members of the Muslim community were prosecuted for performing FGM on girls—the first federal prosecution of its kind. Since then, discussions around FGM in the US have been more confined to advocacy, healthcare, and policy circles.

But the cultural relativism of the progressives who downplay the problem of FGM—and one suspects it has a great deal to do with normalizing genital mutilation associated with GAC—is not apparent in criticisms of American culture, which is condemned for being transphobic and white supremacist.

Ever been told that an “ought” doesn’t follow from an “is”? Nonsense. Acorns ought to become oaks under optimal conditions. Just as children thrive when their needs are met. And so they ought to. How will the species propagate otherwise? Why would any man with a conscience and a basic grasp of human development tolerate children with small brains and low IQs? (Yet, men do.)

There are things one ought not to do, and this isn’t a matter of opinion. When I was in high school, I remember some of my classmates proclaiming that morality is personal. No, it’s not, I would respond. Humans exist in moral orders, and some are better than others. Just ask a woman in Afghanistan under Taliban rule. Or don’t. She may feel compelled to lie to you. Better to just see what you see.

Given the descriptive definition, cultural relativism has some merit, such as encouraging cross-cultural understanding when the beliefs and practices are not harmful, for example, in one’s tastes in food or music. You may dislike such things or decline to partake in them, but they usually don’t harm you.

However, beyond cuisine and aesthetic sensibilities (rather trivial matters, I think, although some of it is quite tasty and pleasing), cultural relativism can lead to moral paralysis, where harmful practices like heterosexism, misogyny, and slavery cannot be condemned because such condemnations are ethnocentric.

Obviously, taken whole cloth, the concept of cultural relativism complicates arguments for universal ethical standards and human rights. This is to put the matter mildly. In fact, at its core, the demand from cultural relativists that we eschew moral and normative standards, which we must do if we are to be nonjudgmental and inclusive, is nihilistic. Put another way, then, cultural relativism, in its full sense, is suicidal.

Consider that Muslims don’t practice cultural relativism (why would they?). And they like it very much that Westerners do, since it allows Islam to colonize the West while demanding we adhere to the value of cultural relativism. Does that mean that we should be intolerant like Muslims? Sure. But our culture is superior.

Have you noted the weird paradox in all this? If we should not draw an “ought” from an “is,” then why is there an ethical prescription that we shouldn’t judge the adequacy of other cultures (or subcultures)? On what grounds ought we not make determinations about moral and normative adequacy?

Cultural relativism sounds like a political project designed to morally paralyze us, doesn’t it? And the fact that the doctrine is arbitrarily applied makes that possibility all the more likely. Yeah, I think we’ve been conned. I know we have.

What about judging individuals as such? We can’t. Humans are culture bearers. That is, they bring their cultures with them. If they’re prepared to denounce their faith, then we can welcome them into the community of equals. But if they cling to their barbaric practices, then we can’t tolerate their presence. Not without sacrificing our moral integrity.

Puerto Rican Prodigy Wilfred Benítez—Was He the “Fifth King”?

The “Four Kings”—Roberto Durán, Marvin Hagler, Thomas Hearns, and Ray Leonard—became immortal in boxing history because of the way their careers intertwined and because each became a household name. They all faced each other during the 1980s in a series of high-stakes bouts. Yet some fans and historians have long argued that there was a fifth king: Puerto Rican prodigy Wilfred Benítez. After all, Benítez was a three-division world champion, the youngest ever to win a title at just 17, and shared the ring with Leonard, Hearns, and Durán—beating Durán in 1982 and pushing both Leonard and Hearns in competitive fights.

Puerto Rican prodigy Wilfred Benítez

The reason Benítez is usually left outside the “Four Kings” mythology perhaps has less to do with his ability than with narrative. He never fought Hagler (despite competing at middleweight), his prime was shorter than the others’, and he lacked the same US mainstream profile that made Durán, Hagler, Hearns, and Leonard legendary names in the public mind. By the mid-1980s, the Puerto Rican’s reflexes had declined, while the other four were still at their peaks, engaging in their legendary round-robin of rivalries. Still, many in the boxing world—Steve Farhood among them—have called Benítez “the forgotten King.”

In this essay, I will consider whether Benítez should be reckoned among the fighters tagged the “Four Kings.” I will chart the paths of these fighters in chronological order, beginning with the Panamanian great Roberto Durán, whom I place in my top five greatest fighters of all time, alongside Sugar Ray Robinson, Henry Armstrong, Willie Pep (sometimes including Sandy Saddler, since their four matches are etched in history), and the great Mexican fighter Julio César Chávez.

Debuting in the late 1960s, Durán beat Scotland’s Ken Buchanan in 1972 to win the lineal lightweight title. Durán met Puerto Rican Esteban De Jesús later that year in a non-title fight and lost for the first time. Knocked down in the first round, Durán couldn’t impose his will on De Jesús, who was awarded a 10-round decision. This set up a title fight in 1974 in which, after being knocked down again in the first round, Durán clawed his way back into the fight and dominated the later rounds, stopping De Jesús in the 11th round.

In 1975, De Jesús challenged Colombian Antonio “Kid Pambelé” Cervantes for the WBA light-welterweight title. Cervantes dropped De Jesús three times en route to a 15-round unanimous decision. The next year, Durán was stripped of the WBC lightweight title, which De Jesús claimed by defeating Japan’s Guts Ishimatsu by unanimous decision. De Jesús would successfully defend the WBC title twice, establishing himself as the next best fighter in the division after Durán.

In 1976, Puerto Rican Wilfred Benítez defeated Cervantes for the WBA light-welterweight title, successfully defending the title three times: Tony Petronelli, by unanimous decision; Emilio Valdés, by knockout in round 15; and Carmelo Negrón, by unanimous decision. Benítez would move up in weight in 1979 and capture the WBC welterweight championship on a split-decision over American Carlos Palomino.

In 1978, Durán and De Jesús met for a third time to unify the lightweight title. Durán dominated and stopped De Jesús in the 12th round. It was Durán’s final title fight at lightweight, having successfully defended his championship twelve times, with only one of these bouts going the distance. (In 1980, De Jesús would attempt to win the light welterweight title again, but Saoul Mamby stopped him in the 13th round, effectively ending De Jesús’s career.) Durán relinquished the lightweight title in 1979 and moved to the welterweight division.

As noted, having also moved to the welterweight division, Benítez defeated Palomino for the WBC title in 1979 on a split decision. Palomino, who had held the title since 1976, had seven title defenses under his belt and was highly regarded. The fight was close, and Palomino remained a top contender. Benítez defended the welterweight title successfully against Harold Weston Jr., a fighter with whom he had drawn in 1977, before accepting the challenge of Ray Leonard. Leonard was ahead on points on all three scorecards when referee Carlos Padilla stopped the fight with seconds to go in the 15th and final round, handing Benítez his first loss.

The 1979 Benítez-Leonard bout was part of a doubleheader, preceded by the controversial world middleweight championship fight between Italy’s Vito Antuofermo and American Marvin Hagler. The fight ended in a 15-round draw. Most observers favored Hagler (as I did) based on his dominance over the first ten rounds of the fight. However, Antuofermo’s tenacity impressed the judges, and he retained his title.

The next year, in 1980, Hagler finally reached the summit of the middleweight division, stopping Brit Alan Minter (who had defeated Antuofermo) in three rounds in Minter’s home country. Not leaving it to the judges this time, Hagler dispatched Minter in brutal fashion. Wembley Arena erupted in violence, and Hagler had to be hastily ushered to his dressing room, robbing him of his moment to celebrate his victory in the ring.

A year earlier, Durán had won a convincing 10-round decision over Palomino, dropping the former champion in the sixth round and setting the stage for the first Leonard-Durán clash. Leonard and Durán would meet in Montreal in 1980. Leonard had a successful title defense under his belt, a spectacular 4th-round knockout of Dave “Boy” Green of Britain. Durán had proved he was a legitimate welterweight with his commanding victory over Palomino. Durán was the aggressor throughout his bout with Leonard, winning a close but unanimous decision. Five months later, Leonard regained his title when Durán quit in the eighth round.

That same year, Thomas Hearns knocked out Mexico’s Pipino Cuevas in two rounds to win the WBA version of the welterweight title. Hearns would successfully defend his title three times—knocking out Luis Primera, Randy Shields, and Pablo Baez—before facing Leonard in 1981 in a title unification bout. Leonard prevailed in a come-from-behind 14th-round stoppage. Leonard had picked up the WBA light middleweight title a few months earlier by knocking out Ayub Kalule. Leonard defended his welterweight title for a fourth time (against Bruce Finch) before retiring due to a detached retina. Hagler, who had anticipated a big-money fight with Leonard, was left crestfallen by the retirement.

Meanwhile, Benítez had won the WBC light middleweight title in 1981 by knocking out the Brit Maurice Hope. Benítez would successfully defend his title twice with 15-round decisions over Carlos Santos and Durán, before losing the title in 1982 on a majority decision to Thomas Hearns. Benítez continued to fight on but never regained his form, suffering losses to Mustafa Hamsho, Davey Moore, and Matthew Hilton, among others. He retired in 1990 with a record of 53-8-1 (32). He lives in Chicago, requiring constant care as a result of post-traumatic encephalitis.

Hearns would successfully defend his light middleweight title four times over the next several years. One of those defenses was against Durán, who was coming off a 15-round defeat to Hagler for the middleweight title in 1983. Durán had worked his way back into contention after his defeat to Benítez by stopping Cuevas in four rounds and brutalizing Davey Moore over eight rounds to win the WBA light middleweight title, both bouts also in 1983. Hearns and Durán met in 1984, and Hearns knocked the Panamanian out in two rounds. Hearns’ other title defenses came against Luigi Minchillo, Fred Hutchings, and Mark Medal.

In 1985, Hearns challenged Hagler for the middleweight championship, losing on a third-round knockout. Going into that fight, Hagler had defended his title ten times, knocking out or stopping Fulgencio Obelmejias (twice), Antuofermo, Hamsho (twice), Caveman Lee, Tony Sibson, Wilfred Scypion, and Juan Roldan, as well as decisioning Durán. The next year, on the same card, Hearns knocked out American James Schuler in the first round, while Hagler defended his title for the twelfth time against Ugandan John Mugabi, knocking him out in the 11th and setting up a rematch between Hagler and Hearns.

However, the Hagler-Hearns rematch would never occur. In 1987, Hagler lost the middleweight title in a controversial 12-round split decision to a returning Ray Leonard. It would be Hagler’s final fight; he retired with a record of 62-3-2 (52) and twelve successful title defenses, putting him on Durán’s level in terms of dominating a weight class. Hagler had sought a rematch with Leonard, but Leonard declined, and, with his hopes of surpassing Carlos Monzon’s record of fourteen uninterrupted title defenses having been dashed, Hagler walked away from the sport. Hagler died in March of 2021 from natural causes.

In 1987, Hearns would defeat Roldan (whom Hagler had defeated in a title fight in 1984) by 4th-round knockout, winning the vacant WBC middleweight title. Just prior to that fight, Hearns had won the WBC light heavyweight title, knocking out Brit Dennis Andries in the 10th round. Hearns vacated the light heavyweight title shortly afterwards and would lose the middleweight title on a 3rd-round knockout to American Iran Barkley in 1988. A few months later, Leonard would stop Canadian Donny Lalonde, winning the WBC super middleweight and WBC light heavyweight titles.

Barkley would lose the middleweight title in his first title defense in 1989 to Durán, who had clawed his way back into contention. Flooring Barkley in the 11th round, Durán won a split decision. This set up a rubber match with Leonard, who outpointed Durán over twelve rounds for Leonard’s WBC super middleweight title. Just prior to this match, Leonard had drawn over twelve rounds in a long-awaited rematch with Hearns, also for Leonard’s WBC super middleweight title. The draw was controversial, with most observers (including Leonard) believing Hearns deserved the nod.

Durán would fight on with considerable success. However, he lost in a bid for the WBA middleweight title in 1998 against American William Joppy and retired in 2001 with a record of 103-16 (70).

Leonard and Hearns would fight on. Leonard would fight two more times: first in a failed attempt to win the WBC light middleweight title against American Terry Norris in 1991, losing by decision, and then in 1997 in an ill-advised comeback against Puerto Rican Hector Camacho in a minor middleweight title fight. Camacho, having outpointed Durán nine months prior, stopped Leonard in the fifth round. Leonard retired that year with a record of 36-3-1 (25).

Hearns continued to have success, shocking the world with a unanimous decision over long-reigning WBA light heavyweight champion Virgil Hill in 1991. Hearns would lose the title in 1992 to Barkley in a hard-fought 12-round split decision. Hearns moved up to the cruiserweight division (winning a minor title) before finally retiring in 2006 with a record of 61-5-1 (48).

The “Four Kings” (image generated by Sora)

An argument can be made that Benítez should stand with the other four in terms of his record and quality of opposition. Benítez won the WBA light welterweight title and successfully defended it three times, relinquishing the title undefeated after winning the welterweight title. He successfully defended that title before losing to Leonard in a close contest (unfairly, in my eyes, stopped by the referee with seconds to go in the match). He then won the WBC light middleweight title, successfully defended it twice, including a 15-round unanimous decision over Durán, before losing to Hearns on a majority decision.

Let’s compare that record to those of the others.

Durán was world lightweight champion (unifying the WBA and WBC titles after being stripped of the latter), the WBC welterweight champion, WBA light middleweight champion, and WBC middleweight champion. While he never successfully defended these later titles, he did successfully defend his lightweight title twelve times, which puts him in an elite class. Durán lost the WBC welterweight title in the ring, lost twice in attempts to win the WBC light middleweight title, and came up short in challenges for the middleweight and WBC super middleweight titles.

Leonard twice won the WBC welterweight title, added the WBA welterweight title in a unification match, won the WBA light middleweight title, and won the world middleweight, WBC super middleweight, and WBC light heavyweight titles. He successfully defended his welterweight title four times and his WBC super middleweight title twice. He only lost one title in the ring, his WBC welterweight title to Durán (although, reckoned in lineal terms, an argument could be made that he lost his middleweight title to Norris, but that requires a complex examination that is beyond the scope of this essay).

Hagler was not a weight jumper (although world light heavyweight champion Michael Spinks challenged Hagler to a fight at the catchweight of 168 lbs in 1983). Hagler was a solid middleweight throughout his career, posting twelve successful title defenses across seven years in a deep field. He held wins over Durán and Hearns, and, many will argue (including me), over Leonard, as well. Hagler has the distinction of never being knocked down (no, Roldan did not knock down Hagler in their match) or stopped in 67 fights. Hagler may be the best middleweight in boxing history and is in my list of the ten best boxers of all time.

Hearns held the WBA welterweight title, the WBC light middleweight title, the WBC light heavyweight title, the WBC middleweight title, and the WBA light heavyweight title. Hearns would defend his welterweight title three times and his light middleweight title four times. Had Hearns had better whiskers and stamina, he might have very well been the best fighter of his era. But his knockout losses to Hagler and Barkley, and his gassing out in the later rounds against Leonard keep him from that status.

The “Four Kings” era is rightly remembered as a golden age. Benítez stands just outside its pantheon. Benítez had the talent and résumé to bring him close to that circle. He did beat Durán, but not having faced Hagler is significant. Moreover, his short prime goes against him. Part of what matters in these assessments is longevity.

As for how the “Four Kings” rank in my estimation, the order is: Durán, Hagler, Hearns, and Leonard. Readers may wonder why Leonard is fourth given his wins over the other three (as well as Benítez). The reason is that Hearns was beating him rather easily in their first match before gassing out, and Hearns deserved the nod in the rematch, dropping Leonard twice in the fight. Despite losing to Leonard twice afterwards, Durán dominated Leonard in their first match. And Hagler deserved the nod against Leonard. Excluding Durán’s reign as lightweight champion, the order changes, but I am assessing each fighter’s entire career.

Why We Must Resist Neologisms like “Cisgender”

Understand what the neologism cisgender is meant to convey. It’s not a minor detail—it’s a significant matter.

Queer activists tell us that a trans woman is a type of woman. What, then, is another type of woman? A “ciswoman.”

Cisgender first appeared in academic and activist circles in the 1990s and gained wider recognition in the 2000s. It was coined to describe people whose gender identity aligns with the sex they were “assigned” at birth. The term provides a linguistic parallel to transgender without implying that being cisgender is the default or “normal” way of being.

German sexologist Volkmar Sigusch

In the early twentieth century, German physician Ernst Burchard, an associate of Magnus Hirschfeld, coined the term “cisvestitism” to describe individuals who presented themselves in a manner consistent with their gender. German sexologist Volkmar Sigusch (who studied under neo-Marxists Max Horkheimer and Theodor Adorno) used the prefixes “cis” and “trans” in the context of gender identity in the early 1990s—building on the concept of gender identity, a construct introduced into psychiatry by sexologist Robert Stoller in the 1960s. In English-language discourse, the term “cisgender” was coined in 1994 by Dana Defosse, a graduate student at the University of Minnesota. From there, the construction quickly spread among feminist and LGBTQ+ academic circles.

In reality, adding the prefix cis- to “woman” is redundant. A woman is, by definition, an adult female human. There is no other kind. Yet the tautology is part of the ideological purpose behind the term. The goal of promoting “cisgender” is to normalize the concept of a trans woman as a type of woman by affirming the premise, thereby persuading people to accept the claim that a trans woman is not a man, even though, in the exclusive and objective sense, he is.

This is the propaganda function of the neologism: legitimizing transgenderism and the construct of the “trans woman” through specialized language. George Orwell would recognize it as newspeak—the deliberate reworking of language for ideological purposes (as in Nineteen Eighty-Four, where 2 + 2 = 5 because the Party says so).

Activists often insist that objections to cisgender are overreactions. They claim it merely describes someone who identifies as a woman. But identity is not a matter of what one thinks; it is a matter of what one is. People can imagine themselves in all sorts of ways: some think they are dogs, for instance. If they truly believe this, it is a psychiatric condition.

Consider this analogy: a cat raised among dogs may act like a dog. Its owner may say, “She thinks she’s a dog.” But the cat remains a cat. There’s no such thing as a “cis-cat.” The same principle applies to humans: a trans woman is not a woman in the exclusive and objective sense; he is a man.

Or take a squirrel raised by cats that “purrs.” The squirrel imitates feline behavior and produces sounds reminiscent of purring. But any reasonable observer knows it is still a squirrel. It cannot become a cat simply through imitation or socialization. There is no such thing as a “trans-cat.” You don’t need to be a biologist to recognize that.

Objecting to cisgender is not overreacting; it’s a rational response to an ideological effort that uses language to destabilize clear thinking and promote falsehoods. Promoting the neologism is essentially encouraging people to lie—to themselves and to others. If normalizing deception is acceptable, then nothing else can be objectionable.

This is the broader objective of queer rhetoric. It is not only about transgenderism or redefining womanhood. The larger project is a kind of transhumanism: preparing people to accept their own dehumanization. By insisting on certain language, activists manipulate people into estranging themselves from objective reality and the truth of their identity. They seek to disrupt natural cognition, to distance humans from their animality.

This is precisely what must be resisted.

Remember about a decade ago when a white woman named Rachel Dolezal identified as black? She claimed to be “transracial.” Using queer logic, this means that someone identifying with the race they were “assigned” at birth would be “cisracial.” Why hasn’t this neologism caught on? Perhaps because rigid racial categories are functional to progressive politics in a way the gender binary isn’t?

Trump, Missing Ukrainian Children, and a Dishonest Meme

There’s a very dishonest meme making the rounds on social media. It falsely suggests that President Trump told Ursula von der Leyen, President of the European Commission, that the thousands of missing Ukrainian children had nothing to do with the meeting’s business.

A version of the dishonest meme

That is simply not true. In fact, Trump himself raised the issue with President Zelensky and others, even sharing a letter the First Lady had written on the subject. Far from dismissing it, he made it central to the discussion.

Here’s the actual exchange the meme distorts:

Von der Leyen: “I want to thank you also that you mentioned the thousands of Ukrainian children that have been abducted. And, as a mother and grandmother, every single child has to go back to its family. This should be one of our main priorities also in these negotiations.”

Notice—Von der Leyen explicitly says Trump was the one who raised the issue.

Trump: “Thank you. And we did. I was just thinking we’re here for a different reason. But we, just a couple of weeks ago, made the largest trade deal in history, so that’s a big thing. And congratulations. That’s great. Thank you very much, Ursula.”

Trump’s remark—“we’re here for a different reason”—was not directed at the plight of missing children. It was a segue into his next point, about the recently concluded US-EU trade deal. Read in context, it’s obvious:

“I was just thinking we’re here for a different reason. But we, just a couple of weeks ago, made the largest trade deal in history…”

If you saw the meme and concluded Trump was brushing off missing children without checking the context, then you’re part of the problem. Assuming he’s that callous says more about your bias than about what he actually said.

This is the same playbook we’ve seen before—whether it was claims about “bleach injections,” “very fine people on both sides,” or “suckers and losers.”

Misquotes thrive because people believe the caricature they’ve built, not the record of what was actually said. No US president has ever been treated this way.

The Law and Order President and His Detractors—Who’s Right?

Last week, PBS published a story titled “Fact-checking Trump’s claims about homicides in DC.” I doubt PBS intended their report to come across this way—yet, in effect, they conceded the point. They acknowledged that Washington, D.C.’s homicide rate is alarmingly high (even if reducing murders to near-zero would be difficult in a nation as large, diverse, and unruly as the United States).

“Washington, D.C.’s homicide rate isn’t even the highest in the U.S.,” PBS tells its readers. “Per the February Rochester Institute of Technology report, the district has the fourth highest homicide rate in the U.S. after St. Louis, New Orleans and Detroit.”

Fourth? Our nation’s capital ranks behind only St. Louis, New Orleans, and Detroit. DC is worse than Baltimore? Chicago? Kansas City? Memphis? The seat of the federal government sits among the top four most violent cities in America—an outlier even by First World standards, with crime levels more commonly associated with the developing world.

This hardly undercuts Trump’s case for making DC safer. If anything, it underscores the urgency.

Image generated by Sora

Corporate media outlets, echoing progressive academics and politicians, consistently stress that crime rates nationwide are declining. They cite recent FBI statistics and local police reports showing drops in certain categories—property crimes or homicides in select cities—as evidence that public fears of rising lawlessness are overstated or politically motivated.

The message is clear: America is safer than in previous decades, and concerns about crime spikes are either temporary fluctuations or distortions amplified by conservative media.

Against this backdrop, Donald Trump’s push to reintroduce a hardline “law and order” agenda—expanding police powers and imposing tougher sentencing policies—is portrayed not only as unnecessary but also as dangerous. Critics argue his rhetoric stokes fear, reinforces racialized narratives of urban disorder, and reflects an authoritarian impulse: prioritizing control and security over addressing social drivers of crime, such as inequality, housing shortages, or mental health challenges. (At least, that is the progressive framing.)

But the premise is flawed. Crime did not fall under the Biden administration. At best, it stabilized after sharp pandemic-era increases, with stabilization occurring only toward the end of his term. Yet a plateau offers cold comfort to ordinary Americans still facing daily risks of victimization. A neighborhood plagued by break-ins, shootings, and theft is no safer simply because the crime rate stops rising.

The public knows this instinctively. Citizens witness armed robberies on security cameras, shoplifting in broad daylight, smashed car windows, and neighbors or coworkers becoming victims. They do not need sanitized headlines to judge their own safety. They need effective public safety measures—and many recognize they cannot rely on Democrats to deliver them.

The progressive insistence that crime is “down” amounts to lying by omission: cherry-picking statistics, dismissing victims’ lived experiences, and ignoring persistent, real-world danger.

I will focus here on the cherry-picking. The National Crime Victimization Survey (NCVS), which collects data directly from households, shows that a large share of crimes never appear in police reports. Between 2020 and 2023, only about 38 percent of violent victimizations in urban areas were reported to police—lower than in suburban (43%) or rural (51%) areas. With at best half of all crimes reported, media and expert claims about falling crime deserve extreme skepticism.
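To make the arithmetic of underreporting concrete, here is a minimal sketch in Python. The reporting rates are the NCVS figures quoted above; the reported-incident counts are hypothetical, chosen only to illustrate how scaling reported crime by the reporting rate changes the picture.

```python
# Minimal sketch: scaling reported crime by NCVS reporting rates.
# Reporting rates are the 2020-2023 NCVS figures quoted above;
# the reported-incident counts are hypothetical, for illustration only.

reporting_rates = {"urban": 0.38, "suburban": 0.43, "rural": 0.51}
reported_incidents = {"urban": 10_000, "suburban": 6_000, "rural": 2_000}  # hypothetical

for area, reported in reported_incidents.items():
    rate = reporting_rates[area]
    estimated_total = reported / rate        # estimate of all victimizations
    unreported = estimated_total - reported  # victimizations police never see
    print(f"{area:9s} reported={reported:6,} "
          f"estimated_total={estimated_total:8,.0f} unreported={unreported:8,.0f}")
```

On these hypothetical numbers, roughly 26,000 urban victimizations would stand behind 10,000 police reports—the kind of gap the NCVS is designed to reveal.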

Systematic underreporting means official sources, like the FBI’s Uniform Crime Report (UCR) program or the Crime Data Explorer (CDE), present a highly incomplete picture. When Americans hear that crime is “falling,” they are hearing only about reported incidents—not the full scope of lawlessness.

Historically, NCVS and UCR trends moved roughly in parallel: crime peaked in the early 1990s and then declined, with NCVS estimates typically 2–3 times higher than UCR counts. Between 2020 and 2023, however, the two series diverged in opposite directions: serious violent crime as measured by the NCVS increased notably, while FBI-reported violent crime decreased. The widening gap suggests a shift in reporting or recording practices—particularly for victimizations never captured in police data—rather than a true decline in criminal activity. This tells us that something has changed in the way the FBI measures crime.

While UCR data may show, for example, a certain number of aggravated assaults in a given year, the NCVS consistently finds that far more such assaults occur but are never reported to law enforcement. This means that the apparent decline in some categories of crime may not necessarily be the result of fewer assaults, but rather the result of fewer victims reporting incidents, whether due to cynicism, fear, or frustration with a justice system they no longer trust.

The gap between official statistics and victim experiences represents a serious flaw in the narrative being advanced by progressive commentators and politicians.

Victims avoid reporting for many reasons. Some distrust law enforcement, influenced by years of rhetoric depicting police as corrupt or racist. Others fear retaliation in neighborhoods where enforcement is lax. Some citizens, even when reporting, experience inaction: police may decline to investigate property crimes, prosecutors may drop charges, and officers may be discouraged from proactive policing to avoid political backlash. Residents of crime-ridden neighborhoods recognize these dynamics.

Underreporting by victims is compounded by the reality that agencies in our most crime-ridden jurisdictions—the Blue Cities (urban areas run by Democrats)—withhold data from the federal government because these data make their cities look bad. This is not speculation. The FBI’s UCR makes clear which agencies report, and the politicization of data is undeniable.

The incentives are obvious: reporting crime exposes high-crime areas, and because black Americans are overrepresented in crime statistics, it also raises uncomfortable racial questions. Public narratives that criminal justice disparities reflect systemic racism have influenced perceptions, but many Americans—experiencing what some call “black fatigue”—recognize that crime remains disproportionately committed by young black males, even if the majority of black Americans are law-abiding.

Reflecting on the discrepancies, I stress this paradox to my criminal justice students: rising arrests or reports can indicate better policing, while falling crime in UCR may not mean fewer crimes—but simply fewer reports or fewer arrests. Effective crime reduction requires both active policing and citizen engagement. Put another way, solving the problem of crime in America (as in any society) takes a whole-of-society approach.

Persistent crime is not solely a policing issue. When offenders are apprehended but prosecutors fail to pursue charges, or judges release criminals quickly, public safety is further degraded. More police—and, if necessary, the military—can deter crime, but prevention requires aggressive prosecution and incarceration. Those taken off the streets must remain off the streets to achieve real impact. Offenders cannot victimize others if they are in prison.

Chart depicting the impact of incarceration on crime rates

Deterrence and incapacitation were the primary drivers of the dramatic drop in crime from its historic highs in the 1980s and 1990s: more police on the streets, aggressive law enforcement, harsher penalties, and the expanded use of prisons.

Progressives have long criticized the large US prison population, yet they resist social policies that could alter the dynamics in our most crime-ridden neighborhoods. And this isn’t about taking guns off the street (more on that shortly). The issues are deeper: broken families, a subculture of idleness and violence, and conditions stemming from deindustrialization, ghettoization, mass immigration, and the rise of the welfare state.

The progressive abandonment of law-and-order policies, set against the backdrop of idleness and welfare dependency and further complicated by anti-police and prison-abolitionist rhetoric, is what led to the return of significant crime and violence after 2014. Whether one relies on the NCVS or the UCR, the return of serious crime is not in dispute.

Note the rise in crime after 2014

Who suffers the most because of all this? Black Americans. Blacks are far more likely to be the victims of crime and violence perpetrated by other blacks in the neighborhoods into which urban elites have directed them and in which they have trapped them over the decades.

The failure to protect blacks in crime-ridden communities is a phenomenon Randall Kennedy calls “racially selective underprotection.” Indeed. It puts the lie to the “Black Lives Matter” slogan. It also prompts a rational person to ask: Who are the real racists here?

Now, about guns. This is an important piece in all this since politicians and progressives say the problem of crime (homicide, robbery) can be dealt with by diminishing the public’s access to firearms, a right protected by the Second Amendment. It’s part of the argument PBS makes in the story that inspires this essay.

I must note that, paradoxically, taking guns off the street would require a vast expansion of the law enforcement apparatus, given the number of guns in America and the reluctance of citizens to give up their most effective means of self-defense. Indeed, more strident gun control would lead to more crime, not less.

John R. Lott Jr. (More Guns, Less Crime) makes a compelling argument that legally owned firearms—carried by law-abiding citizens—serve as a powerful deterrent to crime. Criminals prefer unarmed and vulnerable targets, so when potential victims are armed, the risks of committing violent crimes increase, leading to fewer such offenses.

According to Lott’s careful analyses, states that adopt “shall-issue” laws making it easier for citizens to carry concealed weapons generally experience less violent crime. While some criminals may shift toward lower-risk property crimes, the overall effect of legal gun ownership is to reduce victimization.

Gun control thus disarms law-abiding individuals rather than criminals, thereby undermining public safety. In this case, what feels intuitively true is empirically sound.

I used to be on the other side of the gun argument (see The Truth About the AR-15 and The Sandy Hook Shootings: What Really Happened). My mistake was a category error. A gun cannot be guilty of murder. Yet progressives treat guns as if they have agency. But guns don’t shoot themselves; people shoot guns. People have agency. (I apologize for my error here: Guns Don’t Shoot Themselves.)

But supposing that guns themselves are a problem, is it true that more guns mean more violence? Let’s look at the facts.

From the 1960s through the 1970s, US gun ownership rose to roughly 40,000–60,000 firearms per 100,000 people as production increased, with gun homicides peaking around 7 per 100,000 in 1974.

In the 1980s, ownership climbed to an estimated 60,000–80,000 per 100,000, while the overall homicide rate averaged about 8.7 per 100,000, plateauing or gradually declining by the decade’s end.

With the 1980s, we are approaching the peak of violent crime in American history, driven by demographics (more young men in the population) and the fruit of progressive policy hollowing out America’s industrial base and destroying the black family. Gun proliferation is not driving the crime problem. Indeed, it is likely a response to the experience of rising crime.

While the 1990s saw ownership reach about 74,000 per 100,000 (192 million guns for a population of ~260 million), the average homicide rate declined to around 8.1 per 100,000. Gun homicides had peaked in the early 1990s (the absolute zenith of criminal violence in America) before falling nearly 49 percent by 2010, even as ownership continued to grow.

This sharp decline was not thanks to the so-called assault weapons ban (1994–2004). As I have explained before on this platform, rifles account for a small proportion of guns used in homicide. More people are killed with feet and hands than are killed with rifles. Most gun homicides are perpetrated with handguns (more than half).

In the 2000s, the gun stock likely reached 90,000–110,000 per 100,000 people, while the homicide rate averaged 5.6 per 100,000, with overall gun deaths declining (with suicides making up the largest share).

By the 2010s, ownership had stabilized between 110,000–120,000 per 100,000 (roughly 40–45 percent of households owning guns), and the homicide rate averaged 4.9 per 100,000, continuing the long-term downward trend.

If guns explain homicides, then more guns should be associated with more homicides. But gun ownership has increased over the last several decades, while the homicide rate decreased during the same period. Thus, it would seem that the causal relation is the inverse of what PBS’s expert claims: more guns means fewer homicides.
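As a rough check on that claim, the sketch below computes a simple correlation from the approximate decade-level figures cited above (the 1970s are omitted, since the figure quoted for that decade is the gun-homicide peak rather than an overall average). A negative coefficient is consistent with the argument; a four-point series is, of course, illustrative rather than a causal analysis.

```python
# Back-of-the-envelope correlation between gun prevalence and homicide rates,
# using rough midpoints of the decade-level figures quoted above.
# Values are approximate and illustrative only; this is not a causal estimate.

decades   = ["1980s", "1990s", "2000s", "2010s"]
guns      = [70_000, 74_000, 100_000, 115_000]  # firearms per 100,000 people (midpoints)
homicides = [8.7, 8.1, 5.6, 4.9]                # average homicide rate per 100,000

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(guns, homicides)
print(f"Decade-level correlation between ownership and homicide rate: {r:+.2f}")
# A negative r means that, across these decades, more guns coincided with fewer homicides.
```

Run on these four decade averages, the coefficient comes out strongly negative—consistent with the pattern described above, though a correlation across four coarse data points proves nothing on its own.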

What explains this? Much of it is deterrence: an armed citizen raises the risk for anyone who wishes to inflict harm on him. But it’s also because a more expansive criminal justice system saves lives, since police presence deters violence and incarceration keeps violent offenders away from the general public.

If America locked up 2.3 million people and didn’t reduce crime, one would have reason to doubt the efficacy of incarceration. But the reality is that homicide was reduced because of deterrence and incapacitation effects. So were car thefts and other crimes. Law and order works. Progressive approaches to crime and disorder don’t. Indeed, they make the situation worse (for obvious reasons).

Remember the BLM craze, when police were told to stand down and the prison population declined, allowing more violent offenders to operate more freely on our streets? What happened? What any rational person would expect: homicides increased. Car theft increased. Et cetera. As Lott likes to say, this isn’t rocket science.

One other thing, and it’s crucial for understanding the false claims gun grabbers make. Approximately 46–47 percent of adults in rural areas report owning a gun, and rural areas have much lower homicide rates than urban ones. Suburban areas? About 28–30 percent of adults say they personally own a gun. Again, not areas with a lot of homicides.

But urban areas? Roughly 19–20 percent of adults report owning a firearm, yet many inner-city neighborhoods are killing fields, including in DC. Why? Not because of the number of guns, but because the people there are much more likely to commit murder.

Democrats don’t want Trump to stop the killing. Weird, right? And conservatives and their guns are somehow to blame for crime. More weirdness.

To put the matter bluntly, crime reporting in the mainstream media is garbage. It’s a pack of lies promulgated by those who strive to make guns the problem while excusing the consequences of progressive social policy—not only in the area of criminal justice, but also in economic policy and the welfare state.

It’s time to get back to what works: law and order. Public safety is a human right. Liberty lives where people are safe to go about their business and their lives.

A Deeper Horror Lurks Behind the Xenomorph: Hawley’s FX Series Expands on Scott’s Core Thematic

I couldn’t disagree more with Josh Bell’s complaint in his review of the new FX/Hulu series conceived by director and writer Noah Hawley: “The Alien franchise hits a wall with TV series Alien: Earth,” published in Inlander. It’s not that I have no criticisms of the series—I do, although I won’t consider them here, since I am still progressing through the series—but I reject the idea that the franchise is venturing into territory it shouldn’t. In fact, it’s going exactly where I hoped it would go from the start.

I was seventeen when Alien premiered in May 1979. I found a seat up front in the Martin Twin Theater (in Jackson Heights Plaza, Murfreesboro, Tennessee), ready for whatever Ridley Scott had in store for me. The TV campaign had given us almost nothing—just enough eerie imagery and sounds to make it irresistible. With no Internet to offer clues or ruin the shock, we went in blind, unprepared for the horror of the chestburster scene. I shrank in my seat the way others in the theater must have. I didn’t scream. But others did.

Scott’s Alien tells the story of a deep-space mining expedition whose course is secretly altered—known only to the synthetic Ash, his identity also concealed from the crew—to collect an alien species for the Weyland-Yutani Corporation, one of five conglomerates that dominate Earth in a transnational corporatist order. This voracious system seeks the spoils of interstellar space, including other life forms. In this future, the world’s proletariat—and the synthetics embedded among them—serve the interests of technological overlords. The parallel to our own emerging reality is unmistakable. As he would do in 1982 with Blade Runner, Scott gave us a window into a possible future with Alien.

Corporate intrigue has never been an incidental subplot to Alien; it’s embedded in the series from the start. Scott’s fascination with world-building ensures that, just as the xenomorph has an origin story (collected or created by the Engineers), so does Ash. Alien has the structure of a slasher film. I love the action and horror of the genre as much as anyone. But what really fascinates me about Alien is the deeper horror: the realization that humans—and other humanoids across the stars—are consumed by an obsession with biotechnology and the Promethean desire to transcend the forms provided by natural history.

HR Giger’s The Pilot (Engineer)

That’s why, when I saw Alien on opening night, the moment that haunted me most wasn’t the chestburster scene—it was the fossilized Engineer seated at that vast HR Giger contraption aboard the derelict ship. The archaeologist in me burned to know the story behind it. I had to wait until 2012, with the release of Scott’s Prometheus, a flawed but compelling prequel, to get answers.

Prometheus takes place 27 years before Alien: Earth. The follow-up, Covenant (2017), occurs a decade after Prometheus. Alien: Earth is set two years before the events of the original Alien, which takes place in the year 2122. Thus, Hawley’s story partly fills the gap between Covenant and Alien that a future theatrical release had promised to fill. (Will we ever learn the fates of Daniels, David, and the USCSS Covenant and its cargo?)

It’s in Prometheus and Covenant that we learn how the Weyland Corporation first encounters alien life. CEO Peter Weyland (a fictional character anticipating the ambitions of Elon Musk, as I note in a recent essay, From Neon Rain to Corporate Space: Blending the Histories of Alien and Blade Runner) may have been more interested in the Engineers and immortality, but the acquisition of alien species became central to Weyland-Yutani’s ambitions (the corporate merger of Weyland and Yutani occurring in the aftermath of the events of Covenant).

In Alien: Earth, the USCSS Maginot, carrying alien life forms, crashes into Prodigy City, New Siam, in futuristic Thailand. Because of what happens there, Weyland-Yutani knows it must keep the crew of the USCSS Nostromo (the ship in Alien) ignorant of the Maginot’s fate—and the catastrophic consequences of the company’s ambitions. Thus, when the Nostromo, returning from Thedus, picks up the distress signal from LV-426, the crew knows nothing of the fate that awaits them. In the depths of interstellar space, they have no knowledge of the hybrids realized by wunderkind Boy Kavalier. The plot of the original Alien thus remains untouched—a self-contained story of corporate exploitation. The difference is that we, the audience, now know the context. Far from spoiling the original, the FX series, and Prometheus and Covenant before it, enrich it. This is the beauty of an expanded universe—provided it doesn’t distort the core backstory.

Critics who fault Hawley for introducing existential themes into the franchise overlook that Scott himself already explored them in the prequels. Scott introduces these themes through humanity’s hubristic desire to transcend natural limits, a premise embedded in Alien itself, which we learn from Mother, the AI that runs the Nostromo (a shoutout to HAL 9000 from Kubrick and Clarke’s 2001: A Space Odyssey). We also learn in the original film about the intrinsic pathological potential of synthetics, which mirrors the amorality of the xenomorph (recall Ash’s admiration of the creature).

In Scott’s 2012 prequel, the crew of the USCSS Prometheus embarks on a quest to find the Engineers, the mysterious species Weyland believes to have created humanity. They travel to LV-223, a distant moon orbiting a gas giant in the Zeta 2 Reticuli system (see my June 2012 essay Ridley Scott’s Prometheus and the Problem of Time Dilation for more). LV-223 is near LV-426 (aka Acheron), the planetoid in the original Alien. The search raises fundamental questions about human origins and the nature of our creators, forcing the characters—and the audience—to confront whether life itself has inherent meaning or whether our makers are fallible and even malevolent. Weyland’s obsessive pursuit of immortality highlights humanity’s desire to cheat death and assert control over existence, underscoring the cost of such ambitions and the hubris inherent in attempting to transcend natural limits.

Another existential theme is the tension between humans and their creations. David, a synthetic, exhibits curiosity, creativity, and, at times, malice, blurring the boundary between creator and creation. The synthetics turn not only on the human crews in which they are embedded but also on their creators (a theme also explored in Blade Runner). Through David, Scott challenges assumptions about what it means to be human and probes the ethical implications of playing god—and the inevitable boomerang of having done so.

At the core of his concerns, Scott emphasizes the fragility and expendability of human life in the corporate gaze. Both films place characters in the presence of cosmic forces—the Engineers, xenomorphs, alien landscapes—that dwarf them, underscoring their vulnerability and the existential uncertainty of their—indeed our—place in the universe. The unleashing of the xenomorphs and the chaos that ensues serve to illustrate this key existential idea: our creations escape our control and even turn against us, reflecting on the unpredictability of life and the limits of human knowledge. The Gods punished the Titan Prometheus for giving man the technology of fire. Like Dr. Frankenstein, man punishes himself.

In these ways, Scott uses the prequels to explore profound existential questions about origins, creation, ambition, and mortality. These themes set the stage for Alien: Earth, where Hawley continues to probe humanity’s obsession with biotech, the consequences of overreach, and our fragile place in a vast, indifferent universe. Bell’s review of the FX series proceeds from a superficial understanding of Scott’s vision. This suggests that he has not spent much time with the Alien franchise—or dwelled much in Scott’s other dystopian world, the world of the blade runner.

All that said, if I were introducing a novice to the Alien franchise, I’d start with the original film. Not only would this preserve the shock of the chestburster scene, but, more importantly, it would mirror how we come to understand any world—ours included. We don’t receive a neatly packaged explanation before we experience what life throws at us. We live it first and afterward ask, “What the hell just happened?” Taken chronologically, the Alien franchise unfolds like a murder mystery or the excavation of an ancient site. Along the way, we deepen our understanding of humanity and the terrible potential that lies at the heart of corporate ambition. Hawley’s Alien: Earth (with Scott serving as executive producer) continues our journey of discovery. For that reason, whatever its flaws, the new FX series works.

The Anchorage Summit Could Mark the Beginning of a US–Russia Rapprochement After Years of Tension

Today marks the eightieth anniversary of the end of World War II—the day Imperial Japan announced its surrender to the United States (a surrender compelled by the detonation of two nuclear fission devices over major Japanese cities), bringing to a close the deadliest conflict in modern history.

Russian President Vladimir Putin and US President Donald Trump in Anchorage, Alaska (source of image: BBC)

This anniversary is especially significant in light of current events: President Donald Trump and President Vladimir Putin are meeting today in Anchorage, Alaska, to seek a resolution to the Russia–Ukraine war and to establish more normal relations between world powers—powers armed to the teeth with thermonuclear (fusion) weapons, which are also part of the negotiations.

It’s worth remembering that in World War II, the United States’ most important ally was Russia, then the Soviet Union. Great Britain fought valiantly, to be sure, but its primary concern was preserving the British Empire (which, in the end, it failed to do, as the United States became the world hegemon).

Other European states contributed very little to the Allied struggle against Nazi Germany. Italy was a fascist ally of Germany, as was Spain. France, under the Vichy government, was effectively under Nazi control. Nor did the Scandinavian nations offer much assistance; Sweden’s “neutrality,” for example, worked in practice to Germany’s benefit.

In the end, it was the Russian working class (which suffered more than that of any other nation, with as many as thirty million perishing one way or another), the American working class (which lost more than 400,000 lives), and the British (who lost nearly 400,000 souls) who bore the heaviest burden in defeating Nazism.

Germany and Japan also lost millions, but since they were the instigators, we must note that these and other losses were self-inflicted.

After the war, the financial and industrial elites who had supported Nazi Germany orchestrated the European Union, while NATO fulfilled Hitler’s vision of a pan-European military. Thus, the Europeans, as well as Britain (Brexit notwithstanding), have pursued their own self-inflicted wounding—a situation they wish to inflict on others.

In the post-Soviet era, globalist hostility toward Russia has been strikingly persistent—even belligerent. NATO’s eastward expansion, the 2014 Ukraine coup backed by globalist forces, and other actions have all served to antagonize Russia. The Russia-Ukraine war is the predictable consequence of Western belligerence, pursued through its proxy Ukraine, which has brought great suffering to the Ukrainian people, as well as to the Russian people.

Why such hostility? Notably, the same entrenched forces direct similar animosity toward President Trump and the populist–nationalist movements sweeping both the United States and Europe—movements that seek to restore national sovereignty and resist transnational control. The same goals shape Russia’s stance.

This suggests (more than suggests, really) that the animus toward both the Russian people and the American people is rooted in globalist disdain for popular self-determination. A renewed commitment to the Peace of Westphalia—centered on national sovereignty, peaceful international relations, and freedom from entangling alliances—runs directly counter to the transnational agenda.

In other words, the forces opposing both Russia and the Trump movement fear nothing more than normalized relations between the two nations, because such cooperation threatens the globalist project.

The truth is, the Russian and American peoples ultimately want the same things. We have common interests. And, for this reason, we should hope that the talks in Anchorage lead not only to a resolution of the Russia–Ukraine conflict, but also to a broader normalization of US–Russia relations, including robust trade and mutual respect—and a reduction in the nuclear arsenal of both nations.

Moreover, if such a rapprochement between the United States and Russia were achieved, it would also serve to isolate China. Beijing has relied heavily on the estrangement between Washington and Moscow to advance its own strategic ambitions, positioning itself as Russia’s indispensable partner against the West.

A genuine thaw in US–Russia relations—grounded in mutual respect, sovereignty, and trade—would remove that leverage, leaving China increasingly alone in its push for global influence. Instead of exploiting a divided geopolitical landscape, China would face a more united front of great powers seeking stability and balance, undermining its capacity to expand unchecked.

The effect would be not only to lessen the antagonisms driving the world toward World War III, but also to weaken the transnational project and blunt China’s bid for world domination. A successful negotiation would therefore be a win-win-win for the world. Let’s hope for the best.

The Call for DC Statehood: Resurrecting a Bad Idea to Counter Trump’s Call for Good Order

“To exercise exclusive Legislation in all Cases whatsoever, over such District (not exceeding ten Miles square) as may, by Cession of particular States, and the Acceptance of Congress, become the Seat of the Government of the United States, and to exercise like Authority over all Places purchased by the Consent of the Legislature of the State in which the Same shall be, for the Erection of Forts, Magazines, Arsenals, dock-Yards and other needful Buildings.” —Article I, Section 8 of the United States Constitution

The latest counter to Donald Trump’s push to reestablish law and order in the nation’s capital is the renewed demand for DC statehood. This demand isn’t new—I’ve heard it my entire life, most loudly in the 1980s. It fades from time to time. It most recently re-emerged in Congress in 2023. Now it’s back.

Before making the case against DC statehood (which won’t take long), here is a brief timeline of Congressional moves to give DC more power. In 1961, the Twenty-Third Amendment to the US Constitution was ratified, granting the District of Columbia three electoral votes in presidential elections. This measure gave DC residents a voice in selecting the President and Vice President for the first time, yet it stopped short of offering them any voting representation in Congress. According to proponents of statehood (who ignore the plain text of the Constitution), the amendment addressed only one aspect of DC’s political disenfranchisement, leaving unresolved the broader question of the city’s role in the federal system.

A further step toward local self-governance came in 1973, when Congress enacted the District of Columbia Home Rule Act. This legislation created an elected local government consisting of a mayor and a city council, with members chosen both at-large and by ward. While Home Rule allowed DC to manage many of its internal affairs (not a particularly objectionable move), Congress retained the authority to review and override the city’s laws and budget, ensuring that the federal government maintained ultimate control over the nation’s capital. This last piece is crucial.

The debate over DC’s political status continues into the present day. In 2023, the Senate reintroduced the Washington, DC Admission Act, which would transform most of the District into the nation’s 51st state—called Washington, Douglass Commonwealth (so it can keep the acronym)—while preserving a small federal district around core government institutions. Supporters argue that statehood would end “taxation without representation” for hundreds of thousands of residents, while opponents contend it would violate constitutional principles and upset the balance of power envisioned by the Founders. I am with the opponents.

Indeed, it would be refreshing if DC Mayor Muriel Bowser and proponents of DC statehood would sit down and read the United States Constitution and the Federalist Papers—or, if they struggle with comprehension, have someone explain the material to them. In truth, I suspect Bowser and others around her have read these documents and do understand them. But they operate under the cynical assumption that most Americans haven’t, and can’t, so they work to mislead the public into thinking the federal government is “taking over” something it already has exclusive control of.

Even Trump, during his first term, may not have fully grasped the constitutional arrangement. Many in his administration—some actively working at cross purposes with him—certainly weren’t going to help him figure it out. Ironically, Democrats may come to regret giving Trump four years between presidencies to study how government works and assemble a more loyal, informed team. In his second term, he has done exactly that, surrounding himself with a team far more loyal to the will of the People.

As the above quote indicates, the US Constitution designates the District of Columbia as a federal district, not a state. Making it a state would require a constitutional amendment—an amendment that would undermine the Founders’ intent and embed the Capitol within a single state’s jurisdiction. Why did the Founders avoid statehood for DC in the first place? To ensure the nation’s capital remained under direct federal control, free from the influence of any one state. They wanted the capital to be politically neutral, preventing one state from gaining disproportionate power over the federal government. A separate federal district ensures that Congress can maintain security and stability without interference from state politics—a safeguard for effective, unbiased governance.

DC statehood would shatter this balance. Since DC voters lean heavily Democratic (no longer a “Chocolate City,” DC is still around 45 percent black), granting it statehood would guarantee two additional Democratic senators and a shift in congressional power. We don’t need another Democrat-controlled state flooding Congress with more bad ideas. The District itself is proof enough: decades of Democratic rule have left residents—and visitors—wading through crime, mismanagement, and urban decay. You want more of that in national politics? No thanks.

The ripple effects would be significant. If DC is granted statehood, other US territories will demand the same, further altering the political landscape and diluting the union. This would derail populist efforts to restore the Republic and move away from the destructive policies of the Democratic Party. (For perspective: this is the same reason I opposed adding Canada to the United States. The progressivism up there would make America look like—well—Canada: a woke hellscape.)

Let’s be clear: DC already has representation—however poor that representation may be. Statehood won’t fix the city’s problems. Establishing law and order is the first step, but real improvement will require a change in leadership and political culture. That will be a challenge for a city whose voters once saw their mayor (Marion Barry, DC’s four-term mayor, who spent six months in federal prison for drug possession) on video smoking crack with prostitutes—and reelected him anyway. Half a century of Democratic control has yielded misery as its primary fruit. We should not reward that record with statehood.

Stephen Miller once more laying down the facts of history

This is an important point for everyone to socialize, that is, to spread widely. The controversy Democrats are trying to manufacture about DC is yet another attempt to revise American history. History matters, and if we arm ourselves with the facts, we can win the arguments that advance the project to reclaim our beloved republic.

Remember, it’s not so much who you are arguing with that matters. I’m sure you know full well that those working from partisan standpoints are typically incapable of being persuaded. What matters most is the audience. Our goal is to expand the parameters of mutual knowledge. For every progressive one encounters, there are many others open to changing their mind. Many people are not as dug in as progressives, since they have not articulated a position they feel compelled to defend to remain loyal to the tribe.

Cultural Totalitarianism: Derailing the Real-World Ministry of Truth

The term “cultural totalitarianism” (more accurately, the concept behind it) is associated with essayist Norman Mailer (see Sophie Joscelyne’s essay on this). Mailer warned that American society in the 1960s was threatened by a subtle, internalized form of totalitarianism—one rooted in cultural conformity and psychological manipulation rather than overt political coercion. This doesn’t happen by accident. It happens because an ideology captures the nation’s major sense-making institutions.

This week, the corporate media is buzzing with a new refrain (albeit not that different from things they have said before): “Trump is a Stalinist revising history at the Smithsonian!” The charge is everywhere, repeated so often that it feels like fact, especially among those prone to believe the mainstream media. But in reality, it’s an inversion of the truth.

Anti-white display at the Smithsonian

To see why, we must remember what the Smithsonian itself was doing just a few years ago—at the height of the summer 2020 unrest, which many (including your humble narrator) have described as a “color revolution” in the United States. At that time, the institution aligned itself with the corporate-backed, self-styled “neo-Marxist” group Black Lives Matter. The BLM organization, heavily funded by corporate donors, pushed demonstrably false narratives—chief among them the claim that systemic racism drives lethal civilian–police encounters (debunked as early as 2017 by Harvard’s Roland Fryer). By 2020, such ideas, drawn from critical race theory (CRT), had already been incubated for years by the corporate media.

In July 2020, Newsweek reported that the Smithsonian National Museum of African American History and Culture had “clarified” the purpose of its new Talking About Race portal:

“At a time when the soul of our country is being tested, our ‘Talking About Race’ portal will help individuals and communities foster constructive conversations and much needed dialogue about one of our nation’s most challenging topics: Racism and its corrosive impact.”

This was published on its official Twitter channel. The Smithsonian continued:

“America is once again facing the challenge of race, a challenge that needs all of our understanding and commitment. Our portal was designed to help individuals, families, and communities talk about racism, racial identity and how forces shape every aspect of our society.”

On the surface, this may have appeared benevolent to many eyes. In practice, it amounted to the Smithsonian adopting the role George Orwell warned about in his 1949 novel Nineteen Eighty-Four: the “Ministry of Truth” (or “Minitrue”), an institution tasked with generating and disseminating pro-regime ideology, often by rewriting history. In Orwell’s vision, the Ministry’s name was the opposite of its function.

Image generated by Sora

Today’s Smithsonian—captured by “woke” progressives aligned with the emergent transnational corporate state (TCS)—operates in much the same way. The TCS is represented politically by Democrats, administratively by the technocracy progressive operatives command, culturally by compliant academic and media institutions, and strategically by the transnational corporations (TNCs) seeking hegemony over the West.

However, Trump is not “revising history.” On the contrary. He is resisting historical revisionism—pushing back against a coordinated effort by elites, and by the functionaries who serve their interests, to delegitimize America in the service of corporate power and profit.

History is not the only battleground. The real-world Ministry of Truth also revises scientific reality to suit ideological ends. One glaring example is radical gender ideology (RGI). With RGI as its guide, schools are teaching children that they can change—or discard entirely—their gender.

The Smithsonian has played its part here as well. Plans for its forthcoming American Women’s History Museum included the recognition of transgender women, newspeak for men portraying themselves as women. Its materials reference, for example, Monica Helms, a Navy veteran and creator of the first transgender pride flag.

This March, Trump signed an executive order titled Restoring Truth and Sanity to American History, which prohibits the Smithsonian’s Women’s History Museum from recognizing men imitating women in any capacity. Such actions aim to return the institution to its foundational mission—truth in sense-making—rather than letting it serve as a propaganda arm of the ruling ideology. The order also takes aim at the anti-white display pictured above.

Predictably, the corporate media is framing this effort not as a restoration of truth, but as an act of historical revisionism. This is yet another example of Orwellian inversion—where insistence on historical accuracy and scientific soundness is painted as falsification.

The managed decline of the American Republic depends on such inversions. And once again, the press is doing its part to ensure the process runs smoothly.

You don’t hate the mainstream media enough.

Did Norman Mailer nail it or what? See British journalist Melanie Phillips argue that we have slipped into an age of “cultural totalitarianism.”