Orbiting Planet Madness: Consenting to Puberty and Other Absurdities

This essay concerns the argument, all the rage over on X in the wake of House Republicans (joined by three Democrats) voting to safeguard minors, that children should not be forced to go through puberty. I also address the retort among trans activists that “cisgender” children are recipients of gender affirming care. For context, Georgia Representative Marjorie Taylor Greene successfully pushed through the House the Protect Children’s Innocence Act (HR 3492), which criminalizes the provision of so-called “gender-affirming care” (often modified with the compound adjective “life-saving”) to minors and imposes penalties on providers. Greene’s legislation, sure to fail in the Senate (where it would require considerable Democratic Party support to pass), has triggered a firestorm.

Following the bill’s passage, Democrats took to the Internet to condemn it and rally the troops. One of the arguments among rank-and-file progressives uses the language of “consent” around puberty, as if children have a choice in a naturally unfolding developmental process. When developmentally appropriate, and absent abnormalities or intervention by endocrinologists, puberty is an inevitable process. Humans don’t consent to puberty any more than they consent to aging or dying. Humans don’t have a choice in such matters. They age and eventually die. That’s a normal part of life. To be sure, some are trying to cheat the effects of aging and even death. Indeed, this is the same transhumanist desire that animates people seeking to escape one gender to become another, like a hermit crab seeking a new shell.

We hear the absurdity in the argument from people who say they didn’t consent to being born. Readers who haven’t encountered this before may find it incredible, but people seriously make this argument. They’re right in a way: they didn’t consent to being born. Who does? Nobody consents to being miscarried, either. Or consider those who want limbs amputated because they think they have too many—as if one consents to having four limbs, or to having 20 digits. Imagine a society in which doctors remove the normal complement of healthy limbs and digits because people didn’t consent to them. Don’t laugh; this has happened (see The Exploitative Act of Removing Healthy Body Parts). Imagine a society in which girls suffering from anorexia, because they think they’re too fat, undergo bariatric surgery or liposuction (see Disordering Bodies for Disordered Minds).

Those defending the artificially induced arrested development of children contend that puberty blockers are relatively harmless and reversible. Even if one doesn’t like affirming delusions, they say, blocking puberty is no big deal. But this isn’t true. Puberty blockers make sense when used in precocious puberty to delay early onset. In that case, the intervention is considered reversible, and the delayed developmental processes, including brain maturation, typically unfold once the blockers are stopped. However, when puberty blockers are used to suppress puberty that is starting at a developmentally normal time, the situation is different. Adolescence is a critical period for brain development, including cognitive and emotional maturation driven in part by sex hormones. Delaying or suppressing this developmental phase disrupts important windows of synaptic pruning, myelination, prefrontal cortex refinement, and emotional regulation.

Parents and their children who seek this intervention are often unaware of the potentially harmful effects of these drugs when used at a developmentally inappropriate time (still, that they’re effectively Peter Panning their kids ought to be obvious enough). In such cases, while physical puberty resumes in some fashion after stopping the blockers, the brain and cognitive/emotional development that normally occurs during the typical pubertal window may not fully catch up later; some aspects of normal development may be permanently altered because that sensitive window of opportunity has passed.

There is evidence that puberty blocking at a critical phase can have lasting effects on brain structure, behavior, cognition, and emotional processing. Any responsible parent must therefore ask about the long-term impacts on IQ, neurodevelopment, and emotional function. And they shouldn’t trust doctors to tell them the truth. Parents have a duty not only to study the particular matter, but also to learn how the medical industry exploits ignorance for profit (see Making Patients for the Medical-Industrial Complex; The Story the Industry Tells: Jack Turban’s Three Element Pitch; Thomas Szasz, Medical Freedom, and the Tyranny of Gender Ideology).

Being charitable (although I’m convinced of the harm puberty blockers pose to children): even if we grant that we’re still collecting and collating the data on blocking puberty during critical developmental stages, no meta-analysis definitively shows that arresting puberty during this phase of development (Tanner stages 2–4) is safe. Reading the science as cautiously as possible, parents—or state actors who might override parental rights—put children at risk of brain, cognitive, and emotional stunting by consenting to these therapies. Therefore, governments have a responsibility to the moral order (the same ethical demands that undergird the Nuremberg Code) to safeguard children against this by regulating what doctors are allowed to do (see Medical Atrocities Then and Now: The Dark Continuity of Gender Affirming Care).

To progressives and trans activists who say such decisions should be left to children, their parents, and their doctors: while the child and parents may claim a right to these interventions, doctors have no right to pursue courses of action that may harm patients without objective evidence that the claimed benefit or need justifies the intervention. That somebody fears puberty, or for some other reason wants to avoid it, is not a reason to block it. Interventions require a legitimate medical justification, and such a justification cannot be established merely because a professional association, such as the World Professional Association for Transgender Health (WPATH), has constructed “standards of care” that assert one. After all, the Church of Scientology established the Citizens Commission on Human Rights (CCHR) to legitimize L. Ron Hubbard’s doctrine of Dianetics. Does that make the practice of auditing a legitimate medical practice? (See my satirical piece Dianetics in Our Schools.)

For those unfamiliar with WPATH, the transnational organization traces its roots to the work of German-American endocrinologist and sexologist Harry Benjamin. Benjamin’s 1966 book, The Transsexual Phenomenon, distinguished transsexualism from homosexuality and transvestism, argued for “compassionate medical interventions,” i.e., hormones and disfiguring surgery, and introduced a scale (later known as the Benjamin Scale) to classify degrees of gender dysphoria. (For a deeper dive into the perversion of science in this area, see my essays The Gender Hoax and the Betrayal of Children by the Adults in Their Lives; Fear and Loathing in the Village of Chamounix: Monstrosity and the Deceits of Trans Joy; Simulated Sexual Identities: Trans as Bad Copy.)

Aware of rhetoric on social media that cites the practice of gender affirming care for so-called “cisgender” persons (a neologism assigned to those who suffer no delusions about their gender), I want to spend the balance of this essay stressing the point that gender affirming care that actually affirms gender—which is determined by gametes, chromosomes, and reproductive anatomy—presents a different case. The retort that “gender affirming care is used all the time on cis children” is true, but with a big difference: in such cases, it is appropriate medical care. I will use the example of a boy born with a micropenis to illustrate. (I have used this example before; see Gender Denying Care: A Medical and Moral Crisis.)

Suppose the parents of a boy born with a micropenis know they have a developmental window in which a doctor could provide hormones so that their son’s penis could grow to a normal size, but they decline because the condition doesn’t, in their view, jeopardize the life or health of their child. If that had been my situation, and my parents had made that judgment—to do nothing about it—and I grew up to be an adult male who couldn’t fix the problem because the developmental window had passed, I would be bitterly angry at my parents for not intervening at the optimal moment, which might have allowed me to have a normal-sized penis. My parents would have, in fact, harmed me by denying me gender affirming care of the real sort.

(A user on X objected to this example last week because he denied that parents or doctors could know whether a newborn has a micropenis. In fact, a micropenis is observable at birth. A micropenis is defined as a penis at least 2.5 standard deviations below the mean stretched length for age—gestational age in newborns—with otherwise normal male anatomy: scrotum, urethral opening, and typically palpable testes. Clinicians determine this by measuring the stretched penile length (SPL). A micropenis is a treatable abnormality—as long as the intervention is performed at the right developmental stage. The thought of parents in the grip of ideology, knowing this but doing nothing to help their son, should disturb anybody who cares about the well-being of children. The X poster never returned.)
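For readers who want the diagnostic threshold stated precisely, it can be written as a simple inequality (a minimal formalization of my own; $\mu$ and $\sigma$ stand for the published mean and standard deviation of stretched penile length for the relevant age or gestational-age norms, which vary by reference table):

$$\mathrm{SPL} < \mu - 2.5\,\sigma$$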

Now, suppose a boy with a normal penis is born to parents who want to halt his puberty because he or they wish to avoid the development of secondary sex characteristics. Who knows, perhaps they seek to Peter Pan the boy. Keep him in Neverland. At any rate, this is an instance of gender-denying care, or, as Health and Human Services Secretary Robert F. Kennedy Jr. referred to it in public remarks, “sex-rejecting procedures.” What parents would do such a thing to their child? The same parents who would Peter Pan their kid because the kid wanted it, justifying their actions as “affirming” their kid in “his identity.” Whatever the motive, the parents are supposed to safeguard the child, not harm the child because they or the child want to stop puberty. Neither parents nor children have a right to this, just as they don’t have a right to remove their child’s limbs or digits (except in the case of extra appendages).

The House absolutely did the right thing in passing HR 3492—and the Senate should follow suit and send the bill to President Trump’s desk. The Supreme Court would almost certainly uphold the law (see United States v. Skrmetti—The Supreme Court Strikes a Blow to the Madness of Gender Ideology). I have been waiting a long time for Congress to stop what are, by any objective ethical standard, medical atrocities. Woke zealots, sufferers of Munchausen syndrome by proxy, parents swept up in social contagion—and doctors, of course—must be held accountable for failing to safeguard children. No parent would affirm an anorexic child in her fat delusion (see An Ellipse is a plane figure with four straight sides and four right angles, one with unequal adjacent sides). No doctor working from a scientific or moral standpoint would remove the limbs of a normal child who didn’t want them. Why would any doctor stop puberty, lop off the healthy breast tissue of a young woman, or invert the penis of a boy who said he wanted a vagina? This is madness. Puberty blocking in the case of precocious puberty, or removal of breast tissue in a boy suffering from gynecomastia, is entirely appropriate because there is an objective medical need. There is no such need in the case of gender dysphoria.

Society needs to ensure that gender affirming care remains available for anomalous cases where the developmental process did not unfold in the normal way, but at the same time make illegal any “medical intervention” arresting a normal process or altering children’s bodies because children and doctors believe they’re something they are not, or because the parents have been pulled into orbit around Planet Madness. Don’t call that “gender affirming care.” Because it’s not. It’s the opposite of affirming gender. And it’s not “life-saving.” A rational person must not tolerate the practice of emotional blackmail—the weaponization of empathy—in health care (see The Problem of Empathy and the Pathology of “Be Kind”).

Image by Sora

Beyond the Realms of Plausibility: The Trump–Epstein Allegations as Moral Panic

Public discourse surrounding Donald Trump’s alleged connection to Jeffrey Epstein frequently rests on the implicit—sometimes explicit—claim that government-held “Epstein files” contain evidence that Trump engaged in illegal sexual activity with minors. Yet this claim faces a serious inferential problem: if authorities at the local, state, and federal levels—across multiple administrations and partisan alignments—have long had such evidence, the absence of investigation or prosecution demands explanation. The most plausible explanations do not support the allegation. What is more, the insinuations, which are driven by the desire to continue a partisan narrative to delegitimize Trump, are harmful to the many people who knew Epstein but did not participate in child molestation. That list may include Trump.

The “missing photo” we all had.

I will first interrogate the hysteria from the standpoint of criminal justice, then shift to a social psychological observation. On the criminal justice front, the accusations made against Trump are of a criminal nature and, as such, require prosecutors to put their ducks in a row. To be sure, Democrats and RINOs (a few openly, more deep down) couldn’t care less whether Trump is guilty of crimes. What they seek is reputational damage, to put Trump and the populist movement behind them and return to the status quo that MAGA disrupted. They care only that voters believe Trump has done something untoward. They already have much of the public in a place where they perceive a picture of Trump surrounded by beautiful women as evidence of a crime, when in fact it’s merely evidence of what we already knew about the man: that he was a billionaire playboy in the 1980s and 1990s. They see a video of him kissing a wife and, not recognizing her, think he is kissing a minor. They see pictures of him with his arm around his daughter and, not recognizing her, think he is drawing near a minor. They see his name on flight logs and think the plane was headed to Epstein’s island, unable to process entries that list Trump, his wife, his daughter, and the nanny, or to register that the flights were from Florida to New York, not to an island, or that one of them was one-way.

From Epstein’s flight logs

In the actual work of criminal investigation—building the evidentiary and rational basis to pursue prosecution—it is essential to distinguish between the possession of materials and the possession of prosecutable evidence. Investigative files often contain raw, unverified information: hearsay, names mentioned in interviews, photographs, travel logs, videos, witness statements, and so on. None of these, individually or collectively, necessarily meets the legal threshold required to initiate criminal proceedings, particularly for crimes alleged to have occurred decades earlier. Criminal prosecution requires corroboration, credible witnesses, jurisdictional clarity, and material evidence sufficient to meet the standard of proof beyond a reasonable doubt. Association or mention is not evidence of criminal conduct. If it were, then everybody photographed with Epstein would be the subject of a criminal investigation. Mick Jagger? Chris Tucker? Noam Chomsky? Child molesters? I hesitate even to mention their names in the context of a moral panic over a supposed widespread elite child-trafficking operation.

The strongest argument against the allegation is institutional. Prosecutorial systems are neither monolithic nor perfectly coordinated. They involve numerous actors—career prosecutors, investigators, judges, and oversight bodies—often with divergent political interests. In this case, the divergent political interests could not be clearer. Nor could the convergent political interests that fuel the panic. The idea that credible evidence of sex crimes involving minors by a former or sitting president could exist and yet be universally suppressed across administrations of both parties strains plausibility. Indeed, the incentive structure cuts in the opposite direction: such a prosecution would be career-defining, morally unassailable, and politically advantageous to many actors. Democrats supposedly have this evidence. Why, then, haven’t they moved to prosecute Trump?

Did you see presidential candidate Kamala Harris’s explanation, on Jimmy Kimmel the other night, for why they didn’t? It wasn’t a non-answer, as many might suppose; it was designed to reinforce the assumption that the Biden Administration had nothing to do with the lawfare waged against Trump in the period between his terms in the White House. Does anybody believe that if Biden had evidence that Trump was a child molester, he wouldn’t have handed that information over to the DoJ? Of course, he didn’t need to. The DoJ already had that information. The Attorney General of New York, Letitia James, pursues a zombie case against the President, but doesn’t pursue him for sexual assault? Is this believable? The absence of such action strongly suggests not suppression, but insufficiency of evidence. No, not suggests—screams.

It is also mistaken to assume that a lack of public prosecution implies a lack of scrutiny. Prosecutors routinely evaluate allegations and decline to proceed when evidentiary standards cannot be met. If anybody doubts that investigators and prosecutors across multiple jurisdictions have pored over these files, then they have very little understanding of how the criminal justice system works. Do they believe that the mass of people who loathe Donald Trump are going to protect him? They lined up to try to put him in prison for centuries—on the flimsiest of charges! The reality is that declinations to prosecute are often silent, particularly in politically sensitive cases and where many honorable people might be damaged reputationally or even put in harm’s way. The public is rarely informed of investigations that lead nowhere, especially when announcing them would unfairly taint individuals without a legal basis. That doesn’t mean prosecutors are covering up crimes. It means they’re performing their jobs conscientiously and professionally. Frankly, to think otherwise is a sign of a paranoid mind.

This is what in sociology we call a moral panic, mass hysteria, or mass psychogenic illness. This pattern of reasoning—where suspicion substitutes for evidence and the absence of prosecution is reinterpreted as proof of conspiracy—fits squarely within the historical structure of moral panics. In such episodes, moral certainty precedes empirical verification, and institutional restraint is recast as complicity. I cover the phenomenon of mass psychogenic illness in several of my sociology courses. My go-to example is the Satanic Ritual Abuse panic of the 1980s and early 1990s. When students see how millions of people could believe something as absurd as a transatlantic conspiracy to enlist daycares in Satanic rituals, they’re astonished. In that case, allegations of widespread child abuse by secret networks of elites were treated as self-evidently true despite implausible claims, inconsistent testimony, and a lack of physical evidence. Irrationally, members of the public reasoned that the absence of proof demonstrated the sophistication of the cover-up. When, years later, extensive (and entirely unnecessary) reviews concluded that no organized satanic networks existed and that many lives had been destroyed by inference alone, the media dropped the matter.

The Satanic Panic was a few decades ago. But mass psychogenic illness is not a new thing. We can look centuries back to events like the Salem witch trials, which followed a similar logic. Women were hanged because people believed in witches. We can expand the sample to include the numerous witch trials that occurred across Europe in the late medieval and early modern periods, where thousands were burned at the stake because people believed in witches, and accusations substituted for findings of guilt. This is how it works: accusations function as evidence, denials are treated as further proof of guilt, and procedural safeguards are abandoned in favor of moral urgency or political objectives. In every case of moral panic we examine, we find that the belief that “something so widely suspected must be true” overrode institutional skepticism and evidentiary discipline.

Modern moral panics differ in content, to be sure, but never in form. They rely on expansive interpretations of association, elevate suspicion to certainty, and dismiss institutional non-action as corruption rather than constraint. The invocation of disappearing photographs, excessive redactions, and secret files plays a similar rhetorical role to hidden covens, Jewish cabals, secret societies, or underground networks: it explains why proof is insufficient (cover-up!) while maintaining absolute confidence in guilt. None of this implies that elites are uniformly virtuous or that crimes never go unpunished. Rather, it insists that serious claims require serious evidence, and that institutions—however flawed—are constrained by legal standards that moral and partisan political narratives ignore.

But again, any video of Trump and women is now portrayed as proof of his supposed crimes. The video clip below, which has 120,000 views on X, is from a November 1992 party at Mar-a-Lago. A recently divorced Donald Trump (his divorce from Ivana Trump was finalized earlier that year) is seen dancing, socializing, and briefly kissing a woman amid a group of NFL cheerleaders, all women in their twenties, attending a calendar/promotional event tied to a football game weekend. The woman he kisses? That’s Marla Maples, Trump’s girlfriend at the time. They would marry the next year. She appears prominently in the footage, and her identity is confirmed. The video also briefly shows Jeffrey Epstein and Ghislaine Maxwell arriving and observing from the sidelines. No illegal activity or minors are depicted. It’s an adult party from the early 1990s. Ever been to one of those?

In the absence of prosecutable evidence, the most parsimonious explanation for the lack of action is not conspiracy, but insufficiency. Authorities across both parties know what’s in those files. They also know they do not have the evidence they need to prosecute Trump for criminal wrongdoing. Again, for Democrats and a handful of Republicans, that is not their aim. Their aim is the delegitimization of a President and a movement that stands in the way of their announced goal: the elevation of transnational corporate power over the people. There is no conspiracy there.

The Trump–Epstein narrative illustrates a broader epistemological problem in contemporary politics: the replacement of evidentiary reasoning with moral inference. Abandoning standards of proof invites injustice in the name of irrational certainty. History tells us that moral panics rarely end well. But there is no mass hysteria to end all mass hysterias. There will be more moral panics, and people will be swept up in them because a significant portion of the population is ignorant, and the undisciplined mind is prone to see patterns where none exist. This tendency of our species has led to the needless suffering of millions across time and space, and it has proved politically useful for those who manipulate the masses for their own ends. (See also Epstein, Russia, and Other Hoaxes—and the Pathology that Feeds Their Believability; The Future of a Delusion: Mass Formation Psychosis and the Fetish of Corporate Statism)

Identity-Based Academic Programming and the Scourge of Heterodoxy

In contemporary universities, programs such as Women’s and Gender Studies or Race and Ethnic Studies have become well-established parts of the academic landscape. These programs are often justified as correctives to historical exclusions, offering focused attention to groups whose experiences were previously marginalized within traditional disciplines. Yet, from a sociological perspective, there’s a deeper question worth examining, namely, whether academic programs organized explicitly around identity can genuinely sustain heterodox thought and robust internal critique.

My concern is not that such programs lack intellectual seriousness (although they often do), nor that the topics they address are illegitimate. Rather, it’s that programs defined by identity categories tend, culturally and structurally, to function as representational spaces for particular subgroups within the population. In doing so, they risk prioritizing advocacy, affirmation, and protection over critical inquiry. When a program implicitly understands itself as representing a community—rather than studying a phenomenon—it becomes difficult for that program to tolerate viewpoints that are perceived as threatening to that community’s self-understanding.

This dynamic is often described in terms of “safe spaces.” While the original intent of safe spaces may have been to protect students from harassment or overt hostility, the concept has increasingly expanded to include insulation from critical perspectives that challenge foundational assumptions. In such an environment, heterodox views—whether theoretical or normative—can come to be interpreted not as contributions to scholarly debate but as moral or political threats. The result is a narrowing of acceptable discourse within the very programs that claim to be dedicated to critical thinking.

To clarify this concern, consider an argument by analogy. Imagine a university establishing a program called Christian and Conservative Studies. Such a program would almost certainly be understood as pandering to a particular identity-based constituency, namely conservative Christians. If the program were genuinely critical—if it rigorously examined Christianity as a belief system, conservatism as a political ideology, and the historical consequences of their influence—it would likely provoke strong objections from the very community it ostensibly represents. Conservative Christian students would perceive the program as hostile rather than affirming, and enrollment pressure, donor backlash, or public controversy would follow.

Conversely, if the program were designed to be attractive to conservative Christian students—to function as a “home” for them, a safe space for a distinct minority in the humanities and social sciences—it would almost certainly avoid sustained critique of core beliefs and commitments. In that case, the program would not serve as a genuine locus of critical inquiry but rather as a protective ideological enclave, reinforcing shared assumptions and discouraging internal dissent. The very logic that makes the program viable as an identity-affirming space would undermine its capacity for rigorous critique.

The analogy is instructive because it reveals a structural symmetry. If we can readily see why a Christian and Conservative Studies program would struggle to maintain intellectual independence from the identity it represents, then we should at least be open to the possibility that Women, Gender, and Sexuality Studies or Race and Ethnic Studies face a similar tension. The issue is not the political valence of the identity in question but the institutional logic of representation itself. This parallel is reinforced if one imagines a Christian and Conservative Studies program organized as a space to aggressively critique the worldview of conservative Christians. It’s inconceivable that a public university would organize programs around race and gender in which woke progressive ideology and queer theory were the targets of withering criticism.

Traditional academic disciplines—such as economics, history, philosophy, or sociology—are organized around methods, questions, and objects of study rather than affirming particular identities. At least ostensibly. This organizational structure makes it easier, at least in principle, to sustain internal disagreement and theoretical pluralism. I say in principle because policing around gender in traditional disciplines is intense. As a sociologist, I’m expected to teach gender in my courses that survey the field. I avoid dwelling on this area because the discipline long ago aligned its subject matter with the demands of critical race and trans activists. I limit myself to discussing the origins of the patriarchy in the unit on historical materialism. I tell myself—and in terms of disciplinary siloing, this is perhaps reasonable—that gender identity disorder is properly the subject of clinical psychology. At the same time, psychology has also aligned its subject matter with the same ideological demands.

When an academic program or discipline becomes closely aligned with the moral or political interests of a specific group, dissenting views risk being framed not as alternative explanations but as betrayals or acts of hostility. I found this out firsthand despite skirting the issue in the classroom. My writing on this platform was enough to trigger complaints. I risk more of the same if my argument in this essay is perceived as a suggestion that programs like Women and Gender Studies should be abolished.

None of this implies that inequality or power should be excluded from academic study. On the contrary, these are central concerns in sociology and related fields. My course content is centrally focused on the problems of inequality and power, just as these matters are the focus of this platform. The question is whether the most intellectually robust way to study them is through programs that are explicitly organized as representational spaces for identity groups, or whether such organization inevitably constrains the range of permissible inquiry. A further problem is that, even with the proliferation of identitarian programming providing affirming and safe spaces for students, the policing of disciplinary and interdisciplinary curricula and teaching has generalized ideological and political constraints over higher education.

If universities are committed to the ideal of critical thinking, they must be willing to ask whether certain institutional forms, curricular programming, and pedagogical practices—however well-intentioned—unintentionally trade intellectual openness for moral and political solidarity. The challenge is not to abolish these programs (although I am increasingly persuaded that they might have to go, or at least be substantially reformed by forcing them open to intellectual diversity and protecting those who present alternative viewpoints), nor am I advocating for woke progressive teachers to be removed from their positions, but rather to confront honestly the structural pressures teachers face and the limits those pressures place on heterodox thought.

Without such reflection, the university risks confusing advocacy with scholarship and affirmation with understanding. Indeed, the fact that there are so few conservative students and teachers in the humanities and social sciences tells us that it already has. We cannot conduct science if explanations for behavioral, cognitive, and social phenomena are straitjacketed by movement ideology or protective empathy (see The Problem of Empathy and the Pathology of “Be Kind”). Colleges and universities (indeed, K-12) need to open curricula and programmatic spaces to other points of view and defend heterodox teachers and their materials against the manufactured orthodoxy that polices higher education to advance political objectives and movement goals. Liberal education is corrupted by ideological hegemony. It transforms the academic space into a system of indoctrination centers. It devolves the ideal speech situation into a succession of partisan struggle sessions. Those targeted for indoctrination check out. If that’s the purpose of such programming—to push conservatives out of the humanities and the social sciences—then the matter is settled: these programs must go.

Image by Sora

An Ellipse is a plane figure with four straight sides and four right angles, one with unequal adjacent sides (in contrast to a Circle)

I watched an episode of Andrew Gold’s podcast Heretics featuring a doctor who provides “gender-affirming care.” I shared the video on Facebook last week and told my friends and followers that her arguments were circular. I want to expand on that observation here.

The doctor’s name is Helen Webberley. Gold asked her why anorexia is a mental illness (she agreed it was), but gender dysphoria isn’t. Her answer was self-sealing: anorexia is classified as a mental disorder; gender dysphoria isn’t. It used to be classified that way, she acknowledged, but it isn’t anymore—therefore it isn’t. She insisted this wasn’t something to argue about; it was simply true. Her appeal to psychiatric classification boils down to this: something is a mental illness if and only if psychiatrists currently say it is. She pointed out that homosexuality used to be classified as a mental illness, but no longer is, so it isn’t one. Once, psychiatrists said it was; now they say it isn’t. End of story.

But doesn’t that admit that psychiatrists declare things true or false not because of objective findings but because their collective opinion has shifted? When was there ever any evidence that homosexuality was a mental illness? Homosexuality is as old as humanity itself (and has been observed in hundreds of other mammalian and avian species). What does that say about psychiatry as a scientific field? It doesn’t exactly sound scientific. Medicine does change, of course, but normally on the basis of observable facts and rational interpretation. Would this doctor have agreed that homosexuality was a mental illness back when the manuals said it was? I doubt it. So why lean on that argument now?

Why, exactly, did psychiatrists change their minds about homosexuality? We know it wasn’t because of new objective criteria. As I just explained, if the change had truly been driven by objective observation, they would simply have noted that same-sex attraction is a natural, cross-cultural, historically constant fact. No, it was largely political pressure. Norms governing sexuality were changing, and the gay movement accelerated that change by squeezing the medical profession and other institutions.

If psychiatrists suddenly declared tomorrow that anorexia was no longer a mental illness, would we stop treating it as one? Using the doctor’s own logic: “Gender dysphoria isn’t a mental illness; it’s about their bodies, and they know best who they are.” Fine—then don’t anorexics also know what their bodies “should” look like? Who are we to tell a starving girl she’s not actually fat? She looks in the mirror and sees fat. That’s her lived reality.

Doctors describe anorexia as a form of body image distortion: the person perceives parts of their body as larger or “too fat” even when they are objectively underweight. We diagnose this not just from what patients say but from the observable fact that they are starving themselves to death. We infer the distorted subjectivity from the objective behavior. This is not analogous to homosexuality. It is, however, analogous to gender dysphoria. Indeed, they are species of the same genus of mental illness. But allow me to continue demolishing the doctor’s logic for a little while longer. I wish to leave no doubt as to the madness of her worldview.

To demand that we “normalize” anorexia because the girl insists she’s fat is, at bottom, a demand that we all adopt the anorexic’s subjectivity as shared reality. We mustn’t call her skinny if she doesn’t experience herself that way. But if we do that, we are denying the observable fact that she is emaciated. That would be lying—and lethal. Once you abandon objective reality, why not offer bariatric surgery or liposuction to starving patients? Any doctor who did so would be guilty of blatant malpractice. A responsible physician does not affirm a starving person’s belief that she is fat; the doctor treats the illness, which may well originate in a brain-based body mapping problem.

So why is a girl who insists she is a boy treated differently? The objective fact is that she is female. She appears to suffer the same species of body image distortion as the anorexic—only in her case, the distortion is about sex rather than fatness. She perceives herself as the opposite sex when she is not. Yet instead of treating this as the delusion it parallels—one in which subjectivity is incongruent with objective reality (and some clinicians still describe it that way)—doctors affirm the delusion, prescribe cross-sex hormones, and sometimes surgically alter the body to simulate the opposite sex. They can’t actually change male genitalia into female genitalia or vice versa; they can only create an approximation. They must know they are lying to the patient.

Screenshot of Andrew Gold’s podcast Heretics.

Webberley argued that since she herself, as a woman, would be horrified to have a “willie” (the term both she and Gold used), a patient should have the right to have theirs removed. The identical logic applies to anorexia: a skeletal person believes she has excess fat; she is horrified at the thought of being forced to keep it; so why shouldn’t she have the right to demand liposuction? If someone is objectively thin yet subjectively experiences fat on her body, what possible reason is there to deny her the surgery she wants?

The parallel Webberley draws between homosexuality and gender identity is completely fallacious. They are not the same thing at all. One is an objective, observable behavior that requires no medical intervention. The other is a subjective claim that directly parallels anorexia or even the delusional schizophrenic: the person believes something that is objectively untrue. What in the fuck are doctors doing affirming delusions—and getting rich doing it?

Finally, Webberley explained that she became convinced when a girl who said she was a boy managed to persuade her that she really was one. The evidence? The girl seemed to really, really believe it. I said to my wife, who was watching the podcast with me: Is this a real doctor? Are medical schools actually handing out degrees to people who base life-altering treatments on whether a patient successfully sells them their delusion? This isn’t medicine. This is a corporate model.

For the record, Webberley refused to define what a woman is. She skirted it by saying it was a “gotcha question.” But it’s not. It is the question. Unless you have a non-tautological definition, you are not even in the ballpark of defining reality. Science is not possible without conceptual definitions that capture the objective world; without them, scientific theory is impossible. This is true in mathematics as well. A rectangle is not whatever we decide to call a rectangle. A rectangle is a plane figure with four straight sides and four right angles, one with unequal adjacent sides (in contrast to a square). Sure, one may find a book that calls that geometric shape a circle (if one ever does, toss it), but that wouldn’t change what the shape is objectively. A woman is not somebody who says he or she is one. A woman is an adult female human, in contrast to a man.

Is the Red-Green Alliance Ideologically Coherent?

Islamist violence refers to acts of terrorism or extremism motivated by Islamist ideologies, such as those associated with groups like al-Qaeda, Hamas, or ISIS, which seek to impose strict interpretations of Islamic law on the world. This form of violence does not fit neatly into the traditional left-wing versus right-wing political spectrum as it is typically understood in Western political analysis. Instead, it is more accurately treated as a distinct category of religious or ideological extremism. But that has not stopped politicians from hiding it behind the rhetoric of right-wing terrorism, or left-wing activists from seeing in Muslims an ally in their struggle against the free and open society.

The analytical distinction exists because Islamism is grounded in theocratic objectives—establishing governance based on religious authority—rather than in secular political ideologies such as ethnic nationalism, liberalism, or socialism. To be sure, Islamists suggest their struggle is ethnic when they accuse their opponents of “Islamophobia.” That construction is something of a self-reveal, however, since it inadvertently invites the public to see Islam as an ideology, not an ethnicity. Apparently aware of this, propagandists have more recently substituted “anti-Muslim” for “Islamophobia.” But the swap doesn’t work well—at least for those whose brains are in gear. A Muslim is an adherent of the Islamic faith; hostility to a faith is hostility to a set of beliefs, not to a race or an ethnicity.

Perhaps applying conventional left-right labels to Islam obscures more than it clarifies. Islamism shares characteristics with far-right ideologies: a strong emphasis on authoritarian governance, moral conservatism, opposition to secular liberal values such as gay rights, gender equality, and pluralism, and a demand for traditional social hierarchies. Islam’s promotion of patriarchal structures and rejection of modern liberal norms does resemble far-right conservatism. To be sure, left-wing ideologies can be authoritarian, and often are, but they do not, for the most part, contain the same content as Islamism.

Some commentators have simply grouped Islamist violence under “right-wing extremism.” In the wake of the Bondi Beach massacre, Australia’s prime minister, Anthony Albanese, made statements about the rise of right-wing extremism as a security threat in Australia. Although this is a rhetorical claim rather than a standard or widely accepted academic practice, it arguably follows from what I described above. Of course, Albanese’s motive is to marginalize the populist-nationalist forces on the move across the world, decried as far-right actors.

Islamism has, at times, intersected tactically with left-wing themes, particularly through shared opposition to capitalism and Western imperialism. We saw this in Iran during the Islamic revolution of the late 1970s—with disastrous results. As the Iranian case shows, these overlaps are pragmatic rather than ideological and do not reflect a genuine alignment with left-wing political theory. Moreover, the virulent antisemitism associated with Islamist terrorism is shared by left- and right-wing ideologies beyond the Islamic space. I have written extensively on the rise of antisemitism among left-wing activists in the West. More recently, a strange affinity with Islam has emerged in the antisemitism expressed by prominent voices on the Christian right, for example, Tucker Carlson, Nick Fuentes, and Candace Owens.

I will leave the matter of right-wing antisemitism to a future essay and focus on the Red-Green affinity for the balance of this one. Red ideology is characterized by atheistic materialism, class struggle, and opposition to capitalism (while embracing corporatism), while Green ideology embraces theocratic rule and opposes secularism. Despite their ideological contradictions, both share a common objective: challenging Western cultural, economic, and political dominance. This is often framed through narratives of “oppressors” versus “the oppressed,” with conflicts such as Israel–Palestine portrayed as examples of “white settler colonialism.”

We see the alliance most concretely in communist and socialist groups supporting the Palestinian “resistance” movement, in Islamist leaders—such as Iran’s ayatollahs—employing anti-US and anti-capitalist rhetoric that resonates with leftist audiences (Michel Foucault was a fan), and in instances in urban Western politics where left-wing Muslims have attained leadership roles, Zohran Mamdani of New York and Sadiq Khan of London being the most obvious examples. Together, these examples illustrate how ideological cooperation can occur despite deep philosophical differences. The glue holding the coalition together: loathing of Jews, liberalism, and whiteness.

Tactically, leftist movements have historically relied on cultural Marxist and postmodernist discourse, disruptive protests, and identity politics, while Islamist movements prioritize jihad and the mobilization of the religious ummah. In the Red-Green alliance, these approaches converge in coordinated activism against shared enemies—such as “imperialism,” or “Zionism”—employing multicultural and identity-based frameworks to promote mutually reinforcing objectives.

Critics of the left (including some on the left) argue that leftist actors are naïve about the long-term goals of Islamist movements, particularly the risk of Islamist dominance after revolutionary success. They warn that Islamist groups and leaders strategically exploit leftist platforms and institutions to pursue broader objectives, foremost among them establishing a global caliphate. Historically, such alliances are temporary, with leftist groups marginalized or eliminated once Islamist factions consolidate power, as in the post-revolutionary purges in Iran.

In the study of political violence, Islamist attacks are frequently analyzed as a separate category, in part because of their unique motivations and, in many cases, their comparatively high lethality on a global scale. While forcing Islamist violence into a simple left-right framework oversimplifies its religious foundation and ideological distinctiveness, Islam’s presence in left-wing politics is a concrete reality. We see the alliance of anarchists, communists, socialists, and Islamists not because the former agree with everything Islamists believe, but because they share with Islamists a loathing of the Enlightenment and liberalism. Both the Red and Green sides seek to replace the free and open society with a totalitarian order.

Is the Red-Green Alliance ideologically coherent? In terms of objectives, yes. That one finds it odd that left-wing actors work alongside an ideology that would, in the end, subjugate them and exterminate some among them is rather beside the point. To be sure, slogans such as “Queers for Palestine” are opportunities to point out the contradiction. However, characterizing Islam as right-wing extremism obscures the triple threat to Western civilization, the third threat being the corporate state project operating the Red-Green alliance. While we make explicit the contradictions, we also need to expose the reason why so much energy is spent glossing over them.

Image by Grok

Revisiting Roy Bhaskar and His Critical Realism

Roy Bhaskar was a British philosopher best known as the founder of critical realism, a philosophy of science and social theory that aims to bridge the gap between positivism and interpretivism. Critical realism holds that reality exists independently of our knowledge of it; however, our knowledge of reality is always fallible, theory-laden, and socially conditioned. The standpoint is “critical” because it critiques simplistic views of science and society; it is “realist” because it insists that real structures and mechanisms exist whether or not we observe them. We do not believe that the world exists because we are observing it. What would explain the existence of a world before the emergence of human brains capable of interpreting the remains of that past world? The evolutionary process that produced thinking heads would have to precede the thinking head. Things do not cease to exist because a man who knew them dies—just as he existed even when many never considered him.

Bhaskar rightly rejected the idea that causation is merely regular patterns (constant conjunctions) of events (the Humean habit of expectation); instead, outcomes depend on context and interacting, really existing causes. Causation results from generative mechanisms, which operate in open systems, including in social life. Thus, Bhaskar extended his ideas beyond natural science to human society, where he argued that social structures (e.g., capitalism, bureaucracy) are real and pre-existing, yet reproduced or transformed by human action. This is known as the Transformational Model of Social Activity (TMSA). Critical realism thus offers a middle ground between structural determinism, where structures are said to determine everything, and voluntarism, in which individuals freely choose their actions. At first blush, I found this a very attractive position.

Roy Bhaskar

I first encountered Bhaskar’s work in the mid-1990s during my master’s program. A good friend of mine gave me a copy of A Realist Theory of Science, which graduate students around the country were treating as a serious alternative to both positivism and postmodernism. As I was skeptical of both epistemological frames, I welcomed Bhaskar’s book. I joined a listserv (a mail-distribution tool popular in the early days of the Internet) devoted to his work and began engaging with Bhaskar devotees around the world. I wanted to see how they used Bhaskar as a counterpoint to postmodernism and social constructionism. I was troubled by the practice in both—each in its own way—of elevating epistemology to an ontological position, which suggested to me something of a return to absolute idealism. At the same time, in the ecosystem of graduate school, I could not deny that these ideas exercised a pull on me, and while I was more impressed by the symbolic interactionists (George Herbert Mead, for example), the phenomenologists fascinated me.

Many of the sociology professors in my master’s program were steeped in the social constructionism of Peter Berger and Thomas Luckmann (especially their 1966 The Social Construction of Reality, which for them was something of a secular bible), which was rooted in the phenomenology of Edmund Husserl through the interpretation of Alfred Schutz (who synthesized Husserl with Max Weber’s interpretivist sociology). Although my professors did not explicitly identify as postmodernists, I increasingly grew to suspect that their thinking drifted in that direction—particularly in the tendency in their lectures to collapse ontology into epistemology. A materialist my entire intellectual life, I have never found that move convincing; postmodernism always struck me as an evasion rather than an advance.

Bhaskar’s appeal, at least initially, was clear. In his 1975 A Realist Theory of Science, he mounted a powerful argument for scientific realism that directly challenged both empiricism and idealism. His distinction between the real, the actual, and the empirical is compelling. The world, on this account, contains real structures and causal mechanisms that exist independently of our knowledge of them. Events may or may not occur depending on whether those mechanisms are activated, and our observations capture only a limited slice of that deeper reality. This was a direct rejection of the idea—central to postmodernism—that reality is constituted by discourse or knowledge claims. Bhaskar was explicit that ontology could not be reduced to epistemology. In other words, reality stood outside cultural and historical constraints, even if culture and history constrained human ability to comprehend that reality. In the end, reality pushed back, imposing its objective and mind-independent presence on those who observe it.

Where my doubts about Bhaskar’s work emerged was not in his philosophy of natural science, but in his treatment of social reality. In later works, especially his 1979 The Possibility of Naturalism, Bhaskar argued that social structures are real and causally efficacious, but also concept-dependent in a way that natural kinds are not. Society, in his view, exists only insofar as it is reproduced through human activity. He famously described social structures as both the conditions for and the outcomes of human practices. Formally, this position rejects postmodernism; Bhaskar repeatedly insisted that social structures are not reducible to beliefs or discourse. Capitalism constrains individuals regardless of whether they understand it; language governs speech even when speakers are unaware of its rules. In this sense, social structures possess real causal powers independent of individual consciousness.

And yet, for me, a tension remained in his work. Bhaskar’s insistence that social structures exist only through their reproduction in practice introduces a form of concept-dependence that sits uneasily with a robust materialism. While he does not collapse ontology into epistemology, he does tether social ontology more tightly to human conceptual activity than many materialists would accept, foremost Karl Marx (whom I will come to at the end of this essay). This is precisely where sociological social constructionists might find space to appropriate Bhaskar while blunting the realist edge of his argument. My concern was that Bhaskar’s account of social reality leaves itself open to misinterpretation—not because it is incoherent, but because it concedes too much to the idea that social existence is constitutively tied to meaning and practice rather than being fully grounded in material relations that persist regardless of belief. A harder realism would insist that once social structures are instantiated—class relations, economic systems, legal institutions—they exert causal force independently of how they are understood or narrated, and not merely insofar as they are conceptually reproduced.

On the Bhaskar listserv, I pursued a three-pronged strategy to suss all this out. First, I wanted to challenge what I saw as an uncritical admiration of Bhaskar that treated him as a kind of philosophical guru rather than as a thinker whose arguments required scrutiny. Second, I attempted to steel-man the strongest possible version of social constructionism by accepting its axioms and following them to their logical conclusions, then offering them up for scrutiny by the Bhaskar devotees. This involved deliberately collapsing ontology into epistemology—not because I believed this was correct, but because I wanted to see whether the position could sustain itself without contradiction. I did this while avoiding the problem of solipsism. Third, I treated the exercise as a test of my developing rhetorical skill: how persuasively could I advance a position I ultimately rejected among a group of scholars and students who should have been able to rebut me?

What surprised me was how often interlocutors failed to recognize this as an immanent critique. Many assumed I was expressing a deeply held conviction rather than probing the internal logic of their assumptions about the concept-dependent piece of Bhaskar’s argument. To be charitable, this reaction is understandable. But the result was revealing. When the constructionist position was pushed to its limits, Bhaskar’s defenders often lacked the conceptual resources to respond coherently, precisely because the distinction between what exists and what is known had already been surrendered. I could find nobody who, advancing Bhaskar’s argument, could dismantle mine from that standpoint. Of course, I was a young sociologist who may have had a higher opinion of my project than it deserved. Maybe it sounded incoherent to others. But even when I explained what I was doing, I was made to feel more like a troll than a good-faith interlocutor. I didn’t intend it that way, but I understand why others took it that way. It could also be that the fully steel-manned social constructionist position simply sounds incoherent—and, in retrospect, it does.

All that said, Bhaskar remains an important figure in the development of my thought. He successfully rescued scientific realism from idealism and positivism. But he never fully disentangled social ontology from conceptual dependence, and as my understanding of the world progressed, I came to see that this is hard to do without denying human agency. Admittedly, this is probably a humanist concern apart from science; at the same time, human beings are capable of resisting and transforming the structures around them, and this can be the subject of scientific inquiry. All that notwithstanding, for someone committed to a thoroughgoing materialism, the residual entanglement remained a problem. At the same time, perhaps this is where Bhaskar could be embraced positively by sociologists whose intellectual instincts were drifting toward postmodernism. Time would prove my interest in such a project moot; in the decades that followed, sociology gave itself over almost completely to postmodernism. By that point, Bhaskar was lost on the discipline.

I still find Bhaskar’s arguments a powerful way of understanding human reality. In his favor, Bhaskar distinguishes concept-dependence from conceptual awareness more sharply than my initial critique acknowledged (as I remember it). For Bhaskar, social structures are activity-dependent, not belief-dependent. They require practices, not understandings, to persist. Capitalism does not exist because people believe in it; it exists because they engage in commodity exchange, wage labor, and so on, whether or not they conceptualize these activities correctly.

However, while Marx does not claim that social structures are material in the same way rocks or tables are material (he is not a crude physicalist), he explicitly rejects the idea, which Bhaskar endorses, that social structures are merely practice-dependent. For Marx, material relations are embedded in productive activity, enforced through coercion, law, property, and violence, and inscribed not only in infrastructure and institutions but in bodily necessity. As he stated in his 1859 preface to A Contribution to the Critique of Political Economy, “The mode of production of material life conditions the social, political, and intellectual life process in general.” Workers do not reproduce capitalism because they recognize capitalism, or even because they intend to reproduce it; they reproduce it because they must eat.

That’s a more robust materialism than Bhaskar’s. For Bhaskar, social structures are real, but they exist only through their reproduction in practice. For Marx, reproduction is forced, not merely enacted; social structures are real because material life is organized by them; and practice is constrained by pre-existing material relations. Marx does not deny agency; rather, he sharply limits it. In his 1852 The Eighteenth Brumaire of Louis Bonaparte, Marx writes, “Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.” Social structures are material relations with objective force. Treating them as non-material risks idealism by another name. Marx is hostile to any philosophy that treats meaning as foundational rather than derivative, practice as constitutive rather than constrained, and relations as conceptual rather than material. Social structures are material relations, more real than our concepts about them.

Some Facts About Lethal Violence and Questions For the Media

I opened ChatGPT on my home computer and asked it several empirical questions about lethal violence. The statistics it returned surprised me a little in light of how politically correct OpenAI can be. But facts are facts.

I begin with school shootings. High-poverty, urban, majority-minority schools, which comprise about 20 percent of schools, have an approximate rate of lethal violence of 5–15 per 1,000 students. Low/medium-poverty, suburban, predominantly white schools, around 50 percent of schools, have an approximate rate of 0.5–1 per 1,000 students. Rural schools with mixed demographics, roughly 30 percent of the total, have a rate of 0.5–2 per 1,000 students.

Implications:

1. Perception vs. reality: National attention emphasizes rare, high-profile attacks in wealthier communities.

2. Exposure: Most students affected by school shootings live in high-violence urban neighborhoods (a back-of-the-envelope check follows below).

3. Policy relevance: Efforts focusing solely on guns in schools miss the community-based violence associated with most incidents in disadvantaged areas.

Question to ask: Why does the media ignore school shootings in impoverished inner-city neighborhoods? Is it because the people residing there are black and Hispanic?
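To see how the second implication follows from the quoted rates, here is a minimal back-of-the-envelope sketch in Python. It assumes (my assumption, not something ChatGPT returned) that school types have roughly equal average enrollment, and it uses the midpoints of the quoted ranges; a school type’s expected share of incidents is then proportional to its share of schools multiplied by its rate per 1,000 students.

```python
# Back-of-the-envelope check of the school-shooting rates quoted above.
# Assumption (mine, not in the source): roughly equal average enrollment
# across school types, so a type's expected share of incidents is
# proportional to (share of schools) x (midpoint rate per 1,000 students).

school_types = {
    # type: (share of all schools, midpoint of quoted rate per 1,000)
    "high-poverty urban": (0.20, 10.0),           # quoted range: 5-15
    "low/medium-poverty suburban": (0.50, 0.75),  # quoted range: 0.5-1
    "rural, mixed demographics": (0.30, 1.25),    # quoted range: 0.5-2
}

weights = {name: share * rate for name, (share, rate) in school_types.items()}
total = sum(weights.values())

for name, weight in weights.items():
    print(f"{name}: ~{weight / total:.0%} of expected incidents")

# Output:
# high-poverty urban: ~73% of expected incidents
# low/medium-poverty suburban: ~14% of expected incidents
# rural, mixed demographics: ~14% of expected incidents
```

On these figures, roughly three-quarters of expected incidents fall in high-poverty urban schools, which is the point of the second implication.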

Orlando Harris was killed in a shootout with police after opening fire in his St. Louis high school in 2022.

Around 50–55 percent of lethal police shootings involve white males (roughly 93–95 percent of those shot by police are male). Question to ask: Why does the public believe most people shot by police are black? Is it because the media does not report that most people shot by the police are white? Why is that not an important fact to report?

Among ethnic and racial groups, blacks account for most homicides in America. Moreover, most homicide victims are black. For interracial homicide, whites are far more likely to be shot by blacks than the other way around. Question to ask: Why does the media not report these facts?

We’re told that gun violence is the number one killer of children. “Children” in these statistics refers to individuals ages 0–19. Are 18- and 19-year-olds children? If only children 14 years of age and younger are counted, then accidents (car crashes, drowning, falls) are the leading cause of death among children. For ages 15–19, firearms (homicide ~55–60 percent plus suicide ~35–40 percent) are the leading cause of death. Where do most gun deaths for those ages 15–19 occur? High-poverty, urban neighborhoods. Teens in these neighborhoods are 5–10 times more likely to be killed by guns than teens in low-poverty areas. Question to ask: Why doesn’t the media report this?

Governance and structural inequality are associated with all these patterns. Why would the media be reluctant to report this? Could it be that lethal violence occurs predominantly in cities governed by Democrats, and that the structural inequality that lies at the root of these problems is maintained by corporate-friendly policies, such as offshoring production, mass immigration, and the management of redundant populations, disproportionately black, that have historically been directed into socially disorganized inner-city neighborhoods?

Yes. And this, too: The media and Democrat politicians push an anti-gun rights narrative. Rather than inform the public about where and why gun violence is a problem, which means focusing on human agency and the social conditions that benefit the rich and powerful, they make it appear that the mere presence of guns explains high rates of lethal violence in America. This serves the interests of the corporate state, which seeks a disarmed population to advance the agenda of total control over the population.

The data are very clear on the question of guns and violence (see The Law and Order President and His Detractors—Who’s Right?; Lying With Statistics; Once More, for the People in the Back: It’s Not Guns; Guns and Control). In states with high rates of gun homicide and violence, those lawfully possessing guns, disproportionately whites, are underrepresented in gun violence. The solution to gun violence is not gun control. Indeed, if the presence of guns does not explain variability in homicide rates, and it doesn’t, then gun control measures are not merely unnecessary; they make citizens less safe. And as I alluded to above, this is intentional.

Selective Condemnation of Cultural Integrity: The Asymmetry of Anti-Colonial Thought

Progressives (selectively, as readers will see) advocate identity-based frameworks that analyze historical actors—especially Europeans in the Americas—as collective agents whose actions are interpreted through categories such as ethnicity, race, and structural power. They use this frame to describe the history of America as an act of colonization.

This description is accurate. Colonization involves the movement of people from a foreign territory into a new land with the goal of settlement, resource use, and the transformation of local society. Colonization establishes institutions, farms, towns, and social structures to sustain the settlers and assert long-term control. The English colonization of North America in the seventeenth century illustrates this process well. English settlers not only claimed land but also created enduring communities and governance systems rooted in those long established in their countries of origin. What emerged from the process was a new cultural and social order. This is how America was possible.

One can lament colonization and the effects, negative and positive, it has had on indigenous populations. However, once the new cultural and social order is long established, those born into it are native to it. For indigenous peoples, the moment to prevent the establishment of settler colonies and the displacement of existing populations is while colonization is occurring. In the case of the United States, the process is complete. The country is a multiracial, primarily Christian society organized as a secular republic, and those native to it have a right to cultural integrity and national identity on this basis.

Yet when contemporary migration produces dense cultural enclaves and visible cultural and social transformation in Western countries (for example, the Muslim enclaves that have formed across Europe and in parts of North America, such as Dearborn, Michigan, and Minneapolis, Minnesota), criticism of colonization is not merely set aside. Instead of interpreting these developments through the same group-level lenses used to describe the history of European settlement, the discourse shifts to a language of cultural relativism, individual rights, and religious liberty. The native population is not supposed to regard migrants in terms of ethnic identification and tribal affinity. If they do, they’re smeared as racists and xenophobes.

I am not arguing that the ethnic enclave in the West is analogous to towns established by colonizers; rather, I am saying that it is the thing itself. The West is being colonized. If progressives insist that demographic change, disruption to native cultural sensibilities and traditions, and institutional transformation count as colonization when practiced by historically powerful groups, then intellectual consistency demands a clear explanation for why the same dynamics we are experiencing today must be described in different terms.

There is no clear explanation. Thus, we are confronted by a double standard. For some reason, the native peoples of Europe and North America are deemed racist and xenophobic for seeking to assert cultural integrity and national identity, and for restricting migration on that basis; non-Western cultures are entitled to do the same without being smeared in the same way. Indeed, resistance to Western settler colonialism is lauded; not only is Western resistance to mass migration from the Middle East and North Africa (MENA) characterized as racist, but so would be white European mass migration to MENA countries.

Ian Smith, Prime Minister of Rhodesia, shown here at a 1965 press conference. His government gave way to black nationalist Robert Mugabe and the Patriotic Front in 1980.

To illustrate this double standard, consider a hypothetical scenario in which millions of white Europeans (fleeing climate challenges, cultural shifts in their home countries, and economic stagnation) begin migrating en masse to a Sub-Saharan African country like Kenya. Today, less than one percent of the Kenyan population is of white European descent. These hypothetical settlers, drawn from nations like France, Germany, and Poland, arrive with capital and a shared ethnic identity, establishing dense enclaves in urban centers such as Nairobi and Mombasa, as well as in rural areas rich in arable land.

The white European migrants build churches, schools teaching European languages and curricula, businesses prioritizing their networks, and even gated communities that replicate suburban European lifestyles. Over time, these groups advocate for policy changes: dual-language signage, relaxed land ownership laws to facilitate further settlement, and so on. Birth rates among the migrants outpace those of the local population, leading to demographic shifts in which white Europeans comprise 20–30 percent of certain regions, influencing local elections and cultural norms.

Would the native Kenyan population—predominantly black Africans with diverse ethnic groups like the Kalenjin, Kikuyu, Luhya, and Maasai—react with alarm? One imagines so. Community leaders would likely organize protests against “cultural erosion,” citing the influx as a threat to control over resources, indigenous languages, and traditions. One would expect that they would demand stricter immigration controls, deportation of undocumented settlers, or even quotas on European-owned land to preserve national identity.

Such resistance would likely be framed by local voices as a defense against neo-colonialism, echoing historical grievances from British rule. Progressive commentators in the West and globally would applaud or at least sympathize with this stance, portraying resistance as righteous anti-imperialism. One can picture media outlets running headlines like “Kenyans Fight Back Against European Encroachment,” drawing parallels to anti-apartheid struggles or decolonization movements. Any European migrant complaints about “anti-white racism” would be dismissed as tone-deaf entitlement, rooted in historical privilege. Progressives would emphasize the collective rights of the indigenous population to maintain sovereignty, arguing that unchecked migration risks repeating the harms of past colonization.

Now, reverse the scenario: Suppose millions of black African Muslims from Sub-Saharan countries migrate to a European nation like France, forming enclaves in cities such as Marseille or Paris. If native French citizens—predominantly white Europeans—voice similar concerns about cultural integrity, demographic change, or institutional shifts (e.g., calls for halal options in schools or mosque construction by those coming from Muslim-majority countries like Senegal or Gambia), they would be swiftly labeled racist, Islamophobic, or xenophobic by the same progressive voices who condemn white European migration to Sub-Saharan Africa.

When one turns the dynamic the other way around, the discourse pivots to cultural relativism and individual rights: freedom of movement, the moral imperative of diversity, and religious liberty. Collective interpretations of the migration as “colonization” are rejected as bigoted fearmongering, even as the dynamics mirror historical European settlement patterns. This 180-degree reversal reveals the inconsistency: Non-Western natives are granted legitimacy in asserting group-based resistance, while Western natives are expected to dissolve their cultural boundaries in the name of progress.

Protesters in London in 1968 demonstrated against Prime Minister Ian Smith’s resistance to black majority rule in Rhodesia. Does anybody think these same protestors, if alive today, would protest the Islamization of that city? It’s hard to imagine.

In light of the double standard, two things must be kept in mind. First, the hypocrisy, wrapped in the selective CRT language of the “perpetrator-victim” dynamic, has propagandistic value: it disorders our conceptual vocabulary to avoid conclusions, politically disadvantageous to the transnationalist project, about how societies are reshaped by large, sustained population movements. Second, since the welcoming attitude toward migrants among so many millions of Europeans is hard to imagine as a naturally occurring one, socializing people to harm themselves required concentrated power and a concerted plan. Two obvious and necessary questions follow: Who possesses that power? And how did they prepare so many Westerners to welcome their own destruction?

Here’s how: In post-colonial scholarship, the term “colonial collaborators” refers to local individuals, groups, or elites within a colonized society who cooperated with the external power driving colonization. The colonizing force commandeers the sense-making apparatus (education, mass media, popular cultural production) and socializes the young and the cognitively vulnerable to embrace cultural relativism, the false attitude that all cultures are morally equal except those of a uniquely evil civilization, and to think poorly of themselves for wanting to keep their cultures and societies European. This is the work of progressivism. The American progressive and his social democratic comrades in Europe are the colonial collaborators. But the progressive won’t tell you that. Instead, he will present himself as your moral better. That so many people agree with him tells us how far down the road we have travelled toward post-Western civilization.

The Problem of Empathy and the Pathology of “Be Kind”

We are told to “be kind.” What lies behind this demand is the idea of “empathy.” The word empathy is only about a century old. Still, the moral expectation that surrounds it has grown into one of the defining cultural norms of the early twenty-first century. Empathy is widely treated as an unquestioned virtue. It anchors a therapeutic discourse, pitched as a paradigm of interpersonal morality, and it has become a central pillar of education. Yet philosophers, psychologists, and social critics have increasingly argued that empathy, as presently understood, often distorts moral judgment rather than clarifying it. Even if nobody else were making this critique, I would; empathy paves the road to civilizational demise.

Before The Wealth of Nations (1776), there was The Theory of Moral Sentiments (1759)

To understand this debate, it is helpful to begin with Adam Smith’s classic eighteenth-century notion of “sympathy,” a concept frequently but inaccurately described as a forerunner of modern empathy. I begin the semester in my Freedom and Social Control course with Adam Smith and the “moral sentiments” thesis of his 1759 The Theory of Moral Sentiments. Smith uses sympathy to refer not to emotional fusion with another person’s feelings and self-understanding (the core of the empathy construct), but to a process of imaginative moral evaluation.

When we sympathize with someone, Smith argues, we attempt to view his situation not as he does (his “first-order” reality, as the anthropologists say) but as an “impartial spectator” would. We try to understand how the man’s circumstances would appear to a reasonable observer, not simply to assume or absorb his emotional state. For Smith, sympathy serves the function of judgment; it mediates between compassion for others and our responsibility to uphold moral standards, while remaining firmly grounded in the real world. One may have compassion for a mentally ill man who believes he is a wolf, for example, but a rational person does not attempt to empathize with his delusion. It is pitiable that he believes this, but he is not a wolf.

Modern empathy operates by a different logic. In contemporary psychology, empathy usually means some combination of two things: cognitive empathy, the capacity to understand another’s perspective, and affective empathy, the ability to feel what another person feels (empathic emotional mirroring). It is this latter form, absorbing or mirroring others’ emotions, that has fueled the moral prestige of empathy in contemporary culture. This is a problem for two reasons: the emotions of others may motivate destructive behavior and are therefore unworthy of sympathy; and the observer, by projecting his own sentiments onto the other person or group, may come to feel affinity with those who harbor destructive emotions, or may misunderstand those emotions altogether.

One major criticism is that empathy tends to override or distort judgment. Affective empathy has a “spotlight” quality: it focuses moral attention on the person whose feelings we inhabit (or at least attempt to), often at the expense of broader principles or other people who may be affected. For example, our wolf-man may indeed be a sympathetic figure, but we do not excuse a murder in which he has killed and eaten part of his victim. In his 2016 Against Empathy: The Case for Rational Compassion, Yale psychologist Paul Bloom argues that empathy can lead to moral blind spots, selective compassion, and sentimental favoritism. When the priority is to feel what another person feels, the result may be to excuse harmful behavior, overlook responsibility, or fail to evaluate actions in a principled way. The impartial spectator gives way to an emotional partner whose perspective dominates our moral response.

A second concern is that empathy is easily exploited. Those acting destructively may invoke empathy as a shield against accountability: “If you understood my feelings, you wouldn’t judge me.” (One can easily see Gresham Sykes and David Matza identifying this as an item in an expanded list of their techniques of neutralization.) Emotional identification becomes a moral cudgel, turning judgment or boundary-setting into a supposed failure of kindness. If this sounds like a psychopathic maneuver, there’s good reason for that. This dynamic, the weaponization of empathy, erodes the ability of individuals and communities to insist on standards of conduct. It can paralyze intervention when people harm themselves or others, because empathizing with the person’s subjective experience begins to feel like a moral imperative that prohibits firm action.

This is how a paranoid schizophrenic with a history of violent behavior is set free by an empathetic judge to prey on more victims: the judge has empathy for his mental illness, to the detriment of the public, which expects the justice system to protect it from those who may do harm. For example, on August 22, 2025, aboard a Lynx Blue Line light-rail train in Charlotte, North Carolina, Decarlos Brown Jr., a 34-year-old man diagnosed with schizophrenia and known to have a long history of arrests and violent behavior, stabbed to death Iryna Zarutska, a 23-year-old Ukrainian refugee. In January of that year, Judge Teresa Stokes, a magistrate in Mecklenburg County, had released Brown with no bail or bond.

Ask yourself why a queer activist who hates his own culture camps out at Harvard to harass Jewish students and defend an Islamic death cult that would murder him if it had the chance. It is not because he used sympathy to understand what members of Hamas think and want; rather, it is because he has, via empathy, projected his own sentiments into the hearts of Islamists and come to believe they think like him and want the same things. Because of his irrational worldview, he is only half right. Hamas does hate Western culture and wants to destroy it, just as he does. But Hamas also hates him and wants to kill him; indeed, it wants to wipe queer people (along with Jews) from the face of the Earth. Empathy, compounded by his cultural self-loathing, causes him to subject himself to potential destruction.

A third criticism naturally follows: empathy disarms necessary self-defense. When kindness is interpreted as imagining oneself into the perspective of those who would do harm, whether to individuals or to the collective, it can weaken the resolve to resist or intervene. The moral pressure to understand someone “from their point of view” can undermine the instinct to protect oneself or to uphold the norms that sustain civil life. Compassion urges us to help those who are struggling; sympathy helps us judge their motives, situation, and understandings; empathy asks us to surrender our own well-being or standards to accommodate another’s emotions, often self-destructive sentiments we project into the subject of our empathy.

We see this in the pleadings of a woman who defends the man who batters her; she loves him, and so she assumes that he loves her, and on that basis begs the arresting officers to let him go. She wishes the man were not violent, but she tacitly accepts that as part of the arrangement because she has substituted affective empathy for reason. Because she believes he thinks as she does, she believes that his violent proclivities are a problem to be resolved with kindness. But he is not like her. He does not love her. He wishes instead to control her, possess her, and use her existence as a means for his violent expression. There is no love to be found there, whatever he says—or she believes. If not her, then another woman will experience his rage.

A fourth problem is that empathy is neither neutral nor universal. Humans empathize selectively, often with those who resemble us, flatter us, or appear vulnerable. Empathy reinforces tribal boundaries rather than transcending them. Flattery seduces us into relationships that can be toxic and damaging. And appearances are easily manufactured. Compassion, by contrast, can be extended impartially to strangers and even to those we do not particularly like. But it does not mean we blind ourselves to who they are and what they want. Compassion has its limits, too. (Same with tolerance.)

Somalis are not leaving their home country for Minneapolis (many by way of Green Bay, Wisconsin) because they want the freedoms native Minnesotans enjoy (such as they are). They are coming to Minnesota to change the culture there by spreading Sharia. As with other Muslim groups, they come bearing an alien culture that is incompatible with American norms and values. Muslims do not see the world the way Americans do; rather, they see it as divided between those who submit to Islam and those who don’t, namely the infidels. With rare exceptions (Ayaan Hirsi Ali, for instance), Somalis wish to remake America in the image of Somalia. They have selected Minneapolis because the city government there welcomes them, a government that also seeks to remake America in a different image.

When the Somalis and their progressive enablers in Green Bay and Minneapolis demand empathy for the plight of Somalis, they would have us assume that Somalis are just like the native-born American who loves his country and its freedom. But the progressive himself does not love his country. Nor does he love its freedom. He sees America as an illegitimate entity, one founded in white supremacy and other oppressions. His worldview is a woke world of “perpetrators” and “victims,” and the perpetrators look like him, so he loathes himself. Whether he sees it or not, Islam contains a logic of authoritarian control that he wishes to wield himself. He identifies with the Muslims because he sees himself in them, and through them he can escape his white guilt, which his ideology has taught him to feel acutely. In effect, he is weaponizing empathy in a project of managed American decline. We know this when we listen to the hatred progressives express for their country, its culture, and its history; for if the progressive loved his country, he would not wish to see it undermined by those with whom he shares no intrinsic interests.

These tensions explain both the rise and the limits of the “be kind” ethos. The phrase appears generous and humane, but it is vague and applied selectively, to situations determined by those who command our sense-making institutions. In other words, “be kind” functions as a moral command to validate selected emotional experience uncritically, to avoid judging someone’s behavior when judgment is inconvenient, or to treat the boundaries progressives seek to transgress as forms of cruelty (seen, for example, in the selective transgression of guardrails protecting children from sexual exploitation). In this sense, kindness becomes conflated with targeted, unconditional empathy. However it is targeted, eschewing sympathy and suppressing the “impartial spectator” does not produce moral clarity; instead, it creates a therapeutic culture in which selected emotional expression is morally sacrosanct, while judgment, once guided by Smith’s impartial spectator, is treated as suspect or oppressive. Empathy is not a relative of compassion or sympathy; it is their opposite.

Gender identity doctrine is the paradigm. As I explained in a previous essay about the high-profile debate between Andrew Gold and Helen Webberley, we do not treat anorexia empathetically; we treat it sympathetically. The condition is a mental illness to be treated because we have compassion for those afflicted with this species of body dysmorphia. Webberley herself admitted that anorexia is a terrible thing. At the same time, gender dysphoria is treated not as a mental illness but as a situation in which a man with the gendered soul of a woman requires transitioning to his “true” or “authentic” self. Rather than tell the man that he is mentally ill, as the compassionate doctor would, we are told to “be kind” and affirm the man’s delusion. One must take his emotional standpoint as truth. Otherwise, if we seek to protect the man (or the boy), we become an “oppressor.” To escape the label, some support the madness, and they rationalize it as kindness.

Identifying the problem with empathy is not an argument for callousness, nor is it a rejection of understanding others’ experiences. All this can be had with compassion and sympathy. Rather, it is a recognition that empathy, especially in its affective form, does not necessarily lead to good moral outcomes. Indeed, it can lead to civilizational destruction and medical atrocities. Judgment and responsibility remain essential for any just and rational society. Empathy warps the essentials. Empathy derails freedom and reason.

Adam Smith understood this. Sympathy, for him, was never about surrendering to another’s feelings; it was about seeing the other’s situation clearly and holding it up to an objective moral lens. Through what lens shall we see objective morality? We can debate the matter, but systems that demand society submit to the will of Allah, or to institutional practices driven by the material interests of a medical industry whose practices harm children, are straightaway precluded from the discussion; such systems by design negate compassion, fairness, judgment, responsibility, and sympathy. They are dehumanizing and totalitarian ideologies. Proponents cover these moral failures by substituting for them the construct of empathy.

The modern elevation of empathy to a near-absolute moral value has obscured the distinction implicit in Smith’s work and in the work of modern critics of the construct. For a recent treatment of the matter, see Jesse Prinz’s “Against Empathy” (Southern Journal of Philosophy, 2011), which argues that empathy is not necessary for moral judgment and can be counterproductive, as it is a vicarious emotion that often leads to biased or partial responses. Prinz’s critique aligns with modern arguments, such as Bloom’s, that offer compassion (caring about others’ welfare without emotional over-identification) as a more rational and proportionate guide for public policy and large-scale ethical decision-making. For a short version of Bloom’s argument, see his “Empathy and Its Discontents,” published in Trends in Cognitive Sciences (2017). There, he argues that empathy is biased and narrow in scope, and that it can even motivate aggression, cruelty, and large-scale group violence. Most importantly, he shows how empathy can become tribalized: we empathize intensely with “our own,” which in turn fuels hostility and dehumanization toward outsiders.

Thus, while cognitive empathy can foster connection and insight, empathy in its affective form can distort judgment, encourage manipulation, inhibit intervention, and weaken the capacity of communities and individuals to defend themselves from those who seek to harm them. A rational moral framework recognizes the importance of compassion and understanding, but also the necessity of accountability, boundaries, and the impartial spectator that Smith saw as indispensable to moral life.

I tell my students in my criminal justice courses this when I explain the operation of a rational justice system. In such a system, we do not judge the actions of the murder defendant from his emotional state and worldview; we do not judge him against himself. Nor do we project our own feelings onto him. We judge the defendant against the model of the “rational actor.” We ask: What would a rational actor have done in this situation? If we cannot assume the standpoint of the rational actor, then self-work is required.

This error is how the men on a jury can acquit a man of murder because, putting themselves in his shoes, they experience the jealousy he experienced. Smith’s concept of sympathy involves imaginatively placing oneself in another’s circumstances, but crucially, this must not involve a projection of one’s own raw emotions or biases; it must be filtered through the “impartial spectator” to ensure objective moral evaluation. Projecting personal sentiments, like a jury’s own jealousy, without this impartial check distorts judgment, leading to improper approbation or to excusing actions that lack propriety. This process emphasizes evaluating the fitness of passions to their objects, rather than merging emotionally in a way that compromises reason.

This is how a black man who stabs a white man is acquitted by a jury (a real case from a few days ago) because, in his astonishment at having been stabbed, the white man calls the black man a racial slur. Because of the jurors’ empathy toward blacks as a class, the victim’s use of a slur negates the harm caused by the man who stabbed him. The jurors have lost their capacity to sympathize with the stabbing victim and instead empathetically identify with the perpetrator. They ask themselves how they would feel hearing the slur directed at them. They have utterly lost any sense of perspective and fairness.

Thus, Smith is describing sympathy as a projective imagination: “By the imagination we place ourselves in his situation,” he writes, “we enter as it were into his body, and become in some measure the same person with him.” However, he stresses that true moral approval requires the impartial spectator’s perspective, which corrects for partiality or self-deception. “We can never survey our own sentiments and motives,” he argues, “unless we remove ourselves” from the situation to view it objectively. We must, he insists, “endeavor to view them as at a certain distance from us.” The impartial spectator acts as a rational standard, judging whether passions like jealousy are appropriate or proportionate, not whether they match the observer’s personal feelings.

In cases of strong passions such as jealousy or revenge, Smith notes that sympathy is limited if the emotion is excessive or self-interested: “The furious behavior of an angry man is more likely to exasperate us against himself than against his enemies,” because the impartial spectator would not fully concur with unchecked rage. Smith explicitly discusses how jealousy can prevent proper sympathy: “A sentiment of envy commonly prevents us from heartily sympathizing.”

This is what causes a progressive to celebrate the assassination of a CEO of a healthcare corporation, in the same way that communist sympathizers find no horror in the Bolsheviks murdering the Tsar’s family in 1918, in the wake of the Revolution. The French aristocracy, likewise, had it coming in the Reign of Terror. Smith warns that passions like fear, jealousy, and resentment can drive tyrannical actions. (If readers are interested in Smith’s original work, which is in the public domain, see The Theory of Moral Sentiments.)

In my jury example, a group projecting their own jealousy would fail this impartiality, potentially acquitting based on shared bias rather than evaluating the act’s propriety (likewise with the racial slur case)—precisely what Smith seeks to avoid by insisting on the spectator’s detached view. The distinction between Smith’s sympathy (as impartial, evaluative imagination) and affective empathy (as distorting emotional projection) allows us to see how the demand for empathy derails justice rather than manifests it. While Smith doesn’t directly address modern legal concepts like actus reus (guilty act) and mens rea (guilty mind) that I tie to my lecture on the matter, his framework supports prioritizing rational judgment over personal emotional alignment in moral and, thus, by extension, judicial decisions. 

In the final analysis, empathy is conditioning the population to suspend reason and instead respond emotionally, on the basis of their own distorted understanding of the world. Empathy is regressive, making people morally childish so that they can be more easily led by the nose, and scolded for disobedience. The empathy project wants us to identify with the perspective of those who seek to harm us. The project asks us to “be kind,” requesting that we each advance the project if we are to be seen as properly moral actors. This is so that project leaders can achieve their aims without resistance. We can know this because there are consequences for those who are not kind. Those who weaponize empathy are therefore not really asking us to “be kind.” They’re telling us to tolerate ideas and behaviors that undermine our interests and safety, ideas and behaviors that diminish our nation and Western civilization. They’re conscripting us into a war against ourselves.

The “R-word” and Other Childish Progressive Constructions

“Anti-vaxxer,” “anti-intellectual,” “backward,” “Bible-thumper,” “bigot,” “brownshirt,” “caveman,” “chauvinist,” “Christian nationalist,” “cracker,” “cretin,” “cruel,” “deplorable,” “extremist,” “fascist,” “flat-earther,” “garbage,” “gun-nut,” “heartless,” “hick,” “hillbilly,” “homophobe,” “idiot,” “ignoramus,” “Islamophobe,” “imbecile,” “low-information voter,” “MAGAt,” “moron,” “mouth-breather,” “Nazi,” “neo-Nazi,” “Neanderthal,” “racist,” “reactionary,” “redneck,” “regressive,” “trailer-trash,” “transphobe,” “uneducated,” “white supremacist,” “yokel,” and “xenophobe.”

I’m sure I’ve left out many others. These are smears routinely hurled at conservatives. As a free-speech advocate, I have no objection to people using any of these words; I use plenty of them myself. My point in listing them is simple: there is no consequence for doing so. I cannot think of a single smear routinely directed at conservatives that has been euphemized into the childish “[capital letter]-word” construction.

I would, however, get in trouble for using words that do receive that treatment. Take one example: “retard.” This word has become somewhat safe to say now. Still, I can’t count how many times I’ve seen the sanctimonious “R-word” formulation, accompanied by ritual condemnation of anyone who refuses to adopt the approved progressive kindergarten locution.

Yet, as I’ve pointed out before, “idiot,” “imbecile,” and “moron” were once formal medical classifications in psychology and psychiatry during the late nineteenth and early twentieth centuries. They described levels of intellectual functioning and adaptive ability—i.e., degrees of what was then clinically termed mental retardation: an “idiot” had the most severe impairment—the lowest level of intellectual functioning; an “imbecile” had moderate impairment; a “moron” had mild impairment.

The Three Stooges

These terms were gradually abandoned between the 1950s and 1970s precisely because they had become common insults (thanks in part to the Three Stooges and Warner Brothers). They were linguistically and semantically bleached through repeated pejorative use—a process known as the euphemism treadmill (or the language cycle of harm). When a word is heard often enough as an insult, it eventually feels ordinary.

Sometimes a bad word is even reclaimed and repurposed. Calling someone “queer,” for instance, once felt visceral. Today, it’s an affirmative identity and the name of an entire academic discipline, complete with departments and degree programs in “Queer Studies.”

All of this illustrates a larger truth: which words we’re allowed to say—and who is allowed to say them—are windows into power. This isn’t only (or even primarily) about formal punishment through laws or institutional policy. Mostly, it operates through subtler, informal social controls. One lowers one’s voice when uttering a forbidden word because one fears what will happen if one doesn’t: being labeled, harassed, ostracized, or even subjected to violence.

Because progressivism is the dominant worldview in virtually all sense-making institutions—corporate HR, academia, entertainment, media, tech—no progressive will face formal or informal consequences for deploying any of the slurs I listed against conservatives (or, for that matter, against liberals). That fact tells us who actually holds power over acceptable speech. Conservatives—and even many liberals—appear to have almost none. In the spring of 2024, students at the institution where I teach drew up a petition to get me fired for, among other things, using a racial slur, even though I was not using the word in a derogatory manner. Yet they smeared me as a “racist” and “transphobe.” Nobody defended me against these smears. I don’t care that they didn’t; it proves my point.

The good news, as I argued in the essay I published on my platform Saturday, is that words only have the power we collectively grant them. If we refuse to be afraid of them and use them as we see fit, the speech police may eventually grow tired of trying to punish us. Perhaps their authoritarian and illiberal actions will delegitimize their speech codes and the practice of thought control. At the very least, over time, frequent use will once again semantically bleach the offending terms, stripping them of their sting, just as happened with “idiot,” “imbecile,” “moron,” and, in a different way, “queer.”

And, now, “retard.”