The Case of Trey Reed: A Modern-Day Lynching?

The Trey Reed case came up in one of my classes. I was not familiar with all the details of the case, although I am the one who brought it up, in response to a claim that lynching is still a problem. My point was not to interrogate the case but to note that authorities ruled the death a suicide and that, moreover, even if it were a racially motivated killing (for which there is no evidence to my knowledge), it would not be a lynching, for conceptual reasons. My aim was to inject skepticism into the conversation. In this essay, I will explain my reasoning and provide details of the case after taking a closer look at the facts.

I begin with a disclaimer and a couple of statistical observations. This case is still ongoing, and evidence not yet publicly available may be forthcoming that indicates a racially motivated killing. Even then, it would take additional evidence to conclude that it was a lynching. It should also be noted that, although suicide among blacks is rarer than among whites, the CDC reports that in 2022 (the most recent year with a detailed demographic breakdown), 3,826 of the 49,476 total suicides were black Americans. Moreover, according to the FBI, there were 13,446 black homicide victims that year, approximately 89 percent of them killed by black perpetrators. Although most of those murders were committed with guns, many other methods were used as well; strangulation is not an uncommon method of murder.

Demartravion “Trey” Reed was a 21-year-old Black student at Delta State University in Cleveland, Mississippi. On September 12, 2025, Reed was found hanging from a tree on the university campus. The Mississippi State Medical Examiner’s Office, led by Randolph “Rudy” Seals Jr., conducted an autopsy and ruled the death a suicide by hanging. Delta State University Police Chief Michael Peeler reported that the findings of his department were consistent with the local coroner’s conclusions, which noted no broken bones, contusions, lacerations, or other signs of assault. Peeler said there was no evidence of foul play. These facts were widely reported across the media.

Reed’s family was not satisfied with the ruling and has called for an independent autopsy as well as greater transparency, including access to video evidence. Civil rights attorney Ben Crump is representing the family in their independent investigation, and Colin Kaepernick’s “Know Your Rights Camp” is reportedly funding the independent autopsy. Additionally, US Representative Bennie Thompson has called for an FBI investigation.

The case has drawn comparisons to the history of racial violence in the United States, particularly lynching, which shapes how many people are interpreting the circumstances surrounding Reed’s death. Whatever the facts of the case, there is a conceptual problem with the claim of racial lynching here: the historical and scholarly understanding of the phenomenon in the United States (Ida B. Wells, Stewart Tolnay and E. M. Beck, and many contemporary historians) emphasizes that lynching was not merely a form of homicide but a public, ritualized performance of racial domination. (For my writings on the topic, see “Explanation and Responsibility: Agency and Motive in Lynching and Genocide,” published in 2004 in The Journal of Black Studies, and “Race and Lethal Forms of Social Control: A Preliminary Investigation into Execution and Self-Help in the United States, 1930–1964,” published in 2006 in Crime, Law, & Social Change. See also Agency and Motive in Lynching and Genocide and There Was No Lynching in America on September 24, 2024, on this platform.)

Racial lynchings were carried out by groups of white perpetrators against black victims, before crowds large and small that treated the violence as a communal spectacle, and the participants were effectively immune from legal consequences. This public and performative quality distinguishes lynching from private acts of violence or clandestine hate crimes; lynching’s purpose extended beyond harming an individual to terrorizing an entire racial community and reinforcing a social hierarchy grounded in white supremacy. I have described the phenomenon in my work as a public spectacle used to reclaim boundaries serving the interests of white racial exclusion and hierarchy. My thinking was inspired by James M. Inverarity’s “Populism and Lynching in Louisiana, 1889–1896: A Test of Erikson’s Theory of the Relationship Between Boundary Crises and Repressive Justice,” published in a 1976 issue of American Sociological Review. Inverarity’s analysis relies on Kai Erikson’s Durkheimian framework (boundary maintenance, deviance, and repressive justice) to test whether boundary crises in the white political order produced repressive collective violence in the form of lynching.

Framing lynching as a subset of racially motivated homicide, and especially as an act of boundary maintenance, captures its essential features: audience presence, collective participation, and symbolic intent. This definition reflects the scholarly consensus that a lynching is best understood as a social ritual—an assertion of racial control—rather than simply as a killing motivated by racial animus. (My position was later supported in work by Mattias Smångs. See “Doing Violence, Making Race: Southern Lynching and White Racial Group Formation,” published in American Journal of Sociology in March 2016.)

There is no evidence that Reed’s death was a homicide, let alone one perpetrated collectively before an audience. The surveillance video from Delta State University that might indicate this has not been publicly released because the investigation into Reed’s death is still ongoing. With no eyewitness reports of a lynching, video evidence would be necessary to make such a determination. It is not uncommon for authorities to withhold such footage: early release can compromise eyewitness interviews, forensic analysis, or potential criminal proceedings. Privacy concerns also play a role, as campus cameras frequently capture students and staff unrelated to the incident. Moreover, maintaining strict control over the chain of evidence ensures that the footage remains admissible in court; early public release could raise questions about authenticity or tampering and could bias the jury pool. However, if the video did show such a thing, it is highly unlikely—so unlikely as to be implausible—that the public would not already know about it.

Setting conceptual distinctions aside, how did the belief that this was a lynching emerge and spread without evidence? Through misinformation about Reed’s death circulated after the release of the initial autopsy. An individual operating an account and claiming to be Reed’s cousin alleged that he had sustained injuries—specifically broken bones—that would have made suicide physically impossible. As noted, the initial autopsy does not indicate this. The videos went viral before their creator deleted them, along with the account itself (I can find no information on the identity of the person behind it). Moreover, on a podcast, Krystal Muhammad, chair of the New Black Panther Party, claimed in a conversation with rapper Willie D that Reed’s mother had spoken to her about the contents of the second autopsy report. (I hasten to note that the original Black Panther Party has denounced the New Black Panther Party, emphasizing that it has no connection to the original organization.)

Terry Wilson, founder of the Idaho chapter of Black Lives Matter Grassroots, added fuel to the moral panic, telling The Chicago Crusader (“Lynching by Suicide: The Rebranded Face of America’s Racial Violence”) that the response from black Americans is deeply rooted in shared historical memory. “This sophisticated machinery of racial terror is just a fascist strategy that relies on overwhelming force from multiple directions, including misinformation, intimidation, and threats,” Wilson said. “I think we’re witnessing a coordinated campaign of disappearances, lynchings, and state-sanctioned killings that target Black, Brown, and Indigenous communities.” He added, “We need to address this method of ‘lynchings by suicide,’ which is their way of rationalizing, from a medical standpoint, their actions. I think this is sort of a death rattle for white supremacy, because they’re relying on nearly every structural institution to justify or cover up the actions of individuals.”

DeNeen L. Brown, faculty member at the University of Maryland’s Philip Merrill College of Journalism

I trust the reader will recognize the hyperbole of these assertions. The apparent factual basis of the assertions was provided in part by a June 3, 2025, Washington Post piece, “Lynchings in Mississippi Never Stopped,” penned by DeNeen L. Brown, a staff writer for the paper. Her claim that “[s]ince 2000, there have been at least eight suspected lynchings of Black men and teenagers in Mississippi, according to court records and police reports,” is valorized by the reputation of the Post as an objective mainstream news outlet.

However, every instance of death Brown cites was ruled a suicide by officials. One either accepts these rulings or supposes a conspiracy in which Mississippi state officials are covering up homicides. One must furthermore imagine that there was a racial motivation behind these homicides. Finally, if all these things could be proven beyond a reasonable doubt, one must alter the definition of lynching to classify these homicides as such. It should be kept in mind that around one-quarter of all suicides are the result of asphyxiation and that more than 90 percent of those involve hanging. That eight black men over 25 years chose hanging as a method of suicide is not an extraordinary fact.
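To put the arithmetic in perspective, here is a minimal back-of-envelope sketch in Python using only the figures cited above (the CDC count of 3,826 black suicides in 2022, roughly one-quarter of suicides by asphyxiation, and more than 90 percent of those by hanging). Applying the national method shares to black suicides, and treating 2022 as a typical year, are simplifying assumptions for illustration only.

```python
# Back-of-envelope sketch using only the figures cited in this essay.
# Assumptions (for illustration only): the national method shares apply to
# black suicides, and 2022 is treated as a typical year.
black_suicides_2022 = 3826              # CDC, national total for 2022
asphyxiation_share = 0.25               # "around one-quarter" of all suicides
hanging_share_of_asphyxiation = 0.90    # "more than 90 percent" of asphyxiations

hanging_suicides_per_year = (
    black_suicides_2022 * asphyxiation_share * hanging_share_of_asphyxiation
)
print(f"Estimated black suicides by hanging per year (national): "
      f"{hanging_suicides_per_year:.0f}")
# Roughly 860 per year nationally; against that baseline, eight cases in one
# state over 25 years is not an extraordinary number.
```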

According to The New York Times (“A Black Man’s Death in Mississippi Strikes the Nation’s Raw Nerves”), Jy’Quon Wallace, the 20-year-old Delta State student who discovered Reed’s body, is sympathetic to Reed’s family but, in the absence of a second independent autopsy, is not inclined to automatically connect Mississippi’s historical racial context to the body he found. “A lot of people are trying to use this situation to make it seem like it’s racially motivated. There are a lot of signs pointing to this as not a racially motivated situation. When that whole story comes out, if it does come out, it may give some people clarity. It may not. That’s not up to us,” Wallace told the outlet.

In that story, The Times reports, “Mr. Reed’s death was twice ruled a suicide, and no evidence has emerged that would suggest otherwise.” However, even if Reed were the victim of homicide, it does not follow that the perpetrator(s) was/were white or that, if they were, racial animus motivated the murder. Evidence is needed to make these claims. Even if the second autopsy found that blunt force trauma to the back of the head was the cause of death, or at least part of the sequence of events that led to Reed being hanged from a tree (thus indicating murder, the substance of a common rumor), the more likely scenario is that somebody with a grievance against Reed killed him. Some would object with the quip that the absence of evidence is not evidence of absence. Sure, but when speculating, one has to consider relative likelihoods.

And that is what lies at the crux of this problem. Motivated reasoning fills the gap between the evidence and what many would like to believe—or have others believe: that the United States remains a profoundly white supremacist nation where whites target blacks for violence. As I have shown on this platform, the reality is that whites are far more likely to be victimized (murder, robbery) by a black perpetrator than the other way around. This does not mean that racially motivated violence does not occur (indeed, I would argue that the disproportionality just noted indicates its presence in contemporary society), but rather that, in the absence of facts indicating racism, it is a leap of faith fueled by ideology to believe without compelling evidence that white supremacy explains the Trey Reed case.

Note: The discussion of viral media claims was adapted from reporting by Daniel Johnson writing for Black Enterprise.

The Liberal Origins of Modern Punishment

I want to share a narrative I often present in a similar form to my students and conclude with an observation about how some people perceive my politics. 

Many of my students identify as progressives (typical of higher-education social science programs) and uniformly view incarceration as a right-wing idea. In fact, incarceration is a liberal invention. Liberals sought to replace torture and retributive approaches with a rational system of justice grounded in the principles of deterrence, incapacitation, and rehabilitation. 

I tell this story to help all students understand the moral and political character of modern criminal justice. Part of its value is in showing progressive students how ideology can distort history and principle; it also helps conservative students see that the institutions they support rest on liberal, not traditional conservative, foundations. My goal is not only to correct misperception but also to deepen their political-philosophical understanding.

The emergence of modern criminal justice in the eighteenth and early nineteenth centuries was deeply rooted in the liberal tradition, which emphasized individual rights, legal constraints on state power, and rational governance—what Herbert Packer identifies as the “due process model” in his article “Two Models of the Criminal Process,” in a 1964 issue of the University of Pennsylvania Law Review.

Two of the most influential figures shaping this new penal philosophy were Jeremy Bentham and Cesare Beccaria. Their works clarified the aims and methods of punishment in the modern state and circulated widely in Britain, continental Europe, and the American colonies. The philosophers provided the intellectual foundation for the rise of penal confinement and the development of the penitentiary as a core institution of criminal justice. Far from being a right-wing creation, the penitentiary was a liberal reform.

Jeremy Bentham’s 1789 An Introduction to the Principles of Morals and Legislation articulated a systematic utilitarian approach to legal and penal reform. Bentham emphasized deterrence and incapacitation as rational goals of punishment, seeking to minimize suffering while maximizing social utility. His architectural design for the Panopticon—a subject on which I devote an entire lecture—symbolized a broader shift toward a humane, systematized mode of punishment intended to replace the arbitrary and often brutal practices of earlier eras.

For Bentham, criminal justice should be guided by general laws, proportionality, and a view of offenders as individuals whose behavior could be shaped through predictable incentives and disincentives. Moreover, he insisted that the judicial process focus on acts rather than actors: class, gender, race, and other statuses were irrelevant; actions were what mattered.

Cesare Beccaria’s 1764 On Crimes and Punishments similarly transformed the moral landscape of criminal law. Writing decades before Bentham, Beccaria offered a powerful Enlightenment critique of disproportionality, secrecy, and torture. He argued for clarity in the law, proportional penalties, and the rational administration of justice. (For this, his book was added to the Index Librorum Prohibitorum, the Church’s official list of forbidden books, in 1766.)

Beccaria’s emphasis on legality, liberty, and predictable legal processes resonated deeply with American political leaders. The principles he articulated—visible in key provisions of the Constitution and the Bill of Rights—shaped American commitments to due process, bans on cruel and unusual punishment, and the rights of the accused. Beccaria helped shift the prevailing view toward deprivation of liberty (unfreedom for those who break the law), rather than capital and corporal punishments, as the primary penal instrument of the state.

Inspired by these ideas, reformers in the nascent United States moved rapidly toward creating institutions devoted to penal confinement. The first American penitentiaries emerged in the 1790s, grounded in the belief that offenders could be reformed through regulated labor, separation from corrupting influences, and structured discipline. By the end of the eighteenth century, the penitentiary had become a defining feature of the American penal order.

While Northern states adopted this model most rapidly, Southern states also had early advocates. In Virginia, for example, the establishment of a penitentiary was driven partly by the reformist impulses of Thomas Jefferson, whose broader political philosophy—deeply indebted to John Locke—aligned with liberal commitments to equality under the law, individual rights, and rationalized governance. The system across America was elaborated during the nineteenth century.

The intellectual foundations of these reforms rested squarely on the classical liberal tradition. Drawing from Beccaria, Bentham, Locke, Montesquieu, and other liberal thinkers, American constitutionalism and early criminal justice were built on the idea that political authority derives from the consent and rights of individuals and that punishment must be justified by general principles rather than arbitrary force. This framework informed the Declaration of Independence, the Constitution, and the Bill of Rights, each presupposing a political order grounded in individual liberty, limits on state coercion, and the rule of law.

Seen in this light, I explain to students, the rise of the penitentiary in the United States was not merely an administrative reform but an expression of deeper philosophical commitments. It is a window into the foundation of a free society. Confinement became the preferred mode of punishment precisely because it aligns with liberal principles: it operates through law rather than spectacle and through proportionality rather than cruelty, and it treats offenders as autonomous individuals capable of reform.

Far from reflecting traditional conservatism, the penitentiary embodies a humane and optimistic vision of justice. The emergence of the penitentiary system stands as a central example of how Enlightenment liberalism reshaped the modern state and gave enduring institutional form to its moral and political ideals.

Of course, as implied above, some now argue that liberalism is not left-wing but right-wing—a view that ignores history. This revisionist approach would classify the Constitution and the Bill of Rights as elements of right-wing governance. If one identifies as “on the left” and equates left-wing politics with progressivism, then liberalism indeed becomes “right-wing” by contrast. 

But in truth, progressivism—emerging as a post-liberal ideology supporting the rise of the corporate state after the Civil War, paralleling its social-democratic counterpart in Europe—is not left-wing in the classical sense. Progressivism elevates administrative and bureaucratic authority over the individual. It is an illiberal philosophy.

The point is that, if progressivism—rooted in corporatism and the ascent of a new administrative aristocracy—is labeled “left-wing,” then liberalism—understood as a commitment to individual liberty—becomes “right-wing,” simply because it stands in opposition to progressivism. This reframing reverses the ideological map as it was understood at the time of America’s founding and the French Revolution. Clever, to be sure.

Here’s the upshot: because I am a liberal, the swapping of political-philosophical sides makes me appear right-wing. Is it any mystery, then, why so many self-identified leftists accuse me of switching sides? What happened is that, beginning in earnest around 2018, as I explained in last Saturday’s essay, I shed ideas that contradicted my liberal principles. This meant rejecting the progressive elements in my thinking.  Through the distorted lens of the camera obscura, sharpening my thinking with the stone of principle has transformed me into a right-winger. So be it—but there it is. I am much happier as a result.

Image by Sora

“Trump is a Felon!” The Squawking of Party Parrots

Trump’s New York “hush-money” case is a farce, a textbook show trial. The purpose of the case was not justice. It was so party parrots could clack around squawking “34 felonies! Rraaawk! 34 felonies!” “Trump’s a felon! Rraaawk!” They’re still squawking.

Cartoon by Sora

In what is known in the business as a “zombie case,” prosecutors elevated misdemeanors beyond the statute of limitations to felony status by alleging that records were falsified to conceal another crime. How fake was this? Totally fake.

The fakery was present all along. The indictment did not specify exactly what the underlying crime was, pointing opaquely to “election-related” or “financial-related” violations, which, at the outset, denied Trump clear notice of the charge he supposedly intended to conceal. The first question any objective and rational person asks is how a criminal trial can proceed on an indictment that specifies no underlying crime.

It gets even more absurd from there. The judge’s jury instructions required jurors to unanimously agree that Trump falsified business records, but did not require them to unanimously agree on which underlying crime Trump intended to commit or hide. In other words, jurors could rely on different theories of what the unspecified underlying felony was, just so long as they unanimously returned a felony conviction.

See the problem? I hope you do. I want to believe you do. All that mattered was that jurors said Trump was guilty of something; they did not have to determine what he was guilty of, and prosecutors never told them, since they prosecuted Trump with an indictment that never specified a crime.

Illustration by Sora

This is why I often respond to the squawking of parrots with the question, “Have you ever read Franz Kafka’s The Trial?” In Kafka’s story, the precise charge against the accused, Josef K, is elusive, the logic of the accusation shifts (there is no logic, really), and K is expected to defend himself against something that is never clearly or fully articulated. K is trapped in a process where the form of legal procedure proceeds with great seriousness while the substance remains phantasmagoric. K is executed, never knowing what he was being executed for.

Adding to the insanity, after the verdict was delivered, the judge, Juan M. Merchan, imposed an unconditional discharge—no prison time, no fine—despite having allowed a conviction structure to stand that could not reasonably be expected to survive appellate review. Merchan could have set aside the verdict; instead, he participated in the farce, allowing the show trial to go through the motions. And the point of the whole thing was to train party parrots to squawk a squawk. “34 felonies! Rraaawk! 34 felonies!” “Trump’s a felon! Rraaawk!”

Even the way the parrots squawk the squawk is brainless. Trump wasn’t convicted of 34 felony counts. He was convicted of one felony offense (whatever it was). So even if one accepts that Trump was legitimately convicted of something, it would not be “34 felonies” but a “34-count felony.” The 34 counts? Every one of them was a misdemeanor that had expired.

There are certain things people say that disqualify them in my eyes from having anything to say worth listening to. This is one of them. Every time I hear somebody say that Trump is a felon, I know they haven’t a clue about the case or the law. Scarier is that they believe that show trials appropriate to the Soviet Union under Stalin should be run in America. They want Trump to be a felon because they loathe him, not because they have a shred of integrity or commitment to the rule of law.

There is a term for that: Trump derangement syndrome, or TDS. In Epstein, Russia, and Other Hoaxes—and the Pathology that Feeds Their Believability, I reference Manhattan-based psychotherapist Jonathan Alpert, who has noted that TDS resembles a genuine psychological condition he has observed in his practice (see also What Explains Trump Derangement Syndrome? Ignorance of Background Assumptions in Worldview). A crazy case only seems sane to a crazy person.

Sacred Words—Presumed and Actual Power

Words are presumed to carry power, especially words that offend people. The very idea that a word can “offend” someone depends on an imagined or assumed structure of power. When a term is labeled a slur, it is usually because it is thought to emerge from, reinforce, or call into being some underlying social hierarchy. For example, there are words that black people can use to describe white people that technically qualify as slurs, yet very few white people are seriously offended by them. The presumption is that whites hold structural power over blacks, and thus words aimed at whites cannot injure them. Moreover, the logic runs, whites deserve to suffer slurs since they are the oppressors. The presumed asymmetry of power flows in one direction, and that presumption shapes how the words operate. (Do you see the paradox?)

In the opposite direction, there are words that white people can use toward black people that are deeply hurtful. The assumption is that such words express or invoke a position of power, and that they carry within them the weight of a larger social asymmetry. At the same time, black people may use these same words among themselves and often argue that this usage strips the words of their oppressive power—an act of rhetorically “reclaiming” language from the dominant group.

We see a similar dynamic in words directed at gay people: slurs aimed at gay men or lesbians wound deeply, while parallel slurs thrown at straight people land with far less force. Yet accusations of homophobia, like accusations of racism, can be hurtful because they charge the accused with moral wrongdoing. In that sense, the equivalent offense on one side is the use of a derogatory term; on the other side, it is the accusation that the person is morally tainted for supposedly using or embodying a derogatory attitude that manifests the asymmetry of power.

Over time, some words become so heavily charged that even referencing them without malice becomes taboo. The power dynamic is so baked in that people avoid speaking the word outright and instead reduce it to constructions like “the N-word” or “the F-word.” Yet, everyone who hears the euphemism instantly imagines the actual word in their mind. Even the people who would be offended if they heard the word mentally summon it the moment the euphemism appears. It is in everybody’s head (or else we wouldn’t know what was being conveyed). The taboo becomes paradoxical: the word is forbidden to speak, but impossible not to think.

This dynamic is on my mind today because of the controversy surrounding the word “retarded,” now frequently replaced by “the R-word.” When I was growing up in the early 1960s, words like “idiot,” “imbecile,” and “moron” were understood as synonyms for “retarded.” Yet today “retarded” alone has taken on the status of a sacred or forbidden term. It resembles, in a way, the ancient Jewish taboo against vocalizing the actual name of God; instead, one used circumlocutions, and only priests or scribes could speak the divine name. This taboo was built on the assumption of an asymmetrical power relation between the clerical class and ordinary people. Similarly, our modern catalog of offensive words functions as a set of secularized sacred terms—words that cannot be uttered because of the social power they are imagined to reveal.

Thus, what we call “offensive language” is really a structure of sacred language embedded within an imagined system of power. This is what postmodern philosophers describe as discursive formation: the idea that language does not so much reflect power as generate and organize it. If one is to have power, one must control language (yet another paradox). While the term is modern, the underlying phenomenon is ancient. Civilizations long before ours used regulated language—taboos, sacred terms, forbidden names—to enforce and perpetuate structures of power. In that sense, nothing about our current landscape of forbidden words is new. The observation is simply that we have reinvented an old form of linguistic sacredness under secular conditions.

When I was growing up in church, I learned something about power that I now see as parallel. I often heard it said that the devil—Satan—has only the power that God allows him. If we imagined Satan as possessing independent, self-generated power, a kind of standalone evil deity, then Judaism and Christianity would be polytheistic rather than monotheistic. But the theology I heard insisted that God alone is sovereign and that anything Satan does occurs only within limits established by God (see the story of Job).

Years ago, during a debate on CNN’s Crossfire between Frank Zappa and a guest—likely someone associated with the Moral Majority, since it occurred during their campaign to ban or label certain song lyrics—Zappa repeatedly emphasized that lyrics are simply “words,” nothing more than letters arranged in a particular order to convey an idea.

Zappa, a well-known atheist, approached the issue from a perspective I share. My objection to any theological system that forbids certain words from being spoken—what is traditionally called blasphemy—has always been strong. I find the creation and exercise of such power offensive. Here, I am not using “offensive” in the sense of hurtful words; rather, I find it offensive when systems restrict people’s freedom to speak. I find it offensive because it is illiberal and totalitarian.

The theological concept of blasphemy has been secularized: the same logic now governs prohibited social words, where uttering them—especially depending on who speaks—can trigger sanctions. This phenomenon shatters the illusion of presumed power. The real power structure is revealed when people find themselves on the disciplinary end of this linguistic control system. This is a situation of inequality; liberty is manifest when everybody enjoys equal access to words to express their thoughts.

It takes a lot of courage, I know, but we should collectively refuse to participate in a system that punishes people for uttering words and should actively work to dismantle such punitive mechanisms. It is not as if we don’t have the tools to wage this fight. The First Amendment to the US Constitution can be understood as a recognition that power structures have historically used punishment for certain forms of speech as a tool of authoritarian control. The Framers rebelled against that power. To allow a system of linguistic control is fundamentally at odds with the free and open society envisioned in American jurisprudence.

Image by Sora

Does Religious Liberty Permit Extreme and Primitive Religious Practices?

A post circulating on X claims that Japan is hostile to Islamic burial practices and that these practices are effectively banned. The claim is not entirely accurate. However, Islamic burial customs indeed face significant constraints in Japan. The post frames the issue as a suppression of religious liberty. My contribution to these threads—posed as a rhetorical question—is whether there are legitimate limits on religious freedom. Of course there are. However, before explaining why, I would like to outline Islamic burial traditions and the current situation in Japan.

Islamic tradition requires burial. It strongly prefers that the deceased be buried as soon as possible—ideally within 24 hours and traditionally before sunset if death occurs earlier in the day. Embalming is generally strongly discouraged or outright prohibited, and cremation is strictly forbidden. In Japan, however, cremation is the overwhelmingly dominant practice (99.8 percent of corpses are cremated).

A small number of Japanese cemeteries accept Muslim burials, but they are few, often far from major Muslim enclaves, and sometimes prohibitively difficult or expensive to access. When local Muslim groups attempt to establish new cemeteries, they frequently encounter strong local resistance based on concerns about cultural identity, groundwater contamination, and property values. As a result, proposed cemeteries are routinely canceled.

Japanese burial grounds (Source: Gareth Jones)

The issue, then, is less one of explicit state prohibition than of de facto exclusion resulting from administrative hurdles, community opposition, and cultural norms. In practical terms, Muslims in Japan face significant obstacles to securing a burial that aligns with their faith—an ongoing problem (for them, at least) even without a formal national ban.

My rhetorical question to posters is whether they believe it would constitute an infringement of religious liberty for Japan (or countries most anywhere in the world, for that matter) to prohibit funerary practices involving endocannibalism—anthropologists’ term for the ritual consumption of members of one’s own community as part of mortuary rites. Such practices were not acts of hostility but expressions of cosmological belief, mourning, and reverence for the dead.

This is not a theoretical scenario. Various societies around the world have incorporated ritual cannibalism into their treatment of the dead, viewing it as a compassionate means of honoring the deceased, maintaining spiritual continuity, and strengthening social solidarity.

As an anthropology minor, I took an entire course on cannibalism taught by Dr. Marilyn Wells, whose fieldwork spanned Central America, East and West Africa, and Papua New Guinea. Her lectures and course materials were fascinating. When we reached the topic of endocannibalism in funerary rites, I remember clearly thinking about multiculturalism and whether Western nations should tolerate the practice in the name of religious liberty. Cannibalism is often my go-to example when testing the limits of religious freedom.

One of the best-known examples is the Fore people of Papua New Guinea, who practiced funerary cannibalism into the mid–twentieth century. For the Fore, consuming parts of the deceased preserved the person’s auma—spiritual life force—within the kin group. The auma, the source of vitality, contrasted with the aona, the physical body. Endocannibalism was the consumption of corpses, not the living, and served to protect the community from the kwela, a dangerous spirit believed to linger after death. The practice was eventually suppressed after researchers linked it to kuru, a fatal prion disease.

Concerns about groundwater contamination associated with burial—including those linked to specific Muslim burial methods—are entirely rational, and Japan is within its rights to impose restrictions for public health reasons. But more is at stake. The Japanese have the right to preserve their cultural practices within their own country—a right one might expect cultural relativists to defend.

Yet, within contemporary progressive discourse, the cultural norms of advanced societies such as European and East Asian nations are often treated with contempt and deemed unworthy of protection. Meanwhile, primitive cultures are presumed to possess an absolute right to preserve their traditions, even when doing so imposes significant burdens on the host society. Resistance to extreme and primitive religious practices is thus framed as a violation of the very religious liberties to which advanced populations are expected to subscribe.

The Fore are not the only example. The Wari’ of the Brazilian Amazon also practiced funerary cannibalism. For them, consuming the dead was the most respectful mortuary practice; in contrast to Muslims, burial was considered degrading and emotionally harmful. Anthropologist Beth Conklin has written extensively about how Wari’ mortuary cannibalism expressed compassion, reinforced emotional bonds, and strengthened solidarity among survivors.

Various Melanesian groups likewise practiced funerary cannibalism, often as part of cosmological frameworks that guided the spirit of the dead or preserved aspects of their essence within the lineage. In the Amazon Basin, groups such as the Amahuaca and neighboring peoples consumed parts of the body during mourning rituals. Certain indigenous Australian groups historically ingested charred bone powder as a way of symbolically incorporating the spirit of the deceased.

A recurring theme in my cannibalism course, and more broadly in the anthropological and sociological curricula, was the cultural relativist viewpoint, which I can accurately convey: although foreign or unsettling to outsiders, these practices are deeply meaningful to the cultures in which they appear. If one asks why this should matter, that is the right question. Moreover, why should cultural relativism be anything more than an epistemological problem and a methodological approach? It does not follow that it should also be a moral standpoint.

Ultimately, the question comes down to this: Why should the practices of foreign cultures impose burdens on host countries? What moral obligation does a society have to tolerate religious rituals that are profoundly alien to its own traditions—especially when these practices compromise social cohesion, disrupt cultural norms, and threaten public health? While religious liberty is a vital principle, it is not absolute. Host societies have every right—and indeed, a responsibility—to set reasonable limits that protect the culture, values, and welfare of their citizens.

Moreover, tolerance must be reciprocal: just as outsiders should respect the laws and norms of the countries they inhabit, host nations are justified in shaping the boundaries of acceptable practice, particularly when the stakes involve both public safety and the preservation of cultural integrity.

If foreign culture-bearers wish to continue their traditional practices, then they need not enter those countries that do not tolerate extreme or primitive rituals. They can stay where they are. We should prefer that they do. And if they are not allowed to practice their rituals where they are, for example, because their people have been integrated into another superior, more advanced group, then the following generations can thank those who stopped them.

The Virtue of Being Wrong: How Humility Strengthens Thought

When a person discovers that they are wrong about something—especially something of significance—they ought to ask a further question: What else might I be wrong about? A single error can be dismissed as an isolated lapse (everybody makes mistakes or misses something), but recognizing a substantial error should naturally prompt a broader self-examination.

Beneath that lies an even deeper question: Why was I wrong? For it is the “why” that reveals whether there is a flaw in one’s thinking, methods of reasoning, or habits of evaluating evidence. Identifying the cause of an error helps prevent the same underlying problem from quietly generating future mistakes.

Most people do not reach this deeper question until they have first checked whether they might also be wrong about something else (of course, some people never reach the deeper question). If they uncover a second significant error, or several, and still fail to ask why their judgments are misfiring, the issue is no longer a simple mistake—it becomes a matter of cognitive integrity. A pattern of errors suggests that what requires scrutiny is not just one’s conclusions, but one’s intellectual process itself.

Realizing one was wrong about a single belief may be unremarkable. Realizing one was wrong about two or more important matters calls for a harder look at the structure of one’s thinking. Multiple false beliefs rarely occur by accident; more often, they signal a deeper problem in how a person forms, organizes, and justifies their views.

As mistaken beliefs fall away, the result can be a profound reordering of one’s worldview. But it may also result in recovering deep principles. Indeed, the ability to admit one is wrong, and to see that there is a reason one has arrived at wrong conclusions, itself points to deeper principles of which one may be unaware or which one may have forgotten.

Epistemology concerns the nature and justification of knowledge, while ontology concerns what fundamentally exists or is true. The shift I am describing may ultimately reshape both epistemology and ontology: not only how a person acquires and evaluates knowledge, but also what he believes to be true about reality. When a person confronts the roots of his own errors, both dimensions of his thinking may undergo significant revision. At the same time, as I have suggested, it can result in the reclamation of a deeper understanding.

If the latter, then what explains the buried principle or lost understanding? Affinity and ideology play a central role here. By ideology, I mean a way of thinking that systematically distorts a person’s epistemological approach—assuming, of course, that a rational and undistorted approach is possible (which I believe it is, since I believe in objective truth). Ideology does not merely mislead someone about particular facts; it warps the framework through which facts are assessed. The warping corrupts not only the content of one’s knowledge but also one’s cognitive integrity. The individual’s sense of intellectual honesty, his standards for evidence, and his capacity for self-correction can all erode under the weight of an ideology that supplies ready-made answers and shields its adherents from uncomfortable truths.

Partisan loyalty and tribal affinity also play roles in keeping people away from reason and a clear assessment of evidence—even the evidence itself.

In 2018, when I discovered I was wrong about systemic racism in American criminal justice, I wondered what else I was wrong about—and why. I began taking long walks, during which I reconsidered the things I believed, asking which of those beliefs were worth keeping and which needed jettisoning. Critical self-examination led to a reflection on the deeper structure of belief-formation. This led me to recover something professional development had compromised: common sense. Of course, men can’t be women. There is no science there. The process also restored my commitment to women’s rights. What was I thinking? That was the problem. I wasn’t. I was following.

This is where humility becomes so important to intellectual development. Humility is the cornerstone of personal growth and meaningful relationships because it allows us to acknowledge that we are not infallible. It’s okay to be wrong. It is not okay to deny oneself the capacity to admit it. It is not fair to others. And it is unfair to oneself.

Recognizing that humans can be wrong requires courage, self-awareness, and a willingness to confront our own limitations. When we admit our errors, we not only correct misunderstandings but also foster trust and openness with others. Of course, we depend on others to extend charity in such situations. Alas, one discovers that some do not wish us to be wrong, especially when they relied upon us for their appeals to authority. If the authority changes his mind on some matter dear to others, it cannot be that he corrected an error; he must have become misguided in his judgment. The error is what they want to continue believing in. They lose faith in something they should never have had faith in: the infallibility of others.

Humility transforms mistakes from sources of shame into opportunities for learning, for getting closer to the truth. By embracing the possibility that one’s perspective may be flawed, a man cultivates empathy, deepens his understanding, and creates space for collaboration, ultimately becoming a wiser and more compassionate individual. Those around him with the same humility can grow with him, or at least acknowledge that opinions can differ. The stubborn can condemn the man for “switching sides.” But that’s their problem, not his.

Image by Sora

Trump and the Battle for Western Civilization

The media are reporting that President Donald Trump’s friendly Oval Office meeting with the soon-to-be mayor of America’s largest city, Zohran Mamdani, on November 21 roiled parts of the MAGA base. The New York Times was somewhat less optimistic in its assessment. “There was one moment when Zohran Mamdani seemed like he might have bit off a little more than he could chew by making his pilgrimage to the lion’s den that is President Trump’s blinged-out Oval Office,” Shawn McCreesh writes. “The 34-year-old mayor-elect of New York was pressed by a reporter if he thought his host, who was sitting about four inches away, was really ‘a fascist.’ How terribly awkward.” Indeed. “But before Mamdani could get out an answer,” McCreesh continues, “Trump jumped in to throw him a lifeline. ‘That’s OK, you could just say, Yes,’ Trump said, looking highly amused by the whole thing. He waved his hand, as if being called the worst term in the political dictionary was no big deal. ‘OK, all right,’ Mamdani said with a smile.”

New York City mayor-elect Mamdani at the side of President Trump in the Oval Office, November 21, 2025

The interpretation of this moment is easy to get right. Contrary to what progressives desperately want the public to believe, Trump is highly intelligent, and he played Mamdani like a fiddle. Being smeared as a fascist doesn’t play well today, not just because of overuse but because calling a liberal businessman from Queens a fascist is so inaccurate that it draws an eye roll from those who hear it misapplied. Their thought is: “There you go again.” By playing it cool, seated at the Resolute Desk, an imposing figure even while sitting down, Trump made Mamdani look small and insignificant. He let Mamdani, his arms folded in front of him like a schoolboy, do his thing: talk without saying anything. What anybody prepared to accept reality saw was the mayor-elect bending the knee to the President of the United States. Trump gave Democrats nothing. His strategy was obvious: when Mamdani fails, the Muslim’s sycophants won’t be able to talk about a confrontational moment at the White House.

What observers didn’t know was that Trump had something in his pocket. Just 72 hours later, the White House gave supporters (and most Americans, if they understood the situation) something they had long sought: an executive order initiating the designation of Muslim Brotherhood chapters as foreign terrorist organizations: “Within 30 days of the date of this order, the Secretary of State and the Secretary of the Treasury, after consultation with the Attorney General and the Director of National Intelligence, shall submit a joint report to the President, through the Assistant to the President for National Security Affairs, concerning the designation of any Muslim Brotherhood chapters or other subdivisions, including those in Lebanon, Jordan, and Egypt, as foreign terrorist organizations.”

Founded in Egypt in 1928, the Muslim Brotherhood is a transnational Islamist movement that has influenced Islamist organizations and parties worldwide. The Brotherhood plays a chief role in the Islamization project. Trump’s EO allows the federal government to investigate, among other things, the Brotherhood’s public relations firm, the Council on American–Islamic Relations (CAIR). Founded in 1994, CAIR describes its mission as advocating for Muslim Americans, fostering understanding of Islam, and protecting civil liberties. Unity & Justice Fund, CAIR’s super PAC, donated thousands of dollars to New Yorkers for Lower Costs, one of the main PACs backing Mamdani. Mamdani is the smiling face of the Islamization project.

With this EO, Trump is signaling significant movement against the project. But he is doing much more than this. Indeed, even before the November 24 order, following his meeting with Mamdani, Trump ended Temporary Protected Status (TPS) for Somalis in Minnesota. On November 27, the President announced a review of green cards for Afghans—along with holders from 18 other “countries of concern.” The review was triggered by the targeted shooting on November 26 of two National Guard members, who were ambushed near the White House by an Afghan refugee. The shooter, Rahmanullah Lakanwal, a 29-year-old Afghan national who had previously worked with a CIA-backed paramilitary unit in Afghanistan, was one of tens of thousands imported to the United States by the Biden regime, an effort organized by then-DHS Secretary Alejandro Mayorkas.

Readers will recall that Trump has confronted Islam before. In a January 2017 essay, Executive Order 13769: Its Character and Implications, I argued that, if democracy and liberalism are to prevail, “the state must preserve secular values and practices, and every person who enjoys the blessings of liberty should dedicate himself to ensuring the perpetuation of this state of affairs. A liberal democracy must proceed based on reason.” Therefore, I continued, the conversation about Trump’s actions in 2017 should be grounded in “an understanding of the unique problem Islam presents to human freedom, as well as an examination of the European experience with Muslim immigration.” I noted that “[t]he problem that many on the left fail to consider is the corrosive effects of an ideology antithetical to the values and norms of Western society—government, law, politics, and culture—and the need for a policy that deliberately integrates Muslims with these values and norms, as well as promotes these values in the Islamic world.” I saw in the reaction to Trump’s order “an opportunity to have a broader conversation about Islam and immigration.”

Trump’s actions have Steve Bannon of the podcast War Room embracing the late Christopher Hitchens’ warning about Islam: that the Islamization (or Islamification) of the West is an existential problem. Atheists and liberals have long warned conservatives about the Islamization project, and I think I speak for many of us when I say that we welcome conservatives to the fight. We don’t have much time to turn things around, however, so the more robustly Republicans address the problem, the better (and they’d better put a strategy in place before the 2026 midterm elections). Indeed, America (and the West more broadly) should move aggressively to contain Islam in the same way the West contained communism during the Cold War. Just because Islam calls itself a religion is no reason to throw open the doors of Western civilization to Muslims. After all, as Hitchens noted, it’s not as if communism weren’t also effectively a religion; Islam, a species of clerical fascism, represents no less a threat to the internal security of the nations across the trans-Atlantic space.

I addressed this problem in recent essays (see Defensive Intolerance: Confronting the Existential Threat of Enlightenment’s Antithesis; Revisiting the Paradox of Tolerating Intolerance—The Occasion: The Election of Zohran Mamdani; Human Nature and the Limits of Tolerance: When Relativism Becomes Nihilism), as well as in a May 2019 essay, Threat Minimization and Ecumenical Demobilization. In these essays, I warn the West about the extension of the ecumenical spirit—originally aimed at creating understanding and unity across Christian sects—to fellowship with Muslims. Christianity and Islam are radically different ideological systems, and ignoring this fact prepares populations for what Canadian psychologist Gad Saad identifies as suicidal empathy. This progressive desire is, for many of the rank and file, an instantiation of misguided tolerance. For elites, it is a strategy of denationalism and the managed decline of the West.

Christianity is about charity, love, tolerance, and many other good things. But many Christians have forgotten about, or never learned, the history of Islamic conquest and the reality that our Christian ancestors took up swords and saved Europe from the fate suffered by the Middle East and North Africa, formerly thriving Christian centers of the world and now primitive hellholes, where women are treated as second-class citizens and the fate of hundreds of millions has fallen into the hands of clerics working from a plagiarism of Judeo-Christian texts that twists those scriptures into a totalitarian system. It was Christians, including militant monks, who repelled the Muslim barbarians with violence, drove them from Europe, and secured the future for Christianity. Had they not acted when they did, there would be no Europe. No Europe, no America. No Enlightenment. No human rights. Only clerical fascism. Tragically, modern Christianity has made Nietzsche’s critique of the religion a reality by rejecting the militant side of the faith and suppressing the human instinct for self-preservation (see Republican Virtue and the Unchained Prometheus: The Crossroads of Moral Restraint and the Iron Cage of Rationality).

As I noted in those essays, Muslims have now added to the tactic of military aggression the mass migration of Muslims to the West and the progressive Islamization of the trans-Atlantic space. The tactic of migration is a strategy to conquer the civilized world from within. The softest parts of Christianity, strategically exploited by transnational elites, continue in the progressive attitude that empathizes with Muslims and the barbarian hordes, while rejecting the militancy necessary to repel the existential threat Islam represents to human dignity and freedom. The failure of Westerners to take up both sides of Christianity—the soft (selectively tolerant) and the hard (militant) sides—portends disaster. At the same time, what militancy remains, progressives have aimed at their fellow Westerners. We must not be shy about calling things what they are; the left has become a fifth column in the West, working with our enemies to bring down Western civilization.

Reflecting on this, I have lost confidence in the United Nations and the efficacy of international law to defend freedom and human rights. When the United Nations was founded, it was established on Western values of international cooperation and law. The Universal Declaration of Human Rights emerged from this framework. But not all member states endorsed it in substance, even if they formally signed onto it. Moreover, Muslim-majority nations developed their own declarations of rights, most notably the Cairo Declaration on Human Rights in Islam, which is founded on Sharia rather than the Enlightenment principles that gave rise to democratic republicanism and human rights. As a result, the UN includes a wide array of states whose commitments to democracy and rights are not aligned with the Western standards that originally shaped the institution. These Western standards are not arbitrary; they are the product of reason in the context of European culture, made possible by the Protestant Reformation and the broader intellectual currents of Christian civilization.

This matters when we consider cases such as Israel (see my recent essay How Did the Roles Get Reversed? The Moral Confusion Surrounding Israel and Gaza, and embedded links). If the UN or its agencies are asked to adjudicate whether Israel is responsible for genocide after the massacre of Jews in Israel on October 7, 2023, the judgment would ostensibly rest on the legal definition of genocide—a Western juridical concept. In practice, however, the judgment rendered would be heavily influenced by the political alignments and value systems of states that do not share the underlying philosophical commitments from which those legal definitions arose. Many of these states are openly hostile to Israel and to the West. Perhaps the UN won’t make this determination. But one has reason to worry it will. (And then what?)

When reflecting on this dynamic, it is easy to think of the contrast presented in Star Trek’s construct of the United Federation of Planets. Starfleet included many different species and cultures, but they were all integrated into a framework of shared values rooted in Enlightenment-style principles and liberal norms—equality, reason, tolerance, universalism. Diversity existed, but it was anchored in a common civilizational ethic. In contrast, groups like the Klingons and Romulans, who did not share these principles, remained outside the Federation and were recurring sources of conflict because their worldviews diverged so fundamentally. I raise the matter of a 1960s sci-fi TV show and its spin-offs because it shaped the beliefs of many Americans who today contemplate the world situation. Because the antagonism is portrayed as occurring out in space, they do not see the Klingons and Romulans as analogs to Muslims.

However, the contemporary terrestrial situation more closely resembles the dark side of that fictional interstellar order. The real Earth is divided by profoundly different religious and civilizational traditions, and there is no universally accepted philosophical foundation uniting all nations. Had the West colonized the world and brought it around to the principles of individualism and secularism, it would be a different matter. Yet even though the West failed to accomplish this, the mere desire is portrayed as imperial ambition. The UN project to include every state in a single system of international cooperation by tolerating the cultures of barbaric countries and regions has undermined its original purpose. Instead of a mechanism for upholding universal principles, it has become an arena in which illiberal, non-Western, and even totalitarian regimes can leverage their numbers to dilute, reinterpret, or subvert the values the institution was created to advance and defend.

Last night, I revisited an interview with Christopher Hitchens (Conversations with History, UC Berkeley’s Harry Kreisler) in which he expresses optimism about the role of international law in holding member nations to account based on a universal standard of treatment. His argument is similar to arguments advanced by pro-Arab intellectuals Noam Chomsky and Norman Finkelstein, who insist on putting Israel’s fate in the United Nations’ hands. However, the validity of their argument depends on a uniformity of values across the planet that aligns with the underlying principles upon which a just international law must rest. It should be obvious that this is not the case. Given this, one must ask whether justice is what these intellectuals desire or whether their sentiments are driven more by hostility toward the Jewish state.

The reality of the world we live in makes such uniformity impossible: China, with its totalitarian ambitions and radically different conception of the world, grows more belligerent by the day, and the same is true of Islam and the rest of the Third World. The universalism desired by those who established the United Nations and further developed the system of international law presumes the hegemony of the Western worldview. There is no such hegemony. Only in a fantasy world like Star Trek could such a situation exist. At this point, we can’t even count on Europe to uphold the foundational values that support the endeavor. Europe is well into its Islamization phase, and the pessimistic side of me has trouble believing that the continent hasn’t passed the point of no return.

We must therefore ask whether the United Nations is something worth continuing in its present form. How can we allow barbarian cultures and corrupt elements of the West to determine the fate of mankind? At the very least, how can we leave the fate of America to such madness? The situation demands a comprehensive rethink. In the meantime, Trump is doing the right thing: halting mass immigration and reviewing the status of those who have entered our country.

* * *

Because of all the anti-Western and anti-white rhetoric the occasion of Thanksgiving has provoked, I want to close with a couple of historical notes. For it was not just the false claim of “stolen land” that progressives rehearsed (see Gratitude and the Genocide Narrative: Thanksgiving and the Ideology of Historical Responsibility), but the African slave trade. “Never forget,” the advocates lecture. I’ll take them up on that.

First, the Asante (an ethnic group of modern-day Ghana) were deeply involved in the slave trade, particularly from the seventeenth through the nineteenth centuries. Readers may remember that Democrats wore the ceremonial garb of the Asante, the Kente cloth, during the BLM riots, a large-scale uprising against the West and white people triggered by the overdose death of convicted felon George Floyd while in the custody of Minneapolis police.

Second, white Europeans (more than a million of them, by scholarly estimates) were enslaved in the Barbary States over several centuries. The Muslim slave trade—also called the Arab, Islamic, or Trans-Saharan slave trade—was one of the largest and longest-lasting systems of slavery in world history, spanning over 1,300 years, involving multiple regions and empires, and predating and outlasting the Atlantic slave trade. In fact, slavery continues in the Islamic world. I will say more about the Barbary States, in particular Tripoli, where open-air slave markets have been documented in recent years. I will then bring these closing remarks around to the point about religion and freedom.

During Thomas Jefferson’s presidency, the United States intervened militarily against the Barbary States—Algiers, Morocco, Tripoli, and Tunis—because these North African regimes sponsored piracy and the enslavement or ransoming of captured American and European sailors. For centuries, Barbary corsairs seized ships in the Atlantic and Mediterranean, forcing nations to pay tribute for safe passage. After the American Revolution, the US no longer had British naval protection, and American crews were increasingly captured. Earlier presidents agreed to pay tribute to the Barbary States, but Jefferson believed this was dishonorable and unsustainable.

In 1801, when Tripoli demanded increased payments, Jefferson refused, prompting the ruler of Tripoli to declare war. Jefferson responded by sending the US Navy to the Mediterranean, launching the First Barbary War (1801–1805). The conflict included naval blockades, ship-to-ship battles, and the famous 1804 raid led by Lieutenant Stephen Decatur to destroy the captured USS Philadelphia. The war ultimately forced Tripoli to renounce future tribute demands and release American captives, marking the first major overseas military campaign in US history and establishing America’s willingness to confront piracy and state-sponsored enslavement abroad.

As I noted in a December 2023 essay, Rise of the Domestic Clerical Fascist and the Specter of Christian Nationalism, the famous clause emphasizing the secular nature of the US government appears in the Treaty of Peace and Friendship with Tripoli (ratified in 1797), whose assurances were echoed in the 1805 Treaty of Peace and Amity that ended the First Barbary War. “As the Government of the United States of America is not, in any sense, founded on the Christian religion,” Article 11 states, it “has in itself no character of enmity against the laws, religion, or tranquility, of Mussulmen.” This provision was intended to reassure Tripoli that the US, though largely populated by Christians, was not a religiously motivated state and had no intention of spreading Christianity through its foreign policy.

The inclusion of Article 11, however diplomatically strategic, testifies more profoundly to the American principle of separating religion from government, even in international relations, and is often cited as evidence that the US government was officially secular even while its citizens were predominantly Christian. I have invoked this clause many times in my insistence that the United States is not and should not become a theocratic state.

However, America’s adversaries do not advance such a principle; Islamic countries are not secular even while their citizens are predominantly Muslim. If they did, it might be reasonable to tolerate Muslim immigrants, as they would have been socialized in a secular culture that respected other religious faiths (or, in my case, those who have no faith at all). However, as I have explained many times, since humans are culture-bearers, those bearing cultures incompatible with secular ethics are not suited to reside in America. They should therefore be barred from entering the country.

Whether we are a Christian nation is a point reasonable people can debate, but those who believe all laws derive from Islam are a priori unreasonable people. No discussion is possible with such people. Therefore, the rational policy is to keep those animated by irrational cultures from entering and subverting Western institutions.

Gratitude and the Genocide Narrative: Thanksgiving and the Ideology of Historical Responsibility

“We didn’t land on Plymouth Rock. The rock was landed on us.”—Malcolm X

“Who controls the past controls the future. Who controls the present controls the past.”—George Orwell

In A People’s History of the United States, first published in 1980 and widely adopted in high schools, Howard Zinn argues that all history-writing is shaped by choices, just as mapmaking is. A cartographer decides what to enlarge, what to shrink, and what to leave out entirely; those decisions create a perspective, not a neutral mirror of reality. Historians, Zinn contends, do the same (but more than that, as we shall see): what they highlight or omit reflects ideology, political interests, and values (and, I must add, tribal affinity). He uses the analogy to insist that objectivity in history is impossible, because the historian must always select from an overwhelming number of facts—and those selections inevitably reflect a standpoint, usually that of governments, elites, or victors.

The analogy is true enough of science as well, and there it crashes on the shores of a necessary truth: selection is unavoidable in any inquiry, which makes the observation trivial rather than an argument against objectivity. Yet it has proved useful to those who claim that truth is determined by power and standpoint, and that a marginal standpoint can legitimately revise history in the pursuit of power—a hallmark of postmodernist thought.

Below, I quote Zinn at length so readers can see exactly the perspective and politics I am criticizing in this essay—a politics I once endorsed myself, for example, in a 2012 talk to educators, A Culturally Competent and Democratic Pedagogy.

“To state the facts,” Zinn writes, “and then to bury them in a mass of other information is to say to the reader with a certain infectious calm: yes, mass murder took place, but it’s not that important—it should weigh very little in our final judgments; it should affect very little what we do in the world.”

He then deploys the mapmaker analogy:

“It is not that the historian can avoid emphasis of some facts and not of others. This is as natural to him as to the mapmaker, who, in order to produce a usable drawing for practical purposes, must first flatten and distort the shape of the earth, then choose out of the bewildering mass of geographic information those things needed for the purpose of this or that particular map.”

Zinn concedes that selection, simplification, and emphasis are inevitable for both cartographers and historians. But, he insists,

“My argument cannot be against [them],” he writes. “The map-maker’s distortion is a technical necessity for a common purpose shared by all people who need maps. The historian’s distortion is more than technical; it is ideological; it is released into a world of contending interests, where any chosen emphasis supports (whether the historian means to or not) some kind of interest, whether economic or political or racial or national or sexual.”

The ideological interest, Zinn continues, is never openly expressed the way a mapmaker’s technical interest is obvious. Instead, traditional history is presented “as if all readers of history had a common interest which historians serve to the best of their ability.” This is not intentional deception; historians have simply been trained in a society that treats knowledge as a technical problem of excellence rather than as a weapon in the hands of contending classes, nations, and races.

At the core of Zinn’s project is the smuggling in of a primitive ethic: that the living are responsible—not for historiography, but for the actual deeds of past generations. Otherwise, why would any historian’s “ideological” rendering of the past matter at all? If traditional historians distort history to evade collective, intergenerational responsibility, then the responsible progressive historian must rediscover or emphasize the facts they omit or downplay. The entire endeavor only makes sense if one first accepts that collective, intergenerational responsibility is something the living ought to bear—and bear in a way that justifies altering present arrangements.

An exercise in guilting the living

I reject that premise, as I made clear on Thanksgiving 2021 in Awokening to the Meaning of Thanksgiving. “Thanksgiving is about the living. It’s not about corpses—except for the dearly departed we remember together,” I wrote. “Those who want everybody to dwell in a narrative of collective guilt have way too much influence in today’s world. We need to be more forceful in our insistence that they sit the fuck down.”

I put the matter bluntly, I know. I was frustrated. I still am. Every time I hear a land acknowledgment at a ceremony or meeting, I sigh and roll my eyes. If I were inclined to be more disruptive, I would say something. Instead, I redirect the frustration into essays.

Two years later, in Giving Thanks Amid Uncertainty and Hopeful Developments, I wrote:

“I hope I never have a day in my life when I won’t or can’t be thankful for living in the greatest republic that ever existed—the United States of America. Although I am not responsible for the actions of those now dead and gone, I can be thankful for my ancestors who founded, built, and defended this great nation. I worry about the future, though, not only because of the threats abroad, but also because of the rot inside. The enemies of America are in charge of the machinery of the republic. I’m not religious, but I know many of you are and will pray for America. I’m thankful for that, too. We need more than prayers, though. We need action.”

(We took that action in November 2024 and returned a transformational leader to the White House.)

What I want to do in the remainder of this essay—while I wait to celebrate the day with my nuclear family—is recover key facts about Thanksgiving from manufactured forgetting and show that the claim that the holiday celebrates the genocide of indigenous peoples is a recent, thoroughgoing political reinterpretation, one that emerged long after the holiday’s traditions were firmly established in American culture.

The facts of the case are objective, not ideological. Thanksgiving developed not as a commemoration of conquest but as a moral and religious day of gratitude, shaped far more by nineteenth-century Protestant culture and the exigencies of the Civil War than by early colonial events—though those events supplied moments later generations felt worth remembering.

The colonial antecedents lie in seventeenth-century New England harvest celebrations. The best-known—the 1621 Plymouth gathering—was a modest festival attended by both Pilgrims and Wampanoag during a period of alliance and mutual dependence. It was neither intended nor understood at the time as a celebration of dispossession or violence.

When Malcolm X, in his 1963 Message to the Grassroots speech, uttered the phrase quoted at the top of this essay, he could not possibly have been talking about Africans. There were no African slaves at Plymouth (or for decades after). He was deconstructing the symbolism of Plymouth Rock as the founding of a great and peaceful nation by misleading his audience about the history of America—just as journalist Nikole Hannah-Jones and The New York Times Magazine would do decades later with the 1619 Project.

The national holiday we observe today, however, owes its form to Abraham Lincoln’s 1863 proclamation, issued amid a civil war, designating a day of gratitude, prayer, and unity. Whatever nostalgic connection we retain to the Plymouth story we learned in grade school (complete with hand-traced and crudely-decorated construction-paper turkeys stapled to corkboards), modern Thanksgiving has no connection to the Indian Wars or any narrative of conquest. To teach children that it does is educational malpractice—and malpractice in American public education is as rare as medical injury.

The association of Thanksgiving with genocide is a post-1960s critical narrative born of the convergence of American Indian political mobilization (AIM and related movements), broader progressive civil-rights activism, and the rise of postcolonial, revisionist historiography rooted in postmodern corruption of our sense-making institutions. Beginning with the 1970 National Day of Mourning, activists reframed Thanksgiving as a myth that obscures catastrophic population loss, displacement, and cultural destruction. For the anti-American activist, the holiday now symbolizes the start of a tragic trajectory rather than communal gratitude. To them it means the American project is invalid.

In this telling, the American story is exceptional in terms of ethnic oppression and genocide. Indeed, this is the only kind of American exceptionalism allowed—if one wishes to avoid being smeared as a white supremacist.

The 1621 gathering itself is not a myth; it happened. But turning Thanksgiving into a day of mourning is a political act of repurposing—a classic move of woke ideology, which demands that every American story be reexamined through the lens of power, race, and structural injustice.

(When critics remind me that “woke” is an old word whose meaning has changed, they are half-right: its first mainstream print appearance was in 1962, urging black Americans to “stay woke” to racial injustice. The core purpose, however, has not changed: to make permanent the perception that America is fundamentally unjust.)

The reinterpretation of the holiday as a symbol of genocide thus represents an intentional political shift in cultural sensibilities rather than the uncovering of a hidden historical truth. But the truth of Thanksgiving was never hidden—any more than the history of slavery was hidden. The trope of “hidden history” is itself a rhetorical device for manufacturing historical forgetting.

The youth of today are taught history not as an informative exercise, nor even to educate the developing person in historiographical discernment and the importance of understanding biography and history together; rather, since the 1960s, history education has been pitched as the liberation of secret truths concealed by oppressors—white cisgendered Christian supremacists—in defense of an imagined status quo that manufactured forgetting is meant to valorize.

Many of us who grew up before the woke era experienced Thanksgiving as a day of family and reflection (even an atheist like me could participate culturally and feel loved), unburdened by subversive political desire. I say that so younger readers may pine for a world where not everything is politicized, where the woke gaze is diminished.

My generation (born 1962) always knew about the fate of indigenous peoples. We were horrified by aspects of that history, but we recognized it as history: deeds done by the dead, for which no living person bears responsibility—even if they inherited the spoils of conquest and colonization.

America is not exceptional in this way: World history is the story of conquest and colonization; American Indians themselves arrived in worlds shaped by earlier conquests.

Progressive history revises the past in order to delegitimize the present on the fallacious premise that each generation is responsible for the sins of its predecessors. That is a primitive ethic, one that the modern world rightly buried. It should never have been resurrected from its grave.

The future is open, but it is also constrained by the present order—some elements of which are worth preserving. When in Nineteen Eighty-Four Orwell has O’Brien, an Inner Party member, invoke the Party slogan about the past, present, and future, he highlights for readers the power of shaping history to influence society and maintain authority.

Those in power, or who are in a position to capture it, manipulate collective memory by censoring and rewriting historical events to justify their ambitions. If the ruling class or some other determined group can convince people that past events occurred in a certain way, then they can shape beliefs, values, and expectations—and this control shapes future behavior to align with their interests.

Postmodernists are right about this—the one truth they cannot deny: control over historical narrative is a tool for political domination, as people’s understanding of the present and their vision of the future are deeply influenced by what they are taught about the past. For them, it’s all about discursive power (which depends on corruption and command of society’s institutions). For those who care about facts as really-existing things, it’s about truth and justice. This is why it is vital to the life of the free republic to prevent its youth from being taught to feel guilty about their nation’s past.

(For further reading on this topic, see my July 2021 essay The Zinn Effect: Lies Your Teachers Tell You.)

Why Chicago Mayor Brandon Johnson is Full of Shit

Have you seen this yet?

The chart below illustrates why the mayor of Chicago, Brandon Johnson, is full of shit. He tells his constituents that America will never incarcerate its way out of violent crime. It is true that no social system can completely eliminate violent crime. The best that can be done by a society with dense urban populations, widespread idleness and welfare dependency, fractured family structures, policymakers and politicians in power who promote a culture of resentment and violence, and officials who stand down law enforcement while returning lawbreakers to the street is to reduce crime and violence to tolerable levels. The most effective way to do that? Incarceration.

Chart based on FBI and Bureau of Justice Statistics data.

Incarceration doesn’t reduce violent crime by deterring criminals from preying on the public or warring with one another. Deterrence requires more law enforcement officers on the street and the aggressive policing of the populations there. Incarceration reduces violent crime through incapacitation. If a society removes violent offenders from the streets, then those who cannot abide by the rules of a decent society are unable to commit violent crime against the public. This is logically obvious, and the empirical evidence confirms it, as shown in the above chart.

There is no other explanation for the drastic drop in crime associated with mass incarceration. Our society is neither more equal nor less impoverished than it was in the decades before the 1960s. Criminogenic conditions only increased in the period following the 1960s, which explains the drastic rise in crime since then. What exacerbated those conditions? Ghettoization; the vast expansion of the welfare state; mass immigration that idled millions of American citizens; and the practice of defining down deviance. Who is responsible for this? Corporations and their progressive operatives in the Democratic Party, along with Republican collaborators (RINOs).

Given the degree of violent crime in American society—largely the result of decades of progressive social policy that destroyed inner-city neighborhoods and demoralized the people living in them—mass incarceration has proven the most effective intervention if the goal is to make society safer and therefore freer. That should be the aim of anyone who claims to care about other people—especially those who profess that black lives matter. Unfortunately, the same party that for the most part created these conditions continues to perpetuate them for economic and political reasons, and that party remains a significant force at both the federal and state levels. That would be the Democratic Party.

Some people view mass incarceration as an indicator of unfreedom. But the relevant question is whether the deprivation of liberty is justified. Not everybody deserves to be free. Unfreedom is justified under the principle of just deserts: if one breaks the law, there are consequences, and the consequences should keep foremost in mind the safety of those who follow the law. It is the right of the lawbreaker to be punished for his actions. It is the right of the people to be protected from those actions. Some see demographic patterns in criminal justice as evidence of systemic racism. This may be true with respect to the policies that create and exacerbate criminogenic conditions, but it is not true of the institutions that must deal with the consequences of those policies. Demographic patterns in criminal justice reflect demographic patterns in serious criminal offending.

In the final analysis, the deprivation of liberty experienced by those who commit violent crimes is the result of both progressive policies and the voluntary actions of the offenders themselves. Those who abide by the law do not deserve to be victimized by those who do not. Regardless of social conditions, those who harm others choose to do so. Lawbreakers choose to break the law. Their victims—or those they are likely to victimize—have a legitimate expectation that a good society will use the most effective and immediate means available to enhance public safety. Incarceration is the most effective and immediate means to that end.

Politicians like Brandon Johnson (and JB Pritzker) do not operate from an objective, empirical standpoint. Not because they cannot—although Johnson is plainly a stupid man—but because they operate from an ideology that asks the public to imagine that demographic patterns in criminal justice are driven not by the demographics and patterns of crime but by systemic racism. This is a falsifiable proposition, and it has been repeatedly falsified. If rational and honest people are to reason objectively and scientifically, then ideologues like Johnson are among the worst politicians a city can elect. Yet citizens continue to elect them. Therein lies the deeper problem plaguing the blue city: widespread ignorance and ideological corruption among the populace.

Are there other ways to reduce violent crime? Yes. Among them: closing the borders; deporting illegal aliens; restricting public assistance to those who truly have no other means of support; and insisting that able-bodied Americans go to work. However, these measures must be pursued in tandem with aggressive law enforcement and incarceration. It will take decades to undo the harm Democrats have inflicted on American cities over the last seventy years. Given the depth of ideological corruption, partisan loyalty, tribal affinity, and imposed ignorance in this country—largely a consequence of progressive control over society’s sense-making institutions, e.g., public education—it is unlikely that citizens will be able to keep Democrats out of government and elect those who would rationally address these problems at the scale required to re-order society, restore public safety, and reverse the structural causes of criminogenic conditions (what one properly identifies as the evidence of systemic racism).

I will close by noting that the logic behind the reductions in violent crime between the mid-1990s and roughly 2014 is the same logic that explains why violent crime increased after 2014: the nation largely abandoned effective law-and-order policies. This was not accidental. Beginning around 2010, the mass media began promoting the myth of systemic racism and white supremacy. Wealthy individuals and organizations created and funded groups like Black Lives Matter, which persuaded millions that depolicing and decarceration were justified based on the false claim that law enforcement was inherently racist. This problem was made worse when, after winning the White House in 2020, Democrats opened the borders and flooded the United States with cheap foreign labor—an intentional action benefiting billionaires while disorganizing working-class communities and diminishing the life chances of American citizens.

The worsening conditions in impoverished inner-city neighborhoods are not the unintended consequences of well-meaning policy. The do-gooders are not doing good. Today’s situation is deliberate in the same way that criminal law defines and adjudicates intent and criminal culpability. Because of the way violent crime affects all of us, we are victims of a grand political crime perpetrated by the elite and their functionaries in the Democratic Party. As I have noted before, Republicans don’t run the blue cities. Unfortunately, congressional Republicans seem hesitant to act to stop the federal judiciary from undermining Donald Trump’s efforts to rein in violent crime.

How Did the Roles Get Reversed? The Moral Confusion Surrounding Israel and Gaza

Recent polling by Richard Baris (of Big Data Poll) shows that a large share of Americans—particularly younger voters, including many on the political right—believe that Israel committed genocide in Gaza. When asked, a plurality of registered voters (38.4%) believe “what Israel has done in Gaza amounts to genocide.” Less than 3 in 10 (29.0%) say it does not, and roughly one-third (32.6%) are unsure. Republican voters ages 18-29 agree by a margin of 43.5 to 36.2 percent. That margin widens significantly among the same age group that self-identifies as America First Republicans, with nearly 60 percent agreeing with the statement. Moreover, in every group except Republicans overall, Gazans drew more sympathy than Israel did, and even among Republicans, sympathy for Israel falls below 50 percent. More striking is that the group with the greatest sympathy for Gaza is young devotees of the America First movement. Note also the ambivalence of many respondents. The sample size of the poll exceeded 2,000. (For Baris’s report, see Poll: Sympathy for Israel Falls to Historic Low Among U.S. Voters.)


As someone well-informed about the conflict, with an in-depth understanding of the laws of genocide and war, I find these numbers troubling. They indicate that a large proportion of the American population does not understand the situation. However, as I will come back to at the end of this essay, they suggest something more disturbing: that many Americans hold Israel to a different standard than they do other nations. Assuming, charitably, that these numbers mainly reflect widespread ignorance of genocide law and of a nation’s permissible response when attacked, it is important to state that the belief that Israel perpetrated genocide in Gaza misinterprets both the legal meaning of genocide and Israel’s response to the events of October 7, 2023.

On the matter of genocide, the crime is defined by its motive: the intent to destroy a national, ethnic, racial, or religious group, in whole or in part. Israel did not carry out its operations in Gaza with this motive. Israel’s action in Gaza was defensive. Israel was responding to an attack by a belligerent entity on Israeli soil. Indeed, it was responding to a genocidal act, not perpetrating one. To explain this, I will draw a parallel between the Israeli-Gazan situation and Allied operations conducted against Nazi Germany during WWII. Allied actions against Nazi Germany will serve as the moral measuring rod for judging the appropriateness of Israel’s actions.

Under Nazi rule, Germany pursued a genocidal agenda, seeking to eliminate the Jews from German society and from Europe altogether, with plans to do the same in the Middle East (see Jew-Hatred in the Arab-Muslim World: An Ancient and Persistent Hatred). Following this genocidal aggression and Germany’s broader assault on Europe, the Allies unleashed a campaign of overwhelming force on German cities—Berlin, Cologne, Dresden, Frankfurt, and other urban centers—reducing them to rubble. The devastation, when viewed in photographs today (easily obtained by searching Google images, some of which appear in my essay The Danger of Missing the Point: Historical Analogies and the Israel-Gaza Conflict), bears a striking visual resemblance to Gaza. Roughly 600,000 German civilians were killed in Allied bombing alone, tens of thousands of them children, and millions of German civilians died through other causes during the war. Yet the Allied campaign is not understood as genocidal because its motive was defensive and reactive. The scale of devastation, horrific as it was, did not define the moral category. Intent did.

Hamas gunman, October 7, 2023

The Hamas attack of October 7 carried a clearly stated genocidal intention. Hamas’s foundational commitment is the removal of Jews from Palestine, which its slogan “from the river to the sea” and its charter openly articulate. The 1988 Hamas Covenant contains genocidal language, including explicit calls for violence against Jews as a group, promotion of antisemitic conspiracy theories, and framing of the conflict as a religious obligation to eliminate the “Zionist enemy.” The charter contains two particularly inflammatory provisions that are widely regarded as genocidal in intent. Article 7 quotes a well-known hadith declaring that the Day of Judgment will not arrive until Muslims fight and kill the Jews. Article 13 categorically rejects any peaceful solution or negotiation, stating that “there is no solution for the Palestinian question except through Jihad” and dismissing all diplomatic initiatives and international conferences as contrary to Hamas’s principles. Regardless of later revisions to the charter, which do not alter the intent identified above, the ideological core remains: a Jew-free Palestine. October 7 was carried out in furtherance of this genocidal goal.

Israel responded to the horrific attacks of October 7 defensively, striking Hamas targets embedded across Gaza’s densely populated urban environment. Again, crucially, the moral comparison between Germany and Hamas rests not on the scale of devastation (in lives lost, approximately 6-7 percent of the German civilian population, and 3-4 percent of the Gazan population), but on motive: in both cases, one side initiated aggression grounded in genocidal ideology; the other responded with overwhelming force designed to defeat that aggression.

Critics argue that the comparison to World War II is flawed because the Allies fought a sovereign nation-state, whereas Israel faces a non-state militant organization embedded among civilians. However, the structural form of the enemy does not alter the essential moral fact: in each case, a genocidal actor initiated the violence. Israel’s response, like that of the Allies, aimed to neutralize an entity driven by the elimination of a people, as the Hamas Covenant makes clear. Once more, intent, not political form, is the hinge of the moral argument.

Another criticism focuses on foreseeability. Critics claim that even if Israel did not intend civilian casualties, the extent of the destruction was foreseeable and therefore morally condemnable. Yet international law has long distinguished between intent and foreseeable collateral damage. Civilian casualties, even on a large scale, do not constitute genocide unless they arise from a desire to destroy a population. The Allies bombed German cities knowing that civilians would die in enormous numbers, yet their motive—to defeat a belligerent and genocidal regime—remains morally distinct from genocide itself. The same holds for Israel confronting Hamas fighters who systematically embed themselves in civilian structures precisely to produce inflated civilian death tolls.

A further argument asserts that Israel’s overwhelming military superiority imposes a heightened obligation for restraint. But superiority does not alter intent, nor does it erase the right of a nation to defend itself after suffering a genocidal massacre. Indeed, a nation acquires overwhelming military superiority to deter threats to its people and to effectively repel those threats if deterrence fails. The Allies eventually enjoyed overwhelming industrial and military superiority over Germany, yet this never transformed their defensive campaign into genocide. Nor did Israel’s campaign in Gaza become genocidal. Moral categories do not shift based on the balance of forces.

Some critics insist that Israel never truly left Gaza, pointing to border controls and airspace restrictions. This is the “Gaza under siege” narrative, which typically elevates controls and restrictions with language suggesting an Israeli blockade. But Israel’s withdrawal in 2005 was complete: every soldier and every Jewish civilian was removed from Gaza. What followed was Hamas’s ascendancy and its decision to militarize Gaza, diverting international aid away from civilian needs and into tunnels and weaponry (Gaza-specific aid for the 2005-2023 period is estimated at $12–15 billion, with $3.5-4 billion coming from USAID). The dire conditions in Gaza reflect this militarization, not an Israeli desire to eliminate the population. Holding Israel responsible for the consequences of Hamas’s governance confuses cause with effect.

Critics also claim that Hamas does not represent the civilian population in the way that the Nazi regime represented Germany, making the analogy inappropriate. Yet Hamas is the de facto governing authority of Gaza and has exercised control there for nearly two decades. (Can it really be said that the Nazi government was representative of German interests?) Hamas has deliberately placed its military infrastructure in hospitals, schools, and residential buildings to maximize civilian exposure and to weaponize civilian casualties for political effect. When a governing authority uses civilians as shields, civilian deaths become part of its strategic calculus, not evidence of genocidal intent by the opposing force.

Some argue that the scale of destruction in Gaza must itself be taken as proof of genocide. But devastation alone does not define genocide. World War II’s destruction of Germany far exceeded what has occurred in Gaza (proportionally, possibly twice as many civilian deaths in Germany as in Gaza), yet the Allies are not remembered as perpetrators of genocide against Germans. The decisive factor in moral reasoning is always intent, not the magnitude of devastation, and Israel’s intent has been the defeat of a genocidal organization, not the extermination of a people.

This brings the analogy to one more important dimension. The Allied demand for Germany’s total surrender was followed by the project of denazification, which aimed to ensure that Germany would not repeat its genocidal aggression. Ending hostilities without uprooting the ideology at its core would have guaranteed future conflict. By contrast, the cease-fire negotiated between Israel and Gaza—despite Israel’s ongoing operations—prevented Israel from securing a total surrender from Hamas or enforcing any ideological disarmament comparable to denazification. Calls for Hamas to be disarmed have not been accepted by Hamas itself (and the Arab parties involved seem uninterested in pressing the issue), and nothing resembling ideological de-radicalization has occurred in Gaza. The Islamist, clerical-fascist ideology that undergirds Hamas bears a conceptual similarity to the fascism that animated Nazi Germany, but unlike postwar Germany, Gaza has undergone no ideological transformation. This is why I opposed a cease-fire. I believe Israel should have been permitted to completely remove Hamas from the territory.

Thus, Israel is not only wrongly accused of genocide; it is held to a standard that the Allies themselves were never held to. Imagine how unacceptable a resolution to WWII would have been if it had ended through a cease-fire that left the Nazi regime intact, unreformed, undefeated, and still armed. Such an outcome would have been rightly rejected as dangerous and incomplete. A cease-fire may halt violence temporarily, but it can also freeze a conflict in a form that prevents the defensive side from accomplishing the very goal that made its campaign morally justified. Yet Israel faces precisely this situation. It is judged harshly for doing far less than what the Allies were required to do to end a genocidal threat, and at the same time it is denied the opportunity to achieve the decisive conditions that ended the fascist threat in Europe.

The charge of genocide against Israel not only fails historically, legally, and morally—it inverts the roles of aggressor and defender in a way that obscures the real dynamics of the conflict. So I close by asking readers to consider the source of the double standard. How did the sides get flipped in the minds of so many people? How does Israel become, in the eyes of millions of reasonably intelligent observers, a bad actor when the Allied victory over Germany is celebrated, and the deradicalization of a belligerent entity is seen as necessary? What is the difference between the cases? The only one I can see is that, in the case of Israel’s actions, the ethnic group defending its people from genocide is Jewish. Given the extent and intensity of anti-Jewish sentiment in the West today, perhaps this was a predictable development.