Admiral Frank Bradley met with Congress behind closed doors concerning the double-tap strike on the drug boat, a semi-submersible, to explain his reasoning behind the operation. Democrats and Republicans emerged from the meeting with different opinions about the legitimacy of the admiral’s actions. Only one side’s opinion is correct—and it’s not the Democrats’.
Adm. Frank Bradley arrives for a closed-door meeting with lawmakers Thursday (source)
As you probably know, on September 2, a DoD helicopter crew fired two Hellfire missiles at a fleeing narco-vessel in the Caribbean Sea. The first missile disabled the vessel’s propulsion; the second, fired minutes later, ensured it sank along with its multi-ton cargo of cocaine and the two traffickers who remained aboard, attempting to right the ship and potentially allowing the operation to continue. The point of the DoD operations is to prevent either the delivery or recovery of the boat’s cargo. This necessarily involves killing agents attempting to deliver or protect illicit cargo.
One may believe that extrajudicial killing is immoral, but beneath the partisan noise around the question lies a straightforward legal reality: as I explained in an essay published on my platform yesterday (The Legality of Extrajudicial Killing and the Convenient Forgetting of History), such actions are lawful under long-standing US statutes and case law. Read the essay for the deep legal basis for conducting such operations and to learn how the Obama administration used this same authority in a far more aggressive manner than the Trump administration. In today’s essay, in light of yesterday’s meeting, I want to review the two primary domestic legal authorities upon which such actions rest: (1) the Maritime Drug Law Enforcement Act, or MDLEA (46 USC §§ 70501 et seq.); and (2) the Rules for the Use of Force (RUF) and the joint Standing Rules of Engagement (SRE) under DoD counterdrug support authority (10 USC § 284).
First, the MDLEA asserts US jurisdiction over drug trafficking on the high seas, even by foreign nationals aboard foreign or stateless vessels. Courts have repeatedly upheld this assertion when the ship is stateless or when the flag state consents (explicitly or tacitly). Second, the RUF and SRE for these missions explicitly authorize deadly force—up to and including the complete destruction of the vessel—when lesser means fail to stop it. The double-tap was a straightforward instance of mission completion.
These semi-submersibles are designed to remain afloat even when disabled. As long as the hull remains afloat, the cargo can be recovered by follow-on boats. Video evidence shows the vessel still moving under residual momentum and two crew members actively working on deck after the first strike. From the lawful perspective of the helicopter crew, the objective—preventing the cocaine from reaching its destination or being recovered—had not yet been achieved. The second missile finished the job the first strike started.
Critics who decry the action as extrajudicial killing are importing a framework that doesn’t apply. This was a law-enforcement interdiction on the high seas against a stateless criminal enterprise. No sovereign state has ever successfully challenged these operations in an international forum, and dozens of Latin American and European nations have signed bilateral agreements facilitating them.
Those who condemn the strike while ignoring precedent should recall that materially identical operations occurred under every administration from Reagan to Biden. As I noted in the previous essay, buttressed by law passed in the aftermath of 9/11, the Obama administration conducted hundreds of drone strikes—some killing American citizens and many more killing uninvolved civilians—in countries with which the United States was not at war, under legal theories far more expansive than those used in the Trump case. I criticized the Obama Administration for some of these actions; their lawfulness was never challenged by Democrats. They’re only objecting here because Trump is the President. It is purely partisan noise—meant to reinforce the video of the Seditious Six (see below).
We want to speak directly to members of the Military and the Intelligence Community.
The American people need you to stand up for our laws and our Constitution.
Ultimately, the legality of such strikes is not a function of who occupies the White House but of who controls the statutes and the courts. Congress may amend the MDLEA or restrict DoD counter-drug support tomorrow if it wishes. It has chosen not to do so for more than four decades because the American political consensus has supported the disruption of the cocaine supply chain, even when disruption requires lethal force at sea. It could also opt to repeal the enabling legislation passed in the wake of 9/11. Until that consensus changes and is codified, any order to sink a fleeing drug boat, including with a second missile when the first proves insufficient, is a lawful order by definition.
Again, I want to emphasize that one may still find the practice morally troubling. Reasonable people can debate whether the problem of narcoterrorism justifies such actions. But these debates are about what the law ought to be, not what it currently is. As it stands, the helicopter crew that fired the second Hellfire acted well within US domestic law, international treaty obligations, and decades of unbroken precedent. The footage is dramatic. But the law is settled until Congress unsettles it. The rest is partisan politics—and a continuation of the coup the Establishment put into action even before Trump entered the White House for the first time. Don’t let the signal be drowned in noise. Make the moral argument—which would, on principle, require criticizing the law itself—but don’t use this case to undermine the legitimacy of the federal republic.
In the first two decades of the twenty-first century, especially around 2010–2011, academic concepts such as “white privilege,” “white supremacy,” and related terms migrated into popular discourse. This shift was driven largely by progressive activists influenced by critical race theory and sympathetic media. These concepts are often invoked to explain average group differences—most notably income disparities—between black and white Americans. The claim is that such differences are primarily, if not entirely, the result of racial privilege enjoyed by whites and racial oppression experienced by blacks. This framework, however, rests on two fundamental errors: (1) conflating cause and effect; (2) the fallacy of misplaced concreteness.
Advocates of the “white privilege” explanation often point to statistical differences as evidence of racism, treating the observed disparity itself as proof of its cause. But this reverses the proper direction of rational analysis. If racism is proposed as the independent variable, one must clearly define what racism means in this context and how it causally produces specific economic outcomes. Simply labeling the disparity as racism sidesteps the actual causal investigation. In this model, the conclusion is smuggled into the premise. It’s sophistry.
A second error lies in treating demographic averages as if they directly describe every individual within a group. This assumes that each white person is a concrete instance of the abstract statistical average for whites, and likewise for blacks. In reality, both groups contain enormous internal variation: homeless whites living under bridges, wealthy and professionally successful whites; the same range among black Americans. Treating individuals as embodiments of group averages obscures more than it reveals.
A more grounded analysis begins by acknowledging that white Americans, on average, do earn more—but then asks why. Such an analysis looks beyond racial identity toward a broader set of variables associated with life outcomes: Are whites more likely to graduate from high school? Are they more likely to have some college or a completed college degree? Are they more likely to grow up in stable, intact families? Are they more likely to live in safer neighborhoods? Are they more likely to obtain jobs with higher wages? Are they more likely to save and accumulate wealth?
Not every white person enjoys these advantages, nor does every black person lack them. But if these factors correlate with income, then average differences between groups can emerge even without invoking racial privilege as the driving causal mechanism. These variables describe patterns of behavior, social structures, and personal or community-level achievements—not racialized systems of privilege.
The progressive explanation based on white privilege and white supremacy does not grapple with these underlying variables. Instead, it reframes the discussion into a moral narrative of “oppressors” and “victims”. This moral binary, borrowed conceptually from the French revolutionary categories of “perpetrators” and “victims,” stands in for genuine analysis (“Frantz Fanon says…”). Consequently, it prevents a clear understanding of the factors actually constraining average black outcomes.
Among these relevant factors are higher crime rates in many black neighborhoods, lower high school and college graduation rates, lower workforce participation, lower prevalence of stable, intact families, and lower likelihood of obtaining higher-wage jobs. None of these observations is offered to “blame the victim”—for that would assume that blacks, as a collective, are victims, which is precisely the assumption this argument rejects. The concern is that by attributing all disparities to white wrongdoing, progressive rhetoric prevents society from addressing the real obstacles to black advancement. Critical race theory in practice undermines the life chances of black Americans.
Woke progressive rhetoric sabotages them. If the goal is to improve average outcomes for black Americans, then solutions should target the factors that truly drive opportunity. These include: strengthening public safety so that communities can thrive; increasing mentorship and support in schools; reinforcing norms of achievement, discipline, and work ethic; and pursuing public policies that expand employment opportunities.
Economic policy also plays a big role in this. For example, the post-1965 surge in mass immigration introduced large numbers of low-wage workers into the labor market. This had real displacement effects on black workers, particularly those without advanced education, while also suppressing wages across several sectors. These are material realities, not ideological constructs.
Reducing complex socioeconomic differences to a narrative of white privilege not only rests on flawed reasoning; it actively diverts attention from the cultural, economic, educational, and structural factors that more plausibly explain group outcomes. Addressing these real causes—rather than blaming whites as an abstract categorical group—offers a path toward genuine improvement in the lives of black Americans and a clearer, more honest national conversation.
Trump administration officials are today justifying the September 2 follow-up strike on a drug-running vessel, which killed surviving crew members, by asserting that the mission’s purpose was to ensure the vessel’s complete destruction—an action the Pentagon had internally authorized as legally permissible. The purpose was not to kill survivors. The first strike had not completely disabled the vessel, so a second strike was necessary.
White House press secretary Karoline Leavitt said at a briefing on Monday that Admiral Frank Bradley ordered the second strike specifically to sink the boat. I missed that press conference. “Admiral Bradley acted fully within his authority and the law by directing the engagement to ensure the vessel was destroyed and the threat to the United States was eliminated,” Leavitt said. War Secretary Pete Hegseth echoed that account for the first time at a Cabinet meeting on Tuesday, saying the second strike “sank the boat and eliminated the threat.”
A still from a declassified video of a drug boat targeted by the US military
By framing the strikes as targeting the vessels themselves—language that mirrors a classified Office of Legal Counsel (OLC) memo authorizing the attacks—officials place the operation that led to the double-tap of the drug boat on its strongest possible legal footing amid mounting questions about the incident. The Guardian is reporting today, citing three lawyers with direct knowledge of the matter, that the memo concludes that the United States may lawfully use lethal force against unflagged vessels transporting cocaine because drug cartels use the proceeds to finance violence.
The memo argues that cartels are engaged in a so-called “armed conflict” with regional allies and that, under the doctrine of collective self-defense, the US is permitted to destroy the cocaine shipments to cut off the cartels’ funding for weapons and operations. The memo asserts that the likelihood of deaths among those on board does not, by itself, render the vessels illegitimate military targets. The legal analysis is buttressed by intelligence findings contained in a classified “statement of facts” annex to the OLC opinion, as well as a National Security Presidential Memorandum (NSPM) dated July 25 authorizing the use of military force against drug cartels. Although the documents are classified (to avoid alerting the enemy to intelligence sources, operations, and tactics), sources say they include detailed information, e.g., estimates that each drug-running boat carries roughly $50 million worth of cocaine. One can see by the images and video the Pentagon has released that these are drug-running vessels.
Yesterday, Pentagon press secretary Kingsley Wilson addressed this issue (see above). Wilson also pointed out that the Washington Post article condemning the action was based on anonymous sources and that the Pentagon had corrected the article before the Post published it. The Post ran with the story anyway. The Post’s claims were, surprisingly, debunked by the New York Times.
There really isn’t anything here. But that doesn’t mean that the story is no longer interesting. On what grounds is the White House basing its legal authority to conduct these operations? This is where it gets interesting. Readers may recall that the Obama administration carried out a wide range of lethal operations that rested on the same legal foundations now invoked by Trump to justify maritime strikes on drug-running vessels. Obama authorized hundreds of drone strikes in Pakistan, Somalia, and Yemen, countries with which the United States was not formally at war, on the theory that the 2001 AUMF extended to “associated forces” of terrorist groups and that the president possessed inherent Article II authority to neutralize imminent threats abroad.
What’s the AUMF? The AUMF, or Authorization for Use of Military Force of 2001, is a law passed by Congress just three days after the September 11, 2001, attacks on the nation. It has been the primary legal foundation for many US military actions ever since. You may believe that extrajudicial action is wrong, and you have a right to that opinion, but one’s moral viewpoint does not make a lawful act illegal. If Democrats don’t like the law, they should try to change it. However, the AUMF was passed with overwhelming support from both parties. In the Senate: 98 yeas, 0 nays, 2 not voting. In the House: 420 yeas, 1 nay, 10 not voting. (The lone “nay” was from Barbara Lee, a Democrat from California.) That means that Democrats share responsibility for the law that Obama used to carry out extrajudicial killings.
The Obama administration’s extrajudicial killings targeted not only noncitizens but also US citizens, and it routinely acted without judicial oversight or host-nation consent. Moreover, the Obama administration expanded maritime interdiction under Operation Martillo, in which US forces boarded, disabled, and, yes, even used lethal force against stateless vessels engaged in narcotics trafficking. The reality is that Obama pushed the AUMF much farther than Trump has—much farther. Where was the hysteria then? I said something about it back in the day, especially concerning the extrajudicial execution of US citizens (including a teenager), but I heard nothing from Democrats. They adored Obama (still do). And they loved his Secretary of State, Hillary Clinton (still do), who was all in on extrajudicial killings. Many of them fell prostrate on the pavement when she lost the presidential campaign to Trump. (TDS can be traced to this moment.)
The hysteria we’re seeing on the left today ostensibly stems from the claim that attacks on drug-trafficking boats linked to Venezuela constitute illegal extrajudicial killings. Yet Trump’s actions are based on the same conceptual framework underpinning Obama-era counterterrorism and counternarcotics operations: collective self-defense, broad AUMF interpretations, and the “unwilling or unable” doctrine to justify force against non-state actors operating from hostile or ineffective states. Therefore, Trump’s actions are not illegal. So why are Democrats calling on Hegseth to resign? Why are the Seditious Six using these incidents to justify their video encouraging military personnel to disobey Trump’s orders? Their party joined Republicans in passing the AUMF. They supported Obama throughout his presidency. They did not repeal the law under Biden. The hysteria arises not because Democrats are principled, but because all this is part of a color revolution aiming to derail the will of the people.
The difference in perception lies not in the underlying legal theories but in political alignment and public framing. It is purely partisan. Obama’s actions were cast as legitimate acts of counterterrorism, whereas the Trump administration’s maritime actions have been portrayed as aggressive counternarcotics strikes, lacking legal foundation and freighted with geopolitical implications. Complete propaganda. Some even go so far as to absurdly call his actions “war crimes,” saying that when Democrats get back in power, they will hold Trump and administration officials accountable. I have no doubt they will try (they impeached the man twice for nothing and pursued lawfare against him and his associates under Biden). Yet both administrations relied on expansive executive authority of the identical sort (only Obama pushed it further) to use lethal force outside traditional battlefields against networks viewed as national security threats.
Not that I think the facts will have any effect on TDS sufferers, but at least the rational among us have the facts on our side. This is why it is so important to ensure Democrats don’t return to power in 2027. The project of managed decline of the American Republic is bad enough with Democrats out of power. If they regain power, they will accelerate the project. They explicitly tell us that this is the plan. Electing Trump did not erase the globalist threat to the American Republic. Democrats and the Establishment still possess the power to dismantle the nation. The rot is inside the system. Republicans need to focus like a laser beam on this. They need to articulate a compelling narrative to counter the forces arrayed against the people.
The economy is struggling thanks to years of globalist policies. I understand that many Americans are suffering higher prices and all the rest of it. This is why Democrats are resurrecting James Carville’s “It’s the economy, stupid,” slogan. To the extent that this is true, however, the economic situation is due to Democratic and RINO financial and monetary policies. But even more than the economy, something greater is at stake: the future of democratic republicanism. Americans faced economic hardship in the past, but when the future of the country is under threat, they historically pull together and defend America from those who seek to harm it, foreign or domestic. To be sure, nationalism and patriotism are at a low ebb due to decades of indoctrinating our youth with anti-American sentiment. Yet there are adults in this country who can come off the sidelines and stand in the breach.
The Trey Reed case came up in one of my classes. I was not familiar with all the details of the case, although I am the one who brought it up in response to a claim that lynching is still a problem. My point was not to interrogate the case but to note that authorities ruled the death a suicide and that, moreover, even if it were a racially motivated killing (for which there is no evidence to my knowledge), it would not be a lynching for conceptual reasons. My point was to inject skepticism into the conversation. In this essay, I will explain my reasoning and provide details of the case after taking a closer look at the facts.
I begin with a disclaimer and a couple of statistical observations. This case is still ongoing, and evidence not yet publicly available may emerge that indicates a racially motivated killing. It would take additional evidence to conclude that it was a lynching. It should be noted that, although suicide among blacks is rarer than among whites, the CDC reports that in 2022 (the most recent year with a detailed demographic breakdown), 3,826 of the 49,476 total suicides were blacks. Moreover, according to the FBI, there were 13,446 black homicide victims that year, and approximately 89 percent of those murders were perpetrated by blacks. Although most of those murders were committed with guns, many other methods were also used. Strangulation is not an uncommon method of murder.
Demartravion “Trey” Reed was a 21-year-old Black student at Delta State University in Cleveland, Mississippi. On September 12, 2025, Reed was found hanging from a tree on the university campus. The Mississippi State Medical Examiner’s Office, led by Randolph “Rudy” Seals Jr., conducted an autopsy and ruled the death a suicide by hanging. Delta State University Police Chief Michael Peeler reported that the findings of his department were consistent with the local coroner’s conclusions, which noted no broken bones, contusions, lacerations, or other signs of assault. Peeler said there was no evidence of foul play. These facts were widely reported across the media.
Reed’s family was not satisfied with the ruling and has called for an independent autopsy as well as greater transparency, including access to video evidence. Civil rights attorney Ben Crump is representing the family in their independent investigation, and Colin Kaepernick’s “Know Your Rights Camp” is reportedly funding the independent autopsy. Additionally, US Representative Bennie Thompson has called for an FBI investigation.
The case has drawn comparisons to the history of racial violence in the United States, particularly lynching, which shapes how many people are interpreting the circumstances surrounding Reed’s death. Whatever the facts of the case, there is a conceptual problem with the claim of racial lynching here: the historical and scholarly understanding of the phenomenon in the United States (Ida B. Wells, Stewart Tolnay and E. M. Beck, and many contemporary historians) emphasizes that lynching was not merely a form of homicide but a public, ritualized performance of racial domination. (For my writings on the topic, see “Explanation and Responsibility: Agency and Motive in Lynching and Genocide,” published in 2004 in The Journal of Black Studies; “Race and Lethal Forms of Social Control: A Preliminary Investigation into Execution and Self-Help in the United States, 1930-1964,” published in 2006 in Crime, Law, & Social Change. See also “Agency and Motive in Lynching and Genocide” and “There Was No Lynching in America,” September 24, 2024, on this platform.)
Racial lynchings were carried out by groups of white perpetrators against black victims, before crowds large and small that treated the violence as a communal spectacle, and the perpetrators were held immune from legal consequences. This public and performative quality distinguishes lynching from private acts of violence or clandestine hate crimes; lynching’s purpose extended beyond harming an individual to terrorizing an entire racial community and reinforcing a social hierarchy grounded in white supremacy. I have described the phenomenon in my work as a public spectacle used to reclaim boundaries serving the interests of white racial exclusion and hierarchy. My thinking was inspired by James M. Inverarity’s “Populism and Lynching in Louisiana, 1889–1896: A Test of Erikson’s Theory of the Relationship Between Boundary Crises and Repressive Justice,” published in a 1976 issue of American Sociological Review. Inverarity’s analysis relies on Kai Erikson’s Durkheimian framework (boundary maintenance, deviance, and repressive justice) to test whether boundary crises in the white political order produced repressive collective violence in the form of lynching.
Framing lynching as a subset of racially motivated homicide, specifically as an act of boundary maintenance, captures its essential features: audience presence, collective participation, and symbolic intent. This definition reflects the scholarly consensus that a lynching is best understood as a social ritual—an assertion of racial control—rather than simply as a killing motivated by racial animus. (My position was later supported in work by Mattias Smångs. See “Doing Violence, Making Race: Southern Lynching and White Racial Group Formation,” published in American Journal of Sociology in March 2016.)
There is no evidence that Reed’s death was a homicide or that it was perpetrated collectively with an audience present. The surveillance video from Delta State University that might indicate this has not been publicly released because the investigation into Reed’s death is still ongoing. With no eyewitness reports of a lynching, video evidence would be necessary to make such a determination. It is not uncommon for authorities to withhold such footage: doing so avoids compromising eyewitness interviews, forensic analysis, or potential criminal proceedings. Privacy concerns also play a role, as campus cameras frequently capture students and staff unrelated to the incident. Moreover, maintaining strict control over the chain of evidence ensures that the footage remains admissible in court, and early public release could raise questions about authenticity or tampering, as well as bias the jury pool. However, if the video evidence did show such a thing, it is highly unlikely—so unlikely as to be implausible—that the public would not already know about it.
Without evidence, conceptual distinctions aside, how did the belief that this was a lynching emerge and spread? Through misinformation about Reed’s death that circulated after the release of the initial autopsy. An individual operating an account claiming to be Reed’s cousin alleged that he had sustained injuries—specifically broken bones—that would have made suicide physically impossible. As noted, the initial autopsy does not indicate this. The videos went viral even though their creator later deleted them, along with the account itself (I can find no information on the identity of the person behind it). Moreover, on a podcast, Krystal Muhammad, chair of the New Black Panther Party, claimed in a conversation with rapper Willie D that Reed’s mother had spoken to her about the contents of the second autopsy report. (I hasten to note that the original Black Panther Party has denounced the New Black Panther Party, emphasizing that it has no connection to the original organization.)
Terry Wilson, founder of the Idaho chapter of Black Lives Matter Grassroots, added fuel to the moral panic, telling The Chicago Crusader (“Lynching by Suicide: The Rebranded Face of America’s Racial Violence”) that the response from black Americans is deeply rooted in shared historical memory. “This sophisticated machinery of racial terror is just a fascist strategy that relies on overwhelming force from multiple directions, including misinformation, intimidation, and threats,” Wilson said. “I think we’re witnessing a coordinated campaign of disappearances, lynchings, and state-sanctioned killings that target Black, Brown, and Indigenous communities.” He added, “We need to address this method of ‘lynchings by suicide,’ which is their way of rationalizing, from a medical standpoint, their actions. I think this is sort of a death rattle for white supremacy, because they’re relying on nearly every structural institution to justify or cover up the actions of individuals.”
DeNeen L. Brown, faculty member at the University of Maryland’s Philip Merrill College of Journalism
I trust the reader will recognize the hyperbole of these assertions. The apparent factual basis of the assertions was provided in part by a June 3, 2025, Washington Post piece, “Lynchings in Mississippi Never Stopped,” penned by DeNeen L. Brown, a staff writer for the paper. Her claim that “[s]ince 2000, there have been at least eight suspected lynchings of Black men and teenagers in Mississippi, according to court records and police reports,” is valorized by the reputation of the Post as an objective mainstream news outlet.
However, every instance of death Brown cites was ruled a suicide by officials. One either accepts these rulings or supposes a conspiracy in which Mississippi state officials are covering up homicides. One must furthermore imagine that there was a racial motivation behind these homicides. Finally, if all these things could be proven beyond a reasonable doubt, one must alter the definition of lynching to classify these homicides as such. It should be kept in mind that around one-quarter of all suicides are the result of asphyxiation and that more than 90 percent of those involve hanging. That eight black men over 25 years chose hanging as a method of suicide is not an extraordinary fact.
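To see why eight such deaths over a quarter century is statistically unremarkable, here is a rough back-of-envelope sketch using the figures cited above. Treat it as illustration only: it assumes the 2022 CDC total is representative of each year since 2000, applies the national method breakdown to black suicides, and guesses that Mississippi accounts for roughly three percent of black suicides nationally (an assumption, flagged in the code).

```python
# Back-of-envelope estimate: expected black suicides by hanging in Mississippi, 2000-2025.
# Values marked ASSUMPTION are illustrative guesses, not sourced figures.

national_black_suicides = 3826        # CDC figure for 2022 (cited above), treated as a typical year
share_asphyxiation = 0.25             # ~one-quarter of all suicides involve asphyxiation (cited above)
share_hanging = 0.90                  # >90 percent of asphyxiation suicides involve hanging (cited above)
mississippi_share = 0.03              # ASSUMPTION: Mississippi's rough share of black suicides nationally
years = 25                            # roughly 2000 through 2025

hanging_per_year_national = national_black_suicides * share_asphyxiation * share_hanging
hanging_per_year_mississippi = hanging_per_year_national * mississippi_share
hanging_over_period = hanging_per_year_mississippi * years

print(f"National black suicides by hanging per year: ~{hanging_per_year_national:.0f}")
print(f"Mississippi (assumed {mississippi_share:.0%} share) per year: ~{hanging_per_year_mississippi:.0f}")
print(f"Mississippi over {years} years: ~{hanging_over_period:.0f}")
# Roughly 860 per year nationally, ~26 per year in Mississippi, and ~650 over the period --
# against which eight cases singled out as "suspected lynchings" is not an anomalous count.
```

Even if the assumed Mississippi share is off by half, the expected number of such deaths over the period dwarfs eight.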
According to The New York Times (“A Black Man’s Death in Mississippi Strikes the Nation’s Raw Nerves”), Jy’Quon Wallace, the 20-year-old Delta State student who discovered Reed’s body, is sympathetic to Reed’s family but, in the absence of a second independent autopsy, is not inclined to automatically connect Mississippi’s historical racial context to the body he found. “A lot of people are trying to use this situation to make it seem like it’s racially motivated. There are a lot of signs pointing to this as not a racially motivated situation. When that whole story comes out, if it does come out, it may give some people clarity. It may not. That’s not up to us,” Wallace told the outlet.
In that story, The Times reports, “Mr. Reed’s death was twice ruled a suicide, and no evidence has emerged that would suggest otherwise.” However, even if Reed were the victim of homicide, it does not follow that the perpetrator(s) was/were white or that, if they were, racial animus motivated the murder. Evidence is needed to make these claims. Even if the second autopsy found that blunt force trauma to the back of the head (the substance of a common rumor) was the cause of death, or at least part of the sequence of events that ended with Reed hanged from a tree, thus indicating murder, the more likely scenario is that somebody with a grievance against Reed killed him. Some would object with the quip that the absence of evidence is not evidence of absence. Sure, but when speculating, one has to consider relative likelihoods.
And that is what lies at the crux of this problem. Motivated reasoning makes up for the gap between the evidence and what many would like to believe—or have others believe: that the United States remains a profoundly white supremacist nation where whites target blacks for violence. As I have shown on this platform, the reality is that whites are far more likely to be victimized (murder, robbery) by a black perpetrator than the other way around. This does not mean that racially motivated violence does not occur (indeed, I would argue that the disproportionality just noted indicates its presence in contemporary society), but rather that, in the absence of facts indicating racism, it is a leap of faith fueled by ideology to believe without compelling evidence that white supremacy explains the Trey Reed case.
Note: The discussion of viral media claims was adapted from reporting by Daniel Johnson writing for Black Enterprise.
I want to share a narrative I often present in a similar form to my students and conclude with an observation about how some people perceive my politics.
Many of my students identify as progressives (typical of higher-education social science programs) and uniformly view incarceration as a right-wing idea. In fact, incarceration is a liberal invention. Liberals sought to replace torture and retributive approaches with a rational system of justice grounded in the principles of deterrence, incapacitation, and rehabilitation.
I tell this story to help all students understand the moral and political character of modern criminal justice. Part of its value is in showing progressive students how ideology can distort history and principle; it also helps conservative students see that the institutions they support rest on liberal, not traditional conservative, foundations. My goal is to not only correct misperception but also deepen their political-philosophical understanding.
The emergence of modern criminal justice in the eighteenth and early nineteenth centuries was deeply rooted in the liberal tradition, which emphasized individual rights, legal constraints on state power, and rational governance—what Herbert Packer identifies as the “due process model” in his article “Two Models of the Criminal Process,” in a 1964 issue of the University of Pennsylvania Law Review.
Two of the most influential figures shaping this new penal philosophy were Jeremy Bentham and Cesare Beccaria. Their works clarified the aims and methods of punishment in the modern state and circulated widely in Britain, continental Europe, and the American colonies. The philosophers provided the intellectual foundation for the rise of penal confinement and the development of the penitentiary as a core institution of criminal justice. Far from being a right-wing creation, the penitentiary was a liberal reform.
Jeremy Bentham’s 1789 Principles of Morals and Legislation articulated a systematic utilitarian approach to legal and penal reform. Bentham emphasized deterrence and incapacitation as rational goals of punishment, seeking to minimize suffering while maximizing social utility. His architectural design for the Panopticon—a subject to which I devote an entire lecture—symbolized a broader shift toward a humane, systematized mode of punishment intended to replace the arbitrary and often brutal practices of earlier eras.
For Bentham, criminal justice should be guided by general laws, proportionality, and a view of offenders as individuals whose behavior could be shaped through predictable incentives and disincentives. Moreover, he insisted that the judicial process focus on acts rather than actors: class, gender, race, and other statuses were irrelevant; actions were what mattered.
Cesare Beccaria’s 1764 On Crimes and Punishments similarly transformed the moral landscape of criminal law. Writing decades before Bentham, Beccaria offered a powerful Enlightenment critique of disproportionality, secrecy, and torture. He argued for clarity in the law, proportional penalties, and the rational administration of justice. (For this, his book was added in 1766 to the Index Librorum Prohibitorum, the Church’s official list of forbidden books.)
Beccaria’s emphasis on legality, liberty, and predictable legal processes resonated deeply with American political leaders. The principles he articulated—visible in key provisions of the Constitution and the Bill of Rights—shaped American commitments to due process, bans on cruel and unusual punishment, and the rights of the accused. Beccaria helped shift the prevailing view toward deprivation of liberty (unfreedom for those who break the law), rather than capital and corporal punishments, as the primary penal instrument of the state.
Inspired by these ideas, reformers in the nascent United States moved rapidly toward creating institutions devoted to penal confinement. The first American penitentiaries emerged in the 1790s, grounded in the belief that offenders could be reformed through regulated labor, separation from corrupting influences, and structured discipline. By the end of the eighteenth century, the penitentiary had become a defining feature of the American penal order.
While Northern states adopted this model most rapidly, Southern states also had early advocates. In Virginia, for example, the establishment of a penitentiary was driven partly by the reformist impulses of Thomas Jefferson, whose broader political philosophy—deeply indebted to John Locke—aligned with liberal commitments to equality under the law, individual rights, and rationalized governance. The system across America was elaborated during the nineteenth century.
The intellectual foundations of these reforms rested squarely on the classical liberal tradition. Drawing from Beccaria, Bentham, Locke, Montesquieu, and other liberal thinkers, American constitutionalism and early criminal justice were built on the idea that political authority derives from the consent and rights of individuals and that punishment must be justified by general principles rather than arbitrary force. This framework informed the Declaration of Independence, the Constitution, and the Bill of Rights, each presupposing a political order grounded in individual liberty, limits on state coercion, and the rule of law.
Seen in this light, I explain to students, the rise of the penitentiary in the United States was not merely an administrative reform but an expression of deeper philosophical commitments. It is a window into the foundation of a free society. Confinement became the preferred mode of punishment precisely because it aligns with liberal principles: it operates through law rather than spectacle, imposes proportionality rather than cruelty, and treats offenders as autonomous individuals capable of reform.
Far from reflecting traditional conservatism, the penitentiary embodies a humane and optimistic vision of justice. The emergence of the penitentiary system stands as a central example of how Enlightenment liberalism reshaped the modern state and gave enduring institutional form to its moral and political ideals.
Of course, as implied above, some now argue that liberalism is not left-wing but right-wing—a view that ignores history. This revisionist approach would classify the Constitution and the Bill of Rights as elements of right-wing governance. If one identifies as “on the left” and equates left-wing politics with progressivism, then liberalism indeed becomes “right-wing” by contrast.
But in truth, progressivism—emerging as a post-liberal ideology supporting the rise of the corporate state after the Civil War, paralleling its social-democratic counterpart in Europe—is not left-wing in the classical sense. Progressivism elevates administrative and bureaucratic authority over the individual. It is an illiberal philosophy.
The point is that, if progressivism—rooted in corporatism and the ascent of a new administrative aristocracy—is labeled “left-wing,” then liberalism—understood as a commitment to individual liberty—becomes “right-wing,” simply because it stands in opposition to progressivism. This reframing reverses the ideological map as it was understood at the time of America’s founding and the French Revolution. Clever, to be sure.
Here’s the upshot: because I am a liberal, the swapping of political-philosophical sides makes me appear right-wing. Is it any mystery, then, why so many self-identified leftists accuse me of switching sides? What happened is that, beginning in earnest around 2018, as I explained in last Saturday’s essay, I shed ideas that contradicted my liberal principles. This meant rejecting the progressive elements in my thinking. Through the distorted lens of the camera obscura, sharpening my thinking with the stone of principle has transformed me into a right-winger. So be it—but there it is. I am much happier as a result.
Trump’s New York “hush-money” case is a farce, a textbook show trial. The purpose of the case was not justice. It was so party parrots could clack around squawking “34 felonies! Rraaawk! 34 felonies!” “Trump’s a felon! Rraaawk!” They’re still squawking.
Cartoon by Sora
In what is known in the business as a “zombie case,” prosecutors took misdemeanors whose statute of limitations had expired and elevated them to felony status by alleging that records were falsified to conceal another crime. How fake was this? Totally fake.
The fakery was present all along. The indictment did not specify exactly what the underlying crime was, pointing opaquely to “election-related” or “financial-related” violations, which, at the outset, denied Trump clear notice of the charge he supposedly intended to conceal. The first question any objective and rational person asks when seeing this is how a criminal trial can proceed on an indictment that specifies no underlying crime.
It gets even more absurd from there. The judge’s jury instructions required jurors to unanimously agree that Trump falsified business records, but did not require them to unanimously agree on which underlying crime Trump intended to commit or hide. In other words, jurors could rely on different theories of what the unspecified underlying felony was, just so long as they unanimously returned a felony conviction.
See the problem? I hope you do. I want to believe you do. All that mattered was that jurors said Trump was guilty of something; they didn’t have to determine what he was guilty of, and the prosecutors never told them, because they were working from an indictment that never specified the underlying crime.
Illustration by Sora
This is why I often respond to the squawking of parrots with the question, “Have you ever read Franz Kafka’s The Trial?” In Kafka’s story, the precise charge against the accused, Josef K, is elusive, the logic of the accusation shifts (there is no logic, really), and K is expected to defend himself against something that is never clearly or fully articulated. K is trapped in a process where the form of legal procedure proceeds with great seriousness while the substance remains phantasmagoric. K is executed, never knowing what he was being executed for.
Adding to the insanity, after the verdict was delivered, the judge, Juan M. Merchan, imposed an unconditional discharge—no prison time, no fine—despite having allowed a conviction structure to stand that could not reasonably be expected to survive appellate review. Merchan could have set aside the verdict; instead, he participated in the farce, allowing the show trial to go through the motions. And the point of the whole thing was to train party parrots to squawk a squawk. “34 felonies! Rraaawk! 34 felonies!” “Trump’s a felon! Rraaawk!”
Even the way the parrots squawk the squawk is brainless. Trump wasn’t convicted of 34 separate felonies. He was convicted of one felony offense (whatever it was), charged across 34 counts. So even if one accepts that Trump was legitimately convicted of something, it would not be “34 felonies” but a “34-count felony.” The 34 counts? Every one of them was a misdemeanor whose statute of limitations had expired.
There are certain things people say that disqualify them in my eyes from having anything to say worth listening to. This is one of them. Every time I hear somebody say that Trump is a felon, I know they haven’t a clue about the case or the law. Scarier is that they believe that show trials appropriate to the Soviet Union under Stalin should be run in America. They want Trump to be a felon because they loathe him, not because they have a shred of integrity or commitment to the rule of law.
Words are presumed to carry power, especially words that offend people. The very idea that a word can “offend” someone depends on an imagined or assumed structure of power. When a term is labeled a slur, it is usually because it is thought to emerge from, reinforce, or call into being some underlying social hierarchy. For example, there are words that black people can use to describe white people that technically qualify as slurs, yet very few white people are seriously offended by them. There is a presumption that whites hold structural power over blacks and thus that such words do not injure. Moreover, on this presumption, whites deserve to suffer slurs since they are the oppressors. The presumed asymmetry of power flows in one direction, and that presumption shapes how the words operate. (Do you see the paradox?)
In the opposite direction, there are words that white people can use toward black people that are deeply hurtful. The assumption is that such words express or invoke a position of power, and that they carry within them the weight of a larger social asymmetry. At the same time, black people may use these same words among themselves and often argue that this usage strips the words of their oppressive power—an act of rhetorically “reclaiming” language from the dominant group.
We see a similar dynamic in words directed at gay people: slurs aimed at gay men or lesbians wound deeply, while parallel slurs thrown at straight people land with far less force. Yet accusations of homophobia, like accusations of racism, can be hurtful because they charge the accused with moral wrongdoing. In that sense, the equivalent offense on one side is the use of a derogatory term; on the other side, it is the accusation that the person is morally tainted for supposedly using or embodying a derogatory attitude that manifests the asymmetry of power.
Over time, some words become so heavily charged that even referencing them without malice becomes taboo. The power dynamic is so baked in that people avoid speaking the word outright and instead reduce it to constructions like “the N-word” or “the F-word.” Yet, everyone who hears the euphemism instantly imagines the actual word in their mind. Even the people who would be offended if they heard the word mentally summon it the moment the euphemism appears. It is in everybody’s head (or else we wouldn’t know what was being conveyed). The taboo becomes paradoxical: the word is forbidden to speak, but impossible not to think.
This dynamic is on my mind today because of the controversy surrounding the word “retarded,” now frequently replaced by “the R-word.” When I was growing up in the early 1960s, words like “idiot,” “imbecile,” and “moron” were understood as synonyms for “retarded.” Yet today “retarded” alone has taken on the status of a sacred or forbidden term. It resembles, in a way, the ancient Jewish taboo against vocalizing the actual name of God; instead, one used circumlocutions. Only priests or scribes could speak the divine name. This taboo was built on the assumption of an asymmetrical power relation between the clerical class and ordinary people. Similarly, our modern panoply of offensive words functions as a set of secularized sacred terms—words that cannot be uttered because of the social power they are imagined to reveal.
Thus, what we call “offensive language” is really a structure of sacred language embedded within an imagined system of power. This is what postmodern philosophers describe as discursive formation: the idea that language does not so much reflect power as generate and organize it. If one is to have power, one must control language (yet another paradox). While the term is modern, the underlying phenomenon is ancient. Civilizations long before ours used regulated language—taboos, sacred terms, forbidden names—to enforce and perpetuate structures of power. In that sense, nothing about our current landscape of forbidden words is new. The observation is simply that we have reinvented an old form of linguistic sacredness under secular conditions.
When I was growing up in church, I learned something about power that I now see as parallel. I often heard it said that the devil—Satan—has only the power that God allows him. If we imagined Satan as possessing independent, self-generated power, a kind of standalone evil deity, then Judaism and Christianity would be polytheistic rather than monotheistic. But the theology I heard insisted that God alone is sovereign and that anything Satan does occurs only within limits established by God (see the story of Job).
Years ago, during a debate on CNN’s Crossfire with a guest—likely someone associated with the Moral Majority, since the exchange occurred during the campaign to ban or label certain song lyrics—Frank Zappa repeatedly emphasized that lyrics are simply “words,” nothing more than letters arranged in a particular order to convey an idea.
Zappa, a well-known atheist, approached the issue from a perspective I share. My objection to any theological system that forbids certain words from being spoken—what is traditionally called blasphemy—has always been strong. I find the creation and exercise of such power offensive. Here, I am not using “offensive” in the sense of hurtful words; rather, I find it offensive when systems restrict people’s freedom to speak. I find it offensive because it is illiberal and totalitarian.
The theological concept of blasphemy has been secularized: the same logic now governs prohibited social words, where uttering them—especially depending on who speaks—can trigger sanctions. This phenomenon shatters the illusion of presumed power. The real power structure is revealed when people find themselves on the disciplinary end of this linguistic control system. This is a situation of inequality; liberty is manifest when everybody enjoys equal access to words to express their thoughts.
It takes a lot of courage, I know, but we should collectively refuse to participate in a system that punishes people for uttering words and should actively work to dismantle such punitive mechanisms. It is not as if we don’t have the tools to wage this fight. The First Amendment to the US Constitution can be understood as a recognition that power structures have historically used punishment for certain forms of speech as a tool of authoritarian control. The Framers rebelled against that power. To allow a system of linguistic control is fundamentally at odds with the free and open society envisioned in American jurisprudence.
A post circulating on X claims that Japan is hostile to Islamic burial practices and that these practices are effectively banned. The claim is not entirely accurate. However, Islamic burial customs indeed face significant constraints in Japan. The post frames the issue as a suppression of religious liberty. My contribution to these threads—posed as a rhetorical question—is whether there are legitimate limits on religious freedom. Of course there are. However, before explaining why, I would like to outline Islamic burial traditions and the current situation in Japan.
Islamic tradition requires burial. It strongly prefers that the deceased be buried as soon as possible—ideally within 24 hours and traditionally before sunset if death occurs earlier in the day. Embalming is generally strongly discouraged or outright prohibited, and cremation is strictly forbidden. In Japan, however, cremation is the overwhelmingly dominant practice (99.8 percent of corpses are cremated).
A small number of Japanese cemeteries accept Muslim burials, but they are few, often far from major Muslim enclaves, and sometimes prohibitively difficult or expensive to access. When local Muslim groups attempt to establish new cemeteries, they frequently encounter strong local resistance based on concerns about cultural identity, groundwater contamination, and property values. As a result, proposed cemeteries are routinely canceled.
The issue, then, is less one of explicit state prohibition than of de facto exclusion resulting from administrative hurdles, community opposition, and cultural norms. In practical terms, Muslims in Japan face significant obstacles to securing a burial that aligns with their faith—an ongoing problem (for them, at least) even without a formal national ban.
My rhetorical question to posters is whether they believe it would constitute an infringement of religious liberty for Japan (or most any country, for that matter) to prohibit funerary practices involving endocannibalism—anthropologists’ term for the ritual consumption of members of one’s own community as part of mortuary rites. Such practices were not acts of hostility but expressions of cosmological belief, mourning, and reverence for the dead.
This is not a theoretical scenario. Various societies around the world have incorporated ritual cannibalism into their treatment of the dead, viewing it as a compassionate means of honoring the deceased, maintaining spiritual continuity, and strengthening social solidarity.
As an anthropology minor, I took an entire course on cannibalism taught by Dr. Marilyn Wells, whose fieldwork spanned Central America, East and West Africa, and Papua New Guinea. Her lectures and course materials were fascinating. When we reached the topic of endocannibalism in funerary rites, I remember clearly thinking about multiculturalism and whether Western nations should tolerate the practice in the name of religious liberty. Cannibalism is often my go-to example when testing the limits of religious freedom.
One of the best-known examples is the Fore people of Papua New Guinea, who practiced funerary cannibalism into the mid–twentieth century. For the Fore, consuming parts of the deceased preserved the person’s auma—spiritual life force—within the kin group. The auma, the source of vitality, contrasted with the aona, the physical body. Endocannibalism was the consumption of corpses, not the living, and served to protect the community from the kwela, a dangerous spirit believed to linger after death. The practice was eventually suppressed after researchers linked it to kuru, a fatal prion disease.
Concerns about groundwater contamination associated with burial—including those linked to specific Muslim burial methods—are entirely rational, and Japan is within its rights to impose restrictions for public health reasons. But more is at stake. The Japanese have the right to preserve their cultural practices within their own country—a right one might expect cultural relativists to defend.
Yet, within contemporary progressive discourse, the cultural norms of advanced societies such as European and East Asian nations are often treated with contempt and deemed unworthy of protection. Meanwhile, primitive cultures are presumed to possess an absolute right to preserve their traditions, even when doing so imposes significant burdens on the host society. Resistance to extreme and primitive religious practices is thus framed as a violation of the very religious liberties to which advanced populations are expected to subscribe.
The Fore are not the only example. The Wari’ of the Brazilian Amazon also practiced funerary cannibalism. For them, consuming the dead was the most respectful mortuary practice; in contrast to Muslims, burial was considered degrading and emotionally harmful. Anthropologist Beth Conklin has written extensively about how Wari’ mortuary cannibalism expressed compassion, reinforced emotional bonds, and strengthened solidarity among survivors.
Various Melanesian groups likewise practiced funerary cannibalism, often as part of cosmological frameworks that guided the spirit of the dead or preserved aspects of their essence within the lineage. In the Amazon Basin, groups such as the Amahuaca and neighboring peoples consumed parts of the body during mourning rituals. Certain indigenous Australian groups historically ingested charred bone powder as a way of symbolically incorporating the spirit of the deceased.
Cultural relativism was a recurring theme in my cannibalism course and, more broadly, in the anthropological and sociological curricula, so I can accurately convey the relativists’ viewpoint: although foreign or unsettling to outsiders, these practices are deeply meaningful to the cultures in which they appear. If one asks why this should matter, that is the right question. Moreover, why should cultural relativism be anything more than an epistemological problem and a methodological approach? It does not follow that it should also be a moral standpoint.
Ultimately, the question comes down to this: Why should the practices of foreign cultures impose burdens on host countries? What moral obligation does a society have to tolerate religious rituals that are profoundly alien to its own traditions—especially when these practices compromise social cohesion, disrupt cultural norms, and threaten public health? While religious liberty is a vital principle, it is not absolute. Host societies have every right—and indeed, a responsibility—to set reasonable limits that protect the culture, values, and welfare of their citizens.
Moreover, tolerance must be reciprocal: just as outsiders should respect the laws and norms of the countries they inhabit, host nations are justified in shaping the boundaries of acceptable practice, particularly when the stakes involve both public safety and the preservation of cultural integrity.
If foreign culture-bearers wish to continue their traditional practices, they need not enter countries that do not tolerate extreme or primitive rituals. They can stay where they are. We should prefer that they do. And if they are not allowed to practice their rituals where they are, because, for example, their people have been integrated into a superior, more advanced group, then the following generations can thank those who stopped them.
When a person discovers that they are wrong about something—especially something of significance—they ought to ask a further question: What else might I be wrong about? A single error can be dismissed as an isolated lapse (everybody makes mistakes or misses something), but recognizing a substantial error should naturally prompt a broader self-examination.
Beneath that lies an even deeper question: Why was I wrong? For it is the “why” that reveals whether there is a flaw in one’s thinking, methods of reasoning, or habits of evaluating evidence. Identifying the cause of an error helps prevent the same underlying problem from quietly generating future mistakes.
Most people do not reach this deeper question until they have first checked whether they might also be wrong about something else (of course, some people never reach the deeper question). If they uncover a second significant error, or several, and still fail to ask why their judgments are misfiring, the issue is no longer a simple mistake—it becomes a matter of cognitive integrity. A pattern of errors suggests that what requires scrutiny is not just one’s conclusions, but one’s intellectual process itself.
Realizing one was wrong about a single belief may be unremarkable. Realizing one was wrong about two or more important matters calls for a harder look at the structure of one’s thinking. Multiple false beliefs rarely occur by accident; more often, they signal a deeper problem in how a person forms, organizes, and justifies their views.
As mistaken beliefs fall away, the result can be a profound reordering of one’s worldview. But it may also result in recovering deep principles. Indeed, the ability to admit one is wrong, and to see that there is a reason one has arrived at wrong conclusions, itself points to deeper principles of which one may be unaware or which one may have forgotten.
Epistemology concerns the nature and justification of knowledge, while ontology concerns what fundamentally exists or is true. The shift I am describing may ultimately reshape both epistemology and ontology: not only how a person acquires and evaluates knowledge, but also what he believes to be true about reality. When a person confronts the roots of his own errors, both dimensions of his thinking may undergo significant revision. At the same time, as I have suggested, it can result in the reclamation of a deeper understanding.
If the latter, then what explains the buried principle or lost understanding? Affinity and ideology play a central role here. By ideology, I mean a way of thinking that systematically distorts a person’s epistemological approach—assuming, of course, that a rational and undistorted approach is possible (which I believe it is, since I believe in objective truth). Ideology does not merely mislead someone about particular facts; it warps the framework through which facts are assessed. The warping corrupts not only the content of one’s knowledge but also one’s cognitive integrity. The individual’s sense of intellectual honesty, his standards for evidence, and his capacity for self-correction can all erode under the weight of an ideology that supplies ready-made answers and shields its adherents from uncomfortable truths.
Partisan loyalty and tribal affinity also play roles in keeping people from reason, from a clear assessment of the evidence, and sometimes from the evidence itself.
In 2018, when I discovered I was wrong about systemic racism in American criminal justice, I wondered what else I was wrong about—and why. I began taking long walks, during which I reconsidered what I believed, sorting out which beliefs were worth keeping and which needed jettisoning. Critical self-examination led to reflection on the deeper structure of belief-formation. This led me to recover something professional development had compromised: common sense. Of course men can’t be women; there is no science there. It also led me back to my commitment to women’s rights. What was I thinking? That was the problem. I wasn’t thinking. I was following.
This is where humility becomes so important to intellectual development. Humility is the cornerstone of personal growth and meaningful relationships because it allows us to acknowledge that we are not infallible. It’s okay to be wrong. It is not okay to deny oneself the capacity to admit it. It is not fair to others. And it is unfair to oneself.
Recognizing that humans can be wrong requires courage, self-awareness, and a willingness to confront our own limitations. When we admit our errors, we not only correct misunderstandings but also foster trust and openness with others. Of course, we depend on others to extend charity in such situations. Alas, one discovers that some do not wish us to be wrong, especially when they have relied upon us for their appeals to authority. If the authority changes his mind on some matter dear to them, it cannot be, in their eyes, that he corrected an error; rather, he must have become misguided in his judgment. The error is what they want to continue believing in. They lose faith in something they should never have placed faith in: the infallibility of others.
Humility transforms mistakes from sources of shame into opportunities for learning, for getting closer to the truth. By embracing the possibility that one’s perspective may be flawed, a man cultivates empathy, deepens his understanding, and creates space for collaboration, ultimately becoming a wiser and more compassionate individual. Those around him with the same humility can grow with him, or at least acknowledge that opinions can differ. The stubborn can condemn the man for “switching sides.” But that’s their problem, not his.
The media are reporting that President Donald Trump’s friendly Oval Office meeting with the soon-to-be mayor of America’s largest city, Zohran Mamdani, on November 21 roiled parts of the MAGA base. The New York Times was somewhat less optimistic in its assessment. “There was one moment when Zohran Mamdani seemed like he might have bit off a little more than he could chew by making his pilgrimage to the lion’s den that is President Trump’s blinged-out Oval Office,” Shawn McCreesh writes. “The 34-year-old mayor-elect of New York was pressed by a reporter if he thought his host, who was sitting about four inches away, was really ‘a fascist.’ How terribly awkward.” Indeed. “But before Mamdani could get out an answer,” McCreesh continues, “Trump jumped in to throw him a lifeline. ‘That’s OK, you could just say, Yes,’ Trump said, looking highly amused by the whole thing. He waved his hand, as if being called the worst term in the political dictionary was no big deal. ‘OK, all right,’ Mamdani said with a smile.”
New York City mayor-elect Mamdani at the side of President Trump in the Oval Office, November 21, 2025
The interpretation of this moment is easy to get right. Contrary to what progressives desperately want the public to believe, Trump is highly intelligent, and he played Mamdani like a fiddle. The fascist smear doesn’t play well today, not just because of overuse, but because calling a liberal businessman from Queens a fascist is so inaccurate that it draws an eyeroll from those who hear it misapplied. Their thought is: “There you go again.” By playing it cool, seated at the Resolute Desk, an imposing figure even while sitting down, Trump made Mamdani look small and insignificant. He let Mamdani, his arms folded in front of him like a schoolboy, do his thing: talk without saying anything. What anybody prepared to accept reality saw was the mayor-elect bending the knee to the President of the United States. Trump gave Democrats nothing. His strategy was obvious: when Mamdani fails, the Muslim’s sycophants won’t be able to talk about a confrontational moment at the White House.
What observers didn’t know was that Trump had something in his pocket. Just 72 hours later, the White House gave supporters (and most Americans, if they understood the situation) something they had long sought: an executive order initiating the designation of Muslim Brotherhood chapters as foreign terrorist organizations: “Within 30 days of the date of this order, the Secretary of State and the Secretary of the Treasury, after consultation with the Attorney General and the Director of National Intelligence, shall submit a joint report to the President, through the Assistant to the President for National Security Affairs, concerning the designation of any Muslim Brotherhood chapters or other subdivisions, including those in Lebanon, Jordan, and Egypt, as foreign terrorist organizations.”
Founded in Egypt in 1928, the Muslim Brotherhood is a transnational Islamist movement that has influenced Islamist organizations and parties worldwide. The Brotherhood plays a chief role in the Islamization project. Trump’s EO allows the federal government to investigate, among other things, the Brotherhood’s public relations firm, the Council on American–Islamic Relations (CAIR). Founded in 1994, CAIR describes its mission as advocating for Muslim Americans, fostering understanding of Islam, and protecting civil liberties. CAIR’s super PAC, the Unity & Justice Fund, donated thousands of dollars to New Yorkers for Lower Costs, one of the main PACs backing Mamdani. Mamdani is the smiling face of the Islamization project.
With this EO, Trump is signaling significant movement against the project. But he is doing much more than this. Indeed, even before the November 24 order, following his meeting with Mamdani, Trump ended Temporary Protected Status (TPS) for Somalis in Minnesota. On November 27, the President announced a review of green cards for Afghans and for holders from 18 other “countries of concern.” The review was triggered by the targeted shooting on November 26 of two National Guard members, who were ambushed near the White House by an Afghan refugee. The shooter, Rahmanullah Lakanwal, a 29-year-old Afghan national who had previously worked with a CIA-backed paramilitary unit in Afghanistan, was one of tens of thousands imported to the United States by the Biden regime, in a program organized by then-DHS Secretary Alejandro Mayorkas.
Readers will recall that Trump has confronted Islam before. In a January 2017 essay, Executive Order 13769: Its Character and Implications, I argued that, if democracy and liberalism are to prevail, “the state must preserve secular values and practices, and every person who enjoys the blessings of liberty should dedicate himself to ensuring the perpetuation of this state of affairs. A liberal democracy must proceed based on reason.” Therefore, I continued, the conversation about Trump’s actions in 2017 should be grounded in “an understanding of the unique problem Islam presents to human freedom, as well as an examination of the European experience with Muslim immigration.” I noted that “[t]he problem that many on the left fail to consider is the corrosive effects of an ideology antithetical to the values and norms of Western society—government, law, politics, and culture—and the need for a policy that deliberately integrates Muslims with these values and norms, as well as promotes these values in the Islamic world.” I saw in the reaction to Trump’s order “an opportunity to have a broader conversation about Islam and immigration.”
Trump’s actions have Steve Bannon of the podcast War Room embracing the late Christopher Hitchens’ warning about Islam: that the Islamization (or Islamification) of the West is an existential problem. Atheists and liberals have long warned conservatives about the Islamization project, and I think I speak for many of us when I say that we welcome conservatives to the fight. We don’t have much time to turn things around, however, so the more robustly Republicans address the problem, the better (and they’d better put a strategy in place before the 2026 midterm elections). Indeed, America (and the West more broadly) should move aggressively to contain Islam in the same way the West contained communism during the Cold War. Just because Islam calls itself a religion is no reason to throw open the doors of Western civilization to Muslims. After all, as Hitchens noted, it’s not as if communism weren’t also effectively a religion; Islam, a species of clerical fascism, represents no less a threat to the internal security of the nations across the trans-Atlantic space.
Christianity is about charity, love, tolerance, and many other good things. But many Christians have forgotten, or never learned, the history of Islamic conquest and the reality that our Christian ancestors took up swords and saved Europe from the fate suffered by the Middle East and North Africa. Those regions, formerly thriving Christian centers of the world, are now primitive hellholes where women are treated as second-class citizens and the fate of hundreds of millions has fallen into the hands of clerics working from a plagiarism of Judeo-Christian texts that twists those scriptures into a totalitarian system. It was Christians, including militant monks, who repelled the Muslim barbarians with violence, drove them from Europe, and secured the future for Christianity. Had they not acted when they did, there would be no Europe. No Europe, no America. No Enlightenment. No human rights. Only clerical fascism. Tragically, modern Christianity has made Nietzsche’s critique of the religion a reality by rejecting the militant side of the faith and suppressing the human instinct for self-preservation (see Republican Virtue and the Unchained Prometheus: The Crossroads of Moral Restraint and the Iron Cage of Rationality).
As I noted in those essays, Muslims have now added to the tactic of military aggression that of mass migration to the West and the progressive Islamization of the trans-Atlantic space. Migration is a strategy to conquer the civilized world from within. The softest parts of Christianity, strategically exploited by transnational elites, live on in the progressive attitude that empathizes with Muslims and the barbarian hordes while rejecting the militancy necessary to repel the existential threat Islam represents to human dignity and freedom. The failure of Westerners to take up both sides of Christianity—the soft (selectively tolerant) and the hard (militant) sides—portends disaster. At the same time, what militancy remains, progressives have aimed at their fellow Westerners. We must not be shy about calling things what they are: the left has become a fifth column in the West, working with our enemies to bring down Western civilization.
Reflecting on this, I have lost confidence in the United Nations and the efficacy of international law to defend freedom and human rights. When the United Nations was founded, it was established on Western values of international cooperation and law. The Universal Declaration of Human Rights emerged from this framework. But not all member states endorsed it in substance, even if they formally signed onto it. Moreover, Muslim-majority nations developed their own declarations of rights—most notably the Cairo Declaration on Human Rights in Islam—which is founded on Sharia rather than the Enlightenment principles that gave rise to democratic republicanism and human rights. As a result, the UN includes a wide array of states whose commitments to democracy and rights are not aligned with the Western standards that originally shaped the institution. These Western standards are not arbitrary; they are the product of reason in the context of European culture, made possible by the Protestant Reformation and the broader intellectual currents of Christian civilization.
This matters when we consider cases such as Israel (see my recent essay How Did the Roles Get Reversed? The Moral Confusion Surrounding Israel and Gaza, and embedded links). If the UN or its agencies are asked to adjudicate whether Israel is responsible for genocide in its response to the massacre of Jews in Israel on October 7, 2023, the judgment would ostensibly rest on the legal definition of genocide—a Western juridical concept. In practice, however, the judgment rendered would be heavily influenced by the political alignments and value systems of states that do not share the underlying philosophical commitments from which those legal definitions arose. Many of these states are openly hostile to Israel and to the West. Perhaps the UN won’t make this determination. But one has reason to worry it will. (And then what?)
When reflecting on this dynamic, it is easy to think of the contrast presented in Star Trek’s construct of the United Federation of Planets. Starfleet included many different species and cultures, but they were all integrated into a framework of shared values rooted in Enlightenment-style principles and liberal norms—equality, reason, tolerance, universalism. Diversity existed, but it was anchored in a common civilizational ethic. In contrast, groups like the Klingons and Romulans, who did not share these principles, remained outside the Federation and were recurring sources of conflict because their worldviews diverged so fundamentally. I raise a 1960s sci-fi TV show and its spin-offs because the franchise shaped the beliefs of many Americans who today contemplate the world situation. Because the antagonism is portrayed as occurring out in space, viewers do not see the Klingons and Romulans as analogs to Muslims.
However, the contemporary terrestrial situation more closely resembles the dark side of that fictional interstellar one. The real Earth is divided by profoundly different religious and civilizational traditions, and there is no universally accepted philosophical foundation uniting all nations. Had the West colonized the world and brought it to the principles of individualism and secularism, it would be a different matter. Yet even in its failure to accomplish this, the very desire is portrayed as imperial ambition. The UN project to include every state in a single system of international cooperation by tolerating the cultures of barbaric countries and regions has undermined its original purpose. Instead of a mechanism for upholding universal principles, it has become an arena in which illiberal, non-Western, and even totalitarian regimes can leverage their numbers to dilute, reinterpret, or subvert the values the institution was created to advance and defend.
Last night, I revisited an interview with Hitchens, conducted by Harry Kreisler for UC Berkeley’s Conversations with History, in which he expresses optimism about the role of international law in holding member nations to account based on a universal standard of treatment. His argument is similar to those advanced by pro-Arab intellectuals Noam Chomsky and Norman Finkelstein, who insist on putting Israel’s fate in the United Nations’ hands. However, the validity of their argument depends on a planet-wide uniformity of values aligned with the underlying principles upon which a just international law must rest. It should be obvious that no such uniformity exists. Given this, one must ask whether justice is what these intellectuals desire or whether their sentiments are driven more by hostility toward the Jewish state.
The reality of the world we live in makes such uniformity impossible: China, with its totalitarian ambitions and radically different conception of the world, grows more belligerent by the day, and the ambitions of Islam and the rest of the Third World rule out such uniformity just as surely. The universalism desired by those who established the United Nations and further developed the system of international law presumes the hegemony of the Western worldview. There is no such hegemony. Only in a fantasy world like Star Trek could such a situation exist. At this point, we can’t even count on Europe to uphold the foundational values that support the endeavor. Europe is well into its Islamization phase, and the pessimistic side of me has trouble believing that the continent hasn’t passed the point of no return.
We must therefore ask whether the United Nations is something worth continuing in its present form. How can we allow barbarian cultures and corrupt elements of the West to determine the fate of mankind? At the very least, how can we leave the fate of America to such madness? The situation demands a comprehensive rethink. In the meantime, Trump is doing the right thing: halting mass immigration and reviewing the status of those who have entered our country.
* * *
Because of all the anti-Western and anti-white rhetoric the occasion of Thanksgiving has provoked, I want to close with a couple of historical notes. For it was not just the false claim of “stolen land” that progressives rehearsed (see Gratitude and the Genocide Narrative: Thanksgiving and the Ideology of Historical Responsibility), but also the African slave trade. “Never forget,” the advocates lecture. I’ll take them up on that.
First, the Asante (an ethnic group of modern-day Ghana) were deeply involved in the slave trade, particularly from the seventeenth through the nineteenth centuries. Readers may remember that Democrats wore the ceremonial garb of the Asante, the Kente cloth, during the BLM riots, a large-scale uprising against the West and white people triggered by the overdose death of convicted felon George Floyd while in the custody of Minneapolis police.
Second, white Europeans (millions of them) were enslaved in the Barbary States for several centuries. The Muslim slave trade—also called the Arab, Islamic, or Trans-Saharan slave trade—was one of the largest and longest-lasting systems of slavery in world history, spanning over 1,300 years, involving multiple regions and empires, and both predating and outlasting the Atlantic slave trade. In fact, slavery continues in the Islamic world. I will say more about the Barbary States, in particular Tripoli, today an open-air slave market, and then bring these closing remarks around to the point about religion and freedom.
During Thomas Jefferson’s presidency, the United States intervened militarily against the Barbary States—Algiers, Morocco, Tripoli, and Tunis—because these North African regimes sponsored piracy and the enslavement or ransoming of captured American and European sailors. For centuries, Barbary corsairs seized ships in the Atlantic and Mediterranean, forcing nations to pay tribute for safe passage. After the American Revolution, the US no longer had British naval protection, and American crews were increasingly captured. Earlier presidents agreed to pay tribute to the Barbary States, but Jefferson believed this was dishonorable and unsustainable.
In 1801, when Tripoli demanded increased payments, Jefferson refused, prompting the ruler of Tripoli to declare war. Jefferson responded by sending the US Navy to the Mediterranean, launching the First Barbary War (1801–1805). The conflict included naval blockades, ship-to-ship battles, and the famous 1804 raid led by Lieutenant Stephen Decatur to destroy the captured USS Philadelphia. The war ultimately forced Tripoli to renounce future tribute demands and release American captives, marking the first major overseas military campaign in US history and establishing America’s willingness to confront piracy and state-sponsored enslavement abroad.
As I noted in a December 2023 essay, Rise of the Domestic Clerical Fascist and the Specter of Christian Nationalism, the Treaty of Peace and Friendship with Tripoli, ratified in 1797 and later superseded by the 1805 treaty that ended the First Barbary War, included a famous clause emphasizing the secular nature of the US government. “As the Government of the United States of America is not, in any sense, founded on the Christian religion,” Article 11 states, “it is declared that there is no hostility on the part of the United States to the laws, religion, or tranquility of Muslims.” This provision was intended to reassure Tripoli that the US, though largely populated by Christians, was not a religiously motivated state and had no intention of spreading Christianity through its foreign policy.
The inclusion of Article 11, however diplomatically strategic, testifies more profoundly to the American principle of separating religion from government, even in international relations, and is often cited as evidence that the US government was officially secular even while its citizens were predominantly Christian. I have invoked this clause many times in my insistence that the United States is not and should not become a theocratic state.
However, America’s adversaries do not advance such a principle; Islamic countries are not secular even while their citizens are predominantly Muslim. If they were, it might be reasonable to tolerate Muslim immigrants, as they would have been socialized in a secular culture that respected other religious faiths (or, as in my case, the absence of faith altogether). But as I have explained many times, since humans are culture-bearers, those bearing cultures incompatible with secular ethics are not suited to reside in America. They should therefore be barred from entering the country.
Whether we are a Christian nation is a point reasonable people can debate, but those who believe all laws derive from Islam are a priori unreasonable people. No discussion is possible with such people. Therefore, the rational policy is to keep those animated by irrational cultures from entering and subverting Western institutions.