In my last blog post I talked about my experience at the Green Bay Area Public Schools board. There, I spoke about our nation’s fundamental law and its ideals of individual liberty, equality before the law, and religious neutrality in government. That argument should have carried the day. But it didn’t. The school board voted to continue a policy that does serious violence to the free speech rights of students, and it allows building administrators the discretion to continue a conditional ban on head coverings. The policy did get one thing right, namely ending sex discrimination in attire, but it got everything else wrong. By failing to take the matter of head coverings out of the hands of administrators, the school board gave administrators permission to violate the civil rights of students. In other words, the board did not protect the rights of individuals.
My sons’ mother is Swedish. She was raised in Sweden and moved to the United States in adulthood. Both of my sons are proud of their Scandinavian heritage. All members of my family wear a Mjölnir pendant, a stylized hammer of the Norse god Thor. We also own a seax, or Viking dagger, of the kind traditionally carried by Scandinavian males. I could not bring my seax to show board members because knives are not allowed in public buildings. I agree with this policy, for reasons I will explain in a moment. But I want to first emphasize that, although we are a family of secular humanists, we find Scandinavian culture and Norse mythology important to our family’s cultural heritage; we observe many Scandinavian traditions and frequently return to my wife’s motherland to imbibe those rich traditions.
Seax
My son would not be allowed to bring his seax, a dagger, to school. I do not think he or any other boy should be allowed to bring a dagger to school, because we cannot know in advance who may suffer from an antisocial personality disorder or impulse control problems, may be prone to fits of jealous rage, or may perceive a need to use a weapon in self-defense or in retaliation for some offense. Because we cannot know this, and because the consequences of a boy using a dagger to harm other students are so great, the school exercises prior restraint in the same way that the TSA does not allow knives on airplanes. Instead of waiting until someone uses a weapon on someone else, we recognize the possibility of such an occurrence and act in a general way to reduce the risk to public safety. This is reasonable. Since we have no way of rationally determining who is safe with a knife, we treat everybody the same. As such, this policy is nondiscriminatory.
There is a way to rationalize differential control by appeal to aggregate statistics. An institution might note group differences in rates of violence, observing, for example, that white males have a significantly lower rate of violence than black males, or that white males have cultural experiences with knives that differ from those of black males. However, it would be discriminatory to create a policy that forbade the carrying of knives by black students while allowing white students to carry them. I hope the reason why is obvious to you, but to make sure: it is discriminatory because the institution is prejudging concrete individuals based on patterns identified in statistical aggregates. As impressive as numbers can be, the presentation of statistical information does not change the reality that making judgments about individuals based on a characteristic such as skin color is stereotyping.
In October of 2014, administrators at a Washington state school district decided to let a Sikh boy carry a kirpan on school property. A kirpan is a dagger, similar to my son’s seax. Both are impressive and serious weapons. Sikhs have five articles of faith, the five Ks, and the kirpan is one of them. All baptized male Sikhs are expected to carry one. But knives are dangerous weapons. We do not allow them on airplanes. We do not allow students to carry them in school. So why is this school district allowing the Sikh boy to carry a dagger? Because it is an article of faith, a religious accommodation must be made. But according to the foundational law of the American republic, government is obliged to remain neutral on questions of religion. It cannot discriminate on this basis, which means it can neither grant privileges nor restrict freedom for religious reasons. Reasons for limiting freedom must be based on rational secular grounds.
Kirpan
The decision to ban the carrying of knives in public school buildings is based on a concern for student safety, which is a rational justification. An exemption from the rule for Sikhs is based not on rational secular grounds, but on the grounds of religious identity. This is irrational, since the claim is that, on the basis of religious identity, we may assume that all the reasons knives represent a public safety hazard do not apply to Sikhs. Without any evidence, Sikhs are assumed to not suffer from antisocial personality disorder or impulse control problems, to not be prone to fits of jealous rage, and to never perceive a need to use a weapon in self-defense or in retaliation for some offense (or, if they do have a reason to defend themselves, they are allowed to use an effective weapon in that defense, whereas other boys are left to use only their fists or their words). On what rational basis can administrators know that Sikh boys are immune from all the things non-Sikh boys are assumed to be capable of? Administrators are prejudging an entire group of people based on their religious identity. Administrators are saying to my son that he cannot be trusted with a dagger based on his identity. Because he is not Sikh, he cannot be cleared of the possibility that he may suffer from antisocial personality disorder or any of these other things.
My son would not be allowed to carry a knife in school because of his identity. It would be exactly the same as saying that my son cannot carry a dagger to school because of his race. Of course, race and religion are different. Race doesn’t carry an ideology; religion is an ideology. One can tell nothing about an individual’s motives and behavioral proclivities on the basis of skin color, whereas religion is a source of motives and habits. But from a civil rights standpoint, the government is obliged to treat members of different races and different religions in a neutral manner. The government cannot discriminate on the basis of race, religion, or national origin. Either everybody can carry a knife or nobody can. My son’s liberty is violated because he is not regarded equally before the law. Rather, his group membership is judged to make him unworthy of access to a freedom or resource that another boy is allowed on the basis of his membership. This is discrimination.
On October 21, 2019, I rose in the open forum at the Green Bay Area Public Schools (GBAPS) board meeting to address an agenda item of concern to those who love liberty and wish to preserve the secular arrangements of our democratic republic: the district’s revision of its dress code. Dress and speech codes are not side issues or trivial matters. Controlling people by limiting their manner of self-presentation and restricting their freedom of expression strikes at the heart of the open society.
The dress code as it stood included selective restrictions on messages on clothing and a conditional ban on head coverings, limiting, without rational justification, the freedom of speech and violating the separation of church and state. At the end of the day, the board voted to preserve the bans on speech in the code and to effectively preserve the ban on head coverings, with a religious exception, by deferring to the discretion of school administrators. The policy can be found here: “443.1: Student Dress.” The next day school administrators declared their unanimous support for the existing policy in an email to parents.
In this blog, I share with readers my testimony (the full video and text). Fox News covered the meeting and included in their report a video clip of me calling out the District on the problem of policing student dress and expression (“Green Bay School Board alters policy on hats and hoodies in classrooms”). I want readers to have context. Moreover, the board allows speakers only five minutes for testimony in the open forum, so I will elaborate my argument here. I also want to make some observations about the arguments of other speakers and the discussion that ensued, as well as the decision that was reached.
The problem with the conditional ban on head coverings can be put simply: if some students are free to cover their heads in public school buildings on religious grounds, while other students are punished for covering theirs because they have no religious excuse, then administrators are privileging believers while discriminating against nonbelievers. This is a violation of the Establishment Clause of the First Amendment to the U.S. Constitution.
The board’s decision is not only troubling in its violence to the First Amendment. Language in the policy may be interpreted to permit females to cover their faces for religious reasons. 443.1 reads: “Clothing must not cover a student’s face to the extent the student is not identifiable (except clothing worn for religious or medical purposes)” (emphasis mine). Elsewhere it repeats the policy: “Headwear must allow the face and ears to be visible and not interfere with the line of sight to any student or staff (except clothing/headwear worn for religious or medical purposes)” (again, emphasis mine).
Obviously, the policy is focused on accommodating Muslims, adherents to a doctrine that views girls and women as sexual objects obliged to control the male gaze by hiding their female bodies. The policy thus simultaneously discriminates against those who do not subscribe to the Islamic faith, while allowing Muslims to attend school in extreme modesty dress, including burqas and niqabs. Why can Muslim students be non-identifiable but non-Muslim students must be identifiable?
The email sent to all parents states that “school administrators have determined that they will implement the new headwear policy by maintaining their schools’ current practices in regard to hats and hoods. These current practices reflect their schools’ core values, the individual needs of their schools, and their mission to engage students in learning.” We have to ask, is facilitating the oppression of girls by allowing their identity to be erased really consistent with the values of our public schools? Is hampering the ability to engage Muslim girls in learning by hiding their faces from teachers consistent with the educational goals of our public schools? Do administrators, teachers, and staff in the Green Bay public school system, as well as the school board, really care more about not offending the sensibilities of those who subscribe to oppressive religious beliefs than defending individual rights and personal liberty?
In its decision, the school board abdicated its responsibility to uphold the separation of church and state that lies at the heart of American democracy by granting administrators the authority to violate the religious liberty of their students. My son attends Green Bay public schools. Do we not care about his rights because he is not a member of a recognized minority group? No rational justification was provided for why (some) students should not be allowed to wear head coverings in the school building. Indeed, by exempting religious students from the ban on head coverings, the district has admitted that it has no real justification for banning head coverings. The school board failed to protect the right of individuals to express themselves in ways that do no violence to those around them.
* * *
My Testimony Before the Board
My testimony before the school board. Unfortunately, my argument did not prevail. The school board voted to allow administrators, teachers, and staff to discriminate against students on religious grounds.
I come to you today to argue against the district’s dress code. I have four arguments: two based on the First and Fourteenth Amendments to the US Constitution and related legislative and judicial actions; two others based on the problems of stereotyping and stigmatization. I won’t have time to get to them all, so I urge the board to read my longer statement. I am happy to speak with board members about these arguments in greater depth. In the time I have, I take up the question of head coverings, in particular hoodies. District policy states that head coverings are “not permitted in the school building during the school day” with an exception: “Students may wear head coverings for religious reasons.”
This policy is problematic from a constitutional standpoint. The Establishment Clause of the First Amendment obliges public institutions to remain neutral on questions of religion. If a freedom is curtailed or expanded, it must be for secular reasons; the restriction must apply equally and the freedom must be open to all. Accommodations must not privilege some students while limiting others. As I will show, there is no rational reason for restricting head gear, only an irrational one, and the religious accommodation indicts the policy.
The Civil Rights Act of 1964 outlaws discrimination based on race, religion, and other categories. Brown v. Board of Education (1954) determined that the doctrine of “separate but equal” is inconsistent with constitutional principles. It is inherently unequal to separate, limit, or privilege students on the basis of race. Would the district ever consider instituting a policy that permitted white students access to head coverings, hoodies, for example, but forbade these to black students? Of course not. Such a policy would violate the spirit of civil rights because it treats individuals differently on the basis of race. Yet the district enforces a policy that treats people differently on the basis of religion. This is a pernicious double standard.
What is more, this double standard rests on implicit stereotypes, one indicating virtue, the other disrepute. The assumption is that religious students who wear head coverings are not dangerous or violent, are not using head coverings to hide headphones or weapons or to conceal their identity, and that their head coverings are not distracting: all reasons given for why hoodies should be banned in school. If any of these reasons were legitimate, then religious head coverings should not be exempted, since all the possibilities identified apply in those cases as well. Yet other students, in particular minority youth, especially males, are assumed likely to be using head coverings for these illicit reasons. The policy sees them as unsafe.
Without determining on an individual basis whether students are using a head covering for illicit purposes, a ban with group-based exceptions means concrete individuals are treated as personifications of abstract categories, with bad intentions assumed of some and good intentions assumed of others. This is why, in May of this year, students at Uplift Community High School in Chicago overturned a rule that banned hoodies on their campus. Students at Uplift argued that the ban fed into harmful misconceptions about hoodies being associated with criminal activity, misconceptions based on racial stereotypes. Individual freedom means individuals are treated as such, not judged on the basis of abstract groupings.
Finally, consider all the exemptions the policy would require for students with alopecia, cancer, disfigurements, and other stigmatizing conditions that necessitate head coverings. Presumably, such accommodations are within the principal’s discretionary power. However, just as the policy implies head coverings are disreputable for certain classes of people, exceptions for non-religious reasons spotlight exactly what affected students likely wish to conceal.
I urge the district to retire this policy and allow students to dress and express themselves in the manner they and their parents see fit. If attire and grooming are found to meet the exceptions identified in the Tinker standard established by the Supreme Court in 1969 (which I am happy to explain further), then this can be determined on a case-by-case basis, as it should be in a society of individuals who are to be treated as such before the law, not reduced to representatives of abstract categories and subjected to prior restraint.
It is not the place of the district to police students’ clothing and messaging, especially in a manner that grants privileges to some while discriminating against others. No student should enjoy preferential treatment on the basis of race, religion, nationality, or other categories identified in our laws; all should enjoy equal treatment before the law, which should move in the direction of expanding personal liberty, not limiting it.
* * *
What Followed My Testimony
The board asked no follow-up questions of me. A citizen who followed me expressed the typical authoritarian claptrap about how hoodies reflect kids these days and their lack of respect for their elders. It was the “hike up your pants” cliché. A high school teacher (from West High School) followed him and began by noting the substance of the Tinker standard (that it requires any prohibition on expression to show substantial disruption to the learning environment), as if it were evidence for the disruptive nature of hats and hoodies. He leaned on his “thirty years of experience” to insist that hats and hoodies undermine school climate and learning. He presented no other evidence. He claimed that what he is seeing in the classroom is that hats and hoodies allow some students to disengage from the learning process: one cannot tell whether the student wearing a hoodie is listening to music through earbuds, and hats with brims can be pulled down so that the teacher cannot make eye contact with the student.
Of course, the teacher was compelled by the obvious to stress that only some students use hats and hoodies to disengage. This qualification, which he used to feign reasonableness, speaks to a point I make later about whether it is proper to control some students on account of other students. Moreover, are hats and hoodies actually causing the disengagement in those cases?
He claimed that the problem of using hats and hoodies to disengage is becoming more prevalent, increasing year by year, with the phenomenon starting around five years ago. This is an interesting observation, given that the growing diversity of Brown County has, over the last five years, brought it to the threshold of majority-minority status. Mexican families have settled here, and their kids are now coming through the school system. Moreover, the African-American population has grown substantially just within the last few years. It seems there are racial and ethnic anxieties underpinning the growing moral panic about hats and hoodies.
Putting this to one side for the moment, how does a ban on hats and hoodies address the problem of disengagement if an increasing number of students are using hats and hoodies to disengage? Shouldn’t officials attempt to understand the actual source of disengagement? The teacher’s claim that the policy is not discriminatory because hats and hoodies are worn by people from all racial and ethnic groups seems to give the game away. Not wanting to appear to discriminate, a general rule is passed, justified by the assumption that all are potentially guilty of violating rules, which people don’t really believe. What they believe is that some students will violate the rules. Which students? Now we can bring back in the question of racial and ethnic anxieties.
Later in the meeting, during the forum associated with the agenda item, another high school teacher (this one from East High School) appeared and rehearsed Cosby-esque rhetoric about respectability and career-readiness (had she been a white lady, her comments would have been deemed reactionary). She testified about how students routinely flout the dress code, dwelling on an anecdote about a kid who came to class every day, pulled his hoodie around his head, put his face on the desk, and slept the whole period. On one occasion he did not even hear the bell at the end of class. From this case she drew the conclusion that the ban on hats and hoodies is necessary.
When asked by a board member why the district should keep a policy students so easily disregard, she imagined aloud how much worse the situation would be if students were allowed to wear hats and hoodies. Under further questioning, and in light of student testimony about the psychological aspects of head coverings (one spoke about anxiety, another about anger), she admitted the personal need to feel secure, but insisted that the classroom was not the place for it. “Students should go to student services for that,” she said. (I guess because that’s where students with emotional and psychological problems get their education.)
As various board members spoke, one pointed to the district’s survey of administrators, staff, teachers, parents, and SROs (School Resource Officers, i.e., the police) on the matter and suggested we listen to them. After all, why have a survey if we aren’t going to heed the majority? What a stupid opinion, I thought. If we’re going to heed the majority opinion expressed in unscientific surveys, then why have the board take up the issue at all? Why have a board at all? Why not just rubber-stamp survey findings? Is that what democracy looks like? Tyranny of the majority?
Throughout all of this, I was hoping the board would ask me for clarification. I could have easily batted down each point. The ignorance of basic civil and human rights displayed by most everybody involved in the discussion, as well as in the responses to the various surveys, was astounding, and distressing (this is why majoritarianism is a recipe for disaster). Seeing the personal freedom of persons whom the government forces to spend at least a third of their day (half of their conscious hours) learning stuff the power elite believes will make them “career-ready” (i.e., docile bodies) treated in such a disrespectful and cavalier manner infuriated me. Is the popular desire to keep young people under authority’s thumb a perverse sublimation of the remembered pains of adults’ own adolescence? Is this all about repressing humiliation by giving it back? Did Sigmund Freud have a name for this odious defense mechanism?
Seeing how things were going to go, I left the board meeting. The outcome was predictable. But I have more to say about this.
* * *
The Principles of American and Human Freedom
At the start of the meeting, the audience was asked to rise with the officials and recite the Pledge of Allegiance. “I pledge allegiance to the Flag of the United States of America, and to the Republic for which it stands, one Nation under God, indivisible, with liberty and justice for all.” As an atheist, I have always been irked by the “under God” part (although it is interesting that it does not say “under gods”). That aside, the Pledge contains words we’re supposed to pay attention to. The flag represents the American Republic, which is founded upon core values of individual rights and liberties. Contrary to the multiculturalism that attempts to supersede it, the Pledge tells us that the nation is one nation. It doubles down here: this nation is indivisible. It demands equal treatment of individuals before the law; all are entitled to liberty and justice. Yet, after repeating the Pledge, board members and those who testified failed to connect the meaning of these words to my testimony. I guess the Pledge is just ceremonial. Like the Preamble to the Constitution. Nothing to consider here.
The United States Bill of Rights, ratified on December 15, 1791, lays the basis for ethical conduct with respect to individual liberty and rights. The First Amendment protects personal liberty in the expression of religious and other opinions. Courts have pushed free speech rights down into every level of government, and the Supreme Court has recognized that freedom of speech includes personal expression in art, music, and fashion. The First Amendment also obligates the government to remain neutral on religious questions: government can pass no law or policy respecting an establishment of religion. Supreme Court decisions have expanded the Establishment Clause to cover law and policy and incorporated this principle in every state and locality, including public schools. Freedom of (and necessarily freedom from) religion and freedom of speech and expression are landmarks in the progress of man. The Fourteenth Amendment clarifies that every person dwelling or traveling within the jurisdiction of the United States enjoys the protection of the Bill of Rights; the government cannot deny any individual due process or equal protection under the law. Treating people differently for reasons of nationality, race, or religion in a harmful or unjust manner constitutes discrimination.
In light of this established body of law and the values it represents, the district’s dress code policy does violence to justice. It denies the agency of our young people, the people who will carry forward our democratic republic in our absence. What is the civics lesson here? The policy flies in the face of the Supreme Court ruling that the First Amendment protects student speech. Regardless of political viewpoint, students are allowed to speak, write, and otherwise express thoughts without fear of censorship on the basis of the content of their speech. This precedent is known as the Tinker standard, established in the Supreme Court’s 1969 ruling in Tinker v. Des Moines. At issue was the wearing of black armbands by students, among them the Tinker siblings, to signal opposition to the Vietnam War.
In the limited time the board gives speakers, I could only mention the standard (hoping I would be asked for clarification). In that ruling, the court held: “It can hardly be argued that either students or teachers shed their constitutional rights to freedom of speech and expression at the schoolhouse gate.” The Supreme Court allowed for two exceptions, both of which require officials to provide sufficient evidence to show that the expression in question could result in “substantial disruption of the school environment” or represent “an invasion of the rights of others.” Obviously, neither exception applied to the Tinkers’ expression. The high school teacher who thought he was rebutting my testimony offered no evidence that hats or hoodies triggered the exceptions identified in the Tinker Standard. And, as we will see, his failure to object to a religious exception negates his argument that such a ban was necessary to the learning environment.
With regard to free expression, policy 443.1 identifies the following as forbidden: “Any clothing, jewelry or personal items identifying an antisocial association or organization referred to in Board policy.” The board, an extension of the government, is thus censoring speech based on content. The policy bans the following: “Any clothing, jewelry or personal items that use or depict hate speech or targeting groups based on sex; age; race; religion; color; national origin; ancestry; creed; pregnancy; marital status; parental status; homelessness; sexual orientation; gender identity; gender expression; gender non-conformity; physical, mental, emotional or learning disability/handicap; or any other legally-protected status or classification.” The policy also censors speech based on content: “Any clothing, jewelry or personal items that contain pictures and/or writing referring to alcohol, tobacco products, nicotine, sexual references, nudity, profanity, obscenity, unlawful use of weapons, and/or controlled or illegal drugs.”
The district’s dress code is the result of a time machine thrown into reverse. When I was in high school in the 1970s, we wore shirts with messages of all sorts, net shirts, halter and tube tops, short-shorts, mini-skirts, bandanas, pot leaf belt buckles that doubled as paraphernalia, hats, whatever. The district’s policy by comparison is regressive. Consider, for example, the policy prohibiting students from wearing clothing with drug references. Based on experience with the district, I know anti-drug messages conveyed by students and teachers are tolerated, even encouraged. D.A.R.E. (Drug Abuse Resistance Education) is a fixture in the district. Why aren’t pro-drug messages allowed? The Supreme Court has found that the government cannot discriminate against viewpoint advocacy. With very few exceptions (obscenity is one), the government’s ability to restrict speech on the basis of content is sharply limited. To allow censorship is to tolerate thought control by the state and its agents.
We allow free thought and expression in the United States not only because it is a fundamental right in a free society, but also because it is necessary for robust discourse about the issues that concern citizens. Just as the Tinker siblings had a constitutional right to express opposition to the Vietnam War, students have a constitutional right to express opposition to the drug war. The result of current drug prohibition policy is the incarceration of tens of thousands of persons, ruining lives and resulting in family separation and community disorganization. In Guiles v. Marineau (2006), the Second Circuit affirmed the right of a student to wear a shirt mocking President George W. Bush that included references to alcohol and drug use. The court found that censoring the shirt diluted the student’s message and that the message did not satisfy Tinker’s “substantial disruption” test.
While I recognize that there are time and place restrictions on speech, I also recognize the obvious, namely that speech on clothing is not disruptive in the same way verbal speech can be. Those who do not wish to receive messages on clothing may disregard them. If a message on a t-shirt is offensive, the least restrictive solution is to not look at it. Therefore, even when speech is restricted on the basis of a significant governmental interest, such as pursuit of a specific goal disrupted by verbal speech, those restricting the speech must allow for alternative channels of communication. It would seem that messages on clothing (or expressed in jewelry) are the least problematic alternative for such communications.
Furthermore, the “substantial disruption” test must be applied on a case-by-case basis. This is the opposite of blanket restrictions that require exceptions, a practice that gets the state’s burden backwards. Prior restraint requires a showing that discipline after the fact would be insufficient to remedy the problem created by the speech in question. It is unclear what problem pro-drug messages cause, let alone that case-by-case discipline would be insufficient to remedy any problem that should arise. The same is true for hats and hoodies. Yet the West High School teacher I noted earlier speaks for many when he expresses the desire to control the many for the sake of a few.
As for disruption, even here administrative action may run afoul of the spirit of free speech, since political statements are intended to draw attention to an issue. The right to speak freely is at the same time the right to freely receive speech, to have access to information and opinion. In the case of political messaging, others may become members of the audience if they wish. Expressions cannot be disallowed merely because an administrator disagrees with their sentiments. Indeed, the measure of the free speech right is the extent to which it protects disagreeable speech. It is not the business of the government to determine which groups or sentiments are “antisocial” in order to censor content. Imagine an organization that distributes materials warning the public of the peril of sharia (Islamic doctrine that judges women inferior and encourages persecution of homosexuals) being deemed “antisocial,” specifically “Islamophobic,” by the school board, which then censors the messages on that basis. It’s not hard to imagine this if you try; the policy mentions “hate speech” and “targeting groups” in conjunction with “religion” and “creed.” Protecting administration-sanctioned speech is easy; it hardly needs protection. It is offensive speech that requires protection. Yet the school board seeks to sanitize student clothing.
As a concrete example, just this year, Fayetteville High School (Arkansas) students who showed up at school in clothing bearing the Confederate flag were sent home. The school principal told the media, “We’re not trying to trample on their First Amendment right. We’re just trying to have a safe and orderly school environment.” “Safe and orderly” are typical excuses for limiting constitutional freedoms. In May of last year, a student in Montana was suspended for wearing a Confederate flag sweatshirt. The student, Mitchell Ballas of Missoula, got it: “The school is in the wrong for saying they can dictate me wearing this sweatshirt. They’re saying it’s offending kids and it’s derogatory and all that, but it’s not. It’s my First Amendment right.” Even if it is offensive and derogatory, it is indeed Ballas’ First Amendment right. Students are even demanding the suppression of their own speech. In my adopted state of Wisconsin, Tomah High School students joined with administrators, staff, and teachers to call for the prohibition of Confederate flag items after a student wore clothing featuring the flag of Dixie.
Confederate flag t-shirt
Even when the Supreme Court has tolerated speech restriction, leading lights on the court have dissented in a manner consistent with constitutional principle. In Morse v. Frederick (2007), a ruling that upheld the disciplining of a student, Joseph Frederick, who was wearing a T-shirt that said “Bong Hits 4 Jesus” at a school-sponsored event, Justice John Paul Stevens, in a dissent joined by Justice Souter and Justice Ginsburg, argued that “the Court does serious violence to the First Amendment in upholding—indeed, lauding—a school’s decision to punish Frederick for expressing a view with which it disagreed.” Stevens emphasized that “carving out pro-drug speech for uniquely harsh treatment finds no support in our case law and is inimical to the values protected by the First Amendment.” (Note also that the t-shirt could be said to disparage members of a religious group.)
The American Civil Liberties Union (for the record, I sit on the Northeast Wisconsin Board of the ACLU) participated in this case on the side of Frederick, as did the Center for Individual Rights and the National Coalition Against Censorship. Students for Sensible Drug Policy expressed concern that banning drug-related speech would—if the principle of viewpoint neutrality were properly observed—undermine their ability to form chapters in public schools. Even conservative groups such as the American Center for Law and Justice and the Rutherford Institute worried that religious opinions could be censored if Frederick’s rights were trammeled.
* * *
Religious Liberty and the Establishment Clause
As I noted in my testimony before the board, district policy states that head coverings are not permitted in the school building during the school day, with some exceptions, one of which concerns head coverings worn for religious reasons. However, the Establishment Clause of the First Amendment obliges public institutions to remain neutral on questions of religion. If a freedom is curtailed or expanded, it must be for secular reasons (it cannot be for religious reasons, since then the government is no longer neutral), and the restriction must apply equally or the corresponding freedom must be open to all. Put another way, accommodations must not privilege some students while limiting the freedom of others. This should be obvious in light of the logic of civil rights in our nation’s long struggle to achieve equality before the law.
It is contrary to religious liberty to allow believers to engage in activities that authorities have determined inimical to safety and learning. The free exercise of religion is not absolute. It is limited by the Establishment Clause, as well as rational restrictions. First, as noted, the school cannot endorse a religion by privileging its adherents through preferential treatment. Second, if an action is harmful to others, it is properly restricted for the sake of safety. Are Sikh boys allowed to wear the kirpan—a ceremonial dagger—at school? All baptized male Sikhs are expected to carry one. Daggers are dangerous, threatening, and distracting. Head coverings aren’t daggers. But if they are claimed to be dangerous, threatening, and distracting (which school administrators have judged them to be), then there should be no religious exception for them since a purportedly rational reason exists for forbidding them. By allowing Muslims to don head coverings, the district is either saying that Muslims are allowed to engage in a practice that is dangerous, threatening, and distracting or that head coverings are not actually dangerous, threatening, or distracting. If the first claim is true, then no one should be allowed to cover their heads. All students are entitled to learn in a safe and distraction-free environment. If the second is true, then everybody should be free to choose where they will cover their heads, since clearly headgear is not the problem it is purported to be.
A kirpan, worn by baptized male Sikhs
I have made the kirpan analogy many times since the school board meeting, never imagining that the kirpan might in fact be allowed in some school districts. So I checked and was shocked to discover that, in October of 2014, administrators at a Washington state school district decided to let a Sikh boy carry a kirpan on school property. In announcing the decision, the school district stated that it was merely confirming standard practice: Sikhs had always carried kirpans at school. Robby Soave wondered aloud in an article in Reason magazine: “If Sikh Kids Can Bring Knives to School, Why Can’t Everyone Else?” Soave writes, “I find it irksome, however, that school administrators are willing to recognize a faith-based exception to zero tolerance weapons policies while vigorously enforcing them in every other respect, even when other students have equally valid reasons to carry knives.” He cites the case of Atiya Haynes, a 17-year-old Detroit girl who was expelled from school after her principal, in a random search of students’ personal property, discovered a pocketknife (a gift from her grandfather). Soave asks, “Do the Atiyas of the world really deserve fewer freedoms than Sikh students?” Not if the law is correctly followed.
The Civil Rights Act of 1964 outlaws discrimination based on race, religion, and other categories. Brown v. Board of Education (1954) determined that the doctrine of “separate but equal” is inconsistent with constitutional principles. It is inherently unequal to separate, limit, or privilege students on the basis of race. That the district would never consider a policy permitting white students to wear head coverings and hoodies while forbidding them to black students, yet enforces a policy that treats people differently on the basis of religion, tells us how deep the Islamophilia runs. It will not do to say that in the one case race is at issue and in the other religion. Government is not allowed to discriminate on the basis of either.
Somali students wearing the hijab, a head covering the district allows, presumed immune from all of the problems it associates with hoodies
This is a pernicious double standard. Religious liberty means that law and policy cannot grant differential access to freedoms and resources on the basis of religion. Prohibiting my son, who is a secular humanist, from wearing a head covering on the grounds that he does not have a religious reason to do so violates his religious liberty. The same could be said if, when Muslims are permitted time to pray, my son were denied the same amount of time to contemplate his “sincerely-held moral or ethical beliefs,” which the district recognizes as having the same status as religion. Or if my son were denied access to a publicly funded menu of religiously restricted food items. All Americans have a right to eat halal and kosher foods. The government cannot restrict some individuals from engaging in a practice it allows others to freely engage in for religious reasons.
This double standard rests on implicit stereotypes, one indicating virtue, the other disrepute. The district’s policy states: “Student dress or grooming should not affect the health or safety of students or disrupt the learning process within the classroom or school.” Some districts around the country even consult with the police, who recommend schools pay attention to what youth in high crime areas wear—areas that are disproportionately poor and African American and Latino. As a school board member confided to a person close to me, staff are terrified of their own students. The assumption is that religious students who wear head coverings are not dangerous or violent, are not using head coverings to hide headphones or weapons or to conceal their identity, and that their head coverings are not distracting—all reasons given for why hoodies should be banned in school. Other students, in particular minority youth, especially males, are assumed likely to be using head coverings for these illicit reasons.
Those who spoke at the school board meeting, including members of the board, kept dwelling on the reasons why hoodies should be banned, even after I exposed the double standard and explained that if any of these reasons were legitimate, then religious head coverings should not be exempted, since all the possibilities identified apply in those cases as well. Otherwise, why are hijab-wearing Muslim girls presumed to be immune from all the temptations undermining Christian boys who wear hoodies? By granting a religious exemption for head coverings, the district has admitted that the reasons for disallowing head coverings are not of pressing concern. Otherwise, as I noted earlier, it would be reckless to put students at risk by allowing some students to engage in a problematic practice.
We have heard the objections: “They have earbuds under those hoodies!” But earbuds are worn beneath the hijab, too. “They partially conceal their identity!” True also of the hijab. And so on. If the reasons for banning the hoodie were really legitimate, and of such pressing concern, then no exceptions would be allowed, even religious ones. Again, no school has to accommodate all religious observances, especially if they interfere with the educational process and represent a threat to school safety. If a public institution cannot decide who gets head coverings on the basis of race, then how can it decide who gets to wear head coverings on the basis of religion? The policy is discriminatory.
As I said in my testimony, without determining on an individual basis whether students are using a head covering for illicit purposes, a ban with exceptions means concrete individuals are treated as personifications of abstract groups, groups to which bad intentions are imputed in some cases while good intentions are assumed in others. Individual freedom means individuals are treated as such, not judged on the basis of abstract groupings. I am concerned that school board members did not understand this part of my argument. I grant that it is hard, especially in the era of group rights and identity politics, to grasp the meaning of liberty and justice for all. We mean it at the concrete individual level. Individuals are actually-existing entities. Race and religion are imagined communities. They are observer-relative things with no necessary organic association. An individual is a member of a race or a religion in a way very different than the way he is a member of the species. Race and religion are social constructs. Religion is an ideological system. You cannot reduce a concrete person to a social construct or an ideological system. Indeed, the attempt to do so results in known repressions. One must ask oneself: What is racism? Are its categories the basis for organizing rights and privileges? Or have we abandoned the ideal of individual equality before the law?
* * *
The School Board Fails to Uphold American Values
When I was a young teacher, I was frustrated by some students using their laptops for purposes other than taking notes. I made a rule that forbade all students from using laptops in my classes. Most students were doing what they should have been doing on their laptops, taking notes and checking claims I was making in class. Others were playing games, chatting on social media, and so on. But rather than disciplining only those who were deviating from classroom expectations, I disciplined students irrespective of their actual behavior. This practice was wrong and, as soon as I realized it, I retired the rule.
In announcing its reasoning for approving the policy, the board cited the opinion of so-called School Resource Officers. One hundred percent of them oppose hats and hoodies in school buildings. SROs are police officers. Let’s call them that. They are police officers in our schools. As a criminologist, I might ask what evidence the police have that could lead anybody to think that hats and hoodies are a threat to school safety. The only thing I can figure is the pairing of the hoodie with the “thug.” However, as a civil libertarian, I have to ask why people who are not wearing hoodies in order to do any of the bad things hoodie-wearers are accused of doing should have their freedom restricted on account of a handful of people who may be involved in illicit activities. How is it fair to control the many for the actions of the few? Justice means disciplining those who break the rules, not assuming everybody will break the rules and then denying those who follow the rules the liberty due them.
Popular opinion should not bear on this matter. It doesn’t matter whether a majority of teachers and staff want this. What matters is showing that head coverings represent a threat so great that it necessitates banning them. And any ban must apply across the board. The fear of hoodies is rooted in a cognitive stereotype about the threat of minority youth and the class enemy. There is a desire to cast disrepute upon others and to demand they prove themselves innocent and safe. I am not saying administrators, teachers, staff, and parents in Green Bay are overtly classist and racist. I am saying, however, that they need to take some time to critically reflect on why they feel this way. The Green Bay school board should have done the right thing and retired this discriminatory rule.
In the end, the school board voted to allow principals to decide whether our students may wear hats and hoodies inside schools. Do we let police officers decide what is illegal or is that what elected officials are for? Elected officials determine what is illegal and police officers enforce the law. So why would we let principals, teachers, and staff determine what clothes students wear? Shouldn’t that be the responsibility of the school board, an elected body of community members whose job it is to determine such matters?
Finally, I noted the common argument that we need to get hats and hoodies away from students in order to build upright proletarians for the workforce. Do teachers and parents really believe that the real world is a world devoid of head coverings? Do they really think they are preparing students to interact with real people in the adult world by banning hats and hoodies at school? What this talk really represents is a twin fetish for authority and respectability with prejudicial class, race, and ethnic underpinnings. It reveals a politics that sees young people not as their own agents but as objects to be molded according to a disciplinarian ethic and about whom suspicions are warranted. It’s unfair, and I urge others to take up this cause. The board is revisiting the matter in June of 2020. Show up and voice your concern.
I delivered this talk at the Wisconsin Sociological Association Meetings, held in Kenosha, Wisconsin, October 25, 2019.
I am conducting a crossnational comparative study of the character and efficacy of various correctional approaches in the reduction of criminal recidivism for a range of purposes: providing scholars and practitioners with detailed and focused knowledge on advancements in penology; developing programs for students studying and preparing for careers in the fields of criminology and criminal justice administration; making available to the public sound information and methods appropriate to the development and implementation of policies conducive to building inclusive, safe, and just communities.
The project arose in an examination of comparative statistics in corrections and recidivism for the United States, Sweden, and Norway. Presently, the US incarcerates more persons than any other country and has the highest incarceration rate in the world. According to the Prison Policy Initiative, the total adult correctional population of the United States is nearly seven million persons, with approximately 2.3 million persons in state and federal prisons, jails, and juvenile correctional facilities. The incarceration rate is 698 per 100,000 residents. The US carceral system is also notable for significant class, ethnic, and racial disparities.
The United States has a poor record of rehabilitating those it incarcerates. According to the Bureau of Justice Statistics, recent data show that 68 percent of those released from prison were rearrested within three years of their release. This measure of recidivism is a rough but useful indicator of the problem of reoffending after leaving custodial supervision. The United States is well-known among advanced democracies for its punitive approach to corrections, policies guided by deterrence theory. The typical punishment regime in the United States emphasizes harsh and degrading conditions. These conditions in part explain the United States’ high rate of recidivism.
During approximately the same period, according to the Institute for Criminal Policy Research, inmates in Swedish correctional facilities in 2016 numbered around 5,979 in 46 prisons and 33 jails. Recidivism rates, at around 40 percent within three years, are lower in Sweden than in the United States. The prison population in Norway in 2017 stood at 3,373 in 64 prisons. Evidence presented by the Norwegian government indicates that only 20 percent of prisoners in Norway are recidivists.
As with the United States, with the emergence of globalization, changing labor market needs, and migration pressures, Europe has been growing more diverse over the last several decades. Norway and Sweden have managed to retain comparatively lower recidivism rates despite these changes. These facts are noteworthy given the insecurity migrants face in their host countries and criminogenic pressures inherent in their situation (emergent ethnic enclaves, housing shortages, language deficiencies, low labor force attachment, low wages, neighborhood overcrowding, concentrated poverty and indigency).
US observers can thus learn a lot from the way in which Scandinavian societies have addressed the problem of rapid social change while at the same time keeping faith with the compassionate and progressive values that lie behind the popular appreciation for comprehensive rehabilitative strategies. A detailed cross-national comparison of correctional statistics and penal models elaborates knowledge of the relative impacts of punitive versus rehabilitative correctional approaches and brings this understanding to students and practitioners in Northeast Wisconsin and beyond. Other states have taken an interest in the model. In October 2015, a delegation of state officials from North Dakota and Hawaii conducted a tour of the prison system in Norway. In September 2017, Alaskan officials organized a similar tour of Norway’s facilities.
The Summer 2018 Research Trip
I traveled to Sweden and Norway in the summer of 2018 to meet with corrections officials, tour facilities, and determine the feasibility of the project I am discussing here. During my trip, I gained access to educational and correctional institutions in the countries’ largest cities, Stockholm and Oslo. My visit to Stockholm involved meetings with researchers at the Swedish Prison and Probation Service (SPPS), or Kriminalvården, in Liljeholmen, a district of Stockholm. My principal contact was Gustav Tallving, policy officer for the European Organization of Prisons and Correctional Services. In Norway, I traveled to the University College of Norwegian Correctional Service (KRUS), or Kriminalomsorgen, in Lillestrøm, outside of Oslo. My principal contact there was Tore Rokkan, Associate Professor in the Department of Research.
University College of Norwegian Correctional Service (KRUS), Lillestrøm, Norway
The field refers to the Scandinavian approach as the “Nordic model.” The model is focused on preparing inmates for successful reintegration with society after release by focusing on individual variability—or within-subject change—and the needs of people in the greater society. Norway is especially known for an emphasis on restorative justice, an approach that seeks to repair the harm caused by the offense rather than punish the perpetrator. Restorative justice puts victims, offenders, and community members in charge of determining harm done, the needs of those involved, and ways the damage may be repaired. Both Norway and Sweden stress the importance of avoiding isolating prisoners in order to prevent the phenomenon of prisonization, a type of institutionalization that makes it difficult for ex-convicts to transition to life outside of custodial care.
My summer trip yielded several results. At the Mid-South Sociological Association meetings in Birmingham, Alabama, in October 2018, I presented a working paper, “Approaching the Rehabilitative Ideal: The Structure of Crime Control in Sweden and Norway,” in which I discussed the history of the Nordic model with respect to crime and punishment and recidivism rates. At the Midwest Sociological Society meetings in Chicago, Illinois, this past April, I presented a paper titled “Foreign Bodies and the Queue: Shifting Priorities in the Norwegian Correctional System,” that analyzes changes in that system in light of shifts in political hegemony and mass migration.
Me at the Department of Sociology and Work Science at the University of Göteborg
In November 2018, I was invited by Mark Elam, professor in the Department of Sociology and Work Science, to come to the University of Göteborg for my sabbatical semester. The department will host my visit and provide me with office space. I received assurances that, when an exchange is worked out with the International Center, the sociology department will support an exchange agreement. In June 2019, I was awarded a sabbatical for fall 2020 to travel to Sweden and Norway to continue my research.
Theory, Practice, and Education
My interests in this topic are theoretical, practical, and pedagogical. On the theoretical front, I am evaluating the empirical soundness of the social support thesis with respect to rehabilitation and recidivism. Social support is an approach developed by Mark Colvin and associates. The differences between the Nordic and US approaches to punishment and rehabilitation provide a useful test of this thesis.
I graduated from the University of Tennessee with a Ph.D. in Sociology in 2000, with emphases in Criminology and Political Economy. My dissertation was a comprehensive study of the history of crime and punishment in the United States titled, Caste, Class, and Justice: Segregation, Accumulation, and Criminalization in the United States. I have published articles on the subject of crime and punishment in the Journal of Aggression, Maltreatment & Trauma, Journal of Poverty, Journal of Black Studies, Crime, Law, & Social Change, the Encyclopedia of US Prisons and Correctional Facilities, and the Encyclopedia of Social Deviance, as well as presented papers at numerous conferences and symposia. The Nordic project is a continuation of my varied work in this broad area.
On the practical front, I am endeavoring to provide policymakers with alternative correctional approaches that utilize social supports to reduce recidivism. This interest bears on the situation in the state of Wisconsin. With its prison population having tripled since 1990, Wisconsin is studying ways to curb prison overcrowding. Wisconsin’s prisons were built to hold approximately 16,000 inmates but hold around 23,000, and the state is on track to hold a record number of inmates by 2019. A chief driver of the large prison population is the high rate of recidivism. One approach to the problem is changing parole violation restrictions and expanding the state’s earned release program. Addressing the problem of crimeless revocation is an essential piece of this. Vital to the success of such reforms is a correctional approach that lessens the coercive character of prison life and provides social supports to prisoners that increase successful reintegration with life beyond prison.
On the teaching front, this project promises to generate knowledge and foster programming that will benefit students studying and preparing for careers in the fields of criminology and criminal justice administration. It will inform several of the classes I teach. Insights into life-course analysis and within-subject change metrics will introduce students to cutting edge methodologies. Partnership with other programs and with the community in rehabilitative and treatment strategies will provide opportunities for new internship positions.
I am on the faculty of Democracy and Justice Studies, a problem-focused interdisciplinary program that brings humanistic and social scientific approaches to bear on the social promises and problems that shape the history and trajectory of the United States and the world. Faculty are drawn from the disciplines of history, political science, and sociology, several of them with expertise in matters of criminal justice and legal studies. Our program offers courses in law and society, constitutional law, gender and the law, law and inequality, criminology, and criminal justice administration. My association with the Social Work program, particularly Dr. Doreen Higgins, and that program’s developing relationship with the University of Gothenburg under Higgins’ direction, promise collaboration in research and curricular development across colleges.
Finally, a travel course to Scandinavia, which I am developing with Higgins, will strengthen not only the Democracy and Justice Studies program in criminal justice, but also its relationship with Northeastern Wisconsin Technical College, as well as student exchanges and combined degree options in the fields of criminology and criminal justice. The travel course will take students to Scandinavia to experience these systems and practices firsthand. Hands-on experience is something I have found to be invaluable.
What first strikes me about Todd Phillips’ Joker is Joaquin Phoenix’s performance. It’s extraordinary. His movements are graceful, while his form is grotesque. His body deformed. His dancing demented. His facial grimaces terrifying. His scary countenance is that of a serial killer. Not a movie serial killer. A real one. Even the way he runs is mesmerizing. Fluid and panicked. His condition demands our empathy—if we can muster it. One hates with him those who mock him.
What next strikes me is the technique of ultra violence shot as a graphic novel leaping to life. The blood spurts and spatters. On the subway train, one can see the panels unfolding. Same with the apartment scene. And the Murray Franklin show. Gritty realism with the right amount of surreality.
Albeit not bloody, and its visual messiness at the end aside, another film that captures the spirit of the graphic novel in this way is Ang Lee’s 2003 Hulk. Moreover, like Phillips, Lee plumbs the depths of the human psyche. Unlike Joker, which is drawing moviegoers in droves, Hulk did not take the box office by storm. Rotten Tomatoes summarized the general opinion of Lee’s Hulk as “ultimately too much talking and not enough smashing.” As I was leaving that film with my son, I heard a man complaining to his son that it took 25 minutes for Hulk to appear.
Joker leaves one mentally exhausted and emotionally drained. The meaning of the film unfolds over the course of two hours. In the end, it’s an account of societal failure to protect children and address the trauma of mass society. Phillips deserves praise for his artistry. The screenplay, by Phillips and Scott Silver, exudes sympathy for those whom society has thrown away.
While this is no ordinary adaptation from the world of comic book heroes and villains, I appreciate the decision to locate the movie in the Batman universe. Gotham is the city. Arkham State Hospital contains many of its secrets. I wanted to imagine how Arthur Fleck would play as Bruce Wayne’s arch-nemesis. We see Bruce as a boy. We see a thug murder his parents. It’s the origin story, but better than previous accounts. There is no vat of chemicals transforming Fleck into the man with green hair, white skin, and that terrible rictus (an image inspired by German expressionist Paul Leni’s 1928 film The Man Who Laughs, based on Victor Hugo’s novel of the same name). In Joker we get a realistic villain. Complex and clinically insane.
Phoenix’s Joker is not the psychopathic anarchist brilliantly performed by Heath Ledger in Christopher Nolan’s 2008 The Dark Knight. This Joker is a spat-upon nihilist—driven to his philosophical position by a merciless society. “I don’t believe in anything,” he says. Phoenix’s Fleck just wants to confirm his existence. The joy we derive from his murderous ambitions flows from a different part of us. It is not the desire for vicarious mass killing, the thrill of seeing the Id unleashed, but the hurt part that wants to take revenge on our tormentors.
Some observers were struck by Joker’s similarities with two other movies, Martin Scorsese’s 1976 Taxi Driver and his 1983 The King of Comedy, both starring Robert De Niro (as Vietnam war veteran Travis Bickle and failed comedian Rupert Pupkin respectively). De Niro plays a role in Joker, as well, as the late night talk show host Murray Franklin. He channels Jerry Lewis’ performance as the (fictional) late night TV host Jerry Langford in The King of Comedy.
While I understand the comparisons, and while Phillips acknowledges Scorsese’s work as his inspiration, they are arguably superficial. It’s not uncommon for people to sit in their rooms and engage in imaginative rehearsal (just as it’s not uncommon for people to fantasize about killing people they don’t like). That Scorsese was clever enough to put these near-universal experiences on screen shouldn’t leave other filmmakers worrying about accusations of derivativeness.
Joaquin Phoenix as Joker dancing to Gary Glitter’s “Rock and Roll”
The movie is controversial. Forty years ago it was conservatives who fussed about movie themes and content. Now it’s progressives who take offense. Whereas conservatives virtue signal about modesty, progressives glorify victimhood—just not the Arthur Fleck kind.
Some are finding the weight loss central to the physicality of Phoenix’s performance—52 lbs—triggering. The way he has described the bliss of self-control is, in their estimation, insensitive at best to those with eating disorders. Others find Phillips’ use of Gary Glitter’s “Rock and Roll” not merely insensitive but enabling of sexual predation.
Others do not appreciate on-screen portrayals of mental illness associated with violent ends. The plot runs counter to the narrative that mass violence is due not to mental illness but to movies and video games, or to whiteness and masculinity. Indeed, many, portraying the movie as a shout-out to the incel crowd, predicted that it would inspire white male mass violence. (Phillips could not have generated the same level of empathy had he located the film in the madness of social media.)
The movie is set in the early 1980s. Some of my older readers will recall the 1984 New York subway shooting in which Bernhard Goetz, fed up with being terrorized by young men on the 2 train in Manhattan, shot and wounded four people. The “Subway Vigilante” symbolized popular frustration with the extraordinarily high crime rates of the 1980s, which were met with increasingly coercive state action, as well as a self-defense movement, upon which the NRA capitalized. Many saw Goetz as a hero. Goetz was ultimately charged with four counts of attempted murder, four counts of aggravated assault, four counts of reckless endangerment, and criminal possession of a weapon. His justification was self-defense and he was acquitted on all charges except one count of carrying an unlicensed firearm (he served less than a year in prison). Significantly, Phillips changes the number of victims from four to three, their race from black to white, their class standing from lumpenproletariat to affluent, and their provocation from robbery to aggravated assault.
What most observers miss about Joker is Phillips’ critique of the mob. Joker invites identitarian reactions only to mock and trivialize them against the much greater suffering of Fleck and the citizens of Gotham, those who dwell at the bottom of capitalist society, while at the same time critiquing the morality and usefulness of mob violence. Which is to say that mob violence, while not pointless, is futile. The mob constitutes the subjective milieu of Gotham, always lurking in the background.
But, of course, the mob lurks in the foreground. For Fleck is its personification, lumpenproletarian in the way Karl Marx and Friedrich Engels portray those at the bottom of society, the dangerous class, the social scum. Fleck is suffering from austerity’s cruelty, an early warning of neoliberal rot in the decay of the Great Society, capitalism removing its human mask.
Engels describes the actions of the mob as “primitive rebellion,” a reaction to class oppression with no theory, with no politics—except to concretize the disorder of the capitalist mode of production. The mob builds nothing. They’re not Schumpeter’s “gales of creative destruction.” They’re just destruction, clamoring to eat the rich for no purpose beyond expressing their well-earned resentment. No theory, no plan, they’re fodder for reactionary intrigue.
Bruce Wayne’s father, Thomas, the personification of the one percent, calls the rabble “clowns.” Dangerous but not serious. Fleck is the leader of the clowns. Claiming to stand for nothing, he represents not anarchy but chaos. Wayne and Fleck meet face-to-face in the men’s room of a posh theater with predictable results: the one percent punches down. The story is ultimately about how society fashions madness, not about how the people transcend it.
To throw the problem into relief, Phillips has, in Marxian fashion, reduced the critique to the futility of disorganized class resentment while keeping the audience’s sympathy with the plight of capitalism’s victims. He avoids a direct attack on social media and cancel culture by setting the story in the early 1980s, but the critique is also about today’s mob, the ones who drove Phillips from comedy into an exploration of the madness of the crowd (it is a confluence of understandings that saw Douglas Murray’s exploration of the derangement of identity politics—The Madness of Crowds—hit the market in September).
The offspring of the New Left, a faux social justice movement, are clowns, too. However, their trauma pales in light of Fleck’s trials. Fleck is no snowflake. His suffering is exquisite. His reaction to his abuse, shorn of politics and purpose, is meant to make more sense than the politics of the woke scolds, whom Phillips, frustrated that people have missed his point, explicitly calls out (see “Woke Scolds and Twitter Mobs”).
The historic setting of Joker is ideal for conveying today’s societal mood, like science fiction in reverse, throwing us not into the future to behold the problems of today (à la Twilight Zone or Black Mirror), but sending us back in time, to when crime and disorder were making urban life unbearable for the proletariat, a crisis that moved the masses not towards revolutionary action, but drove them to support authoritarian measures to stem the disorder, to identify with the bourgeoisie because the dangerous class was out of control and something had to be done to restore order and safety: “Broken Windows.” Similar conditions, albeit manifesting differently as they do in every age, have returned to many of America’s cities. They appear not just in homelessness and piles of needles and body counts, but in police standing down on the orders of progressive mayors, making way for a new chaos without purpose.
Whether intentional or not, Phillips’ movie is a work of left realism. Many of Joker’s critics were poked by it.
[Note: This essay was revised 4.11.19 after a second viewing of the film.]
In an 1845 manuscript by Karl Marx, brought to light in 1888 by Friedrich Engels in the latter’s Ludwig Feuerbach, the problem of contemplative materialism is tackled in a difficult and what feels like a preliminary way that nonetheless firms up in light of two earlier works by Marx: “On the Jewish Question” and “Introduction” to (the planned) A Contribution to the Critique of Hegel’s Philosophy of Right, both published in February 1844 in Deutsch-Französische Jahrbücher. (I will refer to the latter throughout as “Introduction.”)
Karl Marx
Engels named the 1845 manuscript “Theses on Feuerbach,” and in them Marx contends that materialism prior to his intervention neglects to consider reality as “human sensuous activity,” that is, as practice, as well as in its subjective elements. For Marx, the materialist does not take human agency into account when contemplating the objects in his environment. The subjective side, the active side, Marx argues, is developed by idealism, but only abstractly, since idealism cannot grasp real sensuous activity. Marx argues that human activity is an objective activity objectively guided. Feuerbach, Marx complains, regards theoretical activity, that is, abstract conceptualization, as the only genuine human attitude, “while practice is conceived and defined only in its dirty-Jewish form of appearance.” What Feuerbach does not grasp, in Marx’s estimation, is “the significance of ‘revolutionary,’ of ‘practical-critical,’ activity.”
By “dirty-Jewish” Marx is not expressing an anti-Semitic attitude but rather conveying an understanding of the Jewish tradition as one of being in the world, the world of practical activity. This is in contrast to the Christian attitude, with its emphasis on logos (or the word), otherworldliness, and the ascetic life. The Jewish god—Yahweh—is a god who gets his hands dirty in making the world and man. In Genesis, “Yahweh formed man from the dust of the earth. He blew into his nostrils the breath of life, and man became a living being.”
Feuerbach makes this point in The Essence of Christianity (1841). For Feuerbach, religion is the alienated projection of human essence, and thus the work of Yahweh is revealed as an abstract idealization of the work of Jewish people or society. For Feuerbach, the work of materialism is to demythologize the world.
This argument anticipates Émile Durkheim’s arguments in his 1912 Elementary Forms of Religious Life, wherein it is posited that religious ideas are expressions of a people’s conceptions of their society. Persons perceive something greater than themselves, which, in the fashion of functionalism, Durkheim treats as a superorganismic thing—the conscience collective or collective consciousness—accessible through the myths and rituals we know as religion or religious-like systems. In this view, the world is separated into the sacred and the profane.
For Feuerbach, alienation emerges from this separation.
In making this point, Marx is building on observations he made in his 1843 essay “On the Jewish Question,” wherein he theorizes that in its “perfected practice” (and here he is referring to Christianity after the Reformation) the “Christian egoism of heavenly bliss is necessarily transformed into the corporal egoism of the Jew, heavenly need is turned into world need, subjectivism into self-interest.” For Marx, “the tenacity of the Jew” is not found in Judaism, but “by the human basis of his [the Jewish] religion,” which he identifies as “practical need” or “egoism.” Marx sees capitalism as the reflection of self-interested activity that finds its origins in religious expression, in “the ideal aspect of practical need,” suppressed for centuries by the asceticism of Catholicism until its supersession by Protestantism. (Max Weber makes a version of this argument in a series of essays written between 1904 and 1905 and collected in the influential book The Protestant Ethic and the Spirit of Capitalism.)
Marx writes that “in civil society” under Protestantism and the Enlightenment, the practical Jewish attitude is “universally realized and secularized.” “Consequently,” he argues, “not only in the Pentateuch and the Talmud, but in present-day society we find the nature of the modern Jew, and not as an abstract nature but as one that is in the highest degree empirical, not merely as a narrowness of the Jew, but as the Jewish narrowness of society.” The Jewish emphasis on market activity, marginal to European society during Catholic hegemony, finds its practical expression universalized in modernity with the result that modernity becomes narrowly focused on market activity. Peripheral in a feudalistic system, markets become central to a capitalistic system, and thus become the pivot of modern social life, the result being the progressive commodification of social life and a deepening of alienation. (Max Weber likewise argues, in a series of articles published between 1917 and 1919 in the Archiv für Sozialwissenschaft und Sozialpolitik, collected into the book Ancient Judaism, that the Jewish tradition is the pivot upon which the West moves.)
Marx sees in this development a dialectical process in which transcending capitalism is overcoming Judaism. In such a world [communism], there will be no need for Jewishness. “Once society has succeeded in abolishing the empirical essence of Judaism—[that is, capitalism and its preconditions]—the Jew [as a unique identity] will have become impossible, because his consciousness no longer has an object, because the subjective basis of Judaism, practical need, has been humanized [socialized, democratized], and because the conflict between man’s individual-sensuous existence [how society has constituted him] and his species-existence [how he constitutes himself together with others] has been abolished.” Marx states, “The social emancipation of the Jew is the emancipation of society from Judaism.” Here he means individuals with a Jewish identity are emancipated from the Jewish religion and its cultural impositions.
This argument is often interpreted as anti-Semitic. But it is not only the Jews who are emancipated from religious and cultural imposition under communism. So are Christians. Marx’s argument hails from a radical antitheism in which overcoming alienation means liberation from all the myths and rituals created to control people, in part by providing a method of dealing with the strife of alienated life conditions that does not involve actually changing society. Marx does not wish to see only emancipation from Judaism, but emancipation from all imagined communities (which today are multitudinous), a process that incubates in the womb of the nation state.
Marx argues that “Christianity sprang from Judaism.” And, in modernity, Christianity “has merged again in Judaism.” “From the outset,” he explains, “the Christian was the theorizing Jew; the Jew is, therefore, the practical Christian; and the practical Christian has become a Jew again.” He disabuses Protestants of the illusion that Christianity has transcended Judaism. “Christianity had only in semblance overcome real Judaism,” he argues. “It [Christianity] was too noble-minded, too spiritualistic to eliminate the crudity of practical need in any other way than by elevation to the skies.” In other words, you do not overcome strife with painkillers. You overcome the conditions that cause strife with revolutionary action. Christianity cannot substitute for Judaism. Humanism must replace both Christianity and Judaism. This humanism demands the elimination of the conditions that make religion possible. It is therefore not the person who identifies as a Jew or a Christian whom Marx wishes to see go away. It is the conditions that make such identities possible that he seeks to overthrow and replace with a universal society.
He sees this overcoming as a historical process. “Christianity is the sublime thought of Judaism, Judaism is the common practical application of Christianity,” Marx writes; “but this application could only become general after Christianity as a developed religion had completed theoretically the estrangement of man from himself and from nature. Only then could Judaism achieve universal dominance and make alienated man and alienated nature into alienable, vendible objects subjected to the slavery of egoistic need and to trading.” Put another way, the highest level of development of Christianity, Protestantism, in the context of modernity, which it helped birth, opens society to capitalism. Protestantism brings the capitalist attitude to the masses. The result deepens man’s self-estrangement, as commerce (selling) “is the practical aspect of alienation. Just as man, as long as he is in the grip of religion, is able to objectify his essential nature only by turning it into something alien, something fantastic, so under the domination of egoistic need he can be active practically, and produce objects in practice, only by putting his products, and his activity, under the domination of an alien being, and bestowing the significance of an alien entity—money—on them.”
Contrary to those who wish to see a childish Marx and a mature Marx (Althusser, for instance), this argument is central to Marx’s thought throughout his work. Hence we find in Capital, Volume I, published in 1867, the following (in chapter seven): “We presuppose labor in a form that stamps it as exclusively human. A spider conducts operations that resemble those of a weaver, and a bee puts to shame many an architect in the construction of her cells. But what distinguishes the worst architect from the best of bees is this, that the architect raises his structure in imagination before he erects it in reality. At the end of every labor-process, we get a result that already existed in the imagination of the laborer at its commencement. He not only effects a change of form in the material on which he works, but he also realizes a purpose of his own that gives the law to his modus operandi, and to which he must subordinate his will.” Because human beings realize themselves in real sensuous activity, because they are conscious, intentional, and social animals, capitalist control over labor activity is the source of alienation in modernity, sublimated as religious and other strange ideologies (such as identity).
Thus, before continuing our analysis of the 1845 manuscript concerning Feuerbach, we must recall Marx’s famous introduction to A Contribution to the Critique of Hegel’s Philosophy of Right, which appeared in the same month as “On the Jewish Question” in Deutsch-Französische Jahrbücher. There Marx writes, “For Germany, the criticism of religion has been essentially completed, and the criticism of religion is the prerequisite of all criticism.” Marx is again leaning heavily on Feuerbach, whose Essence of Christianity articulates the transformative method that Marx adapts to his own project of “a ruthless criticism of everything existing,” a goal revealed to his friend Arnold Ruge in an 1844 letter.
In that 1844 letter, Marx writes that “the socialist principle itself represents, on the whole, only one side, affecting the reality of the true human essence. We have to concern ourselves just as much with the other side, the theoretical existence of man, in other words to make religion, science, etc., the objects of our criticism.” Confirming a belief in an overarching ontology, he tells Ruge, “Reason has always existed, only not always in reasonable form.” The point of ruthless criticism is to cut through the ideological and dogmatic distortions to make plain the antagonisms and sources of strife that generate the illusions and the conditions for them. “The criticism must not be afraid of its own conclusions,” he writes, “nor of conflict with the powers that be.”
He writes in the “Introduction” that “Man, who has found only the reflection of himself in the fantastic reality of heaven, where he sought a superman, will no longer feel disposed to find the mere appearance of himself, the non-man, where he seeks and must seek his true reality.” Hence the importance of irreligious criticism. “The foundation of irreligious criticism is: Man makes religion, religion does not make man. Religion is, indeed, the self-consciousness and self-esteem of man who has either not yet won through to himself, or has already lost himself again.” Marx is no relativist.
At the same time, Marx takes on Feuerbach’s notion of man as yet another abstraction, that of the solitary individual, which is made clear in the “Theses on Feuerbach.” “But man is no abstract being squatting outside the world,” Marx writes. “Man is the world of man—state, society.” He then connects the two: “This state and this society produce religion, which is an inverted consciousness of the world, because they are an inverted world.” Religion is an ideology produced by a society that obscures its own contradictions. “Religion is the general theory of this world, … its logic in popular form, … its enthusiasm, its moral sanction, its solemn complement, and its universal basis of consolation and justification.” Religion, Marx argues, “is the fantastic realization of the human essence since the human essence has not acquired any true reality.” Why hasn’t human essence acquired any true reality? Are we not present as material beings? Only as a species-in-itself, not as a species-for-itself. Human essence is presently ascertained in the form of intersecting relations, as we shall see in the 1845 manuscript.
“The struggle against religion is, therefore, indirectly the struggle against that world whose spiritual aroma is religion.” This is where Marx powerfully describes religion as a painkiller: “Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.” The solution to the religion problem is therefore not found merely in Feuerbach’s transformation, where religion is merely exposed as a projection of societal ideals, but in the transformation of actual social conditions to suit human wellbeing. (This is an argument for universal human rights.) “The abolition of religion as the illusory happiness of the people is the demand for their real happiness,” Marx writes. “To call on them to give up their illusions about their condition is to call on them to give up a condition that requires illusions. The criticism of religion is, therefore, in embryo, the criticism of that vale of tears of which religion is the halo.”
Irreligious criticism has a practical, humanist purpose. “The criticism of religion disillusions man, so that he will think, act, and fashion his reality like a man who has discarded his illusions and regained his senses, so that he will move around himself as his own true Sun,” writes Marx. “It is, therefore, the task of history, once the other-world of truth has vanished, to establish the truth of this world.” “Thus,” he concludes, “the criticism of Heaven turns into the criticism of Earth, the criticism of religion into the criticism of law, and the criticism of theology into the criticism of politics.”
Now moving on to the balance of the 1845 manuscript, Marx writes, “The question whether objective truth can be attributed to human thinking is not a question of theory but is a practical question. Man must prove the truth, i.e., the reality and power, the this-sidedness of his thinking, in practice. The dispute over the reality or non-reality of thinking which is isolated from practice is a purely scholastic question.”
He then criticizes what becomes central to the structuralist and functionalist traditions in sociology (that have so influenced identity politics), which see individuals as personifications of identities established by social relations. “The materialist doctrine that men are products of circumstances and upbringing, and that, therefore, changed men are products of changed circumstances and changed upbringing,” writes Marx, “forgets that it is men who change circumstances.”
Marx recognizes that human beings make history even if they have lost control over the history-making process. “The coincidence of the changing of circumstances and of human activity or self-change can be conceived and rationally understood only as revolutionary practice.” In other words: revolutionary practice is the practice of the people seizing control of the mechanisms of making history. In the “plain Marxist” language of C. Wright Mills (in his 1959 Sociological Imagination), “Democracy means the power and the freedom of those controlled by the law to change the law, according to agreed-upon rules—and even to change those rules; but more than that, it means some kind of collective self-control over the structural mechanics of history itself.”
This is the key to understanding what Marx means when he writes: “Feuerbach resolves the essence of religion into the essence of man [or “human nature”]. But the essence of man is no abstraction inherent in each single individual. In reality, it is the ensemble of the social relations.” One may be tempted to point to this as Marx anticipating and even endorsing the postmodern attitude, which sees concrete individuals as personifications of abstract categories, a practice that falsely reifies ideological relations (such as race). Marx explains that “Feuerbach, who does not enter upon a criticism of this real essence is hence obliged [to] abstract from the historical process and to define the religious sentiment regarded by itself, and to presuppose an abstract—isolated—human individual.” Moreover, this abstract human individual “can by him only be regarded as ‘species,’ as an inner ‘dumb’ generality which unites many individuals only in a natural way.” In other words, in demythologizing the world, Feuerbach fails to find beneath religious alienation the source of human self-estrangement: the ensemble of social relations that constitutes man’s essence, which is the product of a given historic epoch and mode of exploitation, relations that represent the source of alienation. Religion is not the source but the expression of alienation. The true source, as Marx explains in his 1843 essay discussed above, is the social conditions, conditions constituted by unjust social relations, relations that have constituted essence in identity. What Marx finds valuable in this critique is its application to everything.
The humanist reading is supported by this statement: “Feuerbach starts off from the fact of religious self-estrangement, of the duplication of the world into a religious, imaginary world, and a secular one. His work consists in resolving the religious world into its secular basis. He overlooks the fact that after completing this work, the chief thing still remains to be done. For the fact that the secular basis lifts off from itself and establishes itself in the clouds as an independent realm can only be explained by the inner strife and intrinsic contradictoriness of this secular basis. The latter must itself be understood in its contradiction and then, by the removal of the contradiction, revolutionized. Thus, for instance, once the earthly family is discovered to be the secret of the holy family, the former must itself be annihilated theoretically and practically.” (Marx’s argument would have been more immediately ascertainable had the theses been differently ordered.)
Marx punctuates the point in the following statements: “Feuerbach consequently does not see that the ‘religious sentiment’ is itself a social product, and that the abstract individual that he analyses belongs in reality to a particular social form.” “All social life is essentially practical. All mysteries which lead theory to mysticism find their rational solution in human practice and in the comprehension of this practice.” “The highest point reached by contemplative materialism, that is, materialism which does not comprehend sensuousness as practical activity, is the contemplation of single individuals and of civil society.” “The standpoint of the old materialism is civil society; the standpoint of the new is human society or social humanity.” It is in civil society that man’s essence is constituted by the ensemble of social relations—as well as the ideology of the abstract individual. This is a true but contingent fact. The revolutionary transformation of society, in abolishing contradictory social relations, replaces civil society with human society, and thus abolishes the abstract categories that keep us from being a species-for-itself.
Thus Marx concludes: “Philosophers have hitherto only interpreted the world in various ways; the point is to change it.”
I wrote this analysis several years ago. I was elected chair of the Department of Democracy and Justice Studies in the spring of 2012, and my focus shifted to rebuilding the department after several retirements and colleagues leaving for more lucrative positions. As a consequence, the analysis languished on one of my many hard drives. I recently came across it and decided to publish it here on Freedom and Reason. I believe you will find that it provides much-needed historical context for those considering the situation in Afghanistan: how we got into this mess and how we might get out of it.
On April 29, 2011, President Barack Obama authorized the Central Intelligence Agency to conduct a raid on a compound in Abbottabad, Pakistan believed to be the hideout of al Qaeda leader Osama bin Laden. On May 1, 2011, President Obama announced that the Naval Special Warfare Development Group, along with several CIA operatives, had entered the compound and killed bin Laden and several occupants.
The political establishment watches the assassination of Osama bin Laden, May Day, 2011
Within a week of the news of bin Laden’s assassination, The Washington Post published an essay by journalist Peter Bergen titled “Five Myths about Osama Bin Laden.”[1] The chief myth Bergen wished to dispel was the claim that the CIA created Osama bin Laden. Bergen writes, “Common among conspiracy theorists is the notion that bin Laden was a CIA creation and that the attacks of Sept. 11, 2001, were blowback from an agency operation gone awry.” He claims to refute this myth by asserting that the CIA had “no dealings” with the “Afghan Arabs” (including bin Laden) and “few direct dealings with any of the Afghan mujaheddin.” Rather “all U.S. aid was funneled through Pakistan’s Inter-Services Intelligence agency.”
As evidence for his claims, Bergen quotes ISI officer Mohammad Yousaf, who asserts in his 1992 book The Bear Trap: “No Americans ever trained or had direct contact with the mujaheddin.” He also cites al Qaeda insiders, namely Ayman al-Zawahiri and Abu Musab al-Suri, who deny that any money from the United States aided the Arab mujahideen. Such denials are expected; it would look particularly bad for the global Islamist movement to have it widely known that al Qaeda was a creation of the U.S. intelligence community.
However, the documentary record, composed of facts gathered from such mainstream corporate media outlets as The Washington Post, contradicts Bergen’s claims. Bergen is not unique in obfuscating the character of the relations between the United States and not only bin Laden and al Qaeda but the whole constellation of Islamist groups and organizations perpetrating violence around the world. Following the attacks on the United States on September 11, 2001, the mainstream corporate media as a whole manufactured ignorance of the U.S. role in terrorism perpetrated by those claiming to be part of the global Islamic resistance movement.
The present essay is a review of mainstream corporate media stories from the 1970s and 1980s for the purpose of demonstrating that the United States was deeply involved in building the al Qaeda terrorist network, as well as supporting several terrorist groups and organizations around the world. This history is not ancient history. As of the date of this publication, the United States, under Donald Trump, remains in Afghanistan. It is the nation’s longest war. I have students in my freshman classes born after the US invaded. However, bin Laden’s assassination under Obama was something of an official ending of the conflict in the minds of the American public, much in the same way the fall of the Soviet Union made the threat of nuclear war go away.
* * *
On September 11, 2001, suicide hijackers commandeered four US airliners and piloted three of them into the World Trade Center in New York City and the Pentagon in Washington DC, killing approximately 2,700 civilians. An analysis of the origins of the terrorists who allegedly perpetrated this act is desirable for the purpose of assigning responsibility for the contemporary state of world affairs, which includes the long and continuing war in Central Asia (now the longest war fought by U.S. forces in the nation’s history). The record shows that the United States government, under the leadership of both national parties, sowed the seeds of an extremist movement against the West and Western influence in Central Asia and the Middle East, the harvest of which has, so far, resulted in the deaths of at least several tens of thousands of people, a more disordered world, and a weakening of individual freedom.
Islamic terrorism strikes Manhattan September 11, 2001
In April 1978, the People’s Democratic Party of Afghanistan (PDPA), a populist coalition of Khalq (masses or peoples) and Parcham (banner or flag) factions, came to power in the cultural crossroads of Central Asia.[2] In the “Saur Revolution,” an openly communist, mostly urban movement toppled the republican government of Sardar Mohammad Daoud Khan. PDPA’s secretary general was Noor Mohammad Taraki of the Khalq faction. As prime minister and president of the revolutionary council, Taraki immediately set out to change Afghanistan, implementing a far-reaching program of political, economic, and social reform that included equal rights for women, the elimination of usury, legalization of labor unions, the establishment of a minimum wage, a graduated income tax, and land redistribution.[3] Afghan society was sorely in need of political and economic reform. Under the Daoud government, ninety percent of Afghanistan’s population of eighteen million was illiterate, infant mortality was fifty percent, and life expectancy was forty years. The United Nations ranked the country as one of the five poorest in the world.[4]
Although opponents of Afghan communism claimed that the Soviet Union orchestrated the revolution,[5] the PDPA was an indigenous phenomenon.[6] Indeed, the people’s faction forced the pro-Moscow banner faction out of government within weeks of assuming power.[7] To be sure, PDPA policies were not popular with all Afghans. Dispossessed members of the exploitive classes opposed economic reforms. Mullahs feared socialism’s secular focus and condemned the emphasis on women’s rights. Officials of the Daoud republic, as well as expelled members of the Parcham faction, were bitter over the loss of political power.[8] Nonetheless, there was initially soft opposition to the PDPA program.[9] However, within a year, a myriad of forces had organized PDPA opponents into an aggressive albeit fractious countermovement and disrupted the Taraki government.[10] Prime Minister and Pashtun nationalist figure Hafizullah Amin, likely an agent for the United States Central Intelligence Agency (CIA), took advantage of the social disorder and ordered Taraki’s execution in October 1979.[11]
Once in power, President Amin and his followers, the black Khalq, moved quickly to suppress opposition. Taraki loyalists, the red Khalq, were imprisoned, expelled, or executed.[12] The state apparatus grew repressive and the party slowed the pace of progressive reforms, developments that antagonized both the traditional Islamic communities and the masses.[13] At the close of 1979, remnants of the Taraki government and a resurgent Parchami faction, led by Babrak Karmal, overthrew the Amin regime.[14]
Karmal benefited from considerable assistance from Soviet security and military forces, although the Soviet Union denied playing a significant role in the overthrow of Amin.[15] Invoking the Soviet-Afghan Friendship Treaty of 1978, the Karmal government invited the Soviet Union into Afghanistan to help stabilize its control over the country.[16] On orders from Leonid Brezhnev, the Soviet army entered the country in December of 1979.[17] The Soviet Union would occupy Afghanistan for a decade, assisting the PDPA in its ultimately unsuccessful war against determined and well-financed guerrillas.[18]
During the late 1970s and early 1980s, Pakistan, Saudi Arabia, and the United States formed an alliance to assist Afghan guerrillas in their efforts to disrupt Soviet occupation, destabilize the PDPA government, and reverse the social and economic gains made by the Afghan people. Pakistan envisioned an international brigade for the campaign and encouraged Saudi Arabia to send a prince to the region to inspire the jihadists, the proponents of armed religious struggle.[19] The Saudi royal family did not commit any sons to religious war, but an international brigade, the mujahideen, was organized with substantial financial support from the ruling class of Saudi Arabia. The Saudi state, as did other Middle Eastern countries, furnished thousands of mercenaries, thereby at least committing the sons of other families to jihad. The United States supplied the mujahideen with money, advanced weapons technology, logistical support, and extremist Islamic and anti-democratic propaganda. Pakistan, acting as a surrogate for the United States, provided the US government with plausible deniability in a proxy war against the Soviet Union.[20]
In providing assistance to reactionary forces in the region, the US-Pakistan-Saudi alliance fostered the development of a repressive countermovement in Afghanistan against socialism and Soviet influence, which culminated in the Taliban, and helped organize the global terrorist network al Qaeda. In time, al Qaeda, led by Saudi millionaire Osama bin Laden, would strike civilian and military targets around the world, including New York City and Washington DC in 2001. At the start of the twenty-first century, US elites would seize these moments, moments largely of their own creation, to expand the police state at home and to launch a series of imperialist wars abroad.
* * *
Traditional accounts of United States involvement in Afghanistan record that the United States moved to intervene in Afghanistan in the winter of 1980, when President Jimmy Carter and his foreign policy staff developed a program to aggravate occupying Soviet forces.[21] The usual storytellers rarely depict Washington’s response as particularly confrontational; rather, observers portray the administration as coolly calculating how to bog down the communists.[22] However, harassing the Soviet Union in Afghanistan was part of a larger strategy to frustrate Soviet activities in Central Asia—activities Carter speciously characterized as “colonial domination”—and that strategy was hardly irresolute.
President Jimmy Carter (right) and National Security Advisor Zbigniew Brzezinski (left)
In his 1980 State of the Union Address, President Carter identified three global developments shaping US foreign policy under his leadership. First, he claimed that there was “steady growth and increased projection of Soviet military power beyond its own borders.” The Soviets were intensifying their confrontation with the West, he asserted. Afghanistan therefore represented a sharpening of communist aggression. Soviet belligerence required the United States to organize an effective counterattack. Second, citing “the overwhelming dependence of the Western democracies on oil supplies from the Middle East,” Carter raised the specter of continued energy shortages if the United States did not move to protect its strategic energy interests. The United States experienced two severe oil shocks in 1973 and 1979, caused in part by OPEC raising prices. The government easily manipulated the American addiction to gasoline and oil to provoke anxiety in the populace. Third, according to Carter, “the press of social and religious and economic and political change in the many nations of the developing world, exemplified by the revolution in Iran,” required US support.[23]
These three developments connected in such a way, he argued, that the United States had to concentrate its energies in the Middle East and Central Asia. The Soviet takeover of Afghanistan, his American audience was led to believe, could very well lead to Soviet control over the region and thus command of more than two-thirds of the world’s exportable oil.[24] The Soviet military was within 300 miles of the Indian Ocean, marching ever closer to the Strait of Hormuz. To soften the pecuniary language of the struggle over vital energy resources, Carter sought to inflame religious passions in the region, claiming that Muslims were “justifiably outraged by this aggression,” which he characterized as belligerence against Islamic people. In their move to “subjugate the fiercely independent and deeply religious people of Afghanistan,” said Carter, the Russians were seeking to cement a strategic location that posed “a grave threat to the free movement of Middle East oil.”
To meet the overwhelming Soviet threat, the White House presented to Congress a detailed containment policy. The Carter Doctrine imposed extensive economic sanctions on Russia and struck a symbolic blow by organizing a boycott of the 1980 Olympics, which Moscow was hosting that year.[25] Citing the peace treaty between Israel and Egypt, Carter claimed that his comprehensive strategy strengthened peace and strategic alliances in the Middle East, goals the United States would achieve in part through a resolution to the Israeli-Palestinian conflict. Carter’s policy reinstated the Selective Service and intensified the massive military buildup already underway. “Our forces must be increased if they are to contain Soviet aggression,” the President said, formally committing the US military to engage any force threatening vital US interests in the Persian Gulf.[26] He recommended tightening controls on intelligence information while loosening restraints on the agencies that conduct intelligence-gathering operations.
The architect of Carter’s Persian Gulf policy was National Security Advisor Zbigniew Brzezinski. A foreign policy advisor to the Kennedy and Johnson administrations during the 1960s, and the first director of the globalist Trilateral Commission in the 1970s, Brzezinski was a devoted anti-communist. His chief concern was that the implosion of relations between the United States and Iran in January 1979 left the US without a dependable bulwark against Soviet projection into Central Asia.[27] Relations between the two countries had become strained when the US-backed government of Shah Mohammad Reza Pahlavi—along with the Israeli- and US-trained secret police force SAVAK that secured the Shah’s rule—collapsed and was replaced with an Islamic state. Without a strongman and a reliable security apparatus in the region, US cold warriors found their ability to shape the history of the region compromised.[28] Brzezinski initially pushed for military intervention in Iran to restore US hegemony. Carter resisted such a drastic move. (It has been suggested that the Soviets were counting on the United States to invade Iran and give them cover in Afghanistan.[29]) Complicating the situation, Iranian activists, angry over the meddling of the United States in Iranian domestic affairs, and in the thrall of an Islamist movement, stormed the US embassy in November 1979 and took dozens of Americans captive, holding fifty-two of them for 444 days.[30] Iran’s chief Imam, Ayatollah Ruhollah Khomeini, refused to demand that the students release the hostages and called for holy war.[31]
A few days after the Soviet invasion of Afghanistan, Brzezinski wrote a memorandum to President Carter in which he asserted that support for the Islamic resistance in Afghanistan was vital to the interests of the United States. Destabilization of the Afghan government would have to stand in place of a puppet regime in Iran to stem Soviet expansion. (Iran, independently of the United States, aided the mujahideen against the Russians.) Brzezinski urged Carter to send money and arms to the Afghan rebels. Moreover, Brzezinski argued, success depended on bringing other entities into the resistance effort. He suggested coordinating “propaganda” and “covert action” campaigns with willing Muslim countries. The Brzezinski plan offered the bonus of building good will between the United States and Islamic groups. Pakistan was the obvious choice for a forward staging area for operations in Afghanistan, despite the anti-American sentiment prevalent among its people.[32] Brzezinski asked the President to review US policy toward Pakistan and urge Congress to increase military aid to that country.[33]
Carter’s State of the Union reflected Brzezinski’s recommendations. It was announced that the administration had “reconfirmed” the 1959 agreement to help Pakistan preserve its “integrity” and “independence.” The president promised that the US would “take action” if Pakistan were threatened by any outside aggression. He asked Congress to “reaffirm this agreement” and announced that he was organizing additional economic and military aid for Pakistan. Belying the “human rights” rhetoric typical of his administration (and especially his post-presidency), Carter made these requests as Pakistan’s military ruler, General Mohammed Zia ul-Haq, was moving to enhance state repression and fundamentalist Islamic rule in his own country.[34]
The Carter Doctrine was but the public face of intervention; the White House had in fact pursued destabilization of the PDPA several months before the Soviet invasion. In mid-summer 1979, Carter signed the first directive providing secret aid to the Muslim rebels fighting the PDPA, a presidential finding that permitted the CIA to initiate covert operations.[35] In a memorandum dated July 3, 1979, Brzezinski “warned” Carter that covert US action against the PDPA government would prompt Soviet intervention in Afghanistan. Years later, Brzezinski admitted that provoking the Russians was one of the purposes of intervening in Afghanistan.[36] Brzezinski saw an opportunity to weaken the Soviet Union by putting its military in a difficult situation—a situation made more difficult by extensive US involvement.
In a 1998 interview with the French publication Le Nouvel Observateur, Brzezinski recounted: “According to the official version of history, CIA aid to the Mujahedeen began during 1980,” that is, “after the Soviet army invaded Afghanistan, 24 December 1979.” The official version of history is a lie. “But the reality, secretly guarded until now, is completely otherwise,” Brzezinski admits. “Indeed, it was July 3, 1979 that President Carter signed the first directive for secret aid to the opponents of the pro-Soviet regime in Kabul. And that very day, I wrote a note to the president in which I explained to him that in my opinion this aid was going to induce a Soviet military intervention.”
Confronted with the overthrow of the Taraki government, almost certainly orchestrated or at least facilitated by the CIA, and a burgeoning anti-democratic countermovement backed by the US government, the Soviet Union did respond in the predicted fashion. The “secret operation,” Brzezinski said, “had the effect of drawing the Russians into the Afghan trap.” Elated that the Soviet Union fell for the ploy, Brzezinski wrote to Carter: “We now have the opportunity of giving to the USSR its Vietnam war.” Looking back, Brzezinski observed: “Indeed, for almost 10 years, Moscow had to carry on a war unsupportable by the government, a conflict that brought about the demoralization and finally the breakup of the Soviet empire.”[37] The claim that their actions brought about the fall of the Soviet Union is a bit of an exaggeration on Brzezinski’s part. But it is no exaggeration to point out that Carter and Brzezinski played a key role in creating the conditions for United States intervention in Afghanistan by inducing Soviet interference in the region.
Carter-era policy, which included US funding of the mujahideen (a fact explicitly denied by journalists such as Peter Bergen), triggered a series of events culminating in the worst terrorist attacks on United States soil. This is not to say that the Islamists did not operate with their own motives. The Islamist desire to resurrect the caliphate and spread it globally exists independent of United States intervention in the Muslim world. However, destabilizing secular regimes and spreading advanced weaponry in these parts of the world, as well as using Islamic terrorists for various strategic goals, have enabled the jihadists to wage significant war against the secular West. To be sure, this threat must be confronted. But an effective response depends on an accurate accounting of this history and moving forward with a different foreign policy and national security model.
* * *
Carter never had a chance to personally pursue his policy further. Mired in a stagnant domestic economy and the continuing Iranian hostage crisis, he was defeated in the 1980 presidential election by former California governor and Barry Goldwater protégé Ronald Reagan. Reagan’s ascent to state power changed the nature of the United States’ external behavior. Under Reagan, the United States was to adopt a more aggressive posture. Like Brzezinski, Reagan was a dedicated anti-communist. Unlike Brzezinski, apocalyptic Christian Zionism informed Reagan’s anti-communism.[38] Reagan’s choice of Vice President, former Republican National Committee (RNC) chairman and CIA director George H. W. Bush, reinforced his extremism. Bush brought with him close associations with authoritarian figures around the globe. This shift in the ideological character of United States leadership towards ultra-right thinking and practice meant that, while Carter required prodding by hawks to oppose the spread of communism, the Reagan regime would not hesitate to militantly prosecute capitalist encirclement of the socialist world. With providence guiding his regime, Reagan had no compunction about backing authoritarians abroad if those means satisfied the desired end, namely the destruction of worker states and the advancement of corporate power.
Reagan argued that the Soviet Union, guided by expansionist goals articulated in the Brezhnev Doctrine, was increasing its network of communist client states. Administration officials claimed that they could point to several countries that had come under Soviet influence or control within the past decade—Afghanistan, Angola, Cambodia, Congo, Ethiopia, Grenada, Laos, Libya, Madagascar, Mozambique, Nicaragua, and Syria were among the more notable. Several of these countries saw significant insurgent movements. The United States found no shortage of surrounding states eager to act as surrogates for US action in several of these hot spots. Reagan used these arguments to accelerate and enlarge Carter’s military buildup and to deeply involve the United States in conflicts in Angola, Iraq, Iran, Nicaragua, and Cambodia.[39] In 1985, with mounting Iranian military successes against Iraq, and with liberal Democrats in Congress pushing him,[40] Reagan signed National Security Decision Directive 166—US Policy, Programs, and Strategy in Afghanistan—authorizing expanded US covert aid to guerrillas in Afghanistan.[41]
CIA Director William Casey (right) with President Reagan (center) and Vice-President Bush (left)
Cambodia, Nicaragua, Iraq, and Angola all involved extensive US intervention into the internal affairs of nations during the 1980s. However, the US adventure in Afghanistan would become the largest covert action program since World War II.[42] United States taxpayers shelled out approximately four billion dollars to contain the Soviet army in Central Asia. The Reagan-Bush policy had moved far beyond Carter and Brzezinski’s original vision of harassment and containment. Led by a quasi-fascist regime, the United States would become engaged in a full-blown proxy war against the Soviet Union in Central Asia.
Reagan’s NSC team included cloak-and-dagger enthusiasts CIA Director William Casey, Secretary of Defense Caspar Weinberger, and national security advisors Robert McFarlane and John Poindexter (all figures involved in the Iran-Contra scheme to sell missiles to Iran to finance the Contras).[43] With McFarlane appending an annex to the Carter finding, the Reagan NSC directive transformed the goal of the Afghanistan intervention from calculated reaction to determined counterrevolutionary insurgency. The Reagan White House aimed not merely to harass and contain the Soviet Union but to drive the communists out of Afghanistan.[44]
Casey, who managed Reagan’s rise to power, assumed a direct role in restructuring US intervention in Afghanistan. In October 1984, Casey traveled to a military base south of Islamabad. From there, his guides flew him via helicopter to camps near the Afghan border to observe mujahideen during training exercises. Impressed by what he saw (including the rebels’ skill at bomb making), Casey arranged in 1985 for the CIA to provide the mujahideen with intelligence, including satellite reconnaissance, intercepts of Soviet communications, and military operational plans, as well as advanced weapons technology (such as timing and targeting devices for explosives and missiles). Committed to psychological operations, Casey arranged for the distribution throughout the region of thousands of copies of the Koran and books alleging Soviet atrocities in the Soviet Union’s southern republics.[45] These materials were used in madrasas in Pakistan to indoctrinate a generation of jihadists.
While the United States Congress defunded and banned the infamous Contra operations in Nicaragua, Casey convinced Congress to fully fund the CIA’s Afghan program.[46] By 1987, observers estimated, the annual flow of weapons into Afghanistan was around 65,000 tons. Shipments included Stinger anti-aircraft missiles and other sophisticated weaponry.[47] The US distributed aid through various pipelines, including Pakistan’s Inter-Services Intelligence agency (ISI).[48] The Saudis matched US financial contributions, which were funneled directly through the ISI.[49] After the Soviet Union withdrew in early 1989, the Bush administration continued to pour weapons into Afghanistan. The CIA-ISI coalition sustained the authoritarian mentality with copies of extremist-nationalist Islamic tracts, as well as children’s textbooks showing mujahideen killing Soviet soldiers.[50]
By the mid-1980s, the United States was deeply involved in the internal dynamics of Afghanistan, supporting Hezb-e-Islami leader Gulbuddin Hekmatyar.[51] Hekmatyar became a spokesman for the mujahideen and was regularly consulted by the Western media to provide the Afghan worldview. Washington’s courting of Hekmatyar proved at points unwise. Although it did not seem to trouble the White House that Hekmatyar’s followers threw acid in the faces of women who refused to wear the veil, it must have bothered Reagan and Bush that Hekmatyar developed close ties with Iran’s Ayatollah Ruhollah Khomeini, and especially that he appeared to spend as much time fomenting internecine warfare among mujahideen factions as he did conducting war against the Soviets.[52]
In addition to aid from various state entities and wealthy contributors, the CIA and the ISI used profits from the Central Asian opium trade to supplement their operations funds. During the 1970s, most of the heroin entering the United States came from the Golden Triangle region in Southeast Asia (Burma-Thailand-Laos). Before US involvement, poppy production was limited and heroin use was rare in the Golden Crescent region in Central Asia (Afghanistan-Pakistan-Iran). When the CIA shifted operations to Central Asia in the 1980s, Afghanistan and northwestern Pakistan became major heroin producers.[53] The General Accounting Office estimated that by 1986 forty percent of the heroin in the United States originated in the Golden Crescent. Some sources estimated that as much as eighty percent of heroin originated in the region.[54] The CIA arms-and-supplies pipeline from Karachi, Pakistan, to points in Afghanistan served, on the return trip, as the route for heroin flowing back to Karachi and out to North America and Europe.[55] The mujahideen managed the poppy trade in those areas they controlled (frequently battling to determine who would control key trade routes).[56] Once known as the silk route connecting the cultures of East and West, Afghanistan became known as the opium trail.
Reaganites displayed particular affection for a band of guerrillas known as the “Afghans.” The Afghans were not Afghan natives but extremist Muslims from Saudi Arabia, Egypt, and other Middle Eastern and North African countries. Just as he referred to the Contra death squads in Nicaragua as the Central American equivalent of the US “founding fathers,” Reagan bestowed upon the Afghans the honor of “freedom fighters.” The Afghans did not disappoint. They were every bit the merciless butchers that the Contra rebels were.
During this period, the Afghans were affiliated with Hekmatyar and the Hezb-e-Islami. Osama bin Laden, a businessman from a wealthy Saudi Arabian family closely associated with the Saudi royal clan, led the Afghans. Osama arrived in Peshawar in the mid-1980s to head the Maktab al-Khidamat (or MAK), a front organization funneling US weapons and supplies to the mujahideen.[57] Osama’s duties included raising money and soldiers for the war effort. He used his connections with Saudi elites to generate funds. He used his gift for channeling anti-Western and anti-communist hatred to mobilize young Muslim men for jihad. Bin Laden’s star rose rapidly under CIA-ISI tutelage and he soon commanded an army of several thousand Islamic fighters. Saudi Arabia had sent something of a prince after all. In the crucible of the Afghan war, Osama would raise up al Qaeda (Arabic for “The Base”), claimed by Washington to be the largest terrorist network in the world. The US extensively funded bin Laden’s activities, even assisting in the construction of terrorist training camps.
Alliance-organized insurgency, especially after the US distanced itself from Hekmatyar, set the stage for the ascendancy of the Pashtun Taliban, Islamic reactionaries led by religious cleric Mullah Mohammed Omar. Many Taliban were trained in Pakistani-funded madrasas where clerics preached Deobandism, a species of Islam originating in Deoband, India, in the nineteenth century as a reaction to British imperialism. Couched in the rhetoric of negation, Deobandism opposes a narrowly interpreted Islamic orthodoxy to Western modernity. Pakistan organized camps for the Taliban in the North-West Frontier Province (ironically, the home of the nonviolent Muslim leader Badshah Khan).[58]
After the Soviet withdrawal in 1989 and the long civil war that followed, the Taliban captured Kabul in 1996. The Taliban was formally recognized as the legitimate government of Afghanistan by Pakistan, Saudi Arabia, and the United Arab Emirates in 1997. Keeping its options open, the US was reluctant to legitimate the Taliban. Nevertheless, Washington continued to work closely with the group. When the US shifted loyalties from the Hezb-e-Islami to the Taliban, so too did bin Laden. Al Qaeda aligned with the Taliban upon the Kabul takeover, and bin Laden became Omar’s “honored guest.” The Taliban ruled most of Afghanistan until its demise in late 2001 at the hands of the US military—having fallen out of favor with Washington—and the Northern Alliance, the anti-Taliban factions of the mujahideen.[59]
During its reign, the Taliban rigidly regulated Afghan society, forcing women out of the public sphere and imposing strict Shari’a, including Hadd offenses and the establishment of muhtasib (vice and virtue enforcers). Rule-breakers were publicly dismembered and soccer fields became gallows. The Taliban sought to wipe out Afghanistan’s rich cultural heritage, going so far as to blow up the Bamiyan Buddhas.
The progress Afghan society had made under the PDPA was reversed. By the time of the Taliban’s collapse, roughly seventy percent of men and eighty-five percent of women could neither read nor write, one-quarter of Afghan children died before the age of five, the average Afghan could expect to live to be forty years of age, infant and maternal mortality were the second-highest in the world, and only twelve percent of the population had access to safe drinking water.[60]
* * *
United States interest in gas and oil in Central Asia became clear with the pullout of the Russian military from Afghanistan in 1989 and the sudden collapse of the Soviet system in 1991. By 1992, mostly US-based companies (Amoco, ARCO, British Petroleum, Exxon-Mobil, Pennzoil, Phillips, TexacoChevron, and Unocal) controlled half of all gas and oil investments in the Caspian region. The industry acquired several high-profile political figures to advise company operations in the region. Zbigniew Brzezinski, national security advisor under President Carter, was a consultant for Amoco. Dick Cheney, later vice president under Bush Junior, headed Halliburton. Henry Kissinger, former secretary of state under presidents Nixon and Ford, and Robert Oakley, a former State Department counterterrorism official, were consultants for Unocal. Condoleezza Rice, national security advisor under Bush Junior, served on the board of TexacoChevron. The industry sought to develop the “Stans” (Azerbaijan, Kazakhstan, Turkmenistan, and Uzbekistan), with some ten trillion cubic meters of gas and 115 billion barrels of proven oil reserves, permitting the West to undermine the hegemony of OPEC (Organization of the Petroleum Exporting Countries).
Within less than five years of the fall of the Soviet Union, Unocal, in association with Delta Oil (Saudi Arabia), Gazprom (Russia), and Turkmenrozgas (Turkey), began negotiating with various Afghan factions to secure the right to construct a trans-Afghan pipeline to move fossil fuels from the Caspian Sea basin to the Arabian Sea. Outside of the Middle East, the Caspian Sea region contains the largest proven natural gas and oil reserves in the world (Central Asia has almost 40 percent of the world’s gas reserves and 6 percent of its oil reserves). The United States sought not only to secure these reserves for its increasing energy appetite but also saw control over transport as imperative, since control over transport permits control over prices. The desired routes ran through Turkey to the Mediterranean and through Afghanistan to Pakistan, thus bypassing routes through Russia, Azerbaijan, and Iran. Rerouting oil and gas through Turkmenistan, Afghanistan, and Pakistan would enhance US energy security while simultaneously undercutting Russian and Iranian political and economic influence in Central Asia. Installing an authoritarian government in Afghanistan compliant with US interests became a necessary step towards securing conditions for the further development of exploitable energy sources in Central Asia.
The Unocal consortium, CentGas, was forced to compete with the Argentinean gas company Bridas, which had explored sites in Turkmenistan in the early 1990s and negotiated a deal securing, among other sites, the Yashlar block, with an estimated stock of nearly one trillion cubic meters of gas, lying near the border of Afghanistan. In 1995, Bridas secured a contract to build a pipeline between Turkmenistan and Pakistan pending successful negotiations with Afghanistan. All seemed well when Bridas struck a deal with the Rabbani government, which had come to power in Afghanistan in April 1992 after the fall of the Najibullah regime. United States ally Hekmatyar and the Hezb-e-Islami had joined with the Jamiat-e-Islami party, led by Burhanuddin Rabbani, to form the new government. However, in 1994, the Taliban mysteriously emerged, its ranks drawn from Islamic schools in Pakistan, and began taking cities and territories in Afghanistan. By the fall of 1996, the Taliban had toppled the Rabbani government and undermined Bridas’ bid to build a pipeline across Afghanistan. Not coincidentally, CentGas emerged as the frontrunner to build the pipeline across the country.
Unocal worked closely with the Taliban in developing plans for the pipeline. In 1997, Unocal met with Taliban leaders to “educate them about the benefits such a pipeline would bring this desperately poor and war-torn country.” However, Unocal withdrew from the consortium in December 1998 after suspending involvement in August of that year. A 21 August 1998 Unocal statement cited “sharply deteriorating political conditions in the region” and the reluctance of the United States and the United Nations to recognize the Taliban as the legitimate government of Afghanistan as reasons for pulling out. Unocal denied its association with the Taliban in the days following 9-11. In a press release dated 14 September 2001, Unocal averred, “The company is not supporting the Taliban in Afghanistan in any way whatsoever. Nor do we have any project or involvement in Afghanistan.” However, after the United States invaded Afghanistan, toppled the Taliban regime, and installed an interim government, oil companies, interim ruler Hamid Karzai, and Mohammad Alim Razim, minister for Mines and Industries, restarted the pipeline project talks in the spring of 2002. Razim stated that Unocal was the frontrunner to obtain contracts to build the pipeline and that the pipeline would be built with funds from the reconstruction of Afghanistan, funds supplied by the United States taxpayer.
Crucial to these negotiations is the presence of US envoy to Kabul, Afghanistan-born Zalmay Khalilzad, formerly a lobbyist for the Taliban and oil companies. As special envoy, he ostensibly reports to Secretary of State Colin Powell. However, as a National Security Council (NSC) official and Special Assistant to the President for Southwest Asia, Near East and North Africa, he reports to NSC chief Condoleezza Rice. Khalilzad has a long history working in Republican governments. He headed the Bush-Cheney transition team for the Department of Defense. He served as Counselor to Secretary of Defense Donald Rumsfeld. Under George Bush Senior, Khalilzad served as Assistant Under Secretary of Defense for Policy Planning. He served under Reagan from 1985 to 1989 at the Department of State, where he advised the White House on the Iran-Iraq War and the Soviet War in Afghanistan.
In 1997, as a consultant to Unocal, Khalilzad worked closely with the Taliban in negotiations for establishing oil and gas pipelines through Afghanistan. Khalilzad defended the Taliban in an op-ed piece in The Washington Post, writing, “The recent victory by the Taliban, a traditional orthodox Islamic group, can put Afghanistan on a path toward peace or signal continuing war and even its end as a single entity.” In praising the mujahideen (for whom he raised money as executive director of the Friends of Afghanistan), he parroted a line from his Columbia University colleague Brzezinski, claiming that the mujahideen “not only forced the Soviets to withdraw but also played a role in the demise of the Soviet Union itself.” However, the instability in the aftermath of the Soviet withdrawal “has been a source of regional instability and an obstacle to building pipelines to bring Central Asian oil and gas to Pakistan and the world markets.” Downplaying the brutality of the Taliban, he contended that it “does not practice the anti-US style of fundamentalism practiced by Iran—it is closer to the Saudi model. The group upholds a mix of traditional Pashtun values and an orthodox interpretation of Islam.” As a risk analyst for Cambridge Energy Research Associates in the mid-1990s, Khalilzad personally entertained a Taliban delegation in Sugar Land, Texas, in 1997.
In August 1998, the US embassies in Kenya and Tanzania were bombed and Khalilzad promptly changed his position on the Taliban. In an article published in The Washington Quarterly (winter 2000), Khalilzad presented what would become key elements of the Bush policy on Afghanistan. He wrote that administration officials under Clinton in 1994 and 1995 underestimated “the threat [the Taliban] posed to regional stability and US interests.” He noted that Afghanistan’s importance “may grow in the coming years, as Central Asia’s oil and gas reserves, which are estimated to rival those of the North Sea, begin to play a major role in the world energy market.” Afghanistan would serve as a “corridor for this energy.” He impressed the Bush administration, becoming an advisor to the president, and enjoying appointment to the NSC. The United States has indeed established a military presence throughout the Caspian Sea region. The trans-Afghanistan gas pipeline currently being negotiated will stretch 1,650 kilometers.
For documentation of this section, see my article (coauthored with Laurel Phoenix) “The neoconservative assault on the Earth: The Environmental Imperialism of the Bush Administration,” published in Capitalism Nature Socialism, May 23, 2006. In that article this history is expanded and brought up to date in the context of the Bush administration’s neoconservative policy.
* * *
US support for Islamic extremists in Afghanistan and Pakistan lost much of its raison d’être with two occurrences. First, the Soviet Union’s sudden collapse in 1991 following a failed military coup ended the Cold War. Without a clear geopolitical rationale for involvement in Afghanistan, the US Congress no longer had a reason to throw money at the project. Second, the Taliban became uncooperative in the trans-Afghan project, now led by energy giant Enron (with substantial interests held by Halliburton). The pipeline deal collapsed in August 2001.[61]
The Soviet Union paid a heavy price for its involvement in Afghanistan’s internal affairs. The official state count of Soviet soldiers killed in the period 1979-89 is 13,833. Western and alternative Soviet estimates put the number closer to 40,000.[62] The Afghan people suffered a much worse fate. Close to half a million people were killed in Afghanistan in the period 1979-89. Between 1989 and the mid-1990s, after years of fighting among Afghan factions, approximately 50,000 more Afghans were killed. Some estimates put the Afghan death toll for the period 1979-2001 at two million. Millions more Afghans became refugees, fleeing into Pakistan and Iran. Those who remained behind were subjected to the brutal order of the Taliban. The United States paid a heavy price, as well. On September 9, 2001, Ahmad Shah Massoud, the leader of the Northern Alliance, was assassinated by two men posing as reporters. Two days later, suicide bombers commandeered four US airliners and piloted three of them into the World Trade Center in New York City and the Pentagon in Washington DC, killing approximately 2,700 civilians. The hijackers had trained at al Qaeda camps in Afghanistan, and Osama bin Laden and his inner circle had orchestrated the plan.
The attack was not unexpected. Prior to 9-11, bin Laden had been involved in several terrorist attacks. His organization was behind the bombing of the USS Cole in Yemen on October 12, 2000 that left 17 sailors dead.[63] On August 7, 1998, al Qaeda bombed US embassies in Nairobi, Kenya and Dar es Salaam, Tanzania, killing 263 people and injuring more than 5,000. On June 25, 1996, al Qaeda bombed the Khobar military complex near Dhahran in Saudi Arabia, killing 19 US soldiers. The October 3-4, 1993 gunfight in Somalia that left 18 US soldiers dead and 84 wounded was the work of al Qaeda-sponsored fighters. And the February 26, 1993 World Trade Center bombing, which killed 6 people and injured more than 1,000, was in part organized by al Qaeda. Reagan and Bush’s favorite Islamic warrior had—long before 9-11—turned violently against his benefactors.
Bin Laden loathed the United States for its military, economic, and political support for Israel and its incessant intrusions into the Islamic world, a fact that could not have escaped CIA agents working with the Afghans. In the habit of playing with fire, US intelligence understood that Osama was strategically using America’s wealth and anticommunist obsession to further his objectives in the region. With the Soviet Union dissolving, and the United States inflicting more insult and injury on the Muslim world by sending troops into Saudi Arabia and Kuwait, and then attacking Iraq in 1991, it was inevitable that Osama would cast his ruthless gaze upon America. As the decade advanced, bin Laden and associates moved ever closer to the center of the network of forces making asymmetrical war on the West, a war that would continue even after his assassination by US forces under Obama.
* * *
A conjuncture of events moved the United States to groom Afghanistan to become a US client state. The fall of a pro-Western Iranian government, the overthrow of peripheral capitalism in Afghanistan, and the institution of a pro-socialist government in the Golden Crescent, signaled to the US that its hegemony in the region was faltering. After the Soviets occupied Afghanistan in 1979, the United States’ desire to shape Afghanistan’s future became justifiable as aggressive containment of Soviet expansionism.[64] The proxy war in Afghanistan falls within the logic of capitalist encirclement of the Socialist world system, a project that, however immoral and anti-democratic, had the support of much of the American electorate.
Although all parties involved in the conflict are responsible for death and displacement, these outcomes would have been unlikely had the United States not pursued a program of destabilization and insurgency in the region. The United States, along with Saudi Arabia and Pakistan, is substantially responsible for the rise of extremist Islam and the brutal rule of the Taliban in Afghanistan, as well as the creation of a global terrorist network that put millions of human beings in harm’s way. To achieve long-term control over Afghanistan, the Pakistan-Saudi-US alliance cultivated an extremist countermovement of anti-democratic Muslim factions to assume the reins of the state after the expulsion of the Soviet military and the fall of socialism. The alliance shaped the discontent and frustration of traditionalists into a permanent offensive weapon against socialism. Once socialism was defeated, this weapon turned its wrath upon the United States.
By assisting in the development of al Qaeda and the Taliban against the backdrop of decades of imperialism in the Middle East and Central Asia, the United States government helped prepare the stage for 9-11.[65] However much wealth bin Laden inherited or could amass, it is unlikely that he could have developed the capacity to attack US interests around the world had the Reagan-Bush regime not paid for his terror network and provided al Qaeda with a base of operations in Afghanistan. However, while a narrative has emerged on the left that it is doubtful that bin Laden would have been motivated to carry out attacks against US targets had the United States not for decades violently shaped the Middle East and Central Asia to its material and ideological benefit (an interpretation I shared when I started this research), one must not forget the affirmative motive Islamists seek in reestablishing the Caliphate. The motive for crime must not be reduced to means and opportunity.
Endnotes
[1] Peter Bergen, “Five Myths about Osama bin Laden,” The Washington Post, May 6, 2011.
[2] “Afghanistan rebels say president killed in coup,” The Washington Post, April 29 (1978), A20. “The funeral that turned into a bloody bath,” The Economist, May 6 (1978), 67. Angus Deming, “Kabul’s bloody coup,” Newsweek, May 8 (1978), 55. “Meaning of the latest coup in Afghanistan,” US News & World Report, May 15 (1978), 35.
[3] Henry S. Bradsher, “Afghanistan,” The Washington Quarterly 7 (3, 1984): 42. William Blum, Killing Hope: US Military and CIA Interventions Since World War II (Common Courage Press, 1995).
[4] Robin Knight, “Afghanistan’s shaky venture into Marxism,” US News & World Report, December 11 (1978), 55. Jonathan C. Randal, “Tensions in revolutionary Afghanistan,” The Washington Post, November 7 (1978), A17. Randal, “Marxists set new course for backward Afghanistan,” The Washington Post, November 23 (1978), E13.
[5] Angus Deming and Barry Came, “After Kabul’s coup,” Newsweek, May 15 (1978), 39. Thomas W. Lippman, “Leftist Afghan Regime Seen Trying to Obscure Soviet Tie,” The Washington Post, February 23 (1979), A21.
[6] John Ryan, “Afghanistan: A forgotten chapter,” Canadian Dimension, June (2001), 35.
[7] “Afghanistan: Taraki turns on his king-makers,” The Economist, August 26 (1978), 48. Bill Roeder, “Purge in Afghanistan,” Newsweek, September 4 (1978), 14. Stephen Webbe, “Afghan war: Do we share the blame?” Christian Science Monitor, January 24 (1980), B1. Afghanistan did, however, sign a cooperation treaty with the Soviet Union. Kevin Klose, “Soviets Sign Treaty With Afghanistan; Kabul’s New Rulers,” The Washington Post, December 6 (1978), A17.
[8] “Afghanistan: Next stop, Kabul,” The Economist, February 17 (1979), 65. Jonathan C. Randal, “Tensions in revolutionary Afghanistan,” The Washington Post, November 7 (1978), A17. Carol Honsa, “Three years of Marxism haven’t stopped Afghan rebels,” Christian Science Monitor, April 29 (1981), 7.
[9] “Iran and Afghanistan,” The Economist, May 13 (1978), 70. Kevin Klose, “Soviet Moslem areas show little interest in Islamic revolt,” The Washington Post, December 5 (1979), A20.
[10]“Another holy war,” The Economist, March 24 (1979), 65. “Afghanistan’s Islamic revolt,” Newsweek, April 2 (1979), 47. Stuart Auerbach, “Moslem Rebels Battle Afghan Troops in Remote Region,” The Washington Post, March 22 (1979), A21. The various groups involved included Sayed Ahmad Gailani’s National Islamic Front of Afghanistan; Hezb-e-Islami Afghanistan, led by Gulbuddin Hekmatyar; Yunus Khalis’ Paiman-i-Ittehadi Islami (Unity of Islamic Forces), an alliance that included Sibghatullah Mujaddidi’s National Liberation Front of Afghanistan; Jamiat-e-Islami party, led by Burhanuddin Rabbani; Mohammed Nabi Mohammedi’s Harakeli Iniqilab Islami Party (Movement for the Islamic Revolution), and New Afghanistan Union National Islamic party, whose main goal was to re-enthrone King Zahir Shah. The Saudis generously bribed these groups in order to fashion an alliance among them. However, journalists visiting the region reported disorganization among the opposition. “Deep in an Afghan cave,” The Economist, February 2 (1980), 52. Edward Girardet, “With Afghan rebels: Ready, willing—able?” US News & World Report, February 18 (1980), 38. Among the most determined of these groups was the Islamic fundamentalist Ikhwanis (named after the Egyptian Ikhwan al-Muslimeen).
[11]Amin, the strong man of the Taraki government, had been positioning himself to assume the reins of power. Stuart Auerbach, “Afghanistan President Quits As Moslem Rebellion Grows; Afghan President Quits as Rebels Gain,” The Washington Post, September 17 (1979), A1. “Afghanistan: Shoot-out in the Kabul corral,” The Economist, September 22 (1979), 60.
[12]Stuart Auerbach, “Foes ‘Eliminated,’ Afghan leader says,” The Washington Post, September 18 (1979), A11.
[13]James Pringle, “Kabul under siege,” Newsweek, September 24 (1979), 62. “Amin punches his point home,” The Economist, November 17 (1979), 68.
[14]Fay Willey, Loren Jenkins, and Kim Willenson, “Russia’s own quagmire,” Newsweek, August 6 (1979), 33. Sol W. Sanders, “The Soviet Union’s persistent push to the Indian Ocean,” Business Week, July 30 (1979), 42. Stuart Auerbach, “Afghan president is toppled in coup: Soviet troops reportedly involved in Kabul fighting,” The Washington Post, December 28 (1979), A1. “The Russians reach the Khyber Pass,” The Economist, January 5 (1980), 25. Jerry Adler, “Moscow’s man in Kabul,” Newsweek, January 7 (1980), 22.
[15]Kevin Klose, “Soviet Union denies involvement in coup in Afghanistan,” The Washington Post, December 31 (1979), A9.
[16]“Afghanistan: When Russia signs, look for trouble,” The Economist, January 13 (1979), 50. Kevin Klose, “Moscow justifies actions in Kabul on basis of pact,” The Washington Post, December 29 (1979), A1.
[17]Don Oberdorfer, “U.S. envoy sent to consult Allies on Afghan issue: Carter hits Soviets on Afghan action,” The Washington Post, December 29 (1979), A1.
[18]Edward Girardet, “Divided Afghan tribesmen pull together to oust Soviets,” Christian Science Monitor, May 9 (1980), 5.
[19]David Hirst, “Divided Muslim peoples yearn for a new Saladin,” The Guardian, December 12 (1992), 13. Pakistan also had (and continues to have) ambitions in Kashmir and Jammu. Pakistan currently controls approximately one-third of the disputed territory, which they call Azad Kashmir. India holds roughly two-thirds of Jammu and Kashmir. Although anti-Soviet operations were in Pakistan’s interests—as Pakistani elites saw things—the government made their cooperation with US anti-communist goals contingent upon extensive economic and military support from the United States. Stuart Auerbach, “Pakistan ties arms aid to economic assistance: Pakistan details economic, military aid needs,” The Washington Post, January 14 (1980), A1.
[20]Sol W. Sanders, “Pakistan’s new buffer role against the Soviets,” Business Week, August 21 (1978), 48.
[21]Jay Mathews, “Sino-U.S. accord seen on reaction to Soviets: China to receive satellite station,” The Washington Post, January 9 (1980), A1. “China—the arms factor,” Christian Science Monitor, January 11 (1980), 24.
[22]Tony Clifton, “Russia’s Vietnam?” Newsweek, June 11 (1979), 67.
[24]Daniel Sutherland, “Washington toughens stance with ‘realistic’ views of Soviet aims,” Christian Science Monitor, January 7 (1980), 1. “Afghanistan takeover—Why Russians acted,” US News & World Report, January 14 (1980), 22. Sol W. Sanders, “Moscow’s next target in its march southward,” Business Week, January 21 (1980), 51.
[25]Edward Walsh and Don Oberdorfer, “U.S. to Withhold Grain From Soviets, Curtail Technological, Diplomatic Ties,” The Washington Post, January 5 (1980), A1. Ronald Koven, “U.S. Rebuffed by France,” The Washington Post, January 7 (1980), A1.
[26]George C. Wilson, “Carter is converted to a big spender on defense projects,” The Washington Post, January 29 (1980), A16.
[27]William Branigin, “Soviets gain from U.S. setback in Iran,” The Washington Post, May 23 (1979), A16.
[28]James Rupert, “Iran undermining anti-Soviet battle Afghan rebels say,” The Toronto Star, October 22 (1986), A12.
[29]Charles Fenyvesi, “Carter’s ‘double-cross’ on Afghanistan: When the Russians invaded Afghanistan, they were counting on us to attack Iran,” The Washington Post, April 12 (1981), D1.
[30]John M. Goshko and J. P. Smith, “Bazargan government resigns in Iran,” The Washington Post, November 7 (1979), A11.
[31]“Iran amok: Ochlotheocracy takes over,” The Economist, November 10 (1979), 15. “Start of a holy war against ‘infidel’ America?” U.S. News & World Report, December 3 (1979), 11.
[33]Zbigniew Brzezinski, “Reflections on Soviet intervention in Afghanistan,” Memo to President Jimmy Carter, December 26, 1979.
[34]Stuart Auerbach, “Pakistan moves toward Islamic authoritarianism,” The Washington Post, October 21 (1979), A1.
[35]Steve Coll, “Anatomy of a victory: CIA’s covert Afghan war,” The Washington Post, July 19 (1992), A1.
[36]In his view, the “Soviet Vietnam” was the “conflict that brought about the demoralization and…the breakup of the Soviet empire.” Quoted in “How Jimmy Carter and I started the Mujahideen: Interview of Zbigniew Brzezinski,” Le Nouvel Observateur, Jan 15-21 (1998), 76. These quotes are from a translation by William Blum and David N. Gibbs. See Gibbs, “Afghanistan: The Soviet Invasion in Retrospect,” International Politics 37 (2, 2000): 233-246.
[37]Asked to reflect on the consequences of fueling the extremist Islamic movement, Brzezinski responded, “What is most important to the history of the world? The Taliban or the collapse of the Soviet empire? Some stirred-up Moslems or the liberation of Central Europe and the end of the cold war?” It would be interesting to know if Brzezinski expresses the same opinion in the aftermath of 9-11 and two devastating wars.
[38]Among these were the authoritarian World Anti-Communist League (WACL) and racist ideologues such as Roger Pearson. Pearson is the author of Eugenics and Race. In the late seventies, Pearson was the editor for Policy Review, a publication of the Heritage Foundation. The Heritage Foundation is a conservative think tank largely responsible for Reagan’s policies. See Russ Bellant, Old Nazis, the New Right, and the Republican Party (South End Press, 1991).
[39]Richard Burt, “Moscow’s arms buildup a major issue for Reagan,” The New York Times, December 7 (1980), A1. “Richard Perle: The Pentagon’s powerful hardline on Soviet policy,” Business Week, May 21 (1984), 130.
[40]Eloise Salholz, “Congressional liberals call for Afghan arms,” Newsweek, July 2 (1984), 17.
[41]Gregory R. Copley, “Pakistan’s great era of challenge,” Defense & Foreign Affairs, February (1985), 8. William J. Holstein, Shahid ur-Rehman, Mark D’Anastasio, and Boyd France, “Gorbachev raises the ante in Afghanistan,” Business Week, May 20 (1985), 83.
[43]“The CIA becomes central again,” The Economist, April 28 (1984), 37. Robert S. Dudney and Orr Kelly, “Inside CIA: What’s really going on?” U.S. News & World Report, June 25 (1984), 27. Philip Taubman, “Casey and his CIA on the rebound,” The New York Times, January 16 (1983), 20. Joseph Lelyveld, “The Director: Running the CIA,” The New York Times, January 20 (1985), 16.
[46]Mary Anne Weaver, “Arming Afghans: A tortuous task,” Christian Science Monitor, March 18 (1985), 1. Edward Girardet, “Arming Afghan guerrillas: Perils, secrecy,” Christian Science Monitor, November 20 (1984), 15.
[47]Weapons came from several sources, primarily the United States, Great Britain, and China. Michael Getler, “U.S. stingers boost Afghan rebels’ performance and morale,” The Washington Post, October 14 (1987), A21. Peter Grier, “Reagan’s plan to give small missiles to rebels sparks security concerns,” Christian Science Monitor, April 2 (1986), 1. Robin Wright and John M. Broder, “CIA seeks return of stingers: Action a response to fears of attack,” The Houston Chronicle, July 24 (1993), A16. The CIA’s missile recovery project amounted to a buy-back program that cost US taxpayers millions of dollars.
[48]Hamid Hussain, “Lengthening shadows: The spy agencies of Pakistan,” CovertAction Quarterly 73 (3, 2000), 18-22.
[51]William Branigin, “Feuding guerrilla groups rely on uneasy Pakistan,” The Washington Post, October 22 (1983), A1. Edward Girardet, “Radical Afghan group undercuts resistance efforts,” Christian Science Monitor, December 30 (1987), 1. George Arney, “The heroes with tarnished haloes: The ruthless and murderous conflicts of Afghanistan’s other war,” The Guardian, January 5 (1988). Michael Hamlyn, “Mujahidin leader vows to fight for Islamic state,” The Times, February 25 (1988).
[52]Tim Weiner, Blank Check: The Pentagon’s Black Budget (Warner Books, 1990). After Pakistan and the US switched loyalties from Hezb-e-Islami to the Taliban and the Taliban came to power, Hekmatyar fled to Tehran. Hekmatyar was briefly prime minister after the collapse of the Soviet occupation, but he could never conquer Kabul. See also Weiner, “Afghan camps, hidden in hills, stymied Soviet attacks for years,” The New York Times, August 24 (1998), A1.
[53]Alfred McCoy, The Politics of Heroin: CIA Complicity in the Global Drug Trade (Lawrence Hill, 1972). David K. Willis, “Hunting down the drug smugglers,” Christian Science Monitor, December 21 (1983), 16. Associated Press, “Drug smuggling ring broken in Afghanistan,” The Toronto Star, December 29 (1986), A19. James Davis, “CIA hunt for the missing stingers,” The Times (India), March 19 (1989).
[54]Jack R. Payton, “Pakistan could be a future Iran,” St. Petersburg Times, April 19 (1987), D1.
[55]Christina Lamb, “BCCI linked to heroin trade: Pakistan denies ‘black operations’ but hints at CIA link,” Financial Times, July 25 (1991), I1. Steve Coll, “Pakistan’s illicit economies affect BCCI: Bank shaped by environment of corruption and illegal trade in weapons, drugs,” The Washington Post, September 1 (1991), A39.
[56]Elaine Sciolino and Stephen Engelberg, “Fighting narcotics: U.S. is urged to shift tactics,” The New York Times, April 10 (1988), 1. Christina Lamb, “Bhutto sets sights on drugs barons,” Financial Times, June 6 (1989), I6. Ahmed Rashid, “Mujahedin expands killing zone,” The Independent, September 4 (1989).
[57]Vernon Loeb, “A global pan-Islamic network: Terrorism entrepreneur unifies groups financially, politically,” The Washington Post, August 23 (1998), A01. Scott Baldauf and Faye Bowers, “Origins of bin Laden network,” Christian Science Monitor, September 14 (2001), 6. Kevin Flynn and Lou Kilzer, “A close-up look at terrorist leader,” Rocky Mountain News, September 15 (2001), A16.
[58]Kushanava Choudhury, “Idiocy Armed With A Loaded Gun,” The Statesman, March 14, (2001). Choudhury, “Looking into the heart of darkness,” The Statesman, March 18 (2001). Jack Kelley, “US takes on war-hardened Taliban it helped create,” USA Today, September 21 (2001), 1A.
[59]Ahmed Rashid, Taliban: Militant Islam, Oil, and Fundamentalism in Central Asia (Yale University Press, 2000).
[60]“Inside the Taliban: US helped cultivate the repressive regime sheltering bin Laden,” The Seattle Times, September 19 (2001), A3.
[65]As Hussain put it in “Lengthening Shadows,” “intelligence agencies (Mossad in the case of Hamas, CIA in the case of the Taliban) have found to their grief that patronizing the reactionary forces is a dangerous game,” p. 22.
Before subsiding in the 1960s, lynching at the hands of white people in the United States would claim the lives of several thousand African Americans.
The lynching of Rubin Stacy, Fort Lauderdale, Florida, July 19, 1935
I became interested in this subject in the 1990s while writing my dissertation, a two-volume study of race, class, and punishment in American history. After graduate school, I worked with the Tuskegee archives to produce a machine-readable file of the lynching records (which the archivist told me had never been done, a fact that surprised me) and published two academic articles on the subject, one using the Tuskegee data. The first article, appearing in the pages of the Journal of Black Studies in May 2004, was a review essay upon which the present blog entry leans heavily (I am not going to quote myself, as these are my words). The second was an empirical study of lynching and execution in the United States, appearing in Crime, Law, & Social Change in 2006. In that article I challenged the notion that lynching was “self-help” in the underdeveloped southern United States.
My work on this in the opening decade of the twenty-first century was inspired by two publications on lynching that compel Americans to revisit this peculiar form of collective murder and ponder its significance for the sociological and moral understanding of racial violence. The first of these publications is a disturbing collection of lynching photographs by James Allen and associates, Without Sanctuary, published in 2000. The impact of this documentary is visceral. Perhaps more disturbing than the torn and burned bodies of the victims is the expression on the perpetrators’ faces. Their collective visage is haunting. When I received my copy, in my final year of graduate school (in 2000), I retreated to the only room in my tiny apartment that had a lock on the door—the bathroom—for fear my young son would ask me what I was looking at. The book sickened me. I hid the book away and did not look at it for a long time.
The second publication, Philip Dray’s 2002 At the Hands of Persons Unknown, is a comprehensive accounting of the history of lynching in America. There are no pictures. Yet, the story Dray tells is no less unsettling: a tale of ordinary Americans perpetrating, in ritualized installments, the mass murder of other Americans because they were of a different race. The title of Dray’s book is taken from the typical coroner’s verdict concerning the cause of death in a lynching. This verdict is apt, according to Dray, because “no persons had committed a crime.” The crime was instead “an expression of the community’s will.”
To be sure, the decisions and deeds of individuals in collective action express common sentiment. Yet, the decision to participate in collective action—or collective inaction—is made by individuals. Individuals are responsible for the consequence of their actions. The juxtaposition of the images in Without Sanctuary and Dray’s choice of a book title raised basic albeit unacknowledged problems with the history of racial violence in America. Since the individuals in those frightful photographs are not “persons unknown,” I wondered, why have they remained unnamed for all these years? So I started writing my thoughts about the role of agency and responsibility in explanation. What you will read here are my conclusions about that matter.
* * *
While Without Sanctuary and Persons Unknown raise questions about motive and responsibility, most studies of lynching have pursued different questions, seeking explanation in the phenomenon’s statistical variation in conjunction with macrosocial patterns. These were the works I encountered while writing my dissertation chapter on this period. Researchers focus on demographics, for example the proportion of blacks relative to whites. The crucial pieces: Jay Corzine, James Creech, and Lin Corzine’s “Black Concentration and Lynching in the South: Testing Blalock’s Power-Threat Hypothesis” (Social Forces, 1983) and E. M. Beck, James L. Massey, and Stewart E. Tolnay’s “The Gallows, The Mob, The Vote: Lethal Sanctioning of Blacks in North Carolina and Georgia, 1882-1930,” (Law and Society Review, 1989). The logic of this line of argument is indebted to Hubert Blalock and Peter M. Blau, for example Blalock’s Towards a Theory of Minority Group Relations (1967), and Blau’s Inequality and Heterogeneity (1977).
Researchers also focus on macroeconomic forces, such as fluctuations in cotton prices, to explain variation in lynching in the United States, for example, Susan Olzak’s “The Political Context of Competition: Lynching and Urban Racial Violence 1882-1914” (Social Forces, 1990) and her expansive The Dynamics of Ethnic Competition and Conflict (1992). Also notable: E. M. Beck and Stewart E. Tolnay’s “The Killing Fields of the Deep South: The Market for Cotton and the Lynching of Blacks, 1882-1930” (American Sociological Review, 1990) and the Corzines and Creech’s “The Tenant Labor Market and Lynching in the South: A Test of Split Labor Market Theory” (Sociological Inquiry, 1988). An early treatment is found in Arthur Raper’s The Tragedy of Lynching (1933). Finally, Lincoln Quillian’s “Group Threat and Regional Change in Attitudes toward African-Americans,” published in the American Journal of Sociology (1996), provides a compelling theoretical explanation, the approach for which he developed earlier in his “Prejudice as a Response to Perceived Group Threat: Population, Competition, and Anti-Immigrant and Racial Prejudice in Europe,” published in the American Sociological Review in 1995.
Beck and Tolnay’s outstanding 1995 book A Festival of Violence: An Analysis of Southern Lynching, 1882-1930 (and several articles) is exemplary of the positivistic approach. In their work, the authors show that periods of material prosperity ceteris paribus tended to reduce the frequency of lynching, whereas economic depression functioned to increase lynching. They theorize that this was because economic pressures reduced the number of available jobs and increased competition for work. Whereas the white planter class had an interest in exploiting cheap black labor, the existence of a free black labor force threatened white labor—white planters hiring blacks over whites led to whites employing violent tactics to close the labor market to blacks. Lynching was one mechanism used by white labor to intimidate black labor. For the foundation of this argument, see Joel Williamson’s The Crucible of Race: Black-White Relations in the American South Since Emancipation (1984).
The positivistic approach is attractive because linear formulation of causal relations and the assumption of an abstract rational actor are suitable for hypothesis testing using aggregated statistics. Indeed, the findings of such studies are impressive. When demand for cotton declined in the early 1890s, lynching did indeed peak. After the 1890s, when cotton prices rose, there was the predicted decline in lynching. (For caution see John Reed, Gail E. Doss, and Jeanne S. Hulbert’s “Too Good to be False: An Essay in the Folklore of Social Science” in Sociological Inquiry, 1987.) After WWI, when the cotton economy declined dramatically, another wave of racial violence occurred (the re-birth of the Ku Klux Klan is associated with this calamity). Population pressures exacerbated the problem. Rapid population growth in the South produced a surplus of laborers, increasing (at least the perception of) job competition between races and pushing racial tensions to greater heights. Out-migration of blacks is associated with the decline of lynching in the 1930s. Beck and Tolnay theorize that this relieved the perceived need to use lynching as a tool to exclude blacks from the white labor market.
Festival specifies elements of lengthier historical studies on changes in punishment during this period (such as Edward L. Ayers’ Vengeance and Justice, published in 1984). The book shows why there were more killings in one region compared to another and why there were more killings in one year compared to another. However, explaining variation is not the same thing as identifying generative forces. Beck and Tolnay theorize that lynching occurs in the conjuncture of several forces: racist ideology, competition over scarce resources (such as jobs), a permissive government, and various catalysts, such as labor market instability and a high profile crime. For Beck and Tolnay, racism gives ordinary people permission to commit murder.
This approach helps explain why those whites who were, because of the split-labor arrangements of a caste society, far more likely to be in direct competition with members of the lynch mob and their supporters did not lynch each other. But several questions remain unanswered. Does racial ideology only provide a technique of neutralizing legal and moral prohibitions against torture and murder? Or might racism also be a positive motive to lynch? Is lynching only about intimidating blacks to force them to leave communities and markets? Why did those persons who desired to murder blacks select lynching as the means of closing the labor market (if that’s what they were indeed doing)? And what explains the murder of women and children? With whom were they in competition? Or is lynching also or even more about the affirmation of whiteness?
These and other questions indicate gaps in our understanding of lynching that can only be filled by exploring the cognition of southern whites, by pursuing evidence of motive and making a determination of responsibility. Motive, the reason behind an action, is part of the causal process when we grant human agency. Motives need not be clear to the person who carries out the action. One may claim that he participated in a lynching to avenge a crime perpetrated by the executed. The claim is not false. But it does not tell us why a black man was lynched. Responsibility means that a person is able to answer for an action taken (his conduct) or a failure to act (his obligation). It conveys a moral and legal accountability. To say somebody or something is responsible for a criminal action is to say that someone or something caused a crime to occur. Questions of motive and responsibility do not lie outside the ambit of objective social science. The fig leaf of neutrality should not apply. Putting the matter more simply: how will we know the cause of lynching if we do not explore the mind of the white supremacist?
* * *
Daniel Goldhagen's Hitler's Willing Executioners, a book about mass murder published in the twilight of history's bloodiest century, stands in contrast to three paths of scholarship in the field of Holocaust studies. Along the first path, what some call the "intentionalist" school, scholars theorize that the Holocaust was the realization of an elite racist dream imposed upon a sensible and civilized but hapless German citizenry. For this view, see Karl Dietrich Bracher, The German Dictatorship (1968) and Lucy Dawidowicz, The War Against the Jews (1975). Genocide was a means to make the fatherland Judenrein ("Jew-free"). Proponents of this view reference statements and writings of prominent Nazis that presage the elimination, in one way or another, of European Jewry. Ordinary Germans, duped by charismatic leaders and deft propaganda, were ignorant of the extermination program.
Auschwitz, Poland, c. 1940
Scholars on the second path explain the Shoah as the outcome of impersonal macroeconomic forces. With the world in the throes of global recession, the German state and bourgeoisie, driven by capitalist imperative and having come late to the imperialist plunder of the world, used fascism and territorial expansion (Lebensraum or "living space") to rebuild the national economy. (For a contrary view see Eberhard Jäckel's 1972 Hitler's Weltanschauung: A Blueprint for Power.) Later, as an afterthought emerging from the contingencies of world war, an extermination policy developed. The policy of genocide was not premeditated but emergent. This interpretation corresponds to the so-called "functionalist" or "structuralist" school, which examines institutions rather than ideas and human agency. See, for example, Karl Schleunes' The Twisted Road to Auschwitz (1970), Arno Mayer's Why Did the Heavens Not Darken? The "Final Solution" in History (1989), and Christopher Browning's The Path to Genocide (1995). (There is a range here, with Browning's functionalism moderate compared to Mayer's work.)
For those on the third path, represented most plainly by Hannah Arendt's "banality of evil" thesis, the insidious nature of the hyper-rational bureaucratic state lies behind genocide. (See Arendt's 1977 Eichmann in Jerusalem: A Report on the Banality of Evil, as well as her 1971 The Origins of Totalitarianism.) Mass murder in the twentieth century was a consequence of the dehumanizing effects of high modernity—rationality taken to its logical conclusion. At the social psychological level, Stanley Milgram confirmed Arendt's assumptions in a series of experiments that showed how ordinary people obey authority even when the task is unpleasant. Eichmann was a bureaucrat following orders. He was one among many. Diffused personal responsibility gave each perpetrator a way to deflect guilt. The executioner who dropped Zyklon B into the showers blamed his commander. The commander blamed the executioner. (See John Conroy's 2000 Unspeakable Acts, Ordinary People: The Dynamics of Torture.)
These three paths reduce to two basic accounts of the crime. In one, the Shoah was, as the intentionalists claim, the work of men attempting to realize their goal of a racially sanitized world. In the other, as functionalists see it, genocide was the work of marionettes animated by the reflexes of an anonymous and impersonal puppet master. In neither of these explanations do ordinary Germans—if proponents acknowledge ordinary Germans at all—willingly perpetrate the worst mass murder in history. Moreover, scholars are resistant to the idea that National Socialism was a national project appealing to the average German citizen. This is especially true for those whose political sympathies are with the working class, such as the orthodox Marxist. See, for example, Tim Mason's Nazism, Fascism, and the Working Class (1995). So sensitive were pro-worker thinkers that, according to Daniel Burston in The Legacy of Erich Fromm, Max Horkheimer refused to publish Erich Fromm's 1929 study of pro-fascist sympathies among German workers, The Working Class in Weimar Germany: A Psychological and Sociological Study, for fear that it would smear the proletariat.
The ordinary German is the focus of Goldhagen's Willing Executioners. He roots the Holocaust in the culture of anti-Semitism. The majority of Germans shared a hatred for Jews and other non-Germans. Race hatred and race pride motivated the gassings and the shootings. Nazis neither duped nor coerced Germans into mass murder. Embracing their ethnic identity, Germans were "willing executioners." Explanations that do not consider motive are theoretically inadequate, according to Goldhagen. To be sure, the ascension of the Nazis and macroeconomic instability created the conjunctural moment wherein latent eliminationist anti-Semitism could manifest. Moreover, the rational-bureaucratic state of modernity provided the infrastructure for the mass production of death, as Zygmunt Bauman famously pointed out in the pages of The British Journal of Sociology. However, opportunity and means do not by themselves explain murder. Any complete explanation for the Holocaust must come to terms with the thoughts and actions of ordinary Germans. What is unique in Goldhagen's approach is the way in which individual responsibility becomes grounds not only for establishing guilt, but also for explaining behavior.
Treating perpetrators as essentially empty vessels and underplaying their wrongdoing is not merely the result of overly objectivistic approaches to the subject of genocide. Holocaust scholars appear to have a hard time acknowledging racism in German culture and the role racists played in genocide. Some scholars, such as Arendt, even dismiss the centrality of anti-Semitism. In Arendt’s view, apolitical bureaucrats perpetrated the Holocaust. The motives of perpetrators, as well as the identity of their victims, yield little useful knowledge from this standpoint. Collective violence is a reflex of the modern state. Evil is banal. We must therefore turn our attention to the hyper-rational ordering of the German state. Other accounts admit anti-Semitism but fail to give it significant causal weight in mass murder (Mayer’s work, for instance). (In this respect, the corpus of lynching studies differs markedly from Holocaust studies.)
Goldhagen sees in conventional explanations a common feature. “They either ignore, deny, or radically minimize the importance of Nazi and perhaps the perpetrators’ ideology, moral values, and conception of the victims, for engendering the perpetrators’ willingness to kill,” he writes. “They do not conceive of the actors as human agents, as people with wills, but as beings moved solely by external forces or by transhistorical and invariant psychological propensities, such as the slavish following of narrow ‘self-interest’.”
Traditional explanations, first, fail to reckon sufficiently "the extraordinary nature of the deed: the mass killing of people." In his survey of the literature, Goldhagen finds that when slaughters and gassings are recorded, they are rarely analyzed. Without coming to terms with "the phenomenological horror of the genocidal killings," the mind of the perpetrators cannot be fully understood and thus a complete explanation for the Holocaust is not possible. Second, "none of the conventional explanations deems the identity of the victims to have mattered." Goldhagen emphasizes that the identity of the victims—how Germans perceived identity—is a causal factor in genocide. Indeed, he regards the view that the perpetrators were neutral regarding Jews as a "psychological impossibility." These failings indicate questions in need of answering: Why do ordinary individuals perpetrate mass murder? How do ordinary individuals go about killing others? The question central to the problem Goldhagen confronts is this: Why are some individuals selected to be victims and others not?
Given how the Nazis dehumanized their victims, it is ironic that the perpetrators are in conventional explanations themselves dehumanized. Their dehumanization lies along different planes of course: Jews (and others) were dehumanized by the perpetrators denying that they were human beings worthy of sympathy; on the other plane the perpetrators are dehumanized by excusing their responsibility for perpetrating crimes that reach into the deepest recesses of moral degradation.
The immediate reaction of many in the academic community to Willing Executioners illustrates widespread reluctance to implicate ordinary Germans or German national culture in genocide. Reviewers of Goldhagen's book quickly mobilized to find the link between "the specific national German tradition" and genocide "not tenable," and to exculpate Germans of murder, as the matter was put by George Kren in The American Historical Review. Kristen Monroe, the author of The Heart of Altruism, a book about German rescuers of Jews, is exemplary of this approach. In the American Political Science Review she admonishes Goldhagen for producing a "blanket indictment of the German people." She especially condemns Goldhagen's ignorance of those Germans who risked their lives to save Jews. Monroe contends that neither the perpetrators nor the rescuers "constitute the most representative sample of the German people."
Monroe's comparison effectively treats those Germans who were neither perpetrators nor rescuers as non-agents, not responsible for what happened. Suppose we were to draw a representative sample of Germans and find that most were not among the rescuers. Even if they were not perpetrators, does not their inaction make them complicit in genocide, especially since the overwhelming evidence indicates that Germans knew their neighbors and relatives were murdering Jews? This points to another difference between the bodies of Holocaust and lynching scholarship: unlike Monroe, Beck and Tolnay recognize the responsibility of "non-actors" in the mass murder of blacks.
The claim of Monroe’s polemic is false; Goldhagen recognizes that not every German desired to swim with the currents of their day. However, he stresses, this is no reason to deny that the perpetrators and their supporters were German. He writes: “The perpetrators were Germans as much as the soldiers in Vietnam were Americans, even if not all people in either country supported their nation’s efforts.” The premise of Monroe’s work, however, is compelling. Arguments from identity are troubling. At the same time, not all identities are the same; their differences make it more or less difficult to assign responsibility to them.
For categories such as race and sex, it's not possible to hold everybody who is, for example, white and male, responsible for the actions of a concrete individual who lies at that intersection. There is no substance to these categories that provides motive for action. A white man can be ideologically and morally anything. Ethnicity is a plausible target in the sense that it comes with norms, traditions, and values that may make individuals who belong to an ethnic group more likely than individuals from other ethnic groups to either perpetrate heinous acts or fail to oppose them when they can. Ethnicity is at least a source of motive. Nationality in the civic sense, on the other hand, is difficult to process in the same way. Many of my parents' generation marched against the Vietnam War. In what way could they be held responsible for the actions of their government in Southeast Asia? Many of them did, of course, support the Democratic Party. But this gets us into another area: ideology. Political and religious ideologies are the most obvious sources of motives for action. For example, the extent to which individuals are devoted to the ideas and practices of Islam is predictive of actions that are harmful to people. What is more, Islam as a ruling ideology creates social and cultural structures that are systematically oppressive to some members of those societies.
* * *
Dray's narratives in Persons Unknown accomplish what Goldhagen asks us to do in Willing Executioners, namely grasp the shared consciousness that guides ordinary people to perpetrate extraordinary crimes. It is through the murderous events recounted in detail and the description of the larger culture of white supremacy in Dray's book that we gain a better understanding of what motivated the perpetrators' actions. Dray also highlights the reluctance of those in positions of power to stem the tide of murder and the way their inaction perpetuated those atrocities, supporting a claim central to Beck and Tolnay's theory in Festival.
This matter of malign neglect by authorities must not be overlooked. In a detailed criticism of Edward Ayers' 1984 Vengeance and Justice, a book wherein the argument is made that lynching was not a political act, Drew Faust writes, "Underlying the entire lynching phenomenon was a tacit political decision not to use the power of the state to halt these outrages." The cooperation of whites, from politicians and law enforcement at the state level down through the subaltern ranks of ordinary white southerners, reveals the importance of grasping the collective consciousness of the South. Without Sanctuary forces us to consider such matters with its onslaught of photographic proof of murder. We look into the perpetrators' faces and see their smiles, their complacency, dissociative countenance, and bloodlust. We witness them operating publicly, posing for pictures, without fear of punishment. We know that they know they have permission to murder.
Among the most striking pieces of evidence for racial hatred as the cause of lynching is the character of the perpetrators’ manner of killing. “The story of a lynching,” writes Litwack in Without Sanctuary, “is the story of slow, methodical, sadistic, often highly inventive forms of torture and mutilation.” Such cruelty shows us that racism represents more than a dehumanizing ideology that neutralized conscience. It was not enough for the perpetrators to simply execute their victim—the killers had to murder them in the most excessive and public way. Afterwards, instead of shame and guilt, the perpetrators expressed pride in their actions, taking trophies, fragments of the corpse, selling body parts as souvenirs, proudly displaying the photographs they had taken in local shop windows. There were postcards made of the pictures of lynched blacks, delivered by United States postal workers, captioned, “I was at the barbecue last night.”
An outstanding feature of racial lynching in the U.S. South is that it follows slavery and the absence of controls on the white population. "The demise of slavery, ironically, meant the collapse of an institutional check on violence against Black people," Manning Marable writes in How Capitalism Underdeveloped Black America. Freedom from some white people left blacks vulnerable to brutality by potentially all white people. This phenomenon was likewise observed in Europe. During the eastward expansion of German hegemony, "spontaneous" collective violence occurred when the Nazis disabled the local police. It seems as though what prevented various ethnic groupings from beating Jews to death in the streets and in their beds was the presence of local law enforcement securing the social order. When the Nazis invaded and destabilized the legal order, they unchained latent exterminationist anti-Semitism, and incredible acts of brutality followed. Like the Holocaust, the brutality of lynching and its acceptance by the white community indicate racial hatred and racial interests as the primary causal forces in lynching. And it is in those spaces left uncontrolled by authorities that many of the atrocities of the Holocaust parallel the atrocities of the lynch mob. It is here that we see the power of ethnicity and ideology—and not the commands of the Nazi state—in providing motive for action.
These facts highlight the problem with research that attempts to link lynching to abstract social forces: quantifying racial violence downplays or omits the motivational side of the racist character of the perpetrators' actions, which may in turn disguise the cultural sources of collective murder. Such approaches reduce complex human actions to abstract rational actors animated by invisible market forces. The procedure evacuates an essential truth, namely that whites were guided to murder blacks not because cotton prices rose or fell, but because they shared the cultural values of white supremacy and anti-black racism. Whether they were suffering from a skinny paycheck or benefiting from the depression in cotton prices in some fashion, they willingly, even eagerly, participated in murdering human beings. And for those who do consider the role of racism in lynching (Beck and Tolnay, for example), racism must not be interpreted one-sidedly as an ideology that gives individuals permission to kill, that is, as a technique of neutralization. Racism must also be reckoned as a cultural and moral force making people want to kill.
A growing understanding among black intellectuals that white supremacy—affirmation of identity and hatred of blacks—in the late nineteenth century was the cause of lynching is illustrated in the first chapter of Persons Unknown. Dray relays an account of intellectual and activist W.E.B. Du Bois’ walk to The Atlanta Constitution on April 24, 1899, to meet with editor Joel Chandler Harris (the author of Uncle Remus stories). Du Bois’ purpose in meeting with Harris was to discuss Du Bois’ study of lynching. Current events made this purpose all the more urgent. A black man named Sam Hose stood accused of murder and rape and the immediate indication was that his lynching was inevitable. The press stoked the fires of racial violence. Some papers even called for Hose’s lynching. Georgia governor Allen D. Candler, an outspoken advocate of lynching, practically endorsed the idea of murdering Hose when he characterized Hose’s deeds as “the most diabolical in the annals of crime.”
As Du Bois walked to the Constitution offices, he received news that Hose had been "barbecued" and that a grocery on the very street upon which he was walking had Hose's knuckles for sale in the window.
The lynching of Sam Hose, April 23, 1899
The lynching of Hose, like so many instances of lynching, had been a spectacular affair, involving slow dismemberment of the victim before burning him alive at the stake. News of Hose's capture had traveled fast, and people everywhere clamored to get to the town of Newnan, where the lynching was to take place. The Atlanta and West Point Railroad offered a special excursion train there. After the train tickets sold out, citizens leapt onto the train, climbing into its windows and clinging to its exterior. The railroad quickly arranged for a second "special," which filled in the same manner as the first. Thousands congregated in the small town of Newnan. Many who were too late to see the actual lynching converged upon the makeshift scaffold, taking trophies—fingers, pieces of wood, fragments of the chains that had held Hose to the tree. The Constitution reported that people were walking around town carrying bones.
All this transpired as Du Bois walked to the Constitution offices, and the enormity of the news he was receiving forced him to recognize how unimportant his efforts that day were. Overwhelmed by the gravity of the situation, he turned around and walked back home.
Dray summarizes Du Bois’ conclusion: “Du Bois had been inclined to believe that blacks were mistreated by a minority of coarser whites, and that if the majority of decent white people could be made aware of the injustice of black life in America they would—out of compassion, a sense of justice, even patriotism—act to alleviate the problem. But the manner and spectacle of Hose’s death—the eleven days of hysterical, incendiary newspaper articles, the almost complete lack of responsible intervention from high officials, the crowds running pell-mell from houses of God so as not to miss seeing a human being turned into a heap of ashes, and ultimately, a set of knuckles on display in a grocery store—showed him that lynching was not some twisted aberration in Southern life, but a symptom of a much larger malady. Lynching was simply the most sensational manifestation of an animosity for black people that resided at a deeper level among whites than he had previously thought. It was ingrained in all of white society, its objective nothing less than the continued subordination of blacks at any cost.”
For whatever other reasons whites lynched blacks, it seems certain they did so to affirm their racial superiority. Civil war and reconstruction disrupted the racial order and made racial boundaries ambiguous. Mass murder was a weapon of redemption in a struggle to restore white power. Indeed, racial thinking of this sort is the necessary condition for the periodic waves of ethnic mass murder that have for centuries marked the capitalist world system. At times latent, kept simmering by the culture of white supremacy. At other times, especially during moments of political and economic crisis, fanned into a conflagration.
* * *
The failure of jurisdictions in the United States to prosecute whites who murdered blacks is a testament to the depth of racism in that country. What US citizen has ever been imprisoned or executed for lynching a black person? The 1997 execution of Henry Hays in Alabama for the 1981 murder of Michael Donald was not for lynching. If and when the men who murdered James Byrd of Texas in 1998 are executed, it will not be for lynching. These were "nigger hunts." However, that Hays was the first white man to be executed for killing a black man in Alabama since 1913 punctuates the point. The last time Texas executed a white person for killing a black person was in 1854. This was the only such instance.
Because these acts of lynching were murder—the illegal and intentional killing of human beings—persons who perpetrated them are criminally culpable. The failure of authorities to pursue the murderers of blacks, hundreds if not thousands of whom are still alive, is tacit approval of the motive for lynching. In Germany, where a handful of the perpetrators of genocide were finally judged, many more escaped the stigma of prosecution largely thanks to the resistance of the German judiciary. In his 1998 American Historical Review article "Defining Enemies, Making Victims: Germans, Jews, and the Holocaust," Omer Bartov shows that "denazification applied a narrow definition of perpetrators, thereby making for a highly inclusive definition of victimhood." Postwar propagandists depicted "the war as a site of near universal victimhood." Under the cover of peace and reconciliation, perpetrators and bystanders were either dismissed for their ignorance or turned into automata.
As in the Holocaust, there are likely people still alive who participated in lynching. Why has law enforcement failed to track down and hold members of the lynch mobs responsible for their crimes? Those white faces in Without Sanctuary belong to real flesh-and-blood people. The man in the foreground standing beneath the body of R. C. Williams in photograph 88 is somebody's father, brother, or uncle. That lynching occurred in 1938. Maybe none of Williams' killers are alive. But they can be named. Might we find their faces in school yearbooks? What was their standing in the community? Did they brag about the time they castrated and murdered Mr. Williams? Do their children know that their parents and grandparents were murderers? The mystery of these faces becomes even more compelling when one learns so many of the victims' names in various accounts of lynching yet learns nothing about the names of the perpetrators. We know that a mob of whites lynched Sam Hose, but we don't know the names of those who lynched him.
The desire in the United States to distinguish between whites (well-meaning and naive) on the one hand, and racists (hateful and backward) on the other, is born of the desire to erase the history of anti-black prejudice from collective consciousness. The memory of an America so deeply racist that it would sanction, even encourage, mob violence is more disturbing to most Americans than the actual killings themselves. The form of argument exemplified by Monroe's criticism of Goldhagen, whatever her intentions, approaches those arguments that exonerate of complicity in violence against blacks those Southern whites who, while not members of the Ku Klux Klan, did little or nothing to improve the conditions of African-Americans, let alone intervene in serial mass murder. Such apologia is transparent in the attempt to draw a distinction between white supremacy and southern heritage, seen for instance in the defense of the Confederate flag. This distinction can be accomplished with no more legitimacy than attempts to separate the swastika from National Socialism.
More than this, the desire to differentiate Germans from Nazis during the Nazi period is effectively an attempt to remove from the history of ordinary Germans the subterranean values of anti-Semitism—the values that the Nazis unchained. This is one of the byproducts of insufficiently accounting for, or refusing to recognize, culture and motive in collective violence. This problem is widespread in the Holocaust literature (much more so than in the lynching literature). Here, neglecting the human agency and ideological conviction of ordinary people puts historical and social scientific explanations in the service of a political desire to reckon hundreds of thousands, if not millions, of murderers among the Nazis' victims. Such interpretations function to diminish and, in some accounts, absolve responsibility for murder.
* * *
Tapping the collective cognition of a people is a historical-sociological endeavor—the unity of intentionalist and structuralist approaches. The expressions and actions of perpetrators, and the identity of their victims, are ultimately the products of social-historical structure and process. Theory must root the collective will in the societal, cultural, and historical contexts in which people are socialized and live out their lives. It is here that human beings learn morality, with all its contradictions, and find themselves in situations that call for the expression of this or that value. We must recognize that people commit murder willingly, not because of "human nature," but because of their socialization in the dominant ideologies and institutions of their sociocultural milieu. Grounds for the decision to murder must be part of the explanation. At the same time, individuals must be held responsible for their decisions. Embedded in the same society are the values of love, non-violence, and tolerance. Individuals can refuse to transgress these values.
Quantitative studies, such as Tolnay and Beck’s Festival of Violence, however empirically sound and relevant for their domain, are methodologically constrained in answering important historical, cultural, and phenomenological questions. We must turn to qualitative approaches. Yet, Persons Unknown, as important as it is, is a work of historical narrative. As such, it is undertheorized. Without Sanctuary, its significance unquestionable, is a documentary in need of analysis. Both are descriptions in words and pictures, not works of critical sociological analysis. As powerful as the facts they present are, facts do not speak for themselves.
Willing Executioners is flawed in several other ways. The author neglects the other victims whose identity provoked Germans. He downplays the role non-Germans played in perpetrating genocide (for admittedly analytical reasons). And he too readily dismisses condemnatory attitudes on the part of the perpetrators, something John Weiss avoids in Ideology of Death: Why the Holocaust Happened in Germany (1996). Consider, for instance, the disproportionate employment of Austrians in the agencies of death, which suggests that they were even more enthusiastic about killing Jews than Germans were. (Christopher Browning's work is replete with instances of Germans attempting to diminish their responsibility.)
Nevertheless, despite their weaknesses, Persons Unknown, Without Sanctuary, and Willing Executioners succeed in drawing our attention to shared cognition as a necessary element of collective behavior. They shift our attention away from explanation by impersonal structural forces towards the problem of sociological accounts of motive and social action. In grasping the motivational force behind murder, we turn to the larger social-historical forces that produce shared consciousness and collective conscience. Perhaps it need not be said that no single work can provide all the answers. Studies such as Beck and Tolnay’s Festival of Violence address important aspects of the phenomenon of lynching in the United States. Without Sanctuary and Persons Unknown move us to consider other aspects, especially those aspects of the southern mind that escape quantification.
A version of this essay was published some twenty years ago in New Interventions 9 (2, 1999): 11-15. The journal is out of print, so I am sharing the essay here. Perhaps, in the light of history, I will revisit and interrogate the arguments presented here in a future essay. As with any analysis of on-going conflict, conclusions are based on what one can know at the time.
Bombs are falling on Belgrade, the capital of Yugoslavia. NATO has turned up the heat on Slobodan Milosevic. In truth, NATO is attacking the people of Belgrade. The West is disregarding human lives by targeting a heavily populated area and degrading the infrastructure of a major city. The immediate effects are catastrophic for Yugoslavians. The long-term effect may be the destruction of their country.
The United States and NATO bomb Belgrade. The air strikes occurred March-June 1999
In belligerent tones, President Clinton is telling the world that the Yugoslav President Milosevic will pay “a very high price” for his actions in Kosovo, actions being characterized by the Western media as “genocide.” Propagandists have cast Milosevic as the third coming of Hitler (Saddam Hussein was the second coming). As in the Gulf War and with Saddam Hussein, the US has personalized the Balkan conflict and, as with the Iraqi people, receding into the background are the people of Yugoslavia.
Well, that is not exactly true. The ethnic Albanians who live in the Kosovo region of Yugoslavia are receiving plenty of attention in the Western media.
And for good reason. Since NATO launched its air war over Yugoslavia, some 300,000 ethnic Albanians have fled the province of Kosovo, according to reports by Western authorities. Two hundred thousand are on their way to the border. Western and Albanian sources report that Serbian paramilitary units have been moving from town to town, forcing ethnic Albanians out of their homes, herding them onto trains and trucks, or forcing them to march to the borders of Albania and Macedonia. These sources report the killing of Kosovan men by Serbian death squads.
The images being shown on the television of trains and trucks bloated with women and children are striking. The scene is eerily reminiscent of a tragedy of some 50 years earlier, when men and older boys, separated from wives and children, were marched off to concentration camps or to die over mass graves. Officials in the West are predicting that if the present rate of expulsion continues, the Kosovo region could be "cleansed" of ethnic Albanians within 10 to 20 days.
What happened? President Clinton told the people of the US and the world that NATO intervention in Yugoslavia was necessary to prevent the outflow of ethnic Albanians from Kosovo into countries to the south of Yugoslavia. The US and NATO had to act immediately, we were told, to prevent the conflict from bursting the seams of Yugoslavia. Clinton warned of an imminent chain reaction, of falling dominoes leading inexorably to World War III. After all, the US leader told us, this is where two world wars began.
Of course, we were not told that it would be the US and the West who would topple the first domino. NATO self-fulfilled the prophecy of a wider conflict. The air campaign has immediately spread the civil war beyond the Yugoslav borders. The enlargement of the conflict has occurred not only because the air strikes have triggered the migration of Kosovo Albanians, a consequence Western propagandists have tried to rationalize, but because the West is organizing violence in Yugoslavia. With the inevitable insertion of military forces on the ground, there will be a full-scale war. Although to a Serb in Belgrade, it is already full-scale war.
The West has successfully transformed a low-level civil war into an international military campaign, and caused an enormous refugee crisis. We are left to wonder whether this was their intention. To answer the question posed by this essay, the Kosovo crisis must ultimately be projected against the background of the history and structure of the capitalist world economy, and the network of geopolitical relations and interests operating the state and military machinery attendant to maintaining and expanding capitalism.
Capitalist globalization and the struggle against world socialism constitute the foundation for the present struggle and the ultimate rationale for NATO. While framed as a defensive posture against world socialism, the American presence in Europe has been that of an aggressor in the struggle to advance world capitalism. NATO, and the global political and military umbrella of which it is a part, has been integral to the capitalist globalizing project: containing world socialism, putting down nationalist movements in the periphery, and incorporating into the global economic system those territories formerly controlled by the Soviet Bloc. The alliance with fascists has been a key part of this project. Since the early twentieth century, fascism has been intrinsic to the logic of capitalist development in Europe. After the Second World War, the logic of fascism was globalized, although it has remained a logic subordinate to liberalism—the ugly face of capitalism.
I begin by discussing the policy foreground. There is much confusion here, especially in the way the Western media has cast the struggle. I then discuss three factors that have led to the US involvement in the Balkans: first, the need to justify NATO; second, the global capitalist imperative; and third, the nefarious alliance between Washington and Balkan fascists.
What this essay will show is that despite claims of atrocities being carried out by Serbian paramilitary units operating in Kosovo, the West has neither legitimate justification nor moral authority for organizing war in the Balkans. Indeed, what the US has been doing over in the Balkans has made matters worse for the people of Yugoslavia, Serbs, and Kosovans alike. What is more, NATO action in the Balkans represents a larger strategy to secure the Balkans for global capitalism, and this involves destroying Yugoslavia. NATO is building for itself the rationale—future NATO actions will be based on this precedent—for transgressing national boundaries and putting down any group which threatens the goals of the international bourgeoisie.
* * *
The Policy Foreground: Saving Ethnic Albanians
In the foreground, we have to straighten out the account of the chain of events immediately leading to the NATO attack on Yugoslavia. Not surprisingly, the USA and NATO contrived the situation they used to justify the Kosovo intervention.
The event that ostensibly concerned the West was Yugoslav state repression of the Kosovo Liberation Army (KLA) in the Kosovo province of Yugoslavia. The KLA, operating internally to Yugoslavia, had been carrying out terrorist campaigns against the Yugoslav state. In response, Yugoslav police and the military cracked down on the KLA.
The professed long-range goal of the KLA is to unite ethnic Albanians in Kosovo, Macedonia, and Albania into a greater Albania. Kosovan elites viewed the struggle to regain the autonomy Kosovans lost with the fall of the larger state of Yugoslavia in the wake of the Cold War as a step towards Kosovan independence, which would then lay the foundation for the larger Albanian state. This position of relative autonomy within Yugoslavia was supported by the USA and European powers. Serbia, threatened by the larger goal of secession by Kosovo Albanians, rejected this position and treated the KLA problem as an internal matter.
Whether or not one agrees with the aims of the KLA, it is entirely rational, given the logic of nation-states, for the Yugoslav state to carry out measures to preserve its existence against insurgents and to stabilize territories under its control. Moreover, given the aims of the KLA, concessions to the Kosovan leadership threaten the long-term survival of Yugoslavia.
The conflict escalated until the US and chief European powers believed the struggle had reached a level justifying injecting themselves into the situation as mediators. They drafted a peace agreement, and demanded that both parties sign the treaty, which called for a truce. The demand was backed up by the threat of military force. To show their commitment to peace, NATO began massing troops in the region.
This is what the West calls “diplomacy.” To some observers, however, it appeared that the West was setting up a “solution” whose predictable failure would leave the West “forced” to intervene militarily.
When both sides—predictably—rejected the agreement, the United States approached the Kosovan leaders and asked them unilaterally to accept the agreement with the promise that NATO would begin bombing Yugoslavia. Under these conditions, the Kosovan leaders quickly accepted the truce. NATO promptly attacked Yugoslavia.
* * *
The Policy Background: Finding a Purpose for NATO
One of the primary goals of the United States has been to maintain its leadership in Europe. This is accomplished by strengthening NATO. Part of the strategy has been to include Central and Eastern European nations in the alliance.
More important is finding a purpose for the NATO alliance in the wake of the end of the Soviet threat. NATO was originally organised to protect the West and the capitalist world economy from the threat of world socialism. With the end of the Cold War, NATO lost its original rationale—at least its ideological one. It has now become a pressing concern of elites to find a new rationale to justify the existence of the military umbrella. It has been the long-standing position of the Clinton administration that NATO is a vital asset to the cause of peace in the Balkans.
NATO bombing of Belgrade 1999
This position emerged during the crisis in Bosnia. In testimony before the Senate Foreign Relations Committee in 1995, then Secretary of State Warren Christopher stated that “there will be no peace accord in Bosnia unless NATO and the US head the implementation in a peace accord.” Judging from the actions of NATO, the position holds for Kosovo.
The Clinton administration and Western propagandists have advanced this position by preying on the widespread belief amongst Americans and Europeans that the people of the Balkans are simply incapable of governing their own affairs. It needs to be remembered that this belief was itself manufactured through the ethnicization of class and nationalist struggles over resources and territories. This is a classic propaganda strategy.
Propagandists characterize the struggle in the Balkans as “tribal conflict.” The people of the Balkans are characterized as “primitives” held in thrall to “irrational” religious, racialist, and ultranationalist interests that have dispossessed them of the capacity for reason. Their “backward” aspirations must be subdued, and they must “for their own good” be brought into the “community of nations” (although, of course, each under their own ethnic state). In the case of the Balkans, the rhetoric has been reinforced by the propagation of an apocalyptic vision where “racial hatred” lets loose a hell on Earth.
In the past, Washington’s solution to the problems of “backwardness” has been to advocate “modernization,” that is, spreading “democracy” (capitalism) to the infected region. Following this logic, the peoples of the Balkans must be “civilized.” This is a solution that can only be carried out by the most civilized people in the world: the leading countries of NATO.
One of the usual ways Western Europeans and the United States civilize people is by killing them. Witness the bombing of Belgrade.
* * *
Deep Background: Entrenching Global Capitalism
Following the Second World War, it was the goal of the United States to achieve global leadership. Through the political and military hegemony the US accomplished, capitalist elites transnationalized the capitalist mode of production. The people of the world now stand upon the threshold of global society, and this society is a thoroughly capitalist one.
The backdrop of the struggle is therefore global capitalism and the fall of the socialist world system. The fall of socialism opened up vast regions formerly under the control and influence of the Soviet regime for reincorporation into peripheral zones subordinate to the capitalist world economic core. Since the break-up of the socialist world system, transnational corporations have taken an interest in Central and Eastern Europe, including even Russia. They have already injected into the region labor-intensive, low-wage industries. Regional elites have worked with the globalizers to make the region attractive to investors, for example, by privatizing industries formerly owned by the people.
The US has facilitated capitalist development by supporting and implementing a neoliberal project throughout the territories formerly controlled by socialist regimes. This has involved the introduction of World Bank and IMF promoted economic policies involving domestic reorganization of the entire regional economy. These policies have caused widespread misery amongst people who once enjoyed a relatively high standard of living under socialism. The region is being set up to become the European equivalent of the Maquiladora, with export processing zones proliferating across the countryside of the once multiethnic socialist state. Once the region is politically stabilized, there is little doubt that capital will flood the region.
Capitalist globalization and the struggle against world socialism constitute the foundation for the present struggle and the ultimate rationale for NATO. The American presence in Europe has been that of an aggressor in the struggle to entrench global capitalism. NATO, and the global military umbrella of which it is a part, has been integral to the capitalist globalizing project, forcibly incorporating into the global economic system those territories formerly controlled by the Soviet Union.
* * *
An Ideological Element: Washington and the Fascist Alliance
To a Serb, stealth bombers over Belgrade must look like yet one more chapter in the biography of her people. For hundreds of years, wave after wave of empire builders, from the East and from the West, have taken turns beating down Serbs. The land of Kosovo is holy ground for Serbs because it was there they were defeated by the Ottoman Empire. The Serbs celebrate defeat the way the US celebrates victory. This is because defeat is all the Serbs have ever known.
If you are a Serb, it is quite likely that you will have an older relative who will tell you about the last time the empire-builders sought to annihilate the Serbian people. Back then, during the Second World War, some 750,000 people in Yugoslavia, mostly Serbs, but also Jews and Gypsies, were murdered by the Croatian Ustashi. It must seem to Serbs today, watching Belgrade burn, that the present intervention of the West is a continuation of the fascism that burned their grandparents.
Serbs who accuse the West of fascism are not imagining things. Fascist, racist, and ultranationalist forces have played, and continue to play, a central role in organizing the destruction of Yugoslavia. The recent history of the Balkan conflict clearly testifies to this fact.
What is the connection between Washington and fascism, racism, and ultranationalism in the Balkans? The fascist alliance in the Republican Party comprises the various ethnic clubs that call themselves “heritage groups.” They constitute the National Republican Heritage Groups Council (NRHGC), the fascist wing of the Republican National Committee. There are no black or Jewish ethnic groups in the NRHGC. There are Bulgarian, Cossack, Romanian, Byelorussian, Slovak, and Croatian clubs. The NRHGC had a direct line to power during the 1980s and early 1990s in the Reagan and Bush administrations, playing a key role in fashioning US policy in the Balkans. One of their principal goals was the destruction of Yugoslavia, and of the Serbs especially.
Playing the central role in the NRHGC concerning US policy in Yugoslavia have been, of course, the Croatian Republicans. Croats have been the enemy of the Serbs for a long time. The Croatian Ustashi allied with Nazi Germany during the Second World War. In 1941, Germany conspired with the Croatians to declare Croat independence from Yugoslavia. There began immediately mass exterminations of Orthodox Serbs.
Serbian family killed by Croat Ustashi militia 1941
During the 1980s, the GOP openly observed and celebrated the “Croatian Day of Independence.” See, for example, the 1984 Guide to Nationality Observances published by the National Republican Heritage Groups Council of the Republican National Committee, signed by then chairman of the RNC, Frank J. Fahrenkopf, Jr. Official Republican Party literature, its propagandists clearly aware of the untidiness of the Nazi-Croat alliance, notes the “unfortunate association” of the Croat Ustashi with the Nazi Party of Germany. What the RNC glosses over is how the German Nazis themselves were horrified witnessing the cruelty of the Croat Ustashi. The Ustashi liquidated whole Serbian villages without mercy. Ovens at Jasenovac burned Serbs alive.
Clinton took over this foreign policy orientation. Indeed, when a renegade contingent of naive freshman Republicans was set to pull the funding from the Yugoslav military operation several days ago, Clinton called the GOP leadership into the White House and reminded them of Washington’s commitments in the Balkans. Republicans emerged from their meeting, quickly shelved the proposal, and turned immediately to voting for and approving air strikes in Yugoslavia.
With the Serbs having just confronted the heirs of the Ustashi dream of independence in the recent Balkan civil war, history is not just a story of the past for Yugoslavia. History is now. And, year-by-year, day-by-day, history is dissolving Yugoslavia.
* * *
Yugoslavia’s Final Chapter?
The USA and Europe have been carrying out a program to create and strengthen zones of influence in Central and Eastern Europe. It is these efforts, based on economic and political objectives, that have produced conditions favorable to the rise of inter-ethnic struggles. These inter-ethnic struggles have been fostered and often even instigated by the West. The long-standing goal of European powers has meant that a peaceful resolution to the break-up of Yugoslavia was to be avoided, and, indeed, that differences and conflicts in that region were to be heightened and focused.
One of the principal strategies for transnationalizing the bourgeois order has been the process of Americanization, of enculturating the world with the tenets of Americanism, and entrenching the capitalist mode of production everywhere. NATO, a key player in the imperialist war on workers and peasants in that region, represents Americanism in Europe. For years, NATO stood on the fault line of world-historical systems, with capitalism on one side and socialism on the other—the West versus the East. NATO secured the holding pattern that world capitalism had to assume while state socialism exhausted itself in perpetual war readiness and economic isolation.
Any state organizing to stand apart, or appearing to stand apart by demanding some autonomy from the global economic order, is, if it represents a strategic asset to the globalizers, a direct threat to Americanism, to capitalism in its global phase of development. Threats to Americanism threaten US interests. The United States has worked tirelessly to undermine any agreement that pushes it out of the Balkans or diminishes its leadership position in Europe. The US has seen to it that conflict in the Balkans and interethnic atrocities continue. Warmongering is inherent in the strategy being pursued.
Since the establishment of American political-military hegemony, capitalism has globalized and world socialism has fallen. Western Europe remains the conduit through which American hegemony is channelled. Against this backdrop, the final chapter of Yugoslavia has all but been written. Today, we are seeing the final strokes being penned. Yugoslavia, her history in her present, may soon find herself part of a distant past.
The phrase “war on drugs” designates the aggressive prohibition and criminalization of certain substances with the explicit goal of reducing the prevalence of use by the population as a whole or some segment thereof, thus improving the public health and worker productivity. By transforming the targets of control into enemies of the national community and moral order, the “war” metaphor functions to legitimize government tactics the public might otherwise see as conflicting with civil liberties and human rights (analogs: “war on crime” and “war on terrorism”). The consequences of the drug war are many, among them overcrowding in prisons, felony labels, international criminal syndicates, and community and neighborhood disintegration.
The precise definition or meaning of the word “drug” depends on context. The standard medical definition is any substance, such as a medicine, that produces physiological changes in the body. Some such substances are associated with changes in the user’s cognitive and/or emotional state, hence the common verb form of the word meaning to administer a substance producing insensibility or stupor. Medical definitions thus refer to relative benefit and harm, as well as the psychoactive character of substances. Neither harm-reduction nor psychoactivity is enough to provoke prohibition. What is a drug in the context of the drug war rests more on the question of legality; an illicit drug is a product of state action, media-driven panic, and moral entrepreneurial spirit, a social construction publicly appealing to public health and safety.
Accounts of the history of the drug war usually associate the use of the metaphor with the presidency of Richard Nixon, manifest in the passage of the Comprehensive Drug Abuse Prevention and Control Act of 1970 and the subsequent creation of the Drug Enforcement Administration (DEA) in 1973. However, the superstructure of drug control is the result of legislation and policy initiatives unfolding over many decades. Moreover, the war rhetoric itself has a long history, appearing in articles published as early as the 1920s in such major US newspapers as The New York Times.
Before 1883, there were no federal drug laws in the United States. Drugs were widely available to the public without significant official consequence. Hemp grew freely throughout the nation, with some local laws regulating cannabis as early as the 1860s. Bayer, the makers of aspirin, sold heroin over the counter until 1913. By that time aspirin had become the popular non-addictive alternative and Bayer was phasing out the production and distribution of heroin. Famously, cocaine was a component of the recipe for the beverage Coca-Cola until around 1900 (although the actual amount of the drug, derived naturally from the leaves of the coca plant, is often exaggerated).
The Progressive Era would see the emergence of widespread drug and alcohol prohibition initiatives. In 1906, the United States Congress passed the Food and Drug Act requiring the labeling of pharmaceuticals. In 1914, Congress passed the Harrison Narcotics Tax Act, imposing a strict regulatory regime on the importation, production, and distribution of opiates (any substance produced from alkaloids derived or synthesized from the opium poppy), as well as cocaine. The law was in part the result of an international conference on the problems of narcotics conducted at The Hague in 1911, which produced the International Opium Convention of 1912, the world’s first multi-nation drug control treaty.
In 1920, the US federal government would ban the manufacture, sale, and transportation of alcohol. This ushered in the period known as Prohibition. Amid a welter of unintended consequences and widespread public protest, the federal government repealed the federal ban in 1933, deeming it a noble albeit failed experiment. The lesson was not generalized and the government intensified control of narcotics production, distribution, and consumption.
During a radio address in 1935, President Franklin Roosevelt announced the expansion of the Geneva Narcotic Limitation Act, an international effort to combat the narcotics trade that became effective in 1933, and the existence of a Uniform State Narcotic Law pending before several state legislatures. The intent of these measures was to combat what the president described as “the ravages of the narcotic drug evil.” In 1937, Harry Anslinger, Commissioner of the U.S. Treasury Department’s Federal Bureau of Narcotics, succeeded in persuading Congress to pass the Marihuana Tax Act, a bill he authored, which effectively criminalized the production and distribution of cannabis. These measures signaled that, while political elites admitted alcohol prohibition was a failure worthy of policy reversal, they intended to aggressively pursue war on other drugs.
The Boggs Act of 1951 and the Narcotics Control Act of 1956 represented major steps in the consolidation of the drug control effort. In his first term, President Dwight Eisenhower announced a major public commitment by the White House to pursue a war on drugs, appointing a special cabinet committee and enjoining them to “omit no practical step to minimize and stamp out narcotic addiction.” In 1961, the international community negotiated the Single Convention on Narcotic Drugs, a renewal and refinement of the 1931 Convention, empowering the Commission on Narcotic Drugs and the World Health Organization to determine drug schedules that would guide global enforcement of drug prohibition. These steps would lay the basis for subsequent US law.
The emergence of a crime control regime during the 1960s drove the drug war to new heights. Congress passed the Safe Streets and Crime Control Act of 1967, following it a year later with the Omnibus Crime Control and Safe Streets Act of 1968. Appealing to war rhetoric in his 1968 State of the Union address, President Johnson announced that crime control would be a crucial element of a second elected term of office and articulated a significantly expanded drug control component. He promised “stricter penalties for those who traffic in LSD and other dangerous drugs” and called for more vigorous enforcement of drug laws by increasing the number of federal drug and narcotics control officials.
Richard Nixon continued Johnson’s wars. In 1969, the Supreme Court ruled key provisions of the Marihuana Tax Act unconstitutional, a decision that moved Congress to quickly replace it and other legislation with the Comprehensive Drug Abuse Prevention and Control Act of 1970, which Nixon promptly signed. The new law established the classification system known as the Schedule, which contains five ranked categories of controlled substances, each determined by a set of criteria that includes the abuse or addiction potential of the drug, as well as consideration of the medicinal acceptability of the drug. The Schedule remains the law of the land. In his second term, Nixon merged the Bureau of Narcotics and Dangerous Drugs, the Office of Drug Abuse Law Enforcement, and other drug control agencies into one powerful office. Today, the DEA has a global reach, including support from the US Department of Defense.
The intensification of the drug war during the 1960s was part of a government effort to expand the policing apparatus and carceral function of the state in the face of widespread youth-based anti-capitalist, anti-racist, and countercultural movements. Mass mediated depictions of the youth counterculture as a force undermining conventional values of obedience to authority and the Protestant work ethic preyed on mainstream and conservative sensibilities, public nerves frayed by the persistence of the Cold War and the struggles for civil rights. Nixon’s domestic policy chief John Ehrlichman told Harper’s Magazine: “The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people…. We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities.” Ehrlichman specified: “We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news.” He added, “Did we know we were lying about the drugs? Of course we did.”
In the wake of the Watergate scandal and military defeat in Southeast Asia, and with the waning of the organized youth counterculture, drug war intensity diminished in the second half of the 1970s. A significant decline across all age groups in the consumption of illicit drugs, as well as tobacco and alcohol, coincided with this period. The drawdown in the drug war was short lived, however. Another wave of anti-drug legislation marked the 1980s amid a period of major economic and domestic policy changes. In 1984, Congress passed the Comprehensive Crime Control Act and the Comprehensive Forfeiture Act. Congress followed the passage of these acts with the Anti-Drug Abuse Act of 1986, which created mandatory minimum sentences for possession and life sentences for drug dealers. Congress expanded the law in 1988 and in 1994. Until that point, anti-drug policy had focused the attention of law enforcement on large-scale drug operations, while allowing the medical and social services community to focus on the treatment of drug users. The new policy emphasized harassing users and minor peddlers with “get tough” laws emphasizing mandatory minimum sentencing. Within a decade of these changes, federal spending on the drug war grew twenty-fold.
Today, the US drug war represents a vast international system of surveillance, policing, and carceral controls amounting to tens of billions of dollars in public spending by the federal and state governments. Nearly two million persons are arrested each year in the United States for drug offenses; around 800,000 of these arrests are for cannabis. Arrests for drugs are more common than arrests for any other criminal offense. Leaving aside admissions due to predicate felony laws and drug-related crimes, roughly half of those in federal prisons and one-fifth of those in state prisons are incarcerated for drug offenses (in contrast, less than five percent of state prison admissions in the mid-1970s were for drugs). Approximately a quarter of Americans on probation and a third of those on parole are drug offenders. With a total prison population in the United States of approximately 2.3 million persons, and more than 7 million under some form of significant correctional control (including persons on probation and parole), these proportions translate into hundreds of thousands of lives disrupted every year by state pursuit and apprehension of drug buyers and sellers.
The drug war has long carried a disparate effect on particular minority groups and, at times, policymakers have appeared to design policy to operate in a racially-conscious manner. Officials used an 1875 San Francisco ordinance regulating opium dens to harass the Chinese community. Chinese immigrants, a crucial labor source in building the United States railroad system in the Western part of the nation following the Civil War, had become surplus labor at the end of the construction boom. The justification for the control of opium use appealed to the virtue of white women. Restriction of other drugs followed similar patterns. Support for the war on narcotics and cocaine in the 1910s was fueled by media depictions of poor blacks as “dope fiends” driven to murder and insanity. Officials accused Mexicans of bringing cannabis into the United States, signaling that prohibition would prove a useful tool to control migrant labor. In the 1930s, the campaign against cannabis held forth that the drug inspired the jazz and swing music of black performers, which was alleged to corrupt white youth. Recent history indicates continuity in racially-disparate effects and policies. Some ninety percent of those admitted to prison for drug offenses are black or Latino. Although blacks constitute only 13 percent of the US population, and are no more likely to use illicit drugs than other racial groupings, they represent more than a third of those arrested for drug possession, more than half of those convicted for drug offenses, and just under three-quarters of those sentenced to prison for drug offenses.
The notorious contemporary example of racial disparity in drug laws is seen in the cocaine sentencing law passed during the Reagan Administration. There are different methods of delivering cocaine into the human system. One method is to combine cocaine with baking soda to create a rock-like substance that can be smoked. This form of cocaine acquired the name “crack” because of the crackling sound that occurs when heated. Although many users of cocaine prepare the drug in a manner that allows it to be smoked, users living in impoverished inner city areas, disproportionately African American, are more likely to purchase cocaine already prepared as crack. The Anti-Drug Abuse Act of 1986 established for crack cocaine convictions a penalty 100 times greater than that for powdered cocaine. First-time trafficking of at least five grams of crack triggered a minimum mandatory sentence of five years, whereas the trigger for the same penalty required 500 grams of powder cocaine. By 1997, African Americans were accounting for more than 80 percent of the defendants convicted of crack cocaine offenses. This percentage was unchanged in 2009. In 2010, after years of public protest and official recommendations to reduce the disparity, President Obama signed the Fair Sentencing Act, which reduced the ratio to approximately 18:1. He did not eliminate the disparity.
What explains the phenomenon of drug prohibition? David Musto, in his 2002 Drugs in America: A Documentary History, contends that patterns of drug criminalization follow cyclical patterns of state and public tolerance and intolerance. Michael Tonry, in Malign Neglect: Race, Crime, and Punishment in America, published in 1996, echoes Musto’s contention, explaining that there are periods where traditional American notions of personal sovereignty allow people to make their own choices about substance use. During these periods, drug use is considered only mildly, if at all, deviant. At other times, the public mood swings towards intolerance, where drug use is widely seen as deviant and those defending drug use risk moral disapproval or stigmatization. Tonry describes these periods of intolerance as “puritanical periods of uncompromising prohibition.”
However, these swings between tolerance and intolerance themselves require an explanation, as does the secular trend toward expanding comprehensiveness. Georg Rusche and Otto Kirchheimer’s Punishment and Social Structure, published in the 1930s, is instructive. They find that swings between the repressive and rehabilitative attitudes of Western penology are explained by the cyclical nature of the capitalist mode of production and the growth of the capitalist state. French philosopher Michel Foucault’s landmark 1975 work Discipline and Punish elaborated the political-ideological side of Rusche and Kirchheimer’s thesis. Historical analysis indicates that the emergence and trajectory of drug prohibition is rooted in part in the development of the industrial structure of the capitalist system, evidenced, for example, in the fact that the burden of drug prohibition falls more heavily on the working class than on other segments of the population. Drug prohibition serves a productive function. (I discuss the intersection of social class and racial caste in this essay: “Mapping the Junctures of Social Class and Racial Class: An Analytical Model for Theorizing Crime and Punishment in US History.” This model applies as well to the drug war.)
In the 1920s, political theorist Antonio Gramsci linked patterns of drug controls to the rise of Fordism and Taylorism, which sought to increase the efficiency of industrial production through careful control over the lives of the proletariat. Gramsci rejected the simplistic explanation that Puritanism was at work in these phenomena. “Those who deride the initiatives and see them merely as a hypocritical manifestation of ‘Puritanism’ will never be able to understand the importance, the significance, and the objective import of the American phenomenon,” he writes, “which is also the biggest collective effort to create, with unprecedented speed and a consciousness of purpose unique in history, a new type of worker and of man.” Since the industrialist could not impose this control upon society at large, the organized community of capitalists called on the state to perform its class function. Historians of this period corroborate Gramsci’s observations. British historian E. P. Thompson, in his 1967 article “Time, Work-Discipline, and Industrial Capitalism,” observes that the transition to industrial society “entailed a severe restructuring of working habits—new disciplines, new incentives, and a new human nature upon which these incentives could build effectively.” Herbert Gutman, in Work, Culture, and Society in Industrializing America, published in 1977, arrived at similar conclusions.
In the current period, there has been a renewed interest in criminal justice reform and a desire to roll back, at least to some degree, drug prohibition. Trauma-addiction science has suggested a public health approach rather than a police-carceral one. The shift is most noticeable in cannabis legalization (and decriminalization) and in greater empathy for victims of opioid addiction. However, there is a political economy factor at work in the shift in policy thinking. The economy is in the midst of a long secular expansion. Unemployment is lower today than at any time in the last 50 years. Wages are starting to rise. One way to address the labor shortage is to tap the industrial reserve army that has heretofore been contained by the state carceral function. The capitalist need for labor, which includes enlarging the pool of competing workers to put downward pressure on wages, appears to be softening the hearts of politicians. At the present moment, drug offenders seem to be the least controversial prisoners to bring into the workforce.
Marxists approach the subject of crime and deviance in two principal ways. In the first, scholars theorize that the categories of deviant behavior that draw official sanction are products of the superstructural imperative to secure and entrench exclusive property and related forms of oppressive social relations and, more specifically, to manage labor markets. Because the character of the superstructure reflects the interests of the ruling class, it is these interests that shape the deviance-making process and the content of the categories used to control individuals and groups. Moreover, since the societal structure changes over time, the character of the deviance-making enterprise and its products is temporally variable. These ideas express the dialectical theorization of societal development.
In the second, Marxists focus on the criminogenic character of class-based social structures, theorizing in particular that the discontents of capitalism, a system marked by alienation, immiseration, inequality, and injustice, produce the criminogenic conditions requiring the criminal law and necessitating its aggressive enforcement. One often finds this view in Marxist penology alongside the analysis of control structures, though it deemphasizes the critical theory of coercive control.
The historical record supports the theory that the economic imperative and the attendant political character of a given concrete mode of production shape the control machinery and its deviant categories. At the stage of primitive communism, where there is neither social class nor state and law, one finds scant evidence indicating the existence of formal and coercive social control machinery. Instead, control of deviance appears informal and not particularly punitive. Moreover, there is little crime and violence in evidence (even when crime is defined beyond the principle of legality). The emergence of the state and law coincides with the appearance of social class and patriarchal relations, which are associated with the arrival of large-scale agriculture. At this point, social control as an institutional force appears. Each successive stage in the development of social segmentation leads to greater inequality in wealth and power; with each stage, the formal control machinery becomes more extensive and elaborate.
Capitalism represents the highest stage of exploitative relations and therefore achieves the highest level of coercive and ideological control. It is in this context that sophisticated police and carceral structures appear, accompanied by a scientifically framed intellectual system, principally the disciplines of criminology and penology. Other historically unique control systems also emerge, such as the mental health industry, with its own intellectual (or ideological) justifications in tow, taking the forms of psychiatry and psychology.
Noting that this machinery is far more extensive than past arrangements, Marxists theorize that the chronically alienated state of the working class and the problem of managing the fallout from the periodic crises associated with capitalism require an extraordinary control apparatus operating at the boundaries of the structure of workplace rules. The latter concern especially flows from Marx’s theory of the general law of capitalist accumulation, presented in Capital I, wherein the rising organic composition of capital, defined as the ratio of constant capital to variable capital, swells the ranks of the unemployed, or industrial reserve army.
In the 1990s, Michael Lynch and associates tested hypotheses derived from this theory, positing a relationship between the rate of surplus value and the size and scope of the police and carceral functions in the United States. Their research provides compelling empirical support for the theory.
The interest in the relationship between wage labor and carceral control is a longstanding one in the Marxist literature on crime and deviance. Perhaps the paradigm of modern Marxist penology is Georg Rusche and Otto Kirchheimer’s landmark Punishment and Social Structure, based on Rusche’s seminal article “Labor Market and Penal Sanction.” Rusche posits a relationship between labor supply and the rates, types, and intensities of punishment. Harsh physical punishments are associated with economic downturns, a relationship that, he theorizes, is a function of the concomitant rise in surplus laborers; since one’s labor is attached to one’s person, the less valuable one’s labor, the less valuable one’s person. In contrast, rehabilitation and prison labor, publicly appealed to as enlightened reform, take priority during periods of economic expansion. Again, supply and demand plays the crucial role: the expanding demand for labor shrinks the surplus of laborers, thereby making each laborer more valuable. Stripped of complexities, the pendulum swing between repression and reform is a function of the rhythms of capitalism.
Punishment and Social Structure expands on this idea and, inspired by Marx’s analysis of primitive accumulation in Capital I, explores the history of carceral control. Rusche and Kirchheimer compare the capitalist epoch with the epoch it superseded. There was no centralized state or bureaucracy under feudalism. Conflicts were, for the most part, resolved privately. Moreover, social arrangements were not such as to require repressive public control. As a consequence, punishments were not severe. However, by the late Middle Ages, private criminal law yielded to greater levels of state control and punishment. Capitalism required the destruction of relations protecting labor under the feudal system: the lord was dispossessed of political and economic power, the master craftsman of the guild was transformed into an unattached skilled proletarian, and the serf and peasant were forced off the land via enclosure for use as cheap labor. This was achieved in part through measures criminalizing guilds and unions with the charge of conspiracy, as well as statutes and ordinances expanding the scope of foraging, poaching, trespassing, and vagrancy laws. Theft naturally increased with manufacturing, as objects once owned by those who made them became the property of those who owned the land and means of production. Subsequent works by several scholars, including William Chambliss and Christopher Adamson, have sustained Rusche and Kirchheimer’s thesis.
In the area of legal theory, Evgeny Pashukanis demonstrates that the criminal law (and law in general) embodies an ideology that functions to perpetuate the rule of the bourgeoisie. For example, the principle of equal treatment obscures the reality of class inequality and exploitation by projecting an image of the law with the outward appearance of neutrality and universality. Bourgeois morality becomes common morality, a superstructure concealing the true operation of criminal justice as an apparatus managing the working class for the sake of reproducing the unequal division of property. A leading modern exponent of this view is Jeffrey Reiman, who, in The Rich Get Richer and the Poor Get Prison, argues that, whereas the pretense to universalism legitimizes the use of force by government, the state’s failure to manifest equal justice makes its coercion analogous to criminal violence. He concludes that the criminal justice system is, in reality, a criminal system of justice.
The second way Marxists approach the subject of crime and deviance is to focus on the criminogenic conditions generated by the capitalist mode of production. This emphasis emerges early in the development of Marxist theory. Engels, in The Condition of the Working Class in England in 1844, argues that the degrading working conditions prevailing under industrial capitalism demoralize the proletariat, leading to a loss of social control among workers and their children. The discontents of capitalism provide workers with the temptation to engage in deviant behavior and wear down their moral capacity to withstand temptation. Capitalism thus generates the social conditions that turn some members of the working class into criminals. Engels characterizes crime among the working class as a form of “primitive rebellion,” the “earliest, crudest, and least fruitful kind,” which, because of its expression at an individual level, is not only suppressed by the state but also condemned by the working class. For this reason, Engels and Marx are skeptical that working class criminals could be of much use to their revolutionary goals. They write in the Communist Manifesto that the conditions of capitalist society make it more probable that working class rogues will play “the part of a bribed tool of reactionary intrigue.” Marx and Engels describe street criminals as “lumpenproletariat,” “social scum, that passively rotting mass thrown off by the lowest layers of the old society.”
The lumpenproletariat (the “dangerous class”)
Marxists do not see the street criminal as the only criminogenic consequence of capitalism. Engels theorizes that capitalism encourages crime among the bourgeoisie as well, and further observes that the character of crime control is shaped by class location. In a passage that anticipates the work of Edwin Sutherland, Engels writes, “Murder has been committed if society knows perfectly well that thousands of workers cannot avoid being sacrificed so long as [capitalist] conditions are allowed to continue.” Willem Bonger, credited with the first full Marxist criminological work of the twentieth century, echoes Engels’ argument, theorizing that the tendency of capitalism to reduce everything to a cash nexus, to pit worker against worker in unforgivingly competitive markets, and to promote egoism over altruism constitutes a criminogenic milieu. This allows Bonger to account for crime across the class structure.
A critique emerged in the late 1970s wherein proponents, most notably Jock Young, took issue with “left idealism,” identified as the tendency to treat the criminal as something of a working class hero. The left idealist, so goes the critique, romanticizes the working class criminal, depicting the rogue as a revolutionary, a portrayal that stands in stark contrast to that painted by Marx and Engels. In one notable example, David Greenberg characterizes the proletarian criminal as the “vanguard of the revolution.” This view of crime, according to the “left realist,” rationalizes the behavior of the proletarian who turns to crime, explicitly justifying behavior harmful to working class interests. Given that most victims of street crime are proletarian, if Marxist criminologists are to represent the interests of the working class, then they must take the problem of working class crime seriously. The arguments of the left realists were influential in the United Kingdom during the 1990s, where they played a role in the development of New Labour’s crime control policies. However, Young himself was critical of these policies, as New Labour jettisoned class analysis and shifted the blame to the victims of capitalism.
What of Marxism as a transformational political project? A piece of the project manifests today in the revolutionary act of overthrowing bourgeois definitions of crime. Developing her own conception of crime, the politically committed Marxist indicts capitalism and its associated problems of alienation, imperialism, poverty, racism, and sexism. For example, Julia and Herman Schwendinger argue that the capitalist mode of production represents the systematic violation of human rights as understood from the radical democratic and egalitarian standpoint, or, using Erich Fromm’s distinction, “positive freedom.” The Schwendingers distinguish between personal rights necessary for continued personal existence, such as the right to clean water and nutritious food, and those necessary for dignified human existence, for example the right to democratic freedoms, an education, housing standards, and so on. Marxists judge capitalism incapable of meeting the terms of these rights, as it rests on the exploitation of human labor and the unequal division of the fruits of that labor.
It is in the rejection of bourgeois definitions of crime and the redefinition of criminal conceptions along radical egalitarian lines that Marxists most clearly differentiate their project from that of the conflict theorist. Conflict theory sees crime and deviance as social problems resulting from inadequate social institutions and from the struggle over cultural values and partisanship inhering in a pluralist society, public issues that can in turn be addressed with a more equitable distribution of income and an engaged citizenry. Conflict theory is thus ultimately reformist in character.
The conflict social scientist’s failing is that he does not begin with a critique of the material foundation of the social order. His conception of power is rather more idealist: for him, it is the struggle over social power that generates conflict. In contrast, the historical materialist sees conflict as emanating from the mode of material life, with class antagonisms and property arrangements serving as fuel. Power emanates from the prevailing socioeconomic arrangement.
Criminogenesis in both its senses is ultimately the manifestation of the underlying class struggle that pervades capitalist societies: the criminal justice apparatus is, by design and by historical development, a structure to secure bourgeois rule over the proletarian masses; the impoverished and conflict-ridden conditions generated by capitalism imperil the working family while at the same time encouraging the pursuit of profit at the expense of the public good. The Marxist critique strikes at the roots of modern capitalist society, an unjust system that humankind cannot reform, but must instead abolish.