As Peter mentioned, the constant stream of off-putting articles on grad school is annoying (enraging) for a number of reasons. Although I’m sure it’s meant well – as a form of empathy with ‘our situation’ – it has the perverse effect of shifting the blame to our generation.
A Latenight Rant by Peter
There is no genre more beloved by the old, lazy, and tenured than the “don’t go to grad school” advice column that seems to spring up every couple of months on the Chronicle or Inside Higher Ed. Writing with nothing but the best paternal intentions, some tenured prof or another explains, with his hand gently patting our shoulder, that he has come to realize that there just aren’t jobs in X field and students really just shouldn’t apply for these PhD programs.
As a member of generation-fucked, I find these types of arguments frustrating. Let me rephrase that. I find them god-damn fucking frustrating. I encounter them mostly from academics, who make some series of arguments about why no one should follow them into graduate school. All the reasons people give for why it is a bad idea to go to grad school (terrible job market, no social respect, you will simply be a source of cheap labor, etc.) are true, of course, but turning them into reasons why you shouldn’t go misses the point.
Think about it this way: would any good progressive look out across the Rust Belt in 1985, fold their arms, and say (with a certain self-satisfied air of regret), “Well, I’ve always told Youngstown high school graduates that they shouldn’t go into the steel industry”?
Of course not. They would blame union-busting, and off-shoring, and leveraged buy-outs, and Reagan, and everything else. But they wouldn’t shift the blame onto the workers themselves, who should have known better than to go into that industry.
Obviously people who are considering a PhD or JD have more options than a steel worker did, but anyone who thinks that recent college graduates are just overflowing with good choices is just revealing their own generational entitlement (defined, for the purpose of this post, as anyone who came of age before the country went to the total shitter, especially those who took advantage of that non-shittiness to get good public education, and then gleefully grabbed up all those fun tax cuts and cushy tenured jobs).
What, pray tell, are those would-be English PhDs supposed to do? Journalism? Ha! We know they can’t do law school! Publishing? Not even worth joking about. Secondary school teaching? Not now, after NCLB/Michelle Rhee/budget cuts/TFA/Scott Walker have all had a go at teachers. People don’t have interchangeable skills (we can’t all just smoothly transition from excelling at languages since 7th grade into a career as a chemical engineer), and those of us who hoped to make a living on our writing, thinking, teaching, arguing, etc. don’t have a ton of options these days.
The problem with the “no one should go to grad school” articles is that they, unconsciously or not, shift the blame for endemic joblessness onto the most vulnerable: those who are, or will soon be, unemployed. This is especially pernicious when these arguments come from tenured faculty, who are exactly the ones with the greatest responsibility to try to fix the Academy. Implicitly, they accept conservative narratives about individual agency within capitalism. Rather than fight the real enemy (the corporate administrators, the Tea Party Governors, neoliberalism, etc.), they turn it into a moralistic argument about what some 22-year-old should be doing. It all becomes a way to justify to themselves why they aren’t helping out the grad student union, or marching with OWS, or challenging their University President.
Now, don’t get me wrong, it often is a terrible idea to go to graduate school. It is generally a terrible idea to be young right now. But let’s not blame some poor kid who wants to dream that he might not have to be a barista for the rest of his life. The people we should be paying attention to are the university presidents, and politicians, and think tank “intellectuals” and everyone else who is destroying our educational system and our economy.
I had just finished teaching a historiography review session for my undergrads who are taking the exam in a little over a week when I was emailed this story about a new mapping tool for the ancient Roman world. Maybe it was just because I had been talking about Braudel, but I couldn’t help but see the comparisons – and the possibilities for ‘total history’ in new digital tools.
Scott Weingart pinpointed what makes this technology so exciting in his blog review of ORBIS:
ORBIS is among the first digital scholarly tools for the humanities (that I have encountered) that really lives up to the name “digital scholarly tool for the humanities.” Beyond being a simple tool, ORBIS is an explicit and transparent argument, a way of presenting research that also happens to allow, by its very existence, further research to be done. It is a map that allows the user to engage in the process of map-making, and a presentation of a process that allows the user to make and explore in ways the initial creators could not have foreseen.
In other words, it’s not just a digital archive for historians (as so many digital tools in the humanities are), or a useful interactive database, like the Slave Voyages Database, that generates so much controversy in part because the user is not involved in the process of formulating the data. And it’s not like the virtual Rome project, which basically just allows the user to see what Rome would have been like. All of these are very cool, but it’s the interactivity of the research that is particularly cool about ORBIS.
I hope people use this for teaching as well as for research. And personally, I think it’d be great if someone put this together for the British Empire. Or for West African trade…..
Today is the day of the London mayoral election. Ken and Boris are squaring off over who gets to claim credit for throwing money at London, and who gets to avoid blame for transport failures. And I, a taxpaying resident of London, who regularly uses public transport, do not get to vote.
As I’ve previously written, studying African colonial history can help to prepare you for the weird and wonderful rules of citizenship, and remind you that there is no ‘natural’ truth to this stuff – it’s all a series of compromises and contingencies. So in the UK, citizens of Commonwealth countries and the Republic of Ireland are allowed to vote in all elections, while citizens of the European Union are allowed to vote in local, supralocal (today’s), or regional assembly (Scottish Parliament, etc.) elections. So (some) former subjects of the Empire are treated locally as citizens. And current members of the EU are treated (sometimes) as citizens.
Americans in Britain usually get told in this kind of situation that we fought a war and so if we wanted those rights we should have stuck around in the Empire, etc etc. But since the Republic of Ireland gets these rights, that seems a little false in this case. On the other hand, the US doesn’t let foreigners vote at all.
So why does this always come back to colonial history? Well, growing up in the US, the story that you get is that the British Empire was bad because it taxed its subjects without allowing them to have elected representation. And there’s a general feeling across the board that part of what makes imperialism so damaging is that it is run by unaccountable autocrats. And a lot of that comes down to tax extraction again, and the idea that the taxes being bullied out of Africans or Indians were not being used to develop the local infrastructure, education provision, etc, but were being used to build parks and public works in London and Manchester.
In other words, in most countries today there is an immigrant class that is not represented in the system in which it pays tax. And when a lot of public vitriol is directed at this class, and silly policies are put into place, it becomes clear that their under-representation creates a delightful new way to ‘other’, and to exploit from the inside.
We like to think that all of the great voting struggles were overcome by the anti-colonial revolutions, the civil rights and anti-apartheid movements, women’s suffrage…. we like to think of the West as a complete democracy and as the only way that modernity could have happened. But as I’ve said in previous posts, there are a lot of different ways that things could have turned out (and could still), and the assumption that immigrants should not vote is just that – an assumption, not a ‘truth’. As the inclusion or exclusion of different types of immigrant in various countries shows, citizenship, and the rights and responsibilities that come with it, are invented, and could easily change again.
David Harvey’s new book, Rebel Cities, is the latest entry in his life-long interest in uncovering the intersection between capitalism and urbanization. It’s a collection of previously published, but updated and revised, essays and articles. They are all particularly important to our understanding of both the long fall-out of 2008’s economic crash and the rise of urban revolts in Egypt, Greece, New York and elsewhere. You should pay attention to David Harvey for lots of reasons (he’s probably the most important Marxist theorist alive today, and one of the most important intellectuals in general). But you should read him if for no other reason than that he was cautioning against the mortgage bubble, and worrying about “what happens if and when this property bubble bursts,” in 2003, years before celebrated bourgeois economists like Nouriel Roubini made their reputations predicting it.
This is a simplification, but basically Harvey has two distinct areas of interest in cities. First, he explores the way that urban spaces are created by distinct modes of capital accumulation; second, he’s interested in the reasons that cities are particularly important sites of class conflict. These are, obviously, related, since cities are important sites of political mobilization and conflict exactly because they have such central roles in the creation and circulation of capital. But loosely, this division frames the two parts of the book.
Capitalists need cities, Harvey argues, because building them up is one of the primary ways in which capitalists can dispose of excess surplus product. That sounds a bit jargony, but basically it comes down to the idea that construction (of buildings, of roads, of infrastructure) allows capitalists to do something with the profits they have made, once reinvestment in other things has run out. I understand that this argument was made in full in Harvey’s classic Limits to Capital. Unfortunately, due to a (sorely disappointed) thief in the West Village who took my messenger bag last year, I never actually finished Limits to Capital. But Rebel Cities seems like a good introduction for amateurs like myself.
Since I wrote my last post on campus novels, I’ve read some more of them – mostly from the student perspective: Starter for Ten, I am Charlotte Simmons, and, most recently, Noughties. All were good in very different ways (and extremely different stylistically). It’s interesting to note that while the differences between academic life in the British and American campus novel are significant (again, as discussed previously), there are far more commonalities for the students themselves. Most notably, too much drinking!
But all of them also dealt with a fear that one of my students voiced at the end of term last week. My student, very interested in the topic we were covering that week (Africa in the Cold War), had decided last minute to change her essay topic to this more interesting one. But she was finding the reading so interesting (and upsetting) that she felt overwhelmed. When she came to my office hour, she explained that there were fundamentally too many options at university! She wanted to do all the things offered: play a sport, attend research seminars that looked interesting, go to film screenings, make friends, do all the reading that she found interesting, take all the classes….. and she was feeling overwhelmed with guilt for not being able to do it all.
This past week saw the release of one of the most anticipated video games of the year: Mass Effect 3 (ME3). In this game, players take control of Commander John/Jane Shepard, a sort of futuristic Navy SEAL. Shepard is charged with defending not only humanity, but all organic existence, from Reapers, “a highly advanced machine race of synthetic/organic starships” (think Cylons). The release of ME3 has been accompanied by the usual discussions about whether videogames are art. (Roger Ebert says no. Everyone under 30 says yes.)
I’d like to elide this discussion for now, mainly because—save for taking introduction to art history—I’m not very familiar with the history or theory of art. What struck me most about ME3 is its extensive focus on diplomacy. Unlike most action roleplaying games, ME3 allows players a significant amount of choice in whether or not they become a “paragon” (basically a good guy) or “renegade” (a devil-may-care good but rough guy). The game centers on building an alliance similar to NATO designed to combat the Reaper menace. Therefore, whether one becomes a paragon or renegade depends, essentially, on how the player conducts him or herself diplomatically. For example, if one attempts to win a given planet over to the alliance through threats or blackmail, one wins renegade points; the opposite is true of paragons. In either case, the game is at heart about diplomacy, a fact that had me questioning the relationship between gaming, history, and international relations.
Although ME3 is just a video game, today a similar game is regularly played by students, professors, and even policymakers. In these modern political war games, players adopt the perspective/persona of a given nation. For instance, Player 1 will play as Barack Obama, while Player 2 will become Vladimir Putin, each facing off against the other to address, say, an Iranian nuclear breakout. These games are designed with the purpose of teaching the players “to think like” policymakers. The idea is that practice, even fictional practice, enables one to either think about or prosecute diplomacy. Interestingly, although these games reached their apex in 1950s and 1960s America, their origins may be traced to Weimar Germany. Examining the history of war games not only sheds light on the transnational connections that shaped America’s Cold War foreign policy, but also illuminates important questions about the relationship between gaming, knowledge, and practice.
One of the game’s main developers was a man named Hans Speier, a forgotten, though important, German-American exile intellectual. Speier began his career as the first doctoral student of Karl Mannheim, the creator of the modern sociology of knowledge, at the University of Heidelberg (home of the famous Philosophenweg). After leaving Germany soon after the Machtergreifung, Speier became the youngest founding member of the New School for Social Research’s University in Exile (peruse that list for a who’s who of twentieth century intellectual history). Speier spent the war years working for the Foreign Broadcast Information Service, the Office of War Information, and the Division for Occupied Areas, before helping found the RAND Corporation’s Social Science Division in 1948. Throughout the 1950s, he embodied a more general shift experienced by a generation of German exiles from socialist to Cold Warrior, routinely arguing for the United States to adopt an extremely hardline position vis-à-vis the Soviet Union. (As you’ve probably guessed by now, I’m writing my dissertation on Speier.)
While at RAND, Speier, along with another sociologist, Herbert Goldhamer, standardized the political war game described above. Through RAND connections, the game moved to MIT and then into the Joint Chiefs of Staff’s Joint Gaming Agency (see Chapter 6 of Sharon Ghamari-Tabrizi’s The Worlds of Herman Kahn). A number of well-known figures, including Maxwell Taylor, McGeorge Bundy, and Bobby Kennedy, played the game before making important policy decisions (although the relative influence of the game on decision-making remains obscure). The game’s meta purpose was to teach players about the importance of historical context for international relations and was a reaction against the game theory that dominated RAND’s Economics Division. But the game’s origins remained distant from the Cold War context in which it reached fruition. In the late-1920s, Karl Mannheim developed a new pedagogy with the goal of harmonizing the incredibly contentious democratic politics of Weimar Germany. Mannheim argued that if Weimar democracy was to have a future, intellectuals needed to create a classroom environment where students adopted the personas of representatives of different political parties and political interest groups. By discussing and arguing with each other over the most pressing issues of the day, Mannheim maintained, students would learn to be democrats. Practice was the path to democracy.
Speier adopted and adapted this idea in the Cold War. Political war games are now played throughout the world, and are an important part of many security studies curricula. Moreover, it is the basic notion of the political war game—that gaming diplomacy can make players more astute negotiators—that undergirds ME3’s appeal as an “intellectual” video game. Clearly, individuals interested in politics want to have some way to practice the art. The question, of course, is whether this is ever possible. Can a political war game recreate the Cuban Missile Crisis? Can ME3 teach people what it’s like to form and maintain an international (or intergalactic) alliance?
The question that underlies this entire post concerns what role knowledge, however acquired, can play in teaching diplomacy and improving outcomes, however defined. A basic assumption of Speier’s, Goldhamer’s, and postwar security studies (and countless model UN clubs) is that it could. But is this the case? Are games more than distractions? Happily, a number of academics have begun to address this and similar questions. Game studies is one of the newest fields in academia and potentially one of the most exciting. I for one very much look forward to seeing how this often-overlooked field develops in the coming years. Perhaps we will soon learn that certain games do indeed fulfill their intended, practical telos. Or perhaps we will learn that they don’t.
I feel the need to reintroduce myself given a long absence from this blog that can only be explained by the strange temporalities of grad student life. Preparation for generals last year, stowed away in a library with too many books, led to my throwing a rope into the real world with posts on Octomom, jury duty, and the Cronon Affair (the latter also including an excellent review of German philosophy for anyone doing a Euro Intellectual History field…). Summer recovery in Germany led to posts like this and this. Then there was the long silence as I adjusted to a new audience–classrooms full of undergrads–last semester. I’m back and blogging again for various reasons but the immediate impetus is to say something that, as a 28-year-old overeducated unmarried heterosexual woman, should be fairly obvious: “I use contraception.”
I raise this point not because it will cause any major waves, but because I hope its dumb obviousness will raise a point that itself is so obvious that I think it has become something like a great grey elephant camouflaged against a great swath of grey newsprint. The recent contraception debates are about the Catholic Church, about the Republicans using any tactic they can to cripple the healthcare bill (see: Blunt amendment), and about a bozo conservative radio personality filled with hot wind. More specifically — and hard to miss in a country that has seen bills that on one hand insist a woman have an ultrasound before an abortion to ensure “informed consent,” and then take away a doctor’s duty to inform a woman of any fetal complications — an attack on women. But “women,” as we should know by now, encompasses an array of different identities, and the attempt to defund contraceptive coverage, and particularly Rush Limbaugh’s attack on Sandra Fluke as a “slut” and use of the term “recreational sex,” signals a construction of women as women-cum-college-girls-“gone-wild.”
At base, Rush Limbaugh argued that “personal sexual recreational activities” should not be subsidized. Adam Serwer over at Mother Jones has a good rundown of this argument, pointing out how it differs from the Catholic bishops’ protests. Media articles have trumpeted that Rush really put his foot in his mouth on this one, causing sponsors to flee, yet the media’s focus on Rush’s choice words, rather than the substance of his argument, has made Rush’s crime one of politesse rather than politics.
Yet what was interesting about this focus on semantics rather than substance is that it seemed like media outlets like Salon and the NY Times thought it so obviously ludicrous to call an unmarried woman using contraception a slut that they didn’t take up the heart of Rush’s claims that “recreational sex” should not be subsidized. Only Emily Bazelon at Slate focused on the slut-shaming in a substantive way, though she concluded that Fluke was revitalizing feminism through sex positivism. I’m not so sure this is what’s happening. I think Elizabeth Drew, in an essay in the New York Review of Books, rightly brought up a more pessimistic picture that Rush
touched a nerve by raising an issue on which many of his followers would agree with him: why should taxpayers pay for insurance (with no copay at that) to make unlimited sexual activity by students worry-free? Whatever people’s attitudes are about young people engaging in what used to be called “recreational sex,” Limbaugh had cleverly made the issue not really the sex but insurance coverage to protect against its possible consequences.
What I find interesting is this focus on “students” and the tight link between this imagined population and the idea of “recreational sex.” The strange thing is that poised, cool, and 30-year-old Sandra Fluke is hardly the image that springs to mind when someone says “college co-ed.” Instead, we think of the much ballyhooed hook-up culture that has resulted in many a wrung hand over the past decade.
Drew goes on to write, “In my view it would have been wiser for them to call as a witness a married woman with an unarguable medical reason for needing contraceptives and who worked for a Catholic institution that denied it.” Why this focus on “unarguable medical reason”? A concentration on defending contraception due to the Pill’s use for overt medical “diseases” (no one, of course, wants to call pregnancy a “disease”) boomerangs attention to the Pill as opposed to the wide array of recommended contraceptive devices, such as IUDs. It undermines the very reason why the EEOC ruled that health insurance companies had to cover prescription contraceptives in the early 2000s: because to not do so was a form of gender discrimination, unlawful under Title VII of the 1964 Civil Rights Act, which will soon celebrate its 50th birthday. And perhaps most importantly, it distracts from the bigger picture recognized by Planned Parenthood v. Casey 20 years ago, though in relation to abortion: “[t]he ability of women to participate equally in the economic and social life of the Nation has been facilitated by their ability to control their reproductive lives.” [Here's a useful chart showing the related rise of female employment since the early 1970s].
The claim that people need to “take responsibility” for their recreational sexual activities is a form of gender discrimination because it is women who bear the costs of unintended pregnancy: financial, social, and, yes, health-related. And of course what is considered “recreational” and what medically necessary is always in flux, just as what is considered a disability and a normality changes over time. As Georges Canguilhem argued in The Normal and the Pathological, disease and disability are constructed categories; there is no objective science to identifying them. Infertility treatments are today covered by insurance plans because the inability to have children is seen as a “lack,” a deficit that needs treatment. It’s worthwhile to note that while there are a plethora of laws regulating contraception and abortion in the United States, reproductive technologies like IVF go almost entirely unregulated. Viagra is covered by health insurance because erectile dysfunction has become identified as pathology, despite the fact that Viagra may be the most recreational of the sex-related drugs. Then again, male sexual functioning is something society has never had trouble supporting, and in fact the 2000 EEOC decision ruled that insurance companies had to cover prescription contraception for women in part because they already covered Viagra for men.
We now see the effect of cultural attacks against a female hook-up culture that began in the early 2000s: this portrayal (whose validity has been rightly contested) has bled out to encompass a much wider area of unmarried female sexuality. Sandra Fluke becomes a “student,” not a 30-year-old adult woman. Her activity becomes “recreational,” not part and parcel of her adult well-being. And the contraception that prevents unintended pregnancy becomes unnecessary to her sexual, mental, and bodily health.
In April 1971, 343 French women signed “le manifeste des 343 salopes”–the Manifesto of the 343 Sluts, proclaiming that they had had abortions. That summer, 374 German women proclaimed the same on the cover of Stern. Sex today is accepted but it is painted in broad strokes and still separated along a marital/non-marital divide. Perhaps we do need to reclaim the name “slut” as the Slut Walks do, but more importantly we need to pluralize this binary and ensure that access to contraception does not become sidelined by a construction of medical necessity that excludes women’s everyday sexual activity.
“Liberty for the few – Slavery, in every form, for the mass!”: the Deep Roots of the Birth Control Freakout
Thanks to Rick Santorum, Rush Limbaugh, and the Virginia Legislature we’re engaged in an elevated and enlightened national debate over just exactly how big slutty slut sluts are our nation’s women. We all know, of course, that sex without the intent to procreate is immoral, unless, like Newt Gingrich, you’re in the sanctity of a Congressman/aide relationship. So the question is, of course, exactly how many sexual experiences should women be allowed? 5? 10? Exactly how much should we humiliate those who have unapproved sex? Should they be forced to videotape the sex for Rush’s sweaty amusement? Be raped by the state of Virginia?
Some commentators have noticed that this rash of attacks on women’s rights is a bit strange coming from a political movement that, a year ago, was screaming about getting the government off its back, but is now so eager to get in between our sheets (and our knees). It does raise a serious question: why does the libertarian tradition in this country seem to be so blind when it comes to women’s rights? Why is it that the party that claims to speak for people’s private property rights is so careless about the autonomy of people’s privates? We shouldn’t be surprised, though, as the conflation of property rights and control of women has deep roots in American history.
Corey Robin has discovered some great intellectual history that partly explains this disconnect, showing that libertarian hero Ludwig von Mises actually had repugnant views on women, worrying that access to birth control might give women too many free choices. And Mike Konczal has also written on some intellectual background. Together they suggest that there is a strong tradition of libertarianism that is not committed, even in theory, to what Robin calls a “project of universal liberty,” not even a project of negative liberty. At least as so far as women are concerned.
I would like to add a little social history to the mix, in a way that I think supplements the analysis of Robin and others. I’m currently reading Stephanie McCurry’s book on the troubles of Confederate nation-making, Confederate Reckoning. A major theme in her work, going back to her Masters of Small Worlds, is the intersection between domination of the home and perceptions of liberty. Many scholars piously tell us of the need to integrate analyses of race, gender, and class, but, other than maybe Glenda Gilmore, I can’t think of anyone who does this as well as McCurry.
In Masters of Small Worlds, she studies small households in the Low Country of South Carolina, those with no or few slaves. These poor whites have always been a bit of a problem in historical understanding. In a nutshell, why did those white men who were not profiting from the slave system still fight and die to protect it? One traditional answer, going back to Edmund Morgan, and before that W.E.B. Du Bois, is that race was the factor that tied the poor white to the rich white, creating a “socialism of fools,” which seemed to unite the interests of all white people. McCurry doesn’t disagree, but adds gender to these analyses.
White men’s self-identity, she argues, in the age of the yeomanry, was intricately linked to domination of the home and, especially, domination of dependents: children, women, and slaves. Moreover, this was a process that linked private property with control of slaves and women. Her first chapter in Masters of Small Worlds is about the spread of laws regarding fencing and boundaries. Once this enclosure is complete, and property is ensured, then the white male can exercise control over his subordinates. “The law elided distinctions between forms of property, rendering a man’s control over his enclosure synonymous with his control over the familial and extrafamilial dependents within it.” (p. 14)
The result was an economic system in which the small property holder had total control of his property and total use of the labor of all dependents on it. Like many yeomen, they produced first for subsistence and sold the remainder on the market. Thus, they weren’t as fully integrated into the market as, say, a New England millworker or even a Western grain farmer was. Women’s labor, then, was crucial to the functioning of the economic unit, as they wove, cooked, cleaned, butchered, etc. But it was labor that occurred under the control of the male. In defiance of pro-slavery ideology, in fact, white women often worked in the fields alongside white men and slaves. And, though she doesn’t go into this, the reproduction of both the wife and slave women had direct economic benefit for the master.
White Southern men received real and tangible benefits from this system that ensured their near-total autonomy and power within the boundaries of their own property. While at home, they controlled the labor of their subordinates, and in public their status as a free-holding white man (a master) linked them to the elite. McCurry does not actually argue that this common mastery eliminated all class resentment or divides, but it did provide a common language that could be used to mobilize poor whites. Thus on the eve of the war, planter elites argued that the “black Republicans” would threaten the mastery of white men, an argument laden with gender and racial anxiety.
Moreover, this was a tradition that was hostile to most government action. Sure, you needed the government to capture fugitive slaves, protect against rebellion, and punish other transgressors. But, unlike those Whig factory owners in Massachusetts, a Southern freeholder had no need for tariffs or canals, no need for public education, and no need for a systematized and regularized legal code. The conflation of property with racial and gender privilege also partly explains the seeming paradox that the capitalist North actually had a far greater communitarian tradition, far more advanced public goods (libraries, roads, schools, etc.), and a far more advanced anti-capitalist tradition than the supposedly agrarian South did. Southern white men had extra-good reasons to be suspicious of the Federal Government, since it meant sharing power with those idealists from Ohio or Massachusetts who couldn’t be trusted on the issue of slavery.
The result was, publicly, an ideology that strongly linked the subordination of women and the subordination of blacks with the defense of white liberty and white private property. Few issues were as intricately linked in antebellum times as black rights and women’s rights. Southern ideologists weren’t alone in noticing that in the North women’s rights activists came almost exclusively out of the ranks of abolitionists. While abolitionists imagined liberty as individual self-possession and control, Southern ideologues imagined it as household self-possession and control, possession and control exercised by the white man. George Fitzhugh wrote that abolitionists “give at once the coup de grace to the old world, and to usher in the new golden age, of free love and free lands, of free women and free negroes, of free children and free men.” (These are all bad things, for Fitzhugh.) In Cannibals All, he constantly refers to the “women, children, and free negroes” as one group, those fit to be ruled. He also, interestingly, accuses all abolitionists of being socialists: “men once fairly committed to negro slavery agitation … are, in effect, committed to Socialism and Communism, to the most ultra doctrines of Garrison, Goodell, Smith and Andrews – to no private property, no church, no law, no government, – to free love, free lands, free women and free churches.” (p.368)
Now Fitzhugh was no libertarian, obviously, but he was a spokesman of a Southern ruling class that saw no inconsistency in emblazoning both “liberty” and “slavery” on their banners. The reason, as should be clear from McCurry’s analysis, is that the freedom of the white man (as they saw it) really did depend on the subordination of both women and blacks. As Fitzhugh said, with commendable honesty, “To secure true progress, we must unfetter genius, and chain down mediocrity. Liberty for the few – Slavery, in every form, for the mass!” Moreover, you can see how, in his mind, loss of control over women would literally be an assault on private property, as women joined slaves as essential appendages of private property.
I haven’t finished McCurry’s new book yet. But I gather from what I’ve read so far that she will argue that it is exactly this style of freedom that Confederates think they are preserving when they go to war. But, in fact, the war necessarily politicizes and empowers women and slaves, who play a part in bringing down the Southern project.
The relevance, of course, is that out of this social history comes a strong tradition of understanding liberty not in abstract terms but in the concrete, as the ability to dominate and control your own subordinates. Moreover, this should remind us that the women’s rights movement does entail real losses for men: loss of status, loss of labor, loss of privileges. I think Robin has made similar arguments from an intellectual history point of view. But I think it’s important to also embed the arguments of classic conservatives in the particular economic forms that give rise to them and where they best grow. I suspect that the average Tea Partier knows relatively little about von Mises’ actual thinking. But the deep cultural sense of control and hierarchy created in antebellum yeoman life (and continued in Jim Crow and after) laid deep roots in American society.
Scholarship and politics don’t mix. At least not according to literary theorist and New York Times blogger Stanley Fish, who has been arguing for years that professors should “save the world on their own time.” Just last week, he reiterated this point in a column about a conference he attended on “originalism,” the contentious legal doctrine that judges should interpret the Constitution as the framers had originally understood it. Despite the subject matter’s obvious implications for hot-button issues like immigration and the health care mandate, Fish happily reported that conference participants stayed focused only on matters of academic concern. They never waded into the territory of political partisanship. As he explained,
It would be an understatement to say that these questions provoke heated discussion in the world at large, but at the conference they were not themselves debated; no one stood up to say that he was for or against the individual mandate, or that citizenship standards should be relaxed or tightened. Instead participants argued (vigorously, but politely and with unfailing generosity) about where and with what methods inquiry into the questions should begin. Actually asking and answering them was left to other arenas (the arenas of the legislature, the courts and the ballot box) where their direct, as opposed to academic, consideration would be appropriate.
While Fish’s insistence on the stark distinction between partisanship and scholarship might strike some as unrealistic, it comes out of his broader view on the nature of academic freedom. From his perspective, academic freedom differs fundamentally from the free speech rights guaranteed in the Bill of Rights. Unlike most workplaces, colleges and universities don’t have the right to fire their academic staff because of their opinions. More accurately, they don’t have the right to do so if they operate under the academic freedom guidelines established nearly a century ago by the American Association of University Professors.
How did faculty members gain these special protections? In the United States, academic freedom began to gain institutional support during the Progressive Era, a period in which many placed a high value on the ability of disinterested expertise to solve social problems. Academic freedom was originally designed to advance such expert knowledge. The AAUP argued that faculty members needed professional autonomy in order to remain free of the corrupting influence of business interests, religious groups, political parties, and labor unions. Because only accredited specialists could judge the merit of academic work, peer review was a necessity.
By politicizing their work, Fish argues, faculty members weaken these philosophical justifications that protect academic freedom. If the broader public believes that professors at the universities they support promote a political agenda—rather than disinterested scholarship—the public will then have reasonable grounds to insert itself into decisions about research and teaching that had once been reserved for academic experts. The rationale for academic autonomy crumbles.
Not long after reading Fish’s recent column, I happened to come across a speech on academic freedom written by the militant historian Howard Zinn. As anyone at all familiar with Zinn’s work will probably have guessed, the speech promoted a vision of the academic enterprise diametrically opposed to the one articulated by Fish. Delivered to an audience of South African academics in 1982, the speech implored all scholars to fight against the temptations of political complacency. For Zinn, academic freedom had
always meant the right to insist that freedom be more than academic – that the university, because of its special claim to be a place for the pursuit of truth, be a place where we can challenge not only the ideas but the institutions, the practices of society, measuring them against millennia-old ideals of equality and justice.
From Zinn’s standpoint, any understanding of academic freedom that urged scholars to remain aloof from contemporary social struggles remained hollow to the core. Professional autonomy might have its place, but at what cost?
American higher education, Zinn insisted, had historically served the interests of the wealthy elites that dominated the worlds of big business and the state. As long as faculty members quietly went about their business—training the middle managers and professionals who would keep a deeply unequal society running smoothly—the powers that be would grant them a degree of autonomy and prestige. Should scholars really be content with this state of affairs?
Zinn also maintained that in attempting to remain apolitical, academics actually performed a disservice to scholarship. Under the guise of objectivity, academic standards often masked support for the status quo. These standards encouraged social scientists to put on blinders when they examined issues of racial, sexual, and class inequality. In the name of supposed neutrality, professional disciplines such as engineering and finance often eschewed questions of values altogether. This kind of thinking, he believed, helped encourage the mindset that led American academics to play important roles developing weapons and providing expertise for the Vietnam War.
Zinn used his own experience teaching at the historically black Spelman College in Atlanta, Georgia, in the 1950s and early 1960s to illuminate the limitations of a narrow view of academic freedom. The Spelman campus, he remembered, was beautiful. Ideas were openly discussed within college walls. However, faculty and students were expected to remain publicly silent on segregation. If they had expressed themselves publicly on this issue, it would have caused a scandal and threatened the college’s vaunted autonomy. With the rise of the Civil Rights Movement, Zinn explains, a critical mass of students and faculty stopped censoring themselves. They had realized that a measure of academic freedom within the college meant little if it was not accompanied by the right to fight for justice and equality on the outside too. In stark contrast to Fish, Zinn concludes,
I did not think I could talk about politics and history in the classroom, deal with war and peace, discuss the question of obligation to the state versus obligation to one’s brothers and sisters throughout the world, unless I demonstrated by my actions that these were not academic questions to be decided by scholarly disputation, but real ones to be decided in social struggle.
Zinn practiced what he preached. He served as a faculty advisor to SNCC in the early 1960s. In the 1970s, he engaged in sit-down strikes with campus workers at Boston University. In 1980, he produced one of the most famous and contentious works of revisionist scholarship in American history. Throughout his career, he devoted his writing and public life to exposing injustice. Due to his outspoken activism, he was trailed for decades by the FBI and at least one high-ranking member of his university tried to have him fired.
Is there a middle road between the radical commitment demanded by Zinn and the academic formalism celebrated by Fish? It seems to me that academics often produce first-rate scholarship that also happens to promote a political agenda. There are many works based on meticulous research and judicious reasoning that also make clear interventions into contentious public debates. Just in the past year or two, this appears to be the case in books as varied as Michelle Alexander’s The New Jim Crow: Mass Incarceration in the Age of Colorblindness, Jacob Hacker and Paul Pierson’s Winner-Take-All Politics: How Washington Made the Rich Richer—and Turned Its Back on the Middle Class, and Corey Robin’s The Reactionary Mind: Conservatism from Edmund Burke to Sarah Palin. The authors of these books have all received praise (and criticism) from their peers in academia, while also making important and pointed contributions to debates of major public significance.
Fish is right to the degree that the academy shouldn’t be a place that promotes political propaganda. On the other hand, it would be a sad state indeed if at least some academics didn’t also heed Zinn’s advice. We need more, not fewer, rigorous works of scholarship that deepen an often shallow public discourse on issues of crucial concern.