Archive for the ‘blog’ Category
William Cronon’s Shout-Out to (the original) PhD Octopus… and How That Relates to College Level Teaching
In this month’s issue of Perspectives on History, American Historical Association president William Cronon wrote an excellent piece on the need for professional historians to be trained for breadth along with depth, to be able to synthesize large amounts of material and ask (and maybe answer) big questions, along with the rigorous but narrow analysis that is typically embodied by dissertation research.
As an aside in this article, Cronon wrote “William James’s provocative 1903 essay, ‘The PhD Octopus,’ should still be required reading for all scholars.”
Since that’s the name of our little blog, I tend to agree. And what exactly does “The PhD Octopus” say?
James began his essay by telling of a “brilliant” graduate student in philosophy who had been teaching English literature at another university when it was discovered that he did not have a PhD, the “three magical letters” that were a requirement for a teaching position at the university. When the department told the student about the situation, he returned to the Harvard philosophy department and wrote a thesis. Yet James, a member of that department and dissertation committee, noted that they could not pass him.
And so James noted:
Brilliancy and originality by themselves won’t save a thesis for the doctorate; it must also exhibit a heavy technical apparatus of learning; and this our candidate had neglected to bring to bear. So, telling him that he was temporarily rejected, we advised him to pad out the thesis properly, and return with it next year, at the same time informing his new President that this signified nothing as to his merits, that he was of ultra-Ph.D. quality, and one of the strongest men with whom we had ever had to deal.
To our surprise we were given to understand in reply that the quality per se of the man signified nothing in this connection, and that the three magical letters were the thing seriously required. The College had always gloried in a list of faculty members who bore the doctor’s title, and to make a gap in the galaxy, and admit a common fox without a tail, would be a degradation impossible to be thought of. We wrote again, pointing out that a Ph.D. in philosophy would prove little anyhow as to one’s ability to teach literature; we sent separate letters in which we outdid each other in eulogy of our candidate’s powers, for indeed they were great; and at last, mirabile dictu, our eloquence prevailed. He was allowed to retain his appointment provisionally, on condition that one year later at the farthest his miserably naked name should be prolonged by the sacred appendage the lack of which had given so much trouble to all concerned.
This anecdote hits home because I’m about to embark on a college teaching job without my PhD in hand. Like many of my peers, I’ve had virtually no pedagogical training en route to my degree, except for learning by doing as a teaching assistant and as instructor in various courses along the way.
Today we launched the official Ph.D. Octopus Facebook page. We’re finally entering the 21st century, I guess. Heck, we haven’t even really decided how we’re spelling Ph.D. But I guess it’s fitting that I contribute this post along with that piece of news, and the above image, which we’ve been hiding for far too long, crafted by the lovely and talented Parisi Audchaevorakul.
See, over the weekend I was having a conversation with my new friend Holger Syme, a professor of English at University of Toronto. Holger also has a wonderful academic blog called Dispositio. And so we discussed our blogs. Eventually, the conversation turned to the horrendous state of the academic job market (as it does) and then to the process of acquiring those disappearing jobs, and getting tenure, and to the process of peer review.
For those who don’t know, peer review is the process by which academic work is rendered legitimate. In practical terms, it means that when we submit articles to academic journals, the article is reviewed by two of our peers, that is to say, by two other academics in our field, two similar specialists, who might be able to speak to the article’s accuracy, originality, and importance, and to the author’s general competence.
The goal of the system is for our peers to operate as gatekeepers. They are the ones who decide if the article is good enough to get in, and the number and quality of articles (and books) that we write determines the fellowships and jobs that we get, and whether we get tenure.
It’s not bad in principle. But there are problems. First, it’s never entirely clear that these two readers are actually experts in your field, or that their judgments are good. If your article is rejected by one journal, of course you can take it to another. But the reality is that two people may dislike your piece but a dozen other equally qualified “peers” might have loved it, and you have no way of knowing, because the peers are anonymous and the process is rather opaque.
Second, and perhaps more important, the process is painfully slow. Even if the two reviewers like your article, it might take weeks or even months for them to actually read it; then they send it back to you with the instruction “revise and resubmit,” and the process repeats itself. Actually getting it to print can take even longer. Sometimes it takes years before the actual discovery or innovation that your work produces ever sees the light of day, and that being the very dim light of an academic journal, which even at its most prestigious is read by very few people indeed.
What Holger did that so fascinated me was compare this peer review process to his own blogging. Because Holger has tenure, he can write (within reason) anything that he wants on his blog. He can share his academic work there. And so he does. And when he does, he gets responses in real time. If he provides a novel piece of research, say, a new analysis of one of Shakespeare’s plays, or even digital images of marginalia from the early 17th century, he can get comments, that is to say, peer reviews, immediately. Indeed, that is precisely what happened in the above post. Holger wrote it on December 21, 2011. Professor Martin Wiggins, of the University of Birmingham’s Shakespeare Institute, offered comments and corrections on December 22, 2011, the very next day. Then Holger edited the post, and thanked and responded to Dr. Wiggins in the comments.
Now, if Holger didn’t have tenure, and Professor Wiggins weren’t a nice person, he could have stolen Holger’s work and published it with the corrections, or simply published it first in a more reputable setting, and Holger’s path to tenure might have been thwarted. After all, we don’t get credit for our blog posts on the tenure clock. So for someone like me, or any of my non-tenured (or unemployed) co-bloggers, it might be academic suicide to publish our original research out here in cyberspace, rather than in a peer-reviewed journal, or in a book printed by a university press.
On the other hand, I wonder if, in the future, blogs such as these will sort of play the role that Sean Parker’s Napster did for the music industry. If we could all publish our work, safely, in real time, and have legitimate critics respond to it in real time, and edit it in real time, wouldn’t that be a more effective way of advancing scholarship?
This is not to say that peer review should be done away with entirely. But it seems like a community of academic bloggers should at least have some effect in speeding the process up, and ideally in making it more transparent and democratic as well. For example, suppose Dr. Martin Wiggins were simply Mr. Martin Wiggins, amateur Shakespeare buff, who knew enough to provide relevant criticism of Holger’s post. Theoretically, as long as the scholarship is sound, it shouldn’t really matter where it’s coming from.
That’s precisely the point of William James’s 1903 essay, “The Ph.D. Octopus”: that we should not fetishize degrees, like the Ph.D., but instead evaluate work, and academics, on their scholarly merit. We’re not quite there yet, and I’m not quite sure where “there” is. But I think I’d like to get there eventually.
Why am I not at all surprised to see blogger wunderkind—and member of the progressive establishment blogosphere in good standing—Matthew Yglesias advocate that people scab and cross a picket line? In this case, the dispute is over the strike at the Huffington Post. Two labor unions, the Newspaper Guild and the National Writers Union, have called a strike and a boycott of the Huffington Post over the fact that they, you know, don’t pay their workers. At the same time, my brother in the UAW, Jonathan Tasini, has filed a class action lawsuit against the Huffington Post on behalf of the unpaid bloggers. The activism was launched, of course, after it was announced that Arianna Huffington would make $315 million selling the website to AOL without paying a cent to the bloggers who helped make it popular.
A couple of quick points about the strike and then some broader thoughts:
First, a strike has been called by legitimate unions. You might disagree with the tactics, or even, as Yglesias claims, think that it’s counterproductive to the interests of the unpaid bloggers, but scabbing on a picket line (even a virtual one) is a serious matter. Unless you have damn good reasons, you should always trust the workers who have called a strike. I don’t see how anyone can consider themselves on the Left if they proudly cross a picket line.* It’s one thing to do that in private, or because you were unaware of the picket line. It’s another to publicly advocate scabbing while taking money from and publicly representing a (supposedly) progressive organization like the Center for American Progress.
Second, it’s easy to overthink the complexity of an issue like this. Stepping back, it is, like every other strike, a matter of class loyalties. Do you side with unpaid information-age workers, or with AOL, one of the biggest information conglomerates in the world? There is no way that poorly paid information workers will ever get a fair deal unless they organize and fight. You either side with them (like Erik Loomis does) or you side with the faceless multinational corporation (like Yglesias does, whether he intends to or not). There’s no neutral ground in cyberspace.
Erik Loomis, who originally called Yglesias out, compares the bloggers to grad students, and the comparison is apt. In both cases there are groups of information workers who are paid little or nothing on the empty promise that one day in the future they might get a cushy job. Like Fellow at the Center for American Progress Action Fund, or tenured professor. In both cases, though, the very fact that people are willing to work for free at the beginning of their careers erodes the need for people in those cushy jobs at the end of the career. In the case of academia, universities won’t hire lots of full-time professors when they can rely on TAs and adjuncts (who are all hoping one day to become professors) to teach all the classes. As universities need fewer and fewer professors, the competition becomes more and more fierce, and so grad students are more and more willing to take poorly paid jobs in order to pad their resumes. The vicious cycle continues.
Yglesias seems to think that because some people are willing to blog for free (as I do!), the bloggers at the Huffington Post shouldn’t strike. By that logic, of course, the workers at the Stella D’Oro factory shouldn’t have struck, since many people bake chocolate chip cookies for fun. The obvious difference between most blogs and the Huffington Post is that something different happens when your labor enters a marketplace and someone begins to profit off it. Especially when we’re talking $315 million in profit. I think most of us have a pretty good moral economy in which we understand that you have a moral claim on money that is being made off your labor.
Moreover, as writers on the (actual) left are well aware, the appropriation of cultural work by capital is a central aspect of neoliberalism. The sale of the Huffington Post can be put in the broader context of the enclosure of the digital commons. As Michael Hardt defines it, biopolitical production relies on “the production of ideas, affects, social relations and forms of life.” Neoliberalism is dependent on “capitalist strategies for privatizing the common through mechanisms such as patents and copyrights.” In other words, this isn’t the 1930s anymore, when the relevant fights were in the Ford factories. Today, increasingly labor is intellectual labor, about manipulating data, ideas, and culture. (Of course, there were labor fights in the cultural industry in the 1930s too, I know, but I’m simplifying for the sake of the argument.) The Left has to respond to these changes in labor by finding ways to organize and advocate for workers in these information industries.
But in a broader sense, as Loomis originally pointed out, this whole issue is symptomatic of a larger issue: the decline of the progressive blogosphere as an independent and outsider voice. Yglesias, of course, is a perfect symbol for this. Today, from his nice perch in the Center for American Progress, he spends most of his time lecturing Teachers’ Unions on why we need to privatize our education system, advocating deregulation of one form or another, and asking himself, for every social problem, “what would my econ 101 textbook say?” If he isn’t writing for the New Republic and hosting cocktail parties in 10 years I’ll be shocked. His good friend Ezra Klein, although not as eagerly neoliberal, openly celebrates the technocratic worldview, seemingly unaware that bureaucratic efficiency is not the same thing as justice, equality, and dignity.
The other blogs have either become election-oriented advocacy sites (like Dailykos) or gotten sucked up into one establishment think tank or establishment magazine or another. The anti-establishment tone, the sense that the Democratic Party had to be saved from itself, the skepticism of elites, etc., are all mostly gone now, replaced by this weird fetishism of the technocrat or a sadly predictable loyalty to the Democratic Party. It’s true there are still a couple of holdovers from the original wave of bloggers: your Atrioses, Digbys, and Glenn Greenwalds, who keep the flame alive. But it’s nearly impossible for new voices to enter the conversation the way they once did.
And ironically, of course, it is this very system—the control of the blogosphere by established writers and institutions—that shuts out new bloggers and forces them to do humiliating things like work for free for the Huffington Post in the vain hope that they will one day be able to make a living out of it.
* The only exceptions I see are if the strike is a hate-strike, in which case you should cross the line, or rare instances in which some conservative workers strike to derail socialist policies, as when the Canadian doctors tried to prevent the single payer system from being implemented, or the CIA funded anti-Allende strikes in Chile. In those cases a broader solidarity trumps the solidarity you owe the workers. Obviously a virtual picket line can be more difficult to respect than a flesh and blood one. I certainly have followed links to the Huffington Post, and see no purpose in getting too self-righteous about people still reading it now and then. But I certainly won’t advocate it. And certainly won’t advocate the far greater action of writing for it.
We’ve pretty much gotten past the point where cultural commentators decry the internet for dumbing down ideas and have entered an era where we ask more complex questions about what sort of cultural, political, and social work the Internet, particularly its intellectual and literary production [blogs, online mags, newspapers] and social networking features, does. Note that the two actually overlap quite a bit: fodder for thought is exchanged via Facebook walls (the undoubtedly apocryphal story of the founding of this blog claims such exchanges as inspiration) while blogs foment social connections (I can count a number of friendships that have directly or indirectly resulted from my writing here).
Weiner’s recent post on Facebook and Tunisia and the discussion that ensued is an example of how we’ve begun to conceive of the Internet as at least Janus-faced. The attempt to historicize the internet by claiming forebears, such as Robert Darnton’s 18th century French scandal sheets, what he calls “pre-modern blogs,” signals that historians are ready to look at the internet as a text, a point of power exchange, a site of identity construction, and so on.
It’s been interesting reading through some historical literature on recent technologies and social movements to see how historians have conceptualized a technology that is still very much in the making. Then again, so is the book. The question is not when you should write a history of something that appears “new,” but how you can write it so that its newness doesn’t drive your narrative. One way to remain cautious yet still attentive to the significance of the technology might be to think of it in terms of Foucauldian beginnings, which imply historical difference, rather than of origins, which presume causation.
A failed attempt to think about the historical significance of new communications technologies, like the cell phone or the internet, in a nuanced way is found in the otherwise good book by Mikael Hard and Andrew Jamison, Hubris and Hybrids: A Cultural History of Technology and Science. I won’t even get into the problem of the authors’ underlying premise that modern technology has engendered cyborg hybridity [it might be interesting to think about why we think about it like that, but not to actually appropriate the concept as your own analytical tool. After all, wasn't the first time a human picked up a hammer an instance of techno-human hybridity?]
The following description of cell phones from Hubris and Hybrids rehearses simplistic themes of technological alienation and superficiality, completely detached from concurrent phenomena:
The irony seems to be that as we communicate more with more people, the content of this communication becomes ever more superficial. Cell phones definitely allow greater flexibility and the appropriation of new spaces, but do they really guarantee closer contacts and more intimate relations?
These seem silly questions, and my use of Skype this summer to stay connected with family and friends is an easy anecdotal counter. Which perhaps points to another thing to be aware of when writing histories of things that your readers will consider contemporary to them: everyone has a countering anecdote.
The other, and for me more interesting, example of a historian looking at the effect of the internet was in Ilse Lenz’s Die Neue Frauenbewegung in Deutschland [The New Women's Movement in Germany], where she briefly discusses how the internet in the late 1990s helped to demonstrate the constructedness of gender, as men and women were able to moonlight as various, multiple sexual and gender identities online in ways they were typically unable to in their public lives.
If we follow Judith Butler and see gender as a performance, an identity that is constituted through repetitive stylized acts, a repetition that both fortifies and undercuts identity as small but potentially significant changes are introduced in the performance, then what has the internet contributed to our conceptualization of gender identities? Some might say that the internet’s wide distribution of hard core porn has re-awoken a primal violent male sexuality [I'll get to that in a different post]. Carl Elliott’s Better than Well describes how the internet allows marginalized queer (in a broad sense) identities to form biosocial communities, thus strengthening such identities against a normalizing world.
I think though that Lenz is describing something different. Elliott’s queer communities conceive of themselves as embodying stabilized, if socially unacceptable, identities. The internet is a tool that allows these identities to be more easily expressed and supported, though it may also help construct such identities as individuals reinterpret their experiences into ossified identities as a result of the “semantic contagion” facilitated by the internet. Still there is no theorizing on how the internet contributes to the subtle shifting of such identities.
So what role does the internet play in the performance of gender? Does an individual’s gender identity change as a result of one’s internet performance, in line with Butler’s minuscule but potentially consequential shifts? And is it only those who take on extravagantly different online identities who contribute to these shifts, or do we all?
Let’s take me for an example. Though only briefly. Both because I haven’t actually self-analyzed my gendered blogging experience too much and because I lack a certain desire to publicly plumb my gender identity online — undoubtedly this has something to do with the way I was socialized as a heterosexual female within the cultural milieu in which I was raised. I give you all leave to totally over-analyze the ambivalence, skittishness, and swift ducking behind theory all contained in this last paragraph at will.
When I began to blog over at my old, short-lived personal blog this summer, Something Pending, I made a deliberate effort, one that lasted perhaps a week, to obscure my gender. Though perhaps I was wrong, I thought the pseudonym Luce could go either way on the binary: perhaps it was short for Lucy (it’s not), perhaps it referred to Henry Luce, the American publisher (it didn’t). My blog design was chosen for its grayness, its ugliness, and what I thought was a sort of gender-neutral tech-iness. I described myself as doing “Criss-crossed thinking on reproduction, technology, and the law. With some historian speak. And maybe a few stray thoughts about my research and travels…” Was “reproduction” a dead give-away? Maybe, depending on my readers’ biases. But as someone who is quite aware of her audience, the lack of obvious gender added another layer of anonymity. I’ve wondered whether it was this gender neutrality or the general anonymity that allowed me to take on what felt like an atypically aggressive tone early on. Of course, that tone has now just become part of my identity as a blogger.
So here are some questions: was I erasing myself or was I passing? In being ‘gender neutral’ was I by default perceived, or did I perceive myself as, masculine? That is, on the internet: if not feminine, then masculine?
Secondly, does the internet mean the death of the subject, and relatedly, of the author? Given the cloak of anonymity, do we take the opportunity to masquerade through a number of guises, or do we assume a stabilized one? Or do we construct an even more strongly subjective identity than we do in other spheres of life, because we have more control over whom we interact with, more time to consider our written responses, more agency in determining which topics we will and will not engage with?
Robert Darnton made a passing “death of the author” argument about contemporary blogs at a talk I went to recently. But I’m unconvinced that this is the case. Blogs are most often about pseudonymity rather than anonymity after all. And those pseudonyms develop their own voices, often tied to their “real” world identity. My gender neutrality didn’t last that long because I really wanted to talk about abortion and reproduction from an obviously gendered (female) perspective. Maybe I could have challenged notions of femininity and masculinity if I had engaged passionately with these topics while “passing” as male. But at the end of the day, I’ve never been a very good actor.
If you take a look at the lovely “sticker” on the right side of the screen (thanks Luce for figuring out how to put that up there), you’ll see that PhD Octopus just won the History News Network’s 6th annual Cliopatria award for “Best New Blog” of 2010. Hooray! We’re really excited and honoured (Nemo and I with a “u,” Wiz and Luce without) to receive this award. We’d like to thank the History News Network at George Mason University for selecting us, and especially this year’s judges. We’d like to thank our professors and fellow doctoral students, for keeping us informed, engaged, inspired, and entertained. We’d also like to thank you, our readers, for reading and following and commenting. We hope to provide more fresh, fun, and interesting posts in the new year.
That’s probably the closest I’ll ever come to an award show acceptance speech.
Here is my review of The Social Network. I guess I should start by offering a spoiler alert.
I thoroughly enjoyed the movie. While I may not have been as impressed as the countless critics who have heaped mountains of praise upon it, I found it extremely entertaining and thought-provoking. Unlike many movies nowadays, The Social Network was not overly long (a brisk two hours), and though it could probably have been a tad shorter, I was never bored. The acting is excellent all around. The dialogue is slick and fun, if occasionally a bit forced and contrived. I will say that the movie will undoubtedly be more enjoyable to those with some affiliation with or knowledge of Harvard, but I would recommend it to all, especially to those 500 million of us who use Facebook.
It’s important to distinguish between the real Facebook and the fictional Facebook of The Social Network, just as it’s important to distinguish between the real and fictional Mark Zuckerbergs and the real and fictional Harvard Universities.
The real Facebook has its critics, mostly on issues of privacy. Nonetheless, I am a huge fan of the website. Its practical uses are numerous: keeping in touch with old friends, making new ones, sharing photos. For my purposes, Facebook has also served as an intellectual forum. My friends share thoughts and articles, others respond and raucous but intelligent debate often ensues (the debates are sometimes mindless and annoying, but overall a net positive). Indeed, it was because of these very Facebook wall posts and debates that Wiz approached Wotty and me to start this blog: he accurately noted that it was something that we already did on Facebook, so we might as well make it more organized and official. I share all my posts through my Facebook and Twitter accounts. So for those of you who enjoy PhD Octopus, you have Facebook–and Mark Zuckerberg–to thank.
As for the real Mark Zuckerberg, I can only offer limited comment. Though we overlapped at Harvard, I never met him, though in the interest of full disclosure, I did go on two dates with his sister Randi (I had a good time and I think she did, though nothing ever came of it). But since I never met him, I can only go by what I’ve heard and what I’ve read. I’ll admit that in most articles, especially this one from The New Yorker, he comes off very badly. The judgment of a 19 year old is not the same as that of a more mature adult, but it’s also true that many people don’t change all that much over the course of their lives.
One thing that the real Mark Zuckerberg and the fictional Mark Zuckerberg seem to have in common is that they aren’t all that concerned with money. For Harvard graduates there are plenty of tried, tested and true routes to financial reward, the most common being investment banking and consulting. But from what I’ve read, it seems that Zuckerberg did not invent Facebook for the money so much as for the power, and for the desire to leave his mark on the world. To me, that’s somehow more admirable, or at least less douchey.
Apart from this similarity, there appear to be obvious differences between the real and fictional Mark Zuckerbergs. Other critics who know more than I have documented the movie’s falsehoods more effectively than I will here, but suffice it to say that Zuckerberg never had any real interest in Final Clubs and had a serious girlfriend throughout most of the time depicted in the film. He did not invent Facebook to get back at one girl or win over another, or even to become more popular. He did so to fill a demand of Harvard social life–I remember people were annoyed they did not have access to other dorms’ internal university-run facebook sites–and because it was a great idea with potential for growth. Facemash, the first site Zuckerberg invented at Harvard, portrayed as sexist in The Social Network, in fact had pictures of both men and women.
This inaccuracy has led critics to point to the movie’s misogyny, and they raise a good point. Even if the story of Facebook is not an inherently male story, it is a story whose principal characters are all men, and Aaron Sorkin and David Fincher would probably have been wiser to give women an even smaller role than to portray them in the offensive manner in which they did.
My own prejudices led me to sympathize with the fictional Zuckerberg. The character, portrayed brilliantly by Jesse Eisenberg, is at the center of a story that may not have gotten Facebook’s founding right, but certainly painted an accurate, if exaggerated portrait of Harvard undergraduate life.
I can attest to the spectacular lameness of AEPi parties. Alpha Epsilon Pi, the Jewish fraternity, has a mixed reputation nationally, but an especially bad one at Harvard, where frats are considered the poor man’s Final Clubs. In some ways, this was literally true: the frats at Harvard, because they did not have fancy mansions right by campus, appealed to a less elite, or elitist, clientele. As a result, they were generally eager to attract members, and were basically inclusive rather than exclusive. I never joined one, but appreciated them for that.
The Final Clubs were another matter. If the wild party depicted early in The Social Network was a bit over the top, it’s also true that clubs were well known as places to get alcohol when bars were closed, along with cocaine and other drugs. A member always guarded the door, filled with undeserved power and authority, determining which students could enter, always preferring women to men (the more attractive the better) and generally distinguishing between the cool and the undesired, from the Club’s perspective of course.
The sexism and misogyny of the Clubs is real and has been written about extensively. It is both unacceptable and pathetic. Yale’s Skull and Bones is co-ed now, as are Princeton’s Eating Clubs. I have no doubt in a generation or less, the Final Clubs will finally admit women as well.
The shameless classism bothered me much more. After all, female Final Clubs did and do exist at Harvard (with much less power and money at their disposal), but their membership is, in my mind, just as unpleasant as the male variety: generally obnoxious and wealthy. This was not universally true, and some Club members were good people (some of my best friends were in Final Clubs!) but as a general rule it held. I had no desire to join one. Moreover, I could never afford to be in a club on my own, and I was not about to ask my middle-class parents for a few hundred dollars a semester so I could get drunk with a bunch of d-bags and prey on similarly unappealing women. And that’s if any Club would have deemed me cool enough to be a member, which is highly unlikely.
With my own anti-Club bias, I found The Social Network‘s relatively favourable impression of Cameron and Tyler Winklevoss upsetting. I never knew the cartoonish would-be villains in real life, but I have a hard time imagining that they were anything other than sensational douchebags.
Not only did they enjoy tremendous inherited wealth and privilege, and the undeserving prestige that comes from membership in the Porcellian (regarded as the most elite Final Club), but they also benefited from excessive athlete worship, ever-present at Harvard, if not as pervasive as at places like Duke, where Fuck Lists are made that venerate varsity athletes as demi-gods. I am reminded of Alexander Portnoy, the protagonist of Philip Roth’s Portnoy’s Complaint, who lamented the existence of WASP men, “engaging, good-natured, confident, clean, swift and powerful halfbacks for the college football teams called Northwestern and Texas Christian and UCLA,” guys who always got the girls ahead of the alienated Jewish intellectuals.
And so I couldn’t help but root for the fictional Zuckerberg, who put the WASPy athletic Winklevii in their place. There is a Jewish angle to the film, mocked by Nate Heller as “a Jewish underclass striving beneath the heel of a WASP-centric, socially draconian culture.” And yet I think the tale the movie tells, if not quite accurate as a portrayal of Harvard in 2003, is nonetheless important when looked at through a different lens.
There’s a sense in which “new money” is “Jew money.” The Jewish immigrant, first from German lands and later from eastern Europe, had an enormous and disproportionate impact on the American economy. And so the fictional Zuckerberg enters the Harvard universe as a dorky outsider, only to turn the WASP world upside down, to the point where he mockingly proclaims that he could buy a Final Club himself.
This Zuckerberg’s most astute observation may be when he remarks that the old-money Winklevii weren’t upset about not getting their website or their millions, but they were upset because for once in their lives, things didn’t go their way. The word “entitlement” comes to mind, and no scene better encapsulates this than their meeting with then Harvard president Larry Summers, who tells them to quit whining and come up with their own idea.
The real Larry Summers is some kind of genius. He is also a man without many social graces, and actor Douglas Urbanski captures this perfectly. Though Summers has been criticized on this blog, and I’m no fan of him as an economist, I liked him as president of Harvard, as did the majority of the undergraduate population. I admired his opposition to the anti-Israel Divestment campaign, his drive to increase financial aid, his belief in over-hauling the Core Curriculum, his support for the hard sciences, and his skepticism towards area studies. I also loved the way he would down 4 slices of pizza in a sitting at off-the-record sessions with The Harvard Crimson, and I find it hilarious that he frequently falls asleep at meetings.
Beyond all this, however, the character of Summers–Harvard’s first Jewish president–fits in perfectly with Sorkin and Fincher’s anti-WASP narrative. As reviewer David Denby of The New Yorker describes the movie’s Summers–Winklevoss encounter:
one can feel, in this seemingly unimportant scene, history falling into place, a shift from one kind of capitalism to another. Fincher and Sorkin wickedly imply that Summers is Zuckerberg thirty years older and many pounds heavier. He has the same kind of brightest-guy-in-the-room arrogance, and little sympathy for entitled young men talking about ethics when they’ve been left behind by a faster innovator.
It would be nice to think of Zuckerberg as a sort of Jewish Horatio Alger type in 2003. Truth gets in the way of course: the real Zuckerberg comes from an upper-middle-class Jewish family; his sister went to Horace Mann and he went to Exeter. When I was at Harvard, many Jews were on the inside of Final Clubs looking out. The same is true today. Jews are over-represented (based on their proportion of the population) and extremely comfortable at America’s elite institutions.
Nonetheless, the story in the movie works, though Sorkin takes some license to make it work especially smoothly: Divya Narendra, the Winklevosses’ South Asian sidekick, is portrayed as something of a nebbishy outsider himself; the real Narendra is athletic and handsome (I met him in the summer of 2002, before any of this went down). At the very beginning of the movie, Zuckerberg makes fun of a fictional ex-girlfriend, “Erica Albright,” noting that her last name used to be Albrecht, as her family too sought entry into a more elite, more gentile realm.
Zuckerberg’s opening conversation with Albright may be the most realistic scene in the movie, not for the too-sharp yet entertaining dialogue, but for the disdain that so many Harvard students hold towards less selective universities and the people who attend them. I noticed this when I was there; I notice it even more today. I’m an elitist, and I think a certain amount of elitism is ok and even good, but Harvard probably goes too far, telling its students over and over that they are “the best and the brightest” from day one. It often turns smart people into worse human beings. Though economist Greg Mankiw disagrees, Matt Yglesias notes, “most Harvard undergraduates are pretty unlikeable.” I think an important reason for this is that Harvard becomes the most important part of their identity. This effect can be resisted, but only with difficulty. The best treatment is repeated and constant exposure to less elitist individuals, though it can take several years to cure. This, more than anything, is what Sorkin and Fincher got right about The Social Network.
Once again, Octopi are cooler than the animal you named your blog after. This time: an octopus which is predicting the World Cup games. Correctly.
It does raise the question: who thought to themselves, “Well, I really want to know who will win the World Cup… and I’ve got this Octopus lying around… Let’s see what he thinks about whether Germany or England will win.”
I guess the answer, as in so much in life, is that British people are weird.
More pictures here.
We started with Wcubed, but that doesn’t work anymore, as we’ve added a fourth whose name doesn’t begin with w. We moved to CaNaRd, but that conjured up all sorts of quackery.
So now we’re at Ph.D. Octopus.
The name comes from pragmatist philosopher William James’s 1903 article, which laments the pursuit of the doctoral degree as a “love of titles” unbefitting a country that spurned knighthood and nobility.
This is not to say that we doctoral students of history believe the Ph.D. to be a mere “academic bauble.” We look forward to earning our doctorates (and hopefully tenure-track positions not long after that) and entering the Ivory Tower of Higher Learning.
But, like James, I think I can speak for my fellow bloggers (at least some of them) in asserting that, to borrow the old cliché, we value learning for learning’s sake. We believe that universities “ought to keep truth and disinterested labor always in the foreground.”
At the same time, like James, I think we’ve all been inspired by the rallying cry of the Dreyfusards, to be pragmatic with our historical knowledge, to apply what we’ve learned in the classroom (and the libraries and archives) in order to “know a good man” (or woman) when we see him (or her). So, as best we can, we’ll try to incorporate the historical facts, theories, and methods we’ve learned in school in our posts about modern politics and pop culture and everything in between.
And so, with both the snobbish elitism and the ever-present humility and self-doubt befitting a bunch of under-paid and over-educated graduate students, we present you with “Ph.D. Octopus.” Enjoy.