April 9, 2014
As I read Saul Austerlitz’s extended hysterical diatribe against “poptimism,” I almost thought I was reading a parody. After all, who wants to bring back discourses of authenticity/quality/“good music” that privilege white dudes who make mediocre music above women and people of color, who dominate pop music’s landscape?
I could use this blog space to argue why Austerlitz is wrong musically. There are many things that pop music offers that traditional rock does not: crisp production styles; hybridization of genres; timbres that extend the concept of what music is; skilled session players; and songwriters who hail audiences that, yes, include the 13-year-olds whom Austerlitz uses to dismiss current pop critics. I could point out that in the canon of poptimism, there are a hell of a lot of examples of good music, whether it’s Chic’s live-instrument take on disco, grounded in Nile Rodgers’s guitars and Bernard Edwards’s bass lines; or 1980s synth-pop’s use of interlocking melodies, made all the more impressive when one takes into consideration the limits of the technology producing them; or, hell, Michael Jackson’s Thriller, which indeed won the Pazz and Jop poll. Or I could point out that the Beyoncé album that Austerlitz uses as his entrée to the topic contains complicated, extended song forms, employs a variety of songwriters, and finally puts Beyoncé’s amazing vocal range to good use. Or I could point out that Beyoncé and Justin Timberlake both employ fantastic live bands and pump up the arrangements with skillful playing. Or I could just shout the name “JANELLE MONAE” five hundred times, because there’s a woman who writes pop songs and isn’t afraid of jazz harmonies, sounding like Michael Jackson, or mixing genres in a giant blender.
Or, in contrast, I could point out that all of the indie rock bands he mentions as “daring” are old-fashioned recyclers—even the bands he mentions that I personally enjoy, like Speedy Ortiz, are guilty of that. Or I could point out that a lot of other indie rock bands are moving back and forth into the pop space, too, by writing songs for Beyoncé, or sounding like Fleetwood Mac and Wilson Phillips, or writing songs that draw sonically from R&B and then are covered by indie-loving R&B artists. And I could point out that anything interesting in indie rock in the past ten years has pretty much come from this kind of cross-pollination.
But, you know, that’s a music critic’s argument—and I’m not a music critic. (Though I’d be happy to write about the sonic qualities of the music all damned day. Another time!) Rather, I’d like to look at this purely from the perspective of a feminist ethnomusicologist who focuses on the history of pop music from the postwar era to the present. In short, I’d rather focus on how music criticism is always a product of a critic’s habitus, i.e. their particular social position that incorporates a variety of aspects of identity as well as cultural capital. (That’s a very short gloss on Bourdieu, but this is a blog post and I’m not going to outline all of Distinction here, TYVM). In recent years, though, the old cultural-capital categories of “high,” “low,” and “middlebrow” culture have shifted into what sociologists Richard A. Peterson and Roger Kern have labeled “omnivorous consumption.” Omnivorous consumption may sound indiscriminate, but the term actually implies that a different set of distinctions is involved, drawing on both high and low taste cultures. (And, really, Austerlitz is someone who should be very much aware of this. He wrote a book on sitcoms!)
Contrary to what Austerlitz argues, the shift toward omnivorous tastes hasn’t led to an indiscriminate, disproportionate focus on pop, but to a recognition that there are more forms worthy of attention—and, yes, criticism—than just white, middle-class, male-dominated indie rock. That includes, but is not limited to, pop music (and even indie rock—if you look at Pazz and Jop winners, you’ll see that they’re still well represented). It actually means that more forms of music are evaluated than before. “Poptimism” is just one aspect of omnivorous consumption; in terms of Pazz and Jop, it’s also meant that artists like Kanye West have landed in the number 1 spot (more than once).
Even people like Austerlitz, who cling to their indie rock like so much guns and religion, think that it’s fine when music critics cull from high and low in the “long view” of history—very few people would argue that women and people of color got a lot of respect in the early to mid-20th century. This has made a huge difference in the recognition of styles that had previously been shunted from the historical record, including country music and R&B. Jody Rosen, whom Austerlitz “admires” but feels free to criticize, is a wonderful example of someone who is aware of the need to include a wide variety of musics (and people who make them) in the historical record of popular music; Rosen’s recent “100 Years of Pop in New York” for New York magazine is a great example of balancing race and gender in a list that could easily have been dominated by white dudes. But placing those issues in a historical framework—and only in a historical framework—implies that the conditions of the past do not affect the conditions of the present; additionally, it often allows some critics (definitely not Rosen, to be clear) to think less critically about the present.
And so, we get articles like Austerlitz’s. It’s important to remember that even while there’s a general trend toward omnivorous tastes, not everyone’s going to develop them, and not everyone’s going to understand the new distinctions. In large part, that’s what his critique of “poptimism” is about—resistance to new rules that determine musical worth in cultural terms. (And that’s fine—like what you like!)
But when Austerlitz implies that 1) the critics have the taste of 13-year-olds and 2) there’s no criticism in poptimism, there’s something else going on that’s equally worthy of attention. These two implications have something to do with misunderstanding omnivorous tastes, but they also have a lot to do with gender–of the audience, of the critic, and of the artists.
First, the missing word after “13-year-olds” is “girls,” because, let’s face it, the pop music audience is always gendered female. The artists he mentions are women with audiences of teen girls, from Britney Spears to Lorde to Lady Gaga to Katy Perry to Beyoncé to Sky Ferreira to Icona Pop. There’s only one dude mentioned—Robin Thicke—and he has a primarily female audience. If Austerlitz had included artists that 13-year-old boys liked, then I might be willing to give him a pass. But, yes, this is about gender at its core.
The tastes of 13-year-old girls are usually the most easily maligned, whether in pop music or in books or in films. That’s why Austerlitz reaches for the comparison—it’s a gendered slam against the critics who embraced poptimism earliest, one that lets him avoid actually coming out as a sexist.
However, if we take the long view of history, we can see that 13-year-olds (especially girls) can have some pretty damned good taste (in terms of taste as a cultural construct, of course). Thirteen-year-old girls were the first fans of Elvis, the Beatles, the Rolling Stones, the Beach Boys. Thirteen-year-old girls were the target audience for Girl Groups of the early 1960s; while that era was long critically dismissed (aside from a passing acknowledgment of the “genius” of Phil Spector), no one in their right mind would do so today—and it’s not due to “poptimism,” but the long-term covering of Girl Group songs in other genres, from punk to indie rock. (Of course, sometimes teen girls are wrong: If you watch Don’t Look Back, you’ll see a conversation between a teenage girl and Bob Dylan. Like many folk critics of the time, she tells him he was better before he went electric.)
Second, when Austerlitz says there’s no criticism in pop music criticism—that it’s all a celebratory mush of pop excess and fashion and lifestyle, rather than a reflection of a broadened taste palette for music critics—he’s dismissing a lot of writers, especially women. Like, oh, say Ann Powers, who has a very long history of covering both pop and rock with a critical eye, or Maura Johnston, who broadened the taste of the Village Voice in her tenure there. So it’s about who’s doing the criticizing, too, and why he might get away with comparing them to 13-year-olds.
Finally, the view that criticism about pop isn’t real criticism is also about the gender of the artist; again, note that the overwhelming majority of artists he dismisses are women. Austerlitz’s parallels to literature reveal that he’s not a very broad reader despite earning money from reviewing books, or else he’d know that this same debate has been raging there for a long time, too. He doesn’t seem to know that authors such as Jennifer Weiner have called out reviews sections in The New York Times and the New York Review of Books for disproportionately choosing to review books by male authors. He doesn’t know that what’s considered “literature”—like what’s traditionally been considered “good music”—has everything to do with the gender of the author, and not the content of the book (though I will give him Dan Brown and Stephenie Meyer). He doesn’t know that YA author Maureen Johnson did a series of gender-swapped book covers that demonstrated the arbitrariness of “boy books” and “girl books” for that teen audience he so flippantly dismisses.
So, there’s definitely something else about Austerlitz’s habitus at work here: it’s the insecurity of the white, male critical voice in a world that has opened up to women—as audience members, as artists, and as critics—as much as it has opened up to tastes that draw from both high and low culture. While I don’t really care what Austerlitz listens to in his free time, I do wish he’d be a little less certain that his kind of taste is the only “good” taste out there.
Yes, I know some indie rock bands do this, too—but it’s not like Austerlitz was citing them. He’s talking about The National and The Strokes.
March 18, 2014
Back when Heathers was released in 1988, I didn’t see it–pretty much no one did, because it was a flop. But, starting about a year later, I would see it a lot. My older sister and her friends would hold Heathers-themed parties, where they would eat spaghetti (lots of oregano) and corn nuts, and drink blue Kool-Aid (a stand-in for the drain cleaner that killed Heather Chandler) and Perrier (the drink that cemented Kurt and Ram’s homosexuality).
Eventually, they let me join in. I was three years younger than my sister, and this was huge. It meant that I wasn’t being seen as the annoying little sister anymore–at least not always. I would have watched Heathers with them every time even if I hadn’t loved it–but I did, and that made it so much the better. The movie was dark and funny satire, bringing life to all the stereotypes of high school. Even though the characters were totally campy at times, they weren’t entirely cartoons, but somehow perfect representations of late 1980s teen angst bullshit.
So, when I heard that a Heathers musical was in development, I wasn’t sure which elements of the film would translate well. Camp? Yes. Teen angst bullshit? Maybe not. The thing that worried me the most was that the genuine, teen angst core at the heart of Heathers’ satire would get lost in a haze of 1980s nostalgia–I couldn’t see how that particular ache would survive the translation to musical numbers and the streamlining that musical theatre always necessitates.*
Now, I’m happy to say that I was wrong. Heathers the Musical does amp up the camp–there’s a super-catchy number called “My Dead Gay Son”–but it doesn’t lose the core of genuine feeling. In large part, this is due to Barrett Wilbert Weed’s portrayal of Veronica, which offers a more sympathetic, less jaded, and more nuanced characterization than in the film. The lack of jadedness in the musical is, I think, a result of our collective ’80s nostalgia: no one quite wants to admit just how jaded the late ’80s were (cf. The Goldbergs), so productions that draw on those memories somewhat smooth them out. I might have a problem with how Heathers the Musical sometimes makes Veronica more innocent (and less complicit) than the film did, except for the fact that Weed is so damned good in the role.
Weed is a very different Veronica than Winona Ryder was. When we meet HTM’s Veronica, she’s still a nerdy outsider, planning a weekend of movie-watching with best friend Martha Dunstock (the musical collapses Betty Finn and Martha into one, a change which both makes sense and gives the excellent Katie Ladner a larger role in the production). Veronica’s transformation from outcast to fourth (maybe third) most popular girl in school gives Weed a chance to portray a greater range of feelings, from guilt to ambivalence to glee at finally being popular, than Winona Ryder’s jaded Veronica, who has already been absorbed into the Heathers’ world. At times, this difference is a pretty major reconstruction of character: Weed’s Veronica excitedly loses her virginity to J.D.; Ryder’s Veronica schools Betty Finn on how sex is really not that exciting. At other times, it gives Weed a chance to shine, especially through musical performance. In the song “Dead Girl Walking,” Weed’s voice powerfully conveys the sense of anger, fear, and frustration that only getting kicked out of the most powerful clique in school can inspire.
Another place where Veronica’s innocence comes into play is with J.D. When we first meet the dark horse prom contender in the film, he points a gun at Ram and Kurt; in the musical, no gun, just some (admittedly amazing) choreographed fighting. Was a gun too much in this day and age? Or is it just so unbelievable that our more innocent Veronica would fall for a dude who appears at least a little psychotic in his first scene? Also, I’m not sure I would believe that film Veronica ever thought she was in love with film J.D., but the more wide-eyed stage Veronica declares her love quite easily. Again, I don’t think this would work if Winona Ryder played Veronica this way, but with Weed, it makes more sense.
The Heathers themselves start off a little more vicious and a little more cartoon-y than their movie versions (well, except for Heather Chandler, who was always vicious). Last night, Charissa Hoagland stood in for Heather Chandler, the queen bee of all queen bees (usually portrayed by Jessica Keenan Wynn). Hoagland’s Heather was the strongest performance of the three, especially in the first act song “Candy Store.” Hoagland brought a palpable bitchiness to the role, which made it a shame that, well, she’s the first Heather to die. (But it’s also nice that she gets to return as a ghost.)
Other stand-outs in the cast are Evan Todd (Kurt Kelly) and Jon Eidson (Ram Sweeney), who spend most of the second act as ghosts in their underwear, and Katie Ladner (Martha). Kurt and Ram’s song “Blue” is, well, the best song about blue balls I’ve ever heard (not that I listen to that many of them, but…). HTM gives a little more insight into Kurt and Ram than the film, giving them both aggressively macho dads (who have their own, er, moment in “My Dead Gay Son”). Katie Ladner infuses Martha with Betty Finn’s innocence and an eager boy-craziness all her own. Even though a very bitchy guy behind me suggested that her second act solo, “Kindergarten Boyfriend,” wasn’t necessary, I disagree: it was worth it for the opportunity to get a few more minutes of Ladner on stage, especially since she gets to show off her wide range.
While Heathers the Musical isn’t the Heathers of my youth, it still captures the fear and dread of high school in a campy, yet resonant, way. Oh, and the music is fantastic–if they release a cast recording, I’ll review it here.
A final note on nostalgia: while I enjoyed the music before the show, it was of a certain ’80s style that no one would have been listening to in 1988/89. This lack of historical specificity always bothers me, but most people will just revel in the catchy pre-show tunes. But, occasionally this happens in the show, too. For example, in the party scene, the Hipster Dork is wearing a Depeche Mode Violator t-shirt. That album was released on March 19, 1990, making it unlikely that anyone would wear said t-shirt to a party in 1989.
*Mind you, this streamlining isn’t necessarily bad. I have a great deal more sympathy for the Sally Bowles of Cabaret than the Sally Bowles of Goodbye to Berlin.
February 10, 2014
Back when I was looking at different grad schools, I knew that I was coming in with some disadvantages. My background was in music history, with an almost-double-major in journalism, but I didn’t have the background in social or cultural theory that a lot of people from fancier institutions almost certainly had. But I made a decision to acknowledge this void, rather than try to hide it. It was a gamble, I knew, but I figured that people would either appreciate this boldness or not.*
So, everywhere I went, I asked everyone the same question: What books would you recommend to someone in my position?
A lot of them recommended traditional ethnomusicology texts. Sure, that was fine, but whatever; I had already figured those out, and they were for the most part as un-theoretical as my undergrad experience. Two people at Columbia recommended the same book, though: Resistance Through Rituals, edited by Stuart Hall and Tony Jefferson. Both of the dudes recommending it seemed pretty cool (one of them would later become my dissertation advisor), and they actually got my question.
I’m not joking when I say that the book changed my outlook on popular music studies. And, though 22-year-old me thought everything in the book was ridiculously dated, it still offered a sense of possibility for taking popular culture as a serious object of study. And, as a bonus, all the bad rock music criticism that attempted to do sociology now made sense to me.
Armed with the Birmingham School and its descendants (especially the work of Angela McRobbie), I now had a research framework for my grad school applications. And, though I ended up moving ever more resolutely in the direction of feminist and queer theory, I’ve built that on a foundation of cultural studies that understands identity’s importance within popular culture.
And, for that, I have to thank Stuart Hall (and Tony Jefferson and Angela McRobbie and pretty much anyone in the first two generations of British subcultural studies).
OK, so, all of this is to say that I got into grad school to write about popular culture, and I’ve been doing that for the past fifteen years. It’s all I’ve ever really wanted to do, in fact; however, I’m not doing it now. For the past six months, I’ve mainly been writing on this blog about adjunct issues, which has turned out to resonate with far more people than I ever expected. However, writing about my failure all the time is pretty exhausting. So, here’s what’s going to happen: I’m going to keep writing about adjunct issues, because I still have a lot to say on that topic. But I’m also going to write more about music in this space, too, because that is what I love.
There won’t be a set schedule for this, but I’m guessing it’s going to mean one adjunct post and one pop culture post per week.
*I got in everywhere I made this gamble, so the gamble worked.
February 7, 2014
By now, you may have seen this post on the Chronicle Vitae site. In it, Kelli Marshall, a lecturer at DePaul University, talks about her job as an adjunct as a pretty pleasurable experience. And, really, reading her description: it’s not so bad. She apparently makes more than most adjuncts (almost the starting salary of an assistant professor, she says), has a decent commute, has an office, and is fairly secure in her situation. Oh, and she has a husband, also in academia, with a full-time job.
Now, if you caught me on a really good day, that description could be me (except that my husband is not in academia). On a bad day, like today, when it took me over an hour to commute 3.7 miles via bus, you will only get scowls from my general direction. I’ve been teaching at the same two institutions for the past five years. When one of them had a one-year vacancy, I was given a real salary and benefits for a year; I’m also their preferred adjunct, getting first choice at classes. So, they like me! They really do. (That’s a Sally Field reference, for you youngins who didn’t get it.)
I also enjoy various benefits of the flexible adjunct life: I work from home three days a week (note: I did not say that I had those days off); I can go to the gym or the pool when it’s dead quiet, which is usually around 2 p.m.; I can occasionally meet a friend for a run and maybe tea afterward; I can schedule a haircut in the middle of the afternoon at my usually busy salon. (Yes, this is deliberately obnoxious–I’m driving a point home.)
However, just because I can do these things does not mean that the system of contingent labor in academia is not massively fucked. Nor does it mean that I’m happy in this position–I really do not want to be a housewife with a part-time job. My husband and I would definitely have more economic security if I had guaranteed work. I’d very much like to get rid of my debt, so that maybe we could actually afford to buy a house someday. But, the reality is, I can do a lot of things that other adjuncts can’t because I have a significant amount of economic privilege (not to mention the racial privilege that landed me in a good school in the first place). And I’m not so blind as to let my privilege obscure my critiques of the system–I know others do not have that privilege, and that they are infinitely more screwed over than I am.
Marshall and I fit the traditional image of the adjunct–white, female, married. As Kay Steiger writes in The Nation, academia has a long history of adjunct positions as “Mrs. Professor So-and-So”:
Before women were allowed to be full professors, colleges often allowed them to teach at the adjunct level and wives of professors often picked up extra work as adjunct instructors. As Eileen E. Schell, the author of the 1998 sociological work Gypsy Academics and Mother-Teachers: Gender, Contingent Labor, and Writing Instruction, has noted, the reputation of adjunct teaching as a women’s profession was so strong that adjuncts were dubbed “the housewives of higher education.”
So, forty-odd years after the women’s movement started, here’s someone in a Mrs. Professor position telling the rest of the world that adjuncting doesn’t have to be that bad. Well, no, it doesn’t, when someone else is taking care of you. But, despite how much privilege Marshall (and, yes, I) may have in our adjunct positions, we are not the reason to accept the system as is. In fact, we–and I include myself in this 100 percent–are part of the problem.
I said this before in my own Vitae piece: When academia views adjuncting as a job for privileged spouses, everyone suffers. The labor of teaching is devalued, treated as a hobby, and paid equivalently. This screws over the vast majority of adjuncts, who, unlike the traditional-but-outdated portrait that both Marshall and I fit into, are not working for pin money.
So, when someone tells you adjuncting isn’t that bad, consider what other resources they have in their lives.
P.S. There’s an unrelated issue at the bottom of Marshall’s post, where she notes that you need to make connections to get a job. Of course that’s true, but to insinuate that other people are stuck in their crappy adjunct jobs because they’re bad networkers–and not that most adjuncting jobs are crappy–is a bit off the mark.
January 22, 2014
Earlier this week, I read this column at the Chronicle, which compared the academic job search to Suzanne Collins’s The Hunger Games. On the surface, I should have loved it: I write YA fiction in my spare time,* and, as part of my commitment to that element of my writing life, I read a lot of it–widely, from good to bad, and in between. I love The Hunger Games (well, at least the first two books), and I could see how the brutality of the academic job market made for some darkly humorous parallels.
But, as I read it, I felt deeply annoyed, and then unsettled and somehow angry. It wasn’t, as Rebecca Schuman pointed out, that the column was needlessly anonymous (though it was). It wasn’t that the columnist grossly mischaracterized the leads of The Hunger Games. Katniss is the least “plucky” YA heroine ever. She’s frequently moody, presents terribly in interviews, and has regular bouts of self-doubt. And, Peeta–well, would you describe someone who becomes a master of camouflage as clueless? Maybe in the movies, but certainly not in the book, where he’s definitely more astute than Katniss about playing for the cameras in the Capitol.
Eventually, I figured out what bothered me. Although the article is a bit of black humor about the market, it misses the real tragedy of both The Hunger Games and the academic job market: that people’s lives are regarded as disposable in both cases.
Now, one of these is fiction and features real death, while the other is real life, something you can recover from. But the parallels are actually more depressing. The Hunger Games takes place in a post-apocalyptic, dystopian United States, in which the citizens of the Capitol live off the wealth produced in the twelve Districts. Every year, 24 children–a girl and a boy from each District–are chosen to participate in a fight to the death broadcast on live television. The Games both provide entertainment and ensure that the Districts do not form alliances and rebel against the Capitol.
While it’s a bit heavy-handed, there are moments of real emotion in the novel. One of these is when Katniss (our not-at-all-plucky heroine) mourns Rue, a tribute from District 11, by covering her body with flowers after she’s murdered. For the reader, this moment drives home the brutality of the Games; after all, Rue is only 12. And (spoiler alert) it’s this waste of life that Katniss continues to fight against in books 2 and 3 of the trilogy.
In the Hunger Games column in the Chronicle, “Atlas Odinshoot” hits at the disregard for academic job candidates in his opening paragraphs:
But the academic job market is a process that necessitates failure. Your application materials will end up in the slush pile at dozens of departments, regardless of how well suited you are for the position or how carefully you tailor your materials. Outstanding candidates can easily fail to find a position.
The fact that the academic job market “necessitates failure” is key here. And, not just failure–failure of “outstanding candidates,” people who are the stereotypical best and brightest, who should be able to succeed at anything. Why is such rigorous training provided to so many people to do a job that will be available to so few? Why is this process so wasteful, so brutal, with such a disregard for the humanity of the job applicants? Who benefits from this system? Why are some people with tenure encouraging the expansion of graduate programs, even in the midst of a clear, ongoing, and progressive contraction of the academic job market? Why are outside connections–especially ones that could offer future employment–discouraged during grad school? And why are people who leave–even when they go on to good jobs and to do exceptional things–still considered failures within academia?
These are only a few of the questions that underlie the issue of the job market’s necessary failures, but they are hard to ask without teetering from a cutesy, gallows-humor comparison to the Hunger Games into a pit of despair about the very real likelihood that you’ve wasted a good decade of your life and can do nothing to change the structural issues of the academy. (Or, at least that’s how I’m feeling.) The job market, in its brutality, fosters competition and prevents alliances between the underclasses of academia, and when people decide to leave–really leave–they are as good as dead to those who remain.
Now, some of you may be saying, “Elizabeth, you volunteered as Tribute! You cannot have expected the outcome to go any other way! You knew the market was bad! It was always bad!” This is true, though the market is markedly worse than when I started (yes, those figures are for German, but it works across the humanities). It is also true that the thing I heard most often in grad school was, “There’s always room for people who do good work,” paired with reassurance that I did good–nay, excellent–work. And yet, here I am, as an adjunct. I guess that’s where there’s room for me, eh?
So, yes, I may have volunteered as Tribute. I don’t have to play the game anymore; I can leave, and I will, as soon as I can find a job elsewhere. But that does nothing to change the system, and I have no idea what to do about that.
*I’m not a published YA author. And, no, I do not write Hunger Games fanfic or vampire stories, so you can just take those jokes elsewhere. I’m currently revising a manuscript and will be looking for agents somewhere in mid-2014.
January 16, 2014
I’ve been meaning to talk about this for a while–in fact, my column for Vitae alludes to my own credit card debt. But with Karen “The Professor Is In” Kelsky’s massive, anonymous Google doc of graduate student debt and Kate Bahn’s “On Privilege and the PhD,” I felt it was time to talk about how you can get into debt even when fully funded.
Now, I’m not going to give you the exact dollar amount of my debt. That’s between me and my credit card companies. But I certainly have it. Here’s how it happened:
Year 1: My stipend is $12,000, my rent $600/month in New York City. My dad helps me pay rent, because he can see that $400/month is not enough to live on. I also have substantial savings from college, mostly because I had a full scholarship that included both tuition and room, and I worked every weekend. Over the first year, my savings dwindle.
Year 2: With my stipend going up to $13,000, there still isn’t room for any financial error. I start working for the Columbia Bartending Agency, which really and truly exists. It’s a great way to make money, but, as I start writing my master’s thesis and TAing, it becomes hard to juggle. I get pressure from both professors and my then-boyfriend to quit bartending (really, the boyfriend, who is rich via his famous mother, wants to dominate my time).
Year 3: I still bartend a little, but it’s hard to keep up with coursework and teaching my own class for the first time. The debts are starting to rack up. Graduate stipend rises to $13,500, but, of course, my rent is going up each year, too. I start attending conferences locally, which isn’t so bad. In the “bad” column, my boyfriend asks me to go on an expensive ski trip with him–his mother’s paying for my ski rental, lessons, and lift ticket. When I get there, it turns out that she is not paying for anything, her present to me is a Nalgene bottle, and I’m out over $1,000. This is not really grad school debt, but I thought I would put it here since it’s the only radically stupid thing I paid for.
Year 4: At the end of Year 3 and beginning of Year 4, I take my comps. I pass with flying colors! However, it’s the last time I get to celebrate: Faculty relationships rapidly deteriorate over the fall semester. By the order of various administrators, people are not talking to each other. I cannot get my dissertation proposal approved, because the people who need to approve it are not allowed in the same room with each other. In January of Year 4, I’m sent on fieldwork with junior faculty approval, having to forfeit the paltry amount of funding I did receive, because I lack the paperwork proving I have passed my proposal defense. This is my biggest mistake. It will cost me $20,000. (I finally get a proposal defense date after a giant fracas that involves a professor being forcibly retired. It was BIG DRAMA.)
Year 5: I go to my first national conference, flying from Seattle to Miami. That is not cheap, and, though my flight is funded, the rest of it goes on the credit card. I finish my fieldwork in Seattle and move to San Francisco, both to conduct more fieldwork and to move in with my horrible boyfriend. San Francisco is incredibly expensive; it’s the worst site for fieldwork, ever. After my horrible boyfriend “accidentally” deletes my conference paper, I attend my second national conference. I can’t get funding for this one, because we’re limited to one per year. I do, however, stay with a friend and his wife, which dramatically cuts down on the costs of the conference.
The Year Off: I get stuck in San Francisco for another year, due to a fuck up with funding that was partially my fault and partially due to someone deleting something from my computer (again!). For obvious reasons, I move out of my apartment with the horrible boyfriend. I work all the time, for the worst boss I’ve ever had, and everything sucks. Just as I’m about to go to a conference (even in my year off!), my boss fires me from my $30/hour independent contractor position when she forgets that she gave me permission to go to the conference. It’s either keep my job–flushing the cost of the conference down the toilet–or go and lose my job. I leave, because she is driving me crazy. I end up working at the Gap and a yarn store for six months until I go back to New York. This erases all the progress I’d made on my debt, and adds to it.
Year 6: To make up for the SNAFU that happened the year before, one of the professors in my department makes sure I get a dissertation-writing fellowship, which is $19,000/year. My rent in my sublet is $1,025/month, meaning that I cannot leave my house unless I want to increase my debt. I make real progress, but I do not finish my dissertation by the end of the year. However, two of the three chapters that I write that year will later win prizes in their conference paper versions. This will not help me get a job, but it’s something. In March, I end up getting evicted when the woman from whom I’m subletting tells the university she’s not returning. In a miraculous turn of events, I end up in the best and cheapest apartment in NYC. However, it requires 1) a broker’s fee and 2) first, last, and security. I borrow money from my dad, who, thankfully, has money to spare and does not want to see me homeless.
Year 7: This is my last year of funding. I’m told to conference, conference, conference. I do. I go to a conference in Hawai’i. I go to a conference in Seattle. I go to a conference in Montreal. The only cheap conference is in Ithaca, NY. These conferences add up rapidly. Even Montreal, which can theoretically be done on the cheap, costs more than it should: the conference falls during the Grand Prix de Montreal, which means that every restaurant within walking distance of the conference hotel is raising its regular prices or pushing prix fixe deals.
Year 8: While adjuncting, I finally finish a draft, but have a hard time pleasing one member of my committee. Another member of my committee, who sees me struggling, encourages me to really put myself out there on the job market. “You never know how far a good conference paper will take you,” she says cheerily. I believe her, but I should not, because no one goes to grad student panels. I go to conferences in Columbus, Seattle, and San Antonio. I defend the same day that Lehman Brothers files for bankruptcy; the day my diploma is issued, October 11, 2008, the head of the IMF warns of potential international collapse. Jobs disappear, and I find myself stuck adjuncting.
In the years after graduation, working as an adjunct and VAP has meant that I’ve never gotten ahead of my debt in the way that I’d like. It’s always there. The adjunct pay cycle is not conducive to getting out of debt: even if you save during the semester, the pay is so little that it will not cover expenses during the summer.
I worry that this post will just cause people to call me an idiot (see: My Year Off), or to say that I do not deserve to be in academia if I couldn’t hack it financially. But the reality is, aside from dating a terrible human being for four years and going on an expensive ski trip, I don’t think there’s much else I could have done. I went to the best grad school I could, I got as much funding as I could, and I tried to live frugally. I spent money on things that would directly advance my career, such as research and conferences. And I still have debt, for a career that I’m leaving at the end of this year.
Grad school: it’s just not worth the damage to your bank account.
January 10, 2014
Yesterday, I had an interview for a job outside academia. It was absolutely normal and wonderfully refreshing–pleasant, even. The questions were about the job, and how my experience and knowledge are relevant to it, and I answered them to the best of my ability. So, if I do not get the job, I have no regrets, because I was treated like a human being.
However, I can’t say that about my academic job interviews. Of the conference and on-campus interviews I’ve done, there’s exactly one that stood out for being a truly humane experience: it was a small liberal arts college in Maine, where everyone was polite and professional, from the start of the day to the finish at dinner. I didn’t get the job, but I still have an incredible amount of respect for every single person there.
Otherwise, it’s ranged from WTF to just plain wrong. There was the time when I was scheduled on the same day as another candidate. After the committee took too long at lunch, I was asked to start my interview an hour late, and, oh, by the way, they’re not taking me to dinner (which meant they’d already decided on the other person). There was the time when a scholar whose work I really admired turned his back on me and refused to shake my hand because he disliked one of my letter writers. I could go on, but I won’t.
Mostly, though, I’ve heard a ton of bizarro questions. Here are six of them that stick out in my mind:
6. “What do you think about specificity?” This was a real question at an R-1 university. Although I’ve done a lot of thinking about specificity, especially when it comes to writing, I had no idea what this person meant. Specificity is good, in any kind of writing: In an article, you want specific details to prove your point. You see this blog post? It has specific questions, all of which I really experienced. But the vague quality of the question made me unsure if I was supposed to answer about research, teaching, or, say, the lunch menu. This person is a wonderful scholar and probably a good colleague, but this was one of those questions that demonstrated how out-of-touch R-1 academia is with the rest of the world.
5. “Why is this music so angry? There’s too much shouting, and it delegitimizes their feminism.” This was from a professor who identifies as feminist, and she was tone policing the music I was talking about in my job talk. You might think that this was a legitimate question, but it’s not. I could have talked more about why the music was so angry (and, in fact, about a third of my job talk was about that), but the second part is key: the declarative statement showed that she didn’t have room to even hear an answer. This is the worst kind of question to get after a job talk, because it means that the person is already a no-vote, and the best you can do is hope your answer makes you seem collegial yet firm.
4. “How do you see the role of this new hire in your department’s growth?” OK, this is something I asked, and I hope it’s not really a terrible question. I have no idea why it caused the following to happen: One of the committee members slammed his hands down on the table and shouted, “I don’t see us hiring an ethnomusicologist!” He then got up and left, slamming the door behind him. I had fifteen minutes left to ask the rest of the committee questions, but no one would say anything. Worse, no one apologized for his behavior; instead, they acted like it was normal. Dear committee members at that school, if you happen to read this: that’s not professional.
3. “The kids are going into finals. Could you cut your teaching presentation from an hour to, say, 25 minutes?” This was asked as I was getting ready to teach and the students were filing into the classroom. If I had been asked to prepare an alternate 25-minute lesson, this would have been fine. If I’d been given a day’s notice, this would have been fine. Hell, if I’d been given a half hour, maybe. But this was just impossible, not to mention incredibly dismissive of the work that I’d put into my sample class.
2. “Can you give me a compelling reason why we should hire an adjunct, when we are trying to raise the profile of the department?” I was adjuncting at a state school, teaching courses for someone who was (no joke) no longer allowed in the country because the university had messed up his green card. Despite the bad karma involved in such a position, I applied when it came open and became the inside candidate; however, the dean had issues with adjuncts in the department, regardless of pedigree or degree obtained (almost universally, the music adjuncts had PhDs, while the tenure-track faculty did not). The school ended up passing on me and a former adjunct who had two well-regarded books on excellent presses. They hired someone ABD, because, you know, potential.
1. “So, do you have any kids?” Normally, I’d be happy to talk with people about my plans to have children. But the only reason people ask a woman that in an interview in academia is to suss out whether she’ll be taking leave in the next few years, and/or whether she’s a “serious scholar.” This is also not legal. Yet it’s happened at almost every interview I’ve ever been on.
January 2, 2014
This past week, Ani DiFranco became relevant for the first time since the late 1990s in possibly the worst way. Her “Righteous Retreat” was going to be held at Nottoway Plantation, a resort between New Orleans and Baton Rouge that also happened to be a former slave plantation. An internet fracas ensued, as people rightfully criticized the location. DiFranco issued the worst apology I’ve ever read (The Toast’s parody was spot on). In it, she called the criticism about the choice of location “high velocity bitterness,” generally took a smug tone about being a progressive white woman whose choices should not be judged, and compared a music school for impoverished kids in the Cabildo in New Orleans (another slavery site) with a resort catering to rich white people.*
Perhaps the thing that’s most troubling to me is that last bit. I’ve been to the Cabildo, and it’s nothing like going to a plantation. It’s been part of the Louisiana State Museum since 1908; that’s over a century of functioning as a public entity, complete with educational programs and a permanent exhibition that places slavery in context. Just as importantly, as Ani acknowledges, the work that educators do within the Cabildo–like the Roots of Music group she cites–transforms the historical building into a space that benefits people rather than actively, intentionally harms them.
I’ve also been to Nottoway, twice, a long time ago. Back in the 1980s, I went on a lot of plantation tours throughout south Louisiana. At the time, I didn’t have a choice: most of the tours were school field trips with my classmates from my predominantly white Catholic school. Back then, I didn’t think about these houses as being built by slaves or that these houses’ existence was a product of the system of slavery: I was a young white kid born in NJ but growing up in LA, and, to me, the plantations were just beautiful houses with unique architecture.
And then, when I was in middle school, my older cousin went on a tour with my family to Nottoway Plantation.
“Don’t you think it’s weird,” she said, “that no one’s mentioned slaves at all?”
She was right. They hadn’t. It had been “servants” throughout the entire tour: servants selected the cypress beams that were resistant to termites; servants worked in the plantation’s sugar cane fields; servants took care of the family; servants took the “whistler’s walk”** from the outdoor kitchens to the family’s dining room.
It was a jolt to 12-year-old me. Suddenly, I felt a lot less comfortable in the beautiful (but, no joke, completely white) ballroom where we were standing. It wasn’t just a house we were touring. It was a place where black people had been exploited, abused, and even killed just so that white people could live in luxury.
Now, this was 25 years ago, and a lot has changed at Nottoway. The website now acknowledges up front that slavery was a part of plantation life, and that John Randolph, the plantation owner who built the house, had a long history of slave ownership (including the 20 slaves who were part of his wife’s dowry). On its history page, the plantation’s current owners describe life for the slaves, from field hands to household workers. But it repeats the idea that John Randolph was an especially nice owner, though it’s sure to point out that he had economic reasons for being relatively less terrible than other slave owners.
Although the plantation no longer hides its past as a site of slavery, it isn’t a transformative space, either. It’s a resort that replicates the kind of luxury the Randolph family enjoyed in the antebellum era. You can stay in the Randolph family’s bedrooms, which are filled with period-specific antiques. You can stay in the overseer’s cottage–yes, the overseer’s cottage–also with some fine antiques. Or the carriage house or the “cottages,” which are designed like modern hotel rooms. The “cottages” are on the site where slave cabins once stood. In them, you can enjoy all the amenities of a first-class resort. You know, just like those slaves did back then.
Staying in a plantation owner’s or overseer’s house with period-specific antiques isn’t transformative; rather, it glorifies a system of owning other human beings by replicating the conditions afforded to the upper classes at the time. Nor do tidy “cottages” with modern amenities represent the kind of conditions slaves lived in. Rather, they mask the horrors that made places like Nottoway possible.
It’s not transformation. It’s erasure. And no one should be comfortable with that.
*As I was writing this, Ani DiFranco apologized again. It’s better, but still not great. Additionally, many of her fans continue to think it’s A-OK to stay at a plantation.
**Strangely, our tour guide didn’t omit a definition of a “whistler’s walk”: To ensure that no one stole food or spat in it, slaves had to whistle as they walked from the outdoor kitchens to the plantation house.
December 30, 2013
I’m an ethnomusicologist. My entire field’s uselessness has been the butt of a joke on 30 Rock, so you don’t need to tell me that going into graduate studies was a lost cause–believe me, I know. Ethnomusicology, though, gives me an interesting perspective from which to observe the dissolution of the academy.
Ethnomusicology is the bastard child of musicology and anthropology; depending on where you do your graduate studies, it’s either more humanistic or more social science-y. I’m a product of the anthro-facing squad at Columbia, where theory and methodology from anthropology were so strictly applied to the study of music in culture that the department has actually placed ethnomusicologists in anthro departments, which is pretty much unheard of (though the opposite is fairly frequent). So, though I work as a “musicologist” right now, teaching music history in a humanities department, I spent most of my time in grad school among social scientists.
One of those people is anthropologist Sherry B. Ortner, who taught the single most useful course I took in grad school, “The Anthropology of the Subject.” (Full disclosure: Sherry was on my dissertation committee and is married to my advisor.) The course basically focused on the twin issues of structure and agency. Somehow, despite the tendency in the world for people to dichotomize this as the “structure vs. agency” problem, Prof. Ortner brought nuance to everything she taught in that class, whether it was Bourdieu or Geertz or Althusser. I came out of that class with a thorough understanding of agency as always enmeshed in forces beyond the subject. Your decisions and your actions may be your own, but you always make them within institutional and cultural frameworks.
Which brings me to the real point of this blog post: people in the humanities who refuse to acknowledge that the employment structure of the academy has changed over the past forty years, or that there is any sort of internalized ideology–a structuring structure–that shapes people’s choices within the academy. These are the people that Rebecca Schuman (@pankisseskafka) has called “lifeboaters.” They are the people who have a strong belief in the meritocracy because, you know, it worked for them. They are the people who blame contingency on the overproduction of grad students. They are the people who say, “It’s always been hard to get a job in the academy.”
It’s that last one that’s exceptionally deceptive. Yes, it has always been hard to get a job in the academy. But that does not mean that it has not been growing more and more difficult, or that there are no quantifiable structural changes in academic employment. In pretty much every chart that the AAUP and the US Department of Education have produced, the decline of tenured and tenure-track positions is made incredibly obvious by the dramatic downward slope of the line. Here’s one from Alexandre Afonso’s blog post about stratification in the academy, “How Academia Resembles a Drug Gang.”
When you mention this to a lifeboater, as I did to Steven D. Krause on Twitter, they tend to put their hands over their ears, close their eyes, and shout, “Lalalalalalala! Facts and figures mean nothing! They are not discipline specific! You are naive! Lalalalalala!”
I’m saying, in response, that I’m not naive. I don’t think that anyone who is pointing out the damaging effects of the march toward contingency is naive; rather, we’re the ones who see that the structure of the academy has changed so drastically that, as Rebecca Schuman has pointed out repeatedly in her posts and on Twitter, failure has become the norm. That’s not naivete. That’s dismay and anger at the recognition that there is real, growing stratification in the academy.
This brings me to the other point about people’s choices within structuring structures: All of us supposedly naive people crafted our careers within institutions that shaped our expectations, if not our actions. As early as my first year of undergrad, I was presented with the idea that academe was somehow the best place for any smart person. In fact, I was talked out of pursuing a career in journalism by several of my professors–writing for normal people just wasn’t the best use of my intelligence.
Though they did say getting a job in the academy would be hard, they said that I could do things to mitigate the risk: get into an Ivy (check), work with distinguished people (check), and present at conferences and publish (check). And, in grad school, I continued to hear, “There’s always a place for people who do good work!” and “I’m sure you’ll get a job if you just hang in there for one more year” and even, “You should be glad you didn’t get that job–your work is too good for a school like that.”
I don’t blame these people, nor do I think they meant me harm. In fact, I like most of them a lot on a personal level. But I understand that they, too, are part of a system that is invested in continually bringing in a cheap source of labor (grad students), and which allows just enough people to rise up through the ranks to let the discourse of “good work”/”it’s always been hard to get a job” function. I’m not sure I would have made these choices if anyone had said, “Hey, Elizabeth, you remember how you discarded the idea of being a concert flutist after you learned how much your flute teacher earned? Well, no one’s telling you, but that’s what an adjunct makes, and you’ll even have to compete for that.” Then again, I may have ignored this person, as he or she would have been a lonely voice of rationality among the crowd of academics who were constantly reinforcing the idea that the only form of success in the world was a tenure-track job.
So, I also understand lifeboaters. They, too, are the products of this system, but they got what they wanted. It’s just a shame that they refuse to see the structure changing around them–because, in this move toward contingency, no one is really safe.
December 16, 2013
Oh, hey, this used to be a blog about music. In light of that, here are my reactions to Beyoncé, which came out on Friday (as if you hadn’t already figured that out):
On Beyoncé’s feminism: Yes, Beyoncé’s feminism is rife with consumerism. No, it’s not perfect, academic feminism. But, you know, IDGAF. You know why? Because I’ve spent the past ten years looking at the intersection of feminism and consumer culture, and the thing that I’ve found time and again is that black women are held to different, much higher standards than white women. [In fact, this forms a core part of the argument in the academic book I will one day publish about ’90s feminism and pop music.] But this moving standard is bullshit. Pop music always has ties to consumerism, and if you get hung up on that particular point of analysis, you will end up in a pit of Adorno, or, worse yet, mired in authenticity frameworks. That crap (i.e., the discourse of authenticity, whether applied to feminism or to pop music) does not interest me, because, in general, it serves to push black women artists (and sometimes white women artists) to the margins: no one is so authentic as someone who is broke, obscure, and, preferably, dead.
I would much rather have a living, imperfect feminism than dead, obscure purity that never reaches beyond academe or music critic circles, TYVM. Beyoncé specifically uses the word “feminist,” which should count for something in a world of women who run away from the term. She employs female musicians on tour, and they kill it. She sings from a specific, black, female, adult, sexual, whole-person subjectivity. On “Flawless,” she even drops a speech from Chimamanda Ngozi Adichie right into the song. Yes, the song’s got issues–“Bow down, bitches” is maybe the kind of competition that Chimamanda is talking about–but its video has Beyoncé giving a nudge and a wink to her own “I woke up like this” advice. I’d rather have this complicated version of feminism reaching out into the world than a ton of the fluff presented as feminist in Bust. So, sure, analyze the imperfections, talk about the implications of consumerism, and wish for better (like a Jay Z who doesn’t think that his reference to Ike Turner on “Drunk in Love” is totally hilarious). But don’t forget that there’s a huge cultural shift in having a black feminist as one of the biggest stars around.*
Beyoncé, the album, is kind of a glorious mess. The songs are sprawling, sometimes the rhyme schemes are completely off, and the ego just wafts off it at times, but, oh, how ambitious it is! Actually, I’m not that upset that it’s ego-driven: pretty much all popular music is, when you get down to it, and some of my favorite albums are sprawling messes. It’s a rare thing to see a black woman behind an ego-driven project. And, as far as ego-driven projects of the year go, it’s pretty damned good. Unlike Lady Gaga’s Artpop, Beyoncé works much of the time. There are definitely times when it doesn’t make sense lyrically, like when “Jealous” had me wondering why she cooked food naked, but then was waiting for Jay Z half-naked. First, why are you cooking naked? What if you burn yourself in a sensitive place, Bey? And why did you put on a few clothes afterward? Musically, some songs–like “Haunted”–feel like three in one. But “Pretty Hurts,” “Blow,” and “No Angel” are already getting incessantly stuck in my head after only a few listens, and the Frank Ocean collaboration “Superpower,” with its timpani rolls and arpeggiated doo-wop harmonies, is really growing on me.
In a world of Mileys, it’s SO refreshing to hear an adult woman talking about sexuality. Seriously, I’m not sure Miley Cyrus has ever had sex. There’s something way too, “Hey, y’all, I’m having SEX!” about her presentation that makes me think she’s the living incarnation of a teenage boy lying about his list of conquests in a bad ’80s movie. Throughout the album, Beyoncé sounds pretty sure of herself sexually, even when she’s insecure about other things. “Blow” is pretty much everyone’s go-to example of this. Even though Pharrell is the co-writer and JT and Timbaland are co-producers of “Blow,” the song never strays from a strong sense of the sexuality of a grown woman. More than that, “Blow” is quite possibly the catchiest song on the album: it fuses Beyoncé’s flexible voice and subjectivity to JT’s love for Michael Jackson and Pharrell’s preternatural ability to create hooks. The song makes me want to high-five Beyoncé for talking about oral sex and almost forgive Pharrell for the monstrosity that is “Blurred Lines.” Almost.
Also, it’s a married woman and a mother talking in these songs. Beyoncé has never shied away from the autobiographical “I” in her work. Though all pop music (including that influenced by hip-hop) is about persona rather than the person, let’s not forget that even Bey’s public-facing persona is a rare one in pop music: she’s a black married woman in her 30s, with a kid, talking about having sex with her husband. She also talks about how she’s “not been the same since the baby” (“Mine”). Whether or not this is really about Beyoncé as a real human being rather than a persona doesn’t matter–no one talks about that in pop music. And she’s one of the most successful pop stars around.
On the other hand, I cannot get the image of Jay Z and Beyoncé doin’ it out of my head. It’s one of those things, like when your friends tell you that they’re “trying” to have a baby and give each other a knowing look and pat on the hand, or, on the fertility message boards, where people refer to having sex as “baby dancing.” Blargh. I know they’re doing it, they are both attractive people, and I’m sure it’s good, given that they are still together after a decade. But, uh, do I have to hear about them having drunk sex in their kitchen, or the fact that their wild times fucked up Jay Z’s Warhol? I’m cringing, both because I imagine their kitchen is much nicer than mine and I ache for that Warhol, and because “Drunk in Love” is simply not a very good song. (For a much better song about Bey and Jay doin’ it, listen to “Partition.” It has a very sexy groove and lyrics that are just as explicit but somehow–inexplicably, ineffably–better.)
Damn, Beyoncé has a flexible voice. Yeah, this is not news, but this album exploits her voice in so many ways. The amount of falsetto and head voice on this album is pretty incredible. “Blow” moves smoothly up and down her range. “No Angel” exploits her falsetto in a way reminiscent of Prince’s “Kiss”: Beyoncé nearly squeaks out of existence in her highest ranges, airy and ephemeral; you get the sense that if she actually used chest voice, this whole delicate indie, synth-pop affair of a song would be blasted out of existence. On the flip side, “Haunted” features a growly, low-range Beyoncé who just may have been spending a little too much time with her pal Lady Gaga (that “Solomon or Salamander” lyric in particular). Either way, Beyoncé moves beyond the boundaries of big-voiced diva pop/R&B.
For some reason, “Flawless” will not play on my computer or iPod, and it’s killing me. WTF is this about, iTunes? The video works, but I would like to hear the damned song when I’m ignoring people on the subway.
*I don’t want this to be read as a “white feminist think-piece”; rather, it’s a white feminist annoyed by other white feminists who design feminist litmus tests that are impossible for black women to pass. A lot of black women pointed this out long before me, and I want to acknowledge that. Mikki Kendall’s recent column in the Guardian and Christina Coleman’s post on Global Grind are both great examinations of the double standards of feminism.