Friday, July 20, 2012

What's wrong with academia? Part 1A

I wasn't planning to write anything more on the issue of job security, but I've been really pleasantly surprised by the number of people who've taken the time to engage seriously with my previous post, both in blog and Facebook comments and in private responses. Thanks for your thoughts - I really appreciate it. And I hope that the debate has helped a few people to clarify their own position on this issue, whatever that might be. It's certainly had that effect on me.

I should start by saying that I am extremely unlikely to be in a position where I can implement any of the sweeping changes I proposed. That's for the best, for a number of reasons. For one thing, like Neil (Facebook comment), I'm actually more conflicted than the previous post made out; in that post I was trying to take a line of argumentation to its (approximate) logical extreme, and though it's an extreme that I am sympathetic to, I'm not too fond of extremes in general. For another thing, I'm not sure I'd have the balls to make big changes like this.

I think two major issues have been raised with regard to the alternative system I sketched (as well as a host of more minor ones, such as the increased danger of funding cuts under such a system, as Christine pointed out in a blog comment, and the difficulty of keeping long-term projects afloat, as Katie pointed out in a Facebook comment). These are: "juking the stats", and the issue of job security as an incentive per se (the "family argument"). I'll address these in turn.

Juking the stats
"Impact is up 42%, and the Mayor's gonna love our project on the Big Society."
I think this issue was stated most clearly by Tim (Facebook), Lameen (blog) and Unknown (blog), though in different ways. It's closely related to the "flavour of the month" approach to research funding mentioned by Orestis (blog). Essentially the key problem as I understand it is this: the intention of abolishing permanent positions is to force academics to continue to come up with innovative new work. But one alternative for academics is to become cynical, and to try to game the system by either a) producing a load of hackwork (or at best work that's a "safe bet") and passing it off as research activity, or b) deliberately focusing their research priorities on whatever others (grant-awarding bodies, employers, research assessment bodies, the media) think is awesome, and generating hype and hot air rather than ideas. (On reflection, I guess that a and b are variants of one another.)

This is a genuine concern, and a clear potential practical problem for any approach like the one I sketched. It's worth mentioning that it's a problem right now as well. For instance, in Lisbon recently I was discussing with colleagues a project that had been awarded vast amounts of money by a major grant-awarding body but that seemed to us to be mostly spin. Similarly, as I mentioned in my previous post, research assessment as currently practised is not enormously difficult to juke: insofar as its intent is to assess research quality, the metrics used by, for instance, the REF in the arts and humanities are a fairly poor reflection of that. (Publication counts, essentially: you have to submit four; monographs count for two [why two? why not four, or ten, or zero?].) Other metrics used as a proxy for research assessment at present are not great either: citation counts, for instance. It's not as if you cite something solely because you believe it's wonderful research.
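
To make concrete just how crude a metric this is, here's a toy sketch in Python. It's purely illustrative: the "four outputs, monographs count double" weighting is the REF rule mentioned above, but the researchers and their outputs are invented. Under pure publication counting, four pieces of safe hackwork and a field-changing monograph plus two articles come out exactly the same:

    REQUIRED_OUTPUTS = 4  # number of outputs each researcher must submit

    def ref_score(outputs):
        """Count outputs, weighting monographs double; quality is ignored entirely."""
        return sum(2 if kind == "monograph" else 1 for kind in outputs)

    # Invented examples: an innovator and a stat-juker.
    innovator = ["monograph", "article", "article"]
    stat_juker = ["article", "article", "article", "article"]

    for name, outputs in [("innovator", innovator), ("stat_juker", stat_juker)]:
        score = ref_score(outputs)
        print(f"{name}: score {score}, REFable: {score >= REQUIRED_OUTPUTS}")

Both come out with a score of 4; the metric literally cannot see the difference.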

Given that the problem exists now, it would only be quantitatively greater under the approach I sketched, not qualitatively different. This leads me to suspect that the issue is an independent one: can a robust metric for research quality or for innovation be devised? I've seen no demonstrative argument to the effect that this is impossible either in principle or in practice (though I'm damned if I can think of anything that would work). More generally, though, when it's put this way it's pretty clear that the increased influence of juking the stats under the approach I outlined is not an argument against the approach. Consider an analogy from the school system. In order to assess pupils' achievements (as well as teaching efficacy etc.), exams are needed. This much is uncontroversial, though the exact extent of examination at primary and secondary level gives rise to heated debates. Now consider a system in which pupils only take one examination - in order to assess their suitability to enter the school in the first place (sorta like the old 11+ in the UK) - and are then left to their own devices, without any assessment. They might advance from year 7 to year 8, say, but this would be based solely on age (as, ultimately, it is in the real school system). This seems to me to be fully analogous to the current system of permanent academic positions. (In particular, though it's not unheard of for pupils to repeat a year, being demoted to the year below on account of poor performance is not something that often happens, to my knowledge.)

The point is that one has to doubt any argument that goes as follows: "Assessment (of pupils, academics, the Baltimore police force, etc.) is really difficult, and all metrics so far devised are imperfect reflections of what we're actually trying to measure. Therefore, let's not do any assessment at all past a certain point." At best it's a slippery slope argument, and we all know that slippery slope arguments lead to much, much worse things. ;)

The family argument
"Won't somebody please think of the children?"
This is the argument most clearly and repeatedly made against my position, e.g. by Chris, Liv, Katie and Neil (Facebook), by Darkness & Light (blog), and by more than one person in private responses.

There are many strands to this argument, but before I mention them I should perhaps explain why in my first post it seemed like I was dismissing the family argument so cavalierly. Underlying that post was the desire to optimize the individual academic's research output. I was tacitly assuming that this is the only goal of academia - which of course it isn't. There are many other sides to academia: teaching, admin (yay!), training others to become good researchers, etc. While the approach I sketched might be good for the research output of individuals, it doesn't look as promising for any of these other sides.

One strand of the family argument is simply a human argument: it's not as good for us as people if we don't have permanent jobs. We can't plan ahead to nearly the same extent, and of course it's much harder to do things like buying a house and raising a family. Well, this is all obviously true, though of course it will bother some people more than others. I personally don't particularly want to raise a family; I have no ties; I am young and mobile. (To those of you in different situations, this particular bias must have seemed painfully obvious from my post.) To the extent that optimizing individual research output is the goal, however, it's irrelevant.

However, note the word "individual" with which I've carefully been hedging. As Chris pointed out in his Facebook comment and subsequent clarification, if we consider the research community as a whole, that community could suffer. People who do want to raise a family might decide that academia is not for them, and we might have a mass exodus on our hands. This shrinks the "gene pool", and is hence bad.

There are a couple of ways of responding to this criticism, though both are super tendentious. First, perhaps the absence of permanent positions shouldn't be restricted to academia at all, but should be more prevalent at large. (As, in fact, it already is among people of my generation. One good friend has had several jobs now, in the real world, and found career advancement to be nearly impossible - putting this down to the fact that "old people can't be fired".) If the whole world worked in the way that I've been suggesting, then academia would just be one field among many.

Secondly - and I should emphasize that I don't believe this, though the argument could in principle be made - do we really need all those people who would leave the field? Academia is already massively oversubscribed, to the extent that the job market is a joke, at least in the arts and humanities. But the smaller gene pool must surely be a bad thing in itself - unless it could be argued that the people who desire permanence, who want to raise families etc., are inherently less good at research than flexible, asocial freaks like me. But I really don't want to go down that road; I'll just note that it's an open question, which could presumably be investigated empirically. (Actually the argument could be put the other way round, as one private response to my post did. If academia is robbed of all the people who are embedded in stable social contexts such as families, it becomes distanced from the social "mainstream", which encourages precisely the kind of philistinism I was scared of in my previous post.)

The final key strand of the family argument is not about families: it's about the other roles of academics. Certainly for teaching purposes, constant change is bad. Departmental leadership and continuity of that kind will also suffer. Perhaps most importantly, as again emphasized in a private response, the role of senior academics in mentoring more junior academics would be compromised. Again, on a narrow reading of optimization of the individual research output, none of this is a problem. But again, if we consider the output of the research community, it's bad.

In this section I haven't been concerned with defending my original argument, at least not beyond pointing out the tacit (and, ultimately, flawed) assumption that underlay it. There's more to academia than the individual's research, that much is clear.

Well, I think I'll stop here. Other interesting points were raised; in particular, my impression is that many of the changes I'm suggesting are already in place in the sciences (and that people heartily dislike them). But I don't have the background or knowledge to consider that further, and I wouldn't want to generalize beyond the arts and humanities (which is itself a stretch from linguistics). So, yeah.

Saturday, July 14, 2012

What's wrong with academia? Part 1: Job security

Update, 17th March 2021: I wrote this post nearly a decade ago, and have since become convinced that it's the single worst thing I've ever written. This is especially true given that, at the time, I'd recently taken up a permanent position myself, so it's sick-makingly tone-deaf. Unsupported assertions about 'human nature', unironically appealing to 'meritocracy'... honestly, it'd be better for my reputation if I just deleted it, or retconned it à la Dom Cummings. I'm leaving it here only for the sake of intellectual honesty and accountability. Perhaps unsurprisingly given the fierce reactions this post engendered (see the comments), I never ended up writing parts 2 and 3.

What follows is a collection of musings on various topics that have come to bother me during my first six months in a lectureship. In the interests of structure, I'll focus on three main areas: job security, the relationship between teaching and research, and publishing.

If you're familiar with my general left-wing leanings, you might think you can already anticipate the bones of contention that form the skeleton of this blog post. With regard to job security, for instance, one might expect me to bewail the decreasing availability of permanent positions; and one might expect me to extol the virtues of the oft-unnoticed synergies between teaching and research. In both cases I will do no such thing; if anything, the complete opposite viewpoint will emerge. (With regard to publishing, given my own editorial activities, the thread of argument will be a bit more predictable.)

Whether any of this is consistent with the aforementioned left-wing leanings or with my life philosophy in general, or whether I should instead be counted among the hypocrites, is an interesting question. I'm convinced that my stance is consistent, but that's a discussion for another time; in any case, I do welcome thoughts on this or any other part of the post.

1. Job security

As I've mentioned, it's fashionable and commonplace to find the decreased availability of permanent academic positions deeply worrying - so much so that it's entered into mainstream media discourse. Now this seems to go hand in hand (at the moment, at least) with a general decline in the availability of academic jobs tout court. I'd be the first to say that the latter is an extremely worrying trend, especially when coupled with the general philistinism as regards academia in the UK. Consider the following comment, a response to a Guardian article about the AHRC supposedly being told to study the Big Society:
The country spends £100m on 'arts and humanities research'???

Please cut it all and let's see if we miss it....
Worryingly, this comment is 'recommended' by 62 people... and this is the Guardian we're talking about, not the Daily Fail. And in the meantime, we pay £2 billion a year for a collection of Cold War relics to gather dust, and some people defend this with their lives. Ho hum.

So I'm against a reduction in jobs across academia as a whole. However, this issue is logically separate from the question of whether those jobs should be permanent or temporary/fixed-term. What's more, I've never heard a good argument for permanent academic positions.

Permanent positions make a necessity out of virtue. They are disproportionate post hoc rewards for research achievements, and give no incentive to advance the state of knowledge (which I take to be the primary function of academia as a whole). Let's say you write a decent PhD thesis, produce a few publications, meet some nice people at conferences, get lucky, and then end up with a job for life. Why is this considered to be a good thing? From that point onwards, it's human nature to kick back and do nothing. From my observations of other supposedly research-active staff (admittedly a small and varied sample), if this happens, the worst that the university can do to you is shout at you a little bit. But because you're contractually protected, you can more or less continue to do nothing with impunity.

But let's say that's not the case. Let's say that instead you sit down and churn out the four publications needed to become REFable every few years - or even more. Where is the incentive to innovate, to produce research that will change the state of ideas?

Worse is that academic advancement (at least in the fields with which I'm familiar in the arts and humanities) is still so closely tied to age. 'Being on the ladder', many reflexively call it, and with good reason. Once you're in at the ground floor, every decade or so, a promotion comes along and you go upstairs. You never go downstairs again. Who ever heard of a reader being demoted to lecturer? Or a professor to reader? Why not? Furthermore, ask yourself how many professors you've met who are under the age of 40. Then think about who's doing the top quality research in your field right now - the work you're really excited about, the work that is changing the way people think. How old are they? What is their job title? Whatever the outcome, chances are this group of researchers won't be anything like coextensive with the 50-something professors who have climbed highest on the ladder. This fact seems to be so obvious that I'm amazed at the level of acceptance that exists for it. At best one can conclude that pay in academia isn't in any way performance-related.

My solution? Well, it's not a novel one. One's position at a given time should be related to two things: a) the quality of the work one is doing at that time (in practice, since this is difficult to assess, a fixed time span immediately preceding can serve as a proxy) and b) the quality of one's research proposal. There was a massive outcry a while back when King's College London threatened to make everyone reapply for their own jobs. In principle, as long as the total number of jobs and the amount of funding stay in proportion, I think this is an excellent idea. It forces researchers to think about exactly what they're doing and why - and to up their game in order to stay in it. I can see no harm in stipulating that academic positions last for a maximum fixed term of five years. In fact, a lot of good would surely come out of it.

Now one could object that the proposal I'm making here is precisely what grant funding is supposed to achieve in the UK. My response is twofold. Firstly, grant funding (again, at least in the arts and humanities) constitutes only a small amount of the money academics receive: I don't have numbers, but I'd wager that far more is paid on an annual basis to salaried, tenured professors. The grant funding solution therefore doesn't go nearly far enough. Secondly, the grant application system is so massively broken in the UK as to be almost completely worthless from the point of view of advancing the state of knowledge. The reason is a classic Catch-22. Grant applications to bodies like the AHRC are like double-blind peer review - except that, crucially, the reviewers know exactly who you are. They need to know this (so I'm told) because they need to assess your suitability for leading a project team and for managing grant money. How is this assessed? Well, of course, in terms of your experience of leading a project team and of managing grant money. If speculative business financing in general worked on this basis... well, it wouldn't. Work, that is. No interesting project would ever get off the ground. The emphasis on grant-handling experience is particularly bemusing in light of the fact that AHRC-funded projects often have no obvious output or endpoint at all. (I use the term 'output' non-traditionally here, to refer to 'any resource that advances the state of knowledge' rather than the more typical 'publications'.) It seems that the AHRC and bodies like it have little concept of what it means for a project to be successful, which makes it all the odder that they place such stock in the ability of the project leader to achieve success. (Once again, let me emphasize that publications in and of themselves are NOT 'success'. This will become a lot clearer in part 3.)

The preceding two paragraphs are perhaps deliberately polemical, but you should at least be disabused of the notion that funding bodies are the great levellers. Even if funding bodies played a large enough role in actual funding to be the deciding factor, they couldn't guarantee the advancement of knowledge, because their priorities are wrong and their funding criteria flawed.

The moral of all of this? Academics make such a big deal out of meritocracy in principle that it's hard to see how things could have gone so drastically wrong. Throughout your school, undergraduate and graduate career you're fighting to jump through the next hoop, to advance yourself, to educate yourself. Then when you enter the job market the logic is reversed: you find a hole to crawl into, where you'll be paid a reasonable sum of money. And if you churn out enough publications, take care not to ruffle any feathers in teaching or administration, and maybe get a grant or two, you'll probably get promoted every ten years or so. Whatever happened to onward and upward?