Saturday, December 22, 2012

Prezi for teaching linguistics?

This semester I've been using Prezi as a visual aid in my lectures. I decided it would be a useful exercise to summarize my thoughts on it for the benefit of others, since there's no need to reinvent the wheel by discovering all of Prezi's features and flaws.

The conclusion, for those of you with short attention spans, is that I probably won't be using it again, at least not for regular lecturing. But let's start at the beginning.


I used Prezi for eleven one-hour lectures, which constitute the lecture part of the first-year course Introducing English Grammar at the University of Manchester. It's a basic course, designed to get everyone up to speed on a basic framework for understanding English grammar, from those who have no prior knowledge to those who might have substantial experience with grammatical terminology in a different framework. The course is highly terminology-laden: lots of names for things, and lots of tests one can apply in order to identify those things. There were about 220 students on the course this year.

Prezi, for those of you who haven't come across it, is a piece of presentation software marketed as an alternative to Powerpoint (and to Keynote and Beamer, which are basically the same thing: slides), even as a "Powerpoint killer" in some markets. I won't give a full introduction; take a look for yourself at some of the sample presentations on their website if you're interested. The key thing is that instead of just moving from slide to slide you zoom in and out and move around on one giant canvas.

On to the advantages and disadvantages, starting with the good points.


  • Wow factor. This is not to be underestimated. Students are a jaded bunch, and it's difficult to impress them with technology; but by and large they have responded well to the general look and feel of it. In my mid-term survey, which had 58 respondents, 74% found the presentations to be attractive, and 0% thought they were ugly. (Though this must in part be due to my general awesomeness as a designer, and not entirely to the software...) Obviously this wow factor will diminish the more people use Prezi for teaching.
  • Clearer conveyance of complex arguments. I genuinely believe Prezi is better for this than its slide-based competitors. Let's say you're presenting a list of things, for instance constituency tests. Rather than having a sequence of slides and presenting them one by one, you can have a kind of spider diagram and zoom in to each test in turn. And having some screens embedded in small form within other screens is great for capturing part-whole relations and relative importance. In short, here I think Prezi lives up to its claims. In the mid-term survey, 41% stated that they found Prezis easier to follow than Powerpoint presentations, and 19% said the opposite.
Some of the positive comments I received in the survey:
  • "I definitely much prefer the prezi presentations compared to powerpoint presentations, and the Grammar lecture is one of my favourites because of this."
  • "The lecture slides are very attractive and engaging which helps make the concepts more memorable."
  • "The way the information is portrayed is much more interesting than a powerpoint presentation, and the examples are always useful to refer back to." 


Now for the disadvantages.

  • Poor facilities for typography. Prezi has no bold, underline, italic, superscript, subscript - nothing of that kind. This means that using it for linguistics can be very frustrating. (Sure, you can create a "subscript" by creating a new text box, making it smaller, and manually positioning it in the right place. But that's not a sustainable solution.) Writing out formulas, for instance, or labelled bracketings, is virtually impossible. Prezi also gives you an extremely limited colour palette.
  • No tables. You have to draw all the lines by hand, and space it out by hand; it's not impossible, but very tedious.
  • Incredibly time-consuming. You probably figured this one out from the above two, but it's the main barrier to using this software in a sustainable way. Even a very basic lecture, with nothing fancy added, takes hours to create; much longer than it would in Powerpoint, at any rate.
  • Extremely buggy. The browser-based editor, which I used to create all my Prezis, is buggy beyond belief. Sometimes you'll copy-and-paste something, only to have it appear in a random position somewhere else in the presentation. Sometimes, after you move something, it will suddenly decide to spring back to where it came from. Sometimes line breaks will arbitrarily delete themselves. A colleague who used the desktop version of the editor tells me it's no better. This is unbelievably frustrating even for experienced users (I'd now count myself as one). There is a facility to import Powerpoint slides, but it's just as buggy as you'd expect.
  • Massive files, no handouts. You can download a "portable Prezi" to present offline, but the file size is usually between 40 and 50 megabytes. Not useful. Furthermore, it's difficult to create handouts. You can create PDFs of the screens (which are themselves large-ish files; mine were all between 4 and 13 megabytes), but there's no easy way to do, say, 6 per page. (I found a workaround for this, but like many workarounds it's time-consuming; just what you don't need with Prezi.)
  • Not supported on all computers. Though the portable Prezis are supposed to be standalone files, they need a certain version of Flash to work, and apparently sometimes need an internet connection. Anyway, some Manchester computers, such as the ones in the library, simply won't play them.
  • Motion sickness. Though only 2% of students (one) found the lecture presentations to be nausea-inducing, apparently this can be a general problem with Prezi if you do too much panning.
Some of the negative comments I received in the survey:
  • "The prezi presentations are very effective in lectures but when reviewing I haven't found a way of quickly getting to the slide I want except through flicking through all of them. Similarly, they can't be printed off except one to a page, so for both these reasons I prefer powerpoint for slides that I am going to refer to later."
  • "lectures slides are too long it would be very helpful if  you use powerpoint instead"


Overall, then, I'm sad to say that I think the negatives outweigh the positives. The "wow factor", as I noted, is only relevant to the extent that Prezi is a minority technology: if everyone uses it, it will become much less impressive. That leaves its only lasting advantage as the clarity of representation of complex chains of thought. While it's nice to have, it doesn't justify the time and effort spent on creating them, or the additional problems created for students who want handouts.

I therefore can't recommend Prezi for regular lecturing, and won't be using it in that capacity myself in future. Still, it was a fun experiment, and I've certainly learned from it - and I hope you find this useful too!

Sunday, August 26, 2012

More chocolate

Greetings (and ratings) from Stuttgart, where everything is closed on Sundays. (Ah, Catholic Europe, how I love thee.)

Waldbeer Joghurt: 7/10

A tasty treat, though very sweet and rather gooey. This variety certainly succeeds in bringing out the distinctive sharp taste of the berries - it's a bit like a Fruits-of-the-Forest cheesecake encased in milk chocolate. Only a slight yoghurty clogginess detracts from it.

Napolitaner Waffel: 4/10

Though wafers and Ritter Sport milk chocolate are good on their own, I don't think the combination quite succeeds. Wafers are at their best when dry and crispy, which the very creamy chocolate coating prevents. The result is a bit soggy and nondescript, though by no means unpleasant.

Friday, July 20, 2012

What's wrong with academia? Part 1A

I wasn't planning to write anything more on the issue of job security, but I've been really pleasantly surprised by the number of people who've taken the time to engage seriously with my previous post, both in blog and Facebook comments and in private responses. Thanks for your thoughts - I really appreciate it. And I hope that the debate has helped a few people to clarify their own position on this issue, whatever that might be. It's certainly had that effect on me.

I should start by saying that I am extremely unlikely to be in a position where I can implement any of the sweeping changes I proposed. That's for the best, for a number of reasons. For one thing, like Neil (Facebook comment), I'm actually more conflicted than the previous post made out; in that post I was trying to take a line of argumentation to its (approximate) logical extreme, and though it's an extreme that I am sympathetic to, I'm not too fond of extremes in general. For another thing, I'm not sure I'd have the balls to make big changes like this.

I think two major issues have been raised with regard to the alternative system I sketched (as well as a host of more minor ones, such as the increased danger of funding cuts under such a system, as Christine pointed out in a blog comment, and the difficulty of keeping long-term projects afloat, as Katie pointed out in a Facebook comment). These are: "juking the stats", and the issue of job security as an incentive per se (the "family argument"). I'll address these in turn.

Juking the stats
"Impact is up 42%, and the Mayor's gonna love our project on the Big Society."
I think this issue was stated most clearly by Tim (Facebook), Lameen (blog) and Unknown (blog), though in different ways. It's closely related to the "flavour of the month" approach to research funding mentioned by Orestis (blog). Essentially the key problem as I understand it is this: the intention of abolishing permanent positions is to force academics to continue to come up with innovative new work. But one alternative for academics is to become cynical, and to try to game the system by either a) producing a load of hackwork (or at best work that's a "safe bet") and passing it off as research activity, or b) deliberately focusing your research priorities on what others think is awesome (grant-awarding bodies, employers, research assessment bodies, the media) and generating hype and hot air rather than ideas. (On reflection, I guess that a and b are variants of one another.)

This is a genuine concern, and a clear potential practical problem for any approach like the one I sketched. It's worth mentioning that it's a problem right now as well. For instance, in Lisbon recently I was discussing with colleagues a project that had been awarded vast amounts of money by a major grant-awarding body but that seemed to us to be mostly spin. Similarly, as I mentioned in my previous post, research assessment as carried out at present is not enormously difficult to juke, at least insofar as the intent of research assessment is to assess research quality and the metrics used by for instance the REF in arts and humanities are a fairly poor reflection of that. (Publication counts, essentially: you have to submit four; monographs count for two [why two? why not four, or ten, or zero?].) Other metrics used as a proxy for research assessment at present are also not great: citation counts, for instance. It's not as if you cite something solely because you believe it's wonderful research.

Given that the problem exists now, it would only be quantitatively greater under the approach I sketched, not qualitatively different. This leads me to suspect that the issue is an independent one: can a robust metric for research quality or for innovation be devised? I've seen no demonstrative argument to the effect that this is impossible either in principle or in practice (though I'm damned if I can think of anything that would work). More generally, though, when it's put this way it's pretty clear that the increased influence of juking the stats under the approach I outlined is not an argument against the approach. Consider an analogy from the school system. In order to assess pupils' achievements (as well as teaching efficacy etc.), exams are needed. This much is uncontroversial, though the exact extent of examination at primary and secondary level gives rise to heated debates. Now consider a system in which pupils only take one examination - in order to assess their suitability to enter the school in the first place (sorta like the old 11+ in the UK) - and then are left to their own devices, without any assessment. They might advance from year 7 to year 8, say, but this (as, ultimately, in the school system) would be based solely on age. This seems to me to be fully analogous to the current system of permanent academic positions. (In particular, though it's not unheard of for pupils to repeat a year, being demoted to the year below on account of poor performance is not something that often happens, to my knowledge.)

The point is that one has to doubt any argument that goes as follows: "Assessment (of pupils, academics, the Baltimore police force, etc.) is really difficult, and all metrics so far devised are imperfect reflections of what we're actually trying to measure. Therefore, let's not do any assessment at all past a certain point." At best it's a slippery slope argument, and we all know that slippery slope arguments lead to much, much worse things. ;)

The family argument
"Won't somebody please think of the children?"
This is the argument most clearly and repeatedly made against my position, e.g. by Chris, Liv, Katie and Neil (Facebook) and Darkness & Light (blog) and by more than one person in private responses as well.

There are many strands to this argument, but before I mention them I should perhaps explain why in my first post it seemed like I was dismissing the family argument so cavalierly. Underlying that post was the desire to optimize the individual academic's research output. I was tacitly assuming that this is the only goal of academia - which of course it isn't. There are many other sides to academia: teaching, admin (yay!), training others to become good researchers, etc. While the approach I sketched might be good for the research output of individuals, it doesn't look as promising for any of these other sides.

One strand of the family argument is simply a human argument: it's not as good for us as people if we don't have permanent jobs. We can't plan in advance to nearly as great an extent, and of course it's much harder to do things like buying a house and raising a family. Well, this is all obviously true, though of course it will bother some people more than others. I personally don't particularly want to raise a family; I have no particular ties; I am young and mobile. (To those of you in different situations, this particular bias must have seemed painfully obvious from my post.) To the extent that optimizing individual research output is the goal, however, it's irrelevant.

However, note the word "individual" with which I've carefully been hedging. As Chris pointed out in his Facebook comment and subsequent clarification, if we consider the research community as a whole, that could suffer. People who do want to raise a family might decide that academia is not for them, and we might have a mass exodus on our hands. This reduces the "genepool", and is hence bad.

There are a couple of ways of responding to this criticism, though both are super tendentious. First of all, perhaps the absence of permanent positions shouldn't be restricted to academia but should be more prevalent in working life at large. (As, in fact, it already is among people of my generation. One good friend has had several jobs now, in the real world, and found career advancement to be nearly impossible - putting this down to the fact that "old people can't be fired".) If the whole world works in the way that I've been suggesting, then academia would just be one field among many.

Secondly - and I should emphasize that I don't believe this, though the argument could in principle be made - do we really need all those people who would leave the field? Academia is already massively oversubscribed, at least in the arts and humanities, to the extent that the job market is a joke. But the smaller genepool must be a bad thing in itself - unless it could be argued that the people who desire permanence, who want to raise families etc. are inherently less good at research than flexible, asocial freaks like me. But I really don't want to go down that road; I'll just note that it's an open question, which could presumably be investigated empirically. (Actually the argument could be put the other way round, as one private response to my post did. If academia is robbed of all the people who are embedded in stable social contexts such as families, it becomes distanced from the social "mainstream", which encourages precisely the kind of philistinism I was scared of in my previous post.)

The final key strand of the family argument is not about families: it's about the other roles of academics. Certainly for teaching purposes, constant change is bad. Departmental leadership and continuity of that kind will also suffer. Perhaps most importantly, as again emphasized in a private response, the role of senior academics in mentoring more junior academics would be compromised. Again, on a narrow reading of optimization of the individual research output, none of this is a problem. But again, if we consider the output of the research community, it's bad.

In this section I haven't been concerned with defending my original argument, at least not beyond pointing out the tacit (and, ultimately, flawed) assumption that underlay it. There's more to academia than the individual's research, that much is clear.

Well, I think I'll stop here. Other interesting points were raised; in particular, my impression is that a lot of the sort of changes I'm suggesting are already in place in the sciences (and that people heartily dislike them). But I don't have the background or knowledge necessary to consider that further, and I wouldn't want to generalize beyond the arts and humanities (which is itself a stretch from linguistics). So, yeah.

Saturday, July 14, 2012

What's wrong with academia? Part 1: Job security

Update, 17th March 2021: I wrote this post nearly a decade ago, and have since become convinced that it's the single worst thing I've ever written. This is especially true given that, at the time, I'd recently taken up a permanent position myself, so it's sick-makingly tone-deaf. Unsupported assertions about 'human nature', unironically appealing to 'meritocracy'... honestly, it'd be better for my reputation if I just deleted it, or retconned it à la Dom Cummings. I'm leaving it here only for the sake of intellectual honesty and accountability. Perhaps unsurprisingly given the fierce reactions this post engendered (see the comments), I never ended up writing parts 2 and 3.

What follows is a collection of musings on various topics that have come to bother me during my first six months in a lectureship. In the interests of structure, I'll focus on three main areas: job security, the relationship between teaching and research, and publishing.

If you're familiar with my general left-wing leanings, you might think you can already anticipate the bones of contention that form the skeleton of this blog post. With regard to job security, for instance, one might expect me to bewail the decreasing availability of permanent positions; and one might expect me to extol the virtues of the oft-unnoticed synergies between teaching and research. In both these cases I will do neither of these things; if anything, the complete opposite viewpoint will emerge. (With regard to publishing, given my own editorial activities, the thread of argument will be a bit more predictable.)

Whether any of this is consistent with the aforementioned left-wing leanings or with my life philosophy in general, or whether I should instead be counted among the hypocrites, is an interesting question. I'm convinced that my stance is consistent, but that's a discussion for another time; in any case, I do welcome thoughts on this or any other part of the post.

1. Job security

As I've mentioned, it's fashionable and commonplace to find the decreased availability of permanent academic positions deeply worrying - so much so that it's entered into mainstream media discourse. Now this seems to go hand in hand (at the moment, at least) with a general decline in the availability of academic jobs tout court. I'd be the first to say that the latter is an extremely worrying trend, especially when coupled with the general philistinism as regards academia in the UK. Consider the following comment, a response to a Guardian article about the AHRC supposedly being told to study the Big Society:
The country spends £100m on 'arts and humanities research'???

Please cut it all and let's see if we miss it....
Worryingly, this comment is 'recommended' by 62 people... and this is the Guardian we're talking about, not the Daily Fail. And in the meantime, we pay £2 billion a year for a collection of Cold War relics to gather dust, and some people defend this with their lives. Ho hum.

So I'm against a reduction in jobs across academia as a whole. However, this issue is logically separate from the question of whether those jobs should be permanent or temporary/fixed-term. What's more, I've never heard a good argument for permanent academic positions.

Permanent positions make a necessity out of virtue. They are disproportionate post hoc rewards for research achievements, and give no incentive to advance the state of knowledge (which I take to be the primary function of academia as a whole). Let's say you write a decent PhD thesis and a few publications, meet some nice people at conferences, get lucky, and then end up with a job for life. Why is this considered to be a good thing? From that point onwards, it's human nature to kick back and do nothing. From my observations of other supposedly research-active staff (admittedly a small and varied group), if this happens, the worst that the university can do to you is shout at you a little bit. But because you're contractually protected, you can more or less continue to do nothing with impunity.

But let's say that's not the case. Let's say that instead you sit down and churn out the four publications needed to become REFable every few years - or even more. Where is the incentive to innovate, to produce research that will change the state of ideas?

Worse is that academic advancement (at least in the fields with which I'm familiar in the arts and humanities) is still so closely tied to age. 'Being on the ladder', many reflexively call it, and with good reason. Once you're in at the ground floor, every decade or so, a promotion comes along and you go upstairs. You never go downstairs again. Who ever heard of a reader being demoted to lecturer? Or a professor to reader? Why not? Furthermore, ask yourself how many professors you've met who are under the age of 40. Then think about who's doing the top quality research in your field right now - the work you're really excited about, the work that is changing the way people think. How old are they? What is their job title? Whatever the outcome, chances are this group of researchers won't be anything like coextensive with the 50-something professors who have climbed highest on the ladder. This fact seems to be so obvious that I'm amazed at the level of acceptance that exists for it. At best one can conclude that pay in academia isn't in any way performance-related.

My solution? Well, it's not a novel one. One's position at a given time should be related to two things: a) the quality of the work one is doing at that time (in practice, since this is difficult to assess, a fixed time span immediately preceding can serve as a proxy) and b) the quality of one's research proposal. There was a massive outcry a while back when King's College London threatened to make everyone reapply for their own jobs. In principle, as long as the total number of jobs and amount of funding stays proportionate, I think this is an excellent idea. It forces researchers to think about exactly what they're doing and why - and to up their game in order to stay in it. I can see no harm in stipulating that academic positions last for a maximum fixed term of five years. In fact, a lot of good would surely come out of it.

Now one could object that the proposal I'm making here is precisely what grant funding is supposed to achieve in the UK. My response is twofold. Firstly, grant funding (again, at least in the arts and humanities) constitutes only a small proportion of the money academics receive: I don't have numbers, but I'd wager that far more is paid on an annual basis to salaried, tenured professors. Therefore, the grant funding solution doesn't go nearly far enough. Secondly, the grant application system is so massively broken in the UK as to be almost completely worthless from the point of view of advancing the state of knowledge. The reason is a classic Catch-22. Grant applications to bodies like the AHRC are like double-blind peer review - except that, crucially, the reviewers know exactly who you are. They need to know this (so I'm told) because they need to assess your suitability for leading a project team, and for managing grant money. How is this assessed? Well, of course in terms of your experience of leading a project team, and of managing grant money. If speculative business financing in general worked on this basis... well, it wouldn't. Work, that is. No interesting project would ever get off the ground. The emphasis on grant-handling experience is particularly bemusing in light of the fact that AHRC-funded projects often have no obvious output or endpoint at all. (I use the term 'output' non-traditionally here, to refer to 'any resource that advances the state of knowledge' rather than the more typical 'publications'.) It seems that the AHRC and bodies like it have little concept of what it means for a project to be successful; which makes it all the more odd that they place such stock in the ability of the project leader to achieve success. (Once again, let me emphasize that publications in and of themselves are NOT 'success'. This will become a lot clearer in part 3.)

The preceding two paragraphs are perhaps a bit deliberately polemical, but you should at least be disabused of the notion that funding bodies are the great levellers. Even if funding bodies played a significant enough role in actual funding to be the deal-breaker, they couldn't guarantee the advancement of knowledge, because their priorities are wrong and their funding criteria flawed.

The moral of all of this? Academics make such a big deal out of meritocracy in principle that it's hard to see how things could have gone so drastically wrong. Throughout your school, undergraduate and graduate career you're fighting to jump through the next hoop, to advance yourself, to educate yourself. Then when you enter the job market the logic is reversed: you find a hole to crawl into, where you'll be paid a reasonable sum of money. And if you churn out enough publications, take care not to ruffle any feathers in teaching or administration, and maybe get a grant or two, you'll probably get promoted every ten years or so. Whatever happened to onward and upward?

Tuesday, June 19, 2012

Chocoholics anonymous

Went to Norway this month. Went on the train via Germany, just so I could get some Ritter Sport on the way. What?

Kakaosplitter: 9/10

This one tastes like crushed-up cocoa beans in chocolate cream encased in chocolate, and indeed it's hard to imagine how that combination could fail. This is an energy-granting variety, which wasn't particularly advantageous for me given that all I had to do that day was sit on trains for 9 hours - but in addition it really lifted my spirits. A lovely balance of smooth and slightly crunchy, one of this year's spring varieties.

Amarena Kirsch: 8/10

Not bad at all. Very fruity, but not overpowering, either in terms of the fruit or the modest liqueur content. Perhaps a little overly sweet, and - as with many varieties - this might have been overcome with the use of dark chocolate. But all in all this was a very enjoyable eat. A summer variety.

Bourbon Vanille: 5/10

Here you could barely taste anything except the vanilla-flavoured yoghurty goo. Not offensively bad, just boring and ill-judged. A spring variety that won't compensate for the April showers.

Saturday, May 19, 2012

Back to Babel?

I've just finished reading David Bellos's recent book on translation, 'Is That a Fish in your Ear?' (Penguin, 2011). It veered between an entertaining read and a frustrating one. Perhaps unsurprisingly, I found the book to be at its least entertaining and convincing when it touched on subjects in my own area of expertise, linguistics. On the other hand, aspects of the book – such as the story of the dragomans of the Ottoman Empire (chapter 11), and the paradoxical language policies of the European Union (chapter 21) – were fascinating, and well narrated.

This isn't meant to be a full review, but since I feel that ITaFiyE misinterprets linguistics and/or sells it short at more than a few points throughout the book, I decided to set the record straight on the open web, especially with regard to three particular chapters.

Chapter 6: Native Command: Is Your Language Really Yours?

This chapter, on what it means to be an L1 or L2 speaker of a language, starts off promisingly enough. Bellos correctly observes that the traditional term 'mother tongue' is misleading, since we learn our L1 just as much from our peers as from our parents. He then goes on to claim that 'communicative competence' is acquired 'between the ages of one and three' – but that the language learned during this period is not always the one that adult speakers feel most comfortable using. The example he cites for this is Latin 700–1700, which uncontroversially had no native speakers during this period, but which was used as a vehicular language for various purposes. Then comes an astonishing leap (p59):
But if a clear distinction can be made between the language learned from your mother and the language in which you operate most effectively for high-born males in Western Europe between 700 and 1700 CE, the very concepts of 'mother tongue' and 'native speaker' need to be looked at again.
Um, really? The distinction seems pretty clear to me. The muddying of the waters in Bellos's book starts with the use of the nebulous term 'communicative competence', which does not enter into mainstream definitions of native speaker status, for which grammatical competence is far more crucial. This is a minor quibble. But the claim that high-born males operated most effectively in Latin for a millennium is an incredible one. It may well be the case that for 'formal speech and writing', as well as for 'diplomacy, philosophy, mathematics, science and religion', Latin was the language of choice. However, these highfalutin academic purposes constitute a tiny minority of our total language use. Is the claim really that these people spoke (as adults) to their parents and peers in Latin in everyday situations? The suggestion that they 'thought' in Latin is even more absurd. There's a long literature on the 'language of thought' and how closely it approximates the languages we hear spoken, but the idea that a 15th-century Dutch nobleman, say, would wake up and think Esurio ('I'm hungry') does not enter into it.

The evidence from vernacular written traditions in Western Europe also speaks against this assertion. From the very beginning of the period 700–1700, writing – even for academic and ecclesiastical purposes – began to be carried out not in Latin but in the local languages of the area. Alfred's great program of translation into West Saxon English (not mentioned in this book), or the monastery translations of Boethius and other such texts in the Old High German-speaking area, are prime examples. These do not indicate that Latin was a language of thought, or even an effective operating language. Instead they indicate that the Latin of the period was a language on life-support, for which cribs had to be devised so that keen young men could get their heads round it.
The worst part of this little paragraph is that even if it were true that a distinction could be drawn between 'the language learned from your mother' (read: first language) and 'the language in which you operate most effectively' in this instance, it wouldn't follow that this somehow invalidates the concept of a 'native speaker'.

This distinction continues to be made throughout the chapter, with the implication that languages learned during the early years of life are of little importance. There is, no doubt, a difference between 'first learned language', in Bellos's terms, and 'operative language'. Bellos then adduces two examples – his father, whose mother spoke to him in Yiddish but who learned English at school age, and his wife, who initially acquired Hungarian but who began to learn French at the age of five. The aim seems to be to deny the significance of the 'first learned language' or 'mother tongue'; and in these terms, it's a reasonable aim. But it misses the point that linguists and specialists in acquisition are trying to make when they talk about something called the 'critical period' or 'critical threshold', a term dating back to Lenneberg (1967). Very simply, in Trudgill's (2011) terms:
Lenneberg’s term refers to the well-known fact, obvious to anyone who has been alive and present in normal human societies for a couple of decades or so, that while small children learn languages perfectly, the vast majority of adults do not, especially in untutored situations.
To be sure, there is disagreement about what the relevant age is, or whether the term 'critical threshold' is really appropriate as opposed to a gradual tailing-off of language learning abilities. Meisel (2011) provides a recent summary. But what is uncontroversial is that adults do not learn languages as well as children. If, indeed, it is possible to isolate a specific age at which language learning ceases to be a cakewalk, that age is more like 7 (Meisel 2011: 134) than the 3 proposed by Bellos. Both of the examples that Bellos gives, then, may be evidence that 'first learned language' or 'mother tongue' is not what is important. But neither is problematic for the idea that languages learned during the critical period are learned better than languages learned after.

This misrepresentation colours the rest of the chapter, including Bellos's conclusion (pp65–66), to the effect that it is not important for translations to be into the translator's L1:
[I]t would be futile to insist that the iron rule of L1 translation be imposed on all intercultural relations in the world without also insisting on its inescapable corollary: that every educational system in the world's eighty vehicular languages devote significant resources to producing seventy-nine groups of competent L1 translators in each cohort of graduating students. The only alternative to that still utopian solution would be for speakers of the target languages to become more tolerant and more welcoming of the variants introduced into English, French, German and so forth by L2 translators working very hard indeed to make themselves understood.
There are a few things wrong with this. First, there's a straw man hiding amidst the prose. If we don't accept L1 translators, do we really have to devote huge amounts of money to training enormous numbers of language professionals? (Not that that would be a bad thing, in my opinion.) Not at all, because of something that Bellos himself mentions later in the book: translations can perfectly well be carried out via other languages. To find a translator from Welsh into Cantonese may be tricky, but when the translation passes via English it's a piece of cake. Of course there's a loss of fidelity involved in this two-step process – but one of the most convincing parts of Bellos's overall thesis (see chapter 10) is that the very notion of 'fidelity' is suspect when applied to translation anyhow.

More problematically, it seems no less 'utopian' to believe that L2 translators 'working very hard' is the right solution. I'm not a translator, but I've worked in the industry, and my father's been in it for thirty years, so I feel I know enough to comment. 'Working very hard' may be good enough when it comes to translation on a hypothetical academic-philosophical level, or literary translation (addressed in chapter 27 of Bellos's book, but assumed tacitly throughout in the examples used); but in the real world of translation, mistakes can be deadly. When I was working in Aachen, translating patient information leaflets from German into English, I was acutely aware that if I made a mistake people could be killed. With that in mind, it seems daft to insist that well-meaning L2 translators are as good as the real deal.

(There's also at least one factual error in this chapter. It is stated that 'all babies are languageless at the start of life'. That's not quite true: in fact, the process of language acquisition begins well before birth, as shown in experimental work by Kisilevsky et al. 2003 among others.)

Chapter 14: How Many Words Do We Have For Coffee?

Unlike chapter 6, this chapter sets out to argue for something reasonable. Its aim is to assess the evidence for linguistic relativity – the idea that language shapes thought – and its conclusion (in stark contrast to the gushing quasi-religious masturbatory rhetoric we so often see in the popular press surrounding the issue, for instance Boroditsky 2010) is sensible (p170).
If you go into a Starbuck's and ask for 'coffee' the barista most likely will give you a blank stare. To him the word means absolutely nothing. There are at least thirty-seven words for coffee in my local dialect of Coffeeshop Talk ... You should point this out next time anyone tells you that Eskimo has a hundred words for snow.
This is in general a strong and interesting chapter, even though the work of 'neo-Whorfians' like Boroditsky and Levinson over the last decade is rather unaccountably left out of consideration.

My problem with it is only in how it begins: Bellos trots out a well-worn passage from Sir William Jones's 1786 Discourse, commenting (p161) that this 'is generally reckoned to be the starter's gun' in the development of comparative linguistics. The idea that Jones had this pivotal role is part of the origin mythology of historical linguistics, to be sure – but his significance has almost certainly been massively overestimated, as shown in detail by Campbell & Poser (2008: chapter 3). You can read their chapter if you want the real story, but the gist of it is this:

Firstly, Jones was not particularly original in his contributions. Commonalities between Indo-European languages had frequently been observed before his time, and even the relationship of Sanskrit to these languages was not a new idea.

Secondly, Jones made a lot of mistakes. He considered Peruvian, Chinese and Japanese languages to be part of the same family as the more familiar Indo-European languages, for instance, while leaving out others he should have included, such as Pahlavi, which he classed as Semitic (Campbell & Poser 2008: 37–38).

Thirdly, Jones was working within a biblical framework and viewed his own work as having 'confirmed the Mosaic accounts of the primitive world'; specifically, all the languages of the world could be traced back, according to him, to one of Noah's three sons Ham, Shem and Japhet (Campbell & Poser 2008: 40) – in stark contrast to the methods that later became the backbone of comparative linguistics.

There's more to say, but the point should be clear enough. The idea that Jones was the founder of comparative linguistics is just as much of a myth as the idea that Eskimo has one hundred words for snow. The repetition of the myth is frustrating within the narrow confines of linguistics, and the situation can only get worse if books like this one, intended for a popular audience, perpetuate it further.

Afterbabble: In Lieu of an Epilogue

Epilogues are typically unambitious: summaries of the content and main argument of the book, perhaps, or suggestions for future research. Not so for the ITaFiyE epilogue, which tries – in 34 short pages – to solve the problem of language, the universe, and everything.

Well, perhaps that's overstating the case. But it does attempt to address the problem of the evolution of language, which is almost as thorny an issue. As modern theorists are fond of observing, in 1866 the Linguistic Society of Paris banned debates on the subject. Those same modern theorists often then argue that we have come far enough, nowadays, to lift the ban and talk about the origins of language with impunity. I disagree – though even among linguists I feel like I'm still in the minority here. We're barely any closer to understanding the genetic basis of our language capacity than we were a century ago, and there is still substantial debate as to what language even IS. The concrete proposals made by Noam Chomsky to that effect, as for example in Chomsky (1986), are very often rejected on the basis that they don't tally with our hazy pretheoretical intuitions about language – such as the idea that it is a social phenomenon, whatever that means; see e.g. Enfield (2010). We don't know nearly enough about human prehistory to say when language emerged, and that situation is unlikely to change. Most painfully, very often theorists still fail to distinguish between 'glossogeny', i.e. change in 'languages' (as we pretheoretically know them), and 'phylogeny', the emergence of the human biological capacity for language (whatever form that takes). (On this distinction, see Hurford 1990.)

In historical linguistics, meanwhile, it is quite normal to suggest that standard comparative methods typically can't take us more than 8,000–12,000 years into the past (Campbell & Poser 2008 have a discussion of this); this is not due to any flaw in the methods themselves, but rather to the build-up of confounding factors and the paucity of relevant data the further back one goes. Any claim about what was going on 40,000 years ago or more is likely to meet with extreme scepticism from any sensible historical linguist. Nevertheless, this is what specialists in the evolution of language get up to constantly. Perhaps it's not surprising, then, that I can't shake the feeling that the whole field is a waste of time. Until someone has something more evidence-based to say, I'm inclined to take the simple route proposed by Berwick & Chomsky (2011): a tiny mutation emerged at some point, in one fell swoop, giving us the ability to put words together like we know we can; that mutation was (unsurprisingly) selectionally advantageous in the long run; and that's all there is to be said. (Curiously, critics of this 'saltationist' viewpoint are often the same people who rake Chomskyan linguistic theory over the coals for its apparent baroque complexity...)

But back to Bellos. He attacks the assumption that 'all languages are, at bottom, the same kind of thing, because, at the start, they were the same thing' (p341). Whether or not we believe the 'because' clause (and there's certainly no linguistic evidence that would lead us to; see again Campbell & Poser 2008), Bellos gives us no reason to doubt the underlying sameness of languages. The fact of linguistic diversity has very little bearing on this; the very fact that we have a concept of language at all, on the other hand, even a pretheoretical one, is evidence for sameness. At some level, we can judge whether something is linguistic or non-linguistic. That alone suggests unity in diversity.

Beyond that, though, there are linguistic arguments for sameness, many of which have been controversial over the years. Bellos groups these together as the argument that languages have 'a grammar', and disposes of it quickly and unconvincingly: rather than being an empirical matter, 'the "grammaticality hypothesis" is an axiom, a circular foundation stone' (p342). Why? Because...
Since traffic lights and the barking of dogs seem to have no discernible rules of combination or no ability to create new combinations, they have no grammar, and because all languages have a grammar in order to count as languages, dog barking and traffic lights are not languages. QED.
One might legitimately argue that this is hardly circular reasoning. Rather, we're attempting to understand the defining characteristics of language in terms of concrete properties, in order to establish what exactly it is that we're doing when we describe something as linguistic or non-linguistic. If these properties turn out to map poorly onto our intuitive conception of language, or not to be sharp enough to distinguish language from other things, we can reject them, refine them, retain them on the understanding that no better model has yet been proposed, OR decide that our concrete properties in fact tell us that our intuitive conception is wrong or not clear enough. This is what has happened over the last fifty years with Hockett's (1960) 'design features' of language (which seems to be what Bellos is bashing here, though he doesn't cite it), and with purported universals both in the Chomskyan tradition and the Greenbergian (e.g. Greenberg 1963). It seems to me to be normal scientific practice. For example, we now know that spiders are not insects, despite all our intuitions telling us otherwise, and this is a natural consequence of adopting a particular model of taxonomic classification that is superior to our vague intuitions. We now know that whiteness isn't necessarily a property of swans. Likewise, by the botanical definition of tomatoes that we adopt, they are classed as fruit rather than vegetables. This isn't just axiomatics: we're learning something that we didn't already know. This is scientific progress.

But Bellos, like many authors in and around linguistics, refuses to give up on the intuitive definition. He proceeds as follows (p342):
In a similarly circular way, the axiom of grammaticality pushes to the edge of language study all those uses of human vocal noises – ums, hums, screams, giggles ... and so forth – that don't decompose neatly into nouns, verbs and full stops.
Quite apart from the disingenuousness of this comment (no one ever seriously proposed that full stops were a property of human language, axiomatically or otherwise, as Bellos must know full well), I fail to see the problem. Describing these types of noise as 'non-linguistic' seems to me to be entirely fair and reasonable. (Note that this is very different from saying that they don't constitute a worthy object of study in their own right, a claim that no linguist I know would want to be associated with.) What we discover by doing this kind of scientific work is that our intuitive conception of language is so fuzzy and all-encompassing as to be effectively unusable, a point that Chomsky has been making for years (see again Chomsky 1986). If, like Bellos, you find definitions in the Chomskyan mould unpalatable, then the onus is on you to come up with a better operational definition if you want to be thought of as doing serious work on language.

After listing various ways in which languages can be odd (evidentials always seem to come up whenever anyone puts together a list like this!), Bellos somewhat uncharitably (but perhaps not unfairly) states that the attempt to discern what all grammars share 'has got about as far as the search for the Holy Grail'. He then builds another straw man which he proceeds to rip apart: 'all grammars regulate the ways in which free items may be combined to make an acceptable sentence' (p344). The obvious problem is the word 'sentence' here – what does it mean? Nothing (either inside or outside a theory of grammar, as far as I'm aware), and of course we don't speak in sentences, as Bellos points out.

Nor is it really a problem that 'no living language has yet been given a grammar that accounts for absolutely all of the expressions' (p344). Even if this goal were a reasonable one for linguistic theory (Chomsky 1986 argues that it isn't), and even if a living language like 'English' were a coherent object of study (see virtually any work by Chomsky for a brief but irrefutable demonstration that it isn't), does this stymie any attempt? In physics, our best theories of reality can't account for phenomena like dark matter; all this shows is that science (any science) is a work in progress. So it's an absolute nonsense to claim, as Bellos does (p344), that:
Flaws of this magnitude in aerodynamics or the theory of probability would not have allowed the Wright Brothers to get off the ground or the National Lottery to finance the arts.
First off, there's no reason that our scientific theory has to be practically applicable in order to be worth something (look at string theory, for instance). But, in any case, Bellos should look at state-of-the-art work in computational linguistics, where parsers based on handwritten grammars in combination with a simple statistical learning algorithm can robustly parse up to 92.4% of an average corpus of English (see Fossum & Knight 2009). That doesn't seem like crash-and-burn to me.

The afterbabble goes on to compare dialectal variation to primate grooming, and to propose this as a potential evolutionary origin for language, following work by Robin Dunbar. I won't discuss this in any detail, but suffice it to say that the conclusion – that 'The most likely original use of human speech was to be different, not the same' (p351) – presupposes precisely what has been argued so vigorously against earlier in the same chapter, namely a definable object that is 'human speech' (which, since animals fairly intuitively don't have it, must have evolved somehow).

In short, this chapter (and the book as a whole) overreaches itself. Though issues of translation are inevitably bound up with deep questions about the nature of language, ITaFiyE would have been a better book if it had chosen to stick closely to the former and leave the latter to specialists.


Berwick, Robert C., & Noam Chomsky. 2011. The biolinguistic program: the current state of its evolution and development. In Ana Maria di Sciullo & Cedric Boeckx (eds.), The biolinguistic enterprise: new perspectives on the evolution and nature of the human language faculty, 19–41. Oxford: Oxford University Press.
Boroditsky, Lera. 2010. Lost in translation. Wall Street Journal, 23 July.
Campbell, Lyle, & William J. Poser. 2008. Language classification: history and method. Cambridge: Cambridge University Press. 
Chomsky, Noam. 1986. Knowledge of language. New York: Praeger.
Enfield, Nicholas J. 2010. Without social context? Science 329, 1600–1601.
Fossum, Victoria, & Kevin Knight. 2009. Combining constituent parsers. Proceedings of NAACL HLT 2009: Short Papers, 253–256.
Greenberg, Joseph H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. Greenberg (ed.), Universals of grammar, 73–113. Cambridge, MA: MIT Press.
Hockett, Charles F. 1960. The origin of speech. Scientific American 203, 89–97.
Hurford, James R. 1990. Nativist and functional explanations in language acquisition. In I. M. Roca (ed.), Logical issues in language acquisition, 85–136.
Kisilevsky, Barbara, Sylvia Hains, Kang Lee, Xing Xie, Hefeng Huang, Hai Hui Ye, Ke Zhang, & Zengping Wang. 2003. Effects of experience on fetal voice recognition. Psychological Science 14, 220–224.
Lenneberg, Eric. 1967. Biological foundations of language. New York: John Wiley & Sons.
Meisel, Jürgen M. 2011. Bilingual acquisition and theories of diachronic change: bilingualism as cause and effect of grammatical change. Bilingualism: Language and Cognition 14, 121–145.
Trudgill, Peter. 2011. Sociolinguistic typology: social determinants of linguistic complexity. Oxford: Oxford University Press.