Trigger Warnings, Free Inquiry, and Avoiding Harm

This entry is by Pat Stokes, who has featured in this blog before.

Once a year, every year, I stand in front of a room full of teenagers and talk about erect penises.

This is not some weird hobby of mine. Someone – the Australian taxpayer, ultimately – pays me to do this. But talking about penises isn’t the problem. The problem is that I warn the class beforehand that we’ll be discussing explicit, if thankfully brief and fairly highbrow, depictions of sex.

Warning students ahead of time like that makes me a terrible educator, you see. All these cotton wool, nanny state ‘trigger warnings’ are making these kids soft and useless. The workforce will eat them alive. Or so generous Twitter users keep telling me.

No, the only way to prepare these impressionable young minds for the horrors of something called the ‘real world’ is to spring descriptions of tumescent phalli on them totally unannounced. It’s the only way they’ll learn.

A genuine tension

Trigger warnings have become associated, rhetorically at least, with the broader issue of freedom of speech on campus and the phenomena of ‘safe spaces’ and no-platforming. Their aim is broadly similar. They’re meant to avoid harms that can be caused by discussing topics that, due to personal or social history, students might find distressing. They don’t shut the topic down, but they give students some capacity to manage their exposure. That, at least, is the theory.

The tension between the needs of open debate and the need to avoid harm is, I’d suggest, a real one, and needs to be acknowledged. On the one hand, if we’re going to model good argument and proper academic inquiry we should follow arguments where they lead, not just where we’d like to be. University should be a place where you are exposed to new and often challenging ideas. Discomfort can be a sign of emerging knowledge.

Academic debates and teaching don’t occur in a vacuum, however. For all the talk of preparing students for the ‘real world,’ there’s only one real world, and universities are inside it just like everything else. We don’t step outside the world while we’re thinking and teaching about it.

You may fancy yourself a locus of pure, ahistorical reason, a mere conduit for ‘the ideas themselves.’ But you’re not. You’re a flesh-and-blood human being partly defined by your history and social position, talking to other flesh-and-blood beings. And that makes your behavior, in the classroom and on the page as much as anywhere else, subject to ethical evaluation. You can’t step outside your ethical and political relationships to those around you. Ethics has no outside.

Trauma ahead

So, back to the penises. I teach a large introductory philosophy unit, originally devised by philosopher-bassist Stan van Hooft, called ‘Love, Sex, and Death.’ We use these dimensions of human life to introduce students to core aspects of philosophical methodology and content. We ask them to wrestle with texts ranging from Plato and neo-Thomistic Natural Law theory to radical feminism, and with questions from ‘what is love?’ and ‘should euthanasia be permitted?’ to ‘is it ok to watch porn?’ and ‘are there by definition no non-substitute masturbators?’ (long story).

The penises are all found in an influential paper by philosopher Martha Nussbaum that contains excerpts from famous literary depictions of sexual objectification. I let students know at the start of the class this material is coming, as well as flagging it beforehand in the lecture. Likewise with other potentially difficult or sensitive topics.

Why? Because there are five hundred students in this unit, and I don’t know them. I don’t know how many of them are victims of sexual assault, but the statistics tell us the number will be distressingly high. We know that sexual objectification is not a mere abstract question but part of lived experience for at least half the class. I also know, because some of them have told me, that students sitting through our discussion of the ethics of sex work have themselves worked in that industry. We look at the arguments around same-sex marriage with LGBTQI students, and pro- and anti-abortion arguments with, no doubt, women who have had abortions. We discuss the badness of death and the question of euthanasia knowing at least some of those present will be carrying recent or sudden loss, past suicide attempts, or terminally ill parents.

We still teach this material. We must teach this material. These are living issues we have no choice but to talk about, and students need to be taught the philosophical skills and background to discuss them properly. But you can’t talk about love, sex, and death without talking to people who have been injured by all these things. To discuss these is, unavoidably, to stick coldly abstract fingers into old and never-quite-healed wounds. Nor can you talk about race or gender or sexuality without talking about and reactivating histories of power and hurt.

An excess of caution?

So yes, there’s a real problem here. That problem – the unavoidable tension between the intellectual demands of unfettered inquiry and the ethical demand to avoid causing harm – is precisely what trigger warnings, at their best at least, are meant to help us manage, if never solve. They aren’t designed to stop intellectual inquiry, and shouldn’t be allowed to. Their aim is simply to prevent it causing more harm than it has to.

Among academics I’ve spoken to about this (read: I have no real empirical basis for the following claim whatsoever), “trigger warnings” seem to be regarded as nothing more than basic courtesy. Indeed, when I’ve described my practice – a polite, brief warning that there’s explicit content coming up – even people opposed to trigger warnings tend to concede that’s fine. What they object to are all those other trigger warnings they’re sure are out there.

And sure, you could try to argue that the problem isn’t trigger warnings per se, but that trigger warnings have become too comprehensive and thus too restrictive. It’s not hard to find examples that are, on their face, over-the-top or linked to improbable harms. The infamous Wilder edition of Kant with a warning about his dated views (and yes, Kant says some pretty frightful stuff) might well seem excessive or even insulting.

But at least it errs on the side of caution. Long lists of potentially ‘triggering’ topics may be unworkable and ultimately self-defeating, but being aware of too much seems far preferable to remaining blissfully unaware of the harms we cause.

They’re also a good reminder of the importance of another virtue we should be modelling in both teaching and research: intellectual humility. Are law students going to be traumatised by hearing that something “violates the law”? At first blush we’d probably imagine not. But did the possibility occur to you beforehand? Do you know how that word affects people? What if you found out it does actually cause some people distress – what then? How will you manage that? And if you missed that, what else have you missed? What other consequences of your words have you failed to foresee? If reason and experience alone can let you down like that, how else are you going to know the true effect of your words and actions other than by listening to what people tell you?

Difficult topics don’t go away, and nor does the need to teach them. Trigger warnings are one tool available to us to try and negotiate the rough terrain. But they won’t replace an indispensable set of virtues: tact, sensitivity, consideration – and caution.


Moral philosophers and virtue: What is wrong with being bad?

This contribution is from Andreas Eriksen

The podcast Philosophy Bites once asked Ronald Dworkin “Who is the most impressive philosopher you have met?” Dworkin’s first response was John Rawls: “He is one of those very few philosophers whose saintliness infected the philosophical diction. Reading him has the enormous advantage that knowing him makes what he says sound true. He is an example of what he says.”

But is it particularly meritorious for moral philosophers qua professionals to exemplify their own theory? This post is an attempt to articulate a distinct philosophical defense, in which I argue that the subject matter of moral philosophy can itself require the form of understanding that guides virtue. I will continue somewhat anecdotally and then make a more analytical point about what moral understanding means.

In the preface to his Morality: An Introduction to Ethics (1972), Bernard Williams said that writing about moral philosophy should be a hazardous business, partly because “one is likely to reveal the limitations and inadequacies of one’s own perceptions more directly than in, at least, other parts of philosophy.” This is surely not just true of writing and developing moral philosophy, but also of teaching it. Teaching a theory is not reciting it, but rather giving it the voice it requires, explaining how it tries to answer a question, and taking a stand concerning its merits. The “hazardousness” Williams refers to could indicate that a form of bravery is required in doing and teaching moral philosophy; one must be willing to expose oneself to public disclosure of one’s grasp of what constitutes moral relations. This will not merely reveal one’s theoretical comprehension, but also something about one’s moral character. In the end, one must be willing to assert what really matters, not just to an imaginary impartial spectator, but also to oneself.

The basic idea that I extract from this is that teaching moral philosophy stands in a reciprocal relationship with genuine appreciation of moral standards: your moral sensitivity says something about the plausibility of what you teach (a generalization of Dworkin’s point), and what you teach says something about your moral sensitivity (my version of Williams’s point). But does this lead to the further claim that teachers of moral philosophy have to be morally virtuous?

In the preface to his Ethics: Inventing Right and Wrong (1977), J.L. Mackie acknowledges his debt to the classical moral philosophers. However, he emphasizes that he is in agreement with Locke that the “truest teachers of moral philosophy are the outlaws and thieves.” At first, one might think this makes sense if one takes outlaws as foils that highlight what is valuable about real commitment to moral values. That is, through outlaws we learn about morality in the way we learn about the human condition by contrasting it with animals. That was not Mackie’s point. Rather, he believed we should learn from the outlaw attitude of practicing rules of justice out of convenience, not as a response to an objective moral reality.

I believe this illustrates how the relation between teaching moral philosophy and possessing moral virtue depends on a substantive theory about the status of moral values. In other words, the requirements of excellence in teaching moral philosophy depend on what one takes the subject matter to consist in. For example, Mackie did not believe in objective moral values, hence outlaws provide paradigms of insight. Through their way of living together, they show how unnecessary it is to cling to the superstitious belief in moral objectivity. Allegedly, rational self-interest is the only enlightened foundation.

In turn, if one believes that moral philosophy can and should clarify the virtues as responses to objective moral values, then this seems to require genuine sensitivity to the demands of kindness, justice, honesty and more. Teachers must to some extent “see” what the virtuous person “sees.” Teachers who lack proper appreciation of these values seem deficient qua moral philosophers. There is something they do not get about their own professional subject matter, namely moral life. Of course, this does not imply that the kind of saintliness ascribed to Rawls sets a standard all must meet. But it does mean that one’s character must be sufficiently shaped by a conceptual space governed by moral values, so that one can at least grasp part of what virtue responds to (even though one may lack full virtuous responsiveness).

It is crucial to understand the attitude of appreciation in the right way here. The claim is not that teachers who lack the understanding that guides moral virtue will necessarily lack access to the correct propositional content. On the contrary, we can imagine non-virtuous teachers who possess the right beliefs, or who at least state largely correct intellectual claims to their interlocutors. By appreciation, however, I mean a mode of awareness that goes beyond mere intellectual endorsement. The difference between appreciation and mere belief is evident in aesthetics; think of the dissimilarity between acknowledging that Bach was great and experiencing his greatness. Similarly, moral appreciation refers to a complex emotional responsiveness that is deeply integrated with character.

The type of appreciation that is at issue here is a form of understanding that guides virtue as described by Aristotle. In the Nicomachean Ethics, the virtuous person is described as having certain feelings “at the right times, about the right things, toward the right people, for the right end, and in the right way” (1106b20). The moral feelings have been habituated to create harmony between what one acknowledges as good and the kinds of actions one takes pleasure in. The virtuous person does not merely endorse certain moral propositions, but identifies with the values these propositions refer to. This identification makes the values appear in a distinct light. They appear as noble or worthy of allegiance, as opposed to just correct according to theoretical reasoning. For Aristotle, then, virtuous thought cannot be separated from emotional engagement. Learning to be good is not merely acquiring the right set of beliefs, but taking the moral content to heart, making certain responses to value part of one’s “second nature.” This Aristotelian theme has been acutely explored by many, but my use of the term appreciation in this connection draws particularly on Stephen Darwall’s Welfare and Rational Care (2002).

Non-virtuous moral philosophers fail to understand the parts of their own subject that are only available through this form of appreciation. Again, the claim is not that moral philosophers have a professional duty to be exceptionally virtuous. Rather, the point is that their lack of the form of understanding that guides virtuous action is a distinct professional deficiency. When this lack is revealed through immoral action it simultaneously reveals a lack of sensitivity to one’s professional subject. This claim seems to be phenomenologically supported. When moral philosophers transgress important norms, we are not only disappointed by the acts per se, but also by how these acts reflect on their appreciation of moral theory. “Didn’t they get it?” we are prone to think if we admired their work or lectures. Or perhaps the acts cast a shadow of doubt over the philosophical message they have tried to convey. Unlike in Rawls’s case, knowing them can make what they say sound untrue. That is, unless the teacher is a philosophical skeptic about moral values. I’m not sure what the judgment would be if Mackie himself became an outlaw.


Two kinds of naked, two kinds of blindness — Brendan Larvor

Patrick Stokes recently blogged about his experience of being forced to reflect on his place in philosophy’s gender-structure. The nicest people can suffer from a kind of blindness–like the dog in this parable. Pat, as befits a Kierkegaard scholar, focuses on the condition of the individual consciousness, and this is not to be neglected. Are […]


Thinking about conferences as places for thinking

In this post, Richard Ashcroft reflects on the shortcomings of academic conferences. 


For a long time, I have been doing my work without going to conferences. Like going to bed early, this is perhaps why I do a lot of reflecting on (academic) life rather than participating in it. In the first half of my academic career I used to go to conferences a lot. But I now have very mixed feelings about them. Here I want to explore some of the reasons why I find conferences problematic.

Let me start by saying why I used to go to so many. In part this was because there was a time when I went to none at all. When I was a graduate student I was fortunate enough to be at a university which was considered a destination for the world’s academics, where famous names and rising stars would come on sabbatical, or for short visits, and where there was an almost continual stream of guest lectures and invited papers and seminars and symposia on pretty much any academic topic of interest. I was in a large and thriving department which had academic staff and graduate students from all around the world. In this environment of often heated discussion, I had a strong sense of high stakes and intellectual challenge. I’ve never known anything quite like it, before or since.

In this context I had very little need or desire to go to conferences. But this was not only because I believed that it was pointless to go there to get what I could already have here. Other factors were in play. One was that I had a feeling that graduate students were not really welcome at conferences. They were for the “grown-ups”. Although these might very well be many of the same grown-ups I saw in my university, I believed that conferences were where they went to be among their peers; all sorts of stuff would be discussed and argued about which was “not suitable for children”. It isn’t that I thought they were up to “campus novel” shenanigans – who knows, maybe some of them were – more a sense of there being a hierarchy whose boundaries it would be unwise to trespass across. Another important factor was that, status or shyness problems aside, no funding was available for graduate students to attend conferences. When some of us did go, the options were either to win a scarce and highly competitive travel grant, or to be somehow subsidised by an academic with a grant or other funding (patronage, in effect), or to be self-funding. And even in those halcyon days of grants and good employment prospects for newly minted PhDs, those of us without family money were generally skint. In all of these ways – good and bad – the conference system was a good reflection of the academic world more generally. Both self-satisfied and anxious, both obsessed with networking and with putting barriers in the way of networking, both a republic of letters and a highly stratified and economically unequal feudal system.

In the succeeding 20 or so years some things have improved. I think that today more support is available for graduates to attend conferences, it is considered more important for them to do so in terms both of networking and career development, and in terms of intellectual exchange. And my impression is that academic life has become somewhat less feudal and hierarchical, at least in the humanities. But perhaps I would say that, from the comfort of my professorial chair. Nonetheless I think the conference system itself has changed very little. When I first started going to conferences I was terribly excited. Partly this was sheer excitement at joining the wider academic community: I would be presenting my work to people who had never heard it before; I would get useful feedback; my ideas would be tested by tough (but, I hoped, fair) criticisms; I would get questions which might make me think again about some things, or open up new questions and new lines for research; I would meet like-minded people working on similar (but, I hoped, not exactly the same) things. I might even make some new friends. I was also a bit afraid – that my presenting style would be bad, or I would mismanage my time, or that I would not be able to deal with questions, or that the wider academic community would think I was a bit of a berk. With one exception, when, I can confirm, I really was both out of my intellectual depth and a bit of a berk, my experience was indeed generally positive. However, even in the honeymoon period of my relationship with conferences, I had reasons for disquiet.

Conferences are good places to see academic tribalism and status games at first hand. Time and again I have seen cadres of staff and students from well-known and prestigious departments move around en bloc, largely keeping to themselves and making critical comments both about individuals who don’t have such a cadre, because they come from a less well known institution, and about cadres from other institutions seen as “the competition”. The framing of these comments is often in terms of “who’s good” and who isn’t, and about method (“we” do these things the right way and “they” don’t). I’ve seen senior academics hold court, and I’ve seen senior academics snub people because they are low status, or because their PhD supervisor is someone the senior academic is having a feud with. I’ve seen graduate students seek out academic “stars” merely to be able to say that they have met (and sometimes, so that they can then ask for a favour later on, on the ground that they’ve met).

None of this is particularly surprising; humans do what humans do, wherever humans are gathered in numbers. But it is not the normative story we tell ourselves about conferences. That story is about conferences being a place where status is left at the door, where there is a free exchange of ideas, where the republic of letters is made flesh. But even if we have an unusually egalitarian and open-minded collection of academics, and even if everyone is polite and respectful, there are still structural things which make conferences frequently dispiriting affairs.

Conferences are expensive. They are expensive to put on; and this usually means they are expensive to attend. They are expensive to put on because even small conferences need a large, well-serviced room, with support staff, audio-visual aids, refreshments and so on. They can require booking systems, payment collection systems, arrangements with accommodation providers (hotels or university halls), and travel partners. International conferences might need translation services. These are minimum requirements. If the organisers take equality and diversity seriously – as they should – then there are access and communication support needs, childcare, and support for accompanying adults. A meeting of any complexity usually needs a professional conference organiser. Now it is possible to reduce the direct costs of all this – especially if it is hosted by a university, which will have many of these services as a matter of course. But this does not make real cost reductions – it simply transfers them either to indirect costs or to simple exploitation. Student helpers, acting unpaid, so that they can “benefit from attending the conference”. Because as everyone knows, sitting at the registration stand for hours on end really does benefit your research. And every academic who comes along to register is actually there to hear about your draft chapter. As if. Of course cost savings can be made, and it’s possible to have a lean and thrifty and thoroughly successful meeting. But this does depend a bit on managing delegates’ expectations (yes, we have a visitor programme – it’s called a bus, you pay the driver and he takes you where you want to go, if it’s on his route). And it also depends on the birthday party principle: we absorb the cost of the party (conference) because we know that we will get to go to other parties hosted by others when it is their turn. But it can be hard to persuade a Faculty Dean to underwrite the cost of a conference on this basis.

The Faculty or university absorbs the cost – and conference hosting can be a significant financial risk – but sees little of the benefit. It has to decide that the benefits in terms of reputation and staff job satisfaction and graduate recruitment are worth it, and a better investment than other uses of that money. Some conferences can attract external support, from a learned society or funding body or occasionally charitable or even commercial support. But these all come with quid pro quos and are not easy to get. And sometimes that external support will anger and alienate a significant proportion of the delegates (which is why bioethics conferences rarely, if ever, seek support from the pharmaceutical industry).

The cost of conferences is therefore transferred, so far as possible, to the delegates. There is no right to attend a conference, in the sense of it being a positive entitlement. That said, many conference organisers do try to help some delegates overcome cost barriers to attendance – bursaries for students, which may offset some part or even the whole of the registration fee. Differential fees according to income bracket, early career status, or country of origin are often tried. Nonetheless all of these things are imperfect – the link to affordability is crude at best, registrants are expected to be honest about their income or career or country-of-residence status, bursaries are usually few and not necessarily awarded to the most in need or most deserving, and so on. And rarely do any of these fee structures apply to accommodation or travel. Travel grants can exist, but they are hard to get. Conference costs might be low, if the conference is in a low-to-middle-income country, but that tends to mean travel costs are higher, and the costs of the conference are never as low as they might be given that the expectations of international delegates have to be met.

An expensive conference to host; an expensive conference to attend. Who pays for all this? As noted, mostly the costs will come from the delegate herself. But this favours the delegate who has a high disposable income, or a generous conference allowance from her employer, or research grants which cover conference attendance, or a departmental subsidy from her employer. Some employers will just give all academic staff (and sometimes graduate students) an allowance which they can use at their discretion; most require the would-be delegate to explain why attendance at this particular conference is necessary. This may be because it would involve absence from ordinary duties (occasionally it may involve political considerations as well). But it will usually involve some justification on the basis of its academic importance. What in practice this usually means is – you can go if you are giving a paper.

There are lots of reasons to go to conferences which don’t involve giving a paper. I will discuss some of these below. But giving a paper is increasingly the minimum requirement for a funder or employer to cover the cost of an academic’s attendance. In my view this is disastrous. Either you have a cap on the number of papers, which excludes many people who might otherwise be able (or willing) to come. Applying that cap will, very likely, involve all those lovely implicit biases in publication which we love so much – so we end up with the usual people giving the usual papers on the usual topics in the usual ways.

As an aside, invited keynote speakers are often the worst case here: Prof. X always says the same things, because Prof. X hasn’t done original work in years, and also because Prof. X has been invited to attract delegates who haven’t yet heard Prof. X give his (usual) speech and he likes to “play the hits”. And the organisers know this, but have invited him not because he’s brilliant or original but because this is all part of the circuits of favours and marketing.

So let’s not cap the number of papers (and not have invited keynotes). We won’t just accept all paper proposals – we will have a peer review process to select only those which meet our expectations about quality and relevance to the conference theme. Oh dear. Here come the implicit biases again. In addition, there’s the problem that when most of us write conference abstracts they are more in the nature of a plan for work we hope to do between now and the conference. We might do the work, and find that we don’t think on the day of the conference what we thought on the day of the submission of the abstract. Or we might not do the work. And either way, the conference is not getting what it was promised. It might well be getting something as good or better than what was promised, or it might be getting ill-digested, under-prepared, banal rubbish. OK, now suppose our filter is reliable. We still have far too many papers for everyone to get to give the full-length seminar paper we’d all like to give in an ideal world. What do we do? We have parallel sessions. And we cut the length of the papers so as to fit the conference timetable. And what do we now have? A shambles.

It is 11:30. Or rather it isn’t, it’s 11:40 because the last session finished late and morning coffee has overrun. We have to fit four papers in before 13:00. Each of those has now lost 5 minutes. We can reduce the question time at the end of each, or maybe we just have a single question time for all the papers together. So each paper, either way, gets even less discussion time than before. One of the speakers is giving her paper in her second or third language, and this slows her down. It’s not her fault but there it is. One of the speakers drones on and on for 5 minutes beyond his allotted time, because the chair can’t get him to shut up; he’s a junior colleague who doesn’t know the topic but has been drafted in because the person who was supposed to chair is actually giving a paper in the other parallel session. She’s doing this because her co-author is ill and couldn’t come to the conference, pulling out at the last minute. It’s no one’s fault, and it can’t be helped. Half the audience walk out halfway through because they are off to the other session to hear their friend, a fellow PhD student, give a paper and he needs moral support. I have seen very bad papers which are obviously and irredeemably unpublishable; but I have also seen senior professors humiliate graduate students who are presenting work in progress and need a bit of mentoring. And so on. I am not exaggerating – this is normal. There is very little obvious “bad behaviour” here. It’s structural. The conference is badly designed; and I don’t mean this particular conference – I mean the conference-as-we-know-it.

In effect, for structural and economic reasons, we have a formal requirement on delegates (that they each be giving a paper, or have other sources of funding) which excludes many (most?) people who could usefully attend, and destroys the substantive rationale for having the conference in the first place – which, ostensibly, is to allow the presentation of papers and discussion of their merits and interconnections. I have been in parallel sessions in which the only people present were the chair and the speakers. I have also been in keynote speaker sessions in which there were 800 people in the audience. For different reasons, in neither case did we get the interactive, multi-party discussion we tend to think the conference is there to generate.

Having said this, the conference can produce other benefits. For instance, after hearing Prof. X burble on about his hobby horse for 40 minutes, for the second time in as many years, I had several enjoyable chats with my peers over drinks about how Prof. X was past it, how scandalous it was that he’d been given a platform again, how he and his colleagues seem oblivious to any work outside his own self-referential bubble and so on. This kind of conversation, though unworthy of us, unscholarly, and vicious in all sorts of ways, did go some way toward building affective bonds of community. It is arguably this that conferences do best. Conferences bring people together who might not otherwise have met; and they also bring people together who tend to meet only at conferences. Some of my oldest academic friends and interlocutors are people I have met at conferences. This has enriched my life, and also my work, though I think it has done so in that order. Conferences can give you a sort of oil check on what’s going on in the field and whether it is running smoothly. They are an opportunity to meet potential new colleagues and collaborators, and some conferences are effectively hiring fairs.

I sometimes wonder what would happen if we got rid of the papers altogether and met anyway. Some mid-points between that and the status quo do exist – meetings in which papers are pre-circulated and only introduced very briefly just to prime discussion (these tend to be small workshops rather than conferences, however, and that’s a very different animal). Or “poster sessions”, though these are, it is generally held, bad mechanisms for communicating discursive argument. They can work perfectly well for formal arguments, however. Indeed, if you can put it in a PowerPoint presentation, you can put it on a poster. Since so many people just read out their damn’ slides anyway… But poster presentations are despised by many, who feel insulted if they are asked to give a poster instead of a paper. Yet done well, a poster can bring people together for precisely the kind of intimate discussion of detail which is impossible in the 5-minute Q&A after the sainted 20-minute paper. There are no timing problems, no problem if people want to come and go, no problem if you think of your question five minutes after the session is over or want to ask another. Still, a lot of people (at least in philosophy and applied ethics) are decidedly chippy about flying 10 hours and spending thousands of pounds to go and stand next to a laminated sheet of A0 and hope someone stops to talk to them. And, more importantly, a lot of departments would refuse to fund such a trip.

So much for the people who actually get to the conference. Who’s left? Practically everybody. I’ve stressed the cost barrier. But the geographical problems go well beyond price – time is a scarce resource for everyone, and even if you could give up the four days for the conference, two days either side for travel are a serious obstacle.  Then we have a centre/periphery problem, or what we might call geographical moral luck. As I said, I was very fortunate in where I did my graduate studies – I didn’t really have to travel, as people would pretty much come to us. But this confirms a particularly insidious kind of arrogance: because we were where (some of) the action was, we could tend to assume that whatever action there was, was where we were. And our location, both geographical and intellectual, would be assumed to be normative for everyone else. Hence the unlovely sight of British and North American universities claiming some special role in addressing Grand Challenges in Global Health; an imperial mentalité which just doesn’t seem to be dying away. There is an elision between being in possession of the financial and technological capital, being historically responsible for much of the world’s current political and economic condition, and being in possession of moral insight into and authority over “what is to be done.” The conference system perpetuates this. Oh, and it does its bit for global warming too. Adversely.

Who else doesn’t get to go to conferences? Two obvious groups: those who are effectively disabled by the conference system (Deaf people and people with mobility impairments, to name but two), and those who have responsibilities for others. Some of the biggest and most important conferences are hosted at the most family-unfriendly times of year – in North America there is a particular practice of holding meetings between Christmas and New Year, conferences which it is effectively obligatory to attend if you are either hiring or being hired – which is everyone. Because who is not either looking for a job, or looking for students or junior staff?

Conferences – for all their flaws and all their mechanisms of exclusion – attract a particular kind of presenteeism. There is a strong academic version of the Fear of Missing Out. Apart from the social media plague of all your peers posting selfies in smart restaurants with all their (and your) friends having a great time while you are at home unblocking drains and sitting in curriculum design meetings, there is a sense that your work will go unread, and lose currency, if you don’t make an appearance at the conference to remind people that you exist. One solution is to chip in with sardonic remarks while others live-tweet or post on Facebook and Instagram. This works for me, but not so much if you are a beginning graduate student whom nobody knows and who has to maintain a reputation for being bright, promising and a good potential colleague. Many people have written about the difficulty of combining home and academic life and staying sane. The long-hours culture in academe is becoming notorious, and the effect this has on people’s hiring, promotion and tenure, and salaries, is much debated. What I want to highlight here is that if your family life is in any way complicated, then going to conferences becomes much more difficult; and if going is a requirement (either soft, in terms of maintaining your reputation in the field and awareness of what’s going on outside the restricted domain of publications, or hard, in terms of it being obligatory in your job role), then there is a structural injustice: conferences disadvantage you.

In conclusion: the conference, as we know it, is broken. It can be fun. It can be a context for genuine discussion and enlightenment, for sharing new ideas, for meeting interesting colleagues, for challenge and reflection. But in my view it currently does these things by chance and accident, and its design inhibits, rather than facilitates, them. I don’t know anyone who actually likes conferences. The people I know who go to most conferences seem to do so mainly so they can write their papers in airport lounges, those liminal spaces where they may not have a mobile signal and can be left in peace. This gives me a clue to why conferences continue: they are precisely a holiday from normal rules, a perfect excuse not to be doing something else we would normally be under an obligation to do. But just because they are sometimes a remedy for problems elsewhere doesn’t mean they aren’t equally sick in their own way.

There’s nowt so queer as folk

Here, bioethicist Richard Ashcroft argues that bioethics needs to broaden its scope by considering questions about character as well as the rightness and wrongness of deeds and the goodness and badness of outcomes.  He suggests that one way to get some of the dense texture of lived experience into ethics–and thereby give questions of character more of their proper ground and interest–is to philosophise by reading fiction.  In the course of a full-length novel, the making of choices seems more like stuff that people do and less like occasions for serene rational deliberation.  This idea is not new to philosophy, though it may be new to bioethics.  What is different in Ashcroft’s development of these ideas is his choice of novel.  Instead of Henry James, we have M. John Harrison’s Signs of Life, which belongs on a shelf marked ‘the new weird’.  In Ashcroft’s hands, the weirdness is philosophically important, and the weirdest elements are the people.

This poses a challenge to the more naively Aristotelian or eudaimonic versions of virtue ethics.  Even Nietzsche doesn’t really get at the oddness of root human desires, good as he is on their violence, contingency, lewdness, fleshy unreasonableness, etc.  It is certainly something for aspiring educators of character to think about.

Keeping It Real: a workshop

Thursday 7th July, University of Hertfordshire, 9.45-4.30

Room: N205 de Havilland Campus

‘Professionalism means caring for someone else’ – Legal educationalist Clark Cunningham.

Many ethicists claim that sound ethical judgment requires the development of virtuous dispositions. What does this mean for the education of client-facing professionals such as teachers, lawyers, psychotherapists and police officers?  What virtues do such professionals need, and how can they be developed in professional education?

Most work on virtue ethics goes on within the academic discipline of philosophy.  What insight can this work offer these professions and their trainers? And what insights can philosophy gain from encountering the realities of training professionals to engage with the public?

The Manifest Virtue project – led by Dr Brendan Larvor and Professor John Lippitt – seeks, through a blog and planned workshops, to explore these issues.

Our first workshop, Keeping it Real, will explore how various virtues (and, unintentionally, vices) are modelled in the education of certain professional groups.


09:45 Arrivals and coffee/tea/Danishes

10:00-10:30 Introduction and setting the context (BL, JL)

10:30-11:30 Professor Nigel Duncan (Legal Education, City University) “Playing the Wild Card”

11:30-11:45 Coffee/tea/water/biscuits/fruit

11:45-12:45 Chief Supt Jane Swinburne (Chair of the Ethics Committee, Hertfordshire Constabulary) “Embedding the Police Code of Ethics in the Hertfordshire Constabulary – just common sense?”

12:45-1:45 Lunch

1:45-2:45 Professor Joy Jarvis and Dr Elizabeth White (Education, University of Hertfordshire)  “Teacher education – a context for modelling professional virtues?”

2:45-3:00 Coffee/tea/water/biscuits/fruit

3:00-4:00 Karen Weixel-Dixon (Psychotherapy, Regent’s University) “Humility as a necessary quality for authentic relationships”

4:00-4:30 Plenary discussion

Attendance is free, but please register in advance by e-mailing Andrew Smith, School of Humanities Research Assistant (


Women in Philosophy: What Needs to Change?

This is the title of a book edited by Katrina Hutchison and Fiona Jenkins (Oxford: Oxford University Press, 2013).

Here is a review by Katherine Angel

Angel’s review raises and deliberately embodies some aspects of our basic question.

Its critique of professional academic philosophy goes in the same direction as Michael Barany’s remark that philosophy’s problems will not be solved by doing more philosophy of the same sort.  But Angel goes rather further.

Keeping It Real

In July 2013, I (BL) took part in a week-long Convivium on the Orkney island of Papa Westray. This was a meeting of law lecturers, medical educators, philosophers and theologians, plus a dramatist and an anthropologist, to discuss ethics in professional education, with particular reference to law and medicine. I came away deeply impressed at the systematic efforts in legal and medical education to inculcate a professional ethos in their students. One of the liveliest discussions between the doctors and lawyers was on the topic of how to assess students’ diagnostic interviewing skills. Is it best to use members of the public as subjects in the interviewing examination? This has the advantage of coming closest to reality, but it means that the exam is not standard—the students are not all assessed on the same task, because some subjects will present far more difficult problems, and personalities, than others. This is not fair and not acceptable in a high-stakes assessment. An alternative is to use actors, who can present the same scenario for each student—but this has its own drawbacks (not least the cost; it takes three days to train a ‘standard patient’). The chief risk with a standardised interview examination is that you train doctors or lawyers to interview the standard patient or client as designed for the test, but of course no such person exists in nature.

Listening to this, I became jealous of these clinical subjects that can give their students real things to do, advising real legal clients, prodding real patients, cutting up real dead bodies.  The surgeon trainer, Roger Kneebone, remarked that sending students to draw real blood from real arms is important because in addition to the bare skill of blood-drawing, they tacitly learn a lot of other, hard-to-articulate doctorly stuff (about how, in a professional manner, to touch and manipulate the limbs of strangers in ways that break normal social taboos, for example). He explained that medical students used to learn how to stitch wounds by practising on pieces of pig skin. This has the drawback that stitching a piece of material that you can turn around and over to get the best angle and light is not like stitching skin on a live body (for a comparison of medical stitching and tailoring, see this film featuring Prof Kneebone visiting Savile Row). So now, some medical students get some practice on fake wounds mounted on real limbs (with theatrical make-up to hide the join).

What part of this could we carry over to philosophy? We know that it is possible to do real philosophy with students.  After all, they are real people with real thoughts and real feelings.  Moreover, an argument really is valid or invalid, even if no-one in the room makes it in earnest.  Nevertheless, these realities do not always lead to authentic philosophy in the classroom.  I was once teaching a module on Hegel and moderating the marking of modules on Kant and Kierkegaard.  Some of the students who wrote essays for the Kant lecturer explaining that Kant’s project succeeds, also wrote essays for me explaining how Hegel’s criticisms of Kant were wholly successful and essays for the Kierkegaard lecturer (JL) on how Kierkegaard revealed Hegel’s philosophy to be a sham.  It is easy to see how they might imagine this to be a rational grade-maximising strategy. Longer experience in philosophy teaches that gaming approaches like this lead to shallow learning and thence to mediocre grades. Grade-maximising is a reliable recipe for not gaining the more valuable gifts that philosophy has to offer, even if it does raise the grades of a student who has decided in advance not to do any deep learning.

One response to this inauthenticity would be to change the curriculum so that it demanded more self-examination from the students.  This is a legitimate angle, especially if it challenges the students’ self-understandings as well as developing and extending them.  “Know thyself” is an ancient imperative and we could do more of it in our curricula.  However, focusing on the self would lack one of the advantages of clinical work: it’s not about me.  One of the deep differences between the workplace and most other institutions that students encounter is that school, organised extra-curricular activities and university all exist for the benefit of the student.  Even if you commit a crime and are arrested and imprisoned, the prison is there to punish and reform you, and has people talking to you about your criminality, your anger issues and your drug and alcohol use.  We should not be surprised if some young people are self-absorbed and have a powerful sense of entitlement. What else should we expect when all the institutions they encounter are directed to their benefit?  Work is not like that; the employee, qua employee, is not an end but a means.  This is why going to work for the first time can be a shock, and why work with real clients and patients is educationally valuable in a way that simulations cannot be.  As it happens, many university students now have part-time jobs, so they already know that they are not the centre of the world. Indeed, many of them work in retail, so they know plenty about interacting with clients and customers. One of the law lecturers at the convivium observed that “Professionalism means caring for someone else.” Many students already know this from their paid work. But this experience is disjoint from their philosophical studies.

Ideally, I’d like to find a philosophical task to give to students such that they would harm someone other than themselves if they fluffed it.  Then, their grades would not be the highest stakes in the activity. I conjecture that many students feel no compunction about grade-chasing because there are no serious rival interests—they believe that they don’t seriously hurt anyone else if they pursue their studies cynically. Even if they acknowledge that their grade-chasing may damage the educational experiences of classmates and hurt the lecturer’s feelings, this is unlikely to be decisive because these stakes seem low compared to the importance of their grades. Attitudes might change if we could find a philosophical activity that, like blood-drawing, wound-stitching or clinical legal work, had high stakes for someone else.

So far, the nearest I’ve got is:

  • group presentations where the group gets the same grade (so free-loading may reduce the grades of other students in the group);
  • taking students to teach in secondary schools; and
  • (as part of a module assessment) having students edit each other’s essays.

None of these raises stakes for others high enough to challenge the students’ own grades for supremacy.  Another possibility might be to have final-year students mark the work of first-years.  Marking a philosophy essay involves philosophical reflection if you do it properly; it’s not checking the essay against some model answer.  This, though, has obvious quality-assurance obstacles. Requiring students to mentor or tutor other, less advanced students is another option, but all of these would be difficult to assess—we would be in the same spot as the doctors and the lawyers, trying to design a clinical practice assessment that is both realistic and fair.

There is another aspect to clinical practice that throws up a direct challenge to philosophical ethics. Part of the value of law-clinics, as the law lecturers at the Papa Westray convivium explained, is that they are diagnostic of selfishness and other character flaws that can lead to professional misconduct. They presented a four-part model (due to James Rest and Muriel Bebeau) of how professional judgments can fail ethically:

  1. Moral blindness (this is usually the case where conflicts of interest lead to malpractice, or where the client wants something that may harm someone else)
  2. Faulty moral reasoning (compatible with moral awareness, this is a failure to think through situations where there are rival interests in play)
  3. Lack of moral motivation (failure to give ethics its proper importance in competition with other proper professional interests)
  4. Ineffectiveness in implementing ethical judgments, due to lack of interpersonal skills.

According to the Carnegie Report, moral sensitivity, moral reasoning ability, moral motivation and implementation skills can all be developed in law students. However, this is only possible ‘in role’, either through law clinics or classroom role-play, so that the student moves from observer to actor. These experiences can then provide material for reflection. Thus, this four-part model can be the frame for an effective curriculum in professional legal ethics. (Here I follow Clark Cunningham.)  As I understand it, this approach is not standard in the US or the UK, but the studies undertaken so far seem promising.

Classroom philosophy, as we currently practise it, rarely works on the fourth item in this list.  Much philosophical ethics is content to work out what the right answers are (and to think about the logic of the working out, what rightness means, etc.). Insofar as this is true, this too is a failure to keep it real. We philosophers might profitably look at education in client-oriented professions such as law and medicine to see how we might repair this. Taking responsibility for other people seems to be the surest route to diagnosing and building resistance to the four moral weaknesses listed here.  If this isn’t feasible, it may be possible to use imaginative classroom work, which might include reading fiction, watching films, creative writing and role-play as well as critical examination of the works of philosophers, to work on all four elements.

This question of reality, of having stakes in the room higher than the students’ grades, has a bearing on our base question about the modelling of philosophical virtues. There may be some virtues and vices that are only apparent when dealing with people outside the profession. Attending research seminars and conferences may reveal to students how professional philosophers deal with each other—but what about contact between philosophers and everyone else? In any case, it may be that, in the presence of someone else’s interests and vulnerabilities, some of the characteristics prized by philosophers (such as conceptual precision or speed of thought) may not seem so important after all.

John Lippitt and Nigel Duncan suggested improvements to the text.

Matthew Inglis and Nigel Duncan suggested these options for further reading:

Bebeau, M. (1994) ‘Influencing the Moral Dimensions of Dental Practice’, in Rest and Narvaez (eds), Moral Development in the Professions: Psychology and Applied Ethics, Hillsdale: Erlbaum.

Jones, Ian & Inglis, Matthew (2015) ‘The problem of assessing problem solving: can comparative judgement help?’, Educational Studies in Mathematics, Vol. 89, Iss. 3, pp. 337–355.

Jones, Ian & Alcock, Lara (2014) ‘Peer assessment without assessment criteria’, Studies in Higher Education, Vol. 39, Iss. 10.

Nicolson, Donald (2008) ‘“Education, education, education”: Legal, moral and clinical’, The Law Teacher, Vol. 42, Iss. 2, pp. 145–172.

Nicolson, Donald (2013) ‘Calling, Character and Clinical Legal Education: A Cradle to Grave Approach to Inculcating a Love for Justice’, Legal Ethics, Vol. 16, Iss. 1.

Thought For The Day

In this post, BL reflects on Nigel Warburton‘s thoughts on Thought for the Day.  JL’s comments helped improve this post, though he is not responsible for it. 

The philosopher Nigel Warburton has suggested that the Thought For The Day (TFTD) slot on the BBC Radio 4 Today programme should be converted into a philosophy slot. He has made a career of presenting philosophy to audiences who don’t have to pass exams in it, so I take his judgment on this seriously. Nevertheless, I don’t think it’s a good option for philosophy, for three reasons: the format, the frame and the established conventions. For similar reasons, I wonder whether it is good for religion too—but that is a question for others.

I want to pursue this because it is an opportunity to explore the conditions for successful philosophical practice. If the Today Programme is a bad place to do philosophy, are our classrooms, philosophy societies, cafés philosophiques, podcasts, blogs, etc. good in the corresponding dimensions?

The format

TFTD is a monologue of two minutes and forty-five seconds. One reason for wondering whether it’s possible to do any philosophy in this format is that philosophy is dialogical (philosophy isn’t alone in this). Philosophy usually starts with some sort of tension, contradiction or disagreement, and while it might in principle be possible to carry out the whole business inside one person’s consciousness, really there needs to be the possibility of back-and-forth between several voices. All philosophers are in dialogue with others (or the books of others), even those like Hume and Wittgenstein who rarely referred to their sources. Even Boethius in his prison cell was thinking in terms drawn from books he had read. A brief moment of philosophical talk is rarely worth much unless one has a) access to some of the history of the discussion of which it is part and b) some opportunity to interrogate it. TFTD certainly fails on b), as there is no opportunity for listeners to engage critically with the contents of TFTD, either on air or online. This then is point one: TFTD is a monologue; philosophy is a dialogue.

Point one needs some elaboration, however, because even dialogue-writing Plato seems to have progressively abandoned the conversational format in his later works, for good reason. While earlier works have conversation all the way through, in the Timaeus (a late-period work), the conversation is just a preliminary set-up to introduce a long lecture on cosmology by one expert voice. Plato had to do this because, in the conversational model, no single thought ever gets developed in any depth. In the early dialogues, someone presents a naïve thought, Socrates chews on it for a while until it loses its plausibility, and then the chat moves on. This is unsatisfactory in part because it prevents Plato from doing justice to his opponents. He can’t present them properly if his semi-fictional Socrates harries them constantly. So if monologues were good enough for the mature Plato, perhaps they are good enough for TFTD?

This brings us to the brevity of the TFTD slot. Plato abandoned the conversational format in order to give his speakers a good long go at developing their positions. It takes time to present an alternative to the common sense of the day. On first hearing, an alternative view just sounds weird and easy to dismiss—there must be time to overcome the sense of weirdness and the initial objections. I wonder whether two minutes and forty-five seconds is long enough. In James Connelly’s post, we met Collingwood’s view that philosophical discussions are only worthwhile when the discussants know each other well. Part of the reason for this (though perhaps not Collingwood’s reason) is that misunderstandings are rife in philosophy. You have to talk for quite a long time before everyone in a discussion has a firm grip on its more abstract terms. It’s notable that the TFTD speakers who are most successful at overcoming the limitations of the format are the regulars with well-known doctrinal positions, who can use their two minutes and forty-five seconds to add to what they have said on previous occasions.

The frame

TFTD is broadcast at around 07:45 in the BBC’s talk-radio news and current affairs programme, on Radio 4, its ‘serious’ channel. The content of this programme is part of a connected web of current affairs that extends beyond the BBC into Parliament and beyond. It is the first element in the British national daily news cycle, and political managers try to use it to set the day’s media agenda. This makes the programme itself part of the news—its interviews and reports may have consequences. It also makes the programme rather cynical as the interviewers try to circumvent the media training and planned messaging of the interviewees. Much of it therefore consists of verbal wrestling between people who believe themselves to be quite important. On weekdays, this starts at 06:00.

Then, at 07:45, there is a quick burst of religion. As noted, this is a monologue, so it has none of the back-and-forth life of the rest of the programme, and its content lacks the credibility that comes from having survived interrogation by an interviewer. Worse, it has no consequences. The Today presenters never refer to it after they pick up the programme from the TFTD speaker. TFTD is as isolated from the great ongoing play of wealth and power that makes up most of the Today programme as the sports reports. Even if the speaker succeeds in casting a fresh light on a news story, it makes no difference to the subsequent reporting and discussion of that story.

The effective implication is that religion makes no difference to the conduct of the business and politics that fill most of the airtime. The isolated framing of TFTD tacitly presents religion as something high-minded that can be safely ignored. Philosophy should not be in a hurry to gain that title for itself.

The conventions

As noted, there is no feedback function for TFTD (except for the BBC’s general feedback channels). In an age where every opinion piece on the internet has a comment section, this lack of feedback is outdated and gives the TFTD slot a patrician air (which some of the contributors don’t deserve). This is not an accident. TFTD is the successor to a wartime innovation called ‘Lift Up Your Hearts’. It dates back to the days when national culture and broadcasting were in the hands of a tiny educated elite. As an indicator, the rate of participation in higher education in the UK in 1950 was 3.4% of the population. The remit of the BBC at that time was to make the cultural riches of that elite available to the whole country. That aim was laudable, but it depended on a culture of deference among the 96.6% of the population on the receiving end. That deference has, thankfully, gone. Consider the fate of another mid-century BBC broadcast, The Brains Trust. This was an immensely popular radio show in the 1940s, and then a television show in the 1950s. A panel of distinguished brains took questions on any subject, without prior notice, and the pleasure lay in watching them cook up and bicker about their answers. There was a brief attempt to revive it at the start of this century, but it failed, because, I think, people no longer accept that a degree from an ancient university and a plummy voice entitle a person to a hearing on any and every subject. I’m sure the speakers would reject this, but to my ear, TFTD is a survival of that earlier time. The tone and content of TFTD still depend on the notion that some people are especially qualified to offer edification to everyone else, while the social order on which this notion depended has gone. This change matters for philosophy, too. In a post on philosophical performances, we linked to a Beyond The Fringe spoof of mid-century Oxford philosophy. Part of the joke there was the spectacle of philosophy professors talking about ordinariness and ordinary language—using a version of the English language spoken by almost no-one else in the world.

Aside from the Reithian heritage, TFTD suffers from the obligation to comment on a headline or current anecdote from a religious point of view. This junction between the temporal and the eternal is often rather tenuous. A report by Ekklesia found that “for as many as a third of the scripts studied, the religious link enters like a rather forced afterthought, tagged on in order to legitimise or ‘baptise’ the opinions and comments upon which the Thought is grounded.” There is no reason to think that philosophers will do any better at finding the relevance of their tradition in the rush of current affairs. Indeed, I would expect most academic philosophers to be worse at it than religious leaders, who try to make these links all the time. It takes great skill to say, “these recent events put me in mind of the books I have been reading daily since my youth” without sounding like a stopped clock.

We have been here before. Hegel, in the preface to the Philosophy of Right, explains that philosophy is concerned with the general rational order of things, but becomes ridiculous if it meddles with what are, for philosophy, inessential details on which philosophers have no special expertise:

Plato might have omitted his recommendation to nurses to keep on the move with infants and to rock them continually in their arms. And Fichte too need not have carried… his passport regulations to such a pitch of perfection as to require suspects not merely to sign their passports but to have their likenesses painted on them.

(Tr. TM Knox p. 11)

Hegel’s conception of philosophy is not widely shared nowadays, but some version of this point holds up, I think, for most contemporary philosophical projects. Philosophers seeking to demonstrate their relevance to the world by bringing their understanding of epistemology or modal logic to bear on the day’s headlines run a severe risk of looking as silly as Fichte did when he set about designing passports.


These, then, are the shortcomings of TFTD as a vehicle for philosophy. Perhaps I am wrong about this. After all, some of these difficulties are fixable; the format could be altered to allow the continuity and feedback of a dialogue and the de-haut-en-bas Reithian heritage might be consciously rejected. Perhaps Nigel and others will find ways to use this slot to do real philosophical work. I hope so. My concern here is to use Nigel’s proposal to think more generally about the conditions conducive to good philosophy, and not just on the radio.