Challenges to Faith in Higher Education

Finlay Malcolm is a research fellow at UH. Here, he describes research into the experiences of students with religious faith. University can be an unsettling experience for such students. Finlay argues that in order to understand this and do something about it, we must first find the right way to think about faith.

The experiences of religious students in higher education are a major research interest for sociologists of education. Many of their discoveries are pertinent to philosophical treatments of religious faith. But equally, philosophical accounts of the nature of faith can facilitate sociological research into religion in general, and faith in particular. Exploring these connections is a research activity of mine as a philosopher working on religion and faith. I want to propose a few ways in which philosophy and the sociology of education can positively interact on questions concerning religious faith. The ideas I’ll suggest have a bearing on best practice in both pedagogy and student pastoral work.

To illustrate how philosophy and sociology can positively interact, consider the 2013 publication by a group of sociologists (Guest, Aune, Sharma, Warner), Christianity and the University Experience: Understanding Student Faith. This work is the result of three years of quantitative and qualitative research into the experiences of Christian students in UK universities. The study produced many novel and interesting insights. One of the main areas of discussion was the way that the faith of the students is shaped and developed throughout their time at university. Two particular challenges are thought to be significant in this regard. First, there are disruptions to the familiar patterns of enacting one’s faith in daily life, and the encounters one has with different ways of practising one’s faith. Second, there are the familiar intellectual challenges that conflict with a student’s faith, where, in many cases, these arise from studying the university curriculum, but also come from interactions with other students who hold a different set of beliefs.

This study noted that these sorts of challenges were widespread amongst university students, that their overall experiences did influence their religious faith (to varying extents), and that in a few cases, the university experience led to a loss of religious faith. These observations prompt several questions that are of interest to philosophy. For one thing, we might wonder how conflicts of faith are experienced in similar or different ways by university students. What is the phenomenological profile of these experiences? Are they intellectual, emotional, or do they cause a loss of overall direction? What is it like to experience a loss of faith? One place to begin addressing these questions is with the more fundamental question: What is the nature of the faith of university students (Christian or otherwise)?

In general, the sociological definition of religious faith is intentionally broad-stroked. In this study, for instance, faith is attributed to a student when he or she ‘identifies’ as a Christian. But people identify as religious for many differing reasons, for instance, family heritage or upbringing, beliefs held, practices undertaken, long-term commitments, trust in a deity, subscribing to a set of religious creeds, or adopting a religion’s ethical system as a moral compass in life. When a person has faith in the sense that he or she identifies as belonging to some particular religion for one of these reasons, philosophers sometimes refer to this as global faith.

Determining which challenges a student faces, and the experiences that student has, when these are thought of as challenges to a student’s global faith, will depend on the sense in which the student has global faith. For instance, if the student’s global faith is held in virtue of her practices, then the challenges will be challenges to her practising her religion. Or, if the student’s global faith is held in virtue of his family heritage, then the challenges will be challenges to his heritage as belonging to a particular religion. Of course, the global faith could also be held for numerous reasons, and the challenges may also be manifold. Presumably, challenges to global faith vary markedly, as do the student’s experiences. In some instances, these challenges may be reasonably unproblematic for the student, if, for instance, they are a matter of being unfamiliar with the surroundings in which one now practises one’s religion. These kinds of cases may be thought of as essentially positive, character-forming experiences for the student in question. Even when the student’s global faith concerns firmly held beliefs, and these are challenged intellectually as part of the university curriculum, it doesn’t follow, necessarily, that this is a problem for the student. Suppose that we view religious beliefs simply as irrational views about what the world is like. On this view, a conflict or crisis of faith is little more than an adjustment in what a student believes to be true that arises from discovering novel information and arguments – an invaluable experience, and the very thing that a good education should provide.

However, global faith – self-identification as belonging to a particular religion – is not the main interest of recent philosophical writing on faith. The concept is generally seen as too broad and imprecise. Perhaps this is a mistake, and more philosophical attention should be focussed on global faith, and philosophers could look to sociological research in developing their work. Nevertheless, philosophers have focussed on much narrower conceptions of faith, of which two in particular stand out: faith-that something is the case, and faith-in some object, like a government, or a person, like a spouse. Should someone experience challenges to, or a loss of, either of these kinds of faith, it could negatively impact on the person in a way that is not always the case with one’s global faith. Despite the potential negative implications of suffering from conflicts or losses of these varieties of faith (faith-in and faith-that), these concepts are not specifically deployed in sociological research. If they were, though, I believe they would lead to a clearer understanding of religious students, and of the challenges to faith that they experience.

Consider the first variety of faith, for example. When someone has faith-that, she has faith that certain propositions are true. In religious cases of faith, a person might have faith toward numerous propositions, for instance, that God exists, that there is an afterlife, and that the scriptures are divine revelation. Faith-that is generally thought to require belief or acceptance that a proposition is true, and for someone to be in favour of the truth of that proposition. One cannot, for instance, have faith that God exists, whilst wanting it to be false that God exists. Moreover, we generally have faith toward something when we regard that thing positively. So, whilst we may have faith that democracy will succeed, it would be unusual to have faith that we will contract a serious illness. With the second variety of faith – faith-in – one person is in a relation to another person or object, through either trust or reliance. We have faith in democracy, or a friend to make his business a success, or a politician to lead in a virtuous way. In each case, we either trust or rely on the object of faith.

Having faith in something, or that something is the case, is often thought to require being resilient about that thing. So, if I have faith that Theresa May is a strong and stable leader, then my faith won’t be lost, even if she fails, from time to time, to be strong and stable. My faith in her will persevere, at least for a while, through challenges and counter-evidence. Moreover, since I am in favour of the truth of that which I have faith in or toward, then it follows that I desire to see that which I have faith in come to pass. I cannot have faith that there is an afterlife, for instance, if I don’t desire it to be the case that there is an afterlife. Indeed, this conative component to faith may explain why my faith is disposed to persist through challenges and counter-evidence. Faith also has an effect on my overall dispositions and commitments. When I have faith in a God, and faith towards various religious propositions, then I will be disposed to commit myself to a particular way of life, given those commitments.

So, both faith-that and faith-in are thought to require, first, a disposition to persevere with one’s faith; second, a desire for the realisation of the proposition or affection for the object to which one’s faith is directed; and third, a disposition to adopt an array of commitments and plans to a particular course of action or way of life.

Now, if faith is understood in terms of these philosophical analyses of both faith-that and faith-in, we can make several predictions concerning the nature and impact of challenges to a student’s faith. First, faith requires being in favour of the truth of something, and hence being opposed to its falsity. Conflicting information – perhaps arising from the university curriculum – can thereby seem to undermine what the faithful person desires or hopes for. Plausibly, in some cases, students will develop various strategies in order to retain and protect their faith. This may involve remaining part of a close religious community with others of shared faith, who agree with one’s own views. However, some students may disengage to an extent from their studies, with a corresponding effect on their participation and grades. These considerations are particularly applicable to best pedagogical practice.

Second, if someone is disposed to adopt an array of behavioural commitments and plans, then challenges to faith encountered in education do not merely impact on the beliefs of the student, but also on these associated commitments and life plans. When faith is challenged or even lost, these commitments and plans will seem pointless, leaving someone feeling a loss of direction and an overall sense of demoralisation. Undermining plans may accordingly erode what the faithful person cares about, plausibly leading to a loss of personal identity. Moreover, since religious faith often concerns matters of utmost importance, including eternal life and the meaningfulness of one’s very existence, losing faith in these matters could, in extreme cases, cause an existential crisis and feelings of angst, hopelessness and depression, not unlike suffering a bereavement.

The practical implications of a loss of faith are clearly relevant to policies concerning student wellbeing, and pastoral practices in the university sector that support students through times of hardship. One particular concern is over student retention. If a student’s loss of faith can be as distressing as I have suggested, it seems possible that some students could leave their courses altogether.

By working with the definitions of faith-in and faith-that, which are more precise and robust than mere identification with a religion, sociologists can gain a clearer understanding of the religious faith of university students. These definitions can also helpfully reveal the nature of the challenges to faith that students experience, and can provide for fruitful avenues of further research, particularly concerning interdisciplinary work. Moreover, sociological studies of these varieties of faith could support an empirically-informed philosophical account of faith, and could feed into university pastoral and pedagogical policy. These are some of the goals I am working towards in my own work, and mark one way by which philosophical and sociological research on religious faith may positively interact.

(Many of the ideas outlined here are shared and co-developed with my collaborator, Michael Scott, from the University of Manchester).




What does a sociologist have to say about the pedagogy of philosophy in higher education?

In comments on an earlier post, Michael Barany argued that philosophy cannot solve its own problems without help from other disciplines. Here, sociologists of education Nick Melliss and Claudia Lapping give their view.

As sociologists, one of our main interests is the organisation and function of knowledge in relation to social structures, institutions and identities. Sociologists since Emile Durkheim, one of the most influential founders of the discipline, have been interested in the role of culture, ritual and everyday social practices in binding a society together. Culture, ritual and social practices are here understood as forms of shared knowledge that sustain or bind together collectivities of individuals. Research in the sociology of education, our particular field, focuses its investigations on the organisation and transmission of curricular knowledge within education institutions. We aim to de-naturalise both curricula and pedagogy: to show how they are constructed within, and in the ongoing interests of, the existing, hierarchical organisation of the social world.

Nick is currently completing his doctoral thesis, looking at university seminars in three different disciplines: Midwifery, Classics and Education. In particular, he’s looking at the construction of sacred and profane knowledge, drawing on the work of both Durkheim and Basil Bernstein, a key figure in 20th century sociology of education. For both theorists, sacred knowledge is simultaneously abstracted from the everyday and deeply implicated in both social structure and individual identities. Nick’s analysis of the seminars that he observed can help to illustrate the meaning of the sacred for both professional and academic identities.

In the Midwifery programme that he observed, students were explicitly encouraged to reflect on what it means to gradually take on the identity of a midwife. Nick’s analysis explores the relationship between sacred knowledge and identity in sessions on reflective practice. These were relatively unstructured sessions where students discussed both their own experience, and ideals of the midwife as ‘expert of the norm’ and as someone who works in between the more scientific medical world and the intimate experience of pregnant women. This complex combination of experience and ideals can be understood as the sacred knowledge of the midwife. For the classicist, in the Homer translation tutorial that Nick observed, the sacred knowledge was not, as you might initially expect, formal grammar, or even knowledge of key texts or sources. Rather, Nick argues, the sacred knowledge was the need to doubt, to continually refer to dictionaries and other sources, to check and refine the translation of a particular word or phrase: although the apparent focus of the tutorial was to check the student’s translation of a section of the Iliad, throughout the session the tutor continually modelled doubt, curiosity and the process of checking. Nick’s interpretation of these instances reveals the way the sacred seems to define something unique about what it means to take on the identity of either midwife, or translator. This leads to the question: What is the sacred knowledge of the philosopher today?

It is also worth noting that, while the transmission of sacred knowledge on the midwifery course is relatively formalised, with significant time allotted to reflective practice within the curriculum, the necessity of doubt in the process of translation was not formally recorded in the curriculum documentation of the Classics degree. In the more traditional disciplinary fields, perhaps, the most sacred element is not spoken or named out loud. Rather, students are expected to gradually orient themselves to the identity of the discipline, without explicit teaching. While the naming of the sacred element will always risk reduction, the oversimplification of a unique and almost magical element of identity, the refusal to name risks exclusion. This observation is in line with the argument of Basil Bernstein’s classic paper, ‘Class and Pedagogies: Visible and Invisible’. Bernstein argues, very persuasively, that the apparently open, egalitarian child-centred pedagogies of the 1970s were in fact modelled on middle-class cultures and excluded children from other cultures, who couldn’t interpret the unspoken rules by which they were being assessed. Again, perhaps, the question for university teachers today, and for teachers of philosophy, is: who might we be excluding in the, to us, self-evident and open practices of our classrooms?


Trigger Warnings, Free Inquiry, and Avoiding Harm

This entry is by Pat Stokes, who has featured in this blog before.

Once a year, every year, I stand in front of a room full of teenagers and talk about erect penises.

This is not some weird hobby of mine. Someone – the Australian taxpayer, ultimately – pays me to do this. But talking about penises isn’t the problem. The problem is that I warn the class beforehand that we’ll be discussing explicit, if thankfully brief and fairly highbrow, depictions of sex.

Warning students ahead of time like that makes me a terrible educator, you see. All these cotton wool, nanny state ‘trigger warnings’ are making these kids soft and useless. The workforce will eat them alive. Or so generous Twitter users keep telling me.

No, the only way to prepare these impressionable young minds for the horrors of something called the ‘real world’ is to spring descriptions of tumescent phalli on them totally unannounced. It’s the only way they’ll learn.

A genuine tension

Trigger warnings have become associated, rhetorically at least, with the broader issue of freedom of speech on campus and the phenomena of ‘safe spaces’ and no-platforming. Their aim is broadly similar. They’re meant to avoid harms that can be caused by discussing topics that, due to personal or social history, students might find distressing. They don’t shut the topic down, but they give students some capacity to manage their exposure. That, at least, is the theory.

The tension between the needs of open debate and the need to avoid harm is, I’d suggest, a real one, and needs to be acknowledged. On the one hand, if we’re going to model good argument and proper academic inquiry we should follow arguments where they lead, not just where we’d like to be. University should be a place where you are exposed to new and often challenging ideas. Discomfort can be a sign of emerging knowledge.

Academic debates and teaching don’t occur in a vacuum, however. For all the talk of preparing students for the ‘real world,’ there’s only one real world, and universities are inside it just like everything else. We don’t step outside the world while we’re thinking and teaching about it.

You may fancy yourself a locus of pure, ahistorical reason, a mere conduit for ‘the ideas themselves.’ But you’re not. You’re a flesh-and-blood human being partly defined by your history and social position, talking to other flesh-and-blood beings. And that makes your behavior, in the classroom and on the page as much as anywhere else, subject to ethical evaluation. You can’t step outside your ethical and political relationships to those around you. Ethics has no outside.

Trauma ahead

So, back to the penises. I teach a large introductory philosophy unit, originally devised by philosopher-bassist Stan van Hooft, called ‘Love, Sex, and Death.’ We use these dimensions of human life to introduce students to core aspects of philosophical methodology and content. We ask them to wrestle with texts ranging from Plato and neo-Thomistic Natural Law theory to radical feminism, and with questions from ‘what is love?’ and ‘should euthanasia be permitted?’ to ‘is it ok to watch porn?’ and ‘are there by definition no non-substitute masturbators?’ (long story).

The penises are all found in an influential paper by philosopher Martha Nussbaum that contains excerpts from famous literary depictions of sexual objectification. I let students know at the start of the class this material is coming, as well as flagging it beforehand in the lecture. Likewise with other potentially difficult or sensitive topics.

Why? Because there are five hundred students in this unit, and I don’t know them. I don’t know how many of them are victims of sexual assault, but the statistics tell us the number will be distressingly high. We know that sexual objectification is not a mere abstract question but part of lived experience for at least half the class. I also know, because some of them have told me, that students sitting through our discussion of the ethics of sex work have themselves worked in that industry. We look at the arguments around same-sex marriage with LGBTQI students, and pro- and anti-abortion arguments with, no doubt, women who have had abortions. We discuss the badness of death and the question of euthanasia knowing at least some of those present will be carrying recent or sudden loss, past suicide attempts, or terminally ill parents.

We still teach this material. We must teach this material. These are living issues we have no choice but to talk about, and students need to be taught the philosophical skills and background to do so properly. But you can’t talk about love, sex, and death without talking to people who have been injured by all these things. To discuss these is, unavoidably, to stick coldly abstract fingers into old and never-quite-healed wounds. Nor can you talk about race or gender or sexuality without talking about and reactivating histories of power and hurt.

An excess of caution?

So yes, there’s a real problem here. That problem – the unavoidable tension between the intellectual demands of unfettered inquiry and the ethical demand to avoid causing harm – is precisely what trigger warnings, at their best at least, are meant to help us manage, if never solve. They aren’t designed to stop intellectual inquiry, and shouldn’t be allowed to. Their aim is simply to prevent it causing more harm than it has to.

Among academics I’ve spoken to about this (read: I have no real empirical basis for the following claim whatsoever), “trigger warnings” seem to be regarded as nothing more than basic courtesy. Indeed, when I’ve described my practice – a polite, brief warning that there’s explicit content coming up – even people opposed to trigger warnings tend to concede that’s fine. What they object to are all those other trigger warnings they’re sure are out there.

And sure, you could try to argue that the problem isn’t trigger warnings per se, but that trigger warnings have become too comprehensive and thus too restrictive. It’s not hard to find examples that are, on their face, over-the-top or linked to improbable harms. The infamous Wilder edition of Kant with a warning about his dated views (and yes, Kant says some pretty frightful stuff) might well seem excessive or even insulting.

But at least it errs on the side of caution. Long lists of potentially ‘triggering’ topics may be unworkable and ultimately self-defeating, but being aware of too much seems far preferable to remaining blissfully unaware of harms caused.

They’re also a good reminder of the importance of another virtue we should be modelling in both teaching and research: intellectual humility. Are law students going to be traumatised by hearing that something “violates the law”? At first blush we’d probably imagine not. But did the possibility occur to you beforehand? Do you know how that word affects people? What if you found out it does actually cause some people distress – what then? How will you manage that? And if you missed that, what else have you missed? What other consequences of your words have you failed to foresee? If reason and experience alone can let you down like that, how else are you going to know the true effect of your words and actions other than by listening to what people tell you?

Difficult topics don’t go away, and nor does the need to teach them. Trigger warnings are one tool available to us to try and negotiate the rough terrain. But they won’t replace an indispensable set of virtues: tact, sensitivity, consideration – and caution.


Moral philosophers and virtue: What is wrong with being bad?

This contribution is from Andreas Eriksen

The podcast Philosophy Bites once asked Ronald Dworkin “Who is the most impressive philosopher you have met?” Dworkin’s first response was John Rawls: “He is one of those very few philosophers whose saintliness infected the philosophical diction. Reading him has the enormous advantage that knowing him makes what he says sound true. He is an example of what he says.”

But is it particularly meritorious for a moral philosopher qua professional to exemplify their own theory? This post is an attempt to articulate a distinct philosophical defense, where I argue that the subject matter of moral philosophy can itself require the form of understanding that guides virtue. I will continue somewhat anecdotally and then make a more analytical point about what moral understanding means.

In the preface to his Morality: An Introduction to Ethics (1972), Bernard Williams said that writing about moral philosophy should be a hazardous business, partly because “one is likely to reveal the limitations and inadequacies of one’s own perceptions more directly than in, at least, other parts of philosophy.” This is surely not just true of writing and developing moral philosophy, but also of teaching it. Teaching a theory is not reciting it, but rather giving it the voice it requires, explaining how it tries to answer a question, and taking a stand concerning its merits. The “hazardousness” Williams refers to could indicate that a form of bravery is required in doing and teaching moral philosophy; one must be willing to expose oneself to public disclosure of one’s grasp of what constitutes moral relations. This will not merely reveal one’s theoretical comprehension, but also something about one’s moral character. In the end, one must be willing to assert what really matters, not just to an imaginary impartial spectator, but also to oneself.

The basic idea that I extract from this is that teaching moral philosophy stands in a reciprocal relationship with genuine appreciation of moral standards: your moral sensitivity says something about the plausibility of what you teach (a generalization of Dworkin’s point), and what you teach says something about your moral sensitivity (my version of Williams’s point). But does any of this lead to the further claim that teachers of moral philosophy have to be morally virtuous?

In the preface to his Ethics: Inventing Right and Wrong (1977), J.L. Mackie acknowledges his debt to the classical moral philosophers. However, he emphasizes that he is in agreement with Locke that the “truest teachers of moral philosophy are the outlaws and thieves.” At first, one might think this makes sense if one takes outlaws as foils that highlight what is valuable about real commitment to moral values. That is, through outlaws we learn about morality in the way we learn about the human condition by contrasting it with animals. That was not Mackie’s point. Rather, he believed we should learn from the outlaw attitude of practicing rules of justice out of convenience, not as a response to an objective moral reality.

I believe this illustrates how the relation between teaching moral philosophy and possessing moral virtue depends on a substantive theory about the status of moral values. In other words, the requirements of excellence in teaching moral philosophy depend on what one takes the subject matter to consist in. For example, Mackie did not believe in objective moral values, hence outlaws provide paradigms of insight. Through their way of living together, they show how unnecessary it is to cling to the superstitious belief in moral objectivity. Allegedly, rational self-interest is the only enlightened foundation.

In turn, if one believes that moral philosophy can and should clarify the virtues as responses to objective moral values, then this seems to require of teachers a genuine sensitivity to the demands of kindness, justice, honesty and more. They must to some extent “see” what the virtuous person “sees.” Teachers who lack proper appreciation of these values seem deficient qua moral philosophers. There is something they do not get about their own professional subject matter, namely moral life. Of course, this does not imply that the kind of saintliness ascribed to Rawls sets a standard all must meet. But it does mean that one’s character must be sufficiently shaped by a conceptual space governed by moral values, so that one can at least grasp part of what virtue responds to (even though one may lack full virtuous responsiveness).

It is crucial to understand the attitude of appreciation in the right way here. The claim is not that teachers who lack the understanding that guides moral virtue will necessarily lack access to the correct propositional content. To the contrary, we can imagine non-virtuous teachers who possess the right beliefs, or at least they state largely correct intellectual claims to their interlocutors. By appreciation, however, I am thinking of a mode of awareness that goes beyond mere intellectual endorsement. The difference between appreciation and mere belief is evident in aesthetics; think of the dissimilarity between acknowledging that Bach was great and experiencing his greatness. Similarly, moral appreciation refers to a complex emotional responsiveness that is deeply integrated with character.

The type of appreciation that is at issue here is a form of understanding that guides virtue as described by Aristotle. In the Nicomachean Ethics, the virtuous person is described as having certain feelings “at the right times, about the right things, toward the right people, for the right end, and in the right way” (1106b20). The moral feelings have been habituated to create harmony between what one acknowledges as good and the kinds of actions one takes pleasure in. The virtuous person does not merely endorse certain moral propositions, but identifies with the values these propositions refer to. This identification makes the values appear in a distinct light. They appear as noble or worthy of allegiance, as opposed to just correct according to theoretical reasoning. For Aristotle, then, virtuous thought cannot be separated from emotional engagement. Learning to be good is not merely acquiring the right set of beliefs, but taking the moral content to heart, making certain responses to value part of one’s “second nature.” This Aristotelian theme has been acutely explored by many, but my use of the term appreciation in this connection draws particularly on Stephen Darwall’s Welfare and Rational Care (2002).

Non-virtuous moral philosophers fail to understand the parts of their own subject that are only available through this form of appreciation. Again, the claim is not that moral philosophers have a professional duty to be exceptionally virtuous. Rather, the point is that their lack of the form of understanding that guides virtuous action is a distinct professional deficiency. When this lack is revealed through immoral action it simultaneously reveals a lack of sensitivity to one’s professional subject. This claim seems to be phenomenologically supported. When moral philosophers transgress important norms, we are not only disappointed by the acts per se, but also by how these acts reflect on their appreciation of moral theory. “Didn’t they get it?” we are prone to think if we admired their work or lectures. Or perhaps the acts cast a shadow of doubt over the philosophical message they have tried to convey. Unlike Rawls, knowing them can make what they say sound untrue. That is, unless the teacher is a philosophical skeptic about moral values. I’m not sure what the judgment would be if Mackie himself became an outlaw.


Two kinds of naked, two kinds of blindness — Brendan Larvor

Patrick Stokes recently blogged about his experience of being forced to reflect on his place in philosophy’s gender-structure. The nicest people can suffer from a kind of blindness – like the dog in this parable. Pat, as befits a Kierkegaard scholar, focuses on the condition of the individual consciousness, and this is not to be neglected. Are […]

via Two kinds of naked, two kinds of blindness — Brendan Larvor

Thinking about conferences as places for thinking

In this post, Richard Ashcroft reflects on the shortcomings of academic conferences. 


For a long time, I have been doing my work without going to conferences. Like going to bed early, this is perhaps why I do a lot of reflecting on (academic) life rather than participating in it. In the first half of my academic career I used to go to conferences a lot. But I now have very mixed feelings about them. Here I want to explore some of the reasons why I find conferences problematic.

Let me start by saying why I used to go to so many. In part this was because there was a time when I went to none at all. When I was a graduate student I was fortunate enough to be at a university which was considered a destination for the world’s academics, where famous names and rising stars would come on sabbatical, or for short visits, and where there was an almost continual stream of guest lectures and invited papers and seminars and symposia on pretty much any academic topic of interest. I was in a large and thriving department which had academic staff and graduate students from all around the world. In this environment of often heated discussion, I had a strong sense of high stakes and intellectual challenge. I’ve never known anything quite like it, before or since.

In this context I had very little need or desire to go to conferences. But this was not only because I believed that it was pointless to go there to get what I could already have here. Other factors were in play. One was that I had a feeling that graduate students were not really welcome at conferences. They were for the “grown-ups”. Although these might very well be many of the same grown-ups I saw in my university, I believed that conferences were where they went to be among their peers; all sorts of stuff would be discussed and argued about which was “not suitable for children”. It isn’t that I thought they were up to “campus novel” shenanigans – who knows, maybe some of them were – more a sense of there being a hierarchy whose boundaries it would be unwise to trespass across. Another important factor was that, status or shyness problems aside, no funding was available for graduate students to attend conferences. When some of us did go, the options were to win a scarce and highly competitive travel grant, to be somehow subsidised by an academic with a grant or other funding (patronage, in effect), or to be self-funding. And even in those halcyon days of grants and good employment prospects for newly minted PhDs, those of us without family money were generally skint. In all of these ways – good and bad – the conference system was a good reflection of the academic world more generally. Both self-satisfied and anxious, both obsessed with networking and with putting barriers in the way of networking, both a republic of letters and a highly stratified and economically unequal feudal system.

In the succeeding 20 or so years some things have improved. I think that today more support is available for graduates to attend conferences, it is considered more important for them to do so in terms both of networking and career development, and in terms of intellectual exchange. And my impression is that academic life has become somewhat less feudal and hierarchical, at least in the humanities. But perhaps I would say that, from the comfort of my professorial chair. Nonetheless I think the conference system itself has changed very little. When I first started going to conferences I was terribly excited. Partly this was sheer excitement at joining the wider academic community: I would be presenting my work to people who had never heard it before; I would get useful feedback; my ideas would be tested by tough (but, I hoped, fair) criticisms; I would get questions which might make me think again about some things, or open up new questions and new lines for research; I would meet like-minded people working on similar (but, I hoped, not exactly the same) things. I might even make some new friends. I was also a bit afraid – that my presenting style would be bad, or I would mismanage my time, or that I would not be able to deal with questions, or that the wider academic community would think I was a bit of a berk. With one exception, when, I can confirm, I really was both out of my intellectual depth and a bit of a berk, my experience was indeed generally positive. However, even in the honeymoon period of my relationship with conferences, I had reasons for disquiet.

Conferences are good places to see academic tribalism and status games at first hand. Time and again I have seen cadres of staff and students from well-known and prestigious departments move around en bloc, largely keeping to themselves and making critical comments about both individuals who don’t have such a cadre, because they come from a less well known institution, and about cadres from other institutions seen as “the competition”. The framing of these comments is often in terms of “who’s good” and who isn’t, and about method (“we” do these things the right way and “they” don’t). I’ve seen senior academics hold court, and I’ve seen senior academics snub people because they are low status, or because their PhD supervisor is someone the senior academic is having a feud with. I’ve seen graduate students seek out academic “stars” merely to be able to say that they have met (and sometimes, so that they can then ask for a favour later on, on the ground that they’ve met).

None of this is particularly surprising; humans do what humans do, wherever humans are gathered in numbers. But it is not the normative story we tell ourselves about conferences. That story is about conferences being a place where status is left at the door, where there is a free exchange of ideas, where the republic of letters is made flesh. But even if we have an unusually egalitarian and open-minded collection of academics, and even if everyone is polite and respectful, there are still structural things which make conferences frequently dispiriting affairs.

Conferences are expensive. They are expensive to put on; and this usually means they are expensive to attend. They are expensive to put on because even small conferences need a large, well-serviced room, with support staff, audio-visual aids, refreshments and so on. They can require booking systems, payment collection systems, arrangements with accommodation providers (hotels or university halls), and travel partners. International conferences might need translation services. These are minimum requirements. If the organisers take equality and diversity seriously – as they should – then there are access and communication support needs, childcare, and support for accompanying adults. A meeting of any complexity usually needs a professional conference organiser. Now it is possible to reduce the direct costs of all this – especially if it is hosted by a university, which will have many of these services as a matter of course. But this does not make real cost reductions – it simply transfers them either to indirect costs or to simple exploitation. Student helpers, acting unpaid, so that they can “benefit from attending the conference”. Because as everyone knows, sitting at the registration stand for hours on end really does benefit your research. And every academic who comes along to register is actually there to hear about your draft chapter. As if. Of course cost savings can be made, and it’s possible to have a lean and thrifty and thoroughly successful meeting. But this does depend a bit on managing delegates’ expectations (yes, we have a visitor programme – it’s called a bus, you pay the driver and he takes you where you want to go, if it’s on his route). And it also depends on the birthday party principle: we absorb the cost of the party (conference) because we know that we will get to go to other parties hosted by others when it is their turn. But it can be hard to persuade a Faculty Dean to underwrite the cost of a conference on this basis.
The Faculty or university absorbs the cost – and conference hosting can be a significant financial risk – but sees little of the benefit. It has to decide that the benefits in terms of reputation and staff job satisfaction and graduate recruitment are worth it, and a better investment than other uses of that money. Some conferences can attract external support, from a learned society or funding body, or occasionally from charitable or even commercial sources. But these all come with quid pro quos and are not easy to get. And sometimes that external support will anger and alienate a significant proportion of the delegates (which is why bioethics conferences rarely, if ever, seek support from the pharmaceutical industry).

The cost of conferences is therefore transferred, so far as possible, to the delegates. There is no right to attend a conference, in the sense of it being a positive entitlement. That said, many conference organisers do try to help some delegates overcome cost barriers to attendance – bursaries for students, which may offset some part or even the whole of the registration fee. Differential fees according to income bracket, early career status, or country of origin are often tried. Nonetheless all of these things are imperfect – the link to affordability is crude at best, registrants are expected to be honest about their income or career or country-of-residence status, bursaries are usually few and not necessarily awarded to the most in need or most deserving, and so on. And rarely do any of these fee structures apply to accommodation or travel. Travel grants do exist, but they are hard to get. Conference costs might be low if the conference is in a low-to-middle-income country, but that tends to mean travel costs are higher, and the costs of the conference are never as low as they might be, given that the expectations of international delegates have to be met.

An expensive conference to host; an expensive conference to attend. Who pays for all this? As noted, mostly the costs will come from the delegate herself. But this favours the delegate who has a high disposable income, or a generous conference allowance from her employer, or research grants which cover conference attendance, or a departmental subsidy from her employer. Some employers will just give all academic staff (and sometimes graduate students) an allowance which they can use at their discretion; most require the would-be delegate to explain why attendance at this particular conference is necessary. This may be because it would involve absence from ordinary duties (occasionally it may involve political considerations as well). But it will usually involve some justification on the basis of its academic importance. What in practice this usually means is – you can go if you are giving a paper.

There are lots of reasons to go to conferences which don’t involve giving a paper. I will discuss some of these below. But giving a paper is increasingly the minimum requirement for a funder or employer to cover the cost of an academic’s attendance. In my view this is disastrous. Either you have a cap on the number of papers, which excludes many people who might otherwise be able (or willing) to come. Applying that cap will, very likely, involve all those lovely implicit biases in publication which we love so much – so we end up with the usual people giving the usual papers on the usual topics in the usual ways.

As an aside, invited keynote speakers are often the worst case here: Prof. X always says the same things, because Prof. X hasn’t done original work in years, and also because Prof. X has been invited to attract delegates who haven’t yet heard Prof. X give his (usual) speech and he likes to “play the hits”. And the organisers know this, but have invited him not because he’s brilliant or original but because this is all part of the circuits of favours and marketing.

So let’s not cap the number of papers (and not have invited keynotes). We won’t just accept all paper proposals – we will have a peer review process to select only those which meet our expectations about quality and relevance to the conference theme. Oh dear. Here come the implicit biases again. In addition, there’s the problem that when most of us write conference abstracts they are more in the nature of a plan for work we hope to do between now and the conference. We might do the work, and find that we don’t think on the day of the conference what we thought on the day of the submission of the abstract. Or we might not do the work. Either way, the conference is not getting what it was promised. It might well be getting something as good as or better than what was promised, or it might be getting ill-digested, under-prepared, banal rubbish. OK, now suppose our filter is reliable. We still have far too many papers for everyone to get to give the full-length seminar paper we’d all like to give in an ideal world. What do we do? We have parallel sessions. And we cut the length of the papers so as to fit the conference timetable. And what do we now have? A shambles.

It is 11:30. Or rather it isn’t, it’s 11:40, because the last session finished late and morning coffee has overrun. We have to fit four papers in before 13:00. Each of those has now lost 5 minutes. We can reduce the question time at the end of each, or maybe we just have a single question time for all the papers together. So each paper, either way, gets even less discussion time than before. One of the speakers is giving her paper in her second or third language, and this slows her down. It’s not her fault but there it is. One of the speakers drones on and on for 5 minutes beyond his allotted time, because the chair can’t get him to shut up; he’s a junior colleague who doesn’t know the topic but has been drafted in because the person who was supposed to chair is actually giving a paper in the other parallel session. She’s doing this because her co-author is ill and pulled out at the last minute. It’s no one’s fault, and it can’t be helped. Half the audience walk out halfway through because they are off to the other session to hear their friend, a fellow PhD student, give a paper and he needs moral support. I have seen very bad papers which are obviously and irredeemably unpublishable; but I have also seen senior professors humiliate graduate students who are presenting work in progress and need a bit of mentoring. And so on. I am not exaggerating – this is normal. There is very little obvious “bad behaviour” here. It’s structural. The conference is badly designed; and I don’t mean this particular conference – I mean the conference-as-we-know-it.

In effect, for structural and economic reasons, we have a formal requirement on delegates (that they each be giving a paper, or have other sources of funding) which excludes many (most?) people who could usefully attend, and destroys the substantive rationale for having the conference in the first place – which, ostensibly, is to allow the presentation of papers and discussion of their merits and interconnections. I have been in parallel sessions in which the only people present were the chair and the speakers. I have also been in keynote speaker sessions in which there were 800 people in the audience. For different reasons, in neither case did we get the interactive, multi-party discussion we tend to think the conference is there to generate.

Having said this, the conference can produce other benefits. For instance, after hearing Prof. X burble on about his hobby horse for 40 minutes, for the second time in as many years, I had several enjoyable chats with my peers over drinks about how Prof. X was past it, how scandalous it was that he’d been given a platform again, how he and his colleagues seem oblivious to any work outside his own self-referential bubble, and so on. This kind of conversation, though unworthy of us, unscholarly, and vicious in all sorts of ways, did go some way toward building affective bonds of community. It is arguably this that conferences do best. Conferences bring people together who might not otherwise have met; and they also bring together people who tend to meet only at conferences. Some of my oldest academic friends and interlocutors are people I have met at conferences. This has enriched my life, and also my work, though I think it has done so in that order. Conferences can give you a sort of oil check on what’s going on in the field and whether it is running smoothly. They are an opportunity to meet potential new colleagues and collaborators, and some conferences are effectively hiring fairs.

I sometimes wonder what would happen if we got rid of the papers altogether and met anyway. Some mid-points between that and the status quo do exist – meetings in which papers are pre-circulated and only introduced very briefly just to prime discussion (these tend to be small workshops rather than conferences, however, and that’s a very different animal). Or “poster sessions”, though these are, it is generally held, bad mechanisms for communicating discursive argument. They can work perfectly well for formal arguments, however. Indeed, if you can put it in a PowerPoint presentation, you can put it on a poster. Since so many people just read out their damn slides anyway… But poster presentations are despised by many presenters, who feel insulted if they are asked to give a poster instead of a paper. Yet done well, a poster can bring people together for precisely the kind of intimate discussion of detail which is impossible in the 5 minute Q&A after the sainted 20 minute paper. There are no timing problems, no problem if people want to come and go, no problem if you think of your question five minutes after the session is over or want to ask another. Still, a lot of people (at least in philosophy and applied ethics) are decidedly chippy about flying 10 hours and spending thousands of pounds to go and stand next to a laminated sheet of A0 and hope someone stops to talk to them. And, more importantly, a lot of departments would refuse to fund such a trip.

So much for the people who actually get to the conference. Who’s left? Practically everybody. I’ve stressed the cost barrier. But the geographical problems go well beyond price – time is a scarce resource for everyone, and even if you could give up the four days for the conference, two days either side for travel are a serious obstacle.  Then we have a centre/periphery problem, or what we might call geographical moral luck. As I said, I was very fortunate in where I did my graduate studies – I didn’t really have to travel, as people would pretty much come to us. But this confirms a particularly insidious kind of arrogance: because we were where (some of) the action was, we could tend to assume that whatever action there was, was where we were. And our location, both geographical and intellectual, would be assumed to be normative for everyone else. Hence the unlovely sight of British and North American universities claiming some special role in addressing Grand Challenges in Global Health; an imperial mentalité which just doesn’t seem to be dying away. There is an elision between being in possession of the financial and technological capital, being historically responsible for much of the world’s current political and economic condition, and being in possession of moral insight into and authority over “what is to be done.” The conference system perpetuates this. Oh, and it does its bit for global warming too. Adversely.

Who else doesn’t get to go to conferences? Two obvious groups: those who are effectively disabled by the conference system (Deaf people and people with mobility impairments, to name but two groups of people). And those who have other responsibilities for others. Some of the biggest and most important conferences are hosted at the most family-unfriendly times of year – in North America there is a particular practice of holding meetings between Christmas and New Year, conferences which it is effectively obligatory to attend if you are either hiring or being hired – which is everyone. Because who is not either looking for a job, or looking for students or junior staff?

Conferences – for all their flaws and all their mechanisms of exclusion – attract a particular kind of presenteeism. There is a strong academic version of the Fear of Missing Out. Apart from the social media plague of all your peers posting selfies in smart restaurants with all their (and your) friends having a great time while you are at home unblocking drains and sitting in curriculum design meetings, there is a sense that your work will go unread, and lose currency, if you don’t make an appearance at the conference to remind people that you exist. One solution to this is to chip in while they live-tweet or post on Facebook and Instagram, making sardonic remarks. This works for me, but not so much if you are a beginning graduate student whom nobody knows and who has to build a reputation for being bright, promising and a good potential colleague. Many people have written about the difficulty of combining home and academic life and staying sane. The long-hours culture in academe is becoming notorious, and the effect this has on people’s hiring, promotion and tenure, and salaries is much debated. What I want to highlight here is that if your family life is in any way complicated, then going to conferences becomes much more difficult, and if it is a requirement (either soft, in terms of maintaining your reputation in the field and your awareness of what’s going on outside the restricted domain of publications, or hard, in terms of it being obligatory in one’s job role), then there is a structural injustice: conferences disadvantage you.

In conclusion: the conference, as we know it, is broken. It can be fun. It can be a context for genuine discussion and enlightenment, for sharing new ideas, for meeting interesting colleagues, for challenge and reflection. But in my view it currently does these things by chance and accident, and its design inhibits, rather than facilitates, them. I don’t know anyone who actually likes conferences. The people I know who go to most conferences seem to do so mainly so they can write their papers in airport lounges, those liminal spaces where they may not have a mobile signal and can be left in peace. This gives me a clue to why they continue: conferences are precisely a holiday from normal rules, a perfect excuse not to be doing something else we may normally be under an obligation to do. But just because they are sometimes a remedy for problems elsewhere doesn’t mean they aren’t equally sick in their own way.





There’s nowt so queer as folk

Here, bioethicist Richard Ashcroft argues that bioethics needs to broaden its scope by considering questions about character as well as the rightness and wrongness of deeds and the goodness and badness of outcomes.  He suggests that one way to get some of the dense texture of lived experience into ethics – and thereby give questions of character more of their proper ground and interest – is to philosophise by reading fiction.  In the course of a full-length novel, the making of choices seems more like stuff that people do and less like occasions for serene rational deliberation.  This idea is not new to philosophy, though it may be new to bioethics.  What is different in Ashcroft’s development of these ideas is his choice of novel.  Instead of Henry James, we have M. John Harrison’s Signs of Life, which belongs on a shelf marked ‘the new weird’.  In Ashcroft’s hands, the weirdness is philosophically important, and the weirdest elements are the people.

This poses a challenge to the more naively Aristotelian or eudaimonic versions of virtue ethics.  Even Nietzsche doesn’t really get at the oddness of root human desires, good as he is on their violence, contingency, lewdness, fleshy unreasonableness, etc.  It is certainly something for aspiring educators of character to think about.

Keeping It Real: a workshop

Thursday 7th July, University of Hertfordshire, 9.45-4.30

Room: N205 de Havilland Campus

‘Professionalism means caring for someone else’ – Legal educationalist Clark Cunningham.

Many ethicists claim that sound ethical judgment requires the development of virtuous dispositions. What does this mean for the education of client-facing professionals such as teachers, lawyers, psychotherapists and police officers?  What virtues do such professionals need, and how can they be developed in professional education?

Most work on virtue ethics goes on within the academic discipline of philosophy.  What insight can this work offer these professions and their trainers? And what insights can philosophy gain from encountering the realities of training professionals to engage with the public?

The Manifest Virtue project – led by Dr Brendan Larvor and Professor John Lippitt – seeks, through a blog and planned workshops, to explore these issues.

Our first workshop, Keeping it Real, will explore how various virtues (and, unintentionally, vices) are modelled in the education of certain professional groups.


09:45 Arrivals and coffee/tea/Danishes

10:00-10:30 Introduction and setting the context (BL, JL)

10:30-11:30 Professor Nigel Duncan (Legal Education, City University) “Playing the Wild Card”

11:30-11:45 Coffee/tea/water/biscuits/fruit

11:45-12:45 Chief Supt Jane Swinburne (Chair of the Ethics Committee, Hertfordshire Constabulary)  Embedding the Police Code of Ethics in the Hertfordshire Constabulary – just common sense?

12:45-1:45 Lunch

1:45-2:45 Professor Joy Jarvis and Dr Elizabeth White (Education, University of Hertfordshire)  “Teacher education – a context for modelling professional virtues?”

2:45-3:00 Coffee/tea/water/biscuits/fruit

3:00-4:00 Karen Weixel-Dixon (Psychotherapy, Regent’s University) “Humility as a necessary quality for authentic relationships”

4:00-4:30 Plenary discussion

Attendance is free, but please register in advance by e-mailing Andrew Smith, School of Humanities Research Assistant.

University of Hertfordshire Accident Simulation Centre
Professional Training

Women in Philosophy: What Needs to Change?

This is the title of a book edited by Katrina Hutchison and Fiona Jenkins (Oxford: Oxford University Press, 2013).

Here is a review by Katherine Angel

Angel’s review raises and deliberately embodies some aspects of our basic question.

Its critique of professional academic philosophy goes in the same direction as Michael Barany’s remark that philosophy’s problems will not be solved by doing more philosophy of the same sort.  But Angel goes rather further.

Keeping It Real

In July 2013, I (BL) took part in a week-long Convivium on the Orkney island of Papa Westray. This was a meeting of law lecturers, medical educators, philosophers and theologians, plus a dramatist and an anthropologist, to discuss ethics in professional education, with particular reference to law and medicine. I came away deeply impressed at the systematic efforts in legal and medical education to inculcate a professional ethos in their students. One of the liveliest discussions between the doctors and lawyers was on the topic of how to assess students’ diagnostic interviewing skills. Is it best to use members of the public as subjects in the interviewing examination? This has the advantage of coming closest to reality, but it means that the exam is not standard—the students are not all assessed on the same task, because some subjects will present far more difficult problems, and personalities, than others. This is not fair and not acceptable in a high-stakes assessment. An alternative is to use actors, who can present the same scenario for each student—but this has its own drawbacks (not least the cost; it takes three days to train a ‘standard patient’). The chief risk with a standardised interview examination is that you train doctors or lawyers to interview the standard patient or client as designed for the test, but of course no such person exists in nature.

Listening to this, I became jealous of these clinical subjects that can give their students real things to do, advising real legal clients, prodding real patients, cutting up real dead bodies.  The surgeon trainer, Roger Kneebone, remarked that sending students to draw real blood from real arms is important because in addition to the bare skill of blood-drawing, they tacitly learn a lot of other, hard-to-articulate doctorly stuff (about how, in a professional manner, to touch and manipulate the limbs of strangers in ways that break normal social taboos, for example). He explained that medical students used to learn how to stitch wounds by practising on pieces of pig skin. This has the drawback that stitching a piece of material that you can turn around and over to get the best angle and light is not like stitching skin on a live body (for a comparison of medical stitching and tailoring, see this film featuring Prof Kneebone visiting Savile Row). So now, some medical students get some practice on fake wounds mounted on real limbs (with theatrical make-up to hide the join).

What part of this could we carry over to philosophy? We know that it is possible to do real philosophy with students.  After all, they are real people with real thoughts and real feelings.  Moreover, an argument really is valid or invalid, even if no-one in the room makes it in earnest.  Nevertheless, these realities do not always lead to authentic philosophy in the classroom.  I was once teaching a module on Hegel and moderating the marking of modules on Kant and Kierkegaard.  Some of the students who wrote essays for the Kant lecturer explaining that Kant’s project succeeds, also wrote essays for me explaining how Hegel’s criticisms of Kant were wholly successful and essays for the Kierkegaard lecturer (JL) on how Kierkegaard revealed Hegel’s philosophy to be a sham.  It is easy to see how they might imagine this to be a rational grade-maximising strategy. Longer experience in philosophy teaches that gaming approaches like this lead to shallow learning and thence to mediocre grades. Grade-maximising is a reliable recipe for not gaining the more valuable gifts that philosophy has to offer, even if it does raise the grades of a student who has decided in advance not to do any deep learning.

One response to this inauthenticity would be to change the curriculum so that it demanded more self-examination from the students.  This is a legitimate angle, especially if it challenges the students’ self-understandings as well as developing and extending them.  “Know thyself” is an ancient imperative and we could do more of it in our curricula.  However, focusing on the self would lack one of the advantages of clinical work: it’s not about me.  One of the deep differences between the workplace and most other institutions that students encounter is that school, organised extra-curricular activities and university are all there for the benefit of the student.  Even if you commit a crime and are arrested and imprisoned, the prison is there to punish and reform you, and has people talking to you about your criminality, your anger issues and your drug and alcohol use.  We should not be surprised if some young people are self-absorbed and have a powerful sense of entitlement. What else should we expect when all the institutions they encounter are directed for their benefit?  Work is not like that; the employee, qua employee, is not an end but rather a means.  This is why going to work for the first time can be a shock, and why work with real clients and patients is educationally valuable in a way that simulations cannot be.  As it happens, many university students now have part-time jobs, so they already know that they are not the centre of the world. Indeed, many of them work in retail, so they know plenty about interacting with clients and customers. One of the law lecturers at the convivium observed that “Professionalism means caring for someone else.” Many students already know this from their paid work. But this experience is disjoint from their philosophical studies.

Ideally, I’d like to find a philosophical task to give to students such that they would harm someone other than themselves if they fluffed it.  Then, their grades would not be the highest stakes in the activity. I conjecture that many students feel no compunction about grade-chasing because there are no serious rival interests—they believe that they don’t seriously hurt anyone else if they pursue their studies cynically. Even if they acknowledge that their grade-chasing may damage the educational experiences of classmates and hurt the lecturer’s feelings, this is unlikely to be decisive because these stakes seem low compared to the importance of their grades. Attitudes might change if we could find a philosophical activity that, like blood-drawing, wound-stitching or clinical legal work, had high stakes for someone else.

So far, the nearest I’ve got is:

  • group presentations where the whole group gets the same grade (so free-loading may reduce the grades of other students in the group);
  • taking students to teach in secondary schools; and
  • (as part of a module assessment) having students edit each other’s essays.

None of these raises stakes for others high enough to challenge the students’ own grades for supremacy.  Another possibility might be to have final-year students mark the work of first-years.  Marking a philosophy essay involves philosophical reflection if you do it properly; it’s not checking the essay against some model answer.  This, though, has obvious quality-assurance obstacles. Requiring students to mentor or tutor other, less advanced students is another option, but all of these would be difficult to assess—we would be in the same spot as the doctors and the lawyers, trying to design a clinical practice assessment that is both realistic and fair.

There is another aspect to clinical practice that throws up a direct challenge to philosophical ethics. Part of the value of law-clinics, as the law lecturers at the Papa Westray convivium explained, is that they are diagnostic of selfishness and other character flaws that can lead to professional misconduct. They presented a four-part model (due to James Rest and Muriel Bebeau) of how professional judgments can fail ethically:

  1. Moral blindness (a failure to notice the ethical dimension of a situation; this is usually the case where conflicts of interest lead to malpractice, or where the client wants something that may harm someone else)
  2. Faulty moral reasoning (compatible with moral awareness, this is a failure to think through situations where there are rival interests in play)
  3. Lack of moral motivation (a failure to give ethics its proper importance in competition with other legitimate professional interests)
  4. Ineffectiveness in implementing ethical judgments, owing to a lack of interpersonal skills.

According to the Carnegie Report, moral sensitivity, moral reasoning ability, moral motivation and implementation skills can all be developed in law students.  However, this is only possible ‘in role’, either through law clinics or through classroom role-play, so that the student moves from observer to actor.  These experiences can then provide material for reflection.  Thus this four-part model can frame an effective curriculum in professional legal ethics.  (Here I follow Clark Cunningham.)  As I understand it, this approach is not yet standard in either the US or the UK, but the studies undertaken so far seem promising.

Classroom philosophy, as we currently practise it, rarely works on the fourth item in this list.  Much philosophical ethics is content to work out what the right answers are (and to think about the logic of the working out, what rightness means, and so on).  Insofar as this is true, this too is a failure to keep it real.  We philosophers might profitably look at education in client-oriented professions such as law and medicine to see how we might repair this.  Taking responsibility for other people seems to be the surest route to diagnosing, and building resistance to, the four moral weaknesses listed here.  If this isn’t feasible, it may be possible to use imaginative classroom work, which might include reading fiction, watching films, creative writing and role-play as well as critical examination of the works of philosophers, to work on all four elements.

This question of reality, of having stakes in the room higher than the students’ grades, has a bearing on our base question about the modelling of philosophical virtues. There may be some virtues and vices that are only apparent when dealing with people outside the profession. Attending research seminars and conferences may reveal to students how professional philosophers deal with each other—but what about contact between philosophers and everyone else? In any case, it may be that, in the presence of someone else’s interests and vulnerabilities, some of the characteristics prized by philosophers (such as conceptual precision or speed of thought) may not seem so important after all.

John Lippitt and Nigel Duncan suggested improvements to the text.

Matthew Inglis and Nigel Duncan suggested these options for further reading:

Bebeau, M. (1994) ‘Influencing the Moral Dimensions of Dental Practice’, in Rest, J. and Narvaez, D. (eds), Moral Development in the Professions: Psychology and Applied Ethics. Hillsdale: Erlbaum.

Jones, I. and Inglis, M. (2015) ‘The problem of assessing problem solving: can comparative judgement help?’, Educational Studies in Mathematics, 89(3), pp. 337-355.

Jones, I. and Alcock, L. (2014) ‘Peer assessment without assessment criteria’, Studies in Higher Education, 39(10).

Nicolson, D. (2008) ‘“Education, education, education”: legal, moral and clinical’, The Law Teacher, 42(2), pp. 145-172.

Nicolson, D. (2013) ‘Calling, Character and Clinical Legal Education: A Cradle to Grave Approach to Inculcating a Love for Justice’, Legal Ethics, 16(1).