
Why Moral Psychology is Disturbing: Regina Rini

Overview & Shownotes

Regina Rini holds the Canada Research Chair in Philosophy of Moral and Social Cognition at York University. She joins us today to discuss why we might be disturbed when we learn about the role that psychology plays in our moral decision-making.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. “Why moral psychology is disturbing” by Regina Rini
  2. Philosopher-neuroscientist Joshua Greene
  3. Deontology
  4. Consequentialist ethics
  5. Kantian theory
  6. The trolley problem
  7. Radiolab episode mentioned in the discussion
  8. Robert Sapolsky
  9. Aristotle’s ethics
  10. Nicomachean ethics
  11. Bernard Williams
  12. Charles Stevenson
  13. Friedrich Nietzsche
  14. Christine Korsgaard and her thoughts on agency
  15. Nic Bommarito
  16. Case developed by the philosopher Nomy Arpaly

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Coulis Coulis” by Blue Dot Sessions

Transcript


Why Moral Psychology is Disturbing: Regina Rini

Christiane Wisehart, host and producer: I’m Christiane Wisehart. And this is Examining Ethics, brought to you by The Janet Prindle Institute for Ethics at DePauw University.

[music: Blue Dot Sessions, Coulis Coulis]

Christiane: Regina Rini holds the Canada Research Chair in Philosophy of Moral and Social Cognition at York University. She explains that we’re often disturbed to learn that our moral decisions might not always be as objective and rational as we’d like to think.

Regina Rini: So there’s a long tradition in philosophy and psychology of looking at the causal origins of our moral judgments, finding out, maybe in evolutionary terms, why humans judge this way, or finding out in personal terms why some particular person has this pattern of moral judgments. It’s especially interesting when you think about disagreement, either intercultural disagreement or interpersonal disagreement, where people have differing moral judgments about the same case. Sometimes people invoke cognitive science or sociology or evolutionary biology to explain the difference, as a basis for undermining it. That’s the problem that I’ve been interested in for a while.

Christiane: Stay tuned for our discussion on psychology and morality on today’s episode of Examining Ethics.

[music fades out]

Christiane: I have a pop-cultural understanding of how my own brain works. I’ve learned that my decisions might be influenced by my upbringing or by my particular brain chemistry. I know that if I’d been born with a different brain, in a different family and society, I’d probably make different decisions. And so sometimes I struggle with thoughts like, “Am I hugging my son because I love him and want to show affection, or am I hugging him because when I looked at him, some oxytocin flooded into my brain and my mammalian instincts took over? Am I spending time doing mutual aid because I’ve made a thoughtful, rational decision to engage in direct action, or is it because my mom once said something positive to me about volunteering as a kid?”

Sometimes when I think about decisions and decision-making in this way for too long, I start to feel…squirmy. Or, as my guest Regina Rini would put it, I feel disturbed. In her work, she uncovers why we might feel disturbed by learning about the psychological factors that play into our moral decision-making.
[interview begins]
Christiane: So we’re here to discuss your paper, your fascinating paper “Why moral psychology is disturbing.” So first of all, just give us a brief overview of this idea.
Regina Rini: There’s a long tradition in philosophy and psychology of looking at the causal origins of our moral judgments, finding out, maybe in evolutionary terms, why humans judge this way, or finding out in personal terms why some particular person has this pattern of moral judgments. It’s especially interesting when you think about disagreement, either intercultural disagreement or interpersonal disagreement, where people have differing moral judgments about the same case.

Sometimes people invoke cognitive science or sociology or evolutionary biology to explain the difference, as a basis for undermining it. That’s the problem I’ve been interested in for a while; for more than a decade, I’ve been writing a series of papers about it. In this paper, I was trying to focus down from that bigger debate onto one particular question, which is about the first-personal experience of being disconcerted by learning surprising information about your own moral judgments.

So set aside for the moment whether or not other people’s moral judgments are being impacted. Just think about this: you have a belief, and then some psychologist comes along and says, “Here, I can show you the origin of that belief in your evolutionary history or your personal history,” and it’s an origin you didn’t expect. And you feel unnerved by this discovery. Would an example help here, do you think?

Christiane: Yeah, I was going to ask for an example maybe from your own personal life or maybe even just one from the paper?
Regina Rini: A good example that a lot of people find resonant is the really influential work done by the philosopher-neuroscientist Joshua Greene. Greene looks at the famous dispute between deontological and consequentialist ethics. Basically: should you do whatever achieves the maximum good outcome? Could you sacrifice one person to save five? Versus a deontological or Kantian theory that says that sometimes, some ways of achieving the best outcome are not acceptable; that sometimes we have to give up the chance to get the best possible outcome because it involves treating another person like an object, or for other Kantian sorts of reasons for holding back.

I’m sympathetic to the Kantian line. I have Kantian intuitions; I tend to think there are some restrictions on what we can do to achieve good outcomes. But Joshua Greene’s work, starting from about the year 2000, includes a series of studies that supposedly show that when you’re engaging in deontological judgment, when you’re saying, “It’s not okay to sacrifice one person to save five under certain circumstances,” what’s going on is a part of your brain that is very, very old evolutionarily, one that predates the human species and probably goes back to some sort of simian ancestor. This bit of your brain doesn’t quite get the mathematics or the technology involved in sacrificing one person to save five under some circumstances.

I’m thinking here of the famous trolley problem, which I think a lot of people are aware of at this point through pop culture. It’s getting a little bit tiresome to talk about. But basically: should you turn the switch to cause the trolley to run over one person instead of the five it’s going to run over? Greene’s theory is that this really old part of our brains doesn’t understand trolleys. It doesn’t understand that this is a way of technologically distancing ourselves.

Basically the upshot is: when we get into a situation where we can sacrifice somebody in a way that’s not removed like that, not distally removed, where the technology doesn’t give us a separation, then we feel this Kantian urge not to do it. Greene says that instinct is discredited by the fact that we can explain away a difference in intuitions by its origin in this old bit of our psychological processing. This kind of research has been incredibly influential in philosophy and psychology over the last 20 years.

So part of what I was interested in saying is that there’s something going wrong here. Personally, I think there are multiple things going wrong: the psychology is not as clear as Greene suggests, and the actual causal explanation is more complicated. But for this paper, I’m going to bracket that and grant that all of the psychology is fine, that Greene is right in his causal theory. I’m asking, “Well, why does it bother me? Why would it bother me to find out that this Kantian pattern of judgments about trolley cases is rooted in this psychological history?”
Christiane: Yeah, and I was very much drawn to your paper because I had a similar experience. I think probably lots of people who think about ethics and morality have had one. I was listening to an episode of Radiolab about a man who had seizures and had surgery to fix them. Post-surgery, he became hypersexual and was then caught with all sorts of things on his computer, one of them being child pornography.

He was put in prison for this, and they had a neurobiologist named Robert Sapolsky on the show who was horrified that he was imprisoned, because his contention was that the surgery literally changed his brain, and that changed his behavior and his actions. That episode has wormed its way deep into my brain, and I don’t like to think about it too much because it’s disturbing. You call this experience, when we have it in our own lives, “doxastic embarrassment.” Am I saying that correctly?

Regina Rini: Yes. That’s right. Doxastic embarrassment.
Christiane: So what is doxastic embarrassment?

Regina Rini: Doxastic embarrassment is the feeling you get when you learn some facts about your moral beliefs that disturb you. They make you think, “That’s not how I thought I was making judgments. It bothers me to learn that this is the way I made moral judgments.” The term doxastic is slightly weird technical jargon that philosophers like to use, but it basically has to do with the idea of making decisions.

It involves actions, but it also involves judgments. Basically: how do you decide how to decide? How do you make up your mind about how to act? How do you decide what are good criteria for making an evaluative judgment, whether a moral judgment, an aesthetic judgment, or even a judgment like who makes a good friend? So doxastic judgment is a really broad category that includes basically anything where you have some sort of control over how you make choices.

Christiane: You write that one possible reason we experience this embarrassment is that we realize maybe it’s an automatic thing [Christiane snaps her fingers], right? You give an example of someone crossing a street, seeing that someone’s about to get hit by a car, and saving them right away. They might be embarrassed when they start to think about the fact that they didn’t even think about it; they just did it. So is that really an explanation for doxastic embarrassment?
Regina Rini: This is an explanation a lot of people tend to go to. They think it’s a good explanation because we have the illusion that we are constantly in conscious control of all of our judgments. I have that illusion because I think about really extreme cases. Think about a case in your own life where you’ve had a moral dilemma, where you have to make a choice but you still have a few weeks before you have to make it. So you spend weeks going back and forth, over and over, in your head. You think, “Here are the reasons for doing this. Here are the reasons for doing that.” You feel the pull of all the reasons and you have to negotiate it.

We have that as a model in our heads of how we make moral choices. Then it turns out that in some cases we make moral choices without thinking about it, without having conscious access to the reasons for our judgments. So doxastic embarrassment might be explained by our being disconcerted by the thought that we’re acting automatically, in ways we don’t have control over. In the example I started with, it might seem that we have reasons in front of us, but actually we are making our judgments automatically, through processes we don’t know about that are operating underneath the surface. The automaticity of those processes, the fact that they operate without conscious control, might be the source of our embarrassment.

But I’ve argued that’s not quite right, because sometimes automaticity isn’t problematic. It doesn’t bother us, as when you learn a good habit. Think back here to Aristotle’s ethics. Aristotle argues in the Nicomachean Ethics that the essence of good ethics, of virtue, is to inculcate in yourself a habit of behaving well under pressure, under time constraints, or just without even having to think about it. So a good person is somebody who is habitually virtuous: habitually kind and generous and thoughtful and brave.

We don’t have to stop and think about it every time. In fact, there’s that famous example the philosopher Bernard Williams gives of a man on a cruise ship. The man’s wife falls into the ocean and is drowning. There are other people drowning too, and the man has to decide who he’s going to throw the life preserver to. He feels an inclination to save his wife first, but then he stops and thinks, “Is it really morally acceptable that I should save my wife rather than some other person?” Ultimately he saves his wife, but only after dithering for a bit.
Williams says about this case, “Well, there’s something wrong with this man. He has one thought too many.” Rather than just having the immediate response that, yes, he owes a duty of loyalty to his wife to save her first, the man wastes time and energy thinking about it. If you feel the pull of that example from Williams at all, then you have to say, “Wait a minute. Automaticity by itself isn’t necessarily a problem.” When we do the virtuous thing automatically, when we automatically act in a way that instantiates loyalty or kindness, automaticity is not a problem. So it seems that automaticity by itself shouldn’t explain doxastic embarrassment. That is, merely learning that I made a judgment automatically does not by itself show that I am right to feel disturbed by the origin of my judgment.

Christiane: What are some other plausible explanations that have come up for doxastic embarrassment?
Regina Rini: So another really popular one, probably the most popular in philosophy, has to do with the idea of moral truth. The argument is that moral judgments have an epistemic function. That is, there is a moral truth out there, a capital-T moral truth, and the goal of moral judgments is to accurately reflect it. Now suppose we can show through psychology that sometimes our moral judgments are sensitive to truth-irrelevant things, like the way I describe a case when I just change some words. I don’t change the substance, just the words. Or things like which part of our brain is active, whether it’s this old part that goes back to simian psychology. Presumably the moral truth doesn’t depend on which part of your brain is active or which words I use to describe the case.

So if we can show that your moral judgments are sensitive to these truth-irrelevant factors, we can show that your moral judgments are not tracking the moral truth, and therefore that they are dubious in some way. In other words, the criterion for good moral judgments is epistemic: they should track the moral truth, and if we can show through psychology that they don’t, then that’s bad. That would explain why you experience doxastic embarrassment: you would be confronting the fact that your judgments don’t track the truth.

It’s kind of similar to finding out you’ve been ingesting a hallucinogen. Say you’re walking down a street and you see people. Then you discover later that someone put something in your coffee that morning and those people weren’t even there; you were just hallucinating them. You’d feel unnerved by this, right? Especially if you’d had a conversation with one of the hallucinatory people; I think you’d be freaked out about it.
So that’s the thought: we think our moral judgments track an independent moral truth, psychology can show us that in certain ways they don’t, and that is disturbing.

Of course, that is disturbing if you believe in an independent moral truth, but I’ve argued that this can’t be the full explanation. The reason is that plenty of people don’t believe in an independent moral truth. There are plenty of philosophers, and plenty of people outside of philosophy, who just don’t accept that there is an objective, capital-T moral truth out there. In philosophy, some of them are called noncognitivists: people who think that moral judgments don’t have a truth value at all. That is, they’re not propositions that can be assigned true or false. They’re more like an emotion or an affective state, a kind of attitude of approval or disapproval.

There are other people, sometimes called error theorists, who think that moral judgments try to express truth, but that in fact there’s nothing in the world corresponding to moral judgments that would make them true or false. I’m not taking a side right now on whether these people are right. I’m not saying that the error theorists or the noncognitivists are correct about morality. What I’m trying to point out is that even they, the noncognitivists and error theorists, people who say there’s no such thing as moral truth, even they can experience doxastic embarrassment.

In the paper I give a couple of examples of famous figures, Charles Stevenson and Friedrich Nietzsche. These are philosophers who I think are best interpreted as not believing in a capital-T moral truth, but you can find them admitting to experiencing the thing I call doxastic embarrassment when they learn about the psychological origins of their own moral judgments. So it seems that this worry about tracking moral truth can’t be the whole account.

It might explain why some people worry. Obviously, if you believe in a capital-T moral truth and you discover your moral judgments don’t track it, that’s bothersome. But what I’ve tried to point out is that even people who don’t believe that can also experience this phenomenon; therefore, the possibility of moral truth can’t be the full explanation. There’s got to be something else going on that bothers even people who don’t believe in moral truth.

Christiane: If I’m experiencing this kind of embarrassment, if I’m disturbed by learning that the moral choice I made during a meeting happened because I had five cups of coffee instead of one, whatever the case may be, does that mean that I need to rethink all of my beliefs or all of my judgments?
Regina Rini: That’s ultimately the really important question; ultimately, that’s what I care about. But I think that, so far, this is not determined by the experience of doxastic embarrassment. Even after you find out the origins of your judgments, you might still endorse continuing to have them. So suppose you find out that there’s some weird, confusing, kind of disreputable psychological origin to your judgments. Maybe, once upon a time, some person you didn’t much like expressed a view to the contrary. They said such-and-such about a moral case, and because you didn’t like them, just to be their enemy, you took the other side.

Then you forgot about that, and now, like 10 years later, you’re still holding this other side of a moral judgment. Then someone says, “Hey, remember that time 10 years ago when you adopted this point rhetorically?” And you’re like, “Oh, my God. I’d forgotten about that, and I’d started acting as if I really thought this. I kind of think, I do think this now.” So you’ll feel doxastic embarrassment, but that doesn’t necessarily mean you should change your mind.
Maybe there are good reasons to keep the belief; maybe you accidentally lucked into having what are good beliefs to have. Maybe it turns out that you’ve become a better person, more generous and kind to others. Maybe you’ve made better judgments in other domains, in other cases, because you reoriented yourself around what was initially a kind of grubby one-upmanship aimed at getting one over on your enemy.

It might then be good for you to continue holding this belief even though it had a grubby origin. We could have an argument about that. Some people will say, “No, no. I can’t continue if knowing the origin really fatally undermines my continued commitment to this.” I think that’s a further argument, but all I want to say up to this point is that there’s a next step. There’s a next set of questions about what to do, and obviously whether you believe in moral truth with a capital T is going to have a big impact on what you say next.

But even if you don’t believe in moral truth, you might have a view about a constructed morality, a morality that tries to find a coherent and consistent set of views we could hold even without independent moral truth, and that might guide your next steps. There are lots of ways to go, and I have other papers exploring what to do next. But the key point for right now is that merely feeling doxastic embarrassment does not by itself force us to any next steps.

Christiane: Why are we disturbed when we realize that our moral beliefs or our moral judgments might stem from our biology or our psychology?
Regina Rini: Agential disunity, another jargony philosophers’ term, which I’ll unpack. Agency is jargon for philosophers, and what it refers to is roughly the idea that when I have agency, when I am an agent, I’m in control of myself to a pretty deep degree. I think what’s really helpful for unpacking this is to borrow an idea from the philosopher Christine Korsgaard.

Korsgaard argues that agency consists of two parts, autonomy and efficacy. Autonomy is the idea that the origins of my actions come from inside of me in an important sense. Something in me, a reflective judgment or a stable habit, something about me, explains why I did what I did. It’s not just that a doctor gave me an adrenaline shot and my muscles jerked. It’s not that some weird psychologist implanted a posthypnotic suggestion, and it’s also not that I’ve been brainwashed by some propaganda. Rather, something about me, something that’s deeply myself, is autonomously responsible for my choices.

But there’s also this other side, efficacy, which is that my choices actually manifest in the world. Suppose I go around all the time making up my mind: I decide the right thing to do is X, but then I constantly fail to do X because of something about me. Maybe I have some sort of psychic block. Or maybe some cruel, oppressive people get in the way and prevent my choices from ever being exercised in the world because they’re trying to oppress me. Then I’ll lack efficacy. So agency has two parts: my moral choices come from something inside of me, and I am at least sometimes able to express them in the world, to actually change the world to match up with what I think is the right choice.

So that’s the idea of agency, both autonomy and efficacy. My thought is that doxastic embarrassment happens when we discover that we lack unity in our agency: when we discover that there are different parts of ourselves, different persistent values acting in the back of our minds, that have control over our efficacious actions at different times. In other words, there’s not just one set of stable values that is autonomously choosing for you; there’s more than one operating inside of you. The best way to see that, I think, is with an example.

So suppose that we’re walking down the street and a car comes rushing toward you. You don’t see it, and I push you out of the way and save your life. That’s a nice outcome, and notice something about this case: it might be that I acted automatically. That is, I didn’t stop and think. But I can still be happy with what happened there, because I can see what I did as agential. I acted according to my autonomous values (I value saving your life), and I was efficacious: I accomplished what I meant to accomplish.

But now, let’s change the case. Imagine we’re walking down the street and the car comes. I see it, and without stopping to think, I step back and let the car hit you. Some psychologist could come along and explain what happened in the following way: actually, though I don’t quite realize it, I’ve always been deeply jealous of you and have been deeply hoping something bad would happen to you. I mean, we’re friends, and I will say we’re friends, and I mean it. I’m not making it up; I’m not being a Machiavellian trying to trick you. I really do feel on the surface like we’re friends, but underneath there’s this deep jealousy.

You can find this in a lot of novels: people who are friends or family, but underneath there’s a deep resentment. So in this story, a psychologist can explain that actually I kind of always wanted something bad to happen to you. In the moment, I did not consciously think about it; I didn’t rehearse in my head, in the five seconds I had, that I want something bad to happen to you. But underneath, subconsciously, this resentment I had towards you took control over my action and caused me to hold back. That’s how you got hit by a car.

Now, what’s important is that this is not like the case of getting a posthypnotic suggestion from somebody else, or a shot of a drug that causes me to stagger around. This is not something outside of me interfering with my agency. Rather, it’s that underneath the surface I’ve got more than one way of being an agent. There’s a value where I value our friendship, I genuinely do, but there’s also this part of me that, in kind of a dark and unhappy way, values bad things happening to you because I’m jealous or resentful. Those are both values that are a part of me in the relevant sense.

One of them is a nasty one, but still, we can have nasty values. We can have bad values. So I think what this case points to is that if I find out this has happened, if I find out I held back from protecting you because I was expressing this darker set of values, part of what bothers me is not just that they’re bad values but that I didn’t realize I was harboring two different potential agencies. I didn’t realize it was possible for one of them to kind of step out into the center, take over, and become the efficacious bit of my agency.

So that toy example with the car is a really stark one, a really clear one. But I think the same pattern is what happens in cases of doxastic embarrassment, where a psychologist shows up and says, “The reason why you morally judge that way is some evolutionary explanation or something about your personal psychology.” That reveals to you something built into your own agency, into the way you make moral judgments, that you didn’t know about. It shows that you make efficacious, agential choices in ways that are disunified from what you consciously think you’re doing. That is disturbing. It’s disturbing to find out that this thing we really value, this moral agency, can be disunified in this way.

Christiane: If we know this, is the next step then to try to seek unity? Is that something that we ought to be striving for?
Regina Rini: I mean, some philosophers will think, “Yes.” A Kantian philosopher will typically answer, “Yes, the right answer at this point is to try to reduce the dissonance, to reduce the gap between our conscious selves and our unconscious selves.” Maybe that’s the right answer, although I hesitate; I’m not so sure. For one thing, we need a positive argument for why it’s better to be fully unified as an agent, and maybe it’s not. A philosopher, Nic Bommarito, commented on this paper once when I was writing it, and he pointed out that it’s pretty thoroughly Western and pretty thoroughly Kantian.

He suggested that from some Buddhist perspectives, where the image of a single unitary self is seen as an illusion, a harmful illusion that leads to suffering, we shouldn’t be trying to unify ourselves at all; trying is actually making things worse. If we adopt that perspective, then maybe the correct response to doxastic embarrassment is to say, “This is a gateway. This is a way to start seeing the value, or at least the necessity, of appreciating the self as fractured in this way.” That’s an active debate, and it’s really hard to know how to have that argument.

We’re getting at really, really deep questions about what it is to be a person and what the value of unified agency could be. If you’re a Kantian, I think you’re going to go back to saying that the nature of being a responsible moral agent is exercising a form of unity and control over yourself. You need to get control over yourself, but it gets complicated. We can see examples where this is hard. There’s a case developed by the philosopher Nomy Arpaly about Huckleberry Finn, and I’m trying to remember the details from Mark Twain off the top of my head right now.

But Huck Finn is basically helping his friend Jim, an escaped enslaved person, to get away. Huck has been raised in a pre-Civil War Southern American mentality that says that slavery is right and that if you’ve got an escaped enslaved person, you ought to turn them in. He at least on the surface accepts this; he accepts the idea that that’s the right thing to do. But Jim is his friend, and here come some authority figures to whom he could turn Jim in. In the moment, he very strongly feels the desire not to do that. He doesn’t, and he helps Jim hide. He helps Jim escape.

In his head, consciously, Huckleberry Finn is thinking something like, “This is wrong. I owe a duty to my society to turn in Jim.” But underneath, he also has this strong impulse to save and protect Jim. Presumably we all agree that the underlying impulse, the one that was not consciously in his mind, the one that kind of took over and protected Jim, is the right one to have. We’d want to say the right strategy for Huckleberry Finn is to unify in favor of this nonconscious impulse rather than the overlaying conscious idea that he should help uphold society’s existing slavery laws, right? We’d want to say that, but it gets really hard to sort out whether there’s any principled basis for picking between the conscious and the unconscious parts of our agency.

Here’s a case where we favor the unconscious one; previously I gave you an example where we favor the conscious one. Are we just saying that whatever’s the right judgment to have in the long run is the one we should favor? But then, of course, how do we decide in more complicated cases which is the right one? That gets circular very quickly, because some of the ways we figure out the right judgment to have come back to this original question about when we should doubt our moral intuitions to begin with. So I don’t have an answer. I actually don’t think there’s a one-size-fits-all answer to this. It’s not clear to me that achieving unity is always the right answer, and even if it were, it’s not clear we have any objective standards for determining how to achieve it.

Christiane: So what brought you to this question? Why do you care about this?
Regina Rini: What ultimately drives me in this debate is moral disagreement. I’m just really interested in the way that, in a multicultural, pluralistic, democratic society, we have to find a way of living together with people we deeply disagree with. A lot of philosophers try to resolve this by saying, “I’m going to prove which one is the correct moral view to have in the end. We’re going to solve disagreement by everybody coming to agree with my incredibly powerful arguments and my correct position.”

That’s not what I’m trying to do. I think that deep moral disagreement is intractable and inevitable and will never go away. So the real challenge for politics, and ultimately for ethics, I think, is to find a fair, coherent, rational way for us all to live together in a pluralistic society that accepts deep moral disagreement. With that as the goal, I get anxious about the use of psychology. And I should say here, my background is partly in psychology.

I trained in neuroscience and at one point hesitated between doing a PhD in philosophy and one in neuroscience. Anyway, I’m not trying to run down psychology; I think it’s great. But I worry when people start wielding psychology or evolutionary biology or sociology or economics as a kind of weapon in a substantive moral disagreement. Rather than asking, “What are the reasons for your position? Why do you think that? What are some good explanations for why you would find that compelling?”

I instead want to go straight to: how do I explain away your deep moral convictions as just an instance of your psychology or just a background reflection of social indoctrination? Once I explain them away, I don’t have to take them seriously, and I can dismiss you until you come around to having the moral beliefs I have. That, I think, is extremely common, extremely dangerous, and actually just morally wrong to do. So I’m wary of using psychology to achieve those ends. A big part of my work on the cognitive science of moral judgment has been about how we make sense of the creepiness of being psychologically explained, without that becoming a weapon used to win fights about intractable moral disagreement.

[Interview ends]

[music: Blue Dot Sessions, Coulis Coulis]

Christiane: If you want to know more about Regina Rini’s work, check out our show notes page at examiningethics.org.

Examining Ethics is hosted by The Janet Prindle Institute for Ethics at DePauw University. Christiane Wisehart wrote and produced the show. Our logo was created by Evie Brosius. Our music is by Blue Dot Sessions and can be found online at sessions.blue. Examining Ethics is made possible by the generous support of DePauw Alumni, friends of the Prindle Institute, and you the listeners. Thank you for your support. The views expressed here are the opinions of the individual speakers alone. They do not represent the position of DePauw University or the Prindle Institute for Ethics.
