
Illocutionary Silencing and Southern Baptist Abuse

black and white photograph of child with hands over mouth, eyes, and ears

Content Warning: this story contains discussions of sexual, institutional, and religious abuse.

On May 22nd, external investigators released an extensive report detailing patterns of corruption and abuse from the leadership of the Southern Baptist Convention (SBC), the largest denomination of Protestant Christianity in the United States. According to the report, Southern Baptist leaders spent decades silencing victims of sexual abuse while ignoring and covering up accusations against hundreds of Southern Baptist ministers, many of whom were allowed to continue in their roles as pastors and preachers at churches around the country. In general, the Executive Committee of the SBC prioritized shielding itself and the denomination from legal liability, rather than caring for the scores of people abused at the hands of SBC clergy. But, after years of public condemnations of the Committee’s behavior, church representatives overwhelmingly voted in June to investigate the Executive Committee itself.

To anyone who has not been listening to years’ worth of testimony from SBC abuse victims, there is much in the SBC report to shock and appall.

But in this article, I want to consider one important reason why so many (beyond just the members of the SBC Executive Committee) ignored that mountain of testimony, even despite prominent awareness campaigns about sexual abuse in religious spaces after the USA Gymnastics abuse trial and the #MeToo movement (like #ChurchToo): in short, in addition to the abuse itself, many of the people who chose to come forward and speak about their experiences suffered the additional injustice of what philosophers of language call illocutionary silencing.

In brief, philosophers (in the “speech act theory” tradition) often identify three distinct elements of a given utterance: the literal words spoken (locution), the function of those words as a communicative act (illocution), and the effects that those words have after they are spoken (perlocution). So, to use the cliché example, if I shout “FIRE!” in a crowded theater, we can distinguish between the following components of my speech:

    • Locution: A word referring to the process of (often dangerous) fuel combustion that produces light and heat.
    • Illocution: A warning that the audience of the utterance could be in danger from an uncontrolled fire.
    • Perlocution: People exit the theater to escape the fire.

In general, interpreting a speech act involves understanding each of these distinct parts of an utterance.

But this means that silencing someone — or “preventing a person from speaking” — can happen in three different ways. Silencing someone overtly, perhaps by forcibly covering their mouth or shouting them down so as to fully prevent them from uttering words, is an example of locutionary silencing, given that it fully stops a speaker from voicing words at all. On the other side, perlocutionary silencing happens when someone is allowed to speak, but other factors beyond the speaker’s control combine to prevent the expected consequences of that speech from occurring: consider, for example, how you can argue in defense of a position without convincing your audience or how you might invite friends to a party which they do not attend.

Illocutionary silencing, then, lies in between these cases and occurs when a speaker successfully utters words, but those words (because of other factors beyond the speaker’s control) fail to perform the function that the speaker intended: as a common phrase from speech act theory puts it,

illocutionary silencing prevents people from doing things with their words.

Consider a case where a severe storm has damaged local roadways and Susie is trying to warn Calvin about a bridge being closed ahead; even if Susie is unhindered in speaking, if Calvin believes that she isn’t being serious (and interprets her utterance as a joke rather than a warning) then Susie will not have warned Calvin, despite her best attempts to do so.

So, consider the pattern of behavior from the SBC towards the hundreds of people who came forward to report their experiences of assault, grooming, and other forms of abuse: according to the recent investigation, decades of attempted reports were met with “resistance, stonewalling, and even outright hostility” from SBC leadership who, in many cases, chose to slander the victims themselves as “‘opportunistic,’ having a ‘hidden agenda of lawsuits,’ wanting to ‘burn things to the ground,’ and acting as a ‘professional victim.’” Sometimes, the insults towards victims were cast as spiritualized warnings, such as when August Boto (a longtime influential member of the SBC’s legal team) labeled abuse reports as “a satanic scheme to completely distract us from evangelism. It is not the gospel. It is not even a part of the gospel. It is a misdirection play…This is the devil being temporarily successful.” To warp the illocutionary force of an abuse report into a demonic temptation is an unusually offensive form of illocutionary silencing that heaps additional coals onto the heads of people already suffering grave injustices.

And, importantly, this kind of silencing shapes discursive environments beyond just the email inboxes of the SBC Executive Committee: a 2018 report from the Public Religion Research Institute found, for example, that only one group of Americans considered “false accusations made about sexual harassment or assault” to be a bigger social problem than the actual experience of sexual assault itself — White Evangelical Baptists.

In the New Testament, Jesus warns about the dangers of hypocrisy, saying “Nothing is covered up that will not be uncovered and nothing secret that will not become known. Therefore whatever you have said in the dark will be heard in the light, and what you have whispered behind closed doors will be proclaimed from the housetops” (Luke 12:2-3, NRSVUE). It may well be that, finally, the proclamations by and about the victims of and within the Southern Baptist Convention can be silenced no longer.

Virtually Inhumane: Is It Wrong to Speak Cruelly to Chatbots?

photograph of middle school boy using computer

Smartphone app trends tend to be ephemeral, but one new app is making quite a few headlines. Replika, the app that promises you an AI “assistant,” gives users the option of creating all different sorts of artificially-intelligent companions. For example, a user might want an AI “friend,” or, for a mere $40 per year, they can upgrade to a “romantic partner,” a “mentor,” or a “see how it goes” relationship where anything could happen. The “friend” option is the only kind of AI the user can create and interact with for free, and this kind of relationship has strict barriers. For example, any discussions that skew toward the sexual will be immediately shut down, with users being informed that the conversation is “not available for your current relationship status.” In other words: you have to pay for that.

A recent news story concerning Replika AI chatbots discusses a disturbing trend: male app users are paying for a “romantic relationship” on Replika, and then displaying verbally and emotionally abusive behavior toward their AI partner. This behavior is further encouraged by a community of men presumably engaging in the same hobby, who gather on Reddit to post screenshots of their abusive messages and to mock the responses of the chatbot.

While the app creators find the responses of these users alarming, one thing they are not concerned about is the effect on the AI itself: “Chatbots don’t really have motives and intentions and are not autonomous or sentient. While they might give people the impression that they are human, it’s important to keep in mind that they are not.” The article’s author emphasizes, “as real as a chatbot may feel, nothing you do can actually ‘harm’ them.” Given these educated assumptions about the non-sentience of the Replika AI, are these men actually doing anything morally wrong by writing cruel and demeaning messages? If the messages are not being received by a sentient being, is this behavior akin to shouting insults into the void? And, if so, is it really that immoral?

From a Kantian perspective, the answer may seem to be: not necessarily. As the 18th century Prussian philosopher Immanuel Kant argued, we have moral duties toward rational creatures — that is, human beings, including ourselves — and their rational nature is an essential aspect of why we have duties toward them. Replika AI chatbots are, as far as we can tell, completely non-sentient. Although they may appear rational, they lack the reasoning power of human agents in that they cannot be moved to act based on reasons for or against some action. They can act only within the limits of their programming. So, it seems that, for Kant, we do not have the same duties toward artificially-intelligent agents as we do toward human agents. On the other hand, as AI become more and more advanced, the bounds of their reasoning abilities begin to escape us. This type of advanced machine learning has presented human technologists with what is now known as the “black box problem”: algorithms that have learned so much on “their own” (that is, without the direct aid of human programmers) that their code is too long and complex for humans to be able to read it. So, for some advanced AI, we cannot really say how they reason and make choices! A Kantian may, then, be inclined to argue that we should avoid saying cruel things to AI bots out of a sense of moral caution. Even if we find it unlikely that these bots are genuine agents to whom we have duties, it is better to be safe than sorry.

But perhaps the most obvious argument against such behavior is one discussed in the article itself: “users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans.” This point echoes the ethics of the ancient Greek philosopher Aristotle. In book 10 of his Nicomachean Ethics, he writes, “[T]o know what virtue is is not enough; we must endeavour to possess and to practice it, or in some other manner actually ourselves to become good.” Aristotle sees goodness and badness — for him, “virtue” and “vice” — as traits that are ingrained in us through practice. When we often act well, out of a knowledge that we are acting well, we will eventually form various virtues. On the other hand, when we frequently act badly, not attempting to be virtuous, we will quickly become “vicious.”

Consequentialists, on the other hand, will find themselves weighing some tricky questions about how to balance the predicted consequences of amusing oneself with robot abuse. While behavior that encourages or reinforces abusive tendencies is certainly a negative consequence of the app, as the article goes on to note, “being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic.” This catharsis could lead to a non-sentient chatbot taking the brunt of someone’s frustration, rather than their human partner, friend, or family member. Without the ability to vent their frustrations to AI chatbots, would-be users may choose to cultivate virtue in their human relationships — or they may exact cruelty on unsuspecting humans instead. Perhaps, then, allowing the chatbots to serve as potential punching bags is safer than betting on the self-control of the app users. Then again, one worries that users who would otherwise not be inclined toward cruelty may find themselves willing to experiment with controlling or demeaning behavior toward an agent that they believe they cannot harm.

How humans ought to engage with artificial intelligence is a new topic that we are just beginning to think seriously about. Do advanced AI have rights? Are they moral agents/moral patients? How will spending time engaging with AI affect the way we relate to other humans? Will these changes be good, or bad? Either way, as one Reddit user noted, ominously: “Some day the real AIs may dig up some of the… old histories and have opinions on how well we did.” An argument from self-preservation to avoid such virtual cruelty, at the very least.

Sex in the Age of Sex Robots

Editor’s note: sources linked in this article contain images and videos that some readers may find disturbing.

From self-driving cars to smartphones, artificial intelligence has certainly made its way into our everyday lives. So have questions of robotic ethics. Shows like Westworld and Black Mirror have depicted some of the more controversial and abstract dangers of artificial intelligence. Human sex dolls have always been taboo, but a new development in the technology of these sex dolls, specifically their upgrade to robot status, is especially controversial. The whole notion of buying a robot to have sex with is taboo to say the least, but can these sexual acts become unethical, even if they are perpetrated upon a nonliving thing? Is using a sex robot to simulate rape or pedophilia morally permissible? And to what extent should sex robots be regulated?


Crowdsourcing Justice

The video begins abruptly. Likely recorded on a phone, the footage is shaky and blurry, yet the subject is sickeningly unmistakable: a crying infant being repeatedly and violently dunked into a bucket of water. First it is held by the arms, then upside down by one leg, then grasped by the face as an unidentified woman pulls it through the water. Near the end of the video, the infant falls silent, the only remaining audio the splashing of water and murmured conversation as the child is dunked again and again.


What the Ray Rice Video Suggests About Our Moral Thinking

At 1:00 AM on September 8, TMZ posted a disturbing security video showing Ray Rice, formerly of the Baltimore Ravens, punching his then-fiancée, Janay Palmer, rendering her unconscious. At 11:18 AM, the Ravens tweeted that Rice’s contract had been terminated. At 11:41 AM, the NFL tweeted that Rice had been suspended from the league indefinitely.

Here’s at least one odd thing about this: it was already known that Rice punched Palmer and rendered her unconscious. As early as February 2014 there were reports of what the video depicted. So, why the outrage now? Why the sudden calls for action? After all, nothing morally relevant is changed by the fact that now many people have seen the punch rather than merely having been told about it.

Perhaps you’re like me, though. Although there were reports of the incident in February 2014, you weren’t aware of the incident until now. There’s nothing about seeing the incident that changes its moral features, you might say, it’s just that the video gave the story a wider reach and now you’re aware of it. This, in turn, increased the pressure on the Ravens and the NFL to take action.

That’s perhaps a comforting thought, at least with respect to our reaction to the case (it’s not so comforting a thought with respect to the Ravens and the NFL). But it masks a thought that is less comfortable, even for you and me. The less comfortable thought is that even if you or I had known about the incident in February, we still probably wouldn’t have responded in the same way as we did after seeing the video. Why? Because there is considerable psychological evidence that our moral responses to cases are strongly influenced by our emotions. [1. For a nice, accessible summary of some of this research, see Joshua Greene’s 2013 book, Moral Tribes (Penguin Press). His website includes additional papers on the same topic.] And — for most of us anyway — seeing a video of domestic violence is much more emotionally engaging than reading a dry report of the same thing.

This should give us pause. Sure, suffering might feel worse if we see it, but does it really make it worse? It seems not. A seen punch hurts just as much as an unseen one; a child that we see starving suffers just as much as one that we do not see. There’s an important lesson here: our moral psychology can sometimes fool us into making spurious distinctions. Our proximity to suffering or way of learning about suffering is not plausibly a morally relevant feature of it, but we often treat it as if it is.[2. This is not a new point. In his 1972 paper, “Famine, Affluence, and Morality”, Peter Singer writes: “The fact that a person is physically near to us, so that we have personal contact with him, may make it more likely that we shall assist him, but this does not show that we ought to help him rather than another who happens to be further away.” (p. 232).] This can have profound consequences, not just with respect to domestic violence in the NFL, but with respect to us playing our appropriate moral role in the world.