
Is It Okay to Be Mean to AI?

Fast-food chain Taco Bell recently replaced drive-through workers with AI chatbots at a select number of locations across America. The outcome was perhaps predictable: numerous videos went viral on social media showing customers becoming infuriated with the AI’s mistakes. People also started to see what they could get away with, including one instance where a customer ordered 18,000 waters, temporarily crashing the system.

As AI programs start to occupy more mundane areas of our lives, more and more people are getting mad at them, being mean to them, or just trying to mess with them. This behavior has apparently become so pervasive that AI company Anthropic announced that its chatbot Claude would now end conversations deemed “abusive.” Never one to shy away from offering his opinion, Elon Musk took to Twitter to express his concerns, remarking that “torturing AI is not okay.”

Using terms like “abuse” and “torture” already risks anthropomorphizing AI, so let’s ask a simpler question: is it okay to be mean to AI?

We asked a similar question at the Prindle Post a few years ago, when chatbots had only recently become mainstream. That article argued that we should not be cruel to AIs, since by acting cruelly towards one thing we might get into the habit of acting cruelly towards other things, as well. However, chatbots and our relationships with them have changed in the years since their introduction. Is it still the case that we shouldn’t be mean to them? I think the answer has become a bit more complicated.

There is certainly still an argument to be made that, as a rule, we should avoid acting cruelly whenever possible, even towards inanimate objects. Recent developments in AI have, however, raised a potentially different question regarding the treatment of chatbots: whether they can be harmed. The statements from Anthropic and Musk seem to imply that they can, or at least that there is a chance they can, and thus that you shouldn’t be cruel to chatbots because doing so risks harming the chatbot itself.

In other words, we might think that we shouldn’t be mean to chatbots because they have moral status: they are the kinds of things that can be morally harmed, benefitted, and evaluated as good or bad. There are lots of things that have moral status – people and other complex animals are usually the things we think of first, but we might also think about simpler animals, plants, and maybe even nature. There are also lots of things that we don’t typically think have moral status: inanimate objects, machines, single-cell organisms, things like that.

So how can we determine whether something has moral status? Here’s one approach: whether something has moral status depends on certain properties that it has. For example, we might think that the reason people have moral status is because they have consciousness, or perhaps because they have brains and a nervous system, or some other property. These aren’t the only properties we can choose. For example, 18th-century philosopher Jeremy Bentham argued that animals should be afforded many more rights than they were at the time, not because they have consciousness or the ability to reason, per se, but simply because they are capable of suffering.

What about AI chatbots, then? Despite ongoing hype, there is still no good reason to believe that any chatbot is capable of reasoning in the way that people are, nor is there any good reason to believe that chatbots possess “consciousness” or are capable of suffering in any sense. So if it can’t reason, isn’t conscious, and can’t suffer, should we definitively rule out chatbots from having moral status?

There is potentially another way of thinking about moral status: instead of thinking about the properties of the thing itself, we should think about our relationship with it. Philosopher of technology Mark Coeckelbergh considers cases where people have become attached to robot companions, arguing that, for example, “if an elderly person is already very attached to her Paro robot and regards it as a pet or baby, then what needs to be discussed is that relation, rather than the ‘moral standing’ of the robot.” According to this view, it’s not important whether a robot, AI, or really anything else has consciousness or can feel pain when thinking about moral status. Instead, what’s important when considering how we should treat something is our experiences with and relationship to it.

You may have had a similar experience: we can become attached to objects and feel that they deserve consideration that other objects do not. We might also ascribe more moral status to some things than to others, depending on our relationship with them. For example, someone who eats meat can recognize that their pet dog or cat is comparable to a pig in terms of the relevant properties, insofar as they are all capable of suffering, have brains and complex nervous systems, etc. Yet although they have no problem eating a pig, they would likely be horrified if someone suggested they eat their pet. In this case, they might ascribe some moral status to a pig, but would ascribe much more moral status to their pet because of their relationship with it.

Indeed, we have also seen cases where people have become very attached to their chatbots, in some cases forming relationships with them or even attempting to marry them. In such cases, we might think that there is a meaningful moral relationship, regardless of any properties the chatbot has. If we were to ascribe a chatbot moral status because of our relationship with it, though, its being a chatbot is incidental: it would be a thing that we are attached to and consider important, but that doesn’t mean that it thereby has any of the important properties we typically associate with having moral status. Nor would our relationship be generalizable: just because one person has an emotional attachment to a chatbot does not mean that all relationships with chatbots are morally significant.

Indeed, we have seen that not all of our experiences with AI have been positive. As AI chatbots and other programs occupy a larger part of our lives, they can make our lives more frustrating and difficult, and thus we might establish relationships with them in which they figure not as objects of our affection or care, but as obstacles and even detriments to our wellbeing. Are there cases, then, where a chatbot might not be deserving of our care, but rather our condemnation?

For example, we have all likely been in a situation where we had to deal with frustrating technology. Maybe it was an outdated piece of software you were forced to use, or an appliance that never worked as it was supposed to, or a printer that constantly jammed for seemingly no good reason. None of these things have the properties that make them a legitimate subject of moral evaluation: they don’t know what they’re doing, have no intentions to upset anyone, and have none of the obligations that we would expect from a person. Nevertheless, it is the relationship we’ve established with them that seems to make them an appropriate target of our ire. It is not only cathartic to yell profanities at the office printer after its umpteenth failure to complete a simple printing task; it is justified.

When an AI chatbot takes the place of a person and fails to work properly, it is no surprise that we would start to have negative experiences with it. While failing to properly take a Taco Bell order is, all things considered, not a significant indignity, it is symptomatic of a larger pattern of problems that AI has been creating, ranging from environmental impact, to job displacement, to overreliance resulting in cognitive debt, to simply creating more work for us than we had before. Perhaps, then, ordering 18,000 waters in an attempt to crash an unwelcome AI system is not so much an act of cruelty as a righteous expression of indignation.

The dominant narrative around AI – perpetuated by tech companies – is that it will bring untold benefits that will make our lives easier, and that it will one day be intelligent in the way human beings are. If these things were true, then it would be easier to be concerned with the so-called “abuse” of AI. However, given that AI programs do not have the properties required for moral status, and that our relationships with them are frequently ones of frustration, perhaps being mean to an AI isn’t such a big deal after all.

Robot Kitchens, AI Cooks, and the Meaning of Food

I knew that I was very probably not going to die, of course. Very few people get ill from pufferfish in restaurants. But I still felt giddy as I took my first bite, as though I could taste the proximity of death in that chewy, translucent flesh. I swilled my sake, squeezed some lemon onto the rest of my sashimi, and looked up. Through the serving window I could see the chef who held my life in his busy hands. We made eye contact for a moment. I took another bite. This is absurd. I am absurd. I pictured the people I love, across the ocean in sleeping California, stirring gently in their warm, musky beds.

My experience in Tokyo eating pufferfish, a delicacy known as fugu, was rich and profound. Fugu has an unremarkable taste. But pufferfish is poisonous; it can be lethal unless it is prepared in just the right way by a highly trained chef. My experience was inflected with my knowledge of the food’s provenance and properties: that this flesh in my mouth was swimming in a tank a few minutes ago and was extracted from its lethal encasement by a man who has dedicated his life to this delicate task. Seconds ago, it was twitching on my plate. And now it might bring me a lonely death in an unfamiliar land. This knowledge produced a cascade of emotions and associations as I ate, prompting reflections on my life and the things I care about.

Fugu is an unfamiliar illustration of the familiar fact that our eating experiences are often constituted by more than physical sensations and a drive for sustenance. Attitudes relating to the origin or context of our food (such as a belief that this food might kill me, or that this food was made with a caring hand) often affect our eating experiences. There is much more to food, as a site of human experience and culture, than sensory and nutritional properties.

You would be hard pressed to find someone who denies this. Yet we are on the cusp of societal changes in food production that could systematically alter our relationship to food and, consequently, our eating experiences. These changes are part of broader trends apparent across nearly all spheres of life resulting from advances in artificial intelligence and other automation technologies. Just as an AI system can now drive your taxi, process your loan application, and write your emails, so AI and related automation tools can now make your food, at home or in a restaurant. Many technologists in Silicon Valley are trying to make automated food production ubiquitous. One CEO of a successful company I spoke with said he expects that almost no human beings will be cooking in thirty years’ time, kind of like how today very few humans make soap, toys, or clothing by hand. It may sound ridiculous, but I’ve found that this vision is common in influential industry spaces.

What might life look like if this technological vision were to come about? This question can appear trivial relative to louder questions about autonomous weapons systems, AI medicine, or the existential threat of a superintelligence. It is not a question of life and death. But I think the question points to a more insidious possibility: that our technological advances might quietly erode the conditions that enable us to experience our day-to-day lives as meaningful.

On the one hand, the struggle for sustenance is a universal feature of human life, and everyone is a potential beneficiary of technology that streamlines food production, like AI that invents recipes or performs kitchen managerial work and robots that prepare food. Home cooking robots could save people time and effort that would be better spent elsewhere. A restaurant that staffs fewer humans could save on labor costs and pass these savings on to customers. Robots could mitigate human errors relating to hygiene or allergies. And then there is the possibility of automated systems that can personalize food to each consumer’s specific tastes and dietary requirements. Virtually every technologist I have spoken to in this industry is excited about a future where every diner can receive a bespoke meal that leaves them totally satisfied and healthy, every time.

Automation brings interesting aesthetic possibilities, too. AI can augment human creativity by helping pioneer unusual flavor pairings. The knowledge that your food was created by a sexy robot could enhance your eating experience, especially if the alternative would be a miserable and underpaid laborer.

These are nice possibilities. But one thing that automation tends to do is create distance between humans and the things that are automated. Our food systems already limit our contact with the sources of our food. For example, factory farming hides the processes through which meat is produced, concealing moral problems and detracting from pleasures of eating that are rooted in participation in food production. AI and robotics could create even more distance between us and our food. Think of the Star Trek replicator as an extreme case; the diner calls for food, and it simply appears via a wholly automated process.

Why is the prospect of losing touch with food processes concerning? For some it might not be. There are many sources of value in the world, and there is no one right way to relate to food. But, personally, I find the prospect of losing touch with food concerning because my most memorable food experiences have all been conditioned by my contact with the processes through which my food came to be.

I have a sybaritic streak. I enjoy being regaled at fancy restaurants with diseased goose livers, spherified tonics, perfectly plated tongues, and other edible exotica. But these experiences tend to pass for me like a kaleidoscopic dream, filled with rarefied sensations that can’t be recalled upon waking. The eating experiences I cherish most are those in which my food is thickly connected to other things that I care about, like relationships, ideas, and questions that matter to me. These evocative connections are established through contact with the process through which my food was made.

I’ve already mentioned one example, but I can think of many others. Like when, in the colicky confusion of graduate school, Sam and I slaughtered and consumed a chicken in the living room of his condo so that we might, as men of principle, become better acquainted with the hidden costs of our food. Or when I ordered tripas tacos for Stephen, my houseguest in Santa Barbara, which he thoroughly enjoyed until, three tacos in, he asked me what ‘tripas’ meant. Or when I made that terrible tuna-fish casserole filled with glorious portions of shredded cheese and Goldfish crackers for Amy, Jacob, and Allison so that they might become sensuously acquainted with a piece of my childhood. Or when Catelynn and I sat in that tiny four-seat kitchen overlooking the glittering ocean in Big Sur and were served sushi, omakase style, directly from the chef’s greasy, gentle hands, defining a shared moment of multisensory beauty.

These experiences fit into the fabric of my life in unique and highly meaningful ways. They are mine, but you probably have some like them. The thing to notice is that these sorts of experiences would be inaccessible without contact with the provenance of food. They would not be possible in a world where all food was produced by a Star Trek replicator. This suggests that food automation threatens to erode an important source of human meaning.

Really, there are all sorts of concerns you might have about AI and robotics in the culinary sphere. Many of these have been identified by my colleague Patrick Lin. But for me, the erosion of meaning is worth emphasizing in discussions about technology because this kind of cost resists quantification, making it easy to overlook. It’s the sort of thing that might not show up in the cost-benefit analysis of a tech CEO who speaks glibly about eliminating human cooking.

The point I’m making is not that we should reject automation. The point is that as we augment and replace human labor in restaurants, home kitchens, and other spheres of life, we need to be attentive to how the processes we hope to automate away may enrich our lives. An increase in efficiency according to quantifiable criteria (time, money, waste) can diminish squishier but no less important things. Sometimes this provides a reason to insist on an alternative vision in which humans remain in contact with the processes in question. I would argue this is true in the kitchen; humans should retain robust roles in the processes through which our food comes to be.

After my meal in Tokyo, I used my phone to find an elevated walkway on which to smoke. I took a drag on a cigarette and watched a group of men under an overpass producing music, in the old way, by a faint neon light. I could feel the fugu in my belly, and my thoughts flashed to my loves and hopes. One of the men playing a guitar looked up. We made eye contact for a moment. I took another drag. This is nice. I am happy.

 

Note: This material is based upon work supported by the National Science Foundation under Award No. 2220888.  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

What’s Wrong with AI Therapy Bots?


I have a distinct memory from my childhood: I was on a school trip, at what I think was the Ontario Science Centre, and my classmates and I were messing around with a computer terminal. As this was the early-to-mid ’90s, the computer itself was a beige slab with a faded keyboard, letters dulled from the hunt-and-pecking of hundreds of previous children on school trips of their own. There were no graphics, just white text on a black screen, and a flashing rectangle indicating where you were supposed to type.

The program was meant to be an “electronic psychotherapist,” either some version of ELIZA – one of the earliest attempts at what we would now classify as a chatbot – or some equivalent Canadian substitute (“Eh-LIZA”?). After you started the program up, there was a welcome message, after which it would ask questions – something like “How are you feeling today?” or “What seems to be bothering you?” The rectangle would flash expectantly, store the value of the user’s input in a variable, and then spit it back out, often inelegantly, in a way that was meant to mimic the conversation of a therapist and patient. I remember my classmate typing “I think I’m Napoleon” (the best expression of our understanding of mental illness at the time) and the computer replying: “How long have you had the problem I think I’m Napoleon?”
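For readers who have never seen how little was going on under the hood, here is a minimal, purely illustrative sketch in Python – not the actual ELIZA code, which relied on a richer set of keyword and pattern-matching rules – of the echo-back behavior just described:

    # A toy "electronic psychotherapist" in the spirit of ELIZA: it stores
    # whatever the user types and parrots it back inside a canned template,
    # with no understanding of the content.
    def toy_therapist():
        print("How are you feeling today?")
        while True:
            reply = input("> ")
            if reply.lower() in ("quit", "bye"):
                print("Goodbye.")
                break
            # "I think I'm Napoleon" becomes
            # "How long have you had the problem I think I'm Napoleon?"
            print(f"How long have you had the problem {reply}?")

    if __name__ == "__main__":
        toy_therapist()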

30-ish years later, I receive a notification on my phone: “Hey Ken, do you want to see something adorable?” It’s from an app called WoeBot, and I’ve been ignoring it. WoeBot is one of several new chatbot therapists that tout that they are “driven by AI”: this particular app claims to sit at the intersection of several different types of therapy – cognitive behavioral therapy, interpersonal psychotherapy, and dialectical behavior therapy, according to their website – and AI powered by natural language processing. At the moment, it’s trying to cheer me up by showing me a gif of a kitten.

Inspired by (or worried they’ll get left behind by) programs like ChatGPT, tech companies have been chomping at the bit to create their own AI programs that produce natural-sounding text. The lucrative world of self-help and mental well-being seems like a natural fit for such products, and many claim to solve a longstanding problem in the world of mental healthcare: namely, that while human therapists are expensive and busy, AI therapists are cheap and available whenever you need them. In addition to WoeBot, there’s Wysa – also installed on my phone, and also trying to get my attention – Youper, Fingerprint for Success, and Koko, which recently got into hot water by failing to disclose to its userbase that they were not, in fact, chatting with a human therapist.

Despite having read reports that people have found AI therapy bots to be genuinely helpful, I was skeptical. But I attempted to keep an open mind, and downloaded both WoeBot and Wysa to see what all the fuss was about. After using them for a month, I’ve found them to be very similar: they both “check in” at prescribed times throughout the day, attempt to start up a conversation about any issues that I’ve previously said I wanted to address, and recommend various exercises that will be familiar to anyone who has done cognitive behavioral therapy. They both offer the option to connect to real therapists (for a price, of course), and perhaps in response to the Koko debacle, neither hides the fact that they are programs (often annoyingly so: WoeBot is constantly talking about how its friends are other electronics, a schtick that got tired almost immediately).

It’s been an odd experience. The apps send me messages saying that they’re proud of me for doing good work, that they’re sorry if I didn’t find a session to be particularly useful, and that they know that keeping up with therapy can be difficult. But, of course, they’re not proud of me, or sorry, and they don’t know anything. At times their messages are difficult to distinguish from those of a real therapist; at others, they don’t properly parse my input, and respond with messages not unlike “How long have you had the problem I think I’m Napoleon?” If there is any therapeutic value in the suspension of disbelief then it often does not last long.

But apart from a sense of weirdness and the occasional annoyances, are there any ethical concerns surrounding the use of AI therapy chatbots?

There is clearly potential for them to be beneficial: your stock model AI therapist is free, and the therapies that they draw their exercises from are often well-tested in the offline world. A little program that reminds you to take deep breaths when you’re feeling stressed out seems all well and good, so long as it’s obvious that it’s not a real person on the other side.

Whether you think the hype about new AI technology is warranted or not will likely impact your feelings about the new therapy chatbots. Techno-optimists will emphasize the benefit of expanding care to many more people than could be reached through other means. Those who are skeptical of the hype, however, are likely to think that spending so much money on unproven tech is a poor use of resources: instead of sinking billions into competing chatbots, maybe that money could be spent on helping a wider range of people access traditional mental health resources.

There are also concerns about the ability of AI-driven text generators to go off the rails. Microsoft’s recent experiment with their new AI-powered Bing search had an inauspicious debut, occasionally spouting nonsense and even threatening users. It’s not hard to imagine the harm such unpredictable outputs could cause for someone who relied heavily on their AI therapy bot. Of course, true believers in the new AI revolution will dismiss these worries as growing pains that inevitably come along with the use of any new tech.

What is perhaps troubling is that the apps themselves walk a tightrope between trying to be a sympathetic ear, and reminding you that they’re just bots. The makers of WoeBot recently released research results that suggest that users feel a “bond” with the app, similar to the kind of bond they might feel with a human therapist. This is clearly an intentional choice on the part of the creators, but it brings with it some potential pitfalls.

For example, although the apps I’ve tried have never threatened me, they have occasionally come off as cold and uninterested. During a recent check-in, Wysa asked me to tell it what was bothering me that morning. It turned out to be a lot (the previous few days hadn’t been great). But after typing it all out and sending it along, Wysa quickly cut the conversation short, saying that it seemed like I didn’t want to engage at the moment. I felt rejected. And then I felt stupid that I felt rejected, because there was nothing that was actually rejecting me. Instead of feeling better by letting it all out, I felt worse.

In using the apps I’m reminded of a thought experiment from philosopher Hilary Putnam. He asks us to consider an ant on a beach who, through its search for food and random wanderings, happens to trace out what looks to be a line drawing of Winston Churchill. It is not, however, a picture of Churchill, and the ant did not draw it, at least in the way that you or I might. However, at the end of the day a portrait of Winston Churchill consists of a series of marks on a page (or on a beach), so what, asks Putnam, is the relevant difference between those made by the ant and those made by a person?

His answer is that only the latter are made intentionally, and it is the underlying intention which gives the marks their meaning. WoeBot and Wysa and other AI-powered programs often string together words in ways that look indistinguishable from those that might be written down by a human being on the other side. But there is no intentionality, and without intentionality there is no genuine empathy or concern or encouragement behind the words. They are just marks on a screen that happen to have the same shape as something meaningful.

There is, of course, a necessary kind of disingenuousness that must exist for these bots to have any effect at all. No one is going to feel encouraged to engage with a program that explicitly reminds you that it does not care about you because it does not have the capacity to care. AI therapy requires that you play along. But I quickly got tired of playing make believe with my therapy bots, and it’s overall become increasingly difficult for me to find the value in this kind of ersatz therapy.

I can report one concrete instance in which using an AI therapy bot did seem genuinely helpful. It was guiding me through an exercise, the culmination of which was to get me to evaluate my own situation as though it were a friend’s, and to consider what I would say to them. It’s an exercise that is frequently used in cognitive behavioral therapy, but one that’s easy to forget to do. In this way, the app’s checking in did, in fact, help: I wouldn’t have been as sympathetic to myself had it not reminded me to be. But I can’t help but think that if that’s where the benefits of these apps lie – in presenting tried-and-tested exercises from various therapies and reminding you to do them – then the whole thing is over-engineered. If it can’t talk or understand or empathize like a human, then there seems to be little point in having any artificial intelligence in there at all.

AI therapy bots are still new, and so it remains to be seen whether they will have a lasting impact or just be a flash in the pan. Whatever does end up happening, though, it’s worth considering whether we would even want the promise of AI-powered therapy to come true.

Virtual Work and the Ethics of Outsourcing


Like a lot of people over the past two years, I’ve been conducting most of my work virtually. Interactions with colleagues, researchers, and other people I’ve talked to have taken place almost exclusively via Zoom, and I even have some colleagues I’ve yet to meet in person. There are pros and cons to the arrangement, and much has been written about how to make the most out of virtual working.

A recent event involving Canadian outlets of the restaurant chain Freshii, however, has raised some ethical questions about a certain kind of virtual working arrangement, namely the use of virtual cashiers called “Percy.” Here’s how it works: instead of an in-the-flesh cashier to help you with your purchase, a screen will show you a person working remotely, ostensibly adding a personal touch to what might otherwise feel like an impersonal dining experience. The company that created Percy explains its business model as follows:

Unlike a kiosk or a pre-ordering app, which removes human jobs entirely, Percy allows for the face-to-face customer experience, that restaurant owners and operators want to provide their guests, by mobilizing a global and eager workforce.

It is exactly this “global and eager workforce” that has landed Freshii in hot water: it has recently been reported that Freshii is using workers who are living in Nicaragua and are paid a mere $3.75 an hour. In Canada, several ministers and labor critics have harshly criticized the practice, with some calling for new legislation to prevent other companies from doing the same thing.

Of course, outsourcing is nothing new: for years, companies have hired overseas contractors to do work that can be done remotely, and at a fraction of the cost of domestic workers. At least in Canada, companies are not obligated to pay outsourced employees a wage that meets the minimum standards of Canadian wage laws; indeed, the company that produces Percy has maintained that they are not doing anything illegal.

There are many worries one could have with the practice of outsourcing in general, chief among them that it takes away job opportunities from domestic employees, and that it treats foreign employees unfairly by paying them below minimum wage (at least by the standards of the country where the business is located).

There are also some arguments in favor of the practice: in an op-ed written in response to the controversy, the argument is made that while $3.75 is very little to those living in Canada and the U.S., it is more significant for many people living in Nicaragua. What’s more, with automation risking many jobs regardless, wouldn’t it be better to at least pay someone for this work, as opposed to just giving it to a robot? Of course, this argument risks presenting a false dichotomy – one could, after all, choose to pay workers in Nicaragua a fair wage by Canadian or U.S. standards. But the point is still that such jobs provide income for people who need it.

If arguments about outsourcing are old news, then why all the new outrage? There does seem to be something particularly odd about the virtual cashier. Is it simply that we don’t want to be faced with a controversial issue that we know exists, but would rather ignore, or is there something more going on?

I think discomfort is definitely part of the problem – it is easier to ignore potentially problematic business practices when we are not staring them in the virtual face. But there is perhaps an additional part of the explanation, one that raises metaphysical questions about the nature of virtual work: when you work virtually, where are you?

There is a sense in which the answer to this question is obvious: you are wherever your physical body is. If I’m working remotely and on a Zoom call, the place I am would be in Toronto (seeing as that’s where I live) while my colleagues will be in whatever province or country they happen to be physically present in at the time.

When we are all occupying the same Zoom call, however, we are also in another sense in the same space. Consider the following. In this time of transition between COVID and (hopefully) post-COVID times, many in-person events have become hybrid affairs: some people will attend in-person, and some people will appear virtually on a screen. For instance, many conferences are being held in hybrid formats, as are government hearings, trials, etc.

Let’s say that I give a presentation at such a conference, that I’m one of these virtual attendees, and that I participate while sitting in the comfort of my own apartment. I am physically located in one place, but also attending the conference: I might not be able to be there in person, but there’s a sense in which I am still there, if only virtually.

It’s this virtual there-ness that I think makes a case like Percy feel more troubling. Although a Canadian cashier who worked at Freshii would occupy the physical space of a Freshii restaurant in Canada, a virtual cashier would do much of the same work, interact with the same customers, and see and hear most of the same things. In some sense, they are occupying the same space: the only relevant thing that differentiates them from their local counterpart is that they are not occupying it physically.

What virtual work has taught us, though, is that one’s physical presence really isn’t an important factor in a lot of jobs (excluding jobs that require physical labor, in-person contact, and work that is location-specific, of course). If the work of a Freshii cashier does not require physical presence, then it hardly seems fair that one be compensated at a much lower rate than one’s colleagues for simply not being there. After all, if two employees were physically in the same space, working the same job, we would think they should be compensated the same. Why, then, should it matter if one is there physically, and the other virtually?

Again, this kind of unfairness is present in many different kinds of outsourced work, and whether physical distance has ever been a justification for different rates of pay is up for debate. But with physical presence feeling less and less necessary for so many jobs, new working possibilities call into question the ethics of past practices.

Who Should Program the Morality of Self-Driving Cars?


On Sunday, March 18, Elaine Herzberg died after being hit by Uber’s self-driving car on a road in Tempe, Arizona. The car was out for a test run, and video of the collision suggests a failure of both the self-driving technology and the in-car driver who was meant to supervise the test. Uber has removed its self-driving cars from the road while cooperating with investigations, and discussions of the quickly advancing future of driverless vehicles have once again been stirred up in the press.


Digital Decisions in the World of Automated Cars

We’re constantly looking towards the future of technology and getting excited about every new innovation that makes our lives easier in some way. Our phones, laptops, tablets, and now even our cars are becoming ever smarter. Most new cars on the market today are equipped with GPS navigation, cruise control, and even intelligent parallel-parking programs. Now, self-driving cars have made their way to the forefront of the automotive revolution.
