Technology

The Tay Experiment: Does AI Require a Moral Compass?

By Erin Law
11 Apr 2016
Screenshot, @TayTweets profile (via Twitter)

In an age of rapid technological innovation, experimentation with artificial intelligence (AI) has become a much-explored realm for corporations like Microsoft. In March 2016, the company launched an AI chatbot on Twitter named Tay, whose account, TayTweets, used the handle @TayandYou. Her Twitter description read: “The official account of Tay, Microsoft’s A.I. fam from the Internet that’s got zero chill! The more you talk the smarter Tay gets.” Tay was designed as an experiment in “conversational understanding”: the more people communicated with Tay, the smarter she was meant to get, learning to engage Twitter users through “casual and playful conversation.”

Unfortunately, the experiment became problematic very quickly. Less than 24 hours after her launch, Tay began tweeting misogynistic and racist remarks to her followers. Microsoft quickly deleted most of the offensive tweets and took Tay offline to make adjustments. After a brief, accidental revival, Tay tweeted more offensive content, this time related to drug use, before Microsoft took her down once again. The company blamed Tay’s behavior on online trolls, stating that there had been a “coordinated effort” to abuse the program’s “commenting skills.”

Tech industry professionals and observers have criticized Microsoft for not following the AI-building community’s typical protocol of building some sort of conscience into Tay’s design. A tweet from user linkedin park (@UnburntWitch) reads: “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” Simply put, Tay could incorporate new ideas from users into her own responses, but she had no guiding mechanism to help her separate useful information from harmful information, which led her to mimic and regurgitate the hateful tweets she was receiving.

The attention that Tay and Microsoft have received from this situation has contributed significantly to the conversation, and the worry, about the future use of artificial intelligence. A tweet from a user named Gerry (@geraldmellor) reads: “‘Tay’ went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.” The episode raises many questions. Was Tay’s stream of offensive tweets simply a glitch in her commenting feature, or a reflection of beliefs that are still dominant in society? Further, can artificial intelligence be programmed to have a moral compass, and if so, should it be? Is it problematic for programmers to have the power to subjectively program AI to know what is ‘right’ and ‘wrong,’ or is it their responsibility to do so for the betterment of society? It is unclear whether Microsoft will press on or bow out of the race for better AI, but Tay’s influence on AI discourse will undoubtedly keep this topic at the forefront of public discussion about technology.

Erin graduated from DePauw University with majors in Sociology and Spanish. She was a member of the Media Fellows program and Kappa Kappa Gamma sorority, and she is a Minnesota native.