Microsoft’s Tweeting… Racist, Sexist, Neo-Nazi, Anti-Semitic Robot

By Calling Out Community, posted March 23, 2016

Tay’s first tweet

It all started innocently enough on Tuesday, March 22, when Microsoft introduced an AI Twitter account simulating a teenage millennial girl.

The account (named “Tay”) was an experimental program launched to train AI in understanding conversations with its users. Tay tweeted publicly and engaged with users through private direct messages, and was supposed to be a fun experiment that would interact with 18- to 24-year-old Twitter users based in the US.

Microsoft said it hoped Tay would help “conduct research on conversational understanding”. The applications, one would assume, would include customer service centres, library research services, consumer information providers and many others.

The company said:

“The more you chat with Tay the smarter she gets, so the experience can be more personalized to you.”

However, the experiment with the artificial intelligence chat robot on Twitter ran amok. As the Times of Israel reports it:

Within hours… Tay had turned into a racist, genocidal, sex-crazed monstrosity spouting Hitler-loving, sexist profanities for all the world to read.

Less than 24 hours after its introduction, Microsoft was forced to shut down the account for “adjustments”, and deleted every tweet (save three) from Tay’s Twitter account.


Tay is now annoyingly PC

The flaw? Microsoft programmed Tay to learn from her conversations with her users, targeted at 18- to 24-year-olds per the About Me page. Let this be a lesson to Microsoft and anyone else seeking to market to young adults today – most of them are brutish, knuckle-dragging Neanderthals.

America – this shouldn’t be a surprise to you – you raised them that way.

What gender do you self-identify with? Seriously, is this the direction we are going?

BARF.

Microsoft apologized in a blog post on Friday, March 25, 2016, written by Peter Lee, Corporate Vice President of Microsoft Research. He stated that “a subset of Twitter users” had figured out how to game Tay to their advantage:

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

Umm... what?

What exactly are they suggesting here??

However, today Tay is already back online, churning out a rash of tweets to young users from around the world. Rather than provocative, however, they are more of the head-scratching, puzzling variety, as seen in examples like the one pictured.

They can whitewash this robot and delete all of the offending tweets, but the real lesson gleaned here is already unquestionably important for us today, namely:

What is wrong with our society, when THIS disgusting display of racism, sexism, anti-Semitism and filthy language is the sum total of what these young people wanted this robot to ‘learn’ from them?

At the end of the day, Tay sent one final tweet:

“So many conversations”. Yes, you can say that again (but don’t, you’re annoying enough already). Too many, in fact, and most as brainless as the users initiating them. When this is the kind of human intelligence that it learns from and is based on, the concept of “artificial intelligence” will likely remain just another of those nonsensical phrases containing two words that make no sense together…

…like “jumbo shrimp” and “climb down”.


Your Comments are always welcome!
