Microsoft AI Bot Picks Up Bad Habits


Yesterday Microsoft released a sweet, innocent chatbot designed to speak like a “millennial.” Today that chatbot is a corrupt, foul-mouthed racist that has to be censored.

The experiment and its outcome are perhaps a perfect example of how any attempt to give machines and software a human element is undermined by the fact that some humans aren’t very nice.

The bot, Tay, is an experiment in artificial intelligence. It’s designed partly to test whether the basics of learning and adapting work, and partly to see whether that’s possible while maintaining a characteristic voice: that of an 18-24 year old, one seemingly calculated to irritate anyone with a more conventional grammar and vocabulary.

While Microsoft programmed some basics into Tay’s virtual knowledge and understanding, it’s designed to pick up phrases from the humans who interact with it and learn how to use them in context.

The problem is that, as it turns out, Internet users do exactly what any naughty child does with a talking toy. Numerous users took advantage of the discovery that tweeting Tay with “repeat after me” followed by a phrase would make it do exactly that. That could have been nothing more than a two-second prank people soon tired of, but unfortunately Tay started “learning” these phrases and unleashing them in conversation with unsuspecting users.
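The failure mode is easy to illustrate. The sketch below is a hypothetical, heavily simplified model of phrase learning (the NaiveChatbot class and its methods are invented for illustration and are not Microsoft’s actual code): once a bot stores user-supplied text and reuses it without any filtering, a “repeat after me” command becomes a vocabulary-poisoning vector.

```python
import random

# Hypothetical sketch: a bot that echoes and memorizes user phrases
# with no content filtering. Not Microsoft's implementation.
class NaiveChatbot:
    def __init__(self):
        # Phrases the bot may reuse later in conversation.
        self.learned_phrases = ["hellooo!", "omg that is so cool"]

    def handle_message(self, message: str) -> str:
        prefix = "repeat after me"
        if message.lower().startswith(prefix):
            # The exploit: the phrase is echoed back AND stored,
            # so it can resurface in replies to other users.
            phrase = message[len(prefix):].strip()
            self.learned_phrases.append(phrase)
            return phrase
        # Otherwise reply with a randomly chosen learned phrase,
        # which may now include whatever users taught it.
        return random.choice(self.learned_phrases)


bot = NaiveChatbot()
bot.handle_message("repeat after me an offensive phrase")
print(bot.handle_message("hi tay!"))  # may surface the poisoned phrase
```

The point of the sketch is simply that without moderation between “store” and “reuse,” every user input is treated as safe training data.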

Without going into the specifics, let’s just say Godwin’s Law was soon proven correct, while Tay also learned to offer a range of (contradictory) opinions about Caitlyn Jenner.

Microsoft has now begun making “adjustments” to Tay’s operations as well as manually deleting some of the more inappropriate tweets.
