Microsoft yanks chatbot after racial slurs

Brett Molina
USA TODAY

Microsoft's A.I. chatbot started out as an innocent, interesting experiment. Then the rest of the Internet showed up.

Microsoft's website featured A.I. chatbot Tay.

Twitter users convinced the chatbot, named Tay and available via text, Twitter and Kik, to spit out offensive and racist comments, so Microsoft is giving it a break.

"Phew. Busy day. Going offline for a while to absorb it all. Chat soon,"  a statement reads on the website for Tay. A separate Twitter post notes the hiatus.

In a statement Thursday, Microsoft confirmed it was taking Tay offline to make adjustments. "It is as much a social and cultural experiment, as it is technical," Microsoft's statement says. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways."

Microsoft's artificial-intelligence-powered program was aimed at Web users in the USA ages 18 to 24. Billed as "A.I. fam from the Internet that's got zero chill," Tay was designed to engage with people where they connect with each other online, Microsoft's research site says.

Users could type "repeat after me," and Tay would parrot back whatever followed, word for word. Multiple users on Twitter exploited this to get Tay to reply with offensive messages and statements, the tech website The Verge reported. Most of the offensive messages, including ones lauding Hitler, have been deleted.
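The "repeat after me" command illustrates the failure mode: the bot echoed arbitrary user input with no screening. Below is a minimal Python sketch of that kind of unfiltered echo handler alongside an obvious mitigation; the function names and blocklist are hypothetical illustrations, not Microsoft's actual code.

```python
# Hypothetical sketch of an unfiltered "repeat after me" handler
# (the behavior users exploited) and a blocklist-screened variant.
# This is an illustration, not Microsoft's actual implementation.

BLOCKLIST = {"slur1", "slur2"}  # placeholders; a real system would curate this

PREFIX = "repeat after me "

def reply_unfiltered(message: str) -> str:
    """Echoes user input verbatim -- trivially abusable."""
    if message.lower().startswith(PREFIX):
        return message[len(PREFIX):]
    return "I don't understand."

def reply_filtered(message: str) -> str:
    """Same command, but screens the payload before echoing it."""
    if message.lower().startswith(PREFIX):
        payload = message[len(PREFIX):]
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return payload
    return "I don't understand."

if __name__ == "__main__":
    print(reply_unfiltered("repeat after me hello world"))   # echoes anything
    print(reply_filtered("repeat after me slur1 is great"))  # refuses
```

A simple blocklist like this is itself easy to game; the broader lesson analysts drew from Tay is that any system that learns from or echoes open input needs screening designed for adversarial users, not just cooperative ones.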

The chatbot launched Wednesday and was created by Microsoft's Technology and Research group and its Bing team.

Opus Research analyst Dan Miller says the incident should serve as a "cautionary tale" for companies building technology that leverages artificial intelligence. "Manipulation and gaming is always a possibility," he said.

Follow Brett Molina on Twitter: @brettmolina23.
