Microsoft took a crack at engaging young teens and millennials when it launched its artificial intelligence chat bot known as “Tay.” However, the software giant’s attempt backfired, leaving the company with no option but to take the A.I. offline.

Evidence of the A.I. going wayward came when its Twitter account began spouting racist, sexist, and otherwise offensive statements. According to Microsoft, it happened after people launched a “coordinated effort” to make Tay respond with rude remarks and in inappropriate ways.

Tay is an experiment designed to explore how computers learn from human conversation. According to the software giant’s website for the A.I., the program was aimed at millennials aged 18 to 24.

“Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” reads Microsoft’s description of the chat bot. “The more you chat with Tay the smarter she gets, so the experience can be more personalised for you.”

However, the chat bot’s programming was exploited, and Tay was taught to respond in bizarre and offensive ways. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” Microsoft said in a statement.

Tay isn’t the first teenage-girl chat bot Microsoft has deployed. The company had earlier launched “Xiaoice,” another artificial intelligence chat bot, designed as a female assistant and now used by some 20 million people on Chinese social networks. Unlike Tay, however, Xiaoice has run without a hitch.