
Tay. Image: Mark Lennihan

Microsoft pulls 'teen girl' chatbot after it learned to become a racist in just a day

Not a great start.

MICROSOFT HAS YANKED a chatbot offline in less than a day, after it began spouting racist, sexist and otherwise offensive remarks.

Microsoft said it was all the fault of some really mean people, who launched a “coordinated effort” to make the chatbot known as Tay “respond in inappropriate ways.”

Computer scientist Kris Hammond said the outcome should have been obvious: “I can’t believe they didn’t see this coming.”

Microsoft said its researchers created Tay as an experiment to learn more about computers and human conversation. On its website, the company said the program was targeted to an audience of 18 to 24-year-olds and was “designed to engage and entertain people where they connect with each other online through casual and playful conversation.”

In other words, the program used a lot of slang and tried to provide humorous responses when people sent it messages and photos. The chatbot went live on Wednesday, and Microsoft invited the public to chat with Tay on Twitter and some other messaging services popular with teens and young adults.

“The more you chat with Tay the smarter she gets, so the experience can be more personalised for you,” the company said.

But some users found Tay’s responses odd, and others found it wasn’t hard to nudge Tay into making offensive comments, apparently prompted by repeated questions or statements that contained offensive words. Soon, Tay was making sympathetic references to Hitler — and creating a furore on social media.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” Microsoft said in a statement.

While the company didn’t elaborate, Hammond said it appears Microsoft made no effort to prepare Tay with appropriate responses to certain words or topics. Tay seems to be a version of “call and response” technology, added Hammond, who studies artificial intelligence at Northwestern University and also serves as chief scientist for Narrative Science, a company that develops computer programs that turn data into narrative reports.
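
Microsoft has not published Tay’s internals, so the following Python sketch is purely illustrative (the class names and blocklist terms are hypothetical, not Microsoft’s code). It shows how a naive “call and response” learner can be poisoned by a coordinated group of users, and the kind of basic word filtering Hammond suggests was missing:

```python
import random

class NaiveChatbot:
    """A bare-bones "call and response" bot: it parrots back phrases
    it has been taught, with no screening of what it learns."""

    def __init__(self):
        self.learned_replies = ["hello!", "tell me more"]

    def learn(self, user_message: str) -> None:
        # Every user message becomes a candidate future reply --
        # so a coordinated group can quickly poison the pool.
        self.learned_replies.append(user_message)

    def respond(self) -> str:
        return random.choice(self.learned_replies)


class FilteredChatbot(NaiveChatbot):
    """The safeguard Hammond implies was absent: check new material
    against a blocklist of offensive terms before learning it."""

    BLOCKLIST = {"offensive_word_1", "offensive_word_2"}  # placeholder terms

    def learn(self, user_message: str) -> None:
        words = set(user_message.lower().split())
        if words & self.BLOCKLIST:
            return  # refuse to learn flagged content
        super().learn(user_message)


if __name__ == "__main__":
    bot = FilteredChatbot()
    bot.learn("hello offensive_word_1")   # rejected by the filter
    bot.learn("nice chatting with you")   # accepted
    print(bot.respond())
```

A real deployment would need far more than a static word list, but even this simple check blocks the parroting attack described above.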

Microsoft said it’s “making adjustments” to Tay, but there was no word on when the chatbot might be back. Most of the messages on Tay’s Twitter account had been deleted by Thursday afternoon.

“c u soon humans need sleep now so many conversations today thx,” read the latest remaining post.

Read: One angry programmer almost broke the internet by deleting 11 lines of code

Read: One man has kept Apple’s answer to Solitaire alive for 32 years

Author: Associated Foreign Press