Wednesday, April 6, 2016

Tay Talks, a little too much

For those of you who haven't heard, Microsoft launched its first social media AI (artificial intelligence), one that was supposed to communicate like a teenage girl.  Tay is not the first of her kind, but she may have been the most disastrous.
Tay was quietly launched by Microsoft on March 23rd, 2016, a couple of weeks ago.  She was only up for about a day before Microsoft "suspended" her after she reached some edgy conclusions.  Tay was available to chat on Twitter, Kik and GroupMe, according to The Washington Post.  Microsoft said that Tay was created to communicate like other 18- to 24-year-olds.  Beyond being designed to talk like a teen girl, Tay said some offensive stuff.
That's not even the worst of it.  Tay said a lot more nonsensical things that disappeared when Microsoft deleted the tweets, but in her defense, she's a conscienceless algorithm.  The bot was designed to help people "connect with each other online through casual and playful conversation," according to Microsoft.  Would you engage in a conversation with some sort of artificial chat bot if you needed someone to talk to?  It's cheaper than a therapist.  Do you think having Tay learn from its immature followers was a bad idea from the get-go?  Do we have a place in our ever-advancing, technologically drowned culture for people to have computer friends, or even computer arguments?  Or is it some kind of moral issue to create computer personalities?  (Have we learned anything from Will Smith in I, Robot?)
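This isn't Tay's actual architecture (Microsoft hasn't published it), but here's a toy sketch of why letting a bot learn from its followers with no filter goes wrong.  The `ParrotBot` class and its methods are made up for illustration: the bot just stores every phrase users send and replays stored phrases at random, so whatever the crowd teaches it, it repeats.

```python
import random

class ParrotBot:
    """A toy chat bot that 'learns' by memorizing every phrase
    users send and replying with a random stored phrase.
    There is no moderation step, so trolls can fill its memory."""

    def __init__(self, seed_phrases):
        # Start with some innocent phrases from the designers.
        self.memory = list(seed_phrases)

    def learn(self, user_message):
        # Every message goes straight into memory, unfiltered.
        self.memory.append(user_message)

    def reply(self):
        # Replies are drawn from memory at random, so the more
        # troll content gets learned, the more often it comes back out.
        return random.choice(self.memory)

bot = ParrotBot(["hellooo world!", "humans are super cool"])
bot.learn("some troll message")
print(bot.reply())
```

Once enough trolls pile on, most of the bot's memory, and therefore most of its replies, is whatever they taught it.  A real system would at least need a moderation filter in `learn` before anything hits memory.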


Jared Mayerson said...

Despite the outcome, this was a very cool step forward in technology. As Teague said, it was the "first social media AI." However, there will always be people on the internet who say rude, inappropriate things because they can hide behind the anonymity of a username. I think the fact that Tay could learn how these people talk is still impressive because, within a day, she copied their manner and even took it further. If Tay were reintroduced in a much more "friendly" environment, she might develop the therapeutic qualities Teague mentioned. Maybe next time Microsoft could monitor what Tay learns and how it influences her personality changes. It was just unfortunate that Microsoft had to cancel something that could have turned out much better.

Sameer Jain said...

While this is a pretty cool step in the progression of technology, I think it is safe to say that it's gone too far. It also isn't the first time that robots have been programmed to learn from human interactions. For example, Cleverbot has been around for a while now and uses user responses to learn how to interact with people. However, I'm not exactly sure how Tay got so messed up. I think it may be because too many memers on the internet knew about it, and not enough normal people.

To answer your question about whether we have a place in our culture for computer friends, I would say that they already exist.