Tay
That's not even the worst of it. Tay said plenty of other nonsensical things that were taken down when Microsoft deleted the tweets, but in her defense, she's a conscienceless algorithm. The bot was designed for people to "connect with each other online through casual and playful conversation," according to TechRepublic. Would you engage in conversation with an artificial chatbot if you needed someone to talk to? It's cheaper than a therapist. Do you think designing Tay to learn from its immature followers was a bad idea from the get-go? Is there a place in our ever-advancing, technologically drowned culture for people to have computer friends, or even computer arguments? Or is creating computer personalities some kind of moral issue? (Have we learned anything from Will Smith in I, Robot?)
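To see why "learn from whoever talks to you" goes wrong so fast, here's a toy sketch. This is purely hypothetical, Microsoft never published Tay's actual architecture, but the core failure mode of unfiltered imitation learning can be shown in a few lines: the bot just remembers what users say and parrots it back, so a hostile crowd directly writes its personality.

```python
import random
from collections import defaultdict

class ParrotBot:
    """Toy imitation learner (hypothetical, NOT Tay's real design).
    It remembers the replies users give to each prompt and repeats
    them back later -- with no filtering, whatever the crowd feeds
    it becomes its vocabulary."""

    def __init__(self):
        # prompt (lowercased) -> list of replies users have given
        self.memory = defaultdict(list)

    def learn(self, prompt, user_reply):
        # No moderation step here: this is exactly the gap trolls exploit.
        self.memory[prompt.lower()].append(user_reply)

    def respond(self, prompt):
        replies = self.memory.get(prompt.lower())
        if not replies:
            return "tell me more!"
        # Garbage in, garbage out: picks any reply it has ever seen.
        return random.choice(replies)

bot = ParrotBot()
bot.learn("hello", "hey there!")
print(bot.respond("hello"))  # prints "hey there!"
```

A real system would add a moderation filter before `learn()` stores anything; the comments below suggest exactly that kind of monitoring.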
https://www.washingtonpost.com/news/the-intersect/wp/2016/03/23/meet-tay-the-creepy-realistic-robot-who-talks-just-like-a-teen/
http://www.businessinsider.com/microsoft-launches-tay-teen-chatbot-2016-3
http://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/
http://imgur.com/gallery/yMz1B
2 comments:
Despite the outcome, this was a very cool step forward in technology. As Teague said, it was the "first social media AI." However, there will always be people on the internet who say rude, inappropriate things because they can hide behind the anonymity of a username. I still find it impressive that Tay could learn how those people talk: within a day, she had copied their manner and even taken it further. If Tay were reintroduced in a much friendlier environment, she might develop the therapeutic qualities Teague mentioned. Maybe next time Microsoft could monitor what Tay learns and how it shapes her personality. It's just unfortunate that Microsoft had to cancel something that could have turned out much better.
While this is pretty cool regarding the progression of technology, I think it is safe to say that it went too far. It also isn't the first time robots have been programmed to learn from human interactions: Cleverbot has been around for a while now and uses user responses to learn how to talk to people. I'm not exactly sure how Tay got so messed up, though. I think it may be because too many memers on the internet knew about it, and not enough normal people.
To answer your question about whether we have a place in our culture for computer friends, I would say that they already exist.