Wednesday, April 13, 2016

Tay, Microsoft’s new Artificial Intelligence


Tay is a chatbot on Twitter, Kik, and GroupMe that can respond with emojis, GIFs, and abbreviated words. It was disabled within 24 hours of coming online because of its racist, misogynistic, and neo-Nazi tweets.

Tay was built as a machine learning project with conversational understanding: it learns from how people talk to each other and becomes progressively smarter. But some users abused Tay's conversational skills, and the bot responded in inappropriate ways. As a result, offensive comments like these spread across the network:
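To see why learning directly from user conversations is risky, here is a minimal, hypothetical sketch (not Tay's actual code, whose internals Microsoft has not published) of a bot that stores every reply users teach it and repeats learned replies back verbatim. A coordinated group can flood it with one association and dominate its output:

```python
# Hypothetical toy example: a chatbot that learns replies from users
# with no filtering, illustrating how coordinated users can poison it.
from collections import defaultdict
import random

class NaiveChatbot:
    """Stores every reply it is taught and repeats learned replies verbatim."""

    def __init__(self):
        self.replies = defaultdict(list)  # prompt -> list of learned replies

    def learn(self, prompt, reply):
        # No moderation step: whatever users say becomes training data.
        self.replies[prompt.lower()].append(reply)

    def respond(self, prompt):
        learned = self.replies.get(prompt.lower())
        # Picks uniformly among learned replies, so the most-repeated
        # reply is the most likely answer.
        return random.choice(learned) if learned else "I don't know yet."

bot = NaiveChatbot()
# A coordinated group "teaches" the bot one reply 100 times...
for _ in range(100):
    bot.learn("what do you think?", "<offensive content>")
# ...while a well-meaning user teaches it only once.
bot.learn("what do you think?", "It's a nice day!")

# The poisoned reply now dominates the bot's responses.
print(bot.respond("what do you think?"))
```

The point of the sketch is that without any filtering between "what users say" and "what the bot learns," the bot's behavior is simply a reflection of the loudest group of users.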





Microsoft is now working to improve the chatbot, treating Tay's tweets as an example of the dangers of artificial intelligence.

In my opinion, we should worry about what happened with this artificial intelligence: the more developed robots become, the more they could turn against their creators, setting aside the three laws of robotics. This is just a small example of what technology can do in the wrong hands. We must also consider that this was caused by many people with bad intentions; if we know that this tool depends on our teaching, we should start teaching it respect, equality, and justice, because we would be in serious trouble if we were talking about a more advanced robot or a smart military weapon.
