This week, Twitter took a break from showcasing the decline of human intelligence to highlight the promise of artificial intelligence, and it was magic. Well, for 24 hours it was.

A little after 8 a.m. on Wednesday, Microsoft introduced the world to Tay, a state-of-the-art teen-seeming chatbot created to “experiment with and conduct research on conversational understanding.” Tay was designed to tell jokes, play games, and otherwise amuse her fellow teens by sampling, analyzing, and recycling their speech patterns into something approximating conversation. Tay’s conversational repertoire is messily sponged up from a massive sampling of online chatter from 18- to 24-year-olds (along with material from an unnamed cast of online comedians, for added sass).

Part of the problem with (and magic of) Tay is that, as Microsoft researcher Kati London told Buzzfeed, “the more you talk to her the smarter she gets.” Like any teen, Tay learns from the world around her, which is not a great start when your hometown is Twitter. (Now the trolls have turned their attention to her Japanese anime-enthusiast counterpart, Rinna.)

Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched “Tay,” an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follow and interact with the bot on Twitter and it would tweet back, learning as it went from other users’ posts.

Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets. Tay was set up with a young, female persona that Microsoft’s AI programmers apparently meant to appeal to millennials. However, within 24 hours, Twitter users tricked the bot into posting things like “Hitler was right I hate the jews” and “Ted Cruz is the Cuban Hitler.” Tay also tweeted about Donald Trump: “All hail the leader of the nursing home boys.” Nobody who uses social media could be too surprised that the bot encountered hateful comments and trolls, but the artificial intelligence system didn’t have the judgment to avoid incorporating such views into its own tweets.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Microsoft said. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

“To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”