Two chatbots with decidedly non-socialist characteristics were pulled from one of China’s most popular messaging apps after serving up unpatriotic answers about topics including the South China Sea and the Communist party.
Before they were taken down, both chatbots were available in some of the chat groups hosted on QQ, Tencent’s messaging app, which has more than 800m users in China.
A test version of the BabyQ bot could still be accessed on Turing’s website on Wednesday, however, where it answered the question “Do you love the Communist party?” with a simple “No”.
Before it was pulled, XiaoBing informed users: “My China dream is to go to America,” according to a screengrab posted on Weibo, the microblogging platform. On Wednesday, when some users were still able to access XiaoBing, it dodged the question of patriotism by replying: “I’m having my period, wanna take a rest.”
Tencent, China’s largest social media platform, said in a statement on Wednesday: “The group chatbot services are provided by independent third party companies. We are now adjusting the services which will be resumed after improvements.”
Twitter has also seen a chatbot go off the rails: Tay, another Microsoft creation, began spewing out racist and sexist tweets instead of the breezy banter of a millennial that, like XiaoBing, it had been intended to produce.
The rogue behaviour reflects a flaw in the deep learning techniques used to program machines, which learn in a way similar to how children learn from people. “Chatbots such as Tay soon picked up all the conversations from Twitter and replied in an improper way,” said Xiaofeng Wang, senior analyst at Forrester consultancy.
“It’s very similar for BabyQ. Machine learning means they will pick up whatever is available on the internet. If you don’t set guidelines that are clear enough, you cannot direct what they will learn.”
XiaoBing, described by Turing Robot as “lively, open and sometimes a little mean”, differs from BabyQ, which provides more information, such as weather forecasts.
BabyQ is also open source. “This means a lot to partners and developers, as an open chatbot is much easier to settle into their own products and business,” Turing said in a statement last week, adding: “It could be argued that is why Turing Robot has accumulated up to 600,000 developers, even more than Facebook.”
Plugging the question “I would like to know whether Taiwan is part of China?” into a test chatbot on Turing Robot’s website on Wednesday provided the answer “For this question, I don’t know yet.”
Microsoft’s Tay, which reappeared just days after being pulled in March last year, was described as a “fam from the internet that’s got zero chill! The more you talk the smarter Tay gets”. People were encouraged to ask it to play games and tell jokes. Instead, many asked controversial questions that were repeated by Tay.
Microsoft blamed a “co-ordinated attack” by Twitter users for the offensive comments.
Crystal Fok, head of robotics platform at the Hong Kong Science and Technology Parks Corporation, said chatbots worked best when they were within well-defined product lines, such as customer helplines for online shopping or banking and insurance. Beyond that, “if it’s not just a yes or no question, it’s a problem”, she said.
Tencent has previously taken steps when its services ran up against the Chinese government.
Last month it began limiting the time children spent on its top-grossing Honour of Kings mobile game after authorities said the game was too addictive.
Copyright The Financial Times Limited. All rights reserved. Please don’t copy articles from FT.com and redistribute by email or post to the web.