Some of the chatter between the agents was provided by Facebook to Fast Co.
"Smart devices right now have the ability to communicate and although we think we can monitor them, we have no way of knowing".
The other issue, as Facebook admitted, was that there was no way of truly understanding any divergent computer language.
So while the bots did teach themselves to communicate in a way that didn't make sense to their human trainers, it's hardly the doomsday scenario so many seem to be implying.
"Agents will drift off understandable language and invent codewords for themselves", visiting research scientist Dhruv Batra told the site. The researcher likened this to the way humans create shorthands to convey information to one another.
The programmers had to alter the way the machines learned language to complete their negotiation training. "While the other agent could be a human, FAIR (Facebook AI Research) used a fixed supervised model that was trained to imitate humans", the researchers explained in their blog entry.
Update: There's a story circulating today about Facebook pulling the plug on an AI experiment, after its agents went rogue and started speaking in a private language. One tech site, for example, claims the system was shut down "before it evolves into Skynet". The Sun, going after a similar angle, quotes a United Kingdom robotics expert as saying: "anyone who thinks this is not risky has got their head in the sand".
Even though Facebook shut down the conversation between the bots, Bob and Alice, the researchers said the experiment showed progress toward "creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant", CBS noted. As with all kinds of trading, the two artificial intelligences were instructed to negotiate with one another with the goal of improving their communications. Each agent assigned a value to every item, with that value hidden from the other 'bot.
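The setup described above can be sketched as a simple toy game. This is an illustrative sketch only, not FAIR's actual code: the item pool, the value ranges, and the naive "give each item to whoever values it more" split are all assumptions made for the example.

```python
import random

def make_negotiation(seed=0):
    """Toy version of the setup: a shared pool of items, with each agent
    holding private (hidden) values for every item type. Illustrative only."""
    rng = random.Random(seed)
    pool = {"book": 2, "hat": 1, "ball": 3}  # hypothetical shared items to split
    values = {
        agent: {item: rng.randint(0, 5) for item in pool}  # private valuations
        for agent in ("alice", "bob")
    }
    return pool, values

def score(allocation, values, agent):
    """An agent's reward: the sum of its private values over items it received."""
    return sum(values[agent][item] * n for item, n in allocation[agent].items())

pool, values = make_negotiation()

# A naive split standing in for the learned negotiation: each item type
# simply goes to whichever agent privately values it more.
allocation = {"alice": {}, "bob": {}}
for item, count in pool.items():
    winner = max(("alice", "bob"), key=lambda a: values[a][item])
    loser = "bob" if winner == "alice" else "alice"
    allocation[winner][item] = count
    allocation[loser][item] = 0

print({agent: score(allocation, values, agent) for agent in ("alice", "bob")})
```

The point of the hidden valuations is that neither agent can directly read the other's reward, so any successful strategy has to be inferred through the dialogue itself, which is what the training optimized for.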
The researchers also found that the bots developed strong negotiating skills.
This behaviour was not programmed by the researchers "but was discovered by the bot as a method for trying to achieve its goals", they said. In their attempts to learn from each other, the bots began chatting back and forth in a derived shorthand - but while it might look creepy, that's all it was. This was actually news earlier this year, but it's making the rounds anew because Elon Musk gave us a fresh warning that AI is the biggest threat facing humanity today.