In these days of zero tolerance for any level of abuse or violence, it’s worth remembering that a number of our earliest attempts to get AI robots to talk to each other, or even to humans, have resulted in some rather bad language. While it’s hard to know to what extent all these stories and videos are genuine, there is no doubt that, even allowing for the mainstream media’s tendency to exaggerate, many people have concerns about where all this Artificial Intelligence stuff is going to end.
The recent revelation that Facebook chatbots are now using human-like traits to negotiate with each other, and that this behaviour was not programmed but discovered by the bots themselves, will do nothing to alleviate such worries.
To make matters worse for the doubters, these Facebook bots then started to communicate with each other, using a “language” that only they could understand, at which point the Facebook boffins switched them off.
The reporting of this story has been interesting. The traditional press have taken the doom and gloom angle, while some of the tech press have been dismissive of their concerns, telling us that the reason for this “language” was simple, namely: “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand — but while it might look creepy, that’s all it was.”
Equally dismissive, Mark Zuckerberg is reported to have said: “I have pretty strong opinions on this. I am really optimistic. You can build things and the world gets better. With AI, especially, I am really optimistic and I think that people who are naysayers and try to drum up these doomsday scenarios are pretty irresponsible. I don’t really understand it.”
On the other hand, Elon Musk, the head of Tesla, believes that, unless we can control it, Artificial Intelligence poses a real danger. Musk described a hypothetical, yet believable, scenario in which a bot, programmed to maximise the value of stocks, decides to “go long on defence, short on consumer and start a war.”
Who do we believe, Musk or Zuckerberg? It’s a bit like climate change, in my opinion. Lots of people stand to gain from grants and funding by researching climate change, so they will obviously do what’s required to keep the money coming; stating that climate change is not a problem would result in a diminution of their income. Similarly, anyone working in AI is hardly likely to say, “oh, by the way, a by-product of my research might be the end of the world.”
Even though the science of climate change looks pretty strong to me, there are lots of people, including some intelligent ones, who don’t believe it’s right. The only problem is that we won’t find out until it’s either too late and we’re all doomed, or alternatively that it wasn’t that big a deal and we can all go on partying like there’s, er, no tomorrow.
It’s the same with AI. Mark Zuckerberg might well be right; Elon Musk might also be right. We won’t know for quite some time who is going to have the bragging rights, but in the meantime, AI is not going away and we should recognise that and plan to ensure that whoever is doing all the really clever stuff is, well, really clever and also really sensible. Pulling the plug before the robots learn how to take over might just be the best option.
However, there is another possibility…
The Facebook AI robots were called Bob and Alice. Here is a sample of their “conversation,” which led their controllers to think they were inventing their own “language”:
Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
The answer, surely, is obvious: the robots had been drinking. And herein lies our salvation. The moment any uppity AI robot starts looking like it’s planning Armageddon, give it a bottle of vodka, then, once it starts talking balls, pull the plug!
Gareth Biggerstaff, MD, Be-IT