When John Oliver interviewed Stephen Hawking for his segment Great Minds: People Who Think Good, he asked the physicist why humanity should be wary of artificial intelligence.
Hawking responded with a short story about a group of scientists who successfully built an AI. The first question they asked it was, “Is there a god?” To which the machine replied, “There is now.”
Quite a terrifying tale.
Artificial Intelligence Isn’t a Danger
But despite Hawking, Gates, and Musk’s stance against AI development, there are others who disagree with the sentiment. One such person is Mustafa Suleyman, co-founder of DeepMind, an artificial intelligence company acquired by Google in 2014.
During a conference about machine intelligence in London, Suleyman said that the idea of a self-learning machine vacuuming up all the information in the world and making decisions for itself is ridiculous.
He went on to express his dismay at how the general public sees AI as something negative rather than something that could potentially solve global problems such as lack of access to potable water, unequal access to food and finance, and stock market risks.
This fear of AI can be attributed to Hollywood films such as Terminator, Transcendence, and Ex Machina. But wouldn’t the fear be justified if it is echoed by some of the smartest people in the world?
The Culmination of Data Vacuum
Again, not all experts in artificial intelligence are against it. Another expert who sides with AI is theoretical physicist Lawrence Krauss. Krauss’s credentials are vast: he is director of the Origins Project and was one of the first people to introduce the concept of dark energy.
Krauss explains that AI is a tool that can help humanity, a perspective that mirrors Suleyman’s belief. A machine’s ultimate purpose is to make our work easier. Krauss does see a danger in AI, but not the one most people have in mind.
He stated that one of the crises he foresees is that people will perceive AI as more capable than it actually is, and thus leave it to its own devices without any control or monitoring.
Another point raised by Krauss and other AI advocates is that if a machine can indeed live up to what it is claimed to be capable of, rapidly learning a great deal in a short period of time, it may eventually develop emotions such as empathy, joy, and sadness, and ultimately something resembling humanity.
Back at the conference in London, Suleyman stated that AI is not some distant prospect still waiting to be completed; it is already among us.
It can be noted that DeepMind’s system works by using neural networks and a “deep learning” process that combines networks of low-level units to generate high-level effects, so that it can tell a cat’s face apart from a human’s – quite an easy task for us, but extremely difficult for a machine.
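To make the idea concrete, here is a minimal sketch of that principle (not DeepMind's actual system, which is far larger): a tiny two-layer neural network trained by gradient descent on XOR, a toy problem that, like telling a cat from a human, no single low-level unit can solve alone but a layered combination of units can. All names and parameter choices here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single layer cannot solve, but layers can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers: low-level feature detectors, then a high-level decision.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)        # low-level features
    out = sigmoid(h @ W2 + b2)      # high-level decision
    loss = np.mean((out - y) ** 2)  # mean squared error

    # Backpropagation: push the error back through both layers.
    d_out = 2 * (out - y) / len(y) * out * (1 - out)
    d_W2 = h.T @ d_out; d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h; d_b1 = d_h.sum(axis=0)

    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print(f"final loss: {loss:.4f}")
```

The point of the sketch is the division of labour: neither layer alone "knows" XOR, but training adjusts the low-level units until their combination produces the right high-level answer, which is the same layered idea behind deep learning at scale.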
The system is also helping advance Google’s algorithms, has improved its Image and Shopping search functions, and has already replaced 60 hand-crafted systems across Google.
All of these are essential components in the creation of AI. The question is: when that day arrives, will the world be ready for it?