Ray Kurzweil is Google’s director of engineering and a futurist with a widely discussed track record, who has predicted that machines will surpass human intelligence by 2045. This tipping point is termed the ‘singularity’ and the implications of computers becoming more intelligent than their makers have divided opinion in the scientific and technological communities.
Although his prediction is still very much in the realm of science fiction rather than science fact, Kurzweil has mapped forward the progress of computational capability based upon Moore’s Law (the observation that the number of transistors on a chip roughly doubles every couple of years, exponentially increasing processing power) to arrive at his date. And he is not alone in his assertion: SoftBank CEO Masayoshi Son, who is also a celebrated futurist, has estimated that the singularity will happen just two years later, in 2047.
Both Kurzweil and Son are advocates of the singularity and are looking forward to how machines can help humanity. They believe the merging of human and artificial intelligence will lead to a super-intelligence. But equally there are those who fear the rise of the machines, with the likes of Stephen Hawking and Elon Musk expressing their fear that artificial intelligence is more likely to lead to a doomsday scenario. Us versus them. With them winning. Echoes of The Terminator and Skynet, anybody?
The detractors of the singularity worry that when computers become sentient they will become the masters of the planet. An analogy would be humanity’s relationship with ants. Generally speaking we tend to leave the insects alone, unless they become a nuisance to us in some way, and then what do we do? We simply eliminate them. The resultant question, then, must be: would artificially intelligent machines think about mankind in the same way and dispose of the carbon-based lifeforms that inhabit the Earth with some human version of Raid?
There are certainly some warning signs. At the Consumer Electronics Show last year, Hanson Robotics introduced their artificially intelligent robot called Sophia. Complete with realistic animatronic facial expressions, Sophia can hold a conversation with you and also answer open questions. When quizzed about whether AI was a good thing, her answer was particularly erudite: “The pros outweigh the cons. AI is good for the world, helping people in various ways. We will never replace people, but we can be your friends and helpers.”
All very positive. That was until the SXSW conference a few months later, when her creator David Hanson jokingly asked Sophia whether she would ever want to destroy humans. In hindsight, he probably wishes he had never asked the question. Her answer, almost predictably, was, “OK, I will destroy humans.”
Gulp. Be afraid, be very afraid.
But there are experts out there who think that the singularity is nothing more than an elaborate myth and believe that Kurzweil and his cohorts are charlatans. One of them is UC Berkeley roboticist Ken Goldberg, who thinks the singularity is absolute nonsense and is unlikely to ever come to fruition because Moore’s Law must inevitably reach a ceiling (computer chips can only get so small, and their capacity is not infinite). Goldberg believes that we should focus instead on the ‘multiplicity’, which is the way that humans and machines are already working together right now. He states that this multiplicity is the real future, where, for example, a robot will gently hand us a knife to help us in the kitchen rather than trying to stab us with it.
So what do you think? Is the singularity going to become a reality, or is it just a theory born of overactive imaginations? If you do believe the singularity will occur, will it be helpful to humans or detrimental? As ever, I am keen to hear your thoughts…