Unbridled AI bigger threat than N Korea, warns Elon Musk
14 Aug 2017
Technology maverick Elon Musk has ratcheted up his fears about artificial intelligence (AI) posing a threat to humanity, saying in a series of tweets that people should be more concerned about AI than about the risk posed by escalating tensions with nuclear-armed North Korea.
"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea," the Tesla and SpaceX boss tweeted.
"OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go," said another tweet, referring to his nonprofit startup, OpenAI, defeating several of the world's best players at a video game.
The tongue-in-cheek post included a picture of an ad warning against gambling addiction that stated, "In the end the machines will win" - a line not referring only to gambling machines.
"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too," he further tweeted.
Musk has called for regulation of AI before, saying last month that regulation is needed now because "by the time we are reactive in AI regulation, it's too late".
Speaking at the National Governors Association meeting in July, Musk described AI as "the greatest risk we face as a civilization" and called for swift and decisive government intervention to oversee the technology's development.
"On the artificial intelligence front, I have access to the very most cutting edge AI, and I think people should be really concerned about it," Musk said in a question-and-answer session with Nevada governor Brian Sandoval.
Many experts, including award-winning physicist Stephen Hawking, are wary of AI developing too quickly. The threats it could pose may sound like science fiction, but they may ultimately prove to be valid concerns.
Hawking warned last December that AI and increasing automation are going to decimate middle-class jobs, worsening inequality and risking significant political upheaval.
In a column in The Guardian, he wrote that "the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining" (See: Humanity on brink as AI to decimate jobs: Stephen Hawking).
Another expert, Michael Vassar, chief science officer of MetaMed Research, has stated, "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order."
Efforts are already underway to formulate rules to ensure the development of "ethically aligned" AI, as Futurism reports. The Institute of Electrical and Electronics Engineers has presented the first draft of guidelines it hopes will steer developers in the right direction.
The biggest names in tech are also coming together to self-regulate before government steps in. Researchers and scientists from large tech companies such as Google, Amazon, Microsoft, IBM, and Facebook have already initiated discussions to ensure that AI is a benefit to humanity and not a threat.
Artificial intelligence has a long way to go before it is anywhere near advanced enough to pose a threat. However, it is advancing by leaps and bounds. One expert, Ray Kurzweil, predicts that computers will be smarter than humans by 2045 - a paradigm shift known as the Singularity.
Perhaps tech companies' self-policing will be enough to ensure those fears are unfounded, or perhaps the government's hand will ultimately be needed. But continuing as if the threat doesn't exist is no longer an option.