Stephen Hawking again warns against AI, says it could replace humans

04 Nov 2017


Renowned physicist Stephen Hawking has once again voiced his concerns about artificial intelligence, saying robots could eventually replace humans completely.

The physicist believes that someday someone will design an artificial intelligence (AI) that improves itself to the point where it can "outperform humans".

Earlier, in a December 2016 article in The Guardian, Hawking had warned, "the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining." (See: Humanity on brink as AI to decimate jobs: Stephen Hawking)

In an interview with Wired magazine, Professor Hawking said, "I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans."

Hawking also spoke about the need for a new space age to entice young people to get involved in science. He said there would be "serious consequences" if more people did not take an interest in space exploration.

The 75-year-old physicist said the scientific community should urgently work on a new space programme "with a view to eventually colonising suitable planets for human habitation".

"I believe we have reached the point of no return. Our earth is becoming too small for us, global population is increasing at an alarming rate and we are in danger of self-destructing," he said. (See: Colonise new planet or perish: Stephen Hawking gives us only 100 years on earth)

This is not the first time Prof Hawking has warned the world about artificial intelligence or about the need to colonise other planets. Hawking and technology czar Elon Musk, of Tesla and SpaceX fame, are among the most prominent voices on the dangers of AI and the need for humans to find other homes (See: Musk, 115 other experts warn against military use of AI).

Back in 2015, he also expressed fears that AI could grow so powerful it might end up killing humans unintentionally.

"The real risk with AI isn't malice but competence," he said. "A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Elon Musk, who has also expressed major concerns over AI, said he himself should be "on the list of people who should absolutely not be allowed to develop digital super-intelligence".
