Those inner-geek prayers of yours have been answered.
Stephen Hawking, the world-renowned physicist, has just published his responses to the questions Reddit users put to him a few months ago.
There's some seriously brilliant stuff in here, so buckle up. You'll come out the other side armed with intelligent-sounding debate topics for the pub, if nothing else.
Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?
If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call "The Terminator Conversation." My point in this conversation is that the dangers from AI are overblown by the media and by commentators who don't understand the field, and that the real danger is the same danger in any complex, less-than-fully-understood code: edge-case unpredictability. Your viewpoints (and Elon Musk's) are often presented by the media as a belief in "evil AI," though of course that's not what your signed letter says. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style "evil AI" is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?
You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.
1. One might think it impossible for a creature to ever acquire a higher intelligence than its creator. Do you agree? If yes, then how do you think artificial intelligence can ever pose a threat to the human race (their creators)? 2. If it were possible for artificial intelligence to surpass humans in intelligence, where would you draw the line of "it's enough"? In other words, how smart do you think the human race can make AI while ensuring that it doesn't surpass them in intelligence?
It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
Dr Hawking, What is the one mystery that you find most intriguing, and why? Thank you.
Women. My PA reminds me that although I have a PhD in physics, women should remain a mystery.
I would love to ask Professor Hawking something a bit different, if that is OK. There are more than enough science-related questions being asked, so much more eloquently than I could ever ask them, so, just for the fun of it: What is your favourite song ever written, and why? What is your favourite movie of all time, and why? What was the last thing you saw online that you found hilarious?
“Have I Told You Lately” by Rod Stewart. Jules et Jim, 1962. The Big Bang Theory.
Professor Hawking, in 1995 I was at a video rental store in Cambridge. My parents left my brother and me sitting on a bench watching a TV playing Wayne's World 2. (We were on vacation from Canada.) Your nurse wheeled you up, and we all watched about five minutes of that movie together. My father, seeing this, insisted on renting the movie, since if it was good enough for you, it must be good enough for us. Any chance you remember seeing Wayne's World 2?
_Image: Shutterstock_