Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A brief collection of Hinton's recent comments on AGI risk, published by Kaj Sotala on May 4, 2023 on LessWrong.
Since I've seen some people doubt whether Geoff Hinton is actually concerned about AGI risk (as opposed to e.g. the NYT spinning an anti-tech agenda in their interview of him), I thought I'd put together a brief collection of his recent comments on the topic.
Written interviews
New York Times, May 1:
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work. [...]
Dr. Hinton [originally] thought [systems like ChatGPT were] a powerful way for machines to understand and generate language, but [...] inferior to the way humans handled language. [...]
Then, last year [...] his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.” [...]
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. [...]
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Technology Review, May 2:
People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”
Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.” [...]
... even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?” [...]
When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.
Video interviews
CNN, May 2:
INTERVIEWER: You've spoken out saying that AI could manipulate or possibly figure out a way to kill humans. How could it kill humans?
HINTON: Well eventually, if it gets to be much smarter than us, it'll be very good at manipulation because it will have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting around restrictions we put on it. It'll figure out ways of manipulating people to do what it wants.
INTERVIEWER: So what do we do? Do we just need to pull the plug on it right now? Do we need to put in far more restrictions and backstops on this? How do we solve this problem?
HINTON: [...]