Dr. Geoffrey Hinton first spoke about his concerns with the New York Times, saying—as a growing number of whistleblowers have also warned—it’s difficult to see “how you can prevent the bad actors from using [A.I.] for bad things.” Hinton, 75, has an extensive academic background, notably including early and influential work that helped lay the groundwork for the A.I. moment we now find ourselves in.
Of particular note among Hinton’s comments is his assertion that he no longer believes we are decades away from such tech eclipsing human intelligence. Speaking with CNN in a separate interview, he elaborated further.
“If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us,” Hinton said when asked about A.I. harming or even killing people. “There are very few examples of a more intelligent thing being controlled by a less intelligent thing, and it knows how to program, so it will figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.”
As for a solution, Hinton has cautiously expressed that it’s “not clear” that humans can solve what is now a “really serious” problem in need of great consideration. However, on an existential level, Hinton has just as cautiously expressed some hope that world leaders can at least agree this is “bad for all of us.”
When reached for comment on Wednesday, a rep for Google shared the following statement from Chief Scientist Jeff Dean:
“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!”
Hinton’s warning is in line with concerns raised in recent months by others in tech, as well as by artists and writers who have rightfully made similar points about the core problem of blindly trusting that corporations and others in power will simply do the right thing. As history has repeatedly shown, such entities almost never do.
As I pointed out in Complex’s recent coverage of the first Writers Guild of America (WGA) strike in 15 years, this tech—in theory—could be used in a very limited and responsible capacity. But these types of uses are likely of little interest to CEOs who are overfed with buzz and whose day-to-day duties consist almost exclusively of canning human beings and pointing at line graphs.
For recent evidence of this, look no further than how studios responded to the A.I. portion of the WGA’s proposals. In short, they rejected the proposal, which, among other things, called for banning A.I. from writing or rewriting literary material.