Sam Altman, known as the P.T. Barnum of artificial intelligence, has a message for those who care about the technology he’s spent his career promoting: Don’t worry, the tech geeks are working on it.
Let’s back up a bit.
Altman, the 39-year-old venture capitalist and CEO of OpenAI, spoke with journalist Andrew Ross Sorkin at the New York Times DealBook Summit on Wednesday. As affable as ever, Altman almost made you forget he’s a billionaire doomsayer who has repeatedly warned about the risks of artificial intelligence. At one point, Sorkin asked, “Do you believe that governments or anyone else can figure out how to avoid” the existential threat posed by superintelligent AI systems?
Cue the shy boy’s deflection.
“I’m sure the researchers will figure out how to avoid that,” Altman replied. “I think the smartest people in the world will work on a range of technical problems. You know, I’m a little overly optimistic by nature, but I think they’ll figure it out.”
He went on to suggest that the AI itself might be so smart that it will figure out how to control itself, but he didn’t elaborate.
“We have this magic—” Altman said, before correcting himself: “Not magic. We have this incredible science called deep learning that can help us solve these very hard problems.”
Ah, yes. ExxonMobil will solve the climate crisis…
Look, it’s hard not to be drawn to Altman, who did not respond to requests for comment. He keeps his cool, knowing that even if his technology disrupts the global economy, he’ll be safe in his bunker on the California coast. (“I have guns, gold, potassium iodide, antibiotics, batteries, water, IDF gas masks, and a big piece of land in Big Sur that I can fly to,” he said.) But for the rest of us, it would be nice to hear Altman or any of his fellow AI boosters explain what they mean when they say “we’ll figure it out.”
Even AI researchers admit they still don’t understand exactly how the technology works. A report commissioned by the U.S. State Department described AI systems as essentially black boxes that pose an “extinction-level threat” to humanity.
Even if researchers can sort out the technical issues and solve what they call the “alignment problem” — making sure AI models don’t become monster robots that destroy the world — Altman acknowledged that there will still be problems that people and governments will have to solve.
At the DealBook Summit, Altman again put the onus of regulating the technology on some imaginary international body made up of rational adults who don’t want to kill each other. He told Sorkin that “even if we can make this [superintelligence model] technically safe, which I think we will find a way to do, we have to have faith in our governments…there has to be global coordination…I think we’ll rise to the challenge, but it seems challenging.”
That statement rests on a lot of assumptions, and it reflects a myopic understanding of how policymaking and global coordination actually work: which is to say, slowly, inefficiently, and often not at all.
This kind of naivety seems baked into Silicon Valley’s elite, who are keen to stuff AI into every device we use despite the technology’s flaws. That’s not to say AI isn’t useful! It is being used to do all sorts of cool things, like helping people with disabilities and older adults, as my colleague Clare Duffy has reported. Some AI models are doing exciting work in biochemistry (which is frankly beyond my comprehension, but I trust the honest scientists who won the Nobel Prize for that work earlier this year).