Sutskever Refuses to Answer the Question: How Will AGI Be Built? He Has No Answer
“Another thing that makes this discussion difficult is that we are talking about systems that don’t exist, that we don’t know how to build.” (1:06:10)
At some points in his recent interview, Ilya Sutskever seems to believe no one knows how to build intelligent computers, but for most of the discussion he speaks as if it’s proven they will be built (and soon), though he doesn’t give us the proof.
There’s a common template to these conversations with leading figures in the AI world. They spend little time on important matters—like: How is it going to be built?—and a lot of time speculating on what a future with AGI will look like.
The purpose of this article is not to dwell on the evidence that no one knows how to build AGI (e.g. a lot of hype rests on benchmark performance, yet recent reviews led by Oxford and the European Union’s research branch find little evidence that benchmark success correlates with useful applications).
The latest examples are recent reports of scientists successfully using AI as something like a research assistant to speed up their work. GPT-5 even produced useful novel proposals by finding correlations in its dataset across diverse domains that humans hadn’t thought to connect. The authors of these papers often emphasise that human experts remain crucial. As things stand, the reports by no means point to anything like the “intelligence” predicted by many people in the AI world.
And Sutskever at times seems to agree we are in the dark when it comes to how to get to AGI (e.g. see the quote at the start of this article). In the interview he announces that we are back in an “age of research” (21:19)—we need more research to figure out how to make computers intelligent (in other words, we don’t have the research or understanding now).
What are we to make of the lack of evidence, and of Sutskever occasionally admitting we don’t know how to get to AGI, while at the same time predicting we will have AI models that learn like humans and become superhuman in “five to twenty” years (1:22:12)?
You want to ask him for details. What’s his evidence that “Superintelligence is within reach”? (That’s the first line on his company’s website.) What, for example, are the specific engineering principles he’s thinking about?
And in the interview Sutskever is in fact asked how we might get on track to building AGI.
As he and Dwarkesh are discussing learning—which they seem to agree is important for machine intelligence—and their feeling that computers can’t do it properly, Dwarkesh asks:
“What is the ML analogy that could realize something like this?” (30:12)
That means: how could we build computers that learn? How are we going to get there?
Sutskever doesn’t really answer the question. His first response is:
“One of the things that you’ve been asking about is how can the teenage driver self-correct and learn from their experience without an external teacher? The answer is that they have their value function. They have a general sense … they start to drive, and they already have a sense of how they’re driving immediately.” (30:24)
For those who don’t know: the value function is what tells you, at a relatively early step in an undertaking, whether you’re on the right track, as opposed to waiting until the task is complete to find out whether you succeeded or not.
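To make that concrete, here is a minimal toy sketch in Python (my own illustration, not anything Sutskever describes in the interview; all names and numbers are invented for the example): an agent moves toward a goal, a crude hand-made value estimate gives it a rough signal after every step, while the terminal reward only arrives once the whole task is over.

```python
# Toy illustration of per-step feedback vs. end-of-task feedback.
# Everything here is invented for the example; it is not a real learning system.

GOAL = 10  # the position the agent is trying to reach

def terminal_reward(position: int) -> float:
    """Feedback only at the end of the task: success or failure."""
    return 1.0 if position == GOAL else 0.0

def value_estimate(position: int) -> float:
    """A crude stand-in for a value function: a mid-task guess at how
    promising the current state is (here, simply proximity to the goal)."""
    return 1.0 - abs(GOAL - position) / GOAL

position = 0
for move in (3, 4, -2, 5):  # a fixed sequence of moves, for illustration
    position += move
    # The value estimate gives a rough signal after every single step:
    print(f"after moving {move:+d}, estimated value = {value_estimate(position):.2f}")

# The terminal reward says something only once, when the task is over:
print(f"terminal reward = {terminal_reward(position)}")
```

The contrast is only that a value function gives feedback during the task rather than at the end; nothing in the sketch says how a machine would acquire such a function on its own, which is precisely the question Sutskever is being asked.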
But responding to the question of how machines will come to learn like humans by proposing “ongoing feedback” leaves many important questions unanswered.
And the interviewer too seems to find Sutskever’s answer unsatisfying, as he poses the question again—how are we going to build computers that learn:
“It seems like humans have some solution, but I’m curious about how they are doing it and why is it so hard? How do we need to reconceptualize the way we’re training models to make something like this possible?” (31:17)
This time, Sutskever explicitly refuses to answer:
“You know, that is a great question to ask, and it’s a question I have a lot of opinions about. But unfortunately, we live in a world where not all machine learning ideas are discussed freely, and this is one of them.” (31:27)
Here, on the essential point—Will computers be able to learn, and what’s the engineering that might get us there?—Sutskever’s response is that he cannot tell us his thoughts (presumably because they are ideas he needs to keep secret for advantage over competitors).
Most of the discussion in the remainder of the interview is premised on the assumption that machine superintelligence is going to be achieved, though no evidence that it will be is presented.
Perhaps Sutskever has legitimate reasons for hiding his thoughts. But no one has presented a theory of how to get to intelligent computers. And if people in the industry say: I can’t tell you my thoughts on the matter, but superintelligence is within reach—what is the public supposed to make of that?
While Sutskever won’t reveal his ideas and as a result gives no indication of how we could get to AGI, he spends a lot of the interview discussing what impacts that hypothetical technology might have on the world.
This is something you often encounter when you listen to leading figures in the AI world talk about the prospect of intelligent computers. They skip over the essential questions—how will AGI actually be built, what are the engineering principles, what are intelligence, learning, decision-making, thinking, innovation, etc.?—and they talk about what some future world in which AGI exists will look like.
This change of topic perhaps propels many of the myths around AI. When many people taken as authoritative (e.g. researchers like Sutskever and Hinton, or businessmen like Altman and Musk) hold conversations about what this future world with AGI will look like, it’s easy to get caught up with them and forget that they have presented no theory of how intelligent machines might come about.
Another example: Sutskever speaks of producing a computer (50:15) “like a super intelligent 15 year old” that “learns”, but he again gives no description of what the engineering behind that “learning” might be.
What are we to make of this repeated failure to address fundamental matters of machine intelligence, while a lot of people seem to believe machines certainly will be intelligent (and probably very soon)?
It wouldn’t be the first time in history a society believed something that turned out to be untrue.
.
.
.
Take a look at my other articles:
On Eliezer Yudkowsky’s unsubstantiated arguments
On AI 2027’s misrepresentation of scientific reports
On what words mean to computers
.
Twitter/X: x.com/OscarMDavies
