Ezra Klein: How should I feel about the idea these [artificial intelligences] will be privately owned, if they’re going to be this powerful?
Sam Altman: Strange. If we think about the major “big iron” engineering projects of the past – the ones where there were large geopolitical and certainly social consequences – just to pick two examples… we could talk about the Apollo project and the Manhattan Project… those really took the wealth of nations to do. No company like us was going to do that. But the cost of technology does its thing, companies and people get wealthier over time. You now do have a world where certainly the mega caps like Google can join this effort, but even much smaller organisations like OpenAI can get enough capital together – barely – to be able to be competitive here…
That’s weird. I have misgivings about that. Probably the non-technical thing I think most about is, let’s say that we do make the true AGI, like the one from sci-fi movies–
EK: the Artificial General Intelligence–
SA: Yeah. How do we want to think about how decisions are made there, how it’s governed, who gets to use it, what for, how the wealth that it creates is shared? If this is going to be one of those species-defining moments, it for sure should not be in the hands of a company – certainly not ours – but saying what we want the structure to be there, how we want to make decisions about it, what the equivalent of our constitution should be, that’s new ground for us and we’re trying to figure it out now.
EK: OpenAI begins as a non-profit. It becomes a for-profit in part because it needs to raise money and resources… One of the worries I have about this is that even if people want to be very cautious about its incentives, just in order to do it you have to submit to those incentives. Just in order to raise the money there has to be a business model, a backer… and I wondered… was that a missed opportunity for the public sector? Should it be that the public sector is spending the money to build this, either by funding groups like yours or a consortium of academic groups or…
SA: Little-known fact: we tried to get the public sector to fund us, before we went to the capped-profit model – there was no interest. But… I think if the country was working in a different way, I would say a better way, this would be a public sector project. But it’s not, and here we are.
And I think it’s important that there’s an effort like ours doing this – one that, even if it’s not an official American-flag effort, will represent some of the values that we all hold dear. That’s better than a lot of the other ways I could imagine this project going, or someone else doing it.
And one of the incentives that we were very nervous about was the incentive for unlimited profit, where more is always better. And I think you can see ways that’s gone wrong with profit or attention or usage or whatever, where if you have well-meaning people in a room but they’re trying to make a metric go up and to the right, some weird stuff can happen. And I think with these very powerful general-purpose AI systems in particular, you do not want an incentive to maximise profit indefinitely. So by putting this voluntary cap on ourselves – above which none of the employees or investors get any more money, and which I think will be somewhat trivial to hit if you do have a powerful AI – I think we avoid the worst of the incentives, or at least the ones that we were most worried about.
The Ezra Klein Show – Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power
It’s an interesting interview. Recommended.