Sam Altman in conversation with John Collison
Sam Altman, CEO of OpenAI, talks with John Collison, co-founder and president of Stripe, and shares his optimism about the future of AI. He covers his favorite uses of GPT, what OpenAI does differently from most startups, why the high stakes of AI drive collaboration across the industry, and why he himself would not follow the advice he used to give at Y Combinator today.
PATRICK COLLISON: Hi, Sam. Thanks very much for joining us today.
SAM ALTMAN: Hi, Patrick. Glad to be here, but I thought I was supposed to be talking about John.
PATRICK COLLISON: Hmm, I think John has a bug.
JOHN COLLISON: Get out of here, Patrick. You had your own talk this morning. Sorry, Sam. Patrick’s always deepfaking his way into my talks.
Anyway, we should probably step out of these little circles and get going.
(Music playing)
JOHN COLLISON: All righty. I am delighted to be joined here, properly in the flesh, by Sam Altman. And Sam is at this stage basically part of Silicon Valley lore. You founded Loopt when you were 19?
SAM ALTMAN: I think so.
JOHN COLLISON: Eighteen? Yeah. And then you were on stage for the original iPhone launch as, you know, one of the key partners. You sold Loopt to Green Dot. Sam ran Y Combinator for many, many years and most recently founded OpenAI, which powered many of the demos, the GPT-powered natural language queries that you saw earlier in the keynote.
So, Sam, thank you.
SAM ALTMAN: Thanks for having me.
JOHN COLLISON: You tweeted in 2014, long before you started OpenAI (I saw this reference recently): "AI will either be the best or worst thing ever, highest volatility of any tech I can think of." How are we doing?
SAM ALTMAN: I think we’re going to be great.
JOHN COLLISON: Okay.
SAM ALTMAN: I don't remember tweeting that, but I am much more confident than I felt at that time about our ability to manage this. I think we are on such a different technology tree than my default would have been, given the way things were going then. There was this idea that we were going to have, like, RL agents that learn to play increasingly complex games, where they're really just trying to beat humans, and we were going to keep letting that go. That didn't seem like a great trajectory to me.
But I think what we’re doing now, where we have systems that are really optimized to help us and work in a way that we can sort of see what they’re doing step by step, even if we can’t see exactly what’s happening inside the neural network, I feel way better.
JOHN COLLISON: Oh, okay. Because you have been kind of an AI doomer in the past, saying this stuff is really dangerous. So it sounds like you're meaningfully less so now?
SAM ALTMAN: Yeah. Well, I still think we have to treat this with, like, extreme seriousness and that there is legitimate existential risk here, but I think through the hard work of people at OpenAI and other AGI labs, we’re making significant progress and still have a very long way to go towards, like, a trajectory we can all be pretty happy about.
JOHN COLLISON: There’s probably a set of people here who follow the Internet very closely and have read a lot of LessWrong posts and, you know, keep up with what, you know, Eliezer is saying and everything and get why one might be worried about AI.
And then I think there's probably a set of people here who just, you know, have real lives, and they look at the progress in AI, and they're, like, that's cool, and, you know, autocorrect still doesn't work that well. And it certainly doesn't seem like the AI in my autocorrect is going to kill me anytime soon. If people are going to worry about AI, what should they be worried about?
SAM ALTMAN: How many people are really worried about AI?
JOHN COLLISON: A decent number.
SAM ALTMAN: Not that much. That’s great.
SAM ALTMAN: Look, at some level I fundamentally agree with you. This is just, like, software; right? Other software has deterministic parts, nondeterministic parts. It's, you know, fancy autocorrect. So we could take that view. But look at it another way: truly, no tricks, no parlor tricks, we have fundamentally figured out how to create systems that can learn, and that is something whose importance I think is difficult to overstate.
And when it moves from fancy autocorrect to something that is much more complex than that, it may not be this super clear, super bright line; it may be a lot of gray area. But as these systems are capable of doing more and more sophisticated tasks and autonomously figuring out increasingly complex problems, that certainly does shift the way the world works. At a minimum, things just go way faster in the world. Like, we have these AIs that are now kind of part of our society. And, you know, that can go a lot of different ways, but it does mean that what any individual is capable of doing and the speed at which the world evolves are going to be very different.
JOHN COLLISON: Maybe a good analogy: I heard one take on the recent financial turbulence and bank runs, someone saying that bank runs are just different in the age of Twitter.
SAM ALTMAN: Yeah.
JOHN COLLISON: When you have much more virality in the -- kind of the information cascades.
SAM ALTMAN: Yeah.
JOHN COLLISON: Maybe that’s a good example of, like, there isn’t anything inherently dangerous about, you know, a bunch of tweets, but it does cause knock-on effects to reckon with.
SAM ALTMAN: The previous financial crisis, the one I think everybody was probably calibrated for, was pretty much pre-iPhone and pre-Twitter. You know, there were iPhones, but not everyone had them, and not everyone used everything mobile all the time. So that was an example of where people were totally miscalibrated because of two important technological shifts, and both of those, I would say, will be smaller and less revolutionary than what AI is going to be like.
JOHN COLLISON: Yep. So you guys kind of have two tigers by the tail now with GPT-4, or the GPTs, the
SAM ALTMAN: Yeah.
JOHN COLLISON: series of them and ChatGPT as well. And everyone is saying ChatGPT is the fastest growing consumer product in history. Like what’s the superlative for ChatGPT?
SAM ALTMAN: I mean, I don't know exactly what those things are, but I will tell you, as John mentioned, I used to run Y Combinator, and people would ask me for advice. And if you had told me as a startup that you were going to release a product that had no viral loop, no sharing features, no social anything, no network effect, like, nothing, no reason that you using it should make anyone else want to sign up for it, and, also, that you weren't going to have the ability to use it without registering in the first place, I would have said you are in bad shape. And I feel now like I've given bad advice for a long time.
JOHN COLLISON: Well, ChatGPT is viral, just not within the product.
SAM ALTMAN: Right.
JOHN COLLISON: People post tweet screenshots. People, you know, are
SAM ALTMAN: Right. But, I mean, I would have said in a vacuum that that kind of viral loop is, like, too difficult. You know, it worked for Wordle, I guess.
JOHN COLLISON: Well, just so you know, don’t listen too much—
SAM ALTMAN: But not for that long.
JOHN COLLISON: Don’t listen too much to investors.
SAM ALTMAN: Clearly.
JOHN COLLISON: for product advice. I feel like we knew that already. What is your favorite... like, again, we had this in the keynote, and we're very excited about just natural language queries. Like, SQL has always been the interface.
SAM ALTMAN: Yeah.
JOHN COLLISON: And so human language is a much nicer interface. We've been using it a lot inside Stripe for query optimization, and it works really well. What are your favorite GPT use cases? Because you must have the best catalog of anyone.
SAM ALTMAN: You touched on this. But before I talk about specific use cases, my favorite thing of all is that we have finally, and I think legitimately, gotten to a new computer interface. There has been this arc for a long time of computers getting easier and more natural to use. So you're all holding up iPhones recording this. Those are, like, way easier than a computer. You get to just use your hands instead of having to use a mouse. It's a little bit more direct.
But language is even more natural and even simpler, and we’re used to conveying very complex concepts to each other via language, and then things just happen and that’s it. So the fact that we now have this new way to interact with computers and just sort of new idea of software I think is going to go super far, and one of the surprises will be just how many places that turns out to be the right way to do it.
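(As a concrete illustration of the natural-language-query pattern John mentioned: a minimal sketch of turning a question into SQL with a GPT model. It assumes the pre-1.0 openai Python client with OPENAI_API_KEY set in the environment; the charges schema is a made-up example, not Stripe's actual implementation.)

```python
# Minimal sketch: translate a natural-language question into SQL.
# The `charges` schema below is illustrative, not Stripe's.
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

SCHEMA = """
CREATE TABLE charges (
    id TEXT PRIMARY KEY,
    amount INTEGER,     -- in cents
    currency TEXT,      -- e.g. 'usd', 'eur'
    created TIMESTAMP,
    status TEXT         -- 'succeeded', 'failed', ...
);
"""

def question_to_sql(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Translate the user's question into one SQL query "
                        "against this schema. Return only SQL.\n" + SCHEMA},
            {"role": "user", "content": question},
        ],
        temperature=0,  # keep query generation as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

print(question_to_sql("Total succeeded volume in EUR over the last 7 days?"))
```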
JOHN COLLISON: I think kind of a funny restatement of that is: Do you remember, this was during your Y Combinator days, so I presume you saw this a lot, there was this hype cycle in, I want to say, 2016 or 2017 in Silicon Valley around conversational bots. And people thought conversational bots were the be-all and end-all. You would not go on to the 1-800-Flowers.com website. You would instead be interfacing with a flower bot through a natural language interface, and that's how you would buy flowers. I remember there being some demo of that at some point. And it always seemed a bit off to me because the bots were so clunky.
SAM ALTMAN: So dumb, yeah.
JOHN COLLISON: Whereas now we just maybe need to bring back that vision because maybe it was exactly right.
SAM ALTMAN: It's definitely happening now. I think, like many other tech hype cycles, it was right in terms of what people wanted; the technology just couldn't deliver it.
JOHN COLLISON: Yeah.
SAM ALTMAN: And to the degree that technology can deliver it, I still think it’s a way people want to use software.
JOHN COLLISON: Yeah.
SAM ALTMAN: And we’re seeing that. You asked about, like, favorite use cases. We’re seeing that from, like, how people want to learn things and these new sort of, like, AI tutors or medical advisors to just how they want to control a computer. And, yeah, so I think that the user desire was right. The technology just wasn’t there.
JOHN COLLISON: Yeah. One of the ways we've conceived of this inside Stripe is: anywhere someone is doing manual work, or working on a series of tasks, having a team of research assistants with them, cueing up all of the relevant data, having everything they need at their fingertips, having done the mise en place to get things ready for them, is probably just a huge productivity boost.
SAM ALTMAN: Yeah.
JOHN COLLISON: So that’s one of the ways we’re looking at it.
Okay. Let's talk a bit about how OpenAI actually works. How do you produce things like... well, actually, before I get to that, what's next? Can you give us the inside scoop? You know, you have GPT-4, ChatGPT.
SAM ALTMAN: Probably five. I mean maybe.
JOHN COLLISON: Okay. That’s
JOHN COLLISON: But, I mean, or 4.5 --
SAM ALTMAN: Yeah.
JOHN COLLISON: -- judging based on your numbering system. Anything else we should be excited about? Not, obviously, the new products, but just directionally, like, there’s a group of people here who are
SAM ALTMAN: Yeah.
JOHN COLLISON: very excited about technology developments, so let’s prime them.
SAM ALTMAN: Multimodality is definitely something we’re very excited about, so you want these
JOHN COLLISON: Explain that to the non-AI.
SAM ALTMAN: Yeah. You want a system that can not only interact with text but can do text and audio and images and video and, you know, who knows what else.
Obviously, you want... actually, I think one of the things that people most want is just better reliability. If you could have the best one out of 10,000 responses to every GPT-4 query, you'd probably be pretty happy most of the time. There's a lot of intelligence in there; it's just not very evenly distributed to the average query. So if we can make that much better, if we can make the system way, way more reliable, that's a big deal.
And then eventually you want a system that can help do a better job of generating new knowledge, but that's going to take a little while.
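(To make the "best one out of 10,000 responses" idea concrete: a minimal best-of-n sampling sketch, again assuming the pre-1.0 openai Python client. The score_answer heuristic is a hypothetical placeholder; a real system would score candidates with a reward model, a verifier prompt, or task-specific checks.)

```python
# Minimal best-of-n sketch: sample several candidate answers, keep the
# one the scorer rates highest.
import openai  # pre-1.0 client

def score_answer(question: str, answer: str) -> float:
    # Hypothetical placeholder. In practice: a reward model, a verifier
    # prompt, unit tests, fact lookups, etc.
    return float(len(answer))

def best_of_n(question: str, n: int = 5) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        n=n,              # ask the API for n independent completions
        temperature=0.8,  # some diversity between candidates
    )
    candidates = [choice["message"]["content"] for choice in response["choices"]]
    return max(candidates, key=lambda a: score_answer(question, a))

print(best_of_n("Explain idempotency keys in one paragraph."))
```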
JOHN COLLISON: The reliability point is a good one. So I notice people in AI love using this term "hallucinate." And in general, people in AI love making up words for stuff. And so it's "inference time" rather than how long it takes to run, and it's "hallucinate" rather than just making shit up.
SAM ALTMAN: Yeah, that’s a weird term.
JOHN COLLISON: I know, sometimes it sounds like... anyway. But this is one of the main problems, and certainly in, say, our use cases, we do need the things to be true.
SAM ALTMAN: Yeah.
JOHN COLLISON: And, you know, this has historically been an area of mixed performance, sort of the Will's mustache of GPT performance. So it seems like we're on a path to fixing that?
SAM ALTMAN: We're on a path to improving it a lot. We have been improving it every week. But one of the things that I think is just deeply true about how people use this is that humans are fairly forgiving of other humans making mistakes, and they have extremely little forgiveness for computers making mistakes. So the threshold for what it means to solve this is not what it would take for a human colleague; it's much, much higher. And so it is going to take a while, and I think it'll be a very high bar.
JOHN COLLISON: GPT already makes up far less stuff than your average dinner party guest?
SAM ALTMAN: For sure. But we’ve got to go much further.
JOHN COLLISON: And you said it's improving week over week. I'm actually curious: how do you guys operationally track model performance inside of OpenAI? Like, what is the
SAM ALTMAN: Yeah. We have a suite of evals. And every time we do a different fine-tune or make a new model, we're looking at it constantly and asking how it's doing on this whole stack of stuff.
JOHN COLLISON: But is it kind of like the CPI basket, where you have to constantly keep changing what’s in the eval?
SAM ALTMAN: You just keep making them harder.
JOHN COLLISON: Yeah.
SAM ALTMAN: You know, we've got a ways to go on any kind of robustness—
JOHN COLLISON: Yeah, yeah.
SAM ALTMAN: —eval.
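(Mechanically, an eval suite of the kind Sam describes can be as simple as a fixed set of prompts with expected answers, re-run against every new model or fine-tune to catch regressions; OpenAI's open-source Evals framework is the production version of the idea. The sketch below is illustrative, not OpenAI's actual harness, and assumes the pre-1.0 openai Python client.)

```python
# Minimal eval-harness sketch: grade each new model against a fixed
# test set. The cases and the substring check are illustrative.
import openai  # pre-1.0 client

EVAL_SET = [
    {"prompt": "What is 17 * 24?", "expected": "408"},
    {"prompt": "What is the capital of Australia?", "expected": "Canberra"},
]

def run_evals(model: str) -> float:
    passed = 0
    for case in EVAL_SET:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": case["prompt"]}],
            temperature=0,
        )
        answer = response["choices"][0]["message"]["content"]
        passed += case["expected"] in answer  # naive string-match grading
    return passed / len(EVAL_SET)

print(f"pass rate: {run_evals('gpt-4'):.0%}")
```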
JOHN COLLISON: And let's talk about how OpenAI works. Again, I'm sure there are lots of people here who are super curious. Like, you guys have, for example, the research-versus-product split, I think, or maybe
SAM ALTMAN: Yeah.
JOHN COLLISON: you know, that’s probably interesting for people to hear about and just in general your philosophies as you run OpenAI.
SAM ALTMAN: Well, the first thing I'd say is that how it works changes, like, every year or even more often than that. We're still in such a steep part of the technology discovery curve that you make these great plans and... what's that quote? Everyone's got a plan until they get punched in the face, or something like that?
JOHN COLLISON: Yeah, Mike Tyson.
SAM ALTMAN: Yeah.
JOHN COLLISON: Yeah.
SAM ALTMAN: And then you get, like, punched in the face and you realize the org structure has got to change or what you thought you were going to do or the product you wanted to make is not possible or there’s something much better or, like, you know, who knows what.
So the thing that we try to have, our advantage, is that we are a very truth-seeking research culture. We want to get the things to work. We don't care if our ideas are the ones that turn out not to be right. And one of the things that I think really held the field back is that a lot of the research heavyweights in the field for a long time, the whole culture of academic AI research, wanted certain approaches to work more than it wanted the technology to work. So we have this we'll-do-whatever-it-takes attitude. We will follow the truth wherever it leads. If the answer is we just have to go scale up these ridiculously big systems, which is way less intellectually satisfying than discovering a brilliant new algorithm, but that's the one that works, then that's what we're going to do. And that was really a culture that we built from the beginning, but it means that things are difficult to predict.
JOHN COLLISON: Is it accurate to say there’s kind of a cyclicality to AI in discovering new techniques and then throwing a whole pile of compute at it and then kind of going back and discovering new techniques and then throwing the whole pile of compute, and maybe part of your success is, as you say, not being precious about just throwing a whole pile of compute at it?
SAM ALTMAN: You could sort of say there's a cyclicality to that, but that whole idea of, like, throw a lot of compute at it didn't really exist at all until a few years ago.
JOHN COLLISON: Really?
SAM ALTMAN: Yeah. This is not like an old idea.
JOHN COLLISON: Hmm.
SAM ALTMAN: Or it was sort of maybe known in some abstract sense, but people other than us weren’t really just saying, like, let’s just make that language model bigger. Let’s do the hard engineering.
JOHN COLLISON: Because it kind of seems inelegant to an engineer.
SAM ALTMAN: It does.
JOHN COLLISON: that we’re just going to brute force this.
SAM ALTMAN: It does.
JOHN COLLISON: Yeah.
SAM ALTMAN: So, you know, with the next thing we do—like, big computers are going to be important for sure—
JOHN COLLISON: Yeah.
SAM ALTMAN: —but I think we’ve now squeezed a lot of the easy juice there.
JOHN COLLISON: Yeah.
SAM ALTMAN: And so now it’s back to, like, let’s figure out these very new ideas.
JOHN COLLISON: You’re like a lock-picking expert who uses a crowbar.
SAM ALTMAN: But we have this new tool. We have, like, the GPT system that can help us with all sorts of things.
JOHN COLLISON: Yeah. How do you create a truth-seeking research culture?
SAM ALTMAN: I think it’s basically all about the people. Like with any company, it is all about the people and particularly the initial people you hire, and we tried to just be very diligent about who those people were and who they weren’t.
JOHN COLLISON: Mhmm.
SAM ALTMAN: So there were a lot of quite prestigious people that we tried hard to hire, or did hire in some cases in the early days, who didn't fit how we thought things were going to go.
JOHN COLLISON: And it sounds like it is quite important to you that people don't have a particular team that they support in terms of techniques. You just have to be open-minded.
SAM ALTMAN: Yeah. We have a very high internal flux of movement between teams, new teams starting and stopping, people changing to go work on something else, and I think that is really important to how we work and also different from how a lot of other people work.
JOHN COLLISON: How do you know if research is worth funding? Because, again, you know, on a product you can kind of see if anyone is using it or not. You get a lot of real-time feedback. One of the defining aspects of research is maybe you’re working away for five years and it looks like it’s a dead end and then suddenly it takes off.
SAM ALTMAN: Yeah. There's nothing deep beyond that; you just have to accept the much longer time horizon. And, oh, this is another thing about the culture of how you get there: you need people who are not that dependent on external validation. Everybody is a little bit, and I think it does no good to pretend otherwise, but you want people who are relatively not. And if we're going after some approach that we think is working, but a lot of experts in the field say, like, that's really not going to work
JOHN COLLISON: Mhmm.
SAM ALTMAN: they actually say much meaner versions of it than that, you need people who will keep doing it anyway. In the same way, if you don't get the normal dopamine hits of more frequent contact with reality while you're researching an idea you believe in but haven't yet proven, you need people who are going to be okay through that many-year journey through the wilderness.
JOHN COLLISON: Hmm. And it sounds like you've tried to create a lot of connectedness within the organization, where it's not, you know, I join a team, and then I work on that team for five years. People will get bounced around somewhat deliberately.
SAM ALTMAN: Yeah.
JOHN COLLISON: And this is your third startup if you count kind of YC as a startup?
SAM ALTMAN: Yeah.
JOHN COLLISON: And how do you lead OpenAI differently or do you try to run things differently from your previous companies?
SAM ALTMAN: Running a research group is very different from anything else, and an investment firm is certainly different again. My sense is you can learn some of the standard things, like how to deal with people; that transfers pretty well. But I think in general people try to apply way too many lessons from stuff they've previously done to what they're doing now, and you really want to come in with a fresh pair of eyes and meet the shape of whatever is in front of you.
And so OpenAI went against basically all of the YC advice. You know, it took us four and a half years to launch a product. We're going to be the most capital-intensive startup in Silicon Valley history. We were building a technology without any idea of who our customer was going to be or what they were going to use it for.
JOHN COLLISON: Jokes aside, did you have investors wagging their fingers at you and, you know, telling you that you’re doing it wrong in some way?
SAM ALTMAN: Yeah. And I was just sort of, like, I don’t really care. Don’t invest. You know, you can, like, sell your stock to somebody else.
JOHN COLLISON: That sounds different from previous startups, where, you know, maybe you’re more self-actualized.
SAM ALTMAN: Yeah.
JOHN COLLISON: Maybe you don’t have to care so much.
SAM ALTMAN: Yeah. But we did.
JOHN COLLISON: But it sounds like you did a deep learning.
SAM ALTMAN: We did change things like that.
JOHN COLLISON: Yeah. Now, OpenAI: currently, you know, ChatGPT is on everyone's lips. The products are working incredibly well. GPT-4 is delivering a huge amount of real value to people. I presume it did not feel that way all along the journey. Do you have any fun war stories, or just
SAM ALTMAN: Yeah.
JOHN COLLISON: what was it like?
SAM ALTMAN: I do. But I will say, like, no one is as honest about how bad the bad early days are as they should be because it’s so embarrassing in retrospect, but --
SAM ALTMAN: Nervous laughter out there. But the beginning of OpenAI was really tough. Stuff was not working that well. We were this extremely rag-tag group of people. We were, as I mentioned earlier, mocked by everyone serious in the field. There was this pretty famous guy who runs another AI lab, one that is not doing as well as OpenAI anymore, who would go to the media whenever they wrote something nice about OpenAI and tell them, you know, I don't know why you're saying this stuff. This is just not a good group of people.
JOHN COLLISON: You had a kerfuffle with Elon Musk?
SAM ALTMAN: We had a kerfuffle with Elon Musk. But more than that, we just didn't have working technical progress. We had some little things. You know, we did some projects that kind of worked. We even did Dota, which was great, and it was what let us raise the money from Microsoft that let us do the GPT series. But it was deeply unclear how we were going to make AGI, and we were unbelievably outgunned by DeepMind at the time; and a lot of people were, like, why are you even doing this, DeepMind is untouchable.
No one had figured out the idea of language models. Certainly no one had figured out the idea of putting an API around one or releasing ChatGPT. So, yeah, it was pretty bad. It was pretty hard. And we just kept putting one foot in front of the other, but we were constantly not able to find enough money or compute or people, and you just keep going, and eventually something works.
JOHN COLLISON: When you say you just keep going... I think every company that gets to some scale definitely goes through the dark times. And if I think back to Stripe's dark times, there's a point early on, just after launch, when the endorphins have worn off, and we had a bunch of early people leave all at once at one point, and it's very draining, and you lose some of your own confidence, and you think about what keeps you going. And, one, it helps if you have an idea that you're really excited about, but at least in our case, also, there's a momentum from serving customers.
SAM ALTMAN: Yeah.
JOHN COLLISON: Where, I don’t know, it’s, like, just, well, we got to stay going because, you know, we have all these requests coming into the API and we have all these customers to serve.
SAM ALTMAN: Yeah.
JOHN COLLISON: So we kind of don't have a choice. It sounds like you guys were doing more foundational research. You did not have that customer momentum to keep you going, so were you not tempted to just pack it all up and quit?
SAM ALTMAN: You know, I think Silicon Valley used to be really good at research labs and then it wasn't. There's good canonical advice now about what to do as a product startup, and it's totally true; this is one of the reasons YC pushes for it so much. Once you have customers that you have to serve, you really feel that, and they pull it out of you. But one of the things that we had to sort of rediscover about how to run a research lab was how you keep that internal momentum going when you don't have customers, when you don't have contact with reality to know if you're doing the right thing, and it is really hard.
A good set of colleagues matters. We would try to figure out how to set up enough external touch points, for our own sanity, mine included, to fake the equivalent of having customer feedback. But there were a lot of things like that we had to rediscover how to do.
JOHN COLLISON: Hmm. You know, you described the intense competition with some of the other folks in this space, like DeepMind. One thing I've observed is that the AI space feels very competitive in spirit, where people really want to outpace the competition, and there's a lot of competitive juices flowing, like in a sports game or something like that. It sort of reminds you of the early days of computing, where, like
SAM ALTMAN: Yeah.
JOHN COLLISON: you know, Microsoft versus Apple, I mean, that was a real competitive battle; whereas there are other folks in our space, and they're all grand, and, yeah, it's a bit more chill somehow. And so is there something about these superfast-moving
SAM ALTMAN: Actually, I think there's a way in which the AI field is very competitive, as you said, but I think there's this other way in which it's the least competitive, because I think everyone working on this sort of frontier realizes what's at stake, and we really want it to go well. And so although people want to be faster, there is, I think, a really deep appreciation for the need to collaborate and get this right, and that is different from any space I've seen before.
JOHN COLLISON: What form does the collaboration take?
SAM ALTMAN: I mean, it’s mostly just, like, the leaders of the labs talking to each other.
JOHN COLLISON: Mhmm.
SAM ALTMAN: But increasingly I think it’ll be about governments doing it.
JOHN COLLISON: What will happen with the governments? You know, there’s a lot of talk now of AI regulation. What should I be
SAM ALTMAN: I'll tell you what I want; I don't know what they'll do. I think that computing systems above a certain threshold, as we make more and more algorithmic progress, are as powerful for the good and as dangerous for the bad as any other human technology we've had yet. And what I would like is something like the IAEA, the International Atomic Energy Agency, but for AI.
And so if you were going to create a giant AI training system, if you're going to train models that are above a certain capability, there should be a global regulatory authority over that, one that says, you know, here's the test you have to pass before you deploy something; here's the standard for audits.
JOHN COLLISON: Well, wait. As we shut down nuclear plants in Europe and fire up new coal plants, and it maybe feels like we're running slightly backwards from a climate point of view, is the global nuclear regulatory apparatus really the model we want?
SAM ALTMAN: We haven't had a bomb go off in war since 1945, and I think most people in 1945 would have given a very low probability of that happening. It's not been perfect. It's definitely been screwed up on the power side. I'm honestly not sure that's the IAEA's fault. Like, the NRC in the US has never, as far as I know, approved a plant that has then gone on to be built in the 45 years it's existed, which is incredibly shameful. But, yeah, we haven't had a bomb go off.
JOHN COLLISON: And you credit at least to some degree
SAM ALTMAN: At least some of that.
JOHN COLLISON: -- to the IAEA. Sorry, you just reminded me with nuclear, I forgot in Sam's intro: so, you know, Sam has done all these very impressive things, and just while doing that, in his spare time, he has also done things like being the primary backer of Helion, which is the leading, I think, nuclear fusion
SAM ALTMAN: Yeah.
JOHN COLLISON: startup. And should we be excited about
SAM ALTMAN: You should be excited about that.
JOHN COLLISON: having fusion in the future?
SAM ALTMAN: So... fusion is going to work. What I think is still unclear
JOHN COLLISON: I’m sorry. Fusion will work?
SAM ALTMAN: Fusion will work.
JOHN COLLISON: Okay. You heard it here.
SAM ALTMAN: But, but
JOHN COLLISON: Sorry. I already tweeted it out. Yeah.
SAM ALTMAN: The thing that I think matters now is how cheap we can make the energy out of the system and how quickly we can deploy it to the world, and these are both tremendous challenges. Maybe not quite as hard as getting the actual physics to work, but up there. It's totally possible we get to a world where fusion works but is more expensive than solar plus storage. That'd be really sad.
And it's totally possible that fusion works and it's cheap, but we just can't manufacture it at the thousands of gigawatts of generating capacity we need for Earth. And if both things happen, if the cost falls by a factor of 10 or 100, then the demand really goes crazy, and it's really, really hard to make enough. But I think probably we will figure it out, and probably we will get to a world where, in addition to the cost of intelligence falling dramatically, the cost of energy falls dramatically too.
And if both of those things happen at the same time -- I would argue that they are currently the two most important inputs to the whole economy -- we get to a super different place.
JOHN COLLISON: Yeah. I was going to say, are those two things more coupled than people realize, where electricity
SAM ALTMAN: In all these ways
JOHN COLLISON: is one of the limiting factors
SAM ALTMAN: Yeah. In all these ways they are. Like, if you really want to make the biggest, most capable superintelligent system you can, you need eyewatering amounts of energy. And if you have an AI that can help you move faster and do better materials science, you can probably get to fusion a little bit faster too.
JOHN COLLISON: Hmm. As you trace back the tech tree and look at something like Helion, you get all these interesting questions about what's dependent on what. You know, could the Romans have had their own Industrial Revolution, or were there technologies that were needed first? Could we have had what Helion is doing in, say, the '60s?
SAM ALTMAN: No.
JOHN COLLISON: Okay.
SAM ALTMAN: It's always funny how these things play out, but the whole semiconductor revolution also means that we can do the kind of switches that we need for Helion now. Ten years ago, 15 years ago certainly, it just would have been totally impossible. The materials science, the progress that we've made on many different parts of the machine, the computer modeling that lets us understand how the plasma is going to work: none of these things would have been possible before.
JOHN COLLISON: So it’s the product of decades of materials science
SAM ALTMAN: On many different
JOHN COLLISON: and computer modeling?
SAM ALTMAN: Yeah. It’s the product of, like, major progress in five different areas being multiplied together.
JOHN COLLISON: Mhmm.
SAM ALTMAN: That means you can do things that were totally impossible the last time people made serious fusion efforts.
JOHN COLLISON: Okay. Okay. Sorry, I distracted us. Going back to AI regulations, so you think capping compute is just a sensible, pragmatic way to like
SAM ALTMAN: I wouldn't cap it. I mean, maybe you should at some points along the way, but long term that's clearly not the answer. I would just say that above increasing thresholds there are increasing regulatory demands.
So if you’re going to make a system that is likely to have some very powerful capabilities, you should have a different regulatory regime than if you’re, like, training a model on your laptop.
JOHN COLLISON: Do you think we'll sadly just back into regulation through essentially protectionism, humans protecting human turf? You know, the lawyers have figured this out, where various things just require a lawyer. Why? Because we make the rules. We're the lawyers.
SAM ALTMAN: You know, I used to worry about that, and I still have some worry about it. You can see all these mistakes where people have tried to protect a bunch of jobs, and that's just not a winning strategy. But I think the tenor of the conversation is quickly shifting towards: this AGI thing is just different from anything we've faced before, so let's focus on that.
JOHN COLLISON: Mhmm. So, again, you have around 3,000 people in the room working in every imaginable industry, and I'm sure many of them are actually working on AI startups. You guys at OpenAI are obviously powering, through your API products, a huge number, maybe most, of the AI startups. If you were putting out a request for AI startups, what do you want to see people develop? What would be on your request-for-startups list?
SAM ALTMAN: We touched on this earlier, but anything in the direction of designing a new interface model just seems to keep working, you know, from this sort of chatbot interface for things that were harder to do before, to the sort of copilot for every different workflow. That seems to me like an easy area to go after.
What’s happening in education is amazing. So, like, the AI tutors are surprising me to the upside, and I suspect can go much, much further. But, you know, the most fun part of this is you always underestimate the creative energy of the world when it really gets focused on something.
JOHN COLLISON: Yeah. At Stripe we work with an extraordinary number of vertical SaaS companies, where they're providing software for some very specific vertical, and it goes really niche. I mean, there are very popular software platforms for managing a Boy Scout troop, because, as it turns out, there are a lot of Boy Scout troops out there, and I know they're all running around.
SAM ALTMAN: Yeah.
JOHN COLLISON: You know, you need a software platform to manage them. But, anyway, we see this across every single kind of industry, and I was thinking that it will probably be software platforms that are the vehicle through which many of the productivity gains of AI are distributed to the economy more broadly, because most companies are not tech companies, but SaaS is probably how you can deliver a lot of these productivity gains.
SAM ALTMAN: I think that’s probably right, but I would say my own intuitions are uncertain here. Like, you know, maybe where most of the gains come is in another few turns of the crank you can just say, like, hey, AI, please cure all cancer for me, and then it goes “choo choo” and gives you the answer, and, like, people get it that way.
JOHN COLLISON: That’d be fun. Obviously, it’s very exciting.
JOHN COLLISON: It'd be very exciting, actually. When you say education: the Bloom 2 sigma effect, that tutoring kids
SAM ALTMAN: Yeah.
JOHN COLLISON: one on one with mastery learning techniques
SAM ALTMAN: Yeah.
JOHN COLLISON: where you wait until the kid has actually understood it before you go to the next level. The reason they call it the Bloom 2 sigma effect is that it produces two sigmas of outperformance versus the baseline, and that is maybe
SAM ALTMAN: Yeah.
JOHN COLLISON: the most exciting thing.
SAM ALTMAN: I think this is why the AI tutors are surprising so much on the upside, because they can do exactly this so well
JOHN COLLISON: Yeah.
SAM ALTMAN: one on one.
JOHN COLLISON: There’s two sigmas. Like that’s a lot of
SAM ALTMAN: It’s a lot.
JOHN COLLISON: sigmas.
SAM ALTMAN: It is a lot of sigmas.
JOHN COLLISON: The companies here that are not AI companies, any advice for them as they look to adopt AI within their business? So
SAM ALTMAN: Yeah. Internal productivity. We're seeing a huge divergence between individual people and companies that embrace AI tools versus those that don't. Whether your business has anything to do with AI or not, you can certainly figure out how to dramatically improve your productivity per unit time, and I think it'd be very dangerous not to learn how to do that.
JOHN COLLISON: When you talk about AGI, three years, five years? What are you thinking?
SAM ALTMAN: Honestly, the goalposts on that one are moving so much that I think at this point we'd have to get into a very precise agreement on what the definitions are, and then I could give you some range.
JOHN COLLISON: What is your working definition?
SAM ALTMAN: Well, my own definition is moving, too, as we get closer. A thing that I would say is definitely AGI is a system that can dramatically increase the rate of scientific progress. And I don't think this has to mean we have one system that goes off and discovers all the science itself; that's actually not what I expect will happen. But somehow, working in collaboration with us and becoming part of society in all these ways, our collective rate of scientific progress dramatically changes shape. And I would expect that by the end of the decade.
JOHN COLLISON: Yeah. I mean, it feels like if that is your criterion, we could see that actually quite soon.
SAM ALTMAN: Yeah. But, again, it’s, like, not a fully autonomous system.
JOHN COLLISON: Yes.
SAM ALTMAN: Like, some people wouldn't count that.
JOHN COLLISON: Yeah, yeah. Well, that's kind of the moving definition.
Many people probably don’t know this. You were, along with Paul Graham, Stripe’s first angel investor.
SAM ALTMAN: Yeah.
JOHN COLLISON: Why did you invest in Stripe?
SAM ALTMAN: I thought, and continue to believe, in addition to the fact that this is just one of the largest markets, that if you can take friction out of economic transactions, which is part of why the Internet is exciting as a whole but only one part, it would enable a tremendous amount of growth in the economy and better stuff happening: faster, cheaper, more accessible, all of that.
And that has become even more true. Stripe has certainly outperformed anyone's expectation, but that fundamental belief I felt then, and I feel it very strongly now. I also thought you and Patrick were awesome. And that's kind of all I look for in a startup: a big market, an idea that I personally want to support, great founders, something that would have real long-term defensibility.
It was so long ago that I actually wrote a check, a physical check, probably one of the last times in my life. But, you know, when you say it
JOHN COLLISON: Kind of ironic for Stripe when you think about it.
SAM ALTMAN: Well, I mean, it makes sense,
SAM ALTMAN: It makes sense that it would be one of the last ones.
JOHN COLLISON: Yeah. We can digitize that.
SAM ALTMAN: Like, you know, when people say, like, I wrote a check into this startup, like, I actually wrote a check into Stripe.
JOHN COLLISON: Yeah. We have our new RFAs. We can digitize all that for you now.
SAM ALTMAN: Great.
JOHN COLLISON: But you guys are, obviously, now monetizing ChatGPT with Stripe. Any product feedback for us, sir?
SAM ALTMAN: We're very happy. And I'll tell you one thing that the team is probably tired of hearing about: Stripe does such a good job with customer support, documentation, that whole experience, so much better than basically any other startup.
And OpenAI, we're going to get better at this. I apologize to everyone; we're clearly at the bottom of the pack. But it's very inspiring, and we appreciate it.
JOHN COLLISON: When you say documentation, actually, one thing I'm quite excited about, and this is, like, a very narrow thing to be excited about with GPT-4, but I am, is technical documentation. Stripe has produced a lot of quite technical, quite deep documentation, but everyone's use case is kind of different, where it's, like, well, I'm building a platform in the Netherlands to accept point-of-sale payments with the new S700 reader, which has the longest battery life
SAM ALTMAN: Yeah.
JOHN COLLISON: of any reader we’ve ever shipped, and I just want to, you know, figure out what my implementation plan is.
SAM ALTMAN: Yeah.
JOHN COLLISON: That is the kind of thing GPT-4 is excellent for.
SAM ALTMAN: Yeah. This seems like clearly the future.
JOHN COLLISON: Yeah.
SAM ALTMAN: Where you'll just go to GPT-4 or some system like it and ask it any question you have, and it'll give you a perfect custom answer.
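(A minimal sketch of the docs-grounded Q&A pattern being discussed: put a relevant documentation excerpt in the prompt and let the model answer the user's specific integration question. The excerpt is made up, not real Stripe documentation; pre-1.0 openai Python client assumed.)

```python
# Minimal docs-grounded Q&A sketch. A real system would first retrieve
# the relevant excerpt, e.g. via embeddings search over the docs.
import openai  # pre-1.0 client

DOCS_EXCERPT = """
S700 reader, point-of-sale payments (illustrative excerpt):
1. Create a PaymentIntent on your server.
2. Hand it to the reader and collect the payment method.
3. Confirm and capture once the card is presented.
"""

def answer_from_docs(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the documentation below. If it "
                        "doesn't cover the question, say so.\n" + DOCS_EXCERPT},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer_from_docs(
    "I'm building point-of-sale payments in the Netherlands with the "
    "S700 reader. What's my implementation plan?"))
```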
JOHN COLLISON: Actually, maybe a narrow product question: will you go to the Stripe integration builder, and the integration builder will come up with a thing for you? Or is everyone kind of overthinking it, and actually you'll just go to GPT-4 and say, hey, GPT-4, how should I build my Stripe integration?
SAM ALTMAN: We're still trying to figure out the answer to that question. You know, do people want to use one centralized chatbot for their entire workflow, or do they want the chat interface everywhere? So we're trying versions where it all comes together with plugins. We're trying versions where we make it easy to embed chatlike experiences elsewhere. And I think the world hasn't figured this out yet.
JOHN COLLISON: Yeah. And you guys should pitch your API product. Because, again, one of the things people are worried about is data usage: you want to be able to use the models on your data, but you don't necessarily want that data feeding back into the models and everything like that. But you guys have solved all that with your API product.
SAM ALTMAN: Yeah.
JOHN COLLISON: Just people may not know that.
SAM ALTMAN: So we're trying to clear this misconception up: we don't ever train on any data submitted through the API. So you can use it comfortably as a business; it will never get into our training set. And that's really important to us, because we realize how sensitive the data is. We don't retain it long term. We don't train on it. It's your data. We totally get that. We have plenty of other ways to improve our models.
We will continue to drive prices down a lot over time. We will continue to make the models better. We have this vision that we can make our APIs pretty simple to use and pretty long-term stable, and then, as we make the models smarter and smarter, it just sort of lifts up the whole Internet, the whole technology industry, with more intelligence.
So that is our goal: we want to always astonish people with how much intelligence they get for how little money. And I think we can keep doing that for a long time.
JOHN COLLISON: When you talk about data, I think one thing that maybe surprises people is how small the models are. I don't know if you have too many thoughts to share on just how small the various models are, but will we see a lot of on-device or on-prem inference, like on my individual device?
SAM ALTMAN: It will be a long time until you can have a GPT-4-class model running on your iPhone, but, you know, technology marches on, and some day.
JOHN COLLISON: Yeah. So maybe for specialized use cases
SAM ALTMAN: Yes.
JOHN COLLISON: but probably not kind of for broad use cases.
Okay. And last question, because we are running up against time. You talk to a lot of startups; you advise a lot of startups. There are companies big and small here, not just startups, but including some startups in the room. As you stare down 2023, we've spent a lot of time talking about AI in particular, but, also, as you look at the macroeconomy, the hiring market, the return to office, which we were talking about backstage, all of that, what are you advising your startups on these days? What mistakes do you see people making that you try to correct? What advice would you leave everyone with?
SAM ALTMAN: I mean, look, I think definitely one of the tech industry's worst takes in a long time was that everybody could go full remote forever, and startups didn't need to be together in person, and, you know, there was going to be no loss of creativity. I would say that the experiment on that is over, and the technology is not yet good enough that people can be full remote forever, particularly not startups.
I feel pretty strongly that startups need a lot of in-person time, and the more fragile and nuanced and uncertain a set of ideas is, the more time you need together in person. Like, that is the thing that we still have not translated onto Slack and Zoom and whatever else.
And I've been talking to a lot of recent startups that were all-remote the whole time and decided not to come back, versus the ones that either started in person or came back together in person, and they're different. There's a huge separation there. So I think that's one big thing. The macroeconomy, I don't really know.
JOHN COLLISON: You’ve always had good predictions on the macro.
SAM ALTMAN: I only think about AI now. I used to have, like, all these interests.
SAM ALTMAN: You know, I used to, like, have a lot of ideas, and now I only have one. I mean, presumably the economy recovers at some point.
JOHN COLLISON: That's actually as specific as many serious macro prognosticators, I guess; you're just much more concise. They use many more words to express roughly the same sentiment.
SAM ALTMAN: Right. Yeah.
JOHN COLLISON: Okay. So get back to the office, and presumably the economy recovers at some point.
AUDIENCE: I have a question.
JOHN COLLISON: Well, we’re not taking live questions, but apparently we are with this person. So we’ll take one live question, then we’ll close.
JOHN COLLISON: This is a new way.
SAM ALTMAN: Sure.
AUDIENCE: So there was a research paper arguing that emergent abilities are more an artifact of how we measure progress than an inherent property of language models themselves.
SAM ALTMAN: The question was: there was a paper arguing that emergent properties are more an artifact of how we measure progress than an inherent property of language models themselves. I didn't see the paper. I totally disagree. But I think it's a question of how you define emergent properties.
There are all these experts who try to weigh in about why this is just statistical prediction, there's no intelligence, there's nothing useful here, versus the experience of people who actually use these models. I'd say the experts are clearly in the wrong in some important sense, and you can safely just ignore all of their Twitter hyperventilation.
JOHN COLLISON: Not speaking about anyone in particular?
SAM ALTMAN: So many people in particular.
SAM ALTMAN: But in the same way, you know, you can say, well, this really isn't emergent behavior; it's a measurement error. If it's doing something new and useful, that is emergent behavior in a sense; I think utility is the figure of merit here. But more than that, now that we can see a year or two into the future, there is definitely going to be behavior that just categorically doesn't exist right now, and it is going to come in a way that I, and I think most normal people, would call emergent. And what matters is the utility it eventually provides us.
JOHN COLLISON: Okay. Sam, thank you so much.
SAM ALTMAN: Thank you.
SAM ALTMAN: Thanks for having me.