How can we help everyone become a 10x engineer?
The developer experience is being reshaped by emerging technologies and societal trends, from AI copilots to hybrid work environments. We talk to engineers and product leaders who are navigating these changes and seek to answer the question: how can we help everyone become a 10x engineer?
Speakers
Sarah Guo, Founder and Managing Partner, Conviction
Scott Densmore, VP of Engineering, GitHub
Romain Huet, Head of Developer Experience, OpenAI
Ainsley Escorce-Jones, Principal Engineer, Developer Infrastructure, Stripe
Lee Robinson, VP of Product, Vercel
SARAH GUO: Hi everyone. I’m so happy to be here at Stripe Sessions. My name is Sarah Guo, and I’m the founder of an early-stage venture capital firm called Conviction, focused on the technology transformation that’s happening now because of AI. I think it is a generational opportunity. And before that, I spent about a decade as an investor, including in developer tools and developer infrastructure at Greylock. I am really excited to talk about today’s topic, which is what we can do to make everyone a 10x engineer and what that world looks like.
And with me today... I’m super excited to be joined by four people from some of the engineering cultures and products that are most respected in our ecosystem: GitHub, OpenAI, Stripe, and Vercel. We’ll have our panelists introduce themselves, and I’m sure we’ll have a really good discussion today. With that, Scott, Romain, Ainsley, and Lee, please join me.
This is the only time we’ll stiffly go in order, I promise. Scott, if we could start with you, everybody of course knows your product, and I think most people here use it five to seven days out of seven. Can you talk a little bit about yourself and what part of GitHub you work on?
SCOTT DENSMORE: I am Scott Densmore. I work at GitHub. I am the VP of engineering for GitHub Copilot. So, if you have any questions about Copilot, you can always ask me. You may not like the answer, but I’ll at least answer it for you.
ROMAIN HUET: Hi, I am Romain. It’s very nostalgic to be here because I worked at Stripe for many years, so it’s awesome to see all of you here at Sessions. I’m at OpenAI, and I lead developer experience. So I work with many of you—startups, builders, developers—who are building with our models and our APIs.
SARAH GUO: Ainsley.
AINSLEY ESCORCE-JONES: Hey, everyone, I’m Ainsley. I still work at Stripe. I’m the IC that leads our developer infrastructure team; we build all of the internal tools that keep our engineers productive: IDEs, builds, and so on.
LEE ROBINSON: And I’m Lee, I’m the VP of product at Vercel. We build tools for frontend developers, and I’m just so happy to be on this panel with companies that I respect so much and products that I use every single day. It’s really funny: Stripe Sessions is built with Vercel, which is so cool. And I use GitHub every single day. I use Stripe for Vercel. So really awesome to be here.
SARAH GUO: Awesome. Okay, so I will start with one thing you’re proud of and one thing that you still think needs to be solved in the developer experience, or is being solved. So Lee, maybe starting with you: what is Vercel working on or has shipped recently that you think is great, and what’s one thing that you think is unsolved?
LEE ROBINSON: You know, one thing I’m proud of, I don’t think we were the first person to do this, but I think we helped influence the DX of the industry for developer tools, which is when you have an error, when something goes wrong, you want to get it solved as fast as possible. And often you can’t put enough context in the stack trace or in the message you get back. And in very early versions of Next.js, which is a React framework that we built, we included a link which, you know, seems to make sense now. I think a lot of tools do this. I know Stripe and OpenAI have this and GitHub does too. And it goes out to your webpage, your docs, your error page that provides a bunch more context.
It provides code you can copy/paste, debugging tips, and, you know, maybe in the future even a button you can click to get an LLM-assisted answer. So I don’t think we were the first to do that, but I do feel like we’ve helped push that further in the industry, which is a huge boon. And something we’re working on that I’m really excited about is a product called v0.dev, which allows you to use natural language or upload images, and maybe in the future speak as well, to generate frontend code. So it generates React and Tailwind code that you can copy/paste, put in your app, and get a first version going really quickly.
SARAH GUO: Yeah, how many people here in the audience have tried v0? Okay, okay, that’s pretty good.
LEE ROBINSON: Nice.
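To make the idea concrete for readers: below is a purely illustrative sketch of the kind of React + Tailwind component a tool like v0 might hand back as a starting point. The component name, props, and styling are invented for this example and are not actual v0 output.

```tsx
// Illustrative only: the sort of copy/paste-able React + Tailwind starting
// point a generator like v0 aims to produce. Names and styles are made up.
import * as React from "react";

type PricingCardProps = {
  plan: string;
  price: string;
  features: string[];
};

export function PricingCard({ plan, price, features }: PricingCardProps) {
  return (
    <div className="rounded-xl border p-6 shadow-sm">
      <h3 className="text-lg font-semibold">{plan}</h3>
      <p className="mt-2 text-3xl font-bold">{price}</p>
      <ul className="mt-4 space-y-2 text-sm text-gray-600">
        {features.map((feature) => (
          <li key={feature}>{feature}</li>
        ))}
      </ul>
      <button className="mt-6 w-full rounded-md bg-black px-4 py-2 text-white">
        Get started
      </button>
    </div>
  );
}
```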
SARAH GUO: Ainsley, can you tell us a little bit about something that you think makes the developer experience within Stripe amazing?
AINSLEY ESCORCE-JONES: Yeah.
SARAH GUO: Maybe that the rest of the world should have?
AINSLEY ESCORCE-JONES: Yeah, I don’t know of anything that, you know, is super secret or that anyone shouldn’t have. One of the things we do at Stripe is, every six months, we send around a survey to every single engineer and we ask them, “How do you feel about your tools? How do you feel about developer experience?” And I would say maybe 18 months ago, one of the things that came up just an awful lot was people struggling to find good documentation. I feel like every company has this internally. You know, the documentation is stale, fragmented, you can’t find it. But we really tripled down on trying to solve that problem because it just seemed to come up in every single comment that we saw.
So we built our own internal documentation tool that we call Trailhead, and it has a bunch of really cool features, especially around making sure that people verify their documentation and make sure it stays up-to-date. Otherwise, it kind of drifts away and gets archived. But also, on top of that, because we trusted that we understood all of our documentation and that there was good information hierarchy, we then were using LLM tooling. We built a bot on top of it that you can interact with from your browser, from Slack, from inside your IDE… all of the places that you are… and just ask questions about documentation that we then felt comfortable, like were going to be accurate and…
SARAH GUO: This sounds like an unreachable ideal, man. How do you get compliance on something like that? Or like, you know, uptake?
AINSLEY ESCORCE-JONES: Yeah, but like, in the most recent one, we just ran the survey and we got the results last week and you can just see the steep drop-off after we spent so much time and effort on perfecting this tool and advertising it to people. People just don’t find issue with that particular thing. People always have other things for us to do, but that one we really did a good job on, I think.
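As a rough illustration of the pattern Ainsley describes (an LLM bot answering questions over verified internal docs), here is a minimal retrieval-augmented sketch. The document contents, prompt, and model names are assumptions made for this example; it is not Stripe’s Trailhead implementation.

```ts
// Minimal sketch of a docs Q&A bot: embed verified docs, retrieve the most
// relevant ones for a question, and answer only from that context.
// Docs, prompts, and model names here are placeholders, not Trailhead itself.
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

type Doc = { title: string; body: string; embedding?: number[] };

const docs: Doc[] = [
  { title: "Deploying a service", body: "To deploy, run the standard pipeline..." },
  { title: "IDE setup", body: "Install the recommended extensions and..." },
];

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function embed(text: string): Promise<number[]> {
  const res = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

async function answer(question: string): Promise<string> {
  // Index the docs. In practice this happens offline, on verified docs only.
  for (const doc of docs) doc.embedding = await embed(doc.body);

  const questionEmbedding = await embed(question);
  const topDocs = [...docs]
    .sort(
      (a, b) =>
        cosineSimilarity(b.embedding!, questionEmbedding) -
        cosineSimilarity(a.embedding!, questionEmbedding)
    )
    .slice(0, 2);

  const context = topDocs.map((d) => `# ${d.title}\n${d.body}`).join("\n\n");
  const chat = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Answer only from the provided internal docs." },
      { role: "user", content: `${context}\n\nQuestion: ${question}` },
    ],
  });
  return chat.choices[0].message.content ?? "";
}

answer("How do I deploy a service?").then(console.log);
```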
SARAH GUO: Okay, so special Stripe cultural thing. Scott, how many people have tried GitHub Copilot at some point? Every hand gets up, right? Even on your personal projects. What is something you’re excited about?
SCOTT DENSMORE: I think I’m excited about some stuff that we’ve been working on that we announced back at Universe on extensibility. I always say every developer has to wear multiple hats. We expect every developer to be an expert in everything, whether that be your observability tools, Vercel, Stripe, or anything else that you use. And I always said, “How do we do something to bring those experts to developers?” Because just because you’re supposed to be an expert doesn’t mean you are. And we want to keep those developers in their flow state. So just like you saw today in the keynote with Stripe and how they’re integrating into GitHub Copilot, we want to bring a lot of this ecosystem to developers, whether that be something like Stripe or Vercel or any other tool that you use, so that you can interact with those tools in a natural language way just like you do with GitHub Copilot. And these things become ubiquitous, something you can bring with you everywhere, whether that be in your editor, on GitHub.com, or any of those places. That’s what I’m super excited about.
SARAH GUO: Romain, you haven’t been at OpenAI that long. How long has it been?
ROMAIN HUET: About eight months now.
SARAH GUO: Okay, eight months.
ROMAIN HUET: Which is a lot in AI years, you know?
SARAH GUO: Yeah, yeah, yeah. That was, like, that’s what? Six generations? I’ll ask you an even more specific question: What’s something surprising to you about the developer experience within OpenAI? And anything you think you can share about how you’re changing the developer experience?
ROMAIN HUET: Yeah, one thing I’m really proud of in terms of what OpenAI has been able to accomplish is that when you go back to the mission of what we’re trying to do, we’re trying to build safe AGI that benefits all of humanity. But of course, OpenAI started as a very kind of research-y lab, and it was unclear whether we’d even have a developer product or a developer experience. And so to me, the G in AGI really lives in the API now: the ability for builders and developers to build all the things that we would otherwise not build ourselves. So I think that’s been amazing to see in the past year alone.
We now see people pushing the boundaries of the models but also experimenting with agentic software and apps. So I think that’s very exciting. In terms of DX specifically, I think what’s interesting is that more and more developers are getting sophisticated with AI and their needs and their trade-offs. You know, people are asking us, “Okay, I’d like to have a model that’s a little faster for this, but as intelligent as GPT-4 for that.” They have very complex use cases that they’re trying to get done. And so we spend a lot of our time, myself included, talking to developers and builders to understand, okay, how can we help you get to the finish line with that use case, and how can you deploy it to production?
SARAH GUO: So I think the premise of our discussion is actually, what’s changing because of AI, right? We are investors, early investors, in Cognition, which makes a thing called Devin, which is, you know, an AI software engineering assistant. You can think of it as an intern today. It’s going to become a smarter intern, and there are also, you know, companies still being built around, let’s say, visual QA and technical debt, but there are many different fields one could attack. I’d love to hear what you think is the near-term impact and the long-term impact of AI on the development process. And I feel like we have to start with Scott here.
SCOTT DENSMORE: Yeah, I mean, the near term is already here today. You talk about things like Workspace, which is how we’ve started thinking about how developers interact with the problems they’re trying to solve, right? I always say, “We don’t pay developers to type; we have them solve problems.” So what is the actual problem you’re trying to solve? And can I do that in a more natural way, whether that be a task list that I go through that can generate code for me and actually do a lot of the work? I think that’s near term, and I think it’s a lot nearer than people believe.
And in the long term, it’s the bringing together of all these agents, these skills, these apps, so that together they can be ubiquitous. And I say you should be able to take them everywhere you go; I think that’s not too far in the future. As we say about Copilot, it’s ubiquitous: if I can bring it everywhere I am, I can bring that code with me. And if we bring it everywhere, then we become this company that reaches every developer, we increase the number of developers, and we democratize development with natural language. And I think that’s really where it’s going.
SARAH GUO: Sorry, where would I be taking this outside of GitHub?
SCOTT DENSMORE: Lots of places.
SARAH GUO: Okay. Okay.
SCOTT DENSMORE: Find out more soon.
SARAH GUO: Okay, cryptic answer. We’ll watch out. I mean, Ainsley, are you shipping things here already? Are you experimenting with AI tooling inside Stripe?
AINSLEY ESCORCE-JONES: For sure. We’re kind of doing the same thing as everyone else. I would say about 30% of engineers at Stripe use Copilot regularly to get code completion suggestions, and they have that built into their IDEs when they get started. So I think that stuff is going to continue to increase. I was talking to Romain about this backstage: I think there’s an interesting next step I want to see with some of the tooling that does code completion and code suggestions, which is that companies of our size, like Stripe, have kind of molded our programming languages in a way that is appropriate for our domain. So we have special annotations to talk about privacy or security or things like that. And that’s present nowhere else in the world outside of Stripe, which means that if we just pull a tool off the shelf and bring it into Stripe, we can’t accept as many of the suggestions as we would want to.
And so part of me wants to bridge that gap, for companies of our size to be able to make that tooling useful specifically within our context; then I would be really excited about code completion tooling. Outside of that, separate from the in-IDE code stuff, the question I’m really interested in answering is: how quickly can we bring relevant context to people, not just about the code that they’re looking at? Stripe operates in a really complex domain, and when you bring a new engineer into, like, the finance industry, there’s so much they need to understand. How do you bring them up to speed as fast as possible? Because onboarding takes a long time, and the shorter we can make it, the more productive people can be.
SARAH GUO: So context could be, it could be customer impact, it could be compliance, it could be networks you work with.
AINSLEY ESCORCE-JONES: Right, exactly.
SARAH GUO: Long list. Yeah.
AINSLEY ESCORCE-JONES: What’s the chargeback flow diagram for this really obscure payment method is something that a new engineer is definitely not going to understand, but how can we get them to understand that as soon as possible?
SARAH GUO: How do you solve that problem today?
AINSLEY ESCORCE-JONES: There is just a lot of documentation that you have to read and real complicated diagrams that you have to kind of grok and then translate that back and forth from the code. But that’s not optimal, right?
SARAH GUO: Yeah, it feels like we could do better than that someday.
AINSLEY ESCORCE-JONES: Right.
SARAH GUO: Lee, when you guys launched v0, what did you hope it would do in the long term?
LEE ROBINSON: Yeah, we actually spent quite a bit of time leading up to v0 dogfooding this internally. So, going back a little bit, it started with… we were experimenting with building a lot of things with AI models and we wanted to make it really easy to get started. So we built this really small abstraction at the start that we called an AI SDK, which just made it easy to stream back responses for chat-like UIs. And then we started building and building and we were like, “Well actually, I really want to use this for our own tools at Vercel. I want to be able to quickly generate frontend code and just get a quick prototype online and see how it works.” And really, dogfooding even led to the creation of v0, where we wanted to put this out in the world and allow more developers to quickly build their first version. But it’s morphed a lot since then as we’ve gotten feedback from customers.
And I do think that this is… at least one path toward the future of AI is, how do we reimagine the chat-like experience to give richer, more interactive components? I think, you know, @-mentioning a GPT, or @-mentioning Stripe in the context of Copilot, those are already big improvements in terms of what functionality you have. One thing we’ve been experimenting with, which we’re calling generative UI, basically means that in the context of an interaction with a large language model, I also want to return components, whether that’s React components or maybe more in the future. If you imagine Google Search today, you can type in “what’s the weather” and you get the dynamic weather widget, or you can ask, “what’s the score of the Giants game?” and you get back the SF Giants game. We want to democratize that technology and make it available for everyone. And we’ve been building a bunch of cool demos if you want to try it out. One with Gemini, which is pretty cool, Gemini.vercel.ai. It’s how you could imagine a United flight app would let you book flights. So I say I want a flight from San Francisco to Rome, and I want to pick my seats, and then click “pay” to do my Stripe checkout. And I just think that that AI-native user experience is just getting started. So much of the experiences and the products that we all build can really be reimagined in this AI-first way. And we’re just getting started.
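As a sketch of the “generative UI” idea Lee describes, here is one simplified way to map a model’s tool call to a UI component instead of plain text. This is not the Vercel AI SDK’s actual API; the tool name, component names, and prompt are assumptions made for illustration.

```ts
// Sketch: let the model decide to "render" a component by calling a tool,
// then map that tool call to a component description the client can render.
// Tool and component names are made up; this is not the AI SDK's API.
import OpenAI from "openai";

const client = new OpenAI();

async function respond(userMessage: string) {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userMessage }],
    tools: [
      {
        type: "function",
        function: {
          name: "show_weather",
          description: "Render a weather widget for a given city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  });

  const message = res.choices[0].message;
  const call = message.tool_calls?.[0];
  if (call?.type === "function" && call.function.name === "show_weather") {
    const { city } = JSON.parse(call.function.arguments);
    // A real app would stream an actual React component to the client;
    // here we just return a description of what to render.
    return { component: "WeatherWidget", props: { city } };
  }
  return { component: "Markdown", props: { text: message.content ?? "" } };
}

respond("What's the weather in San Francisco?").then(console.log);
```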
SARAH GUO: Romain, Sam, and Greg and other OpenAI leaders have, I think, for a reasonably long time, talked about AIs that will make us better at developing AI, right? And given that’s the mission of OpenAI, safe AGI, does it make you more efficient yet as an organization?
ROMAIN HUET: I would say so, yeah. We are already using models pretty much throughout the organization, so yes, for sure. And for software engineering in particular, and for developers: Paul Graham, a co-founder of YC, had this tweet a few weeks ago saying that the 20-year-old programmers now tend to be as good as the 28-year-old programmers. And what’s interesting about that, when you think about that statement—
SARAH GUO: Wait, you have to stop and tell me if you believe that’s true all of you first.
ROMAIN HUET: I personally think that’s true.
SARAH GUO: Because it’ll tell you how old all of us are.
LEE ROBINSON: It certainly helped a lot to have access to these tools.
SARAH GUO: Okay.
LEE ROBINSON: I mean, if I was 20 years old, I think it would definitely help accelerate my career growth.
AINSLEY ESCORCE-JONES: Yeah, I’m not sure how we would measure that, but...
LEE ROBINSON: True.
SARAH GUO: Let’s say two things like productivity, and then like speed of growth.
AINSLEY ESCORCE-JONES: Oh. I think productivity maybe. I think there’s a bunch of really cool tooling that just exists now that just didn’t exist back then. So I think you can just throw stuff out. But I think that also includes like no-code stuff and just other tooling that’s advanced over time as well. Like everything combined probably helps with that.
Speed of growth? That’s a good question. I feel like I’m already sufficiently far away from that point in my career that I’m not sure what I would even be like learning, but I think yeah, like being able to have maybe like a back-and-forth with something that is trained on the sum total of all of our knowledge of programming probably seems like a pretty useful thing to have.
SARAH GUO: Scott.
SCOTT DENSMORE: Of course I believe this.
SARAH GUO: Yeah. Yeah.
SCOTT DENSMORE: I mean even at GitHub, we measure this when we bring in new folks.
SARAH GUO: Are my metrics right? Or how do you think about: are you making engineers better?
SCOTT DENSMORE: Are we making engineers better? I would say yes. We’ve actually lowered the barrier to entry. If I look at what we call an SE1 or SE2 being able to contribute, that’s gone from months to weeks. So I think we’ve made them better in terms of contributing to the code, understanding it, and we’ve given them more confidence. If I remember way back when I had an IDE, before I traded it in for Outlook and all those things… you know, it took me a long time to get confident with programming, but now the folks doing it are becoming more confident in programming and they can start taking on more things. And I think that’s just lowering the barrier to entry so you can become much more productive.
SARAH GUO: Yeah, my general optimism and field of work make me a very strong “yes” on this question overall. And then, something between age and orneriness, I’m like, nobody knows how to manage memory anymore, right? Like nobody understands hardware. And so I think that abstractions, components, and tools do create the ability to, I guess, get productive quickly with less system-level understanding. But you know, generally a strong optimist on this. I’ll ask one more long-term question before we go to audience questions, which is really: what do you think the role of a software engineer is a decade from now? You can’t predict more than, like, six months in AI, so this is an impossible question, but I have to ask anyway. Romain, you want to start us out?
ROMAIN HUET: Sure. When you look at the progress we’ve made with AI models: we had this model called Codex in 2021, specifically trained for coding, and I think at the time you could expect Codex to maybe write a few lines of code at best. And then, of course, GPT-3 more broadly was a model that we think was pretty good at, like, five-second tasks; you know, it could take on some of those. Now fast forward to GPT-4, and you can reasonably think that most of the tasks you give to GPT-4 are like five-minute tasks, especially for coding, you know, quick functions and so on in your code base. Now what happens if you imagine that models get even better from here? What if you could take on multi-hour tasks with a better model? And so what I usually tell startups and developers who are building products with our models is that you have two strategies to think about our models.
Either you bet that the models are the best they can ever be, and so you try to plug the holes for your particular use case, or you bet that they’re going to get even better, in which case you plan for that future. And I think over time, if we plan for that latter category, which I strongly believe in, then the role of a developer will be much more about, you know, orchestrating agents in some ways, prompting these models to direct them in natural language toward what you want to accomplish. And in this future, I think we’ll be spending a lot less time on tedious tasks like writing unit tests and functional tests… the AI models are going to be fantastic at this. But conversely, you’ll be able to bring more of your taste and more of your, you know, product design expertise, and direct these models in a better way to achieve your goals. So yeah, I think it’s going to be super exciting to be an engineer and a developer in the years to come.
SARAH GUO: Ainsley?
AINSLEY ESCORCE-JONES: Yeah, sure. I think maybe you have to think of a software engineer more abstractly than a person who writes code, and more as, you know, a person who bends computers to some kind of human will, in some sense.
SARAH GUO: That’s a better definition, yeah.
AINSLEY ESCORCE-JONES: Yeah.
SARAH GUO: Yeah.
AINSLEY ESCORCE-JONES: Yeah, because, I mean, I’ve been in the industry for maybe over a decade now, and even I don’t do that the same way that I used to, although, you know, I still do the same kind of thing. I think more people will be able to do that, like more people will be able to get computers to do things that they find useful or interesting, but I think there are always going to be gradations of that. So I think really what’s going to change is that we’ll probably just be more ambitious in what we think is even possible, and the tooling that we have around us will make that perhaps equivalently difficult to how it is today. That’s kind of my sense. When exactly, I have no idea. I don’t know if I’m talking about two years in the future, three years in the future, or, like, six months in the future at this point.
SARAH GUO: Lee, what happens? Are we all just frontend folks and everything else disappears?
LEE ROBINSON: You know, in some ways, the product experience becomes the most important thing, and I think we’re already trending in that direction in some way. But if we make two assumptions, which I think are good assumptions, which are that the cost of models will go down and the models will just keep getting better, it will definitely change how we all build products today. I don’t know the exact timelines, but at least what I’ve been advising people who are just getting started in their careers, especially as a frontend developer myself, I tell a lot of the people who call themselves frontend developers today: frontend, backend, maybe that distinction doesn’t matter as much in the future, which is why you’re seeing a lot of product engineer and design engineer type roles.
It’s these people who are wielding these tools, whether they’re large language models or React Three Fiber or some kind of 3D tools, and they’re using them to make really incredible product experiences. And if you imagine a world where we have AI agents that can go write a bunch of code in the background for you, I think that becomes even more powerful. You’re spending more of your time on the end customer experience.
SARAH GUO: Scott, anything to add here?
SCOTT DENSMORE: I think the future is basically systems thinking. Today we think about code; what we all want is to think about the systems that we live in and how we program those systems. I think 10% of the planet will be programmers, and I think it’ll be something similar to Star Trek, where you actually interact with the system in natural language, you ask for things to happen, and those things happen without you having to worry about them. 10 years is a long time.
SARAH GUO: 10 years is a really long time, yeah. I think a question that often gets asked is, should you still learn to code? What’s still worth understanding as an engineer? And I really like Ainsley’s definition of a software person: you’re trying to bend computers to do your will. Computers can do many things, and that doesn’t necessarily mean, “Oh, I’m going to spend a couple hours slogging through this API documentation making sure I understand exactly how to deal with this edge case,” right? It might be something a bit broader than that, hopefully much sooner than 10 years from now, that is systems-level thinking. But one observation that I keep in my mind right now is, there are 25 or 30 million professional software engineers in the world, many of whom use Stripe and all of these products. But if you think about the number of people who could benefit from bending computers to their will, it’s a lot closer to 8 billion than 30 million, right?
And so I think the ability to... Lots of people here I think will have heard the term “schlep blindness.” I think about this concept a lot. When something just looks like it’s going to be an insurmountable amount of work, we turn away from it. And that happens in many engineering projects. And so my most optimistic view on it, which is my real view, is we are going to get a lot more ambitious. You’re going to have teams who can produce a lot more, and actually there’s relatively unlimited demand for bending computers to your will in the world. And so I think it’s still going to be pretty important to understand how to do that, just the form of it will change.
Okay, I do have some questions from the audience, so please submit since we’re going to go to that now. Okay, a question from Ethan, “What is something that’s changed your own personal developer productivity over the years?” And you can’t say Copilot, Scott.
SCOTT DENSMORE: What has changed—
SARAH GUO: A practice, a tool, anything. Yeah.
SCOTT DENSMORE: A tool? I’ve been in developer tools all my life, so picking one is going to be really hard. I’m not going to say Copilot, but things like the ability to see all the changes around me. If you’re in your favorite editor today, you see all of these changes happening around you, all the information there in front of you to keep you in that flow state, rather than what I grew up with, which was just a text box that you typed things into and hoped it compiled when you hit “Compile.” I think that’s the kind of thing that makes me productive.
ROMAIN HUET: I can go. I’m guessing mine will not be very original, but I think ChatGPT, frankly. Using it as a brainstorming partner when you’re starting a project or you’re about to refactor something.
SARAH GUO: Predictable.
ROMAIN HUET: Yeah, it’s pretty awesome.
SCOTT DENSMORE: Why did he get to say ChatGPT?
SARAH GUO: Yeah, no, no, no.
Okay, okay. You can’t say Stripe, and you can’t say Vercel. So that’s the answer.
ROMAIN HUET: Okay.
SARAH GUO: Yes, we know…
ROMAIN HUET: Good point.
SARAH GUO: We know these things are interesting, yeah.
ROMAIN HUET: Do you want to go?
AINSLEY ESCORCE-JONES: Yeah, I’ll go. For me, I guess I started programming when I was pretty young, but by that, I mean I learned one language, which was PHP, and then only wrote that language for a really long time. And then I went to college and they forced me to learn a bunch of different paradigms in a bunch of different languages. And for me, the biggest thing that changed was that I stopped thinking about how I would solve this problem in one language and started thinking more abstractly about, you know, what is the shape of the problem that I’m trying to solve, and stopped thinking of myself as a PHP dev or a Ruby dev or a Go dev and, once again, just someone who can get computers to do roughly what I’m trying to get them to do. And I think that was probably the biggest shift: just being able to switch paradigms.
SARAH GUO: Is there anything you did actively to cultivate that, or just happened?
AINSLEY ESCORCE-JONES: Honestly, it kind of just happened by force. Unless I was forced to for, you know, coursework reasons, I’m not sure I would’ve taken myself out of that hole and tried a bunch of different things. So you can definitely do this without being told to do it by a professor; it just so happens that that was the reason I started doing it. And then I was like, okay, now every time I see something new pop up on a channel or something, I should try this paradigm just so that I never get stuck in that rut.
LEE ROBINSON: So I can say ChatGPT. No, I’m just kidding. It has, for sure. For me, going back to when I was learning to code, I was very into mobile, and the barrier to getting my site or my app online was something that really had me explore the web much more deeply when I was learning to code. And I had this newfound appreciation for the ability for anybody to just put content online. So I think what’s made me more productive, and what I have a lot of gratitude towards over the years, is the advances in the web platform itself, in the browsers, in the standards bodies. And also, giving a shout-out to Tailwind for helping me write CSS better.
SARAH GUO: One question we have is around... there are a lot of companies and open-source projects touting the value of their AI developer productivity tool. How should we test or tell what’s actually useful? Scott, you build the product; how should we evaluate the product, you know?
SCOTT DENSMORE: So if you’re talking about the tool that you use, the product itself: we test internally. I would do two things, just like a lot of people have mentioned. We do surveys of the developers themselves, and we put a lot of emphasis on that sentiment, but we also collect a lot of metrics, for example, in Copilot: how many completions, how many things have you accepted, how many chat interactions have you had? So we can see whether you’re actually using the product, how much engagement we have, and then how much it’s providing you useful content. So whenever you pick a tool, you should do that. Make sure it has metrics that you can use and measure, and hopefully you already have some measurement of your own to compare to. And then definitely do surveys of your developers, because happy developers make more productive developers.
SARAH GUO: Other recommendations here?
AINSLEY ESCORCE-JONES: I have one thing. Maybe it’s like more of a recommendation for you, Scott, that I would love to be able to measure. Like, internally at Stripe. So you track for example, accept rate, right? Which is like, you know, what is the likelihood that someone kind of takes a suggestion in their IDE? I would love to know how likely it is to make it all the way through to production, right?
SARAH GUO: It’s a good one, yeah.
AINSLEY ESCORCE-JONES: Like, what is the likelihood this gets deployed, like it makes it through the IDE, our code review process, like all unchanged. I think that’s like a really interesting question.
SCOTT DENSMORE: How much AI code has made it to production?
AINSLEY ESCORCE-JONES: Yes.
SCOTT DENSMORE: Got it.
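For concreteness, here is a back-of-the-envelope sketch of the two measurements being discussed: how often suggestions are accepted, and how often accepted suggestions survive unchanged all the way to production. The event shape is hypothetical; neither Copilot nor Stripe exposes exactly this data.

```ts
// Hypothetical suggestion events; real telemetry would look different.
type SuggestionEvent = {
  id: string;
  accepted: boolean;          // taken in the IDE
  reachedProduction: boolean; // still present, unchanged, after deploy
};

function suggestionMetrics(events: SuggestionEvent[]) {
  const accepted = events.filter((e) => e.accepted);
  const shipped = accepted.filter((e) => e.reachedProduction);
  return {
    acceptRate: accepted.length / events.length,
    productionSurvivalRate: shipped.length / (accepted.length || 1),
  };
}

console.log(
  suggestionMetrics([
    { id: "a", accepted: true, reachedProduction: true },
    { id: "b", accepted: true, reachedProduction: false },
    { id: "c", accepted: false, reachedProduction: false },
  ])
); // => { acceptRate: 0.666..., productionSurvivalRate: 0.5 }
```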
SARAH GUO: Yeah, I’ll also give you an investor’s view of this, because we look at, you know, different interesting companies here all the time. And it’s really related to Scott’s answer, which is, I actually think that organizations should be very permissive with these tools. Not because I’m talking my book, but because, you know, developer tools are generally adopted by end users. It’s very hard to make developers use anything, right? And these tools cannot be useful unless there is interaction with them. They need direction, they need integration, they need context. And so our view is that engagement and usage is the first metric. And then, exactly as you said, or sort of baked into Ainsley’s answer, is a question about the quality and effectiveness of the generated code.
So I want to ask you guys about that too, because what we want is functional, maintainable, high-quality code that’s in production, and it’s not yet clear how much of that we’re getting, right? I think it’s quite hard to measure. You know, another question that we have here, which I think is a good one, is: if it becomes very cheap to generate a lot of code, and it is, you might have the… there’s probably a nicer name for this, but kind of the Tesla self-driving problem, right? Which is, is it full self-driving, asterisk, yet? I don’t know, but I’m going to let it drive ’cause then I can twiddle on my phone, right? And so humans are actually very willing to trust these systems when they begin to see them work. Are you concerned, or how do you deal with the fact that you might get, you know, much more and much lower-quality code? Or do you think it’s a concern?
SCOTT DENSMORE: There’s a reason we call it Copilot. We never want to take the human out of this; the developer is the true piece of this that we want to enable. And so we always say, you know, we get better all the time with great models generating better code, but it’s also maybe not the code that you would write. So just like anything else, code that I would write, I would submit; I don’t just check it into main and have it go to production. I usually go through peer reviews, I have, you know, back and forth on it. The same thing happens with any of these tools that you use: you should keep the developer at the center of this so that—
SARAH GUO: I gotta push on this a little bit.
SCOTT DENSMORE: I knew you’re going to say something.
SARAH GUO: Because in real engineering organizations, with reviews, what if you’re reviewing a lot more code and there’s a lot of “looks good to me, check” going on? How do we change the way we do that?
SCOTT DENSMORE: You still have to hold people accountable, right? A lot of people ask me, “What happens when people just keep hitting tab and eventually it writes all the code for me and checks it in?” And I’m like, “Well, hopefully you have someone there checking the code.” And, back to your point, you could even get to a point where you have models that check the code itself, right? You can have this balance where I have the human interaction, but before the human peer review ever happens, I can have the models check the code, so you kind of have these checks and balances. But I think you will always have this human interaction of asking, is this the right thing to do? I don’t think you’re going to get away from that, not any time in the near future.
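A minimal sketch of the “models check the code before the human review” idea Scott mentions: send a diff to a model and ask for blocking concerns before a human ever looks at it. The prompt and model choice are assumptions; this is not how Copilot’s review features are implemented.

```ts
// Sketch: ask a model for blocking issues on a diff before human peer review.
// Prompt wording and model are assumptions for illustration.
import OpenAI from "openai";

const client = new OpenAI();

async function preReview(diff: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a strict code reviewer. List only blocking issues " +
          "(bugs, security problems, missing tests). Reply 'LGTM' if none.",
      },
      { role: "user", content: diff },
    ],
  });
  return res.choices[0].message.content ?? "";
}

preReview(`--- a/checkout.ts
+++ b/checkout.ts
+ const total = items.reduce((sum, item) => sum + item.price, 0);
`).then(console.log);
```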
ROMAIN HUET: And I think what matters, too, at the pace at which the models are evolving, is that they come into contact with reality every day, every week, in every possible use case; that’s what matters to us when we look at what’s happening in the space. For instance, when you compare how you used a model, or even ChatGPT as an interface, six months ago versus today, you’re maybe now more and more familiar by the day with how to use the tool, where it performs better, and so on.
And I think when we look at coding in particular, the more developers use these models every day… the trend we’re seeing overall is that developers, and all humans frankly, want an AI that’s smarter but also more personalized, more context-aware. And so I think it’s very likely that in the future we’ll see developers use these models every day, but in a way that assists them to write code the same way they would, because the models have context on that code base and the code they’ve previously written. And I think as we get into this agentic kind of workflow, developers will have more and more appetite to trust the output of these models, because they recognize themselves, in some ways, in what’s being output.
SARAH GUO: I really like this question, if I can find it again, from Jake, because I think it reflects a reality of any grown-up code base. So, you know, let’s say some code bases are older: poor naming, inconsistent conventions, spaghetti code, stuff like that, where the gal who wrote it has been gone for five years and there’s no documentation. You know, it’s more difficult today to get coherent advice and generations from AI tools there. Is that something you can see improving? Or, you know, how do you make better use of tools in an environment where the code doesn’t actually make great sense?
SCOTT DENSMORE: So I’m going to take this one because we’ve been talking to a lot of customers about this as well. There’s lots of code out there like that; you’re describing the code that I wrote six weeks ago. So today, I think it hits a little bit of what we talked about: it’s not just about these generative models, it’s what you can build on top of them. If you have a large set of code and you can build your own model on top of your code, based on these generative models, you have much more context. And there are two ways to do that: you could build your own fine-tuned model, or you could do something like build a vector index and do retrieval augmentation off of it. That’s what you want, and this is back to the systems-level thinking: more context around the code that you have, not just the code that you’re looking at in your editor. You want it to know all the things, so that, you know, you have access to all of this knowledge.
And then the next phase of that is: I want to take that and turn it into something else. I mean, we have many customers with 50-year-old legacy COBOL code with bad naming, where everything’s named Z. How do you turn that into something? You have to look at the whole and then go, “Do I want to turn this into Java?” or something like that. And these things will come. I think that’s the next phase of this, and that’s where the context comes in, so that you can make those changes over time.
ROMAIN HUET: Yeah, it’s all context, and also the ability to break down a complex problem into multiple subtasks and subprocesses. And when you start at the very top, something like Devin, for instance, from Cognition, is able to create this action plan of, okay, these are all the things I have to go through; and based on all of these subtasks and subprocesses, you can then distribute them to different kinds of models at different tiers, with different trade-offs of quality and cost and latency. But yeah, it’s all of this context, as you describe it, from RAG or fine-tuning, and then the ability to break down an action plan.
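As a sketch of the decomposition-and-routing pattern Romain describes: a stronger model drafts the action plan, and each subtask is handed to a cheaper, faster model where that trade-off makes sense. The routing rules and model names are assumptions for illustration, not how Devin or any specific product works.

```ts
// Sketch: plan with a stronger model, execute subtasks with a cheaper one.
// Model names and the 3-subtask split are illustrative assumptions.
import OpenAI from "openai";

const client = new OpenAI();

async function ask(model: string, prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

async function runTask(goal: string) {
  // 1. Plan with the most capable model.
  const plan = await ask(
    "gpt-4o",
    `Break this engineering task into 3 short, independent subtasks, one per line:\n${goal}`
  );
  const subtasks = plan.split("\n").filter((line) => line.trim().length > 0);

  // 2. Execute each subtask with a cheaper/faster model.
  const results: string[] = [];
  for (const subtask of subtasks) {
    results.push(await ask("gpt-4o-mini", `Complete this subtask:\n${subtask}`));
  }
  return { plan: subtasks, results };
}

runTask("Add pagination to the invoices API endpoint").then(console.log);
```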
AINSLEY ESCORCE-JONES: I also think… I was talking to some of our peer companies a couple weeks ago and they were actually showing us some of the refactoring tooling that they have now, with a little bit of LLM flavor in there as well. And I do think there’s a world we get to where the unowned, crufty part of the code base isn’t something that people are so terrified to actually interact with, because we now have tooling that makes it—
SARAH GUO: That’s the most optimistic thing I’ve heard in this discussion, yeah.
AINSLEY ESCORCE-JONES: We may be able to go into them and fix them, because the bar gets significantly lowered to actually start making changes within those places, right? What if a pull request came to you that was just like: this thing hasn’t changed in four years, I see that you’ve been migrating to a new abstraction over the last couple of PRs, this piece of code could be migrated to that abstraction and would no longer be crufty or old or scary? I would love that, and it’s maybe not worth it for our most senior engineer to go and do that work when they could be doing other things. But there is tooling that could exist in the near future that would go and do that.
LEE ROBINSON: And I guess this makes the case for good commit messages or good PR descriptions. Because then you have not only your code but its evolution over time, with context about, you know, why you added that thing, or code comments, you know, if you write them. Any of those could help.
SARAH GUO: What recommendations do you have for, you know, engineers and engineering leaders about how to progressively adopt this stuff? I mean, there are different takes, right? You could allow anybody to use, sort of, end-user code completion. Or it’s like, oh, we’re going to try building stuff internally, something that understands our context. Or starting with specific use cases, like test generation or documentation generation, which I think are very common ones. Where do you guys think is the right place? Where do you start yourselves?
AINSLEY ESCORCE-JONES: We just make it really safe and easy internally to experiment with LLMs, right? Across all of the different models... I mean, there are so many models coming out, and so we have our own internal proxy that lets you just choose one, and also try the same prompt across a bunch of different ones and compare latency or how good the answer is. But also to do that safely; obviously, Stripe is regulated and you can’t just be firing off LLM requests to anyone’s cloud. So how do you host those internally, or somewhere that you can trust? And then once you have it safe and rapid to iterate, you just let people go and do a bunch of experiments and try stuff out. Internally within developer infrastructure, we have this shared Google document that’s just a list of things that I think would be cool for us to try out. Either I’m going to get to it or someone else will, and they just throw a demo in there and we’ll check it out, and if it’s good, we’ll gradually adopt it. So that’s at least our strategy.
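A toy version of the comparison harness Ainsley describes: run the same prompt against several models and compare latency and output side by side. Stripe’s actual proxy is internal; the model list and prompt here are just examples.

```ts
// Sketch: compare the same prompt across models on latency and a preview of
// the answer. The model list is an example; a real proxy would also handle
// auth, logging, and data-handling rules.
import OpenAI from "openai";

const client = new OpenAI();

async function compare(prompt: string, models: string[]) {
  const rows = [];
  for (const model of models) {
    const start = Date.now();
    const res = await client.chat.completions.create({
      model,
      messages: [{ role: "user", content: prompt }],
    });
    rows.push({
      model,
      latencyMs: Date.now() - start,
      preview: (res.choices[0].message.content ?? "").slice(0, 80),
    });
  }
  console.table(rows);
}

compare("Summarize what an idempotency key is in one sentence.", [
  "gpt-4o",
  "gpt-4o-mini",
]);
```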
LEE ROBINSON: Yeah, I think there are two parts here. One is the use of AI as a consumer, and the other is the use of AI as a developer to build tools for developers. On the first one, my recommendation would be to try to inject AI into as many use cases as you can as a consumer. Maybe it’ll help you write that paper faster, or get you your first draft, or help you write your documentation faster, which it certainly has helped me do, you know, finding words where I could reduce duplication or avoid idioms and things like that.
You can kinda put your style guide directly into a GPT or into some model directly. For developers building products for developers, if you’re trying to get up to speed and understand all these tools… I mean, it’s cliche, but my recommendation is to get your hands on and actually build with some of these things, even if it’s the most basic chat bot. Just getting the texture of actually building something in JavaScript or in Python and seeing what it actually feels like to integrate with an LLM might give you a lot of ideas for how to bring that back to your company and find some, you know, LLM-augmented experience for your product.
SARAH GUO: I think it’s interesting that this has become the most upvoted question, from Antonio, because it’s actually about the other functions that we all work with. The question is essentially: I think it’s broadly believed that you’re going to see developers become more productive; you know, you’re going to see more 10x and 100x engineers, in large part because of this tooling, but also just, you know, industry progress overall. How do those philosophies and tools apply to other teams and roles in a company? It could be, you know, product or design or anything else that engineering interacts with. Or do they apply at all?
SCOTT DENSMORE: I would pick one that’s often the most forgotten, but one of the most important: your support staff. A lot of times these folks can be technical, and we find that they want to have access to things. I know with our support staff, you know, everything is open, so they go to GitHub and they look at code and they look for ways problems can be solved, but now, internally, people are using Copilot to actually do that. Rather than just having to dig through the code, they can ask more natural-language questions. So that’s one of the functions that’s actually benefiting from using these types of tools.
ROMAIN HUET: Yeah, and I think you mentioned earlier that there are maybe 50, 60 million developers in the world, and all of them are going to become increasingly more productive; it’s already happening every single day. But what’s interesting is, when you look at any typical organization, you have people in finance, in HR, in product, sometimes in marketing, and they all sometimes depend on engineering bandwidth for some of their day-to-day tasks. And they’re always blocked on it. And of course, developers are such a scarce resource that you can’t really get access to them easily. But now, what if you have a billion developers? Because all of a sudden, with tools like ChatGPT, they can write a little bit of code just to get themselves unblocked. Even at Stripe, for instance, when you use Sigma, you don’t really have to understand SQL anymore. You can just write in natural language what you’re trying to accomplish, and the SQL is written for you. It’s so amazing to see that everyone will become more productive, including those who are developer-adjacent.
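As a minimal sketch of the natural-language-to-SQL pattern Romain mentions: give a model a schema and a plain-English question and ask for SQL back. The schema, prompt, and model are assumptions for illustration; this is not how Stripe Sigma is actually implemented.

```ts
// Sketch: natural language in, SQL out, grounded in a known schema.
// The schema is made up for this example.
import OpenAI from "openai";

const client = new OpenAI();

const schema = `
CREATE TABLE charges (
  id TEXT PRIMARY KEY,
  amount INTEGER,      -- in cents
  currency TEXT,
  created TIMESTAMP
);`;

async function toSql(question: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: `Translate the question into a single SQL query for this schema. Return only SQL.\n${schema}`,
      },
      { role: "user", content: question },
    ],
  });
  return res.choices[0].message.content ?? "";
}

toSql("Total USD charge volume per day over the last 7 days").then(console.log);
```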
SARAH GUO: Yeah, I actually, I love this insight because I think there’s a way to look at software engineering and then just technical skills, right? And technical knowledge. So if it is support or somebody who just needs data in an organization, I think that’s really exciting, yeah.
LEE ROBINSON: Yeah, I’ll take design, because I think this entire field has always had the elusive handover problem, where it was like, I go from Figma or Sketch or Adobe and I need to bring it into, you know, some developer tool. And I think it’s made progress with more interactive, rich prototyping in those tools. But the best designers have always, especially when building for developer products specifically, had their hands on the code and have liked to build as well. And I’ve seen, you know, working with great designers, their ability to really quickly get real prototypes up and running and hack on those with real code, now thanks to using LLMs more.
SARAH GUO: Okay, I have one fun question to close us off here. And it is not a particular comment on this company, it’s just a product everybody will know and it’s kind of memetic. DocuSign is a large company. If you want to replicate DocuSign as a software product, not any of the rest of the organization, how many engineers would it take in 2026?
SCOTT DENSMORE: 10.
ROMAIN HUET: Five.
AINSLEY ESCORCE-JONES: I was going to go lower.
SARAH GUO: There’s only one right answer here, I guess.
AINSLEY ESCORCE-JONES: Yeah, yeah, I was going to go one, zero. I guess [it] depends [on] what you define as a software engineer at that point.
SARAH GUO: Oh, okay. Can’t get any better.
AINSLEY ESCORCE-JONES: At like the very core of it.
SARAH GUO: Yeah.
LEE ROBINSON: To do the core product, like, today, it’s kind of one; there are open-source clones that can do quite a bit. Then you get into enterprise features, RBAC, and then it’s like, okay, now I need Devin to go write some of that code for me or something. But yeah, maybe one or two.
SARAH GUO: Okay, I think that is very exposing of people’s beliefs of how much productivity we’re going to get. So I hope that’s true. We’re going to get a lot better software. Thank you, guys, so much.
ROMAIN HUET: Yeah.
SCOTT DENSMORE: Thank you.
AINSLEY ESCORCE-JONES: Thank you.
LEE ROBINSON: Thank you.