Accelerating product development in the age of AI
Artificial intelligence is fast becoming a generation-defining technological breakthrough. Hear how product leaders from GitHub and ServiceNow are putting AI to use, from encouraging agile technical experimentation to driving real business outcomes. You’ll also learn how to plan your own AI investments by evaluating emerging use cases and shifting organizational culture.
Panelists
Inbal Shani, CPO, GitHub
Rao Surapaneni, SVP of ATG Engineering, ServiceNow
EMILY SANDS: Welcome. I’m Emily Sands. I lead the info org at Stripe, including our ML infra and data science teams. And we’re here to talk about AI, and in particular, AI in the context of product development.
So, Bill Gates recently wrote on his blog that the age of AI has begun. He’s asserted that it will be as revolutionary as mobile phones or the internet, that entire industries will reorient around the technology, and that businesses will distinguish themselves by how well they use it. So, pretty casual.
With that in mind, I am delighted to be moderating today’s panel. We’re going to go deep on real-life examples of how some of your peers are harnessing AI in their own products and hear from them tips that can hopefully help all of us in our journeys.
So, please join me in welcoming to the stage, Rao Surapaneni, the senior vice president of ATG engineering at ServiceNow, and Inbal Shani, who is chief product officer at GitHub.
Awesome. Thank you both for being here today.
INBAL SHANI: Thank you for having us.
EMILY SANDS: Inbal flew in just for us. Rao’s been hanging out with us all morning.
You know, Inbal, obviously, AI is not new to you, in your current role as CPO at GitHub, but also at Amazon and Microsoft before that. You’ve been at it for decades. So, curious to hear just a little bit about your journey and what you’re seeing in the current period. Like, is there a momentous shift? If so, what’s the shift?
INBAL SHANI: Yeah. So, my journey with AI started actually, I think, more than two decades ago. Yeah, don’t count my age. But I started my AI journey actually in my master’s, where I had a chance to tune genetic algorithms. If you’ve heard of them, they’re a lot of fun; it just takes days to get results. I was building control systems, and the idea was to use genetic algorithms for optimization. So, that was actually my first venture into the world of AI.
And then after that, it was Amazon, where I had the chance to be part of the Alexa organization and build some of the interesting domains. I think the cool one was customer service on Alexa: really the ability to have a conversation with Alexa that helps you solve problems you have with Amazon.
At Microsoft, it was a virtual support agent, again on the conversational side. And then recently with GitHub, it’s the amazing journey that Copilot is, really focusing on enabling developers, productivity, reducing time, and all of that fun.
EMILY SANDS: Awesome. Well, we’re going to talk a lot more about Copilot. But really, like, how different is today? Is it different? Is it just, you know, an extension of the past?
INBAL SHANI: I think it’s different because historically, AI was very niche. You used to build specific algorithms depending on the problem you wanted to solve, and then tuning took a long time, because at the same time that AI evolved, our compute capability was also evolving. So, tuning the models, and really running the models to get a solution or a suggestion, took time. What we see now in the new world of LLMs is more generic algorithms that incorporate a lot of different techniques inside them, and what happens is that you build a model that is actually applicable across a lot of disciplines. So, we’re moving from the world of niche implementations to more generic algorithms, and then each company can build their own implementation around the output they want to get from the LLMs.
EMILY SANDS: So, moving from niche, where it takes forever to run, slow going, one application at a time, to, hey, now we have these great shared foundation models we can use nearly across the board; we just have to figure out the use case.
INBAL SHANI: Yes.
EMILY SANDS: Awesome. And you know, Rao, you also have a super interesting perspective from your work at ServiceNow and before that. I’m curious to hear sort of how you look at today versus two or three years ago.
RAO SURAPANENI: Definitely, today, two things are different. One, as cliché as it may be, is really democratizing AI through APIs. The second is the capabilities: there’s a huge step change in what you can actually do. Applying these two, you can actually deliver applications that feel real and human-like to your end user, whether that’s your employees or your end customer.
EMILY SANDS: And just for everybody in the room, when you say democratizing through APIs, you’re saying, hey, you can now build AI-powered applications without building a model. You just hit an API.
RAO SURAPANENI: Absolutely. My own background started with computer vision and then a little bit of speech recognition, again on both sides: building a platform as well as applying those solutions to deliver to end users.
Now, looking at where we are, the natural language understanding as well as natural language generation is so good that you can actually make it real for your users.
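To make “just hit an API” concrete, here is a minimal sketch of an AI-powered feature built without training any model, assuming the OpenAI Python SDK (the pre-1.0 interface) and an API key in the environment; the model name and the ask() helper are illustrative, not anything the panelists endorsed.

```python
# Minimal sketch: an AI-powered feature with no model training,
# just a call to a hosted LLM. Assumes the OpenAI Python SDK
# (pre-1.0 interface) and OPENAI_API_KEY set in the environment.
import openai

def ask(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I reset my password?"))
```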
EMILY SANDS: Awesome. You know, one of the things that stands out to me about the current shift is, yes, all the things you said, and also how it’s manifesting for the users, right? Like, we’ve all been experiencing AI for a while. It influences our feed. It influences how you route customer service requests. It influences how you do code scanning and figure out dependencies in code. But I think the current period is much more about putting the AI in the hands of the users. In a lot of the generative experiences, the user is engaging directly with the AI, versus it sort of sitting under the hood.
So, before we get to that, I would love to sort of just level-set on existing AI applications very concretely that were central to your business and whether and how you’re evolving those in the context of generative AI. So, maybe, Inbal, starting with you.
INBAL SHANI: Yeah. So, I think for us, specifically for GitHub, there was always AI in code scanning, with a different set of algorithms. And now with the introduction of LLMs, we’re working on improving our code scanning.
EMILY SANDS: And just code scanning, what is it, what do we mean?
INBAL SHANI: Oh. So, code scanning is really the ability to scan your code and identify vulnerabilities in it. It’s part of our GitHub Advanced Security solution, which really aims to scale security to the entire developer population, bringing security to every developer and helping all developers write more secure code.
And code scanning is one element of that. It was really focused on identifying vulnerabilities and giving recommendations to our developers on how to avoid or fix them, so that part always had AI.
But now with LLM, with the work we’re doing with Copilot, it’s really taking it to the next level of taking AI across the stack.
EMILY SANDS: And similarly at ServiceNow, right? You are tagging tickets, you are routing tickets, you are optimizing tickets. You had a virtual agent. That was a bunch of AI happening under the hood, some actually involving the end user, some not. Rao, in what ways have your historic applications evolved with modern LLMs, or are they basically the same?
RAO SURAPANENI: Absolutely. So, ServiceNow sells intelligent workflow platforms for enterprises to route their activity, whether it’s IT service management, HR, or customer service. We always had a variety of predictive applications given text input. A user types in, I need help with password reset. Typically that used to go to an agent who would look at it and say, okay, this needs to go to the IT department. So, that tier-zero agent is no longer required to route that ticket.
So, we have always had that classification as a solution but now with the LLMs, we can make it much more efficient, accurate, and we don’t need to do per customer training. So, now there is a global model, these large language models are so good that you can actually replace those without the additional overhead of managing it per customer.
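As a rough illustration of the shift Rao describes, here is a sketch of zero-shot ticket routing with a prompt instead of a per-customer trained classifier; the department list, prompt, and route_ticket() helper are hypothetical, not ServiceNow’s implementation.

```python
# Sketch: route a ticket with a general LLM, zero-shot, instead of
# training a classifier per customer. All names are illustrative.
import openai

DEPARTMENTS = ["IT", "HR", "Customer Service", "Facilities"]

def route_ticket(ticket_text: str) -> str:
    prompt = (
        "Classify this employee request into exactly one department: "
        + ", ".join(DEPARTMENTS)
        + f".\nRequest: {ticket_text}\nDepartment:"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic routing
    )
    answer = response.choices[0].message.content.strip()
    # Fall back to a human triage queue if the model answers off-list.
    return answer if answer in DEPARTMENTS else "Triage"

print(route_ticket("I need help with password reset"))  # expected: "IT"
```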
EMILY SANDS: So, that means you can launch a higher-performing model for customers faster, cheaper?
RAO SURAPANENI: Absolutely.
EMILY SANDS: All right. So, we talked a little bit about existing applications and how they’re evolving with LLMs. There are also a bunch of folks in the room thinking, hey, I don’t have AI applications, or, you know, I’ve already evolved them for the current tech. There are a bunch of net new things we can all build today because of modern LLMs. We saw one that we spun up quickly here at Stripe this morning: asking Stripe Sigma a business question in plain English.
I’m curious, concrete examples of what’s top of mind for you, either stuff you’ve shipped recently or stuff that’s on the horizon? And I guess, Inbal, Copilot, such a standout. Maybe we can just start there.
INBAL SHANI: Sure. So, in GitHub, we’re kind of splitting the way we’re thinking about our products into four pillars: productivity for developers, security for developers, the ability to scale your application, and, of course, AI. And Copilot, the AI pair programmer that we launched, and then Copilot X, which is a recent launch, is really starting to take AI across the developer platform and across these four pillars.
So, if you think about how we are thinking about AI, AI is a critical and important ingredient across all our platforms, all our solutions, with the idea to make the code more secure, to make developers more productive, to be able to scale your application, to be able to do more things than you were able to do before when you use GitHub. And again, it’s taking away a lot of that boilerplate type of work with the introduction of AI.
EMILY SANDS: Okay. That’s like a lot, a lot, a lot.
INBAL SHANI: That’s a lot.
EMILY SANDS: My head is spinning and I’m imagining like okay, like, I’m going to develop this new product, it’s going to be like this bang-up, amazing—by the way, Stripe loves it. We released it to some of our developers in a pilot and like the people were clamoring for more. So, we rolled it out to 100% last month.
I’m curious. What was the journey to get there? What did you start with? How did you, like, make sure the teams were on the right thing? Just like where do you even start?
INBAL SHANI: Yeah, so, we do have GitHub Next, which is kind of our incubation hub, and we’re fortunate to have this group of scientists whose entire role is to figure out what’s next and exciting for developers and how we can continue evolving our tools: what is our three-year horizon when we’re talking about developer productivity or developer security.
So, Copilot basically started from there and then we built that partnership with OpenAI and we were able to ship Copilot as part of that.
So, it started as an idea of what can we do more for developers. And it came a lot from looking into signals. What are some of the things that developers are doing that are repeated, that maybe we can introduce a pair programmer and take it away from them? How can we accelerate productivity for developers even more?
So, the idea of Copilot, the pair programmer, came from there. And then we’ve seen the success of it, like, oh, that’s amazing. If I’m throwing out numbers: 65% in terms of increased productivity, 75% increased happiness among developers, and 46% of code written by Copilot when customers are using it. And that is a step change, because imagine you can take that to your company and improve developer productivity just by incorporating Copilot.
And now we’re on the journey in terms of okay, so, what’s next?
EMILY SANDS: Yeah, awesome. You talked a little bit about sort of this incubator, accelerator situation you had internally, and it struck me that, hey, that group is really optimizing for like the user needs, the business problems, but with a close tie-in to the tech. Like, what’s possible. And I see a lot of companies struggle, like, marrying the two. Right? There’s like one group off thinking about the business or user problem and there’s one group off thinking about the technology and like, I don’t think the right solutions are going to be built from either in isolation. We really need to marry them.
So, I’m curious. Rao, anything you’re doing at ServiceNow to bring together the business side with the AI so we can get the right products to market quickly?
RAO SURAPANENI: So, it is definitely not isolated to AI itself. We have done this throughout our journey at ServiceNow. We usually have an ideathon. You don’t need the technology; you don’t even, sometimes, need to be looking at the customer challenges –
EMILY SANDS: So, ideathon is distinct from hackathon because it’s not just hackers.
RAO SURAPANENI: Yeah, it’s not just engineering either. Just come up with an idea. It’s more about thinking of the whitespace. What are the possible things that we can solve?
You also have the hackathons, which are usually engineer- or developer-driven. And then we bring in the business side, whether it’s the product leaders or the business owners. And we do a second pass where we try to combine all three vectors, from the whitespace ideas to the incremental ideas, what we can solve for the customers immediately, and then we go from there.
We also measure based on how many of these ideas make it into the next release, and whether it’s a horizon-one or horizon-two idea. So, it’s a relatively streamlined process, but we usually start with a divergence of ideas and then converge on the implementation plan.
EMILY SANDS: Okay. And once you have an idea that you think you want to run with, what does success look like? How are you setting OKRs around these projects? A bunch of the AI applications have inherent uncertainty. They are not deterministic. So, are you optimizing for model performance? Are you optimizing for business outcomes? Like, what’s your North Star?
RAO SURAPANENI: Our North Star is always customer outcomes. We look at that as the outcome, whether it’s generative AI, or AI, or even just a UX innovation in how you present the information. Absolutely, generative AI gives you new capabilities and also new challenges.
So, the inherent uncertainty is there but we need to figure out how to de-risk it and to have those checkpoints along the way to deliver. We always look for those short feedback loops so that we can build something, get internal validation, build something bigger, get a customer validation, and then go from there.
EMILY SANDS: Yeah. We’re talking a lot about sort of imaginary people in an organization doing something. Who are these people? Are they developers? Are they data scientists? Are they machine learning engineers? Are they prompt engineers? You know? And has that changed over time? Is that changing?
INBAL SHANI: Is the question for me?
EMILY SANDS: Anyone who wants to take it.
INBAL SHANI: I think for us, it’s everyone. Everyone takes part. It’s the data scientists. It’s the engineers. It’s the product managers. The entire company is basically involved in taking AI and thinking about what the developer need is going to be a few years from now, so we know where to invest right now. A lot of it comes from the voice of the customer: from our field representatives, from our revenue teams, through customer conversations. Then it goes back to the business, and we look into what set of problems we should invest in right now and in the future, and that’s where product steps in. And then there is joint work between engineering and the data scientists to really build it together.
EMILY SANDS: Yeah, yeah. Folks are talking a lot about prompt engineers these days. I can’t find anybody with a PhD in prompt engineering. What do you look for in your prompt engineers, Rao? And what are prompt engineers? What are they doing, anyway?
RAO SURAPANENI: Absolutely. I am definitely living that and analyzing what is going on with it. Typically, we always start with the engineering team, because it’s the engineering and science team that I run. Before I talk about prompt engineering itself, I’ll start with the characteristics of a good prompt engineer as I see it.
One, they need to be inquisitive and super comfortable with experimentation. What I see is the engineers just playing with the prompts, seeing what’s going in and then what’s coming out.
They need to be able to experiment, and feel out and identify the patterns in the output. So, I see the engineers doing a pretty good job of feeling out what is going in and then seeing what is coming out.
The part that is missing in that exercise, where I’ve found typical engineers are not generally the best, is the content analysis of the output itself. So, typically, I will see the engineers say, okay, I gave this prompt, the answer looks like this, I am good with it.
But then when we bring in our linguists, people who are classically trained in language arts, they are able to identify subtle things. Is the tone right? Is the empathy right? Is it an active voice or a passive voice? How do you want to present this information back?
So, I feel like the right characteristics would be being inquisitive, experimental, and then actually going into the details of the content that is being produced. It’s not just, the software works, I’m good. You have to get into that next level of the content output as well.
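A sketch of the experimentation loop Rao describes might look like the following: run the same input through several prompt variants and collect the outputs side by side so a linguist can review tone, voice, and empathy. The variant names and prompts are illustrative assumptions.

```python
# Sketch: compare prompt variants on the same input and hand the
# outputs to a content/linguistics review. Names are illustrative.
import openai

PROMPT_VARIANTS = {
    "neutral": "Answer the user's question.",
    "empathetic": "Answer warmly, in active voice, acknowledging the user's frustration.",
}

def compare_variants(user_message: str) -> dict:
    outputs = {}
    for name, system_prompt in PROMPT_VARIANTS.items():
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
        )
        outputs[name] = response.choices[0].message.content
    return outputs  # reviewers check tone, empathy, active vs. passive voice

for name, text in compare_variants("My laptop died before a deadline.").items():
    print(f"--- {name} ---\n{text}\n")
```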
EMILY SANDS: Awesome. Let’s talk a moment about sort of the technology foundations. Many of our organizations have infrastructure teams, ML infrastructure teams, data engineering teams. How have sort of the horizontal platform needs of organizations to enable these AI products changed in the context of maybe you can just hit an API? I am curious; what do you need to be building? What are you excited about building in your own teams? What would you advise other companies to be building when it comes to horizontal foundations?
RAO SURAPANENI: For me, I run a shared AI platform team, and my job is to enable and empower our internal developers to build products using those capabilities. There is definitely now an amazing platform enabled through APIs, but there is a lot more to do around the scaffolding of how we use these products, whether it is experimenting with the prompts; you can make that efficient, just like a developer can be efficient with automated test suites. There can be additional controls on how enterprise data and security are managed. And then ultimately, when the responses come back, especially in an enterprise setting, having a level of confidence about hallucinations, or being able to validate that content, is equally important. So, that is where I am focused with my AI platform team, in addition to whatever else we have been doing around that.
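As one hedged example of that scaffolding, here is a deliberately crude groundedness gate that flags responses whose sentences share too little vocabulary with the retrieved enterprise context; a production system would use far stronger checks, and every name here is illustrative rather than ServiceNow’s or GitHub’s actual tooling.

```python
# Crude sketch of a validation gate: flag a response as ungrounded if
# any sentence has too little word overlap with the retrieved context.
def grounded_enough(response: str, context: str, threshold: float = 0.5) -> bool:
    context_words = set(context.lower().split())
    sentences = [s for s in response.split(".") if s.strip()]
    supported = 0
    for sentence in sentences:
        words = set(sentence.lower().split())
        overlap = len(words & context_words) / max(len(words), 1)
        if overlap >= threshold:
            supported += 1
    return supported == len(sentences)

context = "VPN access requires the GlobalProtect client and an MFA token."
answer = "Install the GlobalProtect client and use your MFA token."
if not grounded_enough(answer, context):
    answer = "I'm not sure; routing you to an agent."  # fail closed
print(answer)  # the grounded answer passes the gate in this example
```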
INBAL SHANI: It is very similar for us. The way we’re thinking about AI is that we’re now trying to create a platform that is AI-specific, and one of the challenges we have is the increasing demand for Copilot and the usage of Copilot. So, we’re investing a lot in being able to extend that platform. It was building a solution; now we need to evolve it into a platform. How are we introducing these APIs? Are we going to open these APIs to everyone? So, a lot of the thinking about the future of AI from the GitHub perspective is how much of a platform we are investing in versus how much of a product we are building.
EMILY SANDS: As we’re all kind of thinking about our portfolio of AI investments, there’s sort of the quick wins, the short-term stuff, right. Let’s let you talk to Docs or whatever. There is also the sort of the longer run bets, right? What if you had a business consultant in your back pocket that could tell you the thing to do to optimize your business?
I’m curious; as you both think about your own portfolio allocation, where are you positioning the teams? How much is quick wins, like there’s a bunch of stuff we could just do approximately out of the box? And how much is longer-run bets? Has that changed at all in the context of modern LLMs?
INBAL SHANI: I think the idea that you need to balance between short-term wins and long-term gains never went away. It’s just that now AI is the thing we’re trying to balance. Of course, there is the need to solve things right now. There is a lot of demand. We put some products out there; customers want more. They want to see a quick evolution. But then we also need to start thinking about, okay, how do we scale it and where do we eventually want to go. I just hinted at the fact that we’re trying to build an AI platform, we’re trying to introduce a set of APIs, and we’re doing all of that. So, that is more of a long-term play for us.
So, it’s really balancing between the problems that are immediate versus the things we anticipate developers will need in the future that we need to start working on right now.
RAO SURAPANENI: Absolutely. I think the long-term strategy, especially if you are starting with the customer outcome you want to drive, is not going to change. How you get there can change, especially as you incorporate new technologies. We do look at small, incremental improvements and validate them. Having those inner feedback loops helps guide you toward that North Star, and that continuity needs to be there. Having a focus on agility, on how we take the next step and the step after, is always going to be key for us.
EMILY SANDS: Yeah. Okay, so you want to be agile. You want to move fast. You’ll sometimes ship stuff to production that isn’t perfect, and I’m curious how you think about quality and human-in-the-loop workflows, especially setting expectations with users. It was different when AI was under the hood; it just ranked your feed and you didn’t even need to know AI was doing it. Now the end user is actually engaging with it directly. So, what’s good enough, and how do you manage expectations?
RAO SURAPANENI: I can start with that. So, human-in-the-loop is an awesome space, especially with this technology. You have to leverage that, and we lead with it today. Just based on the newness of the technology and the concerns about whether or not the product hallucinates, the possibility of hallucination does create friction in the rollout of these products.
EMILY SANDS: And what does it mean, you lead with that? How does that manifest in the experience? Who are your humans in the loop?
RAO SURAPANENI: In a ServiceNow context, you have end users, whether it’s your employees or, in the consumer space, your customers’ customers. For ServiceNow, we support the agents, the agents who are dealing with your IT tickets or HR tickets. They are incentivized, expert humans in the loop.
When AI suggests something, they know the domain. They know what is right, what is wrong. And they are able to, one, help their end user, and two, through implicit actions, they can give you feedback on what is good or what is not. When a product is hallucinating, they can give you that insight. So, you tend to leverage that.
So, the majority of the products you see out in the field today are human-in-the-loop scenarios. A developer is an amazing example of a human in the loop. AI can suggest, but the developer is still incentivized to make sure the code works, tests are run, it fits in well when integrated, and finally, it is shipped. Yes, AI is helping a human do better, but it is still going through that cycle.
EMILY SANDS: Anything you’d add?
INBAL SHANI: I think for us at GitHub, we started with Copilot for individuals, and that was deliberate. The idea was to put Copilot in the hands of as many users as we can, and we are fortunate to have, you know, more than 100 million developers who are part of our community. So, we were able to create a very fast feedback loop. We experimented and experimented, and then we said, okay, enough experimentation. Let’s put it in the hands of the users, start getting more concrete feedback, and look into metrics like acceptance rate and happiness and productivity, measuring the signals and tuning and balancing and creating all the right things before we took it to the more conservative side, because now we’re introducing it to enterprises and businesses, and we know that includes change management and the person in the loop and all of that. So, it is really finding that balance of experimentation and how you can create a faster feedback loop for experimentation.
EMILY SANDS: Awesome. So, find users that are okay being part of the beta, getting it off the ground, providing that feedback loop.
INBAL SHANI: Yeah.
EMILY SANDS: Let’s talk a little bit more about speed. Abe Stanway, who is CTO of Amperon, had a great tweet the other day, which is that all hard problems are just slow feedback loops in disguise. And I feel like a lot of AI is this slow feedback loop in disguise, especially traditional AI, applied ML, where you’re doing it offline and then doing it online with sample data and then doing it online with real data.
Any tips or tricks for just like speeding it all up? I feel like time to market here is so key.
INBAL SHANI: Again, it’s about experimentation. If you’re always looking to optimize to 100% quality, then you’ll ship slowly and get a very slow feedback loop. Because AI is evolving so fast, you really need that shorter feedback loop. You need to start building a community for yourself so that you can put more experiments in the hands of customers, start testing, and shorten that feedback loop. Getting more information, more signals from your customers will help you speed up your ability to innovate and put something in the hands of your customers.
EMILY SANDS: Yeah. Rao, any tips or tricks from you on speeding up?
RAO SURAPANENI: Definitely having access to data, especially customer-pertinent data, and getting as much of that upfront as possible. Just investing in the data acquisition and the pipelines will help you do a lot more upfront before you even get to the customer. That’s always been the goal. And you can shift left with that approach.
EMILY SANDS: Yeah. We talked a little bit about OKRs. Let’s talk about actual success metrics, go/no-go decisions when you do launch something. You talked a bunch about experiments. You put something out there. If it’s a deterministic experience, you can A/B test it. You can see if it performs better than the baseline or control. These AI products are different. They often perform worse to start, and the question isn’t, are they better than what exists; it’s, do they improve sufficiently over time?
So, I’m just curious how you all think about, like, the quality of experience and the success of experiments in deciding whether or not to push forward with something or abandon it.
INBAL SHANI: Yeah. The big one is defining the success metrics you’re trying to measure, because in the world of AI, there are so many signals, there is a lot of data. And if you evaluate the success of AI against a traditional way of measuring things, then often you will miss out on important signals.
So, think slightly differently. Why are you introducing AI in that specific product? What are you trying to achieve? What is your optimal goal? And then look into improvement over time. Yes, sometimes you’ll get a lot of noise on the signal, so creating a bigger pool for experimentation will help you source more data. But eventually you need to put a timeframe on it and say, okay, this is the timeframe I gave my product to be successful, and either I’m seeing the success or I’m not. And then decide whether to pivot.
The signal you collect during that experimentation will give you a lot of indication of whether you’re doing the right things or not.
RAO SURAPANENI: For us, it really starts with setting the expectations. We start with building the product. We have a program called Now on Now: internally, we use our own product. So, we have amazing partners within our organization who actually take the first version and try it out. Sometimes it doesn’t work; we then improve it quickly. So, by the time we get to the customers, we have already evaluated it in a real-world scenario through our own internal beta usage.
The other thing that’s changed with the whole ChatGPT moment: previously, we used to have to prove that AI works. We had to explain why, and what the threshold is, and all of that. Now, we are seeing a step difference in how customers come to us and say, help me leverage this technology so that I can do better for my users. So, there’s a lot more interest in being part of that early customer validation and in how we leverage this.
EMILY SANDS: That’s so interesting. You know, it strikes me that traditional applied AI was all about seamless correctness. You got it right, you measured yourself on whether you got it right, and there was a source of truth for whether it was right. In the current world, it’s kind of like usefulness, right? The end user is engaging directly with the technology. It’s not about being seamlessly correct under the hood. It’s about being useful.
So, are you seeing more talking to users, doing more standard product-market-fit type work? Because the AI is part of the lived experience versus under the hood. And I’m actually curious, in your seat as CPO, how does that manifest?
INBAL SHANI: So, I think GitHub is a very user-centric product to begin with. Engagement with users, with our customers, with our developers, is basically second nature to us. That is how we operate.
With the introduction of AI, it’s really about trying to assess how developers are accepting it and whether it’s really helpful. And there were lots of fears at the beginning. Is it going to take our jobs? Is it going to replace us? And the idea is, no, you still need a human in the loop. But then, how are developers going to be better developers because they now have a better set of tools in their hands?
So, there is a lot of customer engagement when we’re thinking about the evolution of AI and how are we introducing it.
EMILY SANDS: So, we want human-in-the-loop.
INBAL SHANI: Yeah.
EMILY SANDS: We also don’t want to create unnecessary friction for our users. We want it to be more useful than it is a tax. How do you all think about sort of getting the feedback loop naturally into the user experience? Is that like first principles, how you’re building? Is that an afterthought? When are you considering the feedback loop?
INBAL SHANI: At every given moment, because we are looking at acceptance rate and we are looking into feedback—
EMILY SANDS: And acceptance rate is defined as?
INBAL SHANI: It’s measuring whether you accept a suggestion that Copilot puts in front of you. You have a pair programmer suggesting a piece of code based on the prompt you provided, and if you accept it, that counts toward the acceptance rate.
We have surveys that measure productivity, where customers tell us things like whether they were able to produce code in a specific amount of time. And then we’re also surveying businesses to see how much their productivity has improved when using Copilot.
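For concreteness, the acceptance-rate metric Inbal defines reduces to accepted suggestions divided by suggestions shown; a minimal sketch, with illustrative event names:

```python
# Sketch: acceptance rate = accepted suggestions / suggestions shown.
from collections import Counter

events = [
    {"type": "suggestion_shown"}, {"type": "suggestion_accepted"},
    {"type": "suggestion_shown"}, {"type": "suggestion_shown"},
    {"type": "suggestion_accepted"},
]
counts = Counter(e["type"] for e in events)
acceptance_rate = counts["suggestion_accepted"] / counts["suggestion_shown"]
print(f"acceptance rate: {acceptance_rate:.0%}")  # 2 of 3 shown -> 67%
```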
EMILY SANDS: Yeah. Well, you’ve both—oh, yeah, go ahead.
RAO SURAPANENI: I can add to that. The user experience plays a huge role in this. It’s a pretty structured approach you can take. When the AI is highly confident, you can actually default and fill in some fields and move forward. Of course, you still want to indicate that this was suggested by AI.
When we are not so confident, maybe it’s multiple choice, options that are close enough. We actually give that option to the user: you may want to choose between the top two or top three choices. So, that is the spectrum from extremely confident to somewhat confident.
There are also cases where it is confidently wrong, right? So, you always want to have an out when something like a false negative happens.
So, you look at these kinds of situations and see where the AI can apply and act on behalf of the user, where we can suggest and take their input, and where we should just say, you need to figure this out on your own, and here is what I thought. So, you kind of look at it as user experience augmenting AI.
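A minimal sketch of that confidence-tiered experience, with thresholds and field names chosen purely for illustration:

```python
# Sketch: tier the UX by model confidence. Auto-fill when very
# confident, offer top choices when somewhat confident, otherwise
# leave the decision to the user. Thresholds are illustrative.
def present(predictions: list[tuple[str, float]]) -> dict:
    # predictions: (label, probability) pairs sorted best-first
    best_label, best_p = predictions[0]
    if best_p >= 0.9:
        return {"mode": "autofill", "value": best_label, "badge": "Suggested by AI"}
    if best_p >= 0.6:
        return {"mode": "choices", "options": [label for label, _ in predictions[:3]]}
    return {"mode": "manual", "hint": [label for label, _ in predictions[:3]]}

print(present([("IT", 0.95), ("HR", 0.03), ("Facilities", 0.02)]))
# -> {'mode': 'autofill', 'value': 'IT', 'badge': 'Suggested by AI'}
```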
EMILY SANDS: Awesome. You all have shipped a bunch of cool stuff, both like, quick wins on improvements on your current products and also net new products in the current context. I am wondering, like, what did you do wrong? What do you wish you had done differently? What did you learn that you don’t want others to have to learn? Like, you know, imagine I am two years behind you. What do you want to tell me?
RAO SURAPANENI: I’d definitely start with understanding the data and making sure you have the right level of controls, governance, access, and being able to apply that to the product development side.
Definitely getting really rich and robust in that space will help you accelerate and be more agile with what you want to deliver to your customers. I would definitely start there, and that accelerates everything else.
INBAL SHANI: I think the biggest thing is to start with the customer. Understand how AI plays a role in solving the customer’s problem. It is easy to build a cool solution that is amazing, but if there is no need from the customers, then you didn’t really solve the problem. So, always focus on the business need, on the customer problem you are trying to solve, and figure out how you are introducing AI in a way that can help them and that can also increase adoption at a very fast pace. And acknowledge that even when you are introducing AI in front of your customers, they have a learning curve. It is not something that happens tomorrow, even if you build the most amazing AI-powered product. Customers have a learning curve, because it is a new set of solutions and it is going to take them time to adjust and adapt. So, be patient with them and hold their hand while they are going through that learning journey.
EMILY SANDS: And you’ve lived and breathed it. But sort of what are the patterns you look for to know if a customer pain can be solved with AI?
INBAL SHANI: That’s a good question.
EMILY SANDS: Casual question, casual question.
INBAL SHANI: Casual question.
It goes into what your set of tenets is and what your company vision and mission are. What is the problem you are trying to solve? And then, in which part of the business is introducing AI going to help accelerate that? We’re focusing on productivity. That’s our main goal. We want to make developers more productive. And then as part of that, we’re focusing on adding security and the ability to scale. But productivity is the big investment for GitHub. We want to be the home for all developers. Productivity is number one.
So, when we’re looking into AI, we ask where we can apply AI as part of our solution in a way that will help customers be more productive. That is where we started with the pair programmer, and then we identified that there are more needs. But it was really focusing through the lens of our core pillars and who we are as a company. What can we do better if we introduce that ingredient of AI, and where should we start?
EMILY SANDS: When you think about building AI into folks’ core products, this is going to be very specific to what their core product is, but I’m curious whether you think there are applications that everyone, or more than 50% of companies, should be considering. Use LLMs to do X.
INBAL SHANI: That’s a good one. That’s another casual one.
I think conversational AI. Chat is something that is very strong, very powerful. It’s not going to solve everyone’s problem. But the more you introduce the ability to ask questions in a more natural way, those are the areas where you will see more success when you apply AI. I think that’s generally true no matter what business you are running.
If you have an engagement with a customer, introduce the ability to chat, whether it’s asking a question, looking for an article, or doing something through that conversation. That’s basically, I think, one of the most important ingredients.
EMILY SANDS: Anything conversational.
INBAL SHANI: Anything conversational.
EMILY SANDS: And we’re having so many conversations.
INBAL SHANI: Right.
EMILY SANDS: Rao, how about you? Anything top-of-mind?
RAO SURAPANENI: Inbal definitely covered that. For ServiceNow, chatbots and intelligence are a big part of our business. Like I mentioned, whether it is HR or IT or customer service, we power a lot of those enterprise interactions and conversations. That is what we are focusing on.
We do have a few more things coming up but I’ll have to not talk about those at this point. We have our own conference coming up in two weeks and there will be more to learn from that.
EMILY SANDS: Awesome. And we didn’t cover it all: LLM applications within your own enterprise. If you want those, talk to Rao and he can build those products, but I think there’s also a bunch of internal applications we can spin up ourselves.
Awesome. With that, we are in the final few minutes of our time together. It’s been super fun to talk about the current shift to generative AI, what it means for existing applications, and how it’s unlocking new applications. I appreciated all the quick tips on how to get stuff to market quickly, how to have a human in the loop, and how to set expectations with users.
I guess just to close out, I’m curious if you all have a personal experience with LLMs that was really delightful, beyond the business, beyond the product, how you’re using it in your day to day.
INBAL SHANI: Well, a personal anecdote. So, my older son has finished his college application. For some reason, he decided he wanted to go study computer engineering. I had nothing to do with it, I promise.
EMILY SANDS: Not prompt engineering; they don’t have that yet.
INBAL SHANI: Not yet; they don’t teach it. That’s one of the things we’re talking about at GitHub: how do we take what we’ve learned about AI to our education system? We’re trying to figure out what we can do. We don’t have anything concrete yet.
But when he wrote his application essay, I was curious. Okay, what if I ask Copilot to write me an essay for the application, here is the prompt? And actually, Copilot produced a pretty nice essay. Not that we used it. He did it all by himself.
EMILY SANDS: We wouldn’t tell the university anyway. That’s a good one.
INBAL SHANI: Please don’t. But we actually didn’t use it, because I think one of the things is that it didn’t come across as human enough, which is something we have learned. But overall, the structure, the information, the way it was put together: it was amazing. So, that was fun.
EMILY SANDS: It’s truly the generative piece. It’s the creation, right?
INBAL SHANI: Yes. And it was a fun experiment because it was like, wow.
EMILY SANDS: I may or may not have written today’s questions using ChatGPT. Rao, how about you?
RAO SURAPANENI: So, I have a nine-year-old niece. Her favorite character is SpongeBob SquarePants. And she has amazing ideas on what else could be more funny. So, in the summer, we plan to use Midjourney and ChatGPT to see if she wants to write a comic based on her creativity. So, we’re going to give it a try all summer holidays.
EMILY SANDS: Beautiful. I love that. So, that’s a wrap for today’s panel. Huge thanks to Inbal and Rao both for all of their insights. And for those who want more fun, we have our fireside later today with Sam Altman.