Daniela Amodei of Anthropic on how she keeps her head (and her principles) amid an AI media frenzy

Amodei cofounded Anthropic with her brother, Dario. The company wants to keep humans at the center of the AI story.

For Daniela Amodei, cofounder and president of the generative AI company Anthropic, the sudden explosion of interest in her field has been an opportunity for reflection. “It can feel overwhelming to try to think about how the whole world is reacting to AI,” she said. “We have to internally ask ourselves, why did we do this? Why did we start Anthropic? Which is that we want to help make AI systems safer from day one.”

Since February, Anthropic—which is registered as a public benefit corporation and specializes in AI safety and research—has announced a partnership with Google, closed a $450 million funding round, and released its Claude chatbot (including the newest model, Claude 2, released yesterday). Its previous two years were a scramble to create a large language model while scaling up a company that’s also publishing research—and leading the discussion of artificial intelligence in the worlds of media, politics, and tech itself. Amid this growth, Anthropic has turned to Stripe—trusting that the scalability and international focus of Stripe’s payments platform will allow it to serve a rapidly growing number of customers worldwide.

Amodei spoke to Stripe about why she believes such a broad mission is so critical—and why she thinks a company whose purpose is to benefit humanity at large can succeed in a crowded, competitive market.

What’s behind the name Anthropic? Why was that the word you chose?

One reason is that “anthropic” means relating to humans, and a lot of what has been important for us, as we’re working on these ever more powerful generative AI tools that are interacting with the world, is wanting to make sure that humans are still at the center of that story. We hope people use Claude as a partner or a collaborator that helps humans do the things that they want to do and live the lives they want to live. We also make sure that humans are at the center of our process, whether it’s reinforcement learning from human feedback or just thinking about how AI is going to impact the world more broadly.

How did you narrow that mission into a specific set of projects to work on when you launched?

When we first left OpenAI, we had to build a large generative language model again from scratch. So our first year was about getting a company up and running: fundraising, the nuts and bolts, and training this large language model.

But we were also doing the safety research on top of it to try and make it the safest model on the market. When we started, we had something like six teams, and the majority of them were working on safety on top of, or directly in, the model. We always had plans to deploy a product, but really until about six to nine months ago, we were a research shop.

What do you think the benefit is of having physicists work on these issues? Was that an intentional choice?

My brother Dario, who’s the CEO, has a PhD in physics, and so does our chief science officer, Jared Kaplan, who’s also one of our cofounders. So some of it was a network thing. But I also think a lot of the work that our team was best known for was done by people with that background. Probably the two things we were best known for are being the team that developed GPT-3 and the team that wrote this paper about scaling laws, which is about how to predict when certain capabilities will emerge in generative AI systems. That research came from applying physics principles to AI.
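The intuition behind those scaling laws is that loss falls off as a smooth power law in quantities like model size, which is why results from a handful of small training runs can be extrapolated to much larger models. Below is a minimal sketch of that kind of fit, assuming the power-law form from Kaplan et al. (2020); every data point and constant here is invented for illustration and is not Anthropic’s methodology or numbers.

```python
# Toy illustration of a scaling-law fit: loss modeled as a power law
# in model size, L(N) ~ (N_c / N) ** alpha. In log space this is a
# straight line, so a simple linear fit recovers the two constants.
# All data below is invented for illustration.
import numpy as np

# Hypothetical (model size in parameters, observed loss) pairs
# from a series of small training runs.
sizes = np.array([1e6, 1e7, 1e8, 1e9])
losses = np.array([5.1, 4.2, 3.5, 2.9])

# log L = alpha * log N_c - alpha * log N  ->  fit slope and intercept.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), deg=1)
alpha = -slope
n_c = np.exp(intercept / alpha)

# Extrapolate the fitted curve to a much larger, hypothetical model.
predicted_loss = (n_c / 1e11) ** alpha
print(f"alpha ~= {alpha:.3f}, predicted loss at 1e11 params ~= {predicted_loss:.2f}")
```

The real research is far more involved, but this kind of smooth, predictable curve is what makes it possible to forecast a larger model’s performance before training it.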

You’ve been working in this field for a while, but it feels like the rest of the world just discovered it all at once. What’s it like to be a founder of a company in the middle of this hurricane?

I was telling my husband that sometimes it feels a little bit like when you’re running down stairs really quickly. You can’t think too hard about it or you’re gonna fall. That’s not to say we’re not self-reflective, because the thing that has felt really important to me is being grounded in our principles and owning our part of it. It can feel overwhelming to try to think about how the whole world is reacting to AI, how different groups and stakeholders are interacting with it. So we have to internally say, why did we do this? Why did we start Anthropic? Which is that we want to help make AI systems safer from day one.

How do you sell investors and potential partners on the value of caution and deliberation?

I think trust and safety is something that the market wants. We think this is the correct thing to do from a moral perspective, but it’s also good for business. Of course, there might be cases where those two things conflict, but in our experience we’ve felt like they can be fairly synchronous.

Individuals and businesses don’t want models that produce harmful, dishonest, or unhelpful outputs. If you’re like, “Hey, can I give you a language model? It will lie to you about half the time, and it will probably produce toxic content,” no consumer is going to say, “That sounds great, I would love to pay you for that.” People are saying they want the false positive rate to be much closer to zero than what’s currently available. They want the safest version of the model.

What do your days look like?

I’m not naturally a morning person, but I’ve shifted my schedule to be one. I work out early almost every morning and then spend some time with my son, Galileo, and my husband. It helps me go into the day in a good mood.

I try as much as possible to have a couple of hours free first thing during the work day to tackle big projects and have some open thinking time. The rest of my day is tied up in meetings; I have much more of a manager’s schedule than a maker’s schedule. It’s a combo of one-on-ones, team syncs, external events, strategy and decision-making meetings, recruiting, and other things. I always try to build in some space to meet with new hires and hear about their experience at Anthropic, though this is getting harder as we grow!

Tyler Cowen recently wrote in Bloomberg that AI could mean the end of large corporate structures as we know them because the product manages itself. Do you see Anthropic eventually needing to staff up?

The unit of value created per researcher is probably higher than what you might see at a traditional business, but you still need a certain number of people to sell your product and do customer support and quality control. When we developed Claude v1, we were about 60 people. We’re now closer to 140, and most of that growth has been on the business and product side, doing customer support and trust and safety.