New Delhi: Daniela Amodei, co-founder and president of Anthropic, an AI safety and research company known for its Claude AI models, has broken every stereotype of the typical Silicon Valley ‘tech-bro’ AI founder. Her major? Literature. From a congressional campaign worker to the president of an AI company, Amodei just wanted to make the world a better place.
Over the past week, Anthropic’s pre-IPO valuation has crossed $1 trillion, surpassing that of its main competitor OpenAI, which now stands at $852 billion. Has the Sam Altman-led AI company been ‘beaten’ by the brainchild of a literature grad?
“If you look through my background, you would be like, ‘What is this lady actually good at?’” Amodei jokes in an interview at the Stanford School of Business.
But standing at the frontier of the AI race is a woman driven by curiosity who wants to make sure that safety is at the foundation of an AI venture.
An unlikely founder
Amodei completed her Bachelor of Arts in English literature, politics, and music at the University of California, Santa Cruz. Her passion for music even earned her a partial scholarship.
Amodei was a fresh literature graduate in 2009, when the global financial crisis had just sent shockwaves through economies worldwide and left the job market in shambles. At the time, the question that guided her was: “How do you build something that is of consequence and has a real purpose?”
She forayed into international development and global health, working at Conservation Through Public Health, a Uganda-based NGO that helps humans coexist with biodiversity and wildlife. During a brief stint in politics, Amodei even worked on a congressional campaign.
In 2013, she returned to Silicon Valley and joined Stripe, a financial services platform that facilitates online payments, which was still in its infancy at the time. As a recruiter, she helped grow its team from barely 40 employees to more than 300. She was also one of Stripe’s risk managers, working with engineers, machine learning experts, lawyers, and others to make the platform safer to use.
Then came AI.
At Stripe, Amodei’s work centred around risk, fraud, and the policies to counter them. When she joined OpenAI, she saw the kind of risks surrounding such powerful technology.
“The terminology and the jargon can feel overwhelming at first. I just kept asking questions until I felt like I could understand it,” said Amodei, who carved her way into the tech world through her curiosity.
She admits that while she could not have trained language models herself, there were other ways she could contribute. Making AI safer and more ethical was one such avenue.
Founding Anthropic
In 2020, Amodei and her brother Dario resigned from OpenAI, along with five colleagues. The brother-sister duo had decided that they wanted to build the kind of AI infrastructure where safety was not an afterthought but the foundation.
Anthropic, whose name comes from the Greek word ‘anthropos’, meaning human being, was founded in 2021. The public benefit corporation did not want to build just an intelligent chatbot but a system trained to think about ethics.
In 2023, Anthropic launched the first version of Claude. Since then, the AI chatbot has gone through several upgrades, and Anthropic itself has grown significantly with investments from Google and Amazon, among others.
Amodei calls Claude a ‘patient tutor’, an ‘individualised professor’, and an enabler of work rather than a bot that just gives you the right answers. But what truly differentiates Claude from other chatbots is the way it was trained.
Large Language Models (LLMs) are typically trained through a reward system: if they answer a question correctly, they receive a reward; if not, the reward is withheld. At Anthropic, however, the team chose to train Claude using a framework that allowed it to reason about ethics. The documents used include the UN Universal Declaration of Human Rights, Apple’s terms of service, and others.
In January 2026, when Anthropic published Claude’s 80-page Constitution, the subtitle on the page read ‘our vision for Claude’s character’.
“We really want the models to understand a broader system of ethics. Over time (this) not only increased the ethicalness of its answers, but it also increased its overall intelligence and capability,” said Amodei in an interview with Sixth Street on February 19.
Race to the top
The world knows about the race to the bottom. Standards are lowered, and regulations are relaxed—all for a competitive edge. But Amodei seeks to create a new race. The race to the top.
By sharing Anthropic’s research about safety and responsibility, Amodei wants to set a higher standard that others in the industry can follow.
“Safety means taking a form of radical responsibility for the technology that we’re developing,” she said.
However, the Amodei siblings, who set out to make AI safer and more ethical, had to put their foot down when the US Department of Defense demanded that Anthropic allow unrestricted use of its technology.
A $200 million contract ended. Anthropic was labelled a ‘supply chain risk’, a designation reserved for foreign companies like the Chinese telecom company Huawei and Russian cybersecurity firm Kaspersky. Lawsuits began.
The conversation around security, AI, and Anthropic is far from over. For now, the company has withheld the release of Mythos, a new model that carries the potential for cyber warfare. These are the moments, Amodei says, when the team returns to its original mission.
“We want to get this technology to you as quickly as possible, but it is irresponsible of us to release it until we are confident that all of the patching that needs to be done has been done,” said Amodei.
Social media companies are often criticised for their impact on society. But according to Amodei, at the time, they were simply trying to scale up without considering the consequences. Now, AI companies have seen what can happen when technology is scaled up, and they can prepare better for those consequences, she says.
“We’re gonna be careful. We’re going to say—how can we imagine a world where everything goes right, but also a world where everything goes wrong?”
(Edited by Saptak Datta)

