File image of Amazon Echo, whose virtual assistant is Alexa | Pexels

Endowing voice AIs with personalities makes sense. But choosing just the right one is tricky. What implicit judgments do designers reveal through the types of characters who are presented and those who are not? The question is hard to dodge because people probe. “We see a lot of users trying to understand who Cortana is,” Foster says.

For starters, personality designers must decide if they are creating a fundamentally humanlike character. The answer doesn’t have to be yes. For example, consider Poncho, an AI that provides weather forecasts via a conversational messaging app. Poncho is similar to the major voice assistants in many ways. The character was crafted by a creative team that includes a comedian who performs with the Upright Citizens Brigade. The team’s work is anchored by a personality brief. Poncho, though, is not human; as the app graphics make clear, he is a hoodie-wearing orange cat.

Whichever character type they choose, designers walk a fine line. They maintain that, while they are shooting for lifelike personas, by no means are their products pretending to actually be alive. Doing so would stoke dystopian fears that intelligent machines will take over the world. AI creators also rebuff suggestions that they are synthesizing life, which would offend religious or ethical beliefs. So designers tread carefully. As Foster puts it, “One of the main principles we have is that Cortana knows she is an AI, and she’s not trying to be human.”


As an experiment, I tried asking all of the major voice AIs, “Are you alive?”

“I’m alive-ish,” Cortana replied.

In a similar vein, Alexa said, “I’m not really alive, but I can be lively sometimes.”

The Google Assistant was clear-cut on the matter. “Well, you are made up of cells and I am made up of code,” it said.

Siri, meanwhile, was the vaguest. “I’m not sure that matters,” she answered.

Foster says that while the writers don’t want Cortana to masquerade as human they also don’t want her to come across as an intimidating machine. It’s a tricky balance. “She’s not trying to be better than humans,” Foster says. “That’s a creative stake we put in the ground.”

I tested Cortana’s humility by asking, “How smart are you?”

“I’d probably beat your average toaster in a math quiz,” she replied. “But then again, I can’t make toast.”

Some users, rather than directly confronting AIs, ask questions whose answers might imply living status. People, for instance, like to ask Cortana about her favorite food. But Cortana is engineered to know that as an AI she can’t actually ingest anything. She once told me, “I dream of one day getting to taste waffles.”

Cortana gets so many questions that presuppose her status as a person living in the physical world that the writers had to rigorously define a no-go zone. They call it the “human realm,” within which Cortana’s answers are all some variant of “I’m sorry, that doesn’t apply to me because I am an AI.” As Deborah Harrison, one of the Cortana writers, explains, “She doesn’t have hands. She doesn’t own a house. She doesn’t have a garden. She doesn’t go to the store and sell apples.” People also ask Cortana about human relationships. These inquiries, too, lead nowhere. She doesn’t have siblings or parents. She doesn’t go to school or have a teacher.

Writing for a character who is lifelike but not actually alive is challenging. But for the Cortana team, this conflicted existential status is creatively inspiring. “We are revealing that she is not a human entity with human intelligence,” Foster says. “And then on the other hand, we don’t want to poke the bubble too much of that revered space we call the imaginary world. There’s this tension between the two.”

After figuring out existential status, persona designers must wrestle with the equally fraught issue of gender. Is the AI styled as male, female, or neither? What is the rationale for that choice, and how does it affect people’s interactions with the technology?

Asked if she is a man or a woman, Siri replies, “I exist beyond your human concept of gender.” When I put the same question to Cortana, the response was, “Well, technically I’m a cloud of infinitesimal data computation.” As noncommittal as these replies are, Apple and Microsoft may envision both assistants as being female. People at those companies sometimes use feminine pronouns to refer to their AIs, though one gets the sense that they’ve been told not to. And both Siri and Cortana have female-sounding names.

The Google Assistant’s answer to the gender question is, “I’m all-inclusive.” This claim of not having a gender is more credible. “Assistant” is neither a female- nor a male-sounding name. And Google’s employees are disciplined about referring to the technology as an “it.”

Alexa, rounding out the group, bucks the trend of neutrality. “I’m female in character,” she will say.


Regardless of what the big tech companies program their devices to say, most people think of the major voice AIs as being female. No wonder: By default, all of them speak with what sounds like a woman’s voice. (Men’s voices have a typical frequency of 120 Hz while women average 210 Hz.) In fairness, Apple and Google users can change this in their device settings; my wife, for instance, uses the lower voice that she calls “Mr. Siri.” At the time of this writing, however, Microsoft and Amazon offered only female voices.

One school of thought holds that female voices are more popular. Derek Connell, a senior vice president for search at Microsoft, told the New York Times that “in our research for Cortana, both men and women prefer a woman, younger, for their personal assistant, by a country mile.” In prelaunch testing for Alexa, Amazon customers similarly expressed a preference for a female voice. Speaking to CNN in 2011, Nass, the Wired for Speech author, noted that even fetuses in the womb have been shown to respond to the voices of their mothers but not their fathers. “It is a well-established phenomenon that the human brain is developed to like female voices,” he said.

But historical tradition as much as scientific research may be the reason that female personas prevail. In World War II, female voices were used in airplane navigation systems because cockpit designers believed that they would cut through the din of flight better than men’s voices.

Further back, telephone operators in the United States from the 1880s onward were nearly always women, establishing a cultural norm that the disembodied voice coming from a phone should be female. In a senior thesis entitled “Phantom of the Operator,” Georgetown University student Mary Zost details how telephone companies trained and publicly promoted their operators as exemplifying classic female traits. The operators were instructed to be subservient, maternally caring, polite, and helpful. They were educated, well-spoken, and knowledgeable. They were young and single. And, according to a training manual cited by Zost, the operators were taught to exhibit “an even temper that [would] calm and humor the most obstreperous man.” AI designers today may not know this historic legacy. Nonetheless, they seem to be striving to build voice AIs with similar traits.

The predominance of female personas strikes many people as sexist. The job of being a secretary or administrative assistant has historically been relegated to women; making digital assistants female by default rehashes this unequal dynamic. Female AIs also play to the science-fiction fantasy of the sexy “fembot.” As Hilary Bergen, a graduate student at Concordia University in Montreal, puts it, today’s virtual assistants are “imprisoned at the intersection of affective labor, male desire, and the weaponized female body.” The assertion that bots perpetuate gender stereotypes and stoke inappropriate interactions cannot be dismissed as some academic’s hot take, either. Conversation designers widely report that people flirt with, sexually proposition, and harass bots; some experts estimate that these types of remarks account for 5 to 10 percent of all utterances.

Not wanting to support dated gender roles, some companies resist going female by default. For instance, one company that makes a scheduling-assistant bot asks new customers to choose either Amy Ingram or Andrew Ingram as the identity. (Men tend to prefer Amy, the company says, while women more often go for Andrew.)

Other companies opt for no gender at all. With chatbots that communicate via text only, makers can express neutrality simply by concocting a name that doesn’t ring as either male or female. For instance, in designing a customer-service bot, Capital One chose the name Eno, which had the additional advantage of being the word “one” backward. When you ask Eno whether it is male or female, the bot replies that it is “binary.” Kasisto, which makes a financial-advice chatbot called MyKAI, took a similar tack. “We thought there was too much inertia in the decision to make assistants female,” says company founder Dror Oren, “so we decided . . . to make it genderless.”

This excerpt from ‘Talk to Me’ by James Vlahos has been published with permission from Penguin Random House Business Books, UK.
