
AI’s most important architects can be neurodivergent people. Here is why

Neurodivergent voices are often excluded from policy discussions because their needs are seen as too complex or niche. But when they share lived experiences, they make these needs visible to policy-makers.


What if the missing ingredient in ethical AI is the untapped creativity and lived experience of neurodivergent minds?

I work at the intersection of storytelling, marketing, psychology and technology, a space where nuance is everything and where neurodivergent people have something powerful to offer. I have seen how much more creative, ethical and human our digital future could be if people like me were in the room when big AI decisions are made.

I spend my days translating complex technology into stories that resonate and campaigns that connect. Yet, even in the world of storytelling and branding, the perspectives of neurodivergent people are too often missing from the conversation. As AI quietly shapes decisions about our health, jobs and education, the urgency to design technology that is ethical and inclusive has never been greater. Neurodivergent individuals with autism, ADHD, dyslexia and other cognitive differences are not just users of AI. We could be its most vital architects.

Neurodivergence: The hidden asset in AI ethics

In marketing, we are trained to spot patterns, challenge assumptions and find new angles. Neurodivergent minds excel at these skills. According to a 2022 Deloitte report, teams with neurodivergent members are 30% more productive in innovation-focused roles. Brands such as SAP and Microsoft have launched neurodiversity hiring programmes that are transforming their creative and technical teams. And CAI Neurodiverse Solutions increased neurodivergent hires by 38% by rethinking recruitment.

Despite these successes, most AI frameworks still reflect neurotypical assumptions. They often overlook the communication styles and behaviours that make neurodivergent individuals unique. The result? Tools and campaigns that misinterpret or even exclude the very people who could help them break through the noise. Sometimes, it feels like being the only person at a party who knows the punchline to the joke, but nobody’s listening.

Storytelling: Turning lived experience into strategy

Storytelling is more than a way to share information. It’s how we reveal the impact of technology on human lives. Stories have the power to change minds and open doors. Neurodivergent voices are often excluded from policy discussions because their needs are seen as too complex or niche. But when we share lived experiences, we make these needs visible to policy-makers.

A 2023 study in Nature found that autistic individuals often experience disproportionate harms from algorithmic content moderation. These tools, designed to reduce harm online, can misread and exclude the very people who rely on them most. I once tested an AI-driven image generator for a Gen Z campaign, asking for visuals that felt playful and a little chaotic, to match the campaign’s energy. Instead, the tool kept producing polished, stock-photo perfection, missing the quirky vibe entirely. This is a common pitfall with AI-generated images: they often lack originality and can feel generic, which weakens brand differentiation and misses the emotional connection that comes from authentic, human creativity.

For neurodivergent creatives who thrive on unconventional thinking and nuance, these limitations are glaring. When I share these stories with colleagues, I often hear, “That happened to me, too.” These are not just anecdotes; they are real feedback for the systems we are building. Storytelling is about empathy. When brands and platforms listen to neurodivergent individuals, they unlock new ways to reach audiences and solve problems that others miss.

What happens when neurodivergent people are left out

AI has a reputation for mirroring the biases it finds in society, but it can also be trained to challenge those biases. For example, refining recruitment models to recognize and value neurodiverse traits means AI can become a tool for more equitable hiring. Of course, this is easier said than done. AI advancements often come with their own headaches, especially when public training data bakes in old prejudices. Without clear ethical guidelines, even the biggest players can stumble. Google’s Gemini, for example, produced illogical results in its attempt to sidestep bias. That is why embracing a wider range of perspectives is not just a nice-to-have but essential for ethical AI development.

Excluding neurodivergent people from AI and marketing design has consequences:

• Barriers instead of supports: Many neurodivergent people rely on AI and digital tools for organizing thoughts, managing time and communicating clearly. If these tools are not built with us in mind, they become obstacles. Policies that block AI tools in schools or workplaces often forget that, for some, these tools are essential.

• Bias disguised as objectivity: AI trained on limited data can misread neurodivergent behaviour as unprofessional, penalizing people for being themselves.

• Workplace bias in recruitment: Even with diversity hiring goals, recruitment algorithms can filter out neurodivergent candidates who do not fit a neurotypical mould.

• Missed innovation: Neurodivergent thinkers often spot cracks in the system before anyone else. If we are not in the room, innovation suffers.

• Legal and ethical blind spots: Ignoring neurodivergent needs in tech design risks violating global rights frameworks like the UN Convention on the Rights of Persons with Disabilities. Beyond legality, it erodes public trust.

For a real-time look at progress, see the Microsoft Diversity & Inclusion Report.

Practical strategies for inclusive AI governance

Inclusion is not a checkbox. It means giving neurodivergent people the microphone, the budget and the authority to influence outcomes. Here are strategies I have seen work in marketing and tech:

• Co-design with neurodivergent participants: Involve neurodivergent people at every stage, from brainstorming to deployment.

• Empower neurodivergent-led audits: Create independent teams led by neurodivergent people to stress-test systems for ethical blind spots and edge cases. A simple illustration of one such check follows this list.

• Flip the mentorship model: Pair leaders with neurodivergent mentors so those most impacted by technology guide those building it.

• Explain the black box: Any AI decision affecting someone’s job, healthcare or education should include a plain-language explanation.

• Fund labs that think differently: Invest in innovation labs where neurodivergent creators, engineers and designers prototype new approaches.

• Build continuous feedback loops: Establish long-term advisory councils of neurodivergent users to shape products from ideation through deployment.

• Improve data from the inside: Involve neurodivergent people in building and curating datasets to catch nuance and context others might miss.

• Diversify oversight: Ensure regulatory bodies reflect a broad range of communication and behavioural patterns. The AI Now Institute calls for stakeholder diversity in oversight, and this should be standard.
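
To make the audit idea concrete, here is a minimal sketch in Python of one check such a team might run: comparing the selection rates a hypothetical automated CV screen produces for neurodivergent and neurotypical applicants, and flagging a disparate-impact ratio below the widely used four-fifths benchmark. The group labels, counts and threshold are illustrative assumptions, not data from this article.

```python
# Minimal sketch of one check a neurodivergent-led audit might run:
# compare the selection rates an automated CV screen produces for
# neurodivergent and neurotypical applicants, and flag a disparate-impact
# ratio below the commonly cited four-fifths threshold.
# Group labels, counts and the 0.8 threshold are illustrative assumptions.

from collections import defaultdict


def selection_rates(records):
    """records: iterable of (group_label, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, picked in records:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}


def disparate_impact_ratio(rates, protected_group, reference_group):
    """Selection rate of the protected group divided by that of the
    reference group; values under ~0.8 warrant a human review."""
    return rates[protected_group] / rates[reference_group]


if __name__ == "__main__":
    # Hypothetical audit sample: outcomes of an automated CV screen.
    sample = (
        [("neurodivergent", True)] * 10 + [("neurodivergent", False)] * 40 +
        [("neurotypical", True)] * 30 + [("neurotypical", False)] * 70
    )
    rates = selection_rates(sample)
    ratio = disparate_impact_ratio(rates, "neurodivergent", "neurotypical")
    print(f"Selection rates: {rates}")
    flag = " -- flag for human review" if ratio < 0.8 else ""
    print(f"Disparate impact ratio: {ratio:.2f}{flag}")
```

Numbers like these are only a starting point. The value of a neurodivergent-led audit is pairing them with the lived experience needed to interpret what the system is actually getting wrong.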

Why now and why this matters

As global institutions race to set standards for AI, there is a narrow window to make inclusion the default, rather than an afterthought. The World Economic Forum aims to shape a future where technology is equitable, ethical and innovative. That future cannot be built without neurodivergent minds.

AI and marketing that overlook cognitive diversity are less effective and less innovative. When we design for the full spectrum of human thinking, everyone benefits.

The future is ours to shape

Change is already happening. Companies like SAP and Microsoft show that when neurodivergent voices are empowered, AI becomes more creative, just and human.

The next chapter of AI is being written now. We can keep building systems that misunderstand and exclude or we can open the doors to the full spectrum of human minds.

I believe technology should open doors, not close them. Let us build AI and stories that see us, value us and work for all of us. That is the future worth creating. It starts with us, now.

This article is republished from the World Economic Forum under a Creative Commons license. Read the original article.
