A young recruiter in Bengaluru told me that the algorithm doesn’t discriminate. “It just looks at data.” I wanted to believe him.
As a content and brand marketer who’s embraced tech, I have seen both the upside and the ugly. The upside is speed, scope, and new forms of storytelling. But the same tools can quietly reproduce the old biases that people from marginalised castes, my people, already fight every day. In short, the machine promises neutrality, but the data it eats often isn’t neutral. That is not abstract: mounting evidence from India and global research shows that algorithms trained on skewed data can silence, misclassify, misrepresent, and financially exclude Dalits at scale. The more I see how these systems operate, the more I worry that AI may not free us from prejudice. It may fossilise it.
We often assume technology is neutral, a clean break from the messiness of human bias. Yet algorithms are only as fair as the data they are trained on. And in India, that data is soaked in centuries of caste hierarchy. CVs bearing upper-caste names dominate the records of elite institutions. Dalit- and Muslim-majority districts are tagged as “high risk” because poverty is entrenched there. Content moderation systems, primed by majoritarian sentiment, routinely flag Ambedkarite assertion as “hate speech” while casteist slurs (bhimta, chamar, chandal, ricebag, and so on) slip through unchecked. AI is faithfully learning the old prejudices.
Discrimination hiding behind objectivity
Generative AI image and video models trained on massive web datasets frequently reproduce racial and socioeconomic stereotypes. Prompts for “high-paying jobs” produce lighter-skinned, ‘upper-caste’-looking figures; prompts for low-status work produce darker-skinned figures or visual cues of poverty. Recent academic work quantifies India-centric biases in image output and shows that models equate “Indianness” with higher-caste visual markers while depicting Dalit identities with markers of poverty. This matters for brand creatives, art directors, and producers who increasingly use AI tools for mood boards, casting mockups, and campaign images.
In practice, this means a Dalit actor or protagonist often ends up depicted in stereotyped ways in the AI mockups used to pitch a series, and the brand creatives who rely on those mockups amplify harmful frames without intending to.
Recommendation engines decide who gets seen on YouTube, Instagram Reels, Spotify playlists, and even publisher homepages. If the training signals (likes, comments, reshares) favour mainstream and majoritarian content, Dalit storytelling, whether nuanced caste critique, oral history, or counter-narrative, struggles to break into the engagement loop that decides what gets amplified. Investigations of platform dynamics in India show that majoritarian patterns persist online, and experts warn that algorithmic curation can entrench them.
A Dalit writer’s long-form explainer or a producer’s short documentary gets poor recommendations, not because of quality, but because the model’s engagement priors undervalue its audience or mislabel its tone.
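To make that loop concrete, here is a deliberately simplified toy simulation, my own illustration rather than any platform’s actual ranking code. Two creators publish content that viewers find equally engaging, but the recommender hands out impressions in proportion to past engagement, so the creator who starts with a smaller audience never closes the gap.

```python
# Toy model of an engagement-driven feedback loop. Illustrative assumption:
# impressions are allocated in proportion to accumulated engagement. The
# creator names and numbers are invented for the sketch.

creators = {
    "mainstream_creator": {"engagement": 100.0, "engage_rate": 0.05},
    "dalit_storyteller": {"engagement": 10.0, "engage_rate": 0.05},  # same per-viewer appeal
}

IMPRESSIONS_PER_ROUND = 10_000

for round_num in range(20):
    total_engagement = sum(c["engagement"] for c in creators.values())
    for name, c in creators.items():
        share = c["engagement"] / total_engagement        # past engagement drives reach
        impressions = IMPRESSIONS_PER_ROUND * share
        c["engagement"] += impressions * c["engage_rate"]  # identical appeal for both creators

for name, c in creators.items():
    print(f"{name}: cumulative engagement {c['engagement']:.0f}")
```

Run it and the ratio between the two never changes: the content is equally good, but the reach is locked in by the starting gap. That is the “engagement prior” doing its quiet work.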
The real danger is not just discrimination, but that it hides behind a facade of objectivity. A recruiter can shrug: the system rejected the candidate. A bank can say: the algorithm flagged the loan. Responsibility gets outsourced to the machine, while prejudice is quietly legitimised by the authority of mathematics.
This “tech washing” is especially dangerous in India, where digital systems are becoming the backbone of everyday life: Aadhaar for identity, UPI for payments, predictive analytics for welfare and even policing. A single biased recruiter may block a few careers; a biased algorithm can exclude millions at once, invisibly.
This is not a distant risk. A neo-Buddhist entrepreneur in Uttar Pradesh may be denied credit because his pin code is flagged as high-risk. On social media, Dalit voices celebrating BR Ambedkar can be silenced by moderation systems that normalise dominance but punish assertion. Each decision feels technical, but together they paint a bleak picture: a future shaped by machines that think exactly like the casteist society we live in.
What this looks like in my line of work
- A social media manager who is Dalit finds her Ambedkar-themed campaign repeatedly mass-reported and auto-flagged; reviews take days, and the campaign loses momentum, while posts with casteist insults remain visible.
- A junior copywriter with a non-Anglicised name gets fewer interview calls when an employer uses an applicant tracking system (ATS) to shortlist; internal audits later show the ATS ranked candidates on signals correlated with past hires.
- A brand marketer runs a CSR film about Dalit dignity; ad delivery optimisers serve the film mostly to urban, upper-caste segments (higher click-throughs), so the film fails to reach the marginalised communities it intended to engage.
- A producer from a Dalit background is denied fintech credit because the model flags the project’s shoot location as “high-risk” due to historical economic indicators, slowing production.
There is a fix, and it requires urgency. AI systems must be audited for bias, just as financial accounts are. Datasets must be made more representative, not built solely on histories of exclusion. Companies and governments must be transparent about how algorithms decide who gets jobs, loans, or visibility. Citizens must have the right to question those decisions. Most importantly, Dalit individuals need to be in the rooms where tech is built. Who creates the machine determines whose realities it reflects.
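What would such an audit actually compute? Here is a minimal sketch on hypothetical shortlisting data: count outcomes by group, compare selection rates, and flag the system when one group’s rate falls below four-fifths of the most-favoured group’s, a heuristic borrowed from employment-discrimination practice elsewhere, not a standard anyone has yet mandated for caste. The group labels, records, and threshold are all illustrative assumptions; the hardest part in reality is obtaining group data with consent and care at all.

```python
from collections import defaultdict

# Hypothetical audit records: (self-declared group, shortlisted by the screening model?)
applications = [
    ("dominant_caste", True), ("dominant_caste", True), ("dominant_caste", True),
    ("dominant_caste", False), ("dalit", True), ("dalit", False),
    ("dalit", False), ("dalit", False),
    # a real audit would run on thousands of anonymised records
]

# Tally shortlisting outcomes per group.
counts = defaultdict(lambda: {"total": 0, "shortlisted": 0})
for group, shortlisted in applications:
    counts[group]["total"] += 1
    counts[group]["shortlisted"] += int(shortlisted)

# Compare each group's shortlist rate against the most-favoured group's rate.
rates = {group: c["shortlisted"] / c["total"] for group, c in counts.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    verdict = "NEEDS REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"{group}: shortlist rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({verdict})")
```

The point is not this particular threshold; it is that the check is cheap, repeatable, and reportable, which is exactly what “audit it like a financial account” means in practice.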
For Dalits, the struggle has always been about fighting invisibility: in classrooms, offices, boardrooms, and politics. Now that battle extends to the hidden world of algorithms. India stands at a fork: one path leads to a digital future that automates centuries of discrimination; the other to a more inclusive one, where technology finally does what it promises and levels the playing field. The choice is ours. But every time an algorithm silently decides who is employable, trustworthy, or heard, tomorrow’s India is already being coded.
I believe in tech; it’s the best tool we have to amplify unheard voices. But belief alone isn’t enough. If we let convenience and opaque optimisation dictate how creative talent is discovered, represented, and funded, we risk building a digital culture that mirrors caste hierarchies. To avoid that, we must demand transparency, diversify the rooms where models are built, and insist that every tool shaping careers, funding, and visibility is audited for caste impact. Otherwise, we get a future in which creativity is curated by biased models.
Vaibhav Wankhede is a creative marketer and writer. Views are personal.
(Edited by Ratan Priya)