Wednesday, July 23, 2025

SubscriberWrites: Why algorithmic state is already here, and regulation is 20 years late

Algorithmic decisions scale and depersonalize power. Unlike a welfare officer’s error, an algorithm's choice is often opaque—and nearly impossible to challenge.

Thank you, dear subscribers; we are overwhelmed by your response.

Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/

By the time policymakers wake up, the state will have become a machine. Not in theory. In practice. Not because it chose to, but because it never paused to ask what it was becoming.

From welfare to warfare, states now run on lines of code, not wit. Predictive algorithms determine who gets stopped on the street. AI systems flag welfare fraud before a human ever reads a form. Biometric databases decide who gets food, rations, even the right to vote. The state, once made of people and paper, is now made of data and decisions that no one can fully explain.

Take Aadhaar, India’s biometric identity system. Over a billion citizens are now enrolled, with fingerprints and iris scans attached to a unique 12-digit number. This system powers welfare, subsidies, pensions, and even education admissions. In 2017, the government claimed it saved $9 billion by weeding out “ghost beneficiaries” through Aadhaar-linked accounts. But it also locked out real people when fingerprints didn’t match, often the elderly, the disabled, or manual laborers whose worn-out hands defied scanners. The machine had no room for error, even though humans do.

This is the algorithmic state: not a dystopian future or an Orwellian fiction, but a present reality, one we already live in.

And we are regulating it with laws written in the age of typewriters.

The problem is not just that AI is in government. It’s that the government has outsourced critical functions to systems it doesn’t fully control or fully understand. In the Netherlands, an algorithm used to detect welfare fraud (SyRI) was struck down in court for violating privacy and reinforcing bias. It profiled low-income neighborhoods, flagged entire families for investigation, and never explained why. The Dutch court ruled that this violated the European Convention on Human Rights. But for years, the system had already done its damage, upending families one automated decision at a time.

The United States is no better. In cities like Chicago and Los Angeles, predictive policing tools have used historical crime data to forecast where future crimes might occur. But crime data isn’t neutral. Crime data is shaped by decades of over-policing in Black and brown neighborhoods. Feeding biased data into a system doesn’t remove the bias; it automates it. The result is a feedback loop of over-surveillance and over-suspicion.
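The feedback loop described above can be made concrete with a toy simulation. All of the numbers and names below are hypothetical, chosen only to illustrate the mechanism: two neighborhoods have the same true crime rate, but one starts with a larger recorded history because of past over-policing. If patrols are then sent wherever recorded crime is highest, and only patrolled areas generate new records, the initial skew compounds indefinitely.

```python
import random

random.seed(0)

# Hypothetical starting point: identical underlying crime rates,
# but neighborhood "A" has more *recorded* crime from past over-policing.
true_rate = {"A": 0.1, "B": 0.1}
recorded = {"A": 60, "B": 40}

for day in range(1000):
    # The "predictive" system sends patrols where recorded crime is highest...
    target = max(recorded, key=recorded.get)
    # ...and only patrolled areas generate new crime *records*,
    # even though crime occurs at the same rate in both.
    if random.random() < true_rate[target]:
        recorded[target] += 1

print(recorded)
```

Running this, neighborhood A's recorded count keeps growing while B's never moves, so the system's "evidence" for patrolling A looks stronger every day. The model measures where police look, not where crime happens, which is precisely the loop biased historical data creates.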

What’s missing is accountability. AI systems do not stand trial. Bureaucrats hide behind software they neither wrote nor understand. Procurement contracts are signed with private firms that shield their algorithms as trade secrets. When harm happens—when someone is wrongly denied welfare, or falsely flagged as a threat—no one knows whom to blame.

We’re told these systems are more “efficient.” But efficiency is not justice. Automation can reduce costs. It cannot replace discretion, empathy, or democratic debate.

Even the most powerful AI models—those used to scan tax records, flag fraud, or decide visa eligibility—still rely on rules set by us humans. The problem is that those rules, once handed over to code, become opaque. You cannot cross-examine an algorithm. You cannot reason with one. You can only be judged by it. A computer scientist I deeply respect once told me: code does what is written. Nothing else, nothing more. Just code.

The law is behind. Public administration laws in most democracies still presume that decisions are made by humans guided by precedent, reason, and review. They were not written for black-box systems that evolve, retrain, and refine themselves over time. This is not just a legal loophole—it’s a legitimacy crisis.

Data protection laws, where they exist, are toothless. In India, the new Digital Personal Data Protection Act (2023) gives sweeping exemptions to government entities. In the United States, no federal privacy law comprehensively regulates how government agencies use AI. Europe’s AI Act, though ambitious, focuses mainly on corporate AI risks. The public sector is barely mentioned.

This silence is strategic. Governments benefit from opacity. Algorithmic decisions can be scaled, depersonalized, and outsourced. But each layer of abstraction moves power further from the people. A welfare officer’s mistake can be corrected. An algorithm’s decision often can’t be explained—let alone challenged.

So what should be done?

First, we need an enforceable right to explanation. Citizens must have the right to know when an algorithm is used in a decision about them—and why it decided as it did.

Second, all public-sector algorithms must be auditable by independent watchdogs. That includes access to training data, design logic, and risk assessments. Black boxes have no place in a democracy.

Third, there must be a public registry of all AI systems used in government. No algorithm should operate in secret.

Finally, lawmakers need to build capacity. AI is not a “tech issue.” It is a governance issue. Every ministry using AI should have trained oversight teams that understand the tools they deploy. We don’t let surgeons operate without licenses. Why do we let unverified code make life-altering decisions?

The algorithmic state is already here. The question now is whether we will govern it—or be governed by it.

The author is a student at The Shri Ram School, Aravali. Views are fully personal.

These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.


Support Our Journalism

India needs fair, non-hyphenated and questioning journalism, packed with on-ground reporting. ThePrint – with exceptional reporters, columnists and editors – is doing just that.

Sustaining this needs support from wonderful readers like you.

Whether you live in India or overseas, you can take a paid subscription by clicking here.

