Thank you, dear subscribers; we are overwhelmed by your response.
Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/
A recent article by Vaibhav Wankhede in ThePrint (published 16 September 2025), on how the use of AI can lead to discriminatory practices in professional life, all behind the garb of ‘objectivity’ and ‘neutrality’, raises an important question: is there a way to use AI to our advantage without becoming easy prey to its algorithmic traps, which use identity markers to discriminate along lines of caste, class, religion, nationality and language?
One’s identity is an integral part of who one is; indeed, one wonders if there is more to a person than what these markers spell out. And yet, when data pertaining to these identity markers are used to ‘filter out’ so-called ‘unwelcome elements’ on the basis of societal prejudices and pressures, as pointed out by Vaibhav Wankhede, there is a problem. It raises questions of ‘justice’, ‘fairness’, ‘opportunities’ and ‘level playing fields’, putting us squarely in the realm of John Rawls’ political philosophy, a philosophy well known for its theory of socio-economic and political justice.
With respect to AI, the question that instantly comes to mind is whether AI can operate as an ‘agent’ of justice under Rawls’ ‘Veil of Ignorance’. Can it be made to function behind that veil? Can AI possibly ‘ignore’ the databases it runs on, and the societal biases embedded in them? Those databases are the very core of AI, the very basis of its functionality. Can it tear itself out of its own identity and take an ‘objective’, ‘neutral’ stand outside it?
John Rawls’ ‘veil of ignorance’ was a hypothetical thought experiment. Its purpose was to arrive at the principles of justice that would govern social institutions. It required thinkers and lawmakers to assume ‘ignorance’ about their own vantage positions in society, their ‘identities’, when choosing whichever option they considered fair from among those open to them. Since they were required to ‘bracket’ their identities, it was expected that their choice would be fair, objective and neutral. Unaware of their personal vantage points, each would engage in a ‘what if’ exercise: what if the chosen option turned out to be disadvantageous to oneself or to someone close to one? It was assumed that, being unaware of one’s own position, one would play safe and thereby play just.
For humans, engaging in this hypothetical exercise, the exercise of asking ‘what if I were to find myself in that vulnerable position of disadvantage’, is difficult but not impossible. It is difficult because it amounts to jumping out of one’s own identity and taking a stand outside it. But it is possible because we are more than what our contingent identities make of us. We are ethical agents who strive for fairness and justice. That is why one can expect our lawmakers to undertake this thought experiment and, in the process, act fairly. Unlike the human agent, however, AI has no need to ‘play safe’ or ‘play fair’. It is guided by its algorithms and their mechanical functions. It need not, and cannot, engage in any ‘what if’ exercise; it could not care less. But since its operations affect us, we need to see how we can circumvent the algorithmic trap it inadvertently lays.
So how does one rein in the relentless play of AI? It is a dilemma that cuts across the realms of efficient functionality and ethical fairness, ironically all in the garb of ‘objectivity’ and ‘neutrality’. Such dilemmas in the use of AI will gradually lead us to consider replacing all our identity markers, caste, class, race, gender, nationality and so on, with alpha-numeric markers, where supposedly there would be a ‘level playing field’. Perhaps the only way to beat the algorithm of AI is arithmetic: giving AI a dose of its own medicine! The catch is that in doing so, we would lose our social identity to a bunch of numbers and letters!
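To make the idea of ‘alpha-numeric markers’ concrete, here is a minimal, purely illustrative sketch in Python (the field names and values are hypothetical, not any real screening system) of what such a replacement might look like: identity markers are swapped for opaque, salted alphanumeric codes before a profile reaches an automated screen, so the algorithm cannot key on caste, religion or language. The catch noted above remains: the person behind the profile is reduced to a string of numbers and letters, and the mapping from person to code still exists somewhere.

    import hashlib
    import secrets

    # Purely illustrative: field names and values are hypothetical.
    SALT = secrets.token_hex(16)  # kept apart from the screening algorithm

    IDENTITY_MARKERS = {"name", "caste", "religion", "gender", "nationality", "language"}

    def pseudonymise(profile):
        """Return a copy of the profile with identity markers replaced
        by opaque 12-character alphanumeric codes; other fields are
        left untouched."""
        masked = {}
        for field, value in profile.items():
            if field in IDENTITY_MARKERS:
                code = hashlib.sha256((SALT + str(value)).encode()).hexdigest()
                masked[field] = code[:12]
            else:
                masked[field] = value
        return masked

    # The automated screen now sees only codes, not caste or religion.
    applicant = {"name": "A. Kumar", "caste": "X", "religion": "Y",
                 "qualification": "M.A. Philosophy", "experience_years": 8}
    print(pseudonymise(applicant))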
Shashi Motilal
Professor of Philosophy and Ethics (Retd.)
University of Delhi
These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.