There has been a slew of newer sector-specific strategy documents in India discussing the role of artificial intelligence as a disruptive technological force in the future. But these strategy documents need to do more to highlight AI's implications for access to services in a particular sector. Researchers need to bring specificity not just to impact, but also to applications and ethics, in their analysis.
As per the report of the ‘Steering Committee on Fintech Related Issues’ released in September, “AI is expected to transform the manner in delivery of such (financial) services”. The roadmap for ‘Health System for a New India’ released by Niti Aayog on 18 November mentions the need to keep up with global technological changes such as AI to “augment clinicians’ knowledge”.
Similarly, a September 2018 document by Niti Aayog ‘Transforming India’s Mobility’ suggested the use of AI for Intelligent Traffic Management Systems. Two other AI strategy documents in 2018, the first released by The Task Force on Artificial Intelligence and the second by Niti Aayog (‘National Strategy for Artificial Intelligence’), mention healthcare, agriculture and education as focus areas for AI intervention. All major public utilities such as education, law and justice, finance and transportation are amenable to AI intervention. Essentially, there is no domain of public life that could not be touched by AI. It is, precisely, an omni-use technology.
The proliferation of AI in public and policy discourse is in part rooted in computational changes that have enabled certain kinds of decision-making that were not possible a decade ago. However, the term has also seen a semantic expansion; many data-driven decision-making techniques that have existed for decades are now termed AI.
A gap in AI technology
When defined in such broad terms, it is not surprising that AI is predicted to impact nearly all aspects of civic life. AI will have an impact on mundane daily activities, for example, by enabling different forms of analytics or automation of customer support systems. It will also drive novel technological products such as self-driving automobiles or autonomous weapons. It is a general-purpose technology that, like computers, can and will be incorporated to varying degrees of success in different domains.
The growing research and use of AI, along with recent reports of discrimination via algorithmic systems, have led to well-founded concerns around the technology. Some of these concerns are not unique to AI. Lack of transparency due to the black-boxing of technologies is a long-recognised concern with socio-technical systems. There are other concerns emanating from the uniqueness of the technology itself — the inscrutability of decisions in newer approaches, such as neural networks, is a case in point. Image recognition systems based on neural networks have incorrectly identified images of white noise as a cheetah, and even the engineers behind the systems cannot explain why. While image recognition algorithms are approaching nearly human-level accuracy, they still fail in unpredictable ways.
This lack of explainability has deep ramifications for the application of AI in domains such as the judiciary, where an explanation is necessary. Research groups across the world have attempted to coalesce the gamut of concerns with AI in the form of acronyms such as FAT (fairness, accountability and transparency) and FEAP (fairness, explainability, accuracy and privacy). The conciseness of these acronyms notwithstanding, the apprehensions about AI are not easily managed or resolved and are emerging as research disciplines in and of themselves.
Addressing AI’s ethical concerns
Since innovation in AI is primarily led by private sector players, there are strong incentives for this sector to create a market for it. Consequently, one sees highly specialised companies claiming to build or utilise AI in particular segments — some startups using the technology for fraud detection in credit markets, others building facial recognition for law enforcement agencies, still others using it for analytics in agriculture. The AI Task Force and Niti Aayog documents, in highlighting the different focus areas for the application of AI, provide a concerted direction for the integration of AI in specific sectors. Finally, sector-specific strategy documents released by different ministries in India also list cases where AI can be used, providing heightened focus to AI deployment strategies.
In an interesting contrast, when it comes to the ethical considerations of AI, conversations have tended to be fairly high-level. The two AI strategy documents also highlight the importance of "Responsible AI". Both documents list the importance of fairness, accountability and explainability of AI-based systems. But in contrast to their treatment of the application of AI, they fall short of any sectoral engagement with these concerns. What does fairness in AI mean for the applications suggested in agriculture, vis-à-vis finance? While the documents get specific on how to apply AI in different sectors, they leave the resolution of ethics for future research endeavours.
When we talk in general terms about the ethics of AI, we lose the ability to act concretely to address these concerns. We have clear case studies on the application of AI but only a statement of high-level values. The challenge arises in the implementation of systems with these stated values, which can often conflict with each other in practice. A mere statement of values is inadequate for a strategy on responsible AI. In the meantime, the Ministry of Electronics and Information Technology (MeitY) and Niti Aayog continue to argue over leadership in the development of AI; and propped up by the private sector, AI continues to find easy takers in different sectors.
Specificity the need of the hour
Specificity in the application of AI in a particular sector must be matched with specificity in the treatment of ethical considerations of AI in that sector. Explainability, for example, might not be equally important in all domains (think lending versus language translation). In some segments, a loss of explainability of the decision might indeed be justified by greater predictive accuracy.
Researchers can play a critical role in fleshing out these sectoral concerns and developing actionable strategies for the development of AI that aligns with the prioritisation of often competing values within that sector.
Specificity not only enables our ability to act but also ensures that we aren't creating and fighting bogeymen when our attention is better directed elsewhere.
This article is part of a series examining The Future of Data in partnership with Carnegie India leading up to its Global Technology Summit 2019 in Bengaluru from 4-6 December 2019. More details about the summit are available here.
The author researches the interaction of predictive analytics and machine learning, loosely called artificial intelligence, with development and governance. She maintains Tattle, an open source project for archiving content from chat apps in India. She was previously a research fellow with the Center for Long-Term Cybersecurity at UC Berkeley.