Technology has significantly driven India’s growth over the last decade, whether through the rise of well-funded startups and ‘unicorns’, technology-enabled governance, or India’s emergence as a testbed for innovation and R&D.
A 2018 report by the Startup India Initiative puts it in perspective. “India also boasts of being home to the third largest unicorn community, with over 16 high valued startups having raised over $17.27 billion funding, with overall valuation of over $58 billion,” it says.
But with such exponential growth comes a set of policy and regulatory challenges.
First, the government policy and the regulatory framework need to be aligned to enable the growth of a robust technological ecosystem, rather than impede it.
Second, the rise of digital technology has led to new vulnerabilities. A digitally connected ecosystem is rife with security concerns, and these can worsen when digital literacy has not kept pace with digital use. With personal data becoming a critical tool for monetisation and profiling, the incentive for both industry actors and the state to secure such data and respect individual privacy is quite low. Both the Facebook-Cambridge Analytica controversy and the unrestricted seeding of Aadhaar data in multiple databases show how individual and community rights come under threat.
Therefore, respect for privacy and individual/community rights must be externally imposed through an indigenous regulatory framework for new technologies. Three principles are integral to this transition.
Identify the problem clearly
The first principle is clear identification of the problem that regulation must address.
The draft e-commerce policy released for discussion in 2019 defines e-commerce as an activity that includes “buying, selling, marketing or distribution of (i) goods, including digital products and (ii) services; through electronic network”. This is an extremely broad definition and brings a diverse range of activities, from online retail to app-based health delivery, under regulatory control.
The document also attempts to make policy for a host of different problems – data; infrastructure development; e-commerce marketplace regulations for anti-counterfeiting, anti-piracy and foreign direct investment; consumer protection; payment-related issues; export promotion; and content liability exemption.
Now, the concerns of social media are far removed from fashion retail, and consumer woes pertaining to online travel booking differ vastly from digital health solutions. The unfortunate result is a heavily diluted effort that can lead to regulatory overreach.
The regulatory approach, therefore, must shift course from deciding in advance the range of business activities that need regulation, to identifying the specific problem that the proposed regulations must address.
Develop risk-based regulatory approach
The second principle is prioritising a risk-based and responsive regulatory approach.
When regulating unfamiliar territory, as is mostly the case with new technologies, the tendency to entirely ban an activity or create restrictive pre-activity licensing models is high.
Data localisation in India suffers from a failure to adopt a risk-based approach. At the heart of this debate is whether private entities must be compelled to store the data of Indian citizens in servers located in India.
A compelling rationale offered in support of this measure is that law enforcement officials find it difficult to investigate criminal misconduct when data resides in servers located elsewhere. Another rationale is the threat to national security, given the possibility that foreign governments can spy on Indian citizens, taking advantage of the fact that the data resides in servers within their jurisdiction. A third rationale is that localisation can help advance a domestic artificial intelligence and data ecosystem, as done by China.
But amid these multiple narratives, there is no clear study by the government or any regulator on the extent of harm caused as a result of servers residing outside India, or on the costs and benefits of complying with this policy shift.
Risk assessment must involve conversations with all stakeholders and an engagement with data that goes beyond projected fears and growth narratives to build a regulatory framework that best balances innovation and protection.
Value democratic tenets
The third principle is valuing democratic tenets and fundamental rights. The rise of the internet and digital technology has resulted in a loss of traditional state power and authority, prompting the bureaucracy to re-assert control.
This re-assertion now presents itself in the form of various regulatory controls. Some examples are: demands to keep the privacy baseline low so that the state can easily access private communications; attempts to monitor online speech and impose criminal and civil liabilities upon those expressing unpopular or undesirable views; and restrictive business requirements imposed on private actors.
These controls, increasingly justified on the grounds that China has relied on similar interventions to successfully build its innovation ecosystem, can have harmful consequences in a democracy.
As our experience with Section 66A of the Information Technology Act, 2000 – which was subsequently struck down by the Supreme Court in Shreya Singhal v. Union of India – demonstrates, the need to regulate online behaviour or technological innovation should not emanate from a deep-seated desire to command and control. Such desire is likely to result in unconstitutional behaviour and impermissible inroads into the fundamental rights of citizens.
What India must do now
Any regulatory intervention in the field of technology policy must begin with a clear outlining of the harms involved and a mapping of the various alternate policy measures that could address them.
The European Union has followed this as part of its Better Regulation principles. The responsibility placed on the regulator to explain the reason behind the regulation can provide certainty, accountability and check arbitrary intervention.
Regulation in new technologies should also enable experimenting with bespoke regulatory approaches and tools, as well as with innovative market solutions, in a contained low-risk environment.
In several countries, experimental regulation has taken the form of sandboxing schemes. The UK Financial Conduct Authority’s Project Innovate is an example of regulatory sandboxing for financial technologies. Australia, Singapore, Switzerland, Hong Kong, Thailand, Abu Dhabi and Malaysia have also been experimenting with similar initiatives.
Since many new technologies cannot be clearly confined within the regulatory jurisdiction of any one regulator, India needs to develop strategies for better inter-agency coordination. The data localisation controversy revealed how different regulatory bodies were at odds with each other on how to address this issue.
Because data is an asset that cuts across multiple sectors, it is imperative to build better coordination and bring in some uniformity in decision-making on matters of data governance. The Obama administration had set up an Emerging Technologies Interagency Policy Coordination Committee to tackle the problem of siloed decision-making.
Finally, important regulatory interventions must have a mandatory clause about rights impact assessment. The current relationship between regulators and civil society is largely one of distrust, especially when it comes to regulating the internet and digital technology. The only way to address this is by ensuring the regulator is rule-bound to first gauge the implications of the proposed regulatory approach on fundamental and human rights, and then take a call on implementing it.
The author is a fellow at the Centre for Policy Research.
This is the twenty-first in a series of articles titled “Policy Challenges 2019-2024” under ThePrint-Centre for Policy Research (CPR) collaboration. A longer version of this piece is available on the CPR website at www.cprindia.org. The full policy document on a range of issues addressed in this series is available on CPR’s website.