On 21 November, the Department of Consumer Affairs launched a standard developed by the Bureau of Indian Standards (BIS) to tackle fake and misleading reviews on e-commerce websites. The framework, titled ‘Online Consumer Reviews — Principles and Requirements for their Collection, Moderation and Publication’, applies to diverse businesses such as search engines, social media platforms, e-commerce websites, travel sites and food delivery apps that depend on consumer reviews to vouch for products and services.
Among other things, the new guidelines call for organisations to appoint review administrators, who will be responsible for verifying the identity of consumers, moderating reviews against a predetermined set of criteria, and checking them for bias and problematic language. The ultimate goal of these guidelines is to ensure that fake consumer reviews are weeded out from digital platforms. So far, their adoption is voluntary, but the government has indicated that, in future, it could make it mandatory for platforms to abide by them.
No objective truth, no clarity
A close examination reveals that implementing the BIS guidelines will be difficult, for multiple reasons. First, the framework asks the review administrator to analyse whether or not a particular review contains “problematic language”. Not only is this clause vague, it also assumes that all administrators will find the same set of words and phrases objectionable. Nothing could be further from the truth. What one review administrator may find problematic might be perfectly kosher for someone else. This is akin to content moderation challenges on social media, an area fraught with subjectivity in the absence of objective ‘truth’.
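To see why this is hard to operationalise, consider the simplest way a platform might implement the clause: a keyword blocklist. The sketch below is a hypothetical illustration, not anything in the BIS standard; the blocklists and the sample review are invented. It shows two administrators applying the same rule to the same review and reaching opposite verdicts.

```python
# Hypothetical illustration of the "problematic language" clause.
# Two review administrators, same rule, different blocklists.

ADMIN_A_BLOCKLIST = {"scam", "fraud", "garbage"}  # invented for this example
ADMIN_B_BLOCKLIST = {"scam", "fraud"}             # invented for this example

def is_problematic(review: str, blocklist: set[str]) -> bool:
    """Naive check: flag the review if any blocklisted word appears in it."""
    words = {w.strip(".,!?").lower() for w in review.split()}
    return bool(words & blocklist)

review = "This product is garbage, a total waste of money."

print(is_problematic(review, ADMIN_A_BLOCKLIST))  # True  -> A rejects it
print(is_problematic(review, ADMIN_B_BLOCKLIST))  # False -> B publishes it
```

Unless the standard ships with a common definition of ‘problematic language’, outcomes will inevitably differ from one administrator, and one platform, to the next.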
Second, the BIS suggests that review administrators “shall not knowingly publish reviews that have been purchased and/or written by individuals employed for that purpose by the supplier/seller, or by a third party.” To abide by this clause, platforms would have to invest a significant amount of time, money and human resources. While big platforms might grudgingly comply, it may not be possible for their smaller counterparts because of budgetary constraints. More importantly, to establish that a particular review was paid for, review administrators will have to trace the money trail. How will they do so? The guidelines provide no answer.
Third, the foreword to the guidelines says that the review standards will cover a range of products and services, including those offered by electricians, plumbers and lawyers. However, unlike for the first two, there is no dedicated forum on which consumers can post reviews about lawyers. The only way they can “review” a lawyer’s services is via social media posts, which are already governed by the IT Rules, 2021. There is no clarity on whether these guidelines apply to social media posts. But if they do, there is a clear clash with the legal immunity available to digital intermediaries for content published on their platforms.
France’s Consumer Code
Fake reviews should be weeded out from digital platforms. They are a booming industry and, according to one study, can influence up to $152 billion of online sales globally. Other countries are well aware of the risks of fake reviews and are putting measures in place to tackle them. In 2016, France created Article L111-7-2 of its Consumer Code under the Digital Republic Act. The provision requires digital marketplaces to verify that those posting reviews have bought the item or service they are writing about. It also requires them to state whether reviews are moderated and, if so, to describe the procedure. Under the same provision, marketplaces have to inform consumers when their review is rejected and explain why. They also have to give companies whose products or services are being reviewed a way to report doubts about the authenticity of a review. Marketplaces that do not conform to the Consumer Code face fines of up to €375,000.
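The verified-purchase requirement at the heart of Article L111-7-2 is, at least conceptually, simple to express in code. The sketch below is a hypothetical illustration, not the statutory text: the Review type, the PURCHASES ledger and the rejection message are all assumptions. It captures two of the obligations described above, checking for a real purchase before publication and recording a reason that can be relayed to the consumer when a review is rejected.

```python
# Sketch of a verified-purchase gate in the spirit of Article L111-7-2.
# All names and data here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    product_id: str
    text: str

# Hypothetical order ledger: (reviewer_id, product_id) pairs of real purchases.
PURCHASES = {("u1", "p42"), ("u2", "p7")}

def moderate(review: Review) -> tuple[bool, str]:
    """Return (published?, reason); the reason is shown to the consumer."""
    if (review.reviewer_id, review.product_id) not in PURCHASES:
        return False, "no verified purchase of this product"
    return True, "published"

print(moderate(Review("u1", "p42", "Works as described.")))  # (True, 'published')
print(moderate(Review("u3", "p42", "Amazing!!")))            # (False, 'no verified purchase of this product')
```

The hard part is not this check itself but everything around it: reliably linking reviewer identities to purchase records at scale, which is precisely where the BIS guidelines are silent.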
In India, too, companies are aware of the risks that fake online reviews pose and are taking steps to curb them. Food delivery app Zomato, for instance, makes consumers agree, before they sign up, that the content they post on the app is truthful, accurate and not created by a bot. In 2020, the company even updated its app to prevent “bad actors” from posting fake reviews.
Despite these measures, the problem has persisted, prompting the government to intervene with the BIS guidelines. Though well-intentioned, the guidelines will be difficult to implement. A more pragmatic and executable approach would be for the BIS to harmonise them with existing regulations, such as the IT Rules, 2021 and the upcoming Data Protection Bill, 2022. More importantly, the BIS should hold wider consultations with a range of tech companies, because each has distinct characteristics and functionalities. While it may be easy for an e-commerce company to abide by a particular mandate, it might not be so for a social media platform. The guidelines need to reflect these nuances to be implementable.
This article is part of the ThePrint-Koan Advisory series that analyses emerging policies, laws and regulations in India’s technology sector.
(Edited by Ratan Priya)