New Delhi: Meta-owned instant messaging application WhatsApp banned over 4.7 million (47 lakh) Indian accounts in March 2023, either for violating the country’s laws or its terms of service, according to the company’s monthly transparency report released Monday.
“In accordance with the IT Rules 2021, we’ve published our report for the month of March 2023. This user-safety report contains details of the user complaints received and the corresponding action taken by WhatsApp, as well as WhatsApp’s own preventive actions to combat abuse on our platform. As captured in the latest Monthly Report, WhatsApp banned over 4.7 million accounts in the month of March,” a WhatsApp spokesperson said in a statement.
According to the report, WhatsApp banned a total of 47,15,906 Indian accounts during March this year. At least 16,59,385 of these accounts were banned proactively, before any reports from users. The company had banned 45,97,400 accounts in February and 29,18,000 accounts in January this year, compared to 36,77,000 accounts in December and 37,16,000 accounts in November last year.
Any WhatsApp account with a +91 phone number is identified as an Indian account.
Further, the platform said that its grievance officer received a total of 4,720 grievances from users in India via email and postal mail. Of these, at least 4,316 were ban appeals, of which 553 led to accounts being actioned.
“‘Accounts Actioned’ denotes reports where we took remedial action based on the report. Taking action denotes either banning an account or a previously banned account being restored as a result of the complaint,” WhatsApp said in a statement Monday.
In its report, the company said that in addition to “responding to and actioning on user complaints through the grievance channel, WhatsApp also deploys tools and resources to prevent harmful behaviour on the platform”.
“We are particularly focused on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred,” read the report.
It added that the abuse detection operates at three stages of an account’s lifecycle: at registration, during messaging, and in response to negative feedback, which it receives in the form of user reports and blocks. “A team of analysts augments these systems to evaluate edge cases and help improve our effectiveness over time,” it added.
(Edited by Amrtansh Arora)