WhatsApp says it banned over 17.59 lakh accounts in November

In November, WhatsApp banned 17,59,000 accounts associated with Indian phone numbers. In its latest compliance report, published in January, the company said it decided which accounts to ban using its abuse detection approach, which also incorporates feedback from other users and the appeals they submit through the app's Report feature.

Rules 4(1)(d) and 4(2)(d) of India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, require platforms such as WhatsApp to publish a compliance report at the start of each month. These reports cover two things: grievances received from Indian users through channels such as email and post, and accounts “banned” through WhatsApp’s “prevention and detection methods for violating the laws of India or WhatsApp’s Terms of Service,” as the reports put it.

In November, WhatsApp received 602 grievances from users registered with an Indian phone number, according to the report. The largest category was appeals against account bans, which accounted for 357 of the grievances. WhatsApp took action on only 36 of those 357 accounts; these 36 are included in the overall figure of 17,59,000 banned accounts.

Other grievance categories included “Account support”, “Product support”, and “Safety”. According to the report, no action was taken on complaints in these categories. Users who raise “Safety” grievances are directed to WhatsApp’s in-app reporting feature, and reports received that way are not “recorded as an action taken,” the company said.

WhatsApp’s abuse detection systems, moreover, are geared toward preventing harmful behaviour on the platform rather than merely reacting to it after the fact. Abuse detection operates at three stages of an account’s lifecycle: at registration, during messaging, and in response to negative feedback. User reports and blocks fall into the last category.

In practice, when a user blocks someone and then reports them, WhatsApp takes note and places the reported account under the scrutiny of its abuse detection systems. “We have a team of analysts that augments these systems to evaluate edge cases and help improve our effectiveness over time,” the company said, adding that it “will continue to increase transparency in our operations and publish more information about our efforts in future reports.”
