On 25 July 2018, Unofficial PMO India, a Facebook page that then had 2,51,000 followers and regularly uploaded memes against the current NDA government, ran afoul of the social-media platform’s community standards — a set of rules that “outline what is and what is not allowed on Facebook…and apply around the world to all types of content.” The page’s troubles began after it uploaded a collage juxtaposing a picture of Prime Minister Narendra Modi pinching the ears of a child with an image of Adolf Hitler in a similar photo op. Facebook removed the image within hours of its being uploaded, and the page followed soon after. The page is run by Farza, a friend based in Mumbai, and three other people. Just after the page was pulled down, Farza called me in a panic. He was worried that with the image and page gone, his personal account would be next in line. A few hours later, the personal profiles of all the page administrators, including Farza, were suspended for 30 days. “What to do?” Farza asked me, in another emergency call from miles away. I did not have a clear answer.
For years now, I have received similar panic calls every week from Facebook users whose accounts have been suspended for violating unspecified aspects of the community standards. Every such call was from a user who had posted content against the Narendra Modi government and its ideological associates. Since 2015, I have documented every call I received, to understand whether these are all mere coincidences on account of faulty algorithms or whether there is an agenda to Facebook’s suspensions. The duration of the suspensions ranged from one to thirty days, and in a handful of cases, even more. There was no clarity on what content attracted what duration of suspension.
After three years of tabulating this data, I have seen a pattern emerge. All the suspended profiles I dealt with were barred on the pretext of enforcing Facebook’s community standards. As recurring patterns surfaced, I sorted a majority of the suspended profiles into three categories of Facebook activity: uploading memes against Narendra Modi; protesting against the ruling government’s policies; and sharing content that falls under the previous two categories. In no case was a clear reason ever provided as to which post broke which community rule or which guideline had not been adhered to. These clarifications were not provided even after the profiles were reinstated.
With 270 million active users in India and 2.27 billion monthly active users worldwide, Facebook, with its multifaceted social-media platforms, has a colossal social and economic impact. In this Facebook country, what constituted the community standards was a mystery until April 2018, when the company finally opened the guidelines up to the public. Speaking on the occasion, Monika Bickert, the head of product policy and counterterrorism at Facebook, told Reuters, “The Community Standards guidelines are not a static set of policies. The rules change often.” Bickert said that every two weeks she leads a content-standards forum, where senior executives meet to review the social network’s policies for taking down objectionable content. The group, which comprises executives from the company’s policy division, also receives inputs from more than a hundred external organisations and experts on areas of concern, such as child exploitation and terrorism.
But how these community policies are formed or changed is still not open to audit. A changing community policy is like a changing constitution — hard to stabilise. In the cracks of these changes, Facebook thrives as a banana republic of the digital world. Digital-rights groups have been requesting for years to audit Facebook’s community-standards algorithms, citing their caustic social impact, to no avail.
There is not much clarity on how and when Facebook formed these community guidelines. In 2015, when I first got involved with activism around Facebook’s policies, the guidelines were limited to addressing an audience based in the United States, where I was residing at the time. On a warm, bright day in August 2015, I read a post on Facebook in which a woman was being bullied, ridiculed and sexually harassed by thousands of men. She had questioned the merits of the late Indian president APJ Abdul Kalam in a Facebook post. A barrage of vitriol followed. She called out a misogynistic response to her post, and in retaliation an army of male trolls began to abuse her verbally, using “prostitute” as an insult. Few spoke in her defence, even though the post was public. Stunned that everyone around had silently watched the spectacle for over five days, I hurriedly wrote about her in Global Voices — an international and multilingual community of bloggers, journalists, translators, academics and human-rights activists — which I had joined in 2011.
After my story about the threats women were facing on Facebook for speaking up was published, the perpetrators instantly turned on me. They issued threats targeting my family and even threatened to choke me at Miami Beach, where I was then living. One posted a picture of me with a noose around my neck. I contacted Facebook Forums — a space created by Facebook for users — and asked them to take down the threatening posts. However, instead of helping me, Facebook suspended my profile. It was highly likely that my profile had been reported, as the threats I received had talked about taking it down.
When my profile was taken down, I tried to log back in, but was unsuccessful. To date, Facebook does not have a system of prior warnings that lets users know their profile may be suspended for a particular reason. The person whose profile has been suspended is immediately logged out, and is then directed to contact the Facebook community centre. When I contacted the community centre, Facebook asked me to submit my government identification. I refused. My editors at Global Voices backed me up in this refusal to share private data with Facebook. As I read around the issue, I realised that a large number of people all over the world had been affected in a similar way. In 2015, Facebook was receiving a lot of flak for its real-name policy, under which it insisted that users create or access profiles using their official or legal names and submit legal identification, including government-issued identity documents. A campaign against the policy was already underway, supported by prominent digital activists such as Soraya Chemaly, Nighat Dad, Ellery Biddle and Anja Kovacs, and by multiple feminist organisations from Afghanistan to the United States, among others. In October 2015, after a sustained campaign by more than fifty organisations that came together to form the Nameless Coalition, Facebook relented: it rolled back some of its bureaucratic regulations, toned down its real-name policy and let me in.
Incidentally, Facebook restored my account without any further request for identification. Since then, I have followed up with Facebook whenever new profiles were suspended.
To understand whether the rollout of transparent content policing would progress, as promised during the real-name campaign, and to study patterns of profile suspension, I started to maintain a spreadsheet of the profiles of people who approached me for help. I would log the name, the date and the possibly offending post. Then I would reach out to Facebook for explanations regarding the suspension — which post triggered it, which community guideline the post violated, what penalty was attached and the appropriate form of response to get the profile back online. I added the turnaround time after suspension to my records, to get a sense of the gravity of the respective offences. It should be noted that the point of contact at Facebook for Indian profiles, Shruti Moghe, Facebook’s Policy Programs Manager for India and South Asia, would sometimes disappear and refuse to respond until I added Facebook’s policy heads, such as Antigone Davis, the head of global safety, and Bickert, to the mail trails.
My findings show how Facebook has deflected criticism in India, curtailed freedom of speech on many occasions and, when questioned, dragged its feet. In 2016, 11 profile owners approached me for help with reinstating their accounts. In 2017, the number went up to 26, more than double the previous year’s figure. In 2018, it became a regular affair for me to be contacted about one profile every two weeks.
On 25 May 2016, the well-known left-wing activist Kavita Krishnan was threatened with physical rape on Facebook. She approached me when her attempts to get Facebook to take action against the person threatening her were met with silence. In an attempt to escalate the issue, I reached out to Bickert. Bickert wrote me a lengthy email, stating that Facebook was a safe place for women, but that it had a different set of community guidelines for “celebrity” women. Facebook refused to take down the post threatening Krishnan.
After the campaign against the real-name policy, Facebook started to deploy various criteria for judging profiles, such as nudity and violence, but none of these turned out to be valid when profiles were pulled down. When I repeatedly reported gender violence, such as that faced by Krishnan, not a single profile took a hit. But for critics of Modi, even a simple phrase such as “orange monkey” came under Facebook’s radar as profanity. Take the case of Rajiv Tyagi, a popular social-media figure, columnist and former Indian Air Force officer, who was suspended twice — once in 2016 and then again in 2018. His posts are popular for their scathing criticism of the Modi government. On 24 August 2018, he was blocked for 30 days for calling out the profile of a user who appeared to be right-leaning. In his email to me, he wrote, “I have been blocked for commenting in a discussion thread on my wall, words to the effect, ‘Why do you have an orange bandar (monkey in Hindi) as your dp? I thought we had evolved from monkeys a long time ago…’!” Tyagi continued, “I cannot imagine how such a comment violates Facebook’s ‘community standards’ so violently, that I get blocked for 30 days!” When I escalated the matter, Facebook did not respond to my queries about why the profile was suspended, as has been the case with every profile on whose behalf I have intervened. Tyagi’s profile was restored on 31 August.
Independent media organisations, especially start-ups, also seem to have had a similar experience with Facebook. In the cases I came across, Facebook appeared to have marketed to them aggressively, inviting them to take space on its platform with promises of wide reach and advertising. But when these media outlets did not toe the current government’s line, their profiles often got suspended. The reasons Facebook provided in such cases invariably turned out to be ambiguous and, when pressed for further detail, Facebook always attributed the suspensions to bugs. One could only surmise that the attempt was to monitor independent media and push them to submit to its demands. Among the few such media outlets that contacted me, Vartha Bharati, a Kannada daily, and BFirst, a liberal news portal started by journalists, faced suspension. These publications even lost the money they had spent on Facebook advertisements, because their profiles were suspended and they could not get any traffic.
Vartha Bharati’s coverage was often critical of the Modi administration, and the publication ran into a lot of trouble with Facebook in 2017. The page was first blocked on 26 June 2017. The ban was lifted two days later, after the publication followed up with Facebook. As in every case, no explanation was given. However, the page was blocked again on 10 July 2017. The editors reached out to Facebook India the same day, but there was no response. I then escalated the issue to the United States office and the suspension was revoked.
Similarly, BFirst.in, which is often critical of the BJP, was suspended in January 2018. Mohammed M from Vartha Bharati referred the publication to me for help with restoring its Facebook accounts. Alvin Mendonca of BFirst wrote to me: “Suddenly Facebook has removed our FB page by giving no reason. We are helpless now and need your guidance to reclaim our Facebook page.” BFirstkannada, a sister site of BFirst.in, suffered a similar fate; its Facebook page was taken down at the same time. But on 2 January 2018, Facebook suddenly restored both pages, without any explanation.
In April 2018, ahead of the Karnataka state assembly elections scheduled for the following month, Feku Express — a Facebook page with more than 5,12,000 followers, many of them from Karnataka — was also taken down. I received an email from the owner of Feku Express on 21 April 2018. The page’s administrator said that on 18 April 2018 they “received a notification from Facebook stating the page has been unpublished for sharing pornographic content.” The notification gave the administrator the “option to appeal,” which they immediately exercised. However, there was no response from Facebook, and a day later, the administrator’s personal profile was also blocked without any notification. According to the administrator, “My page is a political ideology page and it does not have porn content and I have not posted any porn still the page was removed. So we request your help in restoring the deleted page.”
Even after I sent multiple personal requests to Moghe, the page was not restored. My pleas were ignored until 9 May, when Facebook decided to restore the page, without any communication or clarification regarding the suspension or the offending post. Elections were held a week later, on 15 May 2018. By this time, Feku Express had been around for two years, and Facebook could not provide any evidence to support its claim that the page had pornographic content; it did not even bother to try. Even if the page had been flagged as pornographic, it is not clear how Facebook concluded that the claim was valid. Notably, Feku Express is dedicated to debunking claims made by Modi and to poking fun at the NDA and its ministers. When the first page was taken down, the administrator immediately created another page. This parallel page was also suspended.
To date, I have not received a single call from a user who supports the current government. Further, other than the takedown of Postcard News — a digital news site notorious for peddling fake news — on 12 July 2018, digital-rights activists can attest to the constant and unrelenting presence of a multitude of Hindutva pages and profiles on Facebook. On 2 July 2018, I was alerted to a Facebook Live broadcast in which five Hindu right-wing men were calling on Hindus to kill Muslims indiscriminately. Since the live video was broadcast on the Uttar Pradesh chief minister Adityanath’s page, it was shared widely. Facebook took the link down after a few hours, after several users complained about it. But nothing happened to the page; it remained intact, and one of the users, Surendra Singh, who shared the live video asking people to kill Muslims, is still on the platform. How is it that none of Facebook’s checkpoints on violence, hate speech or safety apply in the case of Hindutva pages?
Similarly, there is the case of Deepak Sharma, a Facebook user from Rajasthan who constantly posts hate speech against Muslims and women. He uses Facebook Live often and repeatedly threatens to wipe out Muslims. On 16 December 2018, he posted a video with similar threats. The video now has 77,000 views, 1,056 shares, 980 comments and 1,400 reactions. Along with a few other digital activists, I reported the post under Facebook’s community guidelines. Facebook responded that the content of the video did not go against its community guidelines — the standard response regarding Hindutva hate pages. How is it that Facebook cannot monitor such hateful content, when anything against Narendra Modi or the right wing is taken down immediately?
On 6 October 2018, Caravan Daily reported, “Since the last week of September, Facebook has disabled personal accounts of several leading journalists including Ajay Prakash (News Editor, Dainik Bhaskar), Prema Negi (Editor, Janjwar.com), Rifat Jawaid (Editor, JanataKaRepoter.com and former Editor of BBC) and Aijaz Zaka Syed, an award-winning Indian journalist and columnist based in the Gulf and former Opinion editor, Khaleej Times. Syed has also been associated with Caravan Daily as a columnist. In addition, Facebook disabled accounts of editors of BoltaHindustan.com, and Mumtaz Alam, Editor of Caravan Daily and Syed Ghazanfar Abbas, National Correspondent of Caravan Daily.” All these accounts and profiles were suspended without any prior notice for violating Facebook’s community standards.
As I examined these cases, a clear pattern emerged in how Facebook’s algorithm was written and manipulated. The algorithm targets only a certain section of profiles, for reasons only Facebook is privy to. As the frequency of suspensions has increased, I have noticed that when profiles go down, activists and independent observers now simply wait out the suspension period. Evidently, activists critical of the government are losing their voice on social media to Facebook’s black-box policies. Speaking with activists, I have observed that once a profile is out of the feeds for 30 days, its earlier fiery vigour is lost. This is a carrot-and-stick approach: freedom of speech is curtailed randomly and frequently without being stopped outright, and the velocity of information dissemination is lost.
In 2015, while meeting with Bickert and her team, I discussed the gaps in Facebook’s policing of right-wing fundamentalist groups and its failure to protect women activists facing death and rape threats. I was trying to impress upon them that Facebook’s approach was woefully inadequate to grasp India’s cultural sensitivities and the nuances of its colloquial and native languages, and that FAQs on community guidelines would not work. Ankhi Das, Facebook’s Public Policy Director for South Asia, snapped back at me: “You are living in US, you won’t understand India.” Maya Leela, an activist who is also a friend, was present and heard Das’s response. Das’s retort sounded familiar to me — it was the language of Hindutva trolls.
Originally published in The Caravan.