The pandemic has ensured that many of us are now hardwired into the digital sphere. Whether for work or for socializing, screens have replaced shoes. In a recent internet fad, the #ChallengeAccepted campaign, women tagged other women they knew to show solidarity by posting black-and-white photos of themselves. In this internet tag race, each user tags others, forming a daisy chain of women posting encouraging selfies.
This isn’t the first time challenges like these have spread like wildfire. The saree challenge, the 10-year challenge, and many others seem to follow the same pattern. The feel-good factor and moral appeal of these online challenges have drawn a surge of participants in these unprecedented times; the Challenge Accepted hashtag alone has over 4.5 million posts on Instagram. But are they really harmless fun between friends, a way to show solidarity among women? Or are they a more sinister means to an end: data mining?
A ‘photo challenge’ such as this one on Instagram, whatever moral message is attached to it, amounts to users handing over data en masse to Instagram’s parent company, Facebook. The irony is that following internet trends and challenges runs counter to the bold act of feminist resistance users claim to be performing: by handing over visual data, they become complicit in their own possible exploitation.
So is participating in these challenges any different or riskier than posting a selfie on any given day? Akademi Mag spoke to Shraddha Mahilkar, Policy and Advocacy Lead at Internet Democracy, about a possible breach of privacy: “In short, posting a photo randomly on Instagram contributes to data collection in the background, and since Instagram is not independent of Facebook, there is an entire web running behind the scenes that connects every piece of data posted on Instagram to all of your data on all the other platforms. That is how big data is formed: everything is connected to everything else in the background. Your profile will have certain filters, which will make you feel like you are in control, but the truth is that it’s only a smokescreen; the parent company has access to all the data and can use it, since you agreed to all the random permission requests you got while installing and accessing the app. This potential risk multiplies when you tag people and make the photos accessible in the public domain. Tagging is a viable route to identifying data subjects. Stone et al. describe a method for auto-tagging images within social networks that increases recognition performance beyond that of a baseline face recognition system, and Facebook itself provides an auto-tagging mechanism that automatically suggests possible tags for images.”
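Neither the Stone et al. method nor Facebook’s internal pipeline is public as code, but the core idea behind auto-tag suggestion is simple enough to sketch. Below is a minimal, purely illustrative version using the open-source face_recognition library; the friend names and image files are hypothetical placeholders, and real systems layer far larger models and social-graph signals on top.

```python
# Minimal sketch of how auto-tag suggestion can work, using the
# open-source face_recognition library. Friend names and image
# paths are hypothetical placeholders.
import face_recognition

# Build a gallery of known faces (e.g., a user's already-tagged friends).
known_people = {
    "friend_a": "friend_a_profile.jpg",
    "friend_b": "friend_b_profile.jpg",
}
known_names = []
known_encodings = []
for name, path in known_people.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # use the first detected face in each profile photo
        known_names.append(name)
        known_encodings.append(encodings[0])

# For a newly uploaded photo, detect each face and suggest tags by
# comparing it against the known gallery.
new_photo = face_recognition.load_image_file("new_upload.jpg")
for encoding in face_recognition.face_encodings(new_photo):
    matches = face_recognition.compare_faces(known_encodings, encoding)
    suggestions = [n for n, m in zip(known_names, matches) if m]
    print("Suggested tags:", suggestions or ["(no match)"])
```

Every tag a user confirms becomes another labelled example, which is precisely the feedback loop Mahilkar describes.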
Internet challenges that rely on users tagging their networks, who in turn tag more people, exponentially increase the scale and scope at which participants can be tracked, creating a domino effect. As friends tag friends of friends, the original user’s details eventually become traceable through a web of interconnected channels.
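To make this domino effect concrete, here is a small illustrative sketch: a hypothetical tag graph in which each participant tags a few friends, traversed breadth-first to count how many profiles become linked to the original poster within a few hops. The graph and numbers are invented purely for illustration.

```python
from collections import deque

# Hypothetical tag graph: each participant tags a handful of friends.
# In a real challenge this is the platform's own social graph.
tag_graph = {
    "original_poster": ["a", "b", "c"],
    "a": ["d", "e"], "b": ["f", "g"], "c": ["h"],
    "d": ["i", "j"], "e": ["k"], "f": ["l", "m"],
}

def reachable_within(graph, start, max_hops):
    """Count profiles linked to `start` within `max_hops` tag steps."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for friend in graph.get(user, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, hops + 1))
    return len(seen) - 1  # exclude the starting profile

for hops in range(1, 4):
    print(f"profiles linked within {hops} hop(s):",
          reachable_within(tag_graph, "original_poster", hops))
```

If every participant tags k friends, the linked set grows roughly as k raised to the number of hops, which is why a challenge can bind thousands of profiles to one another within a few rounds of tagging.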
The #ChallengeAccepted campaign may have started off in a well-intentioned manner. According to a report published by kqed.org, the intent was to draw attention to the alarmingly high rate of femicide in Turkey, a 200% increase since 2013, when 237 women were killed. The tipping point for protests to break out both online and on the ground was the brutal murder of Pinar Gültekin, a 27-year-old student whose strangled and partially burned body was found stashed inside an oil drum five days after her July 16 disappearance. But soon enough, Instagram’s algorithm revealed that the original call for justice for Turkish women had been drowned out by millions of women posting #womensupportingwomen selfies, an overwhelming majority of them white. Internet challenges pose particular risks to women. Earlier in the year, women took to posting photos of themselves in sarees after being nominated by their friends and colleagues. Catfishing, blackmail through morphed images, stalking, and rape and death threats are always a possibility online, just as they are on the ground. The feel-good factor of these challenges often overrides the instinct for self-censorship.
“The constant and rigorous emphasis placed on the female body in societies across the world tells us two things: One, our bodies are something that we should hide, and paradoxically two, our bodies are something that is constantly on display. The presence of surveillance cameras in public or private spaces – hidden or otherwise – encapsulates this dichotomy perfectly. […] When it comes to spaces that tend to be male-dominated, your crime is the presence of your body, and the camera is, by extension, justified in capturing what you are supposed to hide,” — Richa Kaul Padte
Indiscriminate sharing of personal information poses several risks, according to Mahilkar: “Our digital footprints are the traces we leave behind when we use the internet, from surfing the web to shopping. Almost all websites require our personal information before granting access to their services. These sites often store cookies, save our personal information, and later use it for various purposes. Much of the time this information is not encrypted and can be accessed by anyone, and such mishandling of personal information can lead to serious consequences. If you are a woman, the same patriarchal norms and objectification follow you online as they do offline.” The question, then, is what users are agreeing to when they accept the terms and conditions buried in the fine print.
“Facebook will essentially own everything that goes into creating and managing the account, even after you have deleted it: information and content you provide, content that is created or shared, and messages or communications with others. This can include information in or about the content you provide (e.g. metadata), such as the location of a photo or the date a file was created. It can also include what you see through features they provide, such as the camera, so they can do things like suggest masks and filters you might like or offer you tips on using camera formats,” says Mahilkar.
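The ‘metadata’ this policy refers to is easy to inspect for yourself. The short sketch below, using the Pillow imaging library, reads the EXIF tags embedded in an ordinary JPEG, including the creation date and the GPS coordinates most phone cameras write automatically; the filename is a placeholder.

```python
# Inspect the metadata that travels with an ordinary photo.
# Requires Pillow; "holiday_photo.jpg" is a hypothetical filename.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

image = Image.open("holiday_photo.jpg")
exif = image.getexif()

# Top-level tags: camera make and model, creation date, and so on.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# GPS data sits in a nested directory of its own (tag 0x8825).
for tag_id, value in exif.get_ifd(0x8825).items():
    print(f"{GPSTAGS.get(tag_id, tag_id)}: {value}")
```

Platforms generally strip these fields from the copies other users can download, but, as the policy above makes clear, the service itself receives them at upload.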
Besides this, Facebook also has access to users’ networks and connections, usage data such as transactions made on its products, device information, and information from third-party partners. User profiling is invasive, to say the least. Mahilkar further highlights, “Every profile has a standard set of profile information associated with it, for example name, country and email address, and can contain one or more albums and posts.”
So are the accusations that Facebook used facial recognition in the #10YearChallenge for data mining justified? According to Mahilkar, “The possibility of data mining cannot be denied when it comes to these kinds of trends. According to O’Neil, the #10YearChallenge provides perfect raw material for training a facial recognition algorithm. The then-and-now photographs are ideal for such an exercise because, among other factors, they are labelled in a way that makes recognition easier and they are clear shots of people. Facebook has denied any such data mining, but the truth is that Cambridge Analytica harvested the data of more than 87 million Facebook users, 70 million of them US citizens, which was used to build a powerful software programme to predict and influence voter choices in the US elections.”
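O’Neil’s point about labelling can be restated concretely. A supervised age-progression model needs pairs of faces of the same person taken a known number of years apart, and a meme that asks users to post exactly that, with the years often stated in the caption, yields ready-made training rows. The sketch below is purely hypothetical; the post structure and field names are invented to show the shape of such a dataset, not any platform’s actual pipeline.

```python
# Purely hypothetical sketch: what then-and-now challenge posts look
# like as supervised training data for an age-progression model.
from dataclasses import dataclass

@dataclass
class ChallengePost:
    user_id: str
    photo_then: str   # path or URL of the older photo
    photo_now: str    # path or URL of the recent photo
    year_then: int    # often stated right in the caption
    year_now: int

posts = [
    ChallengePost("user_1", "u1_2009.jpg", "u1_2019.jpg", 2009, 2019),
    ChallengePost("user_2", "u2_2008.jpg", "u2_2018.jpg", 2008, 2018),
]

# Each post collapses into a cleanly labelled (input, target, time gap)
# row: same subject, clear frontal shots, and the elapsed years
# supplied by the users themselves.
training_rows = [
    (p.photo_then, p.photo_now, p.year_now - p.year_then) for p in posts
]
for row in training_rows:
    print(row)
```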
According to a report in The Independent, Delhi Police used facial recognition technology to trace 3,000 missing children in just four days. During the anti-CAA protests, it was reported that the Delhi Police used the same technology to screen ‘habitual protestors.’ Like any other technology, facial recognition is just software, but the way our society is deploying this tool makes it both extensive and intrusive.
“For surveillance of individuals, it could use our faces and information online for algorithmic mining. It has fundamental issues: it could be used to discriminate against people based on their age progression. It is possible that age-progression recognition technology could be used in healthcare and health insurance assessments; for example, if you age faster than your cohort, perhaps you’re not a very good insurance risk. You may have to pay more or be denied coverage.” — Shraddha Mahilkar, Policy and Advocacy Lead, Internet Democracy
Originally published in Akademi Mag.