India’s data protection draft ignores key next-generation rights

by Nayantara Ranganathan

(This article was first published in Asia Times.)

In the last week of July, two events happened in rapid succession: the release of India’s much-awaited draft data-protection legislation, and a breaking news story that Watson, IBM’s computing system that helps physicians recommend individualized cancer treatments, was offering “unsafe and incorrect” options. Interestingly, the report accompanying the draft bill cited the tie-up between Manipal Hospitals, one of India’s largest hospital chains, and IBM Watson as an illustration of the benefits that “artificial intelligence” can bring us.

Now, if a cancer patient were circumspect about getting symptoms analyzed by IBM Watson, he or she could pursue treatment at a hospital that has not tied up with machine-learning platforms to devise treatment plans. In doing so, the patient would be doing something critical in this context: exercising the choice not to be subject to IBM Watson.

Between the hype-fueled adoption of machine learning and artificial intelligence in governance on the one hand, and the unfathomable nature of their harms on the other, lies a crucial issue that has so far been overlooked: the individual’s ability to choose.

The question to ask, therefore, is whether the draft bill and its accompanying report, prepared by the Srikrishna Committee set up to study data protection and privacy in India, provide safeguards such as opt-outs or explanations to people. The short answer is no, but the explanation of why is a long one.

A few articulations of rights under the European Union’s General Data Protection Regulation (GDPR) are useful starting points, but the draft bill has summarily dismissed them, and the reasons the Srikrishna Committee offers for doing so are hollow.

The next generation of rights

The GDPR is the current global gold standard for privacy laws, and it contains a bundle of rights that address the asymmetry of power between people and the entities involved in large-scale data processing. As it has been in effect for only a little over two months, it remains to be seen to what extent these rights empower people in practice.

Right to explanation

This right allows people to require that the logic behind decisions made by automated systems be explained to them. Quite like how “consent” has been treated in the Indian report, a right to explanation also has normative value, even if there are challenges to its practical operationalization.

The right to explanation is an important tool when it comes to philosophical as well as practical considerations of assigning responsibility for the consequences of automated systems. It also allows for assumptions and biases embedded in the design of systems to be examined.

Given state and central governments’ predisposition to rely on “data-driven” modes of governance, it would be a self-aware move to introduce such a right.

The right to object

The GDPR allows people to object to the processing of their personal data when it is done on the grounds of “public interest” or “legitimate interest.” So, for example, if a municipal corporation decided to use a facial-recognition system to detect the moods of the residents of a particular colony in order to gauge citizen satisfaction, one could object.

The right to object to processing for the purpose of direct marketing

Under this right, for example, one may object to being profiled by age, perceived gender and political views as “38, female, holds leftist views” for the purpose of being targeted with advertisements. This right could arguably make the kind of targeting that Cambridge Analytica carried out through Facebook much harder.

Right to object to decisions solely based on automated processing

This right allows one to object to being the subject of a decision based solely on automated processing if that decision affects his or her rights. For example, if a schoolteacher were to lose her livelihood on the basis of a decision arrived at through a computing system without significant human input, she would have the right to object to such processing.

Instead of an across-the-board ban, the EU regulation allows subjective factors, such as the vulnerability of the individual, the intrusiveness of the profiling, and the person’s expectations, to be taken into account.

Passing the buck

The Srikrishna Committee report states that the individual is not the best level at which to deal with the problems of large-scale data processing; the responsibility should instead lie with the data fiduciary and the Data Protection Authority, as that is “more efficacious.”

The report also states that the onus of protecting people lies with the relevant legislation under which such processing is happening, and supplies the example of the Unique Identification Authority of India processing core biometric data under the Aadhaar Act. However, it is important to note that the Aadhaar Act in its current form does not allow for transparency or control over the processing of personal data by the UIDAI. Such outsourcing of responsibility would therefore do nothing to protect people and their data.

The data-protection bill discards these emerging articulations of the right to privacy and instead claims that the problem is solved through an “ex-ante accountability” framework. But an ex-ante accountability framework cannot exercise choices on behalf of people with regard to their individual circumstances. Privacy, after all, is relative, and means different things to different people at different times.

Conclusion

By failing to vest people with this new generation of rights, the bill does not equip them with negotiating power in the age of big data and AI. The report places far too much faith in state institutions and sets up a self-fulfilling prophecy by deeming the individual an inappropriate unit at which to tackle these harms. It doesn’t have to be this way.