March 3 – 10, 2021
Abstract
The fourth annual ACM FAccT conference is organised by the Association for Computing Machinery. Radhika Radhakrishnan from the Internet Democracy Project has been invited to conduct a tutorial on ‘AI on the Ground Approach’ along with other experts.
ACM FAccT is an interdisciplinary conference dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. Research challenges are not limited to technological solutions regarding potential bias but include the question of whether decisions should be outsourced to data- and code-driven computing systems. We particularly seek to evaluate technical solutions with respect to existing problems, reflecting upon their benefits and risks; to address pivotal questions about economic incentive structures, perverse implications, distribution of power, and redistribution of welfare; and to ground research on fairness, accountability, and transparency in existing legal requirements.
The conference will bring together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
Radhika Radhakrishnan will be presenting a tutorial on ‘AI on the Ground Approach’ along with other experts in the field, where they will discuss their experiences and learnings as global South ethnographers studying AI.
AI plays an increasingly prominent role in modern society, as decisions that were once made by humans are now delegated to automated systems. These systems currently decide who receives bank loans, who is incarcerated, and who is hired, and it is not hard to envision that they will soon underpin most of society’s decision infrastructure. Despite the high stakes involved, there is still a lack of formal understanding of some basic properties of such systems, including issues of fairness, accountability, and transparency. In particular, we do not yet fully understand how to design systems that abide by the decision constraints agreed upon by society, including how to avoid perpetuating past prejudicial practices and reinforcing existing biases (which may be present in the training data). The growing interest in these issues has produced a number of criteria that attempt to account for unfairness, but choosing the metric a system must satisfy to be deemed fair remains elusive, and the choice is almost invariably made in an arbitrary fashion, without much justification or rationale.
The tutorial aims to fill this gap by providing a mathematical framework, based on causal inference, that helps system designers choose a criterion that optimizes fairness in a principled and transparent fashion.
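To make the metric-choice problem concrete, the sketch below (our own illustration, not part of the tutorial materials) computes two widely used group-fairness criteria, demographic parity and equal true-positive rates (one component of equalized odds), on entirely made-up toy predictions. The group labels and numbers are hypothetical; the point is simply that the same predictions can satisfy one criterion while violating another.

```python
# Hypothetical illustration: two common group-fairness criteria can disagree
# on the same predictions, which is why metric choice needs justification.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between groups A and B."""
    rate_a = sum(p for p, g in zip(y_pred, group) if g == "A") / group.count("A")
    rate_b = sum(p for p, g in zip(y_pred, group) if g == "B") / group.count("B")
    return abs(rate_a - rate_b)

def true_positive_rate_gap(y_true, y_pred, group):
    """Difference in true-positive rates (one half of equalized odds)."""
    def tpr(g):
        positives = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 1]
        return sum(p for _, p in positives) / len(positives)
    return abs(tpr("A") - tpr("B"))

# Toy data (entirely made up): 1 = favourable decision, groups "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("True positive rate gap:", true_positive_rate_gap(y_true, y_pred, group))
```

On this toy data the demographic parity gap is zero while the true-positive-rate gap is not, so a system deemed fair under one criterion would fail the other; an explicit rationale for the chosen criterion is therefore unavoidable.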
Tutorial — ‘AI on the Ground Approach’
Date — March 4, 2021
Time — 09:30 PM to 10:30 PM (IST)
Mode — Virtual
Presenters — Radhika Radhakrishnan (Internet Democracy Project), Noopur Raval (NYU), Ranjit Singh (Data & Society), and Vidushi Marda (Article19)
Topics to be covered in the tutorial:
1. The politics of studying ‘up’: negotiating power, establishing expertise, and gaining access to AI/ML developers and experts.
2. How to navigate trade secrets, institutional secrecy, non-disclosure agreements, threats, and liability issues while attempting to study proprietary and public-domain AI systems.
3. How to study, map, and trace highly distributed, networked systems (imbrications), where to collect data within them, and what kinds of visibilities and invisibilities accompany AI ethnographies as a result of those choices.
4. A discussion on the politics of AI ethnography: what kinds of relationships do such researchers have to build and maintain with individuals, government departments, and corporations, and what implications does such positionality have on their ability to produce impactful knowledge?
Check out the agenda here.
To register, please click here.