Not My AI: A feminist framework to challenge algorithmic decision-making systems deployed by the public sector

Researcher: Coding Rights

Throughout the Latin American region, governments are in the process of testing and piloting a wide variety of artificial intelligence (AI) systems to deliver public services. But what are the feminist and human rights implications?

As machines are designed and operated by the very same humans in power, these AI systems are most likely to cause or propagate harm and discrimination based on gender and all its intersectionalities of race, class, sexuality, age, territoriality, etc., therefore posing worrisome trends that should be of concern to feminist movements.

Image: Oppressive A.I.: Feminist Categories to Understand its Political Effects
About this research

Taking Latin America as a point of departure, as it is where we originate from, both as researchers and feminists, this research seeks to contribute to the development of an anti-colonial feminist framework to question artificial intelligence (AI) systems that are being deployed by the public sector, particularly those focused on social welfare programmes. Our ultimate goal is to develop arguments that enable us to build bridges for advocacy with different human rights groups, particularly feminist and LGBTIQ+ groups, especially, but not only, in Latin America. We hope that, collectively, we can foster conversations towards an overarching anti-colonial feminist critique to address governmental trends of adopting AI systems that are not only disregarding human rights implications but are also, once again, replicating hetero-patriarchy, white supremacy and colonialism through neoliberal techno-solutionist narratives exported to the world by Silicon Valley.

Introduction


Amid the hype around artificial intelligence (AI), we are observing a world where states are increasingly adopting algorithmic decision-making systems as a magic wand that promises to “solve” social, economic, environmental and political problems. As if machines were able to erase societal biases and structural inequalities, instead of just automating them, we are gradually observing states using narratives around tech innovation to spend public resources in questionable ways, share sensitive citizen data with private companies and, ultimately, dismiss any attempt at a collective, democratic and transparent response to core societal challenges.

Latin America is no exception. Throughout the region, governments are testing and piloting a wide variety of AI systems to deliver public services. In an initial mapping exercise, we identified five trending areas: education, the judicial system, policing, public health and social benefits. Among these trends, we decided to focus our case-based analysis on AI projects applied at the overlap of education and the distribution of social benefits. What are the feminist and human rights implications of using algorithmic decision-making to determine the provision of social benefits and other public services? As machines are designed and operated by the very same humans in power, these AI systems are most likely to cause or propagate harm and discrimination based on gender and all its intersectionalities of race, class, sexuality, age and territoriality, therefore posing worrisome trends that should be of concern to feminist movements.

Taking Latin America as a point of departure, as it is where we, both as researchers and feminists, originate from, this investigation seeks to contribute to the development of an anti-colonial feminist framework to question AI systems that are being deployed by the public sector, particularly those focused on social welfare programmes. Our ultimate goal is to develop arguments that enable us to build bridges for advocacy with different human rights groups, particularly feminist and LGBTIQ+ groups, especially, but not only, in Latin America. We hope that, collectively, we can foster conversations towards an overarching anti-colonial feminist critique to address governmental trends of adopting AI systems that are not only disregarding human rights but are also, once again, replicating hetero-patriarchy, white supremacy and colonialism through neoliberal techno-solutionist narratives exported to the world by Silicon Valley.

This article is the result of research conducted by the authors in close collaboration with the Feminist Internet Research Network (FIRN) and currently forms the core structure of the notmy.ai platform. The platform continues to be developed with the goal of increasing critical thinking through a series of conversations around the development of a feminist toolkit to question algorithmic decision-making systems that are being deployed by the public sector. Going beyond the liberal approach of human rights, feminist theories and practices build political structures for us to imagine other worlds based on solidarity, equity and social-environmental justice. As AI gradually pervades many issues that are at the core of feminist agendas, supporting feminist movements to understand the development of these emerging technologies becomes key to fighting automated social injustice and imagining feminist futures. Therefore, this report seeks to bring feminist movements closer to the social and political problems that many algorithmic decisions carry with them. To that end, we start by posing three research questions:

  • What are the leading causes of governments in Latin America implementing AI and other algorithmic decision-making processes to address issues of public services?
  • What are the critical implications of such technologies for the enforcement of gender equality, cultural diversity, and sexual and reproductive rights?
  • How can we learn from feminist theories to provide guidelines to balance the power dynamics enforced by the use of AI and other algorithmic decision-making systems?

To address them, this text is divided into four sections. We start by addressing the overarching question of this work: Why is AI a feminist issue? We want to address this inquiry empirically, starting from an initial mapping of AI systems being deployed by the public sector in Chile, Brazil, Argentina, Colombia, Mexico and Uruguay to determine the provision of social benefits and other public services, but which, in practice, are more likely to cause harm and challenge feminist agendas. Then we review critical thinking around AI used in so-called digital welfare systems, towards drafting a feminist framework to grasp what would constitute an oppressive AI. We then dig deeper into two cases in which AI is being deployed in the distribution of social benefits and in educational systems in the region: the Childhood Alert System in Chile and a system to predict school dropouts and teenage pregnancy developed with Microsoft Azure in partnership with governments in Argentina and Brazil. These case analyses take an anti-colonial feminist approach, and not only human rights, as one of the starting points to interrogate algorithmic decisions, and serve as a test of the oppressive AI framework, drafted as a set of empirical feminist categories to understand the power dynamics behind automated decision-making systems. The report ends with considerations about the next steps of notmy.ai towards using the oppressive AI framework as a first tool to expand conversations about the feminist implications of deploying AI systems. In addition, more positively, the report concludes with the potential of hacking oppression by envisioning transfeminist technologies through feminist values that were brainstormed in a series of workshops conducted with the Oracle for Transfeminist Technologies. In this way, we can foresee the power of conversations that playfully envision speculative transfeminist technologies as a tool to take us from imagination to action.

Why is AI a feminist issue?


Many states around the world are increasingly using algorithmic decision-making tools to determine the distribution of goods and services, including education, public health services, policing and housing, among others. Referring to the term “Digital Welfare States”, the former United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, has criticised the phenomenon in which “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish.” Particularly in the United States, where some of these projects have been developed beyond pilot phases, AI programmes deployed in public services have faced criticism on several fronts when confronted with the evidence of bias and harm caused by automated decisions. More recently, governments in Latin America are also following this hype, sometimes with the support of US companies that are using the region as a laboratory for ideas which, perhaps to avoid criticism in their home countries, are not even tested in the US first. With the goal of building a case-based, anti-colonial feminist critique to question these systems from perspectives that go beyond the well-established criticisms coming from the global North, through desk research and a questionnaire distributed across digital rights networks in the region, we have mapped projects where algorithmic decision-making systems are being deployed by governments with likely harmful implications for gender equality and all its intersectionalities. As Tendayi Achiume, Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, notes in the report “Racial discrimination and emerging digital technologies”, the databases used in these systems are the product of human design and can be biased in various ways, potentially leading to intentional or unintentional discrimination against or exclusion of certain populations, in particular minorities, on the basis of racial, ethnic, religious and gender identity.

As a result, as of April 2021, we had mapped 24 cases with likely harmful implications for gender equality and all its intersectionalities in Chile, Brazil, Argentina, Colombia, Mexico and Uruguay, which we were able to classify into five categories: judicial system, education, policing, social benefits and public health. Several of them are at an early stage of deployment or were developed as pilots.

It is important to highlight that this mapping was not intended to present a comprehensive record of all existing cases of AI deployed by the public sector in Latin America that might have such harmful implications. That is a particularly difficult task, especially considering the lack of transparency around these projects in many of our countries and the very common press announcements full of shiny promises that are then difficult to follow up on through other channels. This is why we left an open form at notmy.ai: to continue collecting information on new projects and possible harms. Nevertheless, above all, our mapping had a more modest goal, which was to point to general trends in the areas of application and to collect evidence showing that AI in the public sector is already a reality in the region, one which demands critical opinion and awareness raising.