With Hazel T. Biana at DLSU College of Liberal Arts, Paola Ricaurte at Tecnológico de Monterrey and Harvard University, Paulette Senior at the Canadian Women’s Foundation, and Benjamin Prud’homme at Mila – Quebec Artificial Intelligence Institute.

Artificial intelligence (AI) is moving at breakneck speed. What are its possible benefits for women and equity-seeking people? Can we implement AI in a way that doesn’t contribute to gender inequalities, harms, and injustices? How do we ensure no one is left out of the decisions and implementation of this technology?

Canadian Women’s Foundation President and CEO Paulette Senior joined a fascinating panel on AI and Reducing Gender Based Inequalities at the June Conference of Montreal, organized by the International Economic Forum of the Americas. You’ll hear clips of her and international experts speaking to the benefits and pitfalls we have to act on if we want to achieve gender justice and avoid the harms women and gender-diverse people around the world are at risk of in this massive technological tidal shift.

We start with Hazel T. Biana, Associate Professor at the Department of Philosophy at DLSU College of Liberal Arts. Then we move to Paola Ricaurte, Co-founder of Tierra Común, Associate Professor at Tecnológico de Monterrey and Faculty Associate at Berkman Klein Center for Internet & Society at Harvard University. Next is Paulette herself, followed by Benjamin Prud’homme, Executive Director of AI for Humanity at Mila – Quebec Artificial Intelligence Institute.

The panel itself was moderated by Naser Faruqui, Director of Education and Science at International Development Research Centre.

Transcript

00:00:04 Andrea

Artificial intelligence is heavy on our minds and on our newsfeeds. Some of the conversations about it are exciting, and some are downright scary. Does AI hold the promise of being used in a way that won’t create more gender inequalities?

I’m Andrea Gunraj from the Canadian Women’s Foundation.

Welcome to Alright, Now What? A podcast from the Canadian Women’s Foundation. We put an intersectional feminist lens on stories that make you wonder, “why is this still happening?” We explore systemic roots and strategies for change that will move us closer to the goal of gender justice.

The work of the Canadian Women’s Foundation and our partners takes place on traditional First Nations, Métis and Inuit territories. We are grateful for the opportunity to meet and work on this land. However, we recognize that land acknowledgements are not enough. We need to pursue truth, reconciliation, decolonization, and allyship in an ongoing effort to make right with all our relations.

00:01:10 Andrea

The development of artificial intelligence is happening at breakneck speed. It’s a fundamental shift we’re hearing a lot about, both good and bad. Its potential benefits rub up against its disruptive and destructive potential.

Exploring its impact on our lives – its impact on gender equality and injustices – is more urgent than ever.

What are AI’s possible benefits for women and equity-seeking people? Can we implement artificial intelligence in a way that doesn’t contribute to increased inequalities, harms, and injustices? How do we ensure that no one is left out of the decisions and implementation of this technology?

Canadian Women’s Foundation President and CEO Paulette Senior joined a fascinating panel on AI and Reducing Gender Based Inequalities at the June Conference of Montreal, organized by the International Economic Forum of the Americas. Today, you’ll hear clips of her and international experts speaking to the benefits and pitfalls we have to act on if we want to achieve gender justice and avoid the harms women and gender-diverse people around the world are at risk of in this massive technological tidal shift.

We start with Hazel T. Biana, Associate Professor at the Department of Philosophy at DLSU College of Liberal Arts. Then we move to Paola Ricaurte, Co-founder of Tierra Común, Associate Professor at Tecnológico de Monterrey and Faculty Associate at Berkman Klein Center for Internet & Society at Harvard University. Next is Paulette herself, followed by Benjamin Prud’homme, Executive Director of AI for Humanity at Mila – Quebec Artificial Intelligence Institute. The panel itself was moderated by Naser Faruqui, Director of Education and Science at International Development Research Centre.

00:03:12 Hazel

First of all, I come from Southeast Asia, from the Philippines. We have one of the most dangerous transport systems in the world, so when visitors come over, I tell them, “do not take public transit, take a cab; it’s safer.”

So you know, when we were doing some studies, trying to find out what women and vulnerable groups actually need in terms of transit, we realized that, in terms of the philosophy behind the technology that has been developed in the Philippines, most of these AIs tend to be victim-blaming, because they are not created by the women or the vulnerable groups themselves.

So what we decided is to go to the grassroots level. We talked to survivors of violence in transit, and we tried to see how AI could actually help deter these types of violence. So in terms of benefit, I think AI can serve as a way to make vulnerable groups safer and more empowered.

But at the same time, there is a risk. And one of the risks you could encounter concerns the type of data you gather, the data that these women willingly give. That’s the reason there’s a challenge to ensure that your cybersecurity experts and your UX designers have that type of feminist framework of thinking to inspire the type of AI that they are developing.

00:04:51 Paola

When we speak about innovation, we are speaking about a specific type of innovation that interests us: women’s innovation, the innovation that comes from communities, grassroots innovation. Because currently we have an innovation ecosystem that excludes women and, as Hazel was mentioning, communities from the capacity to innovate and to take and develop the tech that they need.

I belong, as was mentioned, to the Feminist AI Research Network, where I am leading the Latin American and Caribbean Hub. And in our project called “Incubating Feminist AI,” we are supporting multidisciplinary teams led by women that are developing technologies based on communities’ needs and addressing the problems that women face in our local contexts.

Unfortunately, Latin America is one of the regions with the highest rates of feminicide. And we are facing not only physical violence but, as you know, algorithmic violence too. We also come from a region with many communities that have been systematically excluded from every institutional opportunity. They are excluded from innovation, they are excluded from education, they are excluded from the opportunities that can lead them to a better life.

So we are currently supporting many projects, and I would love to talk about just four. EDIA is a product developed by Fundación Vía Libre in Argentina. It’s an NLP tool that can be used to democratize the auditing of algorithmic biases in Spanish, because currently AI doesn’t speak many languages. This tool helps identify not only gender biases, but also class biases and professional biases. So it’s a tool for understanding intersectional biases in general. We have another project called AymurAI, a tool developed by the NGO DataGénero to help the judicial system identify gender biases in the sentences handed down by the courts. So this tool helps women have access to justice.

We have another tool, developed by the PIT Policy Lab, called La Independiente. It’s an AI-based platform to make visible the work of women in the crowd-work economy. Many women in Latin America work as data labourers, but they are not visible to anyone, and AI is a technology that is based on human labour. Another project, by Técnicas Rudas, is working with an Indigenous community in the north of Mexico to develop an artificial intelligence tool for water management based on ancestral knowledge and forms of community organization.

So these are examples that show that technologies can be developed in a different way, considering the needs of the communities and developed by the communities, not by someone else. And of course, these tools aim to achieve gender equity and social justice.

00:08:10 Paulette

I’m really happy to be a part of this panel, and if you had asked me five, six, seven years ago whether I would be looking at the intersection of gender justice and digital justice or AI, I would have said not me. But it’s important that organizations like mine are part of this conversation, because we have certainly been on the forefront of addressing deep inequalities for a long time. One of them has been the gender pay gap, which has been persistent and has remained part of the struggle to achieve or move forward on gender equality. Among other things, we are also dealing with issues around gender-based violence.

We see that the opportunity AI offers is to be able to pinpoint, as an example, some of the areas where we could actually do further research, but to do it at probably frightening speed compared to what we have in place now, and to be able to pinpoint exactly where the barriers are to achieving or narrowing the gap.

And to do that in a way that is not in this sort of binary of men and women, but across the intersections of identity that we know. For example, Indigenous women or racialized women experience much wider gaps when it comes to any injustice, but certainly in terms of the gender pay gap as well. That could be one of the ways we could actually utilize AI to help us achieve gender justice.

00:09:59 Benjamin

I would say at least two things. The first one is, if we in a way take a step back and look at AI as somewhat of a revolution or a very significant moment in time, I think the first opportunity is just not to repeat mistakes, which would be the very least, but perhaps to use this as a moment to do things differently. You know, one of the things we need not to repeat is making inclusion a sideline of the conversation.

To give you an example, and my intention is absolutely not to throw blame, but in fact to create a moment of reflection: the panel before us was a panel about economic growth related to technology. The house was full. The panel here, the one about inclusion, has a bit fewer people. The panel before us was also all male. So I think we need to remind ourselves that inclusion lives in the values you embody, and so it should really live throughout all conversations. That’s the first opportunity about AI: if we get it right and we actually create the right tables with the right people and the right organizations, I think that’s opportunity number one.

Then I think opportunity number two is, yes, concrete AI projects that support the voices of communities, and in this particular case the voices of women, by putting them at the center of the projects. At Mila, I could give examples: we have a project called Biasly that aims at using AI to detect misogynistic biases in written texts; we have a project aimed at using AI to detect organized human trafficking activity; another one around modern slavery. And of course, as you could imagine, both human trafficking and modern slavery first and foremost affect women. So there are very concrete ways in which AI can support that work, but my main message is really to use this important conversation as an occasion to do better on that question.

00:11:58 Hazel

The benefit of that is that you can potentially prevent human trafficking. The risk is that you tend to implement certain gender stereotypes and profiling, which is sort of a way to enforce systems of domination and power.

So I think that’s one of the risks: while you can prevent things, you can also reinforce things. That’s one example. Another example, which I was talking about earlier, has to do with the type of data you collect from vulnerable groups. This is the data that they willingly give. Fine, you let them tick all these data privacy consent boxes. But do they really read them? Do they really know what information they’re giving? That type of risk is something we should be wary of. How do we control the data? Who holds the data? Who keeps it? Who manipulates it?

00:12:58 Paola

I would say that currently AI is reinforcing, as was mentioned before, systemic inequalities, and I think that’s the greatest risk: how AI is used to reinforce these historical, economic, social, cultural, knowledge, and also environmental inequalities. We need to prevent AI from continuing to be a technology of privilege. We don’t need technologies that are controlled by only a small group of actors. We don’t need technologies that surveil and control women’s bodies. We don’t need technologies that are used to monitor and profile marginalized people, for example, refugees or people on the move.

00:13:52 Paulette

You know, as my co-panellists speak, what comes to mind for me is that those who collect the data, hold the data, and manipulate the data hold the power. What we’re really dealing with here is even further deepened inequities on top of the inequities that already existed. So it’s really replicating, if not widening, the gap in terms of the inequities. We see it in surveillance data; we know in North America it’s being used in different ways. We hear that in the States it’s being used in terms of how judges make decisions around who does and doesn’t get bail. Pretty scary. We know what the results are. We know who the folks in prison the most are: racialized people, poor people, people with mental health issues.

Collection of the data is already inequitable in and of itself because of who’s doing the collection and then who gets to manipulate it. We know that women are not at the table in the same numbers at the point of development, or even in deciding the parameters of the collection of the data. The same goes for racialized folks and people with disabilities. Yet AI is being considered as a way to address some of these inequities.

So we’re really using an inequitable process to address something that’s already inequitable. It’s not going to end well. I think the risks can become insurmountable if we are not taking stock, taking pause. I mean, those who are much more educated on this than the rest of the population are asking for a pause, and that causes me not just pause; it scares the living daylights out of me.

So what do we really need to put in place in terms of safeguards, some guardrails around the development of the data? We need to really question the use of the data, the ownership of the data, as well as how that data is being collected, and have folks at the table in order to mitigate, prevent, if not disrupt, that process.

00:16:23 Andrea

Alright, Now What? This AI and Reducing Gender Based Inequalities panel is a rich discussion, and you just got a small taste of it. Watch the full thing on the YouTube channel of the International Economic Forum of the Americas.

Please listen, subscribe, rate, and review this podcast. If you appreciate this content, please consider becoming a monthly donor to the Canadian Women’s Foundation. People like you will make the goal of gender justice a reality. Visit canadianwomen.org to give today and thank you for your tireless support.