Surveillance Tech Panel at Virtual Progress 2020

This speech was given on a panel about surveillance tech at the Centre for Australian Progress' conference which was delivered virtually due to COVID.
Hi everyone. I am joining you from the unceded lands of the Gadigal people of the Eora Nation. I would also like to pay my respects to elders past and present. My name is Dhaksh Sooriyakumaran. I am a Tamil person and the Human Rights and Racial Justice Director at the Australasian Centre for Corporate Responsibility. ACCR is a research and shareholder activist organisation, focused on holding listed companies to account on how they manage climate, labour, and human rights issues.
So last year I was here at Progress speaking on a panel titled Dismantling Progressive White Supremacy. And this time around I’m here to discuss surveillance technologies and how they impact marginalised groups. But thinking about this topic more and more, I found myself coming back to wanting to talk again about dismantling progressive white supremacy.
Let me explain.
But before I get to that I just want to tell you about the type of technology I am focused on in my work. We look at surveillance technology being deployed to police borders. This includes AI-enabled monitoring and detection, biometrics, smart borders, and the use of phone and social media tracking. This is a huge and growing industry in which tech giants (such as Amazon and IBM) and the world’s biggest weapons companies (such as Lockheed Martin, Thales, and Airbus) are coming together to essentially automate the border - a very terrifying prospect given that border policing is increasingly militarised.
How does this relate to dismantling progressive white supremacy?
Firstly, some of the worst perpetrators of collecting data that could be weaponised against people seeking asylum and refugees are actually humanitarian organisations. UNHCR and the World Food Programme, for example, have developed their own biometric registration systems.
Secondly, organisations and individuals operating in the space of human rights and technology tend to focus on issues that affect more privileged communities, and therefore develop analyses that invisibilise white supremacy. What do I mean by this?
In the predominantly white elite human rights discourses, you hear people talk about, or acknowledge, racist algorithms, or that datasets are racially biased, or that teams developing tech are not diverse.
But what about the fact that the VERY PURPOSE for which these technologies exist is racialised surveillance and policing?
It’s not an accident or a coincidence that US Customs and Border Protection (CBP) was involved in policing protesters. The agency flew an unarmed Predator drone over demonstrations in Minneapolis - tech it usually deploys at the US border. And it’s because these technologies are effective at, and DESIGNED FOR, racialised surveillance and policing.
White elites in this space LOVE to talk about ethical AI, responsible AI, or principles for how AI can be aligned with human rights.
We know for a fact that the most dangerous AI is being developed by actors that operate in the military context. They have access to big budgets, and their very purpose is to cause harm, particularly to the most vulnerable groups in society, such as communities of colour, and more specifically people seeking asylum and refugees. They are very unlikely to adhere to any principles.
My plea to progressive campaigners and activists is that we need to re-orient towards the ACTUAL problems, and stop invisibilising the system of white supremacy that we are operating within and that we are all a part of.