Laws are like pancakes

Hi hi hello everyone — we’ve just wrapped up our podcast series on FAccT. In case you weren’t aware that this series even existed and you now feel woefully behind, here’s a quick rundown: First I spoke to Andrew Strait about our favourite papers presented at the conference; it was a great chat and a good overview of what FAccT even is. Then I interviewed the authors of three of my favourite papers…


Tech as an intermediary between people & police

— By Georgia Iacovou

Many of you may well be aware of the ongoing relationship that Ring (owned by Amazon) has with police departments. Ring make home security systems: e.g. a doorbell that is also a surveillance camera, and connects to your phone. Owners of these cameras can then get on an app together and discuss who they see passing by their front doors, and enable each other’s paranoia. More critically: Ring will also train police to sell their devices to citizens, and until recently, police could request footage from Ring owners without a warrant, via an in-app mechanism called Request for Assistance (RFA).

What we’re looking at here is a silent connection between communities and law enforcement, made possible by the adoption of a new technology — if you can call it new. In last week’s podcast, Alix interviewed Marta Ziosi and Dasha Pruss about their paper: Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. In it they discuss ShotSpotter, an audio surveillance system which automatically detects gunfire. The interesting thing here is not that police departments spend money on deploying these kinds of technologies, but that they tout them as addressing community needs. The authors note that sometimes tech like this is used in response to an erosion of public trust; a kind of ‘anything will do’ scramble for a technological solution to a social problem.

So in this sense, a piece of carceral tech that is always on, alerts the police in real time when it detects wrongdoing, and, crucially, is installed disproportionately in areas home to marginalised groups, can be flipped on its head and framed as an accountability system: because it detects gunshots, community members can, technically, scrutinise the data if they so choose and catch instances where police have fired their guns. This is, at best, a whimsical attempt at transparency and accountability.

Predictive systems like these are sold as a means of keeping people safe, or even as a way to be fair and objective — and they allow those with power to pass off transparency as something that guarantees accountability. In First Law, Bad Law, Alix interviewed Lara Groves and Jacob Metcalf about a new law in New York City, Local Law 144, which requires any employer using algorithms in their hiring process to conduct independent algorithmic bias audits and publish them on their website. Jacob points out that “the local law 144 is a transparency law only […] there's nothing in the law that says that you can't have the most discriminatory algorithm possible” — leading Alix to compare laws to pancakes (the first one is often… not great).

The undercurrent of these two issues is the techno-optimist urge to build: the idea that if we use enough technology, we can identify biases, bring criminals to justice, and hold powerful actors accountable all at the touch of a button. Almost as if, within technology at least, you can only be positive if you’re being additive. In our episode about abandoning algorithms, Nari Johnson and Sanika Moharana discuss framing abandonment as a positive thing — where a developer might be part way through building out an algorithmic system and realise that actually, it may cause harm, and so the positive move would be to stop development altogether.

They also discuss disgorgement, which is a word that is hard not to get excited about: this is where an algorithm is not just ‘turned off’, but every instance of its use is scrubbed away. It’s interesting to consider how this might be approached with the technologies touched on above — who enforces the disgorgement of ShotSpotter? Or of an algorithm that screens CVs? And how do we decide who or what replaces those systems, if anything?


We also put out a trailer for our next miniseries

It’s called Exhibit X and it’s about the ways you can — and maybe can’t — use litigation and the courts as a meaningful mechanism to hold tech firms accountable for their harms. You can find the trailer here or on your favourite podcast platform.

As usual, we always want feedback on everything we make — if you have any thoughts to share on our podcasts, we would love to hear them. Are the episodes clear? Are you learning anything? Are there any issues you want to see covered? Hit reply and let me know!

Alix


If this was forwarded to you, sign up here.

