Laws are like pancakes



Hi hi hello everyone — we’ve just wrapped up our podcast series on FAccT. In case you weren’t aware that this series even existed and you now feel woefully behind, here’s a quick rundown: First I spoke to Andrew Strait about our favourite papers presented at the conference; it was a great chat and a good overview of what FAccT even is. Then I interviewed the authors of three of my favourite papers…


Tech as an intermediary between people & police

— By Georgia Iacovou

Many of you may well be aware of the ongoing relationship that Ring (owned by Amazon) has with police departments. Ring make home security systems: e.g. a doorbell that doubles as a surveillance camera and connects to your phone. Owners of these cameras can then get on an app together, discuss who they see passing by their front doors, and enable each other’s paranoia. More critically: Ring will also train police to sell their devices to citizens, and until recently, police could request footage from Ring owners without a warrant, via an in-app mechanism called Request for Assistance (RFA).

What we’re looking at here is a silent connection between communities and law enforcement, made possible by the adoption of a new technology — if you can call it new. In last week’s podcast, Alix interviewed Marta Ziosi and Dasha Pruss about their paper: Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. In it they discuss ShotSpotter, an audio surveillance system which automatically detects gunfire. The interesting thing here is not that police departments spend money on deploying these kinds of technologies, but that they tout them as addressing community needs. The authors note that sometimes tech like this is used in response to an erosion of public trust; a kind of ‘anything will do’ scramble for a technological solution to a social problem.

So in this sense, a piece of carceral tech that is always on, alerts the police in real time when it detects wrongdoing, and (crucially) is installed disproportionately in areas home to marginalised groups can be flipped on its head and framed as an accountability system: because it detects gunshots, community members can, technically, scrutinise the data too, and catch instances where police have fired their guns. This is, at best, a whimsical attempt at transparency and accountability.

Predictive systems like these are sold as a means of keeping people safe, or even as a way to be fair and objective, and they allow those with power to pass off transparency as something that guarantees accountability. In First Law, Bad Law, Alix interviewed Lara Groves and Jacob Metcalf about New York City’s Local Law 144, which states that any employer using algorithms in their hiring process must conduct independent algorithmic bias audits and publish the results on their websites. Jacob points out that “the local law 144 is a transparency law only […] there’s nothing in the law that says that you can’t have the most discriminatory algorithm possible.” That gap led Alix to compare laws to pancakes (the first one is often… not great).

The undercurrent of these two issues is the techno-optimist urge to build: the idea that if we use enough technology, we can identify biases, bring criminals to justice, and hold powerful actors accountable, all at the touch of a button. Almost as if, within technology at least, you can only be positive if you’re being additive. In our episode about abandoning algorithms, Nari Johnson and Sanika Moharana discuss framing abandonment as a positive thing: a developer might be part way through building out an algorithmic system, realise that it may actually cause harm, and so the positive move would be to stop development altogether.

They also discuss disgorgement, which is a word it’s hard not to get excited about: this is where an algorithm is not just ‘turned off’, but every instance of its use is scrubbed away. It’s interesting to consider how this might be approached with the technologies touched on above: who enforces the disgorgement of ShotSpotter? Or of an algorithm that screens CVs? And how do we decide who or what replaces those systems, if anything?


We also put out a trailer for our next miniseries

It’s called Exhibit X and it’s about the ways you can — and maybe can’t — use litigation and the courts as a meaningful mechanism to hold tech firms accountable for their harms. You can find the trailer here or on your favourite podcast platform.

As usual, we always want feedback on everything we make; if you have any thoughts to share on our podcasts we would love to hear them. Are the episodes clear? Are you learning anything? Are there any issues you want to see covered? Hit reply and let me know!

Alix


If this was forwarded to you, sign up here.

