Laws are like pancakes


Hi hi hello everyone — we’ve just wrapped up our podcast series on FAccT. In case you weren’t aware that this series even existed and you now feel woefully behind, here’s a quick rundown: First I spoke to Andrew Strait about our favourite papers presented at the conference; it was a great chat and a good overview of what FAccT even is. Then I interviewed the authors of three of my favourite papers…


Tech as an intermediary between people & police

— By Georgia Iacovou

Many of you may well be aware of the ongoing relationship that Ring (owned by Amazon) has with police departments. Ring make home security systems: e.g. a doorbell that doubles as a surveillance camera and connects to your phone. Owners of these cameras can then get on an app together and discuss who they see passing by their front doors, and enable each other’s paranoia. More critically: Ring will also train police to sell their devices to citizens, and until recently, police could request footage from Ring owners without a warrant, via an in-app mechanism called Request for Assistance (RFA).

What we’re looking at here is a silent connection between communities and law enforcement, made possible by the adoption of a new technology — if you can call it new. In last week’s podcast, Alix interviewed Marta Ziosi and Dasha Pruss about their paper: Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. In it they discuss ShotSpotter, an audio surveillance system which automatically detects gunfire. The interesting thing here is not that police departments spend money on deploying these kinds of technologies, but that they tout them as addressing community needs. The authors note that sometimes tech like this is used in response to an erosion of public trust; a kind of ‘anything will do’ scramble for a technological solution to a social problem.

So in this sense, a piece of carceral tech that is always on, and alerts the police in real time if it detects wrongdoing (and, crucially, is installed disproportionately in areas home to marginalised groups), can be flipped on its head and framed as an accountability system: because it detects gunshots — which means, technically, community members can also scrutinise the data if they so choose, and catch instances where police have fired their guns. This is, at best, a whimsical attempt at transparency and accountability.

Predictive systems like these are sold as a means of keeping people safe, or even as a way to be fair and objective — and they allow those with power to pass off transparency as something that guarantees accountability. In First Law, Bad Law, Alix interviewed Lara Groves and Jacob Metcalf about a new New York City law, which states that any employers using algorithms in their hiring processes must conduct independent algorithmic bias audits, and publish these on their websites. Jacob points out that “the local law 144 is a transparency law only […] there's nothing in the law that says that you can't have the most discriminatory algorithm possible.” — leading Alix to compare laws to pancakes (the first one is often… not great).

The undercurrent of these two issues is the techno-optimist urge to build: the idea that if we use enough technology, we can identify biases, bring criminals to justice, and hold powerful actors accountable all at the touch of a button. Almost as if, within technology at least, you can only be positive if you’re being additive. In our episode about abandoning algorithms, Nari Johnson and Sanika Moharana discuss framing abandonment as a positive thing — where a developer might be part way through building out an algorithmic system and realise that actually, it may cause harm, and so the positive move would be to stop development altogether.

They also discuss disgorgement, which is a word that is hard not to get excited about: this is where an algorithm is not just ‘turned off’ but every instance of its use is scrubbed away. It’s interesting how this might be approached with the technologies touched on above — who enforces the disgorgement of ShotSpotter? Or an algorithm that screens CVs? And how do we decide who or what replaces those systems, if anything?


We also put out a trailer for our next mini series

It’s called Exhibit X and it’s about the ways you can — and maybe can’t — use litigation and the courts as a meaningful mechanism to hold tech firms accountable for their harms. You can find the trailer here or on your favourite podcast platform.

As usual, we always want feedback on everything we make — if you have any thoughts to share on our podcasts we would love to hear them. Are the episodes clear? Are you learning anything? Are there any issues you want to see covered? Hit reply and let me know!

Alix


If this was forwarded to you, sign up here.

Computer Says Maybe

A newsletter & podcast about AI and politics
