AI, what the hell is it?

Hi Reader,

People use the term AI to mean a million different things, and for loads of reasons: some people use it to point at the combined trajectory of new technologies; others use it to raise money from salivating investors who will fund anything that has ‘AI’ in its name; and some use it to sound smart at a time when it feels terrifying not to be in the loop of the new.

But time and again, friends and colleagues have pointed to the fact that it’s not actually a thing. It’s many different things grouped together — and in a lot of cases, it’s just an imaginary set of capabilities.

If this bugs you, I have a request.

Over the next few weeks, I want to put something together that outlines the various ways people define AI, and explore the use of AI as a political term.

If you’re game, I’d love to hear you wax poetic about how you would define AI. What the hell does it mean? What about the term annoys you? When do you find it useful? I would love to hear from you.

👉 Feel free to record your intellectual ramblings here, or just hit reply and write up your rough thoughts.

Please know that this is kind of an experiment, so don’t feel like you have to share something perfect — what we do with your responses depends on how many of you respond. Ideally I really want to cut together a meaty montage of your thoughts and ideas to use in future podcast episodes. That way everyone can share in the kaleidoscope of knowledge!


What we’ve been looking at this month

Microsoft has just paid $650m to Inflection AI to license their models on Azure. They have also ‘hired’ (poached) most of the Inflection team into Microsoft, including the co-founders. Before we get into the grittier details of this bizarre and unapologetic deal, it’s important to note that all the executives involved here represent a bottomless brunch of men who also co-founded other important tech companies, like LinkedIn and DeepMind.

Satya Nadella characterises the deal like this: “Several members of the Inflection team have chosen to join Mustafa and Karén at Microsoft” — a soft choice of words, as is customary with these kinds of announcements. I’m struggling to imagine a scenario where Inflection employees wouldn’t opt to move over to a tech giant with unlimited resources and consistent revenues and — just a guess — larger salaries. Furthermore, most of the chat coming from the executives who remain at Inflection is about the financial upside for them and their investors. The Information has also revealed that $30m of the deal money was given straight to Inflection to ensure that they wouldn’t sue Microsoft for poaching their staff. Which feels like an important detail.

This is not a story about innovation; it’s one of consolidation and growth. Giants like Microsoft are basically just succubi for tech and talent. It’s incredibly hard to detangle the products they’ve made themselves from the competition that they’ve absorbed: they acquired GitHub in 2018 and then struck a $1bn deal with OpenAI just a year later, giving them exclusive licensing rights to GPT-3. This beastly marriage of acquisitions brought us GitHub Copilot — the tool that auto-fills code from natural language. Please, feel free to confuse this with Copilot, which is the name of Microsoft’s wider suite of generative AI tools. It’s hard to say there’s much happening here beyond the desire to calcify influence over the production of software, the deployment of cloud infrastructure, and the general idea that all of this is inevitable.

Inflection also received a $1.3bn investment from Microsoft (and Nvidia et al), partly to deliver what they promised to be “one of the largest AI training clusters in the world, comprising 22,000 Nvidia H100 GPUs”. The desperate push for more and more chips exemplifies the unholy allure of the generative AI craze. The promise of AGI — which at this point, is just a concept, let’s face it — has legitimised an unfortunate throng of events and attitudes: one being the increased production of high-end GPUs and data centres in order to ingest more training data from places such as a recently IPOed Reddit. Let’s not forget that cloud computing has now surpassed the airline industry in its yearly output of carbon emissions. The continued development of AI systems will only add to this, and it’s a pretty big price to pay for a technology that doesn’t even have a use case yet.

And now, the prevailing ‘infinite chip production is good, actually’ attitude has wandered delicately into the political sphere. The Biden administration has just announced a semiconductor initiative which will allegedly create tens of thousands of jobs in manufacturing chips. The perceived potential of generative AI is, perhaps predictably, being used as both a political springboard, and an excuse to funnel as much talent and resources as possible towards improving the tech’s capabilities and reputation.

Couple this with OpenAI’s push to fund the production of ‘good journalism’, and that puts us in a place where our creative and intellectual labour is only valued by how much it can improve and optimise AI systems; human flourishing will therefore be downstream from AI — shouldn’t it be the other way around?


Updates from Computer Says Maybe

This month we want to highlight a collection of essays just published by AI Now, entitled AI Nationalism(s): Global Industrial Policy Approaches to AI. These were edited by friends of Computer Says Maybe, Sarah Myers West and Amba Kak.

The essays serve as a comprehensive examination of AI as a pursuit of nationalism and industrial policy. One of the key questions they ask is why there has been a recent increase in governments’ procurement of AI products specifically, when these products may not be appropriate to solve the urgent problems of the day, such as the climate crisis or the funding of healthcare. We highly recommend reading any one of the essays if you want a thorough, well-researched long-read (which… why wouldn’t you?).

New Protagonist Network

We are providing media training for a network of professionals in the socio-technical space so that they can speak more publicly about the work they are doing, and shift narrative control away from the businessmen who dominate the field.

If you want to be part of this, register your interest in the New Protagonist Network here.

📅 Jump straight in and meet other network members on the 9th of April: whether or not you’ve registered to be part of the New Protagonist Network, we want to invite you to our first community meetup. This is a chance to meet with peers who are working on similar issues to you, and start discussing strategies to shift the AI media narrative.


Next up on our podcast: content moderation and labour rights

Finally, I just want to tease what’s coming up on our next podcast episode because it’s gonna be a big one:

  • I interviewed the founders of Foxglove about legal cases brought against Meta over their horrific content moderation practices, and the mass ‘redundancies’ experienced by outsourced content moderation staff from Sama
  • I also spoke to James Mwanjau, who discussed the drudgery of having been a content moderator, and being given bizarre job titles such as ‘feel good manager’ — listen to the episode to learn what that even means
  • Many of the conversations I had in this one explored the ways in which human content moderation labour is made invisible by large tech companies, and how the lines between automation, AI, and alienating low-waged work are consistently blurred.

If you haven’t heard the Computer Says Maybe podcast yet, you can get it on any streaming platform, or just go to our website to listen to our episode about AI and elections.

Thank you for reading!

Alix & CSM team


If this was forwarded to you, sign up here.

Computer Says Maybe

A newsletter & podcast about AI and politics
