The Angry Machine

This article first appeared in Village magazine, February/March 2024 edition

A few summers ago, walking around an agricultural show on a sunny Sunday afternoon with the smell of fresh-cut grass in the air, I came across a vendor selling candy floss proudly labelled “Fat Free” and “Gluten Free”. In much the same way, every Silicon Valley huckster and app seller is rushing to slap an “A.I.” label on any product with an algorithm.

And with predictable regularity, news outlets are full of opinion columnists and politicians clutching pearls and firing up a moral panic.

So is there really something to worry about? Or to put it another way, is there anything new to worry about? Some argue that everything AI does, any reasonably adept artist could already do with Photoshop and similar tools. Maybe AI does it faster, but as anyone who has counted the fingers in a generated image has noted, it isn’t necessarily better. Indeed, this column has previously noted that many claims about the technology make just as much sense if “A.I.” is replaced by the words “magic pixies”.

Even before the wheels came off OpenAI with the firing of CEO Sam Altman, several commentators had pointed out that “A.I.” was mostly a hype cycle, drawing comparisons to the Ponzi-like cryptocurrency hype built on blockchain, or to Theranos, the blood-testing start-up which achieved a valuation of $10 billion based on imaginary technologies.

One investment analyst coined the term “grift shift” to describe the move from crypto to AI in 2023.

The problems with A.I. are cumulative. As search engines deploy the new technologies to answer questions from users, websites scramble to provide content for them. Since most websites make money from advertising impressions, generative programmes are used to populate pages with plausible-looking text. It isn’t factual information, but so long as a user clicks a link and registers a page impression, the website makes money. And since the generative programmes “learn” by combing websites, those generated pages become raw material for the next generation. Think of the way a photocopy of a photocopy degrades, except applied to knowledge instead of ink on a page. The well is poisoned.

Some of the effects become jokes, such as a screenshot in which a Google AI confidently declares that no African country begins with the letter K, adding “the closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound.”

This is clearly gibberish, and identifiable as such by any human reader. But there are more insidious examples. In the first days of the war in Gaza, several posters on social media chased clout by posting footage from video games, labelling it as images from Gaza. Within days, magic pixies were confidently asserting that genuine news video was in fact taken from computer games.

One tech journalist maintains a regularly updated article chronicling the expanding list of “mistakes, mishaps, and failures” surrounding the technology. It’s an educational read, from “self-driving” cars killing pedestrians, to false accusations of plagiarism, fraud and embezzlement, to poisonous cooking recipes.

Some experts doubt machine “hallucinations” can ever be fixed, even as the tech press enthusiastically reported that OpenAI CEO Sam Altman had been fired (and rehired) because of reports that the company’s AI project had been too successful, creating an AI that could “threaten humanity”. We’ve been here before, with Silicon Valley snake oil ranging from Bored Apes to Meta’s adventures in virtual meeting spaces.

And it’s not all just magic pixies. A.I. is weaponised to deliberately spread disinformation and pollute debate. As has been observed more than once, the aim of authoritarians is not just to spread disinformation, but to create uncertainty, breaking trust.

During the Dublin riots, the emergency services, fire brigade and Gardaí continued to send out updates on X/Twitter. But no one saw them, because the algorithm fills timelines with popular tweets, not news.

The algorithm looks for widely shared posts and promotes them to users, accelerating the shares. And rage is a hot emotion, leading people to share quickly. The anger machine is designed to find, amplify and spread heat. Every rumour, every faked image, every false report, whether an error or a deliberate lie, spreads far and fast. The anger machine spreads heat, not light.

If a lie travels halfway round the world before the truth has its boots on, then AI, with its ability to generate bullshit an order of magnitude more quickly, will circle the globe several times. Regulators tend to show up a day late, after the damage is done. And Irish regulators, hampered by minuscule budgets compared to their Silicon Valley opponents, and often subject to regulatory capture, will struggle to keep up.

Regulators may do better in the long term, extracting fines at the end of an inquiry process, but disinformation needs to be corrected fast. That task falls to journalists, many just as overworked and under-resourced as internet and news regulators.