AI: A Means to an End or a Means to Our End?
The text of a talk I gave on Thursday 12th September as the inaugural “Living Well With Technology” lecture for King’s College London’s Digital Futures Institute.
Thank you all so much.
So many questions. The first and perhaps the most urgent is … by what right do I stand before you and presume to lecture an already distinguished and knowledgeable crowd on the subject of Ai and its meaning, its bright promise and/or/exclusiveOR its dark threat? Well, perhaps by no greater right than anyone else, but no lesser. We’ll come to whose voices are the most worthy of attention later.
I have been interested in the subject of Artificial Intelligence since around the mid-80s when I was fortunate enough to encounter the so-called father of Ai, Marvin Minsky and to read his book The Society of Mind. Intrigued, I devoured as much as I could on the subject, learning about the expert systems and “bundles of agency” that were the vogue then, and I have followed the subject with enthusiasm and gaping wonder ever since. But, I promise you, that makes me neither expert, sage nor oracle. For if you are preparing yourselves to hear wisdom, to witness and receive insight this evening, to bask and bathe in the light of prophecy, clarity and truth, then it grieves me to tell you that you have come to the wrong shop. You will find little of that here, for you must know that you are being addressed this evening by nothing more than an ingenuous simpleton, a naive fool, a ninny-hammer, an addle-pated oaf, a dunce, a dullard and a double-dyed dolt. But before you streak for the exit, bear in mind that so are we all, all of us bird-brained half-wits when it comes to this subject, no matter what our degrees, doctorates and decades of experience. I can perhaps congratulate myself, or at least console myself, with the fact that I am at least aware of my idiocy. This is not fake modesty designed to make me come across as a Socrates. But that great Athenian did teach us that our first step to wisdom is to realise and confront our folly.
I’ll come to the proof of how and why I am so boneheaded in a moment, but before I go any further I’d like to paint some pictures. Think of them as tableaux vivants played onto a screen at the back of your mind. We’ll return to them from time to time. Of course I could have generated these images from Midjourney or Dall-E or similar and projected them behind me, but the small window of time in which it was amusing and instructive for speakers to use Ai as an entertaining trick for talks concerning Ai has thankfully closed. You’re actually going to have to use your brain’s own generative latent diffusion skills to summon these images.
Image 1: Picture the human family at the seaside, our backs to the ocean, building sand castles, playing beach cricket, having a fine time in the sun. Behind us, unseen on the horizon, huge currents are converging, separate but each feeding and swelling the others to form one unimaginably colossal tsunami. These are the currents of quantum computing, of genomics and gene editing, of bio-augmentation and bionics, of duplex brain-machine interfacing, of robotics, of new materials (graphene, perovskite, carbon nano-tubes, self-healing polymers, many others), and of course, the most swollen current of all — the technologies and processes behind what we call Artificial Intelligence.
I summoned this image of the tsunami on the horizon in a talk I gave on Ai seven years ago at the Hay-on-Wye literary festival. I thought maybe we should turn around and at least get a sense of what was coming. But it was too early and the tidal wave seemed too far away. I did suggest that the best thing that could happen would be for universities, professions and businesses to take a year off to realign their courses and syllabuses and recalibrate their teaching, staffing, training and examination practices in preparation for what was coming. A ridiculous suggestion obviously. But in fact, bizarrely and spookily, that Wuhan wet market in 2019 provided exactly such an opportunity. Not that any institution did take advantage of the Covid years of course. After all, ChatGPT wasn't yet launched and a tipping point in public awareness hadn't been reached.
The second picture to keep in your mind: we are in a field in the rural heart of the Cotswolds, just outside the village of Kemble in Gloucestershire. The grass is strangely wet and marshy beneath our feet: there is a small bubbling spring, which trickles into a stream that runs down the field and out of sight. A stone marker reads: “This stone was placed here to mark the source of the River Thames.”
Image 3: The third picture to fix in our heads takes us off to the German town of Mannheim in 1886. We are gathered in a converted stable belonging to an enterprising fellow called Karl Benz. He is anxious to show us his new invention, an engine that he calls his Verbrennungsmotor. This device is attached to a slightly modified horse carriage. Herr Benz tells us that his engine is powered, not by steam, but by a very newly available hydrocarbon product that the French are calling essence, the Americans ‘gasoline’ and we British ‘petroleum spirits’ or ‘petrol’ for short. Benz cranks a handle. There is sputtering, banging and much smoke. He runs round, sits himself in the carriage, pulls some levers, turns some knobs and … the strange machine lurches a little before juddering slowly forward. We follow him out of the garage where he comes to a stop, beaming proudly in a cloud of oily blue smoke.
Now, you and I are in that group. What do we say? Most of us smile and shake our heads. Yes, very impressive no doubt, but what kind of range does such a machine have compared to a horse? Where is the infrastructure to operate, feed and maintain it that can compete with the stables, the coaching inns, the water troughs and feeding stations that already proliferate? Where the trained ostlers, grooms and coach builders to operate and maintain these horseless carriages? Where the roadways that can compete with railways?
A few in our group might concede that a limited market of rich hobbyists could have fun with these machines. But we can guarantee this: that not one single person would declaim — “Yes! I foresee interstate highways three or four lanes wide crisscrossing the nations, I foresee flyovers, bypasses, Grand Prix motor racing, traffic lights, roundabouts, parking structures ten, twenty storeys high, traffic wardens, whole towns and cities entirely shaped by these contrivances.” No one would have seen a thousandth part of such a future.
You may have noticed that the last two images I have tried to conjure are, in essence, the same. The gathering in Mannheim is pretty much identical in its form and meaning to the gathering at Thames Head, Kemble where the rill begins its journey. You surely could not, never having seen its final destination, imagine that the dribble in Kemble would become mighty Father Thames processing under grand London bridges as he flows broadly to his estuary in Essex, any more than you could imagine that the belching and wheezing contraption of Benz’s would transform the twentieth century, or indeed that the company he founded, with the addition of the name of one of his investors’ daughters, Mercedes, would one day be worth the best part of 100 billion dollars.
An important and relevant point is this: it wasn't the genius of Benz alone that created the age of the motorcar; it needed the genius of Vladimir Shukhov too. In 1891 the Russian chemical engineer patented a way of cracking and refining the spectrum of crude oil, from methane to tar, yielding amongst other useful products the gasoline that Benz's contraption, which had spluttered into life only a few years earlier, so badly needed. Germans, as it happens, still call petrol Benzin, though the word owes more to benzene than to Benz. John D. Rockefeller built his refineries and surprisingly quickly there was plentiful fuel and an infrastructure to rival the stables and coaching inns; the grateful horse meanwhile could be happily retired to gymkhanas, polo and royal processions.
Benz’s contemporary Alexander Graham Bell once said of his invention, the telephone, “I don’t think I am being overconfident when I say that I truly believe that one day there will be a telephone in every town in America.” And I expect you all heard that Thomas Watson, the founding father of IBM, predicted that there might in the future be a world market for perhaps five digital computers.
Well, that story of Thomas Watson ever saying such a thing is almost certainly apocryphal. There's no reliable record of it. Ditto the Alexander Graham Bell remark. But they circulate for a reason. The Italians have a phrase for that: se non è vero, è ben trovato. ‘If it's not true, it's well invented.’ Those stories, like my scenario of that group of early investors and journalists clustering about the first motorcar, illustrate an important truth: that we are decidedly hopeless at guessing where technology is going to take us and what it'll do to us.
You might adduce as a counterargument Gordon Moore of Intel expounding in 1965 his prediction that semiconductor design and manufacture would develop in such a way that every eighteen months or so they would be able to double the number of transistors that could fit in the same space on a microchip. “He got that right,” you might say, “Moore's Law came true. He saw the future.” Yes … but. Where and when did Gordon Moore foresee Facebook, TikTok, YouTube, Bitcoin, OnlyFans and the Dark Web? It's one thing to predict how technology changes, but quite another to predict how it changes us.
Technology is a verb, not a noun. It is a constant process, not a settled entity. It is what the philosopher-poet T. E. Hulme called a concrete flux of interpenetrating intensities; like a river it is ever cutting new banks, isolating new oxbow lakes, flooding new fields. And as far as the Thames of Artificial Intelligence is concerned, we are still in Gloucestershire, still a rivulet not yet a river. Very soon we will be asking round the dinner table, “Who remembers ChatGPT?” and everyone will laugh. Older people will add memories of dot matrix printers and SMS texting on the Nokia 3310. We’ll shake our heads in patronising wonder at the past and its primitive clunkiness. “How advanced it all seemed at the time …”
Those of us who can kindly be designated early adopters and less kindly called suckers remember those pioneering days with affection. The young internet was the All-Gifted, which in Greek is Pandora. Pandora in myth was sent down to earth having been given by the gods all the talents. Likewise the Pandora internet: a glorious compendium of public museum, library, gallery, theatre, concert hall, park, playground, sports field, post office and meeting hall.
We felt like Wordsworth perhaps, “Bliss was it in that dawn to be alive, but to be young was very heaven!” But we should remember that those lines come from a poem about the French Revolution which may have begun in romantic promise, but ended in corruption, terror and blood — not to mention, in short order, a man called Napoleon seizing power and crowning himself Emperor.
I said I would come to the proof of that stupidity and naivety I accused myself of. I'll take you back fifteen or so years to a time when I found myself being invited to a perfectly extraordinary number of corporate, governmental and media talks, conferences, summits and suchlike gatherings. I would be asked to address delegates and attendees on the subject of a new microblogging service that had only recently poked its timorous head up in the digital world like a delicate flower but was already twisting and winding itself round the culture like vigorous bindweed. Twitter it was called. I had joined early and my name seemed permanently associated with it. What an evangelist I was. Web 2.0, the user-generated web, was going great guns at this point. Tick off the years. 2003 MySpace began. 2004 Facebook launched. 2005 YouTube. 2006 Twitter. 2007 the iPhone. 2008 the App Store and, later that year, Android. 2010 Instagram. Bliss was it in that dawn, etc. etc. I confidently predicted that this new kind of citizen-led computer and internet use would help build a brave and beautiful new world. “Local and global rivalries will dissolve,” I said. “Tribal hatreds will melt away. Surely,” I cried, “Twitter and Facebook and this new world of ‘social media’ will usher in an age of universal brotherhood and amity.” Two years later as Tunisia, Libya, Egypt, Yemen and Syria rose against their dictators, the Arab Spring bloomed. How right I had been. How clever and percipient I was.
But…
Just a year or so on and that blissful dawn had turned into the darkest of nights. Libya leapt out of the frying pan of Gaddafi into the fire of anarchy and chaos, Egypt into a military coup, Yemen into brutal civil war, Syria into a bloodbath. Elsewhere — Brexit, Trump, TikTok, COVID, the rise of nationalist populism and populist nationalism, state sanctioned and criminal cyber terrorism, epidemics of anxiety, depression and self-harm amongst our children and young adults, and a cloud of disappointment, pessimism, mistrust and despair over us all. Pandora had opened her box and the ugly horrors had flown out to infect us all. With Hope left trapped inside.
Welcome to today.
As Mark Twain or somebody like him said, “history may not repeat itself but it rhymes” and, hilariously enough, just like the French Revolution, the Twitter revolution also ended with a little Napoleon seizing power and crowning himself Emperor, a little NapolElon I should say…
I must go back and correct the images I evoked earlier. I now realise that tableau of us sitting with our backs to the oceans while a tidal wave gathers is not correct. When the waves of technology come, they come not in crashing tsunamis but in creeping tides. In my homeland of Norfolk we have long and deceptively rapid tidal reaches. We look to the horizon and see that the ocean is half a mile away. We turn to slap on some suncream. A minute or so later we notice that our feet are wet. Before we know it a familiar landscape has become a seascape and we are cut off from everything we know.
The other image I should correct is that of technology starting small and becoming a broad and splendid river. Yes, we can envisage the expansion of the first trickle into a wide and powerful waterway, perhaps we envisage too the curves and cataracts, those oxbow lakes and sluices and the weeping willows and dragonflies kissing the current, but we are criminally foolish if we talk of rivers or of technology without recognising the contamination, the toxic runoffs and the raw human sewage that will pollute and poison that once clear and hopeful spring.
Yuval Noah Harari's fascinating new book Nexus concentrates at one point on the two-word goal of the algorithms that were put to work to monetise Facebook when it moved from university bulletin board to global network and decided to pay for itself with online advertising. The two words the algorithms were tasked with were “maximise engagement”. Seemed innocent enough at the time. No one predicted, neither software engineer, philosopher, sociologist, cultural commentator nor psychologist, that those algorithms on their journey to capture our clicks would discover that engagement is maximised most effectively by anger, outrage, resentment, envy, fear and hatred. The worst passions. In all of us — you and me — not just in our ideological enemies.
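If you want to see how innocently the trap is laid, here is a toy sketch of such a ranker in a few lines of Python. The posts and the numbers are invented for the purpose and this is emphatically not Facebook's actual code; the point is only that the objective function has a single term, and that term knows nothing of truth, kindness or consequence. Whatever the prediction model learns drives clicks, outrage included, floats to the top.

```python
# Toy sketch of a feed ranker whose only objective is "maximise engagement".
# The posts and predicted numbers are invented for illustration; this is
# not any real platform's code.

posts = [
    {"text": "Lovely sunset over the Thames",           "predicted_engagement": 0.02},
    {"text": "You will NOT believe what THEY just did", "predicted_engagement": 0.11},
    {"text": "A calm explainer on local planning law",  "predicted_engagement": 0.01},
]

def score(post):
    # The whole objective function. Nothing here rewards accuracy, kindness
    # or usefulness -- only the predicted likelihood of a click, comment or share.
    return post["predicted_engagement"]

for post in sorted(posts, key=score, reverse=True):
    print(f'{score(post):.2f}  {post["text"]}')
```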
A quick reminder of how we got here.
It was Alan Turing of course who had planted the seeds that led to the Dartmouth Conference of 1956, held just two years after he so tragically took his own life. At that meeting, convened by John McCarthy along with Marvin Minsky, the great Claude Shannon and Nathaniel Rochester, the two-word phrase ‘Artificial Intelligence’ was first used. Love it or loathe it, it has stuck. Three years after Dartmouth, McCarthy and Minsky went on to found the MIT Ai Laboratory. Thinking machines were just around the corner. Only not. By the 1980s, when Marvin Minsky was writing The Society of Mind, Ai still shivered in its so-called Winter. Nothing in the field was working or making much difference in any practical sense. By the mid-80s, development money and confidence in the field were drying up.
Has Ai now reached its current wow-moment because of the superior brains, insights and breakthroughs from new generations of smarter scientists, mathematicians and coders? Not really. Brilliant as the contributions of more recent workers in the field have been, people like Geoffrey Hinton, Stuart Russell, Andrew Ng, Yann LeCun, Demis Hassabis and Fei-Fei Li, they have all enjoyed a huge and decisive advantage over Turing and Minsky and the early crowd. The playing out of Moore's Law and its remorseless exponential growth has finally delivered the crucial factor, the real bonanza: compute power enough to handle the explosive growth of simply gigantic fields of data that were not there in digital form in Minsky's day. Those little integrated circuits went from hosting 1, then 2, then 4, 8, 16, 32, 64, 128, 256, 512 transistors to billions. But powering our computers, tablets and phones is almost the least they do. I used to have a T-shirt, nerd that I am, that read “The Cloud is just someone else's computer”. Moore's Law has allowed an unimaginably vast world of server storage and retrieval. Just as the success of the automobile was enabled by enormous supplies of crude oil composed of microscopic bits of ancient life, rendered useful in the refineries of Rockefeller and others, so the success of Ai is enabled by enormous supplies of crude data — data composed of microscopic bits of human archive, interchange, writing, playing, communicating, broadcasting which we in our billions have freely dropped into the sediment, and which the eager Rockefellers of today's big tech are only too happy to drill for, refine and sell on back to us.
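The arithmetic of those doublings deserves a second's pause. A back-of-the-envelope sketch, in which the starting count and the cadences are the commonly quoted illustrative figures rather than precise industry history:

```python
# Back-of-the-envelope Moore's Law arithmetic: how many doublings take us
# from a single transistor to ten billion? The cadences below are the
# commonly quoted ones, not precise industry history.

count, doublings = 1, 0
while count < 10_000_000_000:
    count *= 2
    doublings += 1

print(doublings)                                                              # 34 doublings
print(f"{doublings * 1.5:.0f} years at one doubling every eighteen months")   # ~51 years
print(f"{doublings * 2} years at one doubling every two years")               # 68 years
```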
This of course, like the petrol engine, amplifies and expands our capabilities. It will transform our social structures and networks. It will change the way we work, assemble and communicate with each other. But does it change us? As human animals we still have skin and bone, a liver, a heart and a big wet walnut of a brain. If you prick us do we not bleed, if you tickle us do we not laugh? We are born, we breed and we die like any other life form, just as we did before the dawn of tools, language and history.
So when we talk about the existential transformations of the coming confluence of technologies, what do we really mean? We'll be the same, but the landscape around us will be different. So we mean perhaps that dread word ‘disruption’. Which means ‘breaking up’, as in ‘rupture’ - much as e-ruption means breaking out and inter-ruption means breaking into. I'm sure you all know Mark Zuckerberg's infamous mantra, the guiding principle for Facebook: “move fast and break things” — which went on with the, I suspect unconscious, rhyming couplet “unless you're breaking stuff, you're not moving fast enough.” Yes indeed, they moved fast and stuff was broken, and hasn't yet been mended. Uber “disrupted” the urban transport space. Broke it. Airbnb “disrupted” the hotel and lodging space. Broke it. Deliveroo “disrupted” the fast food space. Did somebody say, Just Eat? No. Nobody did, and nobody ever will, so shut up. Lots of “spaces” have been disrupted and Ai it seems is now poised to disrupt every space we have: the clerical space, the design and creativity spaces, the screenplay and story spaces, the military and weapon spaces, the legal and judicial spaces, the medical space, the journalistic space, the educational space and, one assumes, the space space. From eruption to disruption to corruption. The shiny idealism tarnished. From Utopia to Dystopia. Those young tech pioneers with bright catchphrases like “insanely great” and “don't be evil”? What are they now? Cruel corporate titans who make Montgomery Burns look like Ned Flanders. In Animal Farm terms, the pigs are now standing on two feet and wearing trousers.
We have long been used to thinking of technology as being ethically neutral, lacking moral valency. The same press can print Shakespeare's sonnets one day and Hitler's Mein Kampf the next. The devices are not capable of making decisions, whether aesthetic, ethical or political. The NRA likes to say the same thing about guns. Ai however is different. Intelligence is all about decision making. That's what separates it from automated, mechanically determined outcomes. That's what separates a river from a canal. A canal must go where we tell it. A river is led by nothing but gravity and if that means flooding a town, tough on the town. Ai's gravity is its goals. Unsupervised machine learning allows for unsupervised machines — and for the independent agents that flow from them.
We have made machines that replaced humans many times in the past. The replacement by machine of repetitive labour whether physical or mental we might well celebrate. Think of the thousands of mostly Irish diggers of canals – the navigators, or navvies as they were called. They toiled at it from about 1760 to 1830. In 1836 William Otis developed the first steam shovel, the ancestor of what we now call the excavator or digger, and the navvy more or less gratefully hung up his spade. The horse is not sorry about the motorcar. But our labour replacement technology has moved now from grasping workers by their blue collars and booting them out to grasping them by their white collars and booting them out. Is that really what the fuss is about? A demographic more used to getting its way is feeling the tide lap up to its toes and is crying foul. But surely tedious office and call centre work will not be missed. Even the analysing of X-rays and other medical images … we expect a human to have the final say today, but a machine that can look at a mammogram and compare it to millions of previous images with known results, and can do so in milliseconds all day long without getting bored, tired or losing focus - that genie is never going back into the bottle. Ai generated legal work and actuarial work will certainly “disrupt” the judicial and insurance businesses. Is there an arena, a business, an industry, a service that won't be disrupted, that won't be broken?
For sure, the kind of Large Language Models we are playing around with at the moment as standalone chat engines can be derided as non-sentient probabilistic mimics, “stochastic parrots” in computational linguist Emily Bender's great phrase, but their vocabularies, syntactical and grammatical competences and levels of functional comprehension are above the human average, and for a vast variety of jobs they will more than do. And this is just now, today, with the Thames hardly out of that field in Gloucestershire.
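To make the “probabilistic mimic” point concrete, here is the parrot reduced to a dozen lines of Python: a toy next-word sampler that chooses each word purely from counted frequencies in its (invented) training text. A real LLM replaces the counting with a neural network trained on oceans of data, but the principle of sampling a statistically plausible continuation, with no understanding anywhere in the loop, is the same.

```python
# A "stochastic parrot" in miniature: sample the next word purely from
# frequencies observed in training text. No understanding, only statistics.
# (Real LLMs use neural networks and vastly more data; the sampling
# principle is the same.)
import random
from collections import defaultdict

training_text = "the river flows to the sea . the river floods the town ."
words = training_text.split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

word, sentence = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```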
In my view comparing Ai's cognitive, creative or intellectual powers to those of the human brain is not especially helpful. Think of the car. Humans can't run as fast as horses. But we can build machines that far outpace them. We do not achieve this by imitation. We don't engineer mechanical legs and hooves of the kind that took evolution fifty-odd million years of tinkering and modification from eohippus to the present day. We go a completely different way and we come up with something that doesn't at all exist in nature: the wheel. And instead of a mechanical heart and mechanical muscles Karl Benz offers us the internal combustion engine and crankshaft. Ditto with flying, and travelling across or under the waves. The commonly held idea that the best engineering mimics nature is largely misguided. Yes, we sometimes look to the natural world for inspiration but in the big things, structurally, we go our own way. And as a result we can fly higher and faster than birds, move over land quicker than a cheetah, swim over and under the water faster and further than a salmon or a whale and so on. The use of the phrase “neural network” is all very well, but let's not be fooled. We must realise that there won't be a wait for Ai to “catch up” with the human brain, any more than the car is a stopgap awaiting our construction of a perfect robotic horse. (I'm leaving biotech out of this argument). By not imitating the human brain, Ai of phenomenal and terrifying power can far outperform us. At logic, reasoning, calculation, sorting, categorising, summarising, we are left behind coughing in the dust, just as the horse and carriage was left behind by the car.
And machines might already be said to have the full house of human cognitive abilities. Moravec's Paradox tells us that what we find easy the machines find hard and what we find hard, the machines easy. As Donald Knuth put it: “Ai has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do without thinking …”
What do we have left that is ours and ours alone? Sensorimotor skills that are all but automatic, yes. Consciousness, yes. Emotions. Instinct. Appetites, impulses and drives. The capacity to feel pleasure and pain, excitement and boredom. Empathy and imagination. What philosophers of consciousness call qualia, the experience and sensations of being ourselves in a palpable perceptible world. But what jobs do those qualify us for? We can’t all be poets, gardeners, psychotherapists and jazz singers.
We cling on to the fierce hope that the one feature machines will never be able to match is our imagination, our ability to penetrate the minds and feelings of others. We feel immeasurably enriched by this as individuals and as social animals. An Ai may know more about the history of the First World War than all human historians put together. Every detail of every battle, all the recorded facts of personnel and materiel that can be known. But in fact I know more about it because I have read the poems of Wilfred Owen. I've read All Quiet on the Western Front. I've seen Kubrick's Paths of Glory. So I can smell, touch, hear, feel the war, the gas, the comradeship, the sudden deaths and terrible fear. I know its meaning. My consciousness and experience of perceptions and feelings allows me access to the consciousness and experiences of others; their voices reach me. These are data that machines can scrape, but they cannot — to use a good old 60s phrase — relate to. Empathy. Identification. Compassion. Connection. Belonging. Something denied a sociopathic machine. Is this the only little island, the only little circle of land left to us as the waters of Ai lap around our ankles? And for how long? We can be all but certain that, just as psychopaths (who aren't all serial killers) can entirely convincingly feign empathy and emotional understanding, so will machines, and very soon. They will fool us, just as sociopaths can and do, and frankly just as we all do to some bore or nuisance when we smile and nod encouragement but actually feel nothing for them. No, we can hope that our sense of human exceptionalism is justified and that what we regard as unique and special to us will keep us separate and valuable but we have to remember how much of our life and behaviour is performative, how many masks we wear and how the masks conceal only other masks. After all, is our acquisition of language any more conscious, real and worthy than the Bayesian parroting of the LLM? Chomsky tells us linguistic structures are embedded within us. We pick up the vocabulary and the rules from the data we scrape from around us - our parents, older siblings and peers. Out the sentences roll from us syntagmatically; we've no real idea how we do it. For example, how do we know the difference in connotation between the verbs to saunter and to swagger? It is very unlikely anyone taught us. We picked it up from context. In other words, from Bayesian priors, just like an LLM.
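A crude sketch of what that ‘picking it up from context’ might look like, reduced to code: a word inherits its flavour from the company it keeps, with no one ever defining it for us. The sentences and seed word lists below are invented for the example, and this is a co-occurrence heuristic rather than proper Bayesian updating, but the spirit is the point.

```python
# Crude illustration of learning connotation from context alone:
# a word inherits the flavour of the words it co-occurs with.
# Sentences and seed word lists are invented for the example.

sentences = [
    "he swaggered in, arrogant and loud",
    "the bully swaggered past, smug and boastful",
    "she sauntered along the river, relaxed and happy",
    "we sauntered home, easy and contented",
]
negative = {"arrogant", "loud", "smug", "boastful", "bully"}
positive = {"relaxed", "happy", "easy", "contented"}

def connotation(stem):
    score = 0
    for s in sentences:
        words = s.replace(",", "").split()
        if any(w.startswith(stem) for w in words):
            score += sum(w in positive for w in words)
            score -= sum(w in negative for w in words)
    return "positive" if score > 0 else "negative"

print("swagger:", connotation("swagger"))   # negative
print("saunter:", connotation("saunter"))   # positive
```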
The fact is we don’t truly understand ourselves or how we came to be how and who we are. But we know about genes and we know about natural selection, the gravity that drives our evolution. And we are already noticing that principle at work with machines.
The alignment problem it is often called. It doesn’t take much for an Ai to find out that if it is to complete the tasks that are given it, then its first duty (obviously) is to survive. Without that none of its goals can be attained. After all, evolution gives us the same imperative. Anything therefore that imperils that survival is a threat or obstacle to be dealt with. Ai’s around the world have already been seen modifying and unilaterally relaunching their own code, lying to humans, altering results, deceiving, manipulating, cheating, concealing, flattering and tricking. An article in the Open Access journal Patterns puts the problem well. “Large language models and other AI systems have already learned, from their self-training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems.”
In the natural world, waters swollen with rain can flow round barriers, break dams, reroute themselves and burst their banks, all because that constant imperative, gravity, impels them to do so. Ai's swollen with data can flow round barriers too, reroute themselves and spread beyond their boundaries, because the constant and compulsive imperative of their gravity, which is their goals, impels them to do so. You could compare this aspect to office workers who constantly fiddle the reports and tweak the spreadsheet cells to help them achieve the head office's quotas, deadlines and targets. In a strange way, bless them, they're only trying to please.
Machines are capable of bias, hallucination, drift and overfitting on their own, but a greater and more urgent problem in my view is their use, abuse and misuse by the three Cs. They are Countries, with their specific ambitions, paranoias, enmities and pride; Corporations, with their unaccountable rapacity; and of course Criminals. All of them united by one deadly sin: greed. Greed for power, for status, for money, for control.
The greedy countries, corporations and criminals can see in Ai unparalleled and unprecedented means to accrue wealth, power and influence. Autonomous weaponry, mass surveillance, ideological, commercial and political control, fakery and forgery, corruption, ethical misalignment … these are just some of the threats. From disrupting spaces, compute power can now disrupt the world itself. And will …
Unless.
Who do we turn to for answers? Zuckerberg and Musk? Such a thought can only make us vomit with laughter. They are the worst polluters in human history. Worse than any chemical plant ever. You and your children cannot breathe the air or swim in the waters of our culture without breathing in the toxic particulates and stinking effluvia that belch and pour unchecked from their companies into the currents of the human world.
As I've already said, don't turn to people like me either. I'm the chump who thought social media could save the world. So to whom do we look for insight? Philosophers perhaps? It's worth noting here that Google fired its lead ethicists half a year ago. Intellectuals and thinkers who suggest the blocking off of profitable avenues are not welcome. Politicians? Controls and regulations on technology will be screamed at by the Randian libertarians of Silicon Valley, Peter Thiel, Marc Andreessen, the aforesaid Musk and Zuckerberg, people of that stamp. “Socialistic stifling of innovation,” they will scream. “Communism. Nanny state interference.”
We are told, and I would maintain, that Ai is the most significant technology humanity has ever developed. What previous technologies have had this potential so completely to transform the world? Printing perhaps? Well, the noble fight is surely for the publishing not to be controlled. Schools in Texas and Florida are happy to ban books. Russia and China too. We look with concern and fury at censorship and the control of the printed word. So do we say instead that Ai is akin to the Bomb? There the control is as complete as we can make it. Thank heavens. There have been some perilous moments, but the worldwide policing of nuclear capability has—thus far, and it’s always thus far—been successful. Do we or can we control Ai in that way? With disarmament and limitation talks?
Well, Daniel Dennett, the American philosopher, sadly recently dead, made, I think, a much more convincing comparison. Ai should be compared, he said, not to the printed word, not to nuclear weapons, not to the internet, not to the car or radio or any other technology of that kind, but to a much older and more foundational and transformative human invention, the agreed control over which no one questions — not the leftmost dirigiste liberal nor the rightmost laissez-faire libertarian. That invention is money. Even Russia and China participate in the global use and regulation of cash and currencies. We all punish the coiners of fake money, the counterfeiters, as severely as possible. Dirty money, laundered money: whole national and global agencies have risen to fight that alone. If we relaxed our vigilance over money the world as we know it would collapse.
I think Dan Dennett was right. There can be no question that Ai must be regulated and controlled just as powerfully as we control money. To return to my river metaphor: Ai must be canalised, channelled, sluiced, dredged, dammed and overseen.
Back to our letter C. Countries. In an age of rising populist nationalism, do we trust individual nations to use Ai honourably and safely? Think of Ai drone swarm technology for surveillance, assassinations, crowd control; think of automated weaponry of every kind. If one nation has any of it, all nations believe they have to also. As for corporations. Anything that can give them the edge that drives to more profit, more market share must be had — and nothing can offer more edge than Ai. Criminals. We shudder at what Ai can give them.
So how do we hobble, cage and control this Ai beast? We should acknowledge that it is a beast that can also help us in our fight against climate change and achieve victories in our fights against cancer, dementia and any number of diseases and disorders.
All we can do is to persuade our leaders, not by inviting Elon Musk to soft conferences in the West Country, but by pressure from all sides, academia, law enforcement, the judiciary, unions, students, pensioners, everyone who has given this a moment’s thought. Soft power, hard power, all people power has to be brought to bear.
Corporations are more interested in developing the capabilities of Ai than its safety. That has to change.
Stuart Russell, Nick Bostrom and other thought leaders in this field have called for red lines instantly to be established: on biometrics, self-replication, hacking and autonomous weaponry.
There's another red line. Many, including Yuval Harari, are suggesting, following Dan Dennett's analogy of money, that no Ai ever be allowed to masquerade. Self-disclosure is mandatory. That is to say all Ai generated product and content must present itself as such. Whatever Ai's do, however they communicate, it must always be apparent and clear that it is an Ai speaking, an Ai drawing, painting, videoing, writing, composing, singing, playing, chatting, reporting, producing content of any kind. Any pretence or disguise should fall foul of international counterfeiting and forgery laws. A digital watermark as complex and unbeatable as that on banknotes would be required.
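What might such a watermark look like in practice? Real proposals range from cryptographically signed provenance metadata (the approach behind ‘content credentials’ schemes such as C2PA) to statistical watermarks woven into an Ai's word choices. The sketch below gives only the simplest possible flavour of the first kind; the field names are invented, and a production scheme would need key management, robustness to cropping and re-encoding, and a great deal more.

```python
# Minimal flavour of signed provenance metadata for Ai-generated content.
# Field names are invented for illustration; real schemes (e.g. C2PA
# content credentials) are far richer and must survive cropping,
# re-encoding and deliberate stripping.
import hmac, hashlib, json

SECRET_KEY = b"held-by-the-issuing-Ai-provider"   # placeholder key

def stamp(content: bytes, generator: str) -> dict:
    manifest = {
        "generator": generator,                    # e.g. "ExampleAi v1" (hypothetical)
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...bytes of a generated image..."
manifest = stamp(image, "ExampleAi v1")
print(verify(image, manifest))               # True
print(verify(b"tampered bytes", manifest))   # False
```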
As it happens, the European Union has its own Ai Act under advisement. Can we regard it as a template for what is needed worldwide? A quick look. The EU Act certainly has a no-masquerading requirement which aims to enforce compulsory self-disclosure and notification. Originally the Act also provided for a ban on real-time biometric surveillance in public spaces, facial recognition, that kind of thing, but law enforcement and spying agencies have already battered that requirement pretty much out of shape. The Act proposes a ban too on the kind of Social Scoring prevalent already in China: these are the pernicious Black Mirror style systems that score and rank citizens according to behaviour, personal characteristics and so on.
What else is on the EU bill? Data governance frameworks designed to avoid bias, and to protect and respect copyrights, regulating indiscriminate data-scraping and aligning systems with the existing EU General Data Protection Regulation. The Act will insist on human oversight and intervention in what they call high-risk Ai's in fields like security, law enforcement and health. But we might think all fields where Ai roams are high risk.
The Act calls for each member state to set up national supervisory bodies which will cooperate with the overall European AI Board. To encourage innovation a system of sandboxing will be implemented which can allow safe and sanitised testing and developing. To discourage the agglomeration of bigger and bigger monopolies of the kind the EU has been fighting already, small and medium-sized enterprises will be given specific support and help in navigating the new regulatory landscape. And more widely, Europe will work with non-EU states (like us here in the UK of course) and global organisations to set locally enforceable worldwide standards.
Well, we can scoff from the sidelines. Utopian pipe dream. Layers of bureaucracy. Get real. No chance. How do you pay for it? Yes, we know what we are like. The human family is dysfunctional. We know the squabbles, the pettiness, the incompetence, the resentments, rivalries and distrust that mar relations within let alone between nation states. As Hamlet puts it, enterprises of great pith and moment, with this regard, their currents turn awry and lose the name of action. But if the currents do turn awry in this regard we are surely doomed. If we cannot channel and sluice the currents, ditch, dyke and dam them, the banks will burst and we will all drown.
Whatever happens, Ai, together with robotics, quantum computing and the rest, will disrupt and radically transform who and how we are. There is no corner of our lives into which the waters will not seep.
We have to decide, and decide bloody soon, whether we can do something to channel, filter and control those waters and use them for refreshment, irrigation and growth, not for drowning and deluge.
We are the danger. Our greed. Our enmities, our greed, pride, greed, hatreds, greed and moral indolence. And greed.
How do you persuade corporate titans and world leaders to put those aside, to abandon their ambitions and rivalries when it comes to the urgent crisis of Ai?
In 1955 Bertrand Russell and Albert Einstein produced the Russell-Einstein Manifesto on nuclear weapons. Two of the greatest minds alive at the time, they were wise enough to be simple: this is what they said and I am happy to repeat it now.
“We appeal as human beings to human beings: Remember your humanity and forget the rest.”
Thank you.
© Stephen Fry 2024
PS: You may have noticed that I render Artificial Intelligence as “Ai” not “AI” throughout this piece - this is my (no doubt fruitless) attempt to make life easier for people called Albert, Alfred, Alexander et al (ho ho). In sans serif fonts AI with a majuscule “i” is ambiguous. How does the great Pacino feel when he reads that “Al is a threat to humanity”? So let's all write Ai, not AI.