108 Comments

Wordsmith? Historian? Dramatist? Sage? Stephen Fry, you always amaze with your sense of timing. I compare you to our Mike Rowe in the U.S. Your talent is one of nature's gifts. Thank you for your compassion, empathy, and perseverance on behalf of our species. This article as written will be a piece of history, appreciated by current generations of readers willing to tackle its TL;DR length!

Eloquent and thoughtfully put. It must have been quite a treat for the attendees to hear it spoken out loud, and I do hope a recording was made and will eventually be released.

Linguistic elegance aside, the conclusion on regulation of Ai (I shall endeavor to adopt your spelling of it) is of course the most important part.

I have some thoughts on the matter in my own sporadic writing, primarily pertaining to the use of Ai in the visual arts, which is currently my main concern, because what is life without it? I have mostly thought about it from a self-disclosure point of view, and these are the regulations I feel we (at minimum) need to implement ASAP:

1. Commercial Ai models must maintain a public database of every individual image they have scraped, and who created each of them.

2. Every Ai-generated image should keep a record in its metadata of which artists' works were used as references for the generated image, and to what percentage.

3. Artists whose work and style are being used in a commercial image should receive a viable royalty payment.

This is of course just a tiny part of prepping for the tsunami, but I feel it might be a good place to start.
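The public-registry idea in point 1 can be sketched with nothing more than a content hash and a lookup table. A minimal sketch, assuming an in-memory SQLite store; the schema, function names, URL, and creator are all hypothetical illustrations, not any real Ai provider's system:

```python
import hashlib
import sqlite3

def make_registry() -> sqlite3.Connection:
    # Hypothetical public scrape registry: one row per scraped image,
    # keyed by the SHA-256 hash of the image bytes.
    con = sqlite3.connect(":memory:")
    con.execute(
        "CREATE TABLE scraped_images ("
        " sha256 TEXT PRIMARY KEY,"
        " source_url TEXT,"
        " creator TEXT)"
    )
    return con

def register(con: sqlite3.Connection, image_bytes: bytes,
             source_url: str, creator: str) -> str:
    # Record who created a scraped image; returns its content hash.
    digest = hashlib.sha256(image_bytes).hexdigest()
    con.execute(
        "INSERT OR IGNORE INTO scraped_images VALUES (?, ?, ?)",
        (digest, source_url, creator),
    )
    return digest

def lookup_creator(con: sqlite3.Connection, image_bytes: bytes):
    # Answer the public-accountability question: was this exact
    # image scraped, and if so, who made it?
    digest = hashlib.sha256(image_bytes).hexdigest()
    row = con.execute(
        "SELECT creator FROM scraped_images WHERE sha256 = ?", (digest,)
    ).fetchone()
    return row[0] if row else None

con = make_registry()
register(con, b"fake-image-bytes", "https://example.com/a.png", "Jane Artist")
print(lookup_creator(con, b"fake-image-bytes"))  # -> Jane Artist
print(lookup_creator(con, b"other-bytes"))       # -> None (never scraped)
```

Note the obvious limitation: an exact-hash lookup fails as soon as an image is re-encoded, resized, or memed, which is precisely the "same work n times" problem raised in the reply below. A real registry would need perceptual hashing or similar to cope with that.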

Regarding your second point: that is, in theory, a good idea, but impossible to achieve with current generative ai models (I won't even capitalize the first letter ;-)). These models don't keep a database of all referenced images, and the model itself never "contains" any actual "intellectual property" such as images. In addition, the training data will often contain the same work n times (in the case of an image: multiple resolutions, already-derivative images, (historical) copies by other artists, "memes" based on the image, and whatnot).

It would also be very hard to come up with a "percentage". I can take the central element of an image and simply swap out the background for something else. Now 80% of the actual pixels are different, yet anyone can still see what it's based on, and so on.

Not only that: as I expand upon in the post where I first make this argument, saving images after working on them with art tools easily strips this metadata, so yes... it will be very hard.

Yet, with the proper legislation this could be mandated, forcing the Ai providers to rebuild their models, this time to an acceptable spec.

As for replacing a percentage of the pixels... that would be a way around it, but it would also require work. Almost everyone who spams the Web with their Ai "creations" has neither the skill nor the will to do that work. I estimate that 99% of all Ai-generated images I encounter are completely unprocessed by human hands and minds.

Your estimation is worthless, Svein-Gunnar. You set yourself up as an infallible detector of AI. What you mean is that you only see AI that you think is AI. Anything above your own internal and arbitrary threshold escapes your notice and sails past, with your implicit blessing.

That is an interesting point of view from someone who has clearly spent some time making all of "her" Substack post pictures look more or less like the same person, whilst not disclosing the fact that they are clearly Ai-generated.

One would think that it was just my "infallible detector of Ai" that tipped me off here, but there are also some pretty good Ai detection tools freely available, so I can't take all the credit :P

Sorry. I just spotted that I misspelled your name. My error.

Nevertheless, my point is valid. Unless you can detect AI imagery with unerring accuracy, you're only going to spot the less sophisticated examples.

Perhaps the wording “AI image by NightCafé” in each caption might inform readers' views?

You still need to add captions to the "more sophisticated" stable diffusion image from your top post :)

Excellent piece and important thoughts. The analogy to money does seem particularly apt. One can also mourn the loss of people like Feynman and Jobs and Dennett whose thoughts and theories about Ai would have been most welcome while Ai is still in the semi-embryonic stage.

I'm not sure Jobs' view would have varied much from the other Silicon Valley types Fry mentions. He may have been a bit less power-hungry/insecure than folks like Thiel and Musk (respectively), but he shared their belief that his view was the right one and not subject to much introspection once decided on.

This is a wonderful read, and yet I have one disconnect. The symbolic representation of identity or ownership emerged in the Halaf period some distant millennia ago in the form of stone seals. These seals were used to mark property and produce, perhaps as a branding device. They were made by the owners of the property or the providers of the produce as a mark of identity. We don’t know for certain, but perhaps these were symbols of quality or origin, much like modern brands.

In the millennia that followed, the Ubaid culture extended the use of seals, making them more elaborate, but they were still produced locally: a distributed method of asserting ownership, or worth. It was in the following Sumerian culture that these symbols mutated into something like money, and simultaneously, perhaps in an attempt to standardise the use of symbols, or to create a universally accepted system, the issue of the symbolic representation of worth or wealth became the unique responsibility of the state.

State-issued money evolved quickly to direct and control the creation and distribution of wealth. This mutation has given immense control over wealth to the self-appointed rulers of society. The function of money is to permit access to wealth and wellbeing. Money is a control mechanism in our civilisation.

Thank you Stephen, this is such a big worry. It is nice to know that others like you stand with humanity and oppose the voices who would gamble all that is not theirs on the chance to rule.

Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/

What date will it be up please?

What a pleasure to read! Thank you!

Pandora’s box is ALWAYS opened, that is a fact of life. Only a time machine can fix that. Prepare for the best and worst Ai has to offer; there’s no stopping the bad or good it can achieve. It would be nice if the fascist dictators in government would not add to our troubles…

Brilliant. Thank you.

Brilliant. Having worked through an AI season from 1980–1990 in an AI spin-off from MIT, all I can say is: thank you for all you've said here. I wish I could have heard you speak. Any chance it's on YouTube?

This was very well written, and well thought out until the end, when it veered into fantasy. "Greed" isn't the enemy you seek, it is "power". And power seeking is fundamental to humanity; indeed to life itself. Here is the inevitable, trivially simple, sequence of events:

1) AI is used to enhance the performance of battlefield weapons (already in progress).

2) Armies that grant autonomy to tactical AI weapons quickly defeat opposing armies.

3) All armies field fully autonomous weapons, or become easily defeated/conquered.

4) AI advisors assist human commanders in strategic deployment of forces (in progress).

5) Armies that eliminate human commanders quickly defeat armies that maintain them.

6) All armies eliminate "human in the loop" military decision making, perhaps retaining some kind of review process that becomes ineffectual because of the exponentially increasing speed of battle. Bear in mind that human commanders may remain as a kind of P.R. department for the AI, thus soothing the anxieties of the populace.

7) Some time during the above process, the economies of countries become the limiting factor in their ability to impose their will (as they are now, but greatly amplified).

8) In order to feed the voracious military machines, the economies will follow a similar path to the military, where ultimately, high level decision making in corporations will be far too demanding for humans (as frankly it is now). Those retaining humans will quickly be swept aside. As with military leadership, a kind of puppet human leadership will remain to soothe investors.

The outlook for human beings is grim. The ability to "empathize", or experience "qualia" has no value in a world where humans are irrelevant. This new world actually fits well with current authoritarian societies where humans (except political leadership) are already viewed as irrelevant. Such societies will provide minimal resistance to the rise of AI, especially given the fruits of conquest that will flow.

China or Russia won't care if the U.S. (or the west at large) "regulates" AI. In fact, they will cheer it, as they gain the upper hand as a result. And let's face it, any AI that can't outsmart a government regulator isn't "intelligent" to begin with.

Oof, what a pickle. Don’t share this at the picnic. 🎭

The danger my friend, as you noted, is us.

Until we face that directly, Ai, or trains, or bombs, or tractors are going to turn us over and over.

It's unethical for a human to draw an easy breath while Gaza continues.

Ai is nothing. It's just us, again.

We can pretend all the things, but it will not make a difference.

We stop Gaza or we deserve that last night.

That final goodbye to another experiment by mother Earth.

Will we take all her children with us?

Why Gaza? You know hundreds of thousands are being killed in Sudan, Syria, China, etc etc, far, far more than in Gaza, and far less justifiably. The obsession with one relatively small conflict is very strange. Why is it Gaza? Why do you hang the hopes of ALL OF HUMANITY on Gaza, but don’t care that China is killing thousands of human beings for their organs?

Why is Gaza more terrible to you, when it’s objectively less terrible than many other situations?

Sep 15·edited Sep 15

Stephen, you are correct as always. Explaining the etymology of previous inventions always helps with context and perspective when it comes to artificial intelligence. I am still scared witless of it… but then, this comment was written by someone who did not purchase a mobile until the year 2000… does this make me a Luddite? Hmm.

I wonder what that makes me, because I still don't have one.

One acronym: OMG! You are indeed, a true Luddite!!! Wow… 😱

A great read! And for anyone who hasn't seen the video of Stephen reading out a letter about ChatGPT and human creativity, make sure to check that out as well – https://www.youtube.com/watch?v=iGJcF4bLKd4

Well expressed with context and metaphor and worryingly all too real. AI (Ai?) needs to be high on the agenda of all governments and should form a key part of the dialogue with their electors (where they exist). Humanity, collectively, needs to decide what it does and does not want AI to do and the boundaries then need to be set and enforced by regulation.

This will be hard and slow, and will likely progress at a pace slower than the rate of technical development. I would favour a negotiated moratorium on development until some of the big questions around AI are answered. Achieving that will be hard, but we need to try if we are to avoid unintended consequences.

I wonder if Neanderthal man thought in the ways you are thinking, about how he and his kind would soon be obliterated by something sharper and leaner? Perhaps not by being wiped out, though, but by being assimilated. Perhaps that is our future, as some kind of cyborg... Nightmarish indeed.

Your conclusion, an appeal to our humanity, reminds me of the fact that the word “human”, like the word “humus”, comes from the proto-Indo-European root, dhghem, meaning EARTH. We are first and foremost EARTHLINGS, along with everything that lives here with us. People like Musk and Bezos, obsessed with living on another planet having trashed this one, are very far adrift, betrayers of their literal roots.

Only as nightmarish as our world would seem to the proto-human. Change is messy, but trying to stop it is messier still.
