162 Comments

Wordsmith? Historian? Dramatist? Sage? Stephen Fry, you always amaze with your sense of timing. I compare you to our Mike Rowe in the U.S. Your talent is one of nature's gifts. Thank you for your compassion, empathy, and perseverance for our species. This article as written will be a piece of history appreciated by current generations of readers willing to tackle the TL;DR!

Eloquent and thoughtfully put. It must have been quite a treat for the attendees to hear it spoken out loud, and I do hope a recording was made and will eventually be released.

Linguistic elegance aside, the conclusion on regulation of Ai (I shall endeavor to adopt your spelling of it) is of course the most important part.

I have some thoughts on the matter in my own sporadic writing, primarily pertaining to the use of Ai in the visual arts... Which is currently my main concern, because what is life without it? I have mostly thought about it from a self-disclosure point of view, and these are the regulations I feel we (at minimum) need to implement ASAP:

1. Commercial Ai models must maintain a public database of every individual image they have scraped, and who created each one.

2. Every Ai-generated image should keep a record in its metadata of which artists' works were used as references for the generated image, and to what percentage.

3. Artists whose work and style are being used in a commercial image should receive a viable royalty payment.

This is of course just a tiny part of prepping for the tsunami, but I feel it might be a good place to start.

Regarding your second point: that is, in theory, a good idea, but impossible to achieve with current generative ai models (I won't even capitalize the first letter ;-)), as these models don't keep a database of all referenced images, and the model itself never actually "contains" any "intellectual property" such as images. In addition, training data will often contain the same work n times (for an image, e.g. multiple copies at different resolutions, already-derivative images, (historical) copies by other artists, "memes" based on the image, and whatnot).

It would also be very hard to come up with a "percentage". I can keep the central element of an image and simply swap out the background for something else. Now 80% of the actual pixels are different, yet you would still know exactly what it's based on, and so on.

Not only that: as I expand upon in the post where I first make this argument, saving images after working on them with art tools easily strips this metadata, so yes... it will be very hard.
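To illustrate the stripping point (a minimal sketch using the Pillow library; the file names and the "ai-provenance" key are made up for the example): a PNG's provenance metadata survives being read back, but an ordinary re-save silently drops it unless the tool explicitly carries it over.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Make a tiny image and attach hypothetical provenance data as a PNG text chunk.
im = Image.new("RGB", (8, 8), "white")
meta = PngInfo()
meta.add_text("ai-provenance", "artist reference percentages would go here")
im.save("tagged.png", pnginfo=meta)

# Reading the file back preserves the chunk...
reopened = Image.open("tagged.png")
print(reopened.text)  # the provenance chunk is still there

# ...but a plain re-save, as any art tool might do, writes no text chunks.
reopened.save("resaved.png")
print(Image.open("resaved.png").text)  # {} - provenance silently gone
```

Any mandated provenance scheme would have to survive exactly this kind of round trip through ordinary editing software.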

Yet with the proper legislation this could be mandated, thus forcing the Ai providers to rebuild their models, this time to an acceptable spec.

As for replacing a percentage of the pixels... that would be a way around it, but it would also require work. Almost everyone who spams the Web with their Ai "creations" has neither the skill nor the will to do this work. I estimate that 99% of all Ai-generated images I encounter are completely unprocessed by human hands and minds.

Your estimation is worthless, Svein-Gunnar. You set yourself up as an infallible detector of AI. What you mean is that you only see AI that you think is AI. Anything above your own internal and arbitrary threshold escapes your notice and sails past, with your implicit blessing.

That is an interesting point of view from someone who has clearly spent some time making all of "her" Substack post pictures look more or less like the same person, whilst not disclosing the fact that they are clearly Ai-generated.

One would think that it was just my "infallible detector of Ai" that tipped me off here, but there are also some pretty good Ai detection tools freely available, so I can't take all the credit :P

Sorry. I just spotted that I misspelled your name. My error.

Nevertheless, my point is valid. Unless you can detect AI imagery with unerring accuracy, you're only going to spot the less sophisticated examples.

Perhaps the wording “AI image by NightCafé” in each caption might inform readers' views?

You still need to add captions to the "more sophisticated" stable diffusion image from your top post :)

There is a recording available on the King's College YouTube with the same title as this article!

Thank you Stephen, this is such a big worry. It is nice to know that others like you stand with humanity and oppose the voices who would gamble all that is not theirs on the chance to rule.

This is a wonderful read, and yet I have one disconnect. The symbolic representation of identity or ownership emerged in the Halaf period some distant millennia ago in the form of stone seals. These seals were used to mark property and produce, perhaps as a branding device. They were made by the owners of the property or the providers of the produce as a mark of identity. We don’t know for certain, but perhaps these were symbols of quality or origin, much like modern brands.

In the millennia that followed, the Ubaid culture extended the use of seals, making them more elaborate, but they were still produced locally, a distributed method of asserting ownership, or worth. It was in the following Sumerian culture that these symbols mutated into something like money and, simultaneously, perhaps in an attempt to standardise the use of symbols, or to create a universally accepted system, the issue of the symbolic representation of worth or wealth became the unique responsibility of the state.

State-issued money evolved quickly to direct and control the creation and distribution of wealth. This mutation has given immense control over wealth to the self-appointed rulers of society. The function of money is to permit access to wealth and wellbeing. Money is a control mechanism in our civilisation.

Excellent piece and important thoughts. The analogy to money does seem particularly apt. One can also mourn the loss of people like Feynman and Jobs and Dennett whose thoughts and theories about Ai would have been most welcome while Ai is still in the semi-embryonic stage.

I'm not sure Jobs' view would have varied much from the other Silicon Valley types Fry mentions. He may have been a bit less power-hungry/insecure than folks like Thiel and Musk (respectively), but he shared their belief that his view was the right one and not subject to much introspection once decided on.

Maybe if Jobs were still alive, at his age he might have gained wisdom, humility, even faith as he journeyed through life’s middle age, and who knows how that would have affected his life’s work. Even more so if he had survived his illness: maybe the fragility of being human would also have contributed to his direction with Apple and beyond. But thank you Stephen for sharing this great talk 👍

It's always hard (and risky) to speculate on what might have been. Perhaps Jobs would have developed a sense of humility.

But based on what I've seen in people, gaining "faith" (becoming more religious) as they move through life tends to make them even more closed-minded in their views. The people most certain that they are right and you are wrong are those most devout in their faith.

Hi Christopher, I think you are probably right about faith and narrow-mindedness in some. But not all, and I thank God that’s not my story. Quite the opposite: more narrow in youth, and thanks to people like Richard Rohr and Jesus, more open with age. Bless you and namaste 🙏.

Hey, Rich.

Those who actually follow the tenets of their religion are usually "good people". For example, Christians who really do follow the example of Christ would help the poor, feed the hungry, turn the other cheek, and generally be kind, thoughtful, non-judgmental, and keep their religious beliefs to themselves (Matthew 6:1-6).

Unfortunately, the vast majority of "people of faith" use it as an excuse to cover their insecurities by trying to control how others live their lives, condemning anyone who doesn't behave as they believe is proper, and generally being self-righteous hypocritical assholes. Whether it's Christians who want women to die in hospital parking lots, Muslims who fly planes into buildings, or Jews who claim the indiscriminate bombing of children is "self-defense".

Again, there are people who see their faith as a way to genuinely be better people. But the core of any religion is surrendering your own judgment to what others tell you, and that leads the vast majority of people to abdicate responsibility for their own choices and actions under the guise of "God told me to".

You sound like the kind of person I would enjoy chatting to more - my faith journey is certainly none of the above, my experience of religion is very mixed, but I am thankful that the stuff in my life has, as Nick Cave says, made me meet the world with arms open rather than closed - bless you and have a good weekend ☺️

This was very well written, and well thought out until the end, when it veered into fantasy. "Greed" isn't the enemy you seek, it is "power". And power seeking is fundamental to humanity; indeed to life itself. Here is the inevitable, trivially simple, sequence of events:

1) AI is used to enhance the performance of battlefield weapons (already in progress).

2) Armies that grant autonomy to tactical AI weapons quickly defeat opposing armies.

3) All armies field fully autonomous weapons, or become easily defeated/conquered.

4) AI advisors assist human commanders in strategic deployment of forces (in progress).

5) Armies that eliminate human commanders quickly defeat armies that maintain them.

6) All armies eliminate "human in the loop" military decision making, perhaps retaining some kind of review process that becomes ineffectual because of the exponentially increasing speed of battle. Bear in mind that human commanders may remain as a kind of P.R. department for the AI, thus soothing the anxieties of the populace.

7) At some point during the above process, the economies of countries become the limiting factor in their ability to impose their will (as it is now, but greatly amplified).

8) In order to feed the voracious military machines, the economies will follow a similar path to the military, where ultimately, high level decision making in corporations will be far too demanding for humans (as frankly it is now). Those retaining humans will quickly be swept aside. As with military leadership, a kind of puppet human leadership will remain to soothe investors.

The outlook for human beings is grim. The ability to "empathize", or experience "qualia" has no value in a world where humans are irrelevant. This new world actually fits well with current authoritarian societies where humans (except political leadership) are already viewed as irrelevant. Such societies will provide minimal resistance to the rise of AI, especially given the fruits of conquest that will flow.

China or Russia won't care if the U.S. (or the west at large) "regulates" AI. In fact, they will cheer it, as they gain the upper hand as a result. And let's face it, any AI that can't outsmart a government regulator isn't "intelligent" to begin with.

Oof, what a pickle. Don’t share this at the picnic. 🎭

I think there's a parallel fear for work. What happens when Ai does all the work? What happens to the rest of us? Will the owners share the gains so that we can relax on the beach? Or will they all become trillionaires while we become paupers?

Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/

What date will it be up please?

The Pandora’s box is ALWAYS opened; that is a fact of life. Only a time machine can fix that. Prepare for the best and worst Ai has to offer; there’s no stopping the bad or good it can achieve. It would be nice if the fascist dictators in government would not add to our troubles…

What a pleasure to read! Thank you!

Damn fine thinking & writing!

May Ai be forever modeled on the thoughtfulness, wit, and measure of Stephen Fry. That's a technological advancement I can get behind.

Until a voice to text function can decipher the difference between to, too, and two, I'm sure as hell not enthusiastic about letting it drive my car......

Twaddle. What you mean is that you only notice the errors. Just how do you intend to identify the perfect, let alone police it?

Um.... I really don't give a shit about the grammar. I am referring to being in a 3000 pound missile doing 75 miles an hour, Twaddlewaffle.

Surely the ability to drive a car should be the criterion for being allowed to drive a car? There are adult humans who can't reliably make correct grammatical choices - would you ban them from getting a driver's licence?

Fry mentions Moravec's paradox - the advance of AI is a notoriously jagged frontier. AI can defeat chess grandmasters, but currently has trouble telling the difference between a shadow and a pothole on the road. Your assertion makes about as much sense as if you'd declared that you *would* let AI drive your car because it can beat you at chess.

You're getting a bit deep in the weeds of a rhetorical jungle. It stands to reason that operating a vehicle is a task that requires more computing speed and power than understanding syntax and basic grammatical concepts such as synonyms and homonyms.

You bring me to a thought I've had for years. Instead of voter ID, voter IQ tests are in order. If one must show competence to drive a vehicle, I would think that should be the least of the qualifications to vote. Competency concerning the body politic, that is. Rudimentary knowledge of policy. Obviously, the workings of the federal government should be a known quantity. I mean, is Schoolhouse Rock no longer a thing?

Brilliant. Thank you.

A great read! And for anyone who hasn't seen the video of Stephen reading out a letter about ChatGPT and human creativity, make sure to check that out as well – https://www.youtube.com/watch?v=iGJcF4bLKd4

Brilliant. Having worked through an AI season from 1980-1990 in an AI spin-off from MIT, all I can say is: thank you for all you've said here. I wish I could have heard you speak. Any chance it's on YouTube?

The danger my friend, as you noted, is us.

Until we face that directly, Ai, or trains, or bombs, or tractors are going to turn us over and over.

It's unethical for a human to draw an easy breath while Gaza continues.

Ai is nothing. It's just us, again.

We can pretend all the things, but it will not make a difference.

We stop Gaza or we deserve that last night.

That final goodbye to another experiment by mother Earth.

Will we take all her children with us?

Why Gaza? You know hundreds of thousands are being killed in Sudan, Syria, China, etc etc, far, far more than in Gaza, and far less justifiably. The obsession with one relatively small conflict is very strange. Why is it Gaza? Why do you hang the hopes of ALL OF HUMANITY on Gaza, but don’t care that China is killing thousands of human beings for their organs?

Why is Gaza more terrible to you, when it’s objectively less terrible than many other situations?

Stephen, you are correct as always. Explaining the etymology of previous inventions always helps with context and perspective when it comes to artificial intelligence. I am still scared witless of it… but then, this comment was written by someone who did not purchase a mobile until the year 2000… does this make me a Luddite? Hmm.

I wonder what that makes me, because I still don't have one.

One acronym: OMG! You are indeed a true Luddite!!! Wow… 😱
