259 Comments
Jeffrey:

Wordsmith? Historian? Dramatist? Sage? Stephen Fry, you always amaze with your sense of timing. I compare you to our Mike Rowe in the U.S. Your talent is one of nature's gifts. Thank you for your compassion, empathy, and perseverance for our species. This article as written will be a piece of history appreciated by current generations of readers willing to tackle the TLTR!

creesto cowplop:

Nice comment, but Rowe is a fraud.

Svein-Gunnar Johansen:

Eloquent and thoughtfully put. It must have been quite a treat for the attendees to hear it spoken out loud, and I do hope a recording was made and will eventually be released.

Linguistic elegance aside, the conclusion on the regulation of Ai (I shall endeavor to adopt your spelling of it) is of course the most important part.

I have some thoughts on the matter in my own sporadic writing, primarily pertaining to the use of Ai in the visual arts, which is currently my main concern, because what is life without it? I have mostly thought about it from a self-disclosure point of view, and these are the regulations I feel we (at minimum) need to implement ASAP:

1. Commercial Ai models must maintain a public database of every individual image they have scraped, and who created each one.

2. Every Ai-generated image should keep a record in its metadata of which artists' works were used as references, and in what proportion (see the sketch after this list).

3. Artists whose work and style are being used in a commercial image should receive a viable royalty payment.

This is of course just a tiny part of prepping for the tsunami, but I feel it might be a good place to start.
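A minimal sketch of what such a provenance record (points 1 and 2) could look like, purely as an illustration. The field names, the percentage figures, and the idea of serialising it into image metadata are my own assumptions, not an existing standard or anything current models produce:

```python
# Hypothetical provenance record of the kind proposed above. This is a sketch
# of the disclosure being argued for, not a description of how current
# generative models actually work.
from dataclasses import dataclass, field
import json


@dataclass
class SourceWork:
    artist: str            # who created the referenced work
    work_id: str           # entry in the (proposed) public database of scraped images
    weight_percent: float  # claimed share of influence on the generated image


@dataclass
class ProvenanceRecord:
    model: str
    sources: list[SourceWork] = field(default_factory=list)

    def to_metadata(self) -> str:
        """Serialise the record so it could be embedded in image metadata."""
        return json.dumps({
            "model": self.model,
            "sources": [vars(s) for s in self.sources],
        })


record = ProvenanceRecord(
    model="some-commercial-model",
    sources=[SourceWork("Jane Painter", "db:0001", 12.5),
             SourceWork("John Sketcher", "db:0042", 3.0)],
)
print(record.to_metadata())
```

Whether such percentages can be computed at all is exactly what the reply below disputes.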

Brandon George:

There is a recording available on the King's College YouTube channel with the same title as this article!

Seb:

Regarding your second point: that is, in theory, a good idea, but impossible to achieve with current generative ai models (I won't even capitalize the first letter ;-)), as these models don't keep a database of all referenced images, and the model itself never "contains" any actual "intellectual property" such as images. On top of that, the training data will often contain the same work n times (for an image: multiple copies at different resolutions, already-derivative images, (historical) copies by other artists, "memes" based on the image, and so on).

It would also be very hard to come up with a "percentage". I can keep the central element of an image and simply swap out the background for something else: now 80% of the actual pixels are different, yet you would still recognise what it's based on, and so on.
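To illustrate the first point, a minimal sketch (the checkpoint filename is a placeholder, and this assumes a PyTorch-style checkpoint on disk): a trained diffusion model is just a collection of named weight tensors, with no list of source images and no per-work bookkeeping from which a "percentage" could be read.

```python
# Minimal sketch: inspect a locally downloaded diffusion checkpoint.
# "model.ckpt" is a placeholder path; the point is that the file holds only
# parameter tensors (weights), not any of the scraped training images.
import torch

checkpoint = torch.load("model.ckpt", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # unwrap if nested

for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))  # e.g. a weight matrix, never an image file

print(f"{len(state_dict)} tensors in total; zero stored source images")
```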

Svein-Gunnar Johansen:

Not only that; in addition (as I expand upon in the post where I first make this argument), saving images after working on them with art tools easily strips this metadata, so yes... it will be very hard. A quick illustration follows below.
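A minimal sketch of that stripping, assuming Pillow and a hypothetical "tagged.jpg" whose EXIF block carries the provenance data; a plain re-save drops it unless the bytes are explicitly copied across:

```python
# Sketch: re-saving an image without passing its metadata along silently
# discards the EXIF block (where a provenance record might live).
from PIL import Image

src = Image.open("tagged.jpg")                 # hypothetical tagged source image
exif_bytes = src.info.get("exif", b"")
print("original EXIF size:", len(exif_bytes))

src.save("edited.jpg", quality=90)             # metadata not passed -> stripped
print("after re-save:", len(Image.open("edited.jpg").info.get("exif", b"")))

src.save("preserved.jpg", exif=exif_bytes)     # only survives if copied explicitly
```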

Yet, with the proper legislation this could be mandated, thus forcing the Ai providers to rebuild their models, this time to an acceptable spec.

As for replacing a percentage of the pixels... That would be a way around it, but it would also require work. Almost everyone who spams the Web with their Ai "creations" has neither the skill nor the will to do this work. I estimate that 99% of all the Ai-generated images I encounter are completely unprocessed by human hands and minds.

Britni Pepper:

Your estimation is worthless, Svein-Gunnar. You set yourself up as an infallible detector of AI. What you mean is that you only see AI that you think is AI. Anything above your own internal and arbitrary threshold escapes your notice and sails past, with your implicit blessing.

Svein-Gunnar Johansen:

That is an interesting point of view from someone who has clearly spent some time making all of "her" Substack post pictures look more or less like the same person, whilst not disclosing the fact that they are clearly Ai-generated.

One would think that it was just my "infallible detector of Ai" that tipped me off here, but there are also some pretty good Ai detection tools freely available, so I can't take all the credit :P

Britni Pepper:

Perhaps the wording “AI image by NightCafé” in each caption might inform readers' views?

Svein-Gunnar Johansen:

You still need to add a caption to the "more sophisticated" Stable Diffusion image in your top post :)

Britni Pepper:

Sorry. I just spotted that I misspelled your name. My error.

Nevertheless, my point is valid. Unless you can detect AI imagery with unerring accuracy, you're only going to spot the less sophisticated examples.

GiGi Grimm:

I love your ideas on this, particularly regarding artists receiving royalties for imitations/use of their "style" or work.

I work as a teaching assistant in an elementary school. Last week, our music teacher was instructing the kiddos on how to "write" songs using Ai. He explained that it was a tool and that it was important for them to understand how to use it. I agree with that, to a point, but I was disappointed that he did not mention that Ai plagiarizes the work of other people without their permission. I also felt he should have talked about the difference between the process of Ai "creating" music and the process a human goes through to create: the thought processes, the emotion, and the satisfaction derived from completing the work.

Sorry for the tangent to basically just say, "I agree with you".

Svein-Gunnar Johansen:

Thanks for the support :)

As for the music teacher using Ai to teach the kids how to "write" songs: this sounds a little dubious in my book. I have tried these tools, and fun as they might be to use, unless one is already somewhat proficient at piecing together a composition, I suspect the potential for learning anything about making music is low. It more or less starts at the wrong end, bypassing creativity entirely.

If you're interested in a more detailed take on my thoughts about Ai and licensing, here's a link to my post on the topic:

https://backtobasic.substack.com/p/a-pragmatic-approach-to-the-current

GiGi Grimm:

I just read your piece. I like the way you compared Ai scraping to the practice of sampling music. I agree we should credit artists whose work is "sampled" in generative AI.

Cartoon Wizard:

You may wish to revisit ChatGPT. The game is afoot, or in this case, a boot.

Alexander Hawking:

This was very well written, and well thought out until the end, when it veered into fantasy. "Greed" isn't the enemy you seek; it is "power". And power-seeking is fundamental to humanity, indeed to life itself. Here is the inevitable, trivially simple sequence of events:

1) AI is used to enhance the performance of battlefield weapons (already in progress).

2) Armies that grant autonomy to tactical AI weapons quickly defeat opposing armies.

3) All armies field fully autonomous weapons, or become easily defeated/conquered.

4) AI advisors assist human commanders in strategic deployment of forces (in progress).

5) Armies that eliminate human commanders quickly defeat armies that maintain them.

6) All armies eliminate "human in the loop" military decision making, perhaps retaining some kind of review process that becomes ineffectual because of the exponentially increasing speed of battle. Bear in mind that human commanders may remain as a kind of P.R. department for the AI, thus soothing the anxieties of the populace.

7) Some time during the above process, the economies of countries become the limiting factor in their ability to impose their will (as is already the case, but greatly amplified).

8) In order to feed the voracious military machines, the economies will follow a similar path to the military, where ultimately, high level decision making in corporations will be far too demanding for humans (as frankly it is now). Those retaining humans will quickly be swept aside. As with military leadership, a kind of puppet human leadership will remain to soothe investors.

The outlook for human beings is grim. The ability to "empathize", or experience "qualia" has no value in a world where humans are irrelevant. This new world actually fits well with current authoritarian societies where humans (except political leadership) are already viewed as irrelevant. Such societies will provide minimal resistance to the rise of AI, especially given the fruits of conquest that will flow.

China or Russia won't care if the U.S. (or the west at large) "regulates" AI. In fact, they will cheer it, as they gain the upper hand as a result. And let's face it, any AI that can't outsmart a government regulator isn't "intelligent" to begin with.

Just a Placebo:

Oof, what a pickle. Don’t share this at the picnic. 🎭

Anna Cummishey:

Although developments in science and technology are often used in war first, I hope that the development of AI can enter people's lives and help us enjoy life better, rather than becoming a tool of war. Nuclear bombs and nuclear energy depend on our choices: sometimes we can have them without having to use them (as with nuclear bombs).

Ragged Clown:

I think there's a parallel fear about work. What happens when Ai does all the work? What happens to the rest of us? Will the owners share the gains so that we can relax on the beach? Or will they all become trillionaires while we become paupers?

Steve Kelsey:

This is a wonderful read, and yet I have one disconnect. The symbolic representation of identity or ownership emerged in the Halaf period, some distant millennia ago, in the form of stone seals. These seals were used to mark property and produce, perhaps as a branding device. They were made by the owners of the property or the providers of the produce as a mark of identity. We don't know for certain, but perhaps these were symbols of quality or origin, much like modern brands. In the millennia that followed, the Ubaid culture extended the use of seals, making them more elaborate, but they were still produced locally: a distributed method of asserting ownership, or worth. It was in the following Sumerian culture that these symbols mutated into something like money and, simultaneously, perhaps in an attempt to standardise the use of symbols or to create a universally accepted system, the issue of the symbolic representation of worth or wealth became the unique responsibility of the state. State-issued money evolved quickly to direct and control the creation and distribution of wealth. This mutation has given immense control over wealth to the self-appointed rulers of society. The function of money is to permit access to wealth and wellbeing. Money is a control mechanism in our civilisation.

Ian B.:

Excellent piece and important thoughts. The analogy to money does seem particularly apt. One can also mourn the loss of people like Feynman and Jobs and Dennett whose thoughts and theories about Ai would have been most welcome while Ai is still in the semi-embryonic stage.

Christopher Foxx:

I'm not sure Jobs' view would have varied much from the other Silicon Valley types Fry mentions. He may have been a bit less power-hungry/insecure than folks like Thiel and Musk (respectively), but he shared their belief that his view was the right one and not subject to much introspection once decided on.

Rich Ward:

Maybe if Jobs were still alive, at his age he might have gained wisdom, humility, even faith as he journeyed through life's middle age, and who knows how that would have affected his life's work. Even more so if he had survived his illness: maybe the fragility of being human would also have contributed to his direction with Apple and beyond. But thank you, Stephen, for sharing this great talk 👍

Christopher Foxx:

It's always hard (and risky) to speculate on what might have been. Perhaps Jobs would have developed a sense of humility.

But based on what I've seen in people, gaining "faith" (becoming more religious) as they move through life tends to make them even more closed-minded in their views. The people most certain they are right and you are wrong are those most devout in their faith.

Rich Ward:

Hi Christopher, I think you are probably right about faith and narrow-mindedness in some. But not all, and I thank God that's not my story. Quite the opposite: more narrow in youth and, thanks to people like Richard Rohr and Jesus, more open with age. Bless you and namaste 🙏.

Christopher Foxx:

Hey, Rich.

Those who actually follow the tenets of their religion are usually "good people". For example, Christians who really do follow the example of Christ would help the poor, feed the hungry, turn the other cheek, and generally be kind, thoughtful, non-judgmental, and keep their religious beliefs to themselves (Matthew 6: 1-6).

Unfortunately, the vast majority of "people of faith" use it as an excuse to cover their insecurities by trying to control how others live their lives, condemning anyone who doesn't behave as they believe is proper, and generally being self-righteous, hypocritical assholes. Whether it's Christians who want women to die in hospital parking lots, Muslims who fly planes into buildings, or Jews who claim the indiscriminate bombing of children is "self-defense".

Again, there are people who see their faith as a way to genuinely be better people. But the core of any religion is surrendering your own judgment to what others tell you and that leads the vast majority of people to abdicate their responsibility for their own choices and actions under the guise of "God told me to".

Rich Ward:

You sound like the kind of person I would enjoy chatting to more. My faith journey is certainly none of the above, and my experience of religion is very mixed, but I am thankful that the stuff in my life has, as Nick Cave says, made me meet the world with arms open rather than closed. Bless you and have a good weekend ☺️

piplup:

Thank you Stephen, this is such a big worry. It is nice to know that others like you stand with humanity and oppose the voices who would gamble all that is not theirs on the chance to rule.

Daniel B O'Donnell:

Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/

MG:

What date will it be up please?

Frank Canzolino:

Pandora's box is ALWAYS opened; that is a fact of life. Only a time machine could fix that. Prepare for the best and worst Ai has to offer; there's no stopping the bad or good it can achieve. It would be nice if the fascist dictators in government would not add to our troubles…

Justin Riestra:

Damn fine thinking & writing!

May Ai be forever modeled on the thoughtfulness, wit, and measure of Stephen Fry. That's a technological advancement I can get behind.

Dimitar Vangelov:

A great read! And for anyone who hasn't seen the video of Stephen reading out a letter about ChatGPT and human creativity, make sure to check that out as well – https://www.youtube.com/watch?v=iGJcF4bLKd4

Sergej Klementinovski:

What a pleasure to read! Thank you!

Tom Parish:

Brilliant. Having worked through an AI season from 1980 to 1990 in an AI spin-off from MIT, all I can say is: thank you for all you've said here. I wish I could have heard you speak. Any chance it's on YouTube?

Hudson E Baldwin lll:

Until a voice-to-text function can decipher the difference between to, too, and two, I'm sure as hell not enthusiastic about letting it drive my car...

Britni Pepper:

Twaddle. What you mean is that you only notice the errors. Just how do you intend to identify the perfect, let alone police it?

Hudson E Baldwin lll:

Um.... I really don't give a shit about the grammar. I am referring to being in a 3000 pound missile doing 75 miles an hour, Twaddlewaffle.

Res Nullius:

Surely the ability to drive a car should be the criterion for being allowed to drive a car? There are adult humans who can't reliably make correct grammatical choices - would you ban them from getting a driver's licence?

Fry mentions Moravec's paradox - the advance of AI is a notoriously jagged frontier. AI can defeat chess grandmasters, but currently has trouble telling the difference between a shadow and a pothole on the road. Your assertion makes about as much sense as if you'd declared that you *would* let AI drive your car because it can beat you at chess.

Hudson E Baldwin lll:

You're getting a bit deep in the weeds of a rhetorical jungle. It stands to reason that operating a vehicle is a task that requires more computing speed and power than understanding syntax and basic grammatical concepts such as synonyms and homonyms.

Hudson E Baldwin lll:

You bring me to a thought I've had for years. Instead of voter ID, voter IQ tests are in order. If one must show competence to drive a vehicle, I would think it should be the least of the qualifications to vote. Competency concerning the body politic, that is: rudimentary knowledge of policy. Obviously, the workings of the federal government should be a known quantity. I mean, is Schoolhouse Rock no longer a thing?

Alex Tolley:

So the disabled couldn't vote? Should the elderly who are no longer competent to drive be denied the vote? Remember, literacy tests were used to prevent Black populations from voting in the Jim Crow southern states.

Victoria Lynn Devereaux:

I am sharing a link to this. A much appreciated delve into the unknown. I absolutely enjoyed reading your take on Ai, and I am thankful you allowed those of us who cannot always afford subscriptions to partake of this talk. I am an elder, so my time here is short; that said, I am a part of the consciousness of my era and a bit of a nerd with a mechanical lean to my mind, and your talk addresses this. So, thank you. I am a native of Austin, Texas, where Musk has desecrated Boca Chica, South Padre Island with his SpaceX, built a huge Tesla factory near Austin, and plans to move 'X' here. Lawsy, she says. Battery has so many meanings… I ramble. Again, enjoyed the read tremendously.

Rona Topaz:

Stephen, you are correct as always. Explaining the etymology of previous inventions always helps with context and perspective when it comes to artificial intelligence. I am still scared witless of it… but then, this comment was written by someone who did not purchase a mobile until the year 2000… does this make me a Luddite? Hmm.

Ben Woestenburg:

I wonder what that makes me, because I still don't have one.

Rona Topaz:

One acronym: OMG! You are indeed, a true Luddite!!! Wow… 😱

Colin Devonshire:

Brilliant. Thank you.
