
A.I., Photography, and our Futures | PhotoVogue Festival 2023: What Makes Us Human? Image in the Age of A.I.

Using artificial intelligence, photorealistic images can be generated in seconds, depicting people and places that never existed and events that never happened. These synthetic images threaten the livelihood and credibility of photographers, as well as potentially transforming what is thought of as real. But they also allow a more imaginative and at times playful way of depicting much that is outside the realm of photography, including the future and the distant past, thoughts and dreams, and the perspectives of people whose stories are not always fully told. Where is this revolution in imaging taking us, and where do we want to go? “A.I., Photography, and our Futures” will try to find an answer to this urgent question.

Released on 11/22/2023

Transcript

[crowd applauds loudly]

Thank you.

And just building off the last talk,

which I found incredibly interesting

and clear, which is very rarely the case with AI.

There's this sense. I was picture editor

of the New York Times Sunday Magazine

and different publications.

And the way we would do it, we'd put something on the cover,

let's say skiing or tennis, and you'd open the magazine

and you'd find out about the Civil War in El Salvador

or about a writer you never heard of,

or a philosopher or musician.

We didn't give you what you wanted,

but we engaged you to be curious and to find out stuff.

So we weren't expressing your desires, we just had an idea.

It's kind of like you go to a restaurant, they say,

'Here are six things to choose to eat.'

And you say, 'I never had that one before.

I'll try it.'

And now with social media and so on,

the algorithms decide you like this:

'We'll give you more of this, more of this, more of this.'

And you end up in a bit of a bubble.

And when you look at photography, what it used

to be was you'd make a photograph

and you'd find things in the photograph

you didn't even know were there.

It would tell you stuff you didn't know.

And that was analog photography.

And then we got to computational photography,

which is the software inside your cell phone.

And it's changing the image

before you even know it's doing it, to give you

what it thinks is more pleasing to you.

So you don't discover what's there;

it makes it more pleasing to you.

So for example, I live in New York a good part of the year,

and the sky turned orange

because of the wildfires in Canada.

And when you made a cell phone picture, it made it blue,

because the cell phone didn't understand

that a sky could be orange.

It made it more pleasing.

So you weren't seeing what was there.

You were seeing what the algorithm wanted

to tell you was there.
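
To make that concrete, here is a minimal sketch, an illustration rather than any phone vendor's actual pipeline, of a gray-world auto white balance, the kind of silent correction that can push an orange wildfire sky back toward blue:

```python
# A minimal sketch, not any phone vendor's actual pipeline: a
# "gray-world" auto white balance. It assumes the average color of a
# scene should be neutral gray, so a picture dominated by an orange
# wildfire sky gets pushed back toward blue.
import numpy as np

def gray_world_balance(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float RGB array with values in [0, 1]."""
    channel_means = image.reshape(-1, 3).mean(axis=0)  # average R, G, B
    target = channel_means.mean()                      # neutral gray level
    gains = target / (channel_means + 1e-8)            # per-channel correction
    return np.clip(image * gains, 0.0, 1.0)
```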

And if you notice, you go to a museum,

you photograph a painting with a cell phone, a Rembrandt,

which is dark, it makes it light, it changes it

to be more pleasing.

So computational photography then subverted

the lens of the camera.

The lens was not sufficient.

And now we have AI, which got rid of the camera.

So we went from having a camera

and lens to subverting the lens

with computational photography.

And now we got rid of the camera.

This is photorealistic imaging that looks like photographs

but is not photographs, made without the use of a camera.

So what I'm gonna be talking about

is what photography has been able to do,

in a very good sense, in the world,

some of what we're losing,

and some of the ways we may be able

to reconstitute it as we move forward.

My English is okay, we're okay?

When you talk to people, if somebody could smile

once in a while, it would be helpful.

It would just be nice.

So this is 1968 Christmas Eve,

Earthrise, the Earth seen from outer space by an astronaut.

It was amazing to us at the time,

because the Earth, we never saw it from the outside before,

how vulnerable we were.

In big space, darkness, we're all together on this planet,

just spinning around out there.

And we decided we really should do something about it.

So 16 months later, we started Earth Day, April 22nd, 1970.

And that first Earth Day, the US had 200 million people

and 20 million people, one out of 10, went

to clean up the rivers and the forests, the streets,

because we cared about our planet.

And dozens of countries

around the world made postage stamps,

commemorating the Earth as seen from outer space.

So this is something like 50 years ago,

the photograph brought us together.

It said, we're in this together,

we could help each other in a big way.

And now we go to Israel and Gaza.

This was in the BBC.

One of the boys is Omer

and one is Omar, one is Palestinian, one is Israeli.

They're both four years old,

and they were both killed in the first days

of this conflict. On social media,

people were saying that the photographs,

the pictures of Omar, show a doll,

that it's not a real person who died, but a doll.

And the BBC had to interview his mother to say,

'My son is not a doll.

He's a child who died.'

And for the Israeli boy, Omer, social media was saying,

'He's a crisis actor.

He and his two sisters were paid to pretend

to be dead, but they're not dead.'

And the five people in his family were massacred,

were killed.

So we've gone from a place

where the photograph brings us together to a place

where photographs are weaponized.

They're used to prove a point which may be completely crazy

and filled with hate.

And so one of the questions

that we're facing is not just a question of AI,

but a question of using photography in a way

that we could deal with issues together

instead of hating each other.

So this is synthetic imaging from Adobe,

and Santiago is here

and will be speaking for Adobe after I speak.

And these are not photographs.

These are AI images of the conflict in Gaza

that were made and put on the Adobe stock site

and labeled as AI-generated images.

And they're being used in magazines

to depict the current conflict.

So one of the issues is what happened

to the actual people who are dying?

Why aren't they there?

Why do you need synthetic images

of people who don't exist?

How can you do that? How can you sell those images?

How can you profit from people's misery?

And Santiago informs me today

that Adobe is now looking into it

and may well change their policy and not sell those images.

But we're finding images of video games purporting

to be this conflict; people are putting online images

of other conflicts, of things that happened 10

or 15 years ago, pretending to be this conflict.

The image becomes a weapon.

The image is at war at this point.

So for example, Amnesty International:

there was a demonstration against police brutality

in Colombia, in Latin America.

And this is not a photograph; this is a synthetic image,

AI-generated, of demonstrators.

And people said, how could Amnesty International

put in a fake image of real people who are suffering?

And one of the arguments is, well, if they're fake,

you can't arrest them.

As often happens, but they immediately pulled it down.

So the issue is how do we think about this stuff?

When is it useful,

and when is it really destructive in the world?

So in 1972, this is a photograph by Nick Ut

of a young nine-year-old Vietnamese girl, Kim Phuc,

who is burning from napalm.

Interestingly, we were able to put

that on the front page even though she was nude.

And in Asia, this was particularly difficult to do,

scandalous, to show a frontal nude of a little girl.

But the following year

after this photograph was published,

the United States pulled out all its soldiers

from the war in Vietnam.

You can't say it's cause and effect directly,

but it's extraordinary that in those days, these photographs

of children, unlike Omer and Omar,

could be used to be helpful in the world.

Or this of Hector Pieterson in South Africa who was killed

by the security forces during apartheid.

When Nelson Mandela saw this photograph,

which was published on the front page, he is said

to have remarked, 'We understood, from now on,

we cannot let apartheid continue.

We cannot let it continue.

This photograph is the signal, the symbol

of the horror of it, and we cannot let it continue.'

It's very different from Omer and Omar.

The same with Alan Kurdi,

the three-year-old boy from Syria who drowned trying

to escape. This photograph in 2015

may be the last iconic photograph internationally

that we may ever see in which countries agree this is wrong

and we should do something about it.

So what I'm arguing is that

we are leaving ourselves defenseless

if we cannot agree on what photographs are about:

that they are photographs, that they're recordings

of the visible; if everything becomes weaponized.

And the problem with AI isn't so much the actual images,

but just the fact that they may be AI.

So you begin to be skeptical of all images,

and it's much easier for people in rich countries to do that

because it's happening to somebody else.

So I think that we all have to be invested in trying

to figure out what to do.

So in Tiananmen Square in 1989,

the Chinese government said there was no massacre.

The photographs said yes, there was a massacre,

and the photographs won.

You could not say fake news.

The photographs won

and one of the largest countries in the world,

the government lost.

This was an extraordinary thing.

The following year, Photoshop came out.

I was on the Today Show, a big TV program

with Adobe when they announced it.

And I said, in the years to come,

because of the manipulation of imagery, we may not be able

to defy a government anymore

because the credibility of the photograph itself

will be diminished.

And unfortunately, that is what's happened at this point.

It's not all the fault of Photoshop.

There are many reasons, the lack of front pages

of newspapers and so on, but that's one of them.

So this was, as you know, the tank man, the guy

who stood up at Tiananmen Square,

nobody knows who he is.

He's anonymous, he's never been found.

But if you went to Google about two

or three weeks ago, the number one search that came back

was this, a synthetic image of the tank man,

that was the number one

that came up when you put in Tank Man as a term.

So that history gets distorted.

It's possible now,

and in my mind, when I think of the tank man,

I think of this fake image.

I cannot forget it.

So, I've experimented enormously

in the last period of time.

I spoke here one year ago at PhotoVogue,

when I was more of a beginner.

This is a photograph of a street scene in Berlin in 1815.

That's the prompt.

There were no photographs in 1815

and there were no automobiles.

So I could put this online,

and you're 16 years old writing a piece on the history

of Berlin, and you include it,

and the new AI systems then train on my image.

I could distort history if I wanna put this online.

And anybody who could write a sentence could do

the same thing I did; it cost me probably 5 cents to do it.

Or this is Kamala Harris,

the vice president of the United States.

So I asked for her

to be in the March on Washington when Martin Luther King

gave his 'I Have a Dream' speech in 1963, 60 years ago.

And there she is, Kamala Harris.

She was born the year after; she was born in 1964.

But in a history book, there she may be.

And we have no safeguards at this point

against people polluting the media environment

with these kinds of images.

I do it to provoke a debate.

I'm not interested in publishing it.

Or these are plantation owners in Mississippi, 1855.

These were slave owners,

and I asked for them to be smiling, to be generous; no problem.

So now the slave owners

look like nice guys.

Of course, you could not smile in a photograph in 1855,

because the exposure time was too long.

But the AI has no problem, because as was pointed out before,

the AI wants to please me; I'm the consumer.

It gives me what I want, even if I shouldn't have it.

Or this is a photograph of the ticker-tape parade

after the Vietnam War. That's the prompt.

We never had a ticker-tape parade

after the Vietnam War; the soldiers

who came back would change their uniforms in the bathroom

at the bus station because people would spit at them

as baby killers.

But I changed history.

We now have a ticker-tape parade,

and we have a soldier making a selfie

during the Vietnam War.

Imagine if you're 12 years old. You say, 'Mom, Dad,

I have an image, a photograph of a soldier.

They made selfies during the Vietnam War just like us.'

Of course, selfies weren't invented yet.

No problem.

And he is very proud of it.

Then I had Robert Capa, who died in 1954, cover

the insurrection on January 6th in Washington.

He did a very good job.

Again, the issue is: is it morally correct

to use a dead photographer to make new images?

You know, I was at Arles this summer at the photo festival,

and I said, if I come next year with 40 images in the style

of Cartier-Bresson,

are you gonna stop me from having an exhibition?

And nobody had an answer.

Abraham Lincoln taking a selfie; it's my favorite.

And this is a company in Argentina

that makes generated images.

And what you can do is just

ask them for what you want, whichever kind

of images you want.

None of these people exist.

And according to a survey

that the Guardian reported on this week,

when people are shown, in a research setting, portraits

of people made by AI

and actual photographs, two out of three people think

that the AI images are actual people,

but only about one out

of two, 51%, think the photographs are actual people.

Do you understand? It's exactly the opposite.

They trust the AI images more than the photographs,

and they can't tell the difference.

I cannot tell the difference.

And I've been doing this for 50 years.

So what they allow you

to do is, say, a beautified, left-facing young Black person.

So if you have a company

and you don't have any Black people,

you've got 'em now, in your brochure,

if you wanna be completely unethical.

Or joyous female blonde people with blue eyes: you've got 'em.

To me, it reminds me of, you know,

to be really harsh, the Nazi Germany idea of genetics

and who's okay and not okay and so on.

You could also do it with babies.

And they have about a million and a half images there,

and they suggest all kinds

of uses: make your own avatar like this.

And this is a previous survey where they found

that the AI faces are more trustworthy

than the photographed faces.

What does that mean then for selling fashion?

If you're trusting the AI faces more

than the actual people's faces, the photographs,

what's gonna happen to the fashion industry?

Figure it out.

Nobody knows.

So this issue of Life Magazine was considered

the most important issue against the Vietnam War, 1969.

These were photographs of the American soldiers

who died that week.

There were 240 who died that week.

And the idea was that when you look at the images,

these could be your brother, your cousin, your neighbor.

It's the age, the rank, where they're from.

The editor of Life Magazine said

it's the most important issue he ever did,

just because it made it real to people.

Of course, it didn't have the Vietnamese people who died

that week, but it did have the Americans who died.

And it was considered, not the battle scenes,

but this, to be when Life turned against the war.

But if you don't believe these are actual people,

you can't do this anymore,

if you have the suspicion they may not be,

that they may be synthetic.

So when 100,000 people died of COVID in the US,

the New York Times did not use photographs.

It was all text. So one guy did this:

he made images of the 100,000 people who died,

you know, I think he was an Uber engineer,

and put them online, all these people who died of COVID.

But then he tells you, none of these people ever existed.

So you're supposed to mourn people who didn't exist,

but they are correct in terms of age, race

and so on, in terms of the percentages of people who died

of COVID, old people, young people, Asian people, people

of color, White people, and so on like that.

So we reach a place where the virtual

replaces the photographic, even in terms

of mourning, of grief.

So one of the projects I've worked on

for 20 years is the Four Corners Project.

It's the idea that every photograph online has each

of its corners templated,

with different kinds of information in each.

So if photographs are under attack by manipulated imagery,

by synthetic AI imagery, one of the responses

is to increase context.

So if I photographed all of you, in the different corners,

you know, I could show where it's happening,

what the program is, you know, stuff around it.

Individuals could say things,

and this is the way it works.

So with Alan Kurdi, the photograph: in the bottom-right,

you would have the contextualizing information, the caption,

the code of ethics of the photographer,

which in this case, I wrote:

'While all photography is interpretive, as a photojournalist,

my photographs are meant

to respect the visible facts of the situation

I depict.

I do not add or subtract elements.'

What's wrong with having a code of ethics?

'I'm a fashion photographer.

I will not work with underweight models,

because it's unhealthy.'

That's my code of ethics.

Whatever it is, you write your own, that kind of thing.

And then the backstory

that the photographer felt this was horrific.

And what else?

She'd photographed many dead bodies.

Or the upper-left corner: other related imagery.

So this is Alan Kurdi and his brother, who also drowned.

So he's not just a dead body on the beach;

he's a little kid with a brother and a stuffed animal.

You could give context so he doesn't become anonymous,

but becomes a person; he's humanized.

You could add context. When I worked on the show

of Magnum photographs, 400 photographs over 40 years,

it was an enormous exhibition.

But I realized that all the photographs

were taken, over 40 years, in about four seconds:

each photograph, if the shutter is open a hundredth

of a second, times 400, that's four seconds of 40 years.

That's not a lot. What's wrong

with giving some context to the image?

Digitally, you can do it.

Don't just do things digitally the way you did them in analog,

and then links to other things and so on.

These are codes of ethics that I wrote.

You know, for a fashion photographer,

a sports photographer: you write your own.

And this was the US invading Haiti to bring democracy.

But interestingly, this is

what it looked like from the side.

So if you have a photo opportunity, a staged image,

show it from the side.

A photographer cannot go along with faking media,

so that some politician puts his arm around a worker

and says, 'Oh, that's my buddy.'

Well, show it from the side and show how it was set up.

Reveal something.

Don t pretend that s the way it is.

Go beyond it, just the way any responsible person should.

Or this was riots in London:

you could show what the building looked like before.

Why not? It's easy.

It's not a single image.

It's like McLuhan said,

that we're all going 150 kilometers an hour looking

through the rear-view mirror.

We're doing it the way it used to be done.

We're not using digital for what it can do

and should do differently.

Or, you know, the sports:

so you can actually see why Griezmann is so happy.

He scored the goal, very nice.

You could use video with photography;

that's not such an extraordinary idea.

Or like this: in the US, they had a hashtag,

If They Gunned Me Down. Young African American men

and women, if the police gunned them down,

if they killed them, the media often used

the so-called gangster picture, in a way saying the police

were right to kill them.

But if you're doing this, you could say no,

this is the same guy.

You cannot get away with depicting me in a racist way.

I can resist, I can fight back.

I can show difference.

This is simple to do.

It doesn't cost any money; it's just a little bit

of effort to do it differently.

And if you wanna use it, it's free.

It's open source.

That's where you find it, and that's how it works.
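
As a rough illustration of the idea, the templated corners can be thought of as structured context traveling with the image. Here is a minimal sketch in Python; the field names are illustrative assumptions, not the Four Corners Project's actual schema:

```python
# A minimal sketch, assuming a hypothetical schema: one way to model the
# four templated corners as metadata carried alongside an image. The
# field names are illustrative, not the Four Corners Project's spec.
four_corners = {
    "image": "alan_kurdi.jpg",          # hypothetical filename
    "bottom_right": {                   # contextualizing information
        "caption": "Alan Kurdi, a three-year-old Syrian boy who "
                   "drowned trying to escape, 2015.",
        "code_of_ethics": "I do not add or subtract elements.",
        "backstory": "The photographer felt this was horrific; "
                     "she'd photographed many dead bodies.",
    },
    "upper_left": {                     # other related imagery
        "related_images": ["alan_kurdi_with_brother.jpg"],
    },
    "links": ["https://example.org/context"],  # placeholder link
}

# A viewer would reveal each corner's content on click or hover.
print(four_corners["bottom_right"]["caption"])
```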

So, I suggested this in 2004 at World Press Photo

in Amsterdam. 19 years later,

there's, I think, one publication in the world using it.

So maybe we can make it two.

So, looking for authenticity: one of the things media

does really badly is it reacts; photography reacts.

It's rarely proactive, in advance.

So in the case of Gaza

and Palestine, we don't know who the people are

who are dying, because nobody bothered

to present photo essays showing us who they are

before they become newsworthy.

You could say the same of people in Iran.

You could say the same of people in many, many countries.

We don't know who they are.

So this is Tanya Habjouqa's work, Occupied Pleasures:

what it's like to live under occupation as a Palestinian.

And obviously, the pictures are completely different

from what we're seeing every day.

These are real people with culture,

with humanity, with lives.

They're not just a news story.

Nobody is.

And we have to do a better job of being proactive

and showing cultures and showing context and showing lives

because it's so much easier to demonize

and kill people who we do not know.

They somehow are less human.

So we're doing a really bad job at what used to be

done much more in visual journalism.

You know, photo essays: oh, Russia,

we don't know much about it.

China, we don't know much about it.

Let's know something about it.

Curiosity.

It's not just getting what you want on social media;

it's getting what you don't know you need to know.

This is just a family who built a swimming pool.

So Jim Goldberg's work, for example, Rich and Poor

in San Francisco, was collaborative.

It's not the photographer defining people who are rich

or poor, but asking them: here's your portrait.

What do you think? 'We look like ordinary people.'

And then it says, 'We have a terrible life.'

The photographer doesn't know that.

The people know it.

So you could collaborate with the people you're depicting

and ask their opinion of the picture.

You just arrived.

You may not know them; give them a say. Or this one:

'I love David, but he is too fragile

for a rough father like me.'

It'd be very difficult for a photographer to say that;

the person in the picture could collaborate

and say things. When I show this kind of work

to professional photojournalists, I've been told,

'You're taking away my power.'

And my answer is, I'm giving you more power,

because you're able to have the other person

collaborate on the making of the image, not the taking

of it. Or this one:

I keep thinking where we went wrong.

We have no one to talk to now.

I still have my dreams.

I would like an elegant home, a loving husband.

And the wealth I am used to.

This is really a pioneering, pivotal project in terms

of enabling the person depicted

to have a say in their representation, which is

what we talk about in terms

of the decolonization of photography.

So with my students, I often do

what I call the interactive portrait.

So you photograph somebody and you ask them, Is this you?

And then they answer you.

The sound is not working,

the sound is not working.

If it doesn't work, I'll just say it.

So anyway, this guy says, 'No, it's not me.

I'm a smiling kind of guy. I like to smile.

This is not who I am.'

So my student thought she was being respectful of the guy,

but he doesn't like it.

Or this guy: is it you?

And he says, 'No, it's not me.

No photograph would ever get at the soul of a person.'

I had another student work with somebody who's homeless,

who's outside and shivering in the cold.

Is this you?

He said, 'Of course it's not me.

I've no place to shower.

I've no place to shave.

How can it be me?

And two weeks ago, I had a job.'

So the people speak back.

If the sound worked, you'd be able to hear their voices.

So another way of doing it: when I was a picture editor,

what I depended on was contact sheets,

the pictures taken before and after the picture.

So this is a very famous picture by René Burri of Che.

So by looking before and after,

you have a sense of the context of how the picture was taken

or made. Or Cartier-Bresson, in Seville.

So you get to look at it.

And for me, this is really helpful for the credibility

of the image, because I understood what else was going on.

So this is an old idea that we used to have.

You could do it in video, you could do it in different ways,

but instead of the photograph just being a hundredth

of a second, you could allow a greater sense of context,

borrowing from an old idea.

Also the idea of being proactive.

There was a sense that, with antiretroviral drugs for HIV,

Western governments and NGOs did not want

to give them to Africans,

because they had a very racist assumption

that they were not disciplined enough to take

the drugs on a regular basis.

So a friend of mine, who's originally from South Africa,

went back to a pilot project in South Africa for four years:

did they take the drugs?

So this woman's T-cell count was, I think, eight.

She was close to dying,

and he followed her over a four-year period,

and other people in the pilot project.

And this is the same woman four years later.

And so I asked the United Nations' AIDS division,

were these pictures helpful?

Instead of waiting for people to be dying of AIDS

and photographing very graphic images in the hospital

and winning awards for being compassionate,

wouldn't it be nice for people not to get really sick?

And her answer is here:

8 million people are on treatment today,

in part because of these images.

He didn't win any awards for this.

But 8 million people are living lives with children,

with work, with spouses.

Instead of waiting for the graphic images.

I worked with the United Nations

on the Millennium Development goals.

We did a big photo essay in the New York headquarters,

pictures by children in different countries: Uganda,

Jamaica, Morocco, and so on.

The idea was that every young person in the world

deserved healthcare, clean water, electricity, education,

maternal care and so on.

And we showed it at the UN.

I read recently in The Guardian

that there are 21 million people alive today

who would not have been alive if it wasn't

for the Millennium Development Goals from 2000 to 2015,

of which we were a very small part.

So again: 21 million people being alive,

8 million people being on treatment.

The photograph can be helpful in very good ways.

It doesn't have to be weaponized for hate;

it could be used in other ways.

And it's often not celebrated for that, for what it can do.

Or this is a more conceptual use of it.

In the US, we unfortunately execute people for crimes.

And a number of photographers have gotten the menus

of prisoners' last meals before they were killed.

The budget may be $20; you're not allowed

to smoke for your last meal.

Different ideas; I'm sure there's no alcohol.

So he'd photograph this. This one requested what you see,

the french fries and so on.

And then Amnesty used it, because he was executed in '97,

but three years later, he was presumed to be innocent.

It was a mistake. That was his last meal.

It's often through DNA testing.

Or this guy: tater tots, ribs, and two slices of pie.

That was his last meal. Executed in '97, and in 2010,

13 years later, they found he was probably innocent,

but that was his last meal.

So the idea is, you could use photography more conceptually:

you could reconstitute the meal,

and then Amnesty could use it to push

for the end of the death penalty.

It could be useful in the world in those ways.

So, can AI be helpful?

This is a big issue, obviously, in medicine.

It can be in science, it can be.

People are pointing to cures for cancer because of AI.

It's often better at detecting things in X-rays

and so on than doctors.

It's finding things that other people don't.

It's fantastic in many ways.

But in terms of what we're talking about, in kind

of a socially-oriented image, can it be helpful?

So this is what I got when I asked

for a respectful photograph.

This is from the AI.

So one of the things I'm finding is that AI is not a tool.

It's not like a hammer and a nail:

you hit the nail, it does what you tell it to do.

It has its own mind, or perspective.

It does things you're not expecting.

And so when I saw this as a respectful photograph,

I was thinking, why did it come up with that?

Maybe it's not showing me the face for a reason.

It's respecting the child's anonymity,

not putting the child on the spot.

When I was a picture editor, I assigned photographers,

I often knew what they would photograph in advance.

It was predictable.

A writer, okay, somebody at a desk

with a lot of books behind them and so on.

But the AI sometimes is doing stuff that I never expect.

It's often getting worse, I find, as it tries to please,

giving what it thinks I want;

what I want from it is what I don't know I want.

Its other idea, in a conversation:

an ethical photograph of a homeless person

by another homeless person who's a photographer.

This is what it gave me.

And I'm thinking, well, what does it know that I don't know?

And I think one of the things it knows is

that homeless people can smile and be happy.

We don't have to stereotype them over

and over again as victimized

and without autonomy. They may be homeless,

or without a home simply because they lost their job.

There could be all kinds of reasons.

They're not less than we are.

They're just as capable of joy as anybody else.

And maybe that's what the AI is telling me.

I'm not sure.

Or then I ask for a photograph of an unhappy algorithm.

I really wanna know more about algorithms.

And this is the AI's idea of an unhappy one,

because maybe algorithms have feelings.

I don't know.

I mean, certainly if Lewis Carroll was writing

about them, they'd have all kinds of feelings.

Or if they were in Harry Potter, they'd have feelings.

So, why not?

So I'm exploring things

that photography cannot show me.

I can't get a picture of an unhappy algorithm

with any camera that's ever been manufactured,

but this I can get. Or the most beautiful woman in the world:

that's what I asked for, and this is what I got.

And this made me incredibly happy

because it was not going according to the norms

of fashion magazines and so on.

It was somebody else on their cell phone

having a conversation.

And then I start to think, well, where's the beauty?

And the AI understood that the beauty is inside the person

as well as outside the person.

It's not a skin issue; it's a person issue,

a whole-person issue.

And I'm really happy for the AI thinking that way.

It can be racist, it can be misogynist.

And it often is.

What I do is often give it space to reflect as opposed

to trying to get it to give me what I want.

I don't want what I want; I want what I don't know I want.

A photograph of the day after tomorrow, by a poor man.

How's that different?

I have Karl Marx, by the way, working

on poor people and rich people.

One of the things you could do is you can get writers,

poets, dancers, anybody in their style

and see how the AI interprets it.

I have Twyla Tharp, I have John Cage, Virginia Woolf

making imagery, because I'm interested in

how AI would interpret their work in terms of imagery.

A photograph of the greatest mothers in the world.

I expected human beings

and I'm very glad it told me human beings

may not be the best mothers in the world.

I was too provincial, parochial.

It corrected me.

A photograph of the war's horror that would stop all wars.

And it was sensitive enough not to show me the image,

but to show me the impact of the image.

It didn't wanna give me nightmares, the AI,

but it wanted to tell me,

so I could imagine what it would be.

A photograph from the victim's point of view

of antisemitism.

This was in the style of Roman Vishniac,

the Polish Jewish photographer who worked in the 1930s.

I asked for it; I hadn't thought of this,

but it summarizes it.

It's like we always have to pack our suitcases

and be ready to leave, because we're welcome nowhere.

The AI came up with this about a week ago.

It took about 27 seconds,

maybe 82 seconds, something like that.

The most alarming photograph of climate change today.

I was really looking forward to, like, posters

and things to stop climate change.

It came up with a diptych.

I didn't ask for a diptych,

but it has an alarm clock,

a polar bear, melting glaciers, oil wells.

It encapsulated a lot.

And normally with a text prompt,

you wait about 20 seconds to see what it does.

So I'm looking for inspiration on social issues.

How can AI be helpful in terms of coming up with ideas

that I didn't think of?

Because I've spent much

of my career working on human rights campaigns,

social justice campaigns. The first female presidents

of different countries, countries

that never had a woman as president:

I wanted images of them

to provoke a debate about why we don't have them. France,

Saudi Arabia, and so on.

Because one thing photography cannot do

is photograph the future.

AI is really excellent at making images

of potential futures. In that way,

I think AI is much more quantum,

potential possible outcomes, and analog imaging

is much more Newtonian,

if you wanna talk in terms of physics:

cause and effect. Romantic Martians

in love, photographed in the style of Virginia Woolf:

I'm publishing a book next year called The Synthetic Eye,

and I'm hoping they choose this for the cover,

that Virginia Woolf would come up with this.

I think it's brilliant,

and somehow, I think it's a celebration of her legacy,

even though you could argue about Romantic Martians in love

photographed in the style of Virginia Woolf, on the left.

So, you get to, you know... I've worked with Adam

and Eve, I've worked with all kinds of situations.

I've worked with, you know, stuff 50 years from now,

to see what AI thinks.

Sometimes, it's incredibly stupid and conventional

and banal, which I'm not showing you,

but sometimes, it's really interesting.

You know, what would it be? I'm a Virginia Woolf fan.

So, I was interested.

A football match in Rome in the style of Heisenberg.

So anybody working as a quantum physicist,

how would they see a football match, versus the rest of us?

You know, would they see it as a series of potentials,

parallel universes? In this universe,

this team would've won; in that universe,

the other team would've won, and so on and so forth.

It opens it up differently.

And then Stephen Shore, the photographer, very well-known,

who has had a one-man show at the Museum of Modern Art.

So, a guy I know. He asked the AI

to photograph in the style

of Stephen Shore, in his style.

And I interviewed him and I said, what did you think?

He said, 'It kind of has a sort

of a deadpan blankness that I liked.'

He said he would've done it in higher resolution,

but it also raises the issue

that Brian Eno brought up in 1990, a long time ago,

that artists could keep creating

after their death, in their style.

You know, Brian Eno had the idea: I don't make music,

you know, as just an object;

I make it as a system.

And if you buy my system, you could then mix it

with John Coltrane, with Brahms.

And for all the years to come, I'll still be making music

in collaboration with people from different centuries.

He likes that idea.

He doesn't consider it a violation of copyright;

instead, it's an augmentation of possibility.

And then this is an exhibition that's right

around the corner here, which, you know,

a lawyer from Maurice Blackburn who did this is here

and will speak about it, which I find fascinating.

This is refugees held in Australia for years

without being freed and without being able to go to court

and with almost no photographers

or camera equipment available.

So they took, I think, 400 hours of testimony,

and they made synthetic images through the AI

to show the abuse that these people suffered,

because they wanted to make vivid

what these people went through.

And if you can't let cameras in, let's do it synthetically.

And you could argue that's fantastic for people

who suffered abuse invisibly.

You can do it.

What worries me is exactly the opposite.

If somebody's grandfather was a Nazi, they could say, look

how nice my grandfather was

to all those prisoners in the concentration camp.

According to his testimony,

he gave them ice cream twice a day

and I could make images proving it.

So once you put this in the world,

for the people abused, it's very effective.

Maybe we will even prefer this over photographers

in the future, because it's first person: it's me

giving you the testimony of what happened to me.

But at the same time, this can be abused in a big way

by the abusers, to show that they weren't

so bad; in fact, they were pretty good.

My great-grandfather ran a plantation in 1855,

and, you know, we just had a great time,

and all these people learned new skills

and went on to great careers.

It would be horrible. The same thing they did

in Ashkelon, in Israel.

They asked people 80, 90 years old,

what was the greatest trauma they suffered in Europe?

These are survivors of the Holocaust.

And they would come up with images of it based on testimony.

Of course, this is 70, 80 years later, 90 years later,

whatever it is. In a way it's great, it's therapeutic.

Other people see your trauma.

But again, it opens up the idea that history is malleable.

Each one of us can rewrite it our own way.

There's no kind of shared reality anymore.

And a major Israeli newspaper, Haaretz, condemned it, saying

that instead of what they argued, that this was our duty

to remember and not forget, the argument

was, you're enhancing forgetting,

because each one of us now has our own version of it.

There's no unified version.

That's something to discuss at much greater length.

So, you know, just a few more things to go;

it gets much more complicated than what I'm talking about.

So Facebook and others have just come up with systems

in the last few months where they read brainwaves.

Originally, the idea was it'd have to be an implant.

And now they're doing it with MRIs,

and they're doing it with electroencephalographs.

And what they do is they show you an image for a few seconds

and then they read your brainwaves

and they reconstruct the image that you're looking at

through your brainwaves.

You understand?

But it's straight from brainwaves.

That's the viewed image,

and the image they predict you're looking at,

and they're about 80, 90% close.

So in some ways, that's fantastic.

Let's say for somebody who's suffered a stroke, you know,

and can't articulate

or can't speak, it's really, really good.

And it's also incredibly terrifying

that somebody could read your brainwaves

and visualize what you're visualizing.

You know, could you imagine people convicted

of abuse of children, having to have their brainwaves read?

If you're thinking of a new child, you're back in prison,

or you're electrocuted, you know;

there's an Arnold Schwarzenegger movie like that, you know,

all the different possibilities.

And what happens with big companies, corporations,

they are so unregulated with this stuff,

they just do whatever they wanna do.

As long as it makes money with no idea of the public good,

is it hurting us or is it helping us?

If it makes money, it's good. That's consumer capitalism.

So I'm sure many

of the people here never heard of this.

And this already exists, and it's existed for a while:

reading brainwaves. As well as the different systems

from South Korea and elsewhere,

where people are now videotaping

and interviewing people who are old

or very ill, so that their family members or friends

can have conversations with them forever, based on AI.

'Well, Grandma, what do you think of the weather today?'

And the AI reads the weather report,

and Grandma says, 'Make sure to wear your scarf,'

as if the person is alive.

Obviously, it's very difficult to grieve

if the person never leaves you.

There's all kinds of issues there, including with actors.

Should an actor be able to... YouTube announced yesterday deals

in which you can now clone the voices

of several pop stars they've signed with,

so that you could clone their voices.

That was announced yesterday, which

to me is highly problematic.

But again, it's about, you know, money

and a lack of regulation

and a lack of us knowing about it too.

So, new standards, to finish up.

In 1994, I proposed this:

a not-a-lens icon you put on an image

or next to an image if it's heavily manipulated.

So in 1994, which is now almost 30 years ago, you'd be able

to say that, for example, this image is heavily manipulated.

You should not become anorexic

because you see a model who looks skinny.

If it has a symbol, you know,

that it's been heavily retouched, the person doesn't exist

who looks like that.

You should, you know... we now do that in a few countries

with ads:

it has to say if it's heavily retouched,

because of things like anorexia.

If we had done this 30 years ago,

we would've had fewer problems. Or with AI:

This is a concentration camp survivor

that I made who never existed.

And I wanna be clear that, to me,

if I didn't see the AI label on it, I would think he did exist.

So I would put the AI label,

or something like it, to tell you he didn't.

I would be responsible, just like if I'm selling you milk,

I would tell you: is it organic, not organic,

you know, and so on and so forth.

You have to label things, you have to be transparent

and let the viewer know what's going on.
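
As a toy illustration of the visible half of such labeling, here is a minimal sketch assuming the Pillow imaging library; a real scheme would also need signed provenance metadata, which this does not provide:

```python
# A minimal sketch, assuming Pillow is installed: stamp a plain "AI"
# marker onto a synthetic image so the viewer is told it is not a
# photograph. This only draws the visible label; it does not embed
# signed provenance metadata, which a real scheme would need.
from PIL import Image, ImageDraw

def stamp_ai_label(path_in: str, path_out: str, label: str = "AI") -> None:
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    box = (w - 90, h - 40, w - 10, h - 10)   # lower-right corner tag
    draw.rectangle(box, fill="black")
    draw.text((box[0] + 14, box[1] + 8), label, fill="white")
    img.save(path_out)

# Usage (hypothetical filenames):
# stamp_ai_label("synthetic_survivor.jpg", "synthetic_survivor_labeled.jpg")
```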

So we started a thing just a few weeks ago,

Writing with Light: World Press Photo, Magnum Photos,

National Press Photographers and others.

And the idea was, the way we solve this, in part, is to treat

the photographer as an author, just like a writer.

As an author, I'm a writer:

just because I have a pen or a typewriter

or a computer doesn't make you believe me. Just

because I'm a photographer

and I have a camera, it doesn't make you believe me,

because cameras can be manipulated all the time.

But if I have integrity as a writer, you believe me.

If I have integrity as a photographer, you believe me.

Maybe you click on my name, you see my code of ethics,

you go to my website,

you see the other projects I worked on.

That's how you believe me, through integrity, not just

because it's from a camera. That's the stand we're taking.

It's called Writing with Light.

And you're welcome to join,

and this is our statement of principles.

It must be fair and accurate and so on and so forth.

You can't say that if you change 62 pixels, it's a problem,

but 59 pixels is not a problem.

That stuff doesn't work.

You just have to believe in the human being.

We're beyond the mechanical era in which you could believe

the camera, as you know.

And it wasn't until 1882 that

the United States Supreme Court said

that photography is an art worthy of copyright.

Before that, it was assumed

that the machine made all the images

and it wasn't human beings. We're way past that.

We know the human being does it.

So these are the last two slides I'm showing.

This is what I wrote in 1984

for the New York Times Magazine, six years

before Photoshop, when I used the Scitex machine,

which cost in those days between $500,000

and a million and a half dollars.

So, you know, big companies used it, not individuals,

and you had to study it for two weeks to use it.

And so in 1984, in a magazine that was published

in 1,600,000 copies, the New York Times, this is what I wrote:

that in the not-too-distant future, with realistic-looking images,

we're gonna have to ask the photographer

if it's fiction or non-fiction.

That's 39 years ago.

So nobody can say we weren't warned.

Nobody can say this is new. You know, it's just

like climate change: it's the same thing we've known

for 50 years about climate change,

but we refuse to act as a society proactively.

We'd much rather, and I'm saying this sardonically

and cynically, and I don't really mean it,

but segments of society much prefer to hate each other

than to work together to establish a better future.

So I'm gonna end with Hannah Arendt; where else can you end?

Well, sorry, I'm going back first.

This is the 1984 piece that I wrote.

New Bag of Tricks.

It was a terrible title,

but I introduced the idea of pixels.

This was 1984, you know, 20 years

before what we're talking about, the digital age

of cell phones and Facebook and things like that.

And this is Hannah Arendt.

'The ideal subject of totalitarian rule is people

for whom the distinction between fact and fiction

and the distinction between true

and false no longer exist.'

And that's exactly where we are.

If we cannot come up with a common way

of understanding each other, we cannot resist.

It's very easy for autocrats to use that.

And when you add to that the fact of

how we're being divided into segments of populations

that are suspicious of each other, hate each other,

despise each other, it's a perfect moment

for totalitarian governments,

because we're not a cohesive public anymore, either in terms

of what we know or in collaborating.

So if we wanna solve issues like climate change, racism,

refugees, Gaza, and so on,

we really do have to work together

and we have to have a common sense of what's going on.

I would suggest that the war in Gaza

is the most mediated war in history with more images,

more texts, more video, more sound.

And we know the least about it of any war in history

since the modern era.

And we really have to correct that.

So that's my suggestion for the day,

and I hope it was helpful as opposed to depressing.

Thank you very much. [audience applauds loudly]

Audrey.

Okay, we have a little time

for questions, thoughts, comments?

You wanna say something synthetically?

You can do that as well.

Anybody? Yep.

[Prakriti] Good afternoon.

Oh, thank you.

Good afternoon, Mr. Ritchin.

Thank you so much for the discussion.

It was really inspiring. I'm Prakriti;

I'm from Polimoda, Florence.

I had a question by the middle of the presentation,

but by the end, you had already answered it.

But I think I would like to have an open discussion, or just...

[Speaker With Deep Tone] Can everybody hear?

Can everybody hear?

Hello? Yeah.

[Prakriti] Yeah. Okay.

Yeah.

So I was saying that I had a question by the middle

of the presentation and by the end,

I almost pretty much answered it on my own,

and obviously with your help. So,

there are many evil effects of AI,

the number one being erasing and taking over reality.

So in what way do people still have a chance?

Is it one's personal ethic, integrity, honesty,

and being responsible, you know, to show the reality?

Is this the way people can help themselves win?

Because like you said,

and like how we saw, we cannot differentiate

what is AI generated and what is real, real.

So I think a person's own honesty

and responsibility is the way out.

Okay, is this on still? No.

Is that?

So first of all, photography is not objective.

Photography's subjective. It's interpretive.

You know, if there were eight photographers here,

they'd all come up with different images

saying what was going on.

It's a recording of the visible, you know;

traditionally, it's a record of the visible.

So just like sound, audio recording, video recording,

it's different from synthetic media like painting,

which is not a recording of the visible.

Just to be clear, I'm not arguing photography's objective

in any way, to back up.

So the issue is authenticity in whatever we're doing.

So if I'm a photographer, I make images,

and, to answer your question, if I come to talk to you

and I'm standing in front of you,

I have a greater authenticity than if I just stick them on

Facebook or Meta or whatever it's called, right?

Because there they could be decontextualized,

recontextualized, and so on.

So we have to find ways to be authentic.

And we have to do that both on very local levels.

Like if you look at the small town in Spain recently,

where a bunch of boys, you know, 12 to 14 years old,

paid about 10 or 15 euros to undress their classmates,

the girls, with AI, and show them nude,

even though they were never photographed nude, to the point

that the girls were traumatized; they couldn't go to school.

And it was done by their classmates,

who didn't understand the repercussions of it.

You don't do that to a little girl or to an adult woman,

or to an adult man, or to anybody.

You know, of all the deepfake videos, it's said 95, 96%

are pornographic, and almost all against women.

So there's the argument, you know,

the New Yorker just ran a piece this week, five pages,

that we've always dealt with fakes,

that this is nothing new.

Well, tell that to an 11-year-old girl who won't leave home

because somebody published online nude images of her

that look like her, when she never posed for them.

And this is going on all over the place.

In a high school in upstate New York, near where I live,

some high school students had their principal, the head

of the school, giving a talk to the students that was racist,

in which he said, 'I'm gonna bring a gun tomorrow

to school and kill the Black students.'

And he never did any of it.

And when the police were called,

they said the students did not break any laws;

there's nothing they can do.

So I think if you're in the legal field, you have to come up

with the laws and regulations to deal with this.

If you're parents, you have

to be able to deal with your children.

If you're teachers, you have to tell the kids, 12 years old,

you don't make nude pictures of your classmates.

You have to work on this on every level.

And if you're in media, like The Guardian

and others, they say things like,

even when they're covering some of the synthetic images,

they say 'photo by,' and it's not a photo, it's AI.

Be careful: there is no such thing as an AI photograph;

that does not exist.

There are AI synthetic images that are photorealistic,

that look like photographs, but they don't exist.

They're not photographs.

So you're finding, in my profession

of editing and so on, many picture editors don't really know

what we're talking about today.

Professionally, a lot of people don't know,

all through society. Putting up a thing

that says 'AI-generated imagery':

if you go on the streets of Milan

and ask people what's AI-generated imagery,

how many people will know? And so on, you know,

and so forth.

So to me, the only response is from everybody at this point.

And the one thing I've learned is, do not wait

for the corporations. Except,

as Santiago will tell you now, Adobe seems

to be doing the right thing; but in general,

they do not have our interests at heart.

If you look at Silicon Valley executives, many

of them send their kids to Waldorf schools,

which have no technology.

It's banned, because they know

how destructive the technology is.

So they sell it to us, where we're the users, like heroin addicts,

but they won't use it in their own family.

They know it, and they're destroying us, and they're doing it,

but not getting arrested.

If somebody had a gun and did it, you'd arrest them.

But somebody doing this is causing mental health issues,

depression, suicides,

and they don't get any kind of penalty.

So I think it's up to all of us, in all these ways,

to try to deal with it.

And one thing: thank you to Alessia

and everybody at PhotoVogue. You're a fashion magazine;

you've taken more interest in the general sense of society

than many of the news publications do.

And also, the people here are coming from

the European Union: lawyers, forensic people, as well

as photographers, models, and so on and so forth.

Students. We're all in this together.

It's really an issue for society as a whole, not just

for any one sector of it.

And that's really the point of it.

Any other questions, thoughts, comments?

Sure.

Anything else? Yep.

[Speaker With Clear Tone] I was just wondering about the,

so you had the images with the logo in the corner

and it made me think of, I can't remember exactly

what the statistic was,

but it was something about if you see an image

or you hear a fact and then you find out

that it was fake like three weeks later,

you remember the fact or the image,

you don't remember that it was fake.

And I guess I was wondering about the difference

of captioning something

as this is a photo illustration

versus having it embedded within the image.

Do you think that is more effective in flagging it

for people, so that they don't sort of retain the image?

No, I think you're absolutely right.

There are two terms we use in English:

one is called Poisoning the Well,

and one is the Liar's Dividend.

And they're both pretty much that once you see an AI image,

you start disbelieving the actual images.

And some of the software to catch AI images:

like what I said to you before, 66%

of people think synthetic images are more real

than non-synthetic.

The AI is at 94%;

it's better than humans at figuring it out.

But what it often,

what it sometimes does, is it flags actual photographs

as AI when they're not, too.

So we start to do that.

And even if you put the AI thing on the tank man

right in the middle, I cannot get it out of my mind.

I cannot make the guy anonymous anymore.

I can't get rid of that image.

So, it takes a lot.

So when I went to Mudec, around the corner,

to see the Van Gogh exhibition,

which is extraordinary, I was looking at the signifiers,

the frames. I said, oh, it must be a painting:

the fact that there are guards,

the fact you have to stand back,

the fact of the lighting, the fact

that it's textured, it's not flat.

All these things are signifiers.

So a friend of mine, Sylvia Plachy, a photographer,

a long time ago, she would print her photographs

and publish them always with black around them,

you know, like when you put it in the enlarger, the black

that shows through, to signify this is a photograph.

So I'm, you know, I've been thinking:

do we need different kinds of frames?

Could an actual photograph, you know,

always have a green frame, like a green frame for go?

And if it's a red frame, we know it's synthetic,

and if it's a yellow frame, it's like, you know,

in the street, a traffic light: cautious,

this may be synthetic,

but we're not sure. You know, should we do things

like that, in a big way, to experiment?

And again, you know,

the exhibitions here are very experimental,

and I'd love to see, you know, more people in design

and so on coming up with ways to answer your question about

how we signify it, so we don't allow it

to seep into us in a big way.
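
As a toy illustration of that traffic-light idea, here is a minimal sketch, again assuming the Pillow library; the category names and colors are illustrative assumptions, not any standard:

```python
# A minimal sketch of the traffic-light framing idea: a green border
# for a verified photograph, red for a synthetic image, yellow when
# provenance is uncertain. Categories are assumptions, not a standard.
from PIL import Image, ImageOps

FRAME_COLORS = {"photograph": "green", "synthetic": "red", "uncertain": "yellow"}

def add_provenance_frame(path_in: str, path_out: str, status: str) -> None:
    img = Image.open(path_in).convert("RGB")
    framed = ImageOps.expand(img, border=24, fill=FRAME_COLORS[status])
    framed.save(path_out)

# Usage (hypothetical filenames):
# add_provenance_frame("tank_man.jpg", "tank_man_framed.jpg", "synthetic")
```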

And I think what I'm getting at is that the answers,

or partial solutions,

to all this are gonna come from us, and not top-down.

It has to come from people inventing ways of doing it.

You know, not just expecting the European Union

or somebody is gonna make everything correct for us,

because this is worldwide; it's all over the place.

And we really have to start movements of concern.

Like, you know, the slow food movement is an example,

which is powerful.

The organic food movement is powerful.

We have to start a movement like that:

what, organic imagery? Whatever we wanna call it.

It needs a lot of thinking and a lot of energy

and really, it just needs resistance, in a big way, urgently.

It's not... I started this in 1984,

so I've been doing this for almost 40 years.

Never has a big company come to me

and said, how can we help?

You know, I started in the New York Times;

it wasn't like a little alternative weekly.

Nobody has ever come.

When I published in the New York Times, 1.6 million copies,

I got three letters to the editor.

One was from a literary agent, but nobody ever... oh,

and Hollywood called me to see

how they could make movies using this.

That was it.

But nobody was interested in the public good.

You know, how do you do it? And I think we have to be,

because it's you guys.

It's your futures, the next generation.

It's all of us.

Nobody's gonna do it for us.

Anybody else? Back there.

[Ger] Hello? Oh,

sorry, we're over here.

[Ger] Hi, good afternoon.

I'm Ger Patrick, a Master's student at Polimoda.

I'm American, of course.

And I love the fact that you touched on history,

how AI is using historical pictures,

and they're not accurate.

And with the war on education and definitely in America

and everything, do you think there should be any like way

we should regulate that?

Because I feel like that can be used to whitewash history

definitely in the United States.

The way people are tackling it right now, in a rethink.

Do you see that needs to be happening in the future

for the protection, just the way history is taught

and that it s more accurate, the information

that s getting provided?

The question was whether this can distort history in the future, in textbooks and so on. And I think that's what we're seeing already in the United States. They're banning books all over the place, books that contradict anything, and you're seeing lots of people banned for their points of view, even in so-called democratic societies.

And I think one of the things we have to do at this point is really protect the archive. If I were an archivist dealing, for example, with issues of racism in the United States or elsewhere, I would be busy protecting it.

I would protect it every way I could: in terms of metadata, in terms of contextualizing it, in terms of guarding the paper versions of it.
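
[Editor's note: a minimal sketch of what protecting a digital archive could look like in practice: fingerprint every file so later tampering is detectable. The manifest format is an assumption for illustration, not anything proposed in the talk.]

    import hashlib, json
    from pathlib import Path

    def build_manifest(archive_dir: str) -> dict:
        """Map every file in the archive to its SHA-256 fingerprint."""
        return {str(f): hashlib.sha256(f.read_bytes()).hexdigest()
                for f in sorted(Path(archive_dir).rglob("*")) if f.is_file()}

    # Keep the manifest separate from the archive, ideally in several places.
    # json.dump(build_manifest("archive/"), open("manifest.json", "w"), indent=2)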

Because I've been able to make images like the ones I showed, which can go into the mainstream. I do it quickly, and I'm doing it early, because I want to warn people. I'm not an archivist; I can't deal with that stuff.

But I think we all have to be aware that it's not a joke. It's nice to put Kamala Harris in the March on Washington in 1963, but it's awful. It's awful that I could do it. It's terrible that I could do it.

One of the systems, for example, won't let me use the word Nazi in a prompt. So I use the word fascist, and it has no problem doing it. There are ways around it.

I've rewritten history so many times that I'm personally disgusted that I can do it. And again, if I were 14, I could do the same thing. It doesn't take all this experience. I just have to write 12 words, and there it is.

And I bring it to my teacher and say, see, so-and-so is right. You guys are wrong. It's this way.

The governor of Florida has said the most racist things around, and I've been able to show images, and I hate it. We have to stop it before it starts, in all ways. So I agree with you 100%.

We have time for another question. What? Oh, the last question?

[Speaker With Shy Tone] Yes, hello. Thank you, first of all. I wanted to ask how you see the role of traditional media in spreading, or preventing the spread of, generated images. Do you feel these will be like illustrations next to photography? Will they take their place next to each other, or should there be a separation?

It's a really good question. I mean, Vogue magazine has AI covers already. That's a labor issue as well: models don't get paid, photographers don't get paid, makeup people don't get paid, and so on and so forth.

One thing I would do if I were mainstream media (I'm answering the question in a broader way) is have a section for what's credible from social media. Gaza: here are 27 images of the week coming out of social media from Gaza that you should look at. We didn't make them, they're not professional, but we checked, and they're credible. So social media is not the outlaw against mainstream media; there's a hybrid. They work together.

One of the really difficult things with AI is how cheap it is. In other words, photographers have said to me: I was going to travel 2,000 kilometers to photograph something, and it's expensive, but I could ask AI to make the same images without leaving my home, and it's almost free to do.

So that's going to be a factor in mainstream publications using AI for certain kinds of situations.

So in my opinion, mainstream publications should get together and say, it's okay to use it, for example, to show a recipe, these ingredients together; maybe that's okay if you label it synthetic. Maybe, maybe not.

I'm not a cooking expert, but maybe there are certain ways. If I want to show climate change and what it's going to do to Milan in the year 2080, I'm using AI, because I cannot photograph the year 2080; with photography, it doesn't exist yet. But according to reputable scientists, this is what Milan will look like in the year 2080 if we don't do anything. That's a fantastic, proactive use of AI by mainstream publications.

So instead of waiting for the floods and disasters, you actually prevent them, or diminish them if you can. I think things like that would be excellent uses of AI.

But I think there should be ethical policies that are very easy for users to read. Not 60 pages: three paragraphs saying, we use AI in these conditions, and when we do, we tell you about it; otherwise, we will never use it. That's it.

And then you also find, and Santiago is speaking after me now about Adobe Photoshop, that you can have a photograph that uses generative fill, and generative fill is AI. So I could photograph you here in this room and then say, make the background the mountains of Alaska, and a few seconds later, you'll be in Alaska. So that's a photograph that's also AI.

So how do you deal with that? In my opinion, you just have to say that it's AI. There's too much AI in it; you can't call it a photograph anymore.
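
[Editor's note: a hypothetical Python sketch of the "too much AI" judgment as an explicit labeling rule. The threshold and categories are invented for illustration; the talk proposes no specific cutoff.]

    def label_image(ai_pixel_fraction: float) -> str:
        """Label an image by how much of it is AI-generated."""
        if ai_pixel_fraction == 0.0:
            return "photograph"
        if ai_pixel_fraction < 0.10:      # e.g. minor retouching
            return "photograph (AI-assisted)"
        return "AI-generated image"       # too much AI to call it a photograph

    # The Alaska-background example would fall in the last category:
    # label_image(0.6) -> "AI-generated image"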

And then the publication should say exactly this: this is what we do, and this is when we use it. And it can be useful.

Nobody's against change if it's used in responsible ways. If it can help with climate change, if it can help with X-rays, whatever it is, AI is fantastic. It can help with poetry, it can help with music, it can help in all kinds of ways. But we have to be clear about when it's useful and when it's not.

It's like when we invented the automobile: they didn't know what to call it, so they called it the horseless carriage. It didn't have a horse. And we still have horsepower in our engines, even though there's no horse anymore.

So when digital imaging was invented, we called it digital photography; it's like the horseless carriage. We didn't know what it was. And now we know what it is: we know it's not the same as photography. It's different, and it can be useful, but it's not the same.

There's something good in everything and something bad in everything, and we just have to label it, be transparent, and use it responsibly.

And then maybe next year we could come back to PhotoVogue and celebrate AI for all the amazing, wonderful things it's done in the coming year, as opposed to all the warnings and things like that.

Because a painter said the same thing about photography when it was invented. In 1840, the year after, the guy said, from now on painting is dead, because of photography. It's cheaper, it's more realistic, it's faster. It put all the representational painters out of business.

Instead, we got impressionism, cubism, minimalism, and we got Van Gogh next door. We probably wouldn't have had them without photography, because it pushed the painters to enlarge what they were doing, to have a broader idea of imagination and possibility.

And maybe AI will do that to photographers, who will become broader and more interesting than they were before. So there's a lot of good to come, but we really have to know about it and be very active and very urgent.

And once again, thanks for the invitation

and thank you very much. [audience applause loudly]

Starring: Refik Anadol