Addressing Ethical Dilemmas | PhotoVogue Festival 2023: What Makes Us Human? Image in the Age of A.I.

In this group discussion, a diverse panel of experts and thought leaders from photography, technology, law, and journalism come together to delve into the ethical dilemmas posed by artificial intelligence in image creation and dissemination. The discussion explores critical topics such as the implications of AI for image makers, the role of AI in combating digital misinformation, the legal and ethical frameworks governing AI in the EU, and the intersection of multimedia forensics and AI. The panel also discusses the importance of fact-checking, media literacy, and the challenges of navigating truth in the age of AI. As the festival's opening group conversation, it brings together experts, many of whom delivered individual presentations earlier in the day, providing a platform for the convergence of diverse perspectives and a deeper examination of these complex challenges.

Released on 11/22/2023

Transcript

[footsteps walking on stage]

Are we missing Fred?

Should we slide down?

[audience member speaking faintly]

[Audience Member] [indistinct] Get coffee.

I believe Fred is getting some coffee,

so we will do a little slow start

until he comes back and I think he is right there.

So, hello everyone.

Thank you for staying through this evening

and for this exciting panel.

I've been here all day listening to these amazing talks,

so I hope you were able to enjoy these talks.

I'm honored to be moderating this panel with all of you,

but I've spent all day listening

to you all saying really smart things.

So here's what you can expect from this conversation.

I'll do a brief opening so we're all on the same page.

And then I'd love to spend the next hour

in kind of three parts, which is the present, the future,

and what we can do next.

The goal of this panel will be for you to hear

from our panelists, not me.

And I want you all to be in conversation with one another.

We got to hear so much from each one of you,

and I think it's really the interdisciplinary nature of this topic

and kind of how all of our fields collide together

that makes this topic so fascinating, interesting,

and ethically sticky.

So just a little brief intro about me.

As Daniel said, I'm VP of Content Strategy and Growth.

My background is in journalism and news organizations.

I most recently worked at the Texas Tribune,

which is a nonprofit news organization,

and I was also at the New York Times,

helping run the social team during the Trump presidency,

if you can imagine being on Twitter every day

during those years.

So that's mostly where I'll be coming at this problem

of generative AI from.

So just very briefly, AI is inescapable right now.

You could be talking about movies, which could lead you

to the recent SAG actors' strike,

one of the longest labor disputes in Hollywood history,

and their final negotiations around AI

to protect their images and likenesses.

You could be talking about everything happening

in the Middle East and that would lead you to the torrent

of misinformation and the impact of AI.

You could even be talking about the Beatles,

which could lead you to their first, quote unquote, new song

since 1995, called Now and Then,

created with the help of AI.

So from all the talks today,

and Daniele, I agree with you, I had to change the script

so many times after hearing all of your talks.

The main threads I heard were around trust and truth.

What and whom can you trust

and how can we know something to be true?

I think it's too basic to say that this is all good

or all bad.

Instead, I think there's always an equal and opposite

reaction on both sides.

This moment is stretching our imagination in both directions

about the possibility and also all the risks.

So I hope our panel will help you continue to stretch

that imagination and continue to complicate this issue

and introduce more nuance

and present all the ethical questions that follow.

So with that, because I also like an interactive panel,

I'd love to take a quick poll of everyone,

including our panelists, of where you're at with AI.

So are you more optimistic or more pessimistic

about the future?

So raise your hand if you're more optimistic.

Okay, we got an excited kind of wave back there too.

Raise your hand if you're more pessimistic.

Okay, it's about half and half, as expected.

So back to kind of our outline, if you will.

So let s start with the present.

As Mel reminded us earlier, AI has been around for a while.

Whenever you use your phone or your camera,

that's machine learning.

So today for our purposes,

let s mostly focus on generative AI

and all the latest developments there.

Fred, the Hannah Arendt quote

that you ended your talk with earlier

has been imprinted in my mind throughout the day.

Those are the stakes and what's at risk.

So I wanna read it again

just so we all kind of jump off from there.

So the ideal subject of totalitarian rule is people

for whom the distinction between fact and fiction

and the distinction between the true and the false

no longer exists.

So Fred, you recently wrote a piece in Vanity Fair

titled Regarding the Pain of Others in Israel and Gaza:

How Do We Trust What We See?

The mere existence of the threat of generative AI

led people to not believe what they were seeing,

whether it was true or false.

So it didn't even matter, to an extent.

Can you help us set the scene

for where we are, and then I'd love for everyone

to jump in with where you see the present moment

from your perspective.

And Zahra, I'd love to hear from you,

since we haven't heard from you all day.

So let s start there.

I don't think the issue is AI, really.

I think the issue is authenticity, credibility, and belief.

And in my opinion, it's wrong to blame a technology.

You know, it's like, what's wrong with the hammer?

You could kill somebody.

Yeah, it's not the hammer's fault.

We use it in different ways.

So I think for the most part,

we've been pretty much oblivious to issues of manipulation.

You know, you get more likes if you're more hateful,

your company makes more money.

So why would you be against hate?

It's a profit motive, and you're a platform,

you're not a publisher,

so you could do almost anything you want to do,

you know, whereas if you're a publisher,

you could get sued for libel for doing it.

So, you know, when I was picture editor

of the New York Times Magazine,

I knew we could get sued, and I would double-check

and triple-check and quadruple-check what I needed to do.

And it wasn't, you know, having to see the actual

negative of the photograph,

but it was the integrity of the photographer.

Did I work with the person before?

Were they well known in the industry?

Could they tell me more about it and contextualize it?

And so on, and I think in three and a half years,

I caught two fakes, and they were both by people

I didn't know.

Everybody else did what they did.

And if somebody darkened the sky in a picture

or lightened it, it was fine.

You know, it's just like with a writer:

if I'm quoting you, I could do dot, dot, dot

and leave out certain words in the middle

if I have integrity.

But if I leave out the wrong words in the middle,

I have no integrity, you know,

the ones that are essential to your argument

or what you're saying.

So I think we're really wrong to blame a technology.

I think we just have to take credit for good and bad

for decades of manipulation of media.

And for allowing this kind of ecosystem to develop,

because then it's impossible for us to resist.

It's, you know, what I started my talk with

that I think is the most relevant,

which is that the Earth looked vulnerable seen from outer space.

We got together with Earth Day environmental movements

and we did something.

We didn't question the photograph, you know,

we understood the Earth needed help.

That was the issue. It wasn't about the image.

I almost always start my talks by saying,

I don't care about photography.

I care about the world. We shouldn't focus on the image.

We should focus on the world and how we could be helpful.

And certainly if we don't believe each other, you know,

if we have no central public square of shared facts,

there is no way we're gonna work together

to do anything important, whether it's through image

or text, or audio, or skin color, whatever it would be.

So I think, you know, I think we're pretty much

all in agreement on that.

And what has to come out

is how do we do that constructively.

So it's not just, you know, we're the good guys

and they're the bad guys, but we're all the bad guys

and we're all the good guys.

That's it.

Zahra, I'd love to hear from you.

I actually, I agree with Fred on that.

I think the problem is not the problem of technology per se.

I work at Al Jazeera,

I head the storytelling and innovation studio at Al Jazeera,

and you know, when the recent war

between Israel and Palestine started,

one thing that we were really worried about

is that we'd have a deluge of AI-generated content

that we'd actually have to verify and make sense of.

But actually that didn't happen.

We didn't see that much AI-generated content.

But we are still having to do verification, fact-checking,

authenticity, because of the way the storytelling and narrative

is happening: we're seeing the same tropes,

where you're seeing governments, organizations,

people lying, old footage

being recycled as new footage, or videos

from previous wars being used

for this current violence.

And so I don't think the issue

is really generative content, AI-generated content.

It is more so the authenticity, the verification,

and a larger issue of trust in media.

Because even if we do figure out the verification

and authenticity, if we've lost trust in media,

then whom are we believing to do that?

So I think that seems to be the larger issue,

at least from the news perspective,

and the one that we're currently working towards.

Daniele, I know you wanna jump in

on the news aspect of it.

Yeah, in fact, while they were talking,

this is a piece from the New York Times, from yesterday.

And the headline says:

Harsh visuals of war leave newsrooms facing tough choices;

digital disinformation and restrictions on photojournalists

have complicated decision-making about the visual chronicle

of the Israel-Hamas war.

And this is exactly what it is.

It's what I feel every day when I have to deal with this story.

You know, we don't have enough sources to verify

what is going on there exactly. At the same time,

nobody knows exactly what is going on

at the nuclear power plant in Zaporizhzhia in Ukraine.

It is exactly the same thing.

We have no one there.

So the two most discussed news stories since the start of the war

were the slaughter of the Israeli children,

and who hit the hospital.

Even now, there is no verified version of either story.

And this is what we face.

It's not about how both parties can make it up

and use their influence to try to deceive,

which is the definition of fake news I used before.

We have to use the human factor:

how many people do you know that can confirm this?

That is how we deal with that.

Yeah, that makes sense.

My next question really is to everyone,

and I think an important distinction to make

is between entertainment and journalism.

You know, a lot of what we see, like I mentioned,

the Beatles song, that's squarely in entertainment.

I think a lot of the ethical questions come in

when we expect something to be true and we're not sure.

So in journalism, and Florian,

I was so struck by the examples and the how-might-wes

at the end of yours, because I think those are all

such amazing examples of imagining a future.

But what is the role of generative AI in journalism?

Is there room for how we think about truth and facts

in this new ecosystem?

How so and why?

And are there examples you've seen

that have made you question the ethics of that content?

That's a hard one. [chuckles]

Please.

Yeah. Boy, there's so much to talk about. So little time.

I mean, the media world... As a lifelong journalist,

a second-generation journalist, I grew up in journalism.

You know, I risked my life to bring truth to people

from around the world, and then ran

one of the world's largest photo departments,

telling the world's story every day.

In a way, as Fred alluded to, that involved, you know,

verifying the sources of the imagery, everything;

there was a high degree of accuracy,

very few cases of outright manipulation. Great.

Now we re in a world where everybody is a publisher

at their own scale through their own social media.

Technology facilitates, you know, rapid manipulation

or creation of content.

You're seeing a shift in skill sets.

A term I've been hearing a little bit in the last few months

has been the term synthographer.

In other words, you're not a photographer,

you're a synthographer if you use generative AI.

And it's really fascinating for me

because I've often thought about images.

When you break them down and describe them in words,

you end up with paragraphs.

Now with generative AI, you can enter paragraphs

into generative AI engines to generate images.

So it's like the thing has come full circle,

and then it's a question of the subjectivity

of what words you choose to enter into generative AI:

you'll get, you know, different results

according to which words you choose.

In the same way that photography has always been subjective,

where do I point the lens?

What am I interested in, which is influenced

by your background and your experience

and all of those kinds of things.

So the issue around objectivity, if it even exists,

which I'm not convinced it does in any way,

has been around for a very long time.

And then I think you add to that the confusion of everybody

being a publisher, and you add the paucity, or the lack,

of media literacy in general terms around the world.

And you add to that the desire and the ability

to control the narrative,

which has resulted, in the last 30 years or so,

in the profession of journalism becoming more dangerous

as more actors seek to control the narrative,

because they're more aware of what's being produced,

because of the information ecosystem

and the globalization of journalism.

You're left with a lot of very, very complicated,

nuanced situations.

And when it comes to how generative AI plays

in all of this space, in my mind, you know,

in addition to the transparency and authenticity work

that I'm involved in right now,

the other pillars I think of are literacy, media literacy

in particular, and legislation, defined as a duty

of the people who govern us to keep us safe.

Those three things together, I think, can be powerful.

But by the same token,

the media has become increasingly politicized, as we've seen.

And you know, the fairness doctrine that once upon a time

existed in the US broadcasting world is a relic of the past.

And cable news now seeks to inflame

and seeks to polarize people.

And people enjoy hearing news that's in line

with their worldview, which further complicates things.

But I don't think there's a turnkey or light-switch solution

to this in the sense of we can address all of this

with just one thing.

But I think the combination of things together

gives us a chance.

But there are always gonna be people who only want to hear

news that aligns with their views.

There are always gonna be people

who are going to dismiss certain pieces of information

because it doesn't align with their views.

I think that's inevitable.

But I have to be optimistic looking at the future

because pessimism is too horrible to contemplate.

I'm probably the only person here

with no... is my mic on?

People can hear me? Yeah.

The only person here with no formal journalism background,

I wanna look at this question from an arts perspective,

I can still count on one hand

the number of things I've seen made by a generative tool,

both written and images, that made me feel anything.

This is a big topic, but I think the cart is a little bit

ahead of the horse in terms of what it is that we're seeing

that actually has powerful, tangible human impact,

in terms of quality of render, the uncanny valley, right?

Looking at things and going, ah,

something's off there, right?

And I'm not talking about, you know, the ability to correct

whether a shirt is folded over or not, right?

We've been dealing with Photoshop for some time.

But really thinking about imagining something

that did not happen and being moved by that...

I think as a tool for journalism,

right, this distinction between synthographer

and photographer, and as you pointed to earlier, Fred,

what do we know came out of a real tool

and what was generated in some way as synthetic,

is table stakes; we have to get that right.

That has to be led by regulation.

We need major players, my company included,

coming forward and offering proactive solutions for this

that are then also checked.

But I think we also have to think about like,

what is it that this tool is designed to do that the tools

before it have not been able to do?

Because surely there were painters who sat there

at the advent of the camera and said,

shit, they can actually show you the landscape.

I'm just interpreting it.

Painters make a lot more money selling landscape paintings

today than landscape photographers do selling

landscape photographs, right?

There is something innate to the medium that speaks to us

that, candidly, I haven't really seen yet

from this medium, which makes me, I guess, optimistic.

It's a weird way of saying that.

One project that I would point to that I think

deals with this in a really interesting head-on way

that is political, is something called Deep Reckonings,

which was a deepfake artistic video project.

There were, at the time that I saw it, at least three people;

I know two of them for sure were Brett Kavanaugh,

the US Supreme Court Justice, and Alex Jones.

I don't even know what to say about him.

And this person deepfaked them, giving apologies

that they never gave.

And the genius of it for me was,

I'm watching something that I know is fake,

because I know that those men have not given those speeches,

but there was still something uncanny about watching

their mouths move in front of an audience

and say those things, because it created a more tangible path

to imagining, A, that that might be possible.

And I wonder, I'm sure they haven't seen it,

but what if they did watch it, right?

Which would be a deeply weird thing, to see yourself

say something that you never said,

of which we will only see more.

What could that unlock for you personally, right?

So there's a really high baseline standard

that has to be maintained for journalism,

which is of course different than in the arts.

But I also think it's important to acknowledge

that we're not, in my opinion, we are not yet at a place

where this medium has figured out

what it's really there to do

that the things

that have come before it cannot achieve more effectively.

I don't know, I actually disagree with you

on that a little bit.

I also think a lot of the conversations we have,

we have very much from the perspective

of what we are experiencing in the Western world, right?

It's very different when you are looking

at the developing world.

News there, the way it spreads, the way people consume it,

is so different.

One of the primary ways that we get information

in the developing world is through WhatsApp.

And I think the content that you are able to produce

using generative AI tools is already good enough

for you to be able to spread it,

to share it on messaging tools.

And so I'm not oblivious to the fact

that there are a lot of challenges when it comes

to generative AI and what it means for truth and facts

and, you know, discourse as a whole.

But as you said, like, you've gotta be optimistic about it.

There's nothing we can do. This is here to stay.

We've just gotta figure out: how do we engage

with those tools?

How do we increase media literacy?

How do we make sure that people are engaging

with all of those tools equitably?

So I think those are more the things

that I'm thinking about,

or wanting to have conversations about,

rather than: is this good or bad?

This is here to stay.

We've just gotta figure out

how it is going to serve us,

rather than us becoming slaves to the technology.

And I would just add to that,

I've always wanted to see a history of photography

from the point of view of the subject, not the photographer.

In other words, if there's an issue going on,

was it helpful to the people or not?

Or did it make it worse or not?

Did it inflame it or did it help it?

So, you know,

when I've done a lot of this human rights work,

we re always looking for impact.

You know, it's like if a doctor operates,

they don't say, oh, I had a nice scalpel.

They ask, did the patient live? What happened?

But somehow we dissociate ourselves from impact.

If the right person is taking the picture

because they come from that community, we're happy,

even if it has zero impact. And if the right person

from that community takes a picture

and a hundred people have healthier lives, that's fantastic

on top of it. Both are important.

They're both important, super important.

So if you ask that same question,

can AI help people who are suffering, or hurt them?

I would say right now in Gaza, for the most part

it's hurting, because even if there are very few images,

the specter of the images

is that you can get all these ridiculous statements

from governments and universities

saying there's no free speech,

one side is right, not the other,

but there's no evidence going back and forth.

There's like a specter, this kind of hole, this void

of knowledge, because nobody decides this image

is correct, this one is wrong.

So since we don't know anything,

we'll just go on our tribal instinct.

And if we're pro this side, shut up the other side.

If we're pro that side, shut up the other side.

To me it's a complete disaster.

So from the point of view of the people of Gaza,

even the specter of artificial intelligence

is, in my opinion, creating untold numbers of more deaths

and injuries.

So from their point of view,

and they're not sitting with us on the stage,

I think they would have strong opinions,

because often in projects I've worked on,

you go out to the community and you ask, did this help?

You know, we're sorry, we thought we invaded your privacy,

we did something wrong.

And they said, no, you bore witness.

Without you, nothing good would've happened for us.

And so you have to check the impact on the communities.

And we're not representing, as Zahra just said,

the great majority of the world right here.

And I think the great majority of the world

would like the most authentic reporting possible

of what's happening to them.

So the rest of the world could then decide

to do the right thing or not,

but at least open up the door to possibly

doing the right thing, which is already a huge advance

and sometimes works.

So I would take it from the other side.

That comment reminds me of the Washington Post,

which just published this big feature over the weekend

around the effects of guns.

And they did this really big storytelling package

and showed very graphic images from several mass shootings

around the US.

And I thought about that package and the images it showed,

and there are,

I think, three ways to look at the journalistic purpose

in publishing those images.

One being, do the ends justify the means?

Like should we show this as the impact

of not having strict enough gun control?

Is it an issue of harm reduction where we want to try

to reduce harm for current victims and future victims?

Is it a question around the consent of those victims' families

who may or may not want to see those photos?

So I think, you know, that's not about generative AI,

but it touches upon some of these issues

that, Fred, you brought up.

But I wanna go back to what you said, Florian, about

generative AI that makes you feel something.

And I think the kind of starkest example of that

for me was the example you showed earlier about

the human rights lawyers working with the victims

of the Australia Detention Center

and showing the experiences of those victims.

It made you feel something in a way that I wasn't expecting,

knowing that those were completely unreal.

So I'd love to get everyone's responses to that.

Like, did that make you feel something?

And if you didn't see it earlier,

Fred, maybe you can recap it for us.

You want me to?

You know, two of the lawyers are here

who worked on that project.

So I think they had 400 hours of testimony from people,

refugees who were being abused for years,

and the judges wouldn't hear the case to release them.

The lawyers worked pro bono, and photographers weren't allowed.

And they made a book, which is on display,

images on display, and a video online.

And they're all synthetic images, made by working for weeks

with the individuals who were abused,

for them to approve each image as being relevant

to what happened to them.

So when I put on the video online, after about a minute,

I could not watch anymore.

I was devastated.

And I've seen, as a picture editor and curator,

I've seen untold numbers of images, and that's my job,

but I could not watch it; it was horrific to see it.

And I think in that sense, it's really important

and powerful, because, you know, it's coming from a person.

I mean, to Florian's point, a lot of this stuff,

it seems to come from a machine; there's no authorship.

And the idea that it was a person giving testimony,

and they're the ones who often worked with

not only a technologist but a psychologist,

because it raises trauma to then represent what happened

to them, you know, rape and abuse, just horrific things,

you know, somebody committing suicide

in front of their child, the different things that happened:

it had enormous weight for me.

So I think it's possible in certain circumstances.

But I just wanna go back,

like, to the history of photography: 1839, it started.

And a lot of the work was considered very mechanical,

machine-based, for a long time.

And if we had this panel in 1842,

you could ask the same question, and we'd have the same answers:

right now, it's not creative.

It takes time. You know,

I think Florian said it very well:

it has to become a medium itself.

It can't just copy previous media, you know,

and pretend to be photography, pretend to be video.

Like I interviewed Alvy Ray Smith in 1984

who then went to Lucasfilm.

He said, the point is not to try to, you know,

simulate something that exists, but to try to create worlds

that we don't even know about.

And I think that's what AI can bring us to,

in really important ways.

I've done hundreds of images of Martians with AI,

you know, in the style of all kinds of people.

I'm really a huge fan of Martians. I love Martians.

If I traveled there tomorrow, I'd feel right at home.

I know who to talk to, I know what they're like.

It's great, it's a fantasy world.

But I'm really happy that Mars is not just this sort of,

you know, hostile cold world,

but could be just as interesting as we are.

So that we're not looking down at the other,

but we're looking at the other with respect,

as having much to teach us, and so on.

In this case, obviously a fantasy, but in other cases,

the reality of it too.

So I think if you come out of it with a feeling of respect

and complexity, and not thinking you're the first world

and you're dictating to the rest of the world,

but you're learning from the rest of the world.

Like, a lot of my students liked doing AI

last year in the class.

The number one guy is from South America, you know,

working from his point of view.

And we learn from him.

I'm so happy it's not just, you know, north-south.

It also is south-north, and there has to be much more of that too.

So that's a long answer to a short question.

I'll add to that quickly.

I love the point you made about going somewhere

the camera cannot. There are lots of risks

to that as well, but I think if we apply that

to the carceral system and think about it:

there was an amazing exhibition called Marking Time

that looked at artworks by incarcerated artists.

And immediately you realize what the limitations

of the materiality and the space are by looking at the work,

because they had to create with the things

that they were allowed to have in their cells, right?

And so here's a world where somebody can create using just words,

ostensibly. And the other thing I wanted to hone in on

was collaboration.

I think there's a lot of conversation about this

as sort of someone sitting at a computer by themselves,

typing in prompts of something they've never seen before

or experienced in real life,

and then releasing their sort of authoritative view

on what this could be, rather than thinking about:

I was challenged by a friend of mine who's a historian

on a project that I was interested in doing

about re-imagining moments that we never got to see

from leaders of Black activist movements who were killed.

And his challenge to me was: there could be a lot of power

in what you're talking about.

What's also true is that real human descendants of those people

are alive today, still reckoning with the fact

that their great-great-grandparent or great-grandparent or parent was murdered,

and where do they enter into that equation?

And it was a dimension of it that I hadn't considered

at that point, because I was like, oh,

what could the provocation of this do for justice?

But we have to look at the other side of that too, right?

It's like there are real consequences to showing people

things that didn't happen.

And we have a responsibility to what that imagined

alternative is as well.

Does that feel like enough of the present?

I think we can move to what our future looks like.

As mentioned many times earlier today,

this is one of those wicked problems that no one person

or no one industry or profession can solve alone.

I think this is something we collectively must do together.

But before we think about solutions, I wanna imagine

what the world will look like in, like, three to five years.

What gets worse, what gets better?

And Guido, I see you on the screen,

so we haven't forgotten you.

I'd love for you to talk a little bit about regulation.

And Santiago, you're doing so much

on the Content Authenticity Initiative, so I'd love for you

to talk a little bit about that too.

Oh no, we can't hear Guido.

Can you hear me now?

Yes,

Yes. Yeah, thank you.

So in terms of future and regulation,

I think probably two of the aspects that really need

more thinking and more regulation

are, one, the convergence between AI and IoT.

Because a lot of the discussions and policy conversations

and also regulations, proposed regulations, et cetera,

really revolve around AI and its intangible

immaterial dimension.

Whereas I think a lot of complicated things happen

when AI becomes embedded in physical objects.

And I think that's definitely one of the two aspects

that in the future is going to become more

and more important and pressing to regulate.

And the other one is the convergence between AI

and neurotechnologies, and all the complexities that arise,

you know, when an AI is directly attached

to our brain. Really, how do you regulate that?

How do you, I dunno, delete the data in that context,

data that is written in your brain at that point?

So a lot of exciting things in terms of potential

to do things better, but also very complex work ahead.

[Millie] Santiago. The future.

Something I've been thinking about is,

are we naive to try to set standards now?

Are things just going to be moving too quickly

for the next year, or the year after that?

I think the standards have to be constantly updated.

I don't think you can ever say the work is done.

With regards to the future,

the future is something that excites me tremendously.

I have a friend whose job title is Futurist,

and she goes around the world briefing companies

and individuals about what the future could be like.

And I think the most important component

of any future is that of imagination.

In other words, we don't know what the future is gonna be,

but we can imagine the future and in some cases

our imagination can be fulfilled and we can actually

shape the future.

And so I've spent a lot of time over the last, I don't know,

five years or so thinking about the future of journalism

and the future of storytelling, having, you know,

been in the journalism business for almost 40 years.

I've obviously seen a lot of change, from a time

when my job consisted of transmitting photographs

down telephone lines and each picture took 15 minutes

to reach its destination to a world where we can send video

in real time from devices in our hands.

And so when I think about the future of storytelling,

especially as it relates to the people

who are consuming stories, young people:

I watch my own children, young adults now,

and how they consume news. I'll confess to having spent

an inordinate amount of time on TikTok over the last year

or so, to the point where it was really disrupting my sleep

and causing anxiety; it was just too much of one medium.

But what I came to conclude, at least in a limited way,

was that attention spans are shorter.

The density of information needs to be higher

in order to compress information so that it can penetrate,

or interact with, you know, empirically

demonstrable shorter attention spans.

And so what I start to think about

is the combination of media.

We've traditionally separated media, right?

The photo, the text, the video, the audio, the map,

the data visualization.

And it seems that those media types

necessarily need to be compressed.

And in fact, when it's done right,

which sadly, at least in my opinion,

99% of what you see on TikTok is not...

The one or so percent of stories

that are told effectively on that medium,

which is super compressed and very effective in the sense

that it reaches a younger audience, are those stories

that combine all of those different media types

in a compressed way.

So the data visualization, the audio, the video,

the still image, et cetera, the question then becomes,

given that you're compressing all of those media types

in a very short amount of time, how do you determine

the provenance or the origins of so many media types

in one space?

And curiously enough, I think over time

artificial intelligence might be able to help us with that

in terms of being able to identify

and communicate media types in a vernacular

so that you're not limited to very technical data.

One of the things we've been doing

on the Content Authenticity Initiative

is a lot of user research in terms of how people

interact with content credentials.

These digital nutrition labels

that I was speaking about earlier

where you can click on something adjacent to an image

and it will give you some information about its provenance.

And the focus groups and the research groups

that we're talking with get very confused very quickly,

and they're not sure what they're supposed to do

with this information.

And so I think over time, thinking about the future,

the ability to interpret accurately that information

into a language that resonates with the news consumer,

I think is gonna be important.

So what do I see in the future? Even faster,

what is sometimes referred to as content velocity;

even higher, you know, volume of imagery

and other information types reaching us;

shorter attention spans,

wearable devices that get information into our organisms,

which sounds crazy, but people have been using pacemakers

connected to the internet for years.

It's not out of the question to think of implants

being a reality.

We re already seeing, as Fred alluded to,

the ability of some technologies to capture brainwaves.

I believe Apple has a patent on an earbud device

that will capture rudimentary brainwaves,

and that will allow technology companies

to gauge people's reactions to content.

Terrifying on one side, fascinating on the other.

So I could talk all night about this stuff,

but I'd better be quiet.

If I think about the future of journalism,

or the future in general, we have to look at it with perspective.

So, starting from 10 years ago:

we were the first television channel to follow the G20

with just a journalist with a smartphone.

And all the American news outlets made pieces about the fact

that there was this Italian journalist

following the Prime Minister at the G20

just with his smartphone.

And the reality was that we didn't have enough money.

It was a tough fiscal year.

And television costs a lot;

good television costs a hell of a lot of money.

So the future is also connected to how technology will help

all the news outlets to become more efficient

while maintaining a standard of quality

in terms of news output, video output, audio output,

and so on. It must not affect the basics

of how you tell a story.

And this is crucial: if you use all the futurist features

we can imagine, I dunno,

I guess it will be like that, but we'll be ready,

and the human will be at the center of the process.

Otherwise... there's no such thing as a robot

writing down a story.

I don't believe it.

I really, really don't believe it.

I'm not worried about that.

I have to say, I completely...

I really have a very different idea on this.

Again, if you take it from the point of view of the girl

burning from napalm in 1972:

is it okay to have people flip through on Instagram,

so that the fact of her agony

you just get rid of in a second?

Is it okay for people to see it even faster?

Is it okay to say to the people in Gaza who are dying,

let's flip through quickly, you know, we're going to dinner?

I don't think there's any respect there for the people.

It may be more efficient for us,

but I think we have to stop thinking of ourselves as consumers,

thinking that we have to do things that are good for consumers,

and start to think and say we do things

that are good for citizens.

And to be a citizen,

you have to understand Frederick Douglass,

you have to understand different groups.

I cannot understand, you know,

let's go back to the African American experience

in four seconds; that's disrespectful.

You know, it's like taking a vitamin pill and pretending

you're eating lunch; it's not it.

So I think, again, it can't be driven by the economy

of profit.

It has to be driven by the public good.

It's very nice to say all our opinions count,

but that's a consumer lure.

They don't count; nobody cares about our opinions

when they make government policy.

The US is now anti-abortion.

The great majority believe in abortion.

The US government is anti-ceasefire in Gaza.

The great majority of Americans want a ceasefire.

They're not listening to our individual opinions.

It's a consumer seduction.

So I think we have to leave a place for the common good,

for the common truths, and we also have our opinions.

So the fact is, six people are sitting on stage.

You could think it's really interesting or really boring;

it's up to you, whatever you want to think.

But the common truth is there are six of us sitting here,

and then you make up your mind about whatever we're saying.

But you can't make reality into what you want it to be.

So, there's a zebra on one side, an elephant on the other,

and a 47-foot Martian sitting over there?

That's not what happened.

If you wanna do it privately, there's no problem.

Or if you wanna do it as an artist making a commentary

on how ridiculous these panel discussions are,

you could do it, there's no problem.

But we have to have some sense that the common good,

the common truths have a place while we have a right

to our own opinion, our own subjectivity

and our own representation as forcefully as we can.

And going back to Frederick Douglass,

who is brilliant about image, he understood

that you can actually project into the world certain ideas

in very complex and interesting ways.

And that's what we have to do more of,

but not catering to consumerism.

I worked at the New York Times;

it was all the news that's fit to print.

That's incredibly arrogant,

what we were saying. We left out a lot of the world;

it was really awful.

But then again, you know, there was a hierarchy

of professionalism.

Like, I don't go to my doctor and say,

well, on TikTok it said do this, and this said that, you know,

forget this. We have to respect, to a certain extent,

you know, journalists, people who know more than we do,

who come from the area, and give them their space.

I think these are important things.

Don't blame it on AI, you know; it's a whole system

around it. You know, that's my take on the take, sorry.

Fred, I think what you're describing is a broader

societal issue, an issue that's based on a market economy

and money and so-called fiduciary responsibility

to shareholders who own companies, and all of those things.

So the problem identification, you know,

you do it very eloquently. And what I'm sort of asking,

when I hear you talk about the problem identification,

is about the solution identification,

and that's where it gets more complicated quickly.

No, but I would say, if you guys, you know,

any of you, corporate, whatever, if you had like a button

for slow journalism... Let me buy a smartphone or whatever,

and you're giving me the option for slow journalism:

I want the in-depth report on the G20,

I don't want the four-second one.

And you start encouraging that in advertising,

just the way you do so eloquently for people of color.

And you say: for people who want in-depthness,

people who want complexity, I'm selling you a smartphone

that's gonna give you complexity in all kinds of ways.

Or I'm gonna do content authenticity

that not only is gonna trace the camera back,

but give you 12 different perspectives on context, you know,

by people who were there as witnesses, and so on.

I'm 100% in, and I think in a corporate plan

you might even be able to profit from that,

you know, certainly in terms of prestige in society.

And we'd look at, you know, the different corporations

and say, wow, these guys are letting people

who have two minutes to get the news do it in two minutes,

and people who have two hours do it in two hours,

and so on down the line.

I think we can't pander just to speed.

I think we have to accept complexity and push it.

So there's a project in England that's been going on for,

I don't know, four or five years,

called the Tortoise Project, talking about slow media.

It's founded by a former BBC executive.

It takes the form of a print,

sort of monthly thematic pamphlet or publication

that arrives by mail.

They engage with their audience.

They have sort of think-ins around the UK,

where they ask people to come, physically

and online as well, and discuss issues.

And I think it's along the lines

of what you're talking about, but it seems to me

that that appeals to a certain generation,

whereas there's a much more digital generation

that is never going to go back to broadcast television,

if they were ever there, is never going to go to print media,

if they were ever there.

And whether it's foisted upon them,

or whether it's their choice, or some combination of it,

they prefer to get their information through their devices,

and prefer to get it in a way that responds

to whatever their attention span demands.

So when I see that phenomenon,

I ask, is the cat out of the bag?

Or, you know, do you want to go back to an era

where there were horses pulling your car

as opposed to having an engine?

Or do you wanna go back to a culture

where the wheel hadn't been invented?

I mean, human progression is a natural thing.

Yeah, but if I'm Vogue Magazine

and I'm doing a great fashion spread on Lagos, for example,

I would love to be able to click and find out about the culture

of Lagos. What's behind the fashion?

Who are the people? What's the economy?

What music do they listen to? Who are they?

And then as a young person,

I might wanna engage and learn a lot if I have the option.

So there's lots of ways that Hollywood, for example,

sells their movies. People go to Oppenheimer, huge thing.

But that's about, you know, atomic bombs and stuff;

who would do that?

But there are ways to engage, and I think, you know,

Florian, you showed it very well in your advertising.

I think it's quite brilliant, what you guys do.

But I'd love to see that kind of advertising

getting people to look behind the scenes in deeper ways.

And maybe in Lagos, you know,

I just learned about the music scene,

that's enough for that day,

but I could keep pushing and engaging and moving it.

I'm not advocating broadcast television

or horse-drawn carriages.

You know, I actually developed the first multimedia version

of the New York Times, in '94, '95.

They hired me for a year to do it.

And I was trying to get it so young people, you know,

if it's a review of music, instead of just reading it,

you could hear the music, because I understood

that would be important.

And then you'd read it, you'd get different perspectives,

or you'd see, you know, how different cultures

view the same book or the same, you know,

what Zahra pointed out before.

But there are ways of engaging and doing it.

We're just not using the resources to do that.

We're using the resources to sell certain kinds of packages,

but we don't... it's not the common good

that we're looking at, just to clarify.

And just to respond about the G20:

the G20 thing was that, by just sending a journalist

without the cameramen, following the prime minister

without the cameramen, we spared enough money

to do a really deeper coverage of the story,

because we spared enough money to have a couple of pundits

not embedded, unlike all the reporters

who follow the prime minister during the G20,

just a herd following along.

And if you go to COP28,

I will go in 15 days from now,

it's exactly the same thing.

It's very difficult to find your space to cover a story.

And then, just to say that at the end of the day,

you have to adapt to the technology you have at your time.

But the human, again, is at the center, and if you wanna go

with slower journalism, you can,

if you know how to use your resources.

But your resources are not unlimited,

Because we are an industry;

we are not just for good, you know, just pro bono.

So we have to deal with that, and we have to find our way

to be efficient, just to maintain our standards.

Otherwise, if you throw your money out the window

just to have broadcast television with a huge production,

sometimes you feel handcuffed, and you have nothing left to say

at the end of the hour-and-a-half news program you have.

[Millie] You wanna add?

No... I guess, when you asked, how do you feel

about the future?

I think it's, you know, it's a really tough time

to be answering that question.

I think if you'd asked me that two months ago,

I'd have a very different answer.

But I also think that when we talk about technology,

we have to talk about it in the context of the politics

of our time because everything is political

and especially technology because it has real world impact

on common everyday people.

So when I think about AI tools in and of themselves,

I can't think of them without thinking about

what's happening in the world.

We're living at a time where there is growing fascism,

you know, where we are seeing absolute tech monopolies

that do not have the public interest at their center.

You know, we're seeing so many inequalities,

and when you're seeing these really, you know,

drastic changes happening around the world,

AI tools, and even AI content tools,

I think can do a lot of harm because they can be used

to further marginalize and increase inequalities

around the world.

So I don't feel very hopeful,

especially given that we're in the middle of a war

and things only seem to be getting worse,

and will be getting worse.

So I don't think that I'm able to separate just the tech

from what we're experiencing at this moment.

I'll be quick: a different angle on the question.

A lot of my favorite photographers

did not start as photographers.

And I m always interested in people who bring

a very different discipline to a new medium.

We're too early in this to have really seen

what happens when that happens with generative AI.

And the thing that comes to mind for me:

I've seen extraordinary images produced

by young Ghanaian kids who are renting smartphones

by the hour to be able to have access to the tool

to produce an image.

And if we believe that it's not the tools,

it's the artist, right?

It doesn't matter to your sense of composition

whether you're using a $100 Nokia phone

or the most expensive Canon, right?

The eye should be able to shine through.

What happens when the technology reaches a point

where it is ubiquitous enough and accessible enough

that people in the accessibility community who

have never been able to move their arms

are able to dictate a design that they otherwise

couldn't have created?

What happens when,

and I don't offer this as sort of blind optimism,

just another layer of this, right?

That same kid in Ghana who, if we talk about

the human-centered design movement,

best understands what solution he might need

for an LED lighting rig to set up at his,

I'm gonna say, cell phone stand,

because I've seen one of these,

not because I'm trying to sell it, right?

And then can just articulate that design,

and with some combination of affordable 3D printing

can actually just go make that thing,

instead of waiting for a major company

to enter the market and do it for them.

I think that is an interesting, potentially compelling use case

that we're still a few years away from.

Absolutely, and I think that there,

you know, there's a lot more potential

and a lot more positive impacts of AI

in the fields of healthcare, education, you know, medicine.

But again, I think that in order to make sure

that the positive effects of those reach people

beyond the Western world that we're living in,

we need to address who controls these technologies,

who's creating them, and what the incentives behind them are.

And I think without questioning and investigating

and putting our focus on, essentially, the people

who run those companies, I don't think we're going

to have an equitable distribution of technology,

no matter how good that technology is.

I mean, just a kind of funny way of thinking: Umberto Eco,

you know, once said that he thought that the MS-DOS computer

was Protestant and the Macintosh was Catholic,

because the Macintosh has these icons, like the church,

and if you click them in the correct order,

you reach salvation, whereas MS-DOS, you know,

is much more abstract.

And then he thought computer code was capitalistic.

And so I used to do this thought exercise:

what if the computer was developed in India, for example,

with avatars and so on, or if it was introduced

in a place where you first had to ask, how are you?

How are your children? How are your grandchildren?

before you could use it,

you know, where you had to have that kind of politeness

to it, and it's not Command-Q and this and that

and I get what I want.

You know, there used to be a woman who wrote about misogyny

in computer programming: you know,

my husband's a programmer and it's all Command-Q, Command-V,

and when I come home, he says, do the dishes,

do this, as if I'm part of that system.

And so I think if, again, we respect other cultures

and ways of knowledge and wisdom, and we develop products

that help advance that kind of knowing,

without the Western perspective so heavily ingrained,

I think that to me would be a very nice future: to see

the child in Ghana, or wherever it would be,

coming up with their own systems.

You know, do you have to bless the camera first?

Do you have to ask permission from the camera?

Do you have to show it to the elders first?

What do you do with it, as opposed to it just being a utility

to get what you want,

you know, a tool kind of an idea?

I think there's a lot to be said. What was it?

It was, you know, one of these video computer games

about Earth and the environment, and unless you answered

the first three questions, you weren't allowed to play it.

You know, you had to have some knowledge,

you know, things like that.

Just to move it out of the quick, easy consumer thing,

you know, including AI:

I think that could be really, really important.

I think the answer is almost always the underlying system,

you know, and how the system uses it,

not the tool itself,

because it can be used in so many different ways.

Yeah, I think any system we build, we imbue

with our own biases and cultural norms and societal norms,

whether consciously or not, so that makes a lot of sense.

Now that we've painted a view of the future, bleak or hopeful

or not, I'd love to talk a little bit about what we do now.

What do we do now? Maybe we'll solve it up here.

And also I think this is a good time for anyone

in the audience to jump in and ask a question too.

I want this to be as interactive as possible,

so we'll run around with a mic as well.

So what do we do?

What do we do in this world today, tomorrow?

How do we kind of start, big or small,

with some of these ideas, to imagine a new world?

Education, education, education,

but democratizing education.

There's a huge component of the world's population,

a huge area, huge civilizations,

that have very limited access to knowledge and information

because their systems don't allow it.

They don't have the resources.

We're extraordinarily privileged

in the so-called Western world or so-called developed world

with the access to tools and knowledge that we have.

And we're so incredibly,

in my mind, shortsighted about sharing those tools,

that knowledge, those resources with the majority world.

That for me is a fundamental problem because the survival

of the planet depends on the people who live on it.

And if a great majority of those people,

or a majority of those people are constrained

because of the access that they have to information

and to resources to help do something about it,

then it gets very, very complicated.

And education as regards to media literacy, of course,

but just technology in general and access to resources

and understanding what those resources are.

I keep coming back to education as being at the root

of any solutions to any issues that we're dealing with.

For me, the way I started

with one of the books I wrote

was just: what kind of world do we wanna live in?

You know, what's our ideal for the world?

And then develop the tools and systems to accomplish that.

So if we wanna live in a world

where we respect people of color,

so that when you photograph them

their skin color is authentic, you build it.

And we saw that today.

So that was defining the world we wanna live in,

and build the tools to get us there.

If we wanna live in a world that s more peaceful

and respectful, then we try to do something

about hate speech and hate images and weaponized images

and legislate against it or figure out ways to deal with it.

But I think if we just start out with that idea,

in five years from now, what world do we wanna live in?

We want a world with fewer issues of climate change.

AI maybe can help us with that, you know,

and then you use it, the tools to accomplish the world

you wanna live in and not be pushed around by the tools,

but develop them to do the things you want them to do.

That's it.

For me, one of the key words, and I agree with education,

of course, is responsibility.

We have to take responsibility for what we produce

in terms of content, and we have to be aware

that we have to take responsibility.

We must have the courage to do things to create something.

But we have to take responsibility.

Sometimes I feel there is quite unpersonalism

and create something and give it to these highways

where you don t know when it will end.

But if you, in this case I m talking also as a citizen

not just a representative of a news outlet.

We have to be responsible for the fact

that we are very exposed in everything we create.

So, and sometimes you don t know what the outcome will be.

I always tell a story about climate change,

I guess, you know the story of the Paul Nicklen s photograph

of the polar bear starving in the Arctic.

There s a photograph about these polar bears starving,

and I guess it was buffing, something like that.

And it was really dying.

But it comes out that National Geographic,

published this photo with the headline,

This polar bear is dying for climate change.

Now is climate change for real? Yes, for sure.

It s one of the major challenge we have as human kind.

Is the Arctic affected by the climate change? For sure.

And this is endangering the polar bear, yes.

This particular polar bear was dying

for the climate change, no.

So the editor of National Geographic,

had to step back and ask everyone.

I m sorry. I m telling you, I m sorry I pushed too much.

But you know, I know Paul Nicklen,

which is the photographer who took the photo.

And I know how committed he is in this kind of,

sometimes you dunno exactly what the outcome will be.

In this case, the backlash was so hard,

no one can expect it.

And the day later, all the deniers came,

oh, now you see that it s all a makeup.

So it is also our responsibility to stay sharp

and see the limits and stick every time

to the truth.

Guido, I've been the person in your position before,

so I want to ask you if you have something

to add to this, since we can't see you up here.

[Guido] Sorry, thank you so much.

Can you hear me?

Yes. Yes.

Yeah, so very, very quickly.

I think three things, mainly.

One is that AI can be amazing to improve education,

to democratize it, as someone said before.

It can improve access to justice, et cetera,

but it cannot and should not be the replacement

for a robust welfare state.

You know, often I hear that there are people

who don't have access to education,

and therefore isn't a robot teacher better than no teacher?

Or other people say there are people with no access

to justice: isn't it better to have an AI-generated ruling

than no justice at all?

But I think these are the wrong questions to a large extent,

because we just shouldn't allow these things,

these inequalities, to exist.

AI cannot provide an excuse for governments

not to invest in welfare.

I think global investment in welfare

to protect those affected by AI should be the top priority

for governments.

The second thing, also thinking about what Zahra Rasool

was saying: technology is political.

That's absolutely true.

You know, you cannot fight for democracy, equality,

and even peace if you don't fight for a more responsible AI.

And this brings me back to what I was saying

in my presentation, that what we need,

or one of the things that we need, is collective action,

participatory action. You know, let's organize.

And this leads to the third point, very briefly.

I'm a lawyer; I don't love the law.

But I think one of the reasons why we can trust that the law

sometimes will lead to the common good

is what Jon Elster called the civilizing force of hypocrisy.

Effectively, our politicians,

most of them, don't really care about what we think,

but obviously they are hypocritical.

And that means that they're not gonna pass laws

that cannot be defended publicly.

That means that if we all care, and we show that we care

about the future of AI, about responsible AI,

then the odds are that the laws that get passed

to regulate AI, et cetera,

will be public-interest oriented.

They will, to some extent, listen to us.

And this is the moment, because we have at the UN level

the discussions around the Global Digital Compact,

and at the Council of Europe level

the discussions around the AI treaty.

So there are movements on a global level;

there is something happening.

So this is the moment where we need to organize

and make our voices heard.

Two things. I think we need to center the communities

that we're serving, most specifically

marginalized communities, in the creation of technology

and of content.

And secondly, I believe that, you know,

I'm speaking from the US perspective

because that's, you know, that's where I live,

we've gotta move beyond the idea of representation.

You know, we've leaned on it too much.

The idea is that if you have a black person, a brown person,

and a white person, and you know,

all of them are in the room and their opinions

are taken into account,

that means that we've done a good job

of representing a community.

But we've gotta make sure the diversity includes

not just race and gender,

but includes different political ideas.

It includes different religions.

It most specifically includes different classes of people.

And unless we do that, unless we start having conversations

and discourse that include people

with lots of different backgrounds and ideas and thoughts,

I don't think we're gonna be able to create technology

that is going to serve the majority of people.

My function is as a product manager,

so I spend a lot of my time thinking about product design.

When we come up with an idea,

we have to work with UX teams that make us think about:

does somebody get prompted for a notification?

If so, where? If so, for how long? Is that intrusive?

Is it helpful?

Do they get it shown to them once at the beginning

and then never again, or does it come up every time?

I bring this up because, going to your point earlier,

Santiago, about how we meet people where they are,

while also, to your point, Fred, building the world

that we want to live in. This is not a grand solution

to anything, but just a small idea.

I recently onboarded to a number

of these generative AI tools.

I was very resistant to them for a long time.

And then the prompt came for this and I was like, okay,

time to dive in.

I was struck by the fact that when I got to the stage

of using reference images, taking somebody else's picture

and letting it influence and heavily create the output

that I wanted, I was never prompted to think philosophically

about what that means.

And I get it, that's antithetical to every usage metric

under the sun, right?

You would never want to interrupt somebody's intent

in a product to do the thing that they came to do.

But in the spirit of challenging the norm:

if we know that the future is that people

are going to flock to and live in these tools,

and I agree with what everybody said about the importance

of media literacy and education, where is the obligation

to create some part of that education in the experience

of using the tool?

I imagine there will be hundreds of thousands

of young people who come on to using a tool like this,

who have never thought about what it means

to go take five images from somebody else, make something,

and then call that thing their own, right?

And obviously there are proposals on the table

that will make it easier for people

to understand what the source material is.

But I think it is also the responsibility of the builders

to think critically about how you prompt people

to think about the act that they're performing,

borrowing from the slow journalism button.

I think as a product design idea, there's a lot in there.

How do I expand the surface area of learning that can happen

at the moment of intent, which is the thing

that we know somebody's going to do, rather than trying

to route them to another website

that they never intended to go to in the first place

and hope that they read that information?

So maybe, you know, there's an opportunity there.

Let's build that.

Questions from the audience?

[Audience Member] Thanks.

This is sort of building on a couple things

that have been said, especially just now

by Florian and by Daniele.

And it kind of goes back, I guess,

to the theme of this whole panel, right?

Addressing ethical dilemmas.

But I wonder if there's maybe agreement or disagreement

on the panel about, like, who should be doing the addressing.

You know, 'cause I feel like,

from the different conversations we've heard today,

from some perspectives, you know,

it's very much a governmental, political action; for others,

it's more that corporations need to be taking more initiative,

or the user maybe needs to be thinking more about it.

And I guess I just wonder if we could unpick that

a little bit.

I can speak quickly to that.

You know, from a Google perspective, we have AI principles

that were announced at the onset of

beginning the deep dive into this journey.

The intent there being leaning in favor of safety,

avoiding things that create harm,

leaning in favor of accountability.

I am not naive enough to say

that there is not also a necessary role

for government regulation to play in all things.

Because no matter the intent, there will come a time

where there is friction between the interests of,

take Instagram as an example, right?

And I'll speak just from my personal experience.

The notion of an infinite feed, for example,

is very good for some players, but it might not be

the best thing for the user. Who's right about that?

And how do you weigh the good and bad of that?

This is why you need outside operators as well.

So I think anything short of a collaborative approach

will fail.

What I would love to see, to call for accountability

from the seat that I have in this, is more proactive thinking

about ways that we can surface these things to our own users

within products.

Because I think the about section

of a lot of company websites actually has a lot

of really useful information that most people don't get to.

They get to the product,

but they don't see the thinking behind it.

And then, of course, being regulated.

So I work, obviously, for Adobe.

Adobe's sort of slogan, if you like, is Creativity for All.

And so Adobe, for the most part,

creates tools that facilitate creativity.

And part of the reason why Adobe

started the Content Authenticity Initiative

was to try and have a level of transparency

around what kind of material is created with the tools

that we make.

And so in order to carry that work forward,

in addition to the Content Authenticity Initiative,

which is, you know, 2,000 members and growing every day,

we've also been very thoughtful about

how we make these tools responsibly.

For example: generative AI training data sets,

bias, you know, being very cognizant of how these tools

can be used in the hands of bad actors, signing up

to initiatives that are seriously thinking about threats

and harms, looking at the dangers from a variety

of perspectives outside of just the technology industry.

The Partnership on AI, for example,

is an organization that is working hard to codify

and describe what the threats are and how companies

can control for them and mitigate them,

in addition to signing up for, you know,

government codes of conduct and things of that nature.

So I think it's necessarily holistic.

One needs to look at the whole picture,

but it's also really important to focus on the details,

especially as regards the threats and harms

that sometimes this technology can cause

for certain individuals or certain societies.

And I'd just add to it: I think, you know,

we're sitting here and we could say the government

should do it or the corporations should do it,

but what if we ask ourselves, what can we do as a group?

I would love it if next year, when PhotoVogue

does this festival, there'd be a media literacy book

or pamphlet or website.

And it would say: before you use AI to bring back people

who've been murdered by the police,

for example, Florian's example,

then maybe you should think about their descendants.

Would that upset them?

Would that upset you if it happened to you and your family?

Would you feel bad about it?

And then go through a whole bunch of different cases

so that people engage with it.

We don't tell them what to do,

but we just make them more thoughtful.

Like, I have a whole bunch of images I've made

that I will not show anybody,

because I don't want anybody ever to copy them,

because I find them really awful.

But I would want that media literacy thing to say:

every image you make, you don't have to put online.

And these are some reasons not to do it,

because you're gonna give ideas to people

to do the wrong thing.

So maybe don't think that every image

is worthy of publication, for example.

And down the line, you know, if you're a person

from a different culture, be a little bit careful

when you're representing a culture

you don't know too much about.

You could hurt people with it.

Like the New York Times, when I worked there,

we had one principle, which is the golden rule.

Put yourself or a member of your family or a friend

in the picture: would you want to do it to them?

So we had an example of a woman running from a fire

somewhere in the United States.

Her house was on fire behind her.

She was completely nude, running from it.

Would we publish the picture? Of course not.

We're not gonna stop house fires by publishing a picture

of a woman who's nude.

Nor do we want a person to go back into the house

to get a towel and burn to death

because they might be photographed.

So we knew we wouldn't do it.

But if we could stop a war, like with the girl burning

from napalm, an image with influence, then in my mind

we would consider publishing it.

There are big differences in doing it,

but I think those nuances really deserve a website

or something.

So if anybody uses any of the tools we're talking about,

they think about it a little bit beforehand,

and maybe just don't do it.

That would be a great project that somebody could fund us

to do right now, and next year we would present it

in 75 languages. That would be wonderful.

Yeah, I mean, I agree with Fred that we can do stuff

on an individual level, but realistically, on a global level,

I don't know how much impact individuals can have.

I really believe that any change we want,

especially the implementation of, you know, legal frameworks,

we've gotta do it through our governments.

We've got to use collective action to, firstly, educate

our government officials,

because most of them don't quite understand the technology,

and then work to make sure that they're implementing laws

that are for the benefit of the people.

Not saying anything personally towards Adobe or Google,

but I don't think any tech company

is going to act responsibly if it doesn't benefit

their profit margins.

So do I trust Google or Adobe to do the right thing?

Personally, I do not.

We've seen that play out with Meta and Facebook,

and Twitter, the social platforms,

and their impact on news.

You know, there's been a conversation for decades

about how they're impacting news

and the distribution of news.

But at the end of the day, news organizations

just had to agree to the terms and conditions that were set

by the social platforms if they wanted readers to read them.

So I don't necessarily trust that corporations

are going to do the right thing in favor of the public good.

Guido, let us know if you wanna jump in.

Yeah, maybe I can say just very briefly,

because law and regulations have been mentioned

a couple of times.

One of the problems with an approach that only relies

on the law is that the law is heavily influenced

by lobbying.

And that's more explicit perhaps in the US,

but in Europe it's the same in most countries at this point.

Especially big tech and larger corporations

have more resources to influence the lawmaking process,

whereas even when there are consultations,

citizens are often not really listened to.

It does happen, but the kind of influence

is quite different.

And I think that's one of the aspects, one of the fronts,

where we need to change things for the better.

We need to make sure that individuals, you know,

citizens, organize and make their voices heard.

I think that the solution many times

will come from what Derrick Bell called

interest convergence:

when different groups realize that even though

they have different interests and different aims, et cetera,

they actually share something in common,

that's where change can happen.

So for example, in recent times

the Alliance for Universal Digital Rights has been launched.

And this alliance, which is focused on digital rights,

was created by two women's organizations,

organizations that traditionally

would've just dealt with gender issues.

But now they understand that you cannot have gender equality

without fighting for better digital rights

and better AI.

Thank you for that question.

It kind of reminds me of the issue of nuclear weapons

on one hand and climate change on the other.

With nuclear weapons, I think through that interest

convergence that Guido mentioned,

we somehow stopped nuclear proliferation,

or, you know, dramatically decreased it,

because countries had a common interest to do so.

I think with climate change, on the other hand,

Zahra, like you said, all of us can recycle

and feel good about it, or compost,

but unless we have the infrastructure to do so,

and the laws and governance and accountability

and responsibility to do so,

it feels like such a hard problem to solve

because of the varying and oftentimes

conflicting incentives.

So thank you for that question.

Can I add one more thing while we wait

for a second question to come?

I look at where we are on Real Tone, six years out.

And there was a time when this seemed impossible,

not because there were racist colleagues of mine telling me

that people with dark skin should never be seen.

It's just because the people responsible

for building the tools did not have the lived experience

that provoked them to invest in this area.

I get asked a lot of questions about Google

as though I am Google, not the search engine

but the entire company.

And I'm sure those of us

who have worked at major corporations

have this experience often.

I have found it to be very productive

to get as specific as we can about who needs to be moved

in what direction to achieve a certain goal.

I worked in influencer marketing at the time

when I first started asking the question,

what if we built this camera?

I have a background as a photographer.

I did not touch any of the teams

that built computer vision tools at Google.

At the end of two years of searching

and pitching internally,

the answer was about two human beings.

There were two people who were well positioned

to really give this project life or have it die.

And those two people, we transformed into staunch advocates

for the project, and they are largely responsible

for why this has the legs that it has today.

And so if we think about expert collaboration

as a third dimension of this, or a fourth, right,

people, government, the companies,

it has been extremely effective for me

to put those two humans in the room with 10 other humans

from outside of the company

who can have a real face-to-face conversation

about what might be possible if we changed X, Y, and Z,

and who got to ask very open, NDA-protected questions

about why something doesn't do a certain thing today.

I think if we believe in the capacity for individual impact,

that is one route that I have found to be very effective

when navigating large corporate movers to any of these ends.

[audience talking quietly]

Any parting thoughts?

Oh, that's a wrap.

Thank you, panelists, and thank you, Guido.

[audience applauds]

Starring: Millie Tran