Navigating Truth in the Age of A.I.: Fact-Checking and Media Literacy | PhotoVogue Festival 2023: What Makes Us Human? Image in the Age of A.I.

In a world where information is abundant and easily accessible through AI-driven technologies, the need for effective fact-checking and media literacy has never been more critical. In "Navigating Truth in the Age of A.I.: Fact-Checking and Media Literacy", Daniele Moretti, a renowned expert in media analysis, delves into the intricate relationship between AI, information dissemination, and the potential pitfalls of misinformation.

Released on 11/22/2023

Transcript

Me, and I took this picture in the Svalbard Islands,

because this is me again.

This is what I do.

This is the K2 mountain base camp,

and this is me again.

I'm a journalist.

I'm the deputy editor-in-chief of a 24/7 news channel.

So I am the mainstream media,

mentioned several times before this talk.

In the previous talks,

I heard trust, I heard credibility as keywords

repeated again and again and again.

And this is exactly what I've been doing for 25 years,

actually 20 years at Sky,

and this is what I...

What I'm trying to say is that we are the ones

who take responsibility for the final output

on all the images, all the video

you saw, AI generated or modified.

And what we do during the day is dealing

with all these materials, dealing with all this footage,

and trying to understand what is real and what is not.

So I decided to start with a photograph

from a couple of days ago, coming from Gaza.

We have a couple of news meetings every day.

We process the newsgathering during the day,

all day long, 24 hours.

But we take a couple of moments during the day

to talk about how things are going,

about the headlines, about what is going to be

the opening, the closing,

and sometimes, very often,

I mean, in the last year,

what to show and what not to show.

The head of the foreign desk that morning,

just a couple of days ago,

told us that the Israeli army

had taken control of the Gaza Parliament.

But she said, "We don't have any verified video

or photo or footage.

We've only a couple of photos like this,

but we really don't know where they come from.

And as far as I know, they could be AI generated.

So we are not gonna use them."

Everyone accepted the fact that we didn't know

where the photo was coming from, so we didn't use it,

simple as that.

And this is how our work has totally changed

in less than a year, less than a year.

Trying to put order in my thoughts coming here,

I remembered exactly when was the first time

that we understood there was something in front of us,

in front of our house, in front of our eyes,

that we were forced to deal with,

and that was dramatically changing our way

of telling the story.

And we were there in London,

our headquarters, for a workshop.

We were...

Like sometimes happens,

30 people gathered together to answer the question,

"What will news consumption be like in the next 10 years,

especially thinking digital?"

And we were forced to stay in this comfortable

brainstorming room with no cell phones, no laptops, tons

of Post-its, trying to listen to each other,

with a lot of keynotes, a lot of customer research

about news consumption.

But suddenly, it was day two,

the Sky News tech correspondent started to show us

all the things you have seen all afternoon,

how reality was, before our eyes, totally hacked.

And we realized that we were totally late.

"Navigating Truth" is the title of this talk,

and navigating the truth is something we can do only if we know the truth.

And this is exactly the point.

We realized at this stage, it was late January.

The famous photo of Pope Francis

in a white jacket came in March,

I guess March 20, something like that.

But we realized that, really, it was too fast to manage.

Just to understand,

I grabbed this quick timeline from CB Insights.

It's something you already know,

something that we have already discussed.

In June 2020, OpenAI released GPT-3,

and then OpenAI launched GPT-3.5.

And suddenly we understood that something was going on,

but we as mainstream media

didn't realize it was something more than a story.

We started to do news programs about that,

but we didn't realize it was something

that was really affecting our work.

Why was that?

And this is really the way [chuckles]

we can understand how fast it was.

ChatGPT reached 1 million users in just five days.

Twitter took two years to get there.

And, you know, this is why we became afraid.

But then, we realized that we had something to do.

And the first thing was doing news programs about AI,

news programs about AI, inviting experts,

giving voice to the fears, which were our fears.

But progressively, we continued to work

because we were on air, we were online,

and we continued to do our job.

And our job was about fake news even before.

What is fake news?

This is a common definition.

And we saw some examples of fake news even

in the previous talk, whether it's about Ulysses Grant

or other heads of state walking down the aisle.

Concocted, inaccurate information,

imitating content by rigorous media.

And this is very important for our work:

imitating content by rigorous media, crafted the way we do it.

We are the rigorous ones. Written with the intention to deceive,

and to deceive is, again, something very important.

And we now call it the post-truth era.

It's just a decade,

again, a decade ago that it started,

but this is the fact,

and this is where I started to modify

what I intended to do.

I've been dealing with fake news for 20 years.

I've been dealing with environmental issues for 15 years.

I've been dealing with US elections.

I covered the US presidential elections

in 2000, 2004 and 2008.

I was in war theaters,

and we have a rule book.

The rule book has changed as the technology has changed

during those years.

But the rule book always says that you have to check

at least twice when you have a story to tell.

This is my perspective, the perspective of the media

and how we deal with that.

What do I do if I don't know

whether the Gaza photo comes from there?

I call the Israeli army. I call them.

I have to be there.

And if I can't, and I cannot in Gaza,

because no one is allowed

to enter the Gaza Strip right now,

I will go to the press office

of the Israeli army, asking directly, "You are my source.

Please confirm whether this photo is true or not."

If I don't have this confirmation, I have to be transparent,

as transparent as I can, that I cannot confirm it is true.

So I won't show the photo. Simple as that.

And there's nothing really more to say.

We can discuss why fake news

is such a cause of worry for us.

Because it spreads, literally,

it goes viral quicker

and faster than actual news.

There are a lot of studies that say

that if fake news affects

your mental well-being,

then you will share it literally 10 times more

than actual news.

So, these are the four arguments

for why we should worry about the impact

of generative AI on misinformation,

in general, in scientific publications, and so on.

Increased quantity of misinformation,

increased quality of misinformation,

increased personalization of misinformation

and involuntary generation of plausible,

but false information.

This is something really...

The shift, the way we look at the kind

of danger we feel when we are dealing with generative AI

and fake news as traditional media.

And when it comes to these worries,

there are some studies right now that tell us

that, yes, AI is making the context more complex for us.

But if you stick to the basic rule book,

if you go back to the basics,

it's not affecting us now as much as we worried

it would at the beginning of the year.

There is a study I want to quote,

and the study is "Misinformation reloaded?

Fears about the impact

of generative AI on misinformation are overblown."

And the paper is by the leading journalist Felix M. Simon.

And he argues, and I'm gonna read it

because I don't wanna say something inaccurate:

"Increasing the supply

of misinformation does not necessarily mean people will

consume more misinformation."

Remember that we are not talking about communication,

about sharing content peer to peer

in the social media environment.

We are talking about media content,

so something which is mediated by us,

the mainstream media. And what we know,

what we as a brand learned over there in London,

talking about what news consumption will be,

is that news consumption is now affected by

something called news fatigue.

So people, you, everyone feels a fatigue

in dealing with the news,

and doesn't want too much news consumption.

Maybe it's also something related to the pandemic,

because we spent all the lockdown periods compulsively looking

at the news, even just to understand whether we would be able

to get out and when the pandemic would end.

Right now, we have this opposite news consumption behavior

of news fatigue,

but our customers want to be informed.

They can't afford to be without the news,

but that comes first.

It doesn't mean that they want

to have more information just because it's AI generated.

The impact of improved misinformation quality might be

negligible, given that most individuals are minimally exposed

to such content and the majority of existing misinformation works

without the need for more realism.

I'll go quicker.

Evidence suggests that personalizing misinformation

through microtargeting has limited persuasive effects

on the majority of recipients, as people don't pay attention

to the messages in the first place.

So does it mean that we don't have to be worried?

Not at all, not at all.

But we, as media,

have to be sharp,

have to be more...

have to pay more attention than ever to our sources.

And going back to our sources is exactly what it is,

the guarantee for us to stay

in contact with our customers.

And at this point,

I will tell you, okay,

the initiatives presented before me,

like the Content Authenticity Initiative,

are exactly what we are leaning on to do our job.

We as media outlets, as news outlets, know very well

that we don't have the power to understand directly

whether an image or video footage is real or not.

And so we have to lean on personalities like

the professor from the Polytechnic

and initiatives like this, to work together.

Transparency on one side and collaboration on the other.

Transparency is, as I said before: if I can't prove

that my footage is a good one,

I won't show it and I will tell you,

and in this case, sharing my sources,

sharing how I'm going through the process of verification

of a story, who my sources are.

When it's about climate, I'm going to call a scientist.

I'm going to read a document directly, which is something

that nobody really, really does when it comes

to scientific issues.

And on the other hand, collaboration.

Collaboration with institutions,

collaboration with education stakeholders,

and with technology stakeholders.

The ones who use AI for good. Under this AI umbrella,

in which we can find machine learning and data analysis,

we use these tools for investigative journalism.

For instance, to sift huge data sets,

which is something that a single journalist,

a human, cannot do.

Or, as in the previous talk,

we talked about the fact that we can show

through generative AI how the world will be in 2080 or 2050.

This is something I personally did during a news program,

drawing, literally drawing one of the squares in Venice,

a design of what the ocean will be in 2050

if we don't do anything.

So, bottom line, this is what we do.

Media literacy is that:

the first step is to be literate as a news media outlet.

And then all our customers will be literate too.

And the old-fashioned quote,

"If you're explaining, you're losing,"

explaining is not bad anymore for me,

because if you're explaining, then

you're giving the right answer.

And that's it. Thank you.

[audience clapping]

Starring: Daniele Moretti