How the Content Authenticity Initiative Combats Digital Mis/Disinformation | PhotoVogue Festival 2023: What Makes Us Human? Image in the Age of A.I.

In “How the Content Authenticity Initiative Combats Digital Mis/Disinformation”, Santiago Lyon examines how, back in 2019, Adobe established the Content Authenticity Initiative (CAI). The CAI has since developed an open-source industry standard for establishing the provenance of digital files, a valuable tool in the fight against the mis/disinformation challenging the news industry and the public’s trust in what is authentic.

Released on 11/22/2023

Transcript

All right.

Good afternoon everybody,

and thank you to Fred Ritchin for his wonderfully inspiring

and illuminating remarks.

It's always a pleasure to share a stage with Fred.

Let me introduce myself briefly.

So while I currently head up advocacy

and education for this initiative,

which I think you'll find useful

in light of some of the things

that Fred was talking about, I have a long

and deep background in the world of photojournalism.

For 10 years, I was a news and sports photographer.

In fact, the last time I was here in Milan was photographing

the World Cup Soccer Tournament in 1990.

And then after that, I spent a decade covering war

and conflict around the world.

I photographed nine wars on four continents.

I was wounded, I was taken prisoner.

I lost over a dozen friends to violent deaths,

photographing world events.

And so my commitment to truth and transparency

and authenticity runs very deep.

After that, I ran the global photo department

of the Associated Press for almost 15 years.

That was overseeing a staff

and a group of freelancers,

numbering about a thousand people,

and telling the world stories every day,

putting out about a million pictures a year,

all of them accurate and truthful.

So when I had the opportunity

to join the Content Authenticity Initiative,

it felt like a logical extension of my life's work.

And I think you'll see over the next 20 minutes

or so that this initiative addresses many of the things

that Fred touched upon

and hopefully comes across as a ray of hope

or optimism or light in these troubling

and very uncertain times.

So this initiative was kicked off in 2019 by Adobe,

really in response to the perennial problems

of mis/disinformation,

whether generated by artificial intelligence

or by other means.

And the reason for this was because obviously,

Adobe makes a lot of very powerful software,

which for the most part is used for constructive

and creative purposes.

But which sometimes falls into the hands of bad actors

and is used to manipulate things

and present them in an inaccurate or untruthful light.

So because of that, Adobe felt it had a responsibility

to try and address the issue.

And from the get-go,

this initiative has been all open-source,

which is to say all of the underlying technology

that you're about to learn

about is available online

for anybody and everybody to use,

including Adobe's business competitors.

And of course, at the same time,

it's included in any number of Adobe products.

More recently, of course,

we've seen images like this from generative AI.

And while these ones are clearly not real,

initially, there was a lot of confusion.

How can that be?

It looks so real.

It must be real, seeing is believing, et cetera.

And more and more, this generative AI technology

is becoming hyperrealistic

and difficult to distinguish from reality.

And as Fred mentioned, recently

there's been some controversy

because on Adobe Stock, which is Adobe's licensing website,

generative AI images from the conflict in Gaza

have been presented and have been used.

And in response to that, Adobe is analyzing our policies,

our procedures, and we also want to engage more deeply

with the photojournalistic community

to listen and to hear what the media world has to say.

And in addition, we want to reinforce the use

of the technology that I'm about to describe

to help clarify the situation.

The Content Authenticity Initiative started off

with just three members back in 2019.

Now, we're past 2,000 members,

as you can see here on the left,

a host of major media companies,

both news agencies as well as consumer-facing organizations.

And on the right,

you can see some of our technology partners,

both hardware and software.

And it's precisely the size and weight and heft

and influence of these members of the community

that we believe will give this initiative

a chance to succeed.

Now, the first thing that we did was take a look

at the transparency and authenticity landscape.

And we identified four important pillars,

and I'd like to run through each of them.

Starting with the area of detection.

Detection is where most people assume technology

and mis/disinformation intersect.

And detection technology involves uploading suspect files

to programs that look for telltale signs of manipulation,

things like inconsistent pixel structures

or impossible combinations of lighting sources

or sloppy photoshopping.
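
One classic example of such a telltale sign is recompression error: pasted-in or edited regions often respond differently when a JPEG is re-saved, a technique known as error level analysis. The following is a minimal sketch using the Pillow imaging library; the filename is hypothetical, and real forensic tools combine many signals like this one.

```python
# Error level analysis (ELA): re-save the image at a known JPEG quality
# and look at where the recompression error is unusually large.
# Edited regions often stand out. "suspect.jpg" is a hypothetical input.
from io import BytesIO

from PIL import Image, ImageChops

original = Image.open("suspect.jpg").convert("RGB")

# Re-save at a fixed quality, then diff against the original.
buffer = BytesIO()
original.save(buffer, "JPEG", quality=90)
buffer.seek(0)
resaved = Image.open(buffer)

ela = ImageChops.difference(original, resaved)

# The differences are usually faint; scale them so they are visible.
extrema = ela.getextrema()
max_diff = max(channel_max for _, channel_max in extrema) or 1
ela = ela.point(lambda px: min(255, px * 255 // max_diff))
ela.save("suspect_ela.png")  # bright areas warrant a closer look
```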

And while detection software

can be useful on an ad hoc basis,

it has a couple of fundamental problems.

Number one, it's not scalable.

It takes too long to run images through detection software.

Number two, it's not particularly accurate.

The accuracy often depends on the file sizes

of the images being analyzed.

And since most of the mischief is happening on social media,

the files are degraded

and thus, detection software fails.

And thirdly, detection software

is invariably a never-ending arms race

with bad actors trying to stay one step ahead

of the latest detection software.

So while detection software can be useful

on an ad hoc basis, image by image,

especially when combined with journalistic fact-checking,

it's not an area that we're focused on

for the reasons I mentioned.

Instead, we re focused on three other areas

that we believe are fundamental

and that really must work in concert

for any of this to succeed.

I ll start with policy.

Policy involves interacting with lawmakers

and policymakers around the world

to make sure that they're well-informed

as to where the technology currently is

and what's coming around the corner.

And so, Adobe's government relations team,

deals regularly with lawmakers and policymakers

towards that end.

And what we're seeing directionally in the policy space,

certainly in the European Union, and in the UK,

and in the United States, is a drive to mandate transparency

with regards to artificial intelligence

And much of the technology

that's underpinning those policy efforts

and informing them

is based on some of the technology

that I'll get into in a second,

which has everything to do with provenance

or understanding the origins of digital files.

So we believe policy is an important aspect of this.

The next area is around education.

And in this space, we mean particularly media literacy.

Media literacy is a field that varies wildly depending on

what country you happen to live in.

In some cases, as in Finland, for example,

media literacy is very highly developed

and is introduced into the classroom

to children at a very young age.

In other countries, it barely gets a mention.

And we think that media literacy is a hugely important part

of this issue of mis/disinformation

and transparency and authenticity.

To that end, we have been busy

for the last year creating robust

and free media literacy materials

for use in the classroom with middle school, high school,

and university students.

And we're in the process

of updating those educational materials

to reflect recent advances in generative AI.

But what we're really focused on here

from a technological standpoint

is this notion of provenance.

So in this context,

provenance really means the basic trustworthy facts

about the origins of digital files.

Where do they come from?

How were they manipulated, perhaps, along their journey

from creation through editing and onto publication,

and then sharing some or all of that information

with the viewer so that they can get a deeper look

into what it is they're contemplating?

And so in this context then,

provenance is really about proving what's real

as opposed to detecting what's false.

And our journey thus far in this space has taken us to work

with creators and news media

and, more recently, into the AI-generated space.

But when we look at the road ahead

and in response to interest from around the digital world,

we see a great number of use cases

for this kind of technology.

Things like brand reputation, and e-commerce, and insurance,

and auditing, and politics, and election integrity,

and law enforcement,

and medical and scientific imagery.

Politics and election integrity is particularly important.

Next year, 2024, over 2 billion people

around the world will vote in over 50 elections,

and we re already seeing generative AI

being used to create false and misleading campaign images,

placing people in meetings where they never were, et cetera.

And so we think that the urgency around the implementation

of provenance technology has never been clearer.

In terms of how the technology works,

I wanted to run you through the steps

that we're addressing here.

We've divided the work up into four main areas.

So we start with capture.

So here we re working with the hardware manufacturers,

the manufacturers of cameras

and smartphones

to integrate this technology

into their devices at production.

That means that when you go and buy a smartphone

or a purpose-built camera,

it will likely come out of the box

in the not too distant future,

and in some cases already

with this technology embedded into the device,

and you as the user can choose to turn it on

or off depending on what the default setting is.

So for example,

Leica recently announced

the world's first secure-capture camera

a couple of weeks ago, which has this technology embedded

into it at production.

And Sony announced that in the early part of next year,

they will have the same technology

embedded into their devices.

Nikon is also working on this,

as are some of the other major manufacturers.

And the smartphone companies are also

involved in this discussion.

So what does this do?

It establishes, from the moment that an image is created

with a hardware device, an empirical record

of how that image was made,

something that currently doesn't really exist,

or at least not in a very robust form.

The other thing that it does

is it secures the existing metadata

that most devices put out.

Things like XMP, EXIF, and IPTC: metadata fields

that have a lot of very valuable information in them,

time, date, location,

technical settings of the device, et cetera,

but which in their current form are very vulnerable

and easy to hack into.

So what this does is it secures those metadata fields

and makes them more robust.
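
To see why securing these fields matters, here is a minimal sketch of how trivially today's unsecured EXIF metadata can be rewritten, using the Pillow imaging library and a hypothetical file; cryptographically binding the fields, as described above, is what makes this kind of silent edit detectable.

```python
# EXIF metadata as it exists today is trivially editable, which is why
# binding it cryptographically matters. "photo.jpg" is a hypothetical file.
from PIL import Image

im = Image.open("photo.jpg")
exif = im.getexif()

# Tag 306 is DateTime in the EXIF spec; anyone can silently rewrite it.
exif[306] = "1990:06:08 21:00:00"

# Recent Pillow versions accept the Exif object directly on save.
im.save("photo_backdated.jpg", exif=exif)
```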

Next, we move on to the area of editing.

Here, we're integrating this technology

into editing software, not just Adobe tools

because after all, this is an open-source initiative,

but as many editing tools as possible.

And the way this works is that

whenever you make changes to an image

or to a file using editing software,

let's say, you darken an image or tone it or crop it

or lighten it or remove something,

each of those actions gets added to the file

as a secure layer of metadata,

thus creating an edit history or a digital chain of custody.
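
For a sense of what that digital chain of custody looks like, here is an illustrative sketch of an edit history modeled on the C2PA "actions" assertion; the normative schema lives in the C2PA specification, so treat the exact field names here as assumptions.

```python
# Sketch of an edit history recorded as an "actions" assertion, in the
# spirit of C2PA's c2pa.actions assertion. Field names are illustrative;
# the normative schema is defined in the C2PA specification.
edit_history = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {"action": "c2pa.opened"},
            {
                "action": "c2pa.color_adjustments",
                "parameters": {"description": "toned and darkened"},
            },
            {"action": "c2pa.cropped"},
        ]
    },
}
```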

Next, we're working with publishers and CMS manufacturers,

and the goal here is to leave these metadata fields intact

because as you likely know,

metadata gets stripped off at every stage

of a piece of content's journey.

If I email you a picture, the metadata gets removed.

If I SMS it or WhatsApp it or Slack it,

metadata gets removed.

So we're working to change the way the digital industry

handles metadata in order to leave the metadata intact.

And the reason that we re interested in doing that

is because then we can share some or all

of those metadata fields with the viewer

to give them a sense of

what it is they're actually looking at.

Where did it come from, how might it have been manipulated?

And we do this through what we call Content Credentials.

And that little CR icon,

which we just introduced a couple of weeks ago,

our goal is to make

that become the sort of ubiquitous symbol for credibility,

or at least for transparency

in terms of what s in a particular file

and where it might've come from.

When we dive down a little bit deeper into

what's in the Content Credentials,

you can see it can get very granular at every point,

whether it's creation or editing or distribution.

You can capture a lot of information.

But of course, we recognize that not everybody wants

to capture all of that information,

and in some cases, they probably shouldn't.

A war journalist working in a conflict zone, for example,

would be ill-advised to attach their GPS location data

to their file because it could prove fatal

as it has done in the past.

Or a human rights defender working

under a dictatorial regime would be ill-advised

to inextricably attach their identity to a particular file.

So all of this will ultimately be customizable

and the user, the publisher, or the individual,

will be able to choose

how much information they're interested in sharing.

So Content Credentials, this will become the new symbol

around the world that will indicate

that a digital file has underlying information in it.

Now, when we come to the generative AI space,

Adobe plays in this space, as do many others.

Adobe has a product called Firefly,

which is a text to image generator.

Every file that is downloaded from Firefly

gets a Content Credential attached to it.

So when you click on it, this is what you get.

It's sort of what you might call a digital nutrition label

in the same way that you would go to a supermarket

and buy a food product

and inspect a label on the back of it to see what was in it.

This allows you to inspect a piece of content

and see what's in it.

The other generative AI providers out there, OpenAI,

Stability AI, Midjourney,

they're all working in this direction as well.

In fact, OpenAI has a text-to-image generator currently working

in Bing, the Microsoft search engine.

And when you generate something with that engine,

a Content Credential automatically gets attached to it.

So we're seeing some movement in that field:

the generative AI providers are interested

in the area of transparency.

We're also working on video.

So imagine this is a video of a politician; he's speaking

to some journalists, everything's good,

and then here, all of a sudden, something's wrong.

An artificial piece of audio has been added to the soundtrack,

for example, and it turns up red.

You're able to click on the Content Credential

and get a little bit of insight as to why it turned red.

So work on video continues.

We're designing a player that we're hoping

to make a standard player around the world.

And then we're also next gonna be working on audio

as well as other format types.

So there's a lot of work going on in this field,

and the partners that we're working with

are incredibly valuable

and incredibly important in all of this.

Now, this next slide or two is a little bit technical,

but I thought I would go through it quickly,

just so you get an understanding of the methodology here

for those of you who might be technically minded.

So we secure all of this information,

starting with assertions: where did the image come from,

what was done to it, et cetera.

And then what we do is we bind the assertions

to the file using what we call asset hashes, a form of cryptography.

This is the same technology

that is employed when you're doing online banking

and need a secure connection

and want to protect yourself online

when things are important.

Then what we do is we bundle the assertions

and the asset hashes together

with the digital signature of the individual.

That's particularly important

because digital identity, despite the world we live in,

is a very underdeveloped area.

We still walk around carrying pieces of plastic

with our pictures on them

when just about everything else we do is digital.

So there are a number of options

to create digital signatures out there,

and it's a fast-growing area.

In Estonia, for example,

digital identity has been introduced already,

and in some states of the United States,

it's being introduced.

So building on that drive towards digital identity,

you're now able to purchase

what are called signing certificates

that allow you to attach your digital identity to something.

And in this case, it could be the Content Credential.
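
Putting those pieces together, here is a conceptual sketch of the bind-and-sign step: assertions plus an asset hash, bundled and signed. This is not the real C2PA manifest encoding, which uses JUMBF containers and X.509 signing certificates; the filename and the raw key standing in for a signing certificate are illustrative.

```python
# Conceptual sketch: bind assertions to the exact asset bytes with a
# hash, then sign the bundle. Not the real C2PA encoding.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

with open("pyramids.jpg", "rb") as f:  # hypothetical asset
    asset_bytes = f.read()

claim = {
    "assertions": [
        {"label": "c2pa.actions", "data": {"actions": [{"action": "c2pa.created"}]}},
    ],
    # The asset hash binds the claim to these exact bytes.
    "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
}

# A raw Ed25519 key stands in for a signing certificate here.
signing_key = Ed25519PrivateKey.generate()
claim_bytes = json.dumps(claim, sort_keys=True).encode("utf-8")
signature = signing_key.sign(claim_bytes)
```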

Next, we look at where this information is housed.

So these Content Credentials can live

in one of three places.

One, they can be embedded into the files themselves.

Two, they can be housed on a cloud

with a pointer from the file to the cloud.

And in this particular case, we also store a tiny thumbnail

of the image in question,

so that if the Content Credential gets stripped off,

we can link the thumbnail to the image

that's being analyzed,

using what s called fingerprinting technology,

which makes it more robust.
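
As an illustration of that fingerprinting idea: a perceptual hash survives resizing and recompression, so a copy whose credentials were stripped can still be matched back to the stored thumbnail. A sketch assuming the third-party imagehash package and hypothetical filenames.

```python
# Re-linking a stripped image to its cloud-stored thumbnail with a
# perceptual hash, which tolerates resizing and recompression.
# Assumes the third-party "imagehash" package; filenames are hypothetical.
import imagehash
from PIL import Image

stored = imagehash.phash(Image.open("thumbnail.jpg"))
suspect = imagehash.phash(Image.open("stripped_copy.jpg"))

# Subtraction gives the Hamming distance between the two hashes;
# a small distance means very likely the same underlying image.
if stored - suspect <= 8:
    print("Match: re-attach the cloud-stored Content Credential")
```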

And then lastly, for those who are interested in blockchain,

this can also be put on a distributed ledger.

Blockchain has some advantages in the sense

that it's immutable, it can't be altered once it's done.

It has some disadvantages in the sense

that it's a little slow sometimes,

there's some cost involved, and reputationally,

blockchain makes some people nervous.

But in any case, three different options.

And we're also exploring the addition

of invisible watermarking to make this even more robust.

So what we needed to do here early on

was create a global standards organization,

since what we're aspiring to do here

is to create a global and robust standard.

So together

with some of these other big technology companies,

we created something called the C2PA

or the Coalition for Content Provenance and Authenticity.

This is a standards organization that sits

within the Linux Foundation,

which is the preeminent open-source organization

in the world.

Adobe is on the Steering Committee and chairs

the technical working group.

And in January of 2022, we released version one

of this technical specification.

So one way to think about these things

is that the C2PA

is the underlying global standards organization,

and it's essentially providing a technical standard

or a blueprint upon which everything is based.

And then the Content Authenticity Initiative comes along

and first of all, gets together a big community of large,

medium-sized, and small companies, as well as individuals.

And just as importantly,

the Content Authenticity Initiative

builds open-source tooling based on the C2PA standard.

So the C2PA is the architect,

the CAI is the construction company.

And thus far, we've built a suite of tools

that are currently available on GitHub for developers

to look at and experiment with.

We also run a very vibrant Discord channel,

which allows developers to ask questions in quasi-real time.

And at the same time,

these tools are being implemented into Adobe products.

This is in Photoshop already.

It's in Lightroom as a beta.

It's coming to Premiere for video,

and it will come to other programs soon.

And so the tools that we released are, number one,

a very basic JavaScript UI Kit

that allows people to see and read Content Credentials.

Number two, a more sophisticated C2PA Tool

that allows Content Credentials to be written into files.

And thirdly, a full Software Development Kit

that allows you to do all of those things

to a much higher degree of sophistication.
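
Conceptually, what a reader-side tool does when you click the CR icon is the mirror image of the signing sketch shown earlier: recompute the asset hash and check the claim's signature. A simplified illustration that continues that sketch, not the actual API of the open-source tooling.

```python
# Conceptual verification, continuing the earlier signing sketch;
# the real tooling handles the actual C2PA formats.
import hashlib
import json

from cryptography.exceptions import InvalidSignature


def verify(claim, signature, asset_bytes, public_key):
    # 1. The bytes on screen must match the hash the creator signed.
    if hashlib.sha256(asset_bytes).hexdigest() != claim["asset_hash"]:
        return "content changed since the credential was attached"
    # 2. The claim itself must carry a valid signature.
    claim_bytes = json.dumps(claim, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(signature, claim_bytes)
    except InvalidSignature:
        return "credential has been tampered with"
    return "Content Credential verified"
```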

And what we've seen is

that the C2PA standard has emerged

as the gold standard in provenance technology.

Governments, companies, camera manufacturers,

everybody is referring to that standard.

And the reason is because it's an incredibly rigorous,

detailed technical standard;

some companies have gone into the standard themselves

and tried to make tools from it, and they run away screaming

because it's so complicated.

And so a lot of the heavy lifting has been done

through the Content Authenticity Initiative

to make those C2PA standards a reality

and something that can actually be implemented.

On the camera front, as I mentioned,

we have three major manufacturers in the lead, if you like.

Leica was the first out of the door with their M11-P camera

that was released at the end of October.

That's the world's first production secure-capture camera.

Sony announced last week that, in early 2024,

this technology will be available on a whole suite

of their cameras, and they will also be able

to apply it retroactively through firmware updates,

which is good for people who don't want to have

to go out and buy a new camera.

And Nikon is on their second prototype

and making good headway.

So directionally, we're seeing real traction

with the hardware manufacturers.

And on the smartphone side of things, we're talking

to everybody you'd expect us to be talking to.

The issue with the smartphone world

is that it's a little more complicated

because the security thresholds are higher:

they have a much higher standard around security,

around encryption, and those kinds of things.

So it's moving a little bit slowly,

but directionally, everybody's headed that way.

What we're currently doing now, whoops, lemme go back here.

So I wanted to show you this functionality

in Photoshop if I can.

Let s see.

There we go.

So imagine here I have an image in Photoshop.

I turn on Content Credentials in Photoshop.

I do a preview of the image.

I see the little CR icon, which indicates that it has a credential

because it came from Adobe Stock.

Now, what I'm gonna do is manipulate this image

significantly, cooling down the tonality and using an AI tool

to add artificial clouds to the background.

Now, what I'm gonna do is overlay another image

that I had ready, snow.

I clear the image up, I have a picture

of pyramids in a blizzard.

I have the three component parts there.

I export this,

and I add the Content Credential to the exported file.

Once exported, it shows up in a mocked-up social media site:

snowy pyramids.

I click on the CR icon, that's the Content Credential,

and it takes me to the backend.

And here what I m able to do

is see the sort of family tree of the image.

And with this little slider,

I'm able to see where I started and where I ended up.

So this is a functionality that exists currently.

We're in the process of intersecting

with the social media companies

to make this a reality on those platforms.

But as you can see, it's a very simple

and transparent way of doing many

of the things Fred described, but doing them at scale

and with enough industry partners to make this a reality.

And then lastly, just to underscore three key areas of focus.

The open standard specification: that's the C2PA.

The implementation and member collaboration:

that's the Content Authenticity Initiative.

And lastly, advocacy and education:

that's the Content Authenticity Initiative as well.

So we're making a lot of progress.

This QR code will take you

to the contentauthenticity.org website.

If there's a call to action here, join the initiative.

It's completely free.

We welcome individuals, we welcome companies.

In fact, about half of our 2,000-plus members

are individuals who are interested, who are concerned.

If you join, you get access to newsletters,

workshops, community events.

And as the technology develops,

we're very interested in getting feedback

from people who are using it.

You know, questions,

opportunities to engage with the community,

just like we're gonna be engaging

with the community about the GenAI stuff

that Fred mentioned.

And so I'd like to think that all of this work,

we've been at this now for four years,

has made a lot of progress, and it's getting there:

we're beginning to see real-time implementations

of this in media companies and others around the world.

And the goal over the next several years

will be to make this a reality

and try and make the internet a safer place

by allowing for this level of transparency,

so that people can see what's in content,

where it comes from.

And by doing so, have a better understanding.

And that's it.

Thank you.

[audience clapping]

So I'm told we have time for questions if there are any.

Yes!

[Audience Member] Hi!

Yes, so about that Photoshop program that you showed

where it shows the family tree of the edits

and how social media

and other platforms would use it to show,

how an image has been altered,

is that just for images that claim to be true

or all images ever posted online?

All images, whatever you post online,

if it has a Content Credential attached to it,

which, of course, is an important part of this,

and that's why we're working for implementation

at the creator level,

then the Content Credential can be inspected

and you can get that family tree slider or look.

And so it's important to note that the provenance trail,

in its most robust form, would begin in the device,

in the hardware at creation, out of the camera.

While we wait for that technology

to become more available, the provenance trail can begin

on the editing side of things,

if you're working on a picture in Photoshop or Lightroom

or whatever editing tool

that might have this technology in it,

you can establish at that point who you are,

attach your identity to the file,

and also disclose the changes that you might have made

to the file at that point.

Or we're working with some publishers, for example,

to introduce the provenance trail at the very end,

knowing that the likes of the AP

and the New York Times and Reuters

and other major organizations have a very rigorous system

of checks and balances carried out by humans.

But when they send something out,

it's because they believe it to be true,

and in fact, their whole reputation hinges on it.

So we have the opportunity there

to add a Content Credential upon publication that indicates

that particular image has met their standards.

And then over time,

as use of the editing software becomes more ubiquitous,

and as use of the hardware tools becomes more common,

we can add to those Content Credentials.

In addition, Fred was asking about archival use.

And so we're working with archivists, doing some work

with the Library of Congress in the United States,

to see about retrofitting Content Credentials

to existing digitized material

in which they have a high degree of confidence,

in order to set the historical record straight.

And we're also exploring

how we would introduce Content Credentials

into the scanning process.

So that when physical objects are scanned,

be they photographic prints or negatives

or, you know, artifacts,

you could add the Content Credentials

in the scanning process, which is where something begins

its digital journey.

So the whole area of provenance has

multiple layers, multiple opportunities

to intersect with it.

And that's really where it ties back to education

and media literacy,

and also, the notion of consumer literacy.

As the CR icon becomes more widespread,

and as more organizations begin to take it up

and publish it,

it's going to require educating their consumers.

Dear New York Times reader,

you will have noticed a little CR icon appearing

on our website in recent days.

Here's what it means.

So, you know, none of these things, I don't think,

can be taken in isolation.

Content Credentials and the Content Authenticity Initiative

by themselves are not a light switch or a turnkey solution

that all of a sudden solves all the world's problems.

It needs to be done hand in glove with education,

whether that's media literacy and/or consumer literacy.

And also the area of policy,

which is to say mandating transparency

around certain things,

which seems to be what s happening around the world.

Although policy will always lag technology

for obvious reasons. And in the area of detection,

these tools, when implemented, are of huge value

to fact-checkers and to, you know, people whose job it is

to ascertain whether something is true or not.

So it's sort of that trifecta, if you like,

or four pillars, of truth and authenticity:

provenance, education, policy,

and to some degree detection

that I think give us a fighting chance.

But it's gonna require a lot of work

by a lot of people across multiple disciplines

in order to make this work.

But really, it's all we have right now,

and we can't give up.

We can't sit on our hands.

We need to respond.

Because think about two things. First of all,

the great leaps of faith that we take every day online:

whatever it is you're looking at,

an Airbnb home for a holiday,

something you're buying on Amazon, news photographs,

you have almost no information about

where those images come from.

You believe them based on your trust of a brand,

based on your assumptions, based on your biases.

But there's no empirical evidence.

As generative AI becomes more advanced and more common,

there needs to be a way to distinguish what's what,

so that you can have trust in things.

And if we don't do that,

it really starts to potentially damage things that,

you know, we hold dear.

Democracy comes under threat, for example.

If politicians and political campaigns

are using AI to deliberately manipulate things

to mislead voters into voting one way or another,

that already happens at a less sophisticated level.

And we re already starting

to see the use of AI creep into political campaigns,

and that's potentially terrifying.

So I think as a sort of a societal imperative,

we need to do something.

And in my mind, the combination of things

that I've just skimmed over in 20 minutes or so

is a good start.

And we need to build on this,

we need to get involved,

and we need to become active participants

in any one of those four areas to make this thing viable

in order to keep ourselves safe.

[Audience Member] Thank you very much

for your talk, Santiago.

Three questions coming at you in one.

The first is,

I noticed some pretty big logos were missing

from the collaborators you're working with.

Is the intention

that a process like this could be implemented

on social media, like on the Metas of the world?

How quickly do you think we could get to that stage?

And what are the biggest challenges

in being able to work with a medium

that is as fast-paced as social media, say,

Instagram or Facebook?

Yes, thank you for the question.

There's a lot of stuff I can't talk about too openly

because there's lots of NDAs in place,

but let's just say,

we're talking to everybody you'd expect us to be talking to,

and all of those conversations are making headway,

in some cases more quickly than in others.

But there is a general recognition across all the players

in the digital environment that there is a real value here.

For example, some of the newer members

of the Content Authenticity Initiative

have come to us from the advertising sector.

So as you probably know, the advertising world is dominated

by half a dozen or so major holding companies.

And we just welcomed Publicis and Omnicom and Dentsu

to the community, with some of the other major ones

seriously considering it.

And they're interested in this notion for brand reputation

and for transparency in advertising.

So as the community diversifies...

the logos I showed, you know, on the left of that slide

were all media companies,

because that's the most urgent use case.

But as the community diversifies

and we start to get advertising holding companies,

and to your question, social media companies more involved,

then that strengthens the community

and it makes it more robust,

and then it's really a question of speed

and ease of integration.

And so it begs the question as to whether provenance

as a service is going to become an industry over time.

We have the tools, the tools are implementable right now.

The biggest challenges that we're facing

around implementation

are precisely the multiple stakeholders

along the information chain.

So you have publishers, that's great.

But then after publishers, you have CMS manufacturers.

And then after CMS manufacturers,

you have what are called CDN providers,

who are in charge of getting the data

as physically close to you as possible,

so that when you log on on your smartphone,

you're not logging onto a server in New York,

you're logging onto a server in Milan

that has information from New York.

So it's a very complex ecosystem of companies.

But every day, we're getting closer to what we believe

is a tipping point: with enough critical mass

and enough members and enough ease of implementation,

we can reach the point where the notion of Content Credentials,

or transparency around content, becomes a scalable reality

that everybody is comfortable interacting with

and that everybody can interact with.

[Audience Member] Thanks, Santiago, for the great talk.

I have one question regarding creators

and actually enabling the content identification label.

What is the incentive for creators?

So the incentive for creators,

well, there can be multiple incentives:

attribution, at a very basic level.

This is mine, I made this.

There is an application here

that we're not directly involved in right now,

but we hear a lot about,

which has to do with digital rights management,

the ability to protect creators

against copyright violations, unauthorized use of content.

And I think, just sort of writ large,

this notion that...

In most cases, there's no downside to transparency.

In some cases, it's a little nuanced, as I indicated,

depending on who you are and what your circumstances are.

But for the most part, there's no downside to transparency.

And so sometimes,

we refer to the Content Authenticity Initiative

as a sort of a movement, right?

A movement towards transparency

backed by the companies, large and small,

who make the technology that we use every day.

And this notion of, you know, it fermenting and growing,

of being a participant,

I think that's an important upside to all of this.

Do you want to be a participant?

Because when we try and define

what success looks like over time, over the next,

I don't know, five years or so,

Success is really all about ubiquity.

It's about this notion of just about everything

that you see online having Content Credentials,

and thus the content

that does not have Content Credentials stands out precisely

because it doesn't, and therefore requires a second look.

There may be good reasons

why a piece of material might not have Content Credentials,

but if the trend is for transparency

and people are saying, I don't wanna be transparent,

then you have to ask yourself,

Why don't you wanna be transparent?

And like I said, you might have good reason,

but it does beg the question.

So I think the answer to your question

is because the upsides, at least in my mind,

outweigh the downsides.

[Audience Member] Thank you.

Last question, I'm told, if there is one.

All right, well, thank you all very much.

It's been a pleasure.

[audience clapping]

Starring: Santiago Lyon