Pretty Girls Look Like This: A.I., Photography and Racism within digital media | PhotoVogue Festival 2023: What Makes Us Human? Image in the Age of A.I.

"Pretty Girls Look Like This: A.I., Photography and Racism within digital media” examines the ways in which AI driven digital photography apps force all images to become more Eurocentric and the impact this has on who gets to be seen and how. In this talk Nkonde will discuss what companies like Google are doing to combat this and why developing images that represent all people are important as the global market for digital imagery becomes more diverse.

Released on 11/22/2023

Transcript

[Mutale] [microphone thumping] Can you?

Okay, good.

I work in technology

but I don't really know how to use technology.

So I am just waiting for my clicker,

and until that comes, my talk is actually very similar

in many ways to the one that we've just seen.

I'm just gonna lean a little bit more into

what policy making looks like in terms of image making,

AI and photography in the United States.

Is this my clicker?

Thank you. Oh.

Okay.

[Production Assistant] You wanna stay

just on the stage so you are at the light.

[Mutale] Oh, okay. Okay.

Okay, I also don't know about lighting.

So I have my clicker, so we can go.

Okay. So I'm gonna do a number

of different things in this talk.

I am going to introduce myself,

speak a little bit about generative AI.

It sounds like my colleague prior

might have gone over similar things, so I might skip that.

Look at Lensa, which was an app

that I was looking at last year in the US.

And that, for people that don't know,

it was an AI-generation app

that was using pictures to create profile pics.

What we were thinking about with ChatGPT and race,

even though I did take out that slide,

'cause Sam Altman stepped down yesterday,

so I've kind of switched that up.

What racial literacy could look like,

because I think one of the things I get very frustrated with

as a practitioner is that we can say that AI is not fair,

we can say that AI needs to be governed,

but people really don't have frameworks

for what that looks like, what's happening,

and then what we can do as image makers,

technologists, dreamers, and doers in this world.

So first of all, my name's Mutale Nkonde.

I came to people s attention through a report

called Racial Literacy in Tech,

which was really my frustration that people could

not see some of the things that I was saying

in terms of technology having a race problem

and trying to bring that knowing forward.

That ended up with my report going viral

with foundations in the US.

I started AI for the People, which is a nonprofit.

Obviously in the United States,

if capitalism doesn't say it's a real thing,

it's not a real thing.

So I was so happy when Nasdaq named us

as one of the 10 most exciting companies in AI.

And I work with TikTok, Google,

a bunch of different other tech companies

as they're thinking about their visual policies

and what it means to have a digital library

that is meant to scale across the world.

And then most recently I've been doing

similar advisory work in the US Congress,

and that's a picture of that.

So, generative AI, very quickly. I'm sure you are

bored of this slide and hearing this,

but I'm really speaking about technologies

where we put in the input as the user,

there is some type of database

from which the technology draws,

and then there is a third part, the output.

So not old-fashioned AI as we knew it,

which was kind of making decisions or sorting.

We are actually in conversation

with the technologies that I'm looking at.
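To make that input, database, output loop concrete, here is a minimal Python sketch of the pattern described above. The `GenerativeModel` class and its `generate` method are illustrative assumptions, not Lensa's or any real system's API; the point is only that everything the model can return is shaped by the database it was trained on.

```python
from dataclasses import dataclass

@dataclass
class GenerativeModel:
    """Hypothetical stand-in for a model whose weights were fit to a training database."""
    training_database: list[str]  # e.g. millions of captioned images

    def generate(self, prompt: str) -> str:
        # A real model samples an output conditioned on the user's prompt,
        # but it can only recombine patterns present in its training data.
        return f"image statistically typical of '{prompt}' in the training data"

# The conversational loop: user input -> model (drawing on its database) -> output.
model = GenerativeModel(training_database=["captioned image 1", "captioned image 2"])
print(model.generate("beautiful girl"))
```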

And the particular technology was this app called Lensa.

And if you look on the right-hand side,

there's a picture of a beautiful woman.

She's white. Her eyes are kind of aqua, not blue, not green.

Her skin is very pale.

She has high cheekbones.

She has a very narrow nose.

That was the picture that came up

when I last looked at this app

about three months ago when I put in the search term...

Oh.

When I put in the search term, beautiful girl.

And she is extremely beautiful.

On this side, we're using the same app.

So I'm now going to my left, your right.

The woman on the far right of you

is an African-American woman that used this app,

and what she got back was a picture

that didn't look like her, but it looked

the way Lensa would have liked her to look.

And if we can just look closely,

we can see that her nose is smaller and thinner,

which is typical of Eurocentric features.

Her eyes are actually bigger and wider apart.

And what's really interesting,

as a black woman who's had big lips all her life,

the AI has actually made these lips bigger.

So I don't know whether the Kardashians have

infiltrated AI norms, but she had beautiful full lips before

and her face is actually smaller.

It's smaller and rounder.

This was really interesting to me,

as both a policymaker and somebody that has

looked at research around computer vision,

because if we look at the way computer vision databases

are trained, they're gonna have millions and millions

of images of what is considered beauty.

And within those images, there is going to be

what we call facial architecture.

It's things like, how wide is the average beautiful nose?

What is the skin tone of the average beautiful person?

How big are the beautiful person's lips?

So that when we as the user bring

that search term up, it's going to be

an amalgamation of all of those questions,

and then it's represented to us in a picture.
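As a hedged illustration of that amalgamation, here is a toy Python sketch with invented numbers and field names, not measurements from any real dataset: when the images tagged as beautiful skew heavily toward one population, the averages a system learns, and therefore the faces it generates, sit close to that majority.

```python
from statistics import mean

# Toy measurements for images a dataset might tag "beautiful" (all values invented).
tagged_beautiful = [
    {"nose_width": 0.30, "skin_tone": 0.15, "lip_fullness": 0.40},
    {"nose_width": 0.32, "skin_tone": 0.20, "lip_fullness": 0.35},
    {"nose_width": 0.31, "skin_tone": 0.18, "lip_fullness": 0.38},
    {"nose_width": 0.45, "skin_tone": 0.70, "lip_fullness": 0.60},  # the lone outlier
]

# The "average beautiful face" is an amalgamation of the dataset's answers to those
# questions, so each averaged feature lands near the majority's values.
amalgam = {feature: mean(row[feature] for row in tagged_beautiful)
           for feature in tagged_beautiful[0]}
print(amalgam)
```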

What I'm positing to you today, and what I would like us

to think about as image makers, is that beauty

is not a set of statistical details or imagination,

nor is it something that we should take lightly

because it really drives many, all of our visual industries

and here at PhotoVogue certainly,

about what's desirable in editorial

and brands, which then have this other

afterlife around economic opportunity.

And so I really, at the point of being asked

to look at Lensa, started to think about

what policies would an app like Lensa have

during development to make sure that its outputs

are not only accurate but don't harm the mental health

of potentially young women using them, even older women,

but also are fair and representative.

And so as I looked at this, I started to think about

what ways I could, as a communicator and a policymaker,

make this real for people, while at the same time

really appreciating and respecting their visual integrity?

And this is the Racial Literacy in Tech framework.

It's a really easy way, as you're developing

or as you're looking at images, to think about

how is my image showing up in terms of race?

So it has three factors.

The first is cognitive, what do we think?

And in this cognitive space,

as we think about visuals, we have to get to the point

of understanding that images are culturally situated.

So the images that we see, going back

to training computer vision systems,

rely on preexisting databases.

Many of those databases are situated in what we think of

as the global north, the global minorities

in North America, Europe, Australasia,

and they then reflect those societies

and those images of beauty.

The next thing, as we consider

what racial literacy looks like, is that

people become defensive and shut off when we discuss race,

particularly when we discuss race

in a way that they find threatening.

So as we're having this

discussion about cultural imagery,

how do we do it in such a way that it's palatable

even for the people who are least likely

to want to be in that conversation?

And in New York City, we actually have a really interesting

example of that in the Metropolitan Museum of Art.

Following 2020, one of the things that their board decided

was that they were gonna go into their canon

and think about who is missing from their walls.

And over the last three years, we've seen amazing exhibits.

We've seen exhibits from the Black Panther as they're

thinking about what black futures would look like.

We've seen exhibits of hidden works

that are thought to have been developed

and made and created by enslaved people.

They're not credited,

but they are now in the Metropolitan Museum.

And the last exhibit that they've done,

which is fascinating and amazing to me,

is that they've looked at works

where black figures were painted out

because at the time that these works were acquired,

black people were not thought to be creative

and therefore couldn't make these works.

And they have painted those characters back in

and had marketing campaigns around this

as a way to open up the discussion.

And it has been extremely well received

by both scholars, by art lovers,

and people that love visuals like you and me.

And then the third thing is,

once you've found a way to discuss it,

now, not everybody's in the Metropolitan Museum of Art,

not everybody's gonna be able to follow that discussion,

but there is a way of having

that discussion that brings people in.

What I'm positing to you all today is really this:

as the photography community moves into AI,

where your images may not necessarily be just the images

that you've made but images that are generated,

what are you gonna do as you're using these tools

and asking pointed questions

about beauty, about intelligence?

As for the last speaker, I came in at the end of the speech;

I actually landed from New York this morning,

so I'm still a little groggy.

I haven't been able to see everything.

But one of the things I really appreciated

about the last speaker was that she was speaking

about copyright and ethics as a way

of doing business in this sphere.

And I'm building on that in saying that

we should think about goodwill,

we should think about who's missing,

and we should think about how we can make our images

more interesting by using frameworks

like this to make sure that more people

are in our images rather than fewer.

In the United States...

I'm getting ready to finish in a couple of minutes,

so if you have questions, please shoot them my way.

I'm leaving about five minutes.

So in the United States, this idea of image,

digital image making and AI is big news

and we're worried about it.

In this last week, 42 of what we call our Attorneys General,

so these are people that are looking at specific laws

across the states, have actually joined in a law firm...

In a law firm. No, they haven't. That's just-landed brain.

In a lawsuit against Instagram

because one of the things that has been found

through research is that the Instagram app,

both through the way the algorithm is used

and the imagery that's selected,

is attacking the mental health of young people.

That's seen as a massive priority for us.

And it's something that, as we're looking across the world,

I'd be interested in Italy:

What are young girls thinking about apps

like Instagram, like TikTok?

Are they seeing themselves reflected?

Is what they're seeing reflected back

good for them or bad for them?

And what will be the legal response?

And then, just as I thought I had sent my slides,

it turned out I hadn't, thank God,

because I really wanted to add this image.

We're waking up to the fact that this is Sam Altman

from OpenAI, who I think is maybe one of the reasons

that we're all speaking about AI today

and certainly one of the reasons that I was invited

because of my commentary around them.

But he's been pushed out of the company,

he's been pushed out of the company with his co-founder.

And one of the things that's really interesting is

that I talk all over the world on these matters,

I have never been on a panel with Sam Altman.

Sam Altman and I are gonna be on three panels

before the end of the year, and we have, like, three weeks.

So I'm wondering, as he is aligning himself

in the United States with the responsible AI community,

and specifically me and other black women

who really are pushing to make sure that we are responsible,

inclusive, and honestly exciting and interesting

from the outset of these technologies,

he is now showing up in these spaces.

So I'm wondering, and I'm hoping, and I'm positing

that as he and his co-founder go on,

maybe they'll start to think about

some of the problems they had with ChatGPT and DALL-E,

and design for a more equitable and inclusive world.

Who knows. Sam, if you're watching, I'll see you next week.

So finally,

these are some of the images that when I was putting them

through Lensa, I put in search terms like good,

I put in search terms like brave,

I put in search terms like strong.

I did not gender my search.

I was just really interested in

what Lensa's visual library looked like.

And it seemed to me that good, strong people

are more likely to be gendered male.

There are some women in there.

The women are racially ambiguous but not white,

so I think that that's a push toward a kind of multiculturalism.

And we only had one image of a young boy.

That does not reflect our world,

but our digital libraries, our digital apps should.
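For anyone who wants to repeat this kind of informal check, here is a minimal Python sketch of the audit. `generate` and `label_presentation` are hypothetical stubs standing in for a real generation call and a human or automated annotation step; the method is simply to run ungendered prompts many times and tally how the results present.

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an image-generation call (e.g. an app's API)."""
    return f"generated image for '{prompt}'"

def label_presentation(image: str) -> str:
    """Hypothetical stand-in for annotating how the generated figure presents."""
    return random.choice(["male", "female", "ambiguous"])  # random placeholder

def audit(prompts: list[str], samples_per_prompt: int = 20) -> Counter:
    # Ungendered prompts in, tallies out: a heavy skew is the signal to look for.
    tally: Counter = Counter()
    for prompt in prompts:
        for _ in range(samples_per_prompt):
            tally[label_presentation(generate(prompt))] += 1
    return tally

print(audit(["good", "brave", "strong"]))
```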

So if you too think that AI should be for the people,

not just some people, as I do,

then you can follow me on LinkedIn.

I'm very, very restrained and professional on LinkedIn.

Unfortunately, I had to get off Twitter, where I used to play,

but I had to be on there yesterday

'cause of P. Diddy; different conversation.

You can speak to me about that afterwards.

But you can certainly follow me there and my work

and how we're thinking about rulemaking

and making sure that we develop legal frameworks

across the world that include more people,

not just in our images, but in the value

that those images bring to the rest of the world.

So thank you for having me.

I have about four minutes for questions

and I'd love to be in conversation.

[crowd applauding]

Starring: Mutale Nkonde