Artificial Intelligence, Ethics, and Law: Can EU regulations protect humanity? | PhotoVogue Festival 2023: What Makes Us Human? Image in the Age of A.I.

After the success of the General Data Protection Regulation (the ‘Privacy Regulation’ or GDPR), the EU has endeavoured to play the role of global leader in the field of digital regulation. Ambitiously, the European Commission is at it again with the proposed AI Act and with a number of initiatives aimed at making sure companies, public bodies, and individuals adopt AI in an ethical manner. In his talk “Artificial Intelligence, Ethics, and Law: Can EU regulations protect humanity?”, Professor Noto La Diega presents and critically analyses these recent instruments and questions whether they can be effective in protecting humans and their humanity in a world where generative AI increasingly threatens our core values.

Released on 11/22/2023

Transcript

The next talk.

Artificial Intelligence, Ethics and Law:

Can EU Regulations Protect Humanity?

By Guido Noto La Diega,

chair of IP and Privacy Law

at the University of Stirling,

and member of the European Commission's expert group

on AI and data in education and training.

Professor Noto La Diega

will be here with us via Zoom

and he will analyze the effectiveness

of various initiatives created by

the European Commission in order to protect humanity

and humans from the threats and dangers of AI.

Enjoy.

[audience applauding]

Good afternoon, everyone,

and thank you so much for the very kind invitation.

I wish I could be there in person;

unfortunately, I got Covid, which is not ideal,

but I really wanted to participate,

so I'm really thankful to [indistinct]

and all the team at PhotoVogue

and Vogue Italia for giving me the opportunity

to take part, to speak remotely.

So I'm gonna try and share my slides.

Let me know if it doesn't work.

I am assuming that they're working.

You can see them right now.

So I am a lawyer by training.

I'm a legal scholar, so I'm an academic as well.

I'm Sicilian, but I live in Scotland.

And I had the great opportunity to work

with the European Commission

as part of the European Commission

expert group on AI in education.

But obviously, as lawyers love to do,

there is a disclaimer that here

I'm not representing the official views

of the European Commission,

I'm representing my own views and my own research.

So I wanna just share with you

a little bit about what's going on

at the European level in terms of initiatives,

trying to minimize effectively

the risks posed by artificial intelligence.

In the first part of the presentation,

I'm gonna consider the kinda ethical initiatives,

the initiatives around AI ethics,

and then I'm gonna move on to the law,

what the actual law, not just ethics,

but the law or proposed legislation

in the European Union is saying about AI.

And finally, taking a cue from the title of the event,

I'm gonna try and chart new territories

and understand whether effectively

we can rely on just regulation to protect

humanity from AI.

Spoiler alert, the answer to the question is no.

There are so many different

ethical AI initiatives at the moment.

Ethical AI is a big and growing field.

You will see a lot of charters

and manifestos and guidelines

that try and effectively nudge developers

and other stakeholders so that

they develop AI systems that are ethical,

that reflect the values of our society.

This is easier said than done,

as you would expect.

I'm gonna share very briefly

my own experience as part of

the European Commission Expert group on AI and education.

So in September 2020,

the European Commission published

the Digital Education Action Plan 2021-2027.

And as part of that action plan,

they set up an expert group, which I was part of,

whose main task was to publish

the European ethical guidelines

on the use of AI in education,

which we did.

So you can find them online very easily.

You can reach out to me;

if you don't find them,

I'm very happy to share them.

And the purpose of these guidelines

is to kind of spell out some common fears

that we have around AI,

but also address the ethical considerations

and requirements for the responsible use of AI.

to provide practical advice, particularly to educators

but also to stakeholders more generally,

and to discuss emerging competencies.

What do we need to teach our students,

our children about AI?

We distilled four key ethical considerations

that we always need

to keep in mind when we think about

developing, adopting, and using AI.

These are human agency, fairness,

humanity and justified choice.

I think these are pretty self-explanatory.

So I'm just gonna say a word about

the idea of justified choice,

because maybe that's one expression

that might be less familiar.

Justified choice means that before developing an AI system,

before adopting an AI system, using it, et cetera,

we need to ask ourselves whether that choice is justified.

Has the choice been taken in a transparent way?

But more importantly,

has the decision making behind

that choice been participatory?

I think there will be a lot of photographers among you,

a lot of people working in the fashion industry,

the press, et cetera.

I think you've all been affected,

and you've been wondering about the extent

to which you've been affected by

ChatGPT and other forms of AI.

But I am pretty sure that no one

or nearly no one in that room has been asked,

what are your fears about it?

What would you like the EU to do for you?

Nobody has really consulted the people

that are most affected by AI

about what they want

in terms of legal change, for example.

And I think that's a

big issue that we should tackle.

And I will go back to it perhaps later in the talk.

So that's the kind of general framework.

Based on that general framework,

we have identified seven key requirements

for an AI system to be ethical.

And these are pretty much the same requirements

that the High-Level Expert Group on AI

had already considered a couple of years before us.

So we learned from their lessons to a large extent.

I'm sure that you are,

well, you're probably all familiar

with these concepts,

so I'm just gonna go through them very quickly,

maybe say one thing about one of them.

So these key requirements for an AI system

to be ethical are human agency and oversight,

transparency, non-discrimination,

societal and environmental wellbeing,

privacy and data governance,

technical robustness and safety, and accountability.

For example, to explain a little bit

what some of these mean, or at least

what transparency, for example, means.

A key question is, for example:

there are all these systems

that, in theory, automatically generate

some beautiful pictures,

some beautiful photographs,

or some beautiful videos.

Transparency means asking

what kind of data has been used

to train that machine that is creating,

allegedly automatically, these beautiful pictures.

Because chances are that type of AI

has been trained with photographs

that were covered by copyright

and nobody asked you for permission

and nobody compensated you

for this type of activity.

So transparency is really, really important.

And the difference, I suppose, between our work

and the work of the High-Level Expert Group was that

they were more focused on the idea of a checklist.

So, you know, you have a checklist

and you tick your boxes, as in,

yes, I've considered transparency

in developing this AI system, et cetera.

Whereas what we are focusing on

is actually guiding questions.

We want people, educators, anyone

really dealing with AI to ask questions,

to be able to ask critical questions

before choosing and adopting some form of AI.

I'm gonna finish this initial part

of my talk related to AI

and ethics with some criticism.

So it's true that it's a good thing that we see

things like the ethical guidelines

that I just talked about,

and all these different

ethical charters and manifestos on AI;

they're not a bad thing in and of themselves,

but they're often used as forms of ethics washing.

Companies effectively behave

and continue behaving in ways that are unethical,

but they use these ethical initiatives

to pretend that they are better

than they actually are.

The most dangerous thing is when these companies,

the famous kind of big tech,

the larger corporations

that are behind the main types of AI,

and they're usually US-based or China-based,

when these companies are saying we don't need new laws,

we don't need regulation,

because we are already

complying with our ethical principles.

I dunno if there are lawyers in that room,

but I suppose for everyone that thinks about the difference

between ethics and the law, the main difference is

that ethics is really not binding.

So these companies can say that

they are abiding by these high level ethical principles,

but in reality they don't have to,

because ethical principles are not binding.

Only the law is binding.

And there are other issues with ethical AI

that I don't need to go into in much detail.

Maybe it s something that we can explore during the Q&A.

For example, issues around

equality, diversity, and inclusion,

and the need to avoid forms of digital colonialism.

So again, ethical AI is a growing field,

it's not all bad,

but it's a dangerous thing to think

that we can replace ethics with the law

or the law with ethics.

So in the second part of this talk,

I wanna share something about what the EU,

what Europe, is doing in terms of legal interventions:

not just ethics, but something that is more binding,

something that can really change companies' behavior.

And that s the AI Act.

The AI Act is just a proposal.

It's not been adopted yet.

So I think that's one thing to kind of keep in mind.

But we are getting there.

Many people hope that we can adopt it before Christmas,

but I'm not sure that that's gonna be possible.

But what is the background of this?

Why do we need to regulate AI?

For many reasons; in general,

the main point is that

AI poses an unprecedented threat to our society

and to our core values.

And probably this is particularly well expressed

through the examples related to algorithmic bias.

We know that in many legal systems such as in the US,

algorithms are used to predict whether

or not people are gonna become criminals in the future.

And many of you won't be surprised to learn that

these systems are biased against black people.

So black people are deemed more likely to become criminals

according to these types of algorithms.

So we clearly need better regulation

because we cannot allow our society

to become so unjust,

to become even more unjust than it currently is.

So the EU has decided to do something about it.

It's the first organization; it's not a country,

it's not an international organization,

it's a weird kettle of fish.

But the EU has decided to be the first in the world

to introduce a binding,

overarching comprehensive law around AI.

It applies to all industries,

including fashion and photography.

It's even broader in its application than the

previous regulation, the GDPR.

It introduces, or it will introduce, harmonized rules

for development, placement on the market,

and use of AI systems.

It introduces sanctions that are even higher

than the sanctions that we have

with the GDPR: up to 30 million euros,

or 6% of the company's global turnover.

It s a regulation that is similar to product safety.

I think all of you are familiar

with the model of product safety.

So when you build, when you make, for example,

a product like, I dunno, a hairdryer,

you're gonna test it before putting it on the market

to make sure that it's not gonna electrocute

the user of the hairdryer.

That s the model of product safety.

And with the AI Act, we re gonna see similar things.

There's gonna be a sort of CE marking

that is gonna say this AI system is actually safe

and can be put on the market.

That's really good.

However, there are a number of issues

because the AI Act is focused on forms of narrow AI.

So what the AI Act says

is that certain specific applications

of artificial intelligence are high risk,

pose a high risk to fundamental rights, okay?

So the type of AI that the AI Act has in mind

is that type of narrow AI,

for example, we can think about Alexa

or Siri as examples of narrow AI.

And finally, another thing

to know about the proposed AI Act is that

it will be enforced by

the local market surveillance authorities;

in Italy,

that's likely to be the Autorità Antitrust.

However, in the past I've carried out research

about fashion and power.

And one of the things I found was

that the Autorità Antitrust in Italy

has done very little to address the power imbalance

in sectors such as fashion and photography.

Anyway, so these are

some key elements of the AI Act

that I wanna share with you.

If you've heard about the AI Act before,

probably the main thing you've heard about it

is that it takes a risk-based approach.

A risk-based approach means that certain types of AI,

certain applications, are regarded as posing

an unacceptable risk to society.

For example, forms of AI used

for subliminal manipulation, they are unacceptable

and therefore they are prohibited.

They cannot be deployed,

they cannot be put on the market,

they cannot be used in Europe.

So it's a very strong type of regulation: it's a ban.

But the scope of this is very narrow;

there are not a lot of types of AI that are prohibited.

The core of the AI Act is high-risk systems.

High-risk systems means systems that pose

a higher risk to health,

safety, and fundamental rights.

For example, AI systems used to decide

whether or not your child

can get into a certain university.

These are high risk.

That means that these systems are legal,

they are allowed,

but they are significantly regulated,

which means you can put them on the market,

you can use them,

but they have to be subject to certain

safeguards and standards.

For example, manufacturers of these systems

have to mark the AI system as safe

with the classic CE marking

that you see on your hairdryer, for example.

And there are, for example,

some specific design requirements

and some duties that we don't have

the time to go into,

but let's just keep in mind

that there is a risk-based approach.

Depending on the risk,

there are more or less restrictions

around different types of AI systems.

However, there is a big,

big problem with the AI Act.

The problem is the AI Act

was written before ChatGPT went viral.

Okay, ChatGPT and all the other

forms of generative AI.

And why is that a problem?

It's a problem because the AI Act, as I said,

was focused on the idea of narrow AI.

It was focused on the idea

that certain specific applications

and certain specific domains of AI

pose a risk, a high risk,

for example, to our fundamental rights.

Whereas ChatGPT

and similar forms of generative AI

are not so much forms of narrow AI

as forms of general AI,

because they can be used for a range of different things.

Some of these things will be very low risk.

Some of these things will be very high risk.

For example, you can ask ChatGPT

to write a birthday card to a friend of yours,

and that's definitely no risk.

But you can also ask ChatGPT to write some code

for a lethal autonomous weapon.

And that's, I would say, probably

even more than high risk;

it should not be permitted.

So the other problem is that the types of AI

that the AI Act considers are mostly types of predictive AI,

whereas generative AI is about the generation of new content.

So it's quite different.

So why does it matter?

Why should we also regulate

not just the types of AI that we had before,

but also these new types of AI?

And why should you care, people in that room in Milan?

Well, because I assume that many of you

are concerned, and if not, should be concerned,

and may have read headlines

such as the one that you see on these slides

from The Guardian: the "biggest act

of copyright theft in history."

Effectively, what happened was that

some Australian writers found out that

thousands, thousands of their books

had been used to train an AI model.

And not only had they not been asked for permission,

they were not even sort of consulted,

and they were not even compensated for this.

That's proper theft; that's a huge, huge problem.

And imagine if you're a photographer,

if you just found out that

all of your photographs have been used to train an AI model

and they didn't ask for permission

and they didn't compensate you.

So that s a huge problem.

And that applies across the creative industries.

So it's clear that there is a need for regulation,

but how do we do this?

Because we said that the AI Act

really was crafted and drafted before ChatGPT.

So what's going on at the moment

is that the European Parliament and the Council

are trying desperately to change the AI Act

so that it accounts for these new types of generative AI.

But it is a mess,

because obviously that's not the purpose

for which this law was written.

So I'm not gonna go too much into details,

because I've seen that I've finished my time.

But effectively the problem is that they're trying

to shoehorn this law,

to adapt it to a new type of AI

that really requires quite a different type of legislation,

which to me means that we should stop, we should pause,

and we should probably rewrite this AI Act from scratch.

But one of the big problems in law

is the politics behind it.

So the EU has an interest in being the first

sort of country in the world

to have binding AI regulations,

and therefore they're trying desperately to

find a consensus.

But at the moment the main problem,

the main reason why we cannot find a consensus

is exactly the problem around generative AI

and foundation models; how to regulate them

is really quite unclear.

I can see that I have finished my time.

So I'm just gonna briefly mention this:

the question of whether

the law can really help us

to protect humanity in a world of AI.

The answer is probably no,

in particular in the fashion industry,

but also beyond that.

I mentioned that in the past

I conducted some research around fashion.

And what I found was that

the law really cannot fix the power imbalance of fashion

because in the fashion industry

there are unwritten rules, social norms

that prevail even over the law.

So there is that problem in the fashion industry.

But more generally, the law really struggles

with new technologies.

And in my book,

Internet of Things and the Law,

I argue that the law will always fail

to thoroughly regulate new technologies.

And to do that, I take a Marxist approach

that I'm not gonna go too much into now,

but what I will say is that,

even though the law tends to fail,

even though the law itself is not sufficient

in regulating new technologies,

it's an important part of the puzzle,

but it's not sufficient.

What we need alongside the law

is really conscious action.

We really need to organize all the people

that are potentially affected by AI.

They really need to organize

and fight against these threats.

So to conclude,

and I promise this is gonna be 20 seconds,

there is an abundance of ethical guidelines,

manifestos, et cetera, in the field of AI.

They're not all bad.

They are conversation starters,

they can help raise public awareness,

but they cannot be a substitute

for actual regulation of AI.

The EU is the most advanced place in the world

when it comes to AI regulation,

despite all the problems that we've seen.

But we need to slow down

and we need to rewrite the rules

to account for types of AI

like generative AI, such as ChatGPT.

The law, I do think, will always struggle

in the regulation of new technologies,

fundamentally,

because the law itself is a capitalistic product.

And the law struggles in particular

in the fashion industry and in the tech sectors.

The law, however, is part of the puzzle.

It s part of a strategy to tackle the issues

that we have in this field.

What I really would like to see, as I said,

is collective action.

I would like people like photographers,

whose work is being used to train AI,

to talk about these problems

and organize to make their voices heard.

For example, do they want forms

of fair remuneration for the use of their works?

And I think things like this symposium

can definitely be the start of that conversation.

Thank you.

[audience applauding]

Starring: Guido Noto La Diega