Carina Zehetmaier:

HOW TRUSTWORTHY IS AI?


About AI bias and its possible consequences


Release: 29.02.2024 on YouTube/Spotify
Expert: Carina Zehetmaier

Host: Gabriele Schelle
Episode: 2


We speak with Carina Zehetmaier, AI expert and entrepreneur, about bias in AI and its possible consequences: How trustworthy is AI really? How does discrimination creep into technology? How do we bring fairness into a system whose inner workings we don't understand? What does AI bias mean in the context of law enforcement? What could the path to fair AI look like? How will AI be regulated under the EU's Artificial Intelligence Act (AIA)?

About the expert:

Carina Zehetmaier is an AI expert, entrepreneur and ambassador for Women in AI Austria. As an entrepreneur with a passion for AI and social entrepreneurship, she enjoys exploring business opportunities that leverage the power of new technologies to benefit people and our planet. A trained human rights lawyer, she has several years of experience in non-governmental, governmental and international organizations and a deep understanding of diplomacy, global politics and international relations.


About the host:

Gabriele Schelle is a theatre director and author.


In this episode:

0:00 Intro

0:56 How trustworthy is AI really?

1:29 The problem of hallucination

4:10 Context of law enforcement

5:36 Black-Box Problem: Research Area “Explainable AI”

6:16 Regulation of AI

6:36 Bias: Wrong decisions due to discrimination

7:26 AI Accountability

9:14 How do you actually measure whether the system is performing or not?

11:15 EU AI Act / Risk pyramid

13:04 Deepfakes

14:24 Human Rights and Impact Assessment

14:58 AI cannot be objective and neutral

15:48 Diversity

17:25 Automation Bias

17:43 "Coded Bias" – documentary featuring Joy Buolamwini (film by Shalini Kantayya, 2020)

17:54 "An algorithm refers to historical information and makes a prediction about the future. So big data is a reflection of our history. It's as if our past is lingering in the algorithms." Joy Buolamwini

21:34 ChatGPT is trained on the English-speaking web

24:59 Environmental Impact of AI

34:12 Fears about AI

35:51 We have to make sure we bring everyone along

36:20 Fake and reality


Editorial staff and collaboration:

Elena Messner (curation), Gabriele Schelle (curation/presentation), Christian Nisslmüller (production manager), Martin Lohr (studio manager), Alisa Karabut (graphics)


Contact:
We look forward to your questions and comments to info@theater-factory.de.


Support us:

If you like this podcast, we would be happy if you would rate it on the podcast platform. Subscribe to the podcast on the platform of your choice - and feel free to recommend us!


We would like to thank the Schleswig-Holstein State Library for supporting the podcast.


Thanks for listening and see you next time!

"An algorithm refers to historical information and makes a prediction about the future. So big data is a reflection of our history. It's as if our past is lingering in the algorithms."

Joy Buolamwini in

Coded Bias


(Film by Shalini Kantayya, 2020)

Share by: