Professor Shannon Vallor on the AI Mirror
Description
What if we saw Artificial Intelligence as a mirror rather than as a form of intelligence? That’s the subject of a fabulous new book by Professor Shannon Vallor, who is my guest on this episode.
In our discussion, we explore how artificial intelligence reflects not only our technological prowess but also our ethical choices, biases, and the collective values that shape our world.
We also discuss how AI systems mirror our societal flaws, raising critical questions about accountability, transparency, and the role of ethics in AI development.
Shannon helps me examine the risks and opportunities presented by AI, particularly in the context of decision-making, privacy, and AI’s potential to influence societal norms and behaviours.
This episode offers a thought-provoking exploration of the intersection between technology and ethics, urging us to consider how we can steer AI development in a direction that aligns with our shared values.
Guest Biography
Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy.
She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council. Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices.
Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study of Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association of Computing and Philosophy.
She is a former Visiting Researcher and AI Ethicist at Google. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).
Links
Shannon's website: https://www.shannonvallor.net/
The AI Mirror: https://global.oup.com/academic/product/the-ai-mirror-9780197759066
A Noema essay by Shannon on the dangers of AI: https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
A New Yorker feature on the book: https://www.newyorker.com/culture/open-questions/in-the-age-of-ai-what-makes-people-unique
The AI Mirror as one of the FT’s technology books of the summer: https://www.ft.com/content/77914d8e-9959-4f97-98b0-aba5dffd581c
The FT review of The AI Mirror: https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011
The Edinburgh Futures Institute: https://efi.ed.ac.uk/
The clip from the movie "Real Genius" that she refers to: https://www.youtube.com/watch?v=wB1X4o-MV6o
AI-Generated Timestamped Summary of Key Points:
00:02:30: Introduction to Professor Shannon Vallor and her work.
00:06:15: Discussion on AI as a mirror of societal values.
00:10:45: The ethical implications of AI decision-making.
00:18:20: How AI reflects human biases and the importance of transparency.
00:25:50: The role of ethics in AI development and deployment.
00:33:10: Challenges of integrating AI into human-centred contexts.
00:41:30: The potential for AI to shape societal norms and behaviours.
00:50:15: Professor Vallor’s insights on the future of AI and ethics.
00:58:00: Closing thoughts and reflections on AI’s impact on humanity.
Information
Author | Human Risk |
Organization | Human Risk |
Website | - |
Tags |