
Public lecture, free entry
Abstract:
In the literature, authors traditionally diagnose a responsibility gap for responsible AI use. They argue that, because of the opaque nature of AI decision-making and the absence of direct control over the behaviour of autonomous systems, it is often impossible to establish who is responsible when things go wrong, and that this is intuitively problematic. Tillmann Vierkant argues that this focus on epistemic opacity and absence of direct control is a red herring. This is because, as recent developments in cognitive science show, we face surprisingly similar challenges when it comes to establishing whether humans have the right knowledge and the right control to be responsible for their actions. Philosophers impressed by this challenge to human responsibility have recently developed new instrumentalist views, which emphasize the forward-looking and flexible role of agency cultivation for responsibility. These views do not depend on knowledge and control in the same way and therefore provide a plausible route for thinking about responsible AI use. But while instrumentalist views show great promise, there is one underdiscussed problem in the case of AI that such accounts also struggle with. Responsibility practices for humans are a high-stakes enterprise that we cannot escape if we want to be accepted as fully responsible agents. Dr Vierkant argues that neither the AI nor any human component of the system faces similar existential vulnerabilities. He concludes that this invulnerability to moral challenge is the most significant responsibility gap when it comes to AI.
This free public lecture takes place as part of Dny AI 2025 (Days of AI 2025).
Dr Tillmann Vierkant is Professor of Neurophilosophy of Agency and Free Will at the School of Philosophy, Psychology and Language Sciences (PPLS) at the University of Edinburgh.
His research interests center on questions about the nature of mental actions. His monograph on related issues, The Tinkering Mind, appeared in October 2022 with Oxford University Press. He has also worked extensively on topics relating to free will and voluntary action in an interdisciplinary context and is part of a multi-million-dollar project on the Neurophilosophy of Free Will. He has also written on willpower and the extended mind, implicit bias, self-knowledge, mindreading, and connections between language evolution and voluntary action. He has been involved in experimental philosophy (X-phi) exploring folk intuitions on freedom and responsibility, and he has edited two volumes on the cognitive science challenge to free will, most notably Decomposing the Will.
His newest research interest is in how to bridge responsibility gaps in AI systems using lessons from neuroethics. This interest has led to a collaboration with Prof Shannon Vallor on the UKRI-funded project "Making Systems Answer."

Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.