
Institute of Philosophy, Jilská 1, Seminar Room 124a, Prague 1
Abstract:
In previous work we argued that the real challenge to responsible AI use stems from the structural invulnerability to moral address created by the many hands problem, which is so prevalent in AI use. But this focus on many hands may seem surprising given that we also argued that the knowledge and control conditions (K&C) of responsibility are less important than often assumed. If these conditions are less important, then why can't we hold users of AI responsible even when they fail to meet them? If we can, then many hands does not seem to be as much of a problem as we make out, because there are individuals (users) who are apt for moral address when things go wrong. This paper details the answer to this challenge. It argues that what makes agents responsible even when they fail to meet K&C is the right kind of scaffoldability. Humans learn from their mistakes and use their justifications to shape their future minds. This regulative dimension of justifications sets humans apart from AIs, where even explanation in terms of reasons is only ever backward looking (see Peters 2023). It is the possibility of agency cultivation, which sets humans apart from AIs, that also justifies moral emotions directed at human agents even if they do not fully fulfil the K&C conditions. Because AI users typically do not have the same opportunities for this kind of learning, not normally being involved in the development of the system's agency, they are also less apt as targets for our responsibility practices. I conclude by exploring consequences of this finding.
Tillmann Vierkant is a Professor of Neurophilosophy of Agency and Free Will at the School of Philosophy, Psychology and Language Sciences (PPLS) at the University of Edinburgh.
His research interests centre on questions about the nature of mental actions. His monograph on related issues, The Tinkering Mind, appeared in October 2022 with Oxford University Press. He has also worked extensively on topics relating to free will and voluntary action in an interdisciplinary context and is part of a multi-million-dollar project on the Neurophilosophy of Free Will. He has also written on willpower and the extended mind, implicit bias, self-knowledge, mindreading, and connections between language evolution and voluntary action. He has also been involved in experimental philosophy (X-phi) exploring folk intuitions on freedom and responsibility, and has edited two volumes on the cognitive science challenge to free will, most notably Decomposing the Will.
His newest research interest is in how to bridge responsibility gaps in AI systems using lessons from neuroethics. This interest has led to a collaboration with Prof Shannon Vallor on the UKRI-funded project "Making Systems Answer".
Free entry.

Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.