
On 11–12 November 2025, CETE-P hosted Prof. Tillmann Vierkant (University of Edinburgh). Prof. Vierkant works at the intersection of philosophy of mind, action theory, and ethics, with a focus on how human agency and responsibility are being reshaped by contemporary technologies. During his visit, Prof. Vierkant delivered two lectures examining the ethical challenges that arise from the deployment of artificial intelligence and autonomous systems. A central theme across both talks was the responsibility gap — the troubling possibility that harms caused through AI may lack anyone who can be held properly answerable. He examined how such gaps emerge not only from opacity or diminished human control, but also from the “many hands” problem, where actions are distributed across design teams, data pipelines, and users in ways that diffuse accountability.
The first talk, a seminar titled Problems with Responsible AI: Vulnerability, Explainability and Mindshaping, took place on 11 November 2025 at the Institute of Philosophy CAS, Prague. It argued that responsibility practices depend on more than identifying causal contributors: they hinge on an agent's capacity to take up moral challenges, to learn, and to make good on harms they have caused. Humans possess this regulative and forward-looking dimension of agency; AI systems do not. And because users typically have no involvement in how AI systems develop their policies or behavior, they are often not in a position to engage in this kind of justificatory, prospective answerability. This reveals a deeper form of responsibility gap than is commonly acknowledged.
The second talk, a public lecture titled Doc Brown, AI and the Responsibility Gap, took place on the evening of 11 November 2025 at the Czech Academy of Sciences, Národní 3, Prague 1, as part of Days of AI. Here, Prof. Vierkant introduced a broader audience to the same core concerns, emphasizing that responsibility has both retrospective elements (responding to past harms) and prospective ones (committing to improvement and repair). He argued that AI systems and the human actors surrounding them often lack the vulnerability that motivates individuals to accept normative demands and to justify their actions in ways that matter to others. This, he suggested, may be the most significant gap of all.
Both events prompted thoughtful discussion about what it will take to sustain meaningful responsibility practices in increasingly automated environments. CETE-P was delighted to host Prof. Vierkant and looks forward to continued collaboration on these urgent ethical and philosophical questions.
