
Public lecture, free entry
Abstract:
The talk proposes that AI poses an existential risk to democracies. The basic argument is that AI affords or encourages the formation of epistemic communities with fundamentally different shared understandings of the world. The particular problem the talk addresses is that this existential risk arises even when we are not focused on deliberate acts of disinformation, propaganda, and the like. The lecture looks at the problems of attention and focus: AI is both driving ever greater volumes and velocities of information production and distribution, and is proposed as a solution to the resulting problem of information overload. Adam Henschke then argues that, for democracies to persist, the democratic community must have a shared set of understandings about features of the world relevant to its political practices. When epistemic communities arise that no longer share these understandings, we have the phenomenon of epistemic secession. This epistemic secession is particularly problematic for democratic survival. Returning to AI, the talk presents the concept of post-reality technologies as the next step among the factors that drive epistemic secession. The lecture engages with a counterexample to show that epistemic secession is a risk to democratic survival only when it relates to shared political understandings. It closes by placing the risks of epistemic secession in the wider context of cognitive warfare and deliberate efforts to degrade shared political understandings.
Adam Henschke is Assistant Professor and Research Director with the Philosophy Section at the University of Twente. His work combines ethics of technology and institutional ethics, developing a pluralist method to offer analyses and critiques of responsibility. This techno-institutionalist approach allows him to explore conceptual and normative issues relating to security, health, information, and technology ethics, and to offer critiques regarding the ethical, social, and political responsibilities of military, intelligence, national security, clinical and public health, information technology, and policy-making institutions.
He has published on national security and military ethics, the ethics of surveillance technologies, the ethics of intelligence institutions, care for enhanced veterans, ethics and the internet of things, and the ethics of cybersecurity. He is currently working on public health and disordered information, the ethics of nudges, and trust in informationally mediated environments. Recent books include the single-authored Cognitive Warfare: Grey Matters in Contemporary Political Conflict, the co-authored The Ethics of National Security Institutions: Theory and Applications, and the co-edited The Ethics of Surveillance in Times of Emergency.
This event is organized as part of the project Human-centred AI for a Sustainable and Adaptive Society (HumanAId, reg. no. CZ.02.01.01/00/23_025/0008691), implemented by the Institute of Philosophy CAS.
The main objective of the project is to develop methodologies and tools that will enable the potential of large language models (LLMs) to be used in a way that is in line with the values and normative requirements of specific users in civil society and government.

Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Research Executive Agency (REA). Neither the European Union nor the granting authority can be held responsible for them.