
On 6 March 2026, CETE-P and IRLaB hosted the workshop "Critically Examining AI", attended by historians, sociologists, economists, computer scientists, and philosophers. It brought together scholars from several local institutions along with a visiting scholar from Linköping University.
The talks addressed different aspects of AI. Lars Lindblom discussed the idea of ‘moral AI’ for healthcare priority setting, arguing that a prediction is not a piece of advice and that the aggregation methods of machine learning reveal only optimised values, not the reasons that genuine moral advice requires. Hana Porkertova and Sabina Vassileva presented their research on attending to risky attachments, showcasing patient-led innovations in type 1 diabetes care. Lucy Císař Brown spoke on the emergence of technological re-enchantment in the age of AI and the problems of our willing epistemic submission. Finally, Paula Gürtler and Jose Luis Guerrero Quiñones presented on the use of AI in the public sector and the problems it creates for democracy. The diversity of presentations and perspectives highlights that AI raises many issues that call for interdisciplinary study.
In the final discussion panel, participants took up the question of whether refusal becomes necessary when there is a consent-undermining power imbalance between technology and humans. This sparked a conversation about what refusal means, what forms it can take, and how institutional arrangements might limit it. The discussion demonstrated the need for closer attention to political and public responses to AI.
Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.