






On 23 April 2026, Aurelie Herbelot gave a talk entitled “Is there an Alternative to Big AI? From Language Models to Models of Language” at the headquarters of the Czech Academy of Sciences, Národní 3, Praha 1. The talk offered an overview of LLMs from both an engineering and a scientific perspective, with the aim of clarifying the ontological status of the underlying algorithm. The first part of the talk addressed the engineering question and described the algorithm behind LLMs: the Transformer. Herbelot’s exposition drew on a tiny version of the Transformer, developed specifically for educational purposes and designed to be “opened up” by non-experts. The system was trained from scratch during the session, illustrating how (and what) a language model actually learns from its data. The second part of the talk turned to scientific matters and discussed the epistemological underpinnings of the Transformer. Herbelot argued that the Transformer’s architecture does not encode any known theory of language and therefore cannot be taken as a scientific model. On this basis, she cast doubt on the suitability of LLMs for any kind of knowledge inquiry, and argued instead for the development of Small Models of Language.
Aurelie Herbelot is a computational semanticist. After 18 years in academic research, she now runs Denotation UG, a small company dedicated to bringing sustainable AI systems to the real world. Previously, she was an assistant professor at the Center for Mind/Brain Sciences, University of Trento (Italy). Aurelie obtained a PhD in Natural Language Processing from the University of Cambridge, after which she was an Alexander von Humboldt Fellow in Potsdam and held postdoctoral positions in Cambridge, Stuttgart, and at the Center for Mind/Brain Sciences in Trento. She briefly moved to the Universitat Pompeu Fabra in Barcelona as a Marie Skłodowska-Curie fellow and returned to Trento in 2018 as a faculty member. In 2023, she left academia to found Denotation UG because she wanted to provide an alternative to the dominant discourse about AI. For more information, please consult Aurelie’s website: www.aurelieherbelot.net.
Photos: Anna Šolcová
The event was co-organized with AVU Emergent Technologies Research Group and AI in Context (AI v kontextu) Group.



This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.