Free entry
James Mensch focuses primarily on phenomenology and existential philosophy. He is a Full Professor at the Faculty of Humanities at Charles University in Prague. He is also a Senior Research Professor at the Department of Philosophy, Saint Francis Xavier University, Antigonish, Nova Scotia, and a Sir Walter Murdoch Distinguished Collaborator in the School of Arts, Murdoch University, Perth, Australia. He is also a member of the Central European Institute of Philosophy. He has published over a dozen books and numerous articles in international journals.
Abstract:
Blake Lemoine, a Google software engineer, claimed that LaMDA—Google’s Language Model for Dialogue Applications—was sentient. What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbot responded: “I can understand and use natural language like a human can.” This means that it can use “language with understanding and intelligence” like humans do. After all, the chatbot adds, language “is what makes us different than other animals.” In this paper, I examine this claim. Is LaMDA’s ability to use language a sign of its human consciousness? Chatbots learn language from the speech of Others, which they pick up from the internet. Human learning begins with a direct sensuous contact with the world. What does this distinction imply about their respective intelligence? More precisely, what role does the embodiment that puts us in such sensuous contact play in assessing natural versus artificial intelligence? These are the questions I shall be exploring.
Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.