Author: John Dorsch
The Routledge Handbook of Mindshaping (2025)
DOI: https://doi.org/10.4324/9781032639239
Abstract:
This chapter examines possible ramifications of mindshaping a social robot. It explores how such an agent might learn to represent psychological states, align its behavior with evolving societal norms, and develop capacities for self-directed mindreading and normative self-knowledge. Integrating perspectives from cultural evolution and naturalized intentionality, this approach suggests that social robots could achieve a level of norm-based self-regulation typically reserved for humans, fulfilling criteria for moral and legal personhood. However, this possibility raises ethical concerns: creating a self-knowing agent would tax caregiving resources, since we would need to provide for AI welfare, thereby undermining our capacity to act responsibly toward humans, non-human animals, and the environment, to whom our moral consideration is already owed and desperately needed. The chapter therefore concludes by urging caution, warning that attempts to cultivate moral responsibility in artificial agents may have destabilizing consequences for our moral practices.
Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.