Other publication
Looks like a human, acts like a human, but is it something else? AI as Schein-Dasein
(Extended conference abstract: International Conference on the Ethical and Social Impacts of ICT (ETHICOMP) 2024)
Authors: Koskinen Jani, Westerstrand Salla, Naskali Juhani
Conference name: International Conference on the Ethical and Social Impacts of ICT
Publisher: Universidad de La Rioja
Publication year: 2024
Book title: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts of ICT
First page: 142
Last page: 145
Number of pages: 4
ISBN: 978-84-09-58160-3
Web address: https://dialnet.unirioja.es/descarga/articulo/9326106.pdf
The ethics of systems utilising Artificial Intelligence (AI) is an increasingly discussed topic in academia (e.g., Franzke, 2022), industry (e.g., Morley et al., 2021; Jobin et al., 2019), and popular media (Ouchchy et al., 2020). The recent popularisation of AI, following the introduction of new solutions with easier user interfaces, such as ChatGPT, has further accelerated the discussion. How do these technologies influence people’s lives? Can AI have moral agency? How should we interact with the technology when it reminds us of our fellow humans? Can we soon even make such distinctions, and if not, what could that mean for our moral agency?
To shed light on these complex questions, one option is to lean on Martin Heidegger’s work and engage in an ontological discussion on the role of AI systems through the concept of Dasein.
Dasein is the central term for Heidegger (1927). He placed the concept of being under deep and sustained ontological investigation and used it to describe a human existence that has awareness and confronts its own being in this world. In Being and Time, Heidegger presents three modes of being, namely ready-to-hand, present-at-hand and Dasein, each of which differs from the others in its characteristics. He did not offer strict or explicit explanations of being in Being and Time because the project was never entirely completed. Instead, he attempted to bring clarity to the question from different perspectives, emphasising individual comprehension: for Heidegger, being is grounded in hermeneutical phenomenology, which, in simple terms, means that being can only be investigated from the first-person perspective. Only people themselves can reach an understanding of their Dasein (see Heidegger, 1927).
For an ontological analysis of AI, it is essential to understand the three primary modes of being defined by Heidegger. Doing so reveals that AI systems could be giving rise to something novel: a fourth mode of being.
Heidegger describes things (objects) and their being through the example of a hammer. First, Heidegger explains that something is ready-to-hand if it has some purpose to accomplish – as a hammer is used for hammering (Heidegger, 1927, §15–18). Usually, we do not give much consideration to the objects we use; we just use them as we always have and accept that they are there, ready for us to use to accomplish a certain goal. For example, when you are reading an article, the tool (the paper or the screen) that allows you to read it is not used consciously. You just use it and hopefully concentrate on the content of the article and get some sense out of it (the goal, or the purpose). Thus, we use such objects in the way they are meant to be used – or should we say, how they are properly used.
The second mode of being – present-at-hand – can be exposed by the breaking of an object. Brokenness reveals the object and exposes its nature, that is, the purpose for which the thing exists (see Heidegger, 1927, §16). The term referral indicates that we understand the meaning of an object through its references: for example, a hammer refers to nails and wood, and towards the wall under construction. When the hammer breaks, we become conscious of its nature – it is revealed to us. When the hammer is not broken, we do not give it much thought, and it is revealed as ready-to-hand. Heidegger (1927, §18) shows that objects that are ready-to-hand appear to the observer in the context of the surrounding world and are referred, along with other things in the world, to some purpose. Entities have significance only in their full context: a knife is a different thing in the kitchen, in a theatre, or in the hands of a criminal (Harman, 2010).
What makes situating an AI system – such as a transformer-based language model like ChatGPT – in the hammer example difficult is that the being of AI does not seem to be limited to the ready-to-hand. Instead, AI systems are something that to the human eye resembles Dasein, or ‘the individual human mode of being in the world’, which is one of the ways to grasp and present the meaning of the original German term (). The special character of Dasein compared to the other two modes of being is that Dasein is the only one that can have an understanding of its own being and hence can also investigate it. Thus, Dasein is a mode of being that is traditionally associated with (only) human beings (Van Der Hoorn and Whitty, 2015). This understanding of one’s existence is the key factor that separates Dasein from present-at-hand and especially from ready-to-hand. Dasein can see the present-at-hand and the ready-to-hand, but Dasein cannot truly be reached as present-at-hand or as ready-to-hand. Things, or artefacts, can be present or ready, but only Dasein (the human) can see the other modes and give meanings to them.
Hence, we argue that AI has given birth to a fourth mode of being: Schein-Dasein (looking-like-Dasein), which reveals itself in such ways that it seems like Dasein – a human behind the technology. AI systems may appear as witty conversationalists, therapists, or even romantic partners (Hale, 2023; Cost and Court, 2023). AI can even seem able to conduct deep self-investigations (a key factor separating people as Dasein from objects) that ordinary people cannot easily achieve because of our human limitations. The interaction with AI can, at its best (or worst, depending on the situation), give people new insights and provoke feelings of empathy and of being understood.
In the future, it is possible that people will not be able to distinguish between actual Dasein and Schein-Dasein – although the idea of seeing Dasein is a paradox in itself, as Dasein is always a lived experience of individuals, by themselves. As a consequence, due to the ongoing popularisation of ever more pervasive AI systems (), we may end up in a situation where our being (Dasein) is left alone, as we cannot be entirely sure that we are living with other self-conscious people. Instead, we may feel like we are left alone with mere objects. This also poses a problem concerning death as a possibility for us as Dasein.
As Heidegger (1927, §51–53) shows us, death is something that only Dasein can and must face, and it should not be confronted in the way the ordinary man (das Man) does. Das Man is a term Heidegger (1927) uses to describe a situation where people consciously choose to hide or lose themselves and replace themselves with commonly given ways of being or acting, whereas Dasein lives life consciously and actively makes sense of it. Thus, das Man could be described as a generally accepted and non-disturbing way of living or being. However, death is an issue that cannot be outsourced to das Man, because a commonly shared way of living cannot reach or face death. In fact, das Man gives justification and adds temptation to cover oneself up from one’s ownmost possibility as being-towards-death (Sein-zum-Tode) (Heidegger, 1927). By being-towards-death one could see what is important and how one wants to spend one’s life, for example, with family.
Yet, in the case of AI, death has a very different meaning. We are not sure anymore whether we are living with people or with Schein-Dasein: objects that deceive us into drawing false conclusions about our surroundings. This could lead to Dasein turning into das Man, who is not able to reflect consciously and make sense of itself or its surroundings. We merely believe that we are Dasein in this life and have, for example, decided to be with our family (which could turn out to be a collection of AI systems) because that was what seemed worthwhile to a being-towards-death who recognises that a limited lifespan is worth spending wisely. Furthermore, AI makes it possible to claim Schein-Dasein as our own – to “create” art by tasking an AI to do it, or to “write” a book by requesting it from an AI, claiming the apparent creativity as our own, living as das Man with the outer appearance and self-esteem of Dasein.
Introducing human-like AI systems raises fundamental ethical questions: if we can no longer distinguish Schein-Dasein from Dasein, how much of human autonomy do we have left? Can we ever make rational decisions and interact with others in a way that enables ethical action, which would be a requirement in, e.g., Habermasian discourse ethics? Do we have moral obligations towards Schein-Dasein, as we would have towards Dasein according to ? When does Schein-Dasein, in fact, become Dasein – if ever? Such a fourth mode of being requires further examination.