A1 Peer-reviewed original article in a scientific journal
Talking existential risk into being: a Habermasian critical discourse perspective to AI hype
Authors: Westerstrand Salla, Westerstrand Rauli, Koskinen Jani
Publisher: Springer Nature
Publication year: 2024
Journal: AI and Ethics
eISSN: 2730-5961
DOI: https://doi.org/10.1007/s43681-024-00464-z
Web address: https://doi.org/10.1007/s43681-024-00464-z
Self-archived version address: https://research.utu.fi/converis/portal/detail/Publication/387562814
Recent developments in Artificial Intelligence (AI) have resulted in a hype around both the opportunities and risks of these technologies. In this discussion, one argument in particular has gained increasing visibility and influence in various forums and positions of power, ranging from public to private sector organisations. It suggests that Artificial General Intelligence (AGI) surpassing human intelligence is possible, if not inevitable, and that, if not controlled, it could lead to human extinction (the Existential Threat Argument, ETA). Using Jürgen Habermas's theory of communicative action and the validity claims of truth, truthfulness and rightness therein, we inspect the validity of this argument and the ethical and societal implications that follow from it. Our analysis shows that the ETA is problematic in terms of scientific validity, truthfulness, and normative validity. This risks directing AI development towards a strategic game driven by the economic interests of the few rather than ethical AI that is good for all.
Downloadable publication: This is an electronic reprint of the original article.