A1 Peer-reviewed original article in a scientific journal

Talking existential risk into being: a Habermasian critical discourse perspective to AI hype




Authors: Westerstrand Salla, Westerstrand Rauli, Koskinen Jani

Publisher: Springer Nature

Publication year: 2024

Journal: AI and Ethics

eISSN: 2730-5961

DOI: https://doi.org/10.1007/s43681-024-00464-z

Web address: https://doi.org/10.1007/s43681-024-00464-z

Self-archived copy address: https://research.utu.fi/converis/portal/detail/Publication/387562814


Abstract

Recent developments in Artificial Intelligence (AI) have resulted in a hype around both the opportunities and the risks of these technologies. In this discussion, one argument in particular has gained increasing visibility and influence in various forums and positions of power, ranging from public to private sector organisations. It suggests that Artificial General Intelligence (AGI) that surpasses human intelligence is possible, if not inevitable, and that, if not controlled, it could lead to human extinction (the Existential Threat Argument, ETA). Using Jürgen Habermas's theory of communicative action and the validity claims of truth, truthfulness and rightness therein, we inspect the validity of this argument and the ethical and societal implications that follow from it. Our analysis shows that the ETA is problematic in terms of scientific validity, truthfulness and normative validity. This risks directing AI development towards a strategic game driven by the economic interests of the few rather than towards ethical AI that is good for all.


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




