A4 Peer-reviewed article in conference proceedings
"You Always Get an Answer" : Analyzing Users' Interaction with AI-Generated Personas Given Unanswerable Questions and Risk of Hallucination
Authors: Kaate, Ilkka; Salminen, Joni; Jung, Soon-Gyo; Xuan, Trang Thi Thu; Häyhänen, Essi; Azem, Jinan Y.; Jansen, Bernard J.
Editors: Li, Toby; Paternò, Fabio; Väänänen, Kaisa; Leiva, Luis A.; Spano, Davide; Verbert, Katrien
Conference name: International Conference on Intelligent User Interfaces
Publisher: Association for Computing Machinery
Publication year: 2025
Proceedings title: IUI '25: Proceedings of the 30th International Conference on Intelligent User Interfaces
Journal name in database: International Conference on Intelligent User Interfaces, Proceedings IUI
First page: 1624
Last page: 1638
ISBN: 979-8-4007-1306-4
DOI: https://doi.org/10.1145/3708359.3712160
Web address: https://doi.org/10.1145/3708359.3712160
Self-archived version address: https://research.utu.fi/converis/portal/detail/Publication/491881323
Abstract: In a 54-user within-subjects experiment, we investigated the presence and acceptance of hallucinations (i.e., accidental misinformation) in an AI-generated persona system that leverages large language models to create personas from survey data. After interacting with the personas, users were tasked with asking them a series of questions, including an unanswerable question, meaning one the personas lacked the data to answer. The AI-generated persona system provided a plausible but incorrect answer about half (52%) of the time; more than half of the time (57%), users accepted the incorrect answer, and the rest of the time they answered the unanswerable question correctly (i.e., that there was no answer). We found that when the AI-generated persona hallucinated, users were significantly more likely to answer the unanswerable question incorrectly; analyzed separately by gender, this held for both female and male users. We identified four themes in the AI-generated personas' answers and found that users perceived the answers to the unanswerable question as long and unclear. The findings imply that personas leveraging LLMs require guardrails ensuring that, when asked unanswerable questions, the personas clearly state the possibility of data restrictions and hallucinations.
Downloadable publication: This is an electronic reprint of the original article.