A4 Peer-reviewed article in conference proceedings

"You Always Get an Answer" : Analyzing Users' Interaction with AI-Generated Personas Given Unanswerable Questions and Risk of Hallucination




Authors: Kaate, Ilkka; Salminen, Joni; Jung, Soon-Gyo; Xuan, Trang Thi Thu; Häyhänen, Essi; Azem, Jinan Y.; Jansen, Bernard J.

Editors: Li, Toby; Paterno, Fabio; Väänänen, Kaisa; Leiva, Luis A.; Spano, Davide; Verbert, Katrien

Established conference name: International Conference on Intelligent User Interfaces

Publisher: Association for Computing Machinery

Publication year: 2025

Journal: International Conference on Intelligent User Interfaces

Book title: IUI '25: Proceedings of the 30th International Conference on Intelligent User Interfaces

Journal name in database: International Conference on Intelligent User Interfaces, Proceedings IUI

First page: 1624

Last page: 1638

ISBN: 979-8-4007-1306-4

DOI: https://doi.org/10.1145/3708359.3712160

Web address: https://doi.org/10.1145/3708359.3712160

Self-archived copy's address: https://research.utu.fi/converis/portal/detail/Publication/491881323


Abstract
We investigated the presence and acceptance of hallucinations (i.e., accidental misinformation) in an AI-generated persona system that uses large language models (LLMs) to create personas from survey data, in a 54-user within-subjects experiment. After interacting with the personas, users were tasked with asking the personas a series of questions, including an unanswerable question, i.e., one the personas lacked the data to answer. The AI-generated persona system provided a plausible but incorrect answer about half (52%) of the time; users accepted the incorrect answer more than half (57%) of the time and answered the unanswerable question correctly (no answer) the rest of the time. We found that when the AI-generated persona hallucinated, users were significantly more likely to answer the unanswerable question incorrectly; this held for female and male users analyzed separately. We identified four themes in the AI-generated personas' answers and found that users perceived the personas' answers to the unanswerable question as long and unclear. The findings imply that personas leveraging LLMs require guardrails to ensure that the personas clearly state the possibility of data restrictions and hallucinations when asked unanswerable questions.
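To make the guardrail recommendation concrete, below is a minimal Python sketch of one possible approach: prepending a no-data disclosure instruction to every persona query and checking the reply before showing it to the user. This is not the authors' system; the prompt wording and all names (GUARDRAIL_INSTRUCTIONS, ask_persona, the injected ask_model callable) are hypothetical illustrations, and the model call is stubbed out so the example stays self-contained.

from typing import Callable

# Hypothetical guardrail prompt: instructs the persona to disclose data
# limits instead of fabricating a plausible answer (not the paper's prompt).
GUARDRAIL_INSTRUCTIONS = (
    "You are a persona generated from survey data. Answer ONLY from the "
    "persona's underlying data. If the data does not cover the question, "
    "reply exactly with: 'NO DATA: my survey data does not cover this.' "
    "Never invent a plausible-sounding answer."
)

def ask_persona(question: str,
                persona_profile: str,
                ask_model: Callable[[str], str]) -> str:
    """Query a persona through an injected LLM callable, with the
    no-data guardrail prepended to every request."""
    prompt = (f"{GUARDRAIL_INSTRUCTIONS}\n\n"
              f"Persona data:\n{persona_profile}\n\n"
              f"Question: {question}")
    answer = ask_model(prompt)
    # Post-hoc check: surface the data restriction explicitly to the user
    # instead of passing through a partially fabricated reply.
    if answer.strip().startswith("NO DATA"):
        return ("This persona cannot answer: the underlying survey data "
                "does not contain this information.")
    return answer

if __name__ == "__main__":
    # Stub standing in for a real LLM call, simulating an unanswerable case.
    stub = lambda prompt: "NO DATA: my survey data does not cover this."
    print(ask_persona("What is your favorite movie?",
                      "Age: 34; Country: Finland; Media habits: news apps",
                      stub))

Injecting the model as a callable keeps the sketch independent of any particular LLM API; a production guardrail would likely combine such prompt-level instructions with retrieval-side checks on what the survey data actually covers.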

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.





Last updated on 2025-05-20 at 11:38