TY - GEN
T1 - "You Always Get an Answer"
T2 - 30th International Conference on Intelligent User Interfaces, IUI 2025
AU - Kaate, Ilkka
AU - Salminen, Joni
AU - Jung, Soon Gyo
AU - Xuan, Trang Thi Thu
AU - Häyhänen, Essi
AU - Azem, Jinan Y.
AU - Jansen, Bernard J.
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/3/24
Y1 - 2025/3/24
N2 - We investigated the presence and acceptance of hallucinations (i.e., accidental misinformation) in an AI-generated persona system that leverages large language models to create personas from survey data, using a 54-user within-subjects experiment. After interacting with the personas, users were tasked with asking the personas a series of questions, including an unanswerable question, i.e., a question the personas lacked the data to answer. The AI-generated persona system provided a plausible but incorrect answer about half (52%) of the time, and more than half of the time (57%), users accepted the incorrect answer; the rest of the time, users answered the unanswerable question correctly (no answer). We found that when the AI-generated persona hallucinated, users were significantly more likely to answer the unanswerable question incorrectly. Analyzing genders separately, when the AI-generated persona hallucinated, both female and male users were significantly more likely to answer the unanswerable question incorrectly. We identified four themes in the AI-generated personas' answers and found that users perceived the personas' answers to the unanswerable question as long and unclear. The findings imply that personas leveraging LLMs require guardrails ensuring that personas clearly state the possibility of data restrictions and hallucinations when asked unanswerable questions.
AB - We investigated the presence and acceptance of hallucinations (i.e., accidental misinformation) in an AI-generated persona system that leverages large language models to create personas from survey data, using a 54-user within-subjects experiment. After interacting with the personas, users were tasked with asking the personas a series of questions, including an unanswerable question, i.e., a question the personas lacked the data to answer. The AI-generated persona system provided a plausible but incorrect answer about half (52%) of the time, and more than half of the time (57%), users accepted the incorrect answer; the rest of the time, users answered the unanswerable question correctly (no answer). We found that when the AI-generated persona hallucinated, users were significantly more likely to answer the unanswerable question incorrectly. Analyzing genders separately, when the AI-generated persona hallucinated, both female and male users were significantly more likely to answer the unanswerable question incorrectly. We identified four themes in the AI-generated personas' answers and found that users perceived the personas' answers to the unanswerable question as long and unclear. The findings imply that personas leveraging LLMs require guardrails ensuring that personas clearly state the possibility of data restrictions and hallucinations when asked unanswerable questions.
KW - AI-generated personas
KW - generative AI
KW - human-computer interaction
KW - misinformation
KW - user experience
UR - https://www.scopus.com/pages/publications/105001919671
U2 - 10.1145/3708359.3712160
DO - 10.1145/3708359.3712160
M3 - Conference contribution
AN - SCOPUS:105001919671
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 1624
EP - 1638
BT - IUI 2025 - Proceedings of the 2025 International Conference on Intelligent User Interfaces
PB - Association for Computing Machinery
Y2 - 24 March 2025 through 27 March 2025
ER -