A4 Refereed article in a conference publication
Performance Evaluation of LLM Hallucination Reduction Strategies for Reliable Qualitative Analysis
Authors: Adeseye, Aisvarya; Isoaho, Jouni; Mohammad, Tahir
Editors: Arabnia, Hamid R.; Deligiannidis, Leonidas; Amirian, Soheyla; Ghareh Mohammadi, Farid; Shenavarmasouleh, Farzan
Conference name: International Conference on the AI Revolution
Publisher: Springer
Publication year: 2026
Journal: Communications in Computer and Information Science
Book title: AI Revolution: Research, Ethics and Society: International Conference, AIR-RES 2025, Las Vegas, NV, USA, April 14–16, 2025, Proceedings, Part I
Volume: 2721
First page: 142
Last page: 156
ISBN: 978-3-032-12312-1
eISBN: 978-3-032-12313-8
ISSN: 1865-0929
eISSN: 1865-0937
DOI: https://doi.org/10.1007/978-3-032-12313-8_11
Publication's open availability at the time of reporting: No Open Access
Publication channel's open availability: Partially Open Access publication channel
Web address: https://doi.org/10.1007/978-3-032-12313-8_11
Large Language Models (LLMs) are valuable for qualitative analysis because they offer automation and interpretive insight. Moreover, LLMs complete an analysis in far less time than software-assisted manual qualitative analysis. However, LLM hallucinations can produce misleading or incorrect outputs, posing a significant challenge to reliability and accuracy. This study identified and examined the root causes of 12 types of hallucination in LLM-based qualitative analysis. To mitigate these hallucinations, systematic refinement of system prompts, filtering of spurious noise, and controlled batch processing of transcripts were adopted to enhance the reliability and precision of LLM-based qualitative research results.
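The three mitigation strategies named in the abstract could be combined in a pipeline along the lines of the minimal Python sketch below. This is an illustration only, not the paper's published code: the call_llm stub, the SYSTEM_PROMPT wording, the NOISE_PATTERNS, and the batch size of 5 are all hypothetical stand-ins for system-prompt refinement, spurious-noise filtering, and controlled batch processing.

    import re
    from typing import Callable, List

    # Hypothetical LLM client: (system_prompt, user_prompt) -> response text.
    # Swap in a real API call; no specific provider is assumed here.
    LLMCall = Callable[[str, str], str]

    # Strategy 1 (assumed wording): a refined system prompt that grounds the
    # model in the transcript and discourages unsupported claims.
    SYSTEM_PROMPT = (
        "You are a qualitative coding assistant. Code ONLY what is stated in "
        "the transcript excerpt. Quote the supporting passage for every code "
        "you assign, and answer 'insufficient evidence' rather than guessing."
    )

    # Strategy 2: strip spurious noise (timestamps, verbal fillers, crosstalk
    # markers) before the text reaches the model. Patterns are illustrative.
    NOISE_PATTERNS = [
        r"\[\d{1,2}:\d{2}(:\d{2})?\]",          # timestamps like [00:12:34]
        r"\b(um+|uh+|erm+)\b",                   # verbal fillers
        r"\[(inaudible|crosstalk|laughter)\]",   # transcription markers
    ]

    def filter_noise(text: str) -> str:
        for pattern in NOISE_PATTERNS:
            text = re.sub(pattern, " ", text, flags=re.IGNORECASE)
        return re.sub(r"\s+", " ", text).strip()

    # Strategy 3: controlled batch processing -- feed the transcript to the
    # model in small fixed-size chunks so each request stays within a budget.
    def batch(segments: List[str], size: int = 5) -> List[List[str]]:
        return [segments[i:i + size] for i in range(0, len(segments), size)]

    def analyze_transcript(segments: List[str], call_llm: LLMCall) -> List[str]:
        results = []
        for chunk in batch([filter_noise(s) for s in segments]):
            prompt = "Assign qualitative codes to each excerpt:\n" + "\n".join(
                f"{i + 1}. {s}" for i, s in enumerate(chunk)
            )
            results.append(call_llm(SYSTEM_PROMPT, prompt))
        return results

In practice, call_llm would wrap a chat-completion request to whichever model is used, and the batch size and noise patterns would be tuned to the transcripts at hand; the point of the sketch is only how the three strategies compose into one pass over the data.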