A4 Refereed article in a conference publication

Performance Evaluation of LLM Hallucination Reduction Strategies for Reliable Qualitative Analysis

Authors Adeseye, Aisvarya; Isoaho, Jouni; Mohammad, Tahir

Editors Arabnia, Hamid R.; Deligiannidis, Leonidas; Amirian, Soheyla; Ghareh Mohammadi, Farid; Shenavarmasouleh, Farzan

Conference name International Conference on the AI Revolution

Publisher Springer

Publication year 2026

Journal: Communications in Computer and Information Science

Book title AI Revolution : Research, Ethics and Society : International Conference, AIR-RES 2025, Las Vegas, NV, USA, April 14–16, 2025, Proceedings, Part I

Volume 2721

First page 142

Last page 156

ISBN 978-3-032-12312-1

eISBN 978-3-032-12313-8

ISSN 1865-0929

eISSN 1865-0937

DOI https://doi.org/10.1007/978-3-032-12313-8_11

Publication's open availability at the time of reporting No Open Access

Publication channel's open availability Partially Open Access publication channel

Web address https://doi.org/10.1007/978-3-032-12313-8_11


Abstract

Large Language Models (LLMs) are valuable for qualitative analysis because they offer automation and interpretive insight, and their computation time is much shorter than that of software-assisted manual qualitative analysis. However, LLM hallucinations can produce misleading or incorrect outputs, which pose a significant challenge to reliability and accuracy. This study identified and examined the root causes of 12 types of hallucination in LLM-based qualitative analysis. To mitigate these hallucinations, systematic refinement of system prompts, filtering of spurious noise, and controlled batch processing of transcripts were adopted, enhancing the reliability and precision of LLM-based qualitative research results.
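
As a rough illustration (not part of the publication record), the three mitigation steps named in the abstract could be combined as in the Python sketch below. The prompt wording, the noise patterns, the batch size, and the query_llm() hook are all illustrative assumptions, not the authors' implementation.

# A minimal sketch of the three mitigation steps named in the abstract:
# (1) a refined system prompt, (2) spurious-noise filtering, and
# (3) controlled batch processing of transcripts. All names here are
# hypothetical; the paper's actual prompts and filters may differ.
import re
from typing import Callable, List

# Assumed refined system prompt that discourages unsupported codes.
SYSTEM_PROMPT = (
    "You are a qualitative-coding assistant. Code ONLY what the "
    "transcript states verbatim; if a theme is not supported by a "
    "direct quote, answer 'no evidence' instead of guessing."
)

def filter_noise(transcript: str) -> str:
    """Strip timestamps and filler tokens that can trigger spurious codes."""
    text = re.sub(r"\[\d{1,2}:\d{2}(:\d{2})?\]", " ", transcript)  # [mm:ss] stamps
    text = re.sub(r"\b(um+|uh+|erm)\b", " ", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()

def batched(items: List[str], size: int) -> List[List[str]]:
    """Split transcripts into small fixed-size batches so each request
    stays well inside the model's context window."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def analyze(transcripts: List[str],
            query_llm: Callable[[str, str], str],
            batch_size: int = 3) -> List[str]:
    """Run the pipeline: filter noise, batch, then prompt the model per batch.
    query_llm(system, user) is a placeholder for any chat-style LLM call."""
    results = []
    for batch in batched([filter_noise(t) for t in transcripts], batch_size):
        user_msg = "\n---\n".join(batch)
        results.append(query_llm(SYSTEM_PROMPT, user_msg))
    return results

if __name__ == "__main__":
    demo = ["[00:01] um I think the app is, uh, confusing",
            "[00:14] the tutorial helped me a lot"]
    # Stand-in for a real LLM call, so the sketch runs offline.
    echo = lambda sys, usr: f"(codes for {len(usr.split('---'))} transcript(s))"
    print(analyze(demo, query_llm=echo, batch_size=2))

In practice, query_llm would wrap whichever chat API the study used; keeping the batch size small and the system prompt restrictive are the controllable knobs this sketch exposes.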



Last updated on 03/02/2026 12:40:13 PM