Performance Evaluation of LLM Hallucination Reduction Strategies for Reliable Qualitative Analysis




Adeseye, Aisvarya; Isoaho, Jouni; Mohammad, Tahir

Arabnia, Hamid R.; Deligiannidis, Leonidas; Amirian, Soheyla; Ghareh Mohammadi, Farid; Shenavarmasouleh, Farzan

International Conference on the AI Revolution

2026

Communications in Computer and Information Science

AI Revolution: Research, Ethics and Society: International Conference, AIR-RES 2025, Las Vegas, NV, USA, April 14–16, 2025, Proceedings, Part I

2721

142–156

978-3-032-12312-1

978-3-032-12313-8

1865-0929

1865-0937

DOI: https://doi.org/10.1007/978-3-032-12313-8_11




Large Language Models (LLMs) are valuable for qualitative analysis because they automate coding and surface interpretive insights, and their computation time is far shorter than that of software-assisted manual qualitative analysis. However, LLM hallucinations can produce misleading or incorrect outputs, posing a significant challenge to reliability and accuracy. This study identified and examined the root causes of 12 types of hallucinations in LLM-based qualitative analysis. To mitigate these hallucinations, systematic system-prompt refinement, spurious-noise filtering, and controlled batch processing of transcripts were adopted, improving the reliability and precision of LLM-based qualitative research results.
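
The abstract names three mitigation steps: refined system prompts, spurious-noise filtering, and controlled batch processing of transcripts. The Python sketch below is only an illustration of how such a pipeline might be wired together under stated assumptions; the prompt wording, the noise heuristics in filter_noise, the batch size, and the call_llm callable are all hypothetical stand-ins for whatever model API and prompts the paper actually used.

import re
from typing import Callable, Iterable

# Hypothetical refined system prompt; the paper's exact wording is not given here.
SYSTEM_PROMPT = (
    "You are a qualitative coding assistant. Base every theme strictly on the "
    "transcript excerpt provided. If an excerpt does not support a theme, "
    "answer 'no evidence' rather than guessing."
)

def filter_noise(transcript: str) -> str:
    """Strip spurious noise (timestamps, filler words) before analysis."""
    text = re.sub(r"\[\d{1,2}:\d{2}(:\d{2})?\]", " ", transcript)  # [00:12:34]-style timestamps
    text = re.sub(r"\b(um+|uh+|erm+)\b", " ", text, flags=re.IGNORECASE)  # filler words
    return re.sub(r"\s+", " ", text).strip()

def batches(segments: list[str], size: int) -> Iterable[list[str]]:
    """Yield fixed-size batches so no single request overflows the context window."""
    for i in range(0, len(segments), size):
        yield segments[i : i + size]

def analyze(transcripts: list[str], call_llm: Callable[[str], str],
            batch_size: int = 4) -> list[str]:
    """Apply the three mitigations in sequence; call_llm stands in for any chat API."""
    cleaned = [filter_noise(t) for t in transcripts]
    results = []
    for group in batches(cleaned, batch_size):
        prompt = SYSTEM_PROMPT + "\n\n" + "\n---\n".join(group)
        results.append(call_llm(prompt))
    return results

For example, analyze(raw_transcripts, call_llm=my_chat_fn) cleans each transcript, batches the cleaned segments, and sends each batch under the constrained system prompt, so the model codes only the text it is given.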


