A4 Peer-reviewed article in conference proceedings

Performance Evaluation of LLM Hallucination Reduction Strategies for Reliable Qualitative Analysis




Authors: Adeseye, Aisvarya; Isoaho, Jouni; Mohammad, Tahir

Editors: Arabnia, Hamid R.; Deligiannidis, Leonidas; Amirian, Soheyla; Ghareh Mohammadi, Farid; Shenavarmasouleh, Farzan

Established conference name: International Conference on the AI Revolution

Publisher: Springer

Publication year: 2026

Journal: Communications in Computer and Information Science

Title of the collection: AI Revolution : Research, Ethics and Society : International Conference, AIR-RES 2025, Las Vegas, NV, USA, April 14–16, 2025, Proceedings, Part I

Volume: 2721

First page: 142

Last page: 156

ISBN: 978-3-032-12312-1

eISBN: 978-3-032-12313-8

ISSN: 1865-0929

eISSN: 1865-0937

DOI: https://doi.org/10.1007/978-3-032-12313-8_11

Openness of the publication at the time of recording: Not openly available

Openness of the publication channel: Partially open publication channel

Web address: https://doi.org/10.1007/978-3-032-12313-8_11


Abstract

Large Language Models (LLMs) are increasingly valuable for qualitative analysis because they offer automation and interpretive insight, and their computation time is much shorter than that of software-assisted manual qualitative analysis. However, LLM hallucinations can produce misleading or incorrect outputs, posing a significant challenge to reliability and accuracy. This study identified and examined the root causes of 12 types of hallucination in LLM-based qualitative analysis. To mitigate these hallucinations, systematic refinement of system prompts, filtering of spurious noise, and controlled batch processing of transcripts were adopted to enhance the reliability and precision of LLM-based qualitative research results.
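The abstract names three mitigation strategies (system prompt refinement, spurious noise filtering, and controlled batch processing of transcripts) without giving implementation details. The sketch below is purely illustrative and is not the authors' method: the system prompt wording, the `filter_noise` heuristics, and the batch size are all hypothetical assumptions, and the LLM call is left as an injected function rather than a specific API.

```python
import re
from typing import Callable, Iterable, List

# Hypothetical constrained system prompt (assumption, not from the paper):
SYSTEM_PROMPT = (
    "You are a qualitative coding assistant. Assign codes only when they are "
    "supported by verbatim quotes from the transcript; otherwise answer 'no code'."
)

def filter_noise(transcript: str) -> str:
    """Strip timestamps and filler tokens that could trigger spurious codes."""
    text = re.sub(r"\[\d{1,2}:\d{2}(:\d{2})?\]", " ", transcript)   # e.g. [00:12]
    text = re.sub(r"\b(um+|uh+|erm+)\b", " ", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()                        # collapse whitespace

def batches(items: Iterable[str], size: int) -> Iterable[List[str]]:
    """Yield fixed-size batches so each LLM call sees a bounded context."""
    chunk: List[str] = []
    for item in items:
        chunk.append(item)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

def analyze(transcripts: List[str],
            llm: Callable[[str, str], str],
            batch_size: int = 2) -> List[str]:
    """Clean transcripts, batch them, and query the model once per batch."""
    cleaned = [filter_noise(t) for t in transcripts]
    return [llm(SYSTEM_PROMPT, "\n---\n".join(group))
            for group in batches(cleaned, batch_size)]
```

Keeping the model call behind a plain `Callable` makes the cleaning and batching steps testable without any provider SDK; a real pipeline would substitute an actual chat-completion call.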


