A4 Refereed article in a conference publication

Systematic Prompt Framework for Qualitative Data Analysis: Designing System and User Prompts




Authors: Adeseye, Aisvarya; Isoaho, Jouni; Tahir, Mohammad

Editors: N/A

Conference name: IEEE International Conference on Human-Machine Systems

Publication year: 2025

Book title: 2025 IEEE 5th International Conference on Human-Machine Systems (ICHMS)

First page: 229

Last page: 234

ISBN: 979-8-3315-2165-3

eISBN: 979-8-3315-2164-6

DOI: https://doi.org/10.1109/ICHMS65439.2025.11154183

Publication's open availability at the time of reporting: No Open Access

Publication channel's open availability: No Open Access publication channel

Web address: https://ieeexplore.ieee.org/document/11154183

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/500027066


Abstract

Prompt engineering has become an important aspect of optimizing the performance of large language models (LLMs) across diverse applications. This research proposes a systematic framework for designing system and user prompts that combines few-shot learning, chain-of-thought reasoning, role-play, and iterative refinement. The framework was evaluated on open-source LLMs (Llama, Gemma, and Phi) running on local machines to demonstrate its capacity to enhance LLM outputs for qualitative analysis of interview transcripts on the security and privacy issues of gamification. Using local LLMs eliminates concerns about data leakage and privacy, making the approach particularly suitable for organizations wary of publicly hosted LLM services such as ChatGPT, Gemini, and DeepSeek. The LLM output demonstrated improved accuracy, consistency, and scalability in addressing security and privacy concerns with gamification. Validation against manual analysis in NVivo indicates an error margin of less than 5% for frequency analysis.
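As a rough illustration of the prompt ingredients listed in the abstract (role-play, few-shot examples, chain-of-thought reasoning, and local execution for privacy), the sketch below assembles a system and user prompt for coding one transcript excerpt and sends it to a locally hosted model via an Ollama-style HTTP endpoint. The prompt wording, the code_excerpt helper, the llama3 model tag, and the endpoint URL are illustrative assumptions, not the authors' actual framework or tooling.

import requests  # assumes a locally running Ollama-style server; the paper does not name its tooling

# Hypothetical system prompt combining the framework's stated ingredients.
SYSTEM_PROMPT = (
    "You are an experienced qualitative researcher coding interview transcripts "
    "about security and privacy issues in gamification.\n"          # role-play
    "Think step by step: identify the relevant statement, assign a code, "
    "then justify it briefly.\n"                                     # chain-of-thought instruction
    "Example:\n"                                                     # few-shot example (illustrative)
    "Excerpt: 'I worry the leaderboard exposes my activity to strangers.'\n"
    "Code: privacy_exposure | Rationale: participant links ranking visibility "
    "to unwanted disclosure."
)

def code_excerpt(excerpt: str, model: str = "llama3") -> str:
    """Send one transcript excerpt to a locally hosted model and return its coding."""
    response = requests.post(
        "http://localhost:11434/api/chat",   # local endpoint keeps transcripts on-machine
        json={
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Excerpt: '{excerpt}'\nCode:"},
            ],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(code_excerpt("Earning badges is fun, but I don't know who can see my progress data."))

In practice, the iterative-refinement step described in the abstract would correspond to revising SYSTEM_PROMPT and re-running the coding loop until outputs align with the manual (NVivo) coding scheme.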


