A4 Peer-reviewed article in conference proceedings

Efficient Prompt Design for Resource-Constrained Deployment of Local LLMs

Authors: Adeseye, Aisvarya; Isoaho, Jouni; Virtanen, Seppo; Tahir, Mohammad

Editors: Nurmi, Jari; Pikulins, Dmitrijs; Ellervee, Peeter; Liobe, John

Established conference name: IEEE Nordic Circuits and Systems

Publication year: 2025

Book title: 2025 IEEE Nordic Circuits and Systems Conference (NorCAS)

ISBN: 979-8-3315-1502-7

eISBN: 979-8-3315-1501-0

DOI: https://doi.org/10.1109/NorCAS66540.2025.11231309

Web address: https://ieeexplore.ieee.org/document/11231309

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/505411162


Abstract

The local deployment of Large Language Models (LLMs) is essential for privacy and latency in several domains. However, it faces significant challenges in memory, power, and inference speed, particularly in resource-constrained systems such as Internet of Things (IoT) and edge computing devices. Most existing studies emphasize compression and hardware tuning, while holistic system-level optimization remains incomplete and the role of prompt design is still underexplored. This study introduces a structured evaluation of prompt engineering strategies designed to enhance resource efficiency and accuracy in local LLMs, applied across three textual analysis tasks: theme extraction, frequency analysis, and impact analysis. Four experimental conditions were compared: Baseline, System Prompt Only (SP), User Prompt Only (UP), and System+User Prompt (SP+UP). Using multiple LLMs ranging from 1B to 70B parameters, we measured tokens generated, latency, VRAM usage, hallucination rates, and structural errors. The results show that System Prompts alone substantially reduced computational overhead, whereas User Prompts improved accuracy and task alignment. Their combination yielded comprehensive improvements, maximizing both efficiency and reliability. The proposed prompt design enabled smaller LLMs to rival larger ones in efficiency and accuracy, with LLaMA-3.2 3B under SP+UP reducing VRAM usage by 96%, latency by 85%, and hallucinations by 83% compared to the 70B model with Baseline. Even LLaMA-3.2 1B proved to be a viable option, especially when VRAM capacity is a critical constraint.
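The four prompt conditions map directly onto the system/user message roles of a chat-style local inference API. The sketch below shows how such a comparison can be wired up; it assumes an Ollama server running on its default port, and the model tag, prompts, and metric readout are illustrative placeholders rather than the paper's actual configuration.

    # Minimal sketch of the four prompt conditions (Baseline, SP, UP, SP+UP)
    # against a locally served model. Assumes an Ollama server on its default
    # port; prompts, model tag, and metrics are illustrative, not the paper's.
    import time
    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama chat endpoint
    MODEL = "llama3.2:3b"  # hypothetical tag for the LLaMA-3.2 3B size class

    SYSTEM_PROMPT = ("You are a concise analyst. Answer only from the given "
                     "text, in at most three bullet points.")  # illustrative SP
    USER_PROMPT = ("Task: theme extraction. List the main themes of the "
                   "text below.")  # illustrative UP
    TEXT = "..."  # the document under analysis (placeholder)

    CONDITIONS = {
        "Baseline": [{"role": "user", "content": TEXT}],
        "SP":       [{"role": "system", "content": SYSTEM_PROMPT},
                     {"role": "user", "content": TEXT}],
        "UP":       [{"role": "user", "content": USER_PROMPT + "\n\n" + TEXT}],
        "SP+UP":    [{"role": "system", "content": SYSTEM_PROMPT},
                     {"role": "user", "content": USER_PROMPT + "\n\n" + TEXT}],
    }

    for name, messages in CONDITIONS.items():
        start = time.perf_counter()
        resp = requests.post(OLLAMA_URL,
                             json={"model": MODEL, "messages": messages,
                                   "stream": False},
                             timeout=300)
        resp.raise_for_status()
        latency = time.perf_counter() - start
        tokens = resp.json().get("eval_count")  # generated tokens, per Ollama
        print(f"{name:9s} latency={latency:6.2f}s generated_tokens={tokens}")

Note that the other quantities in the study, such as VRAM usage and hallucination rates, would need separate instrumentation (e.g. GPU monitoring and output annotation) beyond what this per-condition loop captures.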



Last updated on 2025-11-20 at 14:02