A4 Refereed article in a conference publication

Twill: Scheduling Compound AI Systems on Heterogeneous Mobile Edge Platforms




Authors Taufique, Zain; Vyas, Aman; Miele, Antonio; Liljeberg, Pasi; Kanduri, Anil

Editors N/A

Conference name IEEE International Conference on Computer-Aided Design

Publication year 2025

Journal: IEEE/ACM International Conference on Computer-Aided Design

Book title 2025 IEEE/ACM International Conference On Computer Aided Design (ICCAD)

ISBN 979-8-3315-1561-4

eISBN 979-8-3315-1560-7

ISSN 1933-7760

eISSN 1558-2434

DOI https://doi.org/10.1109/ICCAD66269.2025.11240767

Publication's open availability at the time of reporting No Open Access

Publication channel's open availability No Open Access publication channel

Web address https://ieeexplore.ieee.org/document/11240767


Abstract

Compound AI (cAI) systems chain multiple AI models to solve complex problems. cAI systems are typically composed of deep neural networks (DNNs), transformers, and large language models (LLMs), exhibiting a high degree of computational diversity and dynamic workload variation. Deploying cAI services on mobile edge platforms poses a significant challenge in scheduling concurrent DNN-transformer inference tasks, which arrive dynamically in an unknown sequence. Existing mobile edge AI inference strategies manage multi-DNN or transformer-only workloads, relying on design-time profiling, and cannot handle concurrent inference of DNNs and transformers required by cAI systems. In this work, we address the challenge of scheduling cAI systems on heterogeneous mobile edge platforms. We present Twill, a run-time framework to handle concurrent inference requests of cAI workloads through task affinity-aware cluster mapping and migration, priority-aware task freezing/unfreezing, and Dynamic Voltage/Frequency Scaling (DVFS), while minimizing inference latency within power budgets. We implement and deploy our Twill framework on the Nvidia Jetson Orin NX platform. We evaluate Twill against state-of-the-art edge AI inference techniques over contemporary DNNs and LLMs, reducing inference latency by 54% on average, while honoring power budgets.
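The abstract describes three run-time mechanisms that Twill combines: affinity-aware mapping of DNN/transformer tasks onto heterogeneous clusters, priority-aware freezing/unfreezing of co-running tasks, and DVFS to stay within a power budget. The Python sketch below is only a hypothetical illustration of how such a scheduling loop could be structured; the cluster names, the affinity table, the toy power model, and all class/function names (e.g., Cluster, Task, schedule) are assumptions for demonstration and are not taken from the authors' implementation.

```python
# Hypothetical sketch of a Twill-style run-time scheduling loop.
# All names, the affinity table, and the power model are assumptions
# for illustration only; they are NOT the authors' implementation.
from dataclasses import dataclass, field
from typing import List

POWER_BUDGET_W = 15.0  # assumed platform-level power budget


@dataclass
class Cluster:
    name: str                  # e.g. "CPU", "GPU", "DLA" on a Jetson-class SoC
    freq_levels: List[float]   # available DVFS frequencies (GHz), low to high
    level: int = 0             # current DVFS level index
    running: List["Task"] = field(default_factory=list)

    def power_estimate(self) -> float:
        # Toy power model: grows with frequency and number of active tasks.
        active = sum(1 for t in self.running if not t.frozen)
        return self.freq_levels[self.level] * (1.0 + active)


@dataclass
class Task:
    name: str
    kind: str        # "dnn" or "transformer"
    priority: int    # higher value = more urgent
    frozen: bool = False


def affinity(task: Task, cluster: Cluster) -> float:
    """Toy affinity score: transformers prefer the GPU, DNNs the DLA/CPU."""
    table = {("transformer", "GPU"): 3.0, ("dnn", "DLA"): 3.0,
             ("dnn", "GPU"): 2.0, ("dnn", "CPU"): 1.5,
             ("transformer", "CPU"): 1.0, ("transformer", "DLA"): 0.5}
    return table.get((task.kind, cluster.name), 1.0)


def total_power(clusters: List[Cluster]) -> float:
    return sum(c.power_estimate() for c in clusters)


def schedule(task: Task, clusters: List[Cluster]) -> Cluster:
    """Map a new request to its highest-affinity cluster; if the power budget
    is exceeded, first freeze lower-priority tasks, then lower DVFS levels."""
    target = max(clusters, key=lambda c: affinity(task, c))
    target.running.append(task)

    while total_power(clusters) > POWER_BUDGET_W:
        # Priority-aware freezing: pause the lowest-priority co-running task.
        victims = [t for c in clusters for t in c.running
                   if not t.frozen and t.priority < task.priority]
        if victims:
            min(victims, key=lambda t: t.priority).frozen = True
            continue
        # DVFS: if nothing can be frozen, step down the hottest cluster.
        hot = max(clusters, key=Cluster.power_estimate)
        if hot.level > 0:
            hot.level -= 1
        else:
            break  # budget cannot be met; accept best effort
    return target


if __name__ == "__main__":
    clusters = [Cluster("CPU", [0.7, 1.4, 2.0], level=2),
                Cluster("GPU", [0.4, 0.6, 0.9], level=2),
                Cluster("DLA", [0.6, 1.1], level=1)]
    for t in [Task("detector", "dnn", priority=1),
              Task("llm-decoder", "transformer", priority=3)]:
        placed = schedule(t, clusters)
        print(f"{t.name} -> {placed.name}, power ~ {total_power(clusters):.1f} W")
```

In this sketch, freezing is preferred over frequency scaling because it removes a lower-priority task's power draw entirely while keeping the new high-priority request at full speed; the real framework's policy and cost models may differ.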


Funding information in the publication
This work is funded by the European Union Horizon 2020 Research and Innovation Program (APROPOS) under the Marie Curie grant No. 956090.


Last updated on 2025-11-24 at 12:10