Twill: Scheduling Compound AI Systems on Heterogeneous Mobile Edge Platforms




Taufique, Zain; Vyas, Aman; Miele, Antonio; Liljeberg, Pasi; Kanduri, Anil


IEEE International Conference on Computer-Aided Design

2025

2025 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)

ISBN: 979-8-3315-1561-4

ISBN: 979-8-3315-1560-7

ISSN: 1933-7760

ISSN: 1558-2434

DOI: https://doi.org/10.1109/ICCAD66269.2025.11240767

https://ieeexplore.ieee.org/document/11240767



Compound AI (cAI) systems chain multiple AI models to solve complex problems. cAI systems are typically composed of deep neural networks (DNNs), transformers, and large language models (LLMs), exhibiting a high degree of computational diversity and dynamic workload variation. Deploying cAI services on mobile edge platforms poses a significant challenge in scheduling concurrent DNN-transformer inference tasks, which arrive dynamically in an unknown sequence. Existing mobile edge AI inference strategies manage multi-DNN or transformer-only workloads, relying on design-time profiling, and cannot handle concurrent inference of DNNs and transformers required by cAI systems. In this work, we address the challenge of scheduling cAI systems on heterogeneous mobile edge platforms. We present Twill, a run-time framework to handle concurrent inference requests of cAI workloads through task affinity-aware cluster mapping and migration, priority-aware task freezing/unfreezing, and Dynamic Voltage/Frequency Scaling (DVFS), while minimizing inference latency within power budgets. We implement and deploy our Twill framework on the Nvidia Jetson Orin NX platform. We evaluate Twill against state-of-the-art edge AI inference techniques over contemporary DNNs and LLMs, reducing inference latency by 54% on average, while honoring power budgets.



This work is funded by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No. 956090 (APROPOS).