A4 Article in conference proceedings
A Cache- and Memory-Aware Mapping Algorithm for Big Data Applications




List of Authors: Thomas Canhao Xu, Ville Leppänen
Publication year: 2015
Book title: Digital Information Processing and Communications (ICDIPC), 2015 Fifth International Conference on
ISBN: 978-1-4673-6831-5

Abstract


In this paper, we propose and investigate a task mapping algorithm for big data applications. As a critical resource, data are produced faster than ever before, and parallel programs that process these data on massively parallel systems are widely adopted. The task mapping algorithm, however, has not been well optimized for such applications. We explore the characteristics of big data applications on a shared cache/memory multicore processor and analyse the latencies of the cache and memory sub-systems. The proposed algorithm is designed to optimize cache/memory latency as well as intra-application latency. We introduce an efficient greedy algorithm that calculates the mapping result based on the congregate degree of nodes. Different sizes of search space are discussed and evaluated. Experiments are conducted with synthetic simulation and by running real applications in a full-system simulation environment. The results confirm the effectiveness of the proposed algorithm: the average execution time of five selected big data applications is reduced by 8% compared with the first-fit algorithm.
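The abstract does not specify the algorithm's details, so the following is only a minimal sketch of how a greedy, congregate-degree-based mapping could look. It assumes a 2D-mesh multicore and takes "congregate degree" to mean the number of free neighbouring cores of a node; both this definition and every function name here are illustrative assumptions, not the paper's actual method.

```python
from itertools import product

def neighbors(node, w, h):
    """Mesh neighbours (north/south/east/west) of a core at (x, y)."""
    x, y = node
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < w and 0 <= ny < h]

def congregate_degree(node, free, w, h):
    # Assumed definition: the number of free neighbouring cores.
    return sum(1 for n in neighbors(node, w, h) if n in free)

def greedy_map(num_tasks, free, w, h):
    """Greedily map an application's tasks onto a contiguous region.

    Seed at the free core with the highest congregate degree, then grow
    the region outward by Manhattan distance, keeping mapped tasks close
    to one another (low intra-application latency).
    """
    free_set = set(free)
    seed = max(free, key=lambda n: congregate_degree(n, free_set, w, h))
    dist = lambda n: abs(n[0] - seed[0]) + abs(n[1] - seed[1])
    return sorted(free, key=dist)[:num_tasks]

# Usage: a 4x4 mesh with all cores free; map a 4-task application.
free_nodes = list(product(range(4), range(4)))
print(greedy_map(4, free_nodes, 4, 4))
```

The greedy choice keeps each application's tasks in one compact region, which is the property the abstract credits for the reduced cache/memory and intra-application latencies.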



Last updated on 2019-01-29 at 12:15