A4 Article in conference proceedings
A Cache- and Memory-Aware Mapping Algorithm for Big Data Applications




Authors: Thomas Canhao Xu, Ville Leppänen
Publication year: 2015
Book title: 2015 Fifth International Conference on Digital Information Processing and Communications (ICDIPC)
ISBN: 978-1-4673-6831-5

Abstract


In this paper, we propose and investigate a task mapping algorithm for big data applications. As a critical resource, data are produced faster than ever before, and parallel programs that process these data on massively parallel systems are widely adopted. The task mapping algorithm, however, has not been well optimized for these applications. We explore the characteristics of big data applications on a shared cache/memory multicore processor and analyse the latencies of the cache and memory subsystems. The proposed algorithm is designed to optimize cache/memory latency as well as intra-application latency. We introduce an efficient greedy algorithm that calculates the mapping result based on the congregate degree of nodes. Different numbers of search spaces are discussed and evaluated. Experiments are conducted with synthetic simulation and by running real applications in a full-system simulation environment. The results confirm the effectiveness of the proposed algorithm: the average execution time of five selected big data applications is reduced by 8% compared with the first-fit algorithm.
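As a rough illustration of the kind of greedy mapping heuristic the abstract describes, the sketch below places tasks one at a time: at each step it picks the unplaced task with the highest "congregate degree" (assumed here to mean total communication volume to already-placed tasks) and assigns it to the free core that minimizes weighted hop distance to its placed neighbours. All names, the mesh model, and the exact scoring rule are illustrative assumptions, not the paper's actual algorithm.

```python
from itertools import product

def greedy_map(comm, mesh_w, mesh_h):
    """Sketch of a congregate-degree-driven greedy task mapper.

    comm:  dict {(a, b): weight} of task-to-task communication volumes.
    Returns {task: (x, y)} placement on a mesh_w x mesh_h core mesh.
    """
    tasks = sorted({t for edge in comm for t in edge})
    free = set(product(range(mesh_w), range(mesh_h)))
    placed = {}

    def weight(a, b):
        # Treat communication as undirected for scoring purposes.
        return comm.get((a, b), 0) + comm.get((b, a), 0)

    # Seed: the task with the largest total traffic goes near the
    # centre of the mesh (an assumption; the paper may seed differently).
    seed = max(tasks, key=lambda t: sum(weight(t, u) for u in tasks))
    centre = (mesh_w // 2, mesh_h // 2)
    placed[seed] = centre
    free.remove(centre)

    while len(placed) < len(tasks):
        # Congregate degree: traffic to tasks that are already placed.
        cand = max((t for t in tasks if t not in placed),
                   key=lambda t: sum(weight(t, u) for u in placed))

        # Choose the free core minimising traffic-weighted Manhattan
        # (hop) distance to the candidate's placed neighbours.
        def cost(core):
            x, y = core
            return sum(weight(cand, u) * (abs(x - px) + abs(y - py))
                       for u, (px, py) in placed.items())

        best = min(free, key=cost)
        placed[cand] = best
        free.remove(best)
    return placed
```

With this scoring, heavily communicating task pairs tend to land on adjacent cores, which is the intuition behind optimizing intra-application latency on a mesh-style multicore.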



Last updated on 2019-08-21 at 21:07