Learning Low Cost Multi-target Models by Enforcing Sparsity




Authors: Naula P, Airola A, Salakoski T, Pahikkala T

Editors: Moonis Ali, Young Sig Kwon, Chang-Hwan Lee, Juntae Kim, Yongdai Kim

Conference: 28th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE)

Publisher: Springer-Verlag New York

Year: 2015

Book title: Current Approaches in Applied Artificial Intelligence

Series: Lecture Notes in Computer Science (subseries: Lecture Notes in Artificial Intelligence)

Volume: 9101

Pages: 252-261

Number of pages: 10

ISBN: 978-3-319-19065-5 (print), 978-3-319-19066-2 (electronic)

ISSN: 0302-9743

DOI: https://doi.org/10.1007/978-3-319-19066-2_25



We consider how one can lower the cost of making predictions in multi-target learning problems by enforcing sparsity on the matrix containing the coefficients of the linear models. We formalize four types of sparsity patterns, together with a greedy forward selection framework for enforcing these patterns in the coefficients of learned models. We discuss how the patterns relate to costs in different types of application scenarios, introducing the concepts of extractor and extraction costs of features. We experimentally demonstrate on two real-world data sets that, in order to achieve the lowest possible prediction costs while maintaining acceptable predictive accuracy, it is crucial to match the type of sparsity constraint enforced to the usage scenario in which the model is to be applied.
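As an illustration of the kind of approach the abstract describes, the following is a minimal sketch of greedy forward selection under one of the possible sparsity patterns: row-wise sparsity, where all targets share a single selected feature set, so the coefficient matrix has nonzero entries only on the selected rows. The function name, the ordinary least-squares base learner, and the fixed feature budget k are illustrative assumptions, not the authors' implementation.

import numpy as np

def greedy_shared_feature_selection(X, Y, k):
    """Greedy forward selection enforcing row-wise sparsity on the
    coefficient matrix W: every target shares the same feature set,
    so W has nonzero entries only on the selected rows.

    X: (n_samples, n_features) inputs, Y: (n_samples, n_targets)
    outputs, k: feature budget. Illustrative sketch only.
    """
    n_samples, n_features = X.shape
    selected = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(n_features):
            if j in selected:
                continue
            cols = selected + [j]
            # Refit all targets jointly on the candidate feature set.
            W, *_ = np.linalg.lstsq(X[:, cols], Y, rcond=None)
            err = np.linalg.norm(Y - X[:, cols] @ W) ** 2
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    # Final joint fit on the selected features; rows of W not in
    # `selected` are implicitly zero in the full coefficient matrix.
    W, *_ = np.linalg.lstsq(X[:, selected], Y, rcond=None)
    return selected, W

Row-wise sparsity is the pattern in which every feature that is paid for is reused by all targets; patterns that instead select features per target or per group of targets would trade off differently against the extractor and extraction costs mentioned above.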



