A3 Refereed book chapter or chapter in a compilation book

Learning Low Cost Multi-target Models by Enforcing Sparsity




Authors: Naula P, Airola A, Salakoski T, Pahikkala T

Editors: Moonis Ali, Young Sig Kwon, Chang-Hwan Lee, Juntae Kim, Yongdai Kim

Conference name: 28th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE)

Publisher: Springer

Publication year: 2015

Book title: Current Approaches in Applied Artificial Intelligence

Journal name in source: CURRENT APPROACHES IN APPLIED ARTIFICIAL INTELLIGENCE

Journal acronym: LECT NOTES ARTIF INT

Series title: Lecture Notes in Computer Science

Volume: 9101

First page: 252

Last page: 261

Number of pages: 10

ISBN: 978-3-319-19065-5

eISBN: 978-3-319-19066-2

ISSN: 0302-9743

DOI: https://doi.org/10.1007/978-3-319-19066-2_25


Abstract

We consider how one can lower the costs of making predictions for multi-target learning problems by enforcing sparsity on the matrix containing the coefficients of the linear models. Four types of sparsity patterns are formalized, as well as a greedy forward selection framework for enforcing these patterns in the coefficients of learned models. We discuss how these patterns relate to costs in different types of application scenarios, introducing the concepts of extractor and extraction costs of features. We experimentally demonstrate on two real-world data sets that, in order to achieve prediction costs that are as low as possible while maintaining acceptable predictive accuracy, it is crucial to match the type of sparsity constraints enforced to the scenario in which the model is to be applied.
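
For illustration, below is a minimal, self-contained Python/NumPy sketch of cost-aware greedy forward selection for a multi-target linear model. It enforces only the simplest kind of sparsity pattern, row sparsity of the coefficient matrix (a feature is either extracted for all targets or not at all). The per-feature extraction costs, the cost budget, the error-reduction-per-unit-cost selection criterion, and all names such as greedy_forward_selection are illustrative assumptions, not the authors' exact formulation.

import numpy as np


def fit_linear(X, Y):
    # Ordinary least-squares fit of a multi-target linear model: Y ~ X @ W.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W


def greedy_forward_selection(X, Y, feature_costs, budget):
    # Greedily add the feature that yields the largest squared-error reduction
    # per unit of extraction cost, stopping when the cost budget is exhausted
    # or no remaining feature improves the fit. (Illustrative criterion only.)
    n_samples, n_features = X.shape
    selected, total_cost = [], 0.0
    while True:
        if selected:
            W = fit_linear(X[:, selected], Y)
            base_err = np.sum((Y - X[:, selected] @ W) ** 2)
        else:
            base_err = np.sum(Y ** 2)
        best_gain, best_j = 0.0, None
        for j in range(n_features):
            if j in selected or total_cost + feature_costs[j] > budget:
                continue
            cols = selected + [j]
            W = fit_linear(X[:, cols], Y)
            err = np.sum((Y - X[:, cols] @ W) ** 2)
            gain = (base_err - err) / feature_costs[j]
            if gain > best_gain:
                best_gain, best_j = gain, j
        if best_j is None:
            break
        selected.append(best_j)
        total_cost += feature_costs[best_j]
    # Assemble a row-sparse coefficient matrix: unselected features get zero rows.
    W_full = np.zeros((n_features, Y.shape[1]))
    if selected:
        W_full[selected, :] = fit_linear(X[:, selected], Y)
    return selected, W_full


if __name__ == "__main__":
    # Synthetic demo: 10 candidate features, 3 targets, only features 1, 4, 7 relevant.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    W_true = np.zeros((10, 3))
    W_true[[1, 4, 7], :] = rng.normal(size=(3, 3))
    Y = X @ W_true + 0.1 * rng.normal(size=(200, 3))
    selected, W = greedy_forward_selection(X, Y, feature_costs=np.ones(10), budget=3.0)
    print("selected features:", selected)

The other sparsity patterns discussed in the abstract would correspond to selecting features separately per target or per group of targets, which is where the distinction between extractor and extraction costs becomes relevant; the chapter formalizes these patterns, whereas the sketch above covers only the shared-feature case.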



