A1 Original article in a scientific journal
A comparative study of pairwise learning methods based on kernel ridge regression




Authors: Michiel Stock, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem Waegeman
Publisher: MIT Press Journals
Publication year: 2018
Journal: Neural Computation
Journal name in database: Neural Computation
Volume: 30
Issue: 8
eISSN: 1530-888X

Abstract

Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still achieve state-of-the-art predictive performance, but a theoretical analysis of their behavior remains underexplored in the machine learning literature. In this work, we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on efficient closed-form instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
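
To make the central construction concrete, the following NumPy sketch shows one way a Kronecker kernel ridge regression model can be fitted in closed form via the eigendecompositions of the two object kernels, without ever forming the Kronecker product explicitly. This is an illustrative sketch only, not the authors' implementation; the RBF kernel choice, the function names (rbf_kernel, kron_krr_fit, kron_krr_predict), and the toy data are assumptions made here for demonstration.

    # Minimal sketch of Kronecker kernel ridge regression:
    # solve vec(A) = (K_v kron K_u + lam*I)^{-1} vec(Y) efficiently.
    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        # Gaussian RBF kernel between the rows of X and Z (illustrative choice).
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kron_krr_fit(K_u, K_v, Y, lam=1.0):
        # Eigendecompose the symmetric kernel matrices of the two object domains.
        sigma, U = np.linalg.eigh(K_u)   # K_u = U diag(sigma) U^T
        s, V = np.linalg.eigh(K_v)       # K_v = V diag(s) V^T
        # Shrink each eigencomponent of the label matrix; this is the closed-form
        # solution of the Kronecker system without forming the Kronecker product.
        M = (U.T @ Y @ V) / (np.outer(sigma, s) + lam)
        return U @ M @ V.T               # dual coefficients, one per training pair

    def kron_krr_predict(A, K_u_test, K_v_test):
        # f(u*, v*) = k_u(u*)^T A k_v(v*) for all test pairs (rows x columns).
        return K_u_test @ A @ K_v_test.T

    # Toy usage: 20 objects of one type, 15 of the other, random pairwise labels.
    rng = np.random.default_rng(0)
    X_u, X_v = rng.normal(size=(20, 5)), rng.normal(size=(15, 4))
    Y = rng.normal(size=(20, 15))
    A = kron_krr_fit(rbf_kernel(X_u, X_u), rbf_kernel(X_v, X_v), Y, lam=0.1)
    F = kron_krr_predict(A, rbf_kernel(X_u, X_u), rbf_kernel(X_v, X_v))

The elementwise division by np.outer(sigma, s) + lam is the spectral filtering step discussed in the abstract: each eigenpair of the pairwise kernel is shrunk according to its eigenvalue and the regularization parameter.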


Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.



