A4 Peer-reviewed article in conference proceedings

Contrastive Language-Entity Pre-training for Richer Knowledge Graph Embedding




Authors: Papaluca, Andrea; Krefl, Daniel; Lensky, Artem; Suominen, Hanna

Editors: Wallraven, Christian; Liu, Cheng-Lin; Ross, Arun

Conference name: International Conference on Pattern Recognition and Artificial Intelligence

Publisher: Springer Nature Singapore

Publication year: 2025

Journal: Lecture Notes in Computer Science

Book title: Pattern Recognition and Artificial Intelligence: 4th International Conference, ICPRAI 2024, Jeju Island, South Korea, July 3–6, 2024, Proceedings, Part I

Journal name in the database: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume: 14892

First page: 233

Last page: 246

ISBN: 978-981-97-8701-2

eISBN: 978-981-97-8702-9

ISSN: 0302-9743

eISSN: 1611-3349

DOI: https://doi.org/10.1007/978-981-97-8702-9_16

Web address: https://doi.org/10.1007/978-981-97-8702-9_16


Abstract
In this work, we propose a pre-training procedure that aligns a graph encoder and a text encoder to learn a common multi-modal graph-text embedding space. The alignment is obtained by training a model to predict the correct associations between Knowledge Graph nodes and their corresponding descriptions. We test the procedure on two popular Knowledge Bases: Wikidata (formerly Freebase) and YAGO. Our results indicate that such a pre-training method enables link prediction without additional fine-tuning. Furthermore, we demonstrate that a graph encoder pre-trained on the description-matching task achieves improved link prediction performance after fine-tuning, without requiring node descriptions as additional inputs. We make the code used in the experiments available on GitHub (https://github.com/BrunoLiegiBastonLiegi/CLEP) under the MIT license to encourage further work.
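
The alignment objective described in the abstract resembles CLIP-style contrastive pre-training applied to (node, description) pairs. The following is a minimal sketch of such a symmetric contrastive loss in PyTorch; the function and argument names (contrastive_alignment_loss, node_emb, text_emb, temperature) are illustrative assumptions, not the authors' implementation, which is available in the GitHub repository linked above.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(node_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired (node, description)
    embeddings, each of shape (batch_size, dim). Illustrative sketch only."""
    # Normalise both modalities so the similarity is a cosine score.
    node_emb = F.normalize(node_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Entry (i, j) scores node i against description j.
    logits = node_emb @ text_emb.t() / temperature
    # The matching description of node i sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_node_to_text = F.cross_entropy(logits, targets)
    loss_text_to_node = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_node_to_text + loss_text_to_node)

Minimising this loss pulls each node embedding toward the embedding of its own description and pushes it away from the other descriptions in the batch, which is what allows the pre-trained encoders to be used for link prediction without further fine-tuning.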


