A4 Refereed article in a conference publication

Contrastive Language-Entity Pre-training for Richer Knowledge Graph Embedding




Authors: Papaluca Andrea; Krefl Daniel; Lensky Artem; Suominen Hanna

Editors: Wallraven, Christian; Liu, Cheng-Lin; Ross, Arun

Conference name: International Conference on Pattern Recognition and Artificial Intelligence

Publisher: Springer Nature Singapore

Publication year: 2025

Journal: Lecture Notes in Computer Science

Book title: Pattern Recognition and Artificial Intelligence: 4th International Conference, ICPRAI 2024, Jeju Island, South Korea, July 3–6, 2024, Proceedings, Part I

Journal name in source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume: 14892

First page: 233

Last page: 246

ISBN: 978-981-97-8701-2

eISBN: 978-981-97-8702-9

ISSN: 0302-9743

eISSN: 1611-3349

DOI: https://doi.org/10.1007/978-981-97-8702-9_16

Web address: https://doi.org/10.1007/978-981-97-8702-9_16


Abstract
In this work, we propose a pretraining procedure that aligns a graph encoder and a text encoder to learn a common multi-modal graph-text embedding space. The alignment is obtained by training a model to predict the correct associations between Knowledge Graph nodes and their corresponding descriptions. We test the procedure with two popular Knowledge Bases: Wikidata and YAGO. Our results indicate that such a pretraining method allows for link prediction without the need for additional fine-tuning. Furthermore, we demonstrate that a graph encoder pretrained on the description-matching task achieves improved link prediction performance after fine-tuning, without requiring node descriptions as additional inputs. We make the code used in the experiments available on GitHub (https://github.com/BrunoLiegiBastonLiegi/CLEP) under the MIT license to encourage further work.
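To illustrate the kind of alignment objective the abstract describes, the sketch below shows a minimal, hypothetical CLIP-style contrastive setup in PyTorch: two projection heads map node embeddings and description embeddings into a shared space, and a symmetric cross-entropy loss pulls matching node-description pairs together while pushing mismatched pairs apart. All module names, dimensions, and the choice of loss here are assumptions for exposition; they are not taken from the paper or the linked repository.

# Hypothetical sketch of CLIP-style node/description alignment (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAligner(nn.Module):
    """Projects graph-node and text embeddings into a shared space and
    scores all node-description pairs in a batch (CLIP-style)."""
    def __init__(self, node_dim, text_dim, shared_dim=256, temperature=0.07):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)
        # Learnable temperature, stored in log space for stability
        self.log_temp = nn.Parameter(torch.tensor(float(temperature)).log())

    def forward(self, node_emb, text_emb):
        # L2-normalise both modalities before computing similarities
        z_node = F.normalize(self.node_proj(node_emb), dim=-1)
        z_text = F.normalize(self.text_proj(text_emb), dim=-1)
        # Pairwise similarity matrix scaled by the temperature
        logits = z_node @ z_text.t() / self.log_temp.exp()
        # The i-th node should match the i-th description (diagonal targets)
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_n2t = F.cross_entropy(logits, targets)
        loss_t2n = F.cross_entropy(logits.t(), targets)
        return (loss_n2t + loss_t2n) / 2

# Toy usage with random placeholder embeddings standing in for the
# outputs of a graph encoder and a text encoder.
if __name__ == "__main__":
    batch = 8
    node_emb = torch.randn(batch, 128)   # e.g. from a GNN over the KG
    text_emb = torch.randn(batch, 768)   # e.g. from a sentence encoder
    model = ContrastiveAligner(node_dim=128, text_dim=768)
    loss = model(node_emb, text_emb)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")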


