Silver Syntax Pre-training for Cross-Domain Relation Extraction

Authors: Elisa Bassignana, Filip Ginter, Sampo Pyysalo, Rob van der Goot, Barbara Plank

Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki

In: Findings of the Association for Computational Linguistics: ACL 2023

Series: Proceedings of the Annual Meeting of the Association for Computational Linguistics

Publisher: Association for Computational Linguistics (ACL)

Year: 2023

Pages: 6984–6993

ISBN: 978-1-959429-62-3

DOI: https://doi.org/10.18653/v1/2023.findings-acl.436

URL: https://aclanthology.org/2023.findings-acl.436

Record: https://research.utu.fi/converis/portal/detail/Publication/181712942



Abstract: Relation Extraction (RE) remains a challenging task, especially under realistic out-of-domain evaluations. One of the main reasons for this is the limited size of current RE training datasets: obtaining high-quality (manually annotated) data is extremely expensive and cannot realistically be repeated for each new domain. An intermediate training step on data from related tasks has been shown to be beneficial across many NLP tasks. However, this setup still requires supplementary annotated data, which is often not available. In this paper, we investigate intermediate pre-training specifically for RE. We exploit the affinity between syntactic structure and semantic RE, and identify the syntactic relations most closely related to RE as those lying on the shortest dependency path between two entities. We then take advantage of the high accuracy of current syntactic parsers to automatically obtain large amounts of low-cost pre-training data. By pre-training our RE model on these relevant syntactic relations, we outperform the baseline in five out of six cross-domain setups, without any additional annotated data.
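As a rough illustration of the silver-data step described in the abstract (parse a sentence and keep the dependency relations on the shortest path between two entities), the following Python sketch uses spaCy and networkx. The parser model, the example sentence, and the shortest_dependency_path helper are illustrative assumptions, not the authors' implementation.

# A minimal sketch of the silver-data idea: parse a sentence, find the
# shortest dependency path between two entities, and keep the dependency
# relations along that path as low-cost pre-training labels.
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # assumption: any reasonably accurate parser works

def shortest_dependency_path(doc, head1, head2):
    """Return the dependency relations on the shortest path between two tokens."""
    # Build an undirected graph over token indices, with edges labelled
    # by the dependency relation of the dependent token.
    graph = nx.Graph()
    for token in doc:
        if token.head.i != token.i:  # skip the root's self-loop
            graph.add_edge(token.i, token.head.i, dep=token.dep_)
    path = nx.shortest_path(graph, source=head1.i, target=head2.i)
    # Collect the relation label of each edge along the path.
    return [graph.edges[a, b]["dep"] for a, b in zip(path, path[1:])]

doc = nlp("The company Apple was founded by Steve Jobs in California.")
ents = {ent.text: ent.root for ent in doc.ents}
print(shortest_dependency_path(doc, ents["Apple"], ents["Steve Jobs"]))
# e.g. ['nsubjpass', 'agent', 'pobj'], silver syntactic relations for pre-training

Because the labels come from an automatic parser rather than manual annotation, such data can be generated at scale for any new domain, which is the property the paper exploits for intermediate pre-training.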

