A4 Peer-reviewed article in conference proceedings
Overview of the CLEF eHealth Evaluation Lab 2020
Authors: Lorraine Goeuriot, Hanna Suominen, Liadh Kelly, Antonio Miranda-Escalada, Martin Krallinger, Zhengyang Liu, Gabriella Pasi, Gabriela Gonzalez Saez, Marco Viviani, Chenchen Xu
Editors: Avi Arampatzis, Evangelos Kanoulas, Theodora Tsikrika, Stefanos Vrochidis, Hideo Joho, Christina Lioma, Carsten Eickhoff, Aurélie Névéol, Linda Cappellato, Nicola Ferro
Established conference name: International Conference of the Cross-Language Evaluation Forum for European Languages
Publication year: 2020
Journal: Lecture Notes in Computer Science
Title of the edited volume: Experimental IR Meets Multilinguality, Multimodality, and Interaction
Series name: Lecture Notes in Computer Science
Volume: 12260
First page: 255
Last page: 271
ISBN: 978-3-030-58218-0
eISBN: 978-3-030-58219-7
ISSN: 0302-9743
DOI: https://doi.org/10.1007/978-3-030-58219-7_19
In this paper, we provide an overview of the eighth annual edition of the Conference and Labs of the Evaluation Forum (CLEF) eHealth evaluation lab. CLEF eHealth 2020 continues our development of evaluation tasks and resources, ongoing since 2012, to address laypeople's difficulties in retrieving and digesting valid and relevant information in their preferred language in order to make health-centred decisions. This year's lab advertised two tasks. Task 1 on Information Extraction (IE) was new and focused on automatic clinical coding of diagnosis and procedure codes from the tenth revision of the International Statistical Classification of Diseases and Related Health Problems (ICD10), as well as finding the corresponding evidence text snippets, for clinical case documents in Spanish. Task 2 on Information Retrieval (IR) was a novel extension of the lab's most popular and established task, Consumer Health Search (CHS). In total, 55 submissions were made to these tasks. Herein, we describe the resources created for the two tasks and the evaluation methodology adopted. We also summarize lab submissions and results. As in previous years, the organizers have made the data and tools associated with the lab tasks available for future research and development. The ongoing substantial community interest in the tasks and their resources has led to CLEF eHealth maturing as a primary venue for all interdisciplinary actors of the ecosystem for producing, processing, and consuming electronic health information.
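To make the Task 1 setting more concrete, the following is a minimal sketch of what a clinical-coding prediction with an evidence snippet might look like, and how a simple set-based precision and recall over assigned codes could be computed. The document identifiers, codes, snippets, field layout, and metric here are illustrative assumptions only; the lab's actual file formats and evaluation measures are defined in the task overview materials.

```python
from typing import Dict, List, Set, Tuple

# Hypothetical prediction format: each clinical case document maps to a list of
# (ICD10 code, evidence text snippet) pairs. Codes and snippets are invented
# examples, not taken from the actual task corpus.
predictions: Dict[str, List[Tuple[str, str]]] = {
    "case_001": [
        ("i10", "hipertensión arterial"),           # diagnosis code + evidence span
        ("e11.9", "diabetes mellitus tipo 2"),
    ],
}

# Hypothetical gold standard: document id -> set of correct ICD10 codes.
gold: Dict[str, Set[str]] = {
    "case_001": {"i10", "e11.9", "j45.9"},
}


def set_precision_recall(pred: Dict[str, List[Tuple[str, str]]],
                         ref: Dict[str, Set[str]]) -> Tuple[float, float]:
    """Micro-averaged set-based precision and recall over assigned codes."""
    tp = fp = fn = 0
    for doc_id, ref_codes in ref.items():
        pred_codes = {code for code, _snippet in pred.get(doc_id, [])}
        tp += len(pred_codes & ref_codes)
        fp += len(pred_codes - ref_codes)
        fn += len(ref_codes - pred_codes)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


if __name__ == "__main__":
    p, r = set_precision_recall(predictions, gold)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=1.00 recall=0.67
```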