A4 Peer-reviewed article in conference proceedings
On evaluation of automatically generated clinical discharge summaries
Authors: Moen H., Heimonen J., Murtola L., Airola A., Pahikkala T., Terävä V., Danielsson-Ojala R., Salakoski T., Salanterä S.
Editors: Jaatun E.A.A., Brooks E., Berntsen K., Gilstad H., Jaatun M.G.
Conference name: Practical Aspects of Health Informatics
Publisher: CEUR-WS
Publication year: 2014
Journal: CEUR Workshop Proceedings
Title of the proceedings: Proceedings of the 2nd European Workshop on Practical Aspects of Health Informatics
Volume: 1251
First page: 101
Last page: 114
Number of pages: 14
URL: http://ceur-ws.org/Vol-1251/paper10.pdf
Abstract: Proper evaluation is crucial for developing high-quality computerized text summarization systems. In the clinical domain, the specialized information needs of clinicians complicate the task of evaluating automatically produced clinical text summaries. In this paper we present and compare the results of both manual and automatic evaluation of computer-generated summaries. The summaries are composed of sentences extracted from the free text of clinical daily notes written by physicians about patient care, with each summary corresponding to an individual care episode. The primary purpose of this study is to find out whether the automatic evaluation correlates with the manual evaluation, and we analyze which of the automatic evaluation metrics correlates most strongly with the scores from the manual evaluation. The manual evaluation is performed by domain experts using an evaluation tool that we developed as part of this study. The results give insight into the reliability of the selected approach to automatic evaluation, which in turn supports further development of the underlying summarization system. The evaluation results are promising in that the ranking order of the summarization methods produced by all the automatic evaluation metrics corresponds well with that of the manual evaluation. These preliminary results also indicate that the automatic evaluation setup can serve as an automated and reliable way to rank clinical summarization methods internally in terms of their performance.
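The abstract does not specify how the agreement between the automatic and manual rankings was measured. As a minimal illustrative sketch only, assuming per-method scores from one automatic metric and mean expert ratings are available, rank agreement could be checked with Spearman's rank correlation; all method names and score values below are hypothetical.

# Minimal sketch (hypothetical data, not from the paper): checking whether
# an automatic evaluation metric ranks summarization methods in the same
# order as the manual (expert) evaluation, using Spearman's rank correlation.
from scipy.stats import spearmanr

methods = ["method_A", "method_B", "method_C", "method_D"]  # hypothetical
automatic_scores = [0.42, 0.35, 0.51, 0.28]  # metric score per method (hypothetical)
manual_scores = [3.8, 3.1, 4.2, 2.9]         # mean expert rating per method (hypothetical)

# List the methods in the order the automatic metric ranks them, highest first.
for name, auto, manual in sorted(zip(methods, automatic_scores, manual_scores),
                                 key=lambda t: -t[1]):
    print(f"{name}: automatic={auto:.2f}, manual={manual:.1f}")

rho, p_value = spearmanr(automatic_scores, manual_scores)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
# rho close to 1 means the automatic metric orders the methods
# the same way the domain experts do.

A rank correlation such as Spearman's rho (or Kendall's tau) fits this kind of check because the abstract's claim concerns the ranking order of the methods rather than agreement between absolute score values.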