A4 Peer-reviewed article in conference proceedings

On evaluation of automatically generated clinical discharge summaries




Authors: Moen H., Heimonen J., Murtola L., Airola A., Pahikkala T., Terävä V., Danielsson-Ojala R., Salakoski T., Salanterä S.

Editors: Jaatun E.A.A., Brooks E., Berntsen K., Gilstad H., Jaatun M.G.

Established conference name: Practical Aspects of Health Informatics

Publisher: CEUR-WS

Publication year: 2014

Journal: CEUR Workshop Proceedings

Title of the edited volume: Proceedings of the 2nd European Workshop on Practical Aspects of Health Informatics

Journal name in the database: CEUR Workshop Proceedings

Volume: 1251

First page: 101

Last page: 114

Number of pages: 14

Web address: http://ceur-ws.org/Vol-1251/paper10.pdf


Abstract

Proper evaluation is crucial for developing high-quality computerized text summarization systems. In the clinical domain, the specialized information needs of clinicians complicate the task of evaluating automatically produced clinical text summaries. In this paper we present and compare the results of both manual and automatic evaluation of computer-generated summaries. These summaries are composed of sentence extracts from the free text of daily clinical notes written by physicians about patient care, each corresponding to an individual care episode. The primary purpose of this study is to determine whether the automatic evaluation correlates with the manual evaluation, and we analyze which of the automatic evaluation metrics correlates most strongly with the scores from the manual evaluation. The manual evaluation is performed by domain experts using an evaluation tool that we developed as part of this study. Ultimately, this study helps us assess the reliability of the selected approach to automatic evaluation, so that we can further develop the underlying summarization system. The evaluation results seem promising in that the ranking order of the various summarization methods, as ranked by each of the automatic evaluation metrics, corresponds well with that of the manual evaluation. These preliminary results also indicate that the utilized automatic evaluation setup can serve as an automated and reliable way to rank clinical summarization methods internally in terms of their performance.




Last updated on 2024-11-26 at 16:32