A4 Refereed article in a conference publication
On evaluation of automatically generated clinical discharge summaries
Authors: Moen H., Heimonen J., Murtola L., Airola A., Pahikkala T., Terävä V., Danielsson-Ojala R., Salakoski T., Salanterä S.
Editors: Jaatun E.A.A., Brooks E., Berntsen K., Gilstad H., Jaatun M.G.
Conference name: Practical Aspects of Health Informatics
Publisher: CEUR-WS
Publication year: 2014
Journal: CEUR Workshop Proceedings
Book title: Proceedings of the 2nd European Workshop on Practical Aspects of Health Informatics
Journal name in source: CEUR Workshop Proceedings
Volume: 1251
First page: 101
Last page: 114
Number of pages: 14
Web address: http://ceur-ws.org/Vol-1251/paper10.pdf
Proper evaluation is crucial for developing high-quality computerized text summarization systems. In the clinical domain, the specialized information needs of clinicians complicate the task of evaluating automatically produced clinical text summaries. In this paper, we present and compare the results of both manual and automatic evaluation of computer-generated summaries. These summaries are composed of sentences extracted from the free text of daily clinical notes written by physicians about patient care, each summary corresponding to an individual care episode. The primary purpose of this study is to determine whether the automatic evaluation correlates with the manual evaluation, and we analyze which of the automatic evaluation metrics correlates most strongly with the manual evaluation scores. The manual evaluation is performed by domain experts who follow an evaluation tool developed as part of this study. We thereby hope to gain insight into the reliability of the selected approach to automatic evaluation, which in turn supports further development of the underlying summarization system. The evaluation results are promising: the ranking of the summarization methods produced by each of the automatic evaluation metrics corresponds well with the ranking obtained from the manual evaluation. These preliminary results also indicate that the automatic evaluation setup can serve as a reliable, automated way to internally rank clinical summarization methods by performance.
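The analysis described above amounts to checking whether an automatic metric orders summarization methods the same way as expert judgments. Below is a minimal illustrative sketch (not the authors' code) of such a check using rank correlation; the method names and scores are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch: does an automatic metric rank summarization methods
# the same way as manual expert scores? All values below are hypothetical.
from scipy.stats import spearmanr, kendalltau

# Hypothetical mean scores per summarization method (higher is better).
methods = ["random", "lead", "tfidf", "graph"]
automatic_scores = [0.21, 0.34, 0.42, 0.47]   # e.g., a ROUGE-style metric
manual_scores = [1.8, 2.6, 3.1, 3.4]          # e.g., mean expert rating

# Rank correlation between the automatic metric and the manual evaluation.
rho, rho_p = spearmanr(automatic_scores, manual_scores)
tau, tau_p = kendalltau(automatic_scores, manual_scores)

print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
print(f"Kendall tau  = {tau:.2f} (p = {tau_p:.3f})")

# Identical method-level rankings would give rho = tau = 1.0.
```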