A1 Peer-reviewed original article in a scientific journal
Reliability of prehospital patient classification in helicopter emergency medical service missions
Authors: Heino A, Laukkanen-Nevala P, Raatiniemi L, Tommila M, Nurmi J, Olkinuora A, Virkkunen I, Iirola T
Publisher: BMC
Publication year: 2020
Journal: BMC Emergency Medicine
Journal name in database: BMC EMERGENCY MEDICINE
Journal acronym: BMC EMERG MED
Article number: 42
Volume: 20
Issue: 1
Number of pages: 6
ISSN: 1471-227X
eISSN: 1471-227X
DOI: https://doi.org/10.1186/s12873-020-00338-7
Web address: https://bmcemergmed.biomedcentral.com/articles/10.1186/s12873-020-00338-7
Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/48474395
Background: Several scores and codes are used in prehospital clinical quality registries, but little is known about their reliability. The aim of this study was to evaluate the inter-rater reliability of the American Society of Anesthesiologists physical status (ASA-PS) classification system, the HEMS benefit score (HBS), the International Classification of Primary Care, second edition (ICPC-2), and the Eastern Cooperative Oncology Group (ECOG) performance status in a helicopter emergency medical service (HEMS) clinical quality registry (CQR).
Methods: All physicians and paramedics working in HEMS in Finland and responsible for patient registration were asked to participate in this study. The participants entered data from six fictional written missions into the national CQR. The inter-rater reliability of the ASA-PS, HBS, ICPC-2, and ECOG was evaluated using overall agreement and free-marginal multi-rater kappa (Kappa(free)).
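The free-marginal multi-rater kappa used in the Methods corrects observed agreement for chance under the assumption that raters are not constrained to fixed category proportions (chance agreement is simply 1/k for k categories). As a minimal illustrative sketch, not the study's actual analysis code, the statistic can be computed like this; the function name and data layout are assumptions for the example:

```python
# Hypothetical sketch of free-marginal multi-rater kappa (Randolph-style);
# names and data layout are illustrative, not taken from the article.

def free_marginal_kappa(counts):
    """counts[i][j] = number of raters assigning subject i to category j.

    Returns (overall_agreement, kappa_free). Assumes every subject was
    rated by the same number of raters.
    """
    k = len(counts[0])                    # number of categories
    n = sum(counts[0])                    # raters per subject
    # Observed agreement per subject: agreeing rater pairs / all rater pairs.
    per_subject = [
        (sum(c * c for c in row) - n) / (n * (n - 1))
        for row in counts
    ]
    p_o = sum(per_subject) / len(counts)  # overall agreement
    p_e = 1.0 / k                         # chance agreement with free margins
    return p_o, (p_o - p_e) / (1.0 - p_e)


# Example: 3 raters, 2 categories, 2 subjects, perfect agreement.
p_o, kappa = free_marginal_kappa([[3, 0], [0, 3]])
# p_o == 1.0, kappa == 1.0
```

With partial agreement the statistic drops below the raw agreement figure, which is why the study reports both numbers side by side.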
Results: All 59 Finnish HEMS physicians and paramedics were invited to participate in this study, of whom 43 responded and 16 did not answer. One participant was excluded due to incomplete data entry. ASA-PS had an overall agreement of 40.2% and a Kappa(free) of 0.28. HBS had an overall agreement of 44.7% and a Kappa(free) of 0.39. ICPC-2 coding had an overall agreement of 51.5% and a Kappa(free) of 0.47. ECOG had an overall agreement of 49.6% and a Kappa(free) of 0.40.
Conclusion: This study suggests marked inter-rater unreliability in prehospital patient scoring and coding, even in a relatively uniform group of practitioners working in a highly focused environment. This indicates that scores and codes should be specifically designed or adapted for prehospital use, and that users should be provided with clear and thorough instructions on how to apply them.
Downloadable publication: This is an electronic reprint of the original article.