A1 Peer-reviewed original article in a scientific journal

Reliability of prehospital patient classification in helicopter emergency medical service missions




Authors: Heino A, Laukkanen-Nevala P, Raatiniemi L, Tommila M, Nurmi J, Olkinuora A, Virkkunen I, Iirola T

Publisher: BMC

Publication year: 2020

Journal: BMC Emergency Medicine

Journal name in database: BMC EMERGENCY MEDICINE

Journal acronym: BMC EMERG MED

Article number: ARTN 42

Volume: 20

Issue: 1

Number of pages: 6

ISSN: 1471-227X

eISSN: 1471-227X

DOI: https://doi.org/10.1186/s12873-020-00338-7

Web address: https://bmcemergmed.biomedcentral.com/articles/10.1186/s12873-020-00338-7

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/48474395


Abstract
Background: Several scores and codes are used in prehospital clinical quality registries, but little is known about their reliability. The aim of this study was to evaluate the inter-rater reliability of the American Society of Anesthesiologists physical status (ASA-PS) classification system, HEMS benefit score (HBS), International Classification of Primary Care, second edition (ICPC-2) and Eastern Cooperative Oncology Group (ECOG) performance status in a helicopter emergency medical service (HEMS) clinical quality registry (CQR).
Methods: All physicians and paramedics working in HEMS in Finland and responsible for patient registration were asked to participate in this study. The participants entered data from six written fictional missions into the national CQR. The inter-rater reliability of the ASA-PS, HBS, ICPC-2 and ECOG was evaluated using overall agreement and free-marginal multi-rater kappa (Kappa(free)).
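The free-marginal multi-rater kappa mentioned above (Randolph's kappa) corrects observed agreement for chance agreement of 1/k over k categories, assuming raters are not constrained in how often they may use each category. A minimal sketch of that computation (an illustration, not the study's actual analysis code):

```python
def free_marginal_kappa(counts, n_categories):
    """Randolph's free-marginal multi-rater kappa.

    counts: one list per case, giving how many raters chose each
    category, e.g. [[5, 1], [6, 0]] for 6 raters and 2 categories.
    """
    n_cases = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement: proportion of agreeing rater pairs, averaged over cases.
    p_o = sum(
        sum(c * (c - 1) for c in case) / (n_raters * (n_raters - 1))
        for case in counts
    ) / n_cases
    p_e = 1.0 / n_categories  # chance agreement under free marginals
    return (p_o - p_e) / (1 - p_e)
```

For example, `free_marginal_kappa([[6, 0], [0, 6]], 2)` returns 1.0 (perfect agreement), while an even 3-vs-3 split on a single binary item yields a negative kappa, i.e. worse than chance.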

Results: All 59 Finnish HEMS physicians and paramedics were invited to participate in this study; 43 responded and 16 did not. One participant was excluded due to incomplete data entry. ASA-PS had an overall agreement of 40.2% and a Kappa(free) of 0.28. HBS had an overall agreement of 44.7% and a Kappa(free) of 0.39. ICPC-2 coding had an overall agreement of 51.5% and a Kappa(free) of 0.47. ECOG had an overall agreement of 49.6% and a Kappa(free) of 0.40.
Conclusion: This study suggests marked inter-rater unreliability in prehospital patient scoring and coding, even in a relatively uniform group of practitioners working in a highly focused environment. This indicates that the scores and codes should be specifically designed or adapted for prehospital use, and that users should be provided with clear and thorough instructions on how to use them.

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.





Last updated on 2024-11-26 at 23:21