A4 Refereed article in a conference publication

Different Sample Sources, Different Results? A Comparison of Online Panel and Mail Survey Respondents




Authors: Koivula Aki, Sivonen Jukka

Editors: Soares Marcelo M., Rosenzweig Elizabeth, Marcus Aaron

Conference name: International Conference on Human-Computer Interaction

Publisher: Springer Science and Business Media Deutschland GmbH

Publishing place: Cham

Publication year: 2022

Journal: International Conference on Human-Computer Interaction

Book title: 11th International Conference, DUXU 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part I

Journal name in source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Series title: Lecture Notes in Computer Science

Volume: 13321

First page: 220

Last page: 233

ISBN: 978-3-031-05896-7

eISBN: 978-3-031-05897-4

ISSN: 0302-9743

eISSN: 1611-3349

DOI: https://doi.org/10.1007/978-3-031-05897-4_16

Web address: https://link.springer.com/chapter/10.1007/978-3-031-05897-4_16


Abstract

This paper compares data and results from two different survey modes: a probability-sampled postal survey and a nonprobability-sampled online panel. Our main research objective was to explore whether the sampling methods differ in terms of nonresponse, item response bias, and selectivity. Both the postal survey and the online panel data consist of Finns aged 18–74. Altogether, 2470 respondents were included in the probability sample drawn randomly from the population register of Finland (sample size was 8000, with a response rate of 30.9%), and 1254 respondents came from an online panel administered by a market research company. We collected the data in late 2017. The findings confirmed that an online panel can improve representativeness by including more respondents from groups that are underrepresented in the traditional probability sample. However, we found that panel respondents were more likely to leave questions perceived as sensitive unanswered, which may be a sign of measurement bias related to intrusiveness. Moreover, the results indicated selection differences between the samples related to respondents’ media interests.
