A4 Refereed article in a conference publication
Different Sample Sources, Different Results? A Comparison of Online Panel and Mail Survey Respondents
Authors: Koivula Aki, Sivonen Jukka
Editors: Soares Marcelo M., Rosenzweig Elizabeth, Marcus Aaron
Conference name: International Conference on Human-Computer Interaction
Publisher: Springer Science and Business Media Deutschland GmbH
Publishing place: Cham
Publication year: 2022
Book title: 11th International Conference, DUXU 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part I
Journal name in source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Series title: Lecture Notes in Computer Science
Volume: 13321
First page: 220
Last page: 233
ISBN: 978-3-031-05896-7
eISBN: 978-3-031-05897-4
ISSN: 0302-9743
eISSN: 1611-3349
DOI: https://doi.org/10.1007/978-3-031-05897-4_16
Web address: https://link.springer.com/chapter/10.1007/978-3-031-05897-4_16
This paper compares data and results from two survey modes: a probability-sampled postal survey and a nonprobability-sampled online panel. Our main research objective was to explore whether the sampling methods differ in terms of nonresponse, item response bias, and selectivity. Both the postal survey and the online panel data consist of Finns aged 18–74. Altogether, 2470 respondents were included in the probability sample, drawn randomly from the population register of Finland (sample size 8000; response rate 30.9%), and 1254 respondents came from an online panel organized by a market research company. We collected the data in late 2017. The findings confirmed that an online panel can improve representativeness by including more respondents from groups that are underrepresented in a traditional probability sample. However, we found that panel respondents were more likely to leave questions perceived as sensitive unanswered, which may signal a measurement bias related to intrusiveness. Moreover, the results indicated selection differences between the samples related to respondents’ media interests.