A4 Peer-reviewed article in conference proceedings
Different Sample Sources, Different Results? A Comparison of Online Panel and Mail Survey Respondents
Authors: Koivula Aki, Sivonen Jukka
Editors: Soares Marcelo M., Rosenzweig Elizabeth, Marcus Aaron
Established conference name: International Conference on Human-Computer Interaction
Publisher: Springer Science and Business Media Deutschland GmbH
Place of publication: Cham
Year of publication: 2022
Title of the edited volume: 11th International Conference, DUXU 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part I
Journal name in database: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Series name: Lecture Notes in Computer Science
Volume: 13321
First page: 220
Last page: 233
ISBN: 978-3-031-05896-7
eISBN: 978-3-031-05897-4
ISSN: 0302-9743
eISSN: 1611-3349
DOI: https://doi.org/10.1007/978-3-031-05897-4_16
URL: https://link.springer.com/chapter/10.1007/978-3-031-05897-4_16
This paper compares data and results from two survey modes: a probability-sampled postal survey and a nonprobability-sampled online panel. Our main research objective was to explore whether the sampling methods differ in terms of nonresponse, item response bias, and selectivity. Both the postal survey and the online panel data consist of Finns aged 18–74. Altogether, 2470 respondents were included in the probability sample drawn randomly from the Finnish population register (the sample size was 8000, with a response rate of 30.9%), and 1254 respondents came from an online panel administered by a market research company. We collected the data in late 2017. The findings confirmed that an online panel can improve representativeness by including more respondents from groups that are underrepresented in a traditional probability sample. However, we found that panel respondents were more likely to leave questions perceived as sensitive unanswered, which may indicate a measurement bias related to intrusiveness. Moreover, the results indicated selection differences between the samples related to respondents’ media interests.