A4 Refereed article in a conference publication

Rethinking personas for fairness: Algorithmic transparency and accountability in data-driven personas




Authors: Joni Salminen, Soon-gyo Jung, Shammur A. Chowdhury, Bernard J. Jansen

Editors: Helmut Degen, Lauren Reinerman-Jones

Conference name: International Conference on Human-Computer Interaction

Publisher: Springer

Publishing place: Cham

Publication year: 2020

Journal: International Conference on Human-Computer Interaction

Book title: Artificial Intelligence in HCI: First International Conference, AI-HCI 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings

Journal name in source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Series title: Lecture Notes in Computer Science

Volume: 12217

First page: 82

Last page: 100

ISBN: 978-3-030-50333-8

eISBN: 978-3-030-50334-5

ISSN: 0302-9743

DOI: https://doi.org/10.1007/978-3-030-50334-5_6


Abstract

Algorithmic fairness criteria for machine learning models are attracting widespread research interest. They are also relevant to data-driven personas, which rely on online user data and opaque algorithmic processes. Overall, while technology offers attractive opportunities for persona design practice, several ethical concerns must be addressed to meet ethical standards and to earn end-user trust. In this research, we outline the key ethical concerns in data-driven persona generation and provide design implications for overcoming them. Good practices of data-driven persona development include (a) creating personas from outliers as well as majority groups, (b) using data to demonstrate diversity within a persona, (c) explaining the methods and their limitations as a form of transparency, and (d) triangulating the persona information to increase truthfulness.
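To make practices (a) and (b) concrete, the following is a minimal sketch of what fairness-aware persona generation could look like, assuming a simple clustering-based pipeline (scikit-learn KMeans on toy behavioural data). The feature layout, cluster count, and the 10% minority-segment threshold are illustrative assumptions, not the method used in the paper.

```python
"""Minimal sketch of fairness-aware, clustering-based persona generation.

Illustrates practices (a) and (b) from the abstract: build personas from
small/outlier user segments as well as majority segments, and report
within-segment spread so a persona does not hide internal diversity.
The clustering approach, feature values, and thresholds are assumptions
made for illustration, not the authors' actual pipeline.
"""

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# Toy user-interaction matrix: rows = users, columns = behavioural features
# (e.g., content categories viewed). A real system would use analytics data.
majority = rng.normal(loc=[5.0, 1.0, 0.5], scale=0.8, size=(900, 3))
minority = rng.normal(loc=[0.5, 6.0, 4.0], scale=0.8, size=(60, 3))
users = np.vstack([majority, minority])

k = 4  # number of candidate personas; an assumption, typically tuned in practice
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(users)

personas = []
for cluster_id in range(k):
    segment = users[labels == cluster_id]
    share = len(segment) / len(users)
    personas.append({
        "cluster": cluster_id,
        "share_of_users": round(share, 3),
        # Central tendency: the usual basis for a persona description.
        "feature_means": np.round(segment.mean(axis=0), 2).tolist(),
        # Practice (b): expose within-persona diversity, not just the mean.
        "feature_stddevs": np.round(segment.std(axis=0), 2).tolist(),
        # Practice (a): keep small segments instead of dropping them.
        "is_minority_segment": share < 0.10,
    })

for p in sorted(personas, key=lambda p: p["share_of_users"], reverse=True):
    print(p)
```

In this sketch, segments flagged as minority are retained and described alongside majority segments, and each persona carries a spread measure so designers can see how much variation the persona summarizes away.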


