Rethinking personas for fairness: Algorithmic transparency and accountability in data-driven personas
Authors: Joni Salminen, Soon-gyo Jung, Shammur A. Chowdhury, Bernard J. Jansen
Editors: Helmut Degen, Lauren Reinerman-Jones
Conference: International Conference on Human-Computer Interaction
Publisher: Springer
Place: Cham
Year: 2020
Proceedings: Artificial Intelligence in HCI: First International Conference, AI-HCI 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12217
Pages: 82–100
ISBN (print): 978-3-030-50333-8
ISBN (electronic): 978-3-030-50334-5
ISSN: 0302-9743
DOI: https://doi.org/10.1007/978-3-030-50334-5_6
Abstract: Algorithmic fairness criteria for machine learning models are gathering widespread research interest. They are also relevant in the context of data-driven personas, which rely on online user data and opaque algorithmic processes. Overall, while technology provides lucrative opportunities for persona design practice, several ethical concerns need to be addressed to adhere to ethical standards and to achieve end-user trust. In this research, we outline the key ethical concerns in data-driven persona generation and provide design implications to overcome them. Good practices of data-driven persona development include (a) creating personas also from outliers (not only majority groups), (b) using data to demonstrate diversity within a persona, (c) explaining the methods and their limitations as a form of transparency, and (d) triangulating the persona information to increase truthfulness.
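To make the opening phrase "algorithmic fairness criteria" concrete, the following is a minimal sketch (not taken from the paper) of one common criterion, demographic parity, framed for persona generation: if the pipeline "selects" which user segments become personas, demographic parity asks that selection rates be similar across demographic groups. The group labels, data, and framing here are hypothetical illustrations.

```python
# Hedged sketch (not from the paper): demographic parity as one example
# of an algorithmic fairness criterion applied to persona generation.
# A persona pipeline "selects" which user segments become personas;
# demographic parity asks that selection rates be similar across groups.

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, chosen = {}, {}
    for group, is_selected in records:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + int(is_selected)
    return {g: chosen[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference between any two groups' selection rates
    (0.0 means perfect demographic parity)."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical data: (demographic group, was this segment made a persona?)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(data)  # group A: 2/3, group B: 1/3 -> gap 1/3
```

A large gap would indicate that persona generation over-represents one group, which connects to the paper's recommendation (a) to create personas from outliers rather than only majority groups.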