'Transparency is Meant for Control' and Vice Versa: Learning from Co-designing and Evaluating Algorithmic News Recommenders

Publication

Article in Proceedings of the ACM on Human-Computer Interaction (CSCW 2022)

Abstract

Algorithmic systems that recommend content often lack transparency about how they arrive at their suggestions. One area in which recommender systems are increasingly prevalent is online news distribution. In this paper, we explore how the lack of transparency of (news) recommenders can be tackled by involving users in the design of interface elements. In the context of automated decision-making, legislative frameworks such as the GDPR in Europe introduce a specific conception of transparency, granting ‘data subjects’ specific rights and imposing obligations on service providers. An important related question is how people who use personalized recommender systems relate to the issue of transparency, not as legal data subjects but as users. This paper builds on a two-phase study of how users conceive of transparency and related issues in the context of algorithmic news recommenders. We organized co-design workshops to elicit participants’ ‘algorithmic imaginaries’ and invited them to ideate interface elements for increased transparency. This revealed the importance of combining legible transparency features with features that increase user control. We then conducted a qualitative evaluation of mock-up prototypes to investigate users’ preferences and concerns when dealing with design features that increase transparency and control. Our investigation illustrates how users’ expectations and impressions of news recommenders are closely tied to their news reading practices. On a broader level, we show how transparency and control are conceptually intertwined: transparency without control leaves users frustrated, and conversely, without a basic level of transparency into how a system works, users remain unsure of the impact of their controls.