STSM – Algorithmic Biases and Digital Divides by Jahna Otterbacher

Algorithmic Biases and Digital Divides:
STSM of Jahna Otterbacher to the University of Sheffield

Dates: 2–8 April 2016

Host: Prof. Paul Clough, Information School, University of Sheffield, UK

Modern, networked information services (e.g., search, recommendation, social media “news feeds”) make extensive use of algorithmic curation processes, designed to guide users to the most interesting and/or useful content while filtering out content judged likely to be of less value. There is little doubt that such algorithms influence the manner in which we view the world and thus, quite literally, mediate our social relations and participation in public life. However, it is becoming increasingly challenging to systematically study how algorithmic biases impact our experiences and worldviews.

For instance, given the use of proprietary, “black box” algorithms to filter content, extensive personalization, and the tight user-system feedback loop, it is no wonder that some scholars have questioned whether it is still meaningful to speak of algorithmic bias. After all, in highly personalized information environments, there is no “gold standard” against which we might compare what a given user sees. In some sense, hyper-personalization and tight user-system feedback loops mean that there is, in fact, no unbiased view.

During my week-long visit to Sheffield’s Information School, I had the opportunity to discuss these issues with Prof. Paul Clough and members of the Information Retrieval group, as well as other faculty in the Information School, such as Dr. Jo Bates. The specific goal of the visit was to develop a methodology for exploring this complex topic, which requires synthesizing literatures from both the social and computer sciences. We were able to sketch out a plan for two collaborative publications for 2016–2017. In addition, we discussed possible ways to fund our research project.

Mid-week, I also had the chance to give a talk on my related work in the context of the School’s seminar series[1] (title and abstract below). All in all, it was a very productive and helpful visit, and I am looking forward to this new collaboration with Prof. Clough and Dr. Bates!

Crowdsourcing Stereotypes? Linguistic Bias in Descriptions of People

The language we use to describe others reveals our social expectations and thus plays a key role in the maintenance of stereotypes. While most people try to avoid blatantly offensive language, such as sexist or racist terms, a subtler phenomenon, known as linguistic bias, can reveal the stereotypes that influence us. I consider two social computing settings in which participants describe others: 1) a game in which players are shown an image and must guess which descriptive labels partners will assign, and 2) collaboratively authored biographies of actors at the Internet Movie Database. In the first setting, images of women were more likely than images of men to be described using subjective adjectives (e.g., good, ugly). In the second setting, actor gender and race were correlated with authors’ language patterns, with white men being described more abstractly and subjectively than other social groups. Since the prevalence of linguistic biases in social technologies stands to reinforce stereotypes, further work must consider both the technical features and the social cues built into sharing platforms, which might influence the biases observed. I will discuss directions for future work, as well as methodological considerations.
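To make the first finding’s comparison concrete, below is a minimal Python sketch of how one might compare the rate of subjective adjectives across two groups of descriptions. The SUBJECTIVE_ADJECTIVES set and the toy corpora are illustrative placeholders, not the study’s actual lexicon or data (real work would use a validated subjectivity lexicon and part-of-speech tagging), and the chi-square test is just one reasonable choice of significance test.

    # A minimal sketch, not the study's actual pipeline: compare the rate of
    # subjective adjectives in descriptions of two groups and test the gap.
    from scipy.stats import chi2_contingency

    # Hypothetical subjectivity lexicon (placeholder for a validated resource).
    SUBJECTIVE_ADJECTIVES = {"good", "bad", "ugly", "beautiful", "nice", "strange"}

    def count_subjective(descriptions):
        """Return (subjective token count, total token count) for a corpus."""
        subjective, total = 0, 0
        for text in descriptions:
            for token in text.lower().split():
                total += 1
                if token.strip(".,!?") in SUBJECTIVE_ADJECTIVES:
                    subjective += 1
        return subjective, total

    # Toy corpora: image descriptions grouped by the pictured person's gender.
    descriptions_women = ["a beautiful woman smiling", "ugly sweater, nice smile"]
    descriptions_men = ["a man standing outdoors", "man holding a good camera"]

    sw, tw = count_subjective(descriptions_women)
    sm, tm = count_subjective(descriptions_men)

    # 2x2 contingency table: [subjective, non-subjective] tokens per group.
    table = [[sw, tw - sw], [sm, tm - sm]]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"women: {sw}/{tw} subjective; men: {sm}/{tm}; chi2={chi2:.2f}, p={p:.3f}")

With real corpora, the same table-and-test structure extends directly to other group comparisons (e.g., by race) or to other lexical categories, such as the abstract-versus-concrete distinction mentioned for the IMDb biographies.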

[1] http://www.sheffield.ac.uk/is/research/seminarseries1