
Trust, technology and truth-claims

My research focuses on the production, evaluation and contestation of truth-claims in the digital age, and my path into this tangled topic is the empirical case of human rights fact-finding.  Because it is so high-risk and so contested, this practice is a canary in the coalmine for wider professions and publics struggling to get to grips with the new information order.  Indeed, human rights practitioners working with digital evidence were sounding alarm bells about fake news well before the problem became mainstream and are at the cutting edge of verification methodologies.  A concern with trust (and with the associated concepts of trustworthiness and credibility) is at the centre of their work – and thus it is at the centre of mine.

This concern has many dimensions, but I would like to highlight two that are particularly relevant to the launch of our new and exciting Trust and Technology Initiative. First, we should reflect on the methods we use to evaluate and establish trustworthiness and credibility. As we encounter ever more unknown sources of information in our hyper-mediated world, we rely on these methods ever more heavily. Verification is, however, resource-intensive; it requires time and knowledge. Technologists have therefore been seeking and implementing ways of building credibility and trustworthiness cues into ICTs. These practices have significant implications for inequalities in our societies, a second key concern of my research, yet we are so caught up in protecting ourselves from deception and bad intentions that we often overlook them.

I often use the example of Twitter's blue verified badge to explain this: a user who has the badge has been verified by Twitter as 'authentic,' and as a result, the badge may serve as an identity verification shortcut for fact-finders evaluating a tweet's truth-claim. But who gets the badge? Twitter says the verified user must have 'an account of public interest. Typically this includes accounts maintained by users in music, acting, fashion, government, politics, religion, journalism, media, sports, business, and other key interest areas.' So it is a pretty elite (and gendered) subset who have the privilege of this shortcut to credibility. As these verification technologies proliferate, we should be mindful of whose cultural understandings of trustworthiness and credibility are built into them, who can meet these standards, who is excluded, and what the implications are for truth-claims in the public sphere.

The second dimension of the relationship between trust and technology I wish to explore briefly is how technologies interfere with, and even displace, interpersonal trust, which is often built over time through demonstrations of performance and reciprocity. Though new ICTs connect human rights fact-finders to previously inaccessible information, fact-finders still say that face-to-face interviews with witnesses are the gold standard for gathering evidence. This is in part because the information exchange between fact-finder and witness depends on a mutual trust supported by being in each other's presence. By mediating across time and place, ICTs can interfere with this trust-building, so much so that some fact-finders interviewed by The Whistle team said they eschew technology out of concern that it turns information exchange into information extraction. Other technologies are deliberately developed to replace trust by decreasing the risks we use trust to overcome. As Onora O'Neill explains so well, we trust when we don't have guarantees. We used to have to trust that our children would walk home safely from school; specifically, we would have to trust not only our children but also all the people they encountered on that walk. Now we can track them in real time on our iPhones with the Find My Friends app; we can guarantee their locations, or at least the locations of their phones. The displacement of trust by technologies is of significant consequence when trust is good for the citizens of a society (which is not always the case).

Because of its interdisciplinarity and its reach, the Trust and Technology Initiative is well poised to explore these dimensions as they relate to both research and practice. I am delighted to be a part of it!


Dr Ella McPherson
Trust & Technology Initiative; Department of Sociology, University of Cambridge

About us

The Trust & Technology Initiative brings together and drives forward interdisciplinary research from Cambridge and beyond to explore the dynamics of trust and distrust in relation to internet technologies, society and power; to better inform trustworthy design and governance of next generation tech at the research and development stage; and to promote informed, critical, and engaging voices supporting individuals, communities and institutions in light of technology’s increasing pervasiveness in societies.
