
Digital Trust Dissonance: when you’ve got them by the app, their clicks and minds will follow

Debate about trust in big tech has recently been re-ignited by Lina Khan’s 93-page article arguing that Amazon’s business practices may require antitrust intervention. Meanwhile the average Amazon customer loves their Prime account; Khan’s husband is himself a regular user. Rachel Botsman asked “Who Can You Trust?” in her 2017 book, subtitled “How Technology Brought Us Together – and Why It Could Drive Us Apart”. I would argue that neither has happened yet. Instead, many individuals have entered a psychological state that one might call ‘digital trust dissonance’.

This concept tries to explain why we observe millions of digital technology users who express distrust of the major corporations whose applications they use (Google, Facebook, Microsoft, Amazon), yet continue to use these platforms with little or no restraint, even when a company has admitted to losing their data in a major hack, as British Airways did most recently. This behaviour echoes the privacy paradox, a cognitive dissonance documented by academics such as Susan B. Barnes (2006) when studying early social networking sites like Myspace. These studies showed that people express genuine concerns about their online privacy, yet continue to broadcast personal details in public forums and on websites that openly warn them that their data is being collected. Are we seeing a similar effect with trust in digital technologies?

Distrust in technology is nothing new (e.g. the Luddites). One of the more vocal groups in society, older adults, will “frequently deploy the concept of distrust” (Knowles & Hanson, 2018) when talking about technology as a reason for non-use. However, such groups are likely to be outliers. Digital trust dissonance could have several causes. A primary cause is likely that individuals are simply ‘locked in’ to specific technology platforms, whether by their employer or their family. The time cost and compatibility issues associated with switching to an alternative platform are too high. For example, try using OpenOffice instead of Microsoft Office when all your colleagues use the Microsoft platform. You may trust the makers of OpenOffice (Apache) more than Microsoft, but look out for the inevitable email from a friend or co-worker who cannot open your documents.

Another reason for digital trust dissonance could be a lack of visibility of the negative impacts of technology usage. It is hard to see how one’s trust has been betrayed if one cannot observe any real-world impact, or ‘direct betrayal’, of that trust. This sets a dangerous precedent, as individuals may become resigned to the fact that their trust in a technology inevitably comes with some downsides, leading to a dependency that becomes hard to break.

One example of where this can go wrong is the collection of our health data. As we channel-shift our medical records to an online platform such as Patientaccess.com, we make ourselves vulnerable. This might not have a major impact on a person initially, but health data in digital form can more easily find its way to credit-scoring companies, potential employers or governments. This can happen without our knowledge (or because we didn’t read the terms and conditions or the privacy policy). A more dystopian view is that technology providers become the agent of a political regime that seeks to target specific ethnic groups or classes. This happened in Europe, where smartphone metadata was collected by EU authorities after a change in public attitudes prompted policy changes favouring the deportation of refugees instead of their integration (Meaker, 2018).

An explanation for digital trust dissonance may also follow from an explanation for the privacy paradox proposed by Hallam and Zanella (2017). They suggest “a temporally discounted balance between concerns and rewards”: in other words, the more distant a privacy breach is from the individual, the more that individual will discount it. The same may be true of trust. If we had the specific details of what data we lost in a breach, and which agents received that data, our trust would break more profoundly. Because we are bundled in with millions of others, with few or no details about our individual data, we may discount our distrust.

Entities like the EU are showing how regulation can place a ‘check’ on large multinational tech companies, which might actually increase our trust in them. However, can regulation go far enough when companies lose millions of user details to hacking by criminals or hostile regimes? How can we build healthy levels of both trust and distrust: enough trust in technology to improve our lives, with the right amount of distrust to lobby for better security, regulation and fair use of our data? It seems that the major tech companies have learned that major user data breaches (Yahoo), negative press (Facebook) or other breaches of trust (Amazon) have little effect on their business. Perhaps we’ve become so dependent on technology in the 21st century that when you’ve got ‘em by the app, their clicks and minds will follow.


Richard Dent
Department of Sociology, University of Cambridge

@richardddent
rd459@cam.ac.uk
www.richardjdent.com


Bibliography

Barnes, S. B. (2006) A privacy paradox: Social networking in the United States. First Monday. Available at: http://firstmonday.org/article/view/1394/1312

Botsman, R (2017) Who Can You Trust? Penguin Books, UK.

boyd, d. (2014) It's Complicated: The Social Lives of Networked Teens. Yale University Press, New Haven, USA.

Knowles, B. & Hanson, V. L. (2018) Older Adults’ Deployment of ‘Distrust’. ACM Trans. Comput.-Hum. Interact. 1, 1, Article 1 (March 2018).

Hallam, C. & Zanella, G. (2017) Online self-disclosure: The privacy paradox explained as a temporally discounted balance between concerns and rewards. Computers in Human Behavior, 68, 217–227.

Meaker, M. (2018) Europe is using smartphone data as a weapon to deport refugees. Wired.com. Available at: https://www.wired.co.uk/article/europe-immigration-refugees-smartphone-metadata-deportations


About us

The Trust & Technology Initiative brings together and drives forward interdisciplinary research from Cambridge and beyond to explore the dynamics of trust and distrust in relation to internet technologies, society and power; to better inform trustworthy design and governance of next generation tech at the research and development stage; and to promote informed, critical, and engaging voices supporting individuals, communities and institutions in light of technology’s increasing pervasiveness in societies.
