Digital trust dissonance: when you’ve got them by the app, their clicks and minds will follow

Richard Dent
Department of Sociology
 

Lina Khan’s 93-page article on Amazon’s business practices suggests that an antitrust legal intervention may be required, and it has sustained an ongoing public debate about trust in “big tech”. Yet Amazon customers are not giving up their Prime accounts; by Khan’s own account, even her husband remains a regular user. Over at Facebook, the fallout from the Cambridge Analytica scandal has not had a major impact on the company’s bottom line, despite an increase in distrust. In a recent survey, 81% of respondents reported that they ‘have little or no confidence Facebook will protect their data and privacy’, in line with Business Insider’s annual Digital Trust survey (Business Insider, 2018). Yet Facebook reports that its "daily active users" and "monthly active users" have not declined, and analysts suggest advertisers are not looking elsewhere (Business Insider, 2018a). An independent study by the Pew Research Center (2018) showed that more people are changing their privacy settings on Facebook. However, a mass exodus has not taken place. Have Amazon and Facebook users entered a state that we might call ‘digital trust dissonance’?

This concept tries to explain why research shows that millions of people express distrust of major technology corporations (Google, Facebook, Microsoft, Amazon), yet continue to use their platforms with little or no restraint — even when a company has admitted to losing users’ personal data in a major hack, as in the recent British Airways breach. This behaviour echoes the privacy paradox, a cognitive dissonance identified by academics such as Susan B. Barnes (2006) in studies of early social networking sites like Myspace. Barnes found that people express genuine concerns about their online privacy, yet continue to broadcast personal details in public forums and on websites that explicitly warn them that their data is being collected. Are we seeing a similar effect with trust in digital technologies?

Distrust of technology is nothing new (e.g. the Luddites). One of the more vocal groups in society, older adults, will “frequently deploy the concept of distrust“ (Knowles & Hanson, 2018) when talking about technology as a reason for non-use. However, such groups are likely to be outliers. Digital trust dissonance could have several causes. A primary cause is likely that individuals are simply ‘locked in’ to specific technology platforms, whether by their employer or their family. The time cost and compatibility issues associated with switching to an alternative platform are too high. For example, try using OpenOffice instead of Microsoft Office when all your colleagues use the Microsoft platform. You may trust the makers of OpenOffice (Apache) more than Microsoft, but look out for the inevitable email from a friend or co-worker who cannot open your documents.

Another reason for digital trust dissonance could be a lack of visibility of the negative impacts of technology usage. It is hard to see how one’s trust has been betrayed if one cannot observe any real-world impact or ‘direct betrayal’ of that trust. This sets a dangerous precedent, as individuals may become resigned to the idea that their trust in a technology inevitably comes with some downsides, leading to a dependency that becomes hard to break.

One example of where this can go wrong is the collection of our health data. As we channel-shift our medical records to online platforms, such as Patientaccess.com, we make ourselves vulnerable. This might not have a major impact on a person initially, but health data in digital form can more easily find its way to credit-scoring companies, potential employers or governments. This can happen without our knowledge (or because we did not read the terms and conditions or the privacy policy). A more dystopian view is that technology providers become the agents of a political regime that seeks to target specific ethnic groups or classes. This happened in Europe when smartphone metadata was collected by EU authorities after a change in public attitudes prompted policies favouring the deportation of refugees over their integration.

An explanation for digital trust dissonance may also follow from an explanation for the privacy paradox proposed by Hallam and Zanella (2017). They suggest “a temporally discounted balance between concerns and rewards”. In other words, the more distant a privacy breach feels to the individual, the more that individual will discount it. The same may be true of trust. If we had the specific details of what data we lost in a breach, and which agents received that data, our trust would break more profoundly. If we are bundled in with millions of others, with few or no details about our individual data, we may discount our distrust.

Entities like the EU are showing how regulation can place a ‘check’ on large multinational tech companies (e.g. Google), which might actually increase our trust in them. However, can regulation go far enough when companies lose millions of user details to hacking by criminals or hostile regimes? How can we build healthy levels of both trust and distrust: enough trust in technology to improve our lives, combined with the right amount of distrust to lobby for better security, regulation and fair use of our data? The major tech companies appear to have learned that major user data breaches, negative press and wider breaches of social trust have little effect on their business. Perhaps we’ve become so dependent on technology in the 21st century that when you’ve got ‘em by the app, their clicks and minds will follow.
