What is trust in technology? Conceptual bases, common pitfalls and the contribution of trust research

Dr Frens Kroeger
Centre for Trust, Peace and Social Relations, Coventry University
 

In a fundamental sense, all technology depends on trust. What makes technology ‘technology’ is precisely the fact that most users do not know – and do not need to know – how it works; instead, they hold the confident positive expectation that a mechanism which is ultimately opaque to them will bring about the desired outcome.

Consequently, when we talk about technology, we also need to talk about trust. After all, technologies can work flawlessly but still be rejected by an untrusting audience; conversely, and potentially even worse, deeply flawed technologies can spread to all corners of the globe based on the misplaced trust of users.

While it is highly positive that the discussion on trust in technology is widening and being taken up by more and more technology experts, it still needs deepening. All too often in the newly emerging research on trust and technology, there appears to be an implicit assumption that it is the specific technology under investigation that lends complexity and intrigue to the topic, whereas trust is presumed to be more or less self-explanatory. Often this seems to be driven by the idea that, after all, we all know how trust works in our daily practice; of course, if this were a valid hermeneutical principle, social science as a whole would be largely redundant.

Equally often, trust is tacitly equated with other terms (such as security, confidentiality or risk, to name but a few) which turn out to be the real focus of the work presented, with trust shoehorned in as an afterthought that only seemingly links the piece to a novel debate. This is especially visible in conference presentations, which will often mention trust in the title, the introduction and the conclusion but nowhere in the main body, which instead deals with the concept that really sits at the centre of the researcher's interest. Similarly, many empirically oriented studies go to great pains to operationalise trust, but on closer inspection what is being operationalised is often a related phenomenon, for instance the adoption or use of a technology, even though we know that users can adopt a technology without fully trusting it, while others may trust it but still choose not to adopt it for a myriad of different reasons.

Why is this problematic? When trust is chosen as a label, but no real rigour is invested into its understanding and conceptualisation, we end up talking at cross-purposes. For instance, when I was a member of the team that compiled the first annotated bibliography on Trust in Artificial Intelligence for the Partnership on AI (2019), a key problem we encountered in categorising the corpus of texts was that the majority of papers were not communicating with each other in any meaningful way; much of the time they were effectively talking about virtually unrelated problems.

This is liable to keep the study of trust in technology from achieving coherence, and to limit both the insight and the impact that research on the topic can deliver. Even worse, if we purport to talk about trust but fail to do so with conceptual rigour, any intervention we design may miss the mark and even facilitate the development of factually untrustworthy technologies.

I want to argue that the way to achieve the required coherence is to draw on the insights and concepts provided by trust research, an established and mature field of study. The systematic study of trust, arguably starting with some of the early twentieth-century classics, has long solidified into a research field in its own right, with dedicated conferences, professorial appointments and research centres. It is therefore imperative for the debate on trust in technology to draw more strongly on the rich insights this interdisciplinary field has produced, particularly over the last 25 years, incidentally with several volumes edited by Cambridge scholars leading the charge (Gambetta, 1988; Lane & Bachmann, 1998). (For a brief overview of what we commonly refer to as "trust research", see for instance the table of contents in Bachmann & Zaheer, 2008.)

What would be some of the first and most basic lessons we can draw from this rich vein of research? Within this very brief format I cannot, of course, provide an exhaustive list, but we can consider at least a few of the most fundamental toeholds. As a very first step on an admittedly long road, we should make sure that, at a minimum, we always distinguish clearly between different groups of trustors, trusted objects and trust dimensions. The straightforward question to ask is: who trusts what, and in what respect?

While this may seem trivial, at this stage it is anything but. In my research on trust in autonomous vehicles (AV), I often encounter simplistic surveys investigating "what percentage of people in country X trust AV". In reality, this may mean quite different things: respondents may, for instance, trust the AV to keep its driver safe, while on closer inspection we would find that they are not confident the privacy of their data will be preserved.

On reflection, we may also note that there are further stakeholder groups which matter, and that their trust requirements differ from each other (Pirson & Malhotra, 2011); for instance, other road users will want to trust that AV are safe not just for drivers but for cyclists, pedestrians and pets too; car rental companies may choose to focus on reliability and cost effectiveness; and insurance agencies need to be assured regarding the legal liabilities created by autonomous driving.

Even the question of what is being trusted may not always be as straightforward as it may seem at first. For instance, do the trust problems which are frequently diagnosed in regard to AI (Partnership on AI, 2019) relate to users' distrust of the algorithm as a technology, to the purposes for which the algorithm is being employed, or even to the organisation developing and deploying the algorithm?

(For algorithms making recommendations on consequential matters as different as bail, grade distributions or children's social care, it makes a big difference whether suspicion relates to the data that the algorithm was trained on, or to the question of whether the respective agency intends to use the technology as a pseudo-objective justification for a socio-political agenda.) To complicate things further, each of these objects has identifiable analogues of the ability, benevolence and integrity that we look for in human trustees, and the relationships between these interlinked trust objects, situated across different analytical levels, are complex and non-trivial (Kroeger, 2012, 2017).

Embracing these and many more advanced concepts and mechanisms, from the genesis of System Trust through the preconditions for rapidly evolving Swift Trust to the possibility of simultaneous trust and distrust, and contextualising them to the unique setting of individual technologies will enable the study of trust in technology to make rapid advances as a coherent field, one whose research findings relate to each other in ways that enable fruitful communication and add value both to individual studies and to the field as a whole. First and most importantly, however, I think we will all need to agree on one thing: when we talk about trust and technology, we need to give both equal attention. Leveraging the insights that trust research has created over the last decades will be a central tool in this endeavour.

References

Bachmann, R. & Zaheer, A. (eds.) (2008). Landmark Papers on Trust. 2 vols. Cheltenham: Edward Elgar.
Gambetta, D. (ed.) (1988). Trust: Making and Breaking Cooperative Relations. Oxford: Blackwell.
Kroeger, F. (2012). Trusting Organizations: The Institutionalization of Trust in Interorganizational Relationships. Organization 19: 743-763.
Kroeger, F. (2017). Facework: Creating trust in systems, institutions and organisations. Cambridge Journal of Economics 41: 487-514.
Lane, C. & Bachmann, R. (eds.) (1998). Trust Within and Between Organizations: Conceptual Issues and Empirical Applications. Oxford: Oxford University Press.
Partnership on AI (2019). Human-AI Collaboration Trust Literature Review – Key Insights and Bibliography.
Pirson, M. & Malhotra, D. (2011). Foundations of Organizational Trust: What Matters to Different Stakeholders? Organization Science 22: 1087-1104.



 
