
Trust and Technology

Trust has been the focus of Frens Kroeger's research for over twelve years. (In fact, he has never had a publication that did not have trust in its title.)

His most recent research interests concern trust in Artificial Intelligence and trust in autonomous vehicles. (For the latter, he has just started a three-year EPSRC-funded residency at the government's Transport Systems Catapult to research this topic.) Alongside this runs a continuing interest in the theorisation of trust across contexts.

As a social scientist, Frens firmly believes that trust in technology cannot be reduced to its surface manifestations, but that instead we need to draw on concepts such as system trust. This form of trust, which has been a long-standing research interest of his and which was originally described by theorists like Luhmann, differs both from trust in objects and from interpersonal trust.

For instance, in the case of self-driving cars – much like the more long-standing trust in transport by airplane – the trust of users and adopters will refer to the vehicle not as an object, but as a manifestation of the complex systems which educate, instruct and monitor engineers, pilots, or coders. Accordingly, the question of trust is not one of simple interface design (à la "will people trust the vehicle more if it communicates in a soothing female voice"). Instead, users may be wary not only of the vehicle's technological capability to keep them safe, but also of their own legal liability in using a self-driving car, or of the vehicle's ethical parameters. (See the "trolley problem": in a situation where the death of a road user cannot be avoided, whose death should the vehicle choose to accept – the old lady or the young child in the road, or indeed the driver of the vehicle?) Similar questions are the subject of public debate over AI: who enables the AI to understand what it "really" needs to do and avoid perverse consequences? But also: who defines or monitors the ethical parameters of algorithms that structure our interaction with friends through social media, that may or may not reproduce structural racism in their recommendations for parole, or that may even be implemented in intelligent weaponry in the future?

These examples show that trust in technology is similar to, but certainly not identical to, trust in human actors, both in terms of its dimensions and of the complex relationships between trust and distrust. These problems need to be defined and researched by drawing on a long history of trust research, while at the same time emphasising their contrast to the predominant, purely interpersonal tradition. As part of this, Frens has striven to develop multi- and cross-level models that help explain how trust in complex systems may arise in the first place (for instance, using concepts like institutionalisation, trust cultures, and "facework"), and how we may in future achieve a balance of "optimal trust" in emerging technologies.

 

Dr Frens Kroeger
Centre for Trust, Peace and Social Relations, Coventry University

About us

The Trust & Technology Initiative brings together and drives forward interdisciplinary research from Cambridge and beyond to explore the dynamics of trust and distrust in relation to internet technologies, society and power; to better inform trustworthy design and governance of next generation tech at the research and development stage; and to promote informed, critical, and engaging voices supporting individuals, communities and institutions in light of technology’s increasing pervasiveness in societies.
