Groundbreaking report on trustworthy AI published

58 experts in the technical and policy aspects of AI have jointly authored a groundbreaking report proposing ten detailed, concrete steps that AI companies should take to move towards trustworthy AI development.

For AI developers to earn trust from users, civil society, governments, and other stakeholders that they are building AI responsibly, they need to move beyond stating principles to establishing mechanisms that demonstrate responsible behaviour. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction.

The report's co-authors come from a wide range of organisations and disciplines, including researchers from Cambridge University and Oxford University, industry scientists, and policy experts.

The 72-page report identifies three areas (institutional, software, and hardware) in which progress can be made on specific mechanisms.

A summary of the report is available on the website of the Centre for the Study of Existential Risk.

To access the full paper, please visit arXiv.org.