
Trust & Technology Initiative

 

A new degree in AI Ethics is being launched at Cambridge, aiming to teach professionals in all areas of life — from engineers and policymakers to health administrators and HR managers — how to use AI for good, not ill. The programme is a collaboration between the Leverhulme Centre for the Future of Intelligence (CFI) and the University of Cambridge’s Institute for Continuing Education.

The ‘Master of Studies in AI Ethics and Society’ promises to develop leaders who can confidently tackle the most pressing AI challenges facing their workplaces. These include issues of privacy, surveillance, justice, fairness, algorithmic bias, misinformation, microtargeting, Big Data, responsible innovation and data governance.

The curriculum spans a wide range of academic areas, including philosophy, machine learning, policy, race theory, design, computer science, engineering, and law. Run by a specialist research centre, the course will draw on the latest research in the field, taught by world-leading experts.

Dedicated to meeting the practical needs of professionals, the course will address concrete questions such as:

  • How can I tell if an AI product is trustworthy? 
  • How can I anticipate and mitigate possible negative impacts of a technology?
  • How can I design a process of responsible innovation for my business?
  • How do I safeguard against algorithmic bias?
  • How do I keep data private, secure, and properly managed?
  • How can I involve diverse stakeholders in AI decision-making?

Applications for the new degree close on 31 March 2021, with the first cohort commencing in October 2021.
