Abstracts of events with a focus on the ethics of design and utilisation of Artificial Intelligence.

 

 

Human Values and Explainable Artificial Intelligence

Talk by Dr Rune Nyrup, Leverhulme Centre for the Future of Intelligence

A common objection to the use of artificial intelligence in decision-making is the concern that it is often difficult to explain or understand how AI systems make decisions. There is a growing body of technical AI research developing techniques for making AI more “explainable” or “interpretable”. However, it is still not well understood why this is an important property for an AI system to possess, or what types of explanations are most important. While there are empirical studies of which types of explanations individuals subjected to AI decision-making find satisfactory, psychological evidence suggests people’s sense of understanding is often unreliable and easy to manipulate. In this paper, Rune argues that a pragmatist account of explanation provides a fruitful framework for exploring the problem of AI Explainability, which allows us to combine normative and empirical perspectives on user values.

 

The Algorithm is Going to Get You: Should We Fear the Rise of AI in Criminal Justice?

Talk by Matthew Bland, PhD Candidate in Criminology

With the use of artificial intelligence techniques increasingly prominent in public discourse, there have been several recent examples of the media focussing attention on the use of algorithms in criminal justice settings. The prevailing sentiment of these pieces has been one of strong caution. A typical consumer of these stories may be left with the sense that algorithms can lead to, at best, a biased outcome, and that it would be far better to leave practitioners to their own professional judgements. The talk expanded on this important debate in detail, with specific reference to his experiences of working with machine-learning techniques in criminal justice forecasts in domestic abuse cases, reoffending and case solvability. The discussion was mostly non-technical but drew on practical examples of the use of these methods to illustrate issues around ethics, bias and implementation.

 

The Methodology and Ethics of Targeting

Lecture by Prof David Stillwell, delivered at Trinity College
hosted by the Leverhulme Centre for the Future of Intelligence (CFI)
 

David Stillwell (University Lecturer in Data Analytics & Quantitative Social Science and Deputy Director of the Psychometrics Centre) spoke about the use of psychometrics in personalised targeting. Governments and companies can now model and predict the beliefs, preferences, and behaviour of small groups and even individuals – allowing them to “target” interventions, messages, and services much more narrowly. These new forms of targeting present huge opportunities to make valuable interventions more effective, for example by delivering public services to those most in need of them. However, the use of more fine-grained information about individuals and groups also raises huge risks, challenging key notions of privacy, fairness, and autonomy.

 

The Future of AI: Language, Society, Technology

September 2019 workshop at the Centre for Research in the Arts, Social Sciences & Humanities (CRASSH)

This workshop, the third in a series on the future of artificial intelligence, focussed on the impact of artificial intelligence on society, specifically on language-based technologies at the intersection of AI and ICT (henceforth ‘Artificially Intelligent Communications Technologies’ or ‘AICT’) – namely, speech technology, natural language processing, smart telecommunications and social media. The social impact of these technologies is already becoming apparent. Intelligent conversational agents such as Siri (Apple), Cortana (Microsoft) and Alexa (Amazon) are already widely used, and, in the near future, a new generation of Virtual Personal Assistants (VPAs) will emerge that will increasingly influence all aspects of our lives, from relatively mundane tasks (e.g. turning the heating on and off) to highly significant activities (e.g. influencing how we vote in national elections). Crucially, our interactions with these devices will be predominantly language-based.

Despite this, the specific linguistic, ethical, psychological, sociological, legal and technical challenges posed by AICT have rarely received focused attention. The workshop examined various aspects of the social impact of AICT-based systems in modern digital democracies, from both practical and theoretical perspectives.

The workshop featured speakers from Cambridge University, the Free University of Brussels, the UK Department for Digital, Culture, Media & Sport and PricewaterhouseCoopers. Trust & Technology Initiative founding member Dr Ella McPherson spoke about 'Digital Human Rights Reporting and The Politics of Intelligence.'

Videos of all presentations at the Future of AI workshop can be accessed via the CRASSH website.

 

Lenses or Mirrors? How Algorithms Affect Ways of Seeing Race and Gender

Event by Cambridge Digital Humanities and the Power and Vision research group (2018)

How does online behaviour feed the construction of our digital selves? In what ways do algorithmic curation processes identify, modulate, and reconstruct identity through the juxtaposition of digital media? How do they reproduce, reinforce or challenge existing inequalities and biases? Ways of Machine Seeing and Power and Vision at CRASSH invite you to a half-day workshop exploring algorithms both as mirrors of existing societal biases and inequalities, and as lenses that contribute to the perpetuation of those biases. As Online Social Networks (OSNs) have increasingly become virtual and cyberphysical prosthetics of modern social life, including the practices of image sharing, curation, processing and interpretation, images have come to wield immense power.
