
Towards accountable algorithmic systems

Dr Jat Singh
Department of Computer Science and Technology

‘Algorithm’, once a term used mostly by technical specialists, has now become mainstream. The summer of 2020 saw students protesting against unfair algorithmic grade allocations, “f**k the algorithm” being one of their chants. That protest is but one example of people pushing back against an algorithmic process. And as algorithms continue to pervade and impact our lives, accountability will be increasingly on the agenda.

‘Algorithmic systems’ – colloquially describing systems with some technical or data-driven elements – are inherently socio-technical. Not only are people affected by these systems, but people are involved in their construction, operation and use. Views that technology is neutral, and that technology-related issues can be addressed (solely) through technical fixes, are rapidly fading. Rather, we now see ‘algorithmic accountability’ as a burgeoning area of interdisciplinary research, one that explicitly recognises that tackling issues in this context involves organisational, political, economic and social considerations, not just technical ones.

The Compliant and Accountable Systems Research Group[1] in the Department of Computer Science and Technology actively works in this space. The group focuses on the interplays of technology, law and policy, considering how technical and legal interventions might drive better compliance and accountability.

One key theme of our research is bringing about meaningful transparency, targeting the information (and power) asymmetries between those leveraging algorithmic systems and those who oversee or are affected by them. Improving transparency won’t, by itself, solve these issues, but meaningful information about algorithmic systems can support broader accountability regimes by facilitating understanding, oversight and scrutiny of systems and the parties involved.

Towards this, we have been developing the concept of reviewability,[2] which takes inspiration from the well-established principles of administrative law that govern public sector decision-making. Rather than focusing on the ‘inner workings’ of (or ‘explaining’) a system, reviewability seeks to enable a more holistic understanding of an algorithmic system. It does this by providing a systematic framework for determining the information – about the technical, organisational and usage elements of a system – necessary to support meaningful oversight and scrutiny. In practice, reviewability entails recording details about an algorithmic system from before it is commissioned, right through its design, deployment, operation and use, as well as its resulting consequences.
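To give a flavour of what such lifecycle record-keeping might look like, the short Python sketch below shows one possible way of logging details about an algorithmic system at each stage, from commissioning through to operation. The stage names, fields and the ReviewabilityLog class are illustrative assumptions for this sketch, not a schema prescribed by the reviewability framework itself.

# A minimal, hypothetical sketch of lifecycle record-keeping in the spirit of
# reviewability. The stages, fields and class names are illustrative
# assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class LifecycleRecord:
    stage: str          # e.g. "commissioning", "design", "deployment", "operation", "use"
    actor: str          # person or organisation responsible for this step
    description: str    # what was decided, built, changed or observed
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ReviewabilityLog:
    """Append-only log of records about an algorithmic system, kept across its lifecycle."""

    def __init__(self, system_name: str):
        self.system_name = system_name
        self.records: List[LifecycleRecord] = []

    def record(self, stage: str, actor: str, description: str) -> None:
        self.records.append(LifecycleRecord(stage, actor, description))

    def for_stage(self, stage: str) -> List[LifecycleRecord]:
        # Support oversight by retrieving everything recorded for a given stage.
        return [r for r in self.records if r.stage == stage]


# Example usage: recording details from commissioning through operation.
log = ReviewabilityLog("grade-allocation-model")
log.record("commissioning", "Exams agency", "Decided to standardise grades algorithmically.")
log.record("design", "Data science team", "Chose historical school performance as an input.")
log.record("operation", "Operations team", "Model applied to 2020 cohort; appeals process opened.")

for rec in log.for_stage("design"):
    print(rec.stage, rec.actor, rec.description)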

Another key theme that we consider is what we call rights engineering: how algorithmic systems can better account for people’s rights. Ensuring that systems accord with rights is an area of increasing attention; for example, work on issues of bias and fairness in AI often concerns equality rights, while freedom of expression considerations are raised in the context of online content moderation.

In addition to how algorithmic systems impact rights, there are also considerations regarding how algorithmic systems support individuals in exercising their rights. For instance, the GDPR gives individuals certain rights regarding the processing of their personal data. However, this is an area under-considered in practice – compliance in this space is patchy, the process of exercising one’s rights can be cumbersome, and tensions exist between rights and other concerns such as security and privacy.[3] The research community is only just beginning to scratch the surface of the interplays between systems and rights; indeed, raising awareness seems an important step forward.

Algorithmic accountability, though a nascent area, is one rapidly growing in prominence. The above represents but a few examples of the many topics and challenges in this space. There is much to do to help make emerging algorithmic systems work better for us all.

 

[2] J. Cobbe, M.S.A. Lee and J. Singh, “Reviewable automated decision-making: A framework for accountable algorithmic systems”, ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021.

[3] M. Veale, R. Binns and J. Ausloos, “When data protection by design and data subject rights clash”, International Data Privacy Law, 2018; J. Singh and J. Cobbe, “The security implications of data subject rights”, IEEE Security & Privacy, 2019.
