Abstracts of past talks at the University of Cambridge exploring the impact of social media on communication habits, the damage it can cause to democratic processes, and the legal and regulatory responses to social media platforms.

 

Civil Servants and Social Media: A minefield or the new frontier?

Lunchtime Talk by Dr Dennis Grube, Department of Politics and International Studies (POLIS)

The talk highlighted some of the unintended consequences of social media engagement by civil servants: people who once 'weren't household names, even in their own households' are now becoming identifiable and active participants on social media. With social media literacy among them still in its infancy, there are many examples of spurious use of official government Twitter handles. Should these inexperienced broadcasters be trained more widely in reputation management, and how does public perception of once-anonymous collective institutions change when individual employees become spokespersons and targets through their use of social media channels? Is trust in the efficacy and solidity of the civil service being undermined, or is the service becoming more transparent and accountable?

Dennis’ research interests at POLIS focus on the study of administrative leadership, and in particular the ways in which senior civil servants contribute to public debates in countries that operate under the Westminster system of government. 

 

Death by 1,000 Likes: Is Social Media a Threat to Democracy?

Samantha Bradshaw, Oxford Internet Institute (OII)

The use of computational propaganda to shape public attitudes has become one of the most pressing challenges for democracy. Over the past few years, there have been several attempts by foreign operatives, political parties, and populist movements to manipulate the outcome of elections by spreading disinformation, amplifying divisive rhetoric, and micro-targeting polarizing messages to voters.
 
By co-opting the advertising infrastructure, algorithms, and the user agreements that support social media platforms, computational propaganda has been leveraged to sow discord, dissent, and division among citizens in democracies around the world. The talk examined the global phenomenon of social media manipulation, as well as the legal and private self-regulatory responses currently being developed to address it.

The talk was hosted by the Technology and New Media Research Cluster, Department of Sociology.

 

Digitally Distracted

Talk by Prof Duncan Brumby (UCL) at the Department of Psychology

Work activities are constantly punctuated by interruptions, and maintaining focus can be challenging. There are three main sources of distraction. First, work tasks are often distributed across different applications (e.g., emails, browsers, documents) and devices (e.g., laptops, phones, tablets), and switching between these is cognitively demanding. Second, new digital distractions abound, from social media and breaking news stories to urgent new work requests. Third, the rise of remote work and the greater flexibility over when and where work is done come at a cost: work must now be juggled with other activities and obligations.

In this talk, Duncan discussed the results of recent research aimed at understanding how people organise their work and manage digital distractions. To investigate this question, his group has used a range of research methods and approaches, from controlled lab experiments to situated observational studies and online studies run on crowdsourcing platforms. The results of this research give insights into how people can better manage digital interruptions, and how systems can be better designed to help people maintain focus.

 

The Social Impact of Automatic Hate Speech Detection

Talk by Dr Stefanie Ullmann, Centre for Research in the Arts, Social Sciences and Humanities (CRASSH)

In this talk, Stefanie explored quarantining as a more ethical method for delimiting the spread of hate speech on social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received, and the posts are removed only if the complaints are upheld, so in the meantime they still cause the recipients psychological harm. This approach has also frequently been criticised for curtailing freedom of expression, since it requires the service providers to elaborate and implement censorship regimes.

In the last few years, an emerging generation of automatic hate speech detection systems has started to offer new strategies for dealing with this kind of offensive online material. Anticipating the future efficacy of such systems, the talk advocated an approach to online hate speech detection that is analogous to the quarantining of malicious computer software: if a post is automatically and reliably classified as harmful, it can be temporarily quarantined, and the direct recipients can receive an alert that protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of hate speech. Crucially, it provides flexible options for striking a more justifiable balance between freedom of expression and appropriate censorship.
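
To make the mechanism concrete, here is a minimal sketch of that quarantining workflow in Python. It is an illustration only, not the system described in the talk: the classifier stub, the threshold value, and all names (Post, classify_harm, route_post) are hypothetical stand-ins for a real trained model and platform infrastructure.

```python
from dataclasses import dataclass

QUARANTINE_THRESHOLD = 0.9  # hypothetical confidence cut-off


@dataclass
class Post:
    author: str
    recipient: str
    text: str


def classify_harm(text: str) -> float:
    """Stand-in for an automatic hate speech classifier.

    A real deployment would call a trained model here; this stub just
    returns a high score for a toy marker so the flow is runnable.
    """
    return 0.95 if "toxic-example" in text else 0.05


def route_post(post: Post, inbox: list, quarantined: list) -> None:
    """Deliver a post directly, or quarantine it and alert the recipient."""
    score = classify_harm(post.text)
    if score >= QUARANTINE_THRESHOLD:
        quarantined.append(post)
        # The recipient sees an alert instead of the raw content and can
        # later choose to release or discard the quarantined post.
        inbox.append(
            f"Alert: a message from {post.author} was quarantined "
            f"(harm score {score:.2f})."
        )
    else:
        inbox.append(post.text)


# Usage: a harmless post is delivered; a flagged one is held back.
inbox, quarantined = [], []
route_post(Post("alice", "bob", "See you at the seminar!"), inbox, quarantined)
route_post(Post("spammer", "bob", "toxic-example insult"), inbox, quarantined)
print(inbox)             # delivered text plus the quarantine alert
print(len(quarantined))  # 1
```

The design point the sketch tries to capture is that a flagged post is never silently deleted: it is held back, the recipient is alerted, and the decision to view or discard the content stays with them.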
