
 Why the AI impacts ecosystem must move beyond ‘near-term’ and ‘long-term’

Jess Whittlestone and Shahar Avin
Centre for the Study of Existential Risk
 

The impacts of AI are already visible in numerous domains, and research breakthroughs are likely to precipitate even greater impacts than those we are already seeing. The combination of algorithmic bias, increasing technological unemployment, and AI concentrating power in the hands of tech companies could entrench existing patterns of systemic discrimination and lock in much more extreme global inequality than we see today.[1] [2]

If advanced language models become a regular component of fake online personas, this could corrupt our information ecosystem to the extent that “the pillars of modern democratic self-government—logic, truth, and reality—are shattered”.[3] [4] The increasing integration of machine learning systems into critical infrastructure across the world holds great promise for improving the management of critical resources, but could also open up serious vulnerabilities, where accidents could result in large-scale loss of human life. More generally, AI technologies might put pressure on international law by driving frequent changes in diverse sectors, straining existing treaty regimes and inhibiting effective global governance.[5]

As many have pointed out, research on the impacts, risks and governance of AI has so far tended to ‘cluster’ into two groups: one focused on identifying and shaping the impacts of existing and imminent applications of AI in society (‘near-term’), and the other focused on the potential existential risks of developing human-level AI (‘long-term’). However, as the research community has made progress on both near- and long-term issues, we are beginning to see the limitations of both approaches.

While immediate issues arising from current applications of AI can be addressed now, and may remain very important to address in the long run, this ‘near-term’ focus is inevitably somewhat reactive, responding to problems as they become apparent. For example, widespread acknowledgement that algorithmic bias and data privacy are serious ethical problems has come in response to highly publicised mistakes, including racial bias in parole rating algorithms[6] and data privacy violations arising from the collaboration between DeepMind and the Royal Free Hospital in the UK.[7] As AI systems become more sophisticated and integrated into more important areas of society, the stakes of ‘mistakes’ will only get bigger, and addressing problems after the fact will become increasingly infeasible.[8]

On the other hand, while low-probability, extreme-stakes risks from human-level AI are worth preparing for, the abstract nature of these concerns and the broad assumptions involved make it difficult to know how they should guide decisions about the development, deployment, and governance of AI today. This is an instance of the ‘Collingridge dilemma’: before a technology is well developed it is difficult to predict its impacts, but once those impacts are apparent it is often too late to change them. By exploring the possible applications and impacts of current research trends in AI over the next 5-15 years, we may be able to find a ‘sweet spot’: impacts grounded enough in current trends that we can prepare for them now, but far enough in the future that they are not already entrenched.

The ecosystem addressing AI’s impacts on society must diversify beyond the purely ‘near-term’ or ‘long-term’ (though this does not mean that every group or sub-community must do so). To ensure that work today on AI impacts, risks, and governance stays relevant and useful as capabilities advance, we must look ahead to consider possible emerging applications and impacts of AI, and identify actions we can take today that are likely to be robustly beneficial and to mitigate risks across a range of scenarios. To ensure that we are able to prepare for and mitigate the most extreme impacts, we must explore different possible trajectories of AI development, deployment, and impact more thoroughly, rather than concentrating all attention on preparing for a subset of scenarios in which human-level AI arises suddenly. As well as identifying areas of risk and future concern, there is also an urgent need to build shared visions of the future we want to create with AI, which can guide the development and use of this technology today.[9]

Exploring these ‘mid-term’ issues will require thinking rigorously about new methodological approaches. To anticipate and prepare for future impacts of AI, we must draw on the perspectives of a wide range of stakeholder groups, both to bring in domain expertise and to ensure that a diverse range of concerns is considered. In addition, unlike work on near-term impacts, exploring the ‘mid-term’ requires more direct and prolonged engagement from the AI research community to identify plausible technology futures. Grounding future scenarios in an understanding of technical capabilities is particularly important given that our intuitions are often poor guides to the behaviours of future intelligent systems. While a broad range of tools and methods is available for exploring AI futures,[10] existing approaches tend to prioritise either deep expertise or diverse participation: none are well suited to combining the two. We must therefore find novel ways of combining existing methods so as to bring deep technical expertise and diverse stakeholder groups together.

Humans are not mere bystanders in this “AI revolution”:[11] the futures we occupy will be futures of our own making, driven by the actions of and interactions between technology developers, policymakers, diverse stakeholders and numerous publics. There is therefore an urgent need to develop “anticipatory” approaches to the study of responsible AI.

 

[1] West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute, 1-33.

[2] Lee, K. F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt.

[3] Lin, H. (2019). The existential threat from cyber-enabled information warfare. Bulletin of the Atomic Scientists, 75(4), 187-196.

[4] Seger, E., Avin, S., Pearson, G., Briers, M., Ó hÉigeartaigh, S.S., and Bacon, H. (2020). Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world. Alan Turing Institute.

[5] Maas, M. M. (2019). International law does not compute: Artificial intelligence and the development, displacement or destruction of the global legal order. Melbourne Journal of International Law, 20, 29.

[6] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.

[7] Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and technology, 7(4), 351-367.

[8] Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580.

[9] Ramos, J., Sweeney, J. A., Peach, K. and Smith, L. (2020). Our futures: by the people, for the people. How mass involvement in shaping the future can solve complex problems. Retrieved from https://media.nesta.org.uk/documents/Our_futures_by_the_people_for_the_people_WEB_v5.pdf

[10] Avin, S. (2019). Exploring artificial intelligence futures. Journal of AI Humanities. Available at https://doi.org/10.17863/CAM.35812

[11] Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60.
