What is artificial intelligence (AI)?
The definition of artificial intelligence is contested, but the term is generally used to refer to computer systems that can perform tasks normally requiring human intelligence. 60 Central Digital and Data Office, Data Ethics Framework: glossary and methodology, last updated 16 March 2020, retrieved 23 October 2023, www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-glossary-and-methodology The 2023 AI white paper defines it according to two characteristics that make it particularly difficult to regulate – adaptivity (which can make it difficult to explain the intent or logic of outcomes) and autonomy (which can make it difficult to assign responsibility for outcomes). 61 Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach, p. 22.
It is helpful to think of a continuum between narrow AI on the one hand – which can be applied only to specific purposes, e.g. playing chess – and artificial general intelligence on the other – which may have the potential to surpass the powers of the human brain. Somewhere along this continuum sits general purpose AI: a technology that enables algorithms, trained on broad data, to be applied for a variety of purposes.
The models underlying general purpose AI are known as foundation models. 62 Jones E, Explainer: What is a foundation model?, Ada Lovelace Institute, 2023, www.adalovelaceinstitute.org/resource/foundation-models-explainer/ A subset of these that are trained on and produce text are known as large language models (LLMs). These include GPT-3.5, which underpins ChatGPT. 63 Bommasani R and Liang P, Reflections on Foundation Models, Stanford Institute for Human-Centered Artificial Intelligence, 18 October 2021, https://hai.stanford.edu/news/reflections-foundation-models General purpose AI programs such as ChatGPT, which can provide responses to a wide range of user inputs, are sometimes imprecisely referred to as “generative” AI.
How does general purpose AI work?
General purpose AI relies on very large datasets (e.g. most written text available on the internet). The complex models that interpret these data – known as foundation models – learn, iteratively, what response to draw from the data when given an input, known as a prompt (e.g. a question). 64 Visual Storytelling Team and Murgia M, ‘Generative AI exists because of the transformer’, Financial Times, 12 September 2023, retrieved 23 October 2023, https://ig.ft.com/generative-ai/ The models learn partly autonomously, but also through human feedback, with rules set by their developers to tune their outputs. This process hones the models to provide outputs increasingly tailored to their intended audience, often refined on the basis of user feedback.
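At a toy scale, the core idea – learn statistical patterns from text, then continue a prompt with the most likely response – can be sketched as counting which words tend to follow which in a corpus. The corpus and `complete` function below are illustrative simplifications of our own devising, not how production foundation models work (those use neural networks with billions of parameters), but the principle of learning responses from training data is the same:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text used to train real models.
corpus = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug .".split()

# "Training": count, for each word, which words most often follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(prompt_word, length=4):
    """Greedily extend a one-word prompt with the most likely next words."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # → "the cat sat on the"
```

A real LLM does the same kind of next-token prediction over vast data, with human feedback then used to steer which continuations it prefers.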
General purpose AI programs enable foundation models to be applied by users in particular contexts. General purpose AI is capable of “emergent behaviour”, 65 Wei J, Tay Y, Bommasani R and others, Emergent Abilities of Large Language Models, Transactions on Machine Learning Research, August 2022, https://openreview.net/pdf?id=yzkSU5zdwD, p. 22. where software can learn new tasks with little additional information or training. 66 Ibid, p. 6. This has led to models acquiring skills they were not explicitly trained for, such as moderate arithmetic or a new language. 67 Ngila F, ‘A Google AI model developed a skill it wasn’t expected to have’, Quartz, 17 April 2023, retrieved 23 October 2023, https://qz.com/google-ai-skills-sundar-pichai-bard-hallucinations-1850342984 Concerningly, AI developers are unsure how these emergent behaviours arise. 68 Ornes S, ‘The Unpredictable Abilities Emerging From Large AI Models’, Quanta Magazine, 16 March 2023, retrieved 23 October 2023, www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
What are the potential benefits for public services?
General purpose AI models have already been used in a range of circumstances. Whilst the most common usage to date has been in marketing and customer relations, foundation models have also driven radical improvements in healthcare, for instance by predicting protein structures, which should speed up drug development, 69 The AlphaFold team, ‘AlphaFold: a solution to a 50-year-old grand challenge in biology’, blog, Google DeepMind, www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology developing antibody therapies 70 Callaway E, ‘How generative AI is building better antibodies’, Nature, 4 May 2023, retrieved 23 October 2023, www.nature.com/articles/d41586-023-01516-w and designing vaccines. 71 Dolgin E, ‘‘Remarkable’ AI tool designs mRNA vaccines that are more potent and stable’, Nature, 2 May 2023, retrieved 23 October 2023, www.nature.com/articles/d41586-023-01487-y AI has also been used to aid the transition to net zero, for example by informing the siting and design of new wind farms and improving the efficiency of carbon capture systems. 72 Larosa F, Hoyas S, García-Martínez S and others, ‘Halting generative AI advancements may slow down progress in climate research’, Nature Climate Change, 2023, vol.13, no.6, pp. 497–9, www.nature.com/articles/s41558-023-01686-5; Neslen A, ‘Here's how AI can help fight climate change’, World Economic Forum, 11 August 2021, retrieved 23 October 2023, www.weforum.org/agenda/2021/08/how-ai-can-fight-climate-change/
In public services, general purpose AI could be used to provide highly personalised services at scale. It has already been tested in education, improving student support services at multiple universities. 73 UNESCO, ‘Artificial intelligence in education’, [no date], retrieved 23 October 2023, www.unesco.org/en/digital-education/artificial-intelligence Its biggest impact, however, could be in schools, where student data can be used to design learning activities best suited to an individual’s subject understanding and style of learning, rather than a more standardised approach to classroom learning (albeit that further testing and careful safeguards would be required). AI has also been deployed for facial recognition in policing and to identify fraudulent activity, for example.
What are the concerns around its use in the public sector?
Concerns around the widespread use of AI in the public sector fall into three categories.
- Data privacy
Reaping the benefits from personalised healthcare or education means making personal and often sensitive data available to the algorithms – for both training purposes and for day-to-day operations. Government will need to be transparent about how it is working with the private sector to develop AI-enabled public services and how these systems are using public and personal data. There are potential areas where the AI technology itself can improve subject privacy – for example by transforming real-world data into a more anonymised form that can then be used by general purpose AI – but care will be necessary to maintain public confidence.
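As a rough sketch of the kind of pre-processing this anonymisation involves, personal identifiers can be replaced with labelled placeholders before data reaches a model. The patterns and `pseudonymise` helper below are hypothetical and deliberately minimal – real de-identification of public-service data is far harder, since combinations of seemingly innocuous fields can still re-identify individuals:

```python
import re

# Illustrative patterns only: genuine de-identification needs far more care
# (names, addresses and rare attribute combinations can all re-identify).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # 10-digit NHS number
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def pseudonymise(text):
    """Replace obviously identifying tokens with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

record = "Patient 943 476 5919 (jo.bloggs@example.com) admitted 12/03/2023."
print(pseudonymise(record))
# → "Patient [NHS_NUMBER] ([EMAIL]) admitted [DATE]."
```

The transformed record can then be used for training or day-to-day queries without exposing the raw identifiers, though maintaining public confidence would still require transparency about where the original data is held.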
- Bias and other unintended consequences
The effectiveness of general purpose AI is limited by the scope of the data on which it has been trained. ChatGPT, for example, can only answer questions based on data available until the end of its training period in 2021. But the quality of the training data also matters. Earlier narrow AI technology has displayed bias, for example when facial recognition did not recognise darker skin tones in passport photos. 82 Ahmed M, ‘UK passport photo checker shows bias against dark-skinned women’, BBC News, 8 October 2020, retrieved 23 October 2023, www.bbc.co.uk/news/technology-54349538 LLMs trained on the whole corpus of written text to date will absorb the full range of more or less obvious biases in society. 83 Bender E, Gebru T, McMillan-Major A and Shmitchell S, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 610–23, https://dl.acm.org/doi/10.1145/3442188.3445922 This bias must be identified and accounted for when using general purpose AI to deliver public services.
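One basic way to surface such bias is to audit a system’s decisions across demographic groups and compare outcome rates. The records and `acceptance_rates` helper below are hypothetical; real audits use large samples and several fairness metrics, but a simple rate comparison of this kind is often the starting point:

```python
from collections import defaultdict

# Hypothetical audit log: (group, decision) pairs, e.g. whether an automated
# photo checker accepted an applicant's passport photo.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def acceptance_rates(records):
    """Approval rate per group - a basic demographic-parity style check."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        accepted[group] += ok  # True counts as 1, False as 0
    return {group: accepted[group] / totals[group] for group in totals}

print(acceptance_rates(decisions))
# → {'group_a': 0.75, 'group_b': 0.25} - a gap that warrants investigation
```

A large gap between groups does not by itself prove unfairness, but it flags where a system needs closer scrutiny before being used to deliver public services.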
Bias may also be introduced in the parameters AI companies input – e.g. regarding what kind of jokes it is appropriate to tell about whom. However well intentioned, such parameters may reinforce the expectations of a particular social group. More generally, the novelty of AI technology means that its potential unintended consequences are difficult to foresee. Work on AI safety, within and outside government, aims to identify and mitigate a range of potential risks.
There are also concerns about AI being used to make decisions about citizens without transparency, explainability or redress. Even if known biases are eliminated, the automated nature of AI may still lead to greater trust being (mis)placed in it and therefore problems being harder to expose (as was the case, for instance, with the Post Office’s Horizon computer system).
- Civic engagement
General purpose AI will raise questions about how government makes decisions. Its widespread accessibility already means text can no longer be trusted to have been written by a human. With the advance of “deepfake” systems that can clone voices and video feeds, any online interaction may soon be of questionable authenticity. This is already an area of concern for the Financial Conduct Authority as it considers its approach to AI regulation within the financial sector. 84 Rathi N, ‘Our emerging regulatory approach to Big Tech and Artificial Intelligence’, speech at Economist Impact, 12 July 2023, www.fca.org.uk/news/speeches/our-emerging-regulatory-approach-big-tech-and-artificial-intelligence In future, politicians and civil servants will need to determine not only how far they can rely on evidence or perspectives provided by general purpose AI systems – which may substantially assist good decision-making but may also reflect the parameters set by their developers – but also how far they can still trust interactions with unknown interlocutors that do not take place face to face.
How is the UK government approaching AI in public services?
AI is already being deployed in various public service settings, from facial recognition in policing to identifying fraudulent activity. Civil servants are currently being encouraged to experiment with the use of AI where it can improve the productivity of government, although with constraints around the manipulation of classified information and caution regarding bias and data protection. 85 Knott D, ‘The use of generative AI in government’, blog, Central Digital and Data Office, 30 June 2023, retrieved 23 October 2023, https://cddo.blog.gov.uk/2023/06/30/the-use-of-generative-ai-in-government/ This approach is subject to review at the end of 2023.
Realising the potential of AI in the public sector will require strong underlying digital and data systems: AI systems rely on accessing large quantities of robust data, which does not exist uniformly across all public services (for example the NHS has repeatedly struggled to digitise services 86 Comptroller and Auditor General, Digital Transformation in the NHS, Session 2019-21, HC 317, National Audit Office, 2020, www.nao.org.uk/wp-content/uploads/2019/05/Digital-transformation-in-the-NHS.pdf ). Adopting AI will also have implications for the civil service workforce, 87 Shepard M, Technology and the future of the government workforce: How new and emerging technology will change the nature of work in government, Institute for Government, 2020, www.instituteforgovernment.org.uk/publication/report/technology-and-future-government-workforce; Waterfield S, 'AI ‘could replace two-thirds of civil service jobs’ in next 15 years’, Tech Monitor, updated 26 June 2023, retrieved 23 October 2023, https://techmonitor.ai/government-computing/automation-ai-jobs-cuts-civil-service-mid-2030s-former-chro-rupert-mcneil which has already experienced automation of many junior roles. This trend is likely to continue, particularly in areas where technology is already proven such as call centres, correspondence, case work and information gathering. 88 Cabinet Office, Places for Growth: Evidence Base, March 2018, https://committees.parliament.uk/publications/40034/documents/195499/default/, p. 178. There will be disruption as well as automation of tasks: the civil service will need to consider how to respond as firms use AI to generate bids in government procurement rounds, or if public consultations are inundated by unique responses generated by AI tools, for example.