
Explainer

Artificial intelligence: how is the government approaching regulation?

What is the government doing to regulate artificial intelligence, and how is it ensuring alignment with other countries?


The government has so far been tentative in its approach to general purpose artificial intelligence (AI) technology, which is cited as a chronic risk in the National Risk Register. 55 Cabinet Office, National Risk Register 2023, Gov.uk, August 2023, www.gov.uk/government/publications/national-risk-register-2023, p. 17. The AI white paper set out that no AI-specific regulatory regime is required for now. Instead, it aims to establish principles for regulating AI technology on a non-statutory basis, to encourage the swift and consistent application of rules by the existing regulators that already oversee the different fields in which AI might be applied. But it foresees the need to strengthen and clarify regulatory mandates once parliamentary time is available. 56 Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach, pp. 35–6. 

Who is responsible for AI in government? 

The Department for Science, Innovation and Technology (DSIT) leads on AI policy, but other departments have an interest, including No 10, HM Treasury, the Cabinet Office (particularly regarding national security) and the Foreign, Commonwealth and Development Office (regarding international negotiations). The Frontier AI taskforce, housed in DSIT and working closely with No 10 Data Science (‘10DS’) – a unit in No 10 that aims to improve the evidence base for decisions – will study the opportunities and risks posed by AI and explore ways to develop capability within government. The Office for AI, also within DSIT, is responsible for overseeing implementation of the government’s AI strategy. But this strategy is overlaid on an existing landscape in which different regulators and departments have been setting out their own, separate approaches.  

These various area-specific guidelines are complemented by other initiatives to improve the safety, fairness and transparency of AI systems, such as the AI assurance guidance from the Centre for Data Ethics and Innovation.  63 Centre for Data Ethics and Innovation, ‘CDEI portfolio of AI assurance techniques’, Gov.uk, 7 June 2023, retrieved 24 October 2023, www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques  The Central Digital and Data Office has also provided guidance for government on the use of Large Language Models (LLMs) – foundation models trained on text – across the Civil Service (combining advice from the National Cyber Security Centre and the Department for Education). 64 Knott D, ‘The use of generative AI in government’, blog, Central Digital and Data Office, 30 June 2023, retrieved 23 October 2023, https://cddo.blog.gov.uk/2023/06/30/the-use-of-generative-ai-in-government/  The net result is a complex, multi-layered set of guidelines and regulation from multiple bodies that the designers and operators of general purpose AI systems will need to incorporate into their operations. 

How is the UK government approaching AI regulation? 

The Office for AI, based in the science department and originally set up to implement the 2018 AI sector deal, 77 Department for Business and Trade et al, ‘AI Sector Deal’, Gov.uk, updated 19 May 2019, retrieved 27 October 2023, www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal  has been expanded and is responsible for encouraging the safe and innovative use of AI technology. But the UK’s current activity-based approach to regulation relies on individual regulators all maintaining internal expertise and keeping pace with current and future developments in AI technology as models become more capable. 78 Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach   The government’s proposed “central AI regulatory functions” 79 UK Artificial Intelligence Regulation Impact Assessment, publishing.service.gov.uk  are intended to help regulators with this, and a regulatory “concierge” service should make the landscape easier for innovators to navigate. 80 Department for Science, Innovation and Technology, ‘New advisory service to help businesses launch AI and digital innovations’, press release, 19 September 2023, www.gov.uk/government/news/new-advisory-service-to-help-businesses-launch-ai-and-digital-innovations   The government intends the activity-based approach to maintain the agility necessary to respond to unforeseen developments – and indeed to lead globally as these occur.  

It is not yet clear whether the monitoring, evaluation and capacity-building activity the government proposes will add up to a sufficiently capable and authoritative cross-cutting unit at the centre of government that can effectively support and co-ordinate individual regulators’ more bespoke work. The government acknowledges that more may be needed, either by strengthening existing co-operation arrangements (such as the Digital Regulation Cooperation Forum) or by building a centre of expertise that can address capability gaps. 81 Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach, p. 63.  

How does the UK’s approach compare to international peers? 

The UK government has talked about being an AI superpower, leading the development of the international rules and standards necessary for safe AI, and will host the AI Safety Summit in autumn 2023. 82 Prime Minister’s Office, ‘UK to host first major global summit on Artificial Intelligence’, press release, 7 June 2023, www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence   An international consensus on AI regulation would help to minimise harms from new technological developments. 

However, the international community’s track record in co-ordinating regulation does not inspire confidence. In social media, early regulation developed under the influence of individual technology firms gave platforms legal protection for hosting user-generated content, which made it hard to regulate online harms later. 83 Wakabayashi D, ‘Legal Shield for Websites Rattles Under Onslaught of Hate Speech’, The New York Times, 6 August 2019, www.nytimes.com/2019/08/06/technology/section-230-hate-speech.html   This mistake could be repeated with AI. While both the prime minister and the European Commission president have recently called for an AI equivalent of the Intergovernmental Panel on Climate Change, 84 von der Leyen U, 2023 State of the Union Address, European Commission, 13 September 2023, retrieved 24 October 2023, https://ec.europa.eu/commission/presscorner/detail/en/speech_23_4426  regulatory consensus on the response to climate change has itself been hard to build where national interests diverge – and similar divergences surround AI. 85 Schiermeier Q, ‘IPCC flooded by criticism’, Nature, 2010, vol. 463, no. 7281, pp. 596–7, www.nature.com/articles/463596a    

The UK’s approach to AI regulation contrasts with the EU’s planned approach in its “AI Act”, 86 European Parliament, ‘EU AI Act: first regulation on artificial intelligence’, last updated 14 June 2023, retrieved 24 October 2023, www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence  which will impose stringent controls and transparency requirements on “high risk” AI and reduced requirements on “limited risk” AI. Most general purpose AI would be classified as high risk, which implies prescriptive obligations for those developing foundation models, including stringent reporting requirements to explain how the models are trained. 

There is also a joint US–EU initiative to draft a set of voluntary rules for businesses, called the “AI Code of Conduct”, in line with their Joint Roadmap for Trustworthy AI and Risk Management. 87 European Commission, ‘TTC Joint Roadmap for Trustworthy AI and Risk Management’, last updated 4 February 2023, retrieved 24 October 2023, https://digital-strategy.ec.europa.eu/en/library/ttc-joint-roadmap-trustworthy-ai-and-risk-management  The code of conduct will be made available through the Hiroshima Process at the G7 in an effort to build international consensus on AI governance; if this succeeds, the UK risks losing influence over the development of international AI regulation. But the blueprint for an AI bill of rights in the USA, published in October 2022, could lead to a more principles-based approach aligned with the UK’s. 88 The White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights, October 2022, www.whitehouse.gov/ostp/ai-bill-of-rights/   

Notwithstanding these risks, in setting its own path the UK is positioning itself as a country in which firms can develop innovative AI technology, and whose approach the world might follow. This could be advantageous as long as an appropriate balance can be struck between innovation and the safe development of AI systems. 
