The UK government has shelved £1.3bn of funding that had been earmarked for AI and technology innovation. This includes £800m for the creation of an exascale supercomputer at the University of Edinburgh and £500m for the AI Research Resource, a further supercomputing facility comprising Isambard at the University of Bristol and Dawn at the University of Cambridge.
The funding was originally announced by the then Conservative government as part of last November's Autumn Statement. However, on Friday, a spokesperson for the Department for Science, Innovation and Technology told the BBC that the Labour government, which came to power in early July, was reallocating the money.
The spokesperson said the money had been promised by the Conservative administration but was never allocated in its budget. In a statement, they said: “The government is taking difficult and necessary spending decisions across all departments in the face of billions of pounds of unfunded commitments. This is essential to restore economic stability and deliver on our national mission of growth.
“We have launched the AI Opportunities Action Plan which will identify how we can strengthen our IT infrastructure to better suit our needs and consider how AI and other emerging technologies can best support our new Industrial Strategy.”
A £300m grant for the AIRR has already been committed and will go ahead as planned; part of it has already been earmarked for the first phase of the Dawn supercomputer. However, the second phase, which would improve its speed tenfold, is now in jeopardy, according to The Register. The BBC said the University of Edinburgh had already spent £31m on a building to house its exascale project and that the previous government had made it a priority.
“We are absolutely committed to building a technology infrastructure that creates growth and opportunity for people across the UK,” the DSIT spokesperson added.
The AIRR and exascale supercomputers were intended to enable researchers to analyze advanced AI models for safety and to drive advances in areas such as drug discovery, climate modeling, and clean energy. According to The Guardian, the University of Edinburgh's Principal and Vice-Chancellor, Professor Sir Peter Mathieson, is urgently seeking a meeting with the Technology Secretary to discuss the future of the exascale project.
Removing funding goes against commitments made in the government's AI Action Plan
The shelved funding appears to contradict a statement made on 26 July by the Secretary of State for Science, Innovation and Technology, Peter Kyle, in which he said he was “putting AI at the heart of the government's agenda to drive growth and improve our public services”.
He made the statement as part of the announcement of the new AI Action Plan, which, once developed, will set out how best to grow the country's AI sector.
Next month, Matt Clifford, one of the main organisers of November's AI Safety Summit, will publish his recommendations on how to accelerate the development and boost the adoption of useful AI products and services. An AI Opportunities Unit will also be created, made up of experts who will implement the recommendations.
The government's announcement identifies infrastructure as one of the “key enablers” of the Action Plan. Had the funding gone ahead, the exascale supercomputer and the AIRR would have provided the immense processing power needed to handle complex AI models, accelerating research into and development of AI applications.
SEE: Four ways to boost digital transformation in the UK
AI bill to focus on continued innovation, despite funding changes
While the UK Labour government has pulled back on investment in supercomputers, it has taken some steps to support AI innovation.
On July 31, Kyle told executives at Google, Microsoft, Apple, Meta, and other major tech companies that the AI bill will focus on the large, ChatGPT-style foundation models created by only a handful of companies, according to the Financial Times.
He assured the tech giants that it would not become a “Christmas tree bill”, with ever more regulation added during the legislative process. Limiting AI innovation in the UK could have a significant economic impact: a Microsoft report found that adding five years to the time it takes to implement AI could cost more than £150 billion, while, according to the IMF, the AI Action Plan could generate annual productivity gains of 1.5%.
According to the FT's sources, Kyle confirmed that the AI bill will focus on two things: making voluntary agreements between businesses and the government legally binding, and turning the AI Safety Institute into an independent government body.
AI bill, item 1: Making voluntary agreements between the government and big tech companies legally binding
At the AI Safety Summit, representatives from 28 countries signed the Bletchley Declaration, which committed them to jointly manage and mitigate AI risks while ensuring safe and responsible development and deployment.
Eight companies involved in AI development, including ChatGPT creator OpenAI, voluntarily agreed to work with the signatories, allowing them to evaluate their latest models before release so the declaration could be upheld. These companies also voluntarily signed up to the Frontier AI Safety Commitments at May's AI Summit in Seoul, which include halting the development of AI systems that pose serious, unmitigated risks.
According to the FT, UK government officials want these agreements to be made legally binding so that companies cannot back out of them if they become commercially inconvenient.
AI bill, item 2: Turning the AI Safety Institute into an independent government body
The UK AISI was launched at the AI Safety Summit with three main objectives: to assess existing AI systems for risks and vulnerabilities, to conduct fundamental research into AI safety, and to share information with other national and international stakeholders.
A government official said that making the AISI an independent body would reassure companies it was not subject to government pressure, while also strengthening its standing, the FT reported.
The UK government's stance on AI regulation versus innovation remains unclear
The Labour government has shown evidence of both limiting and supporting the development of AI in the UK.
Along with reallocating the AI funding, the government has signalled that it will place tighter restrictions on AI developers. The King's Speech in July announced that the government “will seek to establish appropriate legislation to impose requirements on those working on the development of the most powerful artificial intelligence models.”
This backs up Labour’s pre-election manifesto, which pledged to introduce “binding regulation for the handful of companies developing the most powerful AI models”. After the speech, Prime Minister Keir Starmer also told the House of Commons that his government would “harness the power of AI as we look to strengthen security frameworks”.
On the other hand, the government has promised tech companies that the AI bill will not be overly restrictive, and it has notably been in no hurry to introduce it: the bill had been expected to be among the named pieces of legislation announced as part of the King's Speech.