First the plan, now the action
A look at what's ahead in 2026 for UK AI policy
Hi everyone! It’s been a minute, but I’m back to posting. This year I’ll be branching out from familiar themes on Growth Zones and investment to dig into some of the increasingly urgent debates around AI capabilities, labour market effects and wider societal impacts.
I’ll also soon be relaunching AI Wonk Week, a regular newsletter pitched at UK AI policy watchers to make sense of what’s going on in the AI world and why it matters in a UK context. Keep an eye out!
In the meantime, I’ve put together the key policy flashpoints I see coming up over the next year, where the political debate around AI is likely to shift, and why more AI Growth Zones alone won’t cut it in 2026.
Sam
About a year ago, the landmark ‘AI Opportunities Action Plan’ set out the UK’s ambitions to be a world-leading AI hub. And a busy year it’s been. In the year since the Action Plan was published, there has been a flurry of policies and announcements on everything from AI skills to backing tomorrow’s cutting-edge tech. Properly listing all of these announcements would take pages upon pages, but a few core developments stand out:
Boosting AI infrastructure buildout, notably through the announcement of four ‘AI Growth Zones’, action to make the planning framework faster for data centres, and (in principle) accepting the recommendations from the Fingleton Review on nuclear energy regulation – which should make building nuclear in the UK faster and cheaper.
Boosting British capabilities and sovereignty in AI. A big focus has been strengthening independent UK capabilities in AI – either through sovereign compute or by developing valuable niches in parts of the AI stack. In particular, the AI Research Resource will allocate public compute, while the recently announced £100 million Advance Market Commitment and £500 million Sovereign AI Unit will aim to back British companies, including in potentially strategic AI and AI-adjacent areas such as novel compute.
Building a coalition for AI. While this has not been a primary focus, some smaller policies – notably allowing local areas to retain business rates on AI Growth Zones – have suggested a conscious effort to demonstrate wider, community benefits of AI. This is in line with AI Minister Kanishka Narayan’s oft-repeated line around the need to build a “British story” on AI.
To be sure, 2025 brought meaningful progress for the UK’s position in AI. And speaking as a policy wonk, I’d say the Action Plan has given us a fantastic suite of AI policies too; in terms of having a plan, we are overall in a pretty good place right now. So what should AI watchers be looking out for this year? As I see it, the challenge in 2026 will be shared between delivering on the Action Plan’s proposals and further developing the narrative or vision for ‘British AI’ – demonstrating how AI can benefit British society sustainably and position the UK well for further breakthroughs in AI capabilities.
Just do it
It’s worth re-emphasising: we have a lot of great plans and policies already. It’s more important than anything else to just do it and make meaningful progress on them. Having said that, there are a few delivery challenges it will be worth paying attention to over the year.
Prioritisation
Given the very limited fiscal room we are repeatedly told about, it is going to be really important to maintain good prioritisation – especially since other countries such as France and the UAE are also spending big on AI.
Through policies like the AI Research Resource, Advance Market Commitment and now the Sovereign AI Unit, there are signs this is happening: the Government is interested in ‘picking races’. In other words, it’s willing to make bets on the next wave of innovation in order to help build British capabilities at particular points in the AI stack and to support British companies to grow.
Based on recent announcements around the AMC and from Jade Leung, it seems novel compute is the first area the Government is focusing on and where it sees strategic value for the UK. This seems like a pretty good place to start, as I’ve indicated before. Being the person in the gold rush who sells the shovels is a fine place to be in terms of strategic leverage and economic rewards. There is an opportunity to develop specialised hardware for AI inference. Plus, the UK already has a foothold in the chip design industry: both Arm and Graphcore, though now owned by SoftBank, are established and notable chip designers. And there are numerous other companies springing up, focused on “in-memory” compute (which basically fuses memory and computation together to greatly reduce the need to move data around) and semiconductors made from specialist materials with better properties than silicon.
The main challenges here in 2026 will be making good decisions and maintaining a tight focus. DSIT is making some big calls now on how to expand and allocate the compute under the AIRR. So it’s going to be crucial to have the right talent in place; the risk is that generalist officials end up making decisions with a big impact on our AI ecosystem. And as the Action Plan emphasised, “spreading large amounts of compute thinly will have little impact.” The Action Plan also set out an intention to appoint specialist AIRR Programme Directors to allocate that public compute to impactful ends. I’ve heard nothing on these recently, so as far as I know the appointments haven’t been made.
Just as important will be maintaining sufficient focus; programmes like the Advance Market Commitment will be much less effective if they are sprawled out across 5 or 6 technologies rather than 2 or at most 3. For now, it looks like we have £100 million aimed at novel compute, which is good – Government just needs to avoid ‘shiny new thing’ syndrome, and continue focusing resources to make meaningful bets.
An early test for the AMC will be how many slices that £100 million ends up in – whether it’s one big £100 million contract or ten £10 million contracts will say a lot about how focused and bold the programme ends up being. Consider that it costs something like £75 million to design a chip and another £75 million to produce it in a manufacturing run at chip foundries such as TSMC. Setting up a competitive chip design firm is an expensive business.
Datacentres and energy
Probably the most politically visible problem this year will be how well and how quickly we can actually build the AI infrastructure we need. Several changes in the wake of the Action Plan have made this more likely. AI Growth Zones signal high political priority, while various planning regulations around data centres and the nuclear energy to power them are set to be streamlined. For the most part, we should just… get on and do that.
The problem is that on current timelines, the promised nuclear revolution of clean and abundant power is unlikely to arrive before the mid-2030s. It’s not obvious that the timelines need be quite that long, as some such as Jack Wiseman have argued. But either way, that leaves an awkward period in which it’s not entirely clear how we will power data centres or connect them to our creaking grid. Given data centres typically require “five nines” of availability (99.999% uptime, or roughly five minutes of downtime a year), they need reliable power. Renewables are an option, but none offers the baseload, land-efficient power that nuclear does, and they require expensive backup or battery storage.
All this is to say that the Government will likely have to make a call on what it wants to do about this mismatch in timelines to keep up the momentum. It clearly wants to support data centre buildout, but in the short term this is likely to require bridging measures such as gas. That is politically awkward for a government that makes much of its climate commitments and also wants to accelerate AI infrastructure buildout. To get out ahead of that, more clarity is needed: a framework that allows data centres to co-locate with and use gas facilities, combined with commitments to repurpose these over the next ten years and/or invest in clean energy, would help manage this interim period.
As energy and grid challenges have mounted, data centre operators have also become increasingly interested in “on-site” solutions for power. There’s more that could be done to facilitate this. In practice, many projects using on-site generation still need to connect to the grid for backup. This requires “firm” grid connections (continuous and uninterrupted power), which are not straightforward to acquire. By introducing more visible, standard routes to other forms of grid connection, such as flexible connections (where users agree to temporarily cover their own power needs during times of grid congestion), we could shorten project timelines and do more to encourage investment.
The news that a proposed data centre project in Buckinghamshire is set to be (re)quashed over climate impact considerations suggests there might also be more to do on the planning side of things. There have been plenty of moves to speed up consenting for data centres, such as the decision to bring them within the scope of the Nationally Significant Infrastructure Projects regime. But developments like this, combined with the trends described above, make the next steps in the National Policy Statement for data centres very important. There might also be a case for some kind of “data centre planning passport”, akin to the one intended for housing developments, with a pretty similar idea behind it: a default “yes” for projects that meet stringent predefined standards, which could cover things like environmental performance or waste heat reuse.
Data
A bigger elephant in the room is copyright and data. Besides computing power, data is another fundamental input to AI models – but this is a weak infrastructure pillar for us right now. The Government tried and failed to reform copyright rules in 2025. Other countries, such as Japan and the US, have adopted frameworks that make it clearer or easier to use data to train AI models. Amid this impasse, we’re stuck in the worst of both worlds: AI is already reshaping the creative sector, but it is models trained elsewhere doing so, and the UK is not reaping all the benefits it could.
To be fair, it’s not often that politicians win out against Paul McCartney. This was always going to be politically fraught and likely to be framed as a fight between the tech and creative sectors. But the Government could be thinking more (ahem) creatively about how to move on from the impasse. I think this looks like adopting a liberal framework for inputs to AI models, such as a broad Text and Data Mining exception, but combining this with much stronger measures on AI outputs.
That might include things like personality rights, which protect identifying features of performers such as likeness and voice. This already has precedent even in the stridently pro-AI US: in Tennessee, they named their version the ELVIS Act, which prohibits people from mimicking a person’s voice with AI without their permission. Given the implications of proliferating deepfakes, this is not only fairer to creatives and public figures but looks prudent as well. Combine this with other measures, such as standards around watermarking AI outputs and encouraging fairer business practices like Perplexity’s revenue-share pool.
Being stuck in the mud on copyright is compounded by the fact that efforts to leverage public sector data have fallen flat. It’s been about 18 months since the election in which it was touted, but there is still little sign of the National Data Library – of what it looks like or what it would even aim to do. Even avid AI policy watchers I’ve talked to aren’t really sure what the latest is on the scheme. That’s a huge missed opportunity. An ambitious NDL that treats public data as infrastructure could supercharge impactful, sector-specific AI applications aligned with the public interest. It could be the difference between sometimes promising but very patchy public sector AI adoption and a genuine reimagining of the way we deliver public services. Or between having merely theoretical advantages in talent and data, and actually developing world-leading AI assurance and public sector AI industries.
The vision thing
A second issue to watch will be whether the benefits of AI can be demonstrated and the risks managed – whether, to quote AI Minister Kanishka Narayan, a “British story” on AI can be articulated in 2026.
Scepticism of AI, already evident in polling, is only likely to increase in 2026. We had a taste of backlash against AI last year in the form of the copyright battles and grumbling about data centres. Meanwhile, the technology has advanced enormously over the past year, from the rise of reasoning models to autonomous “AI agents” capable of tackling ever-longer tasks. This has involved a frankly astonishing increase in capabilities. We now have models that can complete tasks that would take a human around five hours at a 50% success rate, look through files and use tools, autonomously browse the web, and create useful outputs like slide decks, code or briefings.
What does that actually mean for policymakers? I think there are a couple of trends to look out for as the year unfolds.
We have not seen an AI “jobpocalypse” wiping out swathes of jobs, and recent troubles for new graduates in the UK are difficult to attribute solely to AI amongst all the other ways the economic vibes are off, such as low business confidence and rising labour costs. But there are subtle signs that AI is already having an impact. In the US, entry-level employment in more AI-exposed fields has fallen by more than in less exposed ones.
Something similar could be starting to play out in the UK. It’s not showing up in the ‘big’ numbers, like productivity or huge jumps in headline unemployment. But it is showing up in reduced vacancies at more exposed firms, where hiring expectations for junior and technical roles are taking a hit. We seem to be moving towards a slow job market where, at entry level, there’s not much hiring and not much firing.
At the moment, this looks like a phenomenon with multiple factors, with AI playing a minor role in some sectors and role types. But it seems plausible that, with increasing capabilities and widening adoption, the first rung of the career ladder gets further out of reach for graduates. Even back in April last year, AI was having a discernible impact on software engineering, as Anthropic’s Economic Index Report found: “79% of conversations on Claude Code were identified as ‘automation’—where AI directly performs tasks—rather than ‘augmentation’”. Since then we’ve had new products like Claude Cowork that could widen this kind of “Claude Code experience” to other, more general forms of knowledge work.
To be clear, I don’t think things are as simple as “robots take over all jobs by 2030”, for several reasons including institutional bottlenecks, output assurance, imperfect labour substitution and more. But the evidence is coming in fast that we’re beginning to see labour market effects in entry-level knowledge work.
There’s only so much government can do about this, and I don’t have the answers – but managing this trend will play an important part in maintaining public confidence in the UK’s AI ambitions. Either way, it seems likely that AI’s impact on the labour market will become much more politically charged over the next year.
Which underlines a more important point. More Growth Zones, while great for investment, do not constitute a radical vision to change the way the country works in the face of a profound technological change. Yes, communities will benefit from more funding alongside Growth Zones, but this is small fry compared to the potential impacts AI could have and the upside from reimagining public services, driving nationwide AI adoption, or cultivating advanced UK AI industries. Excessive political focus on Growth Zones risks overshadowing important work that will do much more to position the UK favourably: stuff around capabilities, supporting startups, adoption, skills. In my opinion it will be important to avoid sinking too much political capital into Growth Zones, as was sometimes the case in 2025 – especially considering these are bespoke investment zones with no legislative footing.
2025 brought us (most of) the policies we need to succeed. The challenge in 2026 will centre on delivering what was promised last year and making the ‘British story’ on AI more than a slogan.


