Labour's AI Action Plan - a gift to the far right
Critical computing expert Dan McQuillan argues that, on top of the clear social 
and environmental harms associated with the technology, Labour's vapid fixation 
on AI-led growth in lieu of real change will further enable the far right. 
Instead, he proposes an alternative strategy of 'decomputing'.

By Dan McQuillan - 14 Jan 2025

Labour has published its 'AI Opportunities Action Plan' (The Plan). The Prime 
Minister is very bullish about The Plan, and peppers his foreword with muscular 
terms like growth, revolution, ambition, strength and innovation. In itself, 
The Plan is full of claims that AI is essential and inevitable, and urges the 
government to pour public money into the industry so as not to miss out.

In the style of tech entrepreneurs, The Plan likes to put 'x' after things, so 
investment must go up by 20x (meaning twenty times), the amount of compute AI 
requires has already gone up by 10,000x and so on. The Plan claims that Britain 
is already leading the world through the AI Safety Institute (of which more 
later) and infuses the usual AI hype with nationalist vibes via terms like 
world leader, world-class, national champions and 'Sovereign AI'.

Above all, The Plan emphasises the need to scale. The significance of scale for 
AI and its technopolitical impacts will be explored below.

This article addresses the poverty of The Plan and the emptiness of its claims 
about AI but, rather than a point-by-point rebuttal, it's about the underlying 
reasons why this Labour government supports measures that will harm both people 
and the environment.

In between invocations of speed, pace and scale, there's some recognition in 
The Plan that the UK is not a wholly happy place right now. While recommending 
a high-tech form of land enclosures via 'AI Growth Zones' (AIGZs), which are 
about handing data centre developers "access to land and power", it gestures 
towards the idea that these could drive local innovation in post-industrial 
towns. While The Plan's claims about AI's inevitable progress and the oncoming 
wave of agentic systems that will reason, plan and act for themselves already 
seem dated and discredited, what hasn't changed is that the very regions 
targeted for growth via AIGZs have already seen violent anti-immigrant pogroms 
accompanied by fascist rhetoric, and those sentiments have not gone away.

Ultimately, it's argued here, the misstep represented by The Plan and its total 
commitment to AI will reinforce and amplify the threat of the far right, as 
well as connecting it to the extremely reactionary ideas that are in the 
ascendancy in Silicon Valley. This article proposes instead 'decomputing': the 
refusal of hyperscale computing, the replacement of reductive solutionism with 
structures of care, and the construction of alternative infrastructures based 
on convivial technologies.

Labour and AI
In some ways, it's fairly obvious why this Labour government would want to 
prioritise AI. Keir Starmer's single identifiable political belief is the idea 
of 'growth', so demonstrating economic growth supersedes all other government 
concerns.

Growth will demonstrate that Keir and colleagues are serious politicians who 
are no threat to the establishment, and at the same time win over Mr. & Mrs Red 
Wall voter who are disillusioned with orthodox politics. Boosting GDP is 
Labour's answer to all the wicked problems that beset the UK's public services 
and infrastructure, and avoids having to actually challenge the underlying 
logic of Thatcherite neoliberalism which has dominated for decades.

And if there's one area of economic activity which is growing, it's certainly 
AI. All the graphs are going up; the venture capitalist investment, the stock 
market valuations, the size of the AI models and the number and scale of the 
data centres. Latching on to this growth is already working for the government, 
as a full 10% of the £63b promised by their 'record-breaking' international 
investment summit was earmarked for data centres.

Having apparently decided to pin their hopes on AI, the Labour government have 
been aligning with demands from Big Tech. Not long after the election that 
brought Labour to power, Google issued a report titled 'Unlocking the UK's AI 
Potential' laying out their conditions for AI growth in the UK, including 
investment in data centres and a loosening of copyright restrictions. Of 
course, these aren't Google-specific requirements; the foundation of all 
contemporary AI is scale, meaning more and larger data centres to house all the 
servers and bigger pools of data to train them.

The collision course with copyright comes from the fact that these data sets 
have always been too large to pay for, so the AI industry just grabs them from 
the internet without asking. Google's report was accompanied by a media round 
from their UK & Ireland vice president warning that the UK risked "losing 
leadership" and "falling behind" if their advice wasn't followed.

It seems the message was received loud and clear; since the election, Labour 
have designated data centres as 'critical national infrastructure' which means 
that ministers can override any local planning objections, and the government 
is also floating a relaxation of copyright protections. It's not just Google 
that the Labour government is prepared to doff its cap to; Peter Kyle, the 
current secretary of state for Science, Innovation and Technology, has 
repeatedly stated that the UK should deal with Big Tech via 'statecraft'; in 
other words, rather than treating AI companies like any other business that 
needs taxing and regulating, the government should treat the relationships as a 
matter of diplomatic liaison, as if these entities were on a par with the UK 
state.

This awe of Big Tech reflects deeper currents of commitment within the Labour 
government. Certainly, any ministers from the Blairite faction are going to be 
influenced by the absolute belief in AI expressed by influential think tank, 
the Tony Blair Institute (TBI).

It's hard to exaggerate the centrality of AI to the TBI world view, but the 
title of their 2023 report is pretty representative: 'A New National Purpose: 
AI Promises a World-Leading Future of Britain'. According to this report, 
becoming a world leader in AI development is "so urgent and important that how 
we respond is more likely than anything else to determine Britain's future", 
and its 2024 sequel, 'Governing in the Age of AI: A New 
Model to Transform the State', opens with "There is little doubt that AI will 
change the course of human progress."

The breathless rhetoric is accompanied by policy demands; a factor of ten 
increase in the UK's compute capacity, the diversion of major spending 
commitments to AI infrastructure, reducing regulation to US levels and, of 
course, enacting all this in close relationship with the private sector.

A core promise is that turning the public sector over to AI will deliver huge 
savings and improved delivery, although one might question the reliability of 
their research, given that it was based on asking ChatGPT itself how many 
government jobs it could do. While this sketchy approach has echoes of the Iraq 
('dodgy') Dossier, it reflects a realpolitik that sees both AI companies and 
rhetoric about AI as incredibly powerful at the current moment.

This is perhaps the hole that AI fills for the Labour government; having long 
abandoned any substantive belief in the transformative power of socialism, it 
is lacking a mobilising belief system. At the same time, it's obvious to all 
and sundry that the status quo is in deep trouble and that being the party of 
continuity isn't going to convince anyone.

Ergo, the claim that AI has the power to change the world becomes a good 
stand-in for a transformative ideology. The bonus for the Labour government is 
that relying on AI to fix things avoids the need for any structural changes 
that might upset powerful business and media interests, and rhetoric about 
global AI leadership has a suitably 'Empire' vibe to appeal to nationalistic 
sentiments at the grassroots.

Harms at scale
Beneath all the policy gloss and think tank reports, though, lurk the real 
harms of AI in the here-and-now, starting with environmental harms. The Labour 
government's vision for AI takes concrete form in the shape of more data 
centres. However, as some previously tranquil localities are starting to 
discover, this comes with significant impacts.

Generative AI, in particular, is driving the computational scale of AI models 
through the roof. The rate at which these models are increasing in size 
outpaces any other recent tech revolution, from smartphone adoption to genome 
sequencing. In turn, this is driving massive increases in energy demand.

To service AI and the internet cloud, the fastest growing type of data centre 
in the UK is the so-called hyperscale data centres run by the likes of Google, 
Microsoft and AWS. These are typically at least 10,000 sq ft and contain upwards 
of 5,000 servers, but the industry wants them to be much larger, and filled 
with the energy-guzzling GPU chips that train and run AI.

Sam Altman, CEO of ChatGPT's parent company OpenAI, has pitched plans for 5GW 
data centres in the US, which is the equivalent of about five nuclear reactors' 
worth and enough energy to power a large city. These voracious demands for 
electricity come with immediate consequences for national grids and for climate 
emissions.

The Greater London Authority (GLA) has already had to impose a temporary ban on 
new housing developments in West London because a cluster of existing data 
centres was using the available grid supply. Because AI's energy demands are 
outpacing the development of bigger electricity grids, there's currently a push 
for bringing back fossil fuel sources, especially gas-powered turbines. 
Connecting directly to the natural gas network to overcome local power 
constraints is less efficient than grid-scale generation and increases 
unmonitored carbon emissions.

Of course, Big Tech is already aware that driving climate emissions is a bad 
look and has previously tried to look 'green' via the use of 'renewables' and 
the cover story of carbon offsets. However, the scale of generative AI has 
blown this away to the point where both Google and Microsoft have admitted an 
inability to meet their own climate targets.

Locally, it's not just potential power cuts that an AI data centre brings to an 
area, but a huge demand for cooling water to stop all the servers overheating 
and the pervasive presence of a background hum from all the cooling systems. 
The question is whether a pursuit of 'AI greatness' will make the UK more like 
Ireland, which has already been recolonised as a dumping ground for Big Tech's 
data centre infrastructure.

The here-and-now harms of AI are also social. Never mind the sci-fi fantasies 
about AI taking over the world; the mundane reality of AI in any social context 
is a form of ugly solutionism that perpetuates harms rather than reducing them. 
The claim that more computation will improve public services is hardly new, and 
algorithmic fixes for everything from welfare to education have already left a 
trail of damage in their wake.

In Australia, the 'Robodebt' algorithm wrongly accused tens of thousands of 
people of welfare fraud, and was only halted by a grassroots campaign and an 
eventual public inquiry, while in the Netherlands an algorithm falsely labelled 
tens of thousands of people as defrauding the child benefits system, causing 
crippling debts and family break-ups. What the UK's notorious Horizon IT system 
and contemporary AI have in common is the tendency to generate falsehoods while 
appearing to be working properly. What AI adds is the capacity to scale harms 
in a way that makes the Horizon scandal look like small beer.

The insistence that AI will reverse the rot in education and healthcare systems 
also has a tired history. Back in 2018, Facebook's non-profit arm inserted an 
online learning platform into a California public school system on the basis 
that it offered 'personalised learning', the central mantra of all AI-driven 
educational technology. It took mass resistance by 17-year-old students to get 
rid of it.

In the open letter they sent to Zuckerberg they said "Most importantly, the 
entire program eliminates much of the human interaction, teacher support, and 
discussion and debate with our peers that we need in order to improve our 
critical thinking. Unlike the claims made in your promotional materials, we 
students find that we are learning very little to nothing. It's severely 
damaged our education, and that's why we walked out in protest".

Meanwhile the Nobel Prize-winning godfather of AI, Geoffrey Hinton, made the 
claim back in 2016 that thanks to the superior accuracy of AI's image 
classification there was no need to train any more radiologists. As it turned 
out, of course, that claim was just as specious as the more recent hype about 
ChatGPT passing medical exams. Labour's minister for Science, Innovation and 
Technology is continuing to use the trope of an AI-powered solution to cancer 
detection to push for more AI in public services while ignoring calls by 
leading cancer specialists to "concentrate on the basics of cancer treatment 
rather than the 'magic bullets' of novel technologies and artificial 
intelligence".

Forcing AI into services in lieu of fixing underlying issues like decaying 
buildings and without funding more actual teachers and doctors is a form of 
structural violence - a form of violence by which institutions or social 
structures harm people through preventing them from meeting their fundamental 
needs.

The political continuity here is that a commitment to AI solutions also enacts 
a kind of Thatcherite 'shock doctrine' where the sense of urgency generated by 
an allegedly world-transforming technology is used as an opportunity to 
transfer power to the private sector.

The amount of data and computing power required to create one of today's 
foundation models, the big generative and supposedly general purpose systems, 
is beyond the reach of all but the biggest tech companies. Whether it's in 
welfare, education or health, a shift to AI is a form of privatisation by the 
backdoor, shifting a significant locus of control to Silicon Valley.

Like the original Thatcherism, this is also going to be accompanied by job 
losses and a change to more precarious forms of algorithm-driven outsourcing. 
It's Deliveroo all round, not because AI can actually replace jobs but because 
its shoddy emulations provide managers and employers with a means to pare away 
employment protections.

What marks the Labour government out from earlier forms of neoliberalism is its 
emphasis on total mobilisation, that all material and human resources should be 
aligned behind the national mission for growth. This translates into a 
rhetorical and legislative intolerance for the idea that people should be 
'idle', no matter their state of mental distress or other disability.

Unfortunately, while AI systems are themselves unproductive in a practical 
sense, they excel at exactly the functions of ranking, classification and 
exclusion that are required for forms of social filtering at scale. Turning 
more services over to AI-assisted decision-making will indeed facilitate the 
differentiation of the deserving from the undeserving in line with this 
productivist ideology.

This alignment of social and technical ordering will show that Labour is indeed 
still the 'party of the workers', but only in the sense of squeezing the last 
drop of work out of people while using algorithmic optimisation to decide who 
is relatively disposable.

Far right
Ultimately, the Labour government's capitulation to AI in the vain hope of 
meaningful growth is a gift to the far right.

Most governing parties across Europe are making the mistake of incubating the 
far right while complaining about their seemingly inexorable rise. Governments 
seem oblivious to the political fact that trying to distract people with the 
spectre of the far right, while completely failing to address the structural 
failings of neoliberalism that leave people feeling angry and abandoned, only 
serves to empower polarising and post-truth politics. Moreover, the fact that 
populist rhetoric gains support leads the same governing parties to mainstream 
their reactionary narratives, as the Labour government has done around 
immigration and the so-called 'small boats crisis'.

Using AI to distract from structural problems while failing to deliver actual 
solutions follows a similar pattern. The only thing these algorithms will do is 
filter and classify groups of people to blame for the way AI itself degrades 
the already shoddy state of public services. The double whammy that comes with 
AI is the way that the industry itself is a catalyst for more extreme right 
ideologies.

The seeds of this can be seen in the apparently innocuous turn to 'AI safety', 
which was initiated by Rishi Sunak but has been endorsed and continued by the 
current government. The rationale of AI safety is not to protect people from 
the harms of dysfunctional AI in their everyday lives, but to head off the 
imagined potential for AI to trigger human extinction by developing bioweapons 
or by becoming completely autonomous and simply taking over.

This, in turn, derives from the underlying belief in AGI or Artificial General 
Intelligence; the belief that the humungous but stumbling AI of today is a step 
to superintelligent systems that will be superior to humans. As Hollywood as 
this might seem, it's the position of many in AI, from godfather Geoffrey 
Hinton to the founders of most of the main companies like DeepMind and OpenAI.

Such powerfully warping beliefs spawn real-world consequences because, 
ultimately, they're built on a eugenicist mindset. The very idea of 'general 
intelligence' comes from Victorian eugenics, when scientists like Francis 
Galton and Karl Pearson were rationalising the racial supremacy that 
legitimised the British Empire. The idea of superior intelligence always comes 
with its corollary of inferior intelligence, whether that's defined racially or 
in terms of disabilities, and always pans out as assessing some lives as more 
worthy than others.

Part of the motive for a belief in AGI is self-serving. If superintelligent AI 
is our only hope to solve climate change then we shouldn't be limiting the 
development of AI through small minded measures like carbon emissions targets, 
but should be mobilising all available resources, fossil fuel or not, behind 
its accelerated development.

This is also good news for fossil fuel oligarchs who, not coincidentally, are 
some of the biggest funders of far right think tanks.

Similarly, if the future of humanity is to join such superintelligence inside 
computers themselves, and this leads to vastly multiplied numbers of virtual 
humans, then facilitating the emergence of AGI and the 10^54 virtual future 
humans becomes morally more important than any collateral harms to actual 
humans in the present moment. Again, while this might seem deranged, it's the 
stance of a set of beliefs known variously as 'effective altruism' (EA) or 
'long termism' which are very influential in Silicon Valley.

As a world view, it elevates the self-styled mission of people in AI above the 
meagre concerns of ordinary folk. Disturbingly, it seems that the infiltration 
of US and UK policy circles by people with EA beliefs was responsible for the 
shift to an AI Safety agenda and, in the UK, the creation of an AI Safety 
Institute.

Lurking behind long termism, but equally influential in Silicon Valley, are the 
darker and more explicitly fascist beliefs of neoreaction. These kinds of 
ideas, as espoused by the likes of Peter Thiel, argue that democracy is a 
failed system, that corporations should take over social governance with 
monarchical models, and that society should optimise itself by weeding out 
inferior and unproductive elements. It's this strand of far right tech 
accelerationism that merged with the MAGA movement in the run up to Trump's 
2024 re-election.

While Elon Musk's support for anti-immigrant pogroms and his direct attacks on 
Starmer et al have been the most visible consequences for the UK so far, the 
real threat is the underlying convergence of Big Tech and reactionary politics. 
AI is not simply a technology but a form of technopolitics, where technology 
and politics produce and reinforce each other, and in the case of contemporary 
AI this technopolitics tends towards far right solutionism.

Alternatives
Neither Labour nor any other political party is going to defend us against this 
technopolitics. We can, however, oppose it directly through 'decomputing'.

Decomputing starts with the refusal of more data centres for AI, on the basis 
that they are environmentally damaging and because they run software that's 
socially harmful. Hyperscale data centres are the platform for AI's assaults on 
workers' rights, through precaritisation and fake automation, but also for the 
wider social degradations of everything from preemptive welfare cuts to 
non-consensual deepfake porn.

Decomputing opposes AI because it's built on layers of exploitative labour, 
much of which is outsourced to the Global South, and because its reductive 
predictions are foreclosing life chances wherever they're applied to assess our 
future 'value'.

What's needed to salve pain and suffering isn't the enclosure of resources to 
power the judgements of GPUs but acts of care, the prioritisation of 
relationships that acknowledge our vulnerabilities and interdependencies.

Decomputing is a direct opposition to the material, financial, institutional 
and conceptual infrastructures of AI not only because they promote an 
already-failed solutionism but because they massively scale alienation. By 
injecting even more distancing, abstraction and opacity into our lives, AI is 
helping to fuel our contemporary crisis, furthering the bitter resentments that 
feed the far right and the disenchantments that separate us from the 
more-than-human lifeworld.

What we urgently need, instead of a political leadership in thrall to AI's aura 
of total power, is a reassertion of context and agency by returning control to 
more local and directly democratic structures. Decomputing argues that, 
wherever AI is proposed as 'the answer', there is a gap for the 
self-organisation of people who already know better what needs to be done, 
whether it's teachers and students resisting generative AI in the classroom or 
healthcare workers and patients challenging the algorithmic optimisation of 
workloads that eliminates even minimal chances to relate as human beings.

Decomputing claims that the act of opposing AI's intensification of social and 
environmental harms is at the same time the assertion that other worlds are 
possible. It parallels contemporary calls for degrowth, which also opposes 
runaway extractivism by focusing on the alternative structures that could 
replace it.

As much as contemporary AI is a convergence of off-the-scale technology and 
emergent totalitarianism, decomputing offers a counter-convergence of social 
movements that brings together issues of workers' rights, feminism, ecology, 
anti-fascism and international solidarity.

Where AI is another iteration in the infrastructuring of Empire, decomputing 
recognises the urgency of starting to develop alternative infrastructures in 
the here-and-now, from renewable energy coops to structures of social care 
based on mutual aid.

The murmurings in the financial pages of mainstream media that AI's infinite 
growth is actually a bubble prone to collapse miss the point that a wider 
collapse is already upon us in one form or another. Climate change is happening 
in front of our eyes, while it's pretty clear that liberal democracy is 
allowing itself to be eaten from the inside by the far right.

The more that AI is allowed to scale its reactionary technopolitics, the more it 
will have the effect of narrowing options for the rest of us. Decomputing is 
the bottom-up recovery of alternatives that have been long buried under 
techno-fantasies and decades of neoliberalism; people-powered visions of 
convivial technologies, cooperative socialities and a reassertion of the 
commons.

<https://www.computerweekly.com/opinion/Labours-AI-Action-Plan-a-gift-to-the-far-right>