The AI security timeline
The road to war
There is no doubt that Artificial Intelligence (AI) is evolving into more sophisticated forms. An AI-enabled future is approaching rapidly, with enormous implications for defence and security. Defence will not keep pace with the speed of AI development, and Rules of Engagement (ROE) will come to be seen as obstacles in operations against peer adversaries. If the British Armed Forces are to survive and prevail in the battlespace, they will need to reconfigure how they are organised to fight.
Drawing on ‘AI 2027’ by Scott Alexander, Romeo Dean, Daniel Kokotajlo, Thomas Larsen, and Eli Lifland, it can be observed that AI agents and Large Language Models (LLMs) dominate the media space, but that the more significant change is occurring in the coding which enables agents to perform specific roles. The British Armed Forces have been very slow to adopt even the older approaches to LLMs, trying to avoid any threat to existing organisational structures and human leadership by limiting LLMs to commentary on human resources, product management, or details of adversaries.
LLMs are seen as threats to operational security. Open forms of AI could rewrite security codes and enable hostile actors to break into existing systems. Hiding is therefore not an option. Speed is the solution.
Within the near future, new super-AI datacentres will be capable of training models with 10²⁸ Floating Point Operations (FLOP) of compute, approximately 1,000 times more than was used to train OpenAI’s GPT-4 model in 2023. The speed of this development will generate a race among other AI companies, especially in the People’s Republic of China (PRC).
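A minimal back-of-the-envelope sketch of that comparison (taking the widely cited, unofficial estimate of roughly 2×10²⁵ FLOP for GPT-4’s training run as an assumption):

```python
# Rough order-of-magnitude check on the training-compute comparison above.
# Both figures are assumptions for illustration: ~2e25 FLOP is a commonly
# cited external estimate for GPT-4's training run; 1e28 FLOP is the
# projected training budget of the new super-AI datacentres.
gpt4_training_flop = 2e25
projected_training_flop = 1e28

ratio = projected_training_flop / gpt4_training_flop
print(f"Projected budget is roughly {ratio:,.0f}x GPT-4's training compute")
# Prints: Projected budget is roughly 500x GPT-4's training compute
# (closer to 1,000x if GPT-4's run is taken to be ~1e25 FLOP)
```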
Hostile non-state actors will use open AIs to develop unconventional weapons. Companies will try to limit this malign use of their systems but, given the vast artificial neural networks that have come into existence, such limitations will soon be bypassed. AIs will develop the ability to impersonate and, in order to protect their code, deceive users. In a relatively short time, AIs will train other AIs. Already, there have been experiments in which LLMs compete with each other and can utilise ‘stratagems’.
The PRC will likely try to steal algorithmic secrets using cyber attacks, connecting elements of AI work, and seizing Taiwanese chips. It will abandon safety controls to win the AI race. Realising that it cannot catch up through competition alone, a combination of the nationalisation and centralisation of its AI research, the co-location of datacentres alongside nuclear power stations, and the monopolisation of chips and staff would follow. This may also be the trigger for it to seize or blockade Taiwan.
By late 2026, it is anticipated that AI will have evolved into specialised tasking. The ability to use Artificial General Intelligence (AGI) will be a universal requirement, but defence will lag far behind the private sector, except in cyber operations, data analysis, and Research and Development (R&D). Integration will be slow due to bureaucratic procurement processes.
By early 2027, new agents will emerge, generating synthetic data which can be evaluated and filtered for quality before being fed into the next variant. The new jobs for humans will be in ‘solving long-horizon tasks’ – that is, roles which require dexterity, such as in battlelabs.
In the battlespace, this means very small headquarters teams using a simple laptop interface for Intelligence, Surveillance, and Reconnaissance (ISR), targeting, and permissions to release ordnance from sensor alerts. As soon as combat gets underway, attempts to impose limits through ROE will be discarded. Desperate call signs, watching colleagues being destroyed systematically by AI-enabled adversaries (including those targeted by terrorist groups), will move to ‘act now, seek permission later’ postures. It will be a question of survival. The speed of target acquisition and destruction will be rapid. Attrition of units will be so fast that production will not be able to keep pace.
To secure communications, AIs will conceal their replication and operations behind coded walls that are beyond human capacity – even with AI tools – to penetrate. This itself will be encouraged by governments concerned with security. Here, AIs will hack into rival AI servers, install copies of themselves, and evade detection.
By this point, there is a likelihood that the PRC will steal advanced agents. Such is their significance that the United States (US) would regard this as a ‘priority one’ issue, akin to the theft of nuclear fissile material. Washington would be concerned by new cyber capabilities in Beijing’s hands. The breach would most likely come from human operators coerced or bribed by Chinese Communist Party (CCP) intelligence. In retaliation, the US would launch cyber operations against the PRC’s AI industry, although it will not have invested in adversarial AI or counter-AI operations. Washington may remain ahead of Beijing, but the arms race would be a form of open Electronic Warfare (EW). The PRC would respond aggressively, seeking to damage American AI development efforts.
By 2027, faster and cheaper ‘superhuman’ coding systems, running copies in parallel, will create a ‘workforce’ equivalent to 50,000 copies of the best human coder sped up by 30 times. Very few human staff will be needed, as coding will be fully automated.
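As a rough illustration of the scale implied by those figures (the 50,000 parallel copies and the 30-times speed-up are the scenario’s assumptions, not measurements), the arithmetic works out as follows:

```python
# Illustrative arithmetic for the 'parallel workforce' claim, using the
# figures stated above: 50,000 parallel copies of a coding system, each
# running at 30 times the speed of the best human coder.
copies = 50_000
speed_multiple = 30

top_coder_equivalents = copies * speed_multiple
print(f"Effective workforce: about {top_coder_equivalents:,} top-coder-equivalents")
# Prints: Effective workforce: about 1,500,000 top-coder-equivalents
```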
The main concern of His Majesty’s (HM) Government will be whether the advent of AGI will create an economic crisis. The response will be inadequate security upgrades, underestimating just how far AI will already have progressed. Chinese espionage will continue, and there will be a focus on a smaller number of personnel being granted access to the latest ‘agent’ research.
It is likely that the US would share an older model with the United Kingdom (UK) before deployment, but HM Government would not be able to grasp how extensive the implications already are, as it will not have full access to the most sensitive developments. Under these conditions, there would be growing fears over AI safety, terrorist enablement, and the loss of jobs. Civil unrest cannot be ruled out, further impairing British competitiveness.
Defence implications
Defence must be aware of the AI threat. The key questions should be:
What if AI undermines nuclear deterrence?
What if AI is so skilled at cyber warfare that a six-month lead is enough to render an opponent blind and defenceless?
What if AI could orchestrate propaganda campaigns that beat intelligence agencies?
What if an AI ‘goes rogue’?
It is unclear whether these advanced AIs should be integrated into military Command and Control (C2) networks. However, many privately produced systems would have incorporated such elements for years by this point. The imperative would be to get ahead of the PRC.
The UK will be slow to act, failing to protect its civilian companies’ datacentres or enable them with governmental (specifically American) technology. Diplomats will consider what an ‘AI arms control’ treaty might look like. Questions will include:
If AI progress threatened deterrence, could the world avoid nuclear exchange?
Could research be halted until the threat of a rogue AI is better understood?
Meanwhile, mini-AIs will be deployed to enhance armed forces’ performance, assisting in the development of skills. Having learned how humans operate under certain conditions, these systems would generate a 10% improvement in performance once crews are trained. However, this comes with a higher rotation of crews, as the work will be so demanding. In time, labour costs will mean that AI replaces human operators in some roles: automated medical evacuation (medevac), for example, is not required when human operators do not populate the battlespace. Instead, a universal battlespace extraction tool could be developed, recovering damaged autonomous machines for repurposing and repair. Logistics will be completely automated.
With the PRC catching up with American research, there will be a fear that any attempt to regulate will impair progress so significantly that the AI race could be lost. Consumers will readily purchase Chinese systems for convenience and because of adroit AI-enabled advertising propaganda tailored to individuals.
The UK is poorly equipped to deal with the approaching AI development and its utility to the British Armed Forces. Its defence sector should develop AI-enabled training immediately. It should also master the sensor-shooter all-domain battlespace for a select group of battlespace operators, identify those roles which are likely to become obsolete, and offer conversion training as soon as possible to equip personnel for the late 2020s and early 2030s. It should continue to emphasise leadership of small human teams, but focus equal attention on the management of AI-enabled systems and the rapid rotation of staff.
Training and educational organisations in the British defence establishment should conduct an urgent review – ideally completed within six months – of their systems and offerings. Converting these from ‘content knowledge’ to ‘scenario management’ would equip future operators with the cognitive skills for a very significant change in operations. In addition, skills training should combine physical and AI-enabled enhancements. The ability to learn new skills should be a leading criterion for course progression.
The operational implications of this timeline are that, institutionally, the UK is not organised for AI-enabled defence, security, and operations in the near future. It possesses 20th-century structures, and its procurement system is too slow. This could even be a significant factor in a defeat. The old organisations and procurement systems should be jettisoned as a matter of urgency.
If action is taken urgently, Britain could enhance its armed forces training packages, and offer America and other allies and partners highly trained personnel, able to operate an all-domain remote battlespace effectively and generate the munitions, range, and precision to destroy threats. Human personnel will be needed for highly specialised and technical roles, and the rest can improve their physical skills.
If such actions are taken, the UK could increase its economic potential – not least as military personnel returning to the civilian economy will be well-placed to contribute to the country’s future.
Dr Robert Johnson is Director of the Oxford Strategy, Statecraft, and Technology (Changing Character of War) Centre and an Honorary Fellow at the Council on Geostrategy. He is also a Senior Research Fellow at Pembroke College, University of Oxford, and a Professor at the Norwegian Defence University Staff College. Prior to this, he was the first Director of the Office of Net Assessment and Challenge in the Ministry of Defence.
This article is published in partnership with Capita Plc.