Artificial intelligence (AI) safety expert David Dalrymple has warned that the world “may not have time” to prepare for the risks posed by rapidly advancing AI systems, arguing that their growing capabilities could outstrip efforts to keep them under control. Dalrymple, a programme director at the UK government-backed but independent Aria research agency, said people should be worried about systems that can carry out all the functions humans use to get things done in the world, but do so more effectively. He cautioned that if development continues unchecked, humans could be outcompeted in the key domains needed to maintain control of civilisation, society and the planet.
Dalrymple highlighted what he described as a gap in understanding between the public sector and AI companies over the power of looming technological breakthroughs. He warned that “things are moving really fast and we may not have time to get ahead of it from a safety perspective”, adding that “it’s not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans.” Although Aria is publicly funded, it operates independently in directing research funding, and Dalrymple is focused on developing systems to safeguard the use of AI in critical infrastructure such as energy networks. He insisted that governments cannot assume advanced systems are reliable, because “the science to do that is just not likely to materialise in time given the economic pressure”, and argued that the next best option is to move quickly to control and mitigate their downsides.
Describing the potential consequences of progress outpacing safety as a “destabilisation of security and economy”, Dalrymple called for more technical work on understanding and controlling the behaviour of advanced AI systems. He acknowledged that progress could be beneficial rather than destabilising, as many frontier developers hope, yet he believes human civilisation is “on the whole sleepwalking into this transition.” His warnings come as the UK government’s AI Security Institute (AISI) reports that the capabilities of advanced models are “improving rapidly” across all domains, with performance in some areas doubling every eight months. According to AISI, leading models can now complete apprentice-level tasks 50% of the time on average, up from approximately 10% last year, and the most advanced systems can autonomously complete tasks that would take a human expert over an hour. In tests of self-replication, two cutting-edge models achieved success rates of more than 60%, although the institute said such attempts were “unlikely to succeed in real-world conditions”. Dalrymple believes that AI systems will be able to automate the equivalent of a full day of research and development work by late 2026, which he says will “result in a further acceleration of capabilities” as the technology improves its own maths and computer science foundations.
