AI advances could be dangerous and threaten everyday life if misused by hackers
Breakthroughs in artificial intelligence could be exploited maliciously, fuelling cybercrime through digital attacks, physical attacks and political manipulation, according to researchers at Cambridge University.
AI and machine learning capabilities were growing at an unprecedented rate, the researchers said, and while the technology might yield many beneficial applications for society, there was also a risk that it could be turned to malicious ends.
Paradoxically, the latest developments could make life easier for hackers, reducing the cost of their attacks and allowing them to strike more often. The range of possible targets could also broaden as a result.
Another potential threat posed by the new technology was that AI systems might be able to carry out attacks that humans alone previously could not.
Digital security faced the greatest threats, they said, with attacks using speech synthesis for impersonation, automated exploitation of software vulnerabilities and data poisoning likely to become the most popular among criminals.
Another growing worry was that AI could be used for physical attacks via drones and autonomous weapons, or by provoking crashes in autonomous vehicles. One specific example cited was the use of micro “drone swarms” - known as “slaughterbots” - fitted with explosives and set loose.
In the political sphere, the risk existed that artificial intelligence would be used to analyse mass-collected data.
Such manipulation could be used to sway public opinion, sow mistrust in democratic states and entrench power in authoritarian ones.
In their report, the researchers advised governments to look into the potential risks of AI and to try to prevent or mitigate the misuse of the new technology.
The creators of AI were also asked to closely analyse the “dual-use” capabilities of the technology they were developing, in order to minimise the harm it could do to the general public.
Seán Ó hÉigeartaigh of the University of Cambridge, one of the report’s authors, said it was very important for authorities to take the risks seriously.
"For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report … suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this."
Miles Brundage of Oxford University, a co-author of the paper, also warned: "AI will alter the landscape of risk for citizens, organisations and states - whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression - the full range of impacts on security is vast."