New Report Outlines Dangers Posed by ‘Malicious AI’

The latest AI technology undoubtedly holds huge promise and has rightly been championed for the progress its various applications are expected to deliver over the coming years. Optimised diagnosis is one area of healthcare in which AI is being heavily researched and developed, and just a handful of the other areas in which the technology will be effectively utilised include environmental conservation, process efficiency across a huge range of industries, and cyber security.

However, a new report created in collaboration with experts from institutions and organisations at the forefront of AI development warns that there are also inherent dangers in the technology that must be protected against. The 100-page report, titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, incorporates the research and opinions of experts from Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, Elon Musk’s OpenAI, and the Electronic Frontier Foundation, amongst others.

The report cautions that the most advanced AI already in existence can significantly surpass human levels of performance in some domains, which could lead to its malicious application. Three areas of particular risk highlighted are the ‘digital, physical and political arenas’. AI poses risks even in areas where its performance is inferior to that of humans, because its application is far more scalable than human labour.

The report’s co-author, Miles Brundage, a Research Fellow at Oxford University’s Future of Humanity Institute, believes that hacking, surveillance, persuasion and physical target identification are areas in which AI could achieve ‘superhuman’ performance in the near future. The report’s authors write:

“We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.”

In the area of cyber security, AI could be used to automate hacking, phishing and ‘data poisoning’, and even to synthesise speech that accurately impersonates specific individuals. In the physical space, AI could theoretically be used to hijack drones, turning them into weapons, or to hack into the systems of driverless cars and intentionally provoke crashes. Politically, malicious use of AI would be expected to involve the automation of highly effective fake news campaigns across online media, as well as improved surveillance technologies.

While the report’s findings may sound like a nightmarish future dystopia in which we are at the mercy of malicious AI, its intention is to raise awareness that future cyber security needs ‘re-thinking’. Software and hardware must be made less vulnerable to hacking in order to combat the future threat of maliciously applied AI.

However, some cyber security experts have already said they believe the report overstates the real risks. Ilia Kolochenko, CEO of web security company High-Tech Bridge, is quoted by online tech media outlet Gizmodo as commenting:

“One should also bear in mind that [artificial intelligence and machine learning is] being used by the good guys to fight cybercrime more efficiently too. Moreover, development of AI technologies usually requires expensive long term investments that Black Hats [malicious hackers] typically cannot afford. Therefore, I don’t see substantial risks or revolutions that may happen in the digital space because of AI in the next five years at least.”

‘The next five years at least’ sounds less comforting than it was presumably intended to. Hopefully some of the warnings contained within the report will hit home and we will start preparing defences against the potential malicious application of AI in the meantime!
