The Dangerous Technology Question: Will Tech Finally Wipe Out Humanity?

Advances in technology, and particularly the acceleration of scientific and technological discovery over the past century or two, have had a hugely positive impact on humanity. The dangers posed by disease and injury, whether accidental or intentionally inflicted, have declined dramatically. And average living standards and levels of comfort have unquestionably improved drastically. Most of that is thanks to technology and biotechnology. But will the advances that have improved our lives beyond recognition eventually wipe us out?

This week Mary Wareham, coordinator of the Campaign to Stop Killer Robots, spoke at the American Association for the Advancement of Science’s annual convention. She warned the scientists, politicians and business representatives in attendance of the dangers posed by the development of autonomous weapons systems.

She referred to AI-powered weapons such as the ‘swarm squadrons’ of drones that UK defence secretary Gavin Williamson said this week could be used to counter enemy air defences. The USA’s military tech is, of course, far beyond anything being developed in the UK, both in terms of what it currently has and what is being worked on. We don’t really know what’s going on behind the scenes in China when it comes to weaponised technology. But it would probably be naïve to believe China is not trying to gain an edge on the USA, even if there is no recent history of open military hostility between the two economic superpowers.

Killer robots that could select and engage human and non-human targets autonomously, without the need for a human operator, are, of course, a concern. Wareham told the AAAS convention:

“Bold political leadership is needed for a new treaty to pre-emptively ban these weapons systems.”

This week OpenAI, a non-profit AI research and development company funded by Elon Musk and a series of other high-profile and extraordinarily wealthy tech leaders, announced that it was delaying the public release of its latest AI algorithm, GPT-2. Why? Because the text generator has, for the first time, proven so convincing that its output is difficult to identify as having been written by a machine. OpenAI’s approach until now has been to release the AI algorithms it builds as open-source code, advancing the progress of the technology.

This time, however, it has decided not to, pending investigation into what GPT-2 is potentially capable of. There is a fear that it could be used maliciously to automate the creation of fake news far more convincing to unwary readers than anything seen to date. Political manipulation and the mass generation of fake reviews are just two possible consequences of GPT-2’s sophistication.
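
To make the threat concrete, here is a minimal sketch of what automated text generation looks like in practice. It uses the Hugging Face transformers library and the small ‘gpt2’ checkpoint that OpenAI did eventually release publicly; the library, model name and sampling parameters are illustrative assumptions, not details from OpenAI’s announcement.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library and
# the publicly released small "gpt2" checkpoint (not the withheld model).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Seed the model with a headline-style prompt and let it continue the text.
prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output fluent but varied.
output_ids = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

A loop around a script like this is all it would take to mass-produce plausible-sounding articles or reviews, which is precisely the misuse OpenAI fears.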

A few months ago, the first ever genetically engineered human babies were born. Chinese scientist He Jiankui edited two embryos using the CRISPR gene-editing tool. The twin girls, known as Lulu and Nana, were carried to term and born healthy in November 2018. The procedure, carried out in what can only be described as secrecy and without any regulatory oversight, had an arguably positive motivation. The girls’ father carries HIV, and He edited the embryos’ genomes to disable the CCR5 gene, which encodes the receptor that acts as the door through which the virus infects human cells. But the consequences of the event must be seen in a broader context. Pandora’s box has now been opened: genetically engineered humans, for the first time, exist.
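
For readers unfamiliar with how CRISPR targeting works: the Cas9 enzyme is steered to a location in the genome by a roughly 20-letter ‘guide’ sequence, which must sit immediately next to a short ‘NGG’ motif known as the PAM. The sketch below shows how candidate target sites are found; the DNA snippet is a made-up stand-in, not the authoritative CCR5 sequence.

```python
import re

# Illustrative stand-in snippet only; not an authoritative CCR5 sequence.
dna = "ATGGATTATCAAGTGTCAAGTCCAATCTATGACATCAATTATTATACATCGGAGCCCTGCC"

# SpCas9 cuts next to an 'NGG' motif (the PAM); a candidate guide is the
# 20 bases immediately 5' of it. The lookahead catches overlapping sites.
for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna):
    print(f"position {m.start():3d}  guide: {m.group(1)}")
```

Finding a cut site is the easy part; predicting every downstream effect of the edit is not, which is where the controversy lies.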

The Chinese government condemned He’s off-piste approach, but the country’s restrictions on human gene-editing are generally far less strict than those in the West. He may have ‘gone rogue’ by bringing the twin girls into existence, but China is believed to run state-sponsored research programs into editing the human genome. The country is heavily invested in its aim to surpass the USA and the wider West as the global leader in technology and biotechnology. And it is prepared to take a more relaxed approach to regulation to achieve that aim.

Genetically engineering embryos to prevent them from contracting HIV is, of course, on the surface, not a bad thing. Few would argue for outlawing the opportunity to eradicate the human suffering caused by genetic diseases, or by conditions that genetic engineering could prevent.

The problem is that we have reached a point where we understand enough, and have the tools such as CRISPR, to make genetic engineering of the human genome possible. But we don’t yet understand the genome in enough detail to be sure of all the consequences of editing it. The reason genetic engineering of humans is not permitted is that we can’t guarantee we aren’t introducing unintended ‘bugs’ that will then spread through future generations with potentially devastating consequences. But we know enough that the temptation to use technologies like CRISPR on human genes will always be there. And now that the technology is out there, is it realistic to expect its application to be completely regulated and controlled?

That’s especially the case in a globalised economy, where gaining the upper hand in technology and biotechnology can have a profound impact on the fortunes of an economic entity, be it a company or a whole country.

Which brings us back to weaponised AI and robotics. Appeals by activists such as the Campaign to Stop Killer Robots are likely to fall on deaf ears as long as there is any suspicion that other states are secretly working on advanced military technology. We’re locked into a Catch-22: everyone realises that the greater the advances in military technology, the greater the danger to humanity as a whole. But no-one wants to curtail their own military technology programs for fear others will not, leaving them vulnerable.

The problem is wider, as the OpenAI case demonstrates. Technology cannot simply be classified as ‘dangerous’ or ‘benign’. Many technologies developed for purely civilian purposes could, in theory, be adapted to malicious applications. Otto Hahn, the German scientist who first discovered that atoms could be split, reputedly considered suicide when he grasped the potential of his discovery. He was devastated to learn that the atomic bombs made possible by his science had been dropped on Hiroshima and Nagasaki. But it can also be argued that nuclear power plants, despite incidents such as Chernobyl and Fukushima, are among the most environmentally friendly power sources available to us.

Technology is rarely black and white in its potential application. Much of the technology that has done so much to improve global living standards was originally developed by the military. The internet, digital photography, the jet engine and GPS are just some of the technologies first created in a military context that now shape the modern economy.

There are two main questions when it comes to the danger that new technology might pose. The first is whether anything can actually be done to stop the development of new technology and biotechnology that might have particularly dangerous applications. The second is, if not, whether there is any way the misuse of a technology can be guarded against once it has been invented.

Nick Bostrom, director of the Future of Humanity Institute and a professor at Oxford University, explores the question in his paper ‘The Vulnerable World Hypothesis’. Bostrom notes that while technological advances have led to ‘unprecedented prosperity’, they have also made it easier to do harm on a mass scale. Until now, however, aided by simultaneous developments in culture and government, the positives have far outweighed the negatives. We have ‘better’ guns than ever before, but the chance of being the victim of homicide is lower than ever.

Bostrom also authored the book ‘Superintelligence’, which explores the potential and the risks posed by the development of AI. Across his work, Bostrom arrives at the conclusion that technology is invented, and often proliferates, without its inventors being fully aware of the consequences. We pursue the solution to a problem in a way that doesn’t take into account all of the potential side effects. One example is the CFC gases that made refrigeration cheaper but also punched a hole in the Earth’s ozone layer.

Bostrom concludes that the reason we haven’t yet invented a technology that has destroyed us is largely down to luck. The most destructive technologies have been very expensive and complicated to reproduce, limiting whose hands they fall into. But the law of averages means that we will one day draw a ‘black ball’ – a technology that combines the capacity for large-scale destruction with being cheap and easy to recreate.
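
The ‘law of averages’ point can be made precise with some simple arithmetic. If each year carries even a small, independent probability of such a technology emerging and being used, the chance of avoiding it indefinitely shrinks towards zero. The per-year figure below is invented purely for illustration.

```python
# Illustrative arithmetic only: the per-year probability is hypothetical.
# If each year carries an independent chance p of drawing a 'black ball',
# the probability of at least one draw within n years is 1 - (1 - p)**n.
p = 0.001  # an assumed 0.1% chance per year

for years in (10, 100, 1000):
    at_least_once = 1 - (1 - p) ** years
    print(f"{years:5d} years: {at_least_once:.1%} chance of at least one draw")
```

Even at one chance in a thousand per year, the odds of at least one draw pass 60% over a millennium.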

Atomic energy is the most obvious example of a hugely destructive technology that has been controlled to an extent, though only over the course of several decades. There have still been disasters, and there is still a risk of a rogue actor launching a nuclear weapon. Over centuries, how likely is it that the risk remains only a theoretical possibility? Nuclear weapons, though, require extremely rare elements, billions of pounds of investment, top scientists and many years of development. So we have always known when a country is pursuing nuclear weapons and have been able to take steps to control the danger.

But what if atomic energy had required elements that were cheap and easy to get hold of? And what if releasing nuclear energy weren’t difficult to achieve? Would we still exist today? There’s a high chance we would not. If future technologies with the potential for mass destruction are cheap and easy to recreate, can we realistically expect to be able to stop their proliferation?

Is there a solution? Agreeing to halt technology and biotechnology research in directions considered dangerous would not work in practice. There would always be someone who didn’t play by the rules. Another theoretical possibility would be to slow the development of dangerous technologies so that other technologies able to neutralise the danger can be developed in the meantime. But that approach is unlikely to manage every risk sufficiently.

A third solution explored by Bostrom is a world government and global surveillance state that would make the development or use of dangerous technologies practically impossible. There are, of course, plenty of downsides to this proposal. The economist Robin Hanson responded to Bostrom’s world-government surveillance concept:

“It is fine for Bostrom to seek not-yet-appreciated upsides [of more governance], but we should also seek not-yet-appreciated downsides” — downsides like introducing a single point of failure and reducing healthy competition between political systems and ideas.

Ultimately, stalling the progress of technology seems neither realistic nor particularly desirable. So are we doomed to eventually reach a tipping point where the technology that has improved human life so much finally destroys it, or hurls us into a kind of Mad Max or Brave New World dystopia? The reality is that no-one has the answer to that question. The only approach available to us seems to be to put as many checks and balances in place as we can, and to hope that human political and civil society continues to advance alongside technology, so that the balance towards the positive that has held until now persists.
