More than a dozen nations around the world have developed robotic or semi-autonomous weapons systems. That number is expected to grow exponentially as the technology develops over the next decade.
Research firm IDC estimated that spending on robotics would more than double from USD 91 billion in 2016 to USD 188 billion by 2020, driving significant improvements in the technology that will enable fully autonomous systems and the embedding of artificial intelligence in weapons platforms. Around 350 semi-autonomous platforms are currently in use around the world.
Currently, airborne drones such as the General Atomics Predator and Reaper platforms used by the U.S. military and sold to nations such as Australia are controlled remotely by humans. But in time, they could well carry their own AI systems.
The world of AI has raised a host of issues around ethical use. Nowhere are these more keenly contested than in defence technology.
Opponents of the use of AI in defence have coined the emotive term “killer robots” to describe how the future could look with the unrestrained implementation of AI in defence.
Debate Over "LAWS"
One of the foremost Australian experts on AI, Professor Toby Walsh of the University of NSW in Sydney, has been leading a groundswell of academic activism against AI in warfare.
Professor Walsh believes that AI will match human intelligence by 2062, but he wants to ensure the field “isn’t stained by a very unfortunate use of the technology to decide to kill people.”
Walsh was one of more than 50 signatories to an open letter sent in April 2018 to the Korea Advanced Institute of Science and Technology (KAIST), which is working with South Korean defence company Hanwha Systems to investigate the battlefield deployment of AI.
In July, the movement gained further momentum when 2,400 academics and business people, including SpaceX entrepreneur Elon Musk, signed a pledge seeking to deter nations from building lethal autonomous weapon systems, also known as “LAWS.”
The way forward would appear to be an international convention setting agreed limits and ethical boundaries, but so far that has been kicked into the long grass.
A Group of Governmental Experts meeting under the auspices of the U.N. Convention on Certain Conventional Weapons failed to reach agreement on the issue in 2018 and has adjourned further debate to this year.
In the meantime, research is continuing at pace and in Australia the national defence forces are moving rapidly to deploy AI and autonomy, regardless of what the international community might say.
Australia’s Defence Industrial Capability Plan was published in April 2018. It identified autonomy as one of Australia’s “Sovereign Industrial Capability Priorities.”
Safer Warfare
Autonomy, robotics and AI are attractive capabilities for a nation like Australia, which has an army of 45,000, small by global standards, but which could leverage technology to create scale and find new advantages on the battlefield.
Last year, the Australian Government seeded an AUD 50 million Trusted Autonomous Systems Defence Cooperative Research Centre (CRC) based in Brisbane to “research and deliver game-changing autonomous technologies.”
Professor Jason Scholz was appointed Chief Scientist and Engineer at the new CRC, and while he agrees that some kind of ethical international framework needs to be built in this area, he also believes there is an alternate view to opposing “killer robots.”
Rather, Scholz believes that, properly implemented, these technologies can create a new world of almost “ethical warfare,” where humans are increasingly moved out of harm’s way and weapon systems can be confined to narrow military targets, avoiding the collateral damage that so often tragically involves civilians.
It might sound ironic, but these technologies might even make warfare safer.
“A lot of people say we should not use AI in a weapon but there is a strong humanitarian counter-argument which says there are things we could do with AI which could detect a Red Cross or a Red Crescent and divert a weapon away,” said Scholz.
Minimizing Human Casualties
In another example, a surface-to-air missile might be able to autonomously detect passenger aircraft in flight and be programmed not to engage.
This is a particularly sensitive issue in Australia, which lost 38 of its citizens when Malaysia Airlines flight MH17 was shot down by a missile over Ukraine in 2014.
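The kind of safeguard Scholz describes can be pictured as a “do not engage” rule layered over a weapon’s targeting logic. The sketch below is purely illustrative and assumes a hypothetical onboard classifier with made-up labels and thresholds; it is not drawn from any real system, only a way of showing how a protected symbol or a civilian aircraft signature might force an abort.

```python
# Illustrative sketch only: a simplified "do not engage" rule of the kind
# described above. The labels, thresholds and class names are hypothetical,
# not taken from any real weapons system.
from dataclasses import dataclass

# Labels a hypothetical onboard classifier might emit for a sensed object.
PROTECTED_LABELS = {"red_cross", "red_crescent", "civilian_aircraft"}

@dataclass
class Detection:
    label: str         # classifier output, e.g. "red_cross" or "military_vehicle"
    confidence: float   # classifier confidence in [0, 1]

def engagement_decision(detections: list[Detection], abort_threshold: float = 0.3) -> str:
    """Return "ABORT" if any protected object is detected above a deliberately
    low confidence threshold; otherwise defer the decision to a human operator."""
    for d in detections:
        if d.label in PROTECTED_LABELS and d.confidence >= abort_threshold:
            return "ABORT"        # divert away from protected objects
    return "REFER_TO_OPERATOR"    # a human still makes the final call

# Example: a Red Crescent marking detected with modest confidence forces an abort.
print(engagement_decision([Detection("red_crescent", 0.42)]))  # -> ABORT
```

The design choice in this sketch is that uncertainty favours restraint: even a low-confidence detection of a protected object blocks engagement, and the system never authorises an attack on its own.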
Some of these developing technologies were showcased at Autonomous Warrior 2018, an event held at Jervis Bay, south of Sydney, in November that involved personnel from Australia, the U.S., the U.K., New Zealand and Canada.
There, a defence technology company demonstrated a mine-hunting system with undersea surveillance capability. Although not fully autonomous, the system uses an unmanned surface vessel carrying a mine-hunting sensor, reducing clearance times while eliminating risk to humans.
“In many situations, if you put a manned platform in a situation the personnel will be at risk,” said Scholz.
“Anything which is not stealthy is likely to be tracked at a range and targeted, and if we don’t want to put our sons and daughters into harm’s way it brings up questions as to what is possible with machines, AI and autonomy.”
Too Early
Last year, then Australian Foreign Minister Julie Bishop said it was too early in their development to make a definitive call on the ethics of these technologies in warfare.
This may partly be motivated by Australian self-interest, but there is also a view that in combat these technologies can be used to protect humans and minimize casualties as well as to destroy targets.
As with many of the wider ethical issues around AI, the jury is still out and the debate is proceeding, but any international consensus is unlikely to be reached in 2019.