
Killer Robots in the Air: Slouching Toward Full Autonomy

Security and Ethical Concerns Persist as AI-Driven Lethal Weapon Systems Evolve
Semi-autonomous weapons already come in many forms, including Russian-built loitering munitions allegedly being used in Ukraine (Photo: Necro Mancer via Twitter)

Is there anything more frightening than killer robots running amok? Long a staple of B-movies, the concept of an unthinking, uncaring and ruthlessly destructive machine that kills without human oversight continues to chill.


Enter fresh warnings about the threat posed by lethal weapons that are increasingly semi-autonomous. While none yet offer full autonomy, the rapid pace of technological change, combined with advancing artificial intelligence capabilities, has driven many computing experts, human rights advocates and political figures to urge governments to create norms and rules for the use of such weapons now (see: Should 'Killer Robots' Be Banned?).

Concerns about lethal autonomous weapons systems - or LAWS - are nothing new. But as AI news site Skynet Today notes in a recent editorial, all currently available weapons still seem to require a human in the loop who can either guide or override any decisions the weapon might make. "In contrast, once deployed, LAWS could conceivably use AI to perceive targets, categorize them as enemies, and take lethal action against them without human involvement," the Skynet Today editors write.

Swarms, Kamikazes and More

What comes to mind when people think of killer robots? For many, it might be a bipedal killing machine. But lethal weapons are increasingly being designed to fly. Market researcher Valuates Reports estimates that the global market for military drones will reach $17.2 billion by 2028. The U.S., U.K., Israel, Turkey, China and even Sweden are among the top manufacturers of unmanned aerial vehicles for military use.

Many countries are also experimenting with drone swarms, which appear to have been first used in battle by Israel against Hamas in May 2021. A swarm is designed to act as a single entity, guided by artificial intelligence, with every drone providing input and options. As detailed by U.S. national security consultant Zachary Kallenborn, a single swarm might combine "attack, sensor, communication, decoy and mothership drones," based on military objectives.

Unmanned drones already come in many more forms, from surveillance devices that can linger in the air for days to combat vehicles such as the Turkish-built Bayraktar TB2, which Ukrainian forces are using to launch missiles against Russian targets in their country's war with Russia.

Some drones also serve as kamikaze weapons. Last month, for example, as-yet-unconfirmed reports emerged that Russia was using a new unmanned aerial vehicle system called KUB-BLA, or "Cube," to attack targets in Ukraine. The catapult-launched drone, also known as the KYB-UAV, is built by Russian manufacturer Zala Aero, a subsidiary of defense contractor Kalashnikov Group.

The lethal weapon, which has a wingspan of 1.2 meters, is a loitering munition - a weapon that can be launched and remain airborne for some time, or in some cases even land, be refueled and relaunch, until it detects an appropriate target and is ordered to attack it. In the case of the KUB, the device is designed to ram its target and explode.

The KUB's design was reportedly informed by Russian soldiers' combat experiences in Syria from 2015 to 2018. The weapon was first demonstrated at a Russian air show in 2019 and was due to enter full production this year.

Loitering munitions have been around for decades, initially in missile form, having been designed to facilitate the destruction of enemy radar and other anti-aircraft defenses without putting fighter aircraft and their pilots at risk, reports Brookings, a Washington-based nonprofit public policy organization. Newer versions may act as a "suicide" weapon or carry ordnance such as grenades, which they can drop on targets.

The latest loitering munitions are often designed to linger until they identify a predetermined target and then alert a human operator, who can decide whether to attack the target - be it a specific building or a soldier, tank or something else.

Humans in the Loop

Where the Russian-made KUB is concerned, experts say that while the manufacturer advertises the device as having autonomous capabilities, it's not clear whether that means full autonomy. Rather, it seems to mean that the drone can be programmed to fly to a specific area, or search for targets within a specified area, and alert a human operator once it finds a target.

Such capabilities point to how this technology seems likely to evolve: removing human operators from the loop wherever possible.

"The KUB is part of a new generation of weapon systems where the role of the human operator is becoming blurred and where it risks being reduced over time," says the Campaign to Stop Killer Robots, an international group that has been calling on governments to pre-emptively ban lethal autonomous robotics.

But the KUB is far from the only small loitering munition available today. Another example is the Switchblade, a hand-launched drone used by U.S. forces and manufactured by American defense contractor AeroVironment, which says the device can be carried in a backpack and offers "real-time GPS coordinates and video for precise targeting with low collateral effects." The Switchblade requires a human operator to select targets.

Earlier this month, the Biden administration announced that it would be providing Switchblade drones to Ukraine as part of a $300 million military assistance package.

Exploitable Vulnerabilities Included

As lethal weapons gain more autonomy, many operational, legal and ethical questions remain unanswered. For example, what happens if the target selection turns out to be inaccurate? What happens if such drones are tasked to automatically or indiscriminately attack an individual or group, including not just civilians but also supposed enemy combatants? Who's responsible if a third party is able to hijack the device?

On that last concern: Like any other type of technology, drones and other weapons platforms can never be made fully secure. They will always have flaws or software vulnerabilities that might allow outsiders to interfere with their operations unless appropriate checks, balances or other safeguards are in place.

"Adversarial sticker" employed by security researchers at McAfee to fool the Speed Assist feature in some models of Tesla

Refining such systems via artificial intelligence - really, machine learning - to better select targets also isn't foolproof, and arguably this makes their use even more ethically fraught.

What happens, for example, if a fully autonomous lethal weapon makes a mistake? Will there be an audit trail that can be used to reverse-engineer what happened? Also, who takes the blame, not least from a legal standpoint?

The types of vulnerabilities that continue to be found in consumer-grade technology highlight the security shortcomings likely to lurk in military hardware as well.

Tesla, for example, is under investigation over its "Autopilot" driver-assistance feature sometimes failing to recognize stopped emergency vehicles. Separately, in a recent YouTube video showing a test of the company's still-in-beta self-driving software, a Tesla nearly hits a bicyclist before the operator grabs the steering wheel.

And in 2020, security researchers at McAfee - now known as Trellix - demonstrated how they could use a piece of tape on a speed limit sign to trick a Tesla into accelerating to 50 miles per hour above the posted limit.

McAfee researchers demonstrate how they tricked a Tesla Model X into exceeding the speed limit.
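The McAfee research belongs to the broader family of adversarial examples against machine learning classifiers, in which small, carefully chosen changes to an input flip a model's prediction. Purely to illustrate that general idea - this is not the McAfee team's actual method or code - here is a minimal sketch of the fast gradient sign method against an off-the-shelf image classifier in PyTorch. The model choice, class index and epsilon value are arbitrary placeholders.

```python
# Minimal FGSM sketch: nudge an image so a classifier misreads it.
# Illustrative only - NOT the McAfee researchers' technique or code.
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained classifier will do for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of `image` (shape 1x3xHxW, values in [0,1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage sketch: a near-invisible change can shift the predicted class.
clean = torch.rand(1, 3, 224, 224)        # stand-in for a real road-sign photo
adv = fgsm_attack(clean, true_label=919)  # 919 = ImageNet "street sign" class
print(model(clean).argmax(1), model(adv).argmax(1))
```

Physical attacks such as the taped-over speed limit sign are harder to pull off than this digital version, but the underlying weakness - classifiers keying on brittle, unexpected features - is the same one that would concern designers of target-selection systems.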

Drones have been similarly "hacked." In January, Michigan State University and Lehigh University researchers demonstrated how they could use two bright spots of light to trick the cameras on a widely used DJI consumer drone, making it fly a path the researchers determined.

A researcher uses two projectors to launch the DoubleStar attack at a DJI drone flying 7 meters away, tricking it into thinking there is an object half a meter ahead of it. (Source: "DoubleStar: Long-Range Attack Towards Depth Estimation based Obstacle Avoidance in Autonomous Systems")

"The successful long-range attacks against the flying DJI drone imply potential security impacts on different types of autonomous systems," the researchers write in a research paper detailing what they're calling DoubleStar attacks, which they're due to present this August at the Usenix Security Symposium.

Legal Frameworks Lag

Despite these types of risks and ongoing ethical questions, no international rules govern the use of semi-autonomous or fully autonomous lethal weapons systems. "There are currently no specific legal rules to limit the extent to which machines are allowed to identify and attack targets," the Stop Killer Robots campaign says. "There is an urgent need for clear regulations of how these systems should be used, and where the red lines are."

Government officials in some countries, including Austria and New Zealand, have been calling for autonomous weapons to at least be regulated via international law.

"There's increasing awareness applying AI to weapons systems raises legal, ethical and security risks," New Zealand Minister of Disarmament and Arms Control Phil Twyford said last November. "The idea of a future where the decision to take a human life is delegated to machines is abhorrent and inconsistent with New Zealand's interests and values."

But as Russia has shown in Ukraine with its indiscriminate targeting of civilians in apparent violation of the Geneva Convention, such regulations might do little to discourage real-world use of semi-autonomous or even fully autonomous weapons during wartime. "They are an extraordinarily cheap alternative to flying manned missions," Samuel Bendett, a Russian military expert at defense think tank CNA, tells Wired. "They are very effective both militarily and of course psychologically."

Indeed, because who doesn't fear killer robots?


About the Author

Mathew J. Schwartz


Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and oversees European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.




