In war, Russia is leaving the targeting to machines. The Armata tank, the next mainstay of Russia’s frontline armor, will be equipped with a gun that automatically finds and tracks targets, requiring human intervention only to accept or reject the shot it has lined up. It is at once a small step and a giant leap toward a future where machines decide who gets killed in combat.
The T-14 Armata has been in development for years, and it is intended both to replace the existing tanks in Russia’s army and to be exported abroad. An uninhabited turret is central to the Armata’s architecture. By eliminating the need for humans to sit next to the gun, the design gives the tank a lower profile, making it harder to spot and hit in battle and better able to use natural terrain for cover.
Uninhabited turrets are becoming more common on armored vehicles, including the US military’s Stryker armored personnel carrier. Remote weapon stations, essentially robotic turrets mounting at least one gun and an array of sensors, are a popular way to keep crews protected inside the vehicle: operators use video feeds and a remote controller (such as a tablet or joystick) to spot and fire at targets without exposing themselves to danger.
The Armata has also undergone battle testing, though without the autonomous targeting system enabled.
What makes the Armata distinctive is that the uninhabited turret was designed into the vehicle from the outset, sparing the humans inside the added danger of sitting directly next to the gun and its ammunition. Instead, the crew of an Armata tank is tucked securely into a more heavily protected crew capsule in the vehicle’s main body. For the automated turret to work, however, it needs a massive amount of sensor data to find targets and fine-tune its aim.
“Armata will be used with or without a crew—the robot can control the tank and choose the target on its own. But if a person decides to shoot or not to shoot, he or she still has to press the button,” Sergei Chemezov, the head of Rostec, the state corporation that manufactures the Armata, told TASS in April.
Instead of aiming the weapon and choosing when to fire, the human operator would review the target selected by the Armata’s automated systems and either accept or cancel the shot. According to the independent defense-intelligence outlet Janes, the Armata’s fire-control scheme is modeled on a video game, with targeting crosshairs projected onto an LCD panel.
The specifics of how this automated system will operate matter for broader debates about how nations will regulate lethal autonomous machines.
“Weapons are generally categorized as having a human ‘in the loop,’ which means a human selects the target and decides to engage; ‘on the loop,’ which means the machine selects the target and engages while a human oversees the process and can abort it; or ‘out of the loop,’ which means there is no human involvement in target selection and engagement,” says Maaike Verbruggen, a doctoral researcher at the Vrije Universiteit Brussel.
As described by Chemezov, the Armata’s targeting falls somewhere between a human “in the loop” and “on the loop”: the weapon detects a target on its own and then asks a human for permission to fire.
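To make the distinction concrete, here is a minimal illustrative sketch in Python. It assumes nothing about any real fire-control software; `ControlMode`, `shot_proceeds`, and the operator flags are all hypothetical names. The point it shows is how the default outcome flips between modes: in the loop, silence means no shot; on the loop, silence means the shot proceeds.

```python
from enum import Enum, auto

class ControlMode(Enum):
    """The three categories of human control Verbruggen describes."""
    IN_THE_LOOP = auto()      # human selects the target and decides to engage
    ON_THE_LOOP = auto()      # machine selects and engages; human can abort
    OUT_OF_THE_LOOP = auto()  # machine selects and engages with no human role

def shot_proceeds(mode: ControlMode, operator_confirms: bool,
                  operator_aborts: bool) -> bool:
    """Return True if a machine-proposed shot would be fired under `mode`.

    `operator_confirms` and `operator_aborts` stand in for the human
    operator's input; both names are illustrative, not any real API.
    """
    if mode is ControlMode.IN_THE_LOOP:
        # Nothing fires without an affirmative human decision; the
        # Armata's accept-or-reject button works roughly this way.
        return operator_confirms
    if mode is ControlMode.ON_THE_LOOP:
        # The machine proceeds by default; the human can only veto.
        return not operator_aborts
    # OUT_OF_THE_LOOP: the machine decides entirely on its own.
    return True

# An Armata-style setup: the system auto-selects a target, but no shot
# is taken unless the crew presses the button.
assert shot_proceeds(ControlMode.IN_THE_LOOP, operator_confirms=False,
                     operator_aborts=False) is False
```

The policy debate turns on exactly that default: critics worry that an in-the-loop human who confirms every proposal behaves, in practice, like an on-the-loop overseer who never aborts.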
“The concern is that this setup risks shrinking the human’s role to the point where they merely press the red approval button and are no longer critically engaged with the process,” says Verbruggen. “The human’s function is reduced to rubber-stamping the machine’s actions.”
An animating fear behind efforts to develop an international norm for autonomous weapons is that a person, particularly in battle, will simply defer to the judgment of the computer and accept almost every shot it selects. This is at the heart of the campaign for meaningful human control, which calls for tighter restrictions on autonomous weapons. When there is a risk that people will be nearby, for example, a weapon’s autonomy could be restricted.
According to Samuel Bendett, an analyst at the Center for Naval Analyses and adjunct senior fellow at the Center for a New American Security, Russia’s Ministry of Defense “basically claims that autonomous and AI-enabled systems cannot work without human oversight – but at the same time, its R&D institutions are debating the maximum potential autonomy for such systems in war.”
Russia has framed this shift toward robots and digital war tools as one that can save lives while also easing the country’s ongoing struggle to find enough people to fill its military ranks.
Protecting troops, however, is only one part of a military’s obligations in combat. Another is ensuring that weapons are used only against lawful and appropriate targets. The automation is dangerous if the Armata mistakes school buses for tanks or crowds of people for enemy combatants, and avoiding such errors requires that the person approving the automated target selection pay close attention, which is harder than it sounds.
Verbruggen compares the situation to that of a human driver in an autonomous car: if the car has been in control for a long time, the driver can grow complacent, and if they are suddenly forced to take over, they may be less alert than they need to be.
A soldier may be safely nestled inside the Armata, but they will still have to make snap decisions about the tank’s guns. In those moments, they may simply trust the automated targeting, even if they do not understand why a particular target was chosen.