Kettering Bugatti
Loitering munitions, and the stalemates they cannot break. Plus, AUSA coverage!
The Kettering Bug was built to fly, and then it was built to crash.
If there is a touchstone for my writing on drones and robots and missiles and war, it is this old bad machine, an aerial torpedo built for World War I that never saw combat. It is the ancestor of drones and cruise missiles and, most especially, of loitering munitions, a relatively recent kind of flying machine that is a drone until it is a missile.
The Kettering Bug was built to fly, and then it was built to crash, with an explosion.
It did this with an on-board timer, one that ran the engine until it didn't. When the engine stopped, the wings were released, and the Bug became a bomb, falling on whatever lay below with deadly effect. It is autonomous only in the most generous sense of autonomy.
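To underline just how thin that autonomy was, here is a minimal sketch in Python of the Bug's entire "decision" process. The names and the count are invented for illustration; the real machine did this with cams and a mechanical counter geared to the engine, not code:

```python
# A minimal sketch of the Kettering Bug's open-loop "autonomy."
# All names here are invented for illustration; the historical
# machine used a mechanical revolution counter, not software.

def fly_bug(engine_revolutions_to_target: int) -> str:
    """Run the engine for a preset count, then become a bomb."""
    revolutions = 0
    while revolutions < engine_revolutions_to_target:
        revolutions += 1  # the engine turns; the Bug flies on

    # The preset count is reached: the engine cuts out,
    # the wings release, and the airframe falls.
    return "detach wings and dive"

# The only "decision" is a counter hitting a number a human chose
# before launch. Everything after that is gravity.
print(fly_bug(engine_revolutions_to_target=24_000))
```

The only input a human controls is set before the machine ever leaves the ground, which is exactly why blame for where it lands traces back so cleanly.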
With the Bug, it would be easy to trace a chain of causality backwards from the explosion to a responsible human. If the Bug crashed into a house instead of a trench, if it hit civilians instead of uniformed enemy combatants, those deaths are on the head of the human who pointed the aerial torpedo at a target and let it fly.
This is, broadly, the way the laws of war see missiles: as attributable to the humans who put in the targeting information and launched them.
What matters, or I should say, what might matter, in a future where actions in war are held up for accounting in a court of law, is what happens if the missile selects a different target once it's been fired. This question extends beyond loitering munitions, and sits at the heart of weapons like the LRASM, or Long Range Anti-Ship Missile, developed by Lockheed Martin for the US Navy.
(An aside: built as a kind of economy-of-force tool, the LRASM is fired at a ship, and then, if its sensors pick up another target or receive some confirmation that the first target is already sunk, it will change course and, if it can, hit a different ship that matches a pre-selected target profile. As Paul Scharre ably details in "Army of None," without changing the actual function of the missile, Lockheed has changed how it talks about what the missile does, emphasizing with each iteration more and more human control over the process.)
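To make the aside concrete, here is a hedged sketch of the kind of in-flight retargeting logic described above. This is not Lockheed's actual code or anything like it; every name, field, and check is invented for illustration:

```python
# Illustrative sketch of in-flight retargeting as described above.
# NOT the LRASM's real logic; names and fields are invented.

from dataclasses import dataclass

@dataclass
class Contact:
    """A ship the missile's sensors can see."""
    identifier: str
    matches_target_profile: bool  # fits the pre-selected profile?

def choose_target(original: Contact,
                  original_confirmed_sunk: bool,
                  other_contacts: list[Contact]) -> Contact | None:
    """Keep the human-selected target unless it is gone."""
    if not original_confirmed_sunk:
        return original  # stay on the target a human chose at launch
    # Original target is gone: fall back to the first contact that
    # matches the pre-selected profile, if any contact does.
    for contact in other_contacts:
        if contact.matches_target_profile:
            return contact
    return None  # no valid target remains

# Example: the launch target is confirmed sunk mid-flight.
picked = choose_target(
    original=Contact("hull-1", matches_target_profile=True),
    original_confirmed_sunk=True,
    other_contacts=[Contact("hull-2", matches_target_profile=False),
                    Contact("hull-3", matches_target_profile=True)],
)
print(picked)  # Contact(identifier='hull-3', ...)
```

Even in this toy form, the accountability gap is visible: a human chose the first target, but the machine chose its replacement.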
Autonomy for an armed, mobile machine is largely about the length of time between when it is set in motion and when it makes a decision to kill. Some autonomous weapons are fairly indefinite in time and finite in mobility, like land mines that sit in place until they explode. Missiles, even missiles that may change targets like the LRASM, or short-flight loitering munitions like the Switchblade and its cousins, will most likely be seen as the responsibility of the human who fired them in the first place.
What is especially fascinating about these algorithmic targeting decisions is not so much that they will have errors. That's a given with war, with humans, and especially with algorithmic decision-making. What is fascinating is that this kind of error offers an excuse, a new place to put blame. It was, after all, a machine that disobeyed human intent; it was a machine that mistook a school bus for an armored transport, a pharmacy for a bunker, a wedding for an armed assault.
If the human is responsible for firing the weapon, but not ultimately responsible for where its coding leads it to kill, then it becomes almost impossible to ban lethal autonomy without implicating the whole of the production process, from sensor design to coding, at once.
This is true with missiles, and it is especially true with loitering munitions, which can take a much longer time to find a target. Landmines, at least, are fixed in place. A loitering munition is a sky-bound mine, overhead for an unclear length of time, that may or may not explode, depending on whether its trigger conditions are met.
In the midst of all this, there is an actual shooting war using these drones. The presence of loitering munitions on the battlefield has led to breathless speculation about the end of tanks, but the war has devolved into a stalemate that, barring a stunning tactical innovation, can only be broken through attrition. It's worth looking instead at how drones give the illusion of a more effective war.
From War on the Rocks:
Furthermore, the relative accessibility of combat footage — whether from drones, cellphones, or cameras — paints a stylized picture of the battlefield for any analyst. They are official propaganda, and it is worth noting that on the modern battlefield, some systems have cameras or live video feeds, while many do not, distorting perceptions on combat effectiveness. A social media feed composed largely of drone video footage could lead one to believe in the dominance of such systems, even in a conflict where many casualties are still inflicted by armor, artillery, and multiple launch rocket systems.
In all the focus on drones as a weapon, it is easy to lose sight of the fact that, for nations looking to claim victories, an abundance of drone-shot video is an end unto itself.
“First, the conflict showed again the important propaganda value of drones. As drones carry sophisticated surveillance tech, they document every strike they make (if armed), or operation flown,” tweets Ulrike Franke, one of the keenest observers of drone policy. “So using drones is like having a film crew with you, and Azerbaijan in particular has taken advantage of this, publishing clips of their drone operations.”
I lack the media-studies competency and fluency to push this claim further, but I feel like there's something about how cameras, attached to agents of violence, become a tool for justifying that violence.
What I can say, more solidly, is that as loitering munitions and drones and other battlefield sensors become a bigger part of future war, international laws of war should look at regulating data preservation for these weapons. In cases where an algorithm is truly responsible for killing someone against the express desires of the human that set it in motion, it’s possible the evidence transmitted by the weapon itself will be the only other verifiable claim about what actually happened.
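As one sketch of what "verifiable" could mean in practice, consider a tamper-evident log in which each recorded event is chained to the one before it by a hash, so that any after-the-fact edit breaks the chain. This is an illustration of the general idea, not a description of any fielded system; all names below are invented:

```python
# Sketch of a tamper-evident event log, one possible shape for the
# data-preservation idea above. Invented names; not any real system.

import hashlib
import json

def append_event(log: list[dict], event: dict) -> None:
    """Chain each event to the previous one with a SHA-256 hash."""
    previous_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((previous_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": previous_hash, "hash": entry_hash})

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edit to history breaks the chain."""
    previous_hash = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((previous_hash + payload).encode()).hexdigest()
        if entry["prev"] != previous_hash or entry["hash"] != expected:
            return False
        previous_hash = entry["hash"]
    return True

log: list[dict] = []
append_event(log, {"t": 0, "msg": "launched, target set by human"})
append_event(log, {"t": 41, "msg": "algorithm re-selected target"})
print(verify(log))         # True
log[1]["event"]["t"] = 99  # rewrite history...
print(verify(log))         # ...and verification fails: False
```

A record like this would not settle who is responsible, but it would at least make the machine's own account of events something a court could check.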
This is all a far cry from what the Kettering Bug set out to prove. We should take some comfort, I think, that over a century later, flying machines are still being applied to the same problem of a trench carved into a battlefield. As data-rich as modern warfare can be, sensors and algorithms alone cannot magically make the danger of attacking armed humans prepared for a fight disappear.
(Kettering Aerial Torpedo, via Wikimedia Commons)
ARMY TIMES
Every October, around Indigenous Peoples’ Day, the Association of the United States Army hosts a big conference in Washington, DC, usually involving parking tanks inside the convention center. This year, the arms show was all virtual, which worked out well for my ability to cover it. And so, at too-early Mountain Time this week, I got up, watched a bunch of contractors sell code as weapons, and then wrote about it.
I like convention coverage, even if I know going in that I will only hear sales pitches. It’s important, I think, to document how weapons are sold, especially when the pitch is being made with the expectation that there are buyers in the room.
“Hybrid Clouds,” where one company offers its own special cloud features on files ultimately stored by Amazon, Microsoft, or Google, was the focus of the first big pitch. Silicon Valley is (largely) a side-effect of defense spending, or even a deliberate creation of it, and so it’s important to track just how close tech and the Pentagon still are.
Sensor and targeting autonomy will come before movement and firing autonomy, I think. In one demonstration of a military targeting algorithm, the camera showed a tree and a human tagged with the same label, right after the maker said it could tell civilians apart from armed combatants. Hmm.
My third story from AUSA 2020 is about the “Defense Internet of Things,” which is a helluva phrase!
LABOR, FORCE REVISITED, UNLOCKED
Last week, in lieu of finishing this newsletter, I made my first subscriber-only newsletter publicly available. As best I’ve understood from feedback, paid subscribers are fine with anything private going public a month later, and that’s now! Here it is, for all of you to enjoy. Subscribers can expect more time-locked exclusives as I produce them.
I very much enjoy writing like this, and reader support is what turns it from a weird, time-intensive hobby into part of my patched-together freelance income. Paid subscribers will gain access to more ways to directly contact me, and I've included a survey in my paid posts so I can better respond to what it is that subscribers actually want me to cover.
That’s all for this fortnight. Thank you all for reading, and if you’re in the mood for more newsletters, may I recommend checking out what we’re doing over at Discontents?