Gatling, AI, and a Whirling Cavalcade of Unforeseeable Consequence
Lessons for 21st century programming from a 19th century failure to anticipate the 20th century.
Richard Gatling wanted history to record that his namesake machine was, fundamentally, a tool to reduce the horror of war.
Here is what he wrote to a friend:
“It occurred to me that if I could invent a machine -- a gun -- which could by its rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a great extent, supersede the necessity of large armies, and consequently, exposure to battle and disease be greatly diminished.”
The quote comes to me from Paul Scharre’s “Army of None”; Scharre found it in Julia Keller’s “Mr. Gatling’s Terrible Marvel.” It is instructive, especially, in its timing. Gatling wrote the above in 1877. The Gatling gun was first assembled and tested in 1861-62, and had seen battlefield use in the U.S. Civil War by 1864. (Keller records that, even before it was used in combat, police fired one to scare anti-draft rioters in New York City in 1863.)
We know Gatling today, primarily, as the inventor of the first machine gun, a killing device that combined the labor (if you can call it that) of a full platoon of riflemen into a single piece of artillery. The litany of horrors powered by Gatling guns and machine guns is largely the history of violence in the late 19th and mid-20th centuries. If we include in their number the more portable descendants, in the form of automatic and semi-automatic weapons, then the long tail of violence from this “greatly diminishing” tool of war continues daily.
I’m not here today, really, to talk about machine guns and violence in the 19th century. But I bring up the Gatling anecdote because it captures something essential to the industry of war, a deep deep tension that is possibly irreconcilable.
When it comes to designing weapons for use against people, the makers of those weapons want to simultaneously prove that the weapon is effective, useful, and better than what came before, and also that the new technology the weapon introduces will not immediately come back to cause problems for the people who use it.
In “Army of None,” Scharre uses the Gatling anecdote to set up his long dive into the coming future of autonomous military machines, some of which may have weapons.
“Killer robots” to the more skeptical, these machines reduce the cognitive burden on humans deciding to kill other humans in war. It is labor-saving only if no one decides to exploit that time saved, to turn a seconds-long advantage in robot-aided combat into an opening that could win not just a firefight but a battle, or a whole war.
Here, I want to talk about what this means for maybe the most breathlessly covered piece of military technology news this August: in an event organized by DARPA, an AI pilot beat a human in a virtual dogfight.
This is what that news looked like as headlines, with DARPA itself choosing both the vaguest and most ominous way to describe the event:
“AlphaDogfight Trials Foreshadow Future of Human-Machine Symbiosis,” said DARPA.
“AI Slays Top F-16 Pilot In DARPA Dogfight Simulation,” said Breaking Defense (where I am a contributor).
“Artificial Intelligence Easily Beats Human Fighter Pilot in DARPA Trial” claimed Air Force magazine.
“A Dogfight Renews Concerns About AI's Lethal Potential,” said Wired.
“AI Defeats Human Pilot in DARPA Organized Dogfight,” read The Diplomat’s headline, before immediately qualifying it with a gentler subhed, “The performance of the artificial intelligence program marks an important but modest milestone.”
So, what actually happened?
For about a year, several companies built AIs to control a virtual F-16 fighter in a dogfight in a game. In competition, the teams’ AIs were pitted against each other in one-on-one matches. The winning team then had its AI fight against a human pilot controlling an F-16 in the game. The AI won five times, thanks to a whole host of factors that are largely inaccessible to humans, and only some of which a human could attempt in the game. The most iconic of these is the neck-breaking turns, simulated at Gs that would snap a human neck if attempted in real life (though permissible to all players in the simulation).
Despite the limitations a human would suffer in real life, both the AI and the human player in the game attempted the same strategy: approach close, turn fast to get out of the enemy’s targeting computer, and fire off a fatal shot as soon as it could be placed. The AI succeeded every time, as one might expect an entity built entirely of code to do with perfect information in a virtual environment.
How history will record the AI dogfight 15 years hence depends, to a great extent, on what the people designing military AI choose to do with this information.
DARPA projects are an initial push, proofs of concept designed to attract the interest of contractors or service-based labs and let them pick up the assembled pieces or dangling threads. Two equally plausible interpretations of the AI’s superhuman performance have emerged, creating a vexing problem.
The first is that, because the AI was more capable than the human pilot, we might expect AI-augmented weapons (drones, robot tanks, “uncrewed wingmen,” or the like) to have that same kind of special perceptive power. If the humans in charge of controlling (or, perhaps, deploying) people and robots in battle fear their forces might be outgunned by enemy AI, then perhaps commanders will be more cautious.
The second interpretation is that, because AI is fundamentally an opaque technology, human commanders will overestimate their own power, and underestimate the ability of an enemy to respond to it. This makes first-use of an AI-powered weapon (if one is built) more likely, as it assumes the surprise and precision of hitting first is enough to defeat a reactive force.
If it wasn’t clear from the above, I tend to agree with the “AI weapons are fundamentally destabilizing” camp. Human error is real, and there have been plenty of wars that stem from it, but it is also explainable, and understood on both sides of a given conflict. Human error can be a premise for de-escalation, a way for the political leaders in charge of militaries to urge each other back from the brink.
When a super-fast super-weapon starts shooting, seemingly in ways humans cannot comprehend, it becomes a whole different category of problem.
Gatling’s weapon, and the weapons it inspired, changed the calculus of how humans fight. Rather than reducing the number of people in a battlefield, it shrank the time those people had to either attack, or find cover and survive. That the strength of early machine guns was primarily defensive failed to make the weapons de-escalatory, as commanders disconnected from battle and consequences urged assaults on the hope that a seething tide of sacrificial flesh would be enough to overcome the advantages of a technology they only somewhat understood.
Even in his revision, though, Gatling was right about one part of his terrible machine. Automatic weapons do, by and large, “enable one man to do as much battle duty as a hundred.” (“Battle duty” is a heckuva euphemism for “killing.”)
We are, all of us now, living in that aftermath.
AFTER ALL THAT, CYBORG LOCUSTS
For years, DARPA and other parts of the military research wing have sought a way to make machines as cheap and useful and small as insects. One strain of that research, which hit a major milestone this month, is simply taking locusts, wiring microchips to their brains, and then gluing those microchips to their backs. The idea is that, instead of outright building a miniature disposable bomb scent detector, the locust’s natural sense of smell can do the job. It can, at least in lab tests. The forever war will likely endure long enough for humans to try sending cyborg locusts to sniff out hidden IEDs, a sentence that is an entire indictment of the limitations of trying to engineer an end to war, instead of negotiating one.
Other writing I did this fortnight:
Russia is working on a robot helicopter for search and rescue, and has partnered with Iran for an autonomous robot jetski, ideal in the shallow waters of the Caspian Sea.
The pandemic (and subsequent cybersecurity vulnerabilities) may finally drive Congress to mandate Internet of Things security rules. The Navy awarded an AI contract to combat rust, which for my money is a far more durable problem than dogfights, something the Air Force has managed to avoid since the 1990s.
In Lebanon, the challenges of rebuilding after the Beirut Port disaster are fundamentally political, placed upon a strained populace and a delicate balance of working relationships in the country. In particular, the fact that people or countries may risk secondary sanctions if their aid ends up in the wrong hands makes managing recovery especially thorny.
READING LIST: CRIMES MICRO, MACRO
“True Crime” is a genre I’ve struggled with, both out of a personal squeamishness for the kinds of violence discussed, and also for reasons I couldn’t quite place. In “The Enduring, Pernicious Whiteness Of True Crime,” Elon Green speaks with several Black writers about how their crime reporting gets left out of the genre, and what it means when most stories about crime treat the police as inherently reliable narrators, instead of agents with their own agenda.
At the “Cruel and Unusual” newsletter, Shane Ferro drills down especially on that last point, writing “The humanity and empathy at the core of the true crime genre is reserved for white people, and the stories involved are often not even based in fact, but in law enforcement fantasy.”
Ah, there’s that squeamishness: a genre nominally about power and doubt left deferential to police and to the notion of a single, clear, provable truth.
The other trend in my reading this month was about fire, and specifically, about how the story of California’s ongoing fires is one about nested failures.
There is the immediate tragedy of intense burning that threatens lives, livelihoods, and ancient ecosystems. There is the preceding tragedy, that the State of California built a forest fire fighting system dependent on coerced prison labor, labor it then bars from continuing in that same line of work upon release (and labor that, this year, was unable to fight fires because people in prisons, especially, were hard-hit by the pandemic). There is the failure that massive fires mean for the ability of the natural landscape to contain carbon, thus leading to more hard-to-control fires in the future. And, finally, there is the way that the white settling of California, especially, outlawed existing ways of regularly managing fire and forests, and in so doing created the conditions for massive fires.
Here is what I read about the California fires, in sequence: this short thread from Charlie Loyd; “'Fire is medicine': the tribes burning California forests to save them” by Susie Cagle, who covered in depth the work of the Firelighters of Northern California; and “The fire we need” by Page Buono. How humans can continue to live with fire is a fascinating topic, even if it’s far from my normal beats, and I hope you get as much from reading about it as I did.
As you may have already noticed, this weekend I set up a paid option for Wars of Future Past. You can expect newsletters like this, sent once every two weeks, to remain free, but I’m exploring options for what other writing I could do as a bonus for paid subscribers. If you’ve already signed up for a paid subscription, I am deeply grateful to you for doing so. Expect some communication from me before the next fortnight about what, exactly, you might want to see as a special bonus.
I very much enjoy writing like this, and reader support is what lets me take it from a weird, time-intensive hobby into part of my patched-together freelance income.
That’s all for this fortnight. Thank you all for reading, and if you’re in the mood for more newsletters, may I recommend checking out what we’re doing over at Discontents?