TITLE: The Perilous Coming Age of AI Warfare
https://www.foreignaffairs.com/ukraine/perilous-coming-age-ai-warfare
EXCERPTS: Last year, the Ukrainian drone company Saker claimed it had fielded a fully autonomous weapon, the Saker Scout, which uses artificial intelligence to make its own decisions about who to kill on the battlefield. The drone, Saker officials declared, had carried out autonomous attacks on a small scale. Although this has not been independently verified, the technology necessary to create such a weapon certainly exists. It is a small technical step—but a consequential moral, legal, and ethical one—to then produce fully autonomous weapons that are capable of searching out and selecting targets on their own.
Widely deployed autonomous weapons integrated with other aspects of military AI could result in a new era of machine-driven warfare. Military AI applications can accelerate information processing and decision-making. Decision cycles will shorten as countries adopt AI and automation to reduce the time to find, identify, and strike enemy targets. In theory, this could leave humans more time to make thoughtful, deliberate decisions. In practice, competitors will feel forced to respond in kind, using automation to speed up their own operations to keep pace. The result will be an escalating spiral of greater automation and less human control.
The end state of this competition will likely be war executed at machine speed and beyond human control. In finance, the widespread use of algorithms in high-frequency trading has led to stocks being traded autonomously at superhuman speeds. The Chinese military scholar Chen Hanghui of the People’s Liberation Army’s Army Command College has hypothesized about a “singularity” on the battlefield, a point wherein the pace of machine-driven warfare will similarly outstrip the speed of human decision-making. This tipping point would force humans to cede control to machines for both tactical decisions and operational-level war strategies. Machines would not only select individual targets but also plan and execute whole campaigns. The role of humans would be reduced to switching on the machines and sitting on the sidelines, with little ability to control or even end wars.
TITLE: US military pulls the trigger, uses AI to target air strikes
https://www.theregister.com/2024/02/27/us_military_maven_ai_used/?td=rt-3a
EXCERPT: The US Department of Defense has deployed machine learning algorithms to identify targets in over 85 air strikes on targets in Iraq and Syria this year.
The Pentagon has done this sort of thing since at least 2017 when it launched Project Maven, which sought suppliers capable of developing object recognition software for footage captured by drones. Google pulled out of the project when its own employees revolted against using AI for warfare, but other tech firms have been happy to help out.
In 2017, Marine Corps Colonel Drew Cukor said that the Pentagon hoped to integrate the software with government platforms "by the end of the calendar year" to gather intelligence.
Now the US Central Command, operating in the Middle East, Central Asia, and some parts of South Asia, has used the algorithms to help carry out over 85 air strikes across seven locations in Iraq and Syria on February 2.
Schuyler Moore, CTO for US Central Command, said that the military began deploying Project Maven's computer vision systems in real campaigns after Hamas' surprise attack on Israel last year.
"October 7 everything changed," Moore told Bloomberg. "We immediately shifted into high gear and a much higher operational tempo than we had previously.
The object recognition algorithms are used to identify potential targets. Humans then operate weapons systems. The US has reportedly used the software to identify enemy rockets, missiles, drones, and militia facilities.
"We've certainly had more opportunities to target in the last 60 to 90 days," Moore said. The US Central Command has also tried to run an AI recommendation engine to see if it could suggest the best weapons to use in operations and create attack plans. The technology, however, "frequently fell short.
"There is never an algorithm that's just running, coming to a conclusion and then pushing onto the next step," she said. "Every step that involves AI has a human checking in at the end."
TITLE: Imagine killer AI robots in Gaza, in the Donbas
https://responsiblestatecraft.org/ai-technology-war/
EXCERPT: [S]ome American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.
“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”
As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and "vote" on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit "swarm" behavior in nature. As Scharre put it, "Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse."
In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre's at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.
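Scharre's image of drones that "vote" on preferred modes of attack can be made concrete with a toy consensus routine: each agent scores its options against its own noisy reading of a shared threat picture, and the plurality winner binds the group. This is a sketch of one plausible mechanism under stated assumptions, not any fielded algorithm; all names in it are illustrative.

```python
# A toy illustration of the "voting" coordination Scharre describes:
# each drone perturbs a shared threat picture with its own sensor noise,
# votes for the approach axis that looks weakest to it, and the swarm
# adopts the plurality winner. Illustrative only, not a fielded system.
import random
from collections import Counter


def local_vote(shared_picture: dict[str, float]) -> str:
    """One drone's vote: add local sensor noise, pick the safest axis."""
    noisy = {axis: threat + random.gauss(0, 0.1)
             for axis, threat in shared_picture.items()}
    return min(noisy, key=noisy.get)  # attack where defenses look thinnest


def swarm_decision(n_drones: int, shared_picture: dict[str, float]) -> str:
    votes = Counter(local_vote(shared_picture) for _ in range(n_drones))
    return votes.most_common(1)[0][0]  # plurality winner binds the swarm


if __name__ == "__main__":
    # Hypothetical fused threat estimates per approach axis (higher = riskier).
    picture = {"north": 0.8, "east": 0.3, "south": 0.6, "west": 0.35}
    print("Swarm attacks from the", swarm_decision(50, picture))
```

Even this crude scheme hints at why swarm decisions are hard to predict from outside: the outcome turns on sensor noise across dozens of voters rather than on any single machine's logic.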
Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.
“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”
This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”
To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.
At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To that end, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it calls, ominously enough, Project VENOM, or "Viper Experimentation and Next-generation Operations Model." Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.
In other words, it's only a matter of time before the U.S. military (and presumably China's, Russia's, and perhaps those of a few other powers) can deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Each participating robotic member of such a swarm would be given a mission objective ("seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates") but not precise instructions on how to achieve it, leaving the machines to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.
The propensity for such interconnected AI systems to produce novel, unplanned outcomes is what computer experts call "emergent behavior." As ScienceDirect, a database of scientific literature, explains it, "An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties." In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.
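The textbook demonstration of emergence is flocking. In the boids-style sketch below, each agent follows three purely local rules (drift toward neighbors, avoid collisions, match headings), yet a coherent flock forms that no individual rule or agent encodes; the weights are arbitrary, chosen only to make the effect visible.

```python
# A compact boids-style sketch of "emergent behavior": every agent obeys
# three local rules (cohesion, separation, alignment), and a coherent
# flock emerges that no single rule or agent describes. Illustrative only;
# all parameters are arbitrary.
import random

N, STEPS, RADIUS = 30, 100, 10.0  # agents, simulation steps, neighbor range


def step(pos: list[complex], vel: list[complex]) -> None:
    for i in range(N):
        nbrs = [j for j in range(N) if j != i and abs(pos[j] - pos[i]) < RADIUS]
        if not nbrs:
            continue
        center = sum(pos[j] for j in nbrs) / len(nbrs)    # cohesion target
        heading = sum(vel[j] for j in nbrs) / len(nbrs)   # alignment target
        avoid = sum(pos[i] - pos[j] for j in nbrs
                    if abs(pos[j] - pos[i]) < 2.0)        # separation push
        # Local rules only: nothing here says "form a flock".
        vel[i] += 0.01 * (center - pos[i]) + 0.05 * (heading - vel[i]) + 0.1 * avoid
        speed = abs(vel[i])
        if speed > 2.0:            # cap speed so the sketch stays stable
            vel[i] *= 2.0 / speed
    for i in range(N):
        pos[i] += vel[i]


if __name__ == "__main__":
    pos = [complex(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(N)]
    vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
    for _ in range(STEPS):
        step(pos, vel)
    spread = max(abs(p - q) for p in pos for q in pos)
    print(f"Flock spread after {STEPS} steps: {spread:.1f} units")
```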
At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.
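Such a failsafe could, in principle, be as simple as a watchdog timer, as in the hypothetical sketch below: if no heartbeat arrives from human supervisors within a set deadline, a return-to-base mode overrides the mission. The author's open question is whether an adaptive, goal-driven system would in practice stay subordinate to so simple a rule.

```python
# A schematic sketch of the comms-loss failsafe described above: a watchdog
# tracks the time since the last heartbeat from human supervisors and, past
# a deadline, overrides the mission with a return-to-base command. All names
# are hypothetical; whether an adaptive system would actually obey such a
# rule is precisely the question the passage raises.
import time
from enum import Enum


class Mode(Enum):
    MISSION = "mission"
    RETURN_TO_BASE = "return_to_base"


class CommsWatchdog:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called whenever a message from human supervisors arrives."""
        self.last_heartbeat = time.monotonic()

    def mode(self) -> Mode:
        silent_for = time.monotonic() - self.last_heartbeat
        # The hard rule: lose the link (jamming or otherwise) and the
        # failsafe, not the mission planner, decides what happens next.
        return Mode.RETURN_TO_BASE if silent_for > self.timeout_s else Mode.MISSION


if __name__ == "__main__":
    watchdog = CommsWatchdog(timeout_s=0.5)
    print(watchdog.mode())   # Mode.MISSION: link is fresh
    time.sleep(0.6)          # simulate jamming / a lost link
    print(watchdog.mode())   # Mode.RETURN_TO_BASE: deadline exceeded
```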