Redefining Warfare through AI
The contemporary approach to warfare has taken on a corporate sheen that contrasts starkly with the chaos of the battlefield. Recent reporting has described the Israel Defense Forces’ “Target Factory,” built around an AI system called Habsora, or “the Gospel.” The technology doesn’t dictate military operations but accelerates them, reportedly generating around 100 targets a day where human analysts once produced far fewer. This output suggests a production-line mentality toward conflict and raises questions about where the human element fits. We’re witnessing a new aesthetic of warfare, one less obscured by the fog of battle and managed by watchful algorithms.
Automation in combat isn’t entirely new. The historical thread runs from ancient Chinese inventions like the tripwire crossbow to the V-1 flying bomb of the Second World War. During the Cold War, Soviet analysts built computer models in a crude attempt to predict an American nuclear strike. There has long been a desire to offload hard decisions onto machines. What sets the current era apart is the unprecedented scale and speed of that delegation. Machines have evolved from mere calculators in distant bunkers into active partners and hunters.
Take, for example, the Turkish Kargu-2 quadcopter. In 2020, according to a UN report, it autonomously hunted retreating fighters in Libya, with no human involved in the targeting decision. The word “hunted” implies an almost biological patience and eagerness. Or consider Israel’s push toward a true combat swarm, in which small drones coordinate autonomously to locate and strike targets such as rocket launch sites. Human input is minimal, limited largely to setting the mission and authorizing action, while the machines manage the intricacies of the operation.
Theorists might argue that human intuition still holds sway over machine-driven precision, making a case for balancing cold calculation against human judgment. The distinction recalls the old Greek one between techne, technical skill, and phronesis, practical wisdom. Machines may have the upper hand in efficiency, but they lack a nuanced grasp of context, a quality long considered irreplaceable in human decision-making. AI can outperform seasoned pilots in simulated aerial dogfights, yet one wonders whether it truly comprehends the motives behind human conflict. A camouflaged tank can be spotted from thousands of feet in the air; the political ramifications of destroying it are scarcely quantifiable.
There’s an implicit assumption that the human operator will supply the wisdom needed to navigate these complexities. But as systems grow more efficient, the opportunity for meaningful human oversight shrinks. If the kill chain contracts from hours to mere seconds, and algorithms sift vast data sets to produce targeting solutions, the avenues for dissent narrow markedly. System momentum favors rapid response, creating an environment in which questioning an algorithm’s recommendation slows operations down. We often hear about “automation bias,” the tendency to over-rely on machines; this isn’t merely a psychological quirk but a predictable outcome of a system that projects authority and has digested more data than any human analyst could review.
A troubling cultural shift accompanies this transition. In Western societies, particularly the U.S., the embrace of algorithms in warfare springs in part from a deep aversion to casualties among one’s own forces: drones and robotic systems are portrayed as humane alternatives, promising the ability to exert force with minimal personal risk. The valor associated with the traditional warrior is being redefined; the new hero isn’t necessarily the ace pilot but the developer or operator who keeps these systems running.
This drive toward conflict without personal risk further alienates us from what might be called the visceral gravity of a lethal choice. An F-16 pilot feels the aircraft shudder at missile release; a drone operator experiences a more detached kind of stress; the officer approving algorithm-generated targets is more removed still. The algorithm presents a solution for human approval, and accountability diffuses. If a tragic error occurs, such as misidentifying a civilian gathering as a hostile force, uncomfortable questions follow. Is the fault with the algorithm? The data used for training? The officer who trusted it? The chain of responsibility becomes a convoluted web.
Ultimately, these systems are tools and don’t intrinsically transform the fundamental nature of warfare. Yet one must reflect on the implications. As conflicts unfold at breakneck speed, as battles morph into algorithmic contests dictated by data-processing capacity, can we truly recognize it as war? Does the human element diminish, overshadowed by system validations and mechanical evaluations? The irony is striking: in our quest to perfect the machinery of war, we risk making conflict itself a hollow projection of automated violence. The “Gospel” produces targets; the drones execute its logic; we merely observe.