In the ongoing conflict between Israel and Palestine, technology plays an increasingly central and troubling role. Israel has deployed data collection, AI-powered tools, and surveillance systems to monitor and target Palestinians in ways that extend well beyond traditional warfare. Its arsenal of spyware, facial recognition systems, and automated weapons has sparked global outcry, even as it draws interest from governments and corporations seeking similar capabilities.
Israel’s Advanced Surveillance and Spyware Systems
One of Israel’s most powerful tools in its surveillance arsenal is the Blue Wolf system, an app that allows Israeli forces to collect biometric data from Palestinians. The app has reportedly been in wide use across the Occupied Palestinian Territories since 2020.
- Blue Wolf’s biometric data collection often occurs at checkpoints or even at private homes during raids.
- The Israeli army has reportedly turned the process into a game, rewarding units with points for collecting the most photographs of Palestinians, including children.
In addition to Blue Wolf, there is Red Wolf, a facial recognition system linked to CCTV networks. Red Wolf works hand-in-hand with the thousands of surveillance cameras that blanket Palestinian areas, particularly in Jerusalem’s Old City and Sheikh Jarrah. Amnesty International’s 2023 report, Automated Apartheid, highlighted the intensity of this surveillance, documenting cameras placed every five meters in some parts of the city.
Palestinian Lives Under a Web of Constant Surveillance
The psychological effects of living under constant watch are immense. As one Palestinian resident put it, the sight of so many cameras instills a sense of permanent anxiety. Nor does the surveillance stop at cameras: a related app, White Wolf, reportedly gives security personnel in Israeli settlements the ability to search the database of Palestinians.
This heavy surveillance has led to deep social and psychological issues among Palestinians. In 2021, the digital rights group 7amleh reported that the surveillance extended into people’s homes, with one woman stating that she felt the need to wear her hijab even while sleeping due to the fear of being watched.
AI-Driven Weapons and Military Contracts
Israel’s use of AI in military operations has drawn significant condemnation. One key development in this area has been the creation of Lavender and The Gospel (“Habsora”)—AI-powered systems designed to identify and select targets. Human rights organizations have expressed outrage at these tools, particularly because they often operate with minimal human oversight.
A particularly shocking system, reportedly known as Where’s Daddy, uses AI to track individuals suspected of militant activity and flag when they enter their homes. Strikes timed to those moments have killed not only the targets but also their families and neighbors.
In 2021, Israel’s tech-driven warfare received a major boost when Google and Amazon jointly won a US$1.2 billion cloud-computing contract with the Israeli government, known as Project Nimbus. The initiative reportedly provides Israel with enhanced facial recognition, object tracking, and other tools that can support military objectives. The signing sparked protests from hundreds of Google and Amazon employees, who formed the coalition No Tech for Apartheid to voice their concerns.
The Role of AI in Warfare: From Ranking Civilians to Drone Swarms
Israeli intelligence units increasingly rely on AI to rank individuals and infrastructure in Gaza by their supposed affiliations with militant organizations. Target lists that once took human analysts up to a year to compile are now reportedly generated in half a day. Critics argue this reliance on AI has led to mass killings of civilians wrongly identified as threats.
Since 2021, Israel has also deployed drone swarms in Gaza. These UAVs locate, monitor, and in some cases strike targets. As early as 2009, Human Rights Watch had raised concerns about Israel’s drone operations. By 2022, the entirety of Gaza was reportedly “covered” by these drones, which monitor the area 24 hours a day.
Palestinians have reported that the constant buzzing of drones overhead causes significant psychological distress. In one particularly sinister tactic, drones have allegedly broadcast recordings of crying infants to lure targets out of hiding, setting them up for further strikes.
Intensifying AI-Powered Attacks and International Responses
The scale of Israel’s AI-powered warfare escalated after October 2023. International legal experts have described the bombardment of Gaza as a potential genocide. In January 2024, the International Court of Justice found it plausible that Palestinians’ rights under the Genocide Convention were at risk and ordered provisional measures against Israel. In July 2024, the court went further, issuing an advisory opinion that declared Israel’s occupation of the Palestinian territories unlawful and found its policies in breach of the international prohibition on racial segregation and apartheid.
The situation in Gaza has reached catastrophic levels, with the United Nations Office for the Coordination of Humanitarian Affairs reporting over 41,000 Palestinians killed and more than 96,000 injured.
Israel’s use of AI in warfare is not confined to Gaza. In September 2024, Israel was accused of carrying out AI-assisted attacks in Lebanon, where coordinated explosions killed at least 37 people, and the subsequent escalation displaced over 90,000. Continuing airstrikes on Lebanon, Syria, and Yemen have drawn further attention to the dangerous role AI plays in modern conflict.
The global community remains divided on the issue. While some governments and institutions criticize Israel’s actions, others seek to capitalize on the technology for their own military and surveillance purposes.