Ubiquitous IT giant Google has quietly inked a partnership with the Department of Defense to militarize artificial intelligence and machine learning technologies, reinvigorating fears of a Terminator-style apocalyptic scenario.
Google has been secretly working with the Pentagon to help its 1,100-strong fleet of drones detect images, faces, and behavioral patterns, and plans to scour massive amounts of video footage to improve bombing accuracy for autonomous drones. The endgame is to improve combat performance by automating the decision-making process of locating and targeting combatants, The Intercept reported on Tuesday.
Project Maven was launched in April 2017 to establish an “Algorithmic Warfare Cross-Functional Team,” which advocates using sophisticated algorithm-based technologies to combat rising “competitors and adversaries”.
According to a Pentagon memo dated April 26, 2017, its objective is to accelerate the integration of big data and machine learning in combat situations and speed up the analysis of collected data. Former Secretary of Defense Robert Work signed off on the initiative.
Project Maven also aims to “augment or automate Processing, Exploitation and Dissemination (PED) for unmanned aerial vehicles (UAVs)” in order to “reduce the human factors burden of [full motion video] analysis, increase actionable intelligence, and enhance military decision-making,” he wrote.
The Pentagon has become increasingly worried that it will be displaced as the world’s top AI developer. At a February 13 hearing, Senators Jack Reed (D-Rhode Island), Mark Warner (D-Virginia) and others lamented Chinese efforts to develop artificial intelligence (AI) and quantum computing, warning that the US was being left behind.
Another DOD report, “Unmanned Systems Integrated Roadmap”, notes that there are three primary drivers behind the push towards AI: “department budgetary challenges, evolving security requirements, and a changing military environment.” The report echoes a separate US Government Accountability Office (GAO) report which addressed problems with human-piloted drones, including fatigue, human error, and demoralization.
“Downward economic forces will continue to constrain Military Department budgets for the foreseeable future. Achieving affordable and cost-effective technical solutions is imperative in this fiscally constrained environment,” it also pointed out.
“People and computers will work symbiotically to increase the ability of weapon systems to detect objects,” Marine Corps Colonel Drew Cukor said during a 2017 Defense One Tech summit. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”
Cukor also mentioned the program would help to identify 38 classes of objects essential to detect in warfare, especially when “fighting” Islamic State militants. He also outlined plans to deploy Project Maven by the end of last year.
“We are in an AI arms race […] It’s happening in industry [and] the big five Internet companies are pursuing this heavily. Many of you will have noted that Eric Schmidt […] is calling Google an AI company now, not a data company,” he said.
Google is no stranger to the Department of Defense. Eric Schmidt, former CEO of its parent company Alphabet, chaired the DOD Defense Innovation Board under the Obama administration.
Some Google employees were outraged that the company would share its technology with the military, according to Gizmodo, while others said the project raised ethical questions about machine learning.
A company spokeswoman told Bloomberg that Google was sharing its TensorFlow APIs with the military for “non-offensive uses only.”
“Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others,” the unnamed spokeswoman said.