The U.S. Army this week announced steps to protect its troops and strengthen its ability to implement artificial intelligence as part of a 500-day plan.
The Army’s Office of Acquisition, Logistics and Technology (ALT) on Wednesday unveiled two new initiatives called “Break AI” and “Counter AI” that will test advanced AI technologies for reliable use in the field and protect against hostile use of AI against the United States, the Federal News Network reported this week.
The Army is not only concerned with the safe implementation of AI across all branches of the military, but also with its safe development in coordination with external parties.
“One of the barriers to adoption is how we look at the risks around AI. We have to address issues around poisoned data sets, adversarial attacks, Trojans, things like that,” Young Bang, deputy assistant secretary of the Army’s ALT, reportedly said Wednesday during a technology conference in Georgia.
“That’s easier if you’ve developed it in a controlled, trusted environment that’s owned by the Department of Defense or the Army, and we’ll do all that,” he added. “But this is really about how we can take the algorithms from third-party or commercial vendors directly into our programs so we don’t have to compete with them.”
“We want to adopt it.”
Bang’s announcement came as the Army completed a 100-day sprint to incorporate AI into its procurement process.
The goal was to explore ways the Army could develop its own AI algorithms while working with trusted third parties to develop the technology as securely as possible, the Federal News Network reported.
The Army is now using what it learned in the 100-day sprint to test and validate the implementation of AI across the board, developing systems for Army use while strengthening its defenses against adversary use of AI.
The Break AI initiative will focus on how AI might evolve in an area called artificial general intelligence (AGI), which is the development of software designed to match or exceed human cognitive capabilities. This technology has the potential to employ complex decision-making and learning capabilities.
AGI, which does not yet exist, would go beyond current AI software, which can only generate predicted results based on the data it is provided.
But the next phase is not only about development; it is also about defending against this unpredictable technology, and the Army therefore has a lot of work ahead of it.
“It’s about how we actually test and evaluate artificial intelligence,” Bang said. “As we move toward AGI, the question is how can we test something where we don’t know what the outcome will be, nor how the behavior will evolve.”
“This cannot be tested in the same way as deterministic models. This is where we need the help of industry.”
The second part of the Army’s 500-day plan is a bit simpler, said Jennifer Swanson, deputy assistant secretary of the Army’s Office of Data, Technology and Software.
“We want to make sure our platforms, our algorithms and our capabilities are protected from attacks and threats, but it’s also about how we counter what the adversary has,” she reportedly said. “We know we’re not the only ones investing here. There are many investments in countries that pose a major threat to the United States.”
Because of the sensitive nature of these operational security initiatives, Army officials are remaining tight-lipped about the specific details the military branch will pursue in developing AI capabilities.
However, Swanson said, “As we start to learn and figure out what we’re going to do, there will be things that we share.”