
DOD Focuses Early AI Use on ‘Low Consequence’ Applications, Not Command and Control

The Defense Department has a long way to go in developing artificial intelligence and applying it to the most pressing military problems. For now, DOD is applying AI toward humanitarian assistance and predictive maintenance, the director of the Joint Artificial Intelligence Center said.

“We start with low-consequence use cases for a reason,” Air Force Lt. Gen. John Shanahan said during a panel discussion last week at the U.S. Naval Academy in Annapolis, Maryland. Because they are “narrow” applications, he explained, it’s easier to assess results.

Shanahan said AI hasn’t yet reached the level of readiness needed for more complex applications such as nuclear command and control or missile defense, which carry a much higher risk if a system doesn’t work as expected.

“I think that’s not where any of us are interested in heading right now,” he said.

One measure the department is applying now is the perceived risk versus the potential reward of using AI in a particular application, and Shanahan said he is not yet seeing the reward outweigh the risk.

“I can’t show the rewards right now on mission-critical systems,” he said. “On decision support, every single combatant command wants help on decision support systems: ‘How can I do an operational plan in two weeks instead of two years?’ That’s very, very challenging … to take on.”


The reward for solving a problem like decision support would be great, Shanahan said, especially in terms of saving time, but only if an AI system can get it right, and that’s just not happening yet.

“Nobody has proven that those rewards justify the risks we’re going to take right now,” he said. “Everything that we do in the business I am in is about risk. Who incurs the risk? What’s the risk to the mission? What’s the risk to force? Is it a risk worth accepting? What I am having a hard time getting through right now is [that] I am not seeing the rewards outweigh the risk in those mission-critical cases.”

Still, Shanahan said, he’s confident AI is going to be a big part of the department’s future.

“There is no part of the Department of Defense that cannot benefit from AI,” he said.

Challenges beyond risk exist as well, he said, including hurdles in military culture, talent and data. Military culture is built around long-term planning for new systems, he explained, and a new aircraft might take decades to deliver.

“There are a lot of people that want to go forward very quickly with AI capabilities in the department, but we live by five-year budget cycles and weapons system milestones that are measured in five- to 10-year increments, as opposed to how quickly can I take an algorithm, update it and put it back into the field,” Shanahan said. “We have a long way to go to really embrace the speed and the scale of what’s happening in commercial industry.”

The Defense Department, he said, is making progress in learning to do acquisition and contracting more quickly. He cited as examples the Defense Digital Service, which hires top experts from industry and academia for short tours to help overcome defense challenges, and the Defense Innovation Unit, which provides funding to private sector companies to solve defense-related problems.


Source: Department of Defense
