As we are witnessing in real time, generative AI has the potential to dramatically shape how organizations of all types become more productive, more effective, and more mission-oriented. The Department of Defense (DoD) is no exception. Alongside those shared benefits, applying generative AI within the DoD involves unique considerations.
We sat down with Brandon Bean, a retired military officer who joined GDIT as a Senior AI/ML Solution Architect and our Luna AI Defense Capability Lead, to talk about what generative AI is and how it can be effectively and securely used within the DoD. Here’s what he had to say:
Generative AI is impacting all sorts of industries and use cases. What are some of its applications to the defense sector?
Like everywhere else, the technology landscape within the DoD is changing, and so are the challenges it is asked to confront. As a result, there has been a major move within the DoD toward reducing the menial tasks placed on soldiers and civilians alike. By that, I essentially mean the DoD is asking: What can we do to free people up to support their mission instead of spending hours answering emails or generating reports? And that extends from tasks that support soldier readiness and availability to things like airspace surveillance and analysis.
What are some of the unique challenges when it comes to using generative AI within the DoD?
Two words: transparency and ontology.
Transparency is crucial because humans naturally distrust what they cannot see or understand. It’s essential for AI systems to be clear about how they make decisions, especially in high-stakes environments.
Ontology, on the other hand, deals with how the AI understands and categorizes data. The DoD's unique challenge lies in the mismatch between the common data sources used in AI training, like Wikipedia, and the specialized data it uses, which includes everything from military manuals to tactics. There is no foundational model trained on that language. This mismatch can lead to errors and potential security risks. Creating proprietary models or generating new training data from scratch is neither practical nor cost-effective for the DoD. Instead, it needs to leverage fine-tuning techniques that take existing foundational models, whether open source or closed source, and adapt them.
Adding these specific tools and training techniques to the process is necessary before you can reliably or securely use generative AI for Defense.
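To make that concrete, here is a minimal sketch of what parameter-efficient fine-tuning of an open-source foundational model could look like, using the Hugging Face transformers, peft, and datasets libraries. The model name, corpus file, and hyperparameters are illustrative placeholders, not anything drawn from an actual DoD program.

```python
# Minimal sketch: adapting an existing open-source foundational model to
# specialized domain text with LoRA (parameter-efficient fine-tuning).
# All names and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # hypothetical open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which is far cheaper than building a proprietary model from scratch.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Placeholder domain corpus: one JSON object per line with a "text" field.
corpus = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=corpus.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter_out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("adapter_out")  # saves only the small adapter weights
```

Because only the adapter weights are trained and saved, the same base model can be reused across many domain adaptations, which is part of what makes this approach practical at DoD scale.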
Can you tell us more about some of the generative AI use cases within the DoD?
Generative AI can help organizations wrangle their data, generate operations orders, and synthesize a Commander's intent, as well as parse through open-source and cyber intelligence data to find the needles in the haystack.
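As one notional illustration of that needle-in-the-haystack task, the sketch below uses the sentence-transformers library to rank a handful of reports against an analyst's query by semantic similarity. The model name and report snippets are placeholders, not real data or any system described here.

```python
# Minimal sketch: semantic search to surface the few passages in a large
# pile of reports that are relevant to an analyst's question.
# Model name and report text are notional placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reports = [
    "Routine weekly logistics and supply summary for the garrison.",
    "Unusual pattern of small-vessel traffic observed near the strait.",
    "Personnel rotation schedule updated for the coming quarter.",
]
query = "anomalous maritime activity"

# Embed every report once, embed the query, then rank by cosine similarity.
doc_vecs = model.encode(reports, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]

for idx in scores.argsort(descending=True):
    print(f"{scores[idx]:.2f}  {reports[idx]}")
```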
Other use cases involve, obviously, pushing these capabilities out to the edge and looking at how to use generative AI to process data in Denied, Disrupted, Intermittent, and Limited (DDIL) communication environments. Most operational theaters are in areas without good cloud connectivity or high bandwidth. We need to be able to push capabilities out toward the edge, whether that's to the individual soldier or to the sensor.
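For a sense of what that can look like in practice, here is a minimal sketch of fully offline inference on modest local hardware, assuming the llama-cpp-python bindings and a quantized GGUF model file staged in advance; the file path and prompt are placeholders.

```python
# Minimal sketch: fully local inference on modest hardware, the kind of
# setup a DDIL environment demands. Assumes llama-cpp-python and a
# pre-staged quantized GGUF model file; the path is a placeholder.
from llama_cpp import Llama

# Everything below runs offline; the model weights were staged in advance.
llm = Llama(model_path="models/assistant-q4_k_m.gguf", n_ctx=2048, n_threads=4)

result = llm(
    "Summarize this patrol report in three bullet points:\n"
    "Patrol departed at 0600, observed no activity at checkpoints 1-3, "
    "noted a washed-out road near grid 42S, returned at 1400.",
    max_tokens=200,
)
print(result["choices"][0]["text"])
```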
In your role, how are you talking with DoD customers and mission partners about generative AI?
In most cases, we are raising awareness of generative AI, given its recent prominence. This includes educating customers on what it is, what it isn't, and what its limitations are.
With some customers, we are further along, cultivating the maturity that enables their journey to generative AI. For one customer, we are using generative AI models to create military training materials. That involves fine-tuning a model on their data and their instructional design methodologies.
We’ve also, in collaboration with customers, developed several minimum viable products (MVPs) in key areas like data summarization and model compression so that we can efficiently run models on regular CPUs rather than GPUs at the edge.
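As an illustrative sketch of the model-compression piece, the snippet below applies PyTorch post-training dynamic quantization, one common way to shrink a transformer so it runs acceptably on ordinary CPUs. The model here is a generic public stand-in, not any specific fielded system.

```python
# Minimal sketch: post-training dynamic quantization, one common model
# compression technique for CPU-only inference. The model is a generic
# public stand-in, not any specific fielded system.
import os

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased")
model.eval()

# Convert Linear layer weights to int8; activations are quantized on the
# fly at inference time. No GPU or retraining required.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="tmp.pt"):
    """Rough on-disk size of a model's weights, in megabytes."""
    torch.save(m.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

print(f"fp32: {size_mb(model):.0f} MB -> int8: {size_mb(quantized):.0f} MB")
```

Dynamic quantization trades a small amount of accuracy for a roughly 4x reduction in weight storage, which is often the difference between a model fitting on edge hardware or not.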
What’s ahead for generative AI within the defense space?
Generative AI serves as an enabler by reducing servicemember overhead and narrowing knowledge gaps, and it shifts critical decision-making capabilities left. This enablement will allow our forces, equipment, and sensors to fight faster, longer, and harder, with greater precision. But all of this value comes at a cost in terms of security and scalability. Responsible and trustworthy AI is imperative to mission success and wide-scale adoption.
That's why, today, GDIT is actively participating in federal-level AI working groups and is helping shape and identify policies that will support the broad and responsible use of generative AI to realize its full potential within the Department.