High performance computing (HPC) leverages the enormous power of pooled computing resources to tackle complex problems. For intelligence missions, HPC has the potential to be incredibly transformative because of its ability to handle massive amounts of complex data and turn it into actionable insights at speeds no human – or team of humans – could ever match. Yet for all of the promise and potential of HPC, a real challenge that accompanies it is its GPU requirements. I’ll explain.
Complex, Large and Dense Datasets
In the intelligence community, data sets are large and dense, and they’re changing as collection methods evolve and become more sophisticated. A geospatial analyst used to analyze mostly electro-optical data, or photos. But today’s sensors capture ever-clearer pictures, and with them, more data. Tools like synthetic aperture radar and others are capturing data all along the electromagnetic spectrum, and that data, again, is dense and full of metadata.
A satellite image of a red truck, for example, can contain all sorts of additional information – such as the make and model of the truck, its location, the time and date the image was captured, its elevation, and more. In some cases, analysts are using custom tools to mine it and ascertain what they need. This rich, attribute-heavy data is important to collect and even more important to assess carefully. The richer the data, the more there is to find – and the more there is for humans to miss.
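To make the idea concrete, here is a minimal sketch of what mining attribute-rich imagery metadata might look like. The field names and the detections structure are invented for illustration; real systems use standardized geospatial metadata formats and far richer schemas.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ImageMetadata:
    """Hypothetical metadata record attached to a single satellite image."""
    captured_at: datetime
    latitude: float
    longitude: float
    elevation_m: float                  # capture elevation, in meters (illustrative)
    detections: list[dict] = field(default_factory=list)  # tagged objects in frame

# A record for the red-truck example from the text (all values invented).
record = ImageMetadata(
    captured_at=datetime(2023, 5, 1, 14, 30, tzinfo=timezone.utc),
    latitude=38.6270,
    longitude=-90.1994,
    elevation_m=142.0,
    detections=[{"type": "truck", "color": "red", "make": "Ford", "model": "F-150"}],
)

# A simple "mining" query: find every image containing a red truck.
def find_red_trucks(records: list[ImageMetadata]) -> list[ImageMetadata]:
    return [
        r for r in records
        if any(d["type"] == "truck" and d["color"] == "red" for d in r.detections)
    ]

print(find_red_trucks([record]))
```

Even this toy query hints at the scale problem: run it across billions of records with dozens of attributes each, and the case for automated, accelerated processing makes itself.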
GPUs Deliver Required Processing Power
Graphics processing units (GPUs) embedded within HPC devices or networks can process data faster than ever and can handle petabytes of data with ease. They allow analysts to leverage both existing and emerging capabilities in automated processing, artificial intelligence and machine learning. This flexibility and adaptability are especially important in an environment in which the intelligence analyst workforce is aging and in which technology is evolving faster than ever. Case in point: even tradecraft used in the War on Terror is outdated today.
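As a rough illustration of why GPUs matter for this kind of workload, the sketch below offloads a large 2-D FFT – a common step in processing radar and other spectrum data – to a GPU using the CuPy library. This is a generic example, not any specific agency pipeline, and it assumes a CUDA-capable GPU with CuPy installed.

```python
import numpy as np
import cupy as cp  # GPU-backed NumPy work-alike; assumes CUDA hardware

# A large synthetic raster standing in for one tile of sensor data.
tile = np.random.random((8192, 8192)).astype(np.float32)

# CPU path: NumPy runs the 2-D FFT on the host.
cpu_spectrum = np.fft.fft2(tile)

# GPU path: copy the tile to device memory, run the same FFT there,
# then synchronize so the result is fully materialized before use.
gpu_spectrum = cp.fft.fft2(cp.asarray(tile))
cp.cuda.Stream.null.synchronize()

# The two paths agree to floating-point tolerance; at this scale the
# GPU path is typically much faster than the host computation.
err = np.linalg.norm(cpu_spectrum - cp.asnumpy(gpu_spectrum))
print(f"relative error: {err / np.linalg.norm(cpu_spectrum):.2e}")
```

Multiply one tile by the thousands collected per day, and the gap between host-only and GPU-accelerated processing becomes the difference between a backlog and an answer.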
The challenge is that the GPUs needed for HPC environments are expensive, and the onus is on agencies and mission partners to generate a clear return on the government’s investment in them. At the same time, we know our adversaries are gaining ground in the HPC field and in artificial intelligence, while we are working to catch up.
Recent policy changes, such as the congressional mandate to use AI and machine learning across America’s intelligence agencies, will help. But they’re only part of the solution. This is why mission partners like GDIT are working collaboratively with customers to help integrate and advance America’s HPC capabilities.
Collaborating to Demonstrate Value
Today, GDIT leads an ML Ops capability with automated computer vision workflows, which allows analysts to rapidly detect changes in activities, patterns or behavior. One application of GDIT’s HPC capabilities is our Convergence Project, which supports active operations and battlefield awareness for mission partners and allows them to make decisions in seconds or hours instead of weeks or months.
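To give a flavor of what one automated step in such a workflow might look like, the sketch below flags pixels that changed between two co-registered images of the same area. It is a deliberately simple stand-in, not the Convergence pipeline itself; production workflows would use trained computer vision models rather than a raw intensity threshold.

```python
import numpy as np

def detect_change(before: np.ndarray, after: np.ndarray,
                  threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels whose intensity changed by more
    than `threshold` between two co-registered images (values assumed
    normalized to [0, 1])."""
    diff = np.abs(after.astype(np.float32) - before.astype(np.float32))
    return diff > threshold

# Two synthetic 512x512 "collections" of the same area; the second has
# a bright new object standing in for changed activity on the ground.
rng = np.random.default_rng(seed=0)
before = rng.random((512, 512), dtype=np.float32) * 0.1
after = before.copy()
after[200:232, 300:332] += 0.8  # the "change" to be detected

mask = detect_change(before, after)
print(f"{mask.sum()} changed pixels")  # 1024: the 32x32 patch
```

Automating this kind of comparison across continuous collection streams is what turns raw imagery into the seconds-or-hours decision timelines described above.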
Projects like this clearly show the tangible returns of HPC investment. They showcase the opportunity space and position agencies to deploy new HPC technology and continue exploring it. Looking forward, GDIT plans to deploy a GPU farm at one of our technology centers in St. Louis that collaborators can tap into to generate the mission-based evidence they need to justify continued HPC investment within their respective agencies.
This kind of research and development space, and the connections it creates among those in the intelligence technology community who come together to innovate and envision the art of the possible, are incredibly important to accelerating the use and deployment of HPC systems. We look forward to future collaborations there and to serving as a convener of technology, talent and experimentation in the interest of advancing customer missions.