
Safe & Secure AI: 4 Cyber Best Practices

July 17th, 2024


Following the October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the April 2024 OMB Memo Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, agencies and the private sector alike are working to define what “safe and secure AI” means to them and, just as important, how to operationalize it within their organizations.

While there are many facets to safe and secure AI – encompassing everything from the development lifecycle to responsible and ethical use, avoiding bias, and building a responsible AI culture – the focus of this piece is on protecting AI models, software, and applications from cyber threats.

As organizations implement AI, they must ensure it is trustworthy in order to reduce risk to the organization. Trustworthy AI has many characteristics, including validity, reliability, security and resiliency, accountability, transparency, explainability, and ensured privacy.

In addition, it is important to understand the context in which the AI solution is used to ensure that any AI risk is managed appropriately. Against this backdrop, it’s essential for agencies to actively manage their AI risks and understand the responsibility they have to develop trustworthy AI systems – with an emphasis on managing risk that might impact safety and rights. To do so, we use frameworks such as the NIST AI Risk Management Framework (RMF) to baseline and manage AI risk activities.

In our experience, there are at least four best practices that should be top of mind for agencies looking to adopt safe and secure AI practices, and to defend against complex and evolving cyber threats. Based on the NIST AI RMF, they are:

Govern: Build Trust into Your AI

Develop a culture of risk management that bakes security into every aspect of trustworthy AI use. Think about the data being used or generated and the systems it interacts with. How is it being secured? Have you performed appropriate vulnerability testing? Do you have redundant sensors that drive safety in your operations? Satisfactorily answering each of these questions will build trust in your AI models and development process and put you on stronger footing throughout the AI lifecycle.

Map: Allow the Data Being Used and Generated to Inform Your Data Management Policy

Understand the AI's context and the recognized risks associated with the various components of the system. Data management and governance policies should account for both the data you are using to build your models and the data generated as a result. Examine risk tolerance, trustworthiness, and loss prevention separately for each. Additionally, different AI use cases may warrant different safety and security controls. Take all of this into consideration and adjust accordingly, as sketched below.
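
As a minimal sketch of the kind of policy artifact this implies, the example below shows a hypothetical data catalog in Python that tags both training inputs and model-generated outputs with sensitivity, risk tolerance, and required controls. Every dataset name, field, and control value here is an illustrative assumption, not an agency standard.

```python
# Hypothetical data governance catalog covering both the data used to build
# a model and the data the model generates. All names and values are
# illustrative assumptions.
DATA_CATALOG = {
    "incident_reports_2024": {          # data used to train the model
        "role": "training_input",
        "sensitivity": "CUI",
        "risk_tolerance": "low",
        "controls": ["encryption_at_rest", "access_logging", "PII_redaction"],
    },
    "triage_model_outputs": {           # data generated by the model
        "role": "generated_output",
        "sensitivity": "internal",
        "risk_tolerance": "moderate",
        "controls": ["human_review_before_release", "retention_90_days"],
    },
}

def required_controls(dataset: str) -> list[str]:
    """Look up the security controls a dataset must satisfy before use."""
    return DATA_CATALOG[dataset]["controls"]

print(required_controls("triage_model_outputs"))
```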

Measure: Measure and Monitor for Drift or Atypical Behaviors

Ensure risks are frequently and continuously assessed, analyzed, tracked, and remediated. Beyond traditional security testing, agencies must be sure they are monitoring for data drift as well as outlier behavior – whether it stems from model performance or an internal or external cyber threat. Only by carefully monitoring both can agencies and their end users reliably and safely leverage AI to its fullest extent.
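
A minimal sketch of what input-drift monitoring can look like for tabular data: a two-sample Kolmogorov–Smirnov test from SciPy compares a baseline sample captured at deployment against a recent production window. The feature names, threshold, and window sizes are illustrative assumptions.

```python
# Drift monitoring sketch: flag features whose live distribution has shifted
# away from the reference distribution. Names and thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray,
                 feature_names: list[str], p_threshold: float = 0.01) -> list[str]:
    """Return the names of features that drifted, per a two-sample KS test."""
    drifted = []
    for i, name in enumerate(feature_names):
        result = ks_2samp(reference[:, i], live[:, i])
        if result.pvalue < p_threshold:  # distributions differ beyond tolerance
            drifted.append(name)
    return drifted

# Example: simulate an upward shift in one feature and flag it.
rng = np.random.default_rng(0)
reference = rng.normal(0, 1, size=(5000, 3))   # baseline captured at deployment
live = rng.normal(0, 1, size=(1000, 3))        # recent production window
live[:, 2] += 0.5                              # hypothetical "dwell_time" drifts
print(detect_drift(reference, live, ["latency", "packet_size", "dwell_time"]))
```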

Manage: Test & Validate in a Variety of Scenarios

Agencies should be sure to test their AI models and applications under both normal and abnormal conditions. This includes vulnerability testing as well as other types of testing that can uncover anomalies in data or performance, indicating a problem before it becomes a crisis. Based on the scenarios and risks to the organization, these tests should be prioritized and their findings remediated.
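
The sketch below illustrates scenario-based validation on a toy tabular classifier: a baseline accuracy check on in-distribution data, a degraded-input scenario with added noise, and a simple guardrail check for out-of-range inputs. The model, thresholds, and feature ranges are illustrative assumptions, not a prescribed test suite.

```python
# Scenario-based validation sketch on synthetic data; thresholds and ranges
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(0, 1, size=(2000, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Normal scenario: accuracy on held-out, in-distribution data.
nominal_acc = model.score(X_test, y_test)

# Abnormal scenario: noisy inputs simulating degraded upstream data.
X_noisy = X_test + rng.normal(0, 1.0, size=X_test.shape)
degraded_acc = model.score(X_noisy, y_test)

# Abnormal scenario: out-of-range inputs that should be rejected, not scored.
X_extreme = np.full((10, 4), 1e6)
out_of_range = np.any(np.abs(X_extreme) > 10, axis=1)  # simple guardrail check

assert nominal_acc > 0.9, "Model fails its baseline requirement"
assert degraded_acc > 0.6, "Performance collapses under noisy inputs"
assert out_of_range.all(), "Guardrail missed inputs outside the valid range"
print(f"nominal={nominal_acc:.2f}, degraded={degraded_acc:.2f}")
```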

Across the federal government, many agencies and their mission partners are working collaboratively to develop, use, and defend their AI, and to plan and build a culture that supports safe and secure AI. More agencies will follow suit. As the AI workforce expands to support the growth of AI technology, it's critical for agencies to ensure proper, safe, and secure AI practices like these are not only in place, but embraced.