New AI Risk Management Guidelines

Sep 29, 2021 | Data & Cybersecurity

Government agencies and regulators in the U.S. and globally are intensifying their pursuit of new standards and frameworks designed to rein in risks related to the growing use of artificial intelligence (AI). As these varied efforts proliferate, it’s worth highlighting a common thread that will resonate with third party risk management (TPRM) experts: trust is crucial. 

“Given this significant degree of unpredictability, the AI user must ultimately decide whether or not to trust the AI,” notes a March draft report, “Trust and Artificial Intelligence,” issued by the National Institute of Standards and Technology (NIST). “The dynamic between AI user and AI system is a relationship, a partnership where user trust is an essential part.” 

Substitute the term “outsourcer” for “AI user” and “third party” for “AI system,” and that final sentence could double as a TPRM mission statement.  

Shared Assessments’ Standardized Information Gathering (SIG) Questionnaire Tools serve as the “trust” component for outsourcers who choose to use industry-vetted questions to obtain succinct, scoped initial assessment information on a service provider’s controls. The SIG is part of a Shared Assessments Toolkit that is “foundational in the area of third party risk,” according to Shared Assessments Vice President Ron Bradley. “2020 has been particularly challenging for those navigating vendor risk, and third party risk managers rely on tools, such as the SIG and the SCA to gather, assess, and verify controls with ease and efficiency.”

Third party risk managers would like to trust standard-setters and rule-makers to develop efficacious guidelines and regulations for managing AI-related risks. Doing so will be challenging, given the dynamic nature of a technology designed to continually learn.  

“Both IoT and AI are examples of the immutable fact that technology always overtakes the law,” notes Shared Assessments Senior Advisor Bob Jones. “AI is likely at risk of violating the one law unsusceptible to repeal — the law of unintended consequences. Both issues certainly deserve commentary:  IoT by information security experts, and AI by ethicists.” Jones also notes that “politicians and lobbyists from every conceivable constituency aren’t likely to reach a desirable outcome” in developing AI regulations without input from business leaders, TPRM professionals, and other experts. 

NIST has sought that type of input, and the agency will continue to do so as it moves forward in finalizing its Artificial Intelligence Risk Management Framework (AI RMF). Regardless of whether and how they plan to shape standard-setting activities, TPRM experts should keep tabs on at least three AI regulatory trends: 

 

1. NIST is working on the first draft of its AI Risk Management Framework

NIST is currently reviewing responses to a formal request for information (RFI) regarding the agency’s drafting of its AI RMF. (Here’s a landing page for the framework and related NIST-AI information.) So far, NIST has asked for input on:  

  • The greatest challenges in improving the management of AI-related risks; 
  • How organizations currently define and manage characteristics of AI trustworthiness; and 
  • The extent to which AI risks are incorporated into organizations’ overarching risk management, such as the management of risks related to cybersecurity, privacy, and safety. 

“For AI to reach its full potential as a benefit to society, it must be a trustworthy technology,” notes Elham Tabassi, NIST’s federal AI standards coordinator and a member of the National AI Research Resource Task Force. “While it may be impossible to eliminate the risks inherent in AI, we are developing this guidance framework through a consensus-driven, collaborative process that we hope will encourage its wide adoption, thereby minimizing these risks.” 

 

2. Other global governments are scrutinizing AI risks

The European Union, Germany, Canada, New Zealand, and other jurisdictions, along with some U.S. cities, already have some form of AI risk and impact assessment or more formal rules in place. (Two years ago, for example, San Francisco banned its police department and other city agencies from using facial recognition technology.) More are sure to follow.

Research published by UC Berkeley’s Center for Long-Term Cybersecurity finds that a handful of risk mitigation measures recur across AI risk management frameworks, including human oversight, external review and engagement, documentation, testing and mitigation of bias, alerting those affected by an AI system of its use, and regular monitoring and evaluation.

Wired reports that EU lawmakers are considering AI regulations that could influence other global regulators the way GDPR has on data privacy: “The regulation calls for the creation of a public registry of high-risk forms of AI in use in a database managed by the European Commission. Examples of AI deemed high risk included in the document include AI used for education, employment, or as safety components for utilities like electricity, gas, or water… The EU report also encourages allowing businesses and researchers to experiment in areas called ‘sandboxes,’ designed to make sure the legal framework is ‘innovation-friendly, future-proof, and resilient to disruption.’” 

 

3. Trust is big – and complicated

The NIST draft paper that examines trust’s role in AI use was written by Brian Stanton and Theodore Jensen. The co-authors present several terms that seem destined to crop up in future third party assessments. For example, “user trust potential” (UTP) describes “each user’s unique predisposition to trust AI.” And “perceived system trustworthiness” (PST) is defined by Stanton and Jensen as “the user’s contextual perceptions of an AI system’s characteristics that are relevant to trust.”  While two users may have a fairly similar PST, they may differ in their individual UTPs.  
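Neither the draft nor this article proposes a formula for these terms, but a toy sketch can make the distinction concrete. In the hypothetical Python below, the function name, the 0-to-1 scales, and the weighting are illustrative assumptions rather than anything NIST prescribes: two users who perceive the same AI system similarly (comparable PST) can still arrive at different levels of trust because their predispositions (UTP) differ.

```python
# Illustrative sketch only: NIST's draft does not define a trust formula.
# The scales and weighting below are assumptions chosen for demonstration.

def user_trust(utp: float, pst: float, weight_utp: float = 0.4) -> float:
    """Combine a user's predisposition to trust AI (UTP) with their perceived
    system trustworthiness (PST), both expressed here on a 0-1 scale."""
    return weight_utp * utp + (1 - weight_utp) * pst

# Two users perceive the same AI system the same way (PST = 0.8) ...
skeptical_user = user_trust(utp=0.2, pst=0.8)
trusting_user = user_trust(utp=0.9, pst=0.8)

# ... yet land at different trust levels (0.56 vs. 0.84) because
# their individual predispositions differ.
print(round(skeptical_user, 2), round(trusting_user, 2))
```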

 

Conclusion

AI regulations will require continuous monitoring – in two ways. First, the development of these guidelines and regulations will affect third party risk management activities, so they bear watching. Second, the guidelines themselves will likely change over time as various AI applications evolve. The UC Berkeley paper offers recommendations for the development of risk and impact assessments related to AI. One piece of guidance is that “periodic risk and impact reassessments should be required to ensure that continuous learning AI systems meet the standards required after they have undergone notable changes.”
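As a rough sketch of how that recommendation might translate into a TPRM workflow (the review cadence, drift threshold, and field names below are hypothetical, not drawn from the UC Berkeley paper), a team could flag a continuously learning AI system for reassessment whenever a periodic review window has elapsed or the system’s behavior has shifted notably since its last assessment:

```python
from datetime import datetime, timedelta

# Hypothetical values for illustration; real cadences and "notable change"
# thresholds would come from an organization's own risk appetite.
REASSESSMENT_INTERVAL = timedelta(days=90)
DRIFT_THRESHOLD = 0.10

def needs_reassessment(last_assessed: datetime, behavior_drift: float) -> bool:
    """Flag an AI system for a new risk and impact assessment when the
    periodic review window has elapsed or its measured behavior has
    drifted notably since the last assessment."""
    overdue = datetime.utcnow() - last_assessed > REASSESSMENT_INTERVAL
    drifted = behavior_drift > DRIFT_THRESHOLD
    return overdue or drifted

# Example: a model retrained weekly whose output distribution has shifted 12%
# since its last assessment in June would be flagged for review.
if needs_reassessment(datetime(2021, 6, 1), behavior_drift=0.12):
    print("Schedule an AI risk and impact reassessment")
```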

The AI risk management era has begun, and it will evolve rapidly as humans and machines intensify their learning activities. 

Eric Krell

Eric is a writer based in Austin, Texas. He has authored hundreds of articles on enterprise risk management (ERM), governance, risk and compliance (GRC), treasury, finance and accounting, sales and marketing, cybersecurity and talent management for Consulting, HR Magazine, Treasury & Risk, Direct Marketing News and other business publications. Eric’s lifestyle writing has appeared on National Public Radio and in Rolling Stone, Cooking Light, Men’s Fitness, and other consumer outlets.

