Natasha Allen Discusses Latest Government Guidance on AI Risk
Foley & Lardner LLP Partner Natasha Allen is quoted in the Bloomberg Law article, “AI Risks Guide Sets Starting Point for Compliance, Regulation,” discussing the latest government guidance addressing artificial intelligence (AI) risks and its implications for compliance.
The AI Risk Management Framework, released on January 26 by the National Institute of Standards and Technology (NIST), offers companies a comprehensive approach to assessing and managing the risks involved in developing or implementing AI.
Allen said, “NIST’s focus on governance and management can help attorneys and clients understand how to put data they collect to use, and it underlines the need for identifying leaders who understand an AI technology enough to make appropriate decisions when a risk is identified internally.”
She pointed to one suggestion detailed in the framework’s playbook: mapping out all of the third-party software and data an AI system relies on. “This allows companies to identify risk – such as biased data or insecure software – and how the third parties are mitigating them,” she explained.
Allen added that she views the framework as a way for government figures to test the waters for future AI legislation by tasking NIST with developing a resource built on the insights of professionals who work with complex AI technology every day.