AI ‘Bill of Rights’ must be accompanied by NIST risk management framework, say experts

Bias and privacy risks associated with the technology can't be properly identified until NIST completes much-awaited impact assessments.
A display shows a vehicle and person recognition system for law enforcement during the NVIDIA GPU Technology Conference in Washington, DC, November 1, 2017. (Image credit: SAUL LOEB/AFP via Getty Images).

The AI ‘Bill of Rights’ Blueprint released Tuesday must be complemented by the National Institute of Standards and Technology’s risk management work to effectively protect citizens, according to experts.

While the White House document calls for the development and deployment of safe and effective AI systems that mitigate bias and privacy risks, those risks are identified through impact assessments NIST hasn’t finalized guidance on.

NIST is still finalizing its AI Risk Management Framework (AI RMF), expected in January 2023, which advises organizations to conduct impact assessments early and often in the development life cycle and to mitigate the risks they find. But multiple companies are often involved in an AI system’s development, which requires clarity on who’s responsible for the impact assessment and mitigation at each stage.

“From the enterprise software perspective that we represent, if there’s a weakness in that system at any point — and discrimination or unintended bias comes out as a result — that’s going to slow the uptake of what’s a really important technology,” Aaron Cooper, vice president of global policy at trade group The Software Alliance, told FedScoop.


The Biden administration recognizes the importance of ensuring trustworthy AI systems because rapid adoption of the technology is critical to its goal of becoming the global leader in the space, Cooper added.

NIST staff leading the AI RMF effort were present alongside Cabinet members for the White House’s blueprint announcement Tuesday, underscoring the “all-of-government” approach to the issue, Alex Givens, president and CEO of the Center for Democracy & Technology, told FedScoop. 

The blueprint also cites the AI RMF as an example of federal leadership, reflecting NIST’s focus on implementation rather than policy-making. As an advisory body, the White House Office of Science and Technology Policy has limited authority, which is why the AI Bill of Rights took the form of a blueprint.

“There’s a lot more the agencies can do,” Givens said. “So this is kind of the beginning of a process, an indication of interest and commitment by the White House to make this a cross-administration priority.”

That 12 agencies are mentioned in the White House fact sheet outlining early agency commitments — like the Department of Health and Human Services’ proposed rule prohibiting algorithmic discrimination by certain health programs and the Department of Labor’s increased enforcement of worker surveillance reporting — is a “clear signal” to industry of how serious the Biden administration is, she added.


Givens said she expects more interagency coordination and efforts addressing specific instances of algorithmic discrimination moving forward, and agencies need to regulate both industry’s and their own use of AI.

The blueprint did not incorporate The Software Alliance’s recommendation that the White House ask the agencies responsible for enforcing civil rights laws what updates to AI rules are needed across sectors.

“That would be helpful in assessing where there are gaps and then making recommendations to Congress or through rule changes to make sure those gaps are filled,” Cooper said.

Still, agencies like the Consumer Financial Protection Bureau are considering how they can better enforce laws already on the books — like one requiring creditors to clearly state why they’re rejecting an applicant — to prohibit the use of discriminatory algorithms.

The CFPB also administers a “core” law on consumer data, the Fair Credit Reporting Act, Director Rohit Chopra said during the White House blueprint announcement.


“There is an underworld of data on all of us that is making decisions in employment background screening, in tenant screening,” Chopra said. “And every single day people are essentially being falsely accused by an algorithm of having a criminal conviction or some sort of court filing because they happen to have a common last name.”

Regulating meaningful AI audits is “low-hanging fruit” for agencies, as is procurement reform, Givens said.

“That’s where the government can put these principles into action,” she said. “And show what it is to lead a thoughtful procurement process that will improve government processes but also set a model for private industry as well.”