
MITRE releases recommendations to incoming administration on AI governance

The nonprofit operator of federally funded R&D centers for agencies compiled a list of nine recommendations for the next presidential administration to apply to AI governance and use.

Securing federal funding for artificial intelligence research and establishing an executive task force to monitor the development and use of AI should be prioritized in the first year of the next presidential term, a new policy document from MITRE recommended.  

MITRE, the federally funded organization that operates research and development centers on behalf of agencies, said in the document — which is intended to advise the incoming administration on AI security, innovation, trust and ethics — that the establishment of a regulatory framework for the technology’s safe use will “reinforce the United States’ international leadership in AI” and also “unlock its transformative potential to address a wide range of critical challenges.” 

“The diverse needs and requirements of agencies based on their size, organization, budget, mission and internal AI talent present an opportunity to promote flexibility and adaptability in AI governance,” the document states. “An effective approach to AI regulation should allow for a tailored and effective implementation of AI strategies and policies across agencies.”

Among MITRE's nine recommendations for the next presidential term is executive support for an AI Science and Technology Intelligence apparatus that would monitor adversarial tradecraft from open sources and red-team public and commercial AI infrastructure and operations.


Additionally, the organization recommended issuing an executive order that “mandates system auditability” along with developing standards for audit trails and advocating for legislation that aims to increase transparency in AI-related applications. MITRE argued that the ability to audit systems is “vital” for tracking AI misuse and maintaining public trust in AI. 

“This would include requiring AI developers to disclose what data was used to train their systems as well as the foundation models on which their systems were built,” the document states. 

MITRE also suggested that the executive branch improve collaboration and communication between legislators and those within federal agencies who implement AI strategies, ensuring that executive-level policies are "effectively translated into action at the agency level, taking into account the unique needs and contexts of each agency."

The organization pointed to expanding executive interagency committees, or establishing a new committee that includes the office of the presidency along with agency and industry representatives, to "help ensure effective communication and collaboration." That would involve consistent meetings, shared resources and a common platform for the exchange of ideas and best practices.

Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.
