The FAA is probing how to use AI for national airspace safety
The Federal Aviation Administration continues to weigh how it can incorporate artificial intelligence into the national airspace system, the controlled and uncontrolled airspace that the agency is charged with overseeing.
Earlier this week, the FAA shared a market survey, titled “Analytics for Safety of NAS,” seeking information on “existing capabilities for advanced analytics using modern Artificial Intelligence (AI) capabilities to improve aviation safety.” According to the posting, the survey is meant to help the agency improve its safety information systems and incorporate analytics from commercially available tools.
“The FAA envisions a new safety analytics system that will vastly expand and accelerate insights from current and additional sources of data and provide a comprehensive understanding of causal factors of safety events to help predict high-risk operations and environments,” the agency said. “The end state will be built on commercially available analytics tools that are widely used by a substantial number of companies and organizations to make similar improvements to the safety of operations or to reduce mistakes in operations.”
The FAA — which did not respond to a request for comment by publication time — is also interested in potential data sensitivity and data variety challenges.
The posting comes as the agency has slowly explored artificial intelligence applications, disclosing several use cases as part of the Department of Transportation’s AI use case inventory.
Previous FedScoop reporting revealed myriad hurdles the FAA faces in trying to incorporate the technology, even as the Biden administration encourages an all-of-government effort to modernize systems using AI.
“Because aviation is a safety critical industry and domain, in general, stakeholders involved in this industry are slower to adapt AI models and tools for decision-making and prediction tasks,” Syed A.M. Shihab, an assistant professor of aeronautics and engineering at Kent State University, told FedScoop in April. “It’s all good when the AI model is performing well, but all it takes is one missed prediction or one inaccurate classification, concerning the use cases, to compromise safety of flight operations.”