Homeland Security employees expressing ‘good interest’ in using public generative AI tools, agency official says
The Department of Homeland Security has seen “good interest” from staff looking to deploy generative AI within the agency, Eric Hysen, DHS’s chief information officer and chief artificial intelligence officer, said in an interview with FedScoop. That interest comes amid new guidance for employee use of commercial AI tools and the conditional approval of systems like ChatGPT at the agency.
The recent memo applies only to public tools and is meant to jumpstart a new approval process. There is “clear demand” for using tools like ChatGPT and Claude 2, a rival chatbot created by the tech firm Anthropic, within the agency, Hysen said. He also emphasized that the policy is separate from any potential plans to incorporate generative AI into the agency’s own IT systems.
Employees who want to use generative AI must first seek permission from a manager and then attend an agency generative AI training, currently held weekly. Hysen said the department is not specifically tracking how much employees are using each tool, though he suggested that summarizing events and producing visuals might be potential use cases for the technology.
“We are building strong relationships with these companies at every level, particularly as it pertains to these policies,” Hysen told FedScoop. “We have been working with them both to ensure that we have appropriate terms of service for government use of these tools and that we are leveraging best practices that they can share on how they’re seeing other large enterprises use their tools as well.”
When use of one of these tools reaches the threshold of becoming an agency “use case,” it would be added to the official DHS inventory, Hysen said.
Meanwhile, the agency is still finalizing separate plans for a generative AI pilot, which officials plan to announce in the coming weeks.
Federal agencies are quickly ramping up their generative AI efforts. Several, including NASA and the National Science Foundation, have released rules or policies covering at least some applications of the technology. Notably, the recent executive order on AI encourages agencies not to outright ban the use of these kinds of tools.