More federal agencies join in temporarily blocking or banning ChatGPT

Energy and the VA are the latest agencies to confirm they’ve banned generative AI tools, at least temporarily, on their systems.

The Department of Energy’s chief information officer has temporarily blocked access to ChatGPT for its employees, as well as other DOE customers managed under its Enterprise Information Technology Services organization, a spokesperson for the agency confirmed to FedScoop. 

Energy is one of the latest agencies to reveal such a move, which comes as federal officials wrestle with how to govern and use generative AI models.

DOE’s decision is supposed to set the foundation for future agency uses of generative artificial intelligence, Chad Smith, a spokesperson for the department, told FedScoop. The “block provided OCIO with the opportunity to remind all AI tool users of the existing DOE guidance and policies in place,” Smith said, though he added that the office is making exceptions based on certain business and mission needs.

Other agencies have also recently forbidden the use of generative AI tools on their systems. For example, the Department of Veterans Affairs confirmed to FedScoop this week that ChatGPT and “similar commercial generative AI services” were not available on the agency’s network. A VA spokesperson also noted the agency has had generative AI guidance that prohibits sensitive information from being input into unapproved systems since July 2023.


Similarly, the Social Security Administration issued a temporary block of the technology, and the Agriculture Department banned the use of ChatGPT and other third-party generative AI tools on government equipment. (The Agriculture Department determined that the risk of using ChatGPT was “high” and established a review board to study proposed uses of generative AI tools.)

In the meantime, DOE has enabled a generative AI testing sandbox, which should allow employees to evaluate the technology in a secure environment. The Office of the CIO has also forged agreements with two generative AI providers, Microsoft Azure and Google Cloud. And it’s developing a second version of its generative AI reference guide, which is expected to be released in the second quarter of this year.

“DOE is taking a risk-based approach to AI software by continually assessing the landscape. DOE has conditionally approved Google Cloud Platform that includes AI technology and pursuing additional AI technologies,” Smith said. “DOE is actively pursuing generative AI use cases to be developed in 2024 and building a pipeline for use cases that allows us to prioritize the most impactful concepts to DOE’s mission.” 

The national laboratories that fall under the DOE’s auspices also continue to pursue research projects focused on generative AI, including “trillion-parameter-scale large-language models” that could be developed on Frontier, which is currently the world’s fastest supercomputer.

Overall, agencies have taken varied approaches to generative AI. The U.S. Agency for International Development has discouraged employees from entering private data or information into public generative AI systems. Meanwhile, the Department of Homeland Security late last year conditionally approved several generative AI tools for use at the agency, including ChatGPT, Bing Chat, and DALL-E 2, and set up a process for employee training and for approving use cases.


Madison Alder contributed to this article.
