Microsoft makes Azure OpenAI service available in government cloud platform
Federal agencies that use Microsoft’s Azure Government service now have access to its Azure OpenAI Service through the cloud platform, permitting use of the tech giant’s AI tools in a more regulated environment.
Candice Ling, senior vice president of Microsoft’s federal government business, announced the launch in a Tuesday blog post, highlighting the data safety measures of the service and its potential uses for productivity and innovation.
“Azure OpenAI in Azure Government enables agencies with stringent security and compliance requirements to utilize this industry-leading generative AI service at the unclassified level,” Ling’s post said.
The announcement comes as the federal government is increasingly experimenting with and adopting AI technologies. Agencies have reported hundreds of use cases for the technology while also crafting their own internal policies and guidance for use of generative AI tools.
Ling also announced that the company is submitting Azure OpenAI for federal cloud services authorizations that, if approved, would allow higher-impact data to be used with the system.
Microsoft is submitting the service for authorization for FedRAMP’s “high” baseline, which is reserved for cloud systems using high-impact, sensitive, unclassified data like health care, financial or law enforcement information. It will also submit the system for authorization for the Department of Defense’s Impact Levels 4 and 5, Ling said. Those data classification levels for DOD include controlled unclassified information, non-controlled unclassified information and non-public, unclassified national security system data.
In an interview with FedScoop, a Microsoft executive said the availability of the technology in Azure Government is going to bring government customers capabilities expected from GPT-4, the fourth version of OpenAI’s large language models, in “a more highly regulated environment.”
The executive said the company received feedback from government customers who were experimenting with smaller models and open-source models but wanted to be able to use the technology on more sensitive workloads.
Over 100 agencies have already deployed the technology in the commercial environment, the executive said, “and the majority of those customers are asking for the same capability in Azure Government.”
Ling underscored data security measures for Azure OpenAI in the blog, calling it “a fundamental aspect” of the service.
“This includes ensuring that prompts and proprietary data aren’t used to further train the model,” Ling wrote. “While Azure OpenAI Service can use in-house data as allowed by the agency, inputs and outcomes are not made available to Microsoft or others using the service.”
That means embeddings and training data aren’t available to other customers, nor are they used to train other models or to improve the company’s or third-party services.
According to Ling’s blog, the technology is already powering a tool under development at the National Institutes of Health’s National Library of Medicine. In collaboration with the National Cancer Institute, the agency is working on a large language model-based tool, called TrialGPT, that will match patients with clinical trials.