Reps. Buck and Lieu: AI regulation must reduce risk without sacrificing innovation
Two leading congressional AI proponents, Rep. Ted Lieu, a California Democrat, and Rep. Ken Buck, a Colorado Republican, are working to boost the federal government’s ability to foster AI innovation through increased funding and competition while also reducing major risks associated with the technology.
Last week, each lawmaker shared with FedScoop his own vision for how Congress and the federal government should approach AI in the coming months, with Lieu criticizing parts of the European Union’s proposed AI Act and Buck taking a shot at the White House’s AI Bill of Rights blueprint.
Buck and Lieu recently worked together to introduce a bill that would create a blue-ribbon commission on AI to develop a comprehensive framework for regulating the emerging technology, and earlier this year the pair introduced a bipartisan bill to prevent AI from making nuclear launch decisions.
The bicameral National AI Commission Act would create a 20-member commission to explore AI regulation, including how regulation responsibility is distributed across agencies, the capacity of agencies to address challenges relating to regulation, and alignment among agencies in their enforcement actions.
The AI Commission bill is one of several potential solutions for regulating the technology proposed by lawmakers, including Senate Majority Leader Chuck Schumer, who recently introduced a plan to develop comprehensive legislation in Congress to regulate and advance the development of AI in the U.S.
Buck said he would like to see “experts studying AI from trusted groups like the Bull Moose project and other think tanks, including American Compass,” to be a part of the AI commission.
Buck and Lieu are both strongly focused on ensuring Congress and the federal government allow AI companies and their tools to keep innovating, so that the U.S. stays ahead of adversaries like China, while making certain any harms caused by the technology are understood and mitigated.
With respect to increasing and supporting AI innovation in the U.S., Lieu said he is currently pushing for more funding within the congressional appropriations process for AI safety, research and innovation that the federal government would disburse to qualified entities and institutions.
“I would like to see more funding from the government to research centers that create AI and to have different grants available for people who want to work on AI safety and AI risks and AI innovation,” said Lieu, who is a member of the House Artificial Intelligence Caucus and one of three members of Congress with a computer science degree.
Buck, on the other hand, highlighted that one of the keys to encouraging AI innovation is the government ensuring that “we don’t have a single controlling entity, that we have dispersed AI competition,” in order to “make sure that we don’t have a Google in the AI space. I don’t mean Google specifically, but I mean, I want to make sure we have five or six major generative AI competitors in the space,” he said.
For the past two years, Buck was the top Republican on the powerful House antitrust subcommittee and has played a key role in forging a bipartisan agreement in Congress that would rein in Big Tech companies such as Google, Amazon, Facebook, and Apple for anticompetitive activities.
Buck also said he’s not in favor of the key regulatory approach backed by Sam Altman, CEO of ChatGPT maker OpenAI, which calls for the creation of a new federal agency to license and regulate large AI models. Altman floated that proposal, along with other legislative ideas, during congressional testimony in May.
“I’m not in favor of one agency with one commission, because it’s too easy to be captured by an outside group. So I think dispersing that oversight within the government is important,” Buck told FedScoop during an interview in his congressional office on Capitol Hill.
Tech giant Google has also pushed the federal government to divide up oversight of AI tools across agencies rather than creating a single regulator focused on the technology, in contrast with rivals like Microsoft and OpenAI.
Kent Walker, Google’s president of global affairs, told the Washington Post in June that he was in favor of a “hub-and-spoke model” of federal regulation that he argued is better suited to deal with how AI is affecting the U.S. economy than the “one-size-fits-all approach” of creating a single agency devoted to the issue.
When asked which AI regulatory framework he supports, Buck said the main frameworks currently being debated in Washington, including the National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework, the White House’s AI Bill of Rights blueprint and the EU’s proposed AI Act, all have “salvageable items.”
However, Buck added that the White House’s AI Bill of Rights “has some woke items that won’t find support across partisan lines,” indicating Republicans will push back against parts of the blueprint, which consists of five key principles for the regulation of AI technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
On the other hand, Lieu, a Democrat, is strongly in favor of the White House’s AI blueprint which is intended to address concerns that unfettered use of AI in certain scenarios may cause discrimination against minority groups and further systemic inequality.
“The biggest area of AI use with the government [of concern] would be AI that has some sort of societal harm, such as discrimination against certain groups. Facial recognition technology that is less accurate for people with darker skin, I think we have to put some guardrails on that,” Lieu told FedScoop during a phone interview last week.
“I am concerned with any AI model that could lead to systematic discrimination against a certain group of people, whether that’s in facial recognition or loan approval,” Lieu said.
Lieu added that the federal government should also focus on regulating or curtailing AI that could be used to hack or launch cyberattacks against institutions and companies, and on how to mitigate such dangerous activity.
In a paper examining Codex, the code-writing model from ChatGPT maker OpenAI that powers GitHub’s Copilot assistant, OpenAI researchers observed that the AI model “can produce vulnerable or misaligned code” and could be “misused to aid cybercrime.” The researchers added that while “future code generation models may be able to be trained to produce more secure code than the average developer,” getting there “is far from certain.”
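To make the researchers’ concern concrete, here is a hypothetical sketch (not drawn from the OpenAI paper) of the classic kind of flaw a code-generation model can emit: a SQL query built by string interpolation, shown next to the parameterized form a careful reviewer would expect. The table name and data are invented for illustration.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Injectable: the username is spliced directly into the SQL text,
    # so attacker-controlled input becomes part of the query logic.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver binds the value separately, so input stays data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A crafted input turns the vulnerable query into "return every row".
payload = "x' OR '1'='1"
print(find_user_vulnerable(conn, payload))  # leaks both rows
print(find_user_safe(conn, payload))        # returns no rows
```

Both functions look plausible at a glance, which is exactly why model-generated code of the first kind can slip past an average developer.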
Lieu also said that AI “can be very good at spreading disinformation and microtargeting people with misinformation,” which needs to be addressed, and highlighted that AI will cause “there to be disruption in the labor force. And we need to think about how we’re going to mitigate that kind of disruption.”
Alongside the White House’s AI blueprint, Lieu said he was strongly in favor of the voluntary NIST AI Risk Management Framework, which is focused on helping the private sector and eventually federal agencies build responsible AI systems centered on four key principles: govern, map, measure and manage.
However, Lieu took issue with parts of the EU’s AI Act, which is currently being debated and, unlike the White House AI blueprint and the NIST AI framework, would be mandatory by law for all covered entities to follow.
“My understanding is that the EU AI Act has provisions in it that for example, would prevent or dissuade AI from analyzing human emotions. I think that’s just really stupid,” Lieu told FedScoop during the interview.
“Because one of the ways humans communicate is through emotions. And I don’t understand why you would want to prevent AI from getting the full communications of the individual if the interviewer chooses to communicate that to the AI,” Lieu added.