New Senate bill targets bias in decision-making algorithms
Democrats in Congress, wary that unexpected biases in algorithms could lead to discrimination, want to give federal regulators oversight of how companies use them.
The Algorithmic Accountability Act, introduced by Sens. Cory Booker, D-N.J., and Ron Wyden, D-Ore., would require companies that use algorithmically driven software systems to study them for bias and correct it if found. Under the terms of the bill, the Federal Trade Commission would ensure compliance. Rep. Yvette Clarke, D-N.Y., introduced companion legislation in the House.
“Computers are increasingly involved in the most important decisions affecting Americans’ lives — whether or not someone can buy a home, get a job or even go to jail,” Wyden said in a statement. “But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color.”
Algorithmic bias is a fraught topic in government, especially as agencies show growing interest in artificial intelligence and machine learning. Last summer, in a study conducted by the American Civil Liberties Union, researchers used Amazon’s facial recognition software Rekognition, which some law enforcement agencies use, to test photos of members of Congress against a database of arrest photos. In the study, 28 sitting members of Congress were falsely identified as individuals who had been arrested for a crime. Lawmakers were understandably perturbed.
Amazon has pushed back against the study, arguing that the ACLU used incorrect settings and misinterpreted the results.
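The "settings" at issue come down to a single API parameter. Rekognition's face-search call accepts a minimum-similarity threshold, and Amazon's response to the study pointed out that the ACLU ran its test at the service's default of 80 percent rather than the 99 percent the company recommends for law enforcement use. The sketch below, in Python with the boto3 SDK, shows how rerunning the same query at both thresholds changes the set of "matches"; the collection name and probe photo are hypothetical placeholders, not details from the study.

```python
# Sketch of the threshold dispute: the same Rekognition face search
# returns different match sets depending on FaceMatchThreshold.
# "arrest-photos" and the probe image are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("probe_photo.jpg", "rb") as f:  # hypothetical probe image
    probe_bytes = f.read()

# 80% is the service default the ACLU reportedly used;
# 99% is Amazon's recommendation for law enforcement use.
for threshold in (80, 99):
    response = rekognition.search_faces_by_image(
        CollectionId="arrest-photos",    # hypothetical mugshot collection
        Image={"Bytes": probe_bytes},
        FaceMatchThreshold=threshold,    # minimum similarity to count as a match
        MaxFaces=5,
    )
    matches = response["FaceMatches"]
    print(f"threshold={threshold}%: {len(matches)} match(es)")
    for m in matches:
        print(f"  similarity={m['Similarity']:.1f}%  face_id={m['Face']['FaceId']}")
```

At the lower threshold, borderline faces count as hits; at 99 percent, most of them would be filtered out, which is the substance of Amazon's objection.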
Others, including the White House Office of Science and Technology Policy, argue that overregulating AI systems would hinder innovation.
The FTC would have two years to develop regulations spelling out how companies would furnish “impact assessments” of their automated decision technology. Failure to comply would be treated as “an unfair or deceptive act or practice” by the FTC, as laid out in the Federal Trade Commission Act.
The legislation would apply to companies that make over $50 million per year or collect data on more than 1 million people. It would require companies to assess how well their systems protect the privacy of individuals’ information, but would not require those assessments to be made public.
The bill is endorsed by a number of tech and civil rights groups, including Data for Black Lives, the Center on Privacy and Technology at Georgetown Law and the National Hispanic Media Coalition.
“Algorithms shouldn’t have an exemption from our anti-discrimination laws,” Clarke said.