DOJ and EEOC launch new initiative to combat AI discrimination during recruitment
Federal contractors providing government departments with HR services will be held accountable for computer-based tools that discriminate against potential employees with disabilities under a new joint initiative launched by the Department of Justice and the Equal Employment Opportunity Commission.
In new guidance issued Thursday, the EEOC and DOJ warned that software, algorithms and artificial intelligence used to pre-screen candidates may in some cases discriminate against applicants with disabilities and violate federal civil rights law.
The guidance applies to all private sector companies, including federal contractors. Public entities at the federal, state and local level must also follow the guidance.
It comes after EEOC Chair Charlotte Burrows launched an agency-wide program in 2021 to ensure that emerging technologies used in employment decisions comply with federal civil rights law.
The latest initiative focuses on aptitude assessments and other types of early job interview-stage screening, where algorithmic decision-making tools may be used to automatically weed out candidates.
Speaking on a call with reporters, Burrows gave the example of a candidate pre-screening algorithm that might test for characteristics like “optimism.”
“Workers who might have a disability like depression, who might answer a question that is not typical, get screened out,” she said.
Burrows added that the EEOC and DOJ are prepared to take enforcement action to ensure the guidance is followed.
Through the initiative, DOJ and EEOC are seeking to enforce the Americans with Disabilities Act, a federal civil rights law. Title I of the act prohibits employers, employment agencies, labor organizations and joint labor-management committees with 15 or more employees from discriminating on the basis of disability.
The guidance comes as federal agencies across the government work to address concerns over the potential impact of AI bias.
Last month, the Department of Commerce appointed 27 experts to a National AI Advisory Committee to advise the White House on artificial intelligence issues. The committee consists of leaders from across academia, industry, nonprofits and civil society, who will make recommendations on the use of the technology.
In March, NIST published new guidance on AI bias, as it works to create a management standard for the technology.
In that guidance document, NIST concluded that more research is needed into the human and societal sources of bias encoded into the technology, and said it would release a draft AI risk management framework for public comment.