By Justin Donoho
Duane Morris Takeaway: Now available is a recent article in the Journal of Robotics, Artificial Intelligence & Law by Justin Donoho entitled “Five Human Best Practices to Mitigate the Risk of AI Hiring Tool Noncompliance with Antidiscrimination Statutes.” The article is available here and is a must-read for corporate counsel involved in the development or deployment of AI hiring tools.
While artificial intelligence (AI) hiring tools can improve the efficiency of human resources (HR) functions such as candidate sourcing, resume screening, interviewing, and background checks, AI has not replaced the need for humans to ensure that AI-assisted HR practices comply with a wide range of antidiscrimination laws. These include Title VII of the Civil Rights Act of 1964 (Title VII), the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), the sections of Colorado’s AI Act setting forth developers’ and deployers’ “duty to avoid algorithmic discrimination” (CAI), New York City’s law regulating the use of automated employment decision tools (NYC’s AI Law), the Illinois AI Video Interview Act (IAIVA), and the 2024 amendment to the Illinois Human Rights Act regulating the use of AI (IHRA). Drawing on the statutes themselves, EEOC regulations, and scholarly sources authored by EEOC personnel and leading data scientists, the article identifies human best practices to mitigate the risk that companies’ AI hiring tools will violate these statutes.
Implications For Corporations
AI hiring tools designed with these antidiscrimination statutes in mind are well positioned to comply with them. Moreover, by replacing some human decision-making with carefully designed algorithms, AI holds the potential to substantially reduce the kind of bias that has been unlawful in the United States since the civil rights movement of the mid-twentieth century. The article identifies human best practices to assist with such compliance and, relatedly, with this potential reduction of bias.