Ahead of the 2025 New Mexico legislative session, Rep. Christine Chandler (D) sponsored House Bill 60, the Artificial Intelligence Act, which seeks to mitigate algorithmic discrimination. According to the bill, algorithmic discrimination is any condition in which the use of an artificial intelligence system results in unlawful differential treatment of a person based on ethnicity, gender, disability or membership in another group legally protected from discrimination.
The legislative session begins Tuesday, Jan. 21, and ends March 22.
Examples of algorithmic discrimination have been found in algorithms like COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions, which is meant to calculate the odds that a defendant will reoffend. COMPAS produced false positives for Black defendants at nearly twice the rate it did for white defendants, according to a 2016 ProPublica investigation.
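To make that statistic concrete, here is a minimal sketch in Python of how an auditor might compare false positive rates across demographic groups. It is not drawn from ProPublica's actual methodology or data; every record, field name and group label below is hypothetical.

```python
# A minimal sketch of the false-positive-rate comparison behind findings like
# ProPublica's COMPAS analysis. All records, field names and group labels here
# are hypothetical; this is not ProPublica's methodology or data.

def false_positive_rate(records):
    """Share of people who did not reoffend but were still flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical audit records: each one notes the person's group, whether the
# tool flagged them as likely to reoffend, and whether they actually did.
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
]

# A large gap between groups is the kind of disparity that impact assessments
# like those required by HB 60 are meant to surface.
for group in sorted({r["group"] for r in records}):
    subset = [r for r in records if r["group"] == group]
    print(f"Group {group}: false positive rate = {false_positive_rate(subset):.2f}")
```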
In 2018, The Guardian reported that Amazon had scrapped an AI hiring tool that showed bias against women, penalizing resumes that included the word “women’s” and downgrading graduates of all-women’s colleges.
HB 60 requires that developers of “high-risk” artificial intelligence systems, defined by the bill as “any artificial intelligence system that when deployed makes or is a substantial factor in making a consequential decision,” document the system’s uses, disclose risks of algorithmic discrimination and implement risk management policies and impact assessments.
Developers must disclose to users of the AI software a summary of the data used to train the system, the steps the developer took to prevent algorithmic discrimination, possible biases within the system and whether studies of the system’s performance were peer-reviewed, according to the bill.
In the event of a “risk-incident,” the developer must provide the New Mexico Department of Justice with a comprehensive list of all known users of the software, as well as a copy of all documentation the developer provided to users, within 90 days. The bill defines a risk-incident as “an incident when a developer discovers or receives a credible report from a deployer that a high-risk artificial intelligence system offered or made available by the developer has caused or is reasonably likely to have caused algorithmic discrimination.”
Chandler told the Daily Lobo she wanted a bill that would mitigate the potential negative consequences of the use of algorithms embedded with bias, discrimination and inaccurate information.
The bill also requires that any corporate entity using customer-facing AI disclose to the customer that they are interacting with an AI system.
If a developer or deployer fails to comply with the legislation, a consumer may bring a civil action against them in district court, according to the bill.
“This is not what some people refer to as a message bill,” Chandler said. “It's a bill that impacts people, and as a result, I'm dedicated to trying to make sure it moves forward.”
Addison Fulton is the culture editor for the Daily Lobo. She can be reached at culture@dailylobo.com or on X @dailylobo