Algorithmic Accountability: How Lawmakers are Responding to the Rise of AI in the Workplace
Artificial intelligence (AI) is no longer a futuristic concept; it’s a present-day reality impacting how employers recruit, evaluate, and manage talent.
From resume screening tools to automated video interviews and performance analytics algorithms, AI tools are becoming increasingly embedded in how companies make employment decisions. But as these technologies advance, so do concerns about fairness, transparency, and accountability.
In response, lawmakers across the United States are considering regulations aimed at aligning the use of AI in the workplace with existing employee rights and protections.
A Federal Framework in Flux
At the federal level, the regulatory framework remains unsettled. While there is no comprehensive federal law specifically governing AI in employment, several initiatives have laid preliminary groundwork.
For example, the National AI Initiative Act of 2020 established a national strategy for AI research and standards development. In 2022, the Department of Justice issued guidance warning employers about the risks of disability discrimination when using AI tools, and Congress passed the AI Training Act, which mandates education for federal acquisition personnel on the capabilities and risks of AI. While these three initiatives remain intact, their practical impact is uncertain.
With the 2025 change in administration, the government revoked or deprioritized much of the previous federal guidance on AI. President Trump’s Executive Order 14179, signed in January 2025, signaled a shift toward deregulation of AI technology, emphasizing instead the United States’ leadership in AI innovation rather than enforcement.
As a result, although some federal measures technically remain in place, their active enforcement is unclear, as is whether they will continue to influence employer behavior in a meaningful way.
States Take the Lead
Across the country, states are stepping in to fill the regulatory void left by the federal government, each taking different approaches to managing the risks and responsibilities associated with the use of AI in employment.
California
California has emerged as a national leader in AI regulation. In March 2025, the California Civil Rights Council approved final regulations clarifying that the use of AI and automated decision-making tools in employment must comply with the state’s anti-discrimination laws.
These rules define key terms such as “automated-decision systems” and “agents,” and make clear that employers are responsible for ensuring that these tools do not result in unlawful discrimination. The regulations also reaffirm that existing protections around criminal history inquiries, medical questions, and reasonable accommodations apply even when decisions are made by AI technologies.
Set to take effect on October 1, 2025, these rules follow the state’s earlier legislation restricting the use of digital replicas of workers’ voices or likenesses without consent.
Colorado
Colorado has taken a broader step with the passage of SB 24-205, set to take effect on February 1, 2026. This first-of-its-kind law imposes a statutory duty of care on employers and developers of AI tools, requiring them to use reasonable care to prevent algorithmic discrimination.
Colorado’s law requires employers to develop risk management policies, conduct annual impact assessments, and provide detailed notices to individuals affected by AI-driven decisions.
Additionally, employers must provide opportunities for individuals to correct personal data and appeal adverse decisions that may result from using AI. Most notably, the law treats violations as a form of tort liability, potentially opening the door to significant legal exposure.
New Jersey
In January 2025, New Jersey’s Office of the Attorney General and Division on Civil Rights released joint guidance clarifying that employers can be held liable for discriminatory outcomes caused by AI tools, even if the employer did not intend to discriminate or does not fully understand how the tools operate.
The guidance emphasizes that employers must proactively test for bias and ensure reasonable accommodations are not overlooked.
Texas
Texas has taken a somewhat softer approach to regulation with the passage of the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which prohibits intentional discrimination by AI systems.
Notably, TRAIGA does not require private employers to disclose to employees or applicants that they use AI, or to explain how AI may be used to make or aid in employment-related decisions.
New York
New York State is preparing to regulate developers of very large-scale AI systems through the Responsible AI Safety and Education Act (RAISE Act). This legislation targets “frontier models” of AI, generally those trained using $100 million or more in compute costs.
If signed by New York’s Governor, the law would, among other things, require developers to implement safety protocols, retain detailed records, and submit to oversight by the state Attorney General and homeland security authorities.
Municipalities Take Things a Step Further
At the local level, cities are also asserting their authority over AI in the workplace.
New York City’s Local Law 144, which took effect in 2023, requires employers to conduct independent bias audits of automated employment decision tools and to publicly notify applicants and employees about their use.
Additionally, Portland, Oregon, banned the use of facial recognition technology in “any place of public accommodation.”
What Should Employers Do Now?
With AI regulation in employment accelerating, employers should take a proactive approach to implementing and using this type of technology in the workplace.
To stay compliant, employers should consider the following:
- audit AI tools for bias
- implement transparency policies
- provide clear notices to affected employees and applicants
- train personnel on the appropriate use of AI in the workplace
Employers must also stay informed about regulatory developments in their jurisdictions and adhere to any state or locally imposed restrictions or requirements on the use of AI technology.
For additional guidance and best practices for leveraging AI productively and compliantly in the workforce, tune in to OneDigital's on-demand session: AI Isn’t Coming—It’s Here. Employer Compliance Guidance in a New Era of Tech.