Building Responsibly: What Employers Need to Know About AI Compliance in the Workplace

Article Summary

As AI becomes a fixture in hiring, performance reviews, and workforce planning, lawmakers are moving quickly to ensure its use complies with existing employee protections. With the federal government favoring deregulation, states such as California, Colorado, and Illinois are leading the charge, making it critical for employers to audit their AI tools and stay current on evolving, jurisdiction-specific requirements.

Artificial intelligence (AI) is fundamentally reshaping virtually every industry, and the workplace is no exception.[1,2] Conversations with our HR Consulting teams have made it increasingly clear that what once seemed like a distant concept is now a daily reality in talent acquisition, performance reviews, and workforce planning. But as AI technologies continue to advance, so too does the responsibility to deploy them thoughtfully. In response, state and federal lawmakers across the United States are actively pursuing measures to ensure AI use in the workplace aligns with existing employee rights and protections.

The Federal Landscape: The Push for a National Framework 

At the federal level, the regulatory landscape for AI has shifted considerably under the current administration. Rather than building on prior federal guidance, much of which has been revoked or deprioritized, the Trump administration has moved decisively toward a deregulatory approach that prioritizes AI innovation over oversight.  

That shift began in January 2025, when President Trump signed Executive Order 14179, signaling a move away from the more restrictive AI guidance of prior years and emphasizing instead the United States’ leadership in AI development. That deregulatory posture was reinforced in December 2025, when President Trump signed Executive Order 14365, directing federal agencies to use lawsuits and funding cuts to challenge state-level AI regulations deemed inconsistent with federal policy.

Most recently, in March 2026, the Trump Administration released “A National Policy Framework for Artificial Intelligence” (the “Proposal”), a non-binding set of legislative recommendations urging Congress to adopt a unified federal approach to AI regulation. The Proposal flows from EO 14365 and addresses six priority areas, including safeguards for small businesses, preservation of intellectual property rights, and expansion of AI-related workforce opportunities. It is important to note, however, that Congressional action would be required before any of the Proposal’s recommendations become enforceable regulatory obligations, an outcome that remains uncertain. In the meantime, the divergence between federal deregulation and state-level compliance requirements means employers cannot rely on the federal framework alone.

States Take the Lead  

While the federal government works through its own transition, states have not waited. Across the country, legislators are advancing their own frameworks for responsible AI use in the workplace, each taking different approaches to balancing innovation with employee protections. 

California has emerged as a national leader in AI regulation. The California Civil Rights Department approved final regulations, effective October 1, 2025, clarifying how existing anti-discrimination laws apply to automated decision systems in employment. The rules confirm that it is unlawful to use AI and automated decision-making tools to make employment-related decisions that discriminate against applicants or employees, and they reaffirm that established criminal history and medical inquiry restrictions continue to apply even when decisions are made by or with the assistance of AI technology. Covered systems include tools that screen resumes, rank applicants, analyze facial expressions or voice, and target job ads. Employers remain responsible for compliance even when using third-party vendors or technology platforms.

California has also made clear that accountability for AI-driven outputs rests with the humans and organizations that develop, modify, or deploy the technology, meaning employers cannot disclaim responsibility by arguing that “AI acted on its own.” Reinforcing this principle, California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) also imposes new disclosure and internal reporting standards for large-scale AI developers and provides whistleblower protections for employees who report safety risks or violations to authorities.  

Looking ahead, the California Privacy Protection Agency (CPPA) has also finalized regulations under the California Consumer Privacy Act addressing the use of automated decision-making technology (ADMT) in employment decisions. Set to take effect on January 1, 2027, the CPPA’s regulations require covered businesses to provide pre-use notices with opt-out options and to give employees access to the logic behind ADMT decisions. The regulations also impose specific risk assessment and reporting obligations on employers who use ADMTs for “significant” employment decisions.

Colorado has taken a broader step with the passage of SB 24-205, which was originally set to take effect on February 1, 2026. Following significant lobbying efforts, however, the Colorado legislature agreed to postpone the law’s effective date to June 2026 to allow for further negotiations. Even so, Colorado’s AI law is the first of its kind, imposing a statutory duty of care on employers and developers of AI tools and requiring them to proactively prevent algorithmic discrimination.

Illinois has adopted a multi-pronged approach. The state’s AI Video Interview Act, in effect since 2020, requires employers to obtain consent and report demographic data when using AI to analyze video interviews. A 2024 law further restricts the use of digital replicas in employment contracts. Most recently, HB 3773, which took effect on January 1, 2026, modifies the Illinois Human Rights Act to prohibit the use of AI in employment decisions if it results in discrimination against protected classes.  

New Jersey launched its Civil Rights and Technology Initiative in 2025, through which the state’s Attorney General issued guidance clarifying that the New Jersey Law Against Discrimination applies to algorithmic bias. Employers using AI tools must proactively test for bias and ensure that reasonable accommodations are not overlooked, regardless of whether discriminatory outcomes were intentional.

Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which establishes a regulatory framework for AI systems in the state. While TRAIGA does not impose direct obligations on private employers, it prohibits the development or deployment of AI systems that intentionally discriminate against protected classes in violation of federal or state law.

New York’s Responsible AI Safety and Education Act (RAISE Act), set to take effect January 1, 2027, imposes safety and transparency requirements on developers of large-scale “frontier models” of AI, including obligations to publish safety protocols, issue transparency reports, and report critical safety incidents to a new oversight office within the state’s Department of Financial Services. While the RAISE Act shares structural similarities with California’s TFAIA, it establishes its own distinct standards, underscoring the growing complexity of the state-by-state AI compliance landscape.

Local Governments Add Another Layer  

At the local level, cities are also adding their own requirements to the growing framework of responsible AI use in employment. New York City’s Local Law 144, which took effect in 2023, requires employers to conduct independent bias audits of automated employment decision tools and provide transparent notice to applicants and employees about their use. Additionally, Portland, Oregon, has banned the use of facial recognition technology in “any place of public accommodation.” 
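
To make the audit requirement concrete, the sketch below illustrates the kind of selection-rate impact-ratio calculation a bias audit typically involves. It is a minimal Python example with made-up numbers: the group labels, outcomes, and the four-fifths (0.8) review threshold are all illustrative assumptions, not the methodology prescribed by any particular law.

    # Illustrative sketch of a selection-rate impact-ratio calculation of the
    # kind used in bias audits of automated hiring tools. All data, category
    # names, and the 0.8 threshold are hypothetical; a real audit must follow
    # the applicable rules and be performed by an independent auditor.

    # Hypothetical outcomes of an AI screening tool: (selected, assessed) per group.
    outcomes = {
        "Group A": (48, 120),
        "Group B": (30, 100),
        "Group C": (12, 60),
    }

    # Selection rate = candidates advanced / candidates assessed, per category.
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}

    # Impact ratio = each category's rate divided by the highest category's rate.
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / best
        # The four-fifths (0.8) rule of thumb is a common screening heuristic,
        # not a legal bright line; ratios below it warrant closer review.
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

Ratios below the illustrative 0.8 threshold do not establish a violation on their own, but they flag categories that warrant closer review and documentation.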

What Can Employers Do Now?  

Whatever form federal policy ultimately takes, the common thread running through nearly every state regulation issued to date is clear: using AI does not excuse noncompliance with existing anti-discrimination, employee privacy, or other applicable employee protection laws.

The starting point is to understand where and how AI is currently being used within your organization. This includes developing an understanding of where the tool draws data from, how it is trained to evaluate that data, and how it ultimately generates output. Armed with that understanding, employers can begin to assess whether those tools are operating in a manner that is consistent with regulatory requirements.  
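
As one way to organize that inventory, the sketch below shows a simple structured record per tool, written in Python for illustration. Every field name and example value is a hypothetical assumption; the right fields will depend on your tools and the jurisdictions in which you operate.

    from dataclasses import dataclass, field

    # Hypothetical inventory record for one AI tool used in employment decisions.
    # Field names are illustrative assumptions, not a prescribed schema.
    @dataclass
    class AIToolRecord:
        name: str                    # e.g., a resume-screening platform
        vendor: str                  # third-party provider, if any
        decision_stage: str          # sourcing, screening, promotion, etc.
        data_sources: list[str] = field(default_factory=list)  # where inputs come from
        training_notes: str = ""     # what is known about how the model was trained
        output_used_for: str = ""    # how the output feeds the final decision
        last_bias_review: str = ""   # date of the most recent audit, if any

    # Example entry: a vendor-supplied resume screener (all values hypothetical).
    resume_screener = AIToolRecord(
        name="Resume Screener",
        vendor="ExampleVendor",
        decision_stage="screening",
        data_sources=["applicant resumes", "job description keywords"],
        training_notes="vendor-trained; training data not disclosed",
        output_used_for="ranked shortlist reviewed by a recruiter",
        last_bias_review="2026-01-15",
    )
    print(resume_screener)

Even a lightweight record like this makes it easier to spot tools that lack documented data sources or a recent bias review.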

To build a responsible foundation, employers should consider auditing their AI tools for bias, implementing transparency practices regarding AI-driven decisions that affect applicants and employees, and training HR and management personnel on responsible AI use. As AI regulation continues to evolve, staying informed is one of the most valuable steps an employer can take. Employers are encouraged to monitor developments within their jurisdictions to ensure their AI practices remain aligned with applicable requirements. 

For a summary of the latest updates, visit the Federal Policy Hub for Employers and connect with one of our employee benefits consultants or HR experts to help ensure you remain compliant.


References

[1] https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf

[2] https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

Publish Date: Apr 24, 2026
