Navigating Workplace AI When Federal, State Policies Clash
This article was originally published in Law360 on January 7, 2026, and is republished here with permission.
Lawmakers at the state, federal and local levels are advancing legislation aimed at understanding or regulating the use of artificial intelligence in employment-related decisions.
Employers should know about two of the most recent efforts joining the fray — a bipartisan U.S. Senate bill that would require certain companies to regularly report on personnel decisions affected by AI, and a bipartisan bill in the U.S. House of Representatives that would require human oversight and disclosure of AI tools that are used in employment decisions.
These Senate and House proposals — along with state legislation — potentially clash with the White House’s recent executive order calling for a minimally burdensome national framework for AI regulation.
With the flurry of competing priorities and various takes on regulation, employers must stay on their toes to remain compliant.
At the very least, this means that employers should be able to understand and explain their use of AI — a task that is perhaps easier said than done — and must comply with any applicable state laws while awaiting further direction from the federal government.
The Senate Bill: Quarterly Updates on AI’s Impact on the Workforce
On Nov. 5, Sens. Josh Hawley, R-Mo., and Mark Warner, D-Va., introduced the AI-Related Job Impacts Clarity Act, citing their belief that more transparency is needed about how AI affects the workforce.
To that end, the Senate bill would require covered entities to file quarterly reports to the U.S. Department of Labor identifying how many employees were laid off because their job functions were replaced or automated by AI, how many employees were hired as a result of the employer’s adoption of AI, and how many vacancies the employer chose not to fill for reasons that were substantially related to AI-driven replacement or automation.
Employers would also have to report the number of employees who were retrained due to AI, along with any additional information about AI-related job impacts that the secretary of labor may require.
The Senate bill would require such disclosures from publicly traded companies and the federal government, as well as from some as-yet-unknown universe of companies that are not publicly traded.
For the latter camp, the Senate bill requires regulations, which must be issued within 180 days of passage, to outline which private companies are covered.
The Senate bill calls for the regulations to include categories of companies that are not publicly traded that have a “significant workforce, estimated enterprise value, or employment impact on a regional or national basis,” considering things like employer size, annual revenue and industry classification.
The secretary of labor would be responsible for preparing quarterly and annual reports summarizing the data that is reported by covered entities. The bill does not currently have co-sponsors other than Hawley and Warner, and has been referred to the Senate Committee on Health, Education, Labor and Pensions.
It is unclear how quickly the bill may move through the Senate and then to the House. Even less clear is whether President Donald Trump would approve the Senate bill, given his calls for less state regulation of AI.
What It Would Mean for Employers
While the Senate bill may be far from passage, employers should consider whether they would be positioned to meet its reporting obligations, including the following:
Who is responsible for determining whether AI is substantially affecting personnel decisions, thus potentially requiring reporting under the act? Consider designating a team or point person to be responsible for these assessments in order to embed consistency.
How will an employer determine that AI is substantially affecting personnel decisions? Often, multiple nuanced forces drive layoffs or hiring, and employers will have to assess AI’s potential role among them. While the determination may be clear where a chatbot replaces a human worker, it may be unclear where AI tools are incorporated across multiple employees’ workflows, driving efficiencies over time.
Do the employer’s internal systems allow for tracking whether and how AI is affecting personnel decisions? Solid documentation could be critical to showing compliance.
In sum, employers may wish to proactively assess their decision-making and data hygiene to address whether they can identify and document positions being eliminated or created because of AI, as well as positions requiring additional AI training.
Of course, the complexity of compiling this information for even a single employer underscores the questions stakeholders are likely to raise about the value of data aggregated from a vast number of employers, each applying its own interpretation of the Senate bill's criteria.
The House Bill: Human Oversight and Disclosure of AI Tools in Employment
On the heels of the Senate bill’s push to gather information about employers’ use of AI, a small bipartisan group in Congress introduced a bill that would more directly and sweepingly regulate such use — primarily by mandating human oversight and disclosures to employees, among other requirements.
On Dec. 3, Reps. Suzanne Bonamici, D-Ore., and Chris Deluzio, D-Pa., along with Del. James Moylan, R-Guam, introduced the No Robot Bosses Act of 2025. If passed, the act would apply to employers with 11 or more employees. According to Bonamici, the bill would require, among other things, that employers:
Audit their AI tools for issues like discrimination and bias, both before using them and periodically;
Provide independent, human oversight of information, decisions, predictions or recommendations that are generated by AI tools that the employer uses; and
Disclose to employees the employer’s use of AI tools in employment-related decisions, like hiring, firing, compensation, promotion, scheduling, benefits and others.[1]
The No Robot Bosses Act of 2025 is the latest iteration of similar proposals that were introduced without bipartisan support in 2023 and 2024. But the fate of the 2025 version is also uncertain, particularly given Trump’s critique of arguably similar state legislation.
State Laws Regarding AI
Indeed, multiple jurisdictions — including California, Colorado, Illinois, New York City and Texas — have already passed their own legislation concerning AI’s impact on employment.[2] Broadly speaking, these laws restrict how and when companies may use AI in making employment decisions.
For example, California and Texas prohibit using AI in a way that discriminates against applicants.[3]
Colorado, Illinois and New York City mandate that covered employers notify employees and applicants if they are using AI to make hiring and other employment decisions.[4]
New York City’s AI law goes further, prohibiting employers from using an AI tool if it has not been subject to a specific type of bias audit within one year.[5]
Employers that are subject to these state laws and use AI in their employment processes should carefully evaluate their processes to ensure compliance.
For example, the AI laws in Illinois and Colorado, which take effect in January and June, respectively, will require employers to provide certain notices to employees or applicants about their use of AI. Employers must understand when such disclosures are required and be prepared to provide them.
As another example, employers that are subject to the New York City law must ensure that they are meeting the bias audit requirements for using AI tools in employment.
Further, employers seeking remote applicants or employing remote workers should be aware that even if they are not based in a jurisdiction with an AI law, recruiting or hiring in such a jurisdiction may subject them to that jurisdiction's regulations.
The White House’s Executive Order
While federal, state and local lawmakers are busy proposing regulations for AI or regulating employers’ use of it, the White House just issued an executive order criticizing “excessive State regulation” of AI as stifling innovation, and instead calling for “a minimally burdensome national policy framework for AI.”
Specifically, on Dec. 11, the White House issued Executive Order No. 14365, which is aimed at discouraging state-level AI regulations.[6]
The order directs the attorney general to create a dedicated AI litigation task force. Among other things, this unit would be responsible for challenging state-specific AI legislation that is allegedly at odds with national policy objectives.
The order also directs the secretary of commerce to publish a review of state AI laws within 90 days, and to restrict funding to states with AI laws that are deemed objectionable.
The White House’s preference for a minimally burdensome national AI policy appears to conflict with the goals of federal and state lawmakers on both sides of the aisle, many of whom favor more robust regulation of AI in employment.
For example, the order explicitly critiques Colorado’s ban on algorithmic discrimination, but the No Robot Bosses Act similarly requires AI tools that are used in employment to be validated with respect to the “lack of any potential discriminatory impact” of the tools.
We anticipate that the order will be subject to legal challenges, just as the AI litigation task force that the order contemplates may challenge state AI laws.
In the meantime, though, employers that are subject to state AI laws are bound to comply, absent a court order stating otherwise.
The Takeaway
The Senate bill detailed above reflects bipartisan interest in understanding how AI is affecting workers. Likewise, the No Robot Bosses Act suggests some in Congress want to place guardrails on employers’ use of AI — as certain states have already.
The passage and final contours of these proposals remain unknown and, particularly given Trump's pro-AI posture, a lot could change. But employers would nevertheless benefit now from proactively and methodically monitoring their own use of AI and staying apprised of updates to federal and state laws.
[2] See, e.g., Cal. Code Regs. tit. 2, §§ 11008-11097 (2025); Colo. Rev. Stat. § 6-1-1701 (2024); 775 Ill. Comp. Stat. 5/2-101 to 5/2-110 (2026); N.Y.C. Admin. Code §§ 20-870 to 20-874; Texas Responsible Artificial Intelligence Governance Act, 89th Leg., R.S., ch. 1174, 2025 Tex. Gen. Laws.
[3] See, e.g., Cal. Code Regs. tit. 2, §§ 11008-11097 (2025); Texas Responsible Artificial Intelligence Governance Act, 89th Leg., R.S., ch. 1174, 2025 Tex. Gen. Laws.
[4] See, e.g., Colo. Rev. Stat. § 6-1-1701 (2024); 775 Ill. Comp. Stat. 5/2-101 to 5/2-110 (2026); N.Y.C. Admin. Code §§ 20-870 to 20-874.
[5] N.Y.C. Admin. Code § 20-871.
[6] https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/