HospitalityLawyer.com®

3 AI Bills in Congress for Employers to Track: Proposed Laws Target Automated Systems, Workplace Surveillance, And More


Employers that use artificial intelligence – and developers that create AI systems – could be subject to extensive new laws under several bills introduced by federal legislators. While much of the existing legal landscape on AI centers on broad, overarching principles, Congress is now considering bills that home in on more specific issues like the workplace. We’ll outline the three bills that employers should care about most, covering issues ranging from overreliance on automated decision systems – or “robot bosses” – to workplace surveillance – or “spying bosses.”

Existing Federal AI Rules and Initiatives

Over the past several years, the federal government has ramped up its efforts to govern the development, design, and usage of AI. Here’s a sample of the laws, guidance, and standards already in place:

Proposed New Rules: Top 3 Bills Employers Should Know About

1. No Robot Bosses Act – S. 2419, introduced by Sen. Bob Casey (D-PA)

The aptly named “No Robot Bosses Act” would ban employers from relying exclusively on automated decision systems (ADS) to make “employment-related decisions” – which is broadly defined to include decisions at the recruiting stage through termination and everything in between (such as pay, scheduling, and benefits). The bill would protect not only employees and applicants but also independent contractors.

Beyond the ban on exclusive reliance, employers would be barred from even using ADS output to make employment-related decisions unless certain conditions are met, such as the employer independently supporting that output through meaningful human oversight. The bill would impose additional requirements on employers (for example, training employees on how to use ADS) and establish a Technology and Worker Protection Division within the Department of Labor.

2. Stop Spying Bosses Act – S. 262, introduced by Sen. Bob Casey (D-PA)

The “Stop Spying Bosses Act” targets (as its title suggests) invasive workplace surveillance. Technology that tracks employees – from their activity to even their location – is growing more common. This bill would require employers that engage in surveillance (such as employee tracking or monitoring) to disclose that surveillance to employees and applicants. The disclosure would have to be made publicly and in a timely manner, detailing the data being collected and how the surveillance affects the employer’s employment-related decisions.

The bill also would:

3. Algorithmic Accountability Act – S. 2892, introduced by Sen. Ron Wyden (D-OR)

A proposed “Algorithmic Accountability Act” seeks to regulate how companies use AI to make “critical decisions,” including those that significantly affect an individual’s employment. For example, companies would be required to:

The Federal Trade Commission (FTC) would be required to create regulations to carry out the purpose of the bill.

What Other Bills Are Under Consideration?

Here’s a sample of other types of bills that have been introduced:

Federal AI Framework

Proposed bipartisan legislation would provide a national framework for bolstering AI innovation while strengthening transparency and accountability standards for high-impact AI systems. Another comprehensive bill would establish guardrails for AI, create an independent oversight body, and hold AI companies liable – through entity enforcement and private rights of action – when their AI systems cause certain harms, such as privacy breaches or civil rights violations.

AI Labeling and Deepfake Transparency

One bill aims to protect consumers by requiring developers of AI systems to include clear labels and disclosures on AI-generated content and interactions with AI chatbots. Another bill would require similar disclosures from developers and require online platforms to label AI-generated content.

Labeling deepfakes is “especially urgent” this year, according to a press release from one of the bill’s cosponsors, because “at least 63 countries – nearly half the world’s population – are holding elections in 2024 where AI-generated content could be used to undermine the democratic process.”

AI Cybersecurity and Data Privacy Risks

Several bills target cybersecurity and data privacy issues, including a bill that would make it an unfair or deceptive practice (subject to FTC enforcement) for online platforms to fail to obtain consumer consent before using their personal data to train AI models.

What’s Next

We’ll have a much better view of the chances of any of these proposals becoming law by later this summer. We will be providing detailed updates about these and other key legislative activities on a regular basis throughout the year, so make sure you are subscribed to FP’s Insight System in order to stay up to speed.

Conclusion

We will continue to monitor developments as they unfold. Make sure you subscribe to Fisher Phillips’ Insight System to receive the most up-to-date information on AI and the workplace. Should you have any questions on the implications of these developments and how they may impact your operations, please do not hesitate to contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our Artificial Intelligence Practice Group, our Government Relations Team, or our Privacy and Cyber Practice Group.


About the authors:

Benjamin M. Ebbink is a partner in the Sacramento and Washington, D.C. offices and Co-Chair of the Government Relations Practice Group.

With over two decades of experience in the intersection between labor and employment law and public policy, he focuses on legislation and regulations enacted at the federal, state and local levels. Benjamin assists employers with navigating evolving legislative and regulatory landscapes in a variety of areas. He is also a member of the firm’s Artificial Intelligence Team, where he monitors the rapidly developing regulation of artificial intelligence at the federal and state level.

Braden Lawes is a Senior Government Affairs Analyst in the firm’s Washington, D.C. office and assists the Government Relations Practice Group in identifying and communicating federal and state policy changes that impact corporate counsel and their workplaces across the country.

Prior to joining Fisher Phillips, Braden was a Senior Government Affairs Associate for a government affairs firm in Washington, D.C., where he managed a diverse portfolio of non-profit and trade association clients and tracked federal legislation in the transportation, cybersecurity, and healthcare sectors. In this role, he also guided companies through the often-complex federal appropriations process, organized congressional briefings to educate legislative staff on various policy matters, and drafted letters on clients’ behalf to the Office of Management and Budget (OMB) and congressional committees.
