Considerations for Employing AI in the Workplace
What is workplace artificial intelligence or AI? In its simplest form, AI in the workplace is the use of technology or software to monitor employees’ work performance, gather data, problem-solve, or aid in decision-making. This technology can take many forms, but the most popular options include Radio Frequency Identification (“RFID”) badges, speed and location monitors on vehicles, keystroke or mouse activity trackers, work pace scanners, computer-based tests, and resume-scoring tools. An increasing number of employers are using AI to monitor employees and their work, with the U.S. Equal Employment Opportunity Commission (“EEOC”) estimating that, as of May 2023, as many as 83 percent of employers (and 99 percent of Fortune 500 companies) use some sort of AI.
On its face, AI can be an attractive tool that allows an employer to reduce costs, increase productivity, improve performance, and streamline decision-making, particularly for positions that are remote or “work from home.” An employer can use AI to decide who is interviewed for a position, hired for a job, promoted, paid more, or subject to discipline. While AI certainly can make the employer’s oversight of its employees easier, it also can collide with employee privacy concerns under a still-evolving legal landscape: how and when AI may be used is itself an open question, and the rules governing it continue to change.
There is little doubt that if AI is misused, it can run afoul of traditional state and/or federal workplace protections like Title VII, the ADA, or the National Labor Relations Act. Federally, the White House, EEOC, Occupational Safety and Health Administration (“OSHA”), Department of Justice, and Consumer Financial Protection Bureau, just to name a few, have released statements or guidelines about how AI should be safely and fairly implemented in the workplace, demonstrating that agencies are keeping a close eye on AI’s increased use. In fact, the EEOC rolled out new training for its employees in spring 2023 on how to spot illegal AI practices and, as recently as May 2023, issued guidance on how AI may violate federal law. The top concerns appear to be AI that perpetuates discrimination (i.e., AI that “learns” discrimination from a company’s past unconscious bias), decreases workplace safety (i.e., encourages employees to work faster than is safe), or infringes on collective bargaining rights (i.e., discourages workers from engaging in protected activity because their communications are monitored). States have similar concerns, with some, like California and New York, implementing new legislation aimed at increasing transparency and ensuring the fair implementation of AI in the workplace. Other laws, notably privacy and data security laws, also may come into play depending on the specific AI used.
With all this focus at both the state and federal levels, it is imperative that employers considering AI take a hard look, with the aid of counsel, at the software under consideration to ensure it is legally compliant. Employers must also ensure that any compliant software is used in an appropriate, non-discriminatory fashion.
Equally as important, however, is employee response to the use of AI in the workplace. A Pew Research Center poll found that a majority of workers (nearly 65 percent) do not understand AI or how it actually works, leaving them to fill in the blanks with fictional AI, ranging from Disney/Pixar’s WALL-E to the notorious Skynet of the Terminator franchise. This lack of understanding fuels concerns about inappropriate “watching” of the employee, misuse of the information collected by AI, and unauthorized disclosure of that information to co-workers or third parties.
Given these legal and employee considerations, when implementing (or considering implementing) AI, employers should take the following steps:
- Select the type of AI that is most appropriate for the employer’s business and least intrusive to employees;
- Provide transparency to employees about what AI is, how it works, and how the employer intends to use it;
- Ensure written policies exist to govern the use of AI including, but not limited to, how it can and will be used, what decisions it may be used for, and the steps the employer will take to ensure it is consistently applied;
- Update existing policies (anti-discrimination, remote/hybrid work) to account for use of AI;
- Protect employees’ privacy by restricting the use of, storage of, and access to the AI and the resulting data to appropriate persons within the organization;
- Train management on the proper use of AI, including how to respond if issues or concerns are raised regarding AI (an anti-discrimination, harassment, and retaliation refresher course also is a good idea);
- Engage with the AI vendor to make sure the software runs appropriately and that the employer understands what information the software retains and how that information is used.
If done properly, AI can be implemented in the workplace for the benefit of both the employer and employee. To do so, an employer must thoroughly educate itself on the AI options available, the applicable state and federal laws governing the use of AI, and its own workforce. As this is an ever-changing legal landscape, Spilman stands ready and available to aid any employer considering AI.