
Employers are Taking the “Human” Out of Human Resources. Here's Why That May Be a Bad Idea.

Many businesses are rushing to integrate AI programs into their corporate processes. HR professionals should be aware of the legal, ethical, and cybersecurity risks of doing so.

Over the past several years, the use of artificial intelligence (AI), large language models (LLMs) like ChatGPT, and other automated decision-making tools (ADTs) has been a hot topic in the workplace. Ideally, these tools can be implemented to streamline workplace processes and procedures, saving employers time and resources and creating a more efficient workforce. However, these developing technologies also carry pitfalls that may pose serious risks and challenges to employers, particularly in the context of human resources.

Defining Artificial Intelligence

Because AI encompasses a diverse array of applications, there is no singular definition for what constitutes AI technology. Congress has defined AI technology to mean “machine-based systems that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The National Institute of Standards and Technology (NIST) has described AI technology as “software and/or hardware that can learn to solve complex problems, make predictions, or undertake tasks that require human-like sensing (i.e., vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action.”

In its simplest form, AI seeks to combine computer-based systems with organized collections of data to solve problems and achieve goals under various conditions. In a perfect world, AI is intended to empower machines and computers to mimic cognitive functions that traditionally require human thinking, and ultimately produce a “pure” output that represents rational decision-making, free of biases and other external influences.

But the more advanced AI becomes, the less it seems we truly understand its abilities.

What are the potential pitfalls of using AI in HR?

When deployed in the context of human resources, we have seen AI technology used for recruitment and talent acquisition activities such as analyzing resumes, screening candidates, identifying top talent, and administering pre-employment “job fit” testing. AI has also been implemented as a performance management tool used to evaluate employee productivity metrics, project outcomes, and peer reviews. In addition, AI technology has been used to build predictive analytics frameworks by analyzing data from employee assessments and trainings to formulate suggestions about career pathways, advancement to new positions, and programs for developing additional skills. In some cases, employers have even relied upon AI to generate company policies and other written materials that are distributed to employees.

Despite the range of possibilities with AI technologies, concerns persist about how these systems gather, organize, and store data, some of which may create significant challenges and liability issues under existing employment laws. In light of this, federal agencies are increasingly focused on regulating the integration of these technologies into workplace processes. A Field Assistance Bulletin published by the Department of Labor (DOL) in early 2024 highlighted the potential risks associated with employer use of AI, specifically in relation to the Fair Labor Standards Act, the Family and Medical Leave Act, and the Employee Polygraph Protection Act. Additional DOL guidance in 2024 outlined key principles and best practices for responsibly developing and implementing AI in the workplace, focused on ensuring transparency, maintaining data privacy, and avoiding discrimination. State legislatures have also begun taking an interest in regulating the use of AI, imposing comprehensive compliance obligations for employers who use this technology. With these guidelines in place, employers must be proactive in aligning their AI use with new and existing legal requirements to avoid violations.

AI Datasets & Related Ethical Concerns

Data is the foundation of any AI system, as data is the basis upon which algorithms make decisions and predictions. In general, any output produced by an AI system will directly reflect the underlying data that is input into that system. Of course, the concern here is “garbage in, garbage out.” Without quality data to learn from, AI models can (and often do) struggle to discern meaningful patterns or produce reliable outputs. From an HR compliance standpoint, the underrepresentation of diverse demographic groups within AI training datasets is a major cause for concern.

Real-world examples of flawed datasets leading to erroneous AI outputs abound. In 2018, Amazon implemented and then quickly discontinued an AI recruiting tool due to gender bias perpetuated by the system’s output. The program was trained on resumes the company had received over the preceding 10 years, most of which, unbeknownst to Amazon at the outset, came from male candidates. As a result, the AI system “learned” to favor male candidates over female ones, creating a gender disparity in its hiring recommendations. Similarly, systems like ChatGPT are trained on internet data and, as a result, may reflect and preserve societal biases present in the websites, online books, and other sampled content from which they draw information.
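To make the “garbage in, garbage out” mechanism concrete, here is a minimal Python sketch. The data, the “proxy” feature, and the screening rule are entirely hypothetical and invented for illustration; the point is only to show how a rule “learned” from skewed historical decisions reproduces that skew when applied to new applicants.

```python
# Hypothetical illustration only: how a naive screening rule trained on
# historically skewed hiring decisions reproduces the skew on new applicants.
import random
from collections import Counter

random.seed(42)

def make_applicant():
    group = random.choice(["A", "B"])                            # two demographic groups
    proxy = random.random() < (0.7 if group == "A" else 0.2)     # feature correlated with group
    hired = proxy and random.random() < 0.8                      # past decisions tracked the proxy
    return {"group": group, "proxy": proxy, "hired": hired}

# Simulate ten years of historical applications and outcomes.
history = [make_applicant() for _ in range(10_000)]

# "Train" a naive rule: learn the historical hire rate for each proxy value.
hire_rate = {
    flag: sum(a["hired"] for a in history if a["proxy"] == flag)
    / max(1, sum(1 for a in history if a["proxy"] == flag))
    for flag in (True, False)
}

def screen(applicant):
    # Recommend applicants whose proxy value historically led to hires.
    return hire_rate[applicant["proxy"]] > 0.5

# Apply the learned rule to a fresh applicant pool and compare selection rates.
pool = [make_applicant() for _ in range(2_000)]
selected = Counter(a["group"] for a in pool if screen(a))
total = Counter(a["group"] for a in pool)
for g in ("A", "B"):
    print(f"Group {g}: selected {selected[g]}/{total[g]} ({selected[g] / total[g]:.0%})")
```

Nothing in the rule mentions group membership, yet the selection rates diverge sharply, because the proxy feature carries the historical bias forward.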

AI technologies based on poor datasets risk running afoul of employment and civil rights laws.

When relied upon to make certain employment-related decisions, such as those involving hiring, promotion, and termination, AI technologies built on poor datasets risk running afoul of employment and civil rights laws. If an ADT or other AI technology generates an adverse impact on a particular protected group, the Equal Employment Opportunity Commission (EEOC) has advised that employers may face liability under Title VII, even where the employer was not directly responsible for developing the program or where the program was administered by an outside agency.
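For illustration, the EEOC’s Uniform Guidelines on Employee Selection Procedures describe a “four-fifths” rule of thumb: a group’s selection rate below 80% of the highest group’s rate is generally regarded as evidence of potential adverse impact. The sketch below, using purely hypothetical numbers, shows the kind of simple check an employer might run on an automated tool’s outcomes before relying on them; it is not legal advice and is no substitute for a full analysis with counsel.

```python
# Hypothetical adverse-impact screening using the EEOC "four-fifths" rule of thumb.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group that the tool selected."""
    return selected / applicants if applicants else 0.0

def four_fifths_check(rates: dict) -> dict:
    """Return each group's selection rate as a ratio of the highest group's rate.
    Ratios below 0.80 are a conventional red flag for adverse impact."""
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical numbers for illustration only.
outcomes = {
    "group_a": selection_rate(selected=48, applicants=100),  # 48%
    "group_b": selection_rate(selected=30, applicants=100),  # 30%
}
for group, ratio in four_fifths_check(outcomes).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: selection-rate ratio {ratio:.2f} -> {flag}")
```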

In addition to concerns over systemic bias, AI users face another data-related problem: AI-manufactured misinformation. These so-called AI “hallucinations” arise when a system generates inaccurate or misleading information and presents it to users in a plausible-sounding manner.

Some of the starkest illustrations of the negative professional consequences that can flow from AI hallucinations come from the legal profession. Since 2022, there have been multiple reports of attorneys caught using LLMs to write legal briefs in which the AI system simply invented legal precedent and court citations that do not exist. Citing fabricated cases in a court filing is among the most damaging things a lawyer can do, and these incidents have generally resulted in disciplinary measures for misconduct and significant reputational damage. Employers who use AI technologies to understand or interpret complex employment-related laws without sufficient oversight risk consuming the same kind of misinformation and exposing themselves to significant liability.

Security & Confidentiality Concerns for Corporate AI Use

In addition to data input and output concerns, AI technology also raises several confidentiality and security concerns, given the potential for misuse of, or unauthorized access to, sensitive information.

One significant challenge is understanding how AI protects information provided to the system. Many AI technologies “learn” from interactions with human users and store all input and output for potential future use. For example, OpenAI has indicated that ChatGPT logs users’ conversations, including any personal data an individual may purposely or inadvertently disclose, and that stored conversations may be used to train and refine future versions of the model. Information one user supplies can therefore inform outputs generated for other users down the line.

While employees may be disciplined for violating confidentiality requirements, AI technologies cannot.

To the extent that users are inputting sensitive or classified information into these programs, it is likely that this information will “live forever” within the AI system, where it may be used to produce outputs or generate solutions for other organizations, or even expose protected trade secrets to competitors. While an employee may be disciplined for violating confidentiality requirements, an AI system cannot be.

Additionally, cyber-hacking and spear phishing have presented employers with digital security concerns for years, and the advancement of AI technology makes these concerns even more pressing as hackers develop techniques for manipulating entire AI systems. There have been recent reports of cyber-criminals harnessing AI to bypass security measures, expose vulnerabilities in companies’ security systems, or engage in unauthorized surveillance. And because many current AI technologies are web-based and sit outside an organization’s own network protections, users may unwittingly expose conversation logs, user data, or other sensitive information to bad actors.

Takeaways for Employers Who Use AI

Recent advances in the capabilities of AI technologies have given us a glimpse of an incredibly promising future. Generative AI has come a long way in a short time, and it is quite possible that employers will eventually be able to offload large amounts of labor onto these programs without needing to worry about security, confidentiality, bias, and the other areas of concern outlined above.

However, the current state of AI is far from perfect. The uncertainties surrounding the reliability of AI datasets and decision-making processes merit a cautious and thoughtful approach from even the most tech-forward businesses. Without adequate human oversight, integrating such programs into corporate processes could expose employers to a high degree of legal risk.

While the recent growth of AI technologies has been impressive and may entice employers to reduce human labor and modernize workplace procedures, we have learned that the potential benefits do not always outweigh the pitfalls. With continued uncertainty about the reliability of AI datasets and decision-making rationale, as well as the security and confidentiality concerns with AI systems, employers should proceed with caution when electing to replace human thinking with AI technology, specifically in the human resources arena.

Ultimately, human behavior remains unique and individualized, and it is unlikely, at least in the near future, that AI technologies will be able to mimic these human variances in a way that allows them to be used in the workplace without exposing employers to some level of risk. HR professionals should seek to implement appropriate checks on any automated systems, striking a balance between the benefits of workplace automation and the good judgment of human decision-making, while maintaining compliance with employment laws.

For more insights into the topics that matter most to employers, check out OneDigital's 2025 Workforce Insights Guide.
