Should Employers be Taking the “Human” Out of Human Resources?
Many businesses are rushing to integrate AI programs into their corporate processes. HR professionals should be aware of the legal, ethical, and cybersecurity risks of doing so.
Over the past several years, the use of artificial intelligence (AI), large language models (LLMs) like ChatGPT, and other automated decision-making tools (ADTs) has been a hot topic in the workplace. Ideally, these tools can streamline workplace processes and procedures, save employers time and resources, and create a more efficient workforce. However, the limitations of these developing technologies may pose serious risks and challenges to employers, particularly in the context of human resources.
Defining Artificial Intelligence
With such a diverse range of potential applications, there is no single, commonly agreed-upon definition of “Artificial Intelligence.” Congress has defined AI technology to mean “machine-based systems that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The National Institute of Standards and Technology (NIST) has described AI technology as “software and/or hardware that can learn to solve complex problems, make predictions, or undertake tasks that require human-like sensing (i.e., vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action.”
In its simplest form, AI combines computer-based systems with an organized collection of data to solve problems and achieve goals under various conditions. In a perfect world, AI is intended to replace human thinking with purely rational decision-making that’s free from biases and other external influences. But the more advanced AI becomes, the less it seems we truly understand its abilities.
What are the dangers of using AI in HR?
In the human resources context, we have seen AI technology used for hiring and recruiting activities such as auto-scanning resumes, overseeing preliminary applicant interviews, and administering pre-employment “job fit” testing. AI has also been implemented as an employee monitoring tool, evaluating employee productivity by tracking keystrokes and other performance metrics. ADTs have likewise been applied to employee advancement and growth: by analyzing data from assessments and trainings, they can suggest career pathways, advancement to new positions, and programs for developing additional skills. In some cases, employers have even relied upon AI and LLMs to generate company policies and other written materials that are distributed to employees.
Despite the breadth of possibilities with AI, there remain many concerns about the ways these technologies gather, organize, and store data, concerns that may pose significant challenges and liability issues when the tools are deployed in the employment context.
Dataset & Ethical Concerns
Data is the foundation of any AI system. In general, any output or decision generated by an AI system will reflect the underlying data that is input into that system. Oftentimes, this data is incomplete, biased, or outdated. For example, AI training datasets often exclude information from marginalized and minority communities, who have historically had less access to technologies such as the internet, or had fewer opportunities to have their writings, songs, or culture digitized.
A real-world example arose several years ago when Amazon implemented its first recruiting software, which it discontinued shortly thereafter. Amazon’s program was built using resumes the company had received over the preceding 10 years, most of which came from male applicants. As a result, the recruiting software largely favored male candidates during the screening process, a biased outcome directly attributable to the underlying dataset. Similarly, systems like ChatGPT are trained on internet data and, as a result, may reflect and perpetuate societal biases that exist in websites, online books, and other sampled content.
Recently, AI users have been faced with another data-related concern, this time with respect to AI-manufactured misinformation. This concept has come to be known as AI “hallucination,” which arises when an AI system self-generates inaccurate or misleading information that is presented to consumers in a plausible-sounding manner.
For example, there have been multiple reports of attorneys being caught using LLMs to write legal briefs where the AI system simply invented legal cases and court citations that do not exist, resulting in potential discipline for legal misconduct. Similarly, employers who make use of AI technologies without sufficient oversight may be exposing themselves to significant legal liability.
When relied upon to make employment-related decisions, such as those involving hiring, promotion, and termination, AI technologies built on poor datasets risk running afoul of employment and civil rights laws. If an ADT or other AI technology generates an adverse impact on a particular protected group, the Equal Employment Opportunity Commission (EEOC) has advised that employers may face liability under Title VII, even where the employer did not develop the program itself or where the program was administered by an outside agency.
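To make the adverse-impact concept concrete, the short Python sketch below applies the EEOC’s long-standing “four-fifths” rule of thumb, which compares each group’s selection rate against the most-favored group’s rate, to a set of hypothetical screening results. The group names and counts are invented for illustration, and the rule is only a rough screening indicator, not a legal determination.

```python
from collections import namedtuple

# Hypothetical outcomes from an automated resume-screening tool.
# Group labels and counts are invented for illustration only.
GroupResult = namedtuple("GroupResult", ["applicants", "selected"])

results = {
    "group_a": GroupResult(applicants=200, selected=90),
    "group_b": GroupResult(applicants=180, selected=45),
}

# Selection rate = selected / applicants for each group.
rates = {group: r.selected / r.applicants for group, r in results.items()}

# EEOC "four-fifths" rule of thumb: a selection rate below 80% of the
# highest group's rate may indicate adverse impact worth investigating.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

A check like this is only a starting point; any flagged disparity would still need review by counsel and, ideally, an audit of the underlying tool and its training data.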
Security & Confidentiality Concerns
Cyber-hacking and spear phishing have presented employers with digital security concerns for years, and as AI technology advances, those concerns will only grow as hackers develop techniques for manipulating entire AI systems. There have been recent reports of cyber-criminals harnessing AI to bypass security measures, expose vulnerabilities in companies’ security systems, or engage in unauthorized surveillance. Because many current AI tools are web-based and sit outside an employer’s own server protections, users may unwittingly expose conversation logs, user data, or other sensitive information to bad actors.
Additionally, many AI technologies “learn” from their interactions with human users and store that information for future use. For example, ChatGPT logs every conversation, including any personal data entered into the system, and may draw on that data to generate future outputs. To the extent users input sensitive or confidential information into these programs, that information is likely to “live forever” within the AI system and may be used to produce outputs or generate solutions for other organizations, potentially exposing protected trade secrets.
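Where employers do permit the use of web-based AI tools, one commonly suggested safeguard is to strip obviously sensitive identifiers before any text leaves the organization. The minimal Python sketch below illustrates the idea with a few regex patterns; the patterns, the sample text, and the `send_to_llm` call are all hypothetical placeholders, not a substitute for a vendor-reviewed data-handling policy.

```python
import re

# Very rough patterns for a few common identifiers (illustrative only;
# a real deployment would need far more thorough detection and review).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized identifiers with placeholder tags before the
    text ever leaves the organization's systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

draft = "Contact Jane Doe at jane.doe@example.com or 555-867-5309 re: SSN 123-45-6789."
print(redact(draft))
# The redacted text, not the original, would then be passed to whatever
# vendor tool the employer has approved, e.g.:
# response = send_to_llm(redact(draft))   # hypothetical vendor API call
```

Pattern-based redaction will miss plenty (names, free-text medical details, client identifiers), so it works best alongside clear policies about what employees may enter into these tools at all.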
While the recent growth of AI technologies has been impressive and may be appealing to employers as a way to reduce human labor and modernize workplace procedures, we have learned that the potential benefits do not always outweigh the pitfalls. With continued uncertainty about the decision-making rationale behind ADTs, as well as the security and ethical concerns of using other AI systems, employers should proceed with caution when electing whether to insert AI technology into the human resources arena.
Ultimately, human behavior remains a unique and individualized concept, and it is unlikely, at least in the near future, that AI technologies will be able to mimic these human variances in a way that allows them to be used in the workplace without exposing employers to significant risks. HR professionals should implement appropriate checks on automated systems that strike a balance among the benefits of workplace automation, the good judgment of human decision-making, and compliance with employment laws.
For cutting-edge compliance information and resources, check out our Compliance Consulting Page.