"Many HR organizations are starting to use AI, especially through their vendor technologies, and don't know what that technology is doing and that's a big risk to take on," said Guru Sethupathy, co-founder and CEO at FairNow, a technology company that aims to help organizations build compliance and trust in artificial intelligence.
Sethupathy said that more organizations realize the importance of AI governance as they look to use AI in their HR and recruitment. Many have questions about responsible AI adoption in recruiting, how to effectively implement AI, and concerns about potential risks. Sethupathy addressed these key topics in our latest RPO Leadership forum, hosted by RPOA CEO Lamees Abourahma. Below is a recap of his valuable insights.
What Is Artificial Intelligence (AI)?
As the adoption of AI in human resources and recruitment continues to grow, understanding AI governance and addressing potential risks becomes increasingly crucial. But people first need to understand what AI is. Sethupathy defines AI as any technological system that takes data as input to learn and then makes predictions, offers recommendations, or generates content.
He pointed out the ubiquity of AI-created predictions, recommendations, and content in people's lives. Whether you've used ChatGPT, Mistral, Claude, or other large language models (LLMs), or interacted with AI-based chatbots or recommendation systems like those on Netflix, you've been exposed to AI. These AI systems create content, make predictions, and provide personalized recommendations based on user data. "AI isn't just a new phenomenon; it has been around for a while, and laws and regulations regarding AI also encompass classical prediction models. Essentially, any technological system that uses data to learn and make predictions, recommendations, or content can be considered an AI system," he said.
To gain deeper knowledge of responsible AI adoption in recruiting, watch the webinar today.
Moving from AI's widespread influence, the discussion turned to its potential to transform businesses and individuals' lives, and Sethupathy provided case studies of AI's transformative impact.
AI: Potential to Increase Efficiency, Fairness, and Quality
Sethupathy emphasized the tremendous potential of AI to transform businesses and people's personal lives. He highlighted the transformative power of AI that he experienced while leading people analytics and technology functions as an executive at Capital One. His team developed AI technology that improved the efficiency and quality of hires and reduced bias in the hiring process. Achieving these outcomes required careful governance, testing, and proper risk mitigation methods. Here's his description of that experience.
Research shows that "if done right, AI has the potential to increase efficiency, fairness, and quality." Sethupathy cited a study in which applicants could choose between two groups of randomly assigned jobs: jobs in Group A were handled by humans throughout the process (sourcing, screening, interviewing, selection), while jobs in Group B were handled by AI technology. The study found that more women self-selected into Group B, suggesting that certain groups perceive traditional, human-run systems as biased, particularly in talent management and hiring. The research, he said, also highlights that AI can improve the candidate experience, minimize bias in the hiring process, and enhance the quality of talent acquisition.
Another research study showed significant productivity gains when consultants used generative AI: efficiency improved by 12 to 25% among low performers, gains that can apply to various consulting and HR roles.
Sethupathy observed that individuals and HR organizations using AI can significantly enhance productivity and efficiency, presenting a substantial opportunity for growth. This research underscores AI's immense value proposition and explains the enthusiasm companies show for its potential. Based on significant research in the field, Sethupathy said he expects profound gains in efficiency, efficacy, and overall experience.
Next, Sethupathy spoke about the ethical considerations surrounding the use of AI in businesses.
The Risks And Challenges Associated With AI
AI has great benefits; however, organizations must consider the potential risks associated with AI. Some significant challenges associated with AI include:
- Biases in AI systems.
- The generation of toxic content.
- Lack of transparency in understanding how specific complex systems operate.
Moreover, the emergence of AI systems that fabricate information, a behavior termed "hallucination," poses further difficulties, as do security and privacy concerns related to data. Therefore, he said, when integrating AI into organizational processes, it is essential to carefully consider and address these potential challenges to ensure responsible and beneficial use of AI technology.
Experts see investments in AI reaching trillions of dollars in the next decade, with HR receiving a significant portion of that investment for AI implementation, Sethupathy said. Organizations already widely use AI in the hiring process for tasks such as candidate selection, resume scoring, and creating job descriptions, he added.
Here's the challenge, he said. Many large HR departments have adopted AI, but they often lack an understanding of its efficiency and functionality because of limited resources for monitoring these systems and reliance on purchased technologies without deep internal expertise. Consequently, organizations struggle to ascertain whether these solutions use AI, exhibit biases, or comply with laws and regulations. As a result, purchasers now face confusion and unanswered questions during sales processes. While this situation is slowly changing, it highlights the pressing need for more precise insight into AI technology and its implications within HR organizations and vendor offerings.
Sethupathy next shared information about the regulations to tackle the risks posed by AI in critical areas like HR, finance, and healthcare.
Mitigating AI Risks
Over the past year, there has been a significant increase in regulations related to the use of AI affecting various domains such as HR, finance, and healthcare. These regulations differentiate between low, medium, and high-risk AI applications, with hiring classified as high-risk across most laws. The laws focus on governance, bias testing, and accountability for the enterprise customer and the vendor. Both parties must demonstrate testing, governance, and audits, indicating a shared responsibility. The trends suggest that neither the customer nor the vendor can solely be held accountable. It's crucial to consider these themes when using AI.
Sethupathy noted that existing laws must be followed alongside upcoming regulations. Various regulatory frameworks will be implemented at the state and national levels. Additionally, industry standards, such as those from ISO (the International Organization for Standardization), now include certification for responsible AI. Vendors will likely need to obtain ISO or NIST certification to sell to customers. This shift will require vendors and technology providers to demonstrate their certification to operate in the market. Therefore, compliance with existing laws, future regulations, and industry standards will be essential for vendors and technology providers in the AI space.
Now, let's delve into the six pillars of good governance. Sethupathy observed that organizations need to establish a framework to address the governance and compliance aspects of AI. He called this framework the six pillars of good governance.
The 6 Pillars of Good Governance
Organizations have begun addressing the governance and compliance aspects of artificial intelligence (AI) as part of their operations. Customers have started asking vendors questions about the governance and compliance of AI models, and professionals in the field expect part of the sales cycle to include AI governance and compliance.
As Sethupathy sees it, good governance consists of six pillars. Those pillars include:
- Inventory: Cataloging all AI applications and assessing their risk levels.
- Humans in the loop: Establishing rules and accountability for the humans involved in building, approving, and testing AI models.
- Roles and responsibilities: Establishing the roles and responsibilities of individuals involved in AI governance, such as data scientists, legal and risk personnel, business representatives, and HR executives.
- Policies: Implementing policies around transparency, trust-building, and remediation in case of AI bias or unexpected behavior.
- Compliance and testing: Ensuring compliance with regulations and conducting thorough testing for bias, performance, and data privacy.
- Training: Training HR organizations to understand the risks associated with AI and to ensure compliance in its usage.
Sethupathy said the pillars collectively contribute to good governance in AI implementation within organizations.
In Conclusion
The increasing integration of AI in HR and recruitment underscores the critical importance of responsible AI adoption in recruiting and awareness of its potential risks. The insights of author, speaker, and AI governance expert Guru Sethupathy shed light on AI's transformative potential to improve efficiency, fairness, and quality within organizations, while emphasizing the ethical considerations and challenges of AI implementation. As organizations continue to invest significantly in AI, they must approach its integration responsibly: addressing biases, ensuring transparency, and safeguarding security and privacy. These concerns reach beyond the HR domain, underscoring the need for proactive governance and thoughtful consideration of risk in harnessing AI's potential across industries.
To learn more about AI in recruiting, visit our RPO Academy or explore more leadership articles on the RPO Voice Blog.