Putting the ‘Human’ Back into Human Resources: How HR Can Protect the Human Side of Work
As AI moves deeper into the workplace, the focus can’t be only on speed and scale. Meaning, trust, and inclusion need to be designed in, and they’ll define not just how work gets done, but how it feels to be part of it.

Artificial intelligence is changing the way we work, promising increased productivity and data-driven decisions. However, AI progress also has a darker side: its potential impact on jobs, and the risk that work itself becomes less meaningful, less personal, and less human. This is where HR comes in—not just to address bias and fairness concerns but to shape how AI is adopted in ways that protect what people value most about work: connection, purpose, growth, and fairness.
This article explores how HR can lead AI integration while preserving these human foundations of work.
Contents
The hidden risks of growing AI adoption
Why HR needs to lead AI integration and capability efforts
What a human-centered workplace looks like in an AI world
Making human-centered work a strategic priority for HR
The hidden risks of growing AI adoption
It is easy to get swept up in the excitement of AI’s promise. The technology is already reshaping how work gets done, from generative AI tools that write job descriptions to algorithms that screen resumes in seconds. However, while the benefits are significant, so are the risks, especially if we focus solely on efficiency and ignore the broader implications for people and jobs.
While concerns about bias and unethical AI use are valid, the conversation must also include the more systemic implications of how AI shapes our organizations and society.
Productivity gains may come at the cost of engagement
Globally, AI could displace up to 300 million jobs, with 47% of workers in the United States alone at risk of being affected by AI-driven automation. One in four CEOs anticipates job cuts due to generative AI in the near future, while 30% of workers are concerned about their jobs.
Despite AI’s potential to boost productivity, we must also remain mindful of its impact on the meaning people find in their work. Global employee engagement levels are already in decline, and if AI is implemented without intentional design, businesses risk creating future roles that lack challenge, purpose, and fulfillment. The result could be a workforce that is more efficient but less inspired and invested.
“I’ve seen the pictures—and you have too—of robots on Amazon lines, moving large packages from one conveyor belt to another, being able to track their movements precisely. They’re part of the supply chain now. They’re not human; they don’t talk, they don’t call in sick, and they show up every day. I think that’s resonating with leadership. One company said, ‘We’ll make the $70,000 investment—this pays for itself in a year or two. We don’t have to pay benefits.’”

Short-term decisions are backfiring
OrgVue’s research reveals that many CEOs are experiencing AI regret, second-guessing decisions made to replace human work with artificial intelligence. In the UK, two in five businesses (39%) reported making redundancies as part of their AI adoption efforts. Yet, over half of those organizations (55%) now admit that those decisions were misguided.
Rather than unlocking the anticipated gains in efficiency and innovation, many companies have faced unintended consequences such as internal confusion, increased employee turnover, and a decline in productivity. These outcomes highlight a critical lesson: AI decisions must be guided by long-term thinking and organizational foresight, not short-term cost-cutting or hype-driven expectations.
AI risks increasing inequality and anxiety
Beyond the headlines, we also need to understand that displacement due to AI is rarely evenly distributed. Younger workers, lower-income employees, and workers of color are disproportionately worried about the future. The promise of AI has, for many, become entangled with feelings of insecurity, inequality, and exclusion.
This is especially important because AI adoption risks deepening existing inequalities. In high-income countries, as many as 60 percent of jobs are considered automatable, compared with just 26 percent in low-income economies, fueling anxiety about AI’s impact on skilled labor.
These disparities are not just societal concerns. They have direct implications for how organizations adopt and scale AI. If left unaddressed, they risk eroding trust between employees and employers, fueling fear and anxiety about AI and undermining the very goals it is meant to serve. This is where HR’s role becomes critical.
Creating more human-centered workplaces in the age of AI takes more than good intentions — it requires HR teams with the right mindset, skills, and strategic tools.
With AIHR for Business, your entire HR team can build capabilities in areas like change management, employee experience design, organizational culture and development, and more. Give your people the training they need to protect the human side of work and elevate HR’s impact across the business.
Why HR needs to lead AI integration and capability efforts
HR is uniquely positioned to play a critical role in how AI is adopted in organizations. No other discipline holds the mandate to align technology with people or the responsibility to balance organizational priorities with workforce wellbeing. As AI becomes embedded in how organizations hire, manage, develop, and engage people, HR must lead its adoption, not just for productivity gains but to preserve the human essence of work.
HR’s role is to drive the implementation of AI solutions that improve efficiency and service delivery while safeguarding employee experience, trust, and inclusion. The challenge lies in ensuring that innovation serves people, not the other way around.
“It’s the human aspects—the organization has to be human-centered. It has to be the kind of environment that makes someone want to join the company. And HR sits right at the center of that through recruiting and branding.”

What a human-centered workplace looks like in an AI world
The term human-centered is often misunderstood as opposing performance or technology. However, a truly human-centered workplace does not reject AI; it integrates it thoughtfully to protect psychological safety, amplify purpose, and deepen connection.
HR is the custodian of this balance. It must set the tone for how AI is introduced, communicated, and experienced across the organization, balancing decisions to drive business results with human implications. A truly human-centered HR function uses AI to enhance, rather than replace, the human aspects of work. This involves applying technology thoughtfully to reduce friction, support better decision-making, and personalize employee experiences, all while preserving human connection.
For instance, AI can efficiently manage repetitive tasks such as scheduling interviews or analyzing employee feedback data. By automating these routine activities, HR professionals can focus on high-impact, high-touch efforts like coaching leaders, facilitating inclusion dialogues, and shaping experiences that build a sense of purpose and belonging.
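As a small illustration of the kind of routine analysis AI tooling can absorb, here is a minimal sketch of a first-pass theme count over open-text employee feedback. The theme keywords, sample comments, and function names are illustrative assumptions, not a reference to any specific HR platform.

```python
# Minimal, hypothetical sketch: a first-pass theme count over open-text
# employee feedback, so HR can spend its time on the follow-up conversations.
# Themes, keywords, and sample comments are illustrative only.
from collections import Counter

THEME_KEYWORDS = {
    "workload": ["workload", "overtime", "burnout"],
    "growth": ["career", "promotion", "learning"],
    "recognition": ["recognition", "appreciated", "valued"],
}

def count_themes(comments):
    """Count how many comments mention each theme at least once."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

feedback = [
    "The workload this quarter meant a lot of overtime.",
    "I'd like clearer career and learning paths.",
    "Extra effort rarely gets any recognition.",
]
print(count_themes(feedback))
# Counter({'workload': 1, 'growth': 1, 'recognition': 1})
```

Even a crude count like this can show HR where to dig deeper, while the actual conversations about workload, growth, or recognition stay with people.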
However, when AI is applied without consideration for the human experience, the consequences can be counterproductive. Some organizations, for example, have experimented with using AI to fully replace managers in the performance review process. These initiatives often backfire: employees resist being evaluated solely by algorithms and strongly prefer maintaining a human relationship with their managers. They see AI as a tool that should assist managers by reducing bias and surfacing better insights, not as a substitute for human judgment and connection.
Making human-centered work a strategic priority for HR
To lead AI in a human-centered way, you need to embed five key principles across all HR activities. Each of these reinforces the broader goal: making sure technology serves people, not the other way around.
1. Build psychological safety into your AI strategy and address fear proactively
Across all AI efforts, HR should aim to create psychological safety: employees need to feel they have the space to voice concerns, process disruption, and participate in shaping the future. HR can enable this by opening dialogue and creating listening forums where employees can express their fears and worries.
Transparency and proactive communication also play a critical role in building psychological safety. Research shows that only 32 percent of employees feel their organization has been transparent about how AI is used. This lack of openness undermines trust and reinforces anxiety.
Employees want to understand how AI is being used, who benefits from it, and what safeguards are in place to ensure ethical, fair, and inclusive practices. That’s why HR should avoid vague or overly technical messaging in employee communication and involve teams early through pilots and feedback sessions.
Executive leaders should also speak openly about their plans for adopting AI, how those plans will affect jobs in the future, and how they intend to reskill or transition employees.
2. Build an AI-ready workforce
With 120 million workers expected to retrain in the next few years, HR must lead the development of new learning pathways and career transitions. It’s essential to go beyond a stated intent to reskill and be specific about:
- Which jobs will be in focus, and how the organization is segmenting and prioritizing workers who are currently in those jobs
- What skills will be required in the future, and what the paths to develop them look like
- What investment is required to transition the workforce into these opportunities, and whether the organization is willing to commit it
Upskilling and reskilling efforts haven’t always prioritized AI. According to a TalentLMS and Workable report, only 41% of companies include AI skills in their upskilling programs, and just 39% of employees say they use those skills in their roles. This gap highlights the need for a more holistic approach—one that goes beyond training to include opportunities for real-world application, alignment with business needs, and clear links to growth and recognition.
3. Audit AI systems for fairness and inclusion
HR needs to partner with the Risk Management, Compliance, and Legal teams to conduct realistic audits that evaluate AI systems for fairness and inclusion. The results should show where AI initiatives are intentionally inclusive and where they might be unintentionally excluding specific groups.
How AI is adopted can itself create exclusion and perceived unfairness. A global financial services firm, for example, adopted AI tools for client insights and productivity and rolled them out first to senior consultants and head office teams, giving those groups a significant edge in performance and visibility. Meanwhile, regional teams and junior staff received delayed access and minimal support, limiting their ability to benefit from the same tools. This uneven implementation widened internal inequalities and created a digital divide within the organization.
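Audits can also include simple quantitative checks alongside reviews of rollout decisions like the one above. As a minimal, hypothetical sketch, the snippet below compares the selection rates an AI screening tool produces for different groups against the commonly cited four-fifths guideline; the column names, sample data, and 0.8 threshold are illustrative assumptions, not a prescribed audit methodology.

```python
# Hypothetical sketch: comparing an AI screening tool's selection rates across
# groups against the four-fifths guideline. Data and labels are illustrative.
import pandas as pd

def selection_rates(df, group_col, outcome_col):
    """Share of applicants the tool advanced, per group."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratios(rates):
    """Each group's selection rate relative to the highest group's rate.
    Values below 0.8 are a common flag for further human review."""
    return rates / rates.max()

# Illustrative outcomes logged from a resume-screening pilot.
screening_log = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "advanced": [1, 1, 1, 0, 0, 1, 0, 0],
})

rates = selection_rates(screening_log, "group", "advanced")
ratios = adverse_impact_ratios(rates)
print(ratios[ratios < 0.8])  # groups whose ratio warrants a closer look
```

A low ratio is a prompt for human review and qualitative follow-up with affected groups, not proof of bias on its own.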
4. Redefine the value of work
AI can help eliminate low-value tasks. HR should use this opportunity to elevate the parts of work technology cannot replicate: creativity, empathy, and collaboration. That means rethinking work design and intentionally building roles around meaningful work that improves engagement, wellbeing, and job satisfaction.
Meaningful work also balances the individual’s need to be challenged with their need to feel they are contributing to something that advances the business’s objectives and strategy.
AI offers a great opportunity to completely reinvent work design, and HR needs to lead the efforts to ensure the responsible adoption and implementation of these principles.
5. Create guiding principles for ethical AI use
Establish internal policies that prioritize consent, transparency, and data dignity. Data dignity means treating people’s data with the same respect as the individuals themselves, ensuring they have visibility, control, and fair benefit from how their data is used.
These principles should guide all decisions around AI deployment in the workplace. While most AI policies today focus on basic compliance, HR has an opportunity to go further by helping shape policies that are grounded in human-centered thinking, not just minimum standards.
The future of HR and work is more human, not less
There is a growing narrative that the future of work is digital, fast-paced, and AI-powered. That may be true, but it is incomplete. The future of HR must also be deeply human.
As technology becomes more powerful, HR’s responsibility is not to abandon the human side of its work but to amplify it. This means using AI to unlock time, insights, and possibilities, not to replace judgment, empathy, and connection.
AI is an opportunity to elevate the human aspects of work, not replace them. HR plays a key role in shaping authentically human-centered organizations, making sure that as AI is integrated, connection, thoughtful work design, and values like dignity and inclusion remain at the core.