The AI Policy Gap and What HR Leaders Must Do Now
Two emerging HR-led trends are reshaping AI adoption: AI Councils (cross-functional groups that ensure ethical, transparent implementation) and AI Champions (employee advocates driving trust and collaboration across teams).
February 12, 2026

While 60% of leaders expect employees to update their skills, roles, and responsibilities to adapt to AI, 34% of companies still lack a policy guiding AI use, according to The Adecco Group (TAG) data. That gap increases risk in compliance, trust, and talent strategy. For HR and talent teams, the mandate is clear: set guardrails that empower employees, align with the business, and accelerate innovation rather than slow it.
Becky McCullough, VP of Talent Acquisition and Mobility at HubSpot, captured the current tension on the Talk Talent to Me podcast: “Responsible AI usage is still really uncharted territory, especially in the people space. When you think about all the legislation that’s going on with AI-enabled workflows and hiring, that balance of wanting to be intentional and responsible while also wanting to move fast is really challenging.”
The AI readiness gap
Only 51% of leaders feel confident in their leadership team’s AI knowledge, according to the 2025 TAG Business Leaders Report. Overall leader confidence in their AI implementation strategies has dropped 11 percentage points year-over-year. Meanwhile, data from TAG’s 2025 Global Workforce of the Future shows employees are ready for AI adoption pending leader guidance.
- 87% of workers say they are willing to adapt to AI
- 71% believe their AI knowledge exceeds employer training
High expectations without clear guidance can introduce real risks around data privacy, security, and bias, while leaving teams unsure how AI supports business goals.
Tip: Develop clear, practical AI guardrails to focus adoption.
Establish AI guidelines that help employees use AI confidently and responsibly, protect sensitive information, and ensure adoption accelerates innovation in ways that are aligned with the business, compliant with security and privacy requirements, and trusted by the workforce.
What your AI policy should include, with HR owning the why
HR’s role is to connect responsible AI use to how work gets done, how people are developed, and how trust is maintained across the workforce.
- Responsible use: Define acceptable applications by function and task.
- Bias mitigation: Pre-deployment testing, ongoing monitoring, and remediation playbooks.
- Data privacy: Clear rules for sensitive employee and candidate data, with approved tools and redaction standards.
- Upskilling: Role-based AI fluency curriculum that makes AI literacy a baseline skill, similar to data analysis or communication.
- Incident management: Simple channels to report issues and defined escalation paths.
- Vendor governance: Due diligence on third party AI, documented controls, and contractual obligations.
- Monitoring and review: Adoption, performance, and ethics dashboards with a scheduled policy refresh.
HR as the strategic architect of AI-enabled work
HR is designing the blueprint for how AI connects to talent systems across hiring, internal mobility, learning, and performance.
- Trust and alignment: Employees look to HR for clarity on responsible AI use, and 69% of employees say they trust colleagues more than AI. HR must structure the human oversight that gives AI decisions context, fairness, and credibility.
- Culture before tools: Many leaders reframe AI adoption as change leadership. Successful HR transformation starts with culture, not workflows. Establish shared principles first, then operationalize them through tools, use cases, and governance.
Tip: Establish principles first, then the operating model.
Treat AI as a team sport, not a standalone capability. HR must anchor AI adoption in shared principles, then operationalize them through clear roles, governance, and cadence. This creates speed without sacrificing trust, safety, or alignment.
- Principles: Human-centric, fairness, transparency, privacy, security, continuous learning
- Roles: Build your AI Council across HR, IT, legal, data, and business units — the CHRO as architect, an AI governance lead, HR business partners and TA leaders as process owners, data and legal partners for compliance, and learning leaders for AI fluency
- Cadence: Quarterly reviews of AI use cases, bias and privacy audits, business metric dashboards, and controlled experimentation
Actionable steps for HR leaders
- Design a responsible AI framework with business KPIs: Link guardrails to outcomes like cycle time, time saved, and bias reduction.
- Launch role-based AI learning paths: Make AI literacy a baseline skill across HR, TA, and people managers.
- Communicate early and often: Explain use cases, data rules, and human oversight so teams trust the process.
- Measure and iterate: Use surveys, audits, and workflow metrics to refine adoption and impact.
“Anyone who claims to be an expert in AI is just two weeks ahead of you. That’s how fast this is moving,” Becky adds. “Build flexibility into your planning cycles.”
Becky McCullough shares HubSpot’s approach to launching an AI strategy while preserving culture and business priorities—discussing the evolution of HR in the AI era with Talk Talent to Me host Rob Stevenson.
About the data
This white paper draws on Adecco Group research, including Global Workforce of the Future 2025 (37,500 workers across 31 countries) and Business Leaders 2025 (2,000 C-suite executives across 13 countries).