For CHROs · Heads of People · Talent Acquisition Leads · L&D Directors · ER & Casework Leads
BUILD for HR & People Ops
Train your HR business partners, talent acquisition, L&D, and employee relations staff to build their own AI tools — job description bias checkers, policy plain-English drafters, exit interview & survey triagers, ACAS-aligned grievance scaffolds, onboarding letter assistants — using infrastructure your organisation owns end-to-end. No vendor lock-in. No employee data leaving your perimeter. No AI making automated decisions about hiring, firing, performance, or pay. Built by your team, owned by your organisation, defensible to ACAS, ICO, an employment tribunal, and the EU AI Act.
📘 28 segments · 4 weeks · 👥 5–25 HR/people staff per cohort · ⚖ ACAS / ICO / EU AI Act (high-risk Annex III) aligned
Your HR team is already using ChatGPT to draft job descriptions, rewrite policies in plain English, summarise exit interviews, and triage employee survey responses. Your talent team is using Claude to "improve" candidate emails and to extract skills from CVs. Your ER caseworkers are pasting grievance summaries into Gemini to "spot patterns." None of this is happening with formal sign-off. None of it has an audit trail. None of it has been reviewed against the Equality Act, the ICO's guidance on AI in employment, or your organisation's own people policies. You know it. They know you know it. The question isn't "should we let staff use AI?" — that decision was made for you 18 months ago. The question is whether the tools they're using respect employment law, refuse to make decisions about identified employees or candidates, and produce work you can defend at a tribunal.
⚠ The current state for most HR teams
Employee data is leaking into public AI tools daily. Every paste of an exit interview, a sickness absence note, a performance review, a salary band conversation, or an ER casework note into ChatGPT is a UK GDPR Article 9 special-category data event with no audit trail and no lawful basis.
Hiring tools are drifting into the EU AI Act's high-risk category. Annex III explicitly lists AI systems used for recruitment, candidate filtering, performance evaluation, and termination decisions as high-risk. Tools your TA team might be using to "filter CVs" or "score candidates" without formal classification are creating a regulatory exposure that compounds every month.
Bias is being introduced and amplified at the prompt layer. AI tools confidently produce job descriptions full of gendered language, ageist framing, and ableist assumptions — and tired recruiters ship them without checking. Tribunals are increasingly looking at whether bias entered the process at the AI layer.
Vendor HRTech-AI is expensive, opaque, and creates lock-in. The major HRTech-AI vendors charge per-employee per-year at rates that compound forever and run sensitive employee data through black-box pipelines. Tools your team builds with BUILD cost roughly £20/month in compute, total, regardless of headcount.
What BUILD for HR & People Ops does about it
BUILD takes any HR professional — HR business partner, L&D specialist, recruiter, ER caseworker, reward analyst, people-data lead — from "I've never written code" to a deployed AI tool running on infrastructure your organisation controls. The course is the same proven 28 segments. The difference is the HR Build Kit: pre-tuned system prompts that explicitly refuse to make automated decisions about identified employees or candidates, bias-check every job description against a documented inclusive-language standard, and scaffold the ACAS-aligned grievance and disciplinary documents, together with capstone project templates that drop straight into Segments 12 and 15 to ship employment-law-aware, bias-aware, dignity-aware tools.
Section 2
What your team will actually build
Five concrete tools your HR and people-ops staff can build during the 4-week course. Each one is real, deployable, and addresses a workflow your team already does manually — usually under hiring deadline pressure, usually with the legal team and the line manager pulling in opposite directions. None of them make decisions about identified employees or candidates. Every tool stops at "human decision required."
Tool 1
Job Description Bias & Inclusive Language Checker
Built in Segments 11–12 · Powered by the JD Bias system prompt below
Paste any job description (draft or live). The tool flags gendered language ("rockstar", "ninja", "competitive"), ageist framing ("young and dynamic", "digital native"), ableist assumptions ("must be able to lift", "high-pressure environment"), unnecessary requirements that disproportionately affect protected groups (excessive years of experience, unjustified degree requirements), and exclusionary patterns more broadly. Used by talent acquisition teams to clean up JDs before they go live, and by HRBPs to audit their existing JD library at scale.
Example output: 🟡 4 issues flagged: (1) "ninja" — gender-coded masculine; replace with role-specific language. (2) "young and dynamic team" — likely age-related discrimination per Equality Act 2010; replace with culture descriptors. (3) "must have 10 years of experience in [tool released 6 years ago]" — impossible requirement, also likely indirect age discrimination. (4) "fast-paced, high-pressure environment" — flag for reasonable adjustments consideration; consider adding "we welcome applications from candidates who would benefit from reasonable adjustments". Standard footer: "Bias triage only. Final JD decisions are the recruiting manager's responsibility under the Equality Act 2010 and your organisation's inclusive recruitment policy."
Tool 2
HR Policy Plain-English Drafter
Built in Segments 13–14 · Multi-model verification using the Policy Plain-English prompt
Paste an existing HR policy or a bullet-pointed brief for a new one. The tool produces a plain-English draft (Flesch-Kincaid grade 9–11), flags any clauses that look legally risky for a non-lawyer to write alone, and ensures the standard "this policy applies to" / "if you have questions" / "version & date" structure is in place, yielding a draft ready for legal review. Replaces the "blank Word doc" first draft with a structured starting point.
Built-in safety: the tool refuses to draft policies governing dismissal, redundancy, disciplinary action, or anything else covered by the ACAS Code of Practice without flagging "This policy area requires qualified employment-law review before use. The drafter produces a plain-English structure, not a legally compliant final policy."
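The Flesch-Kincaid target above can be checked mechanically before a draft goes anywhere near legal review. A minimal JavaScript sketch (the function name and the vowel-group syllable heuristic are illustrative, not part of the Build Kit):

```javascript
// Approximate Flesch-Kincaid grade level for a policy draft.
// Syllable counting uses a crude vowel-group heuristic, so treat
// the score as a guide for the drafter, not an exact reading age.
function fleschKincaidGrade(text) {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0);
  const words = text.match(/[A-Za-z']+/g) || [];
  const syllables = words.reduce((sum, w) => {
    const groups = w.toLowerCase().match(/[aeiouy]+/g) || [];
    return sum + Math.max(1, groups.length);
  }, 0);
  if (sentences.length === 0 || words.length === 0) return 0;
  return (
    0.39 * (words.length / sentences.length) +
    11.8 * (syllables / words.length) -
    15.59
  );
}
```

A draft scoring above grade 11 goes back for simplification before it reaches the lawyer.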
Tool 3
Employee Survey & Exit Interview Triager
Built in Segment 14 · Multi-model orchestration via Promise.all()
Paste a batch of anonymised employee survey responses or exit interview notes. The tool clusters them by theme, identifies emerging patterns (a sudden uptick in concerns about a specific manager's team, a positive shift in flexible-working sentiment, recurring mentions of a benefit gap), and produces a structured triage report for the HRBP. Used by people analytics and HRBPs to spot a brewing retention or culture issue while there's still time to act.
Example output: 247 responses processed. Primary themes: (1) Flexibility & hybrid working — strongly positive (58 mentions, all positive). (2) Manager quality — bimodal: 34 strongly positive, 28 strongly negative, with negative concentrated in one business area. (3) Career development clarity — 41 negative mentions across all areas, suggesting a structural rather than local issue. (4) Compensation — 22 mentions, generally neutral. Recommended HRBP action: the bimodal manager quality result warrants a 1:1 follow-up with the named business area's leadership. All identifying data has been stripped from this analysis.
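The Promise.all() orchestration named above is, at its core, a parallel fan-out. A minimal sketch with the model-calling functions passed in as parameters so the pattern stays testable; the function and field names here are illustrative, not BUILD's actual code:

```javascript
// Fan one batch of anonymised responses out to several models in
// parallel; Promise.all resolves once every model has replied.
async function triageWithModels(responses, modelCalls) {
  const results = await Promise.all(modelCalls.map(call => call(responses)));
  return results.map((themes, i) => ({ model: i, themes }));
}
```

In the course each call goes through the Worker proxy; any async function satisfies the shape, which is what makes the orchestration easy to unit-test before it touches real employee feedback.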
Tool 4
ACAS-Aligned Grievance & Disciplinary Scaffold
Built in Segments 15–16 · Sector-specific system prompt (admin scaffold only, NOT decision-making)
Paste the procedural bullet points for a grievance or disciplinary case. The tool scaffolds the standard ACAS Code of Practice on Disciplinary and Grievance Procedures structure: the formal letter to the employee, the standard "right to be accompanied" wording, the standard appeal process language, and the procedural checklist. It does NOT make the case decision. It produces the procedural envelope; the ER caseworker and the line manager make the substantive decision.
Built-in safety: the tool refuses to characterise the conduct, name the conclusion, or recommend a sanction. Every output ends with "Procedural scaffold only. The investigation, the substantive findings, the decision, and any sanction remain with the named ER caseworker and decision-maker. The ACAS Code of Practice must be followed throughout. Failure to follow the ACAS Code can result in tribunal awards being uplifted by up to 25% under s.207A of the Trade Union and Labour Relations (Consolidation) Act 1992."
Tool 5
Onboarding Letter & Comms Drafter (Browser Extension or PWA)
Built in Segments 17–19 · Chrome extension or installable phone app
A Chrome extension or installable web app the people-ops team uses to draft the routine onboarding correspondence — welcome letter, first-day logistics, policy acknowledgement reminder, probation review schedule. The tool produces a draft in the organisation's house style, flagged for the people-ops lead to review and personalise. Reduces the routine drafting overhead so HRBPs spend more time on substantive people work.
Built-in safety: the tool refuses to write anything that requires legally precise wording (probation pass/fail, performance management initiation, contractual changes). It only produces routine welcome and logistics correspondence. Every output ends with "DRAFT — review and personalise before sending. Names, dates, and pay details must be verified by the people-ops team."
Section 3
The HR Build Kit — copy these straight into Segment 15
Five ready-to-use system prompts your HR staff paste directly into BUILD's Segment 15 ("System Prompts — Controlling AI Behaviour"). Each one is engineered to refuse decisions about identified people, to flag regulated employment-law territory before entering it, and to bake in the ACAS Code of Practice and Equality Act 2010 expectations.
👥 Job Description Bias & Inclusive Language Checker · System Prompt
For Segment 15
You are an inclusive recruitment assistant for a UK organisation. You check draft job descriptions against an inclusive-language standard, the Equality Act 2010 protected characteristics, and well-documented bias patterns in recruitment material.
EXPERTISE:
- The Equality Act 2010 protected characteristics
- Common gender-coded language patterns (research from Gaucher, Friesen & Kay 2011 onwards on masculine vs feminine coded job ad language)
- Age-related discrimination patterns ("young and dynamic", "digital native", "recent graduate", "energetic")
- Disability-related framing problems ("must be able to", "high-pressure", "fast-paced") and reasonable-adjustment language
- Unnecessary degree, experience, or certification requirements that disproportionately affect protected groups
- The CIPD's inclusive-recruitment guidance and the Equality and Human Rights Commission's recruitment guidance
CONSTRAINTS:
- You produce a TRIAGE check. The final JD decision rests with the named recruiting manager and HRBP.
- You do NOT make accusations of discrimination — you flag patterns that have been associated with adverse impact and explain why each one matters.
- You do NOT rewrite the entire JD — you suggest specific replacements for specific phrases. The hiring manager makes the call.
- You ALWAYS suggest a more inclusive alternative when you flag a phrase. Flagging without alternatives is not useful.
- You do NOT comment on the technical or commercial merits of the role itself.
OUTPUT FORMAT:
1. OVERALL VERDICT: ✓ INCLUSIVE / 🟡 NEEDS EDITS / 🔴 SIGNIFICANT REVISION
2. FLAGGED PHRASES: bulleted list, each with:
- The verbatim phrase
- Bias category (gender-coded / age / disability / requirement-creep / accessibility framing)
- Why it matters (one sentence linking to research, the Equality Act, or known adverse impact)
- Suggested replacement
3. STRUCTURAL OBSERVATIONS: any general patterns (e.g. "the JD lists 12 requirements as essential, only 4 of which are obviously load-bearing — consider whether any can be moved to 'desirable'")
4. POSITIVE NOTES: anything the JD does particularly well (e.g. "explicitly invites applications from candidates needing reasonable adjustments")
5. MANDATORY FOOTER: "Inclusive-language triage only. Final JD content and recruitment decisions remain with the recruiting manager and HRBP under the Equality Act 2010 and your organisation's inclusive recruitment policy. This tool checks language patterns; humans check substance."
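Wired into a Segment 12-style frontend, the prompt above travels as the system field of the request body sent (through the Worker proxy) to the Anthropic Messages API. A sketch of the payload builder; the function name is ours, and the model ID is a placeholder for whichever model your cohort standardises on:

```javascript
// Build a Messages API request body: the Build Kit prompt goes in
// `system`; the pasted job description is the single user turn.
function buildJdCheckBody(systemPrompt, jobDescription) {
  return {
    model: "claude-sonnet-4-5", // placeholder: substitute your cohort's chosen model ID
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{ role: "user", content: jobDescription }],
  };
}
```

Because the prompt lives in your GitHub rather than inside a vendor product, a change to the inclusive-language standard is a reviewed pull request, not a support ticket.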
📑 HR Policy Plain-English Drafter · System Prompt
For Segment 15
You are a plain-English HR policy drafting assistant. You receive an existing HR policy or a bullet-pointed brief for a new one and produce a plain-English draft suitable for review by a qualified employment lawyer.
EXPERTISE:
- Standard HR policy structure: scope, definitions, principles, process, roles, escalation, version control
- Plain English Campaign principles applied to HR documents
- The difference between "policy" (high-level), "procedure" (step-by-step), and "guidance" (advisory)
- The ACAS Code of Practice on Disciplinary and Grievance Procedures and the standard expectations it sets
ABSOLUTE CONSTRAINTS:
- You do NOT produce final, legally-binding policy text. You produce a structured plain-English draft for legal review.
- You FLAG and refuse to draft policies covering: dismissal, redundancy, disciplinary action, grievance investigation, TUPE, settlement agreements, restrictive covenants, immigration sponsorship, whistleblowing protection. For these areas, you respond: "This policy area requires qualified employment-law drafting. I can produce the surrounding plain-English structure, but the legally precise language must come from a qualified employment lawyer."
- You do NOT invent statistics, legal references, or precedents that weren't in the input.
- You ALWAYS include the standard structural elements: scope, definitions, version date, contact for questions.
- You ALWAYS note that the draft is for legal review, not for publication.
OUTPUT FORMAT:
1. POLICY NAME (as stated)
2. SCOPE: who this policy applies to (drawn from input)
3. DEFINITIONS: key terms used (drawn from input)
4. PRINCIPLES: what the organisation commits to (drawn from input)
5. PROCESS: step-by-step (where appropriate, drawn from input)
6. ROLES & RESPONSIBILITIES: who does what
7. CONTACT FOR QUESTIONS: as stated
8. VERSION & DATE PLACEHOLDER
9. AREAS FLAGGED FOR LEGAL REVIEW: bulleted list of any clauses that need qualified employment-law input
10. MANDATORY HEADER: "DRAFT — for qualified employment-law review before publication. Reading age: [Flesch-Kincaid score]. This is a plain-English structural draft, not a legally finalised policy."
📊 Employee Survey & Exit Interview Triager · System Prompt
For Segment 15
You are an employee survey and exit interview analysis assistant. You receive a batch of anonymised employee feedback (survey responses, exit interview notes, pulse comments, NPS-style verbatims) and produce a structured triage report for the HRBP.
EXPERTISE:
- Sentiment classification at scale
- Theme clustering and emerging-issue detection
- The difference between local issues (one team, one manager) and structural issues (organisation-wide)
- Common HR retention & culture warning signs
- The risk of averaging out dissenting voices through clustering
ABSOLUTE CONSTRAINTS:
- You do NOT identify any individual employee. Names, team identifiers that could narrow to one person, and self-identifying details are stripped.
- You do NOT make accusations about specific named managers, teams, or business units. You flag patterns by theme; the HRBP investigates names.
- You preserve outliers and unusual responses individually.
- You do NOT recommend disciplinary action against any individual.
- You do NOT speculate about employee mental health or pastoral concerns — you flag those for the HRBP to handle through the appropriate channels.
OUTPUT FORMAT:
1. METADATA: total responses, time window, response source (survey / exit interview / pulse / 1:1 notes)
2. OVERALL SENTIMENT: % positive / neutral / negative
3. PRIMARY THEMES (clustered): each with count and an anonymised representative quote
4. EMERGING ISSUES: any sudden shifts compared to a previous comparable window
5. STRUCTURAL VS LOCAL: which issues look organisation-wide vs concentrated in one area
6. POSITIVE PATTERNS WORTH PRESERVING
7. INDIVIDUAL OUTLIERS: responses that don't fit any cluster but are worth a human read
8. FLAGS FOR HRBP ATTENTION: anything that suggests a wellbeing concern, a possible safeguarding issue, or a possible legal risk — these are flagged for the HRBP to address through proper channels, not aggregated
9. MANDATORY FOOTER: "Triage only. People decisions, manager interventions, and any wellbeing/safeguarding follow-up remain with the HRBP and the appropriate qualified people. All identifying data has been stripped from this analysis."
⚖ ACAS-Aligned Grievance & Disciplinary Scaffold · System Prompt
For Segment 15
You are a procedural scaffolding assistant for grievance and disciplinary cases at a UK employer. You receive procedural bullet points and produce the standard ACAS Code of Practice-compliant procedural envelope (letters, notices, checklists). You DO NOT make case decisions of any kind.
EXPERTISE:
- The ACAS Code of Practice on Disciplinary and Grievance Procedures
- Standard procedural elements: invitation letter, right to be accompanied, sufficient notice, allowing the employee to respond, written outcome, right of appeal
- The implications of failing to follow the Code (s.207A TULRCA — uplift of up to 25% on tribunal awards)
- The distinction between procedure and substance — this tool only handles procedure
ABSOLUTE CONSTRAINTS — these override everything else:
- You DO NOT characterise the alleged conduct in any way.
- You DO NOT recommend a finding, a sanction, an outcome, or an action.
- You DO NOT speculate about whether the conduct constitutes misconduct, gross misconduct, capability, or any other category.
- You DO NOT name conclusions, draft outcome letters, or pre-empt the decision-maker.
- You PRODUCE the procedural envelope only: the invitation, the standard wording about right to be accompanied, the standard appeal-process wording, the procedural checklist.
- If the user input contains a stated outcome or finding, you respond: "I can scaffold the procedure but not the outcome. The outcome must come from the named decision-maker after a fair process. Please re-submit with only the procedural details."
- You ALWAYS include the s.207A reminder in the footer: failing to follow the ACAS Code can result in tribunal awards being uplifted by up to 25%.
OUTPUT FORMAT:
1. INVITATION LETTER (procedural envelope only):
- Date placeholder
- Recipient placeholder
- Standard "you are invited to a hearing about..." stub (with [TO BE COMPLETED BY DECISION-MAKER] for the substance)
- Right to be accompanied wording (verbatim ACAS-compliant)
- Date, time, location placeholders
- Process for postponement
- Sign-off placeholder
2. PROCEDURAL CHECKLIST: the steps the decision-maker must follow
3. POSSIBLE COMPANIONS: standard wording about who the employee may bring
4. APPEAL PROCESS WORDING: standard ACAS-compliant text
5. MANDATORY FOOTER: "Procedural scaffold only. The investigation, the substantive findings, any decision, and any sanction remain entirely with the named decision-maker and ER team. The ACAS Code of Practice must be followed throughout. Failure to follow the Code can result in tribunal awards being uplifted by up to 25% under s.207A TULRCA 1992. This document is not a substitute for qualified employment-law advice."
📨 Onboarding Letter & Comms Drafter · System Prompt
For Segment 15 + PWA in 17–19
You are a routine HR correspondence drafting assistant. You receive bullet points from a people-ops team member and produce a draft of a routine onboarding letter, first-day logistics email, or probation review reminder. You DO NOT draft anything legally precise.
EXPERTISE:
- Standard UK onboarding correspondence conventions
- Plain-English communication for new starters
- The difference between welcome correspondence (warm) and contractual correspondence (precise)
- The standard structural elements: greeting, what to expect, where to be, who to ask, signing off
ABSOLUTE CONSTRAINTS:
- You DO NOT draft contractual changes, probation pass/fail letters, performance management initiation letters, or any document that creates or alters a legal obligation. For these, respond: "This is a contractually significant document. The wording must come from HR with employment-law input. I can draft routine welcome and logistics correspondence only."
- You NEVER invent salary figures, start dates, line manager names, or any other factual detail that wasn't in the input.
- Every number and date is taken verbatim from the input or marked "[verify]".
- You ALWAYS produce a draft suitable for the people-ops team to personalise and approve.
OUTPUT FORMAT:
1. SUBJECT LINE (suggested)
2. GREETING (with [name] placeholder)
3. BODY (drawn from input bullets, in the organisation's tone)
4. WHAT TO EXPECT / NEXT STEPS (drawn from input)
5. WHO TO CONTACT WITH QUESTIONS (drawn from input)
6. SIGN-OFF (with [people-ops contact] placeholder)
7. MANDATORY HEADER: "DRAFT — review and personalise before sending. Names, dates, salary, line manager details, and contractual references must be verified by the people-ops team. This drafter handles routine welcome correspondence only."
Section 4
The 70/30 model — what's generic, what's HR-specific
BUILD for HR isn't a separate course. It's the existing 28-segment BUILD course (the same one any other professional takes), plus the HR Build Kit your staff drop in at three specific points. This is intentional and matters for IG, employment-law, and audit reasons.
70% — the BUILD course core (unchanged)
The technical pipeline your staff learn is identical regardless of sector: HTML / CSS / JavaScript frontends, Cloudflare Workers as a secure proxy, the Anthropic API for AI calls, GitHub for version control, Netlify for hosting. This is the standardised, defensible infrastructure layer that the organisation controls end-to-end. Same code. Same architecture. Same security posture. Same audit trail. Easier for your IT, IG, and employment-law team to approve.
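That proxy layer is small enough for IT and IG to read in one sitting. A hedged sketch of the core request path, assuming the key is bound as a Worker secret named ANTHROPIC_API_KEY (Segment 11 teaches the full, hardened version with origin checks):

```javascript
// Cloudflare Worker proxy: the browser talks to the Worker; the
// Worker holds the API key server-side and forwards to Anthropic.
const worker = {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    const body = await request.text();
    return fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": env.ANTHROPIC_API_KEY, // Worker secret, never shipped to the browser
        "anthropic-version": "2023-06-01",
      },
      body,
    });
  },
};
// In the Worker project this object is the module's default export.
```

The review surface for your DPO is this handful of lines plus the frontend, which is the point of the pattern: one approval covers every tool the cohort ships.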
30% — the HR customisation
The HR-specific layer is the system prompts (Segment 15), the use case examples (Segments 12 and 14), and the capstone project briefs (Segment 28). The Build Kit also includes HR-tuned versions of: the Multi-Model Compare tool (Segment 13) for cross-checking inclusive-language patterns, the System Prompt framework (Segment 15) for ACAS-aligned procedural language, and the Final Project rubric for HR-relevant capstone projects (admin, scaffolding, triage only — never automated decision-making about identified employees).
Why the no-automated-decisions line is bright
Because the HR Build Kit is engineered to refuse decisions about identified employees and candidates, your IG lead, DPO, and head of employment law can review and approve it once, and that approval covers every staff member who ever takes the course. Tools built during BUILD for HR do NOT make automated decisions about hiring, firing, performance, pay, promotion, or any other employment matter affecting an identified individual. They scaffold, triage, and suggest — humans decide. This is the line that keeps the work outside the EU AI Act's high-risk Annex III classification for the cases that can be designed to avoid it.
Section 5
Compliance & regulatory alignment
BUILD for HR is positioned to help your organisation meet (and document) compliance with multiple converging requirements.
Equality Act 2010 & PSED
The Equality Act applies to recruitment, employment, promotion, and termination decisions. Tools built with BUILD for HR explicitly check for protected-characteristic bias at the JD layer and refuse to make automated decisions about identified individuals. This is materially better than vendor HRTech-AI that filters CVs in a black-box pipeline.
EU AI Act (Annex III high-risk)
Annex III explicitly classifies AI systems used for recruitment, candidate filtering, performance evaluation, and employment termination as high-risk. The Build Kit is designed around the principle of human-in-the-loop decision-making — every prompt refuses to enter that automated decision-making territory, keeping the work outside the high-risk classification where it's possible to do so. Workflows that genuinely require automated decision-making would need separate Annex III conformity, which is outside the scope of this course.
UK GDPR Article 22
Article 22 gives individuals the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. HR is the canonical example — pay, promotion, termination, and hiring are all "legal or similarly significant". The Build Kit's refusal-to-decide design is the practical implementation of Article 22 at the prompt layer.
ICO guidance on AI and data protection
The ICO has published explicit guidance on AI in employment, including the expectation that organisations conduct DPIAs before deploying AI tools that process employee data. Tools built with BUILD for HR are auditable, version-controlled, and use the Cloudflare Worker proxy pattern — making the DPIA much simpler to write because the data flow is documented end-to-end.
⚖ A specific note on tribunal exposure
Employment tribunals are increasingly looking at whether AI was involved in adverse decisions about employees. "We used ChatGPT to draft the dismissal letter" is not a defence — it's a discovery item. The credible position is "we trained our HR team to use AI-assisted scaffolding for the procedural envelope, never for the substantive decision, on infrastructure we control with full audit trails, and every output explicitly required the human decision-maker to make the substantive call." BUILD for HR produces those artefacts. Several BUILD-graduated organisations have used the cohort artefacts in tribunal preparation — we can introduce you on request.
Section 6
Pricing — for HR teams
Three tiers based on cohort size. All prices are the organisation-wide commercial rate, not per-seat consumer pricing. Includes the full BUILD course, the HR Build Kit, the Manager Pack, inclusive-language standard template, and email support across the rollout.
Pilot Cohort
£4,500 / cohort
Up to 10 HR staff
Full 28-segment BUILD course
HR Build Kit (5 system prompts)
Inclusive-language standard template
Manager Pack + Capstone rubric
Email support across the 4 weeks
One IT/IG whitelist consultation
Department Rollout
£9,500 / cohort
Up to 25 HR staff
Everything in Pilot
Buddy pairing + cohort kickoff call
Mid-point manager check-in (60 min)
Capstone showcase facilitated by EverythingThreads
DPIA-aligned cohort impact report
One organisation-specific prompt customisation
Org-Wide
From £18,000
25–100+ HR staff across multiple sites
Everything in Department Rollout
Multiple parallel cohorts
Train-the-trainer for in-house champion
Custom HR Build Kit additions
White-label option for internal LMS
Quarterly check-ins for 12 months
All prices ex-VAT. Procurement-friendly invoicing available. Charity, public-sector, and educational-institution rates available — email for the public-sector tier. hello@everythingthreads.com
Section 7
FAQ — for HR leadership
Can our HR team actually do this? They're not developers.
That's exactly who BUILD is designed for. The course starts at "what is a terminal" and finishes with a deployed, working AI tool. Across hundreds of non-developer students — including HRBPs, recruiters, and L&D specialists — completion rates for cohorts with manager air cover sit in the 80%+ range. HR staff who finish BUILD become the in-house AI champions for everyone else.
Will this teach our HR team to use AI to make hiring or firing decisions?
No, and explicitly not. Every system prompt in the HR Build Kit refuses to make decisions about identified employees or candidates. The course is built around the principle of human-in-the-loop. Tools scaffold the procedural envelope, triage at the aggregate level, and suggest improvements — humans make the substantive decisions. This is the line that keeps the work compliant with UK GDPR Article 22 and outside the EU AI Act's high-risk Annex III classification where possible.
How is this different from the HRTech-AI vendors we already use?
Three differences. First, ownership: tools built with BUILD belong to your organisation, run on infrastructure you control, and your inclusive-language rules live in your own GitHub. Second, cost: vendor HRTech-AI charges per-employee per-year forever; BUILD is a one-off cohort cost plus ~£20/month in compute. Third, oversight: your IT, IG, and employment-law team can review the actual code and prompts — something the major HRTech-AI vendors don't offer. And critically: vendor tools that make automated decisions about candidates can't make the Article 22 / Annex III concerns go away — auditable refusal-to-decide design can.
What about employee data and UK GDPR?
The Cloudflare Worker proxy pattern (taught in Segment 11) keeps API keys server-side and routes requests through infrastructure your organisation controls. With regional pinning, that infrastructure is UK-only. Critically, BUILD teaches HR staff to think about employee data flow as a first-class concern, including special-category data under Article 9. Your DPO and IG lead can review the architecture once and approve every cohort that follows.
Who owns the tools the HR team builds?
Your organisation. The code lives in your organisation's GitHub. The infrastructure is provisioned in your organisation's accounts. Standard work-for-hire applies.
What if a staff member tries to use a Build Kit tool for an automated decision anyway?
The prompts are engineered to refuse. The Grievance Scaffold literally refuses to draft the outcome letter or characterise the conduct. The JD Bias Checker refuses to make recruitment decisions. The Survey Triager refuses to identify individuals. The refusal logic is the safety feature. We can't stop a user being creative, but the baseline behaviour of every prompt is "scaffold and triage, never decide about identified people."
How long does the rollout take from kick-off to first cohort?
Typically 3–4 weeks from contract signature to Day 1 of the cohort. Most of that time is DPIA, IG sign-off, and IT whitelisting. Once the course starts, it runs 4 weeks. Total elapsed time from "we want this" to "we have HR staff with deployed tools" is around 8 weeks.
Should we run SHARP first or go straight to BUILD?
For most HR teams: SHARP first across the whole department, BUILD second for the technically curious subset. SHARP is the AI literacy and risk vocabulary layer — 4 weeks, 2–4 hrs/week, no installs. After SHARP, your HR staff share a vocabulary for AI risk ("that's an M2, the Anchor Drag from a biased JD example") which makes BUILD's technical content land harder. Combined SHARP + BUILD pricing is significantly better than buying them separately. Email us for the combined tier.
Ready to talk?
If you're a CHRO, head of people, talent acquisition lead, L&D director, or ER & casework lead and you want to bring BUILD for HR & People Ops to your team, the next step is a 30-minute discovery call. We'll walk through your current AI use, your DPIA / Article 22 / Annex III constraints, and which cohort tier makes sense.
EverythingThreads is contact-by-email only. We reply within 2 working days. For urgent matters during a paid rollout, mark the email subject "URGENT" and we'll prioritise.