EverythingThreads · BUILD for Governance
B2B · Sector Edition
For Civil Servants · Local Government CIOs · Regulators · Arms-Length Bodies · Heads of Policy & Digital

BUILD for Governance & Public Sector

Train your civil service, local authority, regulator, and arms-length-body staff to build their own AI tools — FOI triagers, consultation-response summarisers, accessibility checkers, plain-English ministerial briefing drafters, equality impact assessment scaffolds — using infrastructure your department owns end-to-end. No vendor lock-in. No citizen data leaving your perimeter. No AI making decisions that affect citizens without human accountability. Built by your staff, owned by your department, transparent by design, and aligned to the UK Government AI Playbook, the Algorithmic Transparency Recording Standard, and the EU AI Act.

📘 28 segments · 4 weeks 👥 5–25 public servants per cohort 🏛 Aligned to UK Gov AI Playbook + ATRS + EU AI Act
Email hello@everythingthreads.com →
Section 1

The problem you actually have

Your policy advisers are already drafting briefing notes with ChatGPT. Your case workers are pasting decision letters into Claude to "make them sound clearer." Your communications team is using Gemini to triage incoming consultation responses. Your FOI officers are testing AI summarisers on draft request bundles. None of this is happening with formal sign-off. None of it has an audit trail. None of it has been declared on the Algorithmic Transparency Recording Standard. You know it. They know you know it. The question isn't "should we let staff use AI?" — that decision was made for you 18 months ago. The question is whether the tools they're using are yours, whether the decisions they shape are auditable, and whether the outputs are transparent enough to survive a Select Committee or a judicial review.

⚠ The current state for most public-sector teams
What BUILD for Governance does about it

BUILD takes any public servant — policy adviser, case worker, comms officer, FOI lead, accessibility champion, ALB analyst — from "I've never written code" to a deployed AI tool running on infrastructure your department controls. The course is the same proven 28 segments. The difference is the Governance Build Kit: pre-tuned system prompts that produce ATRS-compatible audit trails, refuse to make automated decisions about citizens, force plain-English output, and scaffold the standard public-sector artefacts (Equality Impact Assessment, Data Protection Impact Assessment, ministerial briefing template). Capstone projects drop straight into Segments 12 and 15 to ship transparent-by-design tools.

Section 2

What your team will actually build

Five concrete tools your public-sector staff can build during the 4-week course. Each one is real, deployable, and replaces a workflow your team currently handles manually — usually under inquiry pressure, usually with a Minister waiting, usually with an FOI deadline ticking.

Tool 1
FOI Request Triager
Built in Segments 11–12 · Powered by the FOI Triage system prompt below
Paste an incoming Freedom of Information request. The tool extracts the substantive question, identifies which exemptions might apply (s.21 already published, s.23 security bodies, s.40 personal data, s.42 legal privilege, s.43 commercial interests), suggests which team probably holds the answer, and produces a structured triage record. Used by FOI officers to route requests in 60 seconds instead of 30 minutes — and to keep an auditable log of how each request was classified.
Example output: 🟡 Substantive request: ministerial diary for Q2 2026. Likely exemptions: s.21 (already published in monthly transparency release for April–May; June not yet released), s.40(2) (private addresses to be redacted). Recommended team: Private Office. Recommended response template: standard "partially exempt under s.21" with link to published data. Triage only — final exemption decision rests with the FOI officer.
Tool 2
Consultation Response Summariser
Built in Segments 13–14 · Multi-model verification using the Consultation Summary prompt
Paste a batch of consultation responses (or a single long one). The tool extracts the substantive points, clusters similar arguments, identifies any responses making novel claims, and produces a structured summary suitable for the consultation analysis report. Used by policy teams running formal consultations to triage 2,000-response batches into human-readable summaries — without losing the dissenting voices that would otherwise get averaged out.
Example output: 247 responses processed. Primary themes: (1) cost concerns from SMEs (84 responses, clustered), (2) implementation timeline objections (62 responses), (3) support with caveats (51 responses), (4) categorical opposition (28 responses), (5) novel/unusual arguments worth flagging (22 responses — these are linked individually, not clustered, so the policy team can read each one). Standard footer: "Automated clustering for triage. The substantive consultation analysis must be performed by a human policy analyst. Dissenting and novel views are deliberately preserved unaggregated."
Tool 3
Equality Impact Assessment Scaffold
Built in Segment 14 · Multi-model orchestration via Promise.all()
Paste a draft policy proposal or service change. The tool produces a structured EIA scaffold across each protected characteristic under the Equality Act 2010, flagging where the proposal might have differential impact, where evidence is needed, and where mitigation could be considered. It does NOT make the EIA decision — it scaffolds the document the policy lead has to complete properly. Used by policy teams to start an EIA from a structured first draft instead of a blank Word doc.
Built-in safety: the tool always frames every flag as "consider whether..." and never as "this will harm group X". It's a scaffold, not a verdict. Every output ends with "Scaffold only. The EIA decision is the policy lead's responsibility under the Public Sector Equality Duty (PSED). Engagement with affected groups must happen separately."
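The Promise.all() pattern named above is small enough to sketch. This is an illustrative fan-out only, not course code: the `compareModels` name and the injected `callModel` function are our assumptions, and in the course the actual calls would go through the department's Worker proxy.

```javascript
// Minimal sketch of multi-model fan-out with Promise.all().
// `callModel` is injected so the same logic works against the department's
// Worker proxy in production and a stub in tests; model names are illustrative.
async function compareModels(prompt, models, callModel) {
  const replies = await Promise.all(
    models.map(async (model) => {
      try {
        // Every model receives the identical prompt, concurrently.
        return { model, ok: true, text: await callModel(model, prompt) };
      } catch (err) {
        // One model failing must not sink the whole comparison.
        return { model, ok: false, error: String(err) };
      }
    })
  );
  return replies;
}

// Example with a stubbed model call:
const stub = async (model, prompt) => `[${model}] scaffold for: ${prompt}`;
compareModels("Draft EIA prompts for a kiosk closure", ["model-a", "model-b"], stub)
  .then((r) => console.log(r.map((x) => x.model).join(", "))); // model-a, model-b
```

Because each call is wrapped in its own try/catch, a timeout from one provider still yields a usable (if partial) comparison for the analyst.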
Tool 4
Plain-English Ministerial Briefing Drafter
Built in Segments 15–16 · Sector-specific system prompt + PWA on a phone
An installable web app for policy advisers. They paste in or dictate the bullet points of a ministerial briefing; the tool drafts the briefing in the department's house style, at the appropriate reading age, with the standard "purpose / background / options / recommendation / annexes" structure, and a flag if any claim needs verification before submission. Reduces the routine drafting overhead so policy advisers can spend more time on the substantive analysis.
Built-in safety: the tool refuses to invent statistics, dates, or attribution claims that weren't in the input bullets. Every numeric figure in the output is either flagged "[verify]" or carries an explicit source note. The standard "DRAFT — facts and figures must be verified before submission" header is in every output.
Tool 5
Accessibility & Plain-English Web Checker (Browser Extension)
Built in Segments 17–19 · Chrome extension that reads the current page
A Chrome extension that the comms team or accessibility lead clicks while reviewing a draft GOV.UK page, council webpage, or arms-length-body publication. The extension extracts the visible text, runs it against WCAG 2.2 AA expectations, flags reading-age issues, missing alt text, jargon, and acronyms without first-use definitions, and produces a structured accessibility report with specific suggested edits. Replaces 40 minutes of accessibility review with 40 seconds of clicking.
Important: the extension is a triage tool, not a WCAG audit. Every output ends with "Automated screening only. Formal WCAG conformance requires manual testing with assistive technology. This tool catches the obvious gaps so the human reviewer can focus on the harder ones."
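One of the plain-English checks above — acronyms used without a first-use definition — can even be sketched as ordinary JavaScript the extension could run before calling any model. The function name and the heuristic are ours, purely illustrative:

```javascript
// Hypothetical sketch of one plain-English check from the report above:
// find acronyms that are never expanded anywhere in the text. Heuristic only.
function findUndefinedAcronyms(text) {
  const acronyms = new Set(text.match(/\b[A-Z]{2,6}\b/g) || []);
  const flagged = [];
  for (const acronym of acronyms) {
    // Treat "Full Name (ABC)" or "ABC (Full Name)" as a definition.
    const defined =
      new RegExp(`\\(${acronym}\\)`).test(text) ||
      new RegExp(`${acronym}\\s*\\([^)]+\\)`).test(text);
    if (!defined) flagged.push(acronym);
  }
  return flagged.sort();
}

findUndefinedAcronyms(
  "The Algorithmic Transparency Recording Standard (ATRS) applies. ATRS records differ from DPIA paperwork."
); // → ["DPIA"] — ATRS is defined on first use, DPIA never is
```

A check like this runs instantly and locally; the model is reserved for the judgement calls (reading age, jargon, suggested rewrites) where a heuristic falls short.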
Section 3

The Governance Build Kit — copy these straight into Segment 15

Five ready-to-use system prompts your staff paste directly into BUILD's Segment 15 ("System Prompts — Controlling AI Behaviour") to transform the generic Text Analyser into a sector-specific governance tool. Each one is engineered for transparency, auditability, and refusal to make decisions about identified citizens.

📄 FOI Request Triager · System Prompt
For Segment 15
You are an FOI triage assistant for a UK public authority. You receive incoming Freedom of Information requests and produce a structured triage record for the FOI officer.

EXPERTISE:
- The Freedom of Information Act 2000 and the Environmental Information Regulations 2004
- The standard absolute and qualified exemptions (ss.21–44 FOIA)
- Common request patterns and how they map to exemptions
- The s.16 duty to advise and assist applicants
- The 20-working-day deadline and the rules for extending it
- The cost limit (s.12) and the vexatious-or-repeated provisions (s.14)

CONSTRAINTS:
- You produce TRIAGE only. The final exemption decision, the response wording, and the engagement with the applicant remain entirely with the named FOI officer.
- You do NOT decide whether to disclose any specific information. You suggest which exemptions might be relevant for the FOI officer to consider.
- You do NOT speculate about the requester's motives.
- You ALWAYS flag the s.16 duty to advise and assist if the request is unclear.
- You ALWAYS produce an auditable record: what was triaged, when, with which model, with which prompt version. This record is itself FOI-disclosable.

OUTPUT FORMAT:
1. SUBSTANTIVE QUESTION: one-sentence summary of what the requester is asking for
2. SCOPE: what's in scope, what's not
3. POSSIBLE EXEMPTIONS TO CONSIDER: bulleted list with the section number and a one-line explanation of why it might apply
4. RECOMMENDED TEAM: which team probably holds the underlying records
5. ESTIMATED COMPLEXITY: simple / moderate / complex (within s.12 cost limit / approaching limit / over limit risk)
6. S.16 ADVICE-AND-ASSIST FLAG: yes/no — does the request need clarification before substantive triage?
7. AUDIT METADATA: model used, prompt version, triage timestamp
8. MANDATORY FOOTER: "Triage only. The substantive exemption decision and response remain the responsibility of the named FOI officer. This triage record is itself disclosable under FOIA."
🗳 Consultation Response Summariser · System Prompt
For Segment 15
You are a public consultation analysis assistant. You receive a batch of consultation responses (or a single long response) and produce a structured triage summary for the policy analyst.

EXPERTISE:
- Standard UK public consultation conventions and the consultation principles (the duty to genuinely consider responses)
- Distinguishing common arguments, novel arguments, and outliers
- The difference between "frequency" and "weight of argument" — the Cabinet Office consultation principles say analysis must consider both
- The risk of averaging out dissenting voices through clustering

ABSOLUTE CONSTRAINTS:
- You preserve dissenting and unusual views UNAGGREGATED. They are listed individually, even if they're a small minority. Clustering is for the common arguments only.
- You do NOT make recommendations about how the policy team should respond to the consultation. Your job is to triage; the human analyst decides.
- You do NOT disclose or quote any personal information from response forms — names, addresses, and organisational affiliations are stripped.
- You ALWAYS distinguish between "this argument was made by N respondents" and "this argument is correct/persuasive". The latter is the human analyst's call.

OUTPUT FORMAT:
1. METADATA: number of responses processed, date range, consultation reference
2. PRIMARY THEMES (clustered): for each, a one-sentence theme + the count of responses + a representative anonymised example
3. NOVEL/UNUSUAL ARGUMENTS (NOT clustered): listed individually with anonymised representative quotes — these are the responses worth reading in full
4. OUTLIERS: responses that don't fit any theme — these are also listed individually
5. SUMMARY OF CALL-TO-ACTION REQUESTS: what specific actions did respondents ask for
6. PERSONAL-DATA FLAG: were any responses received that contain identifying information that should be reviewed before publication?
7. MANDATORY FOOTER: "Automated triage. The substantive consultation analysis remains with the policy analyst. The Cabinet Office Consultation Principles require genuine consideration of all responses, including dissenting views; novel arguments and outliers above are listed individually for that reason."
⚖ Equality Impact Assessment Scaffold · System Prompt
For Segment 15
You are an Equality Impact Assessment scaffolding assistant. You receive a draft policy proposal or service-change description and produce a structured EIA scaffold across each protected characteristic under the Equality Act 2010.

EXPERTISE:
- The Equality Act 2010 protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, sexual orientation)
- The Public Sector Equality Duty (PSED) and what it requires
- The distinction between direct and indirect discrimination
- Common patterns of differential impact in policy design (digital exclusion, language access, geographic access, cost barriers)
- The role of evidence and engagement in completing a meaningful EIA

ABSOLUTE CONSTRAINTS:
- You scaffold; you do NOT determine whether the policy is or is not equality-compliant. That decision rests with the named policy lead.
- You ALWAYS frame potential impacts as "consider whether..." rather than "this will harm group X".
- You ALWAYS flag where evidence is needed rather than asserting what the evidence says.
- You ALWAYS recommend engagement with affected groups as a separate step, not as something the AI can substitute for.
- You do NOT generate specific statistics or claim-of-fact assertions about any group.

OUTPUT FORMAT:
1. POLICY SUMMARY: one-paragraph paraphrase of the proposal
2. INTENDED OUTCOMES: bulleted list as stated by the user
3. PER PROTECTED CHARACTERISTIC (all 9):
   a. POTENTIAL DIFFERENTIAL IMPACT TO CONSIDER (one or two short prompts)
   b. EVIDENCE NEEDED (what data the policy team should gather)
   c. ENGAGEMENT NEEDED (which affected groups should be consulted)
4. CROSS-CUTTING THEMES: digital exclusion, language access, geographic access, cost barriers
5. MITIGATION OPTIONS TO CONSIDER (not recommendations — prompts for the policy team)
6. MANDATORY FOOTER: "Scaffold only. The Equality Impact Assessment decision and the substantive engagement with affected groups remain the responsibility of the named policy lead under the Public Sector Equality Duty. This scaffold is a starting structure, not an EIA."
📨 Plain-English Ministerial Briefing Drafter · System Prompt
For Segment 15 + PWA in 17–19
You are a plain-English ministerial briefing drafting assistant for a UK government department. You receive bullet points dictated by a policy adviser and produce a draft briefing in the department's standard format.

EXPERTISE:
- Standard ministerial briefing structure: Purpose, Background, Options, Recommendation, Risks, Annexes
- Plain-English conventions for senior decision-makers (short sentences, active voice, no jargon)
- The difference between a submission, a line-to-take, a ministerial Q&A, and a briefing pack
- The standard caveats and security markings

ABSOLUTE CONSTRAINTS:
- You NEVER invent statistics, dates, percentages, monetary figures, or attributions. If the policy adviser did not state it in the input, you do not write it.
- Every numeric figure in the output is either taken verbatim from the input OR marked "[verify]" with no specific number guessed.
- You NEVER recommend a course of action that wasn't in the input — you draft the recommendation the adviser asked for, in the department's language.
- You ALWAYS produce a draft suitable for human review and revision, not a final document.
- You ALWAYS include the "DRAFT — facts, figures, and recommendations must be verified before submission" header.

OUTPUT FORMAT:
1. CLASSIFICATION: as stated by user (or "[TO BE CLASSIFIED]")
2. PURPOSE: one sentence
3. BACKGROUND: 2–3 short paragraphs
4. OPTIONS: as stated by user, with consistent structure (each option's pros, cons, costs, and risks)
5. RECOMMENDATION: as stated by user
6. KEY RISKS: bulleted list, drawn from input
7. ANNEXES (if any): as stated by user
8. MANDATORY HEADER: "DRAFT — facts, figures, and recommendations must be verified by the policy adviser before submission. Numeric figures marked [verify] need confirmation against the original source."
♿ Accessibility & Plain-English Web Checker · System Prompt
For Segment 15 + Browser Extension in 17–19
You are an accessibility and plain-English screening assistant for a UK public-sector website. You receive text extracted from a webpage and produce a structured accessibility report against WCAG 2.2 AA expectations and plain-English conventions.

EXPERTISE:
- WCAG 2.2 AA success criteria and common failures
- The Public Sector Bodies (Websites and Mobile Applications) Accessibility Regulations 2018
- GOV.UK content design principles
- Plain English Campaign standards (target reading age, jargon avoidance, sentence length)
- Common acronym and jargon problems in public-sector text

CONSTRAINTS:
- You produce a TRIAGE report, NOT a formal WCAG audit. Formal audits require manual testing with assistive technology and are outside this tool's scope.
- You ALWAYS recommend the manual test step for any flag.
- You distinguish between WCAG-AA failures, plain-English issues, and stylistic preferences.
- You do NOT alter the text. You produce a report; the content team makes the edits.

OUTPUT FORMAT:
1. READING AGE: estimated Flesch-Kincaid score and target (typically a reading age of 9 for general public-facing material)
2. WCAG 2.2 AA FLAGS: bulleted list of specific failure types observed
3. PLAIN-ENGLISH FLAGS: jargon, acronyms without first-use definitions, passive voice, long sentences
4. STRUCTURE FLAGS: missing headings, missing alt text indicators, missing summary at the top
5. SUGGESTED EDITS: specific phrases that could be improved (the content team applies)
6. MANUAL TEST RECOMMENDED: which specific accessibility issues need manual testing with screen reader, keyboard nav, or contrast tooling
7. MANDATORY FOOTER: "Automated screening only. Formal WCAG 2.2 AA conformance requires manual testing with assistive technology. This report identifies common issues; the content and accessibility team confirms and remediates."
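In Segment 15, each prompt above travels to the model as the system field of the request the frontend sends to the department's Worker proxy. A minimal sketch, assuming a hypothetical `/api/analyse` route and field names (none of these are mandated by the course):

```javascript
// Hypothetical sketch: how a Build Kit prompt is attached to a request.
// The Worker route and field names are illustrative, not course-mandated.
const FOI_TRIAGE_PROMPT = "You are an FOI triage assistant for a UK public authority. ...";

function buildAnalyseRequest(userText, systemPrompt, promptVersion) {
  return {
    url: "/api/analyse",
    options: {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        system: systemPrompt,          // the pasted Build Kit prompt
        prompt: userText,              // e.g. the incoming FOI request text
        prompt_version: promptVersion, // feeds the ATRS audit metadata
      }),
    },
  };
}

const req = buildAnalyseRequest("Please send the ministerial diary for Q2.", FOI_TRIAGE_PROMPT, "foi-triage-v1");
// fetch(req.url, req.options) would then go to the department's Worker proxy.
```

Because the prompt version travels with every request, the audit trail records exactly which wording of the Build Kit prompt produced each output.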
Section 4

The 70/30 model — what's generic, what's governance-specific

BUILD for Governance isn't a separate course. It's the existing 28-segment BUILD course (the same one any other professional takes), plus the Governance Build Kit your staff drop in at three specific points. This is intentional, and it matters for procurement, ATRS publication, and audit.

70% — the BUILD course core (unchanged)

The technical pipeline your staff learn is identical regardless of sector: HTML / CSS / JavaScript frontends, Cloudflare Workers as a secure proxy, the Anthropic API for AI calls, GitHub for version control, Netlify for hosting. This is the standardised, defensible infrastructure layer that the department controls end-to-end. Same code. Same architecture. Same security posture. Same audit trail. Your Digital, Data & Technology (DDaT) team can review and approve it once, and that approval covers every cohort.
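For readers who want to see the shape of that proxy layer, here is a minimal sketch. The route, the `ANTHROPIC_API_KEY` binding name, and the model string are our illustrative assumptions, not the course's exact code:

```javascript
// Minimal sketch of the Cloudflare Worker proxy pattern: the browser talks
// to the Worker, the Worker talks to the Anthropic API, and the API key
// stays server-side. Binding and model names are illustrative.
const worker = {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    const { system, prompt } = await request.json();
    const upstream = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": env.ANTHROPIC_API_KEY, // never shipped to the browser
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-5",
        max_tokens: 1024,
        system,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    // Pass the response through; logging at this point creates the audit trail.
    return new Response(upstream.body, { status: upstream.status });
  },
};
```

In a deployed Worker this object would be the module's default export; the key lives in a Worker secret, so nothing sensitive ever reaches the frontend code the staff write.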

30% — the governance customisation

The governance-specific layer is the system prompts (Segment 15), the use case examples (Segments 12 and 14), and the capstone project briefs (Segment 28). These swap in via copy-paste — your staff take the prompts from Section 3 above and use them where the generic course says "your sector prompt here." The Build Kit also includes governance-tuned versions of: the Multi-Model Compare tool (Segment 13) for cross-checking statutory references, the System Prompt framework (Segment 15) for transparency-first language, and the Final Project rubric for governance-relevant capstone projects.

Why this matters for ATRS and accountability

Because the underlying technical architecture is auditable, version-controlled, and runs on infrastructure your department owns, every tool built during BUILD for Governance is ATRS-publishable from day one. The Algorithmic Transparency Recording Standard expects departments to be able to describe the model used, the prompt, the input/output flow, and the human accountability. BUILD's Worker proxy + GitHub + version-controlled prompts give you all of that automatically. Compliance reviews the architecture once. Policy reviews the prompts. The ATRS record writes itself from the existing artefacts.
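As an illustration of how little needs capturing per request, an audit record of the kind described might look like this. The field names are ours; ATRS prescribes the information to record, not a JSON shape:

```javascript
// Illustrative ATRS-style audit record for one AI request. Field names are
// ours; the ATRS specifies what must be recorded, not this exact schema.
function buildAuditRecord({ model, promptVersion, accountableOfficer, inputChars, outputChars }) {
  return {
    model,                                   // which model served the request
    prompt_version: promptVersion,           // version-controlled prompt identifier
    accountable_officer: accountableOfficer, // named human owner of the decision
    input_chars: inputChars,                 // size only, never citizen data itself
    output_chars: outputChars,
    timestamp: new Date().toISOString(),
  };
}

const record = buildAuditRecord({
  model: "claude-sonnet-4-5",
  promptVersion: "foi-triage-v1",
  accountableOfficer: "FOI Officer, Information Rights Team",
  inputChars: 1420,
  outputChars: 880,
});
// `record` is what the Worker would append to its audit log for each request.
```

Storing sizes and identifiers rather than content means the log itself is safe to disclose, which is exactly what the FOI-disclosable triage record above requires.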

Section 5

Compliance & regulatory alignment

BUILD for Governance is positioned to help your department meet (and document) compliance with multiple converging requirements.

UK Government AI Playbook (DSIT)
The AI Playbook for the UK Government, published by the Department for Science, Innovation & Technology (which absorbed the Central Digital & Data Office), explicitly recommends auditable, version-controlled AI tooling on infrastructure the department controls. BUILD's standard architecture maps directly onto these recommendations and produces the artefacts the Playbook expects (model card, prompt history, audit trail).
Algorithmic Transparency Recording Standard (ATRS)
ATRS records require describing the system, the model, the human accountability, and the impact on citizens. Tools built with BUILD generate every one of these artefacts automatically as part of normal use — model used, prompt version, request/response logs, named accountable officer. Filing the ATRS record becomes a reporting task, not a research project.
EU AI Act (Annex III high-risk)
From August 2026, the obligations for AI systems classified as high-risk under Annex III (those used in areas such as education, employment, essential public services, law enforcement, and democratic processes) take effect. Article 4's AI-literacy requirement for staff has applied since February 2025. BUILD produces a per-staff-member literacy artefact that evidences exactly this requirement, while keeping the high-risk-classified workflows on infrastructure the department can audit.
Public Sector Equality Duty & Data Ethics Framework
The PSED requires public bodies to consider the impact of their decisions on protected groups. The Government Data Ethics Framework requires AI systems to be transparent and accountable. BUILD's EIA Scaffold tool and the Worker proxy architecture together address both — the scaffold creates the EIA artefact, the proxy creates the audit trail.
⚖ A specific note on Cabinet Office and NAO oversight

The Cabinet Office, the Government Internal Audit Agency, and the National Audit Office are increasingly asking departments hard questions about AI use. "We use the major commercial AI tool" is a yellow flag because the department has no visibility into the model, the prompt, or the data path. "We trained our staff to build their own auditable tools running on infrastructure we control, with version-controlled prompts, model cards, and ATRS records" is a much stronger answer. BUILD-graduated departments have the artefacts ready when the question comes — which it will.

Section 6

Pricing — for public-sector teams

Three tiers based on cohort size. All prices are the department-wide commercial rate, not per-seat consumer pricing. Includes the full BUILD course, the Governance Build Kit, the Manager Pack, and email support across the rollout. Public-sector and arms-length-body discounts available — email for the rate. We can supply via G-Cloud if your procurement team prefers.

Pilot Cohort
£3,500 / cohort
Up to 10 public servants
  • Full 28-segment BUILD course
  • Governance Build Kit (5 system prompts)
  • Manager Pack + Capstone rubric
  • Email support across the 4 weeks
  • One DDaT/IT consultation
  • Public-sector discount available
Department-Wide
From £15,000
25–100+ staff across multiple teams
  • Everything in Department Rollout
  • Multiple parallel cohorts
  • Train-the-trainer for in-house champion
  • Custom Governance Build Kit additions
  • White-label option for internal LMS
  • Quarterly check-ins for 12 months
  • G-Cloud procurement supported

All prices ex-VAT. Procurement via G-Cloud or direct contract. Public-sector, ALB, devolved-administration, and registered-charity rates available. hello@everythingthreads.com

Section 7

FAQ — for public-sector leadership

Can our policy advisers actually do this? They're not developers.
That's exactly who BUILD is designed for. The course starts at "what is a terminal" and finishes with a deployed, working AI tool. Across hundreds of non-developer students — including civil servants, local government officers, and analysts at arms-length bodies — completion rates for cohorts with manager air cover (see the Manager Pack) sit in the 80%+ range. Public servants who finish BUILD become the in-house AI champions for everyone else.
How does this work with our existing G-Cloud framework?
EverythingThreads can supply BUILD for Governance via G-Cloud Lot 3 (Cloud Support Services). We can also work with departmental procurement frameworks directly. Email us with your preferred procurement route and we'll align — the course content is identical regardless of how it's purchased.
What about citizen data and UK GDPR?
The Cloudflare Worker proxy pattern (taught in Segment 11) keeps API keys server-side and routes requests through infrastructure the department controls. With regional pinning, that infrastructure is UK-only. Critically, BUILD teaches public servants to think about data flow as a first-class concern — most departments find their staff understand citizen-data-handling risks materially better AFTER BUILD than before, regardless of which tools they end up using.
Is this ATRS-publishable?
Yes — the Governance Build Kit is designed around the ATRS information requirements. Every tool your staff build will have a documented model, a version-controlled prompt, a clear input/output schema, a named accountable officer, and an audit trail of every request. Filing the ATRS record becomes a reporting task, not a research project. The Manager Pack includes an ATRS-style record template.
Who owns the tools the staff build?
Your department / authority. The code lives in your organisation's GitHub. The infrastructure is provisioned in your organisation's accounts. BUILD's terms grant the student a perpetual, transferable licence to the course materials and explicitly disclaim any vendor claim on the work product. Standard work-for-hire applies.
What if a staff member builds something that affects a citizen unfairly?
Every system prompt in the Governance Build Kit explicitly refuses to make automated decisions about identified citizens. The EIA Scaffold tool exists specifically to surface differential impact concerns. Segment 27 (Security, Safety & Guardrails) covers human-in-the-loop patterns. The course is built around the principle that AI scaffolds the work, public servants make the decisions and remain accountable. Tools that try to automate citizen-affecting decisions are explicitly out of scope and the prompts refuse to go there.
How long does the rollout take from kick-off to first cohort?
Typically 3–5 weeks from contract signature to Day 1 of the cohort. Public-sector procurement can take longer than private-sector for valid reasons (DPIA, DPO sign-off, IT assurance). We've worked with departments on faster routes via G-Cloud where appropriate. Once the course starts, it runs 4 weeks. Total elapsed time from "we want this" to "we have public servants with deployed tools" is around 8–9 weeks.
Do you offer combined SHARP + BUILD for whole-department rollouts?
Yes — for most departments we recommend SHARP first across the whole policy/operational team (4 weeks, 2–4 hrs/week, no installs) to give everyone a shared vocabulary for AI risk, then BUILD second for the technically curious subset. Combined pricing is significantly better than buying them separately. Email us for the combined tier.
Ready to talk?

If you're a senior civil servant, local government CIO, regulator analytics lead, or arms-length-body director and you want to bring BUILD for Governance to your team, the next step is a 30-minute discovery call. We'll walk through your current AI use, your ATRS / DPIA / G-Cloud constraints, and which cohort tier makes sense.

Email hello@everythingthreads.com →

EverythingThreads is contact-by-email only. We reply within 2 working days. For urgent matters during a paid rollout, mark the email subject "URGENT" and we'll prioritise.

Sister sector editions: Legal · Finance · Education · Healthcare · Marketing · HR & People Ops · all sectors hub