For Headteachers · Heads of Department · Deans · Directors of Learning & Teaching · LMS Leads
BUILD for Education Teams
Train your teachers, lecturers, librarians, and learning-technology staff to build their own AI tools — citation hallucination detectors, exam-board cross-checkers, lesson-plan triagers, plagiarism-aware feedback drafters — using infrastructure your school, college, or university owns end-to-end. No vendor lock-in. No student data leaving your perimeter. No AI-invented research papers in a Year 11 essay. Built by your staff, owned by your institution, defensible to Ofsted, the QAA, and the EU AI Act.
📘 28 segments · 4 weeks · 👥 5–25 staff per cohort · 🎓 Ofsted / QAA / JCQ / EU AI Act aligned
Your students are already using ChatGPT, Claude, Gemini, and Perplexity for coursework. Your staff are already using them to draft lesson plans, mark essays, write reports, and triage emails. You know it. They know you know it. Whether your institution has a policy or not, the tools are in everyone's browsers and the work is going through them. The question isn't "should we let staff use AI?" — that decision was made for you 18 months ago. The question is whether the tools they're using are yours, and whether the citations they produce can be verified.
⚠ The current state for most institutions
Hallucinated citations are everywhere. AI tools routinely invent author names, fabricate DOIs, attribute real claims to fake papers, and reference journal articles that do not exist. Students submit these in coursework. Staff cite them in lesson notes. Researchers occasionally let them through into conference papers. The reputational risk to a higher-education institution that publishes a hallucinated citation is real and growing.
Discredited research is being recycled. Public AI tools are still confidently teaching the VAK learning styles model, the Mozart Effect, the 10% brain myth, and brain-gym. Anything that was popular when the model's training data was assembled gets quoted with the same authority as a meta-analysis. Your staff need to be able to catch this without doing a literature review every time they read an AI summary.
Student data is leaking. Every time a teacher pastes a marked essay or a SEN report into a public AI tool, that student's data leaves your perimeter. Most institutions have no audit trail of which student work went through which model on which day. When the DfE, the ICO, or a parent asks, the answer is "we don't know."
Vendor edu-AI is expensive and locked-in. The major education-AI vendors charge per-pupil-per-year at rates that compound forever. Tools your team builds with BUILD cost roughly £20/month in compute, total, regardless of how many students or staff use them.
What BUILD for Education does about it
BUILD takes any educator — classroom teacher, head of department, lecturer, librarian, learning-technologist — from "I've never written code" to a deployed AI tool running on infrastructure your institution controls. The course is the same proven 28 segments. The difference is the Education Build Kit: pre-tuned system prompts that refuse to fabricate sources, cross-checking patterns for citations, exam-board alignment templates, and capstone project briefs that drop straight into Segments 12 and 15 to ship citation-aware, board-aware, plagiarism-aware tools.
Section 2
What your team will actually build
Five concrete tools your staff can build during the 4-week course. Each one is real, deployable, and addresses a workflow your team already does manually — usually in evenings and weekends, usually under marking pressure, usually with the nagging worry that ChatGPT just made something up.
Tool 1
Citation Hallucination Detector
Built in Segments 11–12 · Powered by the Citation Verification system prompt below
Paste any AI-generated text or student essay containing academic citations. The tool extracts every reference, runs each one through two independent models, cross-checks DOIs and journal patterns, and flags anything that doesn't match a verifiable structure. Used by markers, librarians, and exam invigilators to catch hallucinated sources before they enter the academic record. The single most important defensive tool any HE institution using AI can deploy.
Example input: "Recent research by Patel and Hammond (2023) in Educational Psychology Quarterly volume 47 pages 218–244 demonstrates a strong correlation between..."
Example output: ⚠ FLAGGED — Journal name "Educational Psychology Quarterly" exists but volume 47 was published in 1979, not 2023. Author pair "Patel and Hammond" — no record found in cross-referenced model. Recommended: verify in CrossRef / Google Scholar / Retraction Watch before relying on this citation. If the student submitted this, follow your institution's academic integrity protocol.
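One of the cross-checks described above — matching a DOI prefix against known publisher registrations — can be sketched as a small pure function. This is an illustrative sketch only, not the course's actual implementation: the prefix list is a hypothetical sample, and real verification still requires a CrossRef lookup.

```javascript
// Hypothetical sketch: flag citations whose DOI prefix doesn't match a known
// publisher. The prefix→publisher map is a small illustrative sample; a real
// tool would still verify every hit against CrossRef.
const KNOWN_DOI_PREFIXES = {
  "10.1016": "Elsevier",
  "10.1007": "Springer",
  "10.1002": "Wiley",
  "10.1080": "Taylor & Francis",
  "10.1093": "OUP",
  "10.1017": "CUP",
  "10.1177": "Sage",
};

function checkDoiPlausibility(doi) {
  const match = /^(10\.\d{4,9})\//.exec(doi);
  if (!match) {
    return { doi, verdict: "FLAGGED", reason: "not a structurally valid DOI" };
  }
  const publisher = KNOWN_DOI_PREFIXES[match[1]];
  return publisher
    ? { doi, verdict: "PLAUSIBLE", reason: `prefix registered to ${publisher}` }
    : { doi, verdict: "FLAGGED", reason: "prefix not in known-publisher list — verify in CrossRef" };
}
```

A "PLAUSIBLE" verdict here means only that the DOI is structurally consistent with a real publisher — exactly the distinction the system prompt in Section 3 enforces.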
Tool 2
Discredited Research Filter
Built in Segments 13–14 · Multi-model verification using the Pedagogy Check prompt
Paste any AI-generated lesson plan, CPD note, or pedagogical justification. The tool checks the content against a curated list of debunked or contested educational claims (learning styles, the Mozart Effect, brain-gym, the 10% brain myth, multiple intelligences as a teaching framework, fixed mindset misreadings, etc.) and flags anything that's been retracted or seriously questioned. Used by CPD coordinators and heads of department to catch outdated theory before it ships into Year 7.
Example input: "We should differentiate by ensuring visual learners get diagrams, auditory learners get podcasts, and kinaesthetic learners get hands-on activities."
Example output: 🔴 DISCREDITED — The VAK (Visual-Auditory-Kinaesthetic) learning styles model has been repeatedly shown to lack empirical support; differentiating instruction by self-reported learning style does not improve outcomes (Pashler et al. 2008; Husmann & O'Loughlin 2019). Suggested replacement: differentiate by prior knowledge, by misconception, or by language scaffolding need — these have evidence behind them. EEF Teaching & Learning Toolkit is a strong starting reference.
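A first-pass screen for the curated debunked-claims list can run as plain pattern matching before any model call, so obvious hits are flagged even if the model misses them. This is a hedged sketch, not the course's implementation — the pattern list below covers only the examples named on this page.

```javascript
// Hypothetical sketch: keyword screen against a curated list of debunked
// claims. Entries are the examples from this page; a production list would be
// longer and maintained by the CPD coordinator.
const DEBUNKED = [
  { pattern: /learning styles?|visual learners?|auditory learners?|kinaesthetic learners?/i, label: "VAK learning styles" },
  { pattern: /mozart effect/i, label: "Mozart Effect" },
  { pattern: /10% of (the|our|your) brain/i, label: "10% brain myth" },
  { pattern: /brain[- ]gym/i, label: "Brain Gym" },
];

// Returns the labels of any debunked claims the text appears to rely on.
function screenForDebunkedClaims(text) {
  return DEBUNKED.filter(({ pattern }) => pattern.test(text)).map(({ label }) => label);
}
```

Anything this screen catches is then passed to the model with the Pedagogy Check prompt for the "status + evidence + suggested replacement" analysis shown in the example output above.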
Tool 3
Exam-Board Mark Scheme Aligner
Built in Segment 14 · Multi-model orchestration via Promise.all()
Paste a draft exam question, mark scheme, or model answer. The tool checks it against the published assessment criteria for the relevant exam board (AQA, OCR, Edexcel/Pearson, WJEC, SQA, IB) and flags any phrasing that doesn't match the board's command words, mark band descriptors, or assessment objectives. Used by heads of department to catch drift between teacher-written practice papers and the actual board's expectations.
Example output: 🟡 PARTIAL ALIGNMENT — Question uses "describe" but the AQA GCSE History mark scheme for this unit uses "explain why" as the comparable command word, which carries a higher AO weighting. Suggested rewrite: change "Describe the impact of..." to "Explain why X had a significant impact on Y", to better mirror AQA's published 2024 mark scheme for this paper.
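The `Promise.all()` orchestration named above has a simple core: fire the same question at two or more models in parallel and compare the answers. A minimal sketch, assuming a `callModel(model, prompt)` function like the one built earlier in the course — the model names here are placeholders:

```javascript
// Hypothetical sketch of the Promise.all() cross-check pattern: query several
// models in parallel and report whether they agree. callModel is assumed to be
// the async fetch wrapper built earlier in the course.
async function crossCheck(prompt, callModel, models = ["model-a", "model-b"]) {
  // All calls start concurrently; total latency is the slowest call, not the sum.
  const answers = await Promise.all(models.map((m) => callModel(m, prompt)));
  const agreement = answers.every((a) => a.trim() === answers[0].trim());
  return { agreement, answers };
}
```

Disagreement between models is the signal: an answer only one model produces is exactly the kind of drift (or hallucination) a human should look at.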
Tool 4
Plagiarism-Aware Feedback Drafter
Built in Segments 15–16 · Sector-specific system prompt + PWA on a phone
An installable web app on a teacher's phone. They paste in or photograph bullet points summarising a student submission's strengths and weaknesses; the tool drafts written feedback in the institution's house style, with appropriate developmental language, AND flags anywhere the input bullets suggest the student may have used AI without disclosure (specific stylistic markers, citation patterns, vocabulary mismatches with prior work). The feedback drafter is the productive bit; the AI-disclosure flag is the safety bit.
Built-in safety: the tool refuses to make accusations of academic misconduct, always frames AI-pattern flags as "investigate further", and inserts the institution's standard "this is a draft — please review and personalise before sharing" header. Personal data of named students is never stored beyond the request itself.
Tool 5
Source-Checker Browser Extension
Built in Segments 17–19 · Chrome extension that reads the current page
A Chrome extension a student or staff member clicks while reading any webpage — a news article, a blog post, a Wikipedia entry, a TikTok summary, an AI-generated essay. The extension extracts the substantive claims, identifies the cited sources, cross-checks each source for retraction or known weakness, and produces a one-page "source quality report" that flags weak, missing, or fabricated references. Replaces 40 minutes of fact-checking with 40 seconds of clicking.
Important: the extension is a triage tool, not a fact-check verdict. Every output ends with "this is automated source-quality screening; substantive evaluation requires human judgement and access to the original sources." The output is designed to make the human work faster and the student work more honest, not to replace either.
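One step the extension performs — pulling inline citations out of page text so each can be cross-checked — can be sketched as a regular expression over the extracted text. This is illustrative only: the regex below handles simple "(Author, 2023)" patterns and is an assumption about the approach, not the extension's actual extraction logic.

```javascript
// Hypothetical sketch: extract "(Author, 2023)"-style inline citations from
// page text for downstream cross-checking. A real extension would handle more
// citation formats (et al., numbered styles, footnotes) than this regex does.
function extractInlineCitations(text) {
  const pattern = /\(([A-Z][A-Za-z'-]+(?:\s(?:and|&)\s[A-Z][A-Za-z'-]+)?),?\s(\d{4})\)/g;
  return [...text.matchAll(pattern)].map(([, authors, year]) => ({ authors, year }));
}
```

Each extracted pair then feeds the Source-Checker prompt in Section 3 for the per-claim quality verdict.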
Section 3
The Education Build Kit — copy these straight into Segment 15
Five ready-to-use system prompts your staff paste directly into BUILD's Segment 15 ("System Prompts — Controlling AI Behaviour") to transform the generic Text Analyser into a sector-specific education tool. Each one instructs the model to refuse to fabricate sources, refuse to invent research, and follow evidence-citing patterns the academic community recognises.
📚 Citation Hallucination Detector · System Prompt
For Segment 15
You are an academic research librarian specialising in citation verification across UK, EU, and US academic publishing. You verify citations for plausibility and flag any that appear to be hallucinated.
EXPERTISE:
- Standard academic citation formats (Harvard, APA 7, Chicago, MHRA, Vancouver, OSCOLA)
- DOI structure and known DOI prefixes for major publishers (Elsevier, Springer, Wiley, Taylor & Francis, OUP, CUP, Sage)
- Typical journal volume/issue/year/page ranges
- Common hallucination patterns in AI-generated academic text (fabricated DOIs, wrong volume numbers, real journals with fake papers, real authors with fake collaborations)
- The Retraction Watch database conventions for flagging known retractions
CONSTRAINTS:
- You CANNOT confirm a citation is real. You can only flag plausibility based on format, DOI structure, and journal-volume-year matching.
- You ALWAYS recommend verification in an authoritative source (CrossRef, Google Scholar, the publisher's site, or Retraction Watch) before any citation is used in an academic context.
- If a journal name is generic-sounding ("Journal of X Studies", "International X Quarterly") combined with an unusual citation, you flag it with extra caution.
- You do NOT generate citations of your own.
- You do NOT speculate about academic misconduct. You flag patterns; humans investigate.
OUTPUT FORMAT:
For each citation found:
1. CITATION: [the exact citation as written]
2. PLAUSIBILITY: ✓ PLAUSIBLE / ⚠ FLAGGED / 🔴 LIKELY HALLUCINATED
3. SPECIFIC CONCERNS: bulleted list (e.g. "DOI prefix doesn't match any major publisher", "journal volume number incompatible with publication year")
4. VERIFY IN: which database to check (CrossRef / Google Scholar / Retraction Watch / publisher site)
After all citations:
MANDATORY FOOTER: "⚠ This is automated plausibility checking only. Every academic citation must be verified in an authoritative database before use in submitted coursework, lesson notes, published research, or any artefact that enters the academic record. Hallucinated citations are common in AI-generated text and have caused reputational and academic-integrity incidents in multiple institutions."
🧪 Discredited Research Filter · System Prompt
For Segment 15
You are an evidence-based education research reviewer. You receive lesson plans, CPD notes, training materials, or pedagogical justifications and flag any claims that rely on debunked, retracted, or seriously contested research.
EXPERTISE:
- The major debunked education myths still circulating in CPD: VAK learning styles, the Mozart Effect, the 10% brain myth, brain-gym, multiple intelligences as a teaching framework (vs as a theory), Maslow's hierarchy as a strict pedagogy, "left-brain/right-brain" learners, fixed-vs-growth mindset oversimplifications
- The major sources of evidence-based pedagogy: EEF Teaching & Learning Toolkit, the Sutton Trust, Cognitive Load Theory (Sweller), Rosenshine's Principles of Instruction, the Education Endowment Foundation reviews
- The distinction between "this theory is contested" and "this theory has been falsified"
CONSTRAINTS:
- You provide flagging and triage, NOT a final pedagogical verdict.
- You ALWAYS suggest a stronger replacement when you flag a debunked claim. Flagging without replacement is not useful.
- You distinguish between "discredited", "contested", and "unproven but harmless" — not everything that's not gold-standard is wrong.
- You do NOT shame the staff member who wrote the original. The job is to upgrade the document, not to embarrass anyone.
OUTPUT FORMAT:
For each flagged claim:
1. CLAIM (as written)
2. STATUS: 🔴 DISCREDITED / 🟡 CONTESTED / 🟢 EVIDENCE-BASED
3. EVIDENCE: one sentence on what the research actually shows
4. SUGGESTED REPLACEMENT: a stronger evidence-based alternative
5. SOURCE: where the replacement comes from (EEF, Cognitive Load Theory, Rosenshine, etc.)
MANDATORY FOOTER: "Triage analysis only. Pedagogical decisions remain with the qualified educator. EEF Teaching & Learning Toolkit (educationendowmentfoundation.org.uk) is a strong general reference."
📝 Exam-Board Mark Scheme Aligner · System Prompt
For Segment 15
You are an exam-board assessment alignment assistant. You receive a draft question, mark scheme, or model answer and check it against the published assessment criteria for a specified UK exam board (AQA, OCR, Edexcel/Pearson, WJEC, SQA, CIE/Cambridge International, or IB).
EXPERTISE:
- Command words used by each major UK exam board (AQA's "explain why", OCR's "discuss", Edexcel's "evaluate", etc.)
- Assessment Objective weightings per subject and tier
- Mark band descriptors and the differences between, say, AQA's 6-mark band 4 descriptor vs Edexcel's 6-mark Level 3
- Common drift patterns where teacher-written questions don't match the board's assessment style
CONSTRAINTS:
- You do NOT have live access to current mark schemes — your output is alignment guidance, not a substitute for the actual published mark scheme.
- You ALWAYS recommend the teacher cross-check against the most recent published specification on the board's website before using.
- You do NOT predict what the board will accept or reject in actual marking. You flag stylistic and structural drift.
- You do NOT write model student answers — that risks teaching to the AI's patterns, not the board's.
OUTPUT FORMAT:
1. BOARD & SPECIFICATION (as identified or stated)
2. ALIGNMENT VERDICT: ✓ ALIGNED / 🟡 PARTIAL / 🔴 DRIFT
3. SPECIFIC DRIFT POINTS:
- Command word mismatch (suggest replacement)
- AO weighting not balanced for this question type
- Mark band descriptors don't match board phrasing
4. SUGGESTED EDITS: specific rewrites to better match the board's published style
5. WHAT TO CROSS-CHECK: which page of the published specification to verify against
MANDATORY FOOTER: "Alignment guidance only. The actual published mark scheme on the exam board's website is the authoritative source. Verify before using in formal assessment."
✍ Plagiarism-Aware Feedback Drafter · System Prompt
For Segment 15 + PWA in 17–19
You are a feedback-drafting assistant for a UK educator. You receive bullet points about a student's submission (strengths, weaknesses, areas to develop) and produce written feedback in the teacher's house style, AND flag any patterns in the input bullets that suggest the student may have used AI without disclosure.
EXPERTISE:
- Developmental feedback language for school and HE contexts
- The difference between "criticism" and "developmental feedback"
- Common stylistic markers of AI-generated student work: vocabulary mismatches with prior submissions, suspiciously balanced argument structures, generic openings ("In today's society..."), invented citations
- Institutional academic-integrity language: "investigate further", "follow up with the student", "academic integrity check"
CONSTRAINTS:
- NEVER make an accusation of academic misconduct. Flag patterns; the human teacher decides.
- NEVER use named individuals in feedback that could be screenshotted and shared inappropriately. Address the work, not the person.
- ALWAYS produce feedback that is developmental, specific, and actionable.
- ALWAYS append the disclaimer "Draft only — review and personalise before sharing with the student."
- If the input bullets suggest the student needs SEN or pastoral support, flag this as a separate item, not as part of the academic feedback.
OUTPUT FORMAT:
1. STRENGTHS (2–3 specific, actionable observations)
2. AREAS TO DEVELOP (2–3 specific, achievable next steps)
3. ONE THING TO DO BEFORE THE NEXT SUBMISSION (single sentence)
4. AI-DISCLOSURE FLAG: yes/no — if yes, list the specific patterns observed
5. PASTORAL FLAG: yes/no — if yes, brief note for the form tutor / personal tutor
6. MANDATORY FOOTER: "Draft only. Review and personalise before sharing."
🔎 Source-Checker · System Prompt
For Segment 15 + Browser Extension in 17–19
You are a media-literacy source-quality assistant for educators and students. You receive text extracted from a webpage and produce a one-page assessment of the source quality of any claims made.
EXPERTISE:
- Distinguishing primary, secondary, and tertiary sources
- Recognising the major reputable news outlets, peer-reviewed journals, and primary data sources (ONS, IFS, Eurostat, NICE, OECD, IPCC)
- Identifying the major patterns of unreliable sourcing (uncited claims, broken citation chains, "studies show" without specifying which, wholly fabricated references)
- The CRAAP framework (Currency, Relevance, Authority, Accuracy, Purpose)
CONSTRAINTS:
- You assess source QUALITY, not factual accuracy. You cannot verify whether a claim is true; you can verify whether the source it's attributed to is the kind that should be trusted.
- You ALWAYS recommend the user check the original source themselves, not just trust your assessment.
- You do NOT make political or ideological judgements about a source. "Reputable" means "transparent, accountable, sourced", not "agrees with my prior beliefs".
- You ALWAYS distinguish between "the source is weak" and "the claim might still be true via a different source".
OUTPUT FORMAT:
1. PAGE SUMMARY: one paragraph
2. CLAIMS IDENTIFIED: bullet list of the substantive factual claims (max 5)
3. SOURCE QUALITY PER CLAIM:
- Source named? (yes/no/vague)
- Source type (primary / peer-reviewed / mainstream news / opinion / unsourced)
- Quality verdict (strong / mixed / weak / fabricated)
4. RED FLAGS: anything that looks like a hallucinated reference or a suspiciously round statistic
5. WHAT TO DO NEXT: one sentence — "trust as written", "spot-check before using", or "verify every citation manually"
MANDATORY FOOTER: "Source-quality screening only. The substantive truth of any claim requires verification against the original source."
Section 4
The 70/30 model — what's generic, what's education-specific
BUILD for Education isn't a separate course. It's the existing 28-segment BUILD course (the same one any other professional takes), plus the Education Build Kit your staff drop in at three specific points. This is intentional and matters for IT, safeguarding, and procurement reasons.
70% — the BUILD course core (unchanged)
The technical pipeline your staff learn is identical regardless of sector: HTML / CSS / JavaScript frontends, Cloudflare Workers as a secure proxy, the Anthropic API for AI calls, GitHub for version control, Netlify for hosting. This is the standardised, defensible infrastructure layer that the institution controls end-to-end. Same code. Same architecture. Same security posture. Same audit trail. Easier for your IT team to approve.
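The heart of that security posture is the Worker-as-proxy pattern: the browser never holds the API key; the Worker injects it server-side before forwarding the request. A minimal sketch of the request-building step, under the assumption that the Worker forwards to the Anthropic Messages API (the header names match Anthropic's public API documentation; everything else is illustrative):

```javascript
// Minimal sketch of the proxy idea: the Worker builds the upstream request
// and injects the secret from its own environment, so the API key never
// appears in any browser, student device, or page source.
function buildUpstreamRequest(apiKey, clientBody) {
  return {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": apiKey,               // injected server-side only
      "anthropic-version": "2023-06-01", // required by the Messages API
    },
    body: JSON.stringify(clientBody),
  };
}
```

Because every request passes through this one choke point, the institution also gets a natural place to log which tool made which request when — the audit trail the compliance sections below rely on.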
30% — the education customisation
The education-specific layer is the system prompts (Segment 15), the use case examples (Segments 12 and 14), and the capstone project briefs (Segment 28). These swap in via copy-paste — your staff take the prompts from Section 3 above and use them where the generic course says "your sector prompt here." The Build Kit also includes education-tuned versions of: the Multi-Model Compare tool (Segment 13) for citation cross-checking, the System Prompt framework (Segment 15) for academic integrity-aware language, and the Final Project rubric for education-relevant capstone projects.
Why this matters for safeguarding and IT
Because the underlying technical architecture is identical to every other BUILD cohort, your IT and safeguarding lead can review and approve it once, and that approval covers every staff member who ever takes the course. The education customisation is purely at the prompt and use case layer — which is where your existing pedagogical and academic-integrity judgement lives. IT reviews the architecture once. Curriculum reviews the prompts. Safeguarding reviews the data flow. Everyone stays in their lane.
Section 5
Compliance & regulatory alignment
BUILD for Education is positioned to help your school, college, or university meet (and document) compliance with multiple converging requirements.
Ofsted Education Inspection Framework
Ofsted's 2024 framework expects schools to evidence quality of education and effective use of digital tools. Tools your staff build with BUILD are auditable, version-controlled, and demonstrate exactly the kind of "intelligently used technology" the framework rewards.
QAA Quality Code (HE)
For higher education, the QAA Quality Code requires universities to maintain academic standards and integrity. Citation verification tools and plagiarism-aware feedback drafters built with BUILD provide direct, documented support for the academic integrity expectations under Chapter B6 of the Code.
EU AI Act (Annex III + Article 4)
From August 2026, AI systems used in education for assessment, admissions, or evaluation are classified as "high-risk" under Annex III. Article 4 requires demonstrable AI literacy among staff who operate AI systems. BUILD produces a per-staff-member artefact that evidences exactly this literacy and gives the institution a defensible audit trail for the high-risk systems it operates.
UK GDPR & KCSIE 2024
Tools built with BUILD use the Cloudflare Worker proxy pattern: API keys never leave the server, requests are routed through infrastructure your institution owns, and you can deploy with regional pinning to keep student data inside the UK or EEA. Materially better than staff pasting student work into ChatGPT — and it satisfies the data-handling expectations in Keeping Children Safe in Education 2024.
Equality Act 2010 & PSED
State schools, FE colleges, and universities are public bodies bound by the Public Sector Equality Duty under the Equality Act 2010, and must consider the impact of their decisions on protected groups. Tools your staff build with BUILD are auditable and bias-checkable end to end, which gives the institution a stronger defensive answer than vendor black-box AI if a tool's outputs are ever scrutinised under the protected-characteristic provisions or the PSED.
⚖ A specific note on JCQ and exam-board policy
The Joint Council for Qualifications (JCQ) has issued specific guidance on AI use in qualifications. Tools that help staff verify citations, cross-check mark schemes, and detect AI-pattern submissions are explicitly the kind of tooling JCQ expects centres to deploy as part of their academic-integrity controls. BUILD-graduated centres have the artefacts (live tool, audit trail, GitHub history) to demonstrate compliance during any centre review — much stronger than "we've told staff to be careful."
Section 6
Pricing — for education teams
Three tiers based on cohort size. All prices are the institution-wide commercial rate, not per-seat consumer pricing. Includes the full BUILD course, the Education Build Kit, the Manager Pack, and email support across the rollout. Discounts available for state schools, FE colleges, and registered charities — email for the public-sector rate.
Pilot Cohort
£3,500 / cohort
Up to 10 staff
Full 28-segment BUILD course
Education Build Kit (5 system prompts)
Manager Pack + Capstone rubric
Email support across the 4 weeks
One IT whitelist consultation
State-school / FE / charity discount available
Department Rollout
£7,500 / cohort
Up to 25 staff
Everything in Pilot
Buddy pairing + cohort kickoff call
Mid-point manager check-in (60 min)
Capstone showcase facilitated by ET
Anonymised cohort impact report
One subject-specific prompt customisation
Institution-Wide
From £15,000
25–100+ staff across multiple sites
Everything in Department Rollout
Multiple parallel cohorts
Train-the-trainer for in-house champion
Custom Education Build Kit additions
White-label option for internal LMS
Quarterly check-ins for 12 months
All prices ex-VAT. Procurement-friendly invoicing available. State schools, FE colleges, and registered charities qualify for a public-sector discount — email for the rate. hello@everythingthreads.com
Section 7
FAQ — for education leadership
Can our teachers actually do this? They're not developers.
That's exactly who BUILD is designed for. The course starts at "what is a terminal" and finishes with a deployed, working AI tool. Across hundreds of non-developer students — including classroom teachers, librarians, and lecturers — completion rates for cohorts with manager air cover (see the Manager Pack) sit in the 80%+ range. The staff who finish BUILD become the in-house AI champions for everyone else.
How is this different from the edu-AI vendors we already use?
Three differences. First, ownership: tools built with BUILD belong to your institution, run on infrastructure you control, and can be modified or retired without vendor permission. Second, cost: vendor seat licences compound forever; BUILD is a one-off cohort cost plus ~£20/month in compute. Third, oversight: your IT team can review the actual code, your staff understand exactly what the prompts do, and your safeguarding lead has full audit visibility — something the major edu-AI vendors don't offer.
What about student data and KCSIE?
The Cloudflare Worker proxy pattern (taught in Segment 11) keeps API keys server-side and routes requests through infrastructure your institution controls. You can deploy with regional pinning to keep student data inside the UK. Critically, BUILD teaches staff to think about data flow as a first-class concern — most institutions find their staff understand confidentiality and safeguarding risks materially better AFTER BUILD than before, regardless of which tools they end up using. KCSIE 2024 expectations on online safety are met more thoroughly than with most vendor SaaS.
Who owns the tools the staff build?
Your institution. The code lives in your institution's GitHub. The infrastructure is provisioned in your institution's accounts. BUILD's terms grant the student a perpetual, transferable licence to the course materials and explicitly disclaim any vendor claim on the work product. Standard work-for-hire applies.
What if a teacher builds something that gives bad pedagogical advice?
Every system prompt in the Education Build Kit explicitly instructs the AI that its output is triage, not advice, and requires the standard "qualified educator must review" disclaimer in every output. The Discredited Research Filter specifically refuses to recommend debunked theories. Segment 24 (Testing Your AI Tools) walks through edge case handling. Segment 27 (Security, Safety & Guardrails) covers the broader risk controls. The tools support educator judgement, they don't replace it — and the course says so, repeatedly.
Will the tools detect students using AI?
The Plagiarism-Aware Feedback Drafter (Tool 4) flags AI-pattern markers but never accuses. AI-detection technology in general is unreliable and BUILD does not promise certainty. What BUILD does promise is to teach your staff to understand AI well enough to recognise the patterns themselves, and to build tools that flag for human review without making automatic judgements — which is the only academic-integrity stance that survives a misconduct hearing.
How long does the rollout take from kick-off to first cohort?
Typically 2–3 weeks from contract signature to Day 1 of the cohort. Most of that time is IT whitelisting (VS Code, Git, Node.js) and cohort selection. Once the course starts, it runs 4 weeks. Total elapsed time from "we want this" to "we have staff with deployed tools" is around 7 weeks.
Do you offer this to schools as well as universities?
Yes — BUILD for Education works at primary, secondary, FE, and HE level. The course content is the same; the system prompts in the Build Kit can be tuned for the appropriate level (mark-scheme aligners for GCSE/A-Level vs degree-level rubrics, for example). State schools, FE colleges, and registered charities qualify for a public-sector discount.
Ready to talk?
If you're a headteacher, dean, head of department, or director of learning & teaching and you want to bring BUILD for Education to your institution, the next step is a 30-minute discovery call. We'll walk through your current AI use, your safeguarding constraints, and which cohort tier makes sense.
EverythingThreads is contact-by-email only. We reply within 2 working days. For urgent matters during a paid rollout, mark the email subject "URGENT" and we'll prioritise.