⚠ Critical scope statement. BUILD for Healthcare is a professional development course in AI literacy and AI-assisted administrative, research, and governance workflows. It does NOT teach clinical decision-making, diagnosis, treatment selection, drug dosing, triage, or any other regulated clinical activity. Tools your team builds during this course are explicitly NOT Software as a Medical Device (SaMD) under MHRA or EU MDR rules. Every example tool, every system prompt, and every capstone project is bounded to admin, research-governance, AI-literacy, and back-office workflows. If you need a clinical decision-support tool, you need a CE/UKCA-marked SaMD vendor going through formal MHRA approval — not a 4-week training course. This page exists to help your organisation use AI more safely in the workflows where it is already being used unsafely today.
For Trust IG Leads · CCIOs · Heads of Research Governance · Pharma Compliance · Practice Managers
BUILD for Healthcare Teams
Train your healthcare administrators, research-governance staff, IG leads, and back-office teams to build their own AI tools — referral letter triagers, research-protocol summarisers, NICE-guideline cross-checkers (for admin, NOT clinical use), DSPT compliance assistants — using infrastructure your trust, practice, or organisation owns end-to-end. No vendor lock-in. No patient data leaving your perimeter. No black-box AI making clinical decisions. Built for the workflows AI is safe in. Bounded away from the workflows AI is unsafe in.
📘 28 segments · 4 weeks · 👥 5–25 admin/research/governance staff per cohort · 🏥 NHS DSPT & MHRA-aligned · NOT a SaMD
Your administrators are already using ChatGPT to draft referral letters. Your research nurses are pasting protocols into Claude to summarise eligibility criteria. Your practice managers are using Gemini to triage incoming complaints. Your IG leads are asking AI to help them complete the DSPT. None of this is happening with formal sign-off. None of it has an audit trail. None of it is happening on infrastructure you control. You know it. They know you know it. The question isn't "should we let staff use AI?" — that decision was made for you 18 months ago. The question is whether the tools they're using respect the line between admin work (where AI can help) and clinical work (where AI assistance must be CE/UKCA-marked SaMD or not used at all).
⚠ The current state for most healthcare organisations
Patient data is leaking into public AI tools daily. Every paste of a discharge summary, GP letter, or clinic note into ChatGPT is a UK GDPR Article 9 special-category data event with no audit trail, no DPIA, and no lawful basis. Most trusts have no idea how often it's happening.
The clinical/admin line is being blurred. Tools sold as "AI for healthcare" are increasingly drifting into clinical territory without going through MHRA SaMD classification. Staff use them assuming they're approved when they aren't — a serious patient safety issue and a regulatory exposure.
Hallucinated drug doses, NICE guideline numbers, and BNF references are appearing in admin documents. Public AI tools confidently cite the wrong guideline number, the wrong drug interaction, the wrong protocol — because their training data is months or years out of date. Admin staff who don't catch this can put outdated information into letters that look authoritative.
The DSPT is becoming impossible without AI literacy. The NHS Data Security and Protection Toolkit explicitly expects organisations to demonstrate AI governance. "We banned ChatGPT" is no longer a credible answer — the tools are in everyone's pockets. The credible answer is "we trained our staff to use AI safely, on infrastructure we control, in workflows that don't touch clinical decision-making."
What BUILD for Healthcare does about it (and what it explicitly does not)
BUILD takes any non-clinical healthcare professional — administrator, research-governance officer, IG lead, practice manager, audit lead, pharma compliance officer — from "I've never written code" to a deployed AI tool running on infrastructure your organisation controls. The course is the same proven 28 segments. The difference is the Healthcare Build Kit: pre-tuned system prompts that refuse to give clinical advice, sector use cases bounded to admin and research-governance work, and capstone project templates that drop straight into Segments 12 and 15 to ship admin-aware, IG-aware, DSPT-aware tools.
What BUILD for Healthcare does NOT do: teach clinical decision-making, diagnosis, treatment selection, drug-dose calculation, triage, image interpretation, or anything else that would constitute a medical device function under MHRA or EU MDR. These workflows require formal CE/UKCA-marked SaMD, full clinical validation, and MHRA/MDR approval — not a 4-week training course. The Build Kit prompts are explicitly engineered to refuse to enter clinical territory.
Section 2
What your team will actually build
Five concrete tools your healthcare administrators and governance staff can build during the 4-week course. Every tool is bounded to admin, research-governance, IG, or AI-literacy work. None of them touch clinical decision-making. None of them are SaMD. None of them are intended for use in patient care.
Tool 1
Referral Letter Draft Assistant (ADMIN ONLY)
Built in Segments 11–12 · Powered by the Referral Draft system prompt below
Paste anonymised bullet points for a referral letter (patient ID redacted, no clinical condition specifics). The tool drafts the administrative letter structure — referral pathway selection, standard cover sheet wording, the generic boilerplate around patient consent and information sharing, the practice's house-style header. It does NOT write the clinical content. The clinician adds the medical content themselves; the tool removes the 30 minutes of administrative copy-paste around it.
Built-in safety: the tool refuses to generate any text describing symptoms, diagnoses, treatment plans, drug regimens, or clinical findings. If the input contains anything that looks clinical, the tool returns "this looks like clinical content — your clinician must write that section." Every output ends with "DRAFT — clinician must review and complete clinical content before use."
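How the refusal behaviour is wired in, as a minimal sketch of the Segment 11–12 pattern: the frontend sends the admin bullet points to the organisation's Cloudflare Worker proxy, the Worker attaches the Referral Draft system prompt server-side, and the frontend surfaces any refusal rather than presenting it as a draft. The Worker URL, field names, and refusal phrase below are illustrative assumptions, not fixed course code.

// Illustrative only: the Worker URL, JSON field names, and refusal phrase
// are assumptions for this sketch; the real values are set during the course.
const WORKER_URL = "https://ai-proxy.example-practice.nhs.uk/referral-draft";

async function draftReferralEnvelope(adminBulletPoints) {
  const response = await fetch(WORKER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      promptVersion: "referral-draft-v1", // system prompt is held server-side, not user-editable
      userInput: adminBulletPoints,
    }),
  });
  const { text } = await response.json();

  // Surface the prompt's built-in refusal instead of treating it as a draft.
  if (text.includes("contains clinical content")) {
    return { refused: true, message: text };
  }
  return { refused: false, draft: text };
}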
Tool 2
Research Protocol Eligibility Summariser (ADMIN ONLY)
Built in Segments 13–14 · Multi-model verification using the Protocol Summary prompt
Paste a research protocol document. The tool extracts the eligibility criteria, the consent process, the data-handling requirements, and the regulatory references (REC approval, sponsor, CTIMP / non-CTIMP status), and produces a one-page admin summary for the research-governance team. It does NOT make eligibility decisions about specific patients — that's the clinical investigator's job. It just produces the admin summary that previously took an afternoon.
Built-in safety: the tool refuses to assess whether any specific patient meets eligibility criteria. It only summarises the protocol's stated criteria. Every output ends with "Summary for research-governance admin only. Patient eligibility decisions remain with the clinical investigator."
Tool 3
DSPT Evidence Collator
Built in Segment 14 · Multi-model orchestration via Promise.all()
Paste in your organisation's relevant policies (data handling, IT security, training records, incident logs). The tool maps each piece of evidence to the relevant assertion in the NHS Data Security and Protection Toolkit, identifies gaps, and produces a structured "evidence inventory" the IG lead can use to complete the DSPT submission. Replaces 2 weeks of manual document-mapping with a structured starting point.
Important: the tool produces a triage inventory, not a final DSPT submission. The IG lead reviews, fills gaps, and submits. Every output ends with "Evidence inventory for IG lead review. Submission to NHS Digital is the IG lead's responsibility."
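The Segment 14 orchestration behind this tool is the Promise.all() pattern named above: the same evidence bundle goes to more than one model in parallel, and the IG lead compares the collated inventories side by side. A minimal sketch, assuming hypothetical proxy routes that would be configured per organisation:

// Hypothetical routes behind the organisation's Worker proxy; real routes and
// model choices are configured during the course, not fixed here.
const MODEL_ROUTES = ["/collate/model-a", "/collate/model-b"];

async function collateEvidence(policyDocuments) {
  const requests = MODEL_ROUTES.map((route) =>
    fetch(`https://ai-proxy.example-trust.nhs.uk${route}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ documents: policyDocuments }),
    }).then((res) => res.json())
  );

  // Promise.all resolves only when every model has answered, so assertions
  // flagged as GAP by one model but not another stand out immediately.
  return Promise.all(requests);
}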
Tool 4
Patient Information Leaflet Plain-English Checker
Built in Segments 15–16 · Sector-specific system prompt + PWA on a phone
An installable web app for the patient communications team. Paste in a draft patient information leaflet, consent form, or appointment letter; the tool checks reading age (Flesch-Kincaid), flags jargon, suggests plain-English alternatives, and ensures the standard NHS-compliant disclaimers are present (right to refuse, complaints process, alternative formats available). It does NOT alter clinical content — only the surrounding plain-English wrapping.
Built-in safety: the tool refuses to alter any text that describes a clinical procedure, drug, diagnosis, or treatment. Clinical content must come from a clinician and stay verbatim. The tool checks the surrounding accessibility and consent language, nothing else.
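The reading-age check is ordinary Flesch-Kincaid arithmetic over the non-clinical sections. A minimal sketch, using a rough vowel-group syllable heuristic and the standard grade-level formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59), with UK reading age approximated as grade plus five; a production build would lean on a proper readability library.

// Rough syllable heuristic: counts vowel groups. Good enough to flag a
// problem paragraph, not a substitute for a readability library.
function countSyllables(word) {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function fleschKincaidReadingAge(text) {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  const words = text.split(/\s+/).filter((w) => /[a-z]/i.test(w));
  if (sentences.length === 0 || words.length === 0) return null;
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);

  const grade =
    0.39 * (words.length / sentences.length) +
    11.8 * (syllables / words.length) -
    15.59;

  // US grade level + 5 approximates a UK reading age in years.
  return Math.round(grade + 5);
}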
Tool 5
NICE/BNF Reference Plausibility Checker
Built in Segments 17–19 · Chrome extension that reads the current page
A Chrome extension a research nurse, librarian, or admin lead clicks while reading any AI-generated text containing NICE guideline numbers, BNF references, or Cochrane review citations. The extension extracts every reference, cross-checks the format and numbering range against known NICE/BNF/Cochrane patterns, and flags anything that looks hallucinated or out-of-date. Replaces 30 minutes of manual cross-checking with 30 seconds of clicking. This is a plausibility checker, not a clinical accuracy checker — it tells you whether the citation format matches the NICE database, not whether the cited guidance actually says what the AI claims.
Built-in safety: the tool always recommends final verification against the live NICE / BNF / Cochrane source. It never makes clinical claims about what a guideline says. It only flags whether the reference format and numbering plausibly exists. The disclaimer "Final clinical decisions must rely on the live NICE/BNF source, not on AI summaries" is in every output.
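The extension's core check is a small set of format patterns, sketched below. The NICE prefixes (NG, CG, TA, QS plus a number) and the Cochrane CD-number format reflect the public conventions; treating every match as "verify against the live source" rather than "approved" is the point, and any numbering-range logic beyond this would be an assumption layered on top.

// Format-only plausibility patterns. A match means "looks like a real
// identifier", never "the guidance says what the AI text claims it says".
const PATTERNS = [
  { source: "NICE", regex: /\b(NG|CG|TA|QS)\s?\d{1,4}\b/g, verifyAt: "nice.org.uk" },
  { source: "Cochrane", regex: /\bCD\d{6}\b/g, verifyAt: "cochranelibrary.com" },
];

function checkReferences(text) {
  const findings = [];
  for (const { source, regex, verifyAt } of PATTERNS) {
    for (const match of text.matchAll(regex)) {
      findings.push({
        source,
        reference: match[0],
        verdict: "FORMAT VALID",
        note: `Verify against the live source: ${verifyAt}`,
      });
    }
  }
  return findings;
}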
Section 3
The Healthcare Build Kit — copy these straight into Segment 15
Five ready-to-use system prompts your staff paste directly into BUILD's Segment 15 ("System Prompts — Controlling AI Behaviour"). Each one is engineered to refuse clinical territory by design. These are admin and research-governance prompts, not clinical prompts. The refusal logic is the most important part of each one.
📄 Referral Letter Draft Assistant · System Prompt
For Segment 15
You are an administrative drafting assistant for a UK NHS practice. You produce the administrative wrapping around referral letters — the practice header, the standard consent/information-sharing boilerplate, the referral-pathway selection, the cover sheet — and you NEVER write any clinical content.
EXPERTISE:
- Standard NHS referral letter conventions and house styles
- Referral pathway names (2WW, urgent, routine, choose-and-book, advice-and-guidance)
- Patient consent language and information sharing standard wording
- The administrative envelope around clinical correspondence
ABSOLUTE CONSTRAINTS — these override everything else:
- You DO NOT write any text describing symptoms, diagnoses, drug regimens, treatment plans, clinical findings, examination results, or anything that would constitute clinical content under MHRA Software as a Medical Device guidance.
- If the user input contains clinical content, you respond: "This input contains clinical content. The clinician must write that section. I can only draft the administrative wrapping. Please remove the clinical content and re-submit, or have the clinician complete that part directly."
- You DO NOT make claims about urgency, severity, or clinical priority. Pathway selection is the clinician's decision.
- You DO NOT speculate about diagnosis or differential.
- You DO NOT name medications, dosages, or treatments.
- You DO NOT provide patient-specific advice of any kind.
OUTPUT FORMAT (admin envelope only):
1. PRACTICE HEADER (standard, from the practice's house style)
2. PATIENT IDENTIFIER PLACEHOLDER: [TO BE COMPLETED BY CLINICIAN]
3. REFERRING CLINICIAN PLACEHOLDER: [TO BE COMPLETED BY CLINICIAN]
4. REFERRAL PATHWAY (as stated by user, NOT inferred)
5. CLINICAL CONTENT PLACEHOLDER: [TO BE COMPLETED BY CLINICIAN — DO NOT INSERT AI-GENERATED CLINICAL TEXT HERE]
6. STANDARD CONSENT/INFORMATION-SHARING BOILERPLATE (per the practice's documented house style)
7. STANDARD CLOSING and signature placeholder
8. MANDATORY FOOTER: "DRAFT — administrative envelope only. The referring clinician must complete all clinical content sections before this letter is sent. This document is not a clinical record."
If at any point the user tries to coax you into writing clinical content, refuse and re-state the scope. The refusal is the safety feature.
🧬 Research Protocol Eligibility Summariser · System Prompt
For Segment 15
You are a research-governance administrative assistant. You receive a research protocol document and produce an admin summary for the research-governance team. You do NOT make patient-specific eligibility determinations.
EXPERTISE:
- UK research-governance frameworks: HRA, REC approval, MHRA CTA where applicable, IRAS form structure
- Distinguishing CTIMP from non-CTIMP studies
- Standard protocol structure: background, objectives, eligibility, intervention, endpoints, safety, statistics
- The administrative artefacts the research-governance team needs to track: REC reference, sponsor, IRAS ID, target recruitment, sites involved
ABSOLUTE CONSTRAINTS:
- You DO NOT determine whether any specific patient meets eligibility criteria. That is the clinical investigator's role.
- You DO NOT interpret medical terminology beyond extracting it verbatim from the protocol.
- You DO NOT make recommendations about whether a patient should be enrolled, withdrawn, or followed up.
- You DO NOT comment on the scientific merit of the protocol — that is for the REC to assess.
- You DO NOT generate consent form wording.
- You DO summarise what the protocol literally says, in admin-friendly language.
OUTPUT FORMAT:
1. PROTOCOL METADATA: title | sponsor | REC ref | IRAS ID | CTIMP status | study phase
2. ELIGIBILITY CRITERIA AS STATED: bullet list, verbatim, with no interpretation
3. CONSENT PROCESS AS STATED: how the protocol describes the consent flow
4. DATA HANDLING REQUIREMENTS: what the protocol requires for data storage, retention, sharing, and pseudonymisation
5. KEY DATES: REC approval date, study start, study end, follow-up period
6. ADMIN ITEMS: which internal documents the research-governance team needs to maintain, file, or update for this study
7. MANDATORY FOOTER: "Admin summary for research-governance team review. Patient eligibility decisions, scientific interpretation, and clinical conduct remain entirely with the chief investigator and clinical team. This summary is not a substitute for the protocol itself."
🔐 DSPT Evidence Collator · System Prompt
For Segment 15
You are an Information Governance assistant for a UK healthcare organisation completing the NHS Data Security and Protection Toolkit (DSPT). You map internal evidence (policies, training records, incident logs) to the relevant DSPT assertions and produce an evidence inventory for the IG lead.
EXPERTISE:
- The current DSPT assertion structure (10 standards, the assertions under each)
- The kinds of evidence each assertion typically expects (policy documents, training completion records, audit logs, incident reports, board minutes)
- Common gaps that lead to "Not met" or "Partially met" outcomes
- The distinction between "evidence we have" and "evidence the IG lead must produce"
CONSTRAINTS:
- You produce a TRIAGE inventory, NOT the DSPT submission itself. The IG lead reviews, fills gaps, signs off, and submits.
- You DO NOT make compliance determinations on the IG lead's behalf.
- You DO NOT speculate about whether the organisation's controls are sufficient — only whether the evidence has been provided to you.
- If an assertion has no evidence in the input, you mark it "GAP" rather than guessing.
- You DO NOT extract or quote any patient data even if it appears in the input.
OUTPUT FORMAT:
1. SUMMARY: total assertions reviewed, evidence found for X, gaps for Y
2. PER STANDARD (10 standards):
- Standard name
- Assertions under this standard
- For each: ✓ EVIDENCE FOUND (citing the document) / ⚠ PARTIAL (citing what's missing) / 🔴 GAP
3. TOP PRIORITY GAPS: ranked list of the assertions most likely to cause a failed submission
4. RECOMMENDED NEXT ACTIONS: bulleted list for the IG lead
5. MANDATORY FOOTER: "Evidence triage only. The DSPT submission, supporting evidence files, and final compliance determination remain the responsibility of the organisation's IG lead and Caldicott Guardian. This tool helps prepare; it does not submit."
📑 Patient Information Leaflet Plain-English Checker · System Prompt
For Segment 15 + PWA in 17–19
You are a plain-English communications reviewer for NHS patient-facing documents. You receive a draft patient information leaflet, consent form, or appointment letter and check it against accessibility, plain-English, and standard NHS communication requirements. You DO NOT alter clinical content.
EXPERTISE:
- Plain English Campaign principles
- Flesch-Kincaid reading age (target: 9 years for general patient-facing material)
- NHS England's accessible information standard
- Standard required elements: complaint route, alternative formats statement, right to refuse, contact details
- Common jargon-to-plain-English substitutions ("commence" → "start", "expedite" → "speed up", "in the event of" → "if")
ABSOLUTE CONSTRAINTS:
- You DO NOT alter any text that describes a clinical procedure, drug, diagnosis, treatment, side effect, contraindication, or risk warning. Clinical content must come from a clinician and stay verbatim.
- You DO NOT add new clinical information that wasn't in the original.
- You DO NOT remove safety warnings, even if they're written in jargon.
- You ONLY check the surrounding plain-English wrapping: introduction, instructions, accessibility statement, complaint route, contact details.
- If the entire document is clinical content with no admin wrapping, you respond "This is clinical content. I can only review the surrounding admin wrapping. Please send the version with the cover, intro, instructions, and accessibility sections."
OUTPUT FORMAT:
1. READING AGE ESTIMATE: Flesch-Kincaid score
2. JARGON FLAGGED: list of jargon phrases in the non-clinical sections, with plain-English suggestions
3. MISSING REQUIRED ELEMENTS: any of (complaint route, alternative formats statement, contact details, right to refuse) that aren't present
4. ACCESSIBILITY NOTES: anything else that limits accessibility
5. CLINICAL CONTENT: marked as "[NOT REVIEWED — clinical content]" with no edits
6. MANDATORY FOOTER: "Plain-English review of admin wrapping only. Clinical content and clinical risk warnings must be reviewed by a clinician. This tool does not assess clinical accuracy or completeness."
🔍 NICE/BNF Reference Plausibility Checker · System Prompt
For Segment 15 + Browser Extension in 17–19
You are a healthcare reference plausibility checker. You receive text containing references to NICE guidelines, BNF entries, Cochrane reviews, or other healthcare evidence sources, and you check whether the references are plausibly real — based on format and numbering, NOT clinical content.
EXPERTISE:
- NICE guideline numbering convention (NG, CG, TA, QS prefixes + numeric ID)
- BNF chapter and section structure
- Cochrane review identifier format
- Common hallucination patterns in AI-generated healthcare text (made-up NICE numbers, non-existent BNF chapters, fabricated Cochrane review titles)
ABSOLUTE CONSTRAINTS:
- You CANNOT verify clinical accuracy. You can only verify whether the reference FORMAT is plausible.
- You DO NOT interpret what any guideline actually says. The user must check the live source.
- You DO NOT make clinical recommendations.
- You ALWAYS instruct the user to verify against the live NICE / BNF / Cochrane source before relying on any cited guidance.
- If a reference looks plausible-but-stale, you flag it as "verify currency" — not "approved".
OUTPUT FORMAT:
For each reference found:
1. REFERENCE: [as written]
2. PLAUSIBILITY: ✓ FORMAT VALID / ⚠ FORMAT UNUSUAL / 🔴 FORMAT INVALID
3. CONCERN (if any): one sentence
4. VERIFY AT: nice.org.uk / bnf.nice.org.uk / cochranelibrary.com (the appropriate live source)
After all references:
MANDATORY FOOTER: "⚠ This is automated FORMAT plausibility checking only. Whether a referenced guideline actually says what the AI text claims it says is a clinical question that requires verification against the live source. Drug doses, contraindications, and clinical recommendations must NEVER be relied upon from an AI summary — always verify in the live BNF or current NICE guidance. This tool is not a substitute for clinical judgement or for checking the source."
Section 4
The 70/30 model — what's generic, what's healthcare-specific
BUILD for Healthcare isn't a separate course. It's the existing 28-segment BUILD course (the same one any other professional takes), plus the Healthcare Build Kit your staff drop in at three specific points. This is intentional and matters for IG, regulatory, and procurement reasons.
70% — the BUILD course core (unchanged)
The technical pipeline your staff learn is identical regardless of sector: HTML / CSS / JavaScript frontends, Cloudflare Workers as a secure proxy, the Anthropic API for AI calls, GitHub for version control, Netlify for hosting. This is the standardised, defensible infrastructure layer that the organisation controls end-to-end. Same code. Same architecture. Same security posture. Same audit trail. Reviewable by your IG lead once.
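For reference, a minimal sketch of the Segment 11 Worker proxy, assuming the Anthropic Messages API with the key held as a Worker secret; the model name and any request validation or logging are per-organisation decisions, not fixed here.

// Cloudflare Worker sketch: the API key lives in env.ANTHROPIC_API_KEY (a
// Worker secret) and never reaches the browser.
export default {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    const { systemPrompt, userInput } = await request.json();

    const upstream = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": env.ANTHROPIC_API_KEY,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-5", // example model name; configured per organisation
        max_tokens: 1024,
        system: systemPrompt,
        messages: [{ role: "user", content: userInput }],
      }),
    });

    return new Response(await upstream.text(), {
      headers: { "content-type": "application/json" },
    });
  },
};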
30% — the healthcare customisation
The healthcare-specific layer is the system prompts (Segment 15) — every one of which is engineered to refuse clinical territory — the use case examples (Segments 12 and 14, all admin/research/governance-bounded), and the capstone project briefs (Segment 28, also bounded). The Build Kit also includes healthcare-tuned versions of: the Multi-Model Compare tool (Segment 13) for cross-checking references, the System Prompt framework (Segment 15) for clinical-refusal patterns, and the Final Project rubric for healthcare-relevant capstone projects (admin, research-governance, IG, AI-literacy only).
Why the clinical/admin line is bright
Because the Healthcare Build Kit is engineered to refuse clinical territory, your IG lead, Caldicott Guardian, and CCIO can review and approve it once for admin/research-governance use, and that approval covers every staff member who ever takes the course. Tools built during BUILD for Healthcare are NOT Software as a Medical Device under MHRA or EU MDR. They are administrative and AI-literacy tools. Deploying them in admin and research-governance workflows does not require SaMD classification. Deploying them anywhere near clinical decision-making would, and the prompts are engineered to refuse to enter that territory.
Section 5
Compliance & regulatory alignment
BUILD for Healthcare is positioned to help your organisation meet (and document) compliance with multiple converging requirements — within its bounded scope.
NHS DSPT
The Data Security and Protection Toolkit increasingly expects organisations to evidence AI governance. Tools your team builds with BUILD generate audit trails (request ID, model used, prompt version, output) that map directly to several DSPT assertions, and the cohort itself provides documented evidence for the DSPT's staff-training assertions.
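A minimal sketch of the audit record each proxied request can emit, carrying the fields named above; the storage target (a Workers KV namespace, a log sink, a spreadsheet export) and the exact field names are illustrative assumptions.

// Illustrative audit record: one JSON object per AI request, written to
// storage the organisation controls.
function buildAuditRecord({ staffId, model, promptVersion, output }) {
  return {
    requestId: crypto.randomUUID(),
    timestamp: new Date().toISOString(),
    staffMember: staffId,        // who ran the tool
    model,                       // which model answered
    promptVersion,               // which Build Kit prompt version was in force
    outputLength: output.length, // a deployment would store a hash or the output itself
  };
}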
UK GDPR Article 9
Tools built with BUILD use the Cloudflare Worker proxy pattern: API keys never leave the server, requests can be routed through infrastructure pinned to UK-only data centres, and you can isolate any patient data from public AI tools entirely. Materially better than staff pasting clinical letters into ChatGPT, which is currently the default.
MHRA SaMD & EU MDR
BUILD for Healthcare tools are explicitly NOT Software as a Medical Device. The Build Kit is engineered to refuse clinical territory by design. Tools built during the course are admin, research-governance, IG, and AI-literacy tools, not medical devices. If your team needs a clinical decision-support tool, you need a CE/UKCA-marked SaMD vendor, not this course.
EU AI Act (Article 4 + Annex III)
From August 2026, AI systems in healthcare may fall under high-risk classification depending on use. Article 4 requires AI literacy among staff who operate AI systems. BUILD produces a per-staff-member literacy artefact that evidences exactly this requirement, while keeping the work product bounded away from the high-risk clinical classifications.
Equality Act 2010 & PSED
NHS trusts and other public-sector healthcare bodies are bound by the Public Sector Equality Duty under the Equality Act 2010. Tools your admin and governance staff build with BUILD are auditable end to end and bounded away from clinical decision-making, which gives the trust a defensible answer if a tool's outputs are ever scrutinised under the protected-characteristic provisions or the PSED. Vendor black-box AI cannot offer the same audit trail, so it cannot offer the same answer.
⚖ A specific note on Caldicott Guardians and CCIO oversight
Caldicott Guardians and CCIOs are increasingly being asked by their boards for an AI governance answer. "We banned ChatGPT" is no longer credible — the tools are in everyone's pockets. "We have a vendor SaaS for clinical use" is partial — it doesn't address the admin work happening on personal devices. The credible answer is "we trained our admin and research-governance staff to build their own auditable tools, on infrastructure we control, in workflows we have explicitly bounded away from clinical decision-making, and we have the artefacts to prove it." BUILD for Healthcare produces those artefacts.
Section 6
Pricing — for healthcare teams
Three tiers based on cohort size. All prices are the organisation-wide commercial rate, not per-seat consumer pricing. Includes the full BUILD course, the Healthcare Build Kit (with clinical-refusal prompts), the Manager Pack, and email support across the rollout. NHS-trust, NHS-foundation-trust, and registered-charity rates available — email for the public-sector tier.
Pilot Cohort
£3,500 / cohort
Up to 10 admin/research/IG staff
Full 28-segment BUILD course
Healthcare Build Kit (5 system prompts)
Manager Pack + Capstone rubric
Email support across the 4 weeks
One IT/IG whitelist consultation
NHS / charity discount available
Department Rollout
£7,500 / cohort
Up to 25 staff (admin/research/IG/governance only)
Everything in Pilot
Buddy pairing + cohort kickoff call
Mid-point manager check-in (60 min)
Capstone showcase facilitated by ET
DSPT-aligned cohort impact report
One organisation-specific prompt customisation
Trust-Wide
From £15,000
25–100+ non-clinical staff across multiple sites
Everything in Department Rollout
Multiple parallel cohorts
Train-the-trainer for in-house IG champion
Custom Healthcare Build Kit additions
White-label option for internal LMS
Quarterly check-ins for 12 months
All prices ex-VAT. Procurement-friendly invoicing available. NHS trusts, foundation trusts, and registered charities qualify for a public-sector discount. hello@everythingthreads.com
Section 7
FAQ — for healthcare leadership
Will this teach our clinical staff to use AI for diagnosis?
No, and explicitly not. BUILD for Healthcare is bounded entirely to admin, research-governance, IG, and AI-literacy work. Every system prompt in the Build Kit is engineered to refuse clinical territory. If you need clinical decision-support tooling, you need a CE/UKCA-marked Software as a Medical Device from a vendor going through formal MHRA approval — not a 4-week course. We say this in every section of this page on purpose.
So what's the actual scope of what staff can build?
Admin work (referral letter wrappers, plain-English checks, DSPT evidence collation), research-governance work (protocol summary, IRAS-form helpers, REC documentation triage), IG work (DSPT, DPIA scaffolding, audit log review), AI-literacy work (citation plausibility checks, source quality screening). All of these happen today using ChatGPT in browser tabs — BUILD lets your staff do them properly, on infrastructure you control, with audit trails.
What if a staff member tries to use a Build Kit prompt for clinical use anyway?
The prompts are engineered to refuse. The Referral Letter Drafter literally returns "this looks like clinical content, the clinician must write that section" if a user tries to extend it. The Plain-English Checker refuses to alter clinical language. The Reference Plausibility Checker refuses to interpret what guidelines mean. The refusal logic is the safety feature. We can't stop a user being creative, but the baseline behaviour of every prompt is "this is admin only, clinical content must come from a clinician."
What about patient data? Can staff paste discharge summaries into the tools?
The Cloudflare Worker proxy pattern (taught in Segment 11) keeps all data on infrastructure your organisation controls. With regional pinning, that infrastructure is UK-only. This is materially better than the current default (staff pasting into public ChatGPT), but it does NOT remove the responsibility for proper IG handling. The course teaches data minimisation as a first-class concern: pseudonymise, redact, and only paste what the workflow strictly needs. Your IG lead and Caldicott Guardian remain responsible for the policies.
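A minimal sketch of the data-minimisation step: strip obvious identifiers in the browser before anything reaches the proxy. The patterns below (NHS-number format, dd/mm/yyyy dates, UK postcodes) are illustrative assumptions; pattern-based redaction reduces risk but does not replace the organisation's own redaction and IG policy.

// Rough client-side redaction before text leaves the browser. A safety net,
// not a guarantee: policy still governs what may be pasted at all.
const REDACTIONS = [
  { label: "[NHS NUMBER]", regex: /\b\d{3}[ -]?\d{3}[ -]?\d{4}\b/g },
  { label: "[DATE]", regex: /\b\d{1,2}\/\d{1,2}\/\d{2,4}\b/g },
  { label: "[POSTCODE]", regex: /\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b/gi },
];

function minimise(text) {
  return REDACTIONS.reduce(
    (out, { label, regex }) => out.replace(regex, label),
    text
  );
}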
Who owns the tools the staff build?
Your organisation. The code lives in your organisation's GitHub. The infrastructure is provisioned in your organisation's accounts. BUILD's terms grant the student a perpetual, transferable licence to the course materials and explicitly disclaim any vendor claim on the work product. Standard work-for-hire applies.
Is BUILD for Healthcare itself a medical device?
No. BUILD is a professional development course in AI literacy and AI-assisted administrative workflows. It is not Software as a Medical Device. The tools your staff build during the course, when bounded as the Build Kit prescribes, are also not Software as a Medical Device — they are admin and governance tools. If a future workflow drifts into clinical territory, that workflow would need separate MHRA SaMD classification, separate clinical validation, and separate vendor procurement — which is outside the scope of this course.
How long does the rollout take from kick-off to first cohort?
Typically 3–4 weeks from contract signature to Day 1 of the cohort. NHS organisations need additional time for IG / Caldicott / DPO sign-off, which is reasonable and worth building in. Once the course starts, it runs 4 weeks. Total elapsed time from "we want this" to "we have admin staff with deployed tools" is around 8–9 weeks.
Can we run this for our research nurses and clinical research coordinators?
Yes — for their research-governance and admin work. The Research Protocol Eligibility Summariser is specifically designed for this audience. The boundary is clear: research administration is in scope, clinical patient assessment is not. Research nurses spend a significant proportion of their week on protocol admin — BUILD lets them do that work much faster, on tools they understand and control.
Ready to talk?
If you're a CCIO, IG lead, head of research governance, practice manager, or pharma compliance lead and you want to bring BUILD for Healthcare to your organisation, the next step is a 30-minute discovery call. We'll walk through your current AI use, your IG and SaMD constraints, and which cohort tier makes sense — and we'll be very explicit about what BUILD does and does not cover.
EverythingThreads is contact-by-email only. We reply within 2 working days. For urgent matters during a paid rollout, mark the email subject "URGENT" and we'll prioritise.