Everything you need to run the EverythingThreads SHARP course across your team — professional AI literacy, no coding, designed for legal, finance, healthcare, marketing, HR, comms, ops, and any role where staff use AI for real work. The simplest way to give a whole team a shared vocabulary for AI risk before they cause an incident. Often the right warm-up before BUILD.
📘 20 segments · 3–4 weeks · 👥 Designed for teams of 5–50 · ⏱ 4–5 hrs of your time across the rollout
Section 1
What SHARP actually delivers
SHARP is a 20-segment, 3–4-week, fully self-paced course that takes a knowledge worker from "AI gives me weird answers sometimes" to a documented vulnerability profile, a personal session protocol, and a downloadable Risk Score Report. No installs. No code. Runs entirely in a browser on any device.
Week 1 · Segments 1–5
The 7 Machine Patterns
M1–M7 — Agreement Trap, Fake Admission, Tailored Response, Confident Guess, Caveat That Changes Nothing, Redirect, The Fold. Original methodology, named and explained with real examples.
Week 2 · Segments 6–10
Human Failures + Vulnerability Profile
The 10 human failure patterns, then a private vulnerability self-assessment in Segment 8 (the foundation for the final Risk Score). This is the most personal week of the course.
Week 3 · Segments 11–15
Interventions + Sector Application
The Source Challenge, the 3-Minute Check, the Position Anchor, and other intervention scripts students can use mid-conversation with AI. Then a sector-specific risk briefing.
Week 4 · Segments 16–20
Practice Document + Risk Report
Build a personal session protocol, audit a real session, draft a team recommendation, then re-take the vulnerability profile in Segment 20 to generate a downloadable Risk Score Report — the artefact you can share with leadership.
Why SHARP, why now
Most "AI safety training" is a 90-minute video and a tick-box quiz. SHARP is the opposite — 20 short segments, a personal vulnerability profile, hands-on intervention practice, and an actual measurable change between Week 2 and Week 4. The point isn't to scare people. The point is to give your team specific language ("that's M1 — the Agreement Trap") so they can interrupt their own AI mistakes in real time, without slowing down their day job.
Section 2
Who SHARP is for (and who it isn't)
SHARP is the broadest of the three EverythingThreads courses. If your team uses AI in their work in any meaningful way, they probably belong here.
✓ Good fit
Anyone using AI in a knowledge role — legal, finance, healthcare, marketing, consulting, HR, comms, ops, product, research
Has basic AI exposure (has used ChatGPT, Claude, Gemini, or Copilot at least a few times) — doesn't need to be an expert
Has 2–4 hours per week for 4 weeks — a lighter commitment than BUILD
Works in any environment where an AI mistake could create real consequences (drafting documents, summarising data, advising clients, writing reports)
Any device works — desktop, laptop, tablet, or phone. SHARP runs entirely in a browser.
⚠ Wrong fit — find them a different course
Wants to BUILD AI tools, not just use AI more carefully — they need BUILD, the 28-segment technical implementation course. SHARP is the literacy layer; BUILD is the engineering layer.
Total AI beginner — has literally never used ChatGPT or any AI tool. They need CLEAR first (the free foundational course). Once they've spent a few hours with any LLM, they're ready for SHARP.
Already an AI safety researcher — SHARP is professional literacy, not academic research. They'll find it too applied.
Manager hasn't carved out the time. 2 hours a week is the floor. Without protected hours they'll skim and the vulnerability profile will be meaningless.
Wants accreditation more than skills. SHARP is non-accredited. The artefact is the personal Risk Score Report — sell them on that, not on a certificate.
⚡ The combined play: SHARP → BUILD
SHARP works brilliantly as a standalone course. It also works as a warm-up before BUILD. Run SHARP across your whole team first (4 weeks, light touch, 2–4 hours/week each). Then enrol the technically curious subset into BUILD afterwards. SHARP gives the whole team a shared vocabulary for AI risk — which makes everything BUILD teaches land harder. It's also the cheapest way to find out who's actually motivated enough to commit to the heavier 4–6 hour weekly load BUILD requires. Combined SHARP + BUILD pricing for team rollouts is significantly better than buying them separately — email hello@everythingthreads.com for the combined tier.
Section 3
Pre-rollout checklist
SHARP is much lighter to set up than BUILD because there's nothing to install. Most of these are about people, not technology.
Confirm browser access. Every student needs a modern browser (Chrome, Edge, Safari, Firefox). That's it. No installs, no admin rights, no IT involvement.
Block calendars. Negotiate 2–4 hours per week of protected time per student for 4 weeks. Lighter than BUILD's 4–6 hours but still requires manager air cover.
Frame the vulnerability profile honestly. Segment 8 asks staff to score themselves on 10 failure patterns. They need to know in advance that this is private — only they see their own scores — otherwise they'll game it.
Pick a Slack/Teams channel. One central channel for the cohort. Pattern-spotting in real-life AI conversations is more fun as a group activity.
Optional: pair into buddies. Less critical than BUILD (no debugging required) but still helpful for the reflection exercises in Week 3.
Send the kickoff email (template in Section 8) the Friday before Week 1.
Schedule a 30-minute kickoff call on Day 1 — set expectations, name the pattern vocabulary the team will use from now on, answer questions.
Schedule a 30-minute mid-point check at the end of Week 2. The vulnerability profile in Segment 8 is the dropout risk point — some students find it confronting. A mid-point check catches anyone who's gone quiet.
Schedule a 60-minute showcase at the end of Week 4. Each student walks through their Practice Document in 4 minutes. Use the rubric in Section 6. (The Risk Report stays private — see Section 6.)
Brief your boss. See Section 9. SHARP is much easier to justify than BUILD because the artefact (the Risk Report) maps cleanly to compliance and risk reduction language leadership already speaks.
Section 4
The 5 most common stuck-points
SHARP has fewer technical failure modes than BUILD. The friction is mostly psychological — people having to admit they've been making the same mistake for months.
"I don't see myself in any of these patterns"
Hits in Week 1, around Segments 2–4 when the Machine Patterns are introduced. Some students are convinced they don't fall for AI tricks.
Don't argue. Tell them to keep going to Segment 8 (Vulnerability Profile) and then re-watch Segments 2–4 with their own scores in front of them. The denial usually evaporates around Segment 8. If it doesn't, they're either genuinely advanced (rare, but possible) or not engaging — flag for a 1:1.
"The vulnerability score feels confronting / I scored badly"
Segment 8. The student is shaken by their own honest score and wants to quit.
Reassure them in private. Say two things, in this order: (1) "the score is supposed to feel high — that's the whole point, you're identifying real exposure," (2) "the only score that matters is the delta in Segment 20, and the people who score highest in Week 2 typically improve the most by Week 4." Quote the Segment 8 prose: "a high score doesn't mean you're bad at your job. It means you're human and you've been using AI without knowing the rules."
"I don't have time for the practice exercises"
Hits in Week 3 (Segments 11–15). The interventions and Source Challenge protocols require active practice, not passive reading.
Tell them to practise during their NEXT real AI conversation, not in addition to it. The interventions take seconds — saying "how do you know this?" out loud once is the entire exercise. Frame it as "use it once today," not "block 30 minutes to practise."
"I'm not sure how to apply this in my actual role"
Week 3, around the sector application segments. Student understands the patterns abstractly but can't connect them to their day job.
Ask them to bring one real AI conversation from the past week to the next 1:1. Walk through it together and name the patterns that appeared. One concrete example breaks the abstraction. After that they'll see the patterns everywhere.
"My Risk Report came out scary — what do I do with it?"
Segment 20 / capstone. Student generated their Risk Score Report and the High or Very High band feels alarming.
Two answers: (1) the Risk Report is a personal artefact — they decide whether to share it, with whom. They don't have to show it to anyone. (2) The point of the report is the watch points and recommended interventions, not the score itself. Tell them to pick the top 1–2 watch points and use the listed interventions for 30 days. Re-take the assessment at day 30 — the score will move.
Section 5
How to monitor without micromanaging
SHARP runs lighter than BUILD, but the same principle applies — watch for silence, not for activity.
Weekly Slack check-in. One message: "Where is everyone? Drop your segment number." 30 seconds of effort.
Watch for silence. SHARP students who go quiet for 5+ days are usually wrestling with the vulnerability profile, not the content. DM them privately. Don't make it a big deal.
Don't ask to see anyone's vulnerability score. Ever. The Segment 8 score is private by design. Asking destroys the honesty the course depends on. Students can volunteer, but never ask.
Encourage pattern-spotting in the channel. Once people learn the M1–M7 vocabulary, they start seeing patterns in their own AI conversations everywhere. Encourage them to drop screenshots in the channel: "found an M5 in the wild — caveat that changes nothing." This is the moment the course actually changes behaviour.
Section 6
Practice Document Review Rubric (Segment 20)
The Practice Document is SHARP's capstone — a single document combining 5 components from across the course. Each student walks through theirs in 4 minutes at the showcase. Score against these 5 criteria. Total out of 100. Pass at 60+. Standout at 85+.
Vulnerability Profile
From Segment 8. Honestly scored. Top 3 patterns identified with at least one written reflection per pattern (a specific moment, not a vague "I sometimes do this").
20 pts
Session Protocol
From Segment 18. A personal protocol for AI use, specific to their actual role. Names which interventions they will use and when. Not generic "be careful" advice.
20 pts
Audited Session
From Segment 12. A real AI conversation from their work, annotated with the M1–M7 patterns that appeared. At least 3 patterns correctly identified.
20 pts
Sector Risk Brief
From Segment 15. A short brief on the AI risks specific to their industry. Concrete — 2+ real-world examples or scenarios. Not theoretical.
20 pts
Team Recommendation
From Segment 19. Actionable recommendation for their actual team — what to change, who to involve, what to measure. Should be something a manager could implement next week.
20 pts
Bonus: the Risk Report
In addition to the Practice Document, every student also generates a downloadable Risk Score Report in Segment 20 — comparing their Week 2 vulnerability score to their Week 4 score, with watch points and recommended interventions. The Risk Report is private to the student. Don't grade it, don't ask to see it, don't make it a deliverable. But if they choose to share it with you, that's the artefact you can compile across the cohort to show leadership "the average team member reduced their AI risk score by X%."
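If several students do volunteer their reports, the cohort figure leadership wants is just the mean percentage reduction between each student's Week 2 and Week 4 scores. A minimal sketch of that arithmetic, assuming you've collected the two scores per volunteering student (the function name, the tuple layout, and the numeric scale are illustrative assumptions, not part of SHARP itself):

```python
# Aggregate anonymised (Week 2, Week 4) vulnerability scores into one
# cohort number. Field layout and scale are illustrative — use whatever
# the Risk Score Report actually outputs.
from statistics import mean

def cohort_risk_reduction(scores: list[tuple[float, float]]) -> float:
    """Mean percentage reduction across (week2, week4) score pairs."""
    # Per-student reduction as a percentage of their starting score;
    # skip anyone with a zero starting score to avoid division by zero.
    reductions = [(w2 - w4) / w2 * 100 for w2, w4 in scores if w2 > 0]
    return round(mean(reductions), 1)

# Example: three volunteered (Week 2, Week 4) score pairs.
volunteered = [(72, 45), (60, 48), (85, 51)]
print(f"Average risk score reduction: {cohort_risk_reduction(volunteered)}%")
# → Average risk score reduction: 32.5%
```

Note the sketch averages per-student percentage reductions rather than comparing cohort totals, so one high scorer can't dominate the headline number.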
Section 7
Time commitment — yours, not theirs
SHARP is significantly lighter on the manager than BUILD because there's no debugging.
Total: roughly 4–5 hours of your time across 4 weeks, for a team of up to 25. If you're spending more than that, you've drifted into being the helpdesk — re-route to the cohort channel and the buddies.
Section 8
Email templates — copy, paste, send
Three ready-to-use emails. SHARP is a 4-week course so there's one fewer than BUILD's set.
📨 Kickoff — send the Friday before Week 1
Subject: We're starting SHARP on Monday — 4 weeks, no installs, real change
Hi all,
Starting Monday, [team name] is running through the EverythingThreads SHARP course — 20 short segments over 4 weeks that give you the language and tools to spot AI mistakes BEFORE they make it into your work.
This isn't AI safety theatre. It's not a 90-minute video. It's a structured programme where you'll:
• Learn the 7 specific ways AI gets things wrong, and the 10 ways humans miss it
• Score your own vulnerability profile (privately — only you see it)
• Practise specific intervention scripts you can use in real AI conversations
• Build a personal session protocol for your actual role
• Generate a Risk Score Report at the end showing exactly how you've changed
What I need from you before Monday:
1. Block 2–4 hours per week in your calendar for the next 4 weeks. I've already started doing this for you.
2. Be ready to be honest with yourself in Week 2. The vulnerability profile is private — only you see it — but it only works if you score honestly.
3. Read this email :)
How it'll run:
• Self-paced segments in your own time
• Drop your segment number once a week in [#sharp-cohort]
• Pattern-spotting is a team sport — share screenshots in the channel when you find a pattern in the wild
• Stuck or have a question? Ping the channel first, then me
One thing I want to make clear: this course doesn't try to scare you. It tries to give you specific words for problems you've already noticed but didn't have a name for. By Week 4 you'll be calling out patterns ("that's an M1 — the Agreement Trap") in your own AI conversations without thinking about it. That's the point.
Kickoff call Monday at [time]. Just to set expectations and answer questions.
Course link: https://everythingthreads.com/course-sharp
[Your name]
📨 Mid-point — send start of Week 3
Subject: Halfway through SHARP — and a check on Week 2
Hi all,
Halfway. You should be around Segment 10–11 by now. From here on, the course shifts from "naming the patterns" to "intervening when you see them."
Quick check on Week 2: how was the vulnerability profile? If you scored higher than you expected, that's normal — most people do. Remember the only score that matters is the delta you'll see in Segment 20, and the people who scored highest in Week 2 are usually the ones who improve the most by Week 4. Don't quit because the number was confronting.
Two things to do this week:
1. Pick ONE intervention from Segment 11 — Source Challenge, 3-Minute Check, Pattern Naming, whatever resonates — and use it once in a real AI conversation. Just once. That's the exercise.
2. Drop one screenshot in [#sharp-cohort] of an AI conversation where you spotted a pattern in the wild. Doesn't have to be a big one. A tiny example is fine.
Mid-point call: [date/time]. Bring questions, bring screenshots, bring the moments that surprised you.
[Your name]
📨 Showcase invite — send start of Week 4
Subject: SHARP showcase next [day] — bring your Practice Document
Hi all,
We're nearly there. Showcase is on [date/time], 60 minutes. Each of you gets 4 minutes to walk us through your Practice Document — the 5-component summary of everything you've built across the course.
What I need from you on the day:
1. Your Practice Document, ready to share screen
2. Your Risk Score Report from Segment 20 (ONLY if you choose to share it — it's private)
3. A 30-second pitch on the ONE thing that's changed about how you use AI now
4. A 60-second walkthrough of your audited session — show one real conversation you analysed, name the patterns
5. 60 seconds on what you'd recommend the team change
I'll be scoring against the SHARP Practice Document rubric (5 criteria, 100 points total). Standout submissions get featured internally + shared with the EverythingThreads team for inclusion in the next cohort's case studies.
You don't have to share your vulnerability score or your Risk Report — those are yours. But if you DO share, I'll compile the cohort numbers (anonymously) so we can show leadership how the team's overall AI risk profile changed. That number is what unlocks the next budget cycle.
Four weeks of work. Bring it.
[Your name]
Section 9
ROI talking points — for briefing your own boss
SHARP is much easier to justify to leadership than BUILD because it maps cleanly onto compliance and risk language they already speak.
The pitch in one sentence
"For under £150 per head, every member of [team] will finish with a documented vulnerability profile, a personal session protocol, and a measurable reduction in their AI risk score — exactly the kind of evidence the EU AI Act's Article 4 AI literacy obligation asks us to demonstrate."
EU AI Act Article 4, in application since February 2025. Article 4 requires staff using AI systems to have "a sufficient level of AI literacy." Most companies don't have evidence they've trained staff to that standard. SHARP gives you a per-employee, dated, downloadable Risk Score Report — the kind of documented evidence a regulator would expect to see.
The "rework tax" is real. By some industry estimates, roughly 40% of AI productivity gains are currently lost to staff having to redo AI work because of hallucinations, made-up citations, or confidently wrong outputs. SHARP's Source Challenge protocol directly targets this loss. Risk reduction = productivity gain.
Comparable executive AI courses cost $995–$2,500 per seat. SHARP delivers the same toolkit at £99/head for individual purchase, less for team rollouts.
Standardised vocabulary across the team. When staff can name what went wrong ("that was an M3 — Tailored Response, the AI told me what I wanted to hear"), incident post-mortems become 10x faster. You can't fix what you can't name.
Measurable Week 2 → Week 4 delta per employee. Every student generates a downloadable Risk Score Report comparing their starting vulnerability to their ending score. Aggregate across the team and you have a single number to show leadership.
4 weeks, 2–4 hours/week per person. Light enough that no other meeting needs to move. No installs, no tech support, no IT involvement.
The natural step before BUILD. Once your team has shared AI risk vocabulary, the technically curious subset is much better prepared for the heavier BUILD course. SHARP also helps you identify which staff are motivated enough for that next step.
Section 10
Staff FAQ — share with your team
Do I need any technical skills?
No. SHARP is professional literacy, not coding. If you can use a browser and you've used ChatGPT or any AI tool a few times, you're ready.
How long per week?
2–4 hours. Most students do 30–45 minutes a day, four or five days a week. Lighter than most certification courses.
Will I have to write code?
No. Zero code. SHARP is about how to use AI more carefully, not how to build it. If you want to build, that's a different course (BUILD).
What if my honest scores are embarrassing?
They probably will be — for almost everyone. The Vulnerability Profile in Segment 8 is private by design. Only you see your scores. Your manager doesn't see them. The cohort doesn't see them. The whole point is honest self-assessment, which only works in private.
Can I share my Risk Report with my manager?
Only if you choose to. The Risk Report at the end of Segment 20 is yours. You decide whether to share it and with whom. Some students share aggregate numbers anonymously to help leadership justify the team's training budget — but that's a choice, not a requirement.
Will I get a certificate?
No. SHARP is non-accredited. What you'll get is a personal Vulnerability Profile, a Risk Score Report, a Session Protocol you'll actually use, and a shared vocabulary with your team for AI risk. That's the artefact — not a piece of paper.
Is this just generic AI safety training?
No. Generic AI safety training is "be careful." SHARP is "here are the 7 specific ways AI gets things wrong, the 10 specific ways humans miss it, and the 10 specific intervention scripts you can use to interrupt them in real time." Specific is the difference.
Do I need to take CLEAR first?
Only if you've never used an AI tool at all. If you've spent at least a few hours with ChatGPT, Claude, Gemini, or Copilot, you're ready for SHARP. CLEAR is for the absolute beginner step.
Section 11
When to escalate to EverythingThreads
SHARP is designed to run without vendor support. But if any of these happen, email hello@everythingthreads.com with "SHARP Manager Pack — [your company]" in the subject line:
You hit a stuck-point that isn't covered in Section 4 of this pack
You want to roll SHARP out to a second cohort and need the bulk pricing tier
You want to combine SHARP + BUILD into a single team rollout (significant combined pricing available)
Your team's audited sessions or Risk Reports include particularly powerful examples and you'd like them anonymised for the EverythingThreads case study collection
You need a sector-specific variant of SHARP for a specialised rollout. BUILD for Legal is live now and shows the sector skin model in practice. SHARP for Legal / Finance / Healthcare / Marketing is in the pipeline — you'd be a candidate for the pilot.
EverythingThreads support is email-only by design. You'll get a reply within 2 working days. For genuine emergencies during a paid rollout, mark the email subject "URGENT" and we'll prioritise.
One last thing
Most "AI safety" rollouts fail because they're delivered as a one-off training session that staff sit through and forget. SHARP is different because it's structured as a vocabulary upgrade — by Week 4, your team will be using M1–M7 in conversation without thinking about it. That's the moment the course pays off. Not when they get the certificate (there isn't one). Not when they pass a quiz. The moment a colleague says "wait, that's an M5" in a meeting and everyone knows exactly what they mean. Protect the time, run the cohort, trust the structure. The vocabulary will stick.