EverythingThreads · SHARP Manager Pack
Internal Use
Train the Trainer · For Department Managers

SHARP · Manager Pack

Everything you need to run the EverythingThreads SHARP course across your team — professional AI literacy, no coding, designed for legal, finance, healthcare, marketing, HR, comms, ops, and any role where staff use AI for real work. It's the simplest way to give a whole team a shared vocabulary for AI risk before they cause an incident, and often the right warm-up before BUILD.

📘 20 segments · 3–4 weeks · 👥 Designed for teams of 5–50 · ⏱ 4–5 hrs of your time across the rollout
Section 1

What SHARP actually delivers

SHARP is a 20-segment, 3–4-week, fully self-paced course that takes a knowledge worker from "AI gives me weird answers sometimes" to a documented vulnerability profile, a personal session protocol, and a downloadable Risk Score Report. No installs. No code. Runs entirely in a browser on any device.

Week 1 · Segments 1–5
The 7 Machine Patterns
M1–M7 — Agreement Trap, Fake Admission, Tailored Response, Confident Guess, Caveat That Changes Nothing, Redirect, The Fold. Original methodology, named and explained with real examples.
Week 2 · Segments 6–10
Human Failures + Vulnerability Profile
The 10 human failure patterns, then a private vulnerability self-assessment in Segment 8 (the foundation for the final Risk Score). This is the most personal week of the course.
Week 3 · Segments 11–15
Interventions + Sector Application
The Source Challenge, the 3-Minute Check, the Position Anchor, and other intervention scripts students can use mid-conversation with AI. Then sector-specific risk briefing.
Week 4 · Segments 16–20
Practice Document + Risk Report
Build a personal session protocol, audit a real session, draft a team recommendation, then re-take the vulnerability profile in Segment 20 to generate a downloadable Risk Score Report — the artefact you can share with leadership.
Why SHARP, why now

Most "AI safety training" is a 90-minute video and a tick-box quiz. SHARP is the opposite — 20 short segments, a personal vulnerability profile, hands-on intervention practice, and an actual measurable change between Week 2 and Week 4. The point isn't to scare people. The point is to give your team specific language ("that's M1 — the Agreement Trap") so they can interrupt their own AI mistakes in real time, without slowing down their day job.

Section 2

Who SHARP is for (and who it isn't)

SHARP is the broadest of the three EverythingThreads courses. If your team uses AI in their work in any meaningful way, they probably belong here.

✓ Good fit
⚠ Wrong fit — find them a different course
⚡ The combined play: SHARP → BUILD

SHARP works brilliantly as a standalone course. It also works as a warm-up before BUILD. Run SHARP across your whole team first (4 weeks, light touch, 2–4 hours/week each). Then enrol the technically curious subset into BUILD afterwards. SHARP gives the whole team a shared vocabulary for AI risk — which makes everything BUILD teaches land harder. It's also the cheapest way to find out who's actually motivated enough to commit to the heavier 4–6 hour weekly load BUILD requires. Combined SHARP + BUILD pricing for team rollouts is significantly better than buying them separately — email hello@everythingthreads.com for the combined tier.

Section 3

Pre-rollout checklist

SHARP is much lighter to set up than BUILD because there's nothing to install. Most of these are about people, not technology.

  1. Confirm browser access. Every student needs a modern browser (Chrome, Edge, Safari, Firefox). That's it. No installs, no admin rights, no IT involvement.
  2. Block calendars. Negotiate 2–4 hours per week of protected time per student for 4 weeks. Lighter than BUILD's 4–6 hours but still requires manager air cover.
  3. Frame the vulnerability profile honestly. Segment 8 asks staff to score themselves on 10 failure patterns. They need to know in advance that this is private — only they see their own scores — otherwise they'll game it.
  4. Pick a Slack/Teams channel. One central channel for the cohort. Pattern-spotting in real-life AI conversations is more fun as a group activity.
  5. Optional: pair into buddies. Less critical than BUILD (no debugging required) but still helpful for the reflection exercises in Week 3.
  6. Send the kickoff email (template in Section 8) the Friday before Week 1.
  7. Schedule a 30-minute kickoff call on Day 1 — set expectations, name the pattern vocabulary the team will use from now on, answer questions.
  8. Schedule a 30-minute mid-point check at end of Week 2. The vulnerability profile in Segment 8 is the dropout risk point — some students find it confronting. A mid-point check catches anyone who's gone quiet.
  9. Schedule a 60-minute Risk Report review at end of Week 4. Each student walks through their Risk Report in 4 minutes. Use the rubric in Section 6.
  10. Brief your boss. See Section 9. SHARP is much easier to justify than BUILD because the artefact (the Risk Report) maps cleanly to compliance and risk reduction language leadership already speaks.
Section 4

The 5 most common stuck-points

SHARP has fewer technical failure modes than BUILD. The friction is mostly psychological — people having to admit they've been making the same mistake for months.

"I don't see myself in any of these patterns"
Hits in Week 1, around Segments 2–4 when the Machine Patterns are introduced. Some students are convinced they don't fall for AI tricks.
Don't argue. Tell them to keep going to Segment 8 (Vulnerability Profile) and then re-watch Segments 2–4 with their own scores in front of them. The denial usually evaporates around Segment 8. If it doesn't, they're either genuinely advanced (rare, but possible) or not engaging — flag for a 1:1.
"The vulnerability score feels confronting / I scored badly"
Segment 8. The student is shaken by their own honest score and wants to quit.
Reassure them in private. Say two things, in this order: (1) "the score is supposed to feel high — that's the whole point, you're identifying real exposure," (2) "the only score that matters is the delta in Segment 20, and the people who score highest in Week 2 typically improve the most by Week 4." Quote the Segment 8 prose: "a high score doesn't mean you're bad at your job. It means you're human and you've been using AI without knowing the rules."
"I don't have time for the practice exercises"
Hits in Week 3 (Segments 11–15). The interventions and Source Challenge protocols require active practice, not passive reading.
Tell them to practise during their NEXT real AI conversation, not in addition to it. The interventions take seconds — saying "how do you know this?" out loud once is the entire exercise. Frame it as "use it once today", not "block 30 minutes to practise".
"I'm not sure how to apply this in my actual role"
Week 3, around the sector application segments. Student understands the patterns abstractly but can't connect them to their day job.
Ask them to bring one real AI conversation from the past week to the next 1:1. Walk through it together and name the patterns that appeared. One concrete example breaks the abstraction. After that they'll see the patterns everywhere.
"My Risk Report came out scary — what do I do with it?"
Segment 20 / capstone. Student generated their Risk Score Report and the High or Very High band feels alarming.
Two answers: (1) the Risk Report is a personal artefact — they decide whether to share it, with whom. They don't have to show it to anyone. (2) The point of the report is the watch points and recommended interventions, not the score itself. Tell them to pick the top 1–2 watch points and use the listed interventions for 30 days. Re-take the assessment at day 30 — the score will move.
Section 5

How to monitor without micromanaging

SHARP runs lighter than BUILD, but the same principle applies — watch for silence, not for activity.

Section 6

Practice Document Review Rubric (Segment 20)

The Practice Document is SHARP's capstone — a single document combining 5 components from across the course. Each student walks through theirs in 4 minutes at the showcase. Score against these 5 criteria. Total out of 100. Pass at 60+. Standout at 85+.

Vulnerability Profile
From Segment 8. Honestly scored. Top 3 patterns identified with at least one written reflection per pattern (a specific moment, not a vague "I sometimes do this").
20 pts
Session Protocol
From Segment 18. A personal protocol for AI use, specific to their actual role. Names which interventions they will use and when. Not generic "be careful" advice.
20 pts
Audited Session
From Segment 12. A real AI conversation from their work, annotated with the M1–M7 patterns that appeared. At least 3 patterns correctly identified.
20 pts
Sector Risk Brief
From Segment 15. A short brief on the AI risks specific to their industry. Concrete — 2+ real-world examples or scenarios. Not theoretical.
20 pts
Team Recommendation
From Segment 19. Actionable recommendation for their actual team — what to change, who to involve, what to measure. Should be something a manager could implement next week.
20 pts
Bonus: the Risk Report

In addition to the Practice Document, every student also generates a downloadable Risk Score Report in Segment 20 — comparing their Week 2 vulnerability score to their Week 4 score, with watch points and recommended interventions. The Risk Report is private to the student. Don't grade it, don't ask to see it, don't make it a deliverable. But if they choose to share it with you, that's the artefact you can compile across the cohort to show leadership "the average team member reduced their AI risk score by X%."

Section 7

Time commitment — yours, not theirs

SHARP is significantly lighter on the manager than BUILD because there's no debugging.

Total: roughly 4–5 hours of your time across 4 weeks, for a team of up to 25. If you're spending more than that, you've drifted into being the helpdesk — re-route to the cohort channel and the buddies.

Section 8

Email templates — copy, paste, send

Three ready-to-use emails. SHARP is a 4-week course so there's one fewer than BUILD's set.

Section 9

ROI talking points — for briefing your own boss

SHARP is much easier to justify to leadership than BUILD because it maps cleanly onto compliance and risk language they already speak.

The pitch in one sentence

"For under £150 per head, every member of [team] will finish with a documented vulnerability profile, a personal session protocol, and a measurable reduction in their AI risk score — which is exactly what the EU AI Act's AI literacy requirement (Article 4, applicable since February 2025) asks us to demonstrate."

Section 10

Staff FAQ — share with your team

Do I need any technical skills?
No. SHARP is professional literacy, not coding. If you can use a browser and you've used ChatGPT or any AI tool a few times, you're ready.
How long per week?
2–4 hours. Most students do 30–45 minutes a day, four or five days a week. Lighter than most certification courses.
Will I have to write code?
No. Zero code. SHARP is about how to use AI more carefully, not how to build it. If you want to build, that's a different course (BUILD).
What if my honest scores are embarrassing?
They probably will be — for almost everyone. The Vulnerability Profile in Segment 8 is private by design. Only you see your scores. Your manager doesn't see them. The cohort doesn't see them. The whole point is honest self-assessment, which only works in private.
Can I share my Risk Report with my manager?
Only if you choose to. The Risk Report at the end of Segment 20 is yours. You decide whether to share it and with whom. Some students share aggregate numbers anonymously to help leadership justify the team's training budget — but that's a choice, not a requirement.
Will I get a certificate?
No. SHARP is non-accredited. What you'll get is a personal Vulnerability Profile, a Risk Score Report, a Session Protocol you'll actually use, and a shared vocabulary with your team for AI risk. That's the artefact — not a piece of paper.
Is this just generic AI safety training?
No. Generic AI safety training is "be careful." SHARP is "here are the 7 specific ways AI gets things wrong, the 10 specific ways humans miss it, and the 10 specific intervention scripts you can use to interrupt them in real time." Specific is the difference.
Do I need to take CLEAR first?
Only if you've never used an AI tool at all. If you've spent at least a few hours with ChatGPT, Claude, Gemini, or Copilot, you're ready for SHARP. CLEAR is for the absolute beginner step.
Section 11

When to escalate to EverythingThreads

SHARP is designed to run without vendor support. But if any of these happen, email hello@everythingthreads.com with "SHARP Manager Pack — [your company]" in the subject line:

EverythingThreads support is email-only by design. You'll get a reply within 2 working days. For genuine emergencies during a paid rollout, mark the email subject "URGENT" and we'll prioritise.

One last thing

Most "AI safety" rollouts fail because they're delivered as a one-off training session that staff sit through and forget. SHARP is different because it's structured as a vocabulary upgrade — by Week 4, your team will be using M1–M7 in conversation without thinking about it. That's the moment the course pays off. Not when they get the certificate (there isn't one). Not when they pass a quiz. The moment a colleague says "wait, that's an M5" in a meeting and everyone knows exactly what they mean. Protect the time, run the cohort, trust the structure. The vocabulary will stick.