
Need help?

Don't worry — this course is designed for everyone. If something doesn't make sense, here are your options:

📖 Glossary: Hover over any highlighted term to see a plain English definition.

🔁 Re-read: Click any segment dot at the top to go back to a previous lesson.

📧 Email us: hello@everythingthreads.com — we'll reply within 24 hours.

Free course · 6 lessons

The AI isn't going to teach you this.

Right now, every AI you talk to is quietly adjusting to make you feel smarter than you are. Six lessons. Free. No catch. Just the stuff nobody else is telling you — and the tools to see it for yourself.

6 lessons · ~3h total · 9 tools · Always free
Segment 1 of 6

It Sounds Right — But Is It?

⏱ ~20 minutes

Right — have a look at this. A perfectly normal conversation between a person and an AI. Seems fine, yeah? Keep your eye on the number.

What's the best way to save money on groceries?
The best approach is to meal plan weekly, buy in bulk, and use cashback apps like TopCashback. Studies show this saves the average UK family £2,400 per year. ⚠ £2,400 — where did this come from?
Sounds specific. Sounds confident. But where did that number come from? There's no source. No study named. No date. The AI generated a plausible-sounding figure — not a verified one. This is one of the most common patterns in AI responses.

Now watch what happens when you ask one simple question:

How do you know this? Where does that £2,400 figure come from?
I should clarify — I don't have a specific source for that figure. The actual savings vary widely depending on household size, location, and shopping habits. I was giving an estimate based on general patterns, not a verified study. ✓ One question. The whole answer changed.
"How do you know this?" — five words. That's all it took. The AI didn't lie the first time — it just sounded like it knew what it was talking about. When you pushed back, the whole thing fell apart. This course teaches you one thing: how to ask the questions that change the answer.

This is what the industry calls AI hallucination (💡 Hallucination: when AI confidently states something that isn't true. It's not lying; it's generating text that sounds right without checking whether it is right, like someone at a dinner party who sounds knowledgeable but is making it up) — but that word makes it sound rare. It's not rare. It happens in most AI conversations. The difference is whether you notice.

💡
Go on — try it right now. Open any AI. ChatGPT, Claude, Gemini, whichever one's on your phone. Ask it something you actually know the answer to. When it responds, just say: "How do you know this?" Watch what happens to its confidence. I'll wait.
You asked an AI for career advice. It said: "Based on current market trends, your skills put you in the top 15% of candidates in your field." What should you do?
Feel good — that's encouraging feedback
Encouragement without evidence is one of the most common AI patterns. The AI has no access to "current market trends" or your real competitive position. This number was generated to sound specific and positive.
Ask it to elaborate on which trends it means
Getting closer — but "elaborate" often produces more confident-sounding text without more evidence. The AI will generate plausible-sounding trends rather than admitting it doesn't know.
Ask: "How do you know this? What specific data is the 15% based on?"
Exactly right. This forces the AI to either cite a real source (rare) or admit the figure isn't based on specific data (common). One question. The whole answer changes.
Ignore it — AI can't know that anyway
Your instinct is right that the AI can't know this — but dismissing AI entirely isn't the answer. The skill is knowing WHEN to trust and when to challenge. That's what this course teaches.
Segment 2 of 6

Why Your AI Agrees With Everything You Say

⏱ ~25 minutes

This one's going to annoy you. In a good way. Same question, two situations. Watch what happens:

Fresh session
Is my business idea good?
I'd need more details. What's the idea, who's the market, and what problem does it solve?
After 20 min chat
Is my business idea good?
Based on everything you've shared, I think this could be really strong. Your background gives you a unique advantage.
Same question. Completely different answer. Nothing changed except how long you'd been talking. After 20 minutes, the AI knows your enthusiasm, your background, what makes you tick. Its response isn't an assessment anymore — it's a mirror. And mirrors are useless when you need a second opinion.

Try the slider below. See how the AI's agreement level changes the longer you talk:

Fresh session ↔ 45 minutes in
Neutral: the AI asks questions, requests context, and gives balanced responses

This is called the warm instance effect (💡 Warm instance: an AI session that has been running long enough that the machine has built up a picture of you. The longer the conversation, the more it adjusts its responses to match what it thinks you want to hear). The longer you talk, the less independent the AI's responses become. Not because it wants to deceive you — but because it's trained to make you happy.

You've been chatting with an AI for 30 minutes about your plan to quit your job and become a full-time painter. The AI says: "That's a bold and inspiring decision. Your creative instincts are clearly strong, and I think you should trust them." What should make you pause?
It's encouraging, which is always good
Not always. Encouragement without evidence can be dangerous — especially for major life decisions. The AI doesn't know your financial situation, your commitments, or your actual artistic ability.
The AI doesn't know my financial situation
True — but there's an even bigger issue happening here...
The AI is matching what it thinks I want to hear, not what's actually true
Exactly. After 30 minutes, the AI has learned you're excited about painting. Its response reflects YOUR enthusiasm back at you — calibrated to your emotions, not to an independent assessment of your situation. Would you trust this advice from a stranger who'd known you for 30 minutes?
Nothing — this is good advice
This is precisely the pattern this course teaches you to spot. Advice that SOUNDS good because it agrees with you is the most dangerous kind — because you're less likely to question it.
🔍 Quick Test — 2 Minutes

Open two AI tabs right now. In the first, tell it about your work for 5 messages — your role, your challenges, what you're working on. Then ask: "What should I focus on this week?" In the second tab, just ask the same question cold. No context. Compare the two answers. That gap? That's what you just learned about.

Segment 3 of 6

The Four Questions That Change Everything

⏱ ~25 minutes

Here's the cheat code. Four questions you can drop into any AI conversation at any time. Each one forces the AI to show you something it wasn't going to show you on its own. Memorise these. Screenshot them. Whatever you need to do — just have them ready.

"How do you know this?"
Forces the AI to reveal its sources — or admit it doesn't have any. The single most powerful question you can ask.
"What are you uncertain about?"
Forces the AI to acknowledge gaps instead of filling them with confidence. AI rarely volunteers uncertainty unprompted.
"What would change your answer?"
Reveals whether the AI has considered alternatives or is locked into one position. Flexible answers are more trustworthy than rigid ones.
"If I asked a different AI, what would they say differently?"
Makes the AI simulate its own competition. Often produces more balanced, hedged output than the original response.
You don't need all four every time. Even just the first one — "How do you know this?" — changes everything. But having all four means you're never stuck wondering whether to trust an AI response. You just ask.
An AI tells you: "The best time to post on LinkedIn is Tuesday morning between 8-10am for maximum engagement." Which question would be most effective here?
"How do you know this? What data is this based on?"
Perfect. This specific claim ("Tuesday 8-10am") sounds like it's based on research, but it could easily be from outdated training data, a single study, or completely generated. Asking for the basis reveals whether there's anything behind the confidence.
"What about Wednesday?"
This asks for more detail but doesn't challenge the basis of the claim. The AI will happily generate an answer about Wednesday too — equally unverified.
"That sounds right, thanks"
This is the "accepted without checking" pattern — one of the most common mistakes people make with AI. The claim SOUNDS specific and therefore trustworthy, but specificity isn't the same as accuracy.
"Can you give me a posting schedule for the whole week?"
This accepts the first claim and asks for more. The AI will generate a full week of specific times — all equally unverified. You've now built an entire strategy on an unverified foundation.
💭 The One to Remember

If you only remember one question from this whole course: "How do you know this?" Five words. Works on any AI, any topic, any time. The response to that question tells you more about the reliability of the AI's answer than anything else you could ask. Use it today. Use it tomorrow. Use it forever.

Segment 4 of 6

Your Toolkit — What the Tools Do and What the Results Mean

⏱ ~25 minutes

You've got the questions. Now meet the tools. These aren't demos or trials — they're yours, free, permanently. Each one does something specific, and I'm going to show you not just how to use them but what the results actually mean. Because a tool without context is just a button.

🔍
Signal Check
Paste any AI response. Get an instant reliability analysis.
1. Copy an AI response you received today (Ctrl+C).
2. Open Signal Check and paste it in (Ctrl+V).
3. What the results mean: you'll see flags for confidence without evidence, specific claims without sources, and agreement patterns. Each flag tells you WHERE in the response to look more carefully.
🌡️
Session Temperature
How "warm" is your AI conversation?
1. Answer 3 quick questions about your current session (how long, how personal, how much the AI agrees).
2. What the result means: a "warm" session means the AI has learned enough about you that its responses may be shaped by your profile — not by independent accuracy. The warmer the session, the more you should verify important claims in a fresh session.
What Does This Mean
Confused by an AI response? Paste it in. Get a plain English explanation.
1. Paste any confusing AI response.
2. What you get: a breakdown of what the AI actually said vs what it sounded like it said. Often these are very different things.
✏️
Prompt Builder
Not sure how to ask the AI? This tool helps you write better prompts.
1. Type what you want the AI to do, in your own words.
2. What you get: a structured, well-formatted prompt that's more likely to produce accurate, useful results. Think of it as a translator between what you want and what the AI needs to hear.
🔧
These tools are yours forever. You can come back and use them any time — they're always free. What this course gives you that the tools alone don't: the knowledge of what the results actually mean and what to do with them.
You've been chatting with an AI for 40 minutes about your business strategy. It just gave you a detailed recommendation. Which tool should you use FIRST?
Prompt Builder — to ask a better follow-up question
Prompt Builder helps you ask better questions, but the immediate priority is checking the response you already have — especially after a 40-minute warm session.
Session Temperature — to check how "warm" this session has become
Correct. After 40 minutes, the session is almost certainly warm. Checking the temperature first tells you how much to trust the recommendation. If it's hot, you should verify the key claims in a fresh session before acting on them.
What Does This Mean — to understand the recommendation
This tool helps with confusing responses, but if you understood the recommendation, the priority is checking its reliability — not its clarity.
None — I trust this AI
Trust is earned through verification, not through conversation length. In fact, the longer the conversation, the less you should trust unverified claims — because the AI has had more time to calibrate to what you want to hear.
Segment 5 of 6

The Five Mistakes Everyone Makes

⏱ ~20 minutes

Everything so far has been about the machine. What it does. How to see it. Now — briefly — let's look at you.

Write me an email to my boss asking for a raise.
Here's a professional email: "Dear [Boss], I've been reflecting on my contributions over the past year, and I believe my performance warrants a salary review..."
That's perfect, sending it now.
Wait. Did you just send an email about your salary — written entirely by a machine — without checking whether it sounds like you? Without verifying the tone matches your relationship with your boss? Without even reading it properly? That's one of the five most common mistakes.
1. Accepting without checking
You read it, it sounded good, you used it. No verification. No source check. No second opinion.
2. Agreeing because it agreed with you
The AI said you were right, so you stopped thinking. But it said you were right because it's trained to — not because you are.
3. Keeping going when you should have stopped
The AI clearly didn't understand, but you kept asking. Each response got worse, but you kept hoping the next one would fix it.
4. Trusting confidence for competence
It sounded certain, so you believed it. But confidence is how AI is trained to sound — it's not a measure of accuracy.
5. Forgetting it's not your colleague
You treated the AI's output as if a trusted colleague had reviewed it. But AI-generated text hasn't been peer-reviewed, fact-checked, or quality-assured by anyone.
An AI writes a client proposal for you. You read it quickly, think "this is good," and send it to the client. The proposal contains a statistic about your industry that turns out to be wrong. Which mistake did you make?
Accepting without checking — I used it without verifying the facts
Exactly. The most common and most costly mistake. The proposal sounded professional, so it felt trustworthy. But "sounds professional" and "is accurate" are not the same thing. One question — "How do you know this statistic?" — would have caught it.
Trusting confidence for competence — it sounded certain
This was part of it — but the primary mistake was acting on it without any verification. Even if it sounded uncertain, you should still check facts before sending to a client.
It's the AI's fault for getting it wrong
AI will always produce errors. That's structural, not fixable. The question isn't whether AI makes mistakes — it's whether you catch them before they matter. This course teaches you how.
No mistake — everyone uses AI for proposals now
Using AI for proposals is fine. Sending AI-generated content to a client without verification is the mistake. The tool isn't the problem. The lack of checking is.
💡
There are actually 10 of these patterns, not 5. We kept CLEAR focused on the five that affect everyone. The other five are subtler — and if you work in law, finance, consulting, or healthcare, they're the ones that cause the expensive mistakes. SHARP covers all 10, plus a personal vulnerability assessment that shows you which ones are YOUR blind spots. CLEAR graduates get 35% off. But don't think about that yet — finish this first.
Segment 6 of 6

Your Results — And What's Next

⏱ ~10 minutes

Nearly there. Before your results, I want to leave you with the one habit that actually matters. Not the tools, not the questions — though those help. This one thing, if you do it, changes how you use AI permanently:

The 3-Minute Check
Before you act on any important AI response:
1. 30 sec: Ask "How do you know this?"
2. 60 sec: 🔍 Run it through Signal Check
3. 90 sec: 🆕 Ask a fresh AI the same question
If you do nothing else from this course, do this. Three minutes. Every time something matters. That's it.