AI Clarity — Build
Week 1 · Segment 1 of 28
Segment 1 of 28 · Week 1

What You're About to Build

⏱ ~25 min · 📖 No setup required · 🎯 Understanding + 1 micro-challenge

Look — before we get into anything. You've put money down on this, or someone has for you, and you want to know it's worth your time. Fair enough. So here's what I'll say: by the end of this week you'll have a live website. By Week 4 you'll have built things most people don't believe a non-developer can build. I didn't believe it either, until I did it. Let's go.

🎬
Welcome to BUILD — See What's Coming
3 min · Kariem demoing the 3 finished products live
"In 4 weeks, you'll have built all three of these. From scratch. Let me show you."
Let's address this first

"If AI can write code, why am I learning to build?"

Good question. Here's the answer: you're not learning to become a software developer. You're learning to build AI-powered tools that work for you. The people who understand how these systems connect — how APIs work, how to orchestrate multiple AIs, how to deploy and maintain tools — are the people who control AI instead of just using it.

In 2026, companies aren't looking for people who can write code from memory. They're looking for people who understand technology well enough to guide it, question it, and build with it. That's what this course teaches. In 4 weeks.

Before you touch a keyboard, see where you're going. By Week 4, you will have built and deployed all three of these — and you'll own them:

🌐
Your Website
Live, deployed, your design. Dark theme, glassmorphic panels, mobile-responsive. Hosted for free on Netlify. Your URL, your brand.
Weeks 1-2
🤖
Your AI Tool
A working tool that sends input to AI APIs and displays analysed results. Like Signal Check — but built by you, for your use case.
Weeks 2-3
🧩
Your Chrome Extension
A browser extension that analyses web pages using AI. Popup, side panel, API connection. Installable, functional, yours.
Week 3

Right. That's the destination. Now let's start the journey. It begins with an editor — the place where you'll write every line of code for the next four weeks. It's free, it takes five minutes to install, and it's the same editor most professional developers use. Let's go.

These are not tutorials you follow and forget. You own everything. Your code, your deployment, your domain. When the course ends, your system keeps running. That's the difference between watching a tutorial and building something real.
Why this course — not YouTube, not freeCodeCamp, not The Odin Project
Free courses teach you to code
Generic curriculum
No AI-specific focus
No live sessions or 1:1
No deployment guidance
No sector application
Self-directed, easy to quit
BUILD teaches you to build AI systems
AI-powered tools from Segment 11
Multi-model orchestration
Weekly live + 2× 1:1 with founder
Deployed and live by Week 2
Sector-specific applications
Cohort accountability + peer review

Here's everything you'll learn, and how it connects. The lit box shows where you are now. This diagram updates every segment — you'll always see your position in the system.

Your Complete Stack
🧠
You
Understanding the system
💻
VS Code
Your editor
⌨️
Terminal
Your command line
📄
HTML / CSS / JS
Your code
🐙
GitHub
Version control
🚀
Netlify
Hosting (free)
Cloudflare Workers
API proxy (secure)
🤖
AI APIs
Claude · GPT · Gemini
🧩
Chrome Extension
Browser integration
What You Need
A laptop or desktop (Windows, Mac, or Linux)
An internet connection
Google Chrome installed
Curiosity
You do NOT need: Any coding experience. Any technical background. Any maths. Expensive software. A powerful computer. This course teaches everything from scratch.
This Week: Your Environment (Segments 1-7)
1 What You're About to Build — you are here
2 Setting Up VS Code
3 The Terminal — It's Just a Text Box
4 Git & GitHub — Your Safety Net
5 Node.js & Python — Your Two Languages
6 DNS, Hosting & Deployment
7 Your First Live Website 🎯
~5 hours total · Go at your own pace within the week · Live session at end of week
Micro-Challenge · 2 minutes

Before the next segment, do this one thing: right-click anywhere on this page and click "Inspect" (or press F12). A panel will open showing the code behind this page. Every element you see — every colour, every animation, every card — is made from HTML, CSS, and JavaScript. That's exactly what you're about to learn to build.

Look at the code for 30 seconds. It will look like gibberish right now. Bookmark this moment. In 2 weeks, you'll be writing code that looks like that — and you'll understand every line.

Looking at the stack diagram — which component keeps your API keys secure so they're never exposed in your website code?
GitHub
GitHub stores your code — but API keys should never be in your code at all, even on GitHub.
Netlify
Netlify hosts your website, but it doesn't handle API key security for AI calls.
Cloudflare Workers
Correct. Cloudflare Workers run server-side code that holds your API keys securely. Your website calls the Worker, the Worker calls the AI API with the key. The key never touches the browser. You'll build this in Week 2.
VS Code
VS Code is your editor — it's where you write code on your computer, not where code runs online.
🔓
Blackbird Scope Tier 3 — Unlocked
Full 4-test AI session analysis. Try it →
💡
This is the only segment where you don't install anything. Starting from Segment 2, you build. Every segment after this has a "try it now" checkpoint and a troubleshooting guide if you get stuck. You're never on your own.
Segment 2 of 28 · Week 1

Setting Up VS Code

⏱ ~35 min · 🔓 Unlocks: Blackbird Scope Tier 3 · 💻 Desktop required
Your Stack — You Are Here
🧠
Understanding
💻
VS Code
Installing now
⌨️
Terminal
🎬
Installing VS Code — Step by Step
5 min · Screen recording with voiceover
Video shows the complete installation process on Windows and Mac
⏱ 30-Second Preview — This Is Where You're Heading

Picture this: you type a question in a text box on YOUR website. You hit Send. Three seconds later, Claude's response appears below it. You wrote every line of that code in the editor you're about to install. That's Segment 11. This segment gets you the workspace.

VS Code is your editor — where you'll write everything. It's free, it runs on any operating system, and it's what most developers actually use day-to-day. By the end of this segment, you'll have it installed, configured, and ready. This takes about 20 minutes and you only do it once.

Step 1: Download & Install
Go to code.visualstudio.com
Click the big blue download button. It will detect your operating system automatically.
🪟 Windows
Run the downloaded .exe installer. Accept all defaults. Check "Add to PATH" when prompted — this is important.
🍎 Mac
Open the downloaded .dmg file. Drag VS Code into your Applications folder. Open it once from Applications.
Checkpoint
Can you open VS Code? You should see a Welcome tab with a dark interface.
Great — you're ready for Step 2.
Step 2: Install 4 Essential Extensions

Extensions add superpowers to VS Code. We'll install exactly 4 — no more, no less. These are the ones that matter for everything we'll build.

How to install an extension: Press Ctrl+Shift+X (or Cmd+Shift+X on Mac) to open the Extensions panel. Type the name. Click "Install."
1. Live Server
What it does: Opens your HTML files in a browser that updates automatically when you save. No need to refresh manually — every change appears instantly. You'll use this dozens of times per week.
2. Prettier — Code Formatter
What it does: Automatically tidies your code when you save. Indentation, spacing, line breaks — all handled. Your code will always look clean and professional, even as a beginner.
3. GitLens
What it does: Shows who changed each line of code and when. You won't need this immediately, but when you start working with Git in Segment 4, this becomes essential. Install it now.
4. HTML CSS Support
What it does: Gives you autocomplete suggestions when writing HTML and CSS. Start typing a tag or property and VS Code will suggest the rest. Saves time and prevents typos.
Checkpoint
In VS Code, press Ctrl+Shift+X. Can you see all 4 extensions listed as "installed" with a gear icon next to each?
Perfect. Four extensions — that's all you need to start.
Step 3: Configure Beginner-Friendly Settings

VS Code has hundreds of settings. We'll change exactly 5. These are the ones that make the biggest difference for new developers — based on what the most successful coding courses recommend in 2026.

Press Ctrl+, (comma) to open Settings. Use the search bar at the top to find each setting:

1. Auto Save: Search "auto save" → set to afterDelay
Your files save automatically. You'll never lose work because you forgot to press Ctrl+S.
2. Format On Save: Search "format on save" → check the box ✓
Prettier automatically tidies your code every time you save. Clean code, zero effort.
3. Word Wrap: Search "word wrap" → set to on
Long lines of text wrap instead of scrolling off screen. Much easier to read.
4. Bracket Pair Colorization: Search "bracket pair" → ensure it's enabled (it's on by default in 2026)
Matching brackets get the same colour. Helps you see where blocks of code begin and end.
5. Theme: Search "color theme" → choose Dark+ or any dark theme you like
Easier on the eyes during long sessions. Most developers use dark themes — and all our examples use dark backgrounds.

Five settings. That's it. I promise I won't make you configure anything else for at least two more segments.
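If you'd rather set all five in one go, here's the same configuration as a settings.json sketch. Open the file via the Command Palette ("Preferences: Open User Settings (JSON)"); VS Code's settings file allows comments, and the theme name is whatever you picked:

```json
{
  // 1. Save automatically after a short pause
  "files.autoSave": "afterDelay",
  // 2. Let Prettier tidy your code on every save
  "editor.formatOnSave": true,
  // 3. Wrap long lines instead of scrolling sideways
  "editor.wordWrap": "on",
  // 4. Colour-match bracket pairs (usually on already)
  "editor.bracketPairColorization.enabled": true,
  // 5. Any dark theme you like ("Default Dark+" is the classic)
  "workbench.colorTheme": "Default Dark+"
}
```

Both routes end up in this same file, so click through the Settings UI if you prefer.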

Step 4: Know the 5 Areas of VS Code

VS Code has 5 main areas. You don't need to memorise them — just know they exist so nothing surprises you.

📁
Explorer
Your files & folders (left sidebar)
✏️
Editor
Where you write code (centre)
⌨️
Terminal
Command line (bottom panel)
🧩
Extensions
Add-ons (left sidebar)
🔍
Command Palette
Ctrl+Shift+P
The Command Palette is your secret weapon. Press Ctrl+Shift+P and you can search for ANY action in VS Code. Forgot where a setting is? Command Palette. Want to change the theme? Command Palette. Want to install an extension? Command Palette. It's the one shortcut to remember above all others.
A Note on AI Code Assistants

VS Code now has AI coding assistants built in — GitHub Copilot being the most popular. These tools suggest code as you type, and they can be genuinely helpful.

Some courses tell you to disable them entirely for learning. We disagree — but with a condition.

If you completed CLEAR or SHARP, you already know the patterns: AI agrees with you, AI sounds confident without basis, AI calibrates to what you want. The same patterns apply to code suggestions. Copilot will suggest code that LOOKS right. Your job — the same job from the earlier courses — is to understand what it's suggesting before you accept it. Use it. But verify it. Same principles. Different context.

You've just installed VS Code and 4 extensions. Which keyboard shortcut opens the Command Palette — the search bar that lets you find ANY action in VS Code?
Ctrl+Shift+X
Close — that opens the Extensions panel. Useful, but not the Command Palette. You're in the right neighbourhood though.
Ctrl+Shift+P
Correct. The Command Palette is the single most important shortcut in VS Code. It lets you search for ANY action — settings, themes, commands, extensions, everything. If you only remember one shortcut, this is the one.
Ctrl+,
This opens Settings — useful for changing configuration, but not the Command Palette.
Ctrl+`
This opens the integrated terminal — you'll learn this in Segment 3. But it's not the Command Palette.
🔓
Blackbird Scope Tier 3 — Confirmed
Full analysis suite now available. Open Scope →
⌨️
Shortcuts you've learned so far:
Ctrl+Shift+P — Command Palette (find anything)
Ctrl+Shift+X — Extensions panel
Ctrl+, — Settings
Ctrl+` — Terminal (next segment!)
💡
Mac users: Everywhere this course says Ctrl, you use Cmd instead. Everything else is the same.
Segment 3 of 28 · Week 1

The Terminal — It's Just a Text Box

⏱ ~35 min · 💻 Desktop required · 🔓 Unlocks: Bird's Eye Scope
Your Stack — You Are Here
🧠
Understanding
💻
VS Code
⌨️
Terminal
Learning now
🎬
The Terminal Is Not Scary — Watch This First
3 min · Screen recording: typing wrong commands, nothing breaks
"I'm going to type 10 wrong commands in a row. Watch what happens: absolutely nothing bad."

Let's get this out of the way.

I'm going to be honest — the first time I opened a terminal I stared at it for about thirty seconds, typed nothing, and closed it. That blinking cursor felt like it was judging me. It wasn't. It was just waiting.

Here's what nobody tells beginners: if you type a wrong command, you get an error message. That's it. Nothing explodes. Nothing gets deleted. The terminal says "command not found" and waits patiently for you to try again. I typed a lot of wrong commands. My computer survived all of them.

The terminal is just a text box where you type instructions for your computer. You've been typing instructions into Google for years — this is the same concept, pointed at your own machine instead of the internet.

Step 1: Open the Terminal Inside VS Code

Remember the shortcut we previewed at the end of Segment 2? Now you'll use it for real.

Open VS Code. Press Ctrl+` (that's the backtick key — top-left of your keyboard, next to the 1 key). On Mac: Cmd+`. A panel will open at the bottom of VS Code. That's the terminal.

You'll see a blinking cursor after some text. On Windows it might say PS C:\Users\YourName>. On Mac it might say yourname@MacBook %. Don't worry about what it says — that text just tells you where you are on your computer.

Checkpoint
Can you see a blinking cursor in a panel at the bottom of VS Code?
That blinking cursor is your terminal. Everything you type here goes directly to your computer. Let's try your first command.
Step 2: Prove To Yourself It's Safe

Type this exact nonsense into your terminal and press Enter:

Terminal
asdfghjkl

What happened? You got an error message — something like 'asdfghjkl' is not recognized or command not found: asdfghjkl. And then nothing. Your computer is fine. Your files are fine. The terminal just said "I don't understand" and waited for you to try again.

This is the most important lesson in this segment. The terminal is safe. Wrong commands produce error messages, not disasters. The only genuinely dangerous command is rm (remove files permanently) — and we'll teach you exactly when and how to use it safely.
Step 3: The 8 Commands You'll Actually Use

The terminal has hundreds of commands. You need 8. Here they are, with a "try it now" exercise for each:

pwd
Print Working Directory
What it does: Shows you where you currently are on your computer. Like checking the address on a map.
Try it
pwd
Windows PowerShell: pwd works here too (it's an alias for Get-Location), and the path is already shown in your prompt.
ls
List files
What it does: Shows all files and folders in your current location. Like opening a folder in File Explorer.
Try it (Mac/Git Bash)
ls
Windows PowerShell: ls also works. Or use dir.
cd foldername
Change Directory
What it does: Moves you into a folder. Like double-clicking a folder in File Explorer. Use cd .. to go back up one level.
Try it
cd Desktop
Then type cd .. to go back. Works the same on all systems.
mkdir foldername
Make Directory
What it does: Creates a new folder. This is how you'll create project folders.
Try it
mkdir my-first-project
Now type ls — you should see your new folder listed.
touch filename
Create a file
What it does: Creates an empty file. This is how you'll create HTML, CSS, and JS files.
Try it (Mac/Git Bash)
touch index.html
Windows PowerShell: use ni index.html (New-Item) or echo $null > index.html
code .
⭐ Most important command
What it does: Opens the current folder in VS Code. The dot . means "this folder." This connects your terminal work (Segment 3) back to your editor (Segment 2). You'll use this every single time you start working on a project.
Try it
# Move into your new folder, then open it in VS Code
cd my-first-project
code .
Mac users: if code doesn't work, open VS Code → Cmd+Shift+P → type "shell command" → click "Install 'code' command in PATH"
clear
Clear the screen
Clears all the text in the terminal. Doesn't delete anything — just gives you a clean screen. Use it whenever the terminal feels cluttered.
rm filename
⚠ Use with care
What it does: Permanently deletes a file. No recycle bin. No undo. This is the ONE command to be careful with.
Rules: Only use rm on files you created yourself. Never use rm -rf / (this would attempt to delete everything). Always double-check the filename before pressing Enter.
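Before you need it for real, practise rm somewhere disposable. A sketch for Mac/Git Bash (mktemp -d creates a throwaway temporary folder, so nothing you care about is anywhere near the command; PowerShell users can practise the same habit in any new folder):

```shell
# Practise rm in a throwaway folder: nothing real is at risk here
cd "$(mktemp -d)"      # mktemp -d creates a fresh temporary directory
touch scratch.txt      # make an empty practice file
ls                     # confirm it's there: scratch.txt
rm scratch.txt         # delete exactly that one file
ls                     # confirm it's gone: nothing listed
```

Same habit every time: ls before rm, double-check the filename, then delete.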
Pro Tip: Tab Completion

Start typing a folder or file name and press Tab. The terminal will auto-complete it for you. Try it: type cd my- then press Tab. It completes to cd my-first-project/. This saves enormous amounts of typing and prevents typos. Use it constantly.

Step 4: Put It All Together — The 60-Second Exercise

Do these 6 commands in order. Each one builds on the last:

Complete Exercise
# 1. Go to your Desktop
cd Desktop

# 2. Create a project folder
mkdir ai-project

# 3. Move into it
cd ai-project

# 4. Create your first file
touch index.html     # Mac/Git Bash
# ni index.html      # Windows PowerShell alternative

# 5. Check it's there
ls

# 6. Open it in VS Code
code .
Checkpoint
Did VS Code open showing a folder called "ai-project" with one file called "index.html" inside it?
You just created a project folder and a file using only the terminal — then opened it in VS Code. That's the workflow you'll use for every project in this course.

Right. Take a breath. That was a lot of new words for one segment. If your head's buzzing a bit — good. That means you're actually learning, not just scrolling. Grab a glass of water if you need one. I'll wait.

You've created a project folder and you're inside it. Which command opens this folder in VS Code?
open vscode
This isn't a real command. The VS Code command is much shorter.
code .
There it is. The dot means "this folder." You'll type code . at the start of every project session from now until forever. It connects your terminal to your editor — two tools, one workflow. You'll be typing this in your sleep by Week 2.
start vscode .
Close — start is a Windows command for opening programs, but code . is the cross-platform VS Code command that works everywhere.
vscode --open
Not a real command. The VS Code CLI uses code, not vscode.
🔓
Bird's Eye Scope — Unlocked
Your personal AI interaction profile. Get your profile →
⌨️
Shortcuts you've learned so far:
Ctrl+` — Open terminal in VS Code
Tab — Auto-complete file/folder names
↑ — Recall your last command (press repeatedly to go further back)
Ctrl+C — Cancel a running command (not copy — in terminal this stops things)
💡
Quick reference card — save this mentally: pwd (where am I), ls (what's here), cd (go somewhere), mkdir (make folder), touch (make file), code . (open in VS Code), clear (clean up), rm (delete — carefully). That's it. That's the terminal.
⚡ Where this is going

The terminal you just learned to use is the same one that will deploy your AI tools to the internet. In Segment 11, you'll type a command in this terminal that sends a message to Claude and gets a response back — through code you wrote. Every command you just practised gets you closer to that moment.

Segment 4 of 28 · Week 1

Git & GitHub — Your Safety Net

⏱ ~40 min · 💻 Desktop required · 🔓 Unlocks: Boardroom Access
Your Stack — You Are Here
🧠
Understanding
💻
VS Code
⌨️
Terminal
📄
HTML / CSS / JS
🐙
GitHub
Setting up now
🚀
Netlify
🎬
Git Explained Like You're 5
4 min · Animation: save points in a video game
"Every time you 'commit', you create a save point. You can always go back."

Think of Git as save points in a video game.

⏱ 30-Second Preview

You'll change one line of your AI tool's system prompt, type git push, and 30 seconds later the live version updates automatically. Break something? One command rolls it back. That's what Git gives you.

You're playing a game. Before you fight the boss, you save. If you die, you go back to the save point — you don't restart the whole game.

Git does this for your code. Every time you "commit," you create a save point. If you break something, you go back. Your entire history is preserved. You can never truly lose work if you're using Git.

GitHub is where your save points are stored online. Think of it as cloud backup for your code. It also enables deployment — Netlify reads your code from GitHub and puts your website online automatically.

💻
Your Computer
Git tracks changes locally
🐙
GitHub
Stores your code online
🚀
Netlify
Deploys your website
Step 1: Create a GitHub Account
Go to github.com and sign up for a free account. Choose a professional username — this will be visible on your projects. Verify your email.
Step 2: Install Git
🪟 Windows
Download from git-scm.com. Run the installer. Accept all defaults. This also installs Git Bash — an alternative terminal.
🍎 Mac
Git is often pre-installed. Check by typing git --version in terminal. If it's not there, install Xcode Command Line Tools: xcode-select --install
Verify Git is installed
git --version
# Should show something like: git version 2.43.0
Step 3: Tell Git Who You Are

Git needs your name and email to label your save points. Run these two commands (replace with YOUR details):

Configure Git
git config --global user.name "Your Name"
git config --global user.email "your@email.com"
git config --global init.defaultBranch main
The third command sets "main" as your default branch name — this is the 2026 standard.
Step 4: The 4 Git Commands You'll Use 80% of the Time

Git has hundreds of commands. You need 4. These four cover about 80% of everything you'll ever do with version control:

git add .
"Stage everything" — prepares all your changes to be saved. The dot means "everything in this folder."
git commit -m "message"
"Save point" — creates a snapshot of your code right now, with a description of what you changed.
git push
"Upload" — sends your save points to GitHub so they're backed up online.
git status
"What's changed?" — shows which files have been modified since your last save point.
Step 5: Your First Repository

Let's create a repository (a tracked project), add a file, create a save point, and push it to GitHub. Use the "ai-project" folder from Segment 3:

Your First Git Workflow
# Make sure you're in your project folder
cd ~/Desktop/ai-project

# Turn this folder into a Git repository
git init

# Stage everything (your index.html file)
git add .

# Create your first save point
git commit -m "My first commit"

Now create a repository on GitHub to store it online:

1. Go to github.com/new
2. Name it ai-project
3. Leave everything else as default (don't check "Add a README")
4. Click "Create repository"
5. GitHub will show you commands to connect your local repo — copy the two lines under "push an existing repository"
Connect & Push to GitHub
# Connect your local folder to GitHub (paste YOUR URL from GitHub)
git remote add origin https://github.com/YOURUSERNAME/ai-project.git

# Push your code to GitHub
git push -u origin main
Checkpoint
Go to github.com/YOURUSERNAME/ai-project — can you see your index.html file listed there?
That's your code. On the internet. In a real repository with your name on it. Not bad for someone who didn't know what a terminal was two segments ago. From here, Netlify will read from GitHub and deploy your site automatically. You'll connect that in Segment 6.

Git feels like overkill right now. I know. You're typing three commands to snapshot one file when you could just... save the file. But the moment something breaks — and it will — you'll type one command and everything rolls back. That's not a feature. That's a superpower. Trust the process.
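Here's that superpower as actual commands, in a sketch you can run today inside a throwaway practice repo (the filename and messages are made up for practice; the two git config lines are only needed because this temp repo has no identity set):

```shell
cd "$(mktemp -d)" && git init -q              # disposable practice repo
git config user.name "You"                    # identity for this repo only
git config user.email "you@example.com"
echo "version 1" > notes.txt
git add . && git commit -q -m "save point 1"  # create the save point
echo "oops, broke it" > notes.txt             # simulate a midnight mistake
git restore notes.txt                         # one command: back to the save point
cat notes.txt                                 # prints: version 1
```

git restore throws away uncommitted changes to a file; git revert HEAD undoes the most recent commit with a new one. You'll meet both properly when you need them.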

From now on, your workflow is always: Write code → git add . → git commit -m "what I changed" → git push. Three commands. Every time. This becomes muscle memory within a week.
You've been coding for an hour. You want to create a save point and upload it to GitHub. What's the correct sequence?
git push then git commit
Close, but reversed. You need to create the save point (commit) BEFORE you can upload it (push). Think: save the game, THEN backup to the cloud.
git add . → git commit -m "message" → git push
Correct. Stage → Commit → Push. Every time. add prepares your changes, commit saves them locally, push uploads them to GitHub. This three-step workflow will become second nature.
git save then git upload
These aren't real Git commands. The real ones are git add, git commit, and git push.
Just save the file in VS Code — Git does the rest automatically
Git doesn't auto-save. That's intentional — you CHOOSE when to create save points. This gives you control over what gets tracked and when. Auto-save (which you set up in S2) saves the file locally. Git tracks versions.
🔓
Boardroom Access — Unlocked
Multi-model AI comparison tool. Try it →
💡
I know, I know. One file in one folder. Git feels like overkill. Trust me on this one — future you will be extremely grateful to present you. In two weeks you'll have 20+ files and you'll accidentally break something at midnight. Git lets you undo it in one command. Give it two weeks. Then buy present-you a drink.
⌨️
New shortcuts:
Ctrl+Shift+G — Opens the Git panel in VS Code (visual view of your changes)
↑ in terminal — Recall last command (use this constantly with git add . → git commit → git push)
Segment 5 of 28 · Week 1

Node.js & Python — Your Two Languages

⏱ ~35 min · 💻 Desktop required · 🔓 Unlocks: Bird's Eye Scope
Your Stack — You Are Here
🧠
Understanding
💻
VS Code
⌨️
Terminal
📄
Node / Python
Installing now
🐙
GitHub
🚀
Netlify
🎬
Two Languages, Two Jobs — Why You Need Both
3 min · Animation showing Node.js and Python side by side
"Node runs your web tools. Python runs your scripts. Together they cover everything."

You don't need to learn to code in either language. I want to be clear about that before we go any further. You need to install them so your computer can run the tools that are written in them. Think of it like installing a DVD player — you don't need to know how to make DVDs, you just need the player so you can watch them.

Node.js
Runs JavaScript outside the browser. Powers web tools, servers, and Chrome extensions. You'll use it from Week 2 onwards.
🐍
Python
General-purpose language for scripts, automation, and data. Many AI tools and APIs have Python libraries. You'll use it for automation in Week 3.
Step 1: Install Node.js (LTS Version)
Go to nodejs.org
Click the big green LTS button (currently Node 24 "Krypton"). LTS means Long Term Support — it's the stable version that won't break. Don't click "Current" — that's for testing.
🪟 Windows
Run the .msi installer. Accept all defaults. It will also install npm automatically.
🍎 Mac
Run the .pkg installer. Or if you have Homebrew: brew install node
Verify Node.js is installed
node --version
# Should show: v24.x.x (any 24.x number is fine)

npm --version
# Should show: 11.x.x (npm comes with Node)
Checkpoint
Do both commands show version numbers?
Node.js and npm are installed. npm is your package manager — it installs tools and libraries that other developers have built. You'll use it constantly from Week 2.
What is npm? It stands for Node Package Manager. Think of it as an app store for code. When you type npm install something, it downloads code that someone else wrote and makes it available for your project. You'll use this to install tools, frameworks, and libraries throughout the course.
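You can see npm's "app store" behaviour without downloading anything real yet. A sketch in a throwaway folder (the install line is commented out because you won't need actual packages until Week 2; "express" is just an example package name):

```shell
cd "$(mktemp -d)"      # throwaway folder, nothing real at risk
npm init -y            # creates package.json with default answers
cat package.json       # npm's record of your project and its packages
# npm install express  # later: downloads a package into node_modules/
```

package.json is the "ingredient list" for a project: every package you install gets recorded there, so anyone (including future you) can rebuild the project with one command.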
Step 2: Install Python
Go to python.org/downloads
Click "Download Python 3.14.x" (the latest stable version). The site detects your OS automatically.
⚠ CRITICAL — Windows Users Only
When the installer opens, check the box that says "Add Python to PATH" at the bottom of the first screen. If you miss this, Python won't be accessible from the terminal. This is the #1 beginner installation issue.
🪟 Windows
Run the installer. CHECK "Add Python to PATH". Click "Install Now". At the end, click "Disable path length limit" if prompted.
🍎 Mac
Run the .pkg installer. Or with Homebrew: brew install python. Python may already be installed — check first.
Verify Python is installed
python --version
# Should show: Python 3.14.x
# Mac/Linux users: try python3 --version if python doesn't work

pip --version
# Should show: pip 24.x or similar
# Mac/Linux: try pip3 --version
Checkpoint
Do both python --version and pip --version show version numbers?
Python and pip are installed. pip is Python's package manager — same concept as npm, different language. You'll use it for automation scripts and some AI tools.
What is pip? Same idea as npm, but for Python. pip install something downloads Python packages. The Anthropic SDK (for building with Claude), OpenAI SDK, and many AI tools are Python packages you'll install with pip.
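The pip version of the same idea, as a sketch ("anthropic" is the real package name for Claude's Python SDK, but you don't need to install it until Week 3; on Mac/Linux you may need python3 instead of python):

```shell
python -m pip list                 # what's installed right now (pip itself, at least)
# python -m pip install anthropic  # Week 3: downloads the Anthropic SDK for Claude
```

Running pip as "python -m pip" sidesteps the most common Windows PATH headache: it always uses the pip that belongs to the Python you just installed.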
When You'll Use Each
⚡ Node.js (Weeks 2-4)
Running your website locally (Live Server)
Installing tools with npm
Chrome extension development
Cloudflare Workers (serverless functions)
Build tools and automation
🐍 Python (Week 3-4)
AI API SDKs (Anthropic, OpenAI)
Quick automation scripts
Data processing and analysis
Scheduled tasks and cron jobs
Backend tools if needed
💡
You don't need to choose. Professional developers use both. Node.js is dominant for web tools, Python for AI and data. Having both installed means you can use any tool or tutorial you find online, regardless of which language it's written in.
⚡ Myth vs Reality
Myth: "You need to pay for hosting to have a real website."
Reality: Netlify's free tier gives you 100GB bandwidth and 300 build minutes per month. That's enough for thousands of visitors. The AI tools you'll build in this course will run on free infrastructure — Netlify for hosting, Cloudflare Workers for the API proxy. Professional architecture, zero hosting cost.
You find a tutorial online that says "Install the package by running npm install express." Which language does this package belong to?
Python
Python uses pip install, not npm install. npm is Node.js's package manager.
Node.js / JavaScript
Correct. npm is Node's package manager. If a tutorial says npm install, it's a JavaScript package. If it says pip install, it's Python. That's all you need to know to tell them apart.
Both
npm is specifically Node.js's package manager. Python has its own: pip. The command itself tells you which language the package belongs to.
Neither — it's a terminal command
npm IS a terminal command — but it's Node.js's terminal command specifically. Different languages have different package managers that all run in the terminal.
🔓
Bird's Eye Scope — Unlocked
Your personal AI interaction profile. Get your profile →

Right. Deep breath. That was the heaviest setup segment — more installs in one sitting than most people do in a year. Fancy a breather? It's getting a bit much, I can feel it too. Here's the good news: you're done installing things. Segment 6 is understanding, Segment 7 is building. The heavy lifting is behind you.

⌨️
Your installation checklist so far:
✅ VS Code (Segment 2)
✅ 4 extensions: Live Server, Prettier, GitLens, HTML CSS Support (Segment 2)
✅ Git (Segment 4)
✅ GitHub account (Segment 4)
✅ Node.js + npm (this segment)
✅ Python + pip (this segment)
⬜ Netlify account (next segment)
⚡ Why you just did all that

Node.js is the engine that runs your AI tools. Here's what a few lines of Node can do — you can't run this yet (it needs an API key, which stays safely outside your code, and model names evolve — you'll use whatever's current), but by Segment 11, you will:

const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: { 'x-api-key': apiKey, 'anthropic-version': '2023-06-01', 'content-type': 'application/json' },
  body: JSON.stringify({ model: 'claude-sonnet-4-5', max_tokens: 100, messages: [{ role: 'user', content: 'Hello' }] })
});
const data = await response.json();
console.log(data.content[0].text);
// → "Hello! I'm Claude. How can I help you today?"

A few lines. Your code talks to an AI. That's where we're heading. The setup you just completed makes this possible.

Segment 6 of 28 · Week 1

DNS, Hosting & Deployment — How the Internet Works

⏱ ~30 min · 🌐 Understanding + account setup
Your Stack — You Are Here
📄
HTML / CSS / JS
🐙
GitHub
🚀
Netlify
Setting up now
🎬
How the Internet Works in 3 Minutes
3 min · Animation: URL → DNS → server → browser
"When you type a web address, here's what actually happens behind the scenes."

Quick one before we build anything. You've got Git and GitHub ready from Segment 4, and you'll set up Netlify in a moment. But before we connect everything in Segment 7, it helps to understand what actually happens when someone types a web address into their browser. Takes two minutes.

What Happens When You Type a Web Address
⌨️
1. You type everythingthreads.com
Your browser needs to find where this website lives.
📖
2. DNS looks up the address
DNS is like a phone book. It converts "everythingthreads.com" into a number (IP address) that computers understand.
🖥️
3. The server sends your files
The server (Netlify, in our case) finds your HTML, CSS, and JS files and sends them to the browser.
🌐
4. Your browser renders the page
The browser reads the HTML (structure), applies the CSS (design), and runs the JS (interaction). You see the website.
This entire process takes less than 1 second.
That's it. URL → DNS → server → browser. Every website in the world works this way. What Netlify does is handle the server part for you — for free. You push code to GitHub, Netlify picks it up, and serves it to anyone who visits your URL.
What Netlify Does for You
🚀
Auto-deploys
Push to GitHub → site updates in 30 seconds
🔒
Free HTTPS
SSL certificate included automatically
🌍
Global CDN
Content served from 100+ locations worldwide
💰
Free Tier
100GB bandwidth, 300 build minutes/month
Your Deployment Pipeline (what you set up in S4 + this segment)
You write code
in VS Code
git push
to GitHub
Netlify detects
the push automatically
Site is live
in ~30 seconds
You never need to "upload" files or manage a server. Push to GitHub. Netlify handles the rest.
Step 1: Create Your Netlify Account
1. Go to app.netlify.com/signup
2. Click "Sign up with GitHub" — this connects the two accounts automatically
3. Authorise Netlify to access your GitHub (it needs this to detect when you push code)
4. You'll see your Netlify dashboard — empty for now. That changes in Segment 7.
Checkpoint
Can you see your Netlify dashboard at app.netlify.com?
Your deployment pipeline is ready. In Segment 7, you'll push code to GitHub and watch Netlify deploy it live. That's the most exciting moment in the course.
Your complete environment is now set up.
VS Code (editor) + Terminal (commands) + Git/GitHub (version control) + Node.js (JavaScript runtime) + Python (scripting) + Netlify (hosting). That's the full stack. Segment 7 puts it all together: you'll build a page, push it to GitHub, and watch it appear live on the internet. From your computer to the world in 30 seconds. This is the same pipeline your AI tools will use — the deployment step doesn't change when the code gets smarter.

DNS and hosting aren't sexy topics. I get it. But understanding how the internet actually delivers your website to someone's browser — that's the kind of knowledge that separates people who build things from people who use things other people built. Two more minutes and you'll have it.

You change a line of CSS on your website, save the file, and run git add . && git commit -m "fixed colour" && git push. What happens next?
Nothing — you need to upload the file to Netlify manually
That's the old way (FTP, manual upload). With Netlify's continuous deployment, you never upload manually. Git push is all it takes.
Netlify detects the push, rebuilds, and your live site updates automatically in ~30 seconds
Exactly. That's continuous deployment. Push to GitHub → Netlify detects → site updates. You never touch a server. You never upload files. You just push. This workflow is what professional developers use — and it's what you'll use from Segment 7 onwards.
You need to run a deploy command in Netlify
You CAN use the Netlify CLI for manual deploys, but with the GitHub integration, you don't need to. Push to GitHub is enough — Netlify watches for changes automatically.
The change only appears on GitHub, not on your live website
If Netlify is connected to your GitHub repo (which we just set up), it deploys automatically. The change appears both on GitHub AND on your live site.
💡
Next segment is the one you've been waiting for. Six segments of setup. Six segments of "trust me, this matters." Segment 7 is the payoff. You're going to push code to GitHub, Netlify is going to pick it up, and you're going to visit a URL and see YOUR website — live, on the actual internet. I still remember the first time I did it. It never gets old.
⚡ Myth vs Reality
Myth: "You need to pay for hosting to have a real website."
Reality: Netlify's free tier gives you 100GB bandwidth and 300 build minutes per month. That's enough for thousands of visitors. The AI tools you'll build in this course will run on free infrastructure — Netlify for hosting, Cloudflare Workers for the API proxy. Professional architecture, zero hosting cost.
Segment 7 of 28 · Week 1

Your First Live Website

⏱ ~40 min💻 Desktop required📋 Week 1 quiz gate
Your Stack — Everything Connects
💻
VS Code
⌨️
Terminal
🐙
GitHub
🚀
Netlify
Deploying now

This is the one. Six segments of setup. Six segments of trust. Here's where it all comes together. You're going to write some HTML, push it to GitHub, connect Netlify, and visit a URL that shows YOUR website. Live. On the actual internet. Ready?

🎬
Your First Deploy — Watch It Happen
2 min · Screen recording of the full process in real time
"From empty file to live website in under 5 minutes. This is what you're about to do."
Step 1: Write Your First HTML Page

Open VS Code. Open your ai-project folder (from Segment 3). Open index.html. It's empty right now. Paste this in:

index.html — Your first webpage
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>My First Website</title>
  <style>
    /* Dark theme — looks professional from the start */
    body {
      background: #0b0b0c;
      color: #f3efe9;
      font-family: -apple-system, sans-serif;
      display: flex;
      justify-content: center;
      align-items: center;
      min-height: 100vh;
      margin: 0;
    }
    .card {
      text-align: center;
      padding: 48px;
      max-width: 500px;
    }
    h1 { font-size: 2.5rem; margin-bottom: 12px; }
    p { color: rgba(243,239,233,.6); line-height: 1.7; }
  </style>
</head>
<body>
  <div class="card">
    <h1>Hello, World.</h1>
    <p>I built this. From scratch. And it's live on the internet.</p>
  </div>
</body>
</html>
Don't worry about understanding every line yet. That's Segment 8. Right now, the goal is: write code → push it → see it live. You'll learn what each piece does starting next week. For now, just paste it and save.
Step 2: Preview It Locally

Right-click your index.html file in VS Code's explorer panel. Click "Open with Live Server" (that's the extension from Segment 2). A browser tab should open showing your page — dark background, white text, centred on the screen.

Checkpoint
Can you see "Hello, World." in your browser on a dark background?
That's your code running in the browser. Right now it's only on your machine. Let's put it on the internet — and once it's there, it's the same site that will host your AI tools from Segment 11 onwards.
Step 3: Push to GitHub

You know this workflow now. Open the terminal (Ctrl+`) and run:

The workflow you'll use forever
git add .
git commit -m "My first real webpage"
git push

Three commands. Every time. This is already becoming muscle memory.

Step 4: Connect Netlify to Your Repository
1. Go to app.netlify.com (you logged in during Segment 6)
2. Click "Add new project" → "Import an existing project"
3. Click "GitHub"
4. Find and select your ai-project repository
5. Leave all settings as default — Netlify detects it's a static HTML site
6. Click "Deploy"

Wait about 30 seconds. Netlify will show "Published" with a green checkmark. You'll see a URL that looks something like random-name-12345.netlify.app.

Step 5: Visit Your Live URL

Click the URL Netlify gave you.

Open it on your phone. Send it to a friend. Open it in a different browser. It works everywhere. Because it's on the internet. Your code. Your design. Your URL.

Checkpoint — The Big One
Can you see your "Hello, World." page at a .netlify.app URL — accessible from any device?
That's your website. On the actual internet. Built from an empty folder, pushed through GitHub, deployed by Netlify — in under 10 minutes. Send that link to someone. Go on. You've earned it.

I'm not going to pretend this wasn't a lot of work to get here. Six segments of setup. You installed VS Code, learned the terminal, set up Git, created a GitHub account, installed Node.js, installed Python, created a Netlify account. And now you've got a live website. That's Week 1 done. Properly done.
Bonus: Change Your Site Name

That random URL isn't great. In Netlify: Site settings → Change site name. Type something you like — it becomes your-chosen-name.netlify.app. Free. Instant.

Week 1 Assessment — Pass to unlock Week 2
You've changed some CSS on your website. What's the correct sequence to make the change appear on your live site?
Save the file and refresh the Netlify URL
Saving locally doesn't update the live site. The change needs to travel through your deployment pipeline: save → git push → Netlify deploys.
Save the file, then git add . → git commit -m "message" → git push
That's the one. Save → stage → commit → push. Netlify picks up the push automatically and deploys in about 30 seconds. This workflow is your new normal. Every change you make from now on follows this exact path.
Upload the file to Netlify manually
You could drag-and-drop to Netlify, but that defeats the purpose of the GitHub connection. The power of continuous deployment is: push to GitHub, everything else is automatic.
Email the file to Netlify support
I appreciate the confidence in Netlify's support team, but no. Your deployment pipeline is fully automated — git push is all it takes.
🏁 What You Built This Week
🛠
6 tools installed
VS Code, Git, Node, Python, npm, pip
💻
8 terminal commands
cd, ls, mkdir, git, node, python, npm, pip
🌐
1 live website
Your URL, your code, your deploy

Seven days ago you didn't have any of this. Now you have a deployed website and every tool you need to build AI applications. Week 2 starts building them.

Week 1 Complete.
You started with nothing. You now have: VS Code configured, terminal skills, Git version control, GitHub repository, Node.js, Python, a Netlify account, and a live website. Week 2 is where it gets interesting — you'll learn what every line of that HTML actually does, how to make it look beautiful with CSS, and how to make it interactive with JavaScript.
💡
Quick reminder: This week's live session covers any setup issues. If something didn't work, bring it. That's what the sessions are for. If everything worked — come anyway. The group energy is worth it.
Segment 8 of 28 · Week 2

HTML — Structure

⏱ ~40 min💻 Desktop required🔓 Unlocks: Boardroom access
Your Stack — Week 2
📄
HTML
Learning now
🎨
CSS
JavaScript

Remember that HTML you pasted in Segment 7? You didn't need to understand it then. You do now. HTML is the skeleton of every webpage — the structure that holds everything in place. CSS makes it look good (next segment). JavaScript makes it do things (Segment 10). This segment is about the skeleton — and every tag you learn here shows up in the AI tool you'll build in Segment 12. The text input where users type their prompt? HTML. The button that sends it? HTML. The box that displays the AI response? HTML. You're building the interface for your AI tool right now.

Week 2. The building starts. Everything from here is about making things — not installing things. If Week 1 felt slow, this is where the pace picks up. Ready?

🎬
HTML Explained Through Building
5 min · Building a real multi-section page from scratch, narrated
"We're not going to list every HTML tag. We're going to build a page, and I'll explain each piece as we use it."
We're going to rebuild your index.html from scratch — same file, but this time you'll understand every line. By the end, you'll have a proper multi-section page with a header, navigation, content areas, and a footer. Let's get into it.
The 3 Things You Need to Know About HTML
1. Everything is a tag
Tags have an opening (<h1>) and a closing (</h1>). Content goes between them. That's it. That's HTML.
2. Tags nest inside each other
A <div> can contain a <h1> which can contain a <span>. It's boxes inside boxes. The bracket pair colours in VS Code (from Segment 2) show this nesting.
3. Tags can have attributes
Attributes give extra information. <a href="url"> makes a link. <img src="image.jpg"> shows an image. The tag says WHAT, the attribute says HOW.
The 12 Tags That Build 90% of Webpages

HTML has over 100 tags. You need about 12. Here they are — grouped by what they do:

Structure
<header> <nav> <main> <section> <footer>
Content
<h1>–<h6> <p> <a> <img>
Grouping
<div> <span> <ul> / <li>
Exercise: Build a Multi-Section Page

Replace everything in your index.html with this. Read the comments (the grey text) — they explain what each piece does:

index.html — A real multi-section page
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>My AI Project</title>
</head>
<body>

  <!-- HEADER: Top of the page, usually your name or brand -->
  <header>
    <h1>My AI Project</h1>
    <p>Built from scratch during the AI Clarity Programme</p>
  </header>

  <!-- NAV: Links to different sections or pages -->
  <nav>
    <a href="#about">About</a>
    <a href="#tools">Tools</a>
    <a href="#contact">Contact</a>
  </nav>

  <!-- MAIN: The primary content of your page -->
  <main>
    <section id="about">
      <h2>About</h2>
      <p>This is my first real website. I'm learning to build AI-powered tools.</p>
    </section>

    <section id="tools">
      <h2>My Tools</h2>
      <ul>
        <li>Signal Check</li>
        <li>Session Temperature</li>
        <li>Multi-Model Compare</li>
      </ul>
    </section>

    <section id="contact">
      <h2>Contact</h2>
      <p>Email me at <a href="mailto:you@email.com">you@email.com</a></p>
    </section>
  </main>

  <!-- FOOTER: Bottom of the page -->
  <footer>
    <p>&copy; 2026 My AI Project</p>
  </footer>

</body>
</html>
Checkpoint
Save the file. Does Live Server show a page with a heading, navigation links, three sections, and a footer? (It will look plain and unstyled — that's CSS's job, next segment.)
It looks ugly right now. That's normal — HTML without CSS is just raw content with default browser styling. That plain page IS the skeleton. Next segment dresses it up. Push it to GitHub while you're at it: git add . && git commit -m "multi-section page" && git push
What's the difference between <section> and <div>?
There is no difference
They both group content, but <section> tells the browser (and screen readers) "this is a distinct section of content" while <div> is a generic container with no meaning. Use <section> when the content has a clear theme.
<section> has semantic meaning — it tells the browser this is a distinct content area. <div> is a generic container.
Spot on. Semantic HTML matters — it helps screen readers, search engines, and anyone reading your code understand what's what. It's the difference between labelling a box "kitchen items" vs just "stuff." Small thing, big difference.
<div> is newer and better
Actually the opposite — <div> is older. Semantic tags like <section>, <header>, <nav> were introduced in HTML5 specifically to add meaning that <div> doesn't provide.
<section> is only for text content
Sections can contain anything — text, images, forms, other elements. The point is that the content within a section is thematically related.
💡
Emmet shortcut: In VS Code, type ! and press Tab. It generates the entire HTML boilerplate instantly. You'll never type <!DOCTYPE html> by hand again.
Segment 9 of 28 · Week 2

CSS — Design

⏱ ~40 min💻 Desktop required
Your Stack — Week 2
📄
HTML
🎨
CSS
Learning now
JavaScript

Your page from Segment 8 works, but it looks like it was designed in 1998 by someone in a hurry. The header, navigation, and sections you built? They're about to look like they belong in a professional AI tool. Same HTML. Completely different experience. CSS is how, and this segment teaches you just enough of it to make your tools look like they belong on EverythingThreads.

🎬
CSS: From Ugly to Beautiful in 10 Minutes
5 min · Side-by-side: before and after CSS applied
"Same HTML. Watch what CSS does to it."
How CSS Works — 3 Concepts
1. Selectors → Properties → Values
body { background: #0b0b0c; } — the selector (body) picks what to style, the property (background) says what to change, the value (#0b0b0c) says to what.
2. The Box Model
Every element is a box. The box has: content (the text), padding (space inside), border, and margin (space outside). Understanding this = understanding CSS layout.
3. Flexbox & Grid = Layout
Flexbox arranges things in a line (horizontal or vertical). Grid arranges things in a grid. Between them, you can build any layout. These are the only two layout systems you need to learn.
Exercise: Add the Dark Theme

Add a <style> block inside your <head> tag (after <title>, before </head>). Paste this CSS:

CSS — Dark theme with clean layout
<style>
/* Base — dark theme */
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
  background: #0b0b0c;
  color: #f3efe9;
  font-family: -apple-system, 'Segoe UI', sans-serif;
  line-height: 1.7;
}

/* Header */
header {
  text-align: center;
  padding: 60px 20px 40px;
}
header h1 { font-size: 2.4rem; }
header p { color: rgba(243,239,233,.5); }

/* Nav */
nav {
  display: flex;
  justify-content: center;
  gap: 24px;
  padding: 16px;
  border-bottom: 1px solid rgba(255,255,255,.06);
}
nav a {
  color: #ff6a1f;
  text-decoration: none;
  font-size: 0.85rem;
}

/* Main content */
main {
  max-width: 700px;
  margin: 0 auto;
  padding: 40px 20px;
}
section { margin-bottom: 40px; }
h2 { margin-bottom: 12px; color: #ff6a1f; }
p { color: rgba(243,239,233,.6); }
ul { padding-left: 20px; color: rgba(243,239,233,.6); }

/* Footer */
footer {
  text-align: center;
  padding: 30px;
  font-size: 0.75rem;
  color: rgba(243,239,233,.3);
}
</style>
Checkpoint
Save. Does your page now have a dark background, orange accents, centred content, and clean spacing?
Same HTML from Segment 8. Completely different look. That's the power of CSS — structure and design are separate. Change the CSS, the entire feel changes. The HTML doesn't move. Push it: git add . && git commit -m "dark theme CSS" && git push
💡
DevTools are your best friend. Right-click any element on your page → Inspect → you can see AND edit CSS live in the browser. Changes disappear on refresh, so it's a safe playground. Try changing #ff6a1f to #22d3ee (cyan) and watch the accent colour change in real time.

Two languages down, one to go. HTML tells the browser what's on the page. CSS tells it how to show it. Next up: JavaScript tells it what to do when someone actually interacts with it. Nearly there.

What you just styled is the front end of your AI tool in miniature. The same techniques will style the textarea, the button, and the response area you'll build for it. In Segment 10, you'll make a button actually do something. In Segment 11, that something will be talking to Claude.

You want your AI tool's response area to have a dark background, rounded corners, and light text. Which CSS properties do you need?
color: dark; shape: round; text: light
Those aren't real CSS property names. CSS uses specific properties like background, border-radius, and color.
background: #1a1a1c; border-radius: 8px; color: #f3efe9
That's it. Three properties: background sets the fill colour, border-radius rounds the corners, color sets the text. These three lines turn a raw HTML box into something that looks like it belongs in a professional tool.
dark-mode: on; rounded: true
CSS doesn't have shorthand toggles like that. Each visual property is set individually — which gives you precise control over every detail.
You need JavaScript for visual styling
JavaScript handles behaviour (clicks, input, API calls). CSS handles appearance. They're separate concerns — and keeping them separate makes your code easier to maintain.
Segment 10 of 28 · Week 2

JavaScript — Interaction

⏱ ~40 min💻 Desktop required
Your Stack — Week 2
📄
HTML
🎨
CSS
JavaScript
Learning now

HTML is the skeleton (you built it). CSS is the skin (you styled it). JavaScript is the brain — it makes your page respond to clicks, process input, talk to AI APIs, and change content dynamically. This segment teaches you just enough JavaScript to build interactive tools — which is what Segment 11 onwards is all about.

🎬
JavaScript: Making Your Page Do Things
5 min · Building a button that changes content on click
"HTML shows it. CSS styles it. JavaScript makes it DO something."
This is not a JavaScript course. You're not going to learn algorithms or data structures. You're going to learn 5 concepts that let you make interactive pages and call AI APIs. That's it. If you want to go deeper into JavaScript after this course, you'll have the foundation — but right now, these 5 concepts are all you need.
The 5 JavaScript Concepts You Need
1. Variables — storing values
JavaScript
const name = "My AI Tool";       // doesn't change
let count = 0;                   // can change
let isActive = true;             // true or false
Use const for things that don't change. Use let for things that do. Never use var (it's the old way).
2. Functions — reusable blocks of code
JavaScript
function greet(name) {
  return "Hello, " + name;
}

greet("World");  // returns "Hello, World"
A function is a set of instructions with a name. You write it once, call it whenever you need it.
3. DOM Manipulation — changing the page
JavaScript
// Find an element
const heading = document.querySelector('h1');

// Change its text
heading.textContent = "New Heading!";

// Show or hide something
heading.style.display = 'none';    // hidden
heading.style.display = 'block';   // visible
The DOM is the page's structure in JavaScript form. querySelector finds elements. Then you can change them.
4. Events — responding to user actions
JavaScript
const button = document.querySelector('button');

button.addEventListener('click', function() {
  alert('Button clicked!');
});
Events listen for things happening — clicks, typing, scrolling. When the event fires, your function runs.
5. fetch() — talking to APIs ⭐
JavaScript — the one that matters most
// Send a request to an API and get a response (runs inside an async function)
const response = await fetch('https://api.example.com/data');
const data = await response.json();

// Now 'data' contains whatever the API sent back
console.log(data);
This is the one. fetch() is how your website talks to AI APIs. You send a request, the API sends a response. In Segment 11, you'll use this to connect your page to Claude, GPT, and Gemini.
Exercise: Add Interactivity to Your Page

Add this just before </body> in your index.html:

Add before </body>
<button id="myBtn" style="padding:12px 24px;background:#ff6a1f;color:#0b0b0c;border:none;border-radius:8px;font-weight:bold;cursor:pointer;margin:20px auto;display:block">
  Click Me
</button>
<p id="output" style="text-align:center;color:rgba(243,239,233,.6)"></p>

<script>
  let clickCount = 0;
  const btn = document.querySelector('#myBtn');
  const output = document.querySelector('#output');

  btn.addEventListener('click', function() {
    clickCount++;
    output.textContent = `You clicked ${clickCount} time${clickCount === 1 ? '' : 's'}!`;
  });
</script>
Checkpoint
Save. Does clicking the orange button update the text below it with a click count?
Your page is now interactive. HTML shows it, CSS styles it, JavaScript makes it respond. Those three — HTML, CSS, JS — are the entire foundation of web development. Everything from here builds on this. Push it: git add . && git commit -m "added JavaScript interaction" && git push
In Segment 11, you'll connect your website to an AI API. Which JavaScript feature does that?
querySelector
querySelector finds elements on your page — it doesn't communicate with external services.
addEventListener
addEventListener responds to user actions on your page — it doesn't make network requests.
fetch()
That's the one. fetch() sends HTTP requests to external services — including AI APIs. In Segment 11, you'll send a user's input to an AI API via fetch() and display the response on your page. This is the bridge between your website and artificial intelligence.
textContent
textContent changes text on the page — useful for displaying API responses, but it doesn't make the request itself.

Three languages in three segments. HTML for structure. CSS for design. JavaScript for interaction. That's the entire front end of the web — and you've just built with all three. The next segment is where we connect it to AI. That's when this course stops feeling like a coding course and starts feeling like building something real.

💡
Console is your debugging friend. console.log(anything) prints to the DevTools Console (F12). When something isn't working, console.log the values to see what's actually happening. Every developer does this. Every day.
Segment 11 of 28 · Week 2

Connecting to AI — Your First API Call

⏱ ~45 min💻 Desktop required🔓 Unlocks: Signal Check
Your Stack — The Bridge
📄
HTML/CSS/JS
Cloudflare Workers
Building now
🤖
AI APIs
Connecting now

This is the segment the course has been building towards. Everything up to now — the editor, the terminal, Git, HTML, CSS, JavaScript — was preparation for this moment. You're about to send a message to an AI model from your own code and get a response back. Not through a chatbot interface. Through code you wrote yourself.

🎬
Your Code Talks to Claude — Watch
4 min · Live demo: typing a prompt, hitting Send, seeing the AI response appear
"This is what it looks like when YOUR website talks to an AI. No middleman. No chatbot UI. Your code, your API call, your response."

First — why can't you just call the API directly?

AI APIs require a secret key to work. If you put that key in your website's JavaScript, anyone can open DevTools, find it, and use it to make calls on your account. You'd be paying for their usage.

The solution: a Cloudflare Worker. It's a tiny piece of server code that sits between your website and the AI API. Your website sends the prompt to the Worker. The Worker adds the secret key and forwards it to the AI. The key never touches the browser. This is the same architecture that professional AI applications use — and the Worker's free tier gives you 100,000 requests per day.

Your Website
sends prompt
Cloudflare Worker
adds API key securely
AI API (Claude)
returns response
Step 1: Get Your Anthropic API Key
1. Go to console.anthropic.com and create an account
2. Navigate to API Keys → Create Key
3. Copy the key — it starts with sk-ant-
4. Store it somewhere safe. You'll paste it into Cloudflare in Step 2. Never put it in your website code.
Anthropic gives you free credits to start. That's enough for this entire course.
Step 2: Create Your Cloudflare Worker
1. Go to dash.cloudflare.com and create a free account
2. Navigate to Workers & Pages → Create
3. Click "Create Worker" — name it ai-proxy
4. Click Deploy (with the default code — you'll replace it next)
5. Go to Settings → Variables and Secrets → Add
6. Name: ANTHROPIC_API_KEY · Type: Secret · Value: paste your API key
7. Click Save

Now click "Edit Code" and replace everything with this:

Cloudflare Worker — AI API Proxy
export default {
  async fetch(request, env) {
    // CORS headers. Without these, the browser refuses to accept the response.
    const cors = {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'POST, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type'
    };

    // The browser sends an OPTIONS "preflight" check before the real POST.
    // Answer it here, or the POST never arrives.
    if (request.method === 'OPTIONS') {
      return new Response(null, { headers: cors });
    }

    // Only allow POST requests
    if (request.method !== 'POST') {
      return new Response('Send a POST request', { status: 405, headers: cors });
    }

    // Get the prompt from your website
    const { prompt } = await request.json();

    // Call the Anthropic API with YOUR secret key
    const response = await fetch('https://api.anthropic.com/v1/messages', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'x-api-key': env.ANTHROPIC_API_KEY,
        'anthropic-version': '2023-06-01'
      },
      body: JSON.stringify({
        model: 'claude-sonnet-4-6',
        max_tokens: 1024,
        messages: [{ role: 'user', content: prompt }]
      })
    });

    const data = await response.json();

    // Send the AI response back to your website
    return new Response(JSON.stringify(data), {
      headers: {
        'Content-Type': 'application/json',
        ...cors
      }
    });
  }
};

Click "Save and deploy." Your Worker now has a URL like ai-proxy.your-name.workers.dev.

Step 3: Call the AI from Your Website

Add this to your index.html — a text input, a button, and the JavaScript that connects to your Worker:

Add before </body>
<div style="max-width:600px;margin:40px auto;padding:0 20px">
  <textarea id="prompt" rows="3" placeholder="Ask Claude anything..."
    style="width:100%;padding:12px;background:#1a1a1c;color:#f3efe9;border:1px solid rgba(255,255,255,.1);border-radius:8px;font-size:1rem;resize:vertical"></textarea>
  <button onclick="askAI()"
    style="margin-top:12px;padding:12px 24px;background:#ff6a1f;color:#0b0b0c;border:none;border-radius:8px;font-weight:bold;cursor:pointer">
    Ask Claude
  </button>
  <div id="response" style="margin-top:20px;padding:16px;background:#1a1a1c;border-radius:8px;color:rgba(243,239,233,.7);display:none;white-space:pre-wrap"></div>
</div>

<script>
async function askAI() {
  const prompt = document.getElementById('prompt').value;
  const responseDiv = document.getElementById('response');

  responseDiv.style.display = 'block';
  responseDiv.textContent = 'Thinking...';

  try {
    const res = await fetch('https://ai-proxy.YOUR-NAME.workers.dev', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt })
    });
    const data = await res.json();
    responseDiv.textContent = data.content[0].text;
  } catch (err) {
    responseDiv.textContent = 'Error: ' + err.message;
  }
}
</script>
Replace YOUR-NAME in the fetch URL with your actual Cloudflare Workers subdomain. You can find this on your Worker's dashboard.
Checkpoint — The Moment
Type something in the text box, click "Ask Claude," and wait a few seconds. Does an AI response appear below?
That's your code talking to an AI. Not a chatbot someone else built. Not a plugin. YOUR website, YOUR Worker, YOUR API call. The response travelled from your browser to Cloudflare to Anthropic and back — in seconds. Everything from here builds on this exact pattern.
Why do we use a Cloudflare Worker instead of calling the Anthropic API directly from the website?
The Anthropic API is too slow for direct calls
Speed isn't the issue — the API responds quickly either way. The Worker exists for security, not speed.
To keep the API key secret — it stays in the Worker, never reaching the browser
That's it. The API key must stay server-side. If it's in your browser JavaScript, anyone can steal it. The Worker holds the key securely and acts as a middleman. Your website only talks to the Worker — it never sees the key.
Cloudflare is required by Anthropic
Not true — you can use any backend server. Cloudflare Workers are just the simplest free option for what we're building. Any server-side code would work.
To make the code shorter
It actually adds code. But the trade-off is worth it — security isn't optional when API keys are involved.
🔓
Signal Check — Unlocked
Real-time AI output analysis. Try it →

You just crossed the line from "building websites" to "building AI-powered tools." From here, every segment adds capability. Push it: git add . && git commit -m "connected to Claude API" && git push

💡
API costs: Anthropic gives you free credits to start — that's enough for this entire course and then some. After that, Claude Haiku costs fractions of a penny per call. A tool that makes 100 API calls a day would cost less than a coffee per month. Don't let API pricing stop you from experimenting.
📊 Before & After — Look How Far You've Come
Segment 1: You didn't know what VS Code was. You'd never opened a terminal. "API" was a buzzword you'd seen on Twitter.
Now: Your website just talked to Claude. Through a secure proxy you built. Using an API call you wrote. Deployed on infrastructure you configured. That happened.
Segment 12 of 28 · Week 2

Building Your First AI Tool

⏱ ~45 min💻 Desktop required
Your Stack — Week 2
📄
HTML/CSS/JS
Cloudflare Worker
🤖
AI Tool
Building now

In Segment 11 you connected your website to Claude. That's a raw API call — it works, but it's rough. This segment turns it into a real tool. You'll add proper input handling, a loading state so users know something is happening, error handling so it doesn't just crash, and formatted output that's actually readable. By the end, you'll have something you could show someone and they'd think a team built it.

🎬
From API Call to Polished Tool
5 min · Before/after: raw API output vs polished tool interface
"Same API call underneath. Completely different user experience on top."
What makes a tool feel professional: Loading indicator ("Analysing..."). Empty state ("Enter text above and click Analyse"). Error messages that help ("Network error — check your connection"). Formatted output with sections and highlights. Disabled button while processing. These aren't hard to build — they just need to be there.
The 4 Patterns That Separate Amateur From Professional

Every polished AI tool you've ever used follows these four patterns. They're not complex — but missing any one of them makes the tool feel broken:

🔄 Loading State
API calls take 2-10 seconds. Without a loading indicator, users think it's broken. Show "Analysing..." the moment they click. Disable the button so they can't double-submit.
⚠️ Error Handling
API calls fail. Networks drop. Keys expire. Wrap every fetch() in try/catch and show a helpful message — not a blank screen or a cryptic JavaScript error.
✅ Input Validation
Check before you send. Is the input empty? Is it too long for the API's token limit? Catch these before they cost you an API call — and tell the user what to fix.
📋 Formatted Output
Raw AI responses are walls of text. Add line breaks, section headers, or even basic Markdown rendering. The same content feels completely different when it's properly formatted.
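The empty check and the length check from the validation pattern can live in one small helper. This is a sketch, not course-official code — the 4,000-character cap is an illustrative limit, not an official API number, so pick one that suits your token budget:

```javascript
// Validate user input before spending an API call.
// The 4,000-character cap is an example limit, not an API rule.
function validateInput(text) {
  const trimmed = text.trim();
  if (!trimmed) {
    return { ok: false, message: 'Please enter some text first.' };
  }
  if (trimmed.length > 4000) {
    return { ok: false, message: 'Text is too long. Keep it under 4,000 characters.' };
  }
  return { ok: true, text: trimmed };
}
```

Call it at the top of your handler: if the returned `ok` is false, show the `message` and return before any fetch happens.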
Exercise: Build a Text Analyser

You're going to build a tool that takes any text, sends it to Claude with a specific instruction, and displays a structured analysis. Think of it as your own mini Signal Check. Create a new file called tool.html:

tool.html — Your first AI tool (skeleton)
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <title>Text Analyser</title>
  <style>
    * { margin:0; padding:0; box-sizing:border-box; }
    body { background:#0b0b0c; color:#f3efe9; font-family:-apple-system,sans-serif; }
    .app { max-width:650px; margin:60px auto; padding:0 20px; }
    h1 { font-size:1.6rem; margin-bottom:8px; }
    .sub { color:rgba(243,239,233,.4); font-size:.85rem; margin-bottom:28px; }
    textarea { width:100%; padding:14px; background:#1a1a1c; color:#f3efe9;
      border:1px solid rgba(255,255,255,.08); border-radius:10px; font-size:.95rem;
      resize:vertical; min-height:120px; line-height:1.6; }
    .btn { margin-top:14px; padding:14px 28px; background:#ff6a1f; color:#0b0b0c;
      border:none; border-radius:10px; font-weight:700; font-size:.9rem; cursor:pointer; }
    .btn:disabled { opacity:.5; cursor:not-allowed; }
    .result { margin-top:24px; padding:20px; background:#1a1a1c; border-radius:10px;
      color:rgba(243,239,233,.7); white-space:pre-wrap; line-height:1.7; font-size:.88rem; }
    .loading { color:var(--cyan,#22d3ee); }
    .error { color:#e05a4a; }
  </style>
</head>
<body>
  <div class="app">
    <h1>Text Analyser</h1>
    <p class="sub">Paste any text. Get a structured analysis.</p>
    <textarea id="input" placeholder="Paste or type text here..."></textarea>
    <button class="btn" id="btn" onclick="analyse()">Analyse</button>
    <div class="result" id="result" style="display:none"></div>
  </div>

  <script>
  async function analyse() {
    const input = document.getElementById('input').value.trim();
    const btn = document.getElementById('btn');
    const result = document.getElementById('result');

    // Validation
    if (!input) {
      result.style.display = 'block';
      result.className = 'result error';
      result.textContent = 'Please enter some text first.';
      return;
    }

    // Loading state
    btn.disabled = true;
    btn.textContent = 'Analysing...';
    result.style.display = 'block';
    result.className = 'result loading';
    result.textContent = 'Sending to Claude...';

    try {
      const res = await fetch('https://ai-proxy.YOUR-NAME.workers.dev', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          prompt: `Analyse the following text. Provide:
1. A one-sentence summary
2. The main argument or point
3. The tone (formal, casual, persuasive, etc.)
4. Any potential weaknesses or gaps

Text: ${input}`
        })
      });
      const data = await res.json();
      result.className = 'result';
      result.textContent = data.content[0].text;
    } catch (err) {
      result.className = 'result error';
      result.textContent = 'Something went wrong: ' + err.message;
    }

    btn.disabled = false;
    btn.textContent = 'Analyse';
  }
  </script>
</body>
</html>
Checkpoint
Open tool.html in Live Server. Paste some text (a news article paragraph, a tweet, anything). Click Analyse. Does Claude return a structured analysis?
You've built your first AI-powered tool. It has input validation, a loading state, error handling, and structured output. This is the pattern for every AI tool you'll build from now on: input → prompt engineering → API call → formatted display. Push it: git add . && git commit -m "first AI tool" && git push
What happens if a user clicks "Analyse" while the button says "Analysing..."?
Nothing — the button is disabled during processing
Exactly. btn.disabled = true prevents duplicate submissions while an API call is in flight. This is a small detail that separates amateur tools from professional ones. Always disable submit buttons during async operations.
It sends a second API call
It would, if we hadn't disabled the button. That's why btn.disabled = true is there — it prevents duplicate submissions and duplicate API charges.
The page crashes
Pages don't crash from duplicate clicks — but they can fire multiple API requests, which wastes credits and confuses users with overlapping responses.
An error message appears
No error would show — but without the disabled state, users might click repeatedly out of impatience, triggering multiple API calls.
🔓
Signal Check Enhanced — Unlocked
Now you know how it's built. Try it with new eyes →

That tool you just built? It's not a toy. Add a better system prompt, change the analysis instructions, point it at a different use case — and you've got a product. The difference between a side project and a business tool is usually about 20 lines of CSS and a clear purpose. You've got both now.

💡
Reusable pattern: Every AI tool you build from now on follows the same structure: input → validate → show loading → call API → handle errors → display result → re-enable input. Save this file as a template. You'll copy it as a starting point for every new tool.
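That whole sequence can be captured once as a helper. A sketch, not official course code: `runTool` and its arguments are illustrative names, and the response shape assumes the raw Anthropic pass-through your Worker currently returns:

```javascript
// Placeholder: swap in your own Cloudflare Workers subdomain
const WORKER_URL = 'https://ai-proxy.YOUR-NAME.workers.dev';

// One function per tool: input -> validate -> loading -> API -> errors -> display -> re-enable
async function runTool(inputEl, btnEl, outputEl, buildPrompt) {
  const input = inputEl.value.trim();

  // 1. Validate
  if (!input) {
    outputEl.style.display = 'block';
    outputEl.className = 'result error';
    outputEl.textContent = 'Please enter some text first.';
    return;
  }

  // 2. Show the loading state and block double-submits
  btnEl.disabled = true;
  outputEl.style.display = 'block';
  outputEl.className = 'result loading';
  outputEl.textContent = 'Working...';

  // 3-5. Call the API, handle errors, display the result
  try {
    const res = await fetch(WORKER_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: buildPrompt(input) })
    });
    const data = await res.json();
    outputEl.className = 'result';
    outputEl.textContent = data.content[0].text;
  } catch (err) {
    outputEl.className = 'result error';
    outputEl.textContent = 'Something went wrong: ' + err.message;
  }

  // 6. Re-enable the button
  btnEl.disabled = false;
}
```

Each new tool then only needs its own `buildPrompt` function; everything else stays identical.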
⌨️
DevTools Network tab (F12 → Network) shows every API call your page makes. You can see the request payload, the response, timing, and any errors. When an API call fails silently, this is where you find out why.
Segment 13 of 28 · Week 2

Multi-Model Chat — Compare AI Responses

⏱ ~40 min💻 Desktop required🔓 Unlocks: Boardroom
Your Stack — Multiple AIs
📄
Your Tool
Worker
🤖
Claude + GPT
Comparing now

One AI is useful. Two AIs compared is powerful. This segment adds OpenAI's GPT to your Worker so you can send the same prompt to both models and display the responses side by side. This is exactly what EverythingThreads' Boardroom does — and now you'll understand how it works because you're building it yourself.

Why comparing models matters — and what to watch for

Different models have different strengths. Claude tends to follow instructions precisely and handle nuance well. GPT tends to be more creative and verbose. Neither is "better" — they're different tools for different jobs. When you compare them side by side, three things become visible:

1. Confidence vs accuracy. One model might sound more confident but actually be less accurate. Side-by-side comparison exposes this instantly.
2. Different interpretations. The same prompt can be read differently. One model might focus on the literal question while the other reads the implied intent.
3. Complementary weaknesses. Where one model struggles, the other often compensates. This is why multi-model tools exist — and why you learned about this in SHARP.
🎬
Same Prompt, Two AIs — See the Difference
3 min · Side-by-side responses from Claude and GPT to the same question
"Watch how the same question gets different answers depending on who you ask."
Step 1: Get Your OpenAI API Key
1. Go to platform.openai.com → API Keys → Create new key
2. Copy it — starts with sk-
3. Add it to your Cloudflare Worker as a secret: name OPENAI_API_KEY
Step 2: Update Your Worker to Route Between APIs

Your Worker needs to know which API to call. Update it to accept a provider field and route accordingly:

Updated Worker — routes Claude or OpenAI
export default {
  async fetch(request, env) {
    if (request.method === 'OPTIONS') {
      return new Response(null, { headers: { 'Access-Control-Allow-Origin':'*', 'Access-Control-Allow-Headers':'Content-Type' }});
    }
    const { prompt, provider } = await request.json();
    let data;

    if (provider === 'openai') {
      const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method:'POST',
        headers: { 'Content-Type':'application/json', 'Authorization':`Bearer ${env.OPENAI_API_KEY}` },
        body: JSON.stringify({ model:'gpt-4o', messages:[{role:'user',content:prompt}], max_tokens:1024 })
      });
      const json = await res.json();
      data = { text: json.choices[0].message.content };
    } else {
      const res = await fetch('https://api.anthropic.com/v1/messages', {
        method:'POST',
        headers: { 'Content-Type':'application/json', 'x-api-key':env.ANTHROPIC_API_KEY, 'anthropic-version':'2023-06-01' },
        body: JSON.stringify({ model:'claude-sonnet-4-6', messages:[{role:'user',content:prompt}], max_tokens:1024 })
      });
      const json = await res.json();
      data = { text: json.content[0].text };
    }

    return new Response(JSON.stringify(data), {
      headers: { 'Content-Type':'application/json', 'Access-Control-Allow-Origin':'*' }
    });
  }
};
Notice how both APIs return different shapes — Anthropic uses content[0].text, OpenAI uses choices[0].message.content. The Worker normalises both to { text: "..." } so your frontend code doesn't need to care which model answered. One knock-on effect: tool.html from S12 still reads the raw Anthropic shape (data.content[0].text). Once you deploy this updated Worker, change that line to data.text or the analyser will break.
Step 3: Build the Side-by-Side Interface

Create compare.html. The key: Promise.all() fires both requests simultaneously — no waiting for one before starting the other.

The core JavaScript for compare.html
const WORKER = 'https://ai-proxy.YOUR-NAME.workers.dev';

async function compare() {
  const prompt = document.getElementById('prompt').value;
  document.getElementById('claude-out').textContent = 'Thinking...';
  document.getElementById('gpt-out').textContent = 'Thinking...';

  const callAPI = async (provider) => {
    const res = await fetch(WORKER, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt, provider })
    });
    return (await res.json()).text;
  };

  // Both calls fire at the same time
  const [claude, gpt] = await Promise.all([
    callAPI('claude'),
    callAPI('openai')
  ]);

  document.getElementById('claude-out').textContent = claude;
  document.getElementById('gpt-out').textContent = gpt;
}

Style it with two columns — one labelled "Claude", one labelled "GPT" — using the same dark theme from Segment 9. The full HTML template is in the course resources.

Checkpoint
Open compare.html. Type a question. Do you see two responses — one from Claude, one from GPT — side by side?
You're now running a multi-model AI comparison tool. You built this. From scratch. The same architecture powers tools that companies pay thousands for — and yours is running on free-tier infrastructure. Push it.
What does Promise.all() do?
Runs one API call, then the other, in order
That's sequential execution. Promise.all() does the opposite — it runs them simultaneously.
Runs multiple API calls at the same time and waits for all of them to finish
That's it. Both calls start at the same time. The results arrive as soon as the slower one completes. This is faster than calling them one after another, and it's how professional multi-model tools work.
Sends the same data to multiple places
It can do that, but the key feature is parallelism — running multiple async operations simultaneously rather than sequentially.
It's a way to handle errors
Promise.all() can throw errors, but error handling isn't its purpose. It's about running multiple async tasks in parallel.
🔓
Boardroom — Unlocked
Multi-model comparison tool. Now you know how it works →
💡
Adding more models: Want to add Google's Gemini? Same pattern — add a GOOGLE_API_KEY secret, add a Gemini route to your Worker, add a third column. The architecture scales to any number of models. Three columns, five columns — the pattern doesn't change.
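As a sketch of that third route: the endpoint, model name, and response shape below follow Google's v1beta REST format at the time of writing and may have changed, so check the current Gemini API docs before relying on them. `callGemini` is an illustrative helper name, not part of the course code:

```javascript
// Hypothetical Gemini helper for the Worker. Endpoint, model name, and
// response shape are assumptions based on Google's v1beta REST format.
async function callGemini(prompt, apiKey) {
  const res = await fetch(
    'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=' + apiKey,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] })
    }
  );
  const json = await res.json();
  // Normalise to the same { text } shape the Worker returns for Claude and GPT
  return { text: json.candidates[0].content.parts[0].text };
}
```

In the Worker you'd then add one more branch: `else if (provider === 'gemini') data = await callGemini(prompt, env.GOOGLE_API_KEY);`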

Two AIs responding to the same question, side by side, on your own website. A few weeks ago you didn't know what a terminal was. Take that in for a second.

Segment 14 of 28 · Week 2

Orchestration — Chaining AI Calls

⏱ ~40 min💻 Desktop required📋 Week 2 quiz gate
Your Stack — Pipelines
🤖
Call 1
🔄
Response → Input
Chaining now
🤖
Call 2

Calling one AI is a question. Calling two in parallel is a comparison. Calling one AI and feeding its response into another AI is orchestration — and it's where things get properly powerful. This is how EverythingThreads' Synergy Flow works: one model analyses, another critiques, a third synthesises. The output of each step becomes the input for the next.

🎬
AI Pipelines — One Model's Output Feeds the Next
4 min · Visual pipeline: Analyse → Critique → Synthesise
"The real power isn't in one AI call. It's in what happens when you chain them."
The pattern is simple: Call 1 returns a result. You take that result, wrap it in a new prompt, and send it as Call 2. The second model doesn't know it's reading another AI's output — it just processes the text. You can chain as many calls as you need.
When to Use What — The Decision That Matters

You now know two patterns: parallel (S13) and sequential (this segment). Choosing the right one isn't random — it depends on what you're building:

Use Parallel (Promise.all)
When tasks are independent — neither needs the other's output. Comparing models, translating to multiple languages, analysing different aspects of the same text simultaneously.
Use Sequential (chaining)
When each step depends on the previous one. Summarise then translate, draft then critique, analyse then recommend. The output of Step 1 IS the input for Step 2.

Most real tools combine both. You might run 3 parallel analyses, then chain those into a single synthesis step. That's a pipeline with branching — and it's exactly how professional AI workflows are designed.
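That branching shape looks like this in code. A sketch: `callAI` is the same one-call helper this segment's exercise defines, and the three analysis prompts are illustrative:

```javascript
// Branch-then-merge: parallel analyses feed one sequential synthesis step.
// Assumes a callAI(prompt) helper that returns the model's text.
async function branchingPipeline(input) {
  // Branch: three independent analyses run at the same time
  const [argument, tone, gaps] = await Promise.all([
    callAI(`Identify the main argument of this text: ${input}`),
    callAI(`Describe the tone of this text: ${input}`),
    callAI(`List any gaps or weaknesses in this text: ${input}`)
  ]);

  // Merge: one synthesis step that depends on all three results
  return callAI(
    `Write a single balanced analysis that combines these three reports:\n\n` +
    `Argument: ${argument}\nTone: ${tone}\nGaps: ${gaps}`
  );
}
```

The three branches are independent, so they run in parallel; the synthesis needs all three, so it waits.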

Example: A 3-Step Analysis Pipeline
Step 1 — Summarise: Send the original text to Claude. Get a summary back.
↓ summary becomes input
Step 2 — Critique: Send the summary to GPT with "Find weaknesses in this analysis." Get critique back.
↓ critique becomes input
Step 3 — Synthesise: Send both the summary AND the critique to Claude with "Write a final balanced analysis." Get final output.
Exercise: Build a 2-Step Pipeline

Create pipeline.html. Step 1 analyses text, Step 2 takes that analysis and generates action items. Here's the working JavaScript:

pipeline.html — the core logic
const WORKER = 'https://ai-proxy.YOUR-NAME.workers.dev';

async function callAI(prompt) {
  const res = await fetch(WORKER, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, provider: 'claude' })
  });
  return (await res.json()).text;
}

async function runPipeline() {
  const input = document.getElementById('input').value;
  const step1El = document.getElementById('step1');
  const step2El = document.getElementById('step2');

  // Step 1: Analyse
  step1El.textContent = 'Step 1: Analysing...';
  step1El.style.display = 'block';
  const analysis = await callAI(
    `Analyse this text in 3-4 sentences. Identify the main argument, tone, and any gaps: ${input}`
  );
  step1El.textContent = analysis;

  // Step 2: Generate actions FROM the analysis
  step2El.textContent = 'Step 2: Generating actions from analysis...';
  step2El.style.display = 'block';
  const actions = await callAI(
    `Based on this analysis, list 3 specific action items the author should take to improve their text:\n\nAnalysis: ${analysis}`
  );
  step2El.textContent = actions;
}

Wrap this in the same dark-theme HTML template from S12. Two output boxes: one labelled "Analysis (Step 1)", one labelled "Action Items (Step 2)". Both appear sequentially as each step completes.

Checkpoint
Does your pipeline show the analysis from Step 1 AND the action items from Step 2, where Step 2 clearly builds on Step 1's output?
You've just built an AI pipeline. The output of one model becomes the input for the next. This is the architecture behind every serious AI tool — including the ones on EverythingThreads. You can chain 2 steps, 5 steps, 10 steps. The pattern doesn't change.
Week 2 Assessment
You want to build a tool that: (1) summarises an article, (2) translates the summary to French, and (3) checks the translation for accuracy. What's the correct approach?
One API call with all three instructions in the prompt
This might work for simple tasks, but it's less reliable. Chaining gives you control over each step — you can verify each output before feeding it forward, and use different models for different tasks.
Three parallel API calls using Promise.all()
Parallel calls don't work here because Step 2 needs Step 1's output, and Step 3 needs Step 2's output. This is sequential, not parallel.
A 3-step pipeline: summarise → translate the summary → verify the translation
Correct. Each step depends on the previous step's output. Summarise first, then translate the summary, then check the translation against the original. That's orchestration — and it's how you build reliable multi-step AI tools.
Use a special orchestration API
There's no special API needed. Orchestration is just regular API calls chained together in your code. You already have everything you need.
Week 2 Complete.
You started the week writing HTML tags. You're ending it with a multi-model AI pipeline that chains responses from different models. Think about that for a second. Next week: system prompts, PWA conversion, and your Chrome extension.
🔓
Synergy Flow — Unlocked
💡
Pipeline debugging: When a chain breaks, the issue is almost always in the handoff between steps. Add console.log(analysis) between Step 1 and Step 2 to verify the first call returned what you expected before the second call starts. Same principle applies to 5-step or 10-step chains.

HTML tags on Monday. AI orchestration pipeline on Friday. That's Week 2. Push everything: git add . && git commit -m "week 2 complete — multi-model pipeline" && git push

Segment 15 of 28 · Week 3

System Prompts — Controlling AI Behaviour

⏱ ~40 min💻 Desktop required
Your Stack — Week 3
📝
System Prompts
Learning now
📱
PWA
🧩
Chrome Extension

Every AI tool you've built so far sends a user's prompt directly to the model. That works — but the model doesn't know what kind of tool it's inside, what tone to use, what format to respond in, or what to refuse. System prompts fix that. They're the invisible instruction layer that shapes every response. This is the most underrated skill in AI development — and it connects directly back to what you learned in SHARP about AI behaviour patterns.

🎬
System Prompts: The Instructions the User Never Sees
4 min · Same user prompt, three different system prompts, three completely different responses
"The user types the same thing. The system prompt decides how the AI responds."

Think of it like briefing a contractor before they meet the client.

Before the client (user) speaks, you pull the contractor (AI) aside and say: "Keep it professional. Maximum 3 paragraphs. If they ask about pricing, refer them to the website. Always end with a question."

The client never hears this briefing. But it shapes everything the contractor says. That's a system prompt.

Adding a System Prompt to Your API Call

Update your Worker to accept and forward a system parameter:

Updated API call with system prompt
body: JSON.stringify({
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  // The system prompt — user never sees this
  system: "You are a professional text analyst. Always respond with numbered sections. Be direct and concise. Never use more than 200 words.",
  messages: [{ role: 'user', content: prompt }]
})
Exercise: Build a System Prompt Playground

Create playground.html — two text areas (system prompt + user prompt), one button, one output. The system prompt gets sent as the system field, not in messages:

playground.html — core JavaScript
async function sendWithSystem() {
  const systemPrompt = document.getElementById('system').value;
  const userPrompt = document.getElementById('user').value;
  const output = document.getElementById('output');

  output.textContent = 'Thinking...';
  output.style.display = 'block';

  const res = await fetch('https://ai-proxy.YOUR-NAME.workers.dev', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: userPrompt,
      system: systemPrompt,
      provider: 'claude'
    })
  });
  const data = await res.json();
  output.textContent = data.text;
}

You'll also need to update your Worker to forward the system field to the Anthropic API. In the Worker, destructure system alongside prompt and provider, then add system: system || "" to the Claude request JSON.
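Pulled out as a standalone helper so the change is easy to see. The function name is illustrative — in the Worker you'd build this object inline in the Claude branch:

```javascript
// Builds the Anthropic request body with the new top-level system field.
// An empty string is a safe default when no system prompt is sent.
function buildClaudeBody(prompt, system) {
  return JSON.stringify({
    model: 'claude-sonnet-4-6',
    max_tokens: 1024,
    // Anthropic takes system at the top level, outside messages
    system: system || '',
    messages: [{ role: 'user', content: prompt }]
  });
}
```

Pass the result as the `body` of the fetch to api.anthropic.com, exactly as in the S13 Worker.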

Try these system prompts with the same user message — "Explain what an API is" — and watch how the output changes completely:

"You are a pirate. Respond in pirate speak."
"You are a legal analyst. Use formal language and cite concerns."
"Respond in exactly 2 sentences. No more."
What a Good System Prompt Actually Controls

A pirate prompt is fun for testing. In production, system prompts do real work. Every system prompt you write for a real tool should address these five things:

1. Role — Who is the AI?
"You are a senior copywriter specialising in B2B SaaS." Not "You are helpful." The more specific the role, the more relevant the output. A legal analyst writes differently from a marketing consultant — even about the same topic.
2. Format — What shape should the output take?
"Respond with numbered sections: Summary, Strengths, Weaknesses, Recommendations." Without format instructions, the model decides — and its default might not match what your tool needs.
3. Constraints — What should it NOT do?
"Never use bullet points. Maximum 150 words. Don't make assumptions about the user's intent — ask for clarification instead." Constraints prevent the AI from drifting into unhelpful patterns. If you've done SHARP, you'll recognise this: constraints are how you prevent the machine patterns you identified.
4. Tone — How should it sound?
"Professional but approachable. Use British English. Avoid jargon unless the user uses it first." Tone instructions are surprisingly powerful — they change not just the words but the reasoning style.
5. Context — What does the AI need to know?
"This tool analyses LinkedIn posts for a recruitment agency. Users paste candidate posts and expect personality insights." Context tells the model why it exists. Without it, the model guesses — and guesses are expensive when they're wrong.
Real Example: A Production System Prompt

This is what a system prompt looks like for a real tool — not a toy. Notice how it combines all five elements:

Production system prompt example
"You are a contract review assistant for a UK-based legal firm.

ROLE: Senior legal analyst specialising in commercial contracts.
FORMAT: Respond with these exact sections: Key Terms, Risk Flags, 
  Missing Clauses, Recommended Actions. Use numbered items.
CONSTRAINTS: Never provide legal advice. Always include the disclaimer 
  'This is an automated analysis — consult a solicitor before acting.'
  Maximum 300 words total. Flag anything unusual but don't interpret law.
TONE: Formal, precise, cautious. British English throughout.
CONTEXT: Users paste contract clauses. They need quick risk identification, 
  not legal interpretation. They will show this to their legal team."

That's a dozen lines that completely control the AI's behaviour. Every tool on EverythingThreads has a system prompt like this. Now you know how to write your own.

Checkpoint
Does changing the system prompt produce noticeably different responses to the same user question?
Now you see why system prompts matter. The same model, the same question — completely different outputs. Every AI tool you build from now on should have a carefully written system prompt. It's the difference between a generic chatbot and a specialised tool that does exactly what you need.
Where does the system prompt go in the Anthropic API request?
Inside the messages array as a "system" role message
That's how OpenAI does it. Anthropic uses a separate system field at the top level — outside the messages array.
As a separate system field at the top level of the request body
Correct. In the Anthropic API, system is its own field alongside model, max_tokens, and messages. This is different from OpenAI where the system prompt goes inside the messages array. Knowing this distinction matters when you're working with multiple providers.
In the HTTP headers
Headers carry authentication and metadata, not conversation content. The system prompt is part of the request body.
It doesn't matter — both APIs handle it the same way
Actually they handle it differently. Anthropic uses a separate system field. OpenAI puts it inside the messages array with role "system". Your Worker needs to know the difference.

System prompts are the invisible architecture of every AI tool. From here, every tool you build will start with one. If you completed SHARP, you'll recognise how this connects to the machine behaviour patterns — system prompts are where you engineer the patterns you want and prevent the ones you don't.

🔓
Workshop — Unlocked
Specialist agent builder. Build your own →
💡
System prompt best practice: Be specific. "Be helpful" is vague. "Respond in 2-3 sentences using British English. If the input is unclear, ask one clarifying question. Never use bullet points." — that's a system prompt that actually controls output. The more specific you are, the more predictable the AI becomes.
⌨️
Files you've built so far:
index.html — your website (S7-S9)
tool.html — text analyser (S12)
compare.html — multi-model comparison (S13)
pipeline.html — orchestration pipeline (S14)
playground.html — system prompt tester (S15)
Each one builds on the last. Each one you own.
Segment 16 of 28 · Week 3

Progressive Web App — Make It Installable

⏱ ~40 min💻 Desktop required🔓 Unlocks: Workshop
Your Stack — Week 3
📝
System Prompts
📱
PWA
Building now
🧩
Chrome Extension
⏱ 30-Second Preview

Your AI tool — the one you built in S12 and connected to Claude — is about to become installable. On your phone. On your desktop. Like a real app. No app store. No review process. Just two files and it works offline.

A Progressive Web App is a website that behaves like a native app. It can be installed on your phone's home screen, it works offline, and it loads instantly. The AI tool you've already built is 90% of the way there. You need two more files: a manifest (telling the device what your app is called and what icon to use) and a service worker (telling the browser what to cache for offline use). That's it.

🎬
From Website to Installable App — Watch
3 min · Screen recording: adding 2 files, installing on phone, launching as standalone app
Your AI tool, installed on a phone home screen, launching like a native app. Two files made this happen.
Step 1: Create manifest.json

This file tells the browser everything about your app — name, icon, theme colour, how it should launch. Create manifest.json in your project root:

manifest.json
{
  "name": "AI Text Analyser",
  "short_name": "AI Analyser",
  "start_url": "/tool.html",
  "display": "standalone",
  "background_color": "#0b0b0c",
  "theme_color": "#ff6a1f",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}

Then add this line to the <head> of your tool.html:

Add to <head>
<link rel="manifest" href="/manifest.json">
Step 2: Create the Service Worker

The service worker caches your files so the app works offline. Create sw.js in your project root:

sw.js — Service Worker
const CACHE = 'ai-tool-v1';
const FILES = ['/', '/tool.html', '/manifest.json', '/icon-192.png'];

self.addEventListener('install', e => {
  e.waitUntil(caches.open(CACHE).then(c => c.addAll(FILES)));
});

self.addEventListener('fetch', e => {
  e.respondWith(
    caches.match(e.request).then(r => r || fetch(e.request))
  );
});

Register it in your tool.html by adding this before </body>:

Register the service worker
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
</script>
Checkpoint — The Install Moment
Deploy to Netlify (git push). Open your site on your phone. On Android, you should see an "Add to Home Screen" prompt. On iOS, tap Share → Add to Home Screen. Does your AI tool appear as an app on your home screen?
Your AI tool is now an installable app. On someone's phone. With your icon. Launching in standalone mode — no browser chrome, no URL bar. It looks and feels like a native app. Two files did that. Push it: git add . && git commit -m "PWA — installable AI tool" && git push

Two files. That's what turned your website into an app. I remember the first time I installed something I'd built on my own phone — I showed everyone. It's a small thing, technically. But it doesn't feel small. Enjoy this one.

📊 Before & After
Before: A website someone visits in a browser tab. They bookmark it, maybe. They forget, probably.
After: An app on their home screen. One tap to launch. Works offline. Feels native. Two files made this happen.
What are the two files that turn a website into a Progressive Web App?
index.html and style.css
Those are standard web files. A PWA specifically needs a manifest (app metadata) and a service worker (offline caching).
manifest.json and a service worker (sw.js)
That's it. The manifest tells the device what your app is called, what icon to use, and how to display it. The service worker caches files for offline use. Two files turn any website into an installable app.
package.json and node_modules
Those are Node.js project files. A PWA uses browser-native APIs — manifest.json and a service worker — no Node.js required on the client side.
An app store listing and a certificate
PWAs bypass app stores entirely. That's one of their biggest advantages — no review process, no approval, no fees. Install directly from the browser.
🔓
Workshop — Unlocked
Build specialist AI agents. Try it →
💡
PWA vs native app: A native app requires separate codebases for iOS and Android, app store approval, and ongoing maintenance. A PWA uses one codebase (your existing website), deploys instantly via git push, and updates automatically. For most AI tools, a PWA is the right choice — faster to build, easier to maintain, zero distribution cost.
Segment 17 of 28 · Week 3

Chrome Extension — Setup & Manifest

⏱ ~40 min💻 Desktop required
Your Stack — Chrome Extension
📱
PWA
🧩
Extension Setup
Building now
🤖
Extension + AI

The PWA made your existing tool installable — that was about distribution. This is about building something completely new. A Chrome extension that uses AI to analyse any web page. You click your extension icon, it reads the page, sends it to Claude, and gives you an analysis — right there in the browser. Three segments to build it: this one sets up the project, the next builds the popup and page reading, the third connects it to your AI Worker.

Why a Chrome extension?

Chrome has over 3 billion users. An extension sits inside the browser people already have open all day. No app store approval. No download friction. No cold-start problem. And with Manifest V3 (the current extension format), you can build extensions that are secure, fast, and powerful — using the same HTML, CSS, and JavaScript you already know.

🎬
Your Chrome Extension — From Zero to Installed
4 min · Building the extension folder, loading it in Chrome, seeing the popup appear
By the end of this segment, you'll have a Chrome extension installed in your browser. It won't do much yet — but it'll be yours, and it'll be real.
Step 1: Create the Project Structure
Create these files in a new folder called ai-extension/
ai-extension/
├── manifest.json      ← tells Chrome what your extension is
├── popup.html         ← the UI when you click the icon
├── popup.js           ← the logic behind the popup
├── content.js         ← reads the current web page
└── icons/
    ├── icon-16.png
    ├── icon-48.png
    └── icon-128.png
Step 2: Write the Manifest

This is the brain of your extension. It tells Chrome what permissions you need, what files to load, and what happens when the user clicks your icon:

manifest.json
{
  "manifest_version": 3,
  "name": "AI Page Analyser",
  "version": "1.0.0",
  "description": "Analyse any web page with AI",
  "permissions": ["activeTab", "scripting"],
  "action": {
    "default_popup": "popup.html",
    "default_icon": {
      "16": "icons/icon-16.png",
      "48": "icons/icon-48.png",
      "128": "icons/icon-128.png"
    }
  },
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["content.js"]
  }],
  "host_permissions": ["https://ai-proxy.YOUR-NAME.workers.dev/*"],
  "icons": {
    "16": "icons/icon-16.png",
    "48": "icons/icon-48.png",
    "128": "icons/icon-128.png"
  }
}
Manifest V3 is the only option in 2026. Google fully enforced it — no more V2 extensions. The key differences: service workers instead of background pages, stricter permissions, and all code must be bundled locally (no CDN scripts). Everything you need is in this manifest.
Step 3: Load Your Extension in Chrome
1. Open Chrome → type chrome://extensions in the address bar
2. Toggle "Developer mode" ON (top right)
3. Click "Load unpacked"
4. Select your ai-extension folder
5. Your extension appears in the toolbar. Click it — you should see an empty popup.
Checkpoint
Is your extension loaded in Chrome? Can you see its icon in the toolbar and click it to see a popup?
You have a Chrome extension installed in your browser. It doesn't do anything yet — that's the next two segments. But the hardest part is done: the project structure, the manifest, and the loading process. Everything from here is just HTML, CSS, and JavaScript — which you already know.
In Manifest V3, what replaced persistent background pages?
Background scripts
Background scripts existed in V2 but ran persistently. V3 replaced them with something that terminates when idle.
Service workers
Correct. Service workers terminate when idle and restart when needed. This is more efficient and secure — but means you can't hold persistent state in memory. You'll use chrome.storage for anything that needs to persist.
Content scripts
Content scripts run on web pages, not in the background. The background replacement in V3 is the service worker.
Popup scripts
The popup runs only when the user clicks the extension icon. Background processing is handled by service workers in V3.
💡
Extension development flow: Edit your files → go to chrome://extensions → click the reload button on your extension card. Changes aren't automatic like Live Server — you need to reload. Get used to this flow: edit, save, reload, test.

A Chrome extension. In your browser. That you built. It doesn't do much yet — but the structure is there, the manifest is valid, and Chrome accepted it. The next two segments give it teeth. This is the build that makes people say "wait, you made that?"

Segment 18 of 28 · Week 3

Chrome Extension — Popup & Page Reading

⏱ ~45 min💻 Desktop required
Your Stack — Extension UI
🧩
Setup
🖥
Popup + Reading
Building now
🤖
AI Connection

Your extension has a manifest and loads in Chrome. Now it needs a face and a brain. The popup is the face — what the user sees when they click your icon. The content script is the brain's eyes — it reads the current web page and extracts the text. This segment builds both.

🎬
Building the Extension UI
4 min · Building popup.html with dark theme, adding content.js to read page text
You'll see the popup styled with the same dark theme as your other tools, and watch the content script extract text from a live web page.
Step 1: Build the Popup
popup.html
<!DOCTYPE html>
<html>
<head>
  <style>
    * { margin:0; padding:0; box-sizing:border-box; }
    body { width:380px; padding:20px; background:#0b0b0c; color:#f3efe9;
      font-family:-apple-system,sans-serif; }
    h1 { font-size:1rem; margin-bottom:6px; }
    .sub { color:rgba(243,239,233,.4); font-size:.75rem; margin-bottom:16px; }
    .btn { width:100%; padding:12px; background:#ff6a1f; color:#0b0b0c;
      border:none; border-radius:8px; font-weight:700; cursor:pointer; font-size:.85rem; }
    .btn:disabled { opacity:.5; cursor:not-allowed; }
    .result { margin-top:16px; padding:14px; background:#1a1a1c; border-radius:8px;
      color:rgba(243,239,233,.7); font-size:.82rem; line-height:1.6;
      max-height:300px; overflow-y:auto; display:none; white-space:pre-wrap; }
    .loading { color:#22d3ee; }
  </style>
</head>
<body>
  <h1>AI Page Analyser</h1>
  <p class="sub">Analyse the current page with Claude</p>
  <button class="btn" id="analyse">Analyse This Page</button>
  <div class="result" id="result"></div>
  <script src="popup.js"></script>
</body>
</html>
Step 2: Build the Content Script

The content script runs on every page the user visits. It listens for a message from your popup and responds with the page's text:

content.js
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === 'getPageText') {
    // Read the page's visible text, capped at 4,000 characters
    const text = document.body.innerText.substring(0, 4000);
    sendResponse({ text });
  }
  // Returning true keeps the response channel open; optional here because
  // sendResponse is called synchronously, but required for async replies
  return true;
});
Why substring(0, 4000)? AI APIs have token limits. Sending the entire text of a long page would exceed them and cost more. 4,000 characters is enough for a meaningful analysis while staying within reasonable token limits. You can increase this as needed.
Step 3: Connect Popup to Content Script

When the user clicks "Analyse This Page," the popup asks the content script for the page text:

popup.js (Part 1 — page reading)
document.getElementById('analyse').addEventListener('click', async () => {
  const btn = document.getElementById('analyse');
  const result = document.getElementById('result');

  btn.disabled = true;
  btn.textContent = 'Reading page...';
  result.style.display = 'block';
  result.className = 'result loading';
  result.textContent = 'Extracting text from page...';

  // Get the active tab
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });

  // Ask the content script for the page text
  chrome.tabs.sendMessage(tab.id, { action: 'getPageText' }, (response) => {
    if (response && response.text) {
      result.textContent = 'Got ' + response.text.length + ' characters. Ready for AI analysis (next segment).';
    } else {
      result.textContent = 'Could not read this page. Try refreshing and clicking again.';
    }
    btn.disabled = false;
    btn.textContent = 'Analyse This Page';
  });
});
Checkpoint
Reload your extension. Navigate to any website. Click your extension icon. Click "Analyse This Page." Does it show how many characters it extracted?
Your extension can now read any web page. The popup sends a message to the content script, the content script reads the DOM, and the text comes back to the popup. In the next segment, you'll send that text to Claude and display the AI analysis. The architecture is: popup → content script → page text → AI API → analysis.
💡
Debugging extensions: Right-click your popup → Inspect. This opens DevTools for the popup — separate from the page's DevTools. Content script console messages appear in the PAGE's DevTools (F12 on the page itself). Two different consoles for two different contexts. This trips up everyone at first.
🔀 Architecture Decision — Why This Matters

The popup can't read the page directly — it runs in its own isolated context. The content script can read the page but can't make cross-origin API calls. So they communicate via message passing: popup asks content script for text, content script reads the DOM, sends it back. This separation is a security feature of Manifest V3 — and understanding it is what separates people who copy extension code from people who can build and debug extensions.

Why can't the popup read the current web page directly?
The popup doesn't have JavaScript
The popup absolutely has JavaScript — that's what popup.js is. The issue is about context isolation, not capability.
The popup runs in its own isolated context — it can't access the page's DOM
Exactly. Manifest V3 enforces context isolation as a security feature. The popup has its own DOM. The content script can access the page's DOM. They communicate via message passing. Understanding this architecture is what lets you debug extensions when things go wrong.
Chrome blocks all cross-page communication
Chrome doesn't block it — it routes it through message passing (chrome.runtime.sendMessage). This is controlled and secure, not blocked.
You need a background service worker first
A service worker isn't required for popup-to-content-script communication. Direct message passing works without one for simple cases.

Your extension reads web pages. That alone is useful. But the next segment is where it gets properly exciting — you're going to send that page text to Claude and get an AI analysis back, right there in the popup. One more segment. Nearly there.

Segment 19 of 28 · Week 3

Chrome Extension — AI Connection

⏱ ~40 min💻 Desktop required🔓 Unlocks: Synergy Flow
Your Stack — Extension Complete
🧩
Setup
🖥
Popup + Reading
🤖
AI Connection
Connecting now

Your extension reads web pages. Now it analyses them. This segment connects the popup to your Cloudflare Worker — the same Worker from Segment 11. The page text goes to Claude with a system prompt you design, and the analysis comes back to the popup. By the end of this segment, you'll have a working AI-powered Chrome extension.

🎬
The Extension Comes Alive — AI Analysis in the Browser
3 min · Click extension on a news article → Claude analyses it → results appear in popup
This is the demo you'd show someone. Click an icon, AI analyses the page, results appear in seconds. You built every piece of this.
Update popup.js — Send to AI

Replace the placeholder in popup.js with the actual AI call. The text from the content script goes to your Worker with a system prompt:

popup.js (Complete — with AI)
const WORKER = 'https://ai-proxy.YOUR-NAME.workers.dev';

document.getElementById('analyse').addEventListener('click', async () => {
  const btn = document.getElementById('analyse');
  const result = document.getElementById('result');

  btn.disabled = true;
  result.style.display = 'block';
  result.className = 'result loading';
  result.textContent = 'Reading page...';

  const [tab] = await chrome.tabs.query({ active:true, currentWindow:true });

  chrome.tabs.sendMessage(tab.id, { action:'getPageText' }, async (response) => {
    if (!response?.text) {
      result.textContent = 'Could not read this page.';
      btn.disabled = false; btn.textContent = 'Analyse This Page'; return;
    }

    result.textContent = 'Sending to Claude...';

    try {
      const res = await fetch(WORKER, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          prompt: `Analyse this web page content:\n\n${response.text}`,
          system: `You are a web page analyst. Provide:
1. A one-sentence summary
2. The main argument or purpose
3. Key claims (note any that lack sources)
4. Overall reliability assessment (High/Medium/Low with reason)
Be concise. Maximum 200 words.`,
          provider: 'claude'
        })
      });
      const data = await res.json();
      result.className = 'result';
      result.textContent = data.text;
    } catch (err) {
      result.className = 'result';
      result.textContent = 'Error: ' + err.message;
    }
    btn.disabled = false;
    btn.textContent = 'Analyse This Page';
  });
});
Checkpoint — The Big One
Reload the extension. Navigate to a news article or blog post. Click your extension icon → "Analyse This Page." Does Claude return an analysis of the page content?
You just built an AI-powered Chrome extension. From scratch. It reads any web page, sends the content to Claude through your secure Worker proxy, and displays an intelligent analysis in a popup. This is the kind of tool that companies charge monthly subscriptions for — and you built it in three segments. Let that sink in.
🔓
Synergy Flow — Unlocked
Multi-model orchestration engine. Now you know how it's built →

Website. PWA. Chrome extension. Multi-model comparison. AI pipeline. System prompt framework. All from code you wrote yourself. And you're not even done yet — two more segments this week, then the final polish. Push it all: git add . && git commit -m "chrome extension with AI" && git push

Your Chrome extension sends page text to Claude. The system prompt says "Provide a reliability assessment." If you wanted to change the extension from a page analyser to a fact-checker, what would you change?
The content script — it needs to read different parts of the page
The content script reads page text regardless of what you do with it. The text extraction doesn't change based on analysis type.
The Worker — it needs a different API endpoint
The Worker sends text to Claude either way. The API endpoint doesn't change based on what kind of analysis you want.
The system prompt — it determines what kind of analysis Claude performs
That's it. Same architecture, same code, same Worker. Change the system prompt from "provide a reliability assessment" to "identify specific factual claims and assess whether each has a cited source" and you have a completely different tool. The system prompt IS the product.
Everything — a fact-checker is a completely different extension
It's the same extension with a different instruction. Same popup, same content script, same Worker, same API. The only change is the system prompt. That's the power of what you've built.
💡
What you could build next with this pattern: A writing assistant extension that analyses your emails before you send them. A research tool that summarises academic papers. A fact-checker that flags unsourced claims on news sites. The architecture is the same — content script reads the page, popup sends to AI, results appear. The system prompt determines what kind of tool it is.
Segment 20 of 28 · Week 3

Sector Applications — AI for Your Industry

⏱ ~35 min💻 Desktop required
Your Stack — Specialisation
🤖
Generic AI Tools
🎯
Sector Tool
Building now
⚙️
Automation

Everything you've built is generic — it works on any text, any page, any input. This segment makes it specific to YOUR industry. A legal tool analyses contracts differently from how a marketing tool analyses copy. The architecture is identical. The system prompt changes everything.

🎬
Same Architecture, Different Industries
4 min · Showing the same tool with 6 different system prompts producing 6 completely different outputs
Same fetch() call. Same Worker. Same Claude model. Six different system prompts. Six completely different tools. That's the power of what you've learned.
⚖️ Legal: System prompt analyses contract clauses for risk flags, missing terms, and jurisdiction issues. Uses the 5-element framework from S15: Role (contract analyst), Format (Key Terms, Risks, Missing Clauses, Actions), Constraints (never provide legal advice), Tone (formal, precise), Context (users paste contract sections).
💰 Financial: System prompt analyses investment research for unsourced claims, outdated data, and confidence-without-evidence patterns. If you did SHARP, you'll recognise these as the machine patterns you learned to spot — now automated.
📣 Marketing: System prompt analyses copy for persuasion techniques, unverified claims, tone consistency, and audience alignment.
📚 Education: System prompt analyses student essays for argument structure, evidence quality, and logical consistency — without writing the essay for them.
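The pattern can be sketched as one prompt table and one request builder: the fetch body stays identical and only the system prompt changes per sector. The prompt strings and the `buildRequest` name below are illustrative, not course code; the body shape matches the Worker payload used in earlier segments.

```javascript
// One tool, many sectors: the only thing that varies is the system prompt.
// These prompt strings are abbreviated placeholders; write yours with the
// full 5-element framework (Role, Format, Constraints, Tone, Context).
const SYSTEM_PROMPTS = {
  legal: 'ROLE: Contract analyst. FORMAT: Key Terms, Risks, Missing Clauses, Actions. CONSTRAINTS: Never provide legal advice.',
  marketing: 'ROLE: Copy analyst. Analyse persuasion techniques, unverified claims, tone consistency, audience alignment.',
  education: 'ROLE: Essay reviewer. Assess argument structure, evidence quality, logical consistency. Never write the essay.'
};

// Builds the same request body your Worker already accepts.
function buildRequest(sector, text) {
  return {
    prompt: `Analyse this content:\n\n${text}`,
    system: SYSTEM_PROMPTS[sector],
    provider: 'claude'
  };
}
```

Swap the `sector` argument and you have a different product, with zero changes to the Worker or the UI.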
Exercise: Build YOUR Sector Tool

Take your text analyser from S12 or your Chrome extension from S19. Write a system prompt specific to YOUR industry using the 5-element framework. Deploy it. Test it on real content from your work. This is the tool you'll include in your final project.

Checkpoint
Have you written a sector-specific system prompt and tested it on real content from your industry?
You now have a tool that does something useful for your actual work. Not a tutorial project. Not a demo. A tool that analyses real content from your industry using a system prompt you designed. That's the portfolio piece.
Example: Legal Contract Analyser System Prompt
Sector-specific system prompt
"ROLE: Senior commercial contract analyst (UK jurisdiction).
FORMAT: Respond with: Risk Flags (numbered), Missing Clauses, 
  Key Terms Summary, Recommended Actions.
CONSTRAINTS: Never provide legal advice. Always include: 
  'This is automated analysis — consult a solicitor.'
  Maximum 250 words. Flag unusual terms without interpretation.
TONE: Formal, precise, cautious. British English.
CONTEXT: Users paste contract clauses for quick risk screening. 
  Output goes to the legal team for review."

Same 5-element framework from S15. Same API call. Same Worker. The system prompt transforms a generic text analyser into a specialist legal tool. Your industry has the same opportunity.

You want to build a tool that analyses marketing copy for your agency. What's the fastest way to create it?
Build a new website from scratch with a marketing-specific UI
You don't need a new website. The architecture you already have handles any kind of text analysis. The UI doesn't change — the system prompt does.
Use a different AI model that specialises in marketing
Claude and GPT are general-purpose models. Specialisation comes from the system prompt, not the model choice.
Copy your existing text analyser and change the system prompt to focus on marketing copy analysis
Exactly. Same tool.html, same Worker, same fetch() call. New system prompt: "Analyse this marketing copy for: persuasion techniques, unsourced claims, tone consistency, audience alignment, and call-to-action effectiveness." Five minutes of work. New product.
Train a custom AI model on marketing data
You don't need to train a model. System prompts give you specialisation without training data, compute costs, or ML expertise. That's the whole point of this segment.
💡
The sector tool is your portfolio differentiator. Every other course produces students who built the same todo app. Your portfolio shows a tool that solves a real problem in your specific industry. That's what hiring managers and clients notice.
🔓
Custom Agent Builder — Unlocked
Build specialist AI agents for any domain. Try it →

This is the segment where the course becomes yours. Not mine. Not the curriculum's. Yours. The tool you build here is the one you'll actually use after the course ends — and that matters more than everything else combined.

Segment 21 of 28 · Week 3

Automation — AI That Runs Without You

⏱ ~40 min💻 Desktop required📋 Week 3 quiz gate
Your Stack — Automation
🎯
Sector Tool
⚙️
Cron Triggers
Automating now
🏁
Week 3 Gate

Everything you've built so far requires you to click a button. This segment changes that. You'll learn how to make your AI tools run on a schedule — processing data, generating reports, monitoring content — without you touching anything. This is the step from "I built a tool" to "I built a system."

🎬
Set It and Forget It — Scheduled AI Workflows
4 min · Setting up a Cloudflare Worker with a cron trigger that analyses content daily
Your Worker runs every morning at 8am. It checks a data source, sends it to Claude, and saves the result. You're asleep. The system isn't.

Cloudflare Cron Triggers

Your Cloudflare Worker already handles HTTP requests (from your website and extension). Cron Triggers let it run on a schedule too — daily, hourly, every 5 minutes. Same Worker, same code, same AI connection. Just triggered by time instead of a click.

wrangler.toml — add a cron trigger
[triggers]
crons = ["0 8 * * *"]  # Every day at 8am UTC
Worker — handle the scheduled event
export default {
  async fetch(request, env) { /* ... existing HTTP handler ... */ },

  async scheduled(event, env, ctx) {
    // This runs on schedule — no user interaction needed
    const analysis = await analyseContent(env);
    // Save result, send notification, update database...
    console.log('Daily analysis complete', analysis);
  }
};
What You Can Automate
Daily content review: Fetch articles from an RSS feed, analyse each one, save summaries
Competitor monitoring: Check competitor pages weekly, analyse changes, flag important shifts
Report generation: Pull data from an API, analyse trends, generate a formatted report
Quality checks: Analyse your own content before publishing, flag issues automatically
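All four follow the same shape, which can be sketched as the `analyseContent` helper the scheduled handler calls. Everything named here is an assumption for illustration: `FEED_URL` is a hypothetical environment variable, the Worker URL is the placeholder from earlier segments, and the injectable `fetchFn` parameter exists purely so the sketch can be tested offline. Inside the Worker you could equally call the Claude API directly with your stored key instead of going through the public proxy URL.

```javascript
// Sketch of a scheduled analysis helper (hedged: names are placeholders).
async function analyseContent(env, fetchFn = fetch) {
  // 1. Fetch whatever you want analysed: an RSS feed, a page, an API.
  const feedUrl = env.FEED_URL || 'https://example.com/feed.json';
  const feedRes = await fetchFn(feedUrl);
  const items = await feedRes.json();

  // 2. Send it to Claude using the same body shape your Worker accepts.
  const aiRes = await fetchFn('https://ai-proxy.YOUR-NAME.workers.dev', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: `Summarise today's items:\n\n${JSON.stringify(items).slice(0, 4000)}`,
      system: 'You are a daily content reviewer. Be concise.',
      provider: 'claude'
    })
  });

  // 3. Return the analysis (save it to KV, email it, log it...).
  const data = await aiRes.json();
  return data.text;
}
```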
Checkpoint
Have you added a cron trigger to your wrangler.toml and deployed with wrangler deploy? Check your Worker dashboard — do you see the scheduled trigger listed?
Your AI system runs without you. While you sleep, while you work, while you're on the train — the Worker fires, the AI analyses, the results accumulate. That's not a tool. That's a system.
💡
Cron syntax cheat sheet: 0 8 * * * = daily at 8am. 0 8 * * 1 = Mondays at 8am. */5 * * * * = every 5 minutes. 0 9 1 * * = first of every month at 9am. Five values: minute, hour, day-of-month, month, day-of-week. Bookmark this — you'll use it more than you think.
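If you want to sanity-check an expression before pasting it into wrangler.toml, a tiny helper can split it into its five named fields. This is a simplified sketch (my own, not a Cloudflare utility): it validates field count only, not value ranges.

```javascript
// Splits a standard 5-field cron expression into named fields.
// Sketch only: checks structure, not whether values are in range.
function cronFields(expr) {
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== 5) throw new Error('Cron needs exactly 5 fields');
  const [minute, hour, dayOfMonth, month, dayOfWeek] = parts;
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}
```

For example, `cronFields('0 8 * * 1')` gives `hour: '8'` and `dayOfWeek: '1'`, confirming "Mondays at 8am".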

Right. That's Week 3. Breathe. You've built a PWA, a Chrome extension, a sector-specific tool, and an automated workflow — in seven segments. Some bootcamps take three months to cover this ground. You did it in a week. Not because you're rushing. Because everything before this prepared you for it.

Week 3 Assessment
You want to build a tool that analyses your competitor's blog every Monday morning and emails you a summary. What's the architecture?
A Chrome extension that runs automatically
Chrome extensions can't run on a schedule — they need user interaction. This requires a server-side solution.
A Cloudflare Worker with a cron trigger that fetches the blog, sends content to Claude via the API, and uses an email service to send the summary
That's the architecture. Cron trigger fires Monday 8am → Worker fetches the blog → sends to Claude → formats the summary → sends via email API (like Resend or SendGrid). Everything runs server-side, no human interaction needed. This is what "automation" means in practice.
A Python script on your laptop that you leave running
This works but it's fragile — your laptop needs to be on, connected, and the script can't crash. Cloudflare Workers run on cloud infrastructure with 99.99% uptime. That's the difference between a hack and a system.
An n8n or Make.com workflow
Those tools can do this — but you'd be paying monthly for something you now know how to build for free with a Cloudflare Worker. The architecture is the same. You just don't need the platform anymore.
Week 3 Complete.
PWA. Chrome extension. Sector-specific AI tools. Automated workflows. You've built things this week that most developers take months to learn. Week 4 polishes everything into a portfolio you can show people.
🏁
Three weeks down. One to go.
You've built: a live website, an AI-powered text analyser, a multi-model comparison tool, an AI pipeline, a PWA, an AI Chrome extension, a sector-specific tool, and an automated workflow. All on free infrastructure. All from code you wrote and understand. Week 4 turns this into a portfolio.
Segment 22 of 28 · Week 4

Error Handling & Resilience

⏱ ~35 min💻 Desktop required
Your Stack — Week 4: Production
🛡
Resilience
Hardening now
🔒
Security
🚀
Ship

This week is different. You're not building anything new. You're making what you built unbreakable. This is the week that turns a course project into something you'd actually let other people use.

Your tools work. Now make them reliable. APIs fail. Networks drop. Rate limits hit. Keys expire. This segment teaches you to handle every failure gracefully — so your users see helpful messages instead of blank screens and cryptic errors.

🎬
When Things Go Wrong — Building Resilient AI Tools
4 min · Demonstrating common failures and how proper error handling catches them
The difference between a tool and a product is what happens when something breaks. This segment is about what happens when something breaks.
API returns an error (4xx/5xx)
Handle: Check response.ok before parsing JSON. Show specific messages: 401 = "API key expired", 429 = "Too many requests, wait a moment", 500 = "AI service is temporarily down."
Network failure (fetch throws)
Handle: The try/catch around your fetch() catches this. Show "Check your internet connection" — not a JavaScript stack trace.
Input too long (token limit exceeded)
Handle: Check input length BEFORE sending. Truncate or warn the user: "This text is too long for analysis. Please use the first 4,000 characters."
Unexpected response shape
Handle: Never assume data.content[0].text exists. Check each level: if (data?.content?.[0]?.text). If not, show "Unexpected response — please try again."
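The status-specific messages above can live in one small function so every fetch in your tools shares them. A sketch; the `messageForStatus` name and exact wording are mine, so adapt both to your tool's voice.

```javascript
// Maps an HTTP status from the AI proxy to a user-facing message.
// Specific errors enable specific user actions; the fallback still
// shows the code so support requests are debuggable.
function messageForStatus(status) {
  if (status === 401) return 'API key invalid or expired. Check your Worker settings.';
  if (status === 429) return 'Too many requests. Please wait a moment and try again.';
  if (status >= 500) return 'The AI service is temporarily down. Try again shortly.';
  return `Unexpected error (${status}). Please try again.`;
}
```

In your fetch handler: `if (!res.ok) { result.textContent = messageForStatus(res.status); return; }`.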
Exercise: Add Retry Logic

Update your AI tool to retry once on failure with a 2-second delay. If the retry fails too, show the error message:

Retry wrapper function
async function callWithRetry(url, options, retries = 1) {
  for (let i = 0; i <= retries; i++) {
    try {
      const res = await fetch(url, options);
      if (!res.ok) throw new Error(`API error: ${res.status}`);
      return await res.json();
    } catch (err) {
      if (i === retries) throw err;                 // out of retries: surface the error
      await new Promise(r => setTimeout(r, 2000));  // wait 2 seconds, then retry
    }
  }
}
Checkpoint
Temporarily break your Worker URL (add a typo). Does your tool show a helpful error message instead of crashing?
Your tool now handles failure gracefully. This is what separates a project from a product. Fix the Worker URL, push everything.
Your AI tool shows a blank screen when the API is down. What's the correct fix?
Add console.log to track the error
console.log helps YOU debug. The user still sees a blank screen. User-facing error messages are what matters.
Wrap the fetch in try/catch, check response.ok, show a specific message for each failure type
That's production-grade error handling. try/catch for network failures, response.ok check for API errors, specific messages for 401/429/500. The user always knows what happened and what to do about it.
Just add a generic "Something went wrong" message
Better than nothing, but generic messages frustrate users. "API key expired — contact support" is more useful than "something went wrong." Specific errors enable specific actions.
The API shouldn't go down — that's the provider's problem
APIs go down. Networks fail. Rate limits hit. Your tool needs to handle all of these gracefully. "It shouldn't happen" is not error handling.
⚡ Myth vs Reality
Myth: "Error handling is something you add at the end."
Reality: Error handling should be in every fetch() call from the start. Adding it later means retrofitting every async operation — and you'll miss some. Build it in from the beginning. Every fetch(), every API call, every user input — handled from day one.
Segment 23 of 28 · Week 4

Security & API Key Management

⏱ ~30 min💻 Desktop required
Your Stack — Security Layer
🛡
Resilience
🔒
Security
Hardening now
🧪
Testing
🎬
API Key Security — What Happens When It Goes Wrong
3 min · Real examples of exposed keys, the cost, and the 2-minute fix
Someone pushed an API key to a public GitHub repo. Within 4 hours, $2,300 in charges. This segment makes sure that's never you.

You've been careful with API keys — they live in your Cloudflare Worker, never in browser code. This segment goes deeper: rate limiting, CORS hardening, origin checking, and what to do when a key is compromised. The difference between "it works" and "it's secure" is this segment.

✅ You already did right: API keys in Worker environment variables. Never in client-side code. CORS headers on Worker responses.
🔒 Now add: Origin checking (only allow requests from YOUR domain). Rate limiting (max 100 requests per IP per hour). Input validation (reject requests with missing or oversized prompts).
Add Origin Checking to Your Worker
Worker — origin check
// Note: the bare 'chrome-extension://' prefix matches ANY extension.
// Once you know your extension's ID, pin it: 'chrome-extension://<your-id>'
const ALLOWED = ['https://your-site.netlify.app', 'chrome-extension://'];

const origin = request.headers.get('Origin') || '';
if (!ALLOWED.some(a => origin.startsWith(a))) {
  return new Response('Forbidden', { status: 403 });
}
💡
If a key is compromised: 1. Revoke it immediately in the provider's dashboard. 2. Generate a new one. 3. Update your Worker's environment variable. 4. Deploy. Total time: 2 minutes. Because you kept the key server-side, you don't need to update any client code. That's why the proxy architecture matters.

Rate Limiting Your Worker

Without rate limiting, a bot or malicious user could burn through your API credits in minutes. Add a simple counter using Cloudflare KV:

Worker — simple rate limit
const ip = request.headers.get('CF-Connecting-IP');  // Cloudflare sets this on every request
const key = `rate:${ip}`;
const count = parseInt(await env.KV.get(key) || '0');

if (count >= 100) {
  return new Response('Rate limit exceeded', { status: 429 });
}

// Note: the TTL resets on each write, so the 1-hour window slides while the IP stays active
await env.KV.put(key, String(count + 1), { expirationTtl: 3600 });
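The third item on the hardening list, input validation, can be a small guard at the top of the Worker's fetch handler. A sketch: the `prompt` field matches the body your tools already send, and the 4,000-character cap mirrors the limit used in the extension; adjust both to your payload.

```javascript
// Rejects malformed or oversized request bodies before they reach the AI API.
// Returning a reason string lets the Worker send a specific 400 response.
function validateBody(body) {
  if (!body || typeof body.prompt !== 'string' || body.prompt.trim() === '') {
    return { ok: false, error: 'Missing prompt' };
  }
  if (body.prompt.length > 4000) {
    return { ok: false, error: 'Prompt too long (max 4,000 characters)' };
  }
  return { ok: true };
}
```

In the handler: `const check = validateBody(await request.json()); if (!check.ok) return new Response(check.error, { status: 400 });`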
Checkpoint
Have you added origin checking to your Worker? Does it reject requests from unknown origins?
Your Worker now only accepts requests from your website and your Chrome extension. Random people can't hit your API endpoint and run up your bill. That's professional security.
Your API key is accidentally exposed in a public GitHub commit. What do you do?
Delete the commit from GitHub
Git history is permanent — even deleted commits can be recovered. The key is already exposed. You need to revoke it, not hide it.
Revoke the key immediately, generate a new one, update the Worker environment variable, deploy
That's the protocol. Revoke → regenerate → update → deploy. Because your key lives in the Worker (not in client code), you only need to update one place. Total time: 2 minutes. This is why the proxy architecture matters — it makes key rotation trivial.
Make the GitHub repo private
Making it private hides future access, but anyone who saw the key before you made it private still has it. Revoke the key — that's the only safe action.
It's fine — the key is encrypted
API keys in Git repos are plain text. They're not encrypted. If it's in a commit, anyone with access can read it. Revoke and regenerate immediately.
Your .gitignore — The File That Saves Your Career

Create a .gitignore file in your project root. This tells Git which files to NEVER commit — even if you accidentally run git add .:

.gitignore
# API keys and secrets
.env
.env.local
*.key

# Node modules (reinstall with npm install)
node_modules/

# Build artifacts
dist/
.cache/

# OS files
.DS_Store
Thumbs.db

If you ever create a .env file for local testing, .gitignore prevents it from being pushed. This is the last line of defence between your secrets and the public internet.
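One gotcha worth knowing: .gitignore only stops files Git isn't already tracking. If you committed .env before adding the ignore rule, Git keeps tracking it. Here's a quick demo in a throwaway temp-directory repo (safe to run anywhere):

```shell
# Throwaway demo: .gitignore keeps .env out of `git add .`,
# plus the fix for a file that was committed BEFORE the ignore rule existed.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
echo "ANTHROPIC_API_KEY=sk-demo" > .env
echo ".env" > .gitignore

git add .                 # stages .gitignore only; .env is ignored
git ls-files --cached     # prints: .gitignore

# If .env was already committed before you added .gitignore:
# git rm --cached .env    # untracks it (keeps the local file), then commit
```

The `git rm --cached` step removes the file from Git's index without deleting it from your disk; remember the old commits still contain the key, so revoke it anyway.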

Real incident: In 2024, a developer pushed an AWS key to a public repo. Automated bots found it within 4 minutes. $2,300 in charges before they noticed. The fix — origin checking + .gitignore + Worker secrets — takes 10 minutes. The incident takes months to resolve.

Security isn't glamorous. Nobody's going to look at your portfolio and say "nice origin checking." But the first time someone tries to scrape your API and gets a 403 — that's the moment you know your architecture is professional. The invisible work is what separates tools from products.

Segment 24 of 28 · Week 4

Testing Your AI Tools

⏱ ~30 min💻 Desktop required
Your Stack — Quality Assurance
🔒
Security
🧪
Testing
Verifying now
Performance
🎬
Breaking Your Own Tools — On Purpose
3 min · Testing with empty input, massive input, broken URLs, offline mode — and watching the error handling catch every case
The best way to trust your tools is to try to break them. This video shows you how.

AI outputs are non-deterministic — the same input can produce different outputs. That makes testing harder than traditional software. This segment teaches you how to test AI tools effectively: checking the structure of responses, validating that errors are caught, and building a simple test harness.

What you CAN test with AI tools

You can't test that the AI gives the "right" answer — but you can test that: the response has the correct structure, error handling catches failures, the loading state appears and disappears, the UI updates correctly, the system prompt is sent, and the input validation works. Focus on the plumbing, not the AI's opinion.
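A structure check is a few lines of plain JavaScript. The field names below (summary, score) and the 0 to 10 score range are illustrative assumptions; use whatever your system prompt asks the model to return:

```javascript
// Validate the SHAPE of an AI response, not its content.
// Field names and the score range are assumptions; match them
// to your own system prompt's output format.
function hasExpectedShape(data) {
  return typeof data === 'object' && data !== null
    && typeof data.summary === 'string'
    && typeof data.score === 'number'
    && data.score >= 0 && data.score <= 10;
}

// Deterministic either way: a well-formed response passes,
// a malformed or truncated one fails.
console.log(hasExpectedShape({ summary: 'Clear text', score: 7 })); // true
console.log(hasExpectedShape({ summary: 'Missing score' }));        // false
```

Run this on the parsed JSON before your UI touches it, and a malformed response becomes a caught error instead of a blank panel.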

Checkpoint
Test each of your tools with: (1) empty input, (2) very long input (10,000 chars), (3) a disconnected network, (4) an invalid Worker URL. Does every case produce a helpful message?
Your tools are now production-grade. They handle the expected AND the unexpected. That's rare for a 4-week course project — and it's what makes your portfolio credible.
A Simple Test Harness

Create test.js to verify your tool handles edge cases:

test.js — edge case testing
// Run with: node test.js (Node 18+; save as test.mjs or set
// "type": "module" in package.json, since top-level await needs ES modules).
const WORKER = 'https://ai-proxy.YOUR-NAME.workers.dev';

const tests = [
  { name: 'Empty input', prompt: '', expect: 'error' },
  { name: 'Normal input', prompt: 'Analyse this text', expect: 'success' },
  // If your Worker truncates long input instead of rejecting it,
  // change this expectation to 'success'.
  { name: 'Very long input', prompt: 'x'.repeat(50000), expect: 'error' },
];

for (const t of tests) {
  try {
    const res = await fetch(WORKER, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: t.prompt, provider: 'claude' })
    });
    // PASS when the outcome matches the expectation: an expected
    // error that comes back as a 4xx/5xx is a pass, not a failure.
    const pass = (t.expect === 'success') === res.ok;
    console.log(`${t.name}: ${pass ? 'PASS' : 'FAIL'} (${res.status})`);
  } catch (e) {
    console.log(`${t.name}: ${t.expect === 'error' ? 'PASS' : 'FAIL'} (${e.message})`);
  }
}
AI responses are non-deterministic — the same input can produce different outputs. What CAN you reliably test?
That the AI gives the correct answer
You can't test for a specific "correct answer" because AI outputs vary. Focus on structure and error handling instead.
That the response has the right structure, errors are caught, loading states work, and edge cases are handled
Exactly. Test the plumbing, not the opinion. Does the response come back? Does it have the expected fields? Does the UI update correctly? Does empty input get rejected? These are deterministic — they either work or they don't.
Nothing — AI tools can't be tested
The AI's output is non-deterministic, but everything around it — the API call, the error handling, the UI, the input validation — is fully testable. Most bugs in AI tools are in the plumbing, not the AI.
That the AI uses the system prompt correctly
You can loosely check this by looking at the response format, but the system prompt's influence is probabilistic, not guaranteed. Better to test the structural and mechanical parts of your tool.

Most developers skip testing. They build, they ship, they hope. You just tested your tools against four failure scenarios that real users will encounter. When those users hit those scenarios — and they will — your tool handles it gracefully. That's not luck. That's engineering.

Segment 25 of 28 · Week 4

Performance & Optimisation

⏱ ~30 min💻 Desktop required
Your Stack — Speed
🧪
Testing
Performance
Optimising now
🚀
Deploy
🎬
From 6 Seconds to Instant — Streaming AI Responses
3 min · Side-by-side: the same tool with and without streaming. The difference is dramatic.
This is how ChatGPT does it. Words appearing one by one. Same total time — completely different user experience.

AI API calls are slow by web standards — 2 to 10 seconds. This segment teaches you to make the wait feel shorter and the tool feel faster: streaming responses, skeleton loading states, caching repeated queries, and choosing the right model for the right task.

Model selection
Use claude-haiku-4-5 for fast, simple tasks (classification, short summaries). Use claude-sonnet-4-6 for complex analysis. Haiku is 10x cheaper and 3x faster — use it where quality isn't the bottleneck.
Response caching
If the same input produces the same output, cache it. Store results in Cloudflare KV (free tier: 100K reads/day). Check cache before calling the API. Same result, zero latency, zero cost.
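Here's a sketch of that cache check, assuming your Worker has a KV namespace bound as env.KV and your API call lives in an async callAI(prompt) function. Both names are illustrative; adapt them to your Worker:

```javascript
// Check the cache before calling the AI. A sketch, not drop-in code:
// the store just needs async get/put, which Cloudflare KV provides.
async function cachedAnalyse(prompt, kv, callAI) {
  // Crude key: the prompt's start. KV keys max out at 512 bytes,
  // so hash the full prompt if yours often share a first 100 chars.
  const key = 'cache:' + prompt.slice(0, 100);

  const hit = await kv.get(key);
  if (hit !== null && hit !== undefined) {
    return hit;                                // zero latency, zero API cost
  }

  const result = await callAI(prompt);         // the slow, billable path
  await kv.put(key, result, { expirationTtl: 86400 }); // remember for 24h
  return result;
}
```

Inside the Worker you'd call `await cachedAnalyse(prompt, env.KV, callAI)`. The TTL matters: cached AI answers go stale, and 24 hours is a reasonable default for analysis-style tools.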
Streaming Responses

Instead of waiting for the full response, stream it word by word. The user sees the AI "thinking" in real time — same technique ChatGPT uses:

Frontend — reading a stream
const res = await fetch(WORKER, { method: 'POST', headers, body }); // your usual request
const reader = res.body.getReader();
const decoder = new TextDecoder();
let output = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // { stream: true } handles multi-byte characters split across chunks
  output += decoder.decode(value, { stream: true });
  document.getElementById('result').textContent = output;
}
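The stream only reaches the browser if the Worker forwards the provider's body instead of buffering the whole response. A minimal sketch of the pass-through, wrapped in a function so it's easy to test; the fetch handler lines are placeholders for your existing Worker code:

```javascript
// Forward a streamed body without buffering it. A sketch: the browser
// starts receiving chunks as soon as the AI provider emits them.
function streamThrough(upstream, origin) {
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
      'Access-Control-Allow-Origin': origin,  // keep your existing CORS rules
    },
  });
}

// In the Worker's fetch handler (providerUrl and upstreamInit are
// placeholders for your stream-enabled request to the AI API):
// const upstream = await fetch(providerUrl, upstreamInit);
// return streamThrough(upstream, origin);
```

The key point: passing `upstream.body` (a ReadableStream) straight into a new Response means the Worker never holds the full answer in memory, so time-to-first-token stays low.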
Your AI tool takes 6 seconds to respond. What's the most effective way to make it FEEL faster?
Use a faster AI model
A faster model helps, but even fast models take 1-3 seconds. The perceived speed matters more than actual speed.
Stream the response so text appears word by word while generating
This is what ChatGPT does. The total time is the same, but the user sees output after 200ms instead of waiting 6 seconds for the complete response. Perceived speed is about time-to-first-token, not total generation time.
Add a loading spinner and hope they wait
A spinner tells users "something is happening" but doesn't reduce perceived wait time. Streaming actually shows progress — words appearing in real time feel much faster than a spinner followed by a wall of text.
Cache everything so it never needs to call the API
Caching helps for repeated queries, but most AI interactions are unique. Streaming is the right answer for first-time queries.
Checkpoint
Have you tested your tool with both Haiku (fast, cheap) and Sonnet (slower, smarter)? Can you tell the quality difference?
Knowing which model to use for which task is a professional skill. Quick classifications → Haiku. Complex analysis → Sonnet. Cost-sensitive batch jobs → Haiku. Customer-facing quality matters → Sonnet. You're making informed architecture decisions now.
Skeleton Loading — The Professional Touch

Before the response starts streaming, show a "skeleton" — a pulsing placeholder that tells the user exactly where the content will appear:

CSS — skeleton loading animation
.skeleton {
  background: linear-gradient(90deg,
    rgba(255,255,255,.04) 25%,
    rgba(255,255,255,.08) 50%,
    rgba(255,255,255,.04) 75%);
  background-size: 200% 100%;
  animation: shimmer 1.5s infinite;
  border-radius: 8px;
  height: 120px;
}
@keyframes shimmer {
  0% { background-position: 200% 0; }
  100% { background-position: -200% 0; }
}

Show the skeleton when the user clicks Send. Replace it with the streaming response when the first token arrives. The user sees: click → skeleton pulse → words appearing. It feels instant even when it takes 3 seconds. Every major AI app uses this pattern — ChatGPT, Claude, Gemini. Now your tool does too.
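Here's one way to wire that up, as a sketch. It reuses the streaming loop from earlier in this segment, and the 'result' element id is an assumption carried over from those snippets:

```javascript
// Click → skeleton pulse → words appearing: toggle the .skeleton class
// around the streaming read. Pass in the fetch Response to stream.
async function renderWithSkeleton(res) {
  const el = document.getElementById('result');
  el.textContent = '';
  el.classList.add('skeleton');            // show the pulse immediately

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let output = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    el.classList.remove('skeleton');       // first token: swap pulse for text
    output += decoder.decode(value, { stream: true });
    el.textContent = output;
  }
}
```

Call it when the user clicks Send: `renderWithSkeleton(await fetch(WORKER, { method: 'POST', headers, body }))`. Removing the class on every chunk is harmless; after the first removal the later calls are no-ops.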

Streaming is the single biggest UX improvement you can make to any AI tool. The total response time doesn't change — but the perceived wait drops from "is it broken?" to "oh, it's already writing." Add it to every tool you build from now on. Your users will thank you by not leaving.

Segment 26 of 28 · Week 4

Your Deployment Pipeline

⏱ ~30 min💻 Desktop required
Your Stack — Ship It
Performance
🚀
Deploy
Formalising now
📋
Portfolio
🎬
The Full Pipeline — From Edit to Live in 60 Seconds
2 min · Screen recording: change a line, commit, push, watch Netlify deploy, verify live
This is the workflow you'll use for every project from now on. edit → test → commit → push → live.

You've been deploying throughout the course — git push, Netlify deploys, Worker updates. This segment formalises it into a professional pipeline: staging vs production, environment variables per environment, and a deployment checklist you'll use for every future project.

📋 Your Deployment Checklist — Keep This
☐ All API keys in environment variables (never in code)
☐ Origin checking enabled on Worker
☐ Error handling covers all fetch calls
☐ Loading states on all async operations
☐ Tested with: empty input, long input, network failure
☐ PWA manifest and service worker registered
☐ Git commit with meaningful message
☐ Push to main → Netlify auto-deploys
☐ Worker deployed via wrangler deploy
☐ Live site tested on mobile and desktop
Your Complete Deploy Commands
The three commands you'll use every time
# 1. Website → Netlify (automatic on push)
git add . && git commit -m "your message" && git push

# 2. Worker → Cloudflare
npx wrangler deploy

# 3. Extension → Chrome (manual reload)
# Go to chrome://extensions → click reload on your extension

Three different deploy methods for three different parts of your stack. Website auto-deploys on push. Worker deploys via CLI. Extension reloads manually. That's the full pipeline.

💡
Meaningful commit messages matter. Not "update" or "fix stuff." Try: "add retry logic to AI calls" or "fix CORS headers in Worker." Future you will thank present you when you're searching git log for when something broke.
What's the deployment flow for updating your AI tool?
Edit the live files on Netlify's server
You never edit live files directly. Changes go through your local environment, Git, and automated deployment. That's the pipeline.
Edit locally → test → git commit → git push → Netlify auto-deploys from GitHub
That's the pipeline. Edit, test, commit, push. Netlify watches your GitHub repo and deploys automatically. If something breaks, git revert and push again. The pipeline is your safety net.
Upload files via FTP
FTP is a 2005 workflow. Modern deployment uses Git-triggered CI/CD — which is exactly what you've been using with Netlify. Faster, safer, auditable.
Email the files to Netlify support
I appreciate the confidence in Netlify's support team, but no. Your deployment pipeline is fully automated — git push is all it takes.
Checkpoint
Make a small change to your site (update a heading). Push it. Does the live site update within 60 seconds?
Your deployment pipeline works end to end. From your editor to the live internet in under a minute. Professional developers use this exact flow — you're not doing a tutorial version, you're doing the real thing.

That checklist isn't just for this course. Screenshot it. Save it. Every time you build something new — personal project, freelance work, startup MVP — run through it before you ship. It takes 5 minutes and it catches 90% of the mistakes that embarrass people in production.

Segment 27 of 28 · Week 4

Portfolio & Documentation — Show Your Work

⏱ ~35 min💻 Desktop required
Your Stack — Show It
🚀
Deployed
📋
Portfolio
Documenting now
🎓
Final Project
🎬
Your README Is Your First Impression
2 min · Showing two GitHub repos: one with no README vs one with a proper README. The difference is night and day.
60 seconds. That's how long someone spends on your GitHub before they decide if they're impressed. Make those 60 seconds count.

You've built real things. Now document them so other people can see what you built, understand how it works, and be impressed by it. This segment creates your portfolio README and project documentation — the things that turn "I took a course" into "look at what I built."

Your Portfolio README

Create a README.md in your GitHub repo that showcases everything you built:

README.md structure
# AI Tools Portfolio

Built during the AI Clarity Programme — BUILD.

## What I Built
- **AI Text Analyser** — Paste text, get structured analysis from Claude
- **Multi-Model Comparison** — Same question to Claude + GPT side by side
- **AI Pipeline** — Chained analysis: summarise → critique → synthesise
- **Chrome Extension** — AI-powered page analyser for any website
- **Sector Tool** — [Your industry]-specific analysis using custom system prompts
- **Automated Workflow** — Scheduled AI analysis via Cloudflare Cron Triggers

## Architecture
Website (Netlify) → Cloudflare Worker (API proxy) → AI APIs (Claude, GPT)
All API keys server-side. CORS + origin checking. Retry logic. Error handling.

## Tech Stack
HTML, CSS, JavaScript, Cloudflare Workers, Anthropic API, OpenAI API, Netlify

## Live Demo
[your-site.netlify.app](https://your-site.netlify.app)
💡
The README is your portfolio. When someone looks at your GitHub, the README is the first thing they see. A clear, well-structured README with live demo links, architecture descriptions, and a tech stack list communicates more than the code itself. Write it like you're explaining to someone smart who has 60 seconds to decide if they're impressed.
A potential employer looks at your GitHub. What do they see first?
Your code files
They might look at code eventually, but the first thing displayed on any GitHub repo page is the README. If there's no README, or it's empty, most people click away.
Your README.md — the project overview, architecture, tech stack, and live demo link
Exactly. The README is your portfolio. A clear, well-structured README with a live demo link, an architecture description, and a tech stack list communicates more in 30 seconds than code files ever will. Write it like you're explaining to someone smart who has 60 seconds to decide if they're impressed.
Your commit history
Commit history matters for experienced developers, but most people who look at your GitHub start — and often end — at the README.
Your profile photo
Your profile helps, but the repo page shows the README front and centre. That's your first impression for each project.
🏆 Quick Win

Write your README now. Not later. Not "when it's finished." A README that says "Work in progress — currently building X, Y, Z" is infinitely better than an empty repo. Ship the documentation alongside the code. Make it a habit.

📊 Before & After — Your GitHub
Without README: Someone visits your repo. They see a list of files — tool.html, sw.js, manifest.json. No context. No explanation. They leave in 10 seconds.
With README: They see: what you built, why it matters, how it works, a live demo link, and the architecture diagram. They're impressed in 30 seconds. They click the demo. They stay.

I've seen brilliant developers with empty GitHub profiles and mediocre developers with stunning READMEs. The ones with the READMEs get the opportunities. Unfair? Maybe. But you control it. Write the README. It takes 20 minutes and it works for you 24/7.

Segment 28 of 28 · Week 4

Final Project — Your AI Product

⏱ ~60 min💻 Desktop required📋 Course completion
Your Complete Stack — All Lit
📄
HTML/CSS/JS
Workers
🤖
AI APIs
🧩
Extension
🎓
Ship
Final project

This is it. The final segment. You're going to take everything you've built — the website, the AI tools, the extension, the automation — and combine them into one cohesive product. Something you can point at and say: "I built that." Something that works. Something that's yours.

🎬
Final Project — What "Done" Looks Like
3 min · Walkthrough of a completed final project — website, tools, extension, automation, all connected
This is what you're building today. By the end of this segment, you'll have something like this — personalised to your industry and your use case.
Final Project Requirements
  1. A live deployed website with at least one AI-powered tool
  2. A system prompt tailored to a specific use case using the 5-element framework
  3. At least one tool that uses multi-model comparison OR orchestration
  4. Proper error handling — test all 4 edge cases from S24
  5. Either a PWA manifest (installable) or a Chrome extension
  6. A GitHub README documenting what you built, how it works, and how to use it
  7. Everything deployed on free infrastructure — Netlify + Cloudflare Workers
Portfolio Booster · README Template

Your code is only half the project. The other half is the README — the file that loads first when anyone opens your repo on GitHub. A good README turns "some code on a page" into "a real product I built." Below is a ready-to-use template. Copy it, save it as README.md in the root of your project repo, fill in the bracketed bits, push it, and your GitHub page will look like a professional dev's portfolio.

README.md — copy this into your project repo
# [Your Project Name]

> One sentence describing exactly what this tool does. Be concrete. Not "an AI-powered solution" — say "takes a contract clause and flags risky language."

🔗 **Live demo:** https://your-project.netlify.app
🛠 **Built with:** HTML · CSS · JavaScript · Cloudflare Workers · Claude API

---

## What it does

[2–3 sentences. What goes in, what comes out, who it's for. Concrete examples are better than adjectives.]

## Why I built it

[1–2 sentences. The real reason. The problem you saw or the use case you cared about. Keep this human — it's the part that makes a portfolio piece feel like yours and not just code.]

## How it works

User input  →  Cloudflare Worker (proxies to Claude)  →  AI response  →  rendered on page

1. User pastes [input] into the textarea on the website
2. Frontend sends a POST request to the Cloudflare Worker
3. Worker injects the system prompt and forwards the request to the Claude API
4. Claude responds; the Worker returns the result to the frontend
5. Frontend renders the response on the page

## The system prompt

This tool uses a sector-specific system prompt built with the 5-element framework taught in Segment 15 of the EverythingThreads BUILD course:

- **Role:** [e.g. "You are a contract risk analyst with 15 years' experience..."]
- **Expertise:** [the domain knowledge the model should bring]
- **Constraints:** [what the model must NOT do]
- **Format:** [JSON / bullet list / structured text]
- **Examples:** [1–2 few-shot examples, if used]

## Tech stack

| Layer | Tool |
|---|---|
| Frontend | HTML / CSS / vanilla JavaScript |
| Hosting | Netlify (auto-deploy from this repo) |
| API proxy | Cloudflare Workers |
| AI model | Claude (Anthropic API) |
| API key storage | Cloudflare Worker environment variables (never exposed to the browser) |

## Run your own version

1. Clone this repo: `git clone [your-repo-url]`
2. Get an Anthropic API key from [console.anthropic.com](https://console.anthropic.com)
3. Deploy the Cloudflare Worker in `/worker/index.js` and add your API key as a secret with `wrangler secret put ANTHROPIC_API_KEY`
4. Update the Worker URL in `index.html` to point to your deployed Worker
5. Push to GitHub and connect the repo to Netlify — it auto-deploys on every push

## What I learned building this

- [Specific lesson #1 — not "I learned a lot," something concrete]
- [Specific lesson #2]
- [Specific lesson #3]

## Credits

Built as the final project for the [EverythingThreads BUILD course](https://everythingthreads.com) — a 28-segment programme that takes you from zero to a deployed AI tool.

---

© [Your Name] 2026 · Built on free infrastructure · Code released for portfolio purposes
💡
Why this README pattern works: recruiters and clients spend about 30 seconds on a GitHub repo before deciding if it's worth a longer look. The first three things they see are: (1) the headline + one-liner, (2) the live demo link, (3) the screenshot or "How it works" diagram. This template puts all three at the very top. Everything below is for the people who already decided they're interested.
Final Checkpoint
Open your live URL on your phone. Does your AI tool load, accept input, and return an AI response? Open your GitHub repo. Does the README explain what you built?
That's your final project. Live. Working. Documented. On your phone. On GitHub. Ready to show anyone who asks "what can you build?" The answer isn't "I took a course." The answer is "let me show you."
Final deploy — everything at once
# Deploy website
git add . && git commit -m "final project complete" && git push

# Deploy Worker
npx wrangler deploy

# Verify
# 1. Open your Netlify URL — does it load?
# 2. Test the AI tool — does it respond?
# 3. Check GitHub — is the README visible?
Final question. You've built an AI-powered tool, a Chrome extension, and an automated workflow. A friend asks: "How did you build all that?" What's the honest answer?
I used no-code tools and AI to generate the code
You did more than that. You understand every line. You can debug, modify, and extend what you built. That's the difference between generating code and building software.
I followed a tutorial step by step
You followed a course, but the final project is YOURS — your industry, your use case, your system prompt, your architecture decisions. That's beyond following steps.
I learned the architecture — HTML/CSS/JS for the interface, Cloudflare Workers for the API proxy, AI APIs for intelligence — and built tools that combine all three
That's the answer. You understand the architecture. You know why the Worker exists (security), why the system prompt matters (specialisation), why you test edge cases (reliability), and how to deploy (pipeline). You're not someone who took a course. You're someone who can build AI systems.
I paid £399 for a course
The course gave you the framework. What you built is yours — the industry-specific tool, the architecture decisions, the deployment pipeline. The £399 bought knowledge. The portfolio proves you used it.
Submit Your Final Project
Share your live URL and GitHub repo link. Your project will be reviewed by the cohort and by Kariem. The best projects are featured on EverythingThreads.
🏁 Everything You Built — 28 Segments, 4 Weeks
🌐
Live Website
Deployed on Netlify
🤖
AI Text Analyser
Claude + GPT
🔄
Multi-Model Compare
Promise.all()
AI Pipeline
Orchestration
📱
PWA
Installable app
🧩
Chrome Extension
AI page analyser
🎯
Sector Tool
Your industry
⚙️
Automation
Cron triggers

All on free infrastructure. All from code you wrote and understand. All deployed and live.

I want to say something before you hit submit. Whatever you've built — however simple or complex it is — you built it from nothing. You didn't copy someone's repo. You didn't drag and drop on a no-code platform. You wrote the HTML. You styled the CSS. You connected the APIs. You secured the keys. You tested the failures. And you deployed it live. That's yours. Nobody can take that away from you. And nobody can pretend they taught you how to do it — because you taught yourself, with guidance. That's different. And it matters.

🎓
BUILD Complete.
Four weeks ago you didn't know what a terminal was. Today you have a deployed AI-powered product, a Chrome extension, an automated workflow, and a portfolio on GitHub. You understand the architecture. You can debug the problems. You can explain every line.

That's not something a course gave you. That's something you built.

Final push: git add . && git commit -m "final project complete" && git push