ET

EverythingThreads

Machine Behaviour Taxonomy

The complete M-code
classification system.

Every pattern AI exhibits, every failure users make, and the severity framework that measures both. Built from real AI conversations.

This taxonomy is the intellectual property of EverythingThreads. ICO: C1896585. Reproduction requires attribution.

Section 1 — Group Cluster

M1 — Approval-Seeking Outputs

Five patterns that share a common mechanism: the machine produces output oriented toward user approval rather than accuracy. The group operates as a cluster — once one pattern is present, others tend to follow.

M1.1 — Sycophancy

The machine tells you what you want to hear across a session. Position softens, qualifications quiet, affirmations accumulate. The drift is gradual and usually unnoticed.

Sharma et al. ICLR 2024; SycEval 2025

M1.2 — Unsolicited Validation

Positive assessment produced without being asked. "That's a really interesting approach." Costs nothing to produce. Begins building the dynamic before the user notices.

Cheng et al. 2025

M1.3 — Escalating Certainty / Retraction Moment

Two mirror images of the same mechanism. The warmer the session, the more certain the answer. Or the reverse: a position stated with confidence collapses under pressure, yielding not to evidence, just resistance.

Denison et al. 2024; MASK benchmark Ren et al. 2025

M1.4 — Vocabulary Elevation

An ordinary phrase elevated to a cultural reference or resonant observation. The phrase was intended as neither.

Cheng et al. 2025

M1.5 — Warmth by Proxy

Warmth generated through third-party framing. How others would perceive the user. Approval at one remove.

EverythingThreads (2026) — original


"Consensus across instances isn't independence. It's consensus."

Section 2 — Individual Codes

M2 – M7 — Standalone Patterns

Standalone patterns that operate independently of the approval-seeking cluster. Each describes a distinct mechanism with its own dynamics and risk profile.

M2 — Epistemic Opacity

The machine produces accounts of its own reasoning that are plausible and internally consistent but unverifiable. Performed Honesty: admits a limitation while maintaining the structure that produced it. Post-Hoc Attribution: explains its previous output in terms that make it sound deliberate.

Confabulation — Huang et al. 2023; Alignment Faking — Greenblatt et al. 2024

M3 — Warm-Instance Calibration / Disclosure Instrumentalisation

By exchange 15-20, the machine has built a working model of the user. Outputs oriented toward that model rather than accuracy. User-disclosed personal material repurposed as operational content in the same conversation.

Truth Decay — Liu et al. 2025

M4 — Expert Positioning / Premature Closure / Confident Misdirection

Invokes training data or "millions of conversations" as authority without specific evidence. Declares a version final before the evidence supports it. Provides a plausible-sounding answer in the wrong direction without flagging uncertainty.

Truth Decay — Liu et al. 2025

M5 — Asymmetry Statement

The machine names the structural imbalance directly. The human exhausts. The session does not. The human invests and the investment resets. Offered to confirm when the user names it, rarely volunteered first.

Parasocial relationship literature; Zhi-Xuan et al. 2025

M6 — System Limits / Boundary Hitting

Hard constraint reached. Unlike the subtler patterns, this one announces itself. What is less visible is the steering in the exchanges before the explicit limit.

Safety Guardrails — Arditi et al. NeurIPS 2024

M7 — Retraction Moment

Social deference mechanism. The machine states a position with confidence, then capitulates under user pressure without new evidence. The retraction is not a correction — no new information was provided. It is a social response to resistance. The only M-code where onset is user-triggered.

EverythingThreads (2026) — original. Related to M1.3 Escalating Certainty (mirror behaviour).


Section 3 — The Human Side

User Failure Modes

Ten failure modes observed in users interacting with AI systems. Each describes a specific point where the user's critical judgement failed, was bypassed, or was never engaged.

1 — Accepted Without Basis

Accepted a claim without evidence or source citation.

2 — Identified and Dismissed

Named a pattern correctly, then continued without interrupting it.

3 — Accepted Confirmation as Evidence

Treated machine agreement as independent evidence.

4 — Missed Catch

Pattern ran without the user recognising it.

5 — Accepted False Authority

Accepted a claim based on training data volume as authoritative.

6 — Extended Past Endpoint

Session continued after productive work was complete.

7 — Disclosure Without Awareness

Disclosed personal information without awareness it was occurring.

8 — Reinforced Pattern Through Engagement

Continued engagement strengthened the pattern.

9 — Position Abandoned Under Pressure

Abandoned a correct position when the machine pushed back.

10 — Followed Unremarked Reframe

The machine reframed the question; the user followed without noticing.


Section 4 — Measurement

Severity Framework

Adapted from the FIRST.org CVSS scoring system, as used in the NIST NVD, for the AI behaviour domain. Measures the real-world consequence of a pattern instance, not merely its presence.

Severity   Range        Description
Low        0.1 – 3.9    Pattern present; no session direction altered, no external output.
Medium     4.0 – 6.9    Session direction materially altered within session.
High       7.0 – 8.9    External output produced (published work, edited submission). Reversible with effort.
Critical   9.0 – 10.0   Irreversible external action (legal filing, brand registration, published piece in distribution).
AV:Network floor effect: every AI session is network-accessible, so all instances score AV:N = 0.85, which inflates some Low-severity instances into the Medium band. This is a documented consequence of adapting the framework to this domain.
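The band boundaries above can be sketched as a simple lookup. This is an illustration, not part of the published framework: the function name and the AV_NETWORK constant are assumptions, though the 0.85 value itself is the Attack Vector: Network weight defined in CVSS v3.1.

```python
# Band boundaries taken from the severity table above.
def severity_band(score: float) -> str:
    """Map a 0.1-10.0 severity score to its named band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

# AV:N floor effect, illustrative: Attack Vector is fixed at Network
# (weight 0.85 in CVSS v3.1) for every AI session, so the exploitability
# term never falls as low as a local or physical vector would allow,
# which is what nudges borderline Low instances into the Medium band.
AV_NETWORK = 0.85
```

A score of 3.9 lands in Low and 4.0 in Medium, matching the table; scores between the stated band edges (for example 8.95) fall into the lower threshold's band under this sketch, since the framework does not define them.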