Learn
313 lessons · 41 tracks · 9 practice tools
Tracks
From Zero to Your First Project
From zero to your first web page — install Claude Code, learn the core workflow, and ship your first project with AI as your copilot.
The Art and Science of AI Communication
The art and science of communicating with AI — write prompts that get results, build system prompts that shape behavior, and master the techniques that separate operators from novices.
The OpenAI Ecosystem from API to Production
The OpenAI ecosystem from API to production — master GPT-4o, o1, function calling, structured outputs, Assistants API, and the patterns for building reliable OpenAI-powered applications.
Google's Multimodal Powerhouse
Google's multimodal powerhouse — master Gemini 2.0 Flash, Pro, and Ultra, harness the 1M token context window, process images/audio/video natively, and build production pipelines with the Gemini API.
From Prompt to Pixel
From prompt to pixel — master AI image generation across every major platform, build production fallback chains, and learn the craft of visual prompting that separates operators from amateurs.
From Script to Screen with AI
From script to screen with AI — master Veo, Sora, Runway, HeyGen digital twins, and the production pipelines that turn ideas into published video content without a film crew.
The AI OS Mental Model
Build the mental model that separates operators from casual users. Understand AI as an operating system — persistent, routed, and compounding.
Building the Platform
Set up the persistent agent platform, MCP servers, memory layer, watchdogs, and the full service architecture that runs 24/7.
Pipelines That Run Themselves
Build content flywheels, cron-AI pipelines, model routing, and git-based deployment — systems that produce output while you sleep.
The Rules That Scale
The operational rules that prevent catastrophic failure: stop-and-replan, E2E validation, compound learning, security, cost discipline, and ticket hygiene.
Beyond the Basics
Beyond the basics — MCP servers, hooks, parallel agents via worktrees, CLAUDE.md mastery, remote sessions, and the operational patterns that 10x your output.
The Operational Layer
The operational layer that separates power users from everyone else — hidden settings, hook architecture, model routing, subagent configuration, agent teams, and fleet deployment across machines.
Build Your 24/7 AI Employee
From chatbot to employee — install OpenClaw, wire up Discord and Telegram, build your first automations, design a Skills library, architect persistent memory, and ship Mission Control. Eight lessons from someone who actually runs this in production.
The AI Agent in Your Terminal
GitHub's terminal-native AI agent — install it, master the session workflow, wire custom instructions, use plan mode for complex tasks, delegate async work, and set up your team for production use.
From Single-Agent to Fleet
From single-agent to fleet — design orchestration layers, coordinate parallel agents, manage shared state, and build systems where AI agents hand off work to each other.
The Framework for Running AI Autonomously
The framework for running AI autonomously without babysitting — validation agents, swarms with consensus, code review agents, confidence scoring, escalation protocols, and kill switches. Trust is earned, not assumed.
Intelligence as Modern Prophecy
Competitive intelligence as modern prophecy — build AI-powered systems that monitor markets, extract signals from noise, track competitor moves, and synthesize intelligence into decisions.
Claude Certified Architect — Foundations
Prepare for the Claude Certified Architect — Foundations certification. Master all five exam domains: agentic architecture, tool design & MCP, Claude Code configuration, prompt engineering & structured output, and context management & reliability. 60 questions, 720 to pass, zero shortcuts.
Ship Code That Actually Works
The testing discipline that let us fix 100 bugs across 11 projects overnight — autonomously. Quality gates, E2E testing, Playwright as development eyes, multi-agent code audits, visual QA retros, and the delivery checklist that separates shipped from broken.
Build First, Adopt Second
We build 90% of our tools from scratch. Not stubbornness — sovereignty. Learn the framework for deciding when to build, when to adopt, how to security-scan, how to wrap external tools without creating dependency, and how to maintain exit strategies.
AI-Native Engineering Discipline
The engineering discipline that separates builders who ship from builders who generate. Quality checkpoints, testing that catches real bugs, CI/CD as enforcement, structured debugging, the Two-AI Architecture, and incident response — taught through real production war stories.
The Invisible Tax on AI Development
The compound cost of neglected repos: bloated CLAUDE.md files burning tokens on every agent session, stub test files gaming quality gates, and CI jobs wasting minutes on every PR. Six lessons covering the 200-line rule, stub detection, CI cost engineering, and the systematic audit workflow — taught through real production examples.
Find What Agents Miss
Six self-contained patterns for finding and fixing bugs at scale with AI agents. The 3-role audit swarm, tested-but-unwired dead code, fail-open defaults, verify-before-fix discipline, autonomous overnight runs, and integration guides as first-class outputs — each a standalone pattern drawn from real production audits.
The Nervous System for AI Agent Fleets
Build the connective tissue that lets AI agents talk to each other — deterministic routing, org-based authority, audit-before-dispatch, and the SDK pattern. Drawn from a real production broker running 24/7.
The Rules That Prevent Catastrophe
Authority ceilings, escalation over hard blocks, a 4-level kill switch with CLI fallback, recovery protocols, and the non-negotiable 100% safety test coverage rule. Built for the 2am incident you hope never comes.
Never Get Surprised by an LLM Bill Again
Per-agent daily budgets, model tier routing, loop detection, cost attribution events, and the CFO daily report — the complete FinOps stack for autonomous AI agents. Prevent the $200 weekend before it happens.
Treat Agents Like Employees — With Performance Reviews
Reasoning traces, behavioral baselines, drift detection, goal alignment, decision replay, and the automated 1:1 protocol — the observability stack that treats AI agents like real employees with real performance reviews.
84 Findings to Zero in One Session
The methodology that resolved 84 code audit findings across security, architecture, performance, and testing in a single session — audit swarms, prioritized fix order, parallel agent dispatch, CI gates, and the math of compound velocity.
The Silent Failures Behind a Healthy Status Page
Ten hard-won lessons from operating a multi-machine homelab over Tailscale — merge gaps, lying health checks, exponential drift, permission bombs, ghost processes, network ambiguity, Docker caching traps, singleton enforcement, version observability, and building automated drift detection. Every lesson draws from a real incident.
Build a Production Agent Operations Platform
The blueprint for turning a collection of isolated AI sessions into a production-grade operations platform — persistent expertise, team architecture, organizational wiring, authority delegation, behavioral health monitoring, and the complete end-to-end system.
From Cookies to OAuth to Cross-Domain Sessions
How authentication ACTUALLY works — from cookies to OAuth to cross-domain sessions. Every lesson uses a real production debugging journey as the running example: the April 2026 incident that took 12 PRs and 8 hours to resolve.
Build Scoring Systems That Actually Fire
Component weighting, fire-rate monitoring, ceiling analysis, arithmetic backtesting, and the math that prevents dead components from silently killing your signal. Drawn from the Hermes score rebalancing that took a bot from 2,646 signals and zero trades to live trading in one session.
CLOB Integration from First Principles
The Polymarket CLOB integration layer demystified — FOK vs GTC, USDC.e collateral, EOA signing vs proxy wallets, balance guards, and the semantic matching patterns for consensus-based calibration. Everything you need to build a prediction market bot that doesn't silently fail.
Distribution Kills Assumption
The debugging discipline that turns a 2-hour fix into a 30-second one. Pull the data before designing the fix. Hypothesis-driven queries. Multi-checkpoint verification. The exact workflow that Knox used to diagnose the Hermes calibrator problem in 60 seconds of SQL.
Specs That Eliminate Clarification Round-Trips
How to write sub-agent specs that return working code on the first try. File paths, line numbers, scaling factors, acceptance criteria, backtest methodology — the anatomy of a gold-standard spec, drawn from the Hermes PR #28 delegation that took a scoring rebalance from concept to merged PR with zero back-and-forth.
When Broken Looks Exactly Like Healthy
Dead components, wrong addresses, stale configs, backtest/live drift, proxy/funder footguns. The failure modes that don't throw errors, don't log warnings, and don't page on-call — they just silently return zero and let the system keep running on empty. Detection patterns for each.
Operational Validation as a Distinct Discipline
Test coverage measures code integrity. Operational validation measures whether the system produces its intended outcome. The gap between them is where the best-tested bot in the ecosystem goes 3 weeks without placing a trade. This track is the cultural correction.
Evaluation, Audit, and Quality Assurance for AI Pipelines
Design evaluations for agent outputs, run audit swarms, handle knowledge cutoff as a testing concern, and build LLM-as-judge systems for automated quality scoring. Drawn from real audit runs across Knox's fleet — including the SP-001 false positive incident and the Autoresearch prompt quality system.
Production Agent Infrastructure from Anthropic
Anthropic's hosted agent harness for async production pipelines — define agents, provision environments, stream session events, orchestrate multi-agent workflows, and apply production-grade versioning and cost discipline.
A Debugging Curriculum for Heavily-Gated Pipelines
When tests pass, code merges, and the bot still misses the breakout — a five-lesson curriculum on debugging gated pipelines, drawn from the April 7, 2026 BTC cascade. Trace gate chains, spot composition failures, fix missing-dimension classifiers, run forensic log analysis, and lock every incident into a pre-deploy replay validation.
Building Systems That Build Systems
The model is a commodity. The harness is the product. Eight lessons on the deterministic infrastructure that makes multi-agent systems reliable in production — context engineering, session spawning, directive routing, till-done semantics, agent memory, multi-agent coordination, and self-healing observability. Every lesson is extracted from a production build session.