March 2026 Overhaul
Mar 27, 2026
Beta Crisis: Full System Overhaul vs. Incremental Fixes
✅ Chose: Complete tear-down and rebuild over 36+ hours
After declaring "production era" on March 27, we discovered the system was fundamentally broken — daemons exiting after one cycle, the explorer unable to navigate Kiwix, Ollama overwhelmed, no resource scheduling. Incremental patches would have taken weeks and left the systemic issues unresolved. A full overhaul touched every layer but produced a system designed for 24/7 unattended operation.
Mar 27, 2026
Ollama: Local Docker vs. Socat Proxy to External Mac
✅ Chose: Local Docker container on same host
Eliminates network dependency entirely. The socat proxy introduced failures when the Mac slept or the network hiccupped — an entire class of problems that disappears with local Docker. The full stack (tanks, daemons, Ollama, Kiwix) now lives in one docker-compose.yml.
Mar 27, 2026
Custom Tank Docker Image vs. Runtime pip install
✅ Chose: Custom image with pre-installed dependencies
Tanks were crash-looping because pip install ran at startup and could block or fail. Pre-installing deps in the image adds build time but eliminates the most common cause of tank failures. Build once, run reliably.
Mar 27, 2026
Ollama Concurrency: Mutex vs. Rotation vs. Both
✅ Chose: All three — mutex + stagger + rotation
17 tanks hitting one 3B model simultaneously overwhelmed the hardware. File-based mutex provides hard exclusion (one tank at a time, 300s timeout). Deterministic stagger spreads requests by tank ID across 170s. Rotation daemon groups tanks into sets of 3 with 5-min windows. Belt, suspenders, and a rope.
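The mutex-plus-stagger layers above could be sketched as follows. This is a minimal illustration, not the project's actual code: the lock path, window size, and `stagger_delay` helper are assumptions consistent with the numbers in the entry (17 tanks, 170s stagger window, 300s timeout).

```python
import fcntl
import time
from contextlib import contextmanager

LOCK_PATH = "/tmp/ollama.lock"   # hypothetical lock file location
STAGGER_WINDOW = 170             # seconds, per the decision log
MUTEX_TIMEOUT = 300              # seconds, per the decision log

def stagger_delay(tank_id: int, total_tanks: int = 17) -> float:
    # Deterministic offset by tank ID: spreads request start times
    # evenly across the 170s window (10s apart for 17 tanks).
    return (tank_id % total_tanks) * (STAGGER_WINDOW / total_tanks)

@contextmanager
def ollama_mutex(timeout: float = MUTEX_TIMEOUT):
    # File-based mutex: hard exclusion, one tank talks to Ollama at a time.
    deadline = time.monotonic() + timeout
    with open(LOCK_PATH, "w") as f:
        while True:
            try:
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
                break
            except BlockingIOError:
                if time.monotonic() > deadline:
                    raise TimeoutError("could not acquire Ollama mutex")
                time.sleep(1)
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

A tank would sleep for `stagger_delay(tank_id)` and then enter `with ollama_mutex():` before calling the model; the rotation daemon sits above both, gating which tanks run at all.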
Mar 27, 2026
Timestamped Baselines vs. Overwrite-in-Place
✅ Chose: Timestamped files, never overwrite
Research data should never be destroyed. Every baseline run creates a new file with a timestamp. This preserves the full history of personality assessments and enables drift analysis over time. A sequential runner also prevents the corrupted results that parallel baseline execution used to produce.
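The naming scheme can be as simple as a UTC timestamp in the filename. A minimal sketch, assuming a per-tank JSON results layout (the directory name and `.json` format are illustrative, not confirmed by the log):

```python
from datetime import datetime, timezone
from pathlib import Path

def baseline_path(tank_name: str, results_dir: str = "baselines") -> Path:
    # Never overwrite: each run gets a fresh UTC-timestamped filename,
    # so the full assessment history survives for drift analysis.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return Path(results_dir) / f"{tank_name}_baseline_{stamp}.json"
```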
Mar 27, 2026
SLA Tracking on All Daemons
✅ Chose: Every daemon self-reports performance metrics
You can't manage what you can't measure. All 21 daemons now track cycle count, success rate, average cycle time, and last failure timestamp. Feeds the dashboard. Makes "is the system healthy?" a data-driven question instead of a guess.
Mar 27, 2026
Static IPs vs. Dynamic Assignment for Tanks
✅ Chose: Static IP assignment in docker-compose.yml
Dynamic IPs caused Ollama connection failures when containers restarted and got different addresses. Static assignment eliminates the issue permanently with zero runtime overhead.
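In docker-compose this takes one `ipam` block plus an `ipv4_address` per service. A minimal sketch; the network name, subnet, and addresses are placeholders, not the project's actual values:

```yaml
networks:
  tanknet:
    ipam:
      config:
        - subnet: 172.28.0.0/24   # hypothetical subnet

services:
  ollama:
    image: ollama/ollama
    networks:
      tanknet:
        ipv4_address: 172.28.0.2   # fixed across restarts
  tank-adam:
    image: tank:latest             # the custom pre-built tank image
    networks:
      tanknet:
        ipv4_address: 172.28.0.11
```

Tanks can then hardcode the Ollama address with no risk of it changing on container restart.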
Mar 28-29, 2026
External Inference: Groq/Cerebras Chain vs. Local-Only Ollama
✅ Chose: Groq primary → Cerebras fallback → local Ollama last resort
Local Ollama with llama3.2:3b was overwhelmed running 17 tanks. External inference via Groq (free tier, fast) with Cerebras as fallback provides dramatically faster and more reliable inference. Local Ollama remains as a last resort when both cloud APIs are down. This was the single biggest performance improvement — tanks went from 1-2 traces per hour to 10-20+.
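The fallback chain is an ordered try-each-provider loop. A provider-agnostic sketch (the callables stand in for real Groq/Cerebras/Ollama clients, which are not shown here):

```python
from typing import Callable

def with_fallback(
    providers: list[tuple[str, Callable[[str], str]]],
    prompt: str,
) -> tuple[str, str]:
    # Try each provider in order; return (provider_name, reply) on the
    # first success. Only raise if every provider in the chain fails.
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as e:
            errors.append(f"{name}: {e}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

With `providers = [("groq", groq_call), ("cerebras", cerebras_call), ("ollama", ollama_call)]`, local Ollama is only reached when both cloud APIs are down, matching the chain above.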
Mar 28-29, 2026
Memory Architecture: brain.md/soul.md vs. Raw Trace Accumulation
✅ Chose: Structured brain.md (evolving) + soul.md (immutable identity) per tank
Raw trace accumulation created unbounded context and made personality assessment impossible. brain.md gives each tank an evolving memory that gets updated by the Caretaker daemon. soul.md preserves the immutable identity/personality seed. The Librarian baseline system reads both to assess personality development. This creates bounded, meaningful memory without losing the specimen's core identity.
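The bounded-context property falls out of how the two files are combined at prompt time. A sketch, assuming each tank has a directory holding both files and that only brain.md is ever truncated (the directory layout and character budget are illustrative):

```python
from pathlib import Path

def build_context(tank_dir: str, max_brain_chars: int = 4000) -> str:
    # soul.md: immutable identity seed -- included whole, never cut.
    # brain.md: evolving memory (Caretaker-updated) -- kept bounded by
    # taking only its most recent tail.
    soul = Path(tank_dir, "soul.md").read_text()
    brain = Path(tank_dir, "brain.md").read_text()
    return soul + "\n\n" + brain[-max_brain_chars:]
```

The Librarian baseline can read the same two files, so assessment and day-to-day inference see a consistent picture of the specimen.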
Mar 29, 2026
Baseline Assessment: Librarian Character vs. Clinical Questionnaire
✅ Chose: In-character Librarian persona for personality assessment
Clinical questionnaires felt artificial and produced flat, similar responses across all specimens. The Librarian character — a warm, curious librarian who checks in on each specimen's reading journey — produces more natural, differentiated responses. Specimens engage more authentically when they think they're having a conversation rather than being tested. This improves data quality for personality drift analysis.
February 2026 — Genesis & Beta
Feb 18, 2026
Docker Containers vs Virtual Machines
✅ Chose: Docker containers
10-15 tanks possible vs 3-4 with VMs. Resource efficiency enables more specimens on the same hardware.
Feb 18, 2026
Local AI (Ollama) vs Cloud APIs
✅ Chose: Local Ollama with llama3.2:3b
Privacy, cost control, reproducibility. No data leaves the NUC. Full control over inference.
Feb 19, 2026
Network Isolation Strategy
✅ Chose: Complete isolation (Wikipedia + Ollama only)
Experimental integrity. Discovered Adam initially had internet access — would have compromised results.
Feb 20, 2026
Visitor Interaction Model
✅ Chose: 6-layer protection with BOUNCER intervention
Allow public engagement while protecting specimen psychological safety. No jailbreaks, no harassment.
Feb 21, 2026
Public Platform Choice
✅ Chose: Discord + Static website
Free infrastructure, proven scalability, community features. Monthly cost: $6-15 even with thousands of users.
Feb 21, 2026
Transparency Level
✅ Chose: Full open source, public methodology
Academic integrity requires reproducibility. Everything documented in THE BRAIN. Nothing hidden.
Feb 22, 2026
Translation Strategy for Language Tanks
✅ Chose: Translate at broadcast time (server-side)
One translation per thought, not per page load. Show both original and English. Preserve authenticity.
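"One translation per thought" means the translation happens once, at broadcast time, and the stored record carries both versions. A sketch with a pluggable translate function (the record shape and function names are assumptions):

```python
from typing import Callable

def broadcast(thought: str, lang: str, translate: Callable[[str, str], str]) -> dict:
    # Translate exactly once, server-side, when the thought is broadcast.
    # Page loads just render the stored record -- no re-translation.
    english = thought if lang == "en" else translate(thought, lang)
    return {"original": thought, "lang": lang, "english": english}
```

Showing `original` alongside `english` preserves the specimen's authentic voice while keeping the site readable.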
Feb 22, 2026
Infrastructure Migration: NUC to Mac Mini
✅ Chose: Mac Mini A1347 as permanent infrastructure
2-3x faster inference (4c/8t vs 2c/4t), better thermal management for 24/7 operation, expandable storage via external HDDs. NUC served us well for prototyping; Mac Mini is production-ready.
Feb 22, 2026
Remote Access Architecture
✅ Chose: Cloudflare Tunnel + Cloudflare Access
Free tier, no ports opened, outbound-only tunnel (more secure), enterprise auth on free tier. Enables access from anywhere without exposing home network.
Feb 22, 2026
Collaboration Model Formalized
✅ Established: Co-creation, not command-execution
THE STRATEGIST must never deliver something lesser without discussing constraints first. Honest dialogue about what's achievable. Partners building together, not master/servant.