AI Agents Just Need Good --help
AI agents succeed or fail based on your --help text. Clear command structure, explicit success signals, and structured output options make the difference between one API call and five retries.
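A minimal sketch of the idea, assuming a hypothetical `deploy` tool (the flag names and output shape are illustrative, not from any real CLI):

```python
# Hypothetical sketch of an agent-friendly CLI: one clear verb,
# a structured-output flag, and an unambiguous success signal.
import argparse
import json
import sys

def main() -> int:
    parser = argparse.ArgumentParser(
        prog="deploy",
        description="Deploy the current project.",  # the --help text agents actually read
    )
    parser.add_argument("--json", action="store_true",
                        help="emit machine-readable JSON instead of prose")
    args = parser.parse_args()

    result = {"status": "ok", "url": "https://example.com/app"}  # placeholder outcome

    if args.json:
        print(json.dumps(result))             # structured output for agents
    else:
        print(f"Deployed: {result['url']}")   # friendly output for humans

    return 0  # explicit success signal: zero on success, non-zero on failure

if __name__ == "__main__":
    sys.exit(main())
```

An agent can run `deploy --json`, parse one line of JSON, and trust the exit code, which is the difference between one call and five retries.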
Implemented the FRE algorithm from Duan et al.'s 2025 paper in production Zig, achieving O(m log^(2/3) n) for single-source shortest paths against Dijkstra's O(m + n log n). Breaking the sorting barrier pays off on large sparse graphs, but the overhead kills performance on small or dense graphs.
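A back-of-the-envelope comparison shows why sparsity matters; the sketch below plugs very sparse graphs (m ≈ n) into both bounds and drops constant factors, which is exactly what the real implementation's overhead reintroduces on small inputs:

```python
# Compare the asymptotic terms m * log(n)^(2/3) (FRE) and
# m + n * log(n) (heap-based Dijkstra) on very sparse graphs.
# Constant factors are dropped, so small-n numbers mean little.
import math

for n in (10**3, 10**6, 10**9):
    m = n                                   # very sparse: m ~ n edges
    fre = m * math.log2(n) ** (2 / 3)       # FRE bound
    dijkstra = m + n * math.log2(n)         # Dijkstra bound
    print(f"n={n:>13,}  FRE ~ {fre:,.0f}  Dijkstra ~ {dijkstra:,.0f}")
```

The gap grows like log^(1/3) n, so the asymptotic win only dominates once graphs get genuinely large.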
The future of AI isn't single agents but orchestrated swarms sharing temporal memory graphs. Picture agents that don't pass messages but share thoughts, with orchestrators that predict bottlenecks before they surface and memory systems that evolve themselves.
Resilient Future, Agrama v2
AI coding assistants are incredible at rapid code generation, but without human guidance they miss maintainability, architecture, and sustainable engineering practices. The key isn't perfect prompts; it's knowing when and how to nudge the AI toward better decisions.
Stargazer Observatory, Reading Progress, Agentic Patterns, Advisory Work
“Play long-term games with long-term people.” — Naval Ravikant
This hits different when someone extracts value from you, then actively works to devalue you.
Long-term games compound. Trust compounds. Reputation compounds.
The short-term player takes what they need, then burns the bridge to prevent you from collecting on the relationship later. It’s extraction with sabotage, ensuring the value only flows one way.
Long-term people protect your reputation because they understand it’s connected to theirs.
When you find your long-term people, you’ve found something rare: partners who understand that mutual success compounds.
AI coding agents work best with inversion of control, curated context over comprehensive context, usage-based pricing for real work, emergent behaviors over engineered features, rich feedback loops, and agent-native workflows. The revolution is here: adapt or be displaced.
Built AgentProbe to test how AI agents interact with CLI tools. Even a simple command like 'vercel deploy' shows massive variance: 16-33 turns across runs and a 40% success rate. The tool surfaces specific friction points and grades a CLI's 'agent-friendliness' from A to F. Now available for Claude Code MAX subscribers.
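The teaser doesn't show AgentProbe's internals, so here's a hypothetical sketch of the measurement loop such a tool implies; `run_agent_once` is a simulated stand-in (using the numbers above) for whatever actually drives the agent:

```python
# Run the same task repeatedly, record turns and success,
# then summarize the variance across runs.
import random
import statistics

def run_agent_once(command: str) -> tuple[int, bool]:
    """Stand-in for one agent session; returns (turns, succeeded)."""
    turns = random.randint(16, 33)         # turn counts like the post reports
    return turns, random.random() < 0.4    # ~40% success rate

def probe(command: str, runs: int = 10) -> None:
    results = [run_agent_once(command) for _ in range(runs)]
    turns = [t for t, _ in results]
    successes = sum(ok for _, ok in results)
    print(f"{command}: turns {min(turns)}-{max(turns)} "
          f"(stdev {statistics.stdev(turns):.1f}), "
          f"success {successes}/{runs}")

probe("vercel deploy")
```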
From shipping 50+ AI projects in months, I learned that successful tools must master the duality between human needs (power/flexibility) and agent needs (clarity/determinism). Type safety, machine-readable docs, and friction-free workflows separate winners from losers in the AI-native era.