“Play long-term games with long-term people.” — Naval Ravikant

This hits different when someone extracts value from you, then actively works to devalue you.

Long-term games compound. Trust compounds. Reputation compounds.

The short-term player takes what they need, then burns the bridge to prevent you from collecting on the relationship later. It’s extraction with sabotage, ensuring the value only flows one way.

Long-term people protect your reputation because they understand it’s connected to theirs.

When you find your long-term people, you’ve found something rare: partners who understand that mutual success compounds.

What Sourcegraph Learned Building AI Coding Agents

AI coding agents work best with inversion of control, curated context over comprehensive context, usage-based pricing for real work, emergent behaviors over engineered features, rich feedback loops, and agent-native workflows. The revolution is here: adapt or be displaced.

Why I Built a Tool to Test AI's Command Line AX

Built AgentProbe to test how AI agents interact with CLI tools. Even simple commands like 'vercel deploy' show massive variance: 16-33 turns across runs, 40% success rate. The tool reveals specific friction points and grades CLI 'agent-friendliness' from A-F. Now available for Claude Code MAX subscribers.

The Agent-Friendly Stack: 50+ AI Projects Taught Me This

From shipping 50+ AI projects in months, I learned that successful tools must master the duality between human needs (power/flexibility) and agent needs (clarity/determinism). Type safety, machine-readable docs, and friction-free workflows separate winners from losers in the AI-native era.

The Anti-Playbook: Why AI Dev Tools Need Different Growth

The traditional SaaS playbook is dead for AI dev tools. Developers smell BS, the market has three overlapping layers, and you're fighting inertia—not competition. Success means activation through value, retention through community, and expansion through metrics.

Code with Claude AI from Your Phone: VM Setup Guide

Complete guide to setting up Claude Code in your homelab VM and accessing it securely from your phone via Cloudflare Tunnel - no open ports required.

The 20-Year Technology Adoption Cycle and AI's Acceleration

Infrastructure technologies historically take 20 years to reach critical mass adoption (GPS, mobile, autonomous vehicles). AI breaks this pattern, achieving rapid shallow adoption through existing digital infrastructure, but faces new barriers transitioning to deep societal integration by 2030-2040.

The Day the Skeptic Blinked

Kenton Varda, a Cloudflare engineer who was skeptical of AI, tested Claude by building an OAuth library. The code was surprisingly good, leading him to realize the power isn't in AI replacing humans, but in the combination of AI speed and human expertise.

The Agent is The Loop

The llm-loop-plugin gives Simon Willison's LLM CLI the ability to loop and iterate autonomously. Instead of being a bottleneck feeding prompts one by one, you can set a goal and watch it work file by file until complete. The magic isn't in the AI model—it's in the loop.
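The loop idea in that post can be sketched in a few lines. This is a toy illustration, not the llm-loop-plugin's actual API: `run_model` and `goal_met` are hypothetical stand-ins for a real LLM call and a real completion check.

```python
# Toy sketch of "the agent is the loop": the model call is a black box,
# and the value comes from feeding each result back in until a goal
# check passes. run_model and goal_met are illustrative stand-ins.

def run_model(prompt: str) -> str:
    """Stand-in for an LLM call; appends a trivial 'edit' marker."""
    return prompt + " [edited]"

def goal_met(state: str) -> bool:
    """Stand-in completion check: consider the goal done after 3 edits."""
    return state.count("[edited]") >= 3

def agent_loop(goal: str, max_turns: int = 10) -> str:
    state = goal
    for _ in range(max_turns):   # bounded so a stuck agent can't spin forever
        if goal_met(state):
            break
        state = run_model(state)  # each turn consumes the previous result
    return state

result = agent_loop("refactor module")
```

The human sets the goal once; the loop does the per-turn feeding that would otherwise make the human the bottleneck.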

Hear me out: “Adversarial Pair Coding with AI Agents” feels nice, keeps me in the flow, and the velocity is immense!

+----------------------------+
|        Coder Agent         |
| - Generates code           |
| - Learns patterns          |
| - Optimizes logic          |
+----------------------------+
              |
+----------------------------+
|    Shared Understanding    |
| - Language rules           |
| - Functional goals         |
| - Iterative improvement    |
+----------------------------+
              |
+----------------------------+
|      Adversary Agent       |
| - Finds bugs               |
| - Suggests attacks         |
| - Tests edge cases         |
+----------------------------+
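The diagram can be sketched as a loop between two toy agents: a coder that proposes code and learns from reported failures, and an adversary that probes edge cases. Everything below is an illustrative stand-in, not a real agent framework.

```python
# Toy adversarial pair: the coder patches around every failure the
# adversary reports, until the adversary comes up empty.

def coder(known_failures):
    """Toy coder: rejects every divisor it has learned is unsafe."""
    def divide(a, b):
        if b in known_failures:
            raise ValueError("unsafe divisor rejected")
        return a / b
    return divide

def adversary(fn):
    """Toy adversary: probes edge cases, reports the first unhandled crash."""
    for a, b in [(4, 2), (1, 0), (-3, 3)]:
        try:
            fn(a, b)
        except ValueError:
            continue          # handled failure: not a bug
        except Exception:
            return (a, b)     # unhandled crash: report it to the coder
    return None

known_failures = set()
for _ in range(5):                    # the shared loop both agents live in
    fn = coder(known_failures)
    bug = adversary(fn)
    if bug is None:
        break                         # adversary found nothing: done
    known_failures.add(bug[1])        # coder learns the failing input
```

The "shared understanding" box in the diagram is the `known_failures` set here: the channel through which the adversary's findings become the coder's constraints.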
