Resilient Future

Without meaningful human oversight and distributed agency, how resilient is our future?

Still trying to finish the book, but this question keeps surfacing as I work on Agrama v2.

Agrama v2

Memory Substrate for the AI Agent Age - Git for AI Agents: A revolutionary temporal knowledge graph that serves as shared memory and communication substrate for multi-agent AI systems. Built in Zig for sub-millisecond performance.
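The core idea behind a temporal knowledge graph is small enough to sketch: every fact carries a timestamp, and queries ask what the graph believed at a point in time, much like checking out a commit. A toy illustration (Python here for brevity; Agrama itself is Zig, and all names below are hypothetical, not its API):

```python
import bisect
from collections import defaultdict

class TemporalGraph:
    """Toy temporal knowledge graph: (subject, predicate, object) facts
    tagged with the time they were asserted."""

    def __init__(self):
        # (subject, predicate) -> sorted list of (timestamp, object)
        self._facts = defaultdict(list)

    def assert_fact(self, subject, predicate, obj, timestamp):
        bisect.insort(self._facts[(subject, predicate)], (timestamp, obj))

    def query_as_of(self, subject, predicate, timestamp):
        """Most recent object for (subject, predicate) at or before
        `timestamp` -- like `git checkout` for a point in history."""
        history = self._facts[(subject, predicate)]
        timestamps = [t for t, _ in history]
        i = bisect.bisect_right(timestamps, timestamp)
        return history[i - 1][1] if i else None

g = TemporalGraph()
g.assert_fact("agent-1", "working_on", "auth module", timestamp=10)
g.assert_fact("agent-1", "working_on", "search index", timestamp=20)

g.query_as_of("agent-1", "working_on", 15)  # "auth module"
g.query_as_of("agent-1", "working_on", 25)  # "search index"
```

Two agents can disagree about the present but still replay each other's past, which is what makes this usable as a shared memory substrate.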

As we build infrastructure for AI agents to collaborate and share knowledge, the question of oversight becomes more pressing. How do we ensure resilience when systems become increasingly autonomous and interconnected?

Stargazer Observatory

Building a stargazer observation dashboard for open source projects to track their stargazer activity and compare against competitors. The end goal is understanding stargazer personas and demographics—separating real engagement from bot activity. Currently motivated by helping a few friendly projects gain clarity and actionable insights about their growth patterns.
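Separating real engagement from bots often starts with timing: organic stars trickle in, while bot rings land dozens of stars in minutes. (GitHub's stargazer API exposes `starred_at` timestamps via the `application/vnd.github.star+json` media type.) A rough sketch of that heuristic, with an arbitrary one-hour window of my own choosing:

```python
from datetime import datetime, timedelta

def max_stars_in_window(starred_at, window=timedelta(hours=1)):
    """Return the largest number of stars that arrived within any single
    `window`-sized interval. Organic growth keeps this low; bot rings
    tend to spike it."""
    times = sorted(starred_at)
    best = start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        best = max(best, end - start + 1)
    return best

# Synthetic data: a bot-like burst vs. a slow organic trickle.
burst = [datetime(2025, 1, 1, 12, 0) + timedelta(seconds=i) for i in range(50)]
organic = [datetime(2025, 1, 1) + timedelta(hours=6 * i) for i in range(50)]

max_stars_in_window(burst)    # 50 -- the whole burst fits in one window
max_stars_in_window(organic)  # 1  -- stars spaced six hours apart
```

A real dashboard would layer account-age and follower-graph signals on top, but burst detection alone already separates a lot of noise from signal.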

Reading Progress

Finally cracked open God Emperor of Dune after it collected dust for way too long. The philosophical depth Herbert brings to Paul’s transformation into Leto II is fascinating—the prescient vision of necessary tyranny versus human freedom creates such compelling tension.

Agentic Patterns

Continuing updates to https://agentic-patterns.com/ whenever I find time. Most content generation is AI-assisted now, but I need to carve out proper review time to ensure the patterns maintain quality and accuracy. The site’s becoming a solid reference for agentic design patterns.

Advisory Work

Started several formal and informal advisory roles helping dev tools companies identify their growth vectors. It’s rewarding work—combining technical insight with market strategy to help teams find their product-market fit and scale effectively.

“Play long-term games with long-term people.” — Naval Ravikant

This hits different when someone extracts value from you, then actively works to devalue you.

Long-term games compound. Trust compounds. Reputation compounds.

The short-term player takes what they need, then burns the bridge to prevent you from collecting on the relationship later. It’s extraction with sabotage, ensuring the value only flows one way.

Long-term people protect your reputation because they understand it's connected to theirs.

When you find your long-term people, you’ve found something rare: partners who understand that mutual success compounds.

The Day the Skeptic Blinked

Kenton Varda, a Cloudflare engineer who was skeptical of AI, put Claude to the test by having it build an OAuth library. The resulting code was surprisingly good, leading him to conclude that the power isn't in AI replacing humans, but in combining AI speed with human expertise.

The Agent is The Loop

The llm-loop-plugin gives Simon Willison's LLM CLI the ability to loop and iterate autonomously. Instead of being a bottleneck feeding prompts one by one, you can set a goal and watch it work file by file until complete. The magic isn't in the AI model—it's in the loop.
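The loop itself is small enough to sketch. A minimal agentic loop (with a scripted stub standing in for the model; llm-loop's actual internals will differ, and every name here is illustrative):

```python
def agent_loop(goal, model, tools, max_steps=20):
    """Minimal agentic loop: ask the model for the next action, run it,
    feed the result back, and repeat until the model says it's done."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = model(history)  # e.g. {"tool": "write_file", "args": {...}}
        if action.get("done"):
            break
        result = tools[action["tool"]](**action.get("args", {}))
        history.append(f"{action['tool']} -> {result}")
    return history

# Stub model: a scripted sequence standing in for real LLM calls.
script = iter([
    {"tool": "write_file", "args": {"path": "a.txt", "text": "hi"}},
    {"done": True},
])
files = {}
tools = {"write_file": lambda path, text: files.update({path: text}) or "ok"}
history = agent_loop("create a.txt", lambda h: next(script), tools)
```

The human sets the goal and the step budget; everything between those two decisions is the loop's job.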

Hear me out: "Adversarial Pair Coding with AI Agents." It feels great, keeps me in the flow, and the velocity is immense!

+----------------------------+
|        Coder Agent         |
| - Generates code           |
| - Learns patterns          |
| - Optimizes logic          |
+----------------------------+
              |
+----------------------------+
|    Shared Understanding    |
| - Language rules           |
| - Functional goals         |
| - Iterative improvement    |
+----------------------------+
              |
+----------------------------+
|      Adversary Agent       |
| - Finds bugs               |
| - Suggests attacks         |
| - Tests edge cases         |
+----------------------------+
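The diagram boils down to a loop: the coder proposes an implementation, the adversary hunts for a failing input, and counterexamples feed back into the next revision. A sketch with stub agents (real versions would be LLM calls; the tiny `square` spec is just for illustration):

```python
def adversarial_loop(coder, adversary, max_rounds=5):
    """Coder proposes an implementation; adversary hunts for a failing
    input. Counterexamples feed back into the next revision."""
    feedback = None
    for n in range(1, max_rounds + 1):
        func = coder(feedback)
        feedback = adversary(func)
        if feedback is None:
            return func, n  # adversary gave up: no bug found
    raise RuntimeError("adversary still finding bugs after max_rounds")

# Stub agents for a tiny spec: square(x) == x * x for all ints.
def coder(feedback):
    if feedback is None:
        return lambda x: x * x if x > 0 else 0  # first draft: wrong for x < 0
    return lambda x: x * x                      # revised after the counterexample

def adversary(func):
    for x in (0, 1, -1, 10, -10):
        if func(x) != x * x:
            return x  # counterexample: spec violated here
    return None

square, rounds = adversarial_loop(coder, adversary)  # converges in 2 rounds
```

The "shared understanding" box is whatever spec the adversary checks against; the tighter that spec, the more useful the adversary's attacks.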
FREE IDEA!

AI Agents Dashboard

A web UI for deploying and managing AI agents in containers

Simplify AI operations with AI Agents Dashboard—a single web interface that combines container-use, Coder AgentAPI, and Claude. Launch a primary agent instance from the dashboard, which then spins up additional isolated agent environments in containers. Monitor resource usage, health, and logs in real time, and start, stop, or scale any agent without using the command line.

“Orchestrate AI at scale, one container at a time.”

Target market: DevOps teams, AI researchers, and software engineers who need an easy way to deploy, observe, and control multiple Claude agents within containerized workflows.
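Under the web UI, the control plane is mostly bookkeeping: track each agent's container image and state, and expose start/stop/scale. A back-of-the-envelope sketch (pure Python stand-in; a real version would drive container-use or the Docker API, and every name here is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    image: str
    state: str = "stopped"  # "stopped" | "running"

class AgentRegistry:
    """In-memory stand-in for the dashboard's control plane."""

    def __init__(self):
        self._agents = {}

    def deploy(self, name, image):
        self._agents[name] = Agent(name, image)

    def start(self, name):
        # A real implementation would launch an isolated container here.
        self._agents[name].state = "running"

    def stop(self, name):
        self._agents[name].state = "stopped"

    def scale(self, name, replicas):
        """Clone an agent into `replicas` isolated copies, all running."""
        base = self._agents[name]
        for i in range(1, replicas):
            clone = f"{name}-{i}"
            self.deploy(clone, base.image)
            self.start(clone)

    def status(self):
        return {a.name: a.state for a in self._agents.values()}

registry = AgentRegistry()
registry.deploy("primary", "claude-agent:latest")
registry.start("primary")
registry.scale("primary", 3)  # primary plus two clones, all running
```

Swap the dict for real container calls and put a web frontend over `status()`, and you have the skeleton of the dashboard.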

The Amplification of Bottlenecks

AI doesn't just make work faster; it amplifies hidden constraints. At Anthropic, eliminating coding bottlenecks revealed decision-making, integration, and context as the real limitations. Every breakthrough follows this pattern: solve one constraint, amplify the next.