How AI Agents Are Reshaping Creation
TL;DR: Today's AI agents excel at computer operation and research, maintain coherence for hours, favor curious problem-solvers over technical experts, and are democratizing software creation while challenging traditional employment models.
The boundaries between technical and non-technical roles are dissolving before our eyes.
As AI agents rapidly evolve from simple coding assistants to autonomous digital workers, we’re witnessing nothing short of a fundamental shift in how software gets built. This isn’t just another incremental improvement in developer productivity—it’s a complete reimagining of who can create software and how quickly ideas can become reality.
Based on insights from Replit CEO Amjad Masad and AI agent pioneer Yohei Nakajima at a recent Village Global event, I’ve synthesized key developments in AI agents and their implications for the future of building technology.
What AI Agents Can Actually Do Today (Not Tomorrow)
AI agents today excel in two primary domains:
Computer use agents are fundamentally different from narrow coding assistants. These aren’t just code generators—they’re digital operators trained to navigate full computer environments. As Amjad Masad puts it:
People think of computer use as something like an operator, but actually it is more like you give the model a virtual machine, and it knows how to execute code on it, install packages, write scripts, use apps, do as much as possible with the computer.
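The core primitive here can be sketched in a few lines: the agent writes code to disk and executes it on its machine. This is an illustrative stand-in, not Replit's implementation; the `write_and_run` helper is hypothetical, and a real agent would also install packages, drive apps, and inspect the results of each step.

```python
# Hypothetical illustration of one computer-use primitive: the agent
# writes a generated script to disk on its VM, then executes it.
# This is a sketch, not any vendor's actual agent loop.
import pathlib
import subprocess
import tempfile


def write_and_run(code: str) -> str:
    """Write generated code to a file and execute it, returning stdout."""
    path = pathlib.Path(tempfile.mkdtemp()) / "task.py"
    path.write_text(code)
    result = subprocess.run(
        ["python3", str(path)], capture_output=True, text=True
    )
    return result.stdout.strip()


print(write_and_run('print("hello from the agent\'s VM")'))
```

A real agent would feed the captured stdout (and stderr) back into the model so it can decide its next action.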
Research agents have inverted the traditional search paradigm. Instead of deterministic systems retrieving information and then asking AI to summarize, modern agents now drive the entire process:
That question goes to the agent, the agent formulates the searches in the form of tool calls. So it'll search the Web, it'll search some existing index or what have you, and it'll iterate until it's sort of satisfied with the amount of information that it gets, and then summarizes the output for you.
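The control flow described above—formulate a search, call a tool, check whether the results are sufficient, repeat, then summarize—can be sketched as a simple loop. Everything here is a stub: `stub_search`, the query list, and the "satisfied?" check all stand in for LLM calls in a real research agent.

```python
# Minimal sketch of the agentic search loop described above.
# The search tool, query generation, and satisfaction check are all
# placeholders for what would be model-driven steps in a real agent.

def stub_search(query):
    # Stand-in for a web or index search tool call.
    corpus = {
        "AI agents": ["Agents can operate computers.", "Agents can do research."],
        "agent coherence": ["Coherent working time doubles roughly every seven months."],
    }
    return corpus.get(query, [])


def research_agent(question, max_iterations=5):
    """Iterate: formulate a search, gather results, stop when satisfied."""
    queries = ["AI agents", "agent coherence"]  # a real agent generates these from the question
    gathered = []
    for query in queries[:max_iterations]:
        gathered.extend(stub_search(query))   # tool call
        if len(gathered) >= 3:                # stand-in for the model's "am I satisfied?" check
            break
    return " ".join(gathered)                 # stand-in for the summarization step


print(research_agent("What can AI agents do?"))
```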
What’s remarkable isn’t just what these agents can do, but how rapidly they’re improving.
The cycle time for meaningful capability jumps has compressed from years to months or even weeks.
The Coherence Breakthrough No One Is Talking About
The most underappreciated development in AI agents is their growing ability to maintain coherence over extended periods. This is the difference between a toy and a true collaborator.
Every seven months, we're actually doubling the number of minutes that the AI can work and stay coherent, and this is such a crucial thing for agents, because some tasks simply will need to take hours.
Early AI agents would “glitch out” after just 3-5 minutes of work—sometimes literally “start talking in Chinese” as Amjad colorfully described. The latest models can maintain coherence for hours.
This isn’t just a linear improvement; it’s a qualitative shift that enables entirely new categories of work.
If this exponential trend continues—and recent developments suggest it might accelerate—we’ll soon have agents that can work coherently for days or weeks on complex projects.
The implications for complex, multi-stage knowledge work are profound.
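The doubling claim is a plain exponential: if coherent working time doubles every seven months, then minutes(t) = m₀ · 2^(t/7) for t in months. The 60-minute starting point below is an illustrative assumption, not a figure from the talk.

```python
# If coherent working time doubles every 7 months, growth is exponential:
# minutes(t) = start_minutes * 2 ** (t / doubling_period).
# The 60-minute starting value is an assumption for illustration.

def coherent_minutes(start_minutes, months, doubling_period=7):
    return start_minutes * 2 ** (months / doubling_period)


for months in (0, 7, 14, 28):
    print(months, round(coherent_minutes(60, months)))
# 0 months -> 60, 7 -> 120, 14 -> 240, 28 -> 960
```

At this rate, an agent that works coherently for an hour today would sustain a full working day within about two and a half years.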
Who Thrives in This New Landscape? (Not Who You Think)
Perhaps the most counterintuitive insight is that technical expertise isn’t the primary predictor of success with AI agents. The traits that matter most aren’t what we’ve traditionally valued in software development:
We've been at Replit thinking a lot about what makes a great Replit user. It's actually a very tough question, because if you try to split it by how technical it is, it's not clear-cut... We have doctors and nurses that are not very technical, but are obviously very intelligent, have good systems thinking capabilities, are able to kind of break down problems and have some amount of grit.
The personality traits that predict success with AI agents include:
- Curiosity and openness to experimentation
- Grit and persistence to work through imperfect early drafts
- Systems thinking abilities to break down complex problems
- Comfort with ambiguity and iterative processes
Fascinatingly, being too technical can sometimes be a disadvantage. Technical people often try to micromanage the agent, forcing specific implementation decisions rather than allowing it the freedom to make optimal choices.
If you become a little too technical, they actually start to struggle to use the agent, because they're trying to force it to do certain technical decisions, whereas Replit agent is sort of programmed in a way to have more freedom.
This inverts the traditional power dynamics in software development, where technical knowledge has been the primary gatekeeping mechanism.
The Democratization Is Real (This Time)
We’ve heard promises about democratizing software development for decades. The difference now is that it’s actually happening—and at an astonishing pace.
Consider what Yohei Nakajima built with Replit agent:
[vcpedia.com](https://x.com/yoheinakajima/status/1917615153715241110)—I have a couple of Twitter queries that run on a schedule, and then an LLM decides if there's funding data in that tweet, and then it extracts funding data from that tweet, converts it into tables of funding startups, investors, and then enriches with EXA. And then I'm still working on the daily newsletter. Is it better than Crunchbase? No. Did I build it over a weekend by myself? Yes.
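Yohei's pipeline has a clear shape: scheduled queries fetch tweets, an LLM decides which ones contain funding data, an extractor turns those into structured rows, and an enrichment step fills in the rest. The sketch below mirrors that shape with stubs; every function is a stand-in for the real Twitter, LLM, and EXA calls.

```python
# Hedged sketch of the funding-tracker pipeline described above.
# Every function is a stand-in: the real version uses scheduled
# Twitter queries, LLM classification/extraction, and EXA enrichment.

def fetch_tweets():
    # Stand-in for a scheduled Twitter search query.
    return [
        "Acme raised a $5M seed round led by Example Ventures.",
        "Great weather in SF today!",
    ]


def mentions_funding(tweet):
    # Stand-in for the LLM's "is there funding data here?" decision.
    return "raised" in tweet.lower()


def extract_funding(tweet):
    # Stand-in for LLM extraction into structured rows.
    return {"startup": tweet.split(" raised")[0], "raw": tweet}


def run_pipeline():
    rows = [extract_funding(t) for t in fetch_tweets() if mentions_funding(t)]
    # A real pipeline would now enrich each row (e.g. with EXA) and
    # write it to the startups and investors tables.
    return rows


print(run_pipeline())
```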
Or this example from a non-technical operations team member:
One of my ops people who has no technical background, who was managing all of our data on Notion... built a custom dashboard that pulls in all the data from different parts of our Notion, like, into all the stuff that I need to see in one place.
The barrier to entry for creating software has fallen dramatically, not through simpler programming languages or better IDEs, but through agents that can translate natural language intent into working code.
The Enterprise Opportunity Is Bigger Than Anyone Realizes
While consumer applications get most of the attention, the enterprise impact of AI agents may be even more transformative. Consider these real-world examples:
Yesterday, I was looking at what I called an arbitrage opportunity—someone's company was quoted from NetSuite $150,000 to build a NetSuite extension. He decided to [build it in Replit](https://x.com/billyjhowell/status/1927874359584051210). It cost him $400, and he sold it to his employer for $32,000.
This isn’t just cost-saving; it’s a fundamental rewriting of the economics of enterprise software development. When the implementation cost of custom software drops by two orders of magnitude, the decision-making calculus for what’s worth building changes completely.
Every department with a workflow bottleneck now has the potential to solve it themselves rather than waiting for scarce engineering resources or expensive consultants.
The Moat Question: Where’s the Durable Value?
The question of where durable value will accrue remains open. Amjad’s perspective on moats in AI is refreshingly clear-eyed:
In Silicon Valley, the word moat is overloaded to the point that it's often useless. Sometimes people will say 'Our moat is X, Y, and Z,' and specifically they're saying we have a feature.
For AI companies building applications, claiming to build proprietary models is often more about perception than reality:
A lot of it is cargo culting. A lot of applications should not be building models but are building models because of perception... You're either state of the art or not. If you're not state of the art, no one will use it.
The timeless principles of business still apply: founding team quality, market dynamics, execution speed, and customer obsession matter more than technical differentiators that can be quickly replicated.
The Employment Question: Beyond the Headlines
Dario Amodei of Anthropic recently predicted 10-20% unemployment within 1-5 years due to AI. Is this realistic?
A lot of routine jobs are within the bullseye, within reach—especially when we talked about computer use, quality assurance, data entry, any sort of routine in front of the computer thing is going to get automated.
But there are numerous limiting factors:
- Compute constraints
- Energy limitations
- Enterprise adoption willingness
- Regulatory interventions
- New job category creation
History suggests technological disruption creates as many jobs as it displaces—they’re just different jobs. As Yohei notes:
I don't know any robot mechanics, but I'm assuming there'll be plenty of those, probably more than car mechanics, right? Five to 10 years from now.
Where We Go From Here
The AI agent revolution isn’t coming—it’s already here. The most successful organizations will be those that:
- Embrace the inversion of skills - Recognizing that systems thinking and problem formulation are now more valuable than implementation expertise
- Rethink software economics - When building custom solutions costs 10-100x less, the calculation of what’s worth building changes entirely
- Focus on agent-friendly workflows - Creating environments where humans and AI agents can collaborate effectively
- Build a grit culture - Fostering persistence through imperfect early drafts toward increasingly capable solutions
The ultimate competitive advantage won’t come from having the best AI—it will come from having the best humans who know how to work with AI.
For individual professionals, the imperative is clear: start building with AI agents now, even if the results are imperfect.
As with all technological revolutions, those who adapt early will have an overwhelming advantage over those who wait for perfection.
The future belongs to those who have the courage to ship a shitty first draft.