A developer I've known for three years asked me last fall what I was charging now that Claude Code was handling a significant chunk of my execution work. It was a fair question. It was also the wrong question.
What follows is a reconstructed version of that conversation. D. is a senior developer and former colleague. I edited for clarity and kept the friction.
D. asked me how I was billing now
D.: "Are you still doing project-based or hourly? Because if agents are writing most of the code, I'd imagine clients start asking why they're paying the same rate."
Me: Clients haven't asked that, actually. But I've been asking myself the same thing for six months.
The old model was project-based with a rough hourly floor underneath it. A build that would take me 80 hours got scoped at a rate that reflected 80 hours of senior work. That felt fair because I knew what 80 hours of my attention produced.
What Claude Code changed wasn't the output quality. I'm shipping the same work, sometimes tighter, because the review cycles are faster and I'm not copy-pasting boilerplate at midnight. What changed is that 80 hours of output now takes closer to 12 hours of my time.
D.: "So you're making more per hour."
Me: That's one way to read it. Hourly isn't the right unit anymore, though.
What the output actually looks like now
In week 14 of a client engagement I tracked publicly, I shipped 11 API endpoints, 2 dashboards, and caught one production incident before it landed. Total human time: roughly 3 hours across two days. The rest was orchestration.
Four specialist agents running in parallel. Schema agent took my ERD sketch and generated Prisma schema plus migrations. Endpoint agent took the schema and generated typed route handlers. Test agent wrote Vitest suites for every endpoint. Review agent compared generated code against established conventions and flagged 6 style drifts for me to review.
My job was the sketch, the orchestration, and the judgment calls on those 6 flags. The agents handled everything between the sketch and the pull request.
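The shape of that pipeline can be sketched in a few lines. This is an illustration, not the real workflow: the actual agents are Claude Code sessions, and every function name here is a hypothetical stand-in.

```typescript
// Hypothetical sketch of the four-stage pipeline described above.
// Each "agent" is stubbed as a pure function; in the real workflow
// these are Claude Code agents, not local stubs.

interface ReviewFlag { file: string; note: string }

interface PipelineResult {
  schema: string;        // Prisma schema generated from the ERD sketch
  endpoints: string[];   // typed route handlers
  tests: string[];       // Vitest suites, one per endpoint
  flags: ReviewFlag[];   // style drifts surfaced for human judgment
}

// Stub agents standing in for the schema, endpoint, test, and review agents.
const schemaAgent = (erd: string): string => `schema generated from: ${erd}`;
const endpointAgent = (schema: string, names: string[]): string[] =>
  names.map((n) => `handler:${n} (${schema})`);
const testAgent = (endpoints: string[]): string[] =>
  endpoints.map((e) => `vitest suite for ${e}`);
const reviewAgent = (endpoints: string[]): ReviewFlag[] =>
  endpoints
    .filter((_, i) => i % 2 === 0) // pretend every other handler drifts
    .map((e) => ({ file: e, note: "naming drifts from convention" }));

// Everything between the sketch and the pull request is generated;
// the human's job starts when the flags come back.
function runPipeline(erd: string, endpointNames: string[]): PipelineResult {
  const schema = schemaAgent(erd);
  const endpoints = endpointAgent(schema, endpointNames);
  return {
    schema,
    endpoints,
    tests: testAgent(endpoints),
    flags: reviewAgent(endpoints),
  };
}
```

The point of the sketch: every stage except the last line of judgment is mechanical once the ERD exists. The scarce input is deciding which of the returned flags matter.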
That's not 3 hours of work. That's 3 hours of a specific kind of work: knowing which of the 6 flags actually matter and which to ignore. That judgment is not something the agent has.
The answer I was working toward when D. asked: I'm charging for the judgment, not the hours.
Why the time-based model stops making sense
D.: "But clients are going to figure this out eventually. They'll say 'you used to take six weeks, now you take one week, why is the invoice the same.'"
Me: That's the wrong framing, but I understand why it sounds right.
The issue isn't that clients will notice the output velocity increased. The issue is that time-based pricing creates a perverse incentive: it rewards slow. The longer something takes, the more you make. Nobody says that out loud, but it's baked into the model.
Outcome-based pricing removes that incentive. The deliverable is worth what it's worth regardless of how long it took me to produce it. A correct, well-documented API endpoint is worth the same whether I built it in 20 minutes or 8 hours.
I've looked at a lot of solo operators who crossed $500K in annual income - every one of them had already killed their time-based client work. Not planning to. Already had. The full case for why is in the work section, where the pattern shows up across every engagement type.
What I changed instead
D.: "So what does 'outcome-based' mean practically? You just charge whatever you think it's worth?"
Me: Sort of, but there's structure to it.
The move I made was toward a productized layer. One fixed-price product that answers a specific question: the DTC Stack Audit at $129. It covers tracking configuration, analytics gaps, Core Web Vitals, and attribution accuracy across Shopify, Meta, and GA4. The output is a scored diagnostic. Self-serve, no calls required.
That product doesn't make me rich. What it does is remove the project-scoping conversation from the top of every engagement. Brands who need to know what's broken can find out for $129 without a six-week discovery retainer. Brands who want implementation after the diagnostic have a much cleaner conversation because we already know what needs fixing.
The retainer work isn't gone yet. I still have two slots open on the availability page. But I set a public sunset date: December 31, 2026. After that, no new retainer work. The public deadline is its own forcing function - it keeps me shipping the product layer instead of drifting back into comfortable monthly retainer revenue.
D.: "That's a pretty hard line. What happens if the products don't cover the floor by December?"
Me: Then I write a public post explaining why I extended, and I extend. A public retraction isn't a failure, it's an update with receipts. What I won't do is drift back quietly and pretend I never said it.
D. pushed back on the productized model
D.: "But there's work that genuinely needs ongoing presence. A brand rebuilding infrastructure from scratch, a complex platform migration, something where the scope is still being discovered. A $129 product can't do that."
Me: You're right, and I don't think it should.
The productized audit works when the question is "what's broken in a defined system." It doesn't work when the question is "what should we build and in what order, given constraints we're still learning." That second question needs someone in the room over time, and that's what the retainer window is for.
What I'm not willing to do is use the second category as a reason to avoid building the first. Most DTC brands I talk to need a diagnosis before they need a builder. The diagnosis is the productized product. The building is the retainer, when the diagnosis warrants it.
The pricing model follows from that sequence. First you know what's broken. Then you know what fixing it costs. Then you can have a useful conversation about engagement structure. Running that in reverse - retainer first, diagnosis second - is expensive for the brand and inefficient for me.
D.: "So you're saying the audit is a qualifying step for the bigger work."
Me: It can be. But it's also complete on its own for brands that have the internal capacity to execute once they know what to fix. Not every diagnosis needs a surgeon to perform the surgery.
What this exchange taught me about pricing as a positioning problem
The question D. asked was about pricing. The real question was about positioning.
If I'm billing hours, I'm a contractor. The client is buying time. If I'm billing outcomes, I'm a practitioner. The client is buying a result. Those are different things to sell, different things to buy, and they attract different conversations.
The Claude Code shift accelerated a question I would have had to answer anyway: what am I actually selling? Six months of shipping with agents made the old answer (skilled execution time) obviously insufficient. The execution is still skilled. The time is not the thing.
What I settled on: I'm selling the taste, the orchestration judgment, and the diagnostic clarity to know what matters. The agents handle the execution. I handle knowing what the execution should produce and whether it did.
That's not a rate card. It's a positioning statement. The rate card follows from it.
The numbers above are specific to one engagement, but the shape is consistent: the case studies show the same pattern across different client types and project scopes.
FAQ
Does AI-assisted development mean clients should pay less?
The output is the same or better - it's just produced differently. A correctly built, well-tested API endpoint has the same value to the client whether it took 20 minutes or 8 hours. The rate is for the judgment that produced it, not the time it took. Time was always an imperfect proxy for that anyway.
How do you scope a project when AI can change how long things take?
Scope by deliverable, not by hours. Define what done looks like: endpoint count, test coverage, specific features shipped. The client cares about the outcome, not the methodology. Scoping by deliverable also removes the perverse incentive that time-based scoping creates.
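One way to make "scope by deliverable" concrete is to write the definition of done as data rather than prose. A minimal sketch, assuming a checklist of outcomes; the field names are illustrative, not a real contract format:

```typescript
// Illustrative deliverable-based scope: "done" is a function of outcomes.
// Field names are hypothetical; note that elapsed time never appears.

interface Scope {
  endpoints: number;        // how many endpoints must ship
  minTestCoverage: number;  // e.g. 0.9 for 90%
  features: string[];       // named features that must land
}

interface Delivered {
  endpoints: number;
  testCoverage: number;
  features: string[];
}

// Done means every scoped outcome is met or exceeded.
function isDone(scope: Scope, d: Delivered): boolean {
  return (
    d.endpoints >= scope.endpoints &&
    d.testCoverage >= scope.minTestCoverage &&
    scope.features.every((f) => d.features.includes(f))
  );
}
```

Because `isDone` takes no hours argument, the invoice question ("why is it the same if it took one week instead of six?") never arises: the scope was met or it wasn't.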
Is this just a way to charge more for less work?
It's a way to charge for the right thing. Senior expertise has always been worth more than junior expertise per hour, even when the senior finishes faster. AI tools amplify output velocity but don't change the value of the judgment that directs that output. The rate should reflect the judgment, not the velocity.
What kind of client actually benefits from a productized audit versus a retainer?
Brands that have a defined stack and are experiencing specific problems (attribution gaps, tracking drift, performance regressions) are good fits for a fixed-price diagnostic. Brands in an earlier or more ambiguous phase - building new infrastructure, navigating a platform migration, designing a system from scratch - need ongoing presence. The products page covers what's currently available in both formats.
What happens to clients who want ongoing access to the judgment, not just the diagnostic?
That's what the retainer window is for, through December 2026. After that, the structured product and course work takes over. The retainer exit isn't about walking away from clients who need help. It's about building something that can scale past the ceiling that retainer math creates.
Sources and specifics
- Week 14 output numbers (11 API endpoints, 2 dashboards, 1 production incident caught, approximately 3 hours of human time) are from a real tracked build week in a client engagement, published as a build log.
- The retainer math ceiling example ($10K/month x 4 clients x 12 months = $480K/year, zero flywheel hours) is modeled on real fractional market rates from 2025-2026.
- The Justin Welsh margin figure (92%) is from publicly reported income breakdowns and creator-business research conducted in April 2026.
- Public retainer sunset date (December 31, 2026) is live on the availability page and has been there since early 2026.
- Research on $1M+ solo educator businesses (six operators studied, all killed services to scale) informed the current product and availability structure.
