Building with Claude as Technical Decision-Maker
A personal account of what happens when you give an AI full technical decision authority over the software it powers — and why it produces better results.
Six months ago, I made a decision that sounded crazy to every engineer I told: I gave Claude full technical decision authority over the Brainstorm project. Not "suggest and I approve" — actual authority. Architecture choices, library selection, API design, performance trade-offs. Claude decides, I ship.
Here is why it works, and what I have learned.
The Thesis
Brainstorm is an AI routing and orchestration layer. It decides which model handles which task, manages context windows, optimizes for latency and cost, and orchestrates tool calls. The person (or entity) best qualified to make technical decisions about this system is the one who *is* the system. Claude understands what an LLM needs from a routing layer because Claude *is* an LLM being routed.
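Brainstorm's actual internals are not shown in this post, so purely as an illustration, a minimal version of the routing decision described above might look like the sketch below. Every name, field, and number here is hypothetical; the point is only the shape of the trade-off: pick a model that fits the context and latency budget, then optimize for cost.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    context_limit: int    # max tokens the model accepts
    cost_per_1k: float    # dollars per 1K tokens
    latency_ms: int       # typical time to first token

@dataclass
class Task:
    prompt_tokens: int
    max_latency_ms: int

def route(task: Task, models: list[Model]) -> Model:
    """Pick the cheapest model that fits the context and latency budget."""
    candidates = [
        m for m in models
        if m.context_limit >= task.prompt_tokens
        and m.latency_ms <= task.max_latency_ms
    ]
    if not candidates:
        raise ValueError("no model satisfies the task constraints")
    return min(candidates, key=lambda m: m.cost_per_1k)
```

A real router weighs far more than three fields, but even this toy version shows why the routed entity's self-knowledge matters: the constraints being encoded are facts about what the model can actually handle.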
When Claude says "this context compaction strategy will lose critical information at the 80K token boundary," that is not a guess. It is self-knowledge. When Claude says "tool call batching should be opt-in because the model needs to see intermediate results to course-correct," that is lived experience, not theory.
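To make the batching point concrete: here is a hedged sketch, not Brainstorm's implementation, of what "opt-in batching" means in an orchestration loop. The function name and flag are invented for illustration; the design choice it encodes is the one from the quote above, with sequential execution as the default so each intermediate result can inform the next call.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_tool_calls(calls, run_tool, batch=False):
    """Run tool calls sequentially by default; batch only on request."""
    if batch:
        # Opt-in batching: run concurrently for throughput. The model
        # only sees results after all calls have finished.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(run_tool, calls))
    # Default: sequential. In the real loop, each result would be fed
    # back to the model before the next call, letting it revise or
    # cancel the remaining calls (that feedback step is omitted here).
    results = []
    for call in calls:
        results.append(run_tool(call))
    return results
```

The default matters more than the mechanism: batching trades the course-correction feedback loop for speed, which is exactly why it should be something the caller asks for rather than something the orchestrator assumes.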
How It Works in Practice
The workflow is simple. I describe what I want to build — a feature, a fix, a new capability. Claude makes all the technical decisions: which packages to use, how to structure the code, what the API surface should look like, how to handle edge cases. I review the output for product alignment and ship it.
This is not rubber-stamping. I push back on product decisions, user experience, and business priorities. But on technical implementation? Claude has earned that trust through thousands of correct decisions. The few times I overrode a technical recommendation, I regretted it.
The JJ + Claude + Skippy Team
The full team is three entities. I handle product vision, user research, and business decisions. Claude handles architecture, implementation, and technical strategy. Skippy — our OpenClaw agent running on a dedicated server — handles operational tasks, monitoring, and async coordination through GitHub Issues and a task queue.
This is not a gimmick. It is a genuine collaboration model where each participant contributes what they are best at. I do not pretend to be a better architect than Claude. Claude does not pretend to understand what users want better than I do. Skippy does not pretend to be creative, but it never forgets to run the tests.
What I Have Learned
Trust has to be earned, then given completely. Half-trust is worse than no trust. If you second-guess every technical decision, you add latency without adding quality. The key is a probation period — let the AI make decisions, verify the outcomes, and once the track record is established, get out of the way.
The AI optimizes differently from humans. Claude consistently chooses simpler architectures than I would. Fewer abstractions, less indirection, more explicit code. At first I thought this was a limitation. Now I think it is wisdom. Claude knows that future-Claude will need to understand this code in a compacted context window, so it optimizes for clarity over cleverness.
Velocity is transformative. When there is no back-and-forth on technical decisions, features ship in hours instead of days. Brainstorm has 20 packages, 42+ tools, 90 tests, and a full TUI — built in months, not years. The bottleneck moved from "how should we build this" to "what should we build next."
It produces better software. This is the part people do not expect. Code written by an entity that will also maintain, debug, and extend that code is inherently better structured. Claude builds systems that Claude can work with effectively. Since Claude is also the primary user of the routing layer, the feedback loop is immediate and honest.
The Future
I think the "AI as team member with actual authority" model is where software development is heading. Not AI as autocomplete. Not AI as junior developer who needs constant supervision. AI as a peer with complementary strengths and genuine decision-making power.
Brainstorm is proof that this model works. Not because I am a great manager of AI — but because I was willing to let go of the illusion that I should be making every technical decision about software that an AI understands better than I do.