When I tell people I've built a few working applications without writing code myself, the first question is usually: "So, you just asked AI to build it for you?"
Not quite.
Here's what actually happens: I work with Claude (Anthropic's flagship AI) as my engineering team. I'm the product owner. I define what we are building and why. I make the key decisions - what problem we're solving, why we're solving it, who it's for, what's in scope, what's not. Claude handles the technical implementation - proposing the architecture, writing code, debugging, explaining why something will or will not work.
But the division isn't "I think, AI executes". It is more nuanced than that. Over the past few months of building four projects - a voice-thought capture app for my dad, a private social network for small groups, a gift exchange tool for my friends, and a newsletter feed for myself - I've learned a few things about what makes this collaboration work. Here is what I have figured out so far.
You Must Bring Structure & Clarity
AI is powerful, but it is not a mind reader. The quality of what you get depends entirely on how well you frame the problem.
This is not just about the big picture problem ("Build me an app that does X"), but also every 'mini problem' that arises along the way. When something breaks, you need to isolate what is failing. When a feature needs refinement, you need to articulate what is wrong and what success looks like. When an idea arrives, you need to step back and think about its role in the broader context of the problem you are solving.
What this means in practice: When building a gift exchange tool, I had a vague idea: "Should people be able to ask questions about gifts?"
The ambiguous part: what does "ask questions" actually mean? I needed to structure this before Claude could help implement it.
Here's how I broke it down:
- Who's asking whom? (Gift-giver to recipient, or both directions?)
- Does the asker's identity need to stay hidden? (Yes - that's the whole point of Secret Santa)
- What if someone accidentally reveals themselves? (Need an "unsend" option)
- How do replies work without breaking anonymity? (Recipient can reply, but doesn't see who's asking)
Only after mapping this out could I articulate to Claude: "Build bidirectional anonymous messaging where Santa can ask Giftee questions, Giftee can reply without knowing who's asking, and either party can unsend messages."
Without that structure, Claude would've had to guess at edge cases. With it, we built exactly what was needed.
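To make the requirement concrete, here is a minimal sketch of those messaging rules in code. This is purely illustrative - the names (`AnonymousThread`, `Message`, and so on) are my own, not taken from the actual gift exchange tool - but it captures the behavior the structured breakdown pinned down: two-way messages, hidden identities, and unsend.

```python
from dataclasses import dataclass
import uuid

@dataclass
class Message:
    id: str
    sender_id: str          # stored only for unsend checks, never shown to the other party
    body: str
    unsent: bool = False

class AnonymousThread:
    """One Santa-to-Giftee conversation where the asker's identity stays hidden."""

    def __init__(self, santa_id: str, giftee_id: str):
        self.santa_id = santa_id
        self.giftee_id = giftee_id
        self.messages: list[Message] = []

    def send(self, sender_id: str, body: str) -> str:
        # Bidirectional: either the Santa or the Giftee can send.
        if sender_id not in (self.santa_id, self.giftee_id):
            raise ValueError("not a participant in this thread")
        msg = Message(id=str(uuid.uuid4()), sender_id=sender_id, body=body)
        self.messages.append(msg)
        return msg.id

    def unsend(self, sender_id: str, message_id: str) -> None:
        # Either party can retract their own message (e.g. an accidental reveal).
        for msg in self.messages:
            if msg.id == message_id and msg.sender_id == sender_id:
                msg.unsent = True
                return
        raise ValueError("no such message by this sender")

    def view(self, viewer_id: str) -> list[dict]:
        # Anonymized view: messages are labelled "you" or "them",
        # never with the real sender id; unsent messages disappear.
        return [
            {"from": "you" if m.sender_id == viewer_id else "them", "body": m.body}
            for m in self.messages
            if not m.unsent
        ]
```

Notice how each line of the breakdown maps to a rule in the code: the Giftee's `view` never contains the Santa's id, and `unsend` handles the accidental-reveal case. That mapping only exists because the edge cases were articulated first.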
The pattern I've learned: value lies in breaking down problems systematically before bringing them to Claude. What's the actual issue? What have I tried? What constraints matter? What tradeoffs should I consider? The more structured and clear my thinking, the better the output.
You can, of course, use AI to rubber-duck - talk through a problem out loud to clarify your own thinking (in fact, I highly recommend it!). But you have to bring the structure. AI won't create it for you.
Judgment Cannot Be Delegated
There are decisions that AI simply cannot make for you.
Should you build this feature or defer it? Is this complexity worth it? When is something "done enough" to ship? What tradeoffs are you willing to accept?
These require judgment - sense-checking based on real-world experience, observations about how people actually behave, understanding of what really matters to your specific user. In my day job as a strategy consultant, I've built certain instincts about when to simplify, when to dig deeper, when good-enough beats perfect. AI can certainly inform those decisions, but it cannot make them.
What this means in practice: when building Clarity (the voice-thought capture app for my dad), Claude suggested three approaches for handling extraction errors:
- Let users correct via chat commands ("that's a note, not a task")
- Provide a modal editing interface (tap extracted item, edit directly)
- Build both and let users choose
All three are technically viable. But only I could decide the path forward, based on:
- The user's capabilities: my dad may not remember chat commands
- Familiarity: modal editing is familiar (we tap to edit all the time)
- Psychological comfort: modal editing confirms changes immediately and concretely
- Scope discipline: building both is scope creep for an MVP
I chose option 2. Not because it is objectively the "best", but because it fit the user and the moment. This doesn't mean I can't revisit the choice later, but it gives me the foundation needed to ship and learn from real usage.
The pattern I've learned: even with the four projects I have tackled so far, there were dozens of such moments. Claude might propose three different approaches. Your job is to pick one based on factors that AI cannot fully weigh - user needs, time constraints, budget, your own learning goals.
The value you bring compounds here. The more projects you do, the sharper your judgment gets about what to build, how to scope, when to ship.
What This Actually Looks Like
So, what does a project actually look like in practice? In effect, it follows the same rhythm:
- Define & Frame - I structure the problem, the user, and the key constraints
- Plan Approach - we discuss architecture, flow, and what to build first
- Build & Iterate - Claude writes code, I test and give feedback
- Test & Refine - I find bugs, we fix them, I decide when it's "done enough"
- Ship & Document - Claude helps me deploy, I write up what I learned
The cycle isn't linear - I'll be in "Build & Iterate" and realize I need to go back to "Define & Frame" because I didn't scope something clearly enough. That's normal. But here's the relatively consistent division of labor in each phase:
| Phase | What I Do | What Claude Does |
|---|---|---|
| Define & Frame | Structure the problem, the user, and the key constraints | Rubber-duck my thinking as I clarify the framing |
| Plan Approach | Decide what to build first and pick between approaches | Propose the architecture and flow |
| Build & Iterate | Test and give feedback | Write the code |
| Test & Refine | Find bugs, decide when it's "done enough" | Debug and fix the issues |
| Ship & Document | Write up what I learned | Help me deploy and explain how things work |
What I'm Still Learning
I'm only four projects in, with several more ideas in the pipeline. It feels exciting. But naturally, there is a lot I am still figuring out.
When to simplify versus when to build the full thing. How to evaluate Claude's suggestions when I don't have deep technical knowledge. How to balance learning (trying the complex approach) with shipping for actual users (choosing the simple one). How to know when something is "good enough". How to expand my toolkit.
This is just Part 1 because I expect my process to keep evolving. The collaboration improves with practice - what I know now is different from what I knew two months ago. I'm documenting what is working so far, and what remains to be learned.
If you're exploring this too - whether you're building products, automating workflows, or just curious what's possible - I'd love to hear what you're learning. What's working for you? What's hard? What have you figured out that I haven't? Let's trade notes.