Pallavi Geethika

Learning to Build with AI (Pt. I)

Early learning notes from building products with Claude as my technical partner - and how this collaboration actually works

When I tell people I've built a few working applications without writing code myself, the first question is usually: "So, you just asked AI to build it for you?"

Not quite.

Here's what actually happens: I work with Claude (Anthropic's flagship AI) as my engineering team. I'm the product owner, so I define what we're building and why, and I make the key decisions - what problem we're solving, who it's for, what's in scope, and what's not. Claude handles the technical implementation - proposing the architecture, writing code, debugging, and explaining why something will or won't work.

But the division isn't "I think, AI executes". It is more nuanced than that. Over the past few months of building four projects - a voice-thought capture app for my dad, a private social network for small groups, a gift exchange tool for my friends, and a newsletter feed for myself - I've learned a few things about what makes this collaboration work. Here is what I have figured out so far.


You Must Bring Structure & Clarity

AI is powerful, but it is not a mind reader. The quality of what you get depends entirely on how well you frame the problem.

This is not just about the big-picture problem ("Build me an app that does X") but also about every 'mini problem' that arises along the way. When something breaks, you need to isolate what is failing. When a feature needs refinement, you need to articulate what is wrong and what success looks like. When an idea arrives, you need to step back and think about its role in the broader context of the problem you are solving.

What this means in practice: When building a gift exchange tool, I had a vague idea: "Should people be able to ask questions about gifts?"

The ambiguous part: what does "ask questions" actually mean? I needed to structure this before Claude could help implement it.

Here's how I broke it down:

  • Who can ask questions - just the Santa, or the Giftee too?
  • Does the asker stay anonymous, and in which direction?
  • Can the other person reply without knowing who's asking?
  • Can either side take back a message after sending it?

Only after mapping this out could I articulate to Claude: "Build bidirectional anonymous messaging where Santa can ask Giftee questions, Giftee can reply without knowing who's asking, and either party can unsend messages."

Without that structure, Claude would've had to guess at edge cases. With it, we built exactly what was needed.
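
To make this concrete, here is a rough sketch of the kind of data shape that spec implies. It's a hypothetical illustration - the type and field names are mine, not the gift exchange tool's actual code:

  // Hypothetical data model for the anonymous messaging feature (illustrative names only)
  type Role = "santa" | "giftee";

  interface Message {
    id: string;
    exchangeId: string;   // which Santa/Giftee pairing the thread belongs to
    senderRole: Role;     // only the role travels with the message, never a name
    body: string;
    sentAt: Date;
    unsent: boolean;      // "unsend" hides the message rather than deleting the record
  }

  // What either party sees: the sender's role, never the sender's identity.
  function visibleMessages(messages: Message[]): { from: Role; body: string }[] {
    return messages
      .filter((m) => !m.unsent)
      .map((m) => ({ from: m.senderRole, body: m.body }));
  }

The point is less the code itself and more that a structured spec - bidirectional, anonymous, unsendable - maps cleanly onto concrete decisions about what gets stored and what each party is allowed to see.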

The pattern I've learned: value lies in breaking down problems systematically before bringing them to Claude. What's the actual issue? What have I tried? What constraints matter? What tradeoffs should I consider? The more structured and clear my thinking, the better the output.

You can, of course, use AI to rubber-duck - talk through a problem out loud to clarify your own thinking (in fact, I highly recommend it!). But you have to bring the structure. AI won't create it for you.


Judgment Cannot Be Delegated

There are decisions that AI simply cannot make for you.

Should you build this feature or defer it? Is this complexity worth it? When is something "done enough" to ship? What tradeoffs are you willing to accept?

These require judgment - sense-checking based on real-world experience, observations about how people actually behave, understanding of what really matters to your specific user. In my day job as a strategy consultant, I've built certain instincts about when to simplify, when to dig deeper, when good-enough beats perfect. AI can certainly inform those decisions, but it cannot make them.

What this means in practice: When building Clarity (the voice-thought capture app for my dad), Claude suggested three approaches for handling extraction errors:

  1. Let users correct via chat commands ("that's a note, not a task")
  2. Provide a modal editing interface (tap extracted item, edit directly)
  3. Build both and let users choose

All three are technically viable. But only I could decide the path forward, based on who the user is (my dad), how he would actually use the app, and what the moment called for.

I chose option 2. Not because it is objectively the "best", but because it fit the user and the moment. That doesn't mean I can't revisit the choice later, but it gave me the foundation to ship and learn from real usage.
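
For a sense of why option 2 fit, here is a hypothetical sketch of what modal editing amounts to (the names are mine, not Clarity's actual code): correcting an item becomes a single, predictable update, whereas chat-command corrections would first require interpreting free-form language.

  // Hypothetical shape of an item Clarity extracts from a voice note (illustrative names only)
  type ItemKind = "task" | "note";

  interface ExtractedItem {
    id: string;
    kind: ItemKind;   // the field the extraction sometimes gets wrong
    text: string;
  }

  // Option 2 (modal editing): tap the item, pick the correct kind, done.
  // A plain state update - nothing free-form to interpret.
  function correctKind(item: ExtractedItem, newKind: ItemKind): ExtractedItem {
    return { ...item, kind: newKind };
  }

That simplicity is what made it fit this particular user and moment.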

The pattern I've learned: across the four projects I have tackled so far, there have been dozens of such moments. Claude might propose three different approaches. Your job is to pick one based on factors that AI cannot fully weigh - user needs, time constraints, budget, your own learning goals.

The value you bring compounds here. The more projects you do, the sharper your judgment gets about what to build, how to scope, when to ship.


What This Actually Looks Like

So, what does a project look like in practice? Each one follows the same rhythm:

  1. Define & Frame - I structure the problem, the user, and the key constraints
  2. Plan Approach - we discuss architecture, flow, and what to build first
  3. Build & Iterate - Claude writes code, I test and give feedback
  4. Test & Refine - I find bugs, we fix them, I decide when it's "done enough"
  5. Ship & Document - Claude helps me deploy, I write up what I learned

The cycle isn't linear - I'll be in "Build & Iterate" and realize I need to go back to "Define & Frame" because I didn't scope something clearly enough. That's normal. But here's the relatively consistent division of labor in each phase:

Define & Frame
  What I do:
    • Identify the problem and who it's for
    • Define what success looks like
    • Set scope boundaries (what's in, what's out)
    • Articulate key constraints (user capabilities, budget, timeline)
  What Claude does:
    • Ask clarifying questions to surface ambiguities
    • Flag technical constraints I might not see
    • Suggest alternative framings when the problem seems unclear

Plan Approach
  What I do:
    • Decide on architecture based on tradeoffs (cost, complexity, learning goals)
    • Prioritize what to build first
    • Make judgment calls on "good enough" vs. "ideal"
    • Approve or redirect technical proposals
  What Claude does:
    • Propose technical architecture options
    • Explain tradeoffs between approaches
    • Recommend tools and frameworks
    • Map out implementation sequence

Build & Iterate
  What I do:
    • Test functionality in real-world scenarios
    • Identify what feels broken or unclear
    • Articulate what's wrong and what "better" looks like
    • Decide when refinement is worth the effort
  What Claude does:
    • Write and debug code
    • Implement feedback systematically
    • Explain why something works or doesn't
    • Propose solutions when issues arise

Test & Refine
  What I do:
    • Stress-test edge cases based on how users actually behave
    • Evaluate if complexity is justified by value
    • Determine when "done enough" becomes "ship it"
    • Make final scope cuts if needed
  What Claude does:
    • Fix bugs as they're identified
    • Optimize performance where needed
    • Validate technical implementation
    • Suggest additional testing scenarios

Ship & Document
  What I do:
    • Decide what's worth documenting and for whom
    • Synthesize lessons learned
    • Determine what should be shared publicly vs. kept private
  What Claude does:
    • Guide deployment process
    • Help structure documentation
    • Provide technical context for writeups

What I'm Still Learning

I'm only four projects in, with several more ideas in the pipeline. It's exciting. But naturally, there is a lot I am still figuring out.

When to simplify versus when to build the full thing. How to evaluate Claude's suggestions when I don't have deep technical knowledge. How to balance learning (trying the complex approach) with shipping for actual users (choosing the simple one). How to know when something is "good enough". How to expand my toolkit.

This is just Part 1 because I expect my process to keep evolving. The collaboration improves with practice - what I know now is different from what I knew two months ago. I'm documenting what is working so far, and what remains to be learned.

If you're exploring this too - whether you're building products, automating workflows, or just curious what's possible - I'd love to hear what you're learning. What's working for you? What's hard? What have you figured out that I haven't? Let's trade notes.
