I’ve been refining my development workflow over the past few months, and I’ve landed on a combination that feels incredibly productive: Claude Code from the Desktop app paired with Cursor for code review. Let me walk you through how I work.
Anthropic recently added a native “Code” tab directly inside the Claude Desktop app. It is a GUI for the engine that powers the CLI. One notable feature is its support for isolated Git worktrees. This means you can have a brainstorming conversation in one tab while a “Code” session runs in another, making changes in a separate worktree that won’t touch your main working directory until you’re ready to merge.
Why not git worktree? My workflow is simple enough that I just work directly in my main/feature branch and review everything in Cursor before committing. But if you’re juggling multiple experimental features or want that extra safety net, the worktree support is there.
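For readers who do want that safety net, the underlying mechanics are plain `git worktree` commands. The sketch below is illustrative only (it uses a throwaway repository and a made-up branch name, not anything specific to the Desktop app's internals):

```shell
# Demonstrate an isolated worktree: changes in the second checkout
# never touch the main working directory until you merge.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q main
cd main
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "initial"

# Check out a new experimental branch into its own directory:
git worktree add ../experiment -b ai-experiment

# Both checkouts share one repository but have separate working trees.
git worktree list
```

Merging back is a normal `git merge ai-experiment` from the main checkout once you are happy with the results.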
Why not the CLI? I know many developers swear by the Claude Code CLI, but I’ve found the Desktop app suits my workflow better. There’s something about having a dedicated window for my AI conversations that keeps things organized. I can easily reference previous discussions, and the interface feels more natural for the kind of back-and-forth dialogue I have when working through complex problems.
The Workflow
Here’s how my typical development session looks:
1. Brainstorm with Claude Desktop
Before writing any code, I start by talking through the problem with Claude. I describe what I’m trying to accomplish, share relevant context about my project, and bounce ideas back and forth. This conversation helps me clarify my thinking and often surfaces edge cases I hadn’t considered. It’s like having a patient colleague who’s always available to rubber duck with.
I then ask Claude to generate a prompt for Claude Code based on this discussion.
2. Generate Code with Claude Code
I open Claude Code in the Desktop app and paste in the prompt generated in step 1. That prompt already carries context about my project structure, the technologies I’m using, and any constraints I’m working with. Claude generates code, explains its reasoning, and I can ask follow-up questions right there in the conversation.
3. Review in Cursor
Once Claude has generated the code, I switch to Cursor. This is where I put on my reviewer hat. I don’t just blindly accept what the AI produces—I read through it carefully, understand what it’s doing, and verify it aligns with my project’s patterns and standards.
Cursor’s diff view makes this review process smooth. I can see exactly what’s being added or changed, accept individual hunks, or modify the suggestions before committing.
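Outside Cursor, the same hunk-level discipline can be approximated with plain git. A minimal sketch (the file name and edit are made up for illustration):

```shell
# Inspect an unstaged AI-generated change before accepting it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"

printf 'old line\n' > app.py
git add app.py
git -c user.email=a@b -c user.name=a commit -q -m "baseline"

# Pretend this edit came from Claude Code:
printf 'new line\n' > app.py

# See exactly what changed before staging anything.
git diff --stat
```

From here, `git add -p app.py` walks through the change hunk by hunk, letting you accept, skip, or edit each one, which is the same mental checkpoint the diff view provides.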
4. Test, Accept, and Commit
After reviewing and testing, I accept the changes I’m happy with and commit them with meaningful commit messages.
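"Small, atomic commits" in practice just means staging each logical change on its own. A hypothetical example with standard git (the file names and messages are invented):

```shell
# Commit AI-assisted changes in small, reviewable pieces.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"

printf 'def parse(): pass\n' > parser.py
printf 'def test_parse(): pass\n' > test_parser.py

# One logical change per commit, rather than one big "AI changes" commit:
git add parser.py
git -c user.email=a@b -c user.name=a commit -q -m "Add input parser (AI-assisted)"
git add test_parser.py
git -c user.email=a@b -c user.name=a commit -q -m "Add parser tests"

git log --oneline
```

The payoff comes later: `git log` and `git blame` can then tell you exactly which change introduced what.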
Why This Combination Works (for me)
The separation of concerns is what makes this workflow powerful:
- Claude Desktop for Brainstorming — This is where ideas take shape. I describe the problem I’m solving, share context about my project architecture, and have a back-and-forth conversation to explore different approaches. Claude helps me think through edge cases, consider alternative implementations, and refine my requirements before writing any code. By the end of this phase, I have a clear mental model and a well-crafted prompt ready for code generation.
- Claude Code Desktop for Generation — With the refined prompt from brainstorming, I switch to Claude Code which has direct access to my codebase. It understands my project structure, existing patterns, and dependencies. The code it generates is contextually aware—it follows my naming conventions, integrates with existing modules, and respects the architectural decisions already in place. I can iterate here too, asking for adjustments or alternative approaches.
- Cursor for Review — This is my quality gate. I examine every diff carefully, understanding not just what changed but why. Cursor’s interface makes it easy to accept good changes, reject problematic ones, and make surgical edits where needed. This deliberate review process ensures I never ship code I don’t understand. It’s also a learning opportunity—I often discover new patterns or techniques by studying what the AI produced.
This deliberate separation between generation and review forces me to slow down and actually examine what the AI produces. It’s easy to fall into the trap of accepting AI-generated code without understanding it. By switching tools for the review phase, I create a mental checkpoint that keeps me engaged with the code.
Tips for This Workflow
- Be specific with Claude — The better your prompts, the less cleanup you’ll need in Cursor
- Review as much as possible — Don’t let the convenience of AI make you lazy about code review
- Commit incrementally — Small, atomic commits make it easier to track what the AI contributed
- Keep learning — Use the review phase as an opportunity to understand patterns you might not have written yourself
Final Thoughts
AI coding assistants are powerful tools, but they work best when you stay in the driver’s seat. My Claude Code Desktop + Cursor workflow keeps me productive while ensuring I remain the decision-maker for every line of code that ships.
If you’ve been looking for a way to integrate AI into your development process without losing control, give this approach a try. The key is finding the right balance between leveraging AI’s capabilities and maintaining your own understanding of the codebase.
Thanks for reading. Let me know whether you agree/disagree or have a different take.