I Researched 8 Product Ideas in One Week Using Claude Code
The Most Expensive Mistake an Indie Developer Can Make
The biggest trap for indie developers isn’t lacking technical skills or design chops — it’s building the wrong thing.
Sound familiar? You get a flash of inspiration: “People definitely need this tool!” You rush to write the code, launch it, and then discover — nobody uses it.
What went wrong? It’s not that your code was bad. It’s that before you started building, you never validated whether the idea was worth pursuing.
Traditional market research is painfully slow. You have to search for competitors, dig through App Store reviews, find industry reports, estimate market size… A thorough investigation of just one direction can easily eat up a week or two. If you have three to five ideas to compare, that’s a whole month gone — without writing a single line of code.
For an indie developer, time is the scarcest resource. I needed a way to quickly determine whether an idea was worth pursuing.
Why Claude Code
I’ve used ChatGPT and Claude’s web interface for research before. The experience is quite different.
The web-based chat model is call-and-response: you ask a question, it answers, you follow up, it answers again. The whole process is fragmented, and at the end, you have to piece together the bits scattered across the conversation into a coherent picture.
Claude Code is different. It’s more like assigning a task to a researcher: “Investigate this direction for me and deliver a complete report.” It goes off to search, analyze, and organize on its own, then hands you a structured document.
The key differences:
- It outputs files, not chat logs. You can save, compare, and iterate on them directly.
- It can handle complex tasks in one shot. No need to guide it step by step — give it one prompt and it can complete the entire research workflow.
- It works directly in your project directory. Generated reports go straight to wherever you specify — no copy-pasting required.
In short, the web version is an “assistant.” Claude Code is a “researcher.”
Build a Research Project: Your Idea Repository
The first thing I did was create a dedicated project directory called research.
Its sole purpose: store all product research results. The structure is simple:
```
research/
├── README.md                # Summary comparison table for all directions
├── voicedoz/
│   └── research.md          # VoiceDoz deep dive
├── clipboard-app/
│   └── research.md          # Clipboard tool research
├── ai-form-builder/
│   └── research.md          # AI form builder research
├── disk-cleaner/
│   └── research.md          # Disk cleaner research
├── future-directions/
│   └── research.md          # Future direction exploration
└── ...
```
One folder per direction, one standardized research.md per folder.
The real value of this project is accumulation. You don’t need to wait until you’re “ready to do serious research.” Whenever an idea pops into your head — even a random thought in the shower — open your terminal, give Claude Code a prompt, let it run a preliminary analysis, and save it.
After three months, your research directory might have a dozen or twenty directions. Some you’ll dismiss immediately after reading the report. Others will keep fermenting in the back of your mind. When you’re ready to seriously pick your next project, all the material is right there — ready to be pulled up, compared, and ranked.
This isn’t a one-time exercise. It’s a continuously running idea system.
How to Assign Research Tasks to Claude Code
It all comes down to one prompt. But the quality of the prompt directly determines the quality of the output.
What I’ve found works best: don’t ask vague questions — give it a specific analytical framework. For example, don’t say “research the market for voice input tools.” Instead:
```
Research the “voice-to-text input tool” space and analyze it from the following dimensions:

- Market size and growth trends
- Existing competitors, categorized by tier (VC-backed, indie developer, open source)
- Pricing strategy and revenue scale for each major competitor (if public data is available)
- Feature comparison matrix: where can my product differentiate?
- Risk assessment: platform risk, competitive risk, technical feasibility
- Revenue estimates: conservative / growth / optimistic scenarios

Save the results as a research.md file in the research/voicedoz/ directory.
```
One prompt, and Claude Code runs through all the steps on its own, producing a complete report in the directory you specified.
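If you run the same framework against many ideas, it helps to template the prompt rather than retype it. Here is a sketch of that idea; the `build_prompt` helper and the dimension list are my own illustration, and the invocation in the comment assumes Claude Code's non-interactive print mode (`claude -p`).

```python
# A fixed analytical framework, so every direction gets the same treatment.
DIMENSIONS = [
    "Market size and growth trends",
    "Existing competitors, categorized by tier (VC-backed, indie developer, open source)",
    "Pricing strategy and revenue scale for each major competitor (if public data is available)",
    "Feature comparison matrix: where can my product differentiate?",
    "Risk assessment: platform risk, competitive risk, technical feasibility",
    "Revenue estimates: conservative / growth / optimistic scenarios",
]

def build_prompt(topic: str, slug: str, dimensions=DIMENSIONS) -> str:
    """Assemble one research prompt with the fixed framework baked in."""
    bullets = "\n".join(f"- {d}" for d in dimensions)
    return (
        f'Research the "{topic}" space and analyze it from the following dimensions:\n'
        f"{bullets}\n"
        f"Save the results as a research.md file in the research/{slug}/ directory."
    )

if __name__ == "__main__":
    # One way to hand this off, assuming Claude Code's headless print mode:
    #   claude -p "$(python make_prompt.py)"
    print(build_prompt("voice-to-text input tool", "voicedoz"))
```

The payoff is consistency: because every report answers the same six questions, the later side-by-side comparison stays apples-to-apples.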
What a Research Report Looks Like
Taking my VoiceDoz (voice input tool) research as an example, the final report roughly covers these sections:
Market Size: The speech recognition market is approximately $225 billion in 2025, growing at 22% annually. Consumer-grade tools are the fastest-growing segment.
Competitor Tiers:
| Tier | Representative Products | Characteristics |
|---|---|---|
| VC Giants | Wispr Flow ($1B valuation) | Heavy funding, but limited features |
| Indie Developers | Superwhisper, Aiko | One-person teams, $5K–$15K/month revenue |
| Open Source | Whisper.cpp | Free, but requires DIY setup |
Feature Comparison Matrix:
| Dimension | VoiceDoz | Wispr Flow | Superwhisper |
|---|---|---|---|
| Cross-platform | ✅ All platforms | ❌ Mac only | ❌ Mac only |
| Voice translation | ✅ | ❌ | ❌ |
| Local models | ✅ | ❌ | ✅ |
| Price | $29.99 one-time | $180/year | $80/year |
Risk Assessment:
- Platform risk: Low. Apple/Google’s native voice input doesn’t support translation or text refinement — no near-term threat.
- Competitive risk: Medium. Wispr Flow has raised $700M, but currently only supports Mac.
Revenue Estimates:
- Conservative: $1.5K–$3K/month
- Growth: $5K–$10K/month
- Optimistic: $15K–$20K/month
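The three scenarios above come straight from the report, but it's worth being able to reproduce that kind of number yourself. A back-of-envelope sketch, where the visitor and conversion figures are illustrative assumptions of mine (only the $29.99 price comes from the report):

```python
def monthly_revenue(visitors: int, conversion: float, price: float) -> float:
    """Back-of-envelope monthly revenue for a one-time-purchase product."""
    return visitors * conversion * price

# Illustrative assumptions: 5,000 site visits/month at a 1% purchase rate
# lands near the conservative scenario; 50,000 visits at 1.2% lands in
# the optimistic band.
conservative = monthly_revenue(5_000, 0.01, 29.99)    # ≈ $1,500/month
optimistic = monthly_revenue(50_000, 0.012, 29.99)    # ≈ $18,000/month
```

Inverting the model is just as useful: it tells you how much traffic a scenario implies, which is a quick smell test for whether an AI-generated estimate is even plausible.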
Every report follows the same structure. Once you have seven or eight of these, side-by-side comparison becomes trivially easy.
Side-by-Side Comparison: Let AI Rank Your Options
Once your research directory has enough entries, the most valuable step is cross-comparison.
I had Claude Code read all the research reports, score them against a uniform set of criteria, and output a summary table saved in research/README.md:
| Direction | Differentiation | Market Size | Indie Feasibility | Recommendation |
|---|---|---|---|---|
| VoiceDoz | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ✅ Top priority |
| Clipboard Tool | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ✅ Do second |
| AI Forms | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ | 🔶 Consider |
| Disk Cleaner | ⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐ | ❌ Not recommended |
One table, and the big picture is clear. Which directions have strong differentiation, which markets are too crowded, which ones are impossible for a solo developer — all laid out at a glance.
Real Examples: One Killed, One Chosen
Methodology is abstract — let me share two real research outcomes.
Killed: Disk Cleaner
A disk cleaning tool — sounds like a no-brainer, right? Everyone’s computer runs out of space.
But the research report killed the idea instantly:
- Extreme platform risk. iOS sandboxing severely limits what you can actually clean. On Mac, Apple could build cleanup features into the OS at any time (in fact, every new macOS version improves storage management) — and the moment they do, your product goes to zero.
- Entrenched competition. CleanMyMac has 29 million downloads. The open-source Mole has a large user base on GitHub. There’s very little room for a newcomer.
- Low ceiling. Willingness to pay for this kind of tool is inherently low — most users feel “the built-in tools are good enough.”
Verdict: don’t build it. Too much risk, too little reward.
Without the research, my gut feeling might have been “everyone needs disk space — there must be a market!” Then I’d have spent three months building a product destined to be ignored.
Chosen: VoiceDoz
The VoiceDoz research told a completely different story:
- Clear differentiation. Existing competitors either only support Mac, only work in the cloud, or don’t support translation. VoiceDoz is the only product that combines cross-platform support + local models + voice translation.
- Obvious price advantage. Competitors charge $100–$180/year. VoiceDoz is a one-time $29.99 purchase — highly attractive to price-conscious users.
- Market already validated. Wispr Flow reached a $1B valuation and $10M in annual revenue, proving the market exists and is growing.
- One person can build it. The core technology is based on the Whisper model, with mature tech stacks on both front-end and back-end.
Verdict: build it first. Clear differentiation, validated market, technically feasible.
This is the value of research — it doesn’t tell you “what to build.” It helps you eliminate what you shouldn’t build, so you can focus your limited time on the direction with the best odds.
The Limits of Claude Code for Research
I’ve talked up the benefits — let me also be honest about the limitations.
What it’s good at:
- Structured analysis. Give it a framework and it delivers consistently, without missing anything.
- Competitor mapping. It quickly surfaces the major players and their differences.
- Multi-direction comparison. Holding a dozen directions side by side and digesting that much information at once is something humans struggle to do efficiently.
What it’s not good at:
- The latest data. It has a knowledge cutoff, so recent funding news or revenue changes might be inaccurate. You’ll need to verify key numbers yourself on platforms like Latka or AppFigures.
- User insights. Questions like “does this actually hurt enough for users to pay?” — AI can only reason logically. For real answers, you need to read actual user feedback in communities.
- Gut calls. Some directions look mediocre on paper, but your instincts say there’s an opportunity. AI can’t make that call for you.
The right approach: AI handles information gathering and structured analysis; you make the final judgment. AI levels the information playing field — it doesn’t make decisions for you.
Final Thoughts
An indie developer’s scarcest resource is time, and choosing what to build is the biggest time lever — pick right, and everything that follows actually matters.
My advice:
- Create a research project right now. Don’t wait until you have a clear direction. Whenever an idea strikes, have Claude Code run a quick analysis and save it.
- Give AI a specific analytical framework. Don’t ask “what do you think about this idea?” — tell it exactly which dimensions to analyze.
- Do a cross-comparison once you’ve accumulated enough. It’s hard to judge a single direction in isolation — only by comparing them side by side can you see which ones stand out.
Instead of spending three months building a product nobody wants, spend one week doing research first. That week might be the best-spent time of your entire year.