
gstack Demystified (Part 1): What's Actually Inside Y Combinator CEO Garry Tan's Claude Code Skill Pack?


You’ve Definitely Seen This Thing Everywhere

If you follow the AI coding space, you’ve been bombarded with gstack recently.

44k+ stars on GitHub, tech influencers falling over each other to recommend it, comment sections full of “this is insane,” “game-changer,” “one person = entire team.”

But here’s what I noticed: tons of people praising it, almost nobody explaining what it actually is.

Most recommendation posts follow the same formula: introduce Garry Tan’s credentials, paste the list of 28 skill names, and close with “go install it now.” As for what each skill actually contains, why it’s designed that way, and what happens when you use it — crickets.

So I decided to dig through the source code myself and write a series breaking down every skill in gstack.

This is Part 1: what gstack actually is.

Who Is Garry Tan, and Why Did He Build This?

Garry Tan is the current CEO of Y Combinator — Silicon Valley’s most famous startup accelerator, which backed Airbnb, Stripe, Dropbox, and many more.

He’s an engineer by background, having worked as both a designer and programmer before transitioning into investing. By his own account, he used gstack + Claude Code to write over 600,000 lines of production code in 60 days — while running YC full-time.

Whether you believe that number or not, it tells you one thing: he’s a heavy Claude Code user with a well-developed methodology. gstack is that methodology, packaged up.

The Big Reveal: What gstack Actually Is

It’s simpler than you think — gstack is a collection of carefully crafted prompt templates.

No magic, no secret sauce, no special AI calls. When you type /plan-ceo-review in Claude Code, the system loads a pre-written prompt that tells Claude what role to play, what process to follow, and what format to output.

Think of it as an advanced prompt library, except:

  1. Deep Claude Code integration — not just text templates, but leveraging Claude Code’s skill system to call tools like file reading, code search, and terminal commands
  2. Flow control — not dumping everything at the AI in one shot, but executing step by step, pausing at each step for your confirmation
  3. Inter-skill dependencies — for example, plan-ceo-review can invoke office-hours mid-session
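The "flow control" point is worth pausing on, because it's the part that feels least like a plain prompt library. In gstack the pausing is implemented entirely through prompt instructions — the model is told to stop and ask after each step — but the loop it asks Claude to follow has roughly this shape. This is an illustrative Python sketch, not actual gstack code:

```python
# Hypothetical sketch of the step-by-step confirmation loop that
# gstack's prompts instruct the model to follow. Illustrative only --
# the real "loop" lives in prompt text, not in code.

def run_skill(steps, confirm):
    """Run review steps one at a time, pausing after each for confirmation.

    steps:   callables that each do one unit of work and return a finding
    confirm: callback that surfaces the finding and returns True to continue
    """
    findings = []
    for step in steps:
        finding = step()
        findings.append(finding)
        if not confirm(finding):  # the user can stop or redirect here
            break
    return findings
```

In Claude Code itself, the `confirm` step corresponds to the model pausing to ask you a question before moving on, rather than dumping every suggestion at once.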

But at the end of the day, the core is prompt engineering. Structured instructions telling the AI how to think.

What’s in the 28 Skills?

gstack breaks the entire software development lifecycle into 28 skills, simulating a full team’s division of labor:

Planning Phase (Figure Out What to Build)

| Skill | Simulated Role | What It Does |
| --- | --- | --- |
| /office-hours | Product Advisor | 6 key questions to clarify what you’re actually building |
| /plan-ceo-review | CEO | Strategic plan review, challenges assumptions |
| /plan-eng-review | VP of Engineering | Reviews architecture and data flow |
| /plan-design-review | Design Director | Audits the plan from a design perspective |

Development Phase (Build the Thing)

| Skill | Simulated Role | What It Does |
| --- | --- | --- |
| /design-consultation | Designer | Generates a complete design system |
| /review | Senior Engineer | Code review, finds bugs and auto-fixes |
| /investigate | Debug Specialist | Systematic root cause analysis |
| /design-review | Design QA | Design audit + atomic commit fixes |
| /qa | QA Lead | Browser testing + regression tests |
| /qa-only | Tester | Reports bugs only, doesn’t touch code |

Release Phase (Ship the Thing)

| Skill | Simulated Role | What It Does |
| --- | --- | --- |
| /ship | Release Manager | Test sync, coverage audit, PR creation |
| /land-and-deploy | DevOps | Merge, wait for CI, verify production |
| /canary | On-Call Monitor | Post-deploy error and performance monitoring |
| /benchmark | Performance Engineer | Core Web Vitals benchmarking |
| /document-release | Docs Engineer | Auto-updates documentation |

Tools & Security

| Skill | What It Does |
| --- | --- |
| /browse | Built-in headless browser for web interaction |
| /careful | Dangerous command interception |
| /freeze | Locks file editing scope |
| /cso | OWASP Top 10 + STRIDE threat modeling |
| /codex | Cross-model code review via OpenAI |

Plus a few utility ones I won’t list individually.

Cracking Open a Skill

Saying “it’s just prompts” might not click without a concrete example. Let’s open the source code for plan-ceo-review.

Each skill’s core is a SKILL.md.tmpl file (template), which gets compiled into the final SKILL.md.
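I haven't traced the compile step itself, but conceptually it's plain placeholder substitution. Something like this hypothetical sketch — assuming `{{name}}`-style placeholders, which is my guess, not gstack's documented syntax:

```python
import re

# Hypothetical sketch of compiling SKILL.md.tmpl into SKILL.md.
# Assumes {{name}}-style placeholders; gstack's real template
# syntax and compile tooling may differ.

def compile_template(template: str, values: dict) -> str:
    """Replace every {{key}} placeholder with its value.

    Unknown keys are left untouched rather than erased, so a missing
    value is visible in the compiled output instead of silently vanishing.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )
```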

First, the metadata header:

```yaml
---
name: plan-ceo-review
description: |
  CEO/founder-mode plan review. Four modes:
  SCOPE EXPANSION, SELECTIVE EXPANSION, HOLD SCOPE, SCOPE REDUCTION.
allowed-tools:
  - Read
  - Grep
  - Glob
  - Bash
  - AskUserQuestion
  - WebSearch
---
```

Notice allowed-tools — the list grants reading, searching, and questioning, but no Edit or Write tools, so the AI can inspect code without modifying it. Smart design for a review skill: reviewing means reviewing, not editing.

Then the role definition (simplified):

“You are not here to rubber-stamp this plan. You are here to make it extraordinary.”

Next, four review modes:

  • SCOPE EXPANSION: Dream mode, think big — “What’s 10x more ambitious for only 2x the effort?”
  • SELECTIVE EXPANSION: Hold the baseline, but surface expansion opportunities for cherry-picking
  • HOLD SCOPE: Scope stays fixed, make the plan bulletproof
  • SCOPE REDUCTION: Surgical cuts, strip to minimum viable version

Then the review steps. Step 0 alone includes:

  1. Premise Challenge — First ask “is this even the right problem?” before reviewing the plan
  2. Existing Code Leverage — Has existing code already solved parts of this?
  3. Dream State Mapping — Map out “current state → this plan → 12-month ideal state”
  4. Implementation Alternatives — Mandatory 2-3 alternative approaches for comparison
  5. Mode Selection — Let the user choose their review mode

After that, 11 review dimensions: architecture, error handling, security, data flow, code quality, testing, performance, observability, deployment, long-term maintenance, and UI/UX.

The most impressive part? It embeds 18 “CEO cognitive patterns,” referencing decision frameworks from Bezos, Munger, Jobs, and others:

  • Bezos’s one-way/two-way door theory — Most decisions are reversible, so move fast
  • Munger’s inversion reflex — Don’t just ask “how do we win?” Also ask “what would make us fail?”
  • Jobs’s focus as subtraction — 350 products cut down to 10

The entire file is roughly 3,000 lines. All for a single /plan-ceo-review command.

Now you see why I say it’s fundamentally prompt engineering. It encodes a senior CEO’s entire plan review thought process into structured AI instructions.

My Take

gstack is valuable, but it’s not magic.

Where it delivers value:

  1. Knowledge codification — Turns tacit knowledge like “what dimensions should a good plan review cover” into explicit checklists. You might remember to check security, but you’ll forget to trace “data flow through four paths (normal/nil/empty/error).”
  2. Process discipline — Forces the AI to follow steps without skipping or cutting corners. Each issue gets its own pause-and-ask, rather than dumping all suggestions at once.
  3. Role-playing — Well-crafted prompts that put the AI into a specific “role” genuinely produce better output than casual one-liner questions.

But know what it isn’t:

  • Not an AI capability upgrade — Claude is still Claude. gstack just tells it how to think.
  • Not for every project — The 28 skills are designed for software projects with real complexity. If you’re writing scripts or maintaining a blog, it’s overkill.
  • Not plug-and-play — You need to understand each skill’s design intent to use the right skill at the right time.

The key point: if you already know how to communicate effectively with AI, you can achieve similar results in plain natural language. gstack’s core value is solving “I don’t know what questions to ask the AI.” If you already know, it’s just a convenient shortcut.

What’s Next in This Series

Upcoming articles will break down each gstack skill individually — what’s in the prompt, what’s the design thinking, and what’s worth learning.

Next up: /office-hours — the skill gstack recommends you start with, and the entry point to the entire workflow.

If you find this series valuable, stay tuned for more.