To set up a Claude Code project well, start with a small repo that supports one real workflow, one context layer, one reusable instruction path, and one durable output. If you build folders, commands, agents, and rules before one useful job works end to end, you did not build an operator system. You built theater.
That is the real setup problem.
Not installation.
Not model quality.
The blank-folder problem.
When to use this approach
Use this setup if you installed Claude Code or a similar agent tool, opened an empty folder, and realized you do not know what should exist inside it.
This is especially useful if your work looks more like:
- CRM audits
- transcript analysis
- content operations
- workflow design
- reporting and internal operating docs
Not every useful AI workspace starts as a software project.
What you need before you start
Before you create anything, define four things.
- one real workflow you want the system to support
- the minimum context that workflow needs
- one reusable instruction path
- one output artifact that should survive the session
If you do not know those yet, do not start building structure.
The blank-folder problem
Most people do not fail because the AI tool is bad.
They fail because they opened an empty folder and had no operating model for what should live there.
That keeps coming up.
Someone installs Claude Code.
Or Codex.
Or whatever agent tool is having a moment this week.
They open a folder.
They ask one or two questions.
The output is decent.
Then they hit the wall.
Now what?
What files should exist?
What belongs in instructions versus normal docs?
What should become a reusable workflow?
What should stay out completely?
If you do not answer that, people do one of two dumb things.
They either keep everything in chat and rebuild the context every session.
Or they overcorrect and start building a giant AI shrine with folders, commands, agents, rules, templates, sidecars, dashboards, and ten layers of complexity before one useful workflow actually works.
Both paths are bad.
One gives you no memory.
The other gives you theater.
This is the part the macro data keeps confirming. BCG's 2023 analysis of companies getting real value from AI found that only about 10% of the value comes from the algorithms themselves, with roughly 70% coming from business process change and adoption. The setup is the job. The model is the easy part.
The right answer is smaller.
Build the minimum structure that supports one real job.
Then earn the next layer.
What the repo is actually for
A good repo is not there to look impressive.
It is there to hold the context and artifacts that should survive the session.
That means the repo should answer a few practical questions.
- what work are we doing
- what context should already be known
- what outputs need to persist
- what workflow gets repeated often enough to deserve structure
That is it.
If a folder or file does not serve one of those jobs, it probably does not need to exist yet.
That is the mistake a lot of setup content makes. It shows the full anatomy of a mature system and people assume they need to build all of it on day one.
You do not.
You need a small system that can survive real work.
The minimum viable repo architecture
If you are starting from zero, you need fewer layers than most people think.
A useful starter workspace usually needs four things.
| Layer | What it is for | Example |
|---|---|---|
| Context | durable facts that should already be known | who the company is, how the team works, core definitions |
| Project | the active working area for one stream of work | attribution rebuild, transcript pipeline, content engine |
| Instructions | reusable rules for how outputs should behave | tone rules, formatting rules, scoring rules |
| Artifacts | outputs that need to survive the session | plans, briefs, proofs, reports, handoffs |
That is enough to do serious work.
Not sexy.
Useful.
A starter tree that actually makes sense
If I were setting this up for an operator from scratch, I would start smaller than most tutorials do.
my-ai-workspace/
├── README.md
├── context/
│   ├── company.md
│   ├── audience.md
│   └── operating-rules.md
├── projects/
│   └── sales-transcript-pipeline/
│       ├── plan.md
│       ├── notes.md
│       └── deliverables/
├── instructions/
│   ├── writing-rules.md
│   ├── analysis-rules.md
│   └── crm-format.md
└── outputs/
    ├── drafts/
    └── reports/
That already gives the system:
- durable context
- one active project area
- a place for reusable rules
- a place for outputs that should survive
You can add more later.
But this already works.
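If you want to scaffold that tree in one step rather than clicking folders into existence, a minimal Python sketch works. The folder and file names below are just the examples from the tree above; rename them to fit your own work.

```python
from pathlib import Path

# Folders and starter files from the example tree; rename to fit your work.
LAYOUT = {
    "context": ["company.md", "audience.md", "operating-rules.md"],
    "projects/sales-transcript-pipeline": ["plan.md", "notes.md"],
    "projects/sales-transcript-pipeline/deliverables": [],
    "instructions": ["writing-rules.md", "analysis-rules.md", "crm-format.md"],
    "outputs/drafts": [],
    "outputs/reports": [],
}

def scaffold(root: str = "my-ai-workspace") -> Path:
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    # One-paragraph README so the workspace states its own job.
    readme = base / "README.md"
    if not readme.exists():
        readme.write_text("# my-ai-workspace\n\nWhat this workspace is for.\n")
    for folder, files in LAYOUT.items():
        d = base / folder
        d.mkdir(parents=True, exist_ok=True)
        for name in files:
            f = d / name
            if not f.exists():
                # Empty stub; fill it with real context, not placeholder prose.
                f.write_text(f"# {name}\n")
    return base

if __name__ == "__main__":
    scaffold()
```

The `exist_ok` and `exists()` checks make the script safe to re-run: it fills gaps without clobbering files you have already written real context into.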
What to create first and why
If you are staring at an empty repo, create these in order.
- README.md - one paragraph on what the workspace is for
- context/company.md - what should already be known about the business or project
- instructions/operating-rules.md - the durable rules that keep outputs usable
- projects/&lt;first-workflow&gt;/plan.md - the active job and what success looks like
- one output artifact the system can actually produce
That last part matters.
The repo should produce something useful early.
Not just structure.
A report.
A brief.
A cleaned transcript summary.
A pipeline review.
A CRM audit that actually walks the system instead of giving generic advice.
Something that proves the system is doing work.
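The "durable artifact" part can be as simple as writing each run's output to a dated file instead of leaving it in chat. A sketch, assuming the example tree above (the paths and the `save_report` helper are illustrative, not part of any tool):

```python
from datetime import date
from pathlib import Path

def save_report(body: str, name: str, outdir: str = "outputs/reports") -> Path:
    """Persist one output artifact so it survives the session."""
    reports = Path(outdir)
    reports.mkdir(parents=True, exist_ok=True)
    # A date-stamped filename keeps successive runs from overwriting each other.
    path = reports / f"{date.today().isoformat()}-{name}.md"
    path.write_text(body)
    return path

# Example: persist a pipeline review instead of losing it with the chat.
# save_report("## Pipeline review\n\n- Deal X stuck 21 days...", "pipeline-review")
```

That is the whole trick: the repo carries the memory because the output lands in a file, not a scrollback buffer.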
A simple way to think about the first files is this.
| File or folder | Job to be done | Create it when |
|---|---|---|
| README.md | states what the workspace is for | immediately |
| context/ | holds durable facts that should not be re-explained | immediately |
| instructions/ | stores reusable rules across tasks | once rules repeat |
| projects/&lt;workflow&gt;/ | isolates one stream of work | as soon as you pick a real workflow |
| outputs/ or deliverables/ | preserves artifacts worth keeping | before the first real run |
| custom commands or agents | reduces repeated work | only after the workflow works manually |
What not to build yet
This matters just as much.
Do not build:
- ten nested instruction layers before one workflow works
- custom commands for jobs you have only done once
- agents for tasks you still do not understand manually
- a giant context file that tries to store everything
- folders created because they looked smart in somebody else’s setup
A good rule is this:
If a layer does not reduce repeated work yet, it is probably premature.
That includes a lot of AI setup advice.
Real operator workflows a starter repo should support
A good starter workspace should support a few concrete jobs.
Not generic productivity.
Real work.
For a GTM or RevOps operator, 3 to 5 starter workflows are enough.
- transcript analysis into action items, scoring, and CRM notes
- pipeline review with named risks, stuck deals, and owner follow-up
- CRM audit across fields, pipelines, duplicates, workflows, and lifecycle logic
- content brief generation with proof and structure
- weekly operating summary for client or internal review
If the repo can support even one of those end to end, you are past the gimmick stage.
Example workflow
A CRM audit is a good first example because it forces the repo to do real work.
- create the minimum context files for the business and CRM setup
- create one project area for the audit
- define one instruction file for how findings should be documented
- walk the fields, pipelines, duplicates, workflows, and lifecycle logic
- write the findings into a durable artifact that survives the session
That is enough to prove the workspace is useful.
You do not need the whole city before one room works.
The handoff rule that makes this work
This is the part that quietly matters the most.
Planning, execution, and memory should not all live in the same blob.
A plan has one job.
Project context has another.
Reusable instructions have another.
Final outputs have another.
When those get mixed together, the system gets muddy fast.
That is why I keep coming back to a simple pattern:
- plan in one place
- execute in another
- write the output down
- let the repo carry the memory forward
That is what turns working memory into infrastructure.
If you want the deeper operating argument behind that separation, read Prompt Engineering vs Context Engineering: What Actually Improves AI Output? and AI Memory Layer for Workflow Automation: What It Is and Why It Matters.
Do non-developers need all this?
No.
They just need the useful version of it.
This is where a lot of non-technical people bounce off these tools. They assume the repo structure is secretly for engineers and they are just borrowing it.
That is wrong.
If your work involves client systems, documents, workflows, research, reporting, CRM logic, or content operations, the structure helps you too.
You do not need to code to benefit from:
- scoped project context
- reusable instructions
- durable output artifacts
- cleaner handoffs between sessions
You just need the setup tied to real work instead of terminal cosplay.
That same pattern shows up in public Claude Code setup writing from AI Maker and in Hannah Stulberg’s framing for non-developers: people get leverage when the working environment is structured around the job, not when the tool is treated like magic.
Signs you are overbuilding too early
- you have more structure than outputs
- you built commands or agents you never use
- your context files are getting longer but the work is not getting cleaner
- nobody knows what belongs where
- the system looks sophisticated but still cannot complete one useful workflow end to end
That last one is the kill shot.
If the system cannot do one real job cleanly, more layers will not save it.
Bottom line setup sequence
If you want the shortest path from blank repo to useful system, it goes like this.
- Pick one real workflow.
- Write the minimum context that workflow needs.
- Create one project area for it.
- Create one reusable instruction file.
- Produce one durable output artifact.
- Run the workflow a few times.
- Only then add another layer if repeated pain justifies it.
That is enough.
Practitioner view
"Most people do not need a more powerful model. They need a repo that knows what job it is there to do."
Sebastian Silva, Founder, HigherOps
Key takeaways
- The blank-folder problem, not the tool, is what stalls most operators starting a Claude Code project. Build structure only for work you are actually doing.
- Four layers are enough to start: context, project, instructions, and artifacts. Everything else is premature until a real workflow is running end to end.
- A repo's job is not to look impressive. It is to carry the context and artifacts that should survive the session.
- If structure is growing faster than outputs, you are overbuilding. Pull the layers back and ship a workflow instead.
- Skills, commands, and agents come after the underlying workflow works manually and the repetition is real, not before.
Frequently asked questions
What files should I create first in a new AI workspace?
Start with a README, one context file, one instruction file, one project folder, and one output artifact tied to a real workflow.
Do I need to know how to code to use Claude Code well?
No.
But you do need enough structure that the tool is working against real context instead of a blank slate every session.
What belongs in instructions versus normal project docs?
If it applies repeatedly across tasks, it probably belongs in instructions. If it is specific to one stream of work right now, it probably belongs in the project docs.
When should I add skills, commands, or agents?
After the underlying workflow works manually and the repetition is real.
Not before.
How do I know if I am overbuilding the system?
If the structure is growing faster than the outputs, you are probably overbuilding.