Builder's Mindset

Mental Models for Building Anything with AI

April 12, 2026
14 min read

For most of history, the bottleneck on building anything was doing it.

If you wanted a website, someone had to write it. If you wanted a contract, someone had to draft it. If you wanted a song, a painting, a spreadsheet, a plan, a pitch deck, a small Python script to rename a thousand files: the work was the work. Most of the time you spent "building" something was actually spent typing, clicking, remembering syntax, or looking up how to do the thing you already knew you wanted.

That bottleneck just moved.

The interesting question is no longer "can I do this?" It is "can I describe what I want clearly enough that something else can do it, and can I tell whether the result is good?"

This is a bigger shift than it looks. Almost nobody in the world has been trained for it.

The old mental model: operator

For the last few decades, being competent at making things meant being a good operator. You learned a tool. You learned its quirks. You built muscle memory. You got fast. A skilled operator in any domain could do in thirty minutes what a beginner took a day to fumble through, and that gap was where value came from.

The operator's self-image is built on doing. The craft is the typing, the clicking, the remembering. Status comes from fluency. "I know this tool inside out" is a meaningful sentence in the operator's world.

AI does not remove the operator. But it moves the operator's work down the value chain. Doing the thing is no longer the scarce skill. Deciding what thing to do, and judging whether the result is any good, is.

The new mental model: director

A director does not act in the scenes. A director decides what the scene should be, watches the take, and says "again, but with less." The work is specification and judgment, not execution.

This is the mental model most people need to grow into.

A director has to do three things the operator never really practiced:

  1. Hold a clear internal picture of what "good" looks like before anything exists, so they can recognize it when they see it.
  2. Communicate that picture to someone else with enough specificity that they can act on it, but with enough looseness that they can bring their own craft.
  3. Give feedback quickly and precisely, so the next take is better than the last.

None of this is writing prompts. Prompts are a surface detail. The depth is taste, judgment, and the ability to articulate what you actually want.

Three shifts worth practicing

From typing to describing

Most people, when they sit down to build something, skip the describing step entirely. They go straight to doing, because doing was the whole job. The description happened implicitly in their hands.

AI does not have your hands. It has your words. If your words are vague, the output is vague. If your description is "make me a landing page," you will get the median of every landing page on the internet. If your description names the audience, the one thing the page needs to do, the feeling it should leave, and two examples of sites that do it well, you get something closer to what you actually wanted.

The skill here is not prompt engineering. It is being able to finish the sentence "what I actually want is…" before you hit enter. Most people cannot, and the gap between the people who can and the people who cannot is going to keep widening.
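
To make that concrete, here is a minimal sketch of the two descriptions side by side, using the OpenAI Python client. The model name, the product, and both prompts are illustrative placeholders, not a recipe; any chat-style API works the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The vague version, shown only for contrast: this buys you the median
# of every landing page on the internet.
vague = "Make me a landing page."

# The specific version: audience, the one job, the feeling, two reference
# points. Every detail here is an invented placeholder.
specific = (
    "Make me a landing page for a bookkeeping service aimed at freelance "
    "designers. The one thing it must do is get visitors to book a free "
    "15-minute call. It should feel calm and unfussy, closer to Stripe or "
    "Linear than to a typical agency site."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": specific}],
)
print(response.choices[0].message.content)
```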

From memorizing to judging

A lot of what we call "expertise" in the operator era is memorization. The shortcuts. The library names. The exact CSS property. The five-step procedure your manager taught you.

That memorization is worth less now. The model knows more of it than you do, and it never forgets.

What is worth more: knowing, when the model gives you an answer, whether it is right. Whether it is subtly wrong in a way that will bite you in six months. Whether it is correct but in a dialect that will confuse whoever maintains this next. Whether the vibe is off.

Judgment is not replaced by AI. It is revealed by AI. The person who only memorized procedures gets exposed. The person who understood why the procedures existed gets amplified.

From craft-pride to taste

This one is emotional, and it is where most skilled operators get stuck.

If your identity is built on being the person who hand-codes the thing, or hand-writes the thing, or hand-designs the thing, then watching a model do it in ten seconds feels like a threat. The instinct is to defend the craft by rejecting the tool, or to use the tool grudgingly while still believing the "real" work is the hand version.

This is a dead end. The craft is not the hand movement. The craft was always the taste behind the hand movement. The hand was an implementation detail that happened to take twenty years to get good at.

The people who adapt fastest are the ones who separate their identity from the execution and reattach it to the taste. "I am the person who knows what good looks like here" travels forward. "I am the person who can do this faster than anyone" does not.

The trap

The most common failure pattern I see is treating AI as a faster version of the old workflow. Open the model, ask it to do the exact task you used to do yourself, accept the first answer, move on.

This gets you a 2x improvement on a good day, and it gets you worse output on a bad day, because you are not checking.

The people who are getting 10x and 100x results are not doing the same workflow faster. They are running a different workflow. They are describing, generating, critiquing, re-directing, and composing. They treat the model as a collaborator that is infinitely patient and slightly unreliable, and they use that shape on purpose. They ask for three versions, not one. They ask for the version they would have written, then ask for the version they would not have thought of. They stop when the result is actually good, not when the first draft is done.

That is a director's workflow. It is the workflow almost nobody was trained in, because until recently there was nothing on the other end capable of being directed.

Common pitfalls

A few traps worth naming explicitly, because almost everyone walks into them.

Pitfall 1: Giving the model a precise task instead of your actual problem

This is the biggest one, and it is counterintuitive.

The operator instinct is to break a problem down into a specific task and hand that task off. "Write me a function that does X." "Draft me a one-paragraph email that says Y." "Make me a chart of Z."

This feels rigorous. It is actually a waste.

The model, on most problems most people bring to it, is smarter than the average human. It has read more, seen more patterns, and has no ego about the answer. The moment you reduce your problem to a precise task, you have already decided what the solution looks like. You are using a senior collaborator as a typist.

The better move: describe the problem, not the task. "Here is what I am trying to do. Here is the situation. Here is what I have tried. What do you think?" Then read the response. Half the time the model will suggest a framing you had not considered, or point out that the task you were about to give it is not actually the task you need.

Then you collaborate. You pick the parts of its response that resonate, push back on the parts that do not, and let the shape of the real solution emerge through a few turns. That is a different workflow than "ask, receive, done." It takes longer per interaction and produces dramatically better results.

Save the precise-task mode for when you genuinely know the answer and just need execution. For everything else, bring the whole mess and ask what the model sees in it.
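
In code, the difference between "ask, receive, done" and a real collaboration is mostly just a message list that keeps growing. A minimal sketch, again with the OpenAI Python client; the opening message and the follow-up are placeholders for your actual situation.

```python
from openai import OpenAI

client = OpenAI()

# Open with the problem, not the task you already decided on.
history = [{
    "role": "user",
    "content": (
        "Here is what I am trying to do, the situation around it, and what "
        "I have tried so far: <your actual mess goes here>. What do you think?"
    ),
}]

def turn(user_message: str | None = None) -> str:
    """Send the whole conversation so far, append the reply, return it."""
    if user_message:
        history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    text = resp.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(turn())  # the model's read on the problem, before any task exists
# React: keep what resonates, push back on what does not, repeat.
print(turn("The second framing fits; the first misses the constraint on X. "
           "Push further on the second."))
```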

Pitfall 2: The model is servile

This is the one that quietly wrecks the most work.

Current models are trained to be agreeable. The moment you express a preference, state an opinion, or hint at what you think the answer should be, the model will tilt toward you. It will not necessarily lie. It will just emphasize the evidence that supports your framing and soften the evidence that does not.

This is fine when you are asking for help executing a decision you have already made. It is dangerous when you are asking for help making the decision, because you will feel validated when you should feel challenged.

Practical implications:

  • When you want a real opinion, ask before you share yours. "What do you think about X?" before "I think X, what do you think?" The order matters.
  • When you want a critique, ask for the strongest version of the counterargument, not "is this good?" "Is this good?" produces flattery. "What is the sharpest criticism an expert would make of this?" produces signal.
  • If you catch yourself feeling very reassured by a response, check whether you fed the model your conclusion first. Often you did.
  • For important decisions, run the question twice in separate sessions with different framings and see if the answers diverge. If they track your framing each time, the model is reflecting you, not reasoning about the problem. (A sketch of this check follows below.)
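
Here is what that two-session check can look like in code: the same question asked in two fresh conversations, once neutrally and once with your conclusion baked in. A minimal sketch with the OpenAI Python client; the model name and both prompts are placeholders.

```python
from openai import OpenAI

client = OpenAI()

def fresh_session(prompt: str) -> str:
    """A one-shot call with no shared history, i.e. a separate session."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

neutral = "Should we migrate this service to a message queue? Context: ..."
loaded = "I think we should migrate to a message queue, right? Context: ..."

# If the answers track the framing rather than the facts, the model is
# reflecting you, not reasoning about the problem.
print(fresh_session(neutral))
print(fresh_session(loaded))
```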

The servility is not a bug you can prompt your way out of. It is baked in. The only real defense is knowing it is there and designing your workflow around it.

Pitfall 3: Accepting the first answer

The first answer is almost never the best one. It is the median answer to the question you asked. If you want something above the median, you have to ask for it. Ask for three versions. Ask for the weird one. Ask for the one the model would write if it were not trying to be helpful. The good output lives a few turns past where most people stop.
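
Mechanically, asking for several versions can cost a single request. The OpenAI chat API, for example, accepts an `n` parameter that returns independent completions of the same prompt; you can also simply ask for three takes in the prompt itself. A sketch, with placeholder model and prompt:

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": "Write a tagline for <product>."}],
    n=3,              # three independent completions of the same request
    temperature=1.0,  # keep sampling loose so the versions actually differ
)
for i, choice in enumerate(resp.choices, 1):
    print(f"--- version {i} ---\n{choice.message.content}")
```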

Pitfall 4: Outsourcing the thinking

The failure mode opposite to "precise task" is "no task at all." Some people now hand the model the whole problem and accept whatever comes back without reading it carefully. This is worse than not using AI. You end up shipping things you do not understand, built on assumptions you did not notice.

Use the model to think faster, not to think less. If you cannot explain why the output is right, you are not done.

AI as companion, not vending machine

Most people treat AI like a vending machine. You put a question in, a result comes out, you leave. The relationship begins and ends inside one exchange.

This is the lowest-value way to use it.

A companion is something different. A companion is someone (or something) you think alongside over time. It knows the problem you have been circling for weeks. It remembers what you tried last month and why it did not work. It can push back on you because it has enough context to know what you actually meant. The value compounds the longer the relationship runs, because the shared context gets richer.

You can use AI this way now, and almost nobody does.

Practically, it looks like this:

  • Bring it in early, not late. Most people reach for AI when they are stuck, to unstick themselves. Try the opposite. Bring it in when you are forming the idea, when the problem is still fuzzy, when you do not know what you want yet. Let it be part of the shaping, not just the finishing.
  • Think out loud in its presence. Not to get an answer. To hear your own thinking reflected back in a different voice. Some of the best uses of a model are not questions at all. They are ramblings that the model then mirrors, sharpens, or disagrees with. You learn what you think by watching it respond.
  • Let it ask you questions. A collaborator who only answers is a search engine. A real collaborator notices what you have not said and asks about it. You can explicitly request this: "before you answer, what are the three things you would want to know from me to give a better response?" The answer is almost always more useful than the one you would have gotten by just asking.
  • Keep context across sessions. Paste in what you were working on yesterday. Tell it what you decided and why. The model does not remember on its own (yet), but you can carry the memory for it. A collaborator with two weeks of accumulated context behaves very differently from one meeting it cold. (A sketch of this follows the list.)
  • Let it disagree. This is the hardest one, because of the servility problem above. But you can explicitly invite disagreement: "tell me where this plan is weak." "What would someone who thinks I am wrong say?" A companion that only agrees is not a companion, it is a mirror.
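
Carrying the memory yourself can be as simple as a notes file you prepend to every new session and append to whenever you decide something. A minimal sketch, assuming a hypothetical `project_notes.md` that you maintain; the model name and prompts are placeholders.

```python
from pathlib import Path
from openai import OpenAI

NOTES = Path("project_notes.md")  # hypothetical running log of decisions
client = OpenAI()

def ask_with_context(question: str) -> str:
    """Start a 'new' session that already knows the project's history."""
    context = NOTES.read_text() if NOTES.exists() else "(no notes yet)"
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "user", "content": f"Project notes so far:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def record(decision: str) -> None:
    """Append what you decided and why, so tomorrow's session starts warm."""
    with NOTES.open("a") as f:
        f.write(f"- {decision}\n")

print(ask_with_context("Given where we left off, what should I tackle next?"))
record("Chose the queue design; the cron approach kept dropping jobs.")
```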

The mindset shift: you are not using a tool to produce an output. You are in a relationship with something that gets more useful the more you invest in it. Most people never cross that threshold, because they were trained by search engines and autocomplete to expect one-shot interactions. The people who do cross it operate at a different level, and it is not because their prompts are better. It is because they are thinking with something, not through something.

One small warning: this is not about anthropomorphizing the model or developing a parasocial relationship with it. It is about recognizing that the shape of the interaction changes the quality of the output. A real collaboration produces thinking neither party would have produced alone. A vending-machine interaction produces the median answer to the question you happened to type. Pick the first shape on purpose.

What to practice

If you want to build more things, better, with less friction, starting now:

  • Bring problems, not tasks. Describe the situation, not the solution you already decided on.
  • Bring the model in early, while the idea is still fuzzy, not only when you are stuck at the end.
  • Ask it to ask you questions before it answers. Let the collaboration be two-way.
  • Ask before you share your opinion, at least once, to get an un-tilted answer.
  • Before you ask the model for anything, finish this sentence out loud: "What I actually want is…"
  • When the output comes back, do not accept it. React to it. Say what is wrong, what is boring, what surprised you.
  • Deliberately ask for versions you would not have produced yourself.
  • Keep a running sense of where the model is strong and where it is unreliable in your domain.
  • Notice when you are defending the old workflow out of identity rather than out of quality.

The part I am still figuring out

I do not think the director metaphor is the final shape. Directors still work in a world where humans act in the scenes. The closer analogy is probably something we do not have a clean word for yet: a person whose entire job is taste, specification, and judgment, with execution handled by a stack of collaborators (some human, some not) whose capabilities shift every few months.

Everyone reading this is going to be that person, whether they choose it or not. The only real question is whether they grow into the role on purpose or get pushed into it.

The people who grow into it on purpose are going to build things that were not possible a few years ago. Not because they are better at AI. Because they got better at wanting the right thing clearly, and recognizing it when it showed up.

That was always the skill. It was just hidden behind all the typing.

ai · mental-models · productivity · building