AI Is a Second Brain, Not a Replacement

May 5, 2026

The most valuable way to use AI is not to skip thinking, but to expand it and pressure-test your reasoning before you commit to a solution.

AI is often framed as a replacement.

Faster coding. Faster writing. Faster answers.

That is useful. It is also a shallow way to use it.

The most valuable way I have found to use AI is as a second brain.

Not something that replaces thinking.

Something that helps me think better.

The mistake is collapsing the problem too early

A lot of AI usage looks like this:

You have a hard problem. You ask for a solution. You get something plausible. You move on.

The problem is not always that the answer is wrong.

The problem is that you skipped the part where you actually understand what you are solving.

That usually comes back later.

You realize the constraints were fuzzier than you thought. The tradeoffs were different. The first clean answer solved the visible problem but missed the real one.

That is the failure mode I see most.

What works better

I use AI more like an overpowered rubber duck.

I will ask it to interview me. Push back on my assumptions. Force me to explain what I am actually trying to do.

That is surprisingly effective.

The moment you try to explain a system out loud, even to an AI, you start noticing things you had been hand-waving past:

  • constraints you did not make explicit
  • assumptions you did not verify
  • edge cases you were quietly ignoring
  • places where you were jumping to implementation too quickly

That is already useful before a single line of code gets generated.

Example: designing a solution

When I am working on something non-trivial, my instinct is still to start implementing too early.

That is usually the fastest way to feel productive.

It is not always the fastest way to get the design right.

Now I slow that part down.

I will open a conversation and start exploring:

  • what are the actual constraints?
  • what are a few realistic approaches?
  • where does each one get awkward?
  • what failure modes am I not modeling yet?
  • what assumptions am I making because the first approach feels convenient?

The AI is not deciding the design for me.

It is helping me explore the space faster and more deliberately.

By the time I start coding, the problem is usually smaller and the tradeoffs are more visible.

That matters more than generating code faster.

Example: understanding a codebase before changing it

Another place this helps is research.

Before I make a change in a messy or unfamiliar system, I can use AI to help walk the codebase:

  • find similar patterns
  • trace how a concept is used in different places
  • compare how related problems were solved before
  • pressure-test whether my proposed change fits the system that already exists

That changes the quality of the solution.

You are no longer just guessing from one file. You are making a decision with more context.

And that usually leads to better changes than jumping straight from ticket to implementation.

Why this works

AI is good at a few things that matter here:

  • holding context across a discussion
  • continuing a line of thought without getting tired
  • offering alternative angles quickly
  • asking the kind of follow-up question that forces articulation

It is not good at the part that still belongs to the engineer:

  • knowing what actually matters in your system
  • deciding which tradeoff is acceptable
  • understanding organizational or production context
  • owning the outcome when the decision is wrong

That division of labor is the whole point.

You own the judgment.

It helps you think.

The boundary still matters

This is not how I would use AI for everything.

If the problem is trivial, I do not need a long conversation.

If the answer is well-known, I might just ask for the answer.

And if the decision is high-stakes, AI is input, not authority.

The point is not to turn every task into a philosophical session with a chatbot.

The point is to use the tool in the place where it actually compounds your reasoning.

The shift

The biggest change is small.

Instead of asking:

What's the solution?

Ask:

Help me think this through.

That is the difference between using AI as a shortcut and using it as a tool.

One tries to replace thinking.

The other makes you better at it.
