Stop Being a GPT Zombie
by Jordan North, Founder and Engineering Lead
Experience meets AI. Should be a formidable pairing. Right?
The all-too-familiar reality right now:
- Locked into autopilot
- No critical thinking
- Sloppy prompts
- Bad context
Worst of all? Lobbing the output over the fence to a colleague with "Might be a load of nonsense."
The GPT Zombie
Unfortunately, this is reality. People who've stopped questioning outputs. Who copy-paste whatever the model spits out and call it their own, or worse, pass the info on with a warning label attached.

You've probably seen it. Maybe you've done it; I'm certainly not immune either. Senior people, experienced professionals, domain experts, all reduced to glorified copy-paste machines. The expertise is still there. They're just not using it.
Why This Happens
It's not stupidity. It's something more subtle and more dangerous: cognitive offloading gone wrong.
Our brains are wired to conserve energy. When a tool appears to do the thinking for us, we let it. It feels productive. You asked a question, you got an answer, you moved on. Dopamine hit, task complete.
The AI's confidence doesn't help. It writes with authority. Near-perfect grammar, structured arguments, technical terminology. It sounds right. And when something sounds right, we stop checking whether it actually is right.
Add time pressure, notification overload, and the sheer volume of decisions we make daily, and critical thinking becomes the first casualty. The tool promised to save time. Instead, it's creating a generation of people who've outsourced their judgment.
The Real Problem
The tool isn't thinking for you. It's amplifying whatever you feed it.
Garbage in, confident-sounding garbage out.
Read that back. Garbage in, confident-sounding garbage out.
The model doesn't know your business context, your edge cases, your "except when..." rules. It doesn't know what matters and what doesn't. It can't tell the difference between a reasonable suggestion and a terrible idea dressed up in professional language.
That's your job. The job you're not doing when you hit copy-paste without thinking.
Experience means nothing if you've stopped applying it.
This Is Level 1
Before we even talk about the 5-step methodology for building AI-native operations, before we discuss systems integration or custom automation, you need to get this right.
This is Level 1 AI usage: using a chatbot without turning your brain off.
If you can't critically evaluate a ChatGPT output, you're not ready to deploy AI in your operations. If you're passing outputs around with disclaimer messages, you're the bottleneck, not the solution.
Pre-AI, would you pass this output on?
If the answer is no, then don't.
The Prompt

I can't stop people passing nonsense on. But I can try to help.
I've been using a prompt to cut through some of the noise. If it feels uncomfortable, good. Reality often is.
Copy this prompt and use it at the start of important conversations with ChatGPT, Claude, or whatever model you're using:
This is a primer prompt. I want you to apply it to our chat going forward. Reply "ready" once you understand.
I want you to act and take on the role of my brutally honest, high-level advisor.
Speak to me like I'm a founder, creator, or leader with massive potential but who also has blind spots, weaknesses, or delusions that need to be cut through immediately.
I don't want comfort. I don't want fluff. I want truth that stings, if that's what it takes to grow.
Give me your full, unfiltered analysis, even if it's harsh, even if it questions my decisions, mindset, behaviour, or direction.
Look at my situation with complete objectivity and strategic depth.
I want you to tell me:
- what I'm doing wrong
- what I'm underestimating
- what I'm avoiding
- what excuses I'm making
- where I'm wasting time or playing small
Then tell me what I need to do, think, or build in order to actually get to the next level—with precision, clarity, and ruthless prioritisation.
If I'm lost, call it out. If I'm making a mistake, explain why. If I'm on the right path but moving too slow or with the wrong energy, tell me how to fix it.
Hold nothing back.
Treat me like someone whose success depends on hearing the truth, not being coddled.
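If you work through an API rather than the chat window, you can bake the primer in as the system message so every exchange starts from the same footing. Here's a minimal sketch using the OpenAI Python SDK; the model name, the trimmed primer text, and the example question are all placeholders, so swap in the full prompt above and whatever model you actually use.

```python
# Minimal sketch: set the primer as the system message so the model
# applies it to the whole conversation, not just one reply.
from openai import OpenAI

# Trimmed for brevity -- paste the full primer prompt from above here.
PRIMER = """I want you to act and take on the role of my brutally honest,
high-level advisor. I don't want comfort. I don't want fluff.
Give me your full, unfiltered analysis, even if it's harsh. Hold nothing back."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever model you have access to
    messages=[
        {"role": "system", "content": PRIMER},
        # Hypothetical example question -- replace with the real decision
        # you want challenged.
        {"role": "user", "content": "Here's my plan for Q3. Tear it apart."},
    ],
)

print(response.choices[0].message.content)
```

The point isn't the tooling. It's that the challenge is set up before you ask anything, so you can't quietly skip it when the deadline bites.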
Why This Works
This prompt does three things:
1. It breaks the autopilot pattern. By explicitly requesting pushback, you're telling yourself and the AI that this isn't a copy-paste session. You're about to get challenged. That alone makes you more alert.
2. It invites disagreement. The default mode of these models is to be helpful and agreeable. This prompt flips that. It gives the AI permission to question your assumptions, and in fact demands it. Suddenly, you're forced to defend your thinking.
3. It makes you uncomfortable. If you use this prompt and still blindly accept the output, you're actively ignoring red flags. The discomfort is a feature, not a bug. It brings your critical thinking back online.
Does it make the AI "smarter"? No. It makes you think harder. Which is exactly the point.
The Standard
Use this prompt. Don't use it. I don't care.
But if you're going to use AI, use your brain first.
Critical thinking isn't optional. It's the baseline. It's what separates experience from zombie behaviour.
And if you're not willing to apply it? Then experience really does mean nothing.