Two companies buy the same AI platform. Same vendor, same model, same price.
Company A does everything right. They hire a prompt engineering consultant. They run workshops. Their team spends weeks crafting detailed instructions—specifying tone, format, constraints, step-by-step reasoning chains. The prompts are beautiful.
Company B takes a different approach. They spend those same weeks uploading their sales data, customer call transcripts, pricing history, and internal strategy docs. Then someone on the team types a one-sentence question. No special formatting. No chain-of-thought tricks.
Company B gets better results. Not slightly better. Meaningfully better.
The difference wasn't how they asked. It was what they gave it to work with.
The Prompt Engineering Trap
For the past two years, the entire conversation around AI has centered on prompting. How to ask. What words to use. Which frameworks produce the best output. There are courses, certifications, even six-figure job titles built around the craft of writing better instructions.
And prompt engineering isn't useless. It matters. But it's been given a weight it doesn't deserve.
Chip Huyen draws this line clearly in AI Engineering (2025). She separates two things most people treat as one: instructions and context. Instructions tell the model how to do the task. Context gives it the information to do it. They're both part of the prompt—but they are not the same thing, and they are not equally important.
Most companies are spending almost all their energy on the instructions side. Polishing how they ask while starving the model of the information it needs to answer well.
It's like giving someone perfect driving directions but not telling them where they're starting from.
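For the technically inclined, Huyen's split can be made concrete with a small sketch. Everything here (the function name, the section labels, the sample data) is invented for illustration; the point is simply that the instruction block stays fixed while the context block is the part that must carry your business:

```python
# Hypothetical sketch: one prompt, split into Huyen's two parts.
# All names and data below are invented for illustration.

def build_prompt(instructions: str, context: str, question: str) -> str:
    """Combine a fixed instruction block with fresh business context."""
    return (
        f"## Instructions\n{instructions}\n\n"
        f"## Context\n{context}\n\n"
        f"## Question\n{question}"
    )

# The instructions rarely change. The context should change every quarter.
instructions = "Answer concisely. Cite the context you relied on."
context = "Q3 pricing: Plan A $49/mo, Plan B $99/mo. Margin target: 40%."
question = "Can we discount Plan B by 20% and still hold margin?"

prompt = build_prompt(instructions, context, question)
```

Notice which variable does the work. Rewriting `instructions` is prompt engineering. Keeping `context` accurate and current is context engineering.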
Context Is the Real Variable
Andrej Karpathy—one of the most technically capable people in AI—recently reframed the entire discipline. He stopped calling it prompt engineering. He calls it context engineering: “the delicate art and science of filling the context window with just the right information for the next step.”
That word “just” is doing a lot of work. Not all the information. Not as much as possible. The right information, structured the right way, at the right time.
Stanford's “Lost in the Middle” research backs this up. Their team found that language models lose more than 30% of their performance when relevant information is buried in the middle of what you give them—even models explicitly designed for long context. The information was there. The model just couldn't find it. Structure and positioning of context mattered more than having the information at all.
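One practical response to that finding, sketched here as a hypothetical helper (not code from the paper), is to reorder retrieved material so the most relevant items sit at the edges of the context window rather than the middle:

```python
# Hypothetical sketch of a "lost in the middle" mitigation:
# place the highest-relevance passages at the start and end of the
# context, letting the least relevant ones drift to the middle.

def order_for_attention(passages: list[str]) -> list[str]:
    """Reorder passages (assumed sorted most-relevant-first) so the
    best ones land at the beginning and end of the context window."""
    front, back = [], []
    for i, passage in enumerate(passages):
        (front if i % 2 == 0 else back).append(passage)
    # Reversing the back half pushes the weakest passages toward the middle.
    return front + back[::-1]

docs = ["most relevant", "second", "third", "fourth", "least relevant"]
ordered = order_for_attention(docs)
# "most relevant" now opens the context; "second" closes it.
```

The same information, differently arranged, gets found or lost. That is the sense in which structure is part of context, not an afterthought.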
Anthropic's own engineering team confirms it. Their context engineering guide describes it as “a fundamental shift in how we build with LLMs—focusing on thoughtfully curating what information enters the model's limited attention budget at each step.”
Huyen puts it simply:
“To solve a task, a model needs both instructions on how to do it and the necessary information to do so. Just like how a human is more likely to give a wrong answer when lacking information, AI models are more likely to make mistakes and hallucinate when they are missing context.”
The model isn't the bottleneck anymore. What you feed it is.
Before You Hire a Prompt Engineer
Before your next investment in prompting tools, AI training, or another round of workshops on how to talk to ChatGPT—ask your team these three questions:
1. What does your AI actually know about your business?
If you're asking it to help with pricing but haven't given it your pricing data, margins, and competitive position—no prompt will fix that. You're asking someone to solve a problem without telling them the facts.
2. When was the last time you updated what your AI has access to?
Most companies set up their AI tools once and never refresh the context. Your business changed last quarter. Your market shifted. Your AI's context didn't. It's working with a version of your company that no longer exists.
3. Is your team spending more time on how they ask than on what they provide?
If people are in prompt workshops but nobody has organized your institutional knowledge for AI consumption—you're optimizing the wrong variable. A mediocre question with excellent context will outperform a perfect question with no context, every time.
If the answers are vague, that's the signal. The gap in your AI strategy isn't cleverer prompts. It's the context layer nobody has built yet.
The Work That Actually Matters
This is what we focus on at Guthrie Blackwell & Co. Not teaching your team fancier ways to talk to AI. Not picking the right model or the right vendor. We help you figure out what your AI needs to know about your business—and build the systems that keep that knowledge current, structured, and useful.
Because the companies that win with AI won't be the ones with the best prompts. They'll be the ones whose AI actually understands their business.
The prompt is the easy part. The context is the work.
Geoff Price
Guthrie Blackwell & Co. LLC