Lost in Context: Why your AI needs context, not just prompts
Context Engineering is World-Building for AI
Same character, different context: Thandiwe Newton as Maeve in HBO's Westworld "Akane No Mai"
Sometimes when we meet people, I wish it would play out like an episode from Euphoria season 1.
Allow me to explain. How amazing would it be to get a 10-15 minute backstory on someone, their origin story in effect, soon after you’re introduced? A flashback that helps reframe them, and maybe recontextualizes what you thought you knew about them.
Rue’s addiction wasn’t just a teenager being rebellious; it started when she was spending time with her ailing father, which led to her sneaking his pills. We understand her better after observing this. (Bonus, the casting of the younger characters was beyond reproach.)
This is exactly what context engineering does for AI.
Still Playing Prompt Roulette
I’m raising my hand; I’m guilty of this. Many of my newsletters have a prompt library at the end. I’m still sending prompts to fellow Substackers, to help them answer questions I think an AI is better suited to answer.
But if you have any AI-flavored posts in your social media feeds, you can already sense a landscape shift taking place: there’s something beyond the “art of the prompt” that yields better outcomes.
I always assume there’s more to gain from using AI, and that accessing that gain rests on me: how am I approaching it? There's a time and place for lazy prompting - sometimes you want that quick, generic response. But when you need your AI to consistently perform complex tasks or maintain quality across interactions, that's when context engineering becomes essential.
I was going to dive into creating an agent next, but this has piqued my interest first. Prompts taught us how to communicate with AI. Context engineering teaches us how to build relationships with AI.
Context Engineering = World-Building for AI
I work in marketing, so I’m a storyteller by trade. The other day I looked at some of my Notes and thought “sheesh, those are long.” But, that’s also just how long it took to make my point, to tell my story.
Storytelling Is Context Engineering. As storytellers, we instinctively understand that context is everything. Who was there, what was their motivation, why now?
You wouldn't drop a character into a story without backstory, world-building, or context for their actions. The most compelling stories (The Lord of the Rings, Game of Thrones, Star Wars, Westworld) are the ones with fully built-out worlds. Sometimes those worlds also span multiple generations, providing more history and lineage.
And who doesn’t love a prequel or a sequel to a really fantastic story?
So why don’t we do any world-building with AI? We prompt for answers, but shouldn't we also explain why we're prompting?
Context engineering is literally world-building: creating the environment your AI needs to "live" and perform in.
The Storytelling Framework Applied to AI
I’m assuming almost everyone here on Substack loves a good story. Either reading one, writing one, or both. And so most of this will come naturally to you.
In Storytelling, you build:
Character backstory → AI persona/role definition
World history → Conversation history & memory
Setting details → Background data & documents
Genre conventions → Rules and behavioral guidelines
Plot context → Current task/immediate request
Gradual reveals → Layered information delivery
The magic is in the texture - the details that make worlds feel real and AI responses feel intelligent.
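If it helps to see that mapping as structure, here’s a minimal sketch in Python of how those elements could be layered into a single context block that sits in front of the prompt itself. Every field name and example value is made up for illustration; the shape is the idea, not a spec.

```python
# A rough sketch, not a prescribed format: each storytelling element
# becomes one section of the context handed to the model before the prompt.
# All field names and example values here are hypothetical.

story_world = {
    "character_backstory": "You are a marketing copy editor with ten years in B2B SaaS.",
    "world_history": "Last week we agreed the Q3 launch email felt too formal.",
    "setting_details": ["brand_guidelines.pdf", "q3_launch_brief.docx"],
    "genre_conventions": "Plain language, no jargon, always offer one bolder alternative.",
    "plot_context": "Tighten the subject lines for the Q3 launch email.",
}

def build_context(world: dict) -> str:
    """Layer the world into one block: stable backstory first, immediate task last."""
    sections = []
    for key, value in world.items():
        body = ", ".join(value) if isinstance(value, list) else value
        sections.append(f"## {key.replace('_', ' ').title()}\n{body}")
    return "\n\n".join(sections)

print(build_context(story_world))  # this context block goes in front of the actual prompt
```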
Is This Just for Engineers?
If you’re a reader of Natural Intelligence, one of the recurring themes is “you already know this!” So, good news: you're already a context engineer too.
Every time you tell a story, you're managing context. You inherently understand character development, world consistency, and layered information. And these skills transfer directly to working with AI systems.
It's more about understanding your needs, your use case, than it is about technical coding.
When I think of world-building, I remember the audible gasp I made when we got a glimpse of “Shogun World” in HBO’s Westworld. And then eventually we got a full-blown episode there, in Akane No Mai, which is still one of my favorites of the entire series.
If you’re similarly a fan of the show, you can think of how the following context informed that series and could apply in AI world-building. For example:
Create character sheets for your AI personas (Maeve operates differently in Westworld vs. Shogun World - same core, different context)
Build world atlases for your specific use cases (Wild West saloon rules vs. Edo period protocols)
Layer information like the hosts' backstories (memories that feel real and inform every interaction)
Establish rules like Westworld's narrative loops (what can happen, what can't, how should conflicts resolve)
Think systems, not one-offs - different parks, same underlying technology
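To make the “same core, different context” point concrete, here’s a loose sketch of one character sheet cast into two different parks. All of the names, fields, and rules here are invented for illustration.

```python
# Hypothetical "same host, different park" sketch: one core persona,
# two worlds layered on top of it. Every name and field here is invented.

CORE_PERSONA = {
    "role": "research assistant",
    "voice": "direct, curious, cites sources",
    "never": ["invent statistics", "overpromise timelines"],
}

PARKS = {
    "internal_strategy": {
        "audience": "our own leadership team",
        "rules": "candid, competitors can be named, internal data allowed",
    },
    "client_facing": {
        "audience": "prospective customers",
        "rules": "polished, no internal data, stick to published case studies",
    },
}

def cast_into_park(park_name: str) -> dict:
    """Same character sheet, re-contextualized for a specific world."""
    return {**CORE_PERSONA, "world": PARKS[park_name]}

maeve_in_westworld = cast_into_park("internal_strategy")
maeve_in_shogun_world = cast_into_park("client_facing")
```

Same template, different worlds - which is exactly the “different parks, same underlying technology” idea.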
There isn’t a need to save random prompts anymore; it’ll be more impactful to save contextual cues that build your AI worlds. Instead of trying to figure out the right prompt, the question becomes “what additional context can I give?”
You get to think like a showrunner managing a series playbook, or that rabid Reddit commenter who has built season-by-season compendiums.
Or maybe you are an engineer - there’s always a Substack for that. Check out this Practical Guide To Context Engineering.
I Still Don’t Get It
Maybe that was too philosophical or you’ve never seen Westworld. And while I’d love to demo an example for you, that’s challenging without screenshotting… a lot.
I kept trying to create dramatic before/after examples, but kept getting solid results from both a generic prompt and a context-engineered one. Because that’s the thing: context engineering isn’t about fixing generic prompts. It’s not for a single scene; it’s the context you provide to build an entire episode, a full series arc, and even a movie trilogy.
The best real-world example I have to date? How about my transition from using ChatGPT as my primary GenAI to using Claude?
Here’s what I built with ChatGPT and with Claude, in the form of JSON files that Claude can connect to:
Character sheets: My professional profile, communication preferences
World atlas: My business memory system, project contexts
Backstory/memory: ChatGPT conversation history, past work together
Rules and guidelines: How I want feedback, my workflow preferences
Persistent context that now carries across sessions
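For the curious, here’s roughly what that kind of setup could look like. This is a simplified sketch rather than my actual files; every filename and field below is illustrative.

```python
# A simplified sketch of persistent context files an assistant reads at the
# start of each session. Every filename and field here is illustrative.
import json
from pathlib import Path

CONTEXT_DIR = Path("ai_universe")
CONTEXT_DIR.mkdir(exist_ok=True)

files = {
    "character_sheet.json": {
        "professional_profile": "marketing lead, B2B SaaS",
        "communication_preferences": ["bullet points first", "no corporate jargon"],
    },
    "world_atlas.json": {
        "active_projects": {"newsletter": "weekly, Tuesdays", "rebrand": "Q4 launch"},
    },
    "backstory.json": {
        "imported_history": "summaries of prior ChatGPT conversations",
    },
    "rules.json": {
        "feedback_style": "blunt, with one concrete suggestion per critique",
    },
}

for name, content in files.items():
    (CONTEXT_DIR / name).write_text(json.dumps(content, indent=2))

# A new session loads the same files back in, so the universe persists
# instead of being re-explained in every chat.
persistent_context = {p.stem: json.loads(p.read_text()) for p in CONTEXT_DIR.glob("*.json")}
```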
This is my JHong-specific AI universe that makes my AI interactions more effective than generic Claude usage.
It required a bit more time upfront, but now every chat is more targeted, more useful. I don’t have to re-explain my background, goals, or preferences. And Claude remembers my universe between sessions.
Closing Thoughts
I gave Claude my Euphoria backstory. My origin story. And so far, it’s been showing up like it knows me well.
And while I haven’t brought this to work yet, I can imagine how context engineering becomes interesting for supporting enterprise-level marketing:
Crisis Communications AI - Brand voice guidelines, stakeholder mapping, past crisis responses, legal constraints, approval workflows, regional considerations
Global Campaign Localization - Brand book, cultural considerations by market, competitor landscape, regional legal restrictions, approved imagery libraries
Partnership Content - Co-brand guidelines, approval hierarchies, legal partnership terms, brand equity protection rules, past successful collaborations
Social Media at Scale - Platform-specific brand voice, community management escalation rules, trending topic response protocols, influencer collaboration guidelines
I'm pretty excited to build an entire movie and a few sequels using this. Turns out, the future of AI isn't about finding the perfect prompt - it's about becoming better storytellers for our digital collaborators.


