Good Prompt Hunting: What Google, OpenAI, and Anthropic say about how to actually talk to your AI
Skip the TikTok hacks (just for a minute). I read the official prompt guides so you don’t have to.
Photo by Saad Ahmad on Unsplash
I wrote this back in June, but in light of renewed interest in how AI models feel - not just how they perform - let’s revisit the fundamentals. Updated to include GPT-5’s model notes.
Maybe because I’m closing out my “summer of upskilling,” I’ve rediscovered the value of learning formally - rather than just from doing (or letting TikTok teach me things in 60-second bursts). So, in the spirit of structured learning, I decided to go straight to the source and read the official prompt guides from the major platforms.
Below is some advice, my take on who each guide is really for, and even some thoughts on who likely wrote them.
Gemini by Google
I’ve only used Gemini as the model behind AI-powered Google search results. But I wanted to understand how it behaves as an LLM, so I dug into three things: Google’s public “Prompting Basics” page, its internal guide for Gemini for Workspace users, and its prompt engineering white paper.
Gemini for Workspace can access all your Google documents. For example, if you jotted down notes in a Google Doc, you could say:
“Review my meeting notes and create three actionable takeaways for my team.”
This skips the usual step of uploading or copy-pasting content into your AI, which saves friction - especially if you're already embedded in the Google ecosystem, since it can see whatever is in your Drive. Google has since folded its Workspace AI features - in Gmail, Docs, and Sheets - under the Gemini umbrella, though enterprise adoption still lags behind Microsoft 365 in most orgs.
Some highlights from the page and guide, respectively:
The best prompts use around 21 words, even though most users average just nine
The guide is structured by job function - I explored the Marketing and Executive sections
The Marketing section offers ~15 prompts tailored for managers
The Executive section includes CMO-level prompts for market research, campaign planning, and brainstorming off meeting notes
The CMO examples were mildly strategic, but not particularly sophisticated - more like a good starting point than something you’d actually hand to a leadership team.
These two guides are very basic: no jargon, easy to digest. But I personally would like something with more sophistication and more practical tips. I suspect they're human-written, probably by the product marketing team or HR, and that they assume the reader is a newly onboarded employee.
On to the 69-page white paper. If you read Training Your Dragon, its Prompt 101 section included a similar structure for good prompting:
[Role], [Task], [Constraints], [Output Format]
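A made-up example of mine (not one from the white paper) that follows that skeleton:

“You are a film critic [Role]. Review Good Will Hunting [Task]. Keep it under 150 words and skip the spoilers [Constraints]. Give me three bullet points: acting, writing, verdict [Output Format].”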
Google references three prompting styles - Zero-shot, Few-shot, and Chain-of-Thought (CoT) - terms that came out of the research literature and are now used across every major platform.
Here’s a working example:
Zero-shot = Is Good Will Hunting a good movie?
Few-shot = Here are three examples of how I evaluate movies... Now apply that to Good Will Hunting.
Chain-of-thought = Break down the pros and cons of Good Will Hunting - acting, writing, pacing - and explain whether it holds up today.
Or, in human terms:
Zero-shot is like asking a stranger off the street.
Few-shot is asking a friend - but first explaining what you mean by “good movie.”
Chain-of-thought is asking a film major to defend their senior thesis.
Reading this made me realize: I’ve been using few-shot prompting while editing this newsletter - I just didn’t know it had a name. My posts always follow the same structure: a movie-inspired title, short descriptor, the main body, etc.
ChatGPT has picked up on this (and I guess Gemini would as well). It often asks if I’d like it to generate a description. I still like mine better (this is a human-written newsletter, though AI-checked), but it is great for checking character counts against each placement’s limit. And this shows how structure and predictable logic help AI generate - and proactively suggest - outputs.
Lastly, for multimodal inputs, tell Gemini what to focus on. If, for example, you upload a chart alongside text, tell it to analyze the chart specifically and say what you’d like to learn from it. Iteration is normal, and when in doubt, rephrase - Gemini can also do that for you, since it can generate and refine prompts.
My guess? This one wasn’t written by marketing. It feels authored (or at least shaped) by Gemini’s internal research team - engineers or applied scientists who’ve been living inside the model. It’s technical but readable, and clearly meant to lay the groundwork for prompting as a real discipline. You can feel the intention behind it: not just usage tips, but best practices for AI as a collaborator.
So Google didn’t just drop one prompt guide - they dropped a stack.
For casual users: the in-app Gemini guide
For Workspace users: the productivity-focused prompt patterns
And for engineers and advanced builders: a 69-page whitepaper on prompt engineering
Three tiers with the tones to match. One implicit message: prompting is becoming its own discipline.
ChatGPT by OpenAI
With the release of GPT‑5, OpenAI didn’t just launch a new model - they released a full prompting playbook (“cookbook” style - more on that later). There’s now a 50-page guide and a Prompt Optimizer in the Playground to help refine structure, sequence, and outputs. You can even adjust verbosity, reasoning depth, and routing - useful as GPT‑5 adapts in real time to your task.
As for the OG guide, OpenAI’s online resource starts with message roles (system, user, assistant) - something most browser users will never touch. It’s written for developers working through the API. If you're using ChatGPT in the browser, you can skip ahead. The core principles remain: break tasks into steps, state your goals clearly, and define your desired output. Zero-shot, few-shot, and chain-of-thought are all named and explained.
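If you’re curious what those roles look like in practice, here’s a minimal sketch using OpenAI’s Python SDK - the model name and prompts are my own placeholders, not examples from the guide:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from your environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # The system message sets the role; the user message is the actual ask.
            {"role": "system", "content": "You are a concise film critic."},
            {"role": "user", "content": "Is Good Will Hunting a good movie? Answer in two sentences."},
        ],
    )
    print(response.choices[0].message.content)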
One funny thing about the guide: parts of it read like they were written by a language model with a PhD in Poetry. Words like “asynchronously” and “recursively” are common in the AI world - but they’re still abstract to the average user. If Gemini veers too simple, ChatGPT veers a little jargony.
(In plain English: recursive means the AI loops back over its own output to rethink it; asynchronous means it does that in the background, without waiting on each step to finish.)
Here’s another fun excerpt:
“Section summaries can be concatenated and summarized, producing summaries of summaries.”
And this is why I don’t think AI should ever write a newsletter about AI: too many multisyllabic words in sequence.
That said, OpenAI’s “cookbooks” - which offer applied examples - are easier to read and worth checking out if the main guide feels too academic. My guess here: predominantly LLM-written, with some light oversight from a technical human or product marketer. And that makes sense for an audience of engineers and developers.
Claude by Anthropic
Next up is Claude, and yes - some of the prompt engineering content here is aimed at coders. But there's still plenty of guidance that non-technical users can learn from.
Anthropic offers a few standout tools:
Console - a prompt generator you can use directly in your browser
Templates - prebuilt for different use cases
Prompt Improver - which edits your prompts to make them clearer and more effective
I appreciated how thoughtfully this guide was written. It reads like it was shaped by technical writers - there’s structure and clarity, but no unnecessary jargon.
The core advice is familiar by now:
Be clear and direct
Use multi-shot prompting (their term for what OpenAI calls few-shot)
Let Claude “think” through problems - a version of Chain-of-Thought prompting
Claude also now supports tone adjustments - you can ask it to respond more casually, diplomatically, or even "with more warmth." This puts even more power in the hands of the prompt writer, especially in high-context or human-centered work.
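For instance (my phrasing, not an example from their docs), you might ask:

“Rewrite this feedback for a junior designer - keep it honest, but make it warmer and more encouraging.”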
Of the three, Claude feels the most like it wants to help you get better at prompting - not just get an answer. My guess is it’s LLM-drafted but heavily structured by technical writers. It feels written for thoughtful professionals and curious builders - people who want a collaborative AI, not just a transactional one.
Final Thoughts
You don’t need to speak fluent tech to prompt well. You just need clarity of thought, a little structure, and a sense of what you’re actually asking for.
That’s the thread all three guides share:
Good prompting isn’t about tricking the machine. It’s about thinking clearly enough to ask the right questions.
And maybe that’s the point: prompting is less about tools and more about thinking out loud. If that’s the case, we might all get better at both. Good prompting is just clear thinking, turned inside out.