A Collection about Prompting
While taking a Google course, I came across a simple prompting pattern. It wasn’t technical. It wasn’t complicated. But for some reason, it made me pause.
I realized most of the time when I didn’t like ChatGPT’s answer, it wasn’t because the AI was “bad.”
It was because I was being vague.
Since then, I’ve been using this pattern as a mental checklist whenever I want better results.
The Pattern
1. Task – What exactly you want the AI to do
2. Context – Why you want it or how the output will be used
3. References – Examples or styles to follow (if you have them)
4. Evaluate – Check if the output makes sense and is accurate
5. Iterate – Adjust and ask again until it works for you
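If you like to script your prompts, the first three steps translate naturally into a small helper. This is only a sketch, and `build_prompt` and its field names are my own illustration, not part of any official framework; Evaluate and Iterate happen after you read the response, so they are not part of the prompt text itself.

```python
def build_prompt(task, context="", references=""):
    """Assemble a prompt from the first three steps of the pattern.

    task: what you want the AI to do (required).
    context: why you want it or how the output will be used.
    references: examples, styles, or constraints to follow.
    """
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if references:
        parts.append(f"References: {references}")
    # Blank lines keep the sections visually separate in the prompt.
    return "\n\n".join(parts)


prompt = build_prompt(
    task="Please plan a 7-day trip to Tokyo for 4 people "
         "(2 adults and 2 kids) traveling from Atlanta, GA in early June, "
         "with a budget of around $10,000.",
    context="We want a relaxed, family-friendly trip. The kids are 6 and "
            "10, so the schedule should not be too packed.",
    references="Focus on kid-friendly attractions, simple transportation "
               "instructions, and an estimated daily budget. Avoid luxury "
               "hotels and fine dining.",
)
print(prompt)
```

The point of the helper is not the code itself; it is that writing the prompt as three labeled parts forces you to fill in each one before you hit enter.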
It looks simple. But when I actually started applying it, everything changed. Let me show you with a real example.
A Real Moment: Planning Our Tokyo Trip
One night, I was sitting on the couch thinking about our summer trip to Japan. I opened ChatGPT and typed:
Plan a trip to Tokyo
The response wasn’t wrong. But it felt generic, like something copied from a travel website. Here’s how I applied the pattern to make my prompt better:
Step 1: Task - Be Clear About What You Want
So I rewrote it:
Please plan a 7-day trip to Tokyo for 4 people (2 adults and 2 kids) traveling from Atlanta, GA in early June. Our budget is around $10,000.
Immediately, this felt different.
Now ChatGPT knew:
- We are a family
- We’re flying from Atlanta
- We’re going in early June
- We have a budget
- We’re staying 7 days
More constraints. More clarity. And the answer improved.
Step 2: Context - Explain the Situation
Then I realized something else.
This isn’t just a trip. This is a trip with a 6-year-old and a 10-year-old.
So I added:
We want a relaxed, family-friendly trip. The kids are 6 and 10 years old, so we don't want the schedule to be too packed.
Now ChatGPT understands:
- This is not a fast-paced adult trip
- The schedule needs to be realistic
- Activities should be suitable for young kids
This changes the tone of the itinerary completely.
Step 3: References - Guide the Output
Then I guided it even more:
Please focus on kid-friendly attractions, simple transportation instructions, and include an estimated daily budget. Avoid luxury hotels and fine dining.
Now the output became practical.
Instead of:
Explore Tokyo’s vibrant culinary scene
It said:
- Which train lines to take
- How long travel might take
- Rough daily cost estimates
That’s when I realized: small instructions make big differences.
Step 4: Evaluate - Don’t Just Accept It
The first improved version still wasn’t perfect.
Some days looked too busy. One day had 4 major attractions — unrealistic with kids.
So instead of thinking “ChatGPT isn’t good,” I asked myself:
- Would my kids actually enjoy this?
- Is this too much walking?
- What if it rains?
That’s when Step 5 comes in.
Step 5: Iterate - Refine It
I wrote:
Day 3 looks too busy. Can you slow it down and suggest one indoor activity in case it rains?
And just like that, it adjusted.
Slower pace. Backup option. More realistic.
That’s when it clicked for me:
You don’t need the perfect prompt at the beginning. You build it.
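Iteration means keeping the conversation going, not rewriting the original prompt from scratch. With a chat-style API, the history is just a list of role/content messages, and each refinement is appended as a new user turn. The messages below are illustrative placeholders, not real model output:

```python
# Turn 1: the improved prompt from steps 1-3.
conversation = [
    {"role": "user", "content": (
        "Please plan a 7-day family-friendly trip to Tokyo for 4 people "
        "(2 adults and 2 kids, ages 6 and 10) traveling from Atlanta, GA "
        "in early June. Our total budget is around $10,000."
    )},
]

# The model replies with a first itinerary (placeholder text here).
conversation.append({"role": "assistant", "content": "<first itinerary>"})

# Step 5: iterate by pointing at the specific problem, so the model
# keeps everything else and only fixes the busy day.
conversation.append({"role": "user", "content": (
    "Day 3 looks too busy. Can you slow it down and suggest one indoor "
    "activity in case it rains?"
)})
```

Because the earlier turns stay in the history, the follow-up can be short: the model already knows the family, the budget, and the dates.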
The Final Version of My Prompt
After refining it step by step, this is what it looked like:
Please plan a 7-day family-friendly trip to Tokyo for 4 people (2 adults and 2 kids, ages 6 and 10) traveling from Atlanta, GA in early June. Our total budget is around $10,000. We prefer a relaxed schedule with kid-friendly attractions. Include transportation tips and an estimated daily budget breakdown. Avoid luxury recommendations.
Clear. Specific. Aligned with reality.
And the result felt completely different from that first vague request.
What I’ve Learned
Most of the time, better answers don’t require a smarter AI.
They require clearer thinking.
The difference between a bad answer and a great answer is usually clarity. So now whenever I use AI — whether for travel planning, writing, coding, or learning — I slow down and ask myself:
- Did I define the task clearly?
- Did I explain the context?
- Did I guide the output?
- Did I review it?
- Did I refine it?
It’s a small shift, but it changes everything.
Better prompts → better answers.
