The fastest way to get better answers from ChatGPT isn’t “more clever wording”—it’s more structure. This guide gives you a practical prompt recipe you can reuse for almost anything: writing, coding, planning, studying, brainstorming, and decision-making.
Most “generic” AI answers come from missing information. The model is forced to guess what you mean, who you are, what matters, and what “good” looks like. This recipe prevents that by giving the model the minimum set of details it needs to be useful.
1. Goal. State what you want as a deliverable: a plan, a draft, a checklist, a table, a script, or a decision.
2. Context. Explain your situation: who it’s for, what you already tried, and what “success” means.
3. Constraints. Budget, time, tools, tone, length, style, must-include, must-avoid: anything that narrows the space.
4. Format. Ask for structure: bullets, a table, steps, headings, JSON, a checklist, or a template you can reuse.
5. Examples. Provide a draft to improve, or show a “good vs. bad” example. This aligns expectations fast.
6. Iteration. Don’t stop at version 1. Ask for alternatives, critique, and revisions until it matches your intent.
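The six steps above can be wired together mechanically. Here is a minimal Python sketch (the helper name `build_prompt` and the field labels follow this article’s recipe, not any official API) that assembles the parts into one prompt and skips any you leave out:

```python
def build_prompt(goal, context, constraints, fmt, examples=None, iteration=None):
    """Assemble the recipe parts into a single prompt string.

    Empty parts are skipped so short prompts stay short.
    """
    parts = [
        ("Goal", goal),
        ("Context", context),
        ("Constraints", constraints),
        ("Format", fmt),
        ("Examples", examples),
        ("Iteration", iteration),
    ]
    return "\n".join(f"{label}: {value}" for label, value in parts if value)

prompt = build_prompt(
    goal="12 ideas to get the first 100 users for a habit app.",
    context="Audience is busy students; budget is 200 EUR.",
    constraints="Doable in 2 weeks; avoid paid ads.",
    fmt="A table with idea, cost, effort, expected impact.",
    iteration="Recommend the top 3 and explain why.",
)
print(prompt)
```

Paste the result into the chat as-is; the labels do the structuring for you.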
Large language models are pattern engines: they map your request to patterns in their training data and produce a likely continuation. If you give a vague request, the model picks “average” patterns, which reads as generic. By stating your goal, context, and constraints, you reduce the number of plausible interpretations, and the model can focus on the kind of output you actually want.
The format step is surprisingly powerful. It converts an open-ended conversation into a task with a “definition of done.” A table forces comparisons. A checklist forces action. A template forces reusability. If you want more examples, browse work & productivity prompts or all prompt lists.
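One way to make that “definition of done” enforceable is to ask for JSON and then check the reply in code. A minimal sketch, using a hypothetical hand-written model reply (the keys are illustrative):

```python
import json

# A hypothetical model reply after asking:
# "Return JSON with keys: idea, cost, effort, impact."
reply = '{"idea": "Reddit AMA", "cost": 0, "effort": "low", "impact": "medium"}'

data = json.loads(reply)  # raises ValueError if the reply is not valid JSON
missing = {"idea", "cost", "effort", "impact"} - data.keys()
assert not missing, f"model omitted: {missing}"
```

If parsing or the key check fails, that is your cue to iterate: tell the model exactly which field was missing and ask for a corrected reply.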
Bad: “Give me marketing ideas for my app.”
Better:
Act as a growth marketer.
Goal: 12 ideas to get the first 100 users for a habit app.
Context: audience is busy students; we have €200; we can post on TikTok and Reddit.
Constraints: ideas must be doable in 2 weeks; avoid paid ads.
Format: a table with idea, cost, effort, expected impact.
Iteration: recommend the top 3 and explain why.
Better:
Goal: Build a weekly study plan for learning Python.
Context: I know basic programming; I can study 45 minutes/day.
Constraints: I learn best by doing projects; avoid long lectures.
Format: a 4-week schedule with daily tasks and mini-projects.
Iteration: add a “catch-up day” each week if I fall behind.
Better:
Write a 900-word blog post about [topic].
Audience: busy founders. Tone: friendly, practical, not hypey.
Constraints: include 3 examples; avoid buzzwords; end with a 5-bullet summary.
Format: H2 headings + short paragraphs.
Iteration: suggest 5 headline options and 3 alternate intros.
Treat AI output like a draft. The first response is rarely perfect because the model doesn’t yet fully grasp your intent. A simple iteration loop is:
Give me 5 options for [goal].
Then pick the best option for [constraints].
Then produce a step-by-step plan and a risk checklist.
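The loop above maps directly onto the chat-message lists most chat APIs accept. A minimal sketch, assuming the common role/content message shape; the assistant replies here are placeholders, not real model output:

```python
def add_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange to a chat-style history list."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

# Each follow-up sees the full history, so the model keeps earlier
# constraints in mind across the loop.
history = [{"role": "system", "content": "You are a concise planning assistant."}]
add_turn(history, "Give me 5 options for reaching our first 100 users.",
         "(model lists 5 options)")
add_turn(history, "Pick the best option for a 200 EUR budget and explain why.",
         "(model picks and justifies one option)")
add_turn(history, "Produce a step-by-step plan and a risk checklist for it.",
         "(model outputs plan and checklist)")
```

The point of the structure is that each follow-up narrows the previous answer instead of starting over.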
If you’re doing anything high-stakes (money, health, legal, security), add: “List assumptions and label confidence (high/medium/low). Then give a verification checklist.” See How to Reduce Hallucinations for deeper patterns.
Here are mini-recipes for common tasks. Each one uses the same structure but emphasizes different parts. You can copy and adapt these.
Best emphasis: constraints + format.
Act as a project manager.
Goal: [deliverable].
Context: [team/situation].
Constraints: [deadline/budget/tools].
Format: milestones + weekly plan + risks + KPIs.
Best emphasis: audience + tone + examples.
Rewrite this for [audience] in a [tone] voice.
Keep key points, remove fluff, and improve structure.
Text: [paste]
Best emphasis: format + iteration (quiz).
Teach me [topic] step-by-step:
1) simple explanation
2) worked example
3) 5-question quiz + corrections
4) next practice tasks.
Best emphasis: environment + tests + edge cases.
Act as a senior engineer.
Given this code and error:
1) explain cause
2) propose fix
3) provide corrected code
4) add tests and edge cases.
Code: [...]
Error: [...]
Want more examples by category? Browse: funny prompts, deep prompts, LLM limit tests, and daily prompts.
Why won’t it just pick the “best” option? Because “best” depends on what you value (speed, cost, quality, risk, simplicity). Fix this by stating your priorities and requesting a decision matrix.
I’m choosing between [A], [B], [C].
My priorities: [1], [2], [3].
Make a decision matrix and recommend one.
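Under the hood, a decision matrix is just weighted scoring, which you can sanity-check yourself. A minimal sketch with made-up weights and scores (the options and priorities are illustrative):

```python
# Priority weights (higher = matters more) and illustrative 1-5 scores.
weights = {"speed": 3, "cost": 2, "quality": 1}
scores = {
    "A": {"speed": 4, "cost": 2, "quality": 5},
    "B": {"speed": 3, "cost": 5, "quality": 3},
    "C": {"speed": 5, "cost": 3, "quality": 2},
}

def total(option):
    """Weighted sum of an option's scores across all priorities."""
    return sum(weights[p] * scores[option][p] for p in weights)

best = max(scores, key=total)
print(best, total(best))  # prints: C 23
```

Asking the model to show its scores and weights like this makes the recommendation auditable: if you disagree with a weight, change it and re-rank.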
Why are the ideas so generic? Without constraints, the model defaults to safe, common ideas. Add a time limit, a budget, and an audience. If you don’t know your constraints, ask the model what it needs to know first.
Why is the first answer mediocre? Because version 1 is just a draft. Ask for critique, alternatives, and a tighter rewrite; this is where quality comes from. If you want a grab bag of reusable templates, see Prompt Templates.
What prompt structure works best? A reliable structure is Goal, Context, Constraints, Format, Examples, and Iteration. It reduces ambiguity and forces usable output.
How long should a prompt be? Long enough to include the key context and constraints. Short prompts can work if they’re specific; longer prompts help when tasks are complex or accuracy matters.
How do I get less generic answers? Add constraints, request a concrete format (checklist/table), and ask for multiple options. Then iterate by asking for critique and a revision.
How do I reduce hallucinations? Ground the model in your data, request assumptions and confidence labels, and ask for a verification checklist. Learn more in How to Reduce Hallucinations.