Last updated: March 2026 | Reading time: 7 min
Prompt engineering isn't magic. It's just communication — telling an AI what you need clearly enough that it actually gives you something useful instead of corporate word salad. I've written thousands of prompts across ChatGPT, Claude, and Gemini at this point, and I've figured out 15 techniques that consistently produce way better results.
I wrote most of this while waiting for my coffee to cool down, which is fitting because good prompts are about patience and specificity.
Here's the thing: bad prompts fail for three reasons.
- Too vague: "Write me something about marketing" gets you generic garbage.
- No context: The AI doesn't know your audience, tone, or what you actually need it for.
- Wrong format: You ask for an essay when what you really need is a bullet list.
Every technique below fixes one or more of these problems.
Instead of: "Write a product description"
Try: "You are an experienced copywriter who specializes in premium outdoor gear. Write a product description for a $200 titanium water bottle targeting environmentally-conscious millennials who value design."
Why's this better? Giving the AI a specific role constrains it to match the expertise and perspective you actually need. The more specific the role, the better the output gets. I learned this the hard way after getting five generic descriptions before realizing I needed to tell the AI who it was supposed to be.
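If you build prompts programmatically, a role-assignment prompt is just three parts stitched together. Here's a minimal sketch; the function name and field names are my own, not from any library:

```python
def role_prompt(role: str, task: str, audience: str) -> str:
    """Prepend a specific persona so the model adopts its expertise and perspective."""
    return (
        f"You are {role}.\n"
        f"{task}\n"
        f"Target audience: {audience}."
    )

prompt = role_prompt(
    role="an experienced copywriter who specializes in premium outdoor gear",
    task="Write a product description for a $200 titanium water bottle.",
    audience="environmentally conscious millennials who value design",
)
print(prompt)
```

The point of the template is that the role goes first: it frames everything the model generates after it.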
Instead of: "Write a blog post about AI tools"
Try: "Write a blog post about AI tools. Avoid: clichés like 'in today's fast-paced world,' starting sentences with 'It's important to note,' bullet points where paragraphs work better, exclamation marks, the word 'delve' or 'leverage,' and sounding enthusiastic about everything."
This works because AI models default to the most common patterns in their training data. Corporate and marketing writing is everywhere in training data. When you explicitly forbid those patterns, you force more natural, distinctive output. (Honestly, this took me way too long to figure out.)
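The avoid-list is easy to keep in one place and append to every writing prompt. A sketch, with an illustrative banned-phrase list you'd extend yourself:

```python
BANNED_PHRASES = [
    "in today's fast-paced world",
    "It's important to note",
    "delve",
    "leverage",
]

def with_negative_constraints(task: str, banned: list[str]) -> str:
    """Append an explicit avoid-list so the model steers away from stock phrasing."""
    avoid = "; ".join(banned)
    return (
        f"{task}\n"
        f"Avoid: {avoid}. "
        "No exclamation marks, and don't sound enthusiastic about everything."
    )

prompt = with_negative_constraints("Write a blog post about AI tools.", BANNED_PHRASES)
```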
Instead of: "Write tweet threads about AI"
Try: "Write a tweet thread about Cursor IDE. Here are examples of my style:
Thread 1: 'I've been using Notion for 3 years. Here's what actually works and what's just pretty screenshots. A thread. [1/7] The daily note is overhyped. I tried...'
Thread 2: 'Hot take: Most productivity apps make you less productive. Here's the setup I use that's boring but works. [1/5]...'
Now write a similar thread about Cursor IDE, matching my tone — direct, slightly skeptical, focused on practical value."
Examples beat instructions every time. The AI pattern-matches your style, tone, and structure from the examples instead of guessing what you want.
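Few-shot prompts follow a fixed shape: task, examples, style note. A sketch of how you might assemble one from a list of saved examples (all names here are illustrative):

```python
def few_shot_prompt(task: str, examples: list[str], style_note: str) -> str:
    """Show the model concrete samples of the target style before asking for new output."""
    shots = "\n\n".join(
        f"Example {i}:\n{ex}" for i, ex in enumerate(examples, start=1)
    )
    return f"{task}\n\nHere are examples of my style:\n\n{shots}\n\n{style_note}"

prompt = few_shot_prompt(
    task="Write a tweet thread about Cursor IDE.",
    examples=[
        "I've been using Notion for 3 years. Here's what actually works... [1/7]",
        "Hot take: Most productivity apps make you less productive. [1/5]",
    ],
    style_note="Match my tone: direct, slightly skeptical, focused on practical value.",
)
```

Two or three examples are usually enough; past that you're mostly spending tokens.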
Instead of: "What's the best pricing strategy for my SaaS?"
Try: "I'm pricing a B2B SaaS tool that helps recruiters write job descriptions. Think through this step by step: (1) What pricing models exist for this type of product? (2) What are competitors charging? (3) What's the value to the customer in time saved? (4) Based on this analysis, recommend a pricing strategy with specific numbers."
Asking the AI to reason through something step-by-step produces more thorough, actually-useful answers. It forces the model to build toward a conclusion instead of just jumping to whatever first pops out.
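The step-by-step structure is mechanical enough to template. A sketch that numbers the reasoning steps so the model works toward the conclusion in order:

```python
def chain_of_thought(context: str, steps: list[str], goal: str) -> str:
    """Number the reasoning steps so the model builds toward the conclusion in order."""
    numbered = " ".join(f"({i}) {s}" for i, s in enumerate(steps, start=1))
    return f"{context} Think through this step by step: {numbered} {goal}"

prompt = chain_of_thought(
    context="I'm pricing a B2B SaaS tool that helps recruiters write job descriptions.",
    steps=[
        "What pricing models exist for this type of product?",
        "What are competitors charging?",
        "What's the value to the customer in time saved?",
    ],
    goal="Based on this analysis, recommend a pricing strategy with specific numbers.",
)
```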
Instead of: "Analyze this data"
Try: "Analyze this sales data. Present your findings as:
1. Key Finding (one sentence headline)
2. Supporting Data (2-3 specific numbers)
3. Why It Matters (one sentence of business impact)
4. Recommended Action (one specific next step)
Repeat this format for each finding. Maximum 5 findings."
Specifying the exact output format eliminates all the guesswork. You get actionable, consistently structured responses instead of whatever format the AI felt like using.
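A side benefit of a strict format: you can parse the reply mechanically afterward. Here's a rough parser for the four-part finding format above, assuming the model labels each part as the prompt asks (the regexes are a sketch, not robust to every reply):

```python
import re

def parse_findings(text: str) -> list[dict]:
    """Split a reply that follows the four-part finding format into dicts."""
    blocks = re.split(r"\n(?=1\. Key Finding)", text.strip())
    findings = []
    for block in blocks:
        fields = {}
        for label, key in [
            ("Key Finding", "finding"),
            ("Supporting Data", "data"),
            ("Why It Matters", "impact"),
            ("Recommended Action", "action"),
        ]:
            m = re.search(rf"{label}.*?:\s*(.+)", block)
            if m:
                fields[key] = m.group(1).strip()
        if fields:
            findings.append(fields)
    return findings

reply = """1. Key Finding: Q3 revenue dipped 12%
2. Supporting Data: $410k vs $465k in Q2
3. Why It Matters: The dip coincides with the pricing change
4. Recommended Action: Revert enterprise tier pricing
1. Key Finding: Churn is concentrated in monthly plans
2. Supporting Data: 9% monthly vs 2% annual churn
3. Why It Matters: Annual prepay stabilizes revenue
4. Recommended Action: Discount annual upgrades"""

findings = parse_findings(reply)
```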
Instead of: "Write an article about RAG"
Try: Step 1: "Explain how RAG (Retrieval-Augmented Generation) works. Focus on the intuition, not the math."
Step 2: Read the explanation, form your own understanding.
Step 3: "Now help me write an article explaining RAG to product managers who've heard the term but don't understand the technical implementation."
By separating understanding from writing, you end up with content that's actually grounded in real comprehension. The AI adapts its explanation to your audience instead of defaulting to some generic technical explanation it's written a thousand times.
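The two-step flow is easy to wire up as a pipeline. A sketch with a stubbed `ask` function standing in for whatever model API you use (replace it with a real client call):

```python
def ask(prompt: str) -> str:
    """Stub standing in for a real model call; replace with your API client."""
    return f"[model reply to: {prompt[:40]}]"

# Step 1: get the explanation on its own, so you can read and check it first.
explanation = ask(
    "Explain how RAG (Retrieval-Augmented Generation) works. "
    "Focus on the intuition, not the math."
)

# Steps 2-3: feed that explanation into the writing task.
article_prompt = (
    f"Here is an explanation of RAG:\n{explanation}\n\n"
    "Now help me write an article explaining RAG to product managers "
    "who've heard the term but don't understand the technical implementation."
)
draft = ask(article_prompt)
```

The human step in the middle matters: read the explanation before you forward it, or you're just laundering the model's first guess.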
Don't expect perfection on the first try. Just don't. Build through conversation:
1. "Write a first draft of [topic]" — get the raw material
2. "The section on [X] is too generic. Make it more specific with concrete examples" — fix the weak spots
3. "Shorten the introduction to 2 sentences and make the CTA more urgent" — fine-tune
4. "Read this aloud in your head. Flag any sentences that sound awkward" — polish
Each refinement step is easier than getting everything right in one shot. You also catch issues you didn't even think about in the original prompt.
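Refinement only works if every turn sees the full history. A sketch of the loop using a chat-style message list; `send`'s `ask` argument stubs a real chat API call:

```python
def send(history: list[dict], user_msg: str, ask=lambda h: "[revised draft]") -> list[dict]:
    """Append the user turn, get a (stubbed) reply, and keep the full history
    so each refinement builds on the previous draft."""
    history = history + [{"role": "user", "content": user_msg}]
    return history + [{"role": "assistant", "content": ask(history)}]

history: list[dict] = []
for step in [
    "Write a first draft of a launch email for our beta.",
    "The section on pricing is too generic. Make it more specific with concrete examples.",
    "Shorten the introduction to 2 sentences and make the CTA more urgent.",
    "Flag any sentences that sound awkward when read aloud.",
]:
    history = send(history, step)
```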
At the start of a conversation, dump context all at once:
"Before we begin, here's context you need: [describe your business, your audience, your tone of voice, and what you're trying to accomplish].
For the rest of this conversation, match this style and audience level."
Setting context once saves you from repeating it constantly. The AI maintains consistency throughout the whole conversation instead of drifting back to generic tone every few messages.
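If you find yourself retyping the same context, keep it in one place and render it into a preamble. A sketch; the dict keys are whatever facts matter for your work:

```python
def context_block(context: dict[str, str]) -> str:
    """Turn a dict of context facts into a one-time preamble for the conversation."""
    lines = "\n".join(f"- {k}: {v}" for k, v in context.items())
    return (
        "Before we begin, here's context you need:\n"
        f"{lines}\n"
        "For the rest of this conversation, match this style and audience level."
    )

preamble = context_block({
    "Business": "B2B SaaS for recruiters",
    "Audience": "HR managers at mid-size companies",
    "Tone": "direct, practical, no hype",
})
```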
"I'm going to argue that [YOUR POSITION]. Your job is to give me the strongest possible counter-argument — not a strawman, but the genuine best case for the opposing view. Then tell me where my position is weakest."
AI models are trained to be agreeable. That's fine, except when you actually need to be challenged. Explicitly asking for pushback produces more valuable, critical thinking than just asking "what do you think?"
"Extract the following from this text and return as JSON: [list the fields you need, e.g. name, email, company, deadline].
If a field isn't mentioned, use null. Return only valid JSON."
Structured output formats are way more reliable than free-form responses. JSON specifically prevents the AI from padding the response with unnecessary commentary.
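On the receiving end, parse the reply and check that every field you asked for came back. A minimal sketch using the standard library (the field names and reply are made up for illustration):

```python
import json

def parse_extraction(reply: str, fields: list[str]) -> dict:
    """Parse the model's JSON reply and verify every requested field is present
    (the prompt asks for null when a field isn't mentioned in the text)."""
    data = json.loads(reply)
    missing = [f for f in fields if f not in data]
    if missing:
        raise ValueError(f"Model omitted fields: {missing}")
    return data

reply = '{"name": "Acme Corp", "email": "hello@acme.test", "budget": null}'
record = parse_extraction(reply, ["name", "email", "budget"])
```

Note that JSON `null` becomes Python `None`, so "field present but unknown" stays distinguishable from "field missing".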
"Explain [CONCEPT] three ways:
1. For a 10-year-old (use analogies from everyday life)
2. For a business executive (focus on business impact and ROI)
3. For a technical expert (use precise terminology, skip basics)"
Seeing the same concept at multiple levels helps you find the right explanation for your specific audience. It also reveals gaps in the AI's understanding — and your own, honestly.
Start tight, then loosen:
1. "Explain blockchain in exactly 1 sentence."
2. "Now expand that to 1 paragraph."
3. "Now expand the most important concept into 3 paragraphs."
Or start loose, then tighten:
1. "Write everything you know about X" (brainstorm)
2. "Cut that to the 5 most important points"
3. "Write a 200-word summary covering only those 5 points"
Constraints force clarity. It's way easier to expand a clear core idea than to edit something vague and rambling.
"I need to write a prompt that will [GOAL]. Draft the prompt for me, including: role assignment, context, specific instructions, output format, and constraints. Then explain why you structured the prompt this way."
Just let the AI write prompts for you. Seriously. This is especially useful for complex, multi-step tasks where crafting the right prompt is itself a challenge.
After getting output, add: "Rate this output 1-10 for [CRITERIA]. Then rewrite it to be a 9 or 10, explaining what you changed and why."
Self-evaluation makes AI models produce noticeably better second drafts. The model identifies its own weaknesses and actually addresses them.
Add this to any writing prompt: "Before finalizing, check your output for these AI writing tells and remove them: (1) Starting with 'In the realm of' or 'In today's...', (2) Using 'delve,' 'leverage,' 'tapestry,' 'landscape,' or 'crucial,' (3) Ending paragraphs with 'It's worth noting that,' (4) Unnecessary hedge phrases like 'It should be mentioned,' (5) Starting more than 2 bullet points with the same word."
These specific patterns are the most common AI writing tells. Filter them out and you get output that actually reads human.
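You can also run this check yourself after the fact. A minimal detector for a subset of the tells above (extend the pattern list to taste):

```python
import re

# A subset of the checklist above; extend to taste.
AI_TELLS = [
    r"\bin the realm of\b",
    r"\bin today's\b",
    r"\bdelve\b",
    r"\bleverage\b",
    r"\btapestry\b",
    r"\bit's worth noting that\b",
    r"\bit should be mentioned\b",
]

def find_ai_tells(text: str) -> list[str]:
    """Return every banned pattern that appears in the text (case-insensitive)."""
    return [p for p in AI_TELLS if re.search(p, text, flags=re.IGNORECASE)]

sample = "In today's world, teams delve into data to leverage insights."
```

Running `find_ai_tells(sample)` flags three patterns; a clean sentence returns an empty list.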
| Technique | When to Use |
|-----------|-------------|
| Role Assignment | Any task requiring specific expertise |
| Negative Constraint | When output sounds too generic or "AI-like" |
| Few-Shot Examples | When you need a specific style matched |
| Chain of Thought | Complex reasoning or analysis |
| Format Specification | When you need structured, consistent output |
| Iterative Refinement | Complex writing tasks |
| Context Loading | Start of any multi-turn session |
| Steelman Challenge | Decision-making, argument testing |
| Data Extraction | Processing documents, emails, or unstructured text |
| Meta-Prompting | When you're stuck on how to prompt |
The best prompt engineers aren't people memorizing tricks. They're people who communicate clearly. If you can explain what you need to a smart person who knows nothing about your topic, you can write a great AI prompt.
Start with what you want, who it's for, and what good looks like. Everything else is just refinement.
Want all 100 tested prompts organized by use case? Get the AI Prompts Pack — 100 production-ready prompts for writing, business, coding, research, and productivity.
Disclosure: Product links are to my own digital products. Article affiliate links earn me a commission at no cost to you.