How to Write Better AI Prompts: A Practical Guide for 2026
The difference between a useless AI response and a genuinely helpful one almost always comes down to the prompt. Not the model, not the subscription tier, not some secret trick -- just how clearly you tell the AI what you actually want.
I've spent the last two years building AI automation for UK businesses, and the single biggest unlock for every client has been the same thing: better prompts. Not longer prompts. Not more complicated prompts. Just clearer ones.
Here are five techniques that consistently produce better results across ChatGPT, Claude, Gemini, and every other model worth using. Each one includes a before-and-after example so you can see the difference immediately.
Technique 1: Set the Role and Context First
The most common prompting mistake is jumping straight to your question without giving the AI any context about who it's supposed to be or what situation it's in.
When you set a role, you're not just playing dress-up. You're activating a specific subset of the model's training data. A "senior financial analyst" draws on different patterns than a "primary school teacher," even when answering the same question.
Before
What should I do about my business cash flow?
This gets you a generic article about cash flow management. Textbook stuff. Useless for your actual situation.
After
You are a UK small business financial adviser specialising in service businesses
with 5-20 employees.
My consultancy does £40k/month revenue but cash flow is tight because clients
pay on 30-day terms and I have £8k/month in fixed costs (staff, software, office).
I have £12k in the bank and £25k in outstanding invoices.
What are the 3 most impactful things I should do this week to improve my
cash position?
Now you get specific, actionable advice grounded in your actual numbers. The role-setting ("UK small business financial adviser") ensures the advice accounts for UK-specific factors like VAT, HMRC payment plans, and invoice factoring options available here.
Why This Works
Context eliminates ambiguity. The model doesn't have to guess whether you're a startup founder, a multinational CFO, or a student doing homework. It narrows the response space to what's actually relevant to you.
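If you use a chat API rather than a web interface, the role naturally belongs in the system message and the situation plus question in the user message. The snippet below is a minimal sketch of that split; the `{"role": ..., "content": ...}` message shape is the common chat-API convention, and `build_role_prompt` is a hypothetical helper name, not part of any vendor's SDK.

```python
def build_role_prompt(role: str, context: str, question: str) -> list[dict]:
    """Put the role in the system message; the situation and the ask in the user message."""
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": f"{context}\n\n{question}"},
    ]

messages = build_role_prompt(
    role=(
        "You are a UK small business financial adviser specialising in "
        "service businesses with 5-20 employees."
    ),
    context=(
        "My consultancy does £40k/month revenue but cash flow is tight: "
        "clients pay on 30-day terms, fixed costs are £8k/month, I have "
        "£12k in the bank and £25k in outstanding invoices."
    ),
    question="What are the 3 most impactful things I should do this week?",
)
```

Keeping the role in the system message means you can reuse it across a whole conversation while only the user message changes.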
Technique 2: Specify the Output Format
If you don't tell an AI what format you want, you'll get whatever it defaults to -- usually a wall of text with some bullet points. If you need a table, say so. If you need code, say so. If you need something you can paste into an email, say exactly that.
Before
Compare the pros and cons of React and Vue for a small project.
You'll get a decent but unfocused essay. Good luck extracting the key points quickly.
After
Compare React and Vue.js for a small internal business tool (5-10 pages,
2 developers, 6-month timeline).
Present the comparison as:
1. A table with columns: Criteria | React | Vue | Winner
2. Cover these criteria: learning curve, ecosystem size, hiring availability,
bundle size, TypeScript support, long-term maintenance
3. End with a single clear recommendation with one sentence explaining why.
You get a structured, scannable comparison that actually helps you make a decision. The table format means you can paste it straight into a Slack message or proposal document.
Why This Works
LLMs are excellent at following format instructions. They just need you to be explicit. Think of it like briefing a freelancer -- you wouldn't say "write me something about React vs Vue" and hope for the best. You'd specify the deliverable.
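If you send the same kind of comparison prompt repeatedly, the format block is worth templating so you never forget it. This is a small sketch under that assumption; `add_format_spec` is a hypothetical helper name, and the wording mirrors the "After" example above.

```python
def add_format_spec(task: str, columns: list[str], criteria: list[str]) -> str:
    """Append explicit output-format instructions to a comparison task."""
    return (
        f"{task}\n\n"
        "Present the comparison as:\n"
        f"1. A table with columns: {' | '.join(columns)}\n"
        f"2. Cover these criteria: {', '.join(criteria)}\n"
        "3. End with a single clear recommendation and one sentence explaining why."
    )

prompt = add_format_spec(
    "Compare React and Vue.js for a small internal business tool.",
    columns=["Criteria", "React", "Vue", "Winner"],
    criteria=["learning curve", "ecosystem size", "TypeScript support"],
)
```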
Technique 3: Use Examples (Few-Shot Prompting)
Showing the AI an example of what you want is almost always more effective than describing it. This is called "few-shot prompting" -- you give one or more examples (shots) of the input-output pattern you're after, and the model matches that pattern.
Before
Write product descriptions for my online store.
Generic, bland, could-be-any-store descriptions. Not useful.
After
Write product descriptions for my UK outdoor clothing store. Match this
style and format exactly:
EXAMPLE:
Product: Cairngorm Waterproof Jacket
Description: Built for Scottish winters, tested in worse. The Cairngorm
sheds rain like a collie sheds hair -- reliably and without complaint.
Fully taped seams, adjustable hood that actually stays up in wind, and
pockets deep enough for an OS map. Not the cheapest jacket on the rack,
but the one you'll still be wearing in five years.
Price: £189
Now write descriptions for:
1. A merino wool base layer (£45)
2. A pair of hiking boots (£129)
3. A lightweight packable down jacket (£165)
The model picks up your tone (practical, slightly humorous, UK-centric), your structure (name, description, price), and your style (specific details, honest comparisons). Every output matches your brand voice because you showed it rather than told it.
Why This Works
Examples eliminate interpretation. Instead of the model choosing from thousands of possible styles for "product description," you've narrowed it down to exactly one. One example is good; two are better; three is usually the sweet spot.
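Few-shot prompts have a mechanical structure (instruction, examples, then the new inputs), so they are easy to assemble in code. A minimal sketch, assuming you store examples as (product, description) pairs; `build_few_shot` is a hypothetical helper name.

```python
def build_few_shot(
    instruction: str,
    examples: list[tuple[str, str]],
    new_inputs: list[str],
) -> str:
    """Assemble instruction + worked examples + a numbered list of new inputs."""
    parts = [instruction, ""]
    for product, description in examples:
        parts += ["EXAMPLE:", f"Product: {product}", f"Description: {description}", ""]
    parts.append("Now write descriptions for:")
    parts += [f"{i}. {item}" for i, item in enumerate(new_inputs, 1)]
    return "\n".join(parts)

prompt = build_few_shot(
    "Write product descriptions for my UK outdoor clothing store. "
    "Match this style and format exactly:",
    examples=[("Cairngorm Waterproof Jacket",
               "Built for Scottish winters, tested in worse. Price: £189")],
    new_inputs=["A merino wool base layer (£45)", "A pair of hiking boots (£129)"],
)
```

Adding a second or third example is just another tuple in the list, which makes it painless to hit the two-to-three-example sweet spot.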
Technique 4: Chain Your Thinking (Step-by-Step Instructions)
Complex tasks fail when you try to do everything in one prompt. Break the task into explicit steps. This forces the model to work through each stage rather than trying to jump to a final answer.
Before
Write a marketing strategy for my B2B SaaS product.
You'll get a superficial overview that covers everything and is useful for nothing.
After
I'm launching a B2B SaaS product (project management tool for construction
companies, £299/month, UK market). Help me build a marketing strategy by
working through these steps:
Step 1: Identify the top 5 pain points that construction project managers
have with their current tools. For each, explain why it costs them money.
Step 2: For each pain point, write a one-line message that would make a
construction PM stop scrolling on LinkedIn.
Step 3: Rank the 5 messages by likely conversion potential, considering
that our audience is time-poor, sceptical of new software, and primarily
uses mobile.
Step 4: For the top-ranked message, outline a 4-week LinkedIn campaign
including: post frequency, content types (text/image/video), a sample
post for week 1, and how to measure whether it's working.
Each step builds on the previous one. By Step 4, you have a specific, actionable campaign rather than a generic "use social media" recommendation. The model's output at each stage is grounded in the thinking it did in the previous stage.
Why This Works
Step-by-step instructions mirror how humans actually solve problems -- you don't jump from "I need a marketing strategy" to a finished plan. You research, ideate, prioritise, then plan. Forcing the model through these stages produces dramatically better final output.
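You can also run the steps as separate API calls, feeding each answer back into the next prompt so later stages are grounded in earlier ones. A minimal sketch: `run_steps` is a hypothetical helper, and `ask` is a placeholder for whatever function calls your chat API and returns the model's text.

```python
def run_steps(context: str, steps: list[str], ask) -> list[str]:
    """Run each step as its own prompt, appending earlier results to the transcript."""
    transcript = context
    answers = []
    for i, step in enumerate(steps, 1):
        prompt = f"{transcript}\n\nStep {i}: {step}"
        answer = ask(prompt)  # ask() is your chat-API call: str -> str
        answers.append(answer)
        transcript += f"\n\nStep {i} result:\n{answer}"
    return answers
```

The trade-off versus one long prompt is more API calls, but each call sees the full chain of prior results, which keeps late steps specific.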
Technique 5: Add Constraints and Boundaries
Unconstrained prompts get unconstrained (read: unfocused) answers. The best prompts include explicit boundaries: word limits, things to avoid, assumptions to make, and non-negotiables.
Before
Help me write a proposal for a client.
After
Write a project proposal for a client (UK manufacturing company, 50 employees)
who wants to automate their sales order processing.
Constraints:
- Maximum 1 page (roughly 400 words)
- No technical jargon -- the reader is the MD, not the IT team
- Include: problem summary, proposed solution (3 bullet points max),
timeline, investment, and expected ROI
- The investment is £7,500 and we estimate it saves them 15 hours/week
of manual data entry
- Tone: professional but warm, not corporate-speak
- Do NOT include: disclaimers, "about us" sections, or generic benefits
of automation
The constraints do the heavy lifting. "No technical jargon" prevents the AI from showing off. "Maximum 1 page" forces conciseness. "Do NOT include" prevents the padding that AI loves to add. You get a tight, client-ready proposal instead of a five-page template.
Why This Works
Constraints are liberating -- for humans and for AI. When you tell the model what NOT to do, you eliminate entire categories of mediocre output. The "Do NOT include" instruction is particularly powerful because it cuts the filler that makes AI writing feel obviously AI-generated.
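Because a constraints block is just a bulleted list with an explicit exclusion line, it templates cleanly too. A small sketch, assuming you keep limits and exclusions as separate lists; `add_constraints` is a hypothetical helper name.

```python
def add_constraints(task: str, limits: list[str], exclusions: list[str]) -> str:
    """Append a Constraints block, ending with an explicit 'Do NOT include' line."""
    lines = [task, "", "Constraints:"]
    lines += [f"- {c}" for c in limits]
    lines.append(f"- Do NOT include: {', '.join(exclusions)}")
    return "\n".join(lines)

prompt = add_constraints(
    "Write a project proposal for a UK manufacturing client (50 employees) "
    "who wants to automate their sales order processing.",
    limits=[
        "Maximum 1 page (roughly 400 words)",
        "No technical jargon -- the reader is the MD, not the IT team",
        "Tone: professional but warm, not corporate-speak",
    ],
    exclusions=["disclaimers", "'about us' sections", "generic benefits of automation"],
)
```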
Putting It All Together
You don't need all five techniques in every prompt. For a quick question, just adding a role and format instruction (Techniques 1 and 2) will significantly improve your results. For complex tasks, combining all five creates prompts that consistently produce output you can actually use.
Here's a framework you can follow:
1. Role: Who should the AI be?
2. Context: What's the situation?
3. Task: What do you need?
4. Format: How should the output look?
5. Examples: What does good output look like?
6. Constraints: What should be included or excluded?
Not every element is needed every time, but the more of them you include, the more useful the response will be.
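The six-part framework can be sketched as a single builder that skips whatever you leave blank, so a quick question uses only role and task while a complex one fills in everything. `build_prompt` and the section labels are illustrative choices, not a standard.

```python
def build_prompt(role="", context="", task="", fmt="", examples="", constraints=""):
    """Assemble the six-part framework, omitting any element left blank."""
    sections = [
        ("", role),
        ("Context: ", context),
        ("Task: ", task),
        ("Format: ", fmt),
        ("Example of good output:\n", examples),
        ("Constraints:\n", constraints),
    ]
    return "\n\n".join(label + text for label, text in sections if text)

quick = build_prompt(
    role="You are a senior copywriter.",
    task="Write a tagline for a bakery.",
)
```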
Optimise Your Prompts Automatically
If you want to skip the manual work of structuring your prompts, our Prompt Optimizer tool takes your rough prompt and restructures it using these techniques automatically. Paste in what you've got, and it'll suggest an improved version with proper role-setting, format specifications, and constraints added.
It's free, runs in your browser, and doesn't store your prompts.
For teams that use AI regularly, we've also put together a Prompt Templates pack (£4.99) with 50+ ready-to-use templates covering business writing, data analysis, coding, marketing, and more. Each template is built using the five techniques above and includes fill-in-the-blank placeholders so you can adapt them in seconds.
Quick Reference: Prompt Quality Checklist
Before you hit send on your next prompt, run through this checklist:
- Did you set a role? Even a brief one like "You are a senior copywriter" helps.
- Did you provide context? Relevant details about your situation, audience, or constraints.
- Did you specify the format? Table, bullet points, email, code, numbered list -- be explicit.
- Did you give examples? Even one example of what good output looks like saves you iterations.
- Did you set boundaries? Word limits, things to avoid, assumptions to make.
- Is the task clear? Could someone read your prompt and know exactly what deliverable you expect?
If you can tick at least four of those six, you'll get a response worth using on the first attempt rather than the third.
The Biggest Mistake People Make
The single most common mistake isn't about technique at all. It's treating AI as a search engine instead of a collaborator.
Search engines are good at "What is X?" questions. AI is good at "Help me do Y given Z constraints" tasks. If your prompts mostly start with "What is..." or "Tell me about...", you're using a small fraction of what these models can do.
Shift from asking questions to giving briefs. Think of the AI as a capable but new team member who needs clear instructions and context to do good work. That mental model alone will improve every prompt you write.
Start Writing Better Prompts Now
Try these techniques with your next ChatGPT, Claude, or Gemini conversation. Pick one technique -- I'd start with Technique 1 (Role and Context) since it gives the biggest improvement for the least effort -- and notice the difference.
If you want to shortcut the process, use our Prompt Optimizer to automatically restructure your prompts using all five techniques.
Good prompts aren't about being clever. They're about being clear.