A prompt can be as simple as “Why is the sky blue?”, and that’s how most people interact with ChatGPT and its peers. But they soon find that going beyond simple queries makes prompting something of a skill, and something of a wordplay game.
AI prompting is an essential skill for working with large language models and getting them to do what you want. Poor prompts hurt the quality and relevance of the output, while well-planned, reflective prompting can yield far more valuable and accurate information. Here are 10 common prompting techniques.
1. Chain-of-thought (CoT) prompting
This technique asks the model to walk through each stage of its reasoning, improving performance on complex tasks. It’s like having the AI narrate its inner monologue—useful for understanding the model’s decision-making process.
Example 1: “Explain why pineapple belongs on pizza step-by-step, no skipping the emotional trauma it causes.”
Example 2: “Break down how cats took over the internet, one meme at a time.”
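Mechanically, a chain-of-thought prompt often just appends an explicit reasoning cue to the question before it is sent to the model. A minimal sketch (the function name here is illustrative, not a real API):

```python
# Minimal sketch: chain-of-thought prompting adds a reasoning cue
# so the model narrates its steps instead of jumping to an answer.

def make_cot_prompt(question: str) -> str:
    """Wrap a question with a cue asking the model to reason step by step."""
    return (
        f"{question}\n"
        "Let's think step by step, explaining each stage of the reasoning "
        "before giving the final answer."
    )

prompt = make_cot_prompt("Why is the sky blue?")
print(prompt)
```

The cue sentence is the whole trick; everything else is ordinary string handling.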
2. Zero-shot prompting
This relies entirely on the model’s existing knowledge, without providing specific examples. It’s useful for testing a model’s general understanding and capabilities.
Example 1: “Write a love letter to coffee, but make it sound like it’s from a medieval knight.”
Example 2: “Describe the life of a paperclip who dreams of becoming a sword.”
3. Few-shot prompting
This uses a few examples to guide the model’s response. It’s particularly effective when dealing with new or specialized tasks.
Example 1: Give two haikus about office printers, then ask: “Write another haiku, this time about the copier jamming.”
Example 2: Provide three terrible puns about bananas, then prompt: “Make a fourth pun, preferably even worse.”
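Under the hood, a few-shot prompt is just worked examples concatenated in a consistent format, followed by the new task in that same format. A minimal sketch, with made-up example data:

```python
# Minimal sketch: build a few-shot prompt by stacking example
# input/output pairs, then leaving the final output blank for the model.

def make_few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Format examples as Input/Output pairs, ending with an open task."""
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {new_input}\nOutput:")  # the model completes this line
    return "\n\n".join(parts)

examples = [
    ("banana", "Why did the banana see a doctor? It wasn't peeling well."),
    ("banana split", "That dessert really split the room."),
]
prompt = make_few_shot_prompt(examples, "banana bread")
print(prompt)
```

The consistent formatting matters as much as the examples themselves: the model infers the pattern from the layout.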
4. One-shot prompting
This gives the model just one example, allowing it to infer the desired output format and style. It’s a balance between zero-shot and few-shot approaches.
Example 1: Show one cat photo captioned, “Chairman Meow,” and ask: “Write similar captions for three other cats.”
Example 2: Give the phrase, “This is fine,” then prompt: “Write something similar for a raccoon discovering a dumpster fire.”
5. Prompt chaining
This connects multiple prompts, where each answer shapes the next. It’s useful for complex tasks that require multiple steps or iterations.
Example 1: “Draft a villain’s evil plan. Now write the henchman’s memo about why it’s a bad idea.”
Example 2: “Generate a list of terrible superhero names, then create backstories for the top two.”
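Prompt chaining simply feeds one response into the next prompt. A sketch in which `ask_llm` is a hypothetical stand-in for a real model call, not any particular library:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns a canned reply."""
    return f"[model reply to: {prompt.splitlines()[0]}]"

# Step 1: get the villain's plan.
plan = ask_llm("Draft a villain's evil plan in one sentence.")

# Step 2: the first answer becomes part of the second prompt.
memo = ask_llm(
    "Write the henchman's memo about why this plan is a bad idea:\n" + plan
)
print(memo)
```

The pattern generalizes to any number of steps: each call's output is interpolated into the next call's prompt.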
6. Tree-of-thoughts (ToT) prompting
This technique encourages the model to explore multiple reasoning paths, improving problem-solving capabilities.
Example 1: “Propose five ways to survive a zombie apocalypse, one of which involves befriending the zombies.”
Example 2: “List different theories on why your plants keep dying, including their potential conspiracy against you.”

7. Meta prompting
This involves giving the model clear rules about how to behave or respond. It’s useful for creating more controlled and consistent outputs.
Example 1: “Pretend you’re an overenthusiastic car salesperson. Pitch me this wheelbarrow.”
Example 2: “You’re a hipster barista. Recommend a coffee order for someone who’s afraid of commitment.”
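Many chat-style APIs express these behavior rules as a system message that travels alongside the user request. The exact field names vary by provider, so treat this message shape as an assumption rather than a specific vendor's schema:

```python
# Minimal sketch: meta prompting as a role-tagged message list,
# with the behavior rules separated from the actual request.

def make_meta_prompt(rules: str, request: str) -> list[dict[str, str]]:
    """Pair behavior rules (system role) with the user's request (user role)."""
    return [
        {"role": "system", "content": rules},
        {"role": "user", "content": request},
    ]

messages = make_meta_prompt(
    "You are an overenthusiastic car salesperson. Stay in character.",
    "Pitch me this wheelbarrow.",
)
```

Keeping the rules in their own message makes them easy to reuse across many different requests.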
8. Generated knowledge prompting
This technique has the model generate background knowledge first, then reuses that information to produce more detailed or creative outputs.
Example 1: “Recall your explanation of quantum physics, but now explain it to a toddler using jellybeans.”
Example 2: “Take your previous summary of Shakespeare’s plays and turn it into a rap battle between Hamlet and Macbeth.”
9. Least-to-most prompting
This builds complexity gradually, breaking down complex tasks into simpler subproblems. It’s particularly effective for multi-step reasoning tasks.
Example 1: “Start by defining procrastination. Now explain why I still haven’t folded my laundry from 2019.”
Example 2: “First, explain why dogs wag their tails. Now write a dissertation on how tail wagging could solve world peace.”
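The mechanics of least-to-most prompting: solve the simplest subproblem first, then carry each answer forward as context for the next. A sketch with a hypothetical `ask` callable standing in for a model call:

```python
def least_to_most(subproblems: list[str], ask) -> list[str]:
    """Answer subproblems in order, feeding earlier answers back as context."""
    context = ""
    answers = []
    for sub in subproblems:
        answer = ask(f"{context}Q: {sub}\nA:")
        answers.append(answer)
        context += f"Q: {sub}\nA: {answer}\n"  # grow the context for the next step
    return answers

# Hypothetical stand-in: a 'model' that just reports how much context it saw.
demo_ask = lambda prompt: f"(answer given {prompt.count('Q:')} question(s) so far)"
answers = least_to_most(
    ["Define procrastination.", "Why is my laundry still unfolded?"], demo_ask
)
```

Each later subproblem is asked against a prompt that already contains the earlier question/answer pairs, which is what lets the complexity build gradually.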
10. Self-consistency prompting
This technique involves asking the same question multiple times and selecting the most consistent answer, improving reliability in complex reasoning tasks.
Example 1: “What’s the best pasta shape? Ask three times and let’s see if spaghetti finally beats out elbow macaroni.”
Example 2: “What’s the most dangerous animal in the world? Ask multiple times and compare results: sharks, mosquitoes, or toddlers with markers.”
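Self-consistency is easy to automate: sample several answers to the same prompt (ideally at a nonzero temperature, so they vary) and keep the most common one. A sketch with a hypothetical `ask` callable in place of a real model:

```python
from collections import Counter

def self_consistent_answer(ask, prompt: str, n: int = 5) -> str:
    """Ask the same question n times and return the most frequent answer."""
    answers = [ask(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in: a 'model' that answers inconsistently.
replies = iter(["mosquitoes", "sharks", "mosquitoes", "toddlers", "mosquitoes"])
best = self_consistent_answer(lambda p: next(replies), "Most dangerous animal?")
print(best)
```

Majority voting over samples is the simplest selection rule; fancier variants compare whole reasoning chains rather than final answers.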
MacroLingo for AI use and global communication
AI is so much more than simple queries. We engineer prompts to help you do what you need to do. We also help you communicate your science in AI and other emerging areas. Get in touch with MacroLingo to put AI to work for you on a global scale.