

Prompting is a nuanced task, with many small details that can make or break performance. We've compiled examples of 58 different prompting techniques* that you can apply today for a quick boost to your model's performance.

The prompting techniques are grouped into six categories: Zero-Shot, Few-Shot, Thought Generation, Ensembling, Self-Criticism, and Decomposition.

Each of these techniques offers unique advantages in different scenarios. Click on the links to learn more about each technique and how to apply it effectively in your prompts.


Zero-Shot

Before adding any examples, how can you maximise the effectiveness of your prompt? Zero-shot techniques help us do exactly that.

  1. Emotion Prompting
  2. Role Prompting
  3. Style Prompting
  4. S2A (System 2 Attention)
  5. SimToM (Simulated Theory of Mind)
  6. RaR (Rephrase and Respond)
  7. RE2 (Re-Reading)
  8. Self-Ask
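As a flavour of how lightweight these can be, here is a minimal sketch of role prompting (#2 above): prepending a persona to an otherwise zero-shot prompt. `build_role_prompt` is a hypothetical helper for illustration, not part of any library.

```python
# Role prompting sketch: give the model a persona before the task.
# `build_role_prompt` is a made-up helper name, not an established API.

def build_role_prompt(role: str, task: str) -> str:
    """Prefix the task with a persona the model should adopt."""
    return f"You are {role}. {task}"

prompt = build_role_prompt(
    "an experienced tax accountant",
    "Explain how capital gains are taxed for a retail investor.",
)
```

The returned string is sent as the model's input in place of the bare task description.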


Few-Shot

When choosing examples, how can we ensure they meaningfully improve our model's performance? This isn't easy, so we've broken it down into a few distinct techniques:

  1. SG-ICL
  2. Example Ordering
  3. KNN Choice
  4. Vote-K
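To make the selection idea concrete, here is a minimal sketch of KNN exemplar selection (#3 above): pick the k training examples most similar to the query and use them as few-shot demonstrations. The word-overlap similarity below is a toy stand-in for a real embedding model, used so the sketch runs on its own.

```python
# KNN exemplar selection sketch. Jaccard word overlap stands in for
# embedding similarity; a real system would use a sentence encoder.

def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)  # Jaccard overlap in [0, 1]

def knn_exemplars(query, pool, k=2):
    """Return the k pool examples whose inputs are most similar to the query."""
    return sorted(pool, key=lambda ex: similarity(query, ex["input"]), reverse=True)[:k]

pool = [
    {"input": "What is 2 + 2?", "output": "4"},
    {"input": "Translate 'cat' to French.", "output": "chat"},
    {"input": "What is 3 + 5?", "output": "8"},
]
chosen = knn_exemplars("What is 7 + 1?", pool)
```

The `chosen` examples would then be formatted as demonstrations ahead of the query, so the prompt's exemplars match the task at hand.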

Thought Generation

How can we encourage our model to reason better to get to the final result?

  1. Analogical Prompting
  2. Step-Back Prompting
  3. Thread-of-Thought (ThoT)
  4. Tab-CoT
  5. Active-Prompt
  6. Auto-CoT
  7. Complexity-Based
  8. Contrastive
  9. Memory-of-Thought
  10. Uncertainty-Routed CoT
  11. Prompt Mining
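The simplest thought-generation recipe underlying many of these is zero-shot chain-of-thought: append a reasoning trigger to the question, then parse the final answer out of the model's reasoning. Both helpers below are hypothetical names, and `extract_answer` assumes the model ends its reasoning with a sentence of the form "The answer is X."

```python
# Zero-shot chain-of-thought sketch: trigger reasoning, then extract
# the final answer. Assumes the completion ends with "The answer is X."
import re

def cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

def extract_answer(reasoning: str):
    match = re.search(r"The answer is (.+)\.", reasoning)
    return match.group(1) if match else None

prompt = cot_prompt("A bat and ball cost $1.10 total; the bat costs $1 more than the ball. What does the ball cost?")
answer = extract_answer("The bat is $1.05 and the ball $0.05. The answer is $0.05.")
```

Many of the techniques above vary only the trigger phrase, the exemplars, or which reasoning paths are kept.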


Ensembling

How can we combine multiple parallel inference calls for a significant boost in performance? Ensembling techniques leverage the strengths of multiple model runs, potentially yielding more accurate and robust results.

  1. COSP
  2. DENSE
  3. DiVeRSe
  4. Max Mutual Information
  5. Meta-CoT
  6. MoRE
  7. Self-Consistency
  8. Universal Self-Consistency
  9. USP
  10. Prompt Paraphrasing
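The core ensembling move, shared by Self-Consistency (#7 above), is a majority vote over final answers from several sampled reasoning paths. A minimal sketch, assuming the final answers have already been parsed out of each completion:

```python
# Self-consistency sketch: sample several reasoning paths (e.g. at
# temperature > 0), parse out each final answer, and majority-vote.
from collections import Counter

def self_consistency(answers):
    """Return the most common final answer across sampled paths."""
    return Counter(answers).most_common(1)[0][0]

# Final answers parsed from five hypothetical sampled completions:
sampled = ["42", "42", "41", "42", "40"]
best = self_consistency(sampled)
```

Variants in the list above differ mainly in how the paths are generated and how the votes are weighted or aggregated.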


Self-Criticism

What concrete steps can we take to get our model to critically evaluate and improve its own outputs? Self-criticism methods encourage the model to evaluate and refine its responses, leading to higher-quality, more thoughtful outputs.

  1. Chain-Of-Verification
  2. Self-Calibration
  3. Self-Refine
  4. Self-Verification
  5. ReverseCoT
  6. Cumulative Reasoning
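The generate/critique/refine loop at the heart of Self-Refine (#3 above) can be sketched as follows. The three lambdas are toy stand-ins for the separate model calls a real implementation would make; the convention that the critic returns `None` when satisfied is an assumption of this sketch.

```python
# Self-Refine sketch: draft, critique, revise, repeat until the critic
# is satisfied or a round budget is exhausted.

def self_refine(task, generate, critique, refine, max_rounds=3):
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if feedback is None:  # critic has no complaints: stop
            break
        draft = refine(task, draft, feedback)
    return draft

# Toy stand-ins for the three model calls:
generate = lambda task: "First draft"
critique = lambda task, d: None if "concise" in d else "Make it concise."
refine = lambda task, d, fb: d + " (made concise)"

result = self_refine("Summarise the report", generate, critique, refine)
```

In practice all three roles are played by the same model with different prompts, and the loop typically converges in one or two rounds.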


Decomposition

How can we break complex problems into more manageable parts? Decomposition methods divide an intricate question into smaller sub-questions, allowing a more structured and comprehensive problem-solving approach.

  1. Faithful CoT
  2. Least-to-Most
  3. Plan-and-Solve
  4. Program-of-Thought
  5. Recursion-of-Thought
  6. Skeleton-of-Thought
  7. Tree-of-Thought
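The shared skeleton behind techniques like Least-to-Most is: decompose the question into sub-questions, solve them in order, and feed earlier answers into later steps. In this sketch, `decompose` and `solve` are toy stand-ins for model calls, solving a small unit-conversion question deterministically.

```python
# Least-to-Most-style decomposition sketch: answer sub-questions in
# order, passing earlier answers along as context for later ones.

def least_to_most(question, decompose, solve):
    """Return the answer to the final sub-question, which answers the whole."""
    answered = []
    for sub in decompose(question):
        answered.append((sub, solve(sub, answered)))
    return answered[-1][1]

# Toy stand-ins for the two model calls:
decompose = lambda q: ["How many days are in two weeks?", "How many hours is that?"]
def solve(sub, context):
    if "days" in sub:
        return 14
    return context[-1][1] * 24  # reuse the previous sub-answer

result = least_to_most("How many hours are in two weeks?", decompose, solve)
```

The techniques above differ in how the decomposition is produced (planned upfront, recursive, or tree-structured) and in whether the sub-steps are solved in natural language or executable code.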

*: The Prompt Report: A Systematic Survey of Prompting Techniques (Schulhoff et al., 2024)