# Advanced Prompting Techniques

- **Basic Approaches**: Zero-shot and few-shot techniques for immediate improvements
- **Reasoning Methods**: Techniques to improve model reasoning and problem-solving
- **Verification**: Methods for self-assessment and correction
- **Collaboration**: Ensemble techniques for aggregating multiple model outputs
This guide presents 58 research-backed prompting techniques mapped to Instructor implementations. It is based on The Prompt Report by Learn Prompting, which analyzed over 1,500 academic papers on prompting.
## Prompting Technique Map
The following diagram shows how different prompting techniques relate to each other and when to use them:
```mermaid
flowchart TD
    A[Choose Prompting Technique] --> B{Have Examples?}
    B -->|No| C[Zero-Shot Techniques]
    B -->|Yes| D[Few-Shot Techniques]
    C --> C1[Role Prompting]
    C --> C2[Emotional Language]
    C --> C3[Style Definition]
    C --> C4[Follow-Up Generation]
    D --> D1[Example Ordering]
    D --> D2[Example Selection]
    D --> D3[Example Generation]
    A --> E{Need Reasoning?}
    E -->|Yes| F[Thought Generation]
    F --> F1[Chain of Thought]
    F --> F2[Step-Back Prompting]
    F --> F3[Thread of Thought]
    A --> G{Complex Problem?}
    G -->|Yes| H[Decomposition]
    H --> H1[Least-to-Most]
    H --> H2[Tree of Thought]
    H --> H3[Plan and Solve]
    A --> I{Need Verification?}
    I -->|Yes| J[Self-Criticism]
    J --> J1[Self-Verification]
    J --> J2[Chain of Verification]
    J --> J3[Self-Refinement]
    A --> K{Want Multiple Perspectives?}
    K -->|Yes| L[Ensembling]
    L --> L1[Self-Consistency]
    L --> L2[Meta-CoT]
    L --> L3[Specialized Experts]

    classDef category fill:#e2f0fb,stroke:#b8daff,color:#004085;
    classDef technique fill:#d4edda,stroke:#c3e6cb,color:#155724;
    classDef decision fill:#fff3cd,stroke:#ffeeba,color:#856404;

    class A,C,D,F,H,J,L category
    class C1,C2,C3,C4,D1,D2,D3,F1,F2,F3,H1,H2,H3,J1,J2,J3,L1,L2,L3 technique
    class B,E,G,I,K decision
```
## When to Use Each Technique

| Goal | Recommended Techniques |
|---|---|
| Improve accuracy | Chain of Thought, Self-Verification, Self-Consistency |
| Handle complex problems | Decomposition, Tree of Thought, Least-to-Most |
| Generate creative content | Role Prompting, Emotional Language, Style Definition |
| Verify factual correctness | Chain of Verification, Self-Calibration |
| Optimize with few examples | KNN Example Selection, Active Prompting |
| Handle uncertainty | Uncertainty-Routed CoT, Self-Consistency |
## Zero-Shot

These techniques improve model performance without examples:

| Technique | Description | Use Case |
|---|---|---|
| Emotional Language | Add emotional tone to prompts | Creative writing, empathetic responses |
| Role Assignment | Give the model a specific role | Expert knowledge, specialized perspectives |
| Style Definition | Specify writing style | Content with particular tone or format |
| Prompt Refinement | Automatic prompt optimization | Iterative improvement of results |
| Perspective Simulation | Have the model adopt viewpoints | Multiple stakeholder analysis |
| Ambiguity Clarification | Identify and resolve unclear aspects | Improving precision of responses |
| Query Repetition | Ask the model to restate the task | Better task understanding |
| Follow-Up Generation | Generate clarifying questions | Deep exploration of topics |
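As an illustration, Role Assignment pairs naturally with a structured response model: the role goes in the system message while the Pydantic model constrains the output. The following is a minimal sketch; the `ExpertAssessment` model, the persona, and the question are invented for illustration and are not part of the report.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class ExpertAssessment(BaseModel):
    """Assessment written from the assigned expert's point of view."""

    key_points: list[str] = Field(description="Main observations from the expert")
    recommendation: str = Field(description="The expert's overall recommendation")


client = instructor.from_openai(OpenAI())

assessment = client.chat.completions.create(
    model="gpt-4",
    response_model=ExpertAssessment,
    messages=[
        # Role Assignment: the system message gives the model a specific persona
        {
            "role": "system",
            "content": "You are a senior security engineer reviewing infrastructure changes.",
        },
        {
            "role": "user",
            "content": "Assess the risks of exposing our staging database to the public internet.",
        },
    ],
)

print(assessment.recommendation)
```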
## Few-Shot

Techniques for effectively using examples in prompts:

| Technique | Description | Use Case |
|---|---|---|
| Example Generation | Automatically create examples | Domains with limited example data |
| Example Ordering | Optimal sequencing of examples | Improved pattern recognition |
| KNN Example Selection | Choose examples similar to query | Domain-specific accuracy |
| Vote-K Selection | Advanced similarity-based selection | Complex pattern matching |
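In practice, few-shot examples can be supplied as prior conversation turns before the real query; Example Ordering then becomes a matter of arranging those turns. A minimal sketch with invented review snippets (the `SentimentLabel` model is illustrative):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class SentimentLabel(BaseModel):
    """Classify the sentiment of a customer review."""

    sentiment: str = Field(description="One of: positive, negative, neutral")


client = instructor.from_openai(OpenAI())

label = client.chat.completions.create(
    model="gpt-4",
    response_model=SentimentLabel,
    messages=[
        # Few-shot examples as prior turns; reorder them to experiment with Example Ordering
        {"role": "user", "content": "Review: 'Arrived broken and support never replied.'"},
        {"role": "assistant", "content": '{"sentiment": "negative"}'},
        {"role": "user", "content": "Review: 'Exactly as described and shipped fast.'"},
        {"role": "assistant", "content": '{"sentiment": "positive"}'},
        # The actual query to classify
        {"role": "user", "content": "Review: 'It does the job, nothing special.'"},
    ],
)

print(label.sentiment)
```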
## Thought Generation

Methods to encourage human-like reasoning in models:

### Zero-Shot Reasoning

| Technique | Description | Use Case |
|---|---|---|
| Analogical CoT | Generate reasoning using analogies | Complex problem-solving |
| Step-Back Prompting | Consider higher-level questions first | Scientific and abstract reasoning |
| Thread of Thought | Encourage step-by-step analysis | Detailed explanation generation |
| Tabular CoT | Structure reasoning in table format | Multi-factor analysis |
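One way to express a technique such as Step-Back Prompting with Instructor is to encode the step-back question as its own field, so the model must answer the higher-level question before the specific one. A sketch under that assumption (the `StepBackAnswer` model and the question are illustrative):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class StepBackAnswer(BaseModel):
    """Answer a more general background question first, then the original question."""

    step_back_question: str = Field(
        description="A higher-level question whose answer gives useful background"
    )
    step_back_answer: str = Field(description="Answer to the higher-level question")
    final_answer: str = Field(
        description="Answer to the original question, informed by the background above"
    )


client = instructor.from_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4",
    response_model=StepBackAnswer,
    messages=[{"role": "user", "content": "Why does ice float on liquid water?"}],
)

print(f"Step-back: {response.step_back_question}")
print(f"Answer: {response.final_answer}")
```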
### Few-Shot Reasoning

| Technique | Description | Use Case |
|---|---|---|
| Active Prompting | Annotate uncertain examples | Improved accuracy on edge cases |
| Auto-CoT | Choose diverse examples | Broad domain coverage |
| Complexity-Based CoT | Use complex examples | Challenging problem types |
| Contrastive CoT | Include correct and incorrect cases | Error detection and avoidance |
| Memory of Thought | Use high-certainty examples | Reliability in critical applications |
| Uncertainty-Routed CoT | Select the most certain reasoning path | Decision-making under uncertainty |
| Prompt Mining | Generate templated prompts | Efficient prompt engineering |
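Contrastive CoT, for example, can be approximated by showing one correct and one flawed worked solution and asking the model to reason like the correct one. A rough sketch with an invented arithmetic example (the `ContrastiveAnswer` model is illustrative):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class ContrastiveAnswer(BaseModel):
    """Solve the problem, reasoning like the correct example and avoiding the flawed one."""

    reasoning: str = Field(description="Step-by-step reasoning")
    answer: str = Field(description="Final answer")


PROMPT = """\
Correct example:
Q: A shirt costs $20 and is discounted 25%. What is the new price?
A: 25% of 20 is 5, so the new price is 20 - 5 = $15.

Flawed example (do not reason like this):
Q: A shirt costs $20 and is discounted 25%. What is the new price?
A: 20 - 25 = -5, so the price is -$5.

Now solve:
Q: A book costs $40 and is discounted 15%. What is the new price?
"""

client = instructor.from_openai(OpenAI())

result = client.chat.completions.create(
    model="gpt-4",
    response_model=ContrastiveAnswer,
    messages=[{"role": "user", "content": PROMPT}],
)

print(result.answer)
```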
## Ensembling

Techniques for combining multiple prompts or responses:

| Technique | Description | Use Case |
|---|---|---|
| Consistent, Diverse Sets | Build consistent example sets | Stable performance |
| Batched In-Context Examples | Efficient example batching | Performance optimization |
| Step Verification | Validate individual steps | Complex workflows |
| Maximizing Mutual Information | Information theory optimization | Information-dense outputs |
| Meta-CoT | Merge multiple reasoning chains | Complex problem-solving |
| Specialized Experts | Use different "expert" prompts | Multi-domain tasks |
| Self-Consistency | Choose most consistent reasoning | Logical accuracy |
| Universal Self-Consistency | Domain-agnostic consistency | General knowledge tasks |
| Task-Specific Selection | Choose examples per task | Specialized domain tasks |
| Prompt Paraphrasing | Use variations of the same prompt | Robust outputs |
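Self-Consistency is the simplest of these to wire up: sample several independent reasoning chains at a non-zero temperature and keep the answer that appears most often. A minimal sketch (the question and the sample count of 5 are arbitrary choices made for illustration):

```python
from collections import Counter

import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class ReasonedAnswer(BaseModel):
    """Answer the question, showing your reasoning first."""

    chain_of_thought: str = Field(description="Step-by-step reasoning")
    final_answer: str = Field(description="Short final answer")


client = instructor.from_openai(OpenAI())

question = (
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Self-Consistency: sample several reasoning paths and keep the most common answer
answers = []
for _ in range(5):
    result = client.chat.completions.create(
        model="gpt-4",
        response_model=ReasonedAnswer,
        temperature=1.0,
        messages=[{"role": "user", "content": question}],
    )
    answers.append(result.final_answer.strip().lower())

most_common, count = Counter(answers).most_common(1)[0]
print(f"{most_common} (agreed by {count}/{len(answers)} samples)")
```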
## Self-Criticism

Methods for models to verify or improve their own responses:

| Technique | Description | Use Case |
|---|---|---|
| Chain of Verification | Generate verification questions | Fact-checking, accuracy |
| Self-Calibration | Ask if the answer is correct | Confidence estimation |
| Self-Refinement | Auto-generate feedback and improve | Iterative improvement |
| Self-Verification | Score multiple solutions | Quality assessment |
| Reverse CoT | Reconstruct the problem | Complex reasoning verification |
| Cumulative Reasoning | Generate possible steps | Thorough analysis |
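A two-pass version of Chain of Verification can be built from two response models: one for the draft and one that generates verification questions and a revised answer. A sketch under those assumptions (the `Draft` and `Verification` models and the question are illustrative):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class Draft(BaseModel):
    """Initial answer to the question."""

    answer: str


class Verification(BaseModel):
    """Check the draft answer and correct it if needed."""

    verification_questions: list[str] = Field(
        description="Questions that would expose errors in the draft"
    )
    issues_found: list[str] = Field(description="Problems identified, if any")
    revised_answer: str = Field(description="Corrected answer after verification")


client = instructor.from_openai(OpenAI())
question = "Which planet in the solar system has the most known moons?"

# Pass 1: produce a draft answer
draft = client.chat.completions.create(
    model="gpt-4",
    response_model=Draft,
    messages=[{"role": "user", "content": question}],
)

# Pass 2: Chain of Verification over the draft
checked = client.chat.completions.create(
    model="gpt-4",
    response_model=Verification,
    messages=[
        {
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Draft answer: {draft.answer}\n"
                "Generate verification questions, check the draft, and revise it if needed."
            ),
        },
    ],
)

print(checked.revised_answer)
```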
## Decomposition

Techniques for breaking down complex problems:

| Technique | Description | Use Case |
|---|---|---|
| Functional Decomposition | Implement subproblems as functions | Modular problem-solving |
| Faithful CoT | Use natural and symbolic language | Mathematical reasoning |
| Least-to-Most | Solve increasingly complex subproblems | Educational applications |
| Plan and Solve | Generate a structured plan | Project planning |
| Program of Thought | Use code for reasoning | Algorithmic problems |
| Recursive Thought | Recursively solve subproblems | Hierarchical problems |
| Skeleton of Thought | Generate outline structure | Writing, planning |
| Tree of Thought | Search through possible paths | Decision trees, exploration |
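Least-to-Most maps cleanly onto nested models: a list of subproblems solved in order, followed by a final answer that builds on them. A minimal sketch (the `LeastToMost` model and the word problem are invented for illustration):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class SubProblem(BaseModel):
    question: str
    answer: str = Field(description="Answer to this subproblem, using earlier answers if helpful")


class LeastToMost(BaseModel):
    """Break the problem into subproblems from simplest to hardest, then solve them in order."""

    subproblems: list[SubProblem]
    final_answer: str = Field(description="Answer to the original problem, built from the subproblems")


client = instructor.from_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4",
    response_model=LeastToMost,
    messages=[
        {
            "role": "user",
            "content": (
                "If a train travels 150 km in 2 hours and then 100 km in 1 hour, "
                "what is its average speed for the whole trip?"
            ),
        }
    ],
)

for step in response.subproblems:
    print(f"- {step.question} -> {step.answer}")
print(response.final_answer)
```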
## Implementation with Instructor
All these prompting techniques can be implemented with Instructor by:
- Defining appropriate Pydantic models that capture the expected structure
- Incorporating the prompting technique in your model docstrings or field descriptions
- Using the patched LLM client with your response model
```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


# Example implementing Chain of Thought with a field
class ReasonedAnswer(BaseModel):
    """Answer the following question with detailed reasoning."""

    chain_of_thought: str = Field(
        description="Step-by-step reasoning process to solve the problem"
    )
    final_answer: str = Field(
        description="The final conclusion after reasoning"
    )


client = instructor.from_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4",
    response_model=ReasonedAnswer,
    messages=[
        {"role": "user", "content": "What is the cube root of 27?"}
    ],
)

print(f"Reasoning: {response.chain_of_thought}")
print(f"Answer: {response.final_answer}")
```
## References

- Based on *The Prompt Report: A Systematic Survey of Prompting Techniques*