Instructor, Generating Structured Outputs with LLMs¶
Structured outputs powered by LLMs. Designed for simplicity, transparency, and control.
Instructor makes it easy to get structured data like JSON from LLMs like GPT-3.5, GPT-4, GPT-4-Vision, and open-source models including Mistral/Mixtral, Anyscale, Ollama, and llama-cpp-python.
Built on top of Pydantic, it stands out for its simplicity, transparency, and user-centric design. Instructor helps you manage validation context, retries with Tenacity, and streaming of Lists and Partial responses.
Why use Instructor?¶
- Simple API with Full Prompt Control: Instructor provides a straightforward API that gives you complete ownership and control over your prompts, allowing for fine-tuned customization and optimization of your LLM interactions.
- Multi-Language Support: Simplify structured data extraction from LLMs with type hints and validation.
- Reasking and Validation: Automatically reask the model when validation fails, ensuring high-quality outputs. Leverage Pydantic's validation for robust error handling.
- Streaming Support: Stream partial results and iterables with ease, allowing for real-time processing and improved responsiveness in your applications.
- Powered by Type Hints: Leverage Pydantic for schema validation, prompting control, less code, and IDE integration.
- Simplified LLM Interactions: Support for OpenAI, Anthropic, Google, Vertex AI, Mistral/Mixtral, Anyscale, Ollama, llama-cpp-python, Cohere, and LiteLLM.
Getting Started¶
If you ever get stuck, you can always run instructor docs to open the documentation in your browser. It even supports searching for specific topics.
You can also check out our cookbooks and concepts to learn more about how to use Instructor.
Make sure you've installed the dependencies for your specific client. To keep the bundle size small, instructor only ships with the OpenAI client. Before using the other clients and their respective from_xx methods, make sure you've installed the dependencies following the instructions below.
- Anthropic: pip install "instructor[anthropic]"
- Google Generative AI: pip install "instructor[google-generativeai]"
- Vertex AI: pip install "instructor[vertexai]"
- Cohere: pip install "instructor[cohere]"
- LiteLLM: pip install "instructor[litellm]"
- Mistral: pip install "instructor[mistralai]"
Now, let's see Instructor in action with a simple example:
Using OpenAI¶
Want to use OpenAI's Structured Outputs response format?
We've added support for OpenAI's structured output responses. With this, you get all the benefits of Instructor along with OpenAI's constrained sampling.
from openai import OpenAI
from instructor import from_openai, Mode
from pydantic import BaseModel

client = from_openai(OpenAI(), mode=Mode.TOOLS_STRICT)

class User(BaseModel):
    name: str
    age: int

resp = client.chat.completions.create(
    response_model=User,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    model="gpt-4o",
)
import instructor
from pydantic import BaseModel
from openai import OpenAI

# Define your desired output structure
class UserInfo(BaseModel):
    name: str
    age: int

# Patch the OpenAI client
client = instructor.from_openai(OpenAI())

# Extract structured data from natural language
user_info = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
#> John Doe
print(user_info.age)
#> 30
Using Anthropic¶
import instructor
from anthropic import Anthropic
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_anthropic(Anthropic())

# note that client.chat.completions.create will also work
resp = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
Using Gemini¶
Google AI¶
import instructor
import google.generativeai as genai
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_gemini(
    client=genai.GenerativeModel(
        model_name="models/gemini-1.5-flash-latest",
    ),
    mode=instructor.Mode.GEMINI_JSON,
)

# note that client.chat.completions.create will also work
resp = client.messages.create(
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
Vertex AI¶
Note: Gemini Tool Calling is still in preview, and there are some limitations. You can learn more about them in the Vertex AI examples notebook.
import instructor
import vertexai  # type: ignore
from vertexai.generative_models import GenerativeModel  # type: ignore
from pydantic import BaseModel

vertexai.init()

class User(BaseModel):
    name: str
    age: int

client = instructor.from_vertexai(
    client=GenerativeModel("gemini-1.5-pro-preview-0409"),
    mode=instructor.Mode.VERTEXAI_TOOLS,
)

# note that client.chat.completions.create will also work
resp = client.create(
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
Want to use Gemini's multi-part formats?
Instructor supports both the gemini and vertexai libraries. We've most recently added support for multi-part file formats using Google's gm.Part objects. These allow you to pass additional information to the LLM about the data you'd like to see.
Here are two examples of how to use multi-part formats with Instructor.
We can combine multiple gm.Part objects and plain strings into a single list and send them as a single message to the LLM. Under the hood, we'll convert them into the appropriate format for Gemini.
import instructor
import vertexai.generative_models as gm  # type: ignore
from pydantic import BaseModel, Field

client = instructor.from_vertexai(gm.GenerativeModel("gemini-1.5-pro-001"))

content = [
    "Order Details:",
    gm.Part.from_text("Customer: Alice"),
    gm.Part.from_text("Items:"),
    "Name: Laptop, Price: 999.99",
    "Name: Mouse, Price: 29.99",
]

class Item(BaseModel):
    name: str
    price: float

class Order(BaseModel):
    items: list[Item] = Field(default_factory=list)
    customer: str

resp = client.create(
    response_model=Order,
    messages=[
        {
            "role": "user",
            "content": content,
        },
    ],
)

print(resp)
#> items=[Item(name='Laptop', price=999.99), Item(name='Mouse', price=29.99)] customer='Alice'
The same applies to multi-modal requests when we want to work with images. In this example, we'll ask the LLM to describe an image that we pass in as a gm.Part object.
import instructor
import vertexai.generative_models as gm  # type: ignore
from pydantic import BaseModel
import requests

client = instructor.from_vertexai(
    gm.GenerativeModel("gemini-1.5-pro-001"), mode=instructor.Mode.VERTEXAI_JSON
)

content = [
    gm.Part.from_text("Count the number of objects in the image."),
    gm.Part.from_data(
        requests.get(
            "https://img.taste.com.au/Oq97xT-Q/taste/2016/11/blueberry-scones-75492-1.jpeg"
        ).content,
        "image/jpeg",
    ),
]

class Description(BaseModel):
    description: str

resp = client.create(
    response_model=Description,
    messages=[
        {
            "role": "user",
            "content": content,
        },
    ],
)

print(resp)
#> description='Seven blueberry scones sit inside a metal pie plate.'
Using LiteLLM¶
import instructor
from litellm import completion
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_litellm(completion)

resp = client.chat.completions.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
Using Cohere¶
We also support users who want to use Cohere models via the from_cohere method.
Want to get the original Cohere response?
If you want the original response object from the LLM instead of a structured output, you can pass response_model=None to the create method. This will return the raw response from the underlying API.
# This will return the original Cohere response object
raw_response = client.chat.completions.create(
    response_model=None,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
)
This can be useful when you need access to additional metadata or want to handle the raw response yourself.
import instructor
from pydantic import BaseModel
from cohere import Client

class User(BaseModel):
    name: str
    age: int

client = instructor.from_cohere(Client())

resp = client.chat.completions.create(
    response_model=User,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
)

assert resp.name == "Jason"
assert resp.age == 25
Correct Typing¶
This was always the dream of Instructor, but because we patched the openai client, it wasn't possible to get typing to work well. Now, with the new client, typing works as expected! We've also added a few create_* methods to make it easier to create iterables and partials, and to access the original completion.
Calling create¶
import openai
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(openai.OpenAI())

user = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)
Now if you use an IDE, you can see the type is correctly inferred.
Handling async: await create¶
This will also work correctly with asynchronous clients.
import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.AsyncOpenAI())

class User(BaseModel):
    name: str
    age: int

async def extract():
    return await client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "user", "content": "Create a user"},
        ],
        response_model=User,
    )
Notice that because we return the result of the create call, the extract() function is correctly typed as returning a User.
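To call extract() from synchronous code, you can run the coroutine with asyncio (a minimal usage sketch, assuming the snippet above):
import asyncio

# asyncio.run drives the coroutine to completion and returns the typed User
user = asyncio.run(extract())
print(user)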
Returning the original completion: create_with_completion¶
You can also return the original completion object.
import openai
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(openai.OpenAI())

user, completion = client.chat.completions.create_with_completion(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)
Streaming Partial Objects: create_partial¶
In order to handle streams, we still support Iterable[T] and Partial[T], but to simplify type inference, we've added create_iterable and create_partial methods as well!
import openai
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(openai.OpenAI())

user_stream = client.chat.completions.create_partial(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)

for user in user_stream:
    print(user)
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=25
    #> name=None age=25
    #> name=None age=25
    #> name=None age=25
    #> name=None age=25
    #> name=None age=25
    #> name='John Doe' age=25
Notice that the inferred type is now Generator[User, None, None].
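For reference, a roughly equivalent call using the older Partial[T] style (a sketch, assuming the same client and User model as above) looks like this:
from instructor import Partial

user_stream = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=Partial[User],
    stream=True,  # yield partially-populated User objects as tokens arrive
)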
Streaming Iterables: create_iterable¶
When we want to extract multiple objects, we can request an iterable of them.
import openai
import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(openai.OpenAI())

users = client.chat.completions.create_iterable(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create 2 users"},
    ],
    response_model=User,
)

for user in users:
    print(user)
    #> name='John Doe' age=30
    #> name='Jane Doe' age=28
Validation¶
You can also use Pydantic to validate your outputs and get the LLM to retry on failure. Check out our docs on retrying and validation context.
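As a quick sketch of what this looks like (assuming the patched OpenAI client from earlier; the uppercase rule and model name here are just illustrative), a failing validator's error message is fed back to the model on each retry:
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

client = instructor.from_openai(OpenAI())

class UserDetail(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def name_must_be_uppercase(cls, v: str) -> str:
        # If this raises, instructor re-asks the model with the error message
        if v != v.upper():
            raise ValueError("name must be uppercase")
        return v

user = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    response_model=UserDetail,
    max_retries=3,  # number of re-ask attempts on validation failure
    messages=[{"role": "user", "content": "Extract jason is 25 years old"}],
)
assert user.name == "JASON"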
More Examples¶
If you'd like to see more, check out our cookbook.
Installing Instructor is a breeze. Just run pip install instructor.
Contributing¶
If you want to help out, check out some of the issues marked as good-first-issue or help-wanted, found here. They could be anything from code improvements, a guest blog post, or a new cookbook.
License¶
This project is licensed under the terms of the MIT License.