London Stock Exchange Group (LSEG) has deployed Instructor in production to power their AI-driven market surveillance system, demonstrating the library's capability in mission-critical financial applications.
Today we integrated the mkdocs-llmstxt plugin into Instructor's documentation pipeline. The plugin automatically generates llms.txt files from our MkDocs documentation, making our guides directly consumable by AI language models.
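Enabling the plugin is a one-time change to `mkdocs.yml`. The shape below is an illustrative sketch (the file-selection option names have changed between plugin versions, so check the mkdocs-llmstxt README for the exact schema; the input paths are hypothetical):

```yaml
plugins:
  - search
  - llmstxt:
      # Illustrative only -- verify option names against the plugin README.
      files:
        - output: llms.txt
          inputs:
            - index.md
            - concepts/*.md
```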
Semantic validation uses LLMs to evaluate content against complex, subjective, and contextual criteria that would be difficult to implement with traditional rule-based validation approaches.
As LLMs become increasingly integrated into production systems, ensuring the quality and safety of their outputs is paramount. Traditional validation methods relying on explicit rules can't keep up with the complexity and nuance of natural language. With the release of Instructor's semantic validation capabilities, we now have a powerful way to validate structured outputs against sophisticated criteria.
We're excited to announce Instructor's integration with OpenAI's new Responses API. This integration brings a more streamlined approach to working with structured outputs from OpenAI models. Let's see what makes this integration special and how it can improve your LLM applications.
We are pleased to introduce a significant enhancement to Instructor: the from_provider() function. While Instructor has always focused on providing robust structured outputs, we've observed that many users work with multiple LLM providers. This often involves repetitive setup for each client.
The from_provider() function simplifies this: a single provider string initializes an Instructor-enhanced client for any of the supported LLM providers, making it easy to switch and experiment across models.
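Concretely, the same extraction code can target different backends by changing only the string. A minimal sketch (model identifiers are examples; the `instructor` import is deferred so the file imports without API keys):

```python
from pydantic import BaseModel


class UserInfo(BaseModel):
    name: str
    age: int


def extract_user(provider: str, text: str) -> UserInfo:
    """Run the same structured extraction against any provider string,
    e.g. "openai/gpt-4o-mini" or "anthropic/claude-3-5-sonnet-latest"."""
    import instructor  # deferred: no credentials needed just to import

    # The model is baked into the provider string, so no separate
    # model= argument is needed here.
    client = instructor.from_provider(provider)
    return client.chat.completions.create(
        response_model=UserInfo,
        messages=[{"role": "user", "content": text}],
    )
```

Because the provider is just a parameter, A/B-testing two models becomes a one-line loop over provider strings.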
Anthropic's new web search tool, when combined with Instructor, provides a powerful way to get real-time, structured data from the web. This allows you to build applications that can answer questions and provide information that is up-to-date, going beyond the knowledge cut-off of large language models.
In this post, we'll explore how to use the web_search tool with Instructor to fetch the latest information and structure it into a Pydantic model. Even a simple structure can be very effective for clarity and further processing.
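A minimal sketch of the combination, assuming the web search tool type string from Anthropic's documentation at the time of writing (verify it against the current API reference; the model alias and field names are illustrative):

```python
from pydantic import BaseModel


class NewsSummary(BaseModel):
    headline: str
    summary: str
    source_url: str


def latest_news(topic: str) -> NewsSummary:
    """Ask Claude to search the web, then coerce the answer into a
    NewsSummary. Deferred import keeps this file importable offline."""
    import instructor

    client = instructor.from_provider("anthropic/claude-3-7-sonnet-latest")
    return client.chat.completions.create(
        response_model=NewsSummary,
        messages=[{"role": "user", "content": f"Find the latest news about {topic}."}],
        # Server-side tool: Anthropic performs the search, no scraping code needed.
        tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    )
```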
With OpenAI joining Anthropic in supporting the Model Context Protocol (MCP), we're witnessing a unified standard for language models to interact with external systems. This creates exciting opportunities for multi-LLM architectures where specialized AI applications work in parallel: discovering tools, handing off tasks, and accessing powerful capabilities through standardized interfaces.
We're excited to announce that Instructor now implements the llms.txt specification! You can now find our llms.txt file at python.useinstructor.com/llms.txt. This adoption marks an important step in making our documentation more accessible to AI language models.
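For readers unfamiliar with the format, an llms.txt file per the specification is plain markdown: an H1 title, a blockquote summary, then H2 sections of annotated links. The contents below are an illustrative shape, not a copy of our actual file:

```markdown
# Instructor

> Structured outputs for LLMs, built on Pydantic.

## Docs

- [Getting Started](https://example.com/getting-started): installation and first extraction
- [Concepts](https://example.com/concepts): validation, retries, streaming
```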
AI-assisted coding is changing how we use version control. Many developers now practice what I call "vibe coding": writing code with AI assistance. This creates new challenges with Git. Today I'll share how we're using Cursor rules in Instructor to solve these problems.