When working with LangChain, a powerful framework for developing applications with large language models, you might encounter errors that can temporarily halt your progress. One such common hiccup involves the “ChatPromptTemplate.from_messages” method, often leading to a ValidationError. This guide will walk you through the problem and how to fix it.
The Problem: ValidationError with from_messages
You might be following the LangChain Quickstart guide or a similar example and trying to create a chat prompt template with code like this:
from langchain.prompts.chat import ChatPromptTemplate
template = "You are a helpful assistant that translates {input_language} to {output_language}."
human_template = "{text}"
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", template),
    ("human", human_template),
])
# Attempting to format the messages
# chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")
However, running this code can result in a pydantic.error_wrappers.ValidationError. The error messages typically indicate issues like “value is not a valid dict” for the messages passed to the template. This means LangChain is not interpreting the tuples (“system”, template) and (“human”, human_template) as the expected message objects.
Why Does This Happen?
This error usually comes down to the version of LangChain you are using: the API has changed over time, and code that works in one release may be deprecated or behave differently in another. In particular, the input format that from_messages accepts has varied between versions.
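Because the behaviour depends on the installed release, it is worth checking which version you actually have before changing any code. Here is a quick check from Python (langchain has exposed a __version__ attribute for a long time; if it is missing in your environment, running pip show langchain from the shell gives the same information):

import langchain

print(langchain.__version__)  # e.g. 0.0.300 or 0.1.8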
Solutions to Fix the Error
Here are a couple of ways to address this ValidationError:
1. Update LangChain and Dependencies
The simplest and often most effective solution is to ensure your LangChain library and its dependencies are up to date. Developers frequently release new versions with bug fixes and API improvements.
You can typically update LangChain using pip:
pip install --upgrade langchain
After updating, try running your original code again. Many users find that the example code from the documentation works correctly with the latest library versions. For instance, LangChain 0.0.300 on Python 3.9.5, as well as later releases such as 0.1.8, has been reported to work with the standard tuple-based message definitions.
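If you are on the 0.1.x line, LangChain is also split across companion packages such as langchain-core and langchain-community; upgrading them alongside the main package helps avoid version mismatches (a suggestion rather than a requirement for the snippets in this guide):

pip install --upgrade langchain langchain-core langchain-community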
2. Explicitly Use Message Prompt Templates
If updating doesn’t resolve the issue, or if you are working in an environment where you cannot update easily, you might need to adjust your code to be more explicit about the message types. This involves using classes like SystemMessagePromptTemplate and HumanMessagePromptTemplate.
Here’s how you can modify the code:
from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
template = "You are a helpful assistant that translates {input_language} to {output_language}."
human_template = "{text}"
# Create message prompt templates explicitly
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
# Pass these message prompt objects to from_messages
chat_prompt = ChatPromptTemplate.from_messages([
    system_message_prompt,
    human_message_prompt,
])

# Now, formatting should work
formatted_messages = chat_prompt.format_messages(
    input_language="English",
    output_language="French",
    text="I love programming.",
)
# print(formatted_messages)
This approach explicitly defines each part of the prompt as a specific message prompt type that LangChain can interpret without ambiguity, and it has proved a successful workaround for users on versions such as LangChain 0.0.181 with Python 3.11.
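The formatted_messages value is simply a list of SystemMessage and HumanMessage objects, so you can pass it straight to a chat model. Below is a minimal sketch, assuming you have the langchain-openai package installed and an OpenAI API key set in your environment; any other chat model integration works the same way:

from langchain_openai import ChatOpenAI

# Chat models accept a list of BaseMessages as input
llm = ChatOpenAI(model="gpt-3.5-turbo")
response = llm.invoke(formatted_messages)  # formatted_messages from the snippet above
print(response.content)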
An Alternative for Specific Message Inputs
In some cases, particularly when passing messages directly to a model rather than through a template, a different error may appear: “Invalid input type … Must be a PromptValue, str, or list of BaseMessages”. If you are constructing messages yourself, make sure each one is properly instantiated and, if necessary, wrapped in a list. For example:
from langchain_core.messages import HumanMessage
# If directly creating a message to be used in a list
message_list = [HumanMessage(content="what is the capital of India")]
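To see why the wrapping matters, the sketch below contrasts the two calls; the commented-out line is the pattern reported to trigger the “Invalid input type” error (again assuming the langchain-openai package, though any chat model integration accepts the same input types):

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

# llm.invoke(HumanMessage(content="what is the capital of India"))  # bare message: reported to raise the error
response = llm.invoke([HumanMessage(content="what is the capital of India")])  # list of messages: accepted
print(response.content)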
By trying these solutions, you should be able to resolve the ValidationError and get your LangChain chat prompts working smoothly. Always refer to the official LangChain documentation for the most current practices, as the library evolves rapidly.