How to Customize an LLM, Embedder or Document Store in Wren AI
Decide on the LLM, Embedder or Document Store you would like to add
Under the hood, Wren AI uses Haystack and LiteLLM to provide the LLM, embedding model, and Document Store functionality. You can find the list of supported LLMs and Document Stores in their documentation.
Haystack provides a wide range of LLMs and Document Stores, and its simple APIs and great developer experience make it easy to add custom components to Wren AI.
For embedders, please make sure the one you pick is supported by the Document Store you choose; for example, Qdrant documents the embedding models it supports. You can also refer to the LLMs supported by Haystack to check whether the corresponding embedder is supported.
Create a provider definition file under the llm, embedder or document_store package
The file structure should look like this:
```
src
|__ providers
|   |__ llm
|   |__ embedder
|   |__ document_store
```
For example, if you would like to add Mistral as a new LLM provider, you might add a new file called mistral.py under the llm package.
Create a class that inherits from LLMProvider, EmbedderProvider or DocumentStoreProvider
Below is an example of the OpenAILLMProvider implementation. There are several things you need to consider:
- The class should inherit from LLMProvider and implement the necessary methods.
- We use the async version of the generator class; otherwise there will be a performance issue.
- Please make sure the provider name is the same as the file name, with _llm as a suffix.
- Please make sure you define the default variables for the provider, such as GENERATION_MODEL_NAME and GENERATION_MODEL_KWARGS, and also define these variables in the env files.
```python
OPENAI_API_BASE = "https://api.openai.com/v1"
GENERATION_MODEL_NAME = "gpt-4o-mini"
GENERATION_MODEL_KWARGS = {
    "temperature": 0,
    "n": 1,
    "max_tokens": 4096,
    "response_format": {"type": "json_object"},
}


@provider("openai_llm")
class OpenAILLMProvider(LLMProvider):
    def __init__(
        ...
    ):
        ...

    def get_generator(
        ...
    ):
        return AsyncGenerator(
            ...
        )
```
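To connect this pattern back to the earlier mistral.py example, here is a minimal sketch of what such a provider file could look like. The provider name follows the file-name-plus-_llm rule above, but the API base, default model, constructor parameters, and the AsyncGenerator keyword arguments are all illustrative assumptions; check the official implementations for the exact interface.

```python
import os

# LLMProvider, AsyncGenerator, and the @provider decorator come from
# Wren AI's provider package; import paths are omitted here.

MISTRAL_API_BASE = "https://api.mistral.ai/v1"  # assumed endpoint
GENERATION_MODEL_NAME = "mistral-large-latest"  # hypothetical default model
GENERATION_MODEL_KWARGS = {
    "temperature": 0,
    "max_tokens": 4096,
}


# Provider name = file name (mistral.py) + the _llm suffix.
@provider("mistral_llm")
class MistralLLMProvider(LLMProvider):
    def __init__(
        self,
        api_key: str = os.getenv("MISTRAL_API_KEY"),
        api_base: str = os.getenv("MISTRAL_API_BASE", MISTRAL_API_BASE),
        model: str = os.getenv("GENERATION_MODEL_NAME", GENERATION_MODEL_NAME),
        **kwargs,
    ):
        self._api_key = api_key
        self._api_base = api_base
        self._model = model

    def get_generator(self, model_kwargs: dict = GENERATION_MODEL_KWARGS, **kwargs):
        # Use the async generator variant, as noted above, to avoid the
        # performance issue with the synchronous version.
        return AsyncGenerator(
            api_key=self._api_key,
            api_base_url=self._api_base,  # keyword names are placeholders
            model=self._model,
            generation_kwargs=model_kwargs,
        )
```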
Other providers such as EmbedderProvider and DocumentStoreProvider should follow a similar pattern. You can check out the official implementations for reference here.
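For instance, an embedder provider might look roughly like the sketch below. The method names (get_text_embedder, get_document_embedder) and the embedder classes are assumptions based on Haystack's usual text/document embedder split; verify them against the official implementations.

```python
# EmbedderProvider and the @provider decorator come from Wren AI's
# provider package; the embedder classes are hypothetical stand-ins.
@provider("mistral_embedder")
class MistralEmbedderProvider(EmbedderProvider):
    def __init__(self, api_key: str, model: str, **kwargs):
        self._api_key = api_key
        self._model = model

    def get_text_embedder(self):
        # Embeds the user's question at query time.
        return AsyncTextEmbedder(api_key=self._api_key, model=self._model)

    def get_document_embedder(self):
        # Embeds documents at indexing time.
        return AsyncDocumentEmbedder(api_key=self._api_key, model=self._model)
```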
Configure your provider
After creating your custom provider class, you'll need to configure it in the wren-ai-service/config.yaml file. This configuration file tells Wren AI how to interact with your provider and its models.
For LLM providers, add a configuration block with the following structure. This defines how your custom LLM provider will be initialized and used:
```yaml
type: llm
provider: custom_llm_name
models:
  - model: model_name
    kwargs:
      temperature: 0
      max_tokens: 4096
      # other model-specific parameters
api_base: api_endpoint
# other provider-specific configurations
```
Note that provider-specific configurations are optional and depend on your implementation. The parameter names in the configuration must match the parameter names in your provider's constructor. For example, if your constructor takes project_id and organization_id, you would configure those same names in the YAML file, as in the sketch below.
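Here is a hypothetical illustration of that matching rule, assuming a constructor that accepts project_id and organization_id (all values are placeholders):

```yaml
type: llm
provider: custom_llm_name
models:
  - model: model_name
    kwargs:
      temperature: 0
project_id: my-project    # matches the constructor's project_id parameter
organization_id: my-org   # matches the constructor's organization_id parameter
```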
For embedder providers, configure them using this structure:
```yaml
type: embedder
provider: custom_embedder_name
models:
  - model: model_name
    dimension: 1536  # specify your embedding dimension
api_base: api_endpoint
timeout: 30  # optional timeout in seconds
```
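Although the examples here focus on LLMs and embedders, a custom document store is configured with the same top-level pattern. The field names below (location, embedding_model_dim) are assumptions modeled on the Qdrant setup; verify them against the configuration example file:

```yaml
type: document_store
provider: custom_document_store_name
location: http://localhost:6333  # where the store is reachable (assumed field name)
embedding_model_dim: 1536        # must match the embedder's dimension (assumed field name)
timeout: 120
```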
Finally, configure your custom provider in the pipeline section. This section defines how different components like LLMs and embedders work together:
```yaml
type: pipeline
pipes:
  - name: pipeline_name
    llm: custom_llm_name.model_name
    embedder: custom_embedder_name.model_name
    # other pipeline configurations
```
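Putting the pieces together, the sketch below shows how the blocks above could be combined into one config.yaml. It assumes the file is multi-document YAML with sections separated by ---, and all names remain placeholders to be replaced with your own:

```yaml
type: llm
provider: custom_llm_name
models:
  - model: model_name
    kwargs:
      temperature: 0
      max_tokens: 4096
api_base: api_endpoint
---
type: embedder
provider: custom_embedder_name
models:
  - model: model_name
    dimension: 1536
api_base: api_endpoint
---
type: pipeline
pipes:
  - name: pipeline_name
    llm: custom_llm_name.model_name
    embedder: custom_embedder_name.model_name
```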
For a more in-depth understanding of how to configure custom providers, including practical examples and best practices, please check out our detailed configuration documentation. Additionally, for comprehensive configuration examples, you can refer to the configuration example file.