How to use your preferred LLM, Embedder or Document Store in Wren AI

warning

We highly recommend using OpenAI GPT-4o or GPT-4o-mini with Wren AI. These models have been extensively tested to ensure optimal performance and compatibility.

While it is technically possible to integrate other AI models, please note that they have not been fully tested with our system. Therefore, using alternative models is at your own risk and may result in unexpected behavior or suboptimal performance.

Running Wren AI with your Custom LLM, Embedder or Document Store

To set up Wren AI with your custom LLM, Embedder or Document Store, follow these steps:

  1. Copy and Rename the Configuration File
    First, you need to copy the example configuration file and rename it. This file will be used to configure your custom provider.
  • Replace <WRENAI_VERSION_NUMBER> with the Wren AI version you are using.

  • For macOS or Linux Users: Open your terminal and run the following command:

    wget -O config.example.yaml https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml && \
    mkdir -p ~/.wrenai && cp config.example.yaml ~/.wrenai/config.yaml
  • For Windows Users: Open PowerShell and execute these commands:

    wget -O config.example.yaml https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
    mkdir -p ~/.wrenai
    cp config.example.yaml ~/.wrenai/config.yaml.txt
    notepad ~/.wrenai/config.yaml.txt # Fill in required configurations
    mv ~/.wrenai/config.yaml.txt ~/.wrenai/config.yaml # Rename the file
  2. Update Your Configuration
    Open the ~/.wrenai/config.yaml file and update it to match your custom LLM, Embedder, or Document Store settings. You can refer to the examples below for guidance on how to configure these settings.
  • For custom LLM

    • We now use LiteLLM to support LLMs, so you can use any LLM supported by LiteLLM.

    • For example, if you want to use llama3.1:8b from LM Studio (for local LLMs, we recommend LM Studio for its more consistent JSON output support):

      1. Add the following configuration to your config.yaml under the litellm_llm section:

         type: llm
         provider: litellm_llm
         timeout: 120
         models:
           # omitted other model definitions
           - kwargs:
               n: 1
               temperature: 0
             # please replace with your model name here, should be lm_studio/<MODEL_NAME>
             model: lm_studio/mlx-community/meta-llama-3.1-8b-instruct
             api_base: http://host.docker.internal:1234
             api_key_name: LLM_LM_STUDIO_API_KEY
      2. Add the following environment variable to the .env file in the ~/.wrenai directory:

         LLM_LM_STUDIO_API_KEY=random # just put a random string here, should not be empty
    • Please refer to the LiteLLM documentation for more details about each LLM's supported parameters.

  • For custom Embedder

    • As of now, we only support embedding models from OpenAI, Azure OpenAI, OpenAI-compatible providers, and Ollama.

    • For example, if you want to use nomic-embed-text from Ollama, add the following configuration to your config.yaml under the ollama_embedder section. Also make sure that embedding_model_dim under the document_store section is set to the dimension of the embedding model:

      ---
      type: embedder
      provider: ollama_embedder
      models:
        - model: nomic-embed-text
          dimension: 768
      url: http://localhost:11434
      timeout: 120
      ---
      type: document_store
      provider: qdrant
      location: http://qdrant:6333
      embedding_model_dim: 768
      timeout: 120
      recreate_index: true
    • If you are using Ollama, please add EMBEDDER_OLLAMA_URL=http://host.docker.internal:11434 to the .env file in the ~/.wrenai directory.

  3. Launch Wren AI
  • Run the following commands to start Wren AI:
    • Go to the ~/.wrenai directory:
      cd ~/.wrenai
    • Start Wren AI:
      docker-compose --env-file .env up -d --force-recreate wren-ai-service
note

For Ollama Integration:

  • Run Ollama as a desktop application:
    • Only for Windows/macOS users.
    • Install Ollama from ollama.com.
    • Start the Ollama desktop application or run ollama serve in your terminal.
    • Pull your desired model using the command: ollama pull <model_name>.
    • Set the url in the ollama_embedder/ollama_llm section of config.yaml to point to your Ollama server (default: http://host.docker.internal:11434).
  • Run Ollama in a Docker container:
    • For Windows/macOS/Linux users.
    • Run Ollama in a Docker container using the following command: docker run -d --network wrenai_wren -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama.
    • Pull your desired model inside the container, e.g. docker exec -it ollama ollama pull <model_name>.
    • Set the url in the ollama_embedder/ollama_llm section of config.yaml to point to your Ollama server (default: http://ollama:11434).

Adding a Custom LLM, Embedder or Document Store to Wren AI

The Wren AI team is working hard to improve the text-to-SQL user experience and performance. However, we hope to leverage the power of the community to make faster progress. We warmly welcome contributors to provide feedback, create issues, or open pull requests.

We have received feedback from users who want to use their preferred LLM or Document Store. We are happy to announce that it is now possible to add your preferred LLM or Document Store to Wren AI.

Here is how you can add your preferred LLM or Document Store and contribute in this area:

Decide on the LLM, Embedder or Document Store you would like to add

Under the hood, Wren AI uses Haystack to provide the LLM and Document Store functionality. You can find the list of supported LLMs and Document Stores in the Haystack documentation.

Haystack supports a wide range of LLMs and Document Stores, and its simple APIs and great developer experience make it easy to add custom components to Wren AI.

For embedders, please make sure the embedding model is supported by the Document Store you choose. For example, these are the embedding models supported by Qdrant. You can also refer to the LLMs supported by Haystack to check whether the corresponding embedder is supported.
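
To get a sense of what these components look like, here is a minimal standalone sketch, assuming the haystack-ai 2.x and qdrant-haystack packages. This is not Wren AI's own code (the service wraps async variants of such components), but it illustrates the kind of generator and document store APIs a provider builds on:

# A minimal standalone sketch, assuming the haystack-ai 2.x and qdrant-haystack
# packages; not Wren AI's own code, just an illustration of the component APIs
# a provider typically wraps.
from haystack.components.generators import OpenAIGenerator
from haystack_integrations.document_stores.qdrant import QdrantDocumentStore

# Requires OPENAI_API_KEY in the environment.
generator = OpenAIGenerator(model="gpt-4o-mini")
document_store = QdrantDocumentStore(url="http://localhost:6333", embedding_dim=1536)

result = generator.run(prompt="Write a SQL query that counts the rows in a table named orders.")
print(result["replies"][0])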

Create a provider definition file under the llm, embedder or document_store package

The file structure should look like this:

src
|__ providers
| |__ llm
| |__ embedder
| |__ document_store

For example, if you would like to add Mistral as a new LLM provider, you might add a new file called mistral.py under the llm package.

Create a class that inherits from LLMProvider, EmbedderProvider or DocumentStoreProvider

Below is an example of the OpenAILLMProvider implementation. There are several things you need to consider:

  1. The class should inherit from LLMProvider and implement the necessary methods.
  2. We use the async version of the generator class; otherwise there will be performance issues.
  3. Please make sure the provider name is the same as the file name with an _llm suffix.
  4. Please make sure you define the default variables for the provider, such as GENERATION_MODEL and GENERATION_MODEL_KWARGS, and that these variables are also defined in the env files.
OPENAI_API_BASE = "https://api.openai.com/v1"
GENERATION_MODEL_NAME = "gpt-4o-mini"
GENERATION_MODEL_KWARGS = {
    "temperature": 0,
    "n": 1,
    "max_tokens": 4096,
    "response_format": {"type": "json_object"},
}

@provider("openai_llm")
class OpenAILLMProvider(LLMProvider):
    def __init__(
        ...
    ):
        ...

    def get_generator(
        ...
    ):
        return AsyncGenerator(
            ...
        )

Other providers such as EmbedderProvider and DocumentStoreProvider should follow a similar pattern. You can check out the official implementations for reference here.
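
For instance, a custom embedder provider could look like the following hypothetical skeleton. The import paths, method names, and the custom_embedder name are assumptions for illustration; mirror the existing files under the embedder package and the official implementations above for the exact interface.

# A hypothetical skeleton (not an official implementation); import paths and
# method names are assumptions, so mirror the existing embedder providers.
from src.core.provider import EmbedderProvider
from src.providers.loader import provider

EMBEDDING_MODEL_NAME = "your-embedding-model"
EMBEDDING_MODEL_DIMENSION = 768


@provider("custom_embedder")
class CustomEmbedderProvider(EmbedderProvider):
    def __init__(self, url: str, embedding_model: str = EMBEDDING_MODEL_NAME, timeout: int = 120):
        self._url = url
        self._embedding_model = embedding_model
        self._timeout = timeout

    def get_text_embedder(self):
        # Return the async text embedder component used to embed user queries.
        ...

    def get_document_embedder(self):
        # Return the async document embedder component used at indexing time.
        ...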

Configure your provider

After creating your custom provider class, you'll need to configure it in the wren-ai-service/config.yaml file. This configuration file is essential for telling Wren AI how to interact with your provider and its models.

For LLM providers, add a configuration block with the following structure. This defines how your custom LLM provider will be initialized and used:

type: llm
provider: custom_llm_name
models:
  - model: model_name
    kwargs:
      temperature: 0
      max_tokens: 4096
      # other model-specific parameters
api_base: api_endpoint
# other provider-specific configurations

Note that provider-specific configurations are optional and depend on your implementation. The parameter names in the configuration must match the parameter names in your provider's constructor. For example, if your constructor takes project_id and organization_id, you would configure those same names in the YAML file.
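
As a hypothetical illustration of this matching rule (the provider and parameter names below are placeholders, not a real provider):

# Hypothetical example: the YAML keys in the provider block must use the same
# names as the constructor parameters, e.g.
#
#   type: llm
#   provider: custom_llm_name
#   project_id: my-project         # passed to __init__ as project_id
#   organization_id: my-org        # passed to __init__ as organization_id
#   models:
#     - model: model_name
#
@provider("custom_llm_name")
class CustomLLMProvider(LLMProvider):
    def __init__(self, project_id: str, organization_id: str, **kwargs):
        self._project_id = project_id
        self._organization_id = organization_id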

For embedder providers, configure them using this structure:

type: embedder
provider: custom_embedder_name
models:
  - model: model_name
    dimension: 1536 # specify your embedding dimension
api_base: api_endpoint
timeout: 30 # optional timeout in seconds

Finally, configure your custom provider in the pipeline section. This section defines how different components like LLMs and embedders work together:

type: pipeline
pipes:
  - name: pipeline_name
    llm: custom_llm_name.model_name
    embedder: custom_embedder_name.model_name
    # other pipeline configurations

For a more in-depth understanding of how to configure custom providers, including practical examples and best practices, please check out our detailed configuration documentation. Additionally, for comprehensive configuration examples, you can refer to the configuration example file.