Ollama

info

This is only helpful for self-hosted users. If you're using Khoj Cloud, you can use our first-party supported models.

info

Khoj can directly run local LLMs available on HuggingFace in GGUF format. The Ollama integration is useful if you run Khoj in Docker and want the chat models to use your GPU, or if you'd like to try new models via the CLI.

Ollama allows you to run many popular open-source LLMs locally from your terminal. For folks comfortable with the terminal, Ollama's terminal-based flows can ease setup and management of chat models.

Ollama exposes a local, OpenAI-compatible API server. This makes it possible to use chat models from Ollama with Khoj.
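
Once Ollama is running, you can sanity-check this OpenAI-compatible endpoint from your terminal. A minimal sketch, assuming Ollama is listening on its default port 11434 and that you have already pulled the llama3.1 model:

    # Query Ollama's OpenAI-compatible chat completions endpoint.
    # Assumes Ollama runs on its default port (11434) and llama3.1 has been pulled.
    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'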

Setup

info

Restart your Khoj server after the first run, or after updating the settings below, to ensure all settings are applied correctly.

  1. Set up Ollama: https://ollama.com/
  2. Download your preferred chat model with Ollama. For example:
    ollama pull llama3.1
  3. Uncomment the OPENAI_API_BASE environment variable in your downloaded Khoj docker-compose.yml (see the illustrative excerpt after this list)
  4. Start the Khoj Docker container for the first time to automatically integrate and load models from the Ollama server running on your host machine:
    # run the below command in the directory where you downloaded the Khoj docker-compose.yml
    docker-compose up
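
The exact contents of the Khoj docker-compose.yml may differ between versions, and the service name below is illustrative. A minimal sketch of the uncommented setting from step 3, assuming Ollama listens on its default port on your host machine:

    services:
      server:
        environment:
          # Point Khoj at the OpenAI-compatible API that Ollama serves on the host.
          # host.docker.internal resolves to the host machine from inside the container.
          - OPENAI_API_BASE=http://host.docker.internal:11434/v1

If Ollama runs on a non-default port or on another machine, adjust the URL accordingly.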

That's it! You should now be able to chat with your Ollama model from Khoj.