Ollama

Configure HolmesGPT to use local models with Ollama.

Warning

Ollama support is experimental. Tool-calling capabilities are limited and may produce inconsistent results. Only Ollama models supported by LiteLLM work with HolmesGPT.

Setup

  1. Download Ollama from ollama.com
  2. Start Ollama: ollama serve
  3. Download models: ollama pull <model-name> (see the example below)
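
For example, a minimal sketch assuming the llama3.1 model (a hypothetical choice; substitute any LiteLLM-supported Ollama model), run while ollama serve is up in another terminal:

ollama pull llama3.1
ollama list

ollama list should show the pulled model before you point HolmesGPT at it.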

Configuration

export OLLAMA_API_BASE="http://localhost:11434"
holmes ask "what pods are failing?" --model="ollama_chat/<your-ollama-model>"
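
The URL above is Ollama's default local endpoint (port 11434). If your Ollama server runs on another host, point OLLAMA_API_BASE there instead; for example, assuming a hypothetical remote host named ollama.internal:

export OLLAMA_API_BASE="http://ollama.internal:11434"
holmes ask "what pods are failing?" --model="ollama_chat/<your-ollama-model>"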

Using CLI Parameters

You can also specify the model directly as a command-line parameter:

holmes ask "what pods are failing?" --model="ollama_chat/<your-ollama-model>"
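
For instance, assuming the hypothetical llama3.1 model pulled during setup, and with OLLAMA_API_BASE exported as shown in the Configuration section:

holmes ask "what pods are failing?" --model="ollama_chat/llama3.1"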

Additional Resources

HolmesGPT uses LiteLLM to support the Ollama provider. Refer to the LiteLLM Ollama docs for more details.