Ollama¶
Configure HolmesGPT to use local models with Ollama.
Warning
Ollama support is experimental. Tool-calling capabilities are limited and may produce inconsistent results. Only Ollama models supported by LiteLLM work with HolmesGPT.
Setup¶
- Download Ollama from ollama.com
- Start Ollama: ollama serve
- Download models: ollama pull <model-name> (see the example below)
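For example, to make a model available locally (llama3.1 is only an illustrative model name; use any Ollama model that LiteLLM supports):
# Pull an example model (the name here is illustrative)
ollama pull llama3.1
# Confirm which models are available locally
ollama list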
Configuration¶
Point HolmesGPT at your local Ollama server, then reference the model with the ollama_chat/ prefix:
export OLLAMA_API_BASE="http://localhost:11434"
holmes ask "what pods are failing?" --model="ollama_chat/<your-ollama-model>"
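If HolmesGPT cannot reach the model, one quick check (assuming the default Ollama port used above) is to query Ollama's model-listing endpoint:
# Lists the models the local Ollama server is serving; an error here means
# the server is not reachable at OLLAMA_API_BASE
curl http://localhost:11434/api/tags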
Using CLI Parameters¶
You can also specify the model directly as a command-line parameter:
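For example, substituting the model you pulled earlier (the question text is only a placeholder):
export OLLAMA_API_BASE="http://localhost:11434"
holmes ask "what pods are failing?" --model="ollama_chat/<your-ollama-model>"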
Additional Resources¶
HolmesGPT uses LiteLLM to support the Ollama provider. Refer to the LiteLLM Ollama docs for more details.