Install CLI
Run HolmesGPT from your terminal as a standalone CLI tool.
Installation Options

Homebrew (Mac/Linux)

- Add our tap
- Install HolmesGPT
- Upgrade to the latest version
- Verify installation

The commands for these steps are sketched after this list.
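A minimal sketch, assuming the project's Homebrew tap lives under the robusta-dev organization (verify the tap name against the official docs):

brew tap robusta-dev/homebrew-holmesgpt   # assumed tap name
brew install holmesgpt
brew upgrade holmesgpt                    # upgrade to the latest version
holmes --help                             # verify the installation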
Pipx (all platforms)

- Install pipx
- Install HolmesGPT
- Verify installation

The corresponding commands are sketched below.
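A minimal sketch, assuming pipx is installed via pip and that HolmesGPT can be installed by pipx directly from its GitHub repository; the install source is an assumption, so check the official docs for the recommended one:

python3 -m pip install --user pipx && python3 -m pipx ensurepath    # install pipx
pipx install "git+https://github.com/robusta-dev/holmesgpt.git"     # assumed install source
holmes --help                                                       # verify the installation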
From Source (Poetry)

For development or custom builds:

- Install Poetry
- Install HolmesGPT from source
- Verify installation

A sketch of these commands follows.
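A minimal sketch, assuming the standard Poetry workflow against the project's GitHub repository; the name of the CLI entry-point script is an assumption and may differ between releases, so check the repository README:

curl -sSL https://install.python-poetry.org | python3 -    # install Poetry
git clone https://github.com/robusta-dev/holmesgpt.git
cd holmesgpt
poetry install --no-root                                   # install dependencies into a Poetry-managed virtualenv
poetry run python3 holmes_cli.py --help                    # verify; entry-point script name is an assumption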
Docker

Run HolmesGPT using the prebuilt Docker container:
docker run -it --net=host \
-e OPENAI_API_KEY="your-api-key" \
-v ~/.holmes:/root/.holmes \
-v ~/.aws:/root/.aws \
-v ~/.config/gcloud:/root/.config/gcloud \
-v $HOME/.kube/config:/root/.kube/config \
us-central1-docker.pkg.dev/genuine-flight-317411/devel/holmes ask "what pods are unhealthy and why?"
Note: Pass environment variables using -e flags. An example for OpenAI is shown above; adjust it for other AI providers by passing -e GEMINI_API_KEY, -e ANTHROPIC_API_KEY, etc.
Quick Start
After installation, choose your AI provider and follow the steps below. See supported AI Providers for more details.
OpenAI

- Set up API key
- Create a test pod to investigate
- Ask your first question

See OpenAI Configuration for more details; the commands for these steps are sketched below.
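A minimal sketch of these steps. OPENAI_API_KEY is the variable shown in the Docker example above; the broken-pod test pod is illustrative, not the docs' exact demo:

export OPENAI_API_KEY="your-openai-api-key"
kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical pod that will sit in ImagePullBackOff
holmes ask "what is wrong with the broken-pod pod?"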
Azure OpenAI

- Set up API key:

export AZURE_API_VERSION="2024-02-15-preview"
export AZURE_API_BASE="https://your-resource.openai.azure.com"
export AZURE_API_KEY="your-azure-api-key"

- Create a test pod to investigate
- Ask your first question

See Azure OpenAI Configuration for more details; the remaining commands are sketched below.
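A minimal sketch of the remaining steps, assuming holmes ask accepts a LiteLLM-style --model identifier of the form azure/<your-deployment-name>; the test pod is illustrative:

kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical failing pod
holmes ask "what is wrong with the broken-pod pod?" --model="azure/<your-deployment-name>"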
AWS Bedrock

- Set up API key:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="your-region"

- Create a test pod to investigate
- Ask your first question

See AWS Bedrock Configuration for more details; the remaining commands are sketched below.

Note: You must install boto3>=1.28.57 and replace <your-model-name> with an actual model name like anthropic.claude-3-5-sonnet-20240620-v1:0. See Finding Available Models for instructions.

Ask follow-up questions to refine your investigation.
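A minimal sketch of the remaining steps, assuming the model is passed as a LiteLLM-style bedrock/<your-model-name> identifier; the test pod is illustrative:

pip install "boto3>=1.28.57"                               # required for Bedrock, per the note above
kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical failing pod
holmes ask "what is wrong with the broken-pod pod?" --model="bedrock/<your-model-name>"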
Anthropic

- Set up API key
- Create a test pod to investigate
- Ask your first question

See Anthropic Configuration for more details; a command sketch for these steps follows.
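A minimal sketch; ANTHROPIC_API_KEY is the variable named in the Docker note above, while the model identifier and test pod are illustrative:

export ANTHROPIC_API_KEY="your-anthropic-api-key"
kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical failing pod
holmes ask "what is wrong with the broken-pod pod?" --model="anthropic/<model-name>"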
Google Gemini

- Set up API key
- Create a test pod to investigate
- Ask your first question

See Google Gemini Configuration for more details; a command sketch for these steps follows.
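A minimal sketch; GEMINI_API_KEY is the variable named in the Docker note above, while the model identifier and test pod are illustrative:

export GEMINI_API_KEY="your-gemini-api-key"
kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical failing pod
holmes ask "what is wrong with the broken-pod pod?" --model="gemini/<model-name>"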
Google Vertex AI

- Set up credentials:

export VERTEXAI_PROJECT="your-project-id"
export VERTEXAI_LOCATION="us-central1"
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account-key.json"

- Create a test pod to investigate
- Ask your first question

See Google Vertex AI Configuration for more details; the remaining commands are sketched below.
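A minimal sketch of the remaining steps, assuming a LiteLLM-style vertex_ai/<model-name> identifier; the test pod is illustrative:

kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical failing pod
holmes ask "what is wrong with the broken-pod pod?" --model="vertex_ai/<model-name>"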
Ollama

- Set up API key: no API key is required for a local Ollama installation.
- Create a test pod to investigate
- Ask your first question

See Ollama Configuration for more details; a command sketch for these steps follows.

Note: Only LiteLLM-supported Ollama models work with HolmesGPT. Check the LiteLLM Ollama documentation for supported models.
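A minimal sketch, assuming a local Ollama server and a LiteLLM-style ollama/<model-name> identifier (LiteLLM also supports an ollama_chat/ prefix); the test pod is illustrative:

kubectl run broken-pod --image=nonexistent/image:latest    # hypothetical failing pod
holmes ask "what is wrong with the broken-pod pod?" --model="ollama/<model-name>"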
Next Steps
- Add Data Sources - Use built-in toolsets to connect with ArgoCD, Confluence, and monitoring tools
- Connect Remote MCP Servers - Extend capabilities with external MCP servers
Need Help?
- Join our Slack - Get help from the community
- Request features on GitHub - Suggest improvements or report bugs
- Troubleshooting guide - Common issues and solutions