Install CLI

Run HolmesGPT from your terminal as a standalone CLI tool.

Installation Options

Homebrew (macOS/Linux):

  1. Add our tap:

    brew tap robusta-dev/homebrew-holmesgpt
    

  2. Install HolmesGPT:

    brew install holmesgpt
    

  3. Upgrade to the latest version (optional):

    brew upgrade holmesgpt
    

  4. Verify installation:

    holmes ask --help
    

Pipx:

  1. Install pipx

  2. Install HolmesGPT:

    pipx install holmesgpt
    

  3. Verify installation:

    holmes ask --help
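
If pipx itself is not yet installed, one common way to get it (per the pipx documentation) is via pip; after `ensurepath`, open a new terminal so the PATH change takes effect:

```shell
# Install pipx into the user site-packages and put its bin dir on PATH
python3 -m pip install --user pipx
python3 -m pipx ensurepath
```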
    

For development or custom builds:

  1. Install Poetry

  2. Install HolmesGPT:

    git clone https://github.com/robusta-dev/holmesgpt.git
    cd holmesgpt
    poetry install --no-root
    

  3. Verify installation:

    poetry run holmes ask --help
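
If Poetry is not yet installed, one option suggested in the Poetry documentation is to install it with pipx (this assumes pipx is already available):

```shell
pipx install poetry
```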
    

Run HolmesGPT using the prebuilt Docker container:

docker run -it --net=host \
  -e OPENAI_API_KEY="your-api-key" \
  -v ~/.holmes:/root/.holmes \
  -v ~/.aws:/root/.aws \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -v $HOME/.kube/config:/root/.kube/config \
  us-central1-docker.pkg.dev/genuine-flight-317411/devel/holmes ask "what pods are unhealthy and why?"

Note: Pass environment variables using -e flags. An example for OpenAI is shown above. Adjust it for other AI providers by passing -e GEMINI_API_KEY, -e ANTHROPIC_API_KEY, etc.
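
For example, a minimal sketch of the same command adapted for Anthropic, mounting only the kubeconfig and selecting the model with `--model` as in the Quick Start below (`<your-model-name>` is a placeholder you must fill in):

```shell
docker run -it --net=host \
  -e ANTHROPIC_API_KEY="your-api-key" \
  -v $HOME/.kube/config:/root/.kube/config \
  us-central1-docker.pkg.dev/genuine-flight-317411/devel/holmes \
  ask "what pods are unhealthy and why?" --model="anthropic/<your-model-name>"
```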

Quick Start

After installation, choose your AI provider and follow the steps below. See supported AI Providers for more details.

OpenAI:

  1. Set up API key:

    export OPENAI_API_KEY="your-api-key"
    

    See OpenAI Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?"
    

Azure OpenAI:

  1. Set up API key:

    export AZURE_API_VERSION="2024-02-15-preview"
    export AZURE_API_BASE="https://your-resource.openai.azure.com"
    export AZURE_API_KEY="your-azure-api-key"
    

    See Azure OpenAI Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?" --model="azure/<your-model-name>"
    

AWS Bedrock:

  1. Set up API key:

    export AWS_ACCESS_KEY_ID="your-access-key"
    export AWS_SECRET_ACCESS_KEY="your-secret-key"
    export AWS_DEFAULT_REGION="your-region"
    

    See AWS Bedrock Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?" --model="bedrock/<your-model-name>"
    

    Note: You must install boto3>=1.28.57 and replace <your-model-name> with an actual model name like anthropic.claude-3-5-sonnet-20240620-v1:0. See Finding Available Models for instructions.
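
If HolmesGPT was installed with pipx, one way to add boto3 into its isolated environment is `pipx inject` (a sketch, assuming a pipx install; adjust for however you installed):

```shell
pipx inject holmesgpt "boto3>=1.28.57"
```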

Tip: ask follow-up questions to refine your investigation.

Anthropic:

  1. Set up API key:

    export ANTHROPIC_API_KEY="your-api-key"
    

    See Anthropic Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?" --model="anthropic/<your-model-name>"
    

Google Gemini:

  1. Set up API key:

    export GEMINI_API_KEY="your-gemini-api-key"
    

    See Google Gemini Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?" --model="gemini/<your-gemini-model>"
    

Google Vertex AI:

  1. Set up credentials:

    export VERTEXAI_PROJECT="your-project-id"
    export VERTEXAI_LOCATION="us-central1"
    export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account-key.json"
    

    See Google Vertex AI Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?" --model="vertex_ai/<your-vertex-model>"
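
If you prefer not to manage a service-account key file, gcloud user credentials can also supply Application Default Credentials, in which case `GOOGLE_APPLICATION_CREDENTIALS` can typically be omitted:

```shell
gcloud auth application-default login
```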
    

Ollama:

  1. Set up API key: none is required for a local Ollama installation.

    See Ollama Configuration for more details.

  2. Create a test pod to investigate:

    kubectl apply -f https://raw.githubusercontent.com/robusta-dev/kubernetes-demos/main/pending_pods/pending_pod_node_selector.yaml
    

  3. Ask your first question:

    holmes ask "what is wrong with the user-profile-import pod?" --model="ollama/<your-model-name>"
    

Note: Only LiteLLM-supported Ollama models work with HolmesGPT. Check the LiteLLM Ollama documentation for supported models.
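
For example, a sketch assuming a local Ollama server is running and using `llama3` purely as an illustrative model name (pull whichever LiteLLM-supported model you actually want):

```shell
ollama pull llama3
holmes ask "what is wrong with the user-profile-import pod?" --model="ollama/llama3"
```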
