How to collect agent pod logs from Google Cloud Logging

Last Reviewed: 2025-12-17

Issue

How can you collect logs from agent pods that have terminated unexpectedly in Google Kubernetes Engine (GKE)?

When agent pods terminate unexpectedly, it can be difficult to retrieve their logs, because the pods are removed shortly after the corresponding build fails.

Google Cloud Logging provides a solution by collecting and storing pod logs; by default, these logs are retained for 30 days.

Prerequisites:

  • CloudBees CI running on Google Kubernetes Engine (GKE)

  • Google Cloud Logging enabled on the cluster (it is enabled by default on GKE clusters)

  • Access to Google Cloud Console with appropriate IAM permissions to view logs

Resolution

  1. Navigate to the Google Cloud Console at https://console.cloud.google.com

  2. Select your GKE project from the project dropdown

  3. Open the navigation menu and select Logging > Logs Explorer

  4. In the query editor, enter the following filter to locate agent pod logs:

    resource.type="k8s_container" resource.labels.namespace_name="YOUR_NAMESPACE" labels."k8s-pod/jenkins" = "slave" resource.labels.container_name="jnlp" resource.labels.pod_name="POD_NAME_XXXXX"
  5. Replace the following placeholders with your actual values:

    • YOUR_NAMESPACE - The Kubernetes namespace where the agent pod ran

    • POD_NAME_XXXXX - The specific pod name (or partial name) to search for

  6. Click Run query to execute the search

  7. Review the log entries in the results panel

  8. To download the logs, click More actions (three vertical dots), select Download logs, and choose either:

    • JSON for structured log data

    • CSV for a human-readable format
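
If the same logs need to be collected repeatedly (for example, after every unexpected agent termination), the filter above can also be run from a script instead of the console. The following is a minimal sketch using the google-cloud-logging Python client; it assumes Application Default Credentials are already configured, and the project, namespace, and pod name values are placeholders to replace with your own.

    # Minimal sketch: fetch agent pod logs with the google-cloud-logging client.
    # Assumes Application Default Credentials (gcloud auth application-default login).
    from google.cloud import logging

    PROJECT = "your-gcp-project"    # placeholder
    NAMESPACE = "YOUR_NAMESPACE"    # placeholder
    POD_NAME = "POD_NAME_XXXXX"     # placeholder

    # Same filter as the Logs Explorer query in step 4.
    FILTER = (
        'resource.type="k8s_container" '
        f'resource.labels.namespace_name="{NAMESPACE}" '
        'labels."k8s-pod/jenkins"="slave" '
        'resource.labels.container_name="jnlp" '
        f'resource.labels.pod_name="{POD_NAME}"'
    )

    client = logging.Client(project=PROJECT)

    # Write matching entries to a local file, oldest first.
    with open(f"{POD_NAME}-jnlp.log", "w") as out:
        for entry in client.list_entries(filter_=FILTER, order_by=logging.ASCENDING):
            out.write(f"{entry.timestamp.isoformat()} {entry.payload}\n")

The resulting file contains the same entries that the console download in step 8 would provide.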

Examples

Filtering logs by time range

To narrow down logs to a specific time period when the pod terminated:

  1. In the Logs Explorer, use the time range selector at the top of the page

  2. Select a custom time range or choose from preset options (last hour, last day, etc.)

  3. Combine with the query filter from the Resolution section:

    resource.type="k8s_container" resource.labels.namespace_name="YOUR_NAMESPACE" labels."k8s-pod/jenkins" = "slave" resource.labels.container_name="jnlp" timestamp>="2025-12-17T10:00:00Z" timestamp<="2025-12-17T11:00:00Z"

Searching for all agent jnlp containers in a namespace

To view logs from all agent jnlp containers regardless of pod name:

  1. In the Logs Explorer query editor, enter:

    resource.type="k8s_container" resource.labels.namespace_name="YOUR_NAMESPACE" labels."k8s-pod/jenkins" = "slave" resource.labels.container_name="jnlp"
  2. Omit the resource.labels.pod_name filter to include all pods

  3. Click Run query to view logs from all agent pods in the namespace
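
Because this query can match many short-lived agent pods, it can help to group the results by pod name when processing them in a script. Below is a minimal sketch, again assuming the google-cloud-logging client, configured credentials, and a placeholder namespace.

    # Sketch: group agent jnlp log entries by pod name for easier review.
    from collections import defaultdict
    from google.cloud import logging

    FILTER = (
        'resource.type="k8s_container" '
        'resource.labels.namespace_name="YOUR_NAMESPACE" '  # placeholder
        'labels."k8s-pod/jenkins"="slave" '
        'resource.labels.container_name="jnlp"'
    )

    client = logging.Client()
    by_pod = defaultdict(list)

    for entry in client.list_entries(filter_=FILTER, order_by=logging.ASCENDING):
        by_pod[entry.resource.labels.get("pod_name", "unknown")].append(entry)

    for pod, entries in sorted(by_pod.items()):
        print(f"{pod}: {len(entries)} log entries")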

Viewing logs for other container types

To view logs from another agent container instead of the JNLP container:

  1. Modify the resource.labels.container_name filter in your query:

    resource.type="k8s_container" resource.labels.namespace_name="YOUR_NAMESPACE" labels."k8s-pod/jenkins" = "slave" resource.labels.container_name="YOUR_CONTAINER"
This article is part of our Knowledge Base and is provided for guidance-based purposes only. The solutions or workarounds described here are not officially supported by CloudBees and may not be applicable in all environments. Use at your own discretion, and test changes in a safe environment before applying them to production systems.