Issue
How can you collect logs from agent pods that have terminated unexpectedly in Google Kubernetes Engine (GKE)?
When agent pods terminate unexpectedly, it can be difficult to retrieve their logs, because the pods are removed shortly after the corresponding build fails.
Google Cloud Logging provides a solution by collecting and storing pod logs, by default for up to 30 days.
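The 30-day figure is the default retention of the _Default log bucket; if your project uses a different retention, you can check it with a command like the following (a sketch, assuming the _Default bucket in the global location):
  # Prints the configured retention period, in days, of the default log bucket
  gcloud logging buckets describe _Default \
    --location=global \
    --format="value(retentionDays)"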
Prerequisites:
- CloudBees CI running on Google Kubernetes Engine (GKE)
- Google Cloud Logging enabled on the cluster (enabled by default for GKE clusters)
- Access to the Google Cloud Console with appropriate IAM permissions to view logs
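To verify the second prerequisite, you can check the cluster's logging configuration from the command line. A minimal sketch, where CLUSTER_NAME and ZONE are placeholders for your own cluster:
  # Prints logging.googleapis.com/kubernetes when Cloud Logging is enabled on the cluster
  gcloud container clusters describe CLUSTER_NAME \
    --zone=ZONE \
    --format="value(loggingService)"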
Resolution
- Navigate to the Google Cloud Console at https://console.cloud.google.com
- Select your GKE project from the project dropdown
- Open the navigation menu and select Logging > Logs Explorer
- In the query editor, enter the following filter to locate agent pod logs:
  resource.type="k8s_container"
  resource.labels.namespace_name="YOUR_NAMESPACE"
  labels."k8s-pod/jenkins" = "slave"
  resource.labels.container_name="jnlp"
  resource.labels.pod_name="POD_NAME_XXXXX"
- Replace the following placeholders with your actual values:
  - YOUR_NAMESPACE - the Kubernetes namespace where the agent pod ran
  - POD_NAME_XXXXX - the specific pod name (or partial name) to search for
- Click Run query to execute the search
- Review the log entries in the results panel
- To download the logs, click More actions (three vertical dots) and select either:
  - Download logs → JSON for structured log data
  - Download logs → CSV for human-readable format
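If you prefer the command line, the same filter can be passed to the gcloud CLI instead of being run in the Logs Explorer. The snippet below is a sketch; YOUR_PROJECT is a placeholder, and the limit and output format can be adjusted as needed:
  # Download the logs of a single agent pod as JSON (placeholders as in the steps above)
  gcloud logging read \
    'resource.type="k8s_container" AND
     resource.labels.namespace_name="YOUR_NAMESPACE" AND
     labels."k8s-pod/jenkins"="slave" AND
     resource.labels.container_name="jnlp" AND
     resource.labels.pod_name="POD_NAME_XXXXX"' \
    --project=YOUR_PROJECT \
    --limit=200 \
    --format=json > agent-pod-logs.json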
Examples
Filtering logs by time range
To narrow down logs to a specific time period when the pod terminated:
- In the Logs Explorer, use the time range selector at the top of the page
- Select a custom time range or choose from preset options (last hour, last day, etc.)
- Alternatively, combine the query filter from the Resolution section with explicit timestamp conditions:
  resource.type="k8s_container"
  resource.labels.namespace_name="YOUR_NAMESPACE"
  labels."k8s-pod/jenkins" = "slave"
  resource.labels.container_name="jnlp"
  timestamp>="2025-12-17T10:00:00Z"
  timestamp<="2025-12-17T11:00:00Z"
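The same time bounds can be applied from the command line by adding the timestamp conditions to the filter string. A sketch reusing the placeholders above (with explicit timestamps in the filter, gcloud's default --freshness lookback should not be needed):
  # Read agent logs for a one-hour window, oldest entries first
  gcloud logging read \
    'resource.type="k8s_container" AND
     resource.labels.namespace_name="YOUR_NAMESPACE" AND
     labels."k8s-pod/jenkins"="slave" AND
     resource.labels.container_name="jnlp" AND
     timestamp>="2025-12-17T10:00:00Z" AND
     timestamp<="2025-12-17T11:00:00Z"' \
    --project=YOUR_PROJECT \
    --order=asc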
Searching for all agent jnlp containers in a namespace
To view logs from all agent jnlp containers regardless of pod name:
- In the Logs Explorer query editor, enter:
  resource.type="k8s_container"
  resource.labels.namespace_name="YOUR_NAMESPACE"
  labels."k8s-pod/jenkins" = "slave"
  resource.labels.container_name="jnlp"
- Omit the resource.labels.pod_name filter to include all pods
- Click Run query to view logs from all agent pods in the namespace
Viewing logs for other container types
To view logs from another agent container instead of the JNLP container:
- Modify the resource.labels.container_name filter in your query:
  resource.type="k8s_container"
  resource.labels.namespace_name="YOUR_NAMESPACE"
  labels."k8s-pod/jenkins" = "slave"
  resource.labels.container_name="YOUR_CONTAINER"
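If you are not sure which container names a terminated agent pod used (for example, when the pod template defines several containers), the distinct container names that produced logs can be listed with a field projection. A sketch using the same placeholders:
  # List the distinct container names that logged for a given agent pod
  gcloud logging read \
    'resource.type="k8s_container" AND
     resource.labels.namespace_name="YOUR_NAMESPACE" AND
     labels."k8s-pod/jenkins"="slave" AND
     resource.labels.pod_name="POD_NAME_XXXXX"' \
    --project=YOUR_PROJECT \
    --format="value(resource.labels.container_name)" | sort -u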