New Features
- CloudBees CI on modern cloud platforms supports Red Hat OpenShift 4.19

  CloudBees CI on modern cloud platforms now supports Red Hat OpenShift 4.19. Note that Kubernetes 1.33 is not currently supported. Refer to Supported platforms for CloudBees CI on modern cloud platforms for more information.
Feature Enhancements
- Backup to AWS S3 Bucket supports custom endpoints

  Support for custom endpoints has been added to the S3 store for backup jobs. The following advanced options can now be configured to support most S3-compatible storage systems (a hypothetical sketch follows the list):

  - custom endpoints and protocol
  - custom signing region
  - path-style access
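  For illustration only, the snippet below sketches how these three options typically combine for an S3-compatible backend such as MinIO. The field names are hypothetical placeholders, not the plugin's actual schema; refer to the backup configuration documentation for the real option names.

  ```yaml
  # Hypothetical sketch of an S3-compatible backup store configuration.
  # Field names are illustrative only, not the plugin's actual schema.
  backupStore:
    s3:
      bucket: ci-backups
      endpoint: https://minio.example.com:9000  # custom endpoint and protocol
      signingRegion: us-east-1                  # custom signing region
      pathStyleAccess: true                     # required by many S3-compatible systems
  ```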
- CloudBees CI version format change – June 2025

  Starting with the June 2025 release, the CloudBees CI version format will include a fourth segment, changing from `2.504.3.x` to `2.504.3.abcd`.
- Configuration as Code Bundle Service can be deployed in a multi-cluster setup

  The Configuration as Code Bundle Service can now be used by controllers provisioned in different namespaces or clusters. Note that this deployment type requires additional, non-default settings (a rough sketch follows). Refer to Deploy CloudBees CI across multiple Kubernetes namespaces and clusters for more information about the different kinds of deployments, including multi-cluster deployments.
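  As an illustration of the shape such a setup takes, a secondary namespace typically runs controllers only and points them at the bundle service exposed by the operations center in its own namespace. The values keys below are hypothetical placeholders, not the chart's actual settings; the linked documentation describes the real ones.

  ```yaml
  # Hypothetical Helm values sketch for a secondary namespace whose
  # controllers consume bundles from an operations center elsewhere.
  # Keys are illustrative only.
  OperationsCenter:
    Enabled: false   # no second operations center in this namespace
  CascBundleService:
    url: http://cjoc.cjoc-namespace.svc.cluster.local/casc-bundle   # hypothetical cross-namespace endpoint
  ```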
- Disabling template-based jobs through the UI by using a toggle switch

  A toggle switch has been added at the top right of the job configuration page, allowing users to enable or disable template-based jobs directly from the UI. Its placement is consistent with the existing toggle switches used across different job types.
Resolved Issues
- Reclaim threads after WebSocket reverse proxy operations

  Whenever a WebSocket agent was connected to a High Availability (HA) controller, a new thread was spawned and retained for a few minutes. This remains true on Java 17. However, on controllers running Java 21 (or newer), this thread is now reclaimed immediately after use, which reduces memory requirements when a large number of WebSocket agents attempt to connect.
- Added an ingress rule in `NetworkPolicy` to allow connections from different namespaces

  Added a rule in the CloudBees operations center `NetworkPolicy` to allow an ingress connection from a controller installed in a separate namespace. The namespace for a secondary deployment needs to be labeled with `cloudbees.com/role: controller`, as the chart does not create the namespace where it is deployed. A minimal example follows.
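  For illustration, the label can be applied when creating the secondary namespace; the namespace name below is a hypothetical placeholder.

  ```yaml
  # Namespace for a controller deployed apart from the operations center.
  # The cloudbees.com/role: controller label lets the operations center
  # NetworkPolicy accept ingress traffic from this namespace.
  apiVersion: v1
  kind: Namespace
  metadata:
    name: controllers-b   # hypothetical namespace name
    labels:
      cloudbees.com/role: controller
  ```

  For an existing namespace, `kubectl label namespace controllers-b cloudbees.com/role=controller` applies the same label.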
- Configuration as Code downloads using `$JENKINS_HOME` for temporary directories

  Various Configuration as Code bundle download modes created temporary directories in the root of `$JENKINS_HOME`. While these were usually deleted soon after use, problems could occur when deleting the files if this volume was mounted on NFS. Now, the standard system temporary directory is used.
- Duplicate `PodDisruptionBudget` generation when overriding it via the advanced YAML field

  When overriding a `PodDisruptionBudget` via the advanced YAML field, the resulting model would duplicate the object. See the sketch below.
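  For context, such an override is an ordinary `PodDisruptionBudget` manifest supplied through the advanced YAML field, along the lines of the hypothetical example below; previously, the resulting model contained this object twice.

  ```yaml
  # Hypothetical PodDisruptionBudget override supplied via the advanced
  # YAML field (name and selector are illustrative only).
  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: controller-pdb   # hypothetical name
  spec:
    minAvailable: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: my-controller   # hypothetical selector
  ```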
- Fixed the SAML plugin (`saml`) session serialization issue in High Availability (HA) environments

  In the SAML plugin (`saml`), authenticated user sessions were not properly serialized because of the use of `NonSerializableSecurityContext`. As a result, session data could not be restored correctly when requests were routed to different cluster replicas, causing users to be unexpectedly logged out or to appear unauthenticated. This fix updates the SAML plugin (`saml`) to use a `SecurityContextImpl`, which is serializable.
- High Availability (HA) multiple-executor permanent agents now dynamically adjust processes on scale up/down

  Previously, when scaling a multiple-executor permanent agent up or down on High Availability (HA) controllers, agent processes on the target operating systems (Linux or Windows) did not reflect the change dynamically: additional executors remained in a disconnected state when the count was increased, and executor processes were not terminated when scaling down. This release improves the agent runtime behavior. Now, when the executor count is increased or decreased from the controller, new Java agent processes are automatically spawned on target VMs during scale-up, and surplus agent processes are gracefully terminated during scale-down.
- Multibranch Pipeline validation no longer fails due to GitHub App credentials if an Owner is not specified

  Previously, if a GitHub App was installed in multiple GitHub organizations, the Owner of the App was not specified in the CloudBees CI credentials, and a Multibranch Pipeline was run using the GitHub App as the authentication method, then an exception was returned when validating the configuration of the Multibranch Pipeline.
- Duplicate `id` in Role-Based Access Control role exports

  When exporting Role-Based Access Control (RBAC) roles using `http://<cjoc_url>/roles/id/api/xml` for CloudBees CI on modern cloud platforms or `http://<cjoc_url>/cjoc/roles/id/api/xml` for CloudBees CI on traditional platforms, the `id` was previously duplicated and returned for both `getDisplayName()` and `getId()`. It is now only returned for `getId()`.
- `withDockerRegistry` and `withDockerServer` steps using obsolete persistence idiom

  The `withDockerRegistry` and `withDockerServer` Pipeline steps from the Docker Pipeline plugin (`docker-workflow`), and some related syntactic conveniences for both scripted and declarative syntaxes, used an obsolete method of tracking the locations of temporary directories that were to be deleted at the end of a build stage. This could have caused unexpected behavior when resuming builds across controller restarts, including High Availability (HA) adoption and disaster recovery, particularly in conjunction with `retry`.
- Avoid thread consumption from `triggerRemoteJob`

  When the Pipeline `triggerRemoteJob` step was running in some modes, it consumed an operating system thread while waiting for the downstream build to complete. This also applied to the `build` step running in a High Availability (HA) controller when `build` step emulation is enabled. This remains true on Java 17. However, on controllers running Java 21 (or newer), this step no longer consumes a thread.
- Issue accessing Kubernetes pod template

  Previously, an unhandled exception occurred when navigating to an existing or newly created pod template while running CloudBees CI 2.504.2.5.
- Shared agents with a label selector are now properly provisioned when builds are waiting for an agent

  Previously, when multiple Pipelines requiring the same agent were launched, or when a single Pipeline with multiple steps requiring the same agent was launched, the Pipeline could become stuck. Now, once the lease on the agent is released, the next build starts.
Known Issues
- S3 cache cleanup causing memory spike

  In the CloudBees Cache Step Plugin (`cloudbees-cache-step`), if you configured workspace caching to use AWS S3, the plugin’s automated cleanup process would delete customer artifacts from the S3 bucket, even if you were not actively using workspace caching in any feature.

  CloudBees advises disabling cache steps by selecting None (disable caching) from the Cache Manager field in the Workspace Caching section of the System configuration page. This turns off workspace caching and prevents the cache cleanup process from deleting content in the S3 bucket.
- Errors resuming declarative builds from older releases after extra restart

  If a Declarative Pipeline build was running in version 2.492.2.3 (or earlier) and the controller was then upgraded to this release, the build would resume. However, if the controller was restarted a second time, the build would fail. This issue also affects most running Declarative Pipelines during rolling upgrades of HA controllers to this release. This issue is resolved by upgrading to 2.516.1.28662.
- Duplicate plugins in the Operations center Plugin Manager UI

  When you search for a specific plugin under the Available tab in the Operations center Plugin Manager, the search results show duplicate entries for the plugin.
- Configuration as Code items failed to load Pipeline Catalog-based jobs

  When jobs based on a Pipeline Catalog are defined with CasC, the controller fails to load the CasC bundle, which causes controller startup to fail.

  A temporary workaround is to disable CasC, or to avoid defining such items with CasC, until a fix is available.