CloudBees CI release highlights

What’s new in CloudBees CI 2.479.2.3

Watch video

Upgrade Notes

Hazelcast upgrade in the High Availability (HA) controllers

The Hazelcast library used in High Availability (HA) controllers was upgraded from the 5.3 line to the 5.5 line.

This transition does not support a rolling upgrade. If you were previously running 2.462.3.3 or 2.479.1.x (either of the October 2024 releases), the upgrade to this version (or a newer one) should be automatic, but it involves a temporary outage of the controller, similar to a simple restart of a non-High Availability (HA) controller. If you were running an older release, it is recommended that you first upgrade to one of the October 2024 releases. Otherwise, you can upgrade directly, but only after first manually turning off all replicas of the old version (for example, by using the Reprovision gesture in the case of a managed controller).


New Features

None.

Feature Enhancements

In Configuration as Code (CasC) it is possible to configure an alternate download site to install plugins

By default, CloudBees CI uses the CloudBees Update Centers when installing plugins through Configuration as Code (CasC). Now, if the bundle uses the new CasC Plugin Management (apiVersion: 2 in the bundle.yaml file), you can define a mirror server from which the plugins and the Update Center JSON file are downloaded instead. For more information, refer to Create an alternate plugin download site.
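As a rough sketch, a bundle using the new plugin management might point at a mirror as shown below. The downloadSite key and file layout are assumptions for illustration only, not the official schema; refer to Create an alternate plugin download site for the actual property names.

```yaml
# bundle.yaml (sketch): apiVersion 2 enables CasC Plugin Management.
apiVersion: "2"
id: "example-controller"
description: "Controller bundle installing plugins from an internal mirror"
plugins:
  - "plugins.yaml"

# plugins.yaml (sketch): 'downloadSite' is an illustrative, hypothetical key
# standing in for wherever the mirror URL is actually configured.
downloadSite: "https://mirror.example.com/update-center/"
plugins:
  - id: "git"
  - id: "pipeline-model-definition"
```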


Prepare a JAR cache for Kubernetes agents

The JAR files commonly used by Kubernetes agents are now built directly into the container image. This results in faster initialization of Kubernetes agents and reduces the load on controllers.


Using per-replica temporary directory for checkouts of Pipeline libraries and scripts in High Availability (HA) controllers

Starting with the 2.414.2.2 release (September 2023), which introduced High Availability (HA), all managed controllers (HA or otherwise) in a Modern installation were started with the jenkins.plugins.git.AbstractGitSCMSource.cacheRootDir and org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.cacheRootDir system properties set to a temporary directory under /tmp/jenkins. Traditional installations were also required to set these properties to a per-replica temporary directory. This prevented clashes between replicas trying to use the same cache directory at the same time. Because library checkouts defaulted to a location in $JENKINS_HOME, and for the same clash-prevention reason, High Availability (HA) controllers were required to use “clone” mode for libraries (enforced with an administrative monitor); as a result, the library caching system was not available, and non-multibranch Pipeline projects loading their script from SCM with “lightweight mode” disabled were not safe to use in a High Availability (HA) controller.

Now, a series of cache-related system properties are automatically set on all High Availability (HA) controllers, Modern or Traditional, to a per-replica temporary location. (In Traditional, the $CACHE_DIR environment variable can be used to customize the root directory; otherwise, it defaults to the system temporary directory.) In addition to the GitHub and (generic) Git caches, the list includes checkouts of Pipeline Groovy libraries, with or without caching mode enabled, when the library is not configured to use clone mode; and heavyweight checkouts of repositories used for script from SCM. In Modern, since the caches are unique for each pod, builds run on a recently started replica may be somewhat slower, but the speed should improve after the replica is running and its caches on the local disk are warm.

It is no longer required to use clone mode for a Pipeline library in a High Availability (HA) controller. The administrative monitor has been removed.

Non-High Availability (HA) managed controllers (Modern) no longer override GitHub or (generic) Git caches, so these revert, by default, to keeping caches inside $JENKINS_HOME. You can also explicitly select a temporary directory.

Due to the expanded usage of the temporary volume, if your High Availability (HA) controller uses many distinct libraries (or other repositories that must be checked out on the controller due to loading script from SCM and disabling lightweight checkout), or if some of these repositories have large histories or working copies, you may run low on temporary directory space. In a managed controller, exhausting the /tmp volume causes the pod to restart with a fresh volume. It may be appropriate to set a higher resources.limits.ephemeral-storage, but in some environments it may result in the pod being unschedulable.
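For example, you might raise the ephemeral storage limit on the CloudBees CI container as in the fragment below; the sizes are arbitrary illustrations, and whether the pod remains schedulable with a higher limit depends on your nodes.

```yaml
# Fragment of a pod spec (illustrative values): raises the ephemeral
# storage request and limit for the controller container so checkouts
# under /tmp are less likely to exhaust the volume.
containers:
  - name: jenkins
    resources:
      requests:
        ephemeral-storage: "10Gi"
      limits:
        ephemeral-storage: "20Gi"
```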

In Google Kubernetes Engine, as described in https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#example_ephemeral, you can request a larger ephemeral volume: over 300 GiB instead of the default of approximately 100 GiB. In Amazon Elastic Kubernetes Service, the ephemeral volume is around 100 GiB. You can instead configure a generic ephemeral volume mounted on /tmp (or /tmp/jenkins) of any size, but performance will not be as good as with a local volume; and if space is exhausted on this volume, the pod is not automatically restarted, so you must set up additional monitoring.
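A generic ephemeral volume backing /tmp could be sketched as follows; the storage class and size are assumptions to adapt to your cluster.

```yaml
# Fragment of a pod spec (illustrative): a generic ephemeral volume of an
# arbitrary size mounted at /tmp. Exhausting this volume does not restart
# the pod automatically, so pair it with monitoring.
volumes:
  - name: tmp
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: "standard"   # assumption: any available class
          resources:
            requests:
              storage: "200Gi"           # arbitrary illustrative size
containers:
  - name: jenkins
    volumeMounts:
      - name: tmp
        mountPath: /tmp
```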


Managed controller hibernation monitor will use startup probe of CloudBees CI container to find wakeup timeout threshold

The managed controller Hibernation Monitor now uses the CloudBees CI container’s startup probe to determine the wake-up timeout threshold. If the startup probe is not available, the monitor falls back to the readiness probe to establish the threshold.
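As an illustration of where the threshold comes from, a startup probe bounds startup time at failureThreshold × periodSeconds (12 × 10 = 120 seconds in this sketch); the endpoint, port, and numbers below are illustrative assumptions.

```yaml
# Fragment of a container spec (illustrative values): the hibernation
# monitor derives the wake-up timeout from this probe's overall budget,
# failureThreshold * periodSeconds = 120 seconds in this example.
startupProbe:
  httpGet:
    path: /login        # illustrative endpoint
    port: 8080          # illustrative port
  periodSeconds: 10
  failureThreshold: 12
```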

Resolved Issues

GitHub credentials defined via CloudBees HashiCorp Vault Plugin caused Configuration as Code exports to fail

GitHub credentials defined through the CloudBees HashiCorp Vault Plugin caused Configuration as Code exports to fail. The issue arose when the plugin interacted with GitHub credentials stored in HashiCorp Vault. This issue is resolved.


Update UI to comply with design guidelines

Several UI elements, such as font sizes and weights, have been updated to comply with the design guidelines.


Only activate the GitHub checkbox if GitHub is selected

The GitHub checks activated checkbox was selected even when GitHub was not selected as the SCM tool. This issue has been resolved.


Updated support for the Integer version type in the Configuration as Code bundle

Support for the Integer version type in the Configuration as Code bundle.yaml file has been updated.


Unable to dismiss a warning message from the administrative monitor

The Dismiss button for a warning message from the administrative monitor was rendered behind the message itself and could not be selected. This has been fixed; users can now select the Dismiss button to dismiss a warning message they no longer wish to see.


Error with 'Test Connection' when configuring a Folder

The Test connection button under the Vault authentications Folder Property threw an error when configuring a folder.

This issue is resolved.


Exclude the operations center credentials cache file from support bundles when the "Other Jenkins configuration files" component is included

The operations center credentials cache file is excluded from the support bundles when the "Other Jenkins configuration files" component is included. This matches what is already done for the main credentials configuration file.


CloudBees Restricted Credentials plugin reloaded all restricted credentials from disk whenever a credential was retrieved

The CloudBees Restricted Credentials plugin reloaded all restricted credentials from disk whenever a credential was retrieved. This caused poor performance of credentials lookups when many restricted credentials were configured.


The CloudBees Hashicorp Vault plugin reloaded all Vault credentials from disk whenever a credential was retrieved

The CloudBees Hashicorp Vault plugin reloaded all of the Vault credentials from disk whenever a credential was retrieved, causing poor performance of credentials lookups when many Vault credentials were configured.

Additionally, secrets retrieved from Vault are now cached for one hour, by default, before they are retrieved again from Vault. Previously, every single use of a Vault credential in CloudBees CI fetched the necessary secrets from Vault.


Product name on the title bar displayed on two lines

There was an unexpected line break introduced in the product name in the title bar. This is now fixed.


Added the Force Sandbox option to the CloudBees Template and CloudBees Pipeline: Templates Plugins

For highly secured environments where only sandboxed scripts are allowed, the new Force the use of the sandbox globally in the system security option allows you to enforce the use of the sandbox globally.

The CloudBees Template and CloudBees Pipeline: Templates Plugins were adapted to work correctly with this new functionality.


Cannot add CyberArk / Hashicorp Vault credentials from the Add button of a credentials form

When you tried to add a CyberArk or HashiCorp Vault credential from the Add button of a credentials form, it failed with the error Domain is read-only and the credential was not created. This issue is resolved.


Failure to retry stages using Kubernetes agents after Disaster Recovery

In a disaster recovery scenario, when a controller is restored from backups or snapshots in a different cluster, any build that was using Kubernetes agents at the time of the snapshot is expected to automatically retry the stage when the "kubernetesAgent()" condition is used. This logic was incorrect in this scenario, and the build failed to resume, reporting no termination reasons for the agent pod. Now, an empty list of termination reasons, which is normal behavior when there is no record of the pod’s existence in the new cluster, qualifies for a retry.
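The condition referred to above is declared in a stage's retry options. A minimal Declarative Pipeline sketch follows; the pod.yaml template file and the build command are hypothetical.

```groovy
// Sketch of a stage that retries when the Kubernetes agent pod is lost,
// including the disaster-recovery case where the new cluster has no
// record of the pod (an empty list of termination reasons now qualifies).
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                kubernetes {
                    yamlFile 'pod.yaml' // hypothetical pod template file
                }
            }
            options {
                retry(count: 2, conditions: [kubernetesAgent()])
            }
            steps {
                sh 'make build' // hypothetical build command
            }
        }
    }
}
```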


Improved recovery of partially corrupt builds running agents after Disaster Recovery restore

In some disaster recovery restore cases with an inconsistent filesystem backup, a Pipeline build might detect that an agent it used in the original cluster was missing, but failed to cleanly abort the corresponding node step and let any enclosing retry step take over. This logic has been made more resilient.


Fixed a warning displayed in a High Availability (HA) controller when a new node was added in a Configuration as Code bundle and a new replica was loaded

In a High Availability (HA) controller, the event handling of a new node declared in a Configuration as Code bundle has been fixed, so loading a new replica no longer displays a warning.


Maven dependency listing was empty due to a missing integration

The Maven dependency listing was empty due to a missing integration. Now, the Mavenized dependencies tab displays the correct information.


Hashicorp Vault GitHub App Credentials usage fails over remoting

When using the Hashicorp Vault GitHub App Credentials, build execution sometimes failed when a token refresh was required at the time a remote agent needed it. This issue is resolved.


CloudBees Slack plugin management page is broken (UI)

The management page for the CloudBees Slack plugin has been updated. The overview page design, which was broken due to a faulty CSS integration, has been restored.

Known Issues

Duplicate Plugins in Operations Center Plugin Manager UI

When you search for a specific plugin under the Available tab in the operations center Plugin Manager, the search results show duplicate entries for the plugin.


Authentication to operations center Configuration as Code Retriever API fails

After upgrading to version 2.462.1.3 or later, authentication to the operations center Configuration as Code Retriever API endpoint at ${OC_URL}/casc-retriever/* fails. This is due to the removal of the authentication method enabled by the .OperationsCenter.CasC.Retriever.secrets.adminPassword attribute.