Upgrade Notes

YahooUI library has been removed

As announced in the 2.492.2.2 release notes, the YahooUI library has been removed in this release. This removal may cause user interface rendering issues with custom or unmaintained community (Tier 3) plugins.

If you have not done so already, plan to remove usage of the YahooUI library from your custom plugins.


Removed support for non-CloudBees Assurance Program SCM plugins from SDA Analytics

SDA Analytics previously retained support for several SCM plugins that are not part of the CloudBees Assurance Program, including Subversion (subversion), Mercurial (mercurial), Gitea (gitea), and multi-SCM (multiple-scms). Support for these plugins has now been removed.


New Features

None.

Feature Enhancements

Improved Node Display on Label Index Screen in High Availability (HA) controller

The /label/$name/ page on a High Availability (HA) controller displayed inconsistent results, depending on which replica hosted the sticky session and which replicas the agents were connected to. Now, this page displays all permanent or cloud agents defined in the controller, with their online/offline status icons, regardless of which replica handles the request.


The CloudBees Pipeline Explorer plugin (cloudbees-pipeline-explorer-plugin) no longer has a dependency on the Operations Center Context plugin (operations-center-context)

Previously, the CloudBees Pipeline Explorer plugin (cloudbees-pipeline-explorer-plugin) had a hard dependency on the Operations Center Context plugin (operations-center-context). This is now optional. If features that are supported by the Operations Center Context plugin are not in use (for example, the triggerRemoteJob Pipeline step), the Operations Center Context plugin may be disabled or uninstalled.


Improved appearance of table elements to align with the Jenkins UI

The appearance of table elements has been updated to align with recent improvements to the Jenkins UI.

Resolved Issues

High Availability (HA) controllers failed to synchronize a metadata file for completed builds

When a Pipeline build completed on a High Availability (HA) controller, the workflow-completed/flowNodeStore.xml file was written by the replica managing the build. Other replicas were notified that they could now load the completed build, but they were not instructed to wait until the expected timestamp of this file was observed. Depending on the file system (such as NFS), this could lead to problems loading the build from other replicas.


Resolved a performance issue when using permanent outbound agents on High Availability (HA) controllers with a large number of nodes

When using many permanent outbound agents on High Availability (HA) controllers, the recommended CloudBees High Availability retention strategy experienced a performance issue at high scale. This issue caused excessive CPU usage and could even cause queue processing to hang.


Removed excessive calls to retention strategy check to prevent performance issues

When adding, updating, or removing nodes in a controller, the retention strategies of all other nodes were checked.

In systems with a large number of nodes and high activity among cloud nodes (which were frequently added and removed), this could cause high CPU usage and associated performance issues.


Next and Previous buttons in CloudBees Pipeline Explorer were inaccurate in High Availability (HA) controllers

On a High Availability (HA) controller with multiple builds of a given job running across multiple replicas, the Next and Previous buttons in CloudBees Pipeline Explorer would only navigate among completed builds or builds running on the replica hosting the sticky session. Now, these links reflect the newest build older than the displayed build, or the oldest build newer than the displayed build, respectively, even on High Availability (HA) controllers.


Resolved issue with fallback security realm activation during High Availability (HA) rolling upgrades

During a rolling upgrade of a High Availability (HA) controller connected to operations center, a race condition could occur when the replica connected to operations center shut down. The remaining replicas, detecting the connection as offline, would simultaneously attempt to switch to the fallback security realm via SecurityEnforcer#useFallbackIfOffline.

Because this process involved concurrent writes to config.xml from multiple replicas, it could cause issues on certain file systems, such as CIFS. These issues included temporary write failures or, in rare cases, deletion of the config.xml file due to locking conflicts.

The fallback mechanism has been improved to prevent simultaneous write attempts, ensuring safe and consistent configuration management during rolling upgrades.


Retention strategy checks skipped when a High Availability (HA) replica is shutting down

The High Availability (HA) retention strategy no longer performs unnecessary checks when a replica is shutting down.


Avoid unnecessary logging when the High Availability (HA) controller is terminating

If a High Availability (HA) controller replica was already terminating, operations relying on Hazelcast availability would fail and generate unnecessary exceptions.

These operations are now skipped in such cases, eliminating the logging noise.


Extra executor agents in a High Availability (HA) controller offered launch instructions

When configuring an inbound permanent agent on a High Availability (HA) controller with multiple HA executors, the main (Status) page of the agent, when offline, displayed incorrect instructions for connecting it. These instructions did not apply because only the main agent should be connected directly.


Users could directly launch clones of High Availability (HA) multi-executor agents

Users should connect only to the original agent when configuring a multi-executor inbound agent on a High Availability (HA) controller. This agent automatically launches additional processes to handle the extra executors. Previously, this constraint wasn’t enforced, which could result in duplicated processes or other unexpected behavior. Now, direct attempts to launch any executor other than the original agent are blocked.


High Availability (HA) controllers no longer switch to the offline backup security realm when applying a rolling restart

When applying a rolling restart to a High Availability (HA) controller, some replicas could briefly switch to the offline backup security realm.

This behavior no longer occurs. As long as operations center is available, the High Availability (HA) controller will continue to use single sign-on (SSO).


CloudBees Pipeline Explorer was unusable if the log or its metadata was malformed

When writing Pipeline build logs, CloudBees Pipeline Explorer also writes a log-metadata file, which contains information used to support various CloudBees Pipeline Explorer features. Errors when writing logs or restarting CloudBees CI may result in this metadata becoming out of sync with the main log file.

Previously, when this occurred, CloudBees Pipeline Explorer was unable to load the build log and returned an error. Now, CloudBees Pipeline Explorer can typically recover from this type of issue: affected log lines are marked as [malformed metadata], and a warning message explains that the log contains malformed metadata. If this occurs, download the log to view the raw log data.


Avoid thread spikes with controller lifecycle notifications

Controller Lifecycle Notifications could cause thread spikes and memory issues when managing a large number of controllers. To avoid such issues, the maximum number of parallel HTTP requests is now limited by default.

Set the System property com.cloudbees.opscenter.server.webhooks.WebhooksSender.BOUNDED_WEBHOOK_DELIVERY to false to maintain the previous unbounded behavior.
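For example, the property can be passed to the JVM at startup. The following is a sketch only: the JAVA_OPTS variable name and startup mechanism are assumptions that depend on how your CloudBees CI installation is launched.

```shell
# Sketch only: append the -D flag to the JVM options used to launch the
# controller or operations center (the variable name varies by installation).
JAVA_OPTS="$JAVA_OPTS -Dcom.cloudbees.opscenter.server.webhooks.WebhooksSender.BOUNDED_WEBHOOK_DELIVERY=false"
echo "$JAVA_OPTS"
```

Restart the instance after changing JVM options for the setting to take effect.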


HashiCorp Vault self client token not revoked due to trailing slash

When the global configuration of the CloudBees HashiCorp Vault plugin contains a trailing slash (/) in the Vault URL, the request to revoke the self client token fails. The token is not revoked, and the Jenkins logs are flooded with Revoke self client token error messages.

The revoke token request now handles a trailing slash correctly.
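The underlying issue can be illustrated with a simple normalization step. This is an illustrative sketch, not the plugin's actual code; the example URL is hypothetical, and the endpoint shown is Vault's standard token revoke-self API path.

```shell
# Illustrative only: a configured Vault URL with a trailing slash must be
# normalized before an API path is appended, otherwise the request URL
# contains a double slash and the revocation call fails.
vault_url="https://vault.example.com:8200/"   # hypothetical configured value
vault_url="${vault_url%/}"                    # strip a trailing slash, if any
revoke_endpoint="$vault_url/v1/auth/token/revoke-self"
echo "$revoke_endpoint"
```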


CloudBees Update Center incorrectly displayed plugin updates if using a plugin catalog

If a plugin catalog was used to install non-CloudBees Assurance Program plugins on a controller, the CloudBees Update Center could incorrectly indicate that a new version of a plugin was available for installation, even when the catalog's version of the plugin was already installed on the controller.

Known Issues

Duplicate plugins in the Operations Center Plugin Manager UI

When you search for a specific plugin under the Available tab in the operations center Plugin Manager, the search results show duplicate entries for the plugin.


Wrong error message being displayed

Under certain conditions, after applying a new Configuration as Code Bundle without restarting the controller (using the Reload Configuration option), users might see the following message in the Manage Jenkins section of the affected controller:

A new version of the Configuration Bundle () is available, but it cannot be applied because it has validation errors.

This message does not affect system stability.

To remove the false message, go to Manage Jenkins > CloudBees Configuration as Code export and update > Bundle Update.


The controller and operations center fail to start when upgrading CloudBees CI

When upgrading or restarting CloudBees CI, the controller or operations center fails to start and returns a Messaging.afterExtensionsAugmented error. The operations center can also fail to start with an OperationsCenter.afterExtensionsAugmented error. Refer to CloudBees CI startup failure due to IndexOutOfBoundsException related to corrupt messaging transport files for a workaround for this issue.