Upgrade Notes
- Operations center CloudBees Assurance Program plugin changes since 2.462.3.3
The following plugin has been added to the Operations center CloudBees Assurance Program since 2.462.3.3:
  - OpenId Connect Authentication Plugin (`oic-auth`)
- Controller CloudBees Assurance Program plugin changes since 2.462.3.3
The following plugin has been added to the Controller CloudBees Assurance Program since 2.462.3.3:
  - OpenId Connect Authentication Plugin (`oic-auth`)
Feature Enhancements
- CloudBees CI upgraded to Spring Security 6
CloudBees CI has been upgraded from Spring Security 5 to Spring Security 6. This update brings an enhanced security posture and alignment with the latest advancements in the Spring ecosystem.
- Aggregate nodes from different replicas in the Nodes page for a High Availability (HA) controller
Nodes from all replicas are now displayed on the Nodes page, making the UI experience more transparent and consistent with the executors widget.
Resolved Issues
- The Manage Jenkins tab disappears while navigating through the tabs
Previously, the Manage Jenkins tab disappeared while navigating through the tabs on the CloudBees Configuration as Code export and update page: the tab was visible before any tab was selected, but disappeared once navigation through the tabs started. This issue has been resolved.
- SAML plugin cannot be configured when `JENKINS_HOME` is a link to an NFS mount point
The SAML plugin can now be configured when `JENKINS_HOME` is a link to an NFS mount point.
- Vault Authentication cannot be deleted from Credentials
Previously, when you had a single HashiCorp Vault authentication (because you are using the CloudBees HashiCorp Vault plugin) and tried to remove it from the “Manage Jenkins > Credential Providers” page, it was not deleted after saving: it was still visible in the authentication list when you returned to the page. Deleting the last HashiCorp Vault authentication is now fixed, resulting in an empty authentication list.
- CloudBees CI Role-Based Access Control plugin could cause JVM deadlock
Saving the configuration settings for the CloudBees CI Role-Based Access Control plugin (`nectar-rbac`) during a permission check could lead to a JVM deadlock that could only be resolved by restarting CloudBees CI. This issue has been resolved.
- Better handling of empty libraries and unique library paths
This update addresses two issues with the Pipeline: Groovy Libraries (`pipeline-groovy-lib`) plugin:
  - Empty library handling: The caching mechanism did not properly handle empty libraries (libraries where both the `src` and `vars` directories are missing or contain no Groovy files).
  - Unique cache directory per `libraryPath`: Previously, the `libraryPath` of a library was not considered when generating the unique cache directory, leading to potential cache conflicts.
These fixes ensure more reliable caching behavior and better handling of empty libraries; see the sketch below.
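As an illustration of the second fix, here is a minimal Configuration as Code sketch of a global shared library that sets a library path; the library name, repository URL, and paths are hypothetical, and the exact keys may vary with your plugin versions:

```yaml
# jenkins.yaml (Configuration as Code) - hypothetical example.
# libraryPath points at the subdirectory of the repository that contains the
# library root (the src/ and/or vars/ directories). With this fix, libraries
# that differ only in libraryPath no longer share a cache directory, and a
# library whose src/ and vars/ directories are missing or empty is cached
# correctly.
unclassified:
  globalLibraries:
    libraries:
      - name: "utils"                         # hypothetical library name
        defaultVersion: "main"
        retriever:
          modernSCM:
            libraryPath: "jenkins/libs/utils" # subdirectory holding src/ and vars/
            scm:
              git:
                remote: "https://example.com/org/pipeline-libraries.git"
```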
- Wait for Pipeline builds to fully complete before allowing their parent job to be deleted
Previously, when you deleted a Pipeline job, any currently running builds of that job were interrupted, but the deletion proceeded immediately without waiting for the builds to fully complete. This could lead to various problems, including partially written build directories that persisted after their job was deleted. This issue has been resolved.
- Deserialization of Map fields in XML configuration files needs to be more robust
The deserialization of Map fields in XML files is now more robust to limit the impact of accidental serial compatibility issues in plugins.
- More reliable selection of build numbers in High Availability (HA) controller
Under heavy load conditions and on some file systems, repeated build triggers of a single job in a High Availability (HA) controller could result in two replicas picking the same build number, leading to corrupt metadata and various other issues. Build number selection is now coordinated more closely across replicas.
- When terminating TLS at the controller level, websocket agents could not roam between replicas
In a High Availability (HA) setup with TLS terminated at the controller level, websocket agents connect to a single replica. However, when a running build required this agent, a replica might reverse proxy the websocket connection to another one, and this would fail. This issue has been resolved.
- Fix synchronization of temporary offline cause for agents in High Availability (HA)
When a temporary offline cause was applied to an agent, it was not applied consistently on other replicas. The offline cause is now synchronized across replicas.
- Improperly cancelled build adoption in High Availability (HA) controller
Under certain timing conditions, a build offered for adoption by an exiting replica in a High Availability (HA) controller could be assigned to another replica that was itself about to exit, leaving the build at least temporarily orphaned. The adoption processing logic now handles this case more robustly and continues to search for a new owner.
- Excessive work when propagating item deletion in a High Availability (HA) controller
If an item (such as a job or folder) was deleted in a High Availability (HA) controller or moved out of a folder, other replicas mirrored the change. However, these replicas then duplicated some of the processing and events already applied by the replica that actually performed the deletion, possibly leading to unwanted side effects. Now, the other replicas simply note the item as missing from its containing folder or from the root of the controller, without further actions.
- Fix a provisioning delay of permanent outbound agents when a node is requested without a specified label
When a permanent outbound agent in a High Availability (HA) controller was requested without a specified label, a delay of up to a minute could occur before the node connected. There is no longer a delay: this type of request now triggers an immediate agent connection.
- Resolve cases of agents with multiple executors to handle changes to executor number
This resolves an edge case in which the monitor did not migrate a permanent node configuration to use the High Availability (HA) method of setting the number of executors.
- Synchronization among High Availability (HA) replicas sensitive to file system cache coherence
Some messages sent between replicas in a High Availability (HA) controller, such as those notifying other replicas of a settings change or a newly created job, were processed under the assumption that the files written by the sending replica are already visible to the receiving replicas. On certain network file systems, such as NFS (Network File System), this assumption is unreliable depending on the cache configuration of the client or server, and anomalous behavior could result, such as configuration changes that do not appear on other replicas. High Availability (HA) controllers now explicitly wait for the filesystem changes to be observed before processing such messages.
- Fix reverse proxy support for configurations terminating TLS at the controller level
Previously, an exception would occur when both of the following conditions were met:
  - TLS is terminated at the controller level (controller started with `-httpsPort=…`)
  - A running build is browsed on a replica other than the one the user is tied to via a sticky session
- Fix a synchronization issue for Role-Based Access Control groups after recreating a folder
In some cases, after you deleted and recreated a folder and its associated Role-Based Access Control groups, an inconsistent state could be observed between different replicas. This issue has been resolved.
- Tweak node-related UI to clarify why nodes are offline
In some cases, nodes can be offline for a valid reason: with a retention strategy that only connects the agent when a workload requests it, the agent being offline is not an error state. When you put an agent temporarily offline, it is now indicated as temporarily offline instead of simply being reported as offline.
- Pipeline subtasks not reliably rescheduled during build adoption when a High Availability (HA) controller has more than two replicas
When a High Availability (HA) controller replica managing a Pipeline build with node steps waiting in the queue for an available executor exits, another replica must adopt the build along with its associated queue items. This logic was not reliable when multiple replicas remained running: sometimes the queue items were adopted by a different replica than the one that adopted the overall build, leading to errors. Now, the queue items are adopted only by the replica that also adopted the build, so that the build can proceed if and when an executor matching the label expression becomes available.
- High Availability (HA) - Console link in build executor widget was leading to an HTTP 404 if the build was not local
When you viewed the progress of a remote build through the executor widget, the generated console link was incorrect. This issue has been resolved.
- Deadlock in logging in the High Availability (HA) controller
A deadlock related to log messages was found in a High Availability (HA) controller. The code has been reworked so that it no longer acquires locks.
- Inbound shared agents failed to reconnect to the High Availability (HA) controller
If a shared agent used an inbound launcher on one replica of a High Availability (HA) controller, it would fail to connect to another replica of the same High Availability (HA) controller even when it was needed by a pending queue item there. The ownership of this class of agents is now tracked correctly.
- Subtasks in the queue of a High Availability (HA) controller could be lost if replica restarts quickly
If a replica of a High Availability (HA) controller restarted in less than one minute and retained the same host name (for example, in Kubernetes when using the `/exit` REST endpoint instead of performing a rolling restart), a scheduled Pipeline build stage that was waiting for an executor at the time could be re-read by the restarted replica before the adoptive owner of the build noticed. This prevented the stage from being run and ultimately caused the build to abort. Now, replicas explicitly check at startup for scheduled stages that are intended to be managed by another replica and transfer them, allowing the build to complete.
- Deadlock between the operations center and Role-Based Access Control code
Under some conditions, such as during logging from a High Availability (HA) controller, a deadlock could occur between the code involved in Role-Based Access Control and the code maintaining the connection to the operations center. This issue has been resolved.
- In High Availability (HA), there is a case where the controller never terminates
There were some cases where a Hazelcast shutdown remained stuck and blocked the controller from shutting down. This is now resolved, and the controller terminates as expected within a reasonable duration.
- High Availability (HA) multi-executor agents could leave behind orphaned clones
In a High Availability (HA) controller, changes made to permanent agents using the multi-executor option (additions, deletions, reconfigurations) while the controller is running are reflected in the list of read-only clones. However, applying the `jenkins.yaml` file from a Configuration as Code bundle that includes the `jenkins.nodes` section could change the list of agents without triggering the correct notifications. This resulted in orphaned clones that could not be readily deleted. Now, a controller replica explicitly checks for orphaned clones during startup and removes them; see the sketch below.
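For context, a `jenkins.nodes` section in a Configuration as Code bundle looks roughly like the following minimal sketch; the agent name, paths, and labels are hypothetical, and the exact keys may vary with your plugin versions:

```yaml
# jenkins.yaml (Configuration as Code) - hypothetical example.
# Applying a bundle with a jenkins.nodes section like this could previously
# change the set of agents without triggering the notifications that keep
# the HA read-only clones in sync, leaving orphaned clones behind.
jenkins:
  nodes:
    - permanent:
        name: "linux-agent-1"             # hypothetical agent name
        remoteFS: "/home/jenkins/agent"
        numExecutors: 2                   # multi-executor agents get read-only clones in HA
        labelString: "linux docker"
        launcher:
          inbound: {}                     # agent connects inbound to the controller
```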
- Fixed incorrect exception handling in the launching sequence of the GCE plugin
When a VM used Java 11 and the controller required agents to use Java 17 or newer, the agent provisioned by the Google Compute Engine cloud failed to launch. This leaked connection-related objects and could eventually lead to memory exhaustion. The exception handling in the launch sequence has been fixed.
- Fix for incorrect parsing of license
Under certain circumstances, the operations center incorrectly assumed that controller licenses were invalid, which led to issuing new ones. The parsing has been fixed to avoid issuing new controller licenses unnecessarily.
- GitHub Branch Source performed many credentials lookups of the same scan credentials when scanning (branch indexing / organization scan)
GitHub Branch Source now reuses the credentials object reference throughout a scan to avoid frequent duplicate lookups.