Upgrade Notes
- End-of-life announcement for FIPS support in CloudBees CI on modern cloud platforms
-
The CloudBees CI 2.516.3.29358 release will be the last version of CloudBees CI on modern cloud platforms to produce FIPS-compliant images. For more information regarding this end-of-life announcement, please contact CloudBees Support.
- Operations center CloudBees Assurance Program plugin changes since 2.516.2.29000
-
The following plugins have been added to the operations center CloudBees Assurance Program since 2.516.2.29000:
-
CloudBees License Tracker Plugin (cloudbees-license-tracker)
-
- Controller CloudBees Assurance Program plugin changes since 2.516.2.29000
-
The following plugins have been added to the controller CloudBees Assurance Program since 2.516.2.29000:
-
CloudBees License Tracker Plugin (cloudbees-license-tracker)
-
New Features
- CloudBees CI integration with CloudBees platform
-
You can now view all Multibranch Pipelines running on your CloudBees CI controllers directly within CloudBees platform. This new visualization delivers an all-in-one view of Multibranch Pipelines and builds across your entire CloudBees CI environment.
This integration is made possible by the CloudBees Platform Integration plugin (cloudbees-cbp-unify-integration), which connects your CloudBees CI controller with CloudBees platform for a seamless experience.
Key Features:
-
View Multibranch Pipeline activities from every CloudBees CI controller in one unified, user-friendly interface.
-
Easily monitor and manage jobs from multiple controllers in a single dashboard.
-
Instantly view the status of each job—whether it’s done, in progress, or failed—along with other key details.
-
Capture build artifact traceability for artifacts produced from CloudBees CI runs in CloudBees platform.
-
Publish test results to CloudBees platform from any testing steps in the pipelines.
-
Publish security scan results to CloudBees platform from scanners that are used in the pipelines.
-
View consolidated cross-component metrics on Security, Software delivery, Testing, and more for the CloudBees CI workflows.
For more information, refer to CloudBees CI integration with CloudBees platform and Using CloudBees CI and Jenkins with CloudBees platform.
- Introducing the new CloudBees License Tracker Plugin (cloudbees-license-tracker)
-
The CloudBees License Tracker plugin (cloudbees-license-tracker) is introduced to collect activity data from controllers and transmit it to CloudBees’ internal analytics systems.
Because CloudBees uses certain personal data (name, username, email address, SCM handle), we have updated our Privacy Policy.
CloudBees uses this data to understand how CloudBees CI is being used and to support licensing discussions. As a general matter, CloudBees does not have access to any other confidential customer data or records.
For more information, refer to Count and monitor user licenses with the CloudBees User License Counting (ULC) system.
- Option to emulate the lock step in High Availability (HA) controllers
-
The Lockable Resources plugin (lockable-resources) was not working correctly in a High Availability (HA) controller. Because the most commonly required aspect of this plugin is the lock Pipeline step, and most usages of this step simply use the single resource argument, there’s now an option in a High Availability (HA) controller to define a lock step with similar behavior that works correctly across replicas.
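For illustration, the single-resource form of the step that the emulated behavior targets looks like the following sketch; the resource name and shell command are hypothetical examples:

```groovy
// Scripted Pipeline sketch using the single-resource form of the lock step.
// 'deploy-db' is a hypothetical resource name.
node {
    lock(resource: 'deploy-db') {
        // With the emulated step enabled, only one build across all HA
        // replicas holds this resource at a time.
        sh './deploy.sh'
    }
}
```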
Feature Enhancements
- High Availability (HA) support for the Parameterized Scheduler plugin (parameterized-scheduler)
-
When the Parameterized Scheduler plugin (parameterized-scheduler) was used in a High Availability (HA) controller, triggers would run once per replica, causing duplicated builds. Now each scheduled time slot is processed by only one replica, similar to the built-in timer trigger.
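As a sketch, a Declarative Pipeline using the plugin’s parameterizedCron trigger might look like the following; the cron expressions, parameter name, and values are hypothetical examples:

```groovy
// Declarative Pipeline sketch using the Parameterized Scheduler trigger.
// With HA support, each scheduled slot now fires on exactly one replica.
pipeline {
    agent any
    parameters {
        string(name: 'ENV', defaultValue: 'staging', description: 'Target environment')
    }
    triggers {
        // %NAME=value lines set parameter values for each schedule.
        parameterizedCron('''
            H 2 * * * %ENV=staging
            H 4 * * * %ENV=production
        ''')
    }
    stages {
        stage('Build') {
            steps { echo "Building for ${params.ENV}" }
        }
    }
}
```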
- Stricter health checks for High Availability (HA) controllers
-
The new health check for High Availability (HA) controllers now verifies that the Jenkins queue lock isn’t held for too long and that basic Hazelcast communications are working.
- High Availability (HA) compatible configuration for preallocating warm EC2 agents
-
You can now configure an EC2 cloud to operate in a High Availability (HA) controller so that most builds don’t need to wait for instances to launch. Instead, you can use a pool of preallocated agents by doing the following: Define a minimum number of spare instances (this number is per replica, not per controller), set the idle timeout to zero, and request that idle agents be terminated during shutdown.
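As a rough illustration, such a pool might be described in Configuration as Code along the following lines. This is a sketch only: the field names are assumptions based on the EC2 plugin’s CasC schema, the values are hypothetical, and the option to terminate idle agents during shutdown is omitted here:

```yaml
# Configuration as Code sketch of a preallocated EC2 agent pool for an HA
# controller. Field names and values are illustrative assumptions.
jenkins:
  clouds:
    - amazonEC2:
        name: "ha-ec2"
        region: "us-east-1"
        templates:
          - description: "warm-linux-agents"
            ami: "ami-0123456789abcdef0"   # hypothetical AMI ID
            labelString: "linux"
            # Keep two warm instances ready per replica (not per controller).
            minimumNumberOfSpareInstances: 2
            # Idle timeout of zero: agents are never terminated for being idle.
            idleTerminationMinutes: "0"
```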
Resolved Issues
- Redesigned handling of permanent inbound agents in High Availability (HA) controllers
-
The previous system for transferring permanent inbound agents from one replica to another in an HA controller was fragile and exhibited problematic behavior in some edge cases, especially under load. The new, simplified implementation should more reliably transfer exactly one agent to another replica when necessary.
- Configuration as Code items failed to load Pipeline Catalog based jobs
-
When jobs based on a Pipeline Catalog were defined with Configuration as Code, the controller failed to load the CasC Controller Bundle Service, which prevented the controller from starting.
- High Availability (HA) build corruption under load
-
Under certain heavy load circumstances, a High Availability (HA) controller may have allowed multiple replicas to adopt the same build in close succession, leading to corrupted build metadata.
- Deadlock preventing queue operations in High Availability (HA) controllers
-
When triggering new builds rapidly in a High Availability (HA) controller, a deadlock could sometimes occur in the load balancing code, preventing queue activity from proceeding. This deadlock could also trigger health check failures across multiple or even all replicas simultaneously.
- Outbound SSH agents now reconnect after connection loss in High Availability (HA) configurations
-
Previously, in High Availability (HA) configurations, permanent outbound agents wouldn’t reconnect when their channel was broken. This caused builds to hang. Going forward, outbound SSH agents will correctly attempt to reconnect when the channel is broken, ensuring that builds resume instead of becoming stuck.
- Agent can be grabbed by the wrong build after High Availability (HA) adoption if the owner is slow to resume
-
Under certain circumstances in a High Availability (HA) controller under load, a build that has been adopted after a replica exits might begin resuming promptly, but not immediately resume a running node step. If the corresponding agent comes online quickly, a pending queue item on the same replica as the one that adopted the build might start running on the agent, which was intended to be reserved for the build already running. These pending queue items are now blocked to allow the proper owner to complete the resumption.
- Unreliable tracking of High Availability (HA) replica owning TCP inbound agent
-
If information about an inbound TCP agent connected to an HA controller was recorded early in startup, the TCP port might not have been properly tracked, preventing some agent management features from working. Now, the configured port (always 50000 on managed controllers) is used where necessary, and potential problems are limited to client controllers where the port number is set to zero (to bind a random server port on startup).
- Unresponsive queue operations in High Availability (HA) controller with many outbound permanent agents
-
When a High Availability (HA) controller has many agents, under certain heavy load conditions, particularly after a rolling restart, some replicas might have become unresponsive because of a lock on the Jenkins queue that was held while waiting for various types of Hazelcast operations.
- EC2 instances terminate when the Clean up orphaned nodes option is enabled
-
When the Clean up orphaned nodes option was enabled for a cloud on one controller, it could inadvertently terminate EC2 instances belonging to other controllers or clouds. This occurred because the plugin didn’t validate the value of the jenkins_server_url tag on the EC2 instances before terminating them; it only checked for the presence of the tag. Additionally, the orphaned node cleanup logic was scoped per Jenkins cloud UI configuration, but it operated globally across all configured clouds in a controller. As a result, orphaned agents from clouds without the cleanup option were also terminated.
- Aborting upstream jobs now correctly aborts all corresponding downstream jobs
-
Previously, in triggerRemoteJob awaitResult mode, aborting an older upstream job would fail to abort its corresponding downstream job if more than five newer upstream-to-downstream job pairs had started subsequently. The abort logic now ensures that the downstream job linked to any aborted upstream job is correctly terminated, regardless of its position in the build queue.
- Ensuring uniqueness of High Availability (HA) replica identifiers
-
When a replica of a High Availability (HA) controller started with the same hostname as another replica (current or former), such as when restarting a container in a Kubernetes pod, internal structures keyed off $HOSTNAME could have become confused. Now it is guaranteed that a replica name is not reused.
- HazelcastInstanceNotActiveException if a replica is generating a support bundle on shutdown
-
When a replica shuts down while a support bundle is being generated, a HazelcastInstanceNotActiveException can occur when attempting to retrieve information from other replicas. Now, information from other replicas isn’t collected if Hazelcast isn’t active.