Upgrade Notes
- Remove the CloudBees Configuration as Code API plugin from the CloudBees Assurance Program
The CloudBees Configuration as Code API plugin has been removed from the CloudBees Assurance Program. Please ensure that you update any plugins.yaml files accordingly.
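For example, a Configuration as Code bundle that previously relied on the CloudBees Assurance Program to provide the plugin would now need to list it explicitly if it is still required. A minimal, hypothetical plugins.yaml excerpt (assuming the usual plugins/id layout):

    plugins:
      # cloudbees-casc-api is no longer part of the CloudBees Assurance Program;
      # keep this entry only if the deprecated plugin is still needed, otherwise remove it.
      - id: cloudbees-casc-api
      - id: git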
- Operations center CloudBees Assurance Program plugin changes since 2.462.1.3
The following plugins have been removed from the Operations center CloudBees Assurance Program since 2.462.1.3:
  - CloudBees CasC API Plugin (Deprecated) (cloudbees-casc-api)
- Controller CloudBees Assurance Program plugin changes since 2.462.1.3
The following plugins have been removed from the Controller CloudBees Assurance Program since 2.462.1.3:
  - CloudBees CasC API Plugin (Deprecated) (cloudbees-casc-api)
Feature Enhancements
- New logs when executing specific commands
The SSH Agent Launcher now logs details about the system commands it executes and their output. These logs are included in the support bundle to help with diagnosis.
- CloudBees Pipeline Explorer Enhancements
  - New preference to display time spent waiting in the queue in the build information bar: The CloudBees Pipeline Explorer now has a new Show queuing time option under the Build information preferences. This option adds a widget to the build info bar that shows the total amount of time that the build and all of its node steps spent waiting in the queue.
  - New preference to configure the number of context badges displayed in the log view: Previously, only four context badges were displayed in the CloudBees Pipeline Explorer log view. A new preference now allows you to force the display of all of the badges and to configure the number of context badges shown.
  - Step badges displayed in search results in the tree view: Step badges are now displayed in the search results when they are visible in the tree view.
- Improved HTTP response code when maximum load per replica is exceeded
When the Maximum load per replica setting is configured on a High Availability (HA) controller and the actual load has reached the configured maximum, HTTP requests to trigger builds previously produced 302 responses. This was consistent with the general behavior of Jenkins when refusing to schedule builds, or when a build was already scheduled and the trigger would duplicate it, but it is misleading in this context. A 429 Too Many Requests response is now served, along with Retry-After, X-Current-Load, and X-Maximum-Load headers (the load counts here are for the controller as a whole, not a single replica).
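For example, a script that triggers builds over HTTP can now back off when the controller is at its load limit. A minimal Groovy sketch, assuming a placeholder controller URL, job name, and API token (none of these names come from the release note):

    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    def client = HttpClient.newHttpClient()
    def request = HttpRequest.newBuilder()
            .uri(URI.create('https://controller.example.com/job/my-job/build')) // hypothetical URL
            .header('Authorization', 'Basic ' + 'user:apiToken'.bytes.encodeBase64().toString())
            .POST(HttpRequest.BodyPublishers.noBody())
            .build()

    def response = client.send(request, HttpResponse.BodyHandlers.ofString())
    if (response.statusCode() == 429) {
        // The controller is at its configured maximum load; the headers report the
        // controller-wide load and how long to wait before retrying.
        def retryAfter = response.headers().firstValue('Retry-After').orElse('60') as int
        def current = response.headers().firstValue('X-Current-Load').orElse('?')
        def maximum = response.headers().firstValue('X-Maximum-Load').orElse('?')
        println "Controller busy (${current}/${maximum}), retrying in ${retryAfter}s"
        sleep(retryAfter * 1000L)
        response = client.send(request, HttpResponse.BodyHandlers.ofString())
    }
    println "Trigger request returned HTTP ${response.statusCode()}"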
- New Pipeline Policies rule for High Availability (HA) controllers
A Pipeline Policies rule was added that detects idioms known to not work well in High Availability (HA) controllers. Initially, it warns about the build, lock, and milestone steps.
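As an illustration, a hypothetical Jenkinsfile such as the following sketch would be flagged by the rule because it uses all three steps (the job and resource names are placeholders):

    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    milestone 1                        // flagged: milestone step
                    lock(resource: 'staging-env') {    // flagged: lock step
                        build job: 'downstream-deploy' // flagged: build step
                    }
                }
            }
        }
    }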
- Aligning IP address used for High Availability (HA) reverse proxying with that of Hazelcast network
Previously, replicas of a High Availability (HA) controller on traditional platforms would attempt to contact one another for internal reverse proxying (such as for displaying running builds) on the IP address that was the default address of the system. Now, this choice is aligned with the IP address selected by Hazelcast for its own networking. The previous behavior can be selected using the system property -Dcom.cloudbees.jenkins.plugins.replication.hazelcast.Hazelcast.disable-BEE-48512=true in case of problems (in which case you can also contact CloudBees support).
- Enhance pipeline durability Administrative Monitor to show affected pipelines and include a fix button
The pipeline durability Administrative Monitor has been enhanced to include a list of affected pipelines and a button that removes incompatible properties. When you select the Fix Incompatibilities button, it removes all of the properties that set the pipeline durability to anything other than Maximum survivability/durability (this includes the global pipeline durability level and the job’s individual settings), and also removes the Do not allow the pipeline to resume if the controller restarts property from all affected jobs.
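For reference, these are the kinds of per-job settings that the button clears, shown here as a hypothetical Declarative Pipeline sketch using the standard options (the values are examples, not taken from the release note):

    pipeline {
        agent any
        options {
            // Per-job durability override; the Fix Incompatibilities button removes
            // settings like this so the job falls back to maximum survivability.
            durabilityHint('PERFORMANCE_OPTIMIZED')
            // "Do not allow the pipeline to resume if the controller restarts";
            // this property is also removed from affected jobs.
            disableResume()
        }
        stages {
            stage('Build') {
                steps {
                    echo 'hello'
                }
            }
        }
    }

Note that options declared in a Jenkinsfile are re-applied on the next run, so they may also need to be removed from the Jenkinsfile itself.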
- Users with Overall/Manage permission can now configure policies
Users who have the Overall/Manage permission can now configure policies.
Resolved Issues
- The Pipeline stage view displayed incomplete information on High Availability (HA) controllers
On a High Availability (HA) controller, the Pipeline stage view displayed only builds that were either complete or managed by the replica with the sticky session. Now, metadata is aggregated across replicas via the internal REST API so the widget can display a full view of all completed and running builds.
- CloudBees Pipeline Explorer Resolved Issues
The following CloudBees Pipeline Explorer issues have been resolved:
  - Issue with the Scroll to line Door icon in the Unsuccessful Steps drawer: In the Unsuccessful Steps drawer, the Scroll to line Door icon now properly loads the previous page in specific conditions.
  - Log badges are now displayed on one line: Log badges that have long labels or a long sequence hierarchy were sometimes displayed on multiple lines, leading to unexpected rendering. Log badges are now always displayed on one line.
  - Avoid collision with the controls in the CloudBees Pipeline Explorer map display: In the Map view display, the zoom-to-fit feature no longer causes collisions between the controls and the left-most node.
  - In some cases, the logs failed to load for incomplete builds: The CloudBees Pipeline Explorer now loads the log content for incomplete builds when reading near the end of a log that has not yet been fully written to disk.
- Missing High Availability (HA) synchronization of Pipeline Policies
Pipeline Policies saved on a High Availability (HA) controller did not propagate to other replicas until they were restarted. Now, this setting is synchronized normally.
- Spaces in agent names no longer break UI navigation
Spaces in agent names no longer break UI navigation when navigating to the agent.
- Problems loading a Pipeline Template Catalog defined in Configuration as Code could break startup
If there was a problem loading a Pipeline Template Catalog defined in Configuration as Code (globalCloudBeesPipelineTemplateCatalog in jenkins.yaml), such as a Git server outage, startup could fail. Now, the failure to import the catalog is recorded, but startup continues.
- Improved robustness of the sshagent step
Resuming a build that runs some commands wrapped in an sshagent block did not work reliably; sometimes, the build would fail stating there was a problem running the ssh-agent command. A revised implementation is simpler and should avoid these issues.
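For context, this affects builds that use the step as in the following minimal sketch (the credentials ID and host are placeholders):

    node {
        sshagent(credentials: ['deploy-ssh-key']) {
            // commands in this block run with the SSH key loaded into an ssh-agent process
            sh 'ssh -o StrictHostKeyChecking=no deploy@server.example.com uptime'
        }
    }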
- Avoid a log warning when visiting the Parameters page for a job
Internal changes were made to avoid displaying the following warning: New Stapler routing rules result in the URL "/job/dummy-job/1/parameters/" no longer being allowed. […]
- Fix a warning when setting up a new High Availability (HA) controller
A warning message that mentioned the class LeaseId and was displayed at High Availability (HA) controller startup has been removed.
- CasC export of controller items using jenkinsEnterpriseUpdateSource improperly included the toolInstallerNames field
Exporting a Configuration as Code item for a (client or managed) controller that uses the updateCenter property with jenkinsEnterpriseUpdateSource may have included a list of strings under a toolInstallerNames field, which was incorrect since this is not a configurable property of the update source.
- IllegalArgumentException when applying items in CasC
Under some conditions, especially depending on the Java version used, applying the items of a Configuration as Code bundle could fail with an error in ReflectionHelper.
- GitHub App installation token cannot be generated when the GitHub App is installed in multiple organizations
Authentication using GitHub App credentials was failing with the following configuration:
  - GitHub App installed in more than one GitHub organization.
  - GitHub App credentials owner left blank (owner inferred from the repository).
The owner is now inferred from the repository, as expected, even when the GitHub App is installed in multiple organizations.
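For example, a checkout that uses such credentials, sketched here with a placeholder repository URL and credentials ID, now authenticates correctly even though the credentials do not name an owner:

    node {
        // GitHub App credentials with the owner field left blank; the owner is
        // inferred from the repository URL.
        git url: 'https://github.com/acme-org/sample-repo.git',
            credentialsId: 'github-app-credentials'
    }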