Upgrade Notes
- Operations center CloudBees Assurance Program plugin changes since 2.440.1.3
-
The following plugins have been added to the Operations center CloudBees Assurance Program since 2.440.1.3:
- ASM API Plugin (asm-api)
- JSON Api Plugin (json-api)
-
- Controller CloudBees Assurance Program plugin changes since 2.440.1.3
-
The following plugins have been added to the Controller CloudBees Assurance Program since 2.440.1.3:
- ASM API Plugin (asm-api)
- JSON Api Plugin (json-api)
-
New Features
- Internet Protocol version 6 (IPv6) Now Supported
-
Starting with this release, CloudBees CI supports Internet Protocol version 6 (IPv6) with the following considerations.
The following third-party tools that interact with the following CloudBees CI plugins do not support IPv6:
- The CloudBees Analytics plugin (cloudbees-analytics) interacts with the Segment HTTPS Tracking API (api.segment.io), which does not provide IPv6 support. For more information about the CloudBees Analytics plugin, refer to CloudBees Analytics Plugin.
- The CloudBees Slack Integration plugin (cloudbees-slack) interacts with the Slack Web API (api.slack.com), which does not provide IPv6 support. For more information about the CloudBees Slack Integration plugin, refer to CloudBees Slack Integration Plugin.
CloudBees CI relies on the standard Java platform behavior with regard to the handling of IPv4/IPv6. For more information, refer to Networking properties. If you want Java to establish connections over IPv6 instead of IPv4, add the argument -Djava.net.preferIPv6Addresses=true to the Java command line that is used to start a CloudBees CI component (operations center or controller).
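As a sketch, the flag is passed directly on the JVM command line when launching the component; the WAR path and the absence of other JVM options here are illustrative only:

```shell
# Illustrative launch command; the WAR path and any additional JVM options
# depend on your installation. The flag tells Java to prefer IPv6 addresses
# when a host resolves to both IPv4 and IPv6.
java -Djava.net.preferIPv6Addresses=true -jar /usr/share/jenkins/jenkins.war
```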
- System for simulating multiple executors on agent in High Availability (HA) controller
-
High Availability (HA) controllers do not permit agents with multiple executors to be attached. The recommended alternative has been to define multiple agents that use the same computer, but this can be complicated to set up. There is now a new option on static agents running on a High Availability (HA) controller that allows an executor count above one to be defined. This causes one or more extra agents with a similar configuration to be automatically created and managed; in the case of inbound agents, connecting the original agent automatically launches the extra agent processes on the same computer.
- Spread replicas of a High Availability (HA) controller in different availability zones if possible
-
When running in a Kubernetes cluster with multiple availability zones, replicas of the same controller now prefer to spread across different zones. This relies on the standard Kubernetes node label topology.kubernetes.io/zone.
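To verify how your nodes are labeled, you can list the zone label on each node (requires kubectl access to the cluster):

```shell
# List each node together with its availability zone label.
kubectl get nodes --label-columns=topology.kubernetes.io/zone
```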
- New CloudBees HashiCorp Vault credentials supported
-
The CloudBees HashiCorp Vault plugin now supports User/Private Key and File credentials.
Feature Enhancements
- Support Kubernetes 1.28 for EKS
-
CloudBees CI on modern cloud platforms now supports Kubernetes 1.28, but only when running on Amazon EKS.
- High Availability IPv6-only support
-
When running in High Availability (HA) mode without an IPv4 network interface, Hazelcast support for IPv6 is automatically enabled.
- Add an aggregated API for listing agents
-
When you run CloudBees CI in High Availability (HA) mode, the REST API endpoint /computer/api/json now returns an aggregated list of agents across all replicas.
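As a sketch, the aggregated endpoint can be queried like any other Jenkins REST endpoint; the host, user, and token below are placeholders, and jq is used only for readability:

```shell
# Placeholder URL and credentials; replace with your controller's address
# and an API token. Prints the display name of each aggregated agent.
curl -s -u admin:API_TOKEN \
  "https://controller.example.com/computer/api/json" \
  | jq '.computer[].displayName'
```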
- Better selection of components from High Availability (HA) support bundles
-
Previously, the replicas/live/ directory of a support bundle from a High Availability (HA) controller included a predefined list of information from each other replica, regardless of which components were requested for the bundle. Now, it includes only the subset of the selected components that is meaningful in this context.
The replicas/exited/ directory still contains a more mixed list of information, depending on what was last captured by each replica before it exited.
- Show live Operations center page from all High Availability (HA) replicas
-
In a High Availability (HA) controller, only one replica at a time holds a socket open to the operations center. Previously, the Operations center sidebar link from other replicas (depending on your sticky session) displayed a page indicating that the replica did not hold that connection. Now, the same view, including the running connection log, is displayed from any replica.
- Set the default service exposure method for controllers to Route when the operations center is installed in OpenShift
-
The default method to expose controllers outside of the cluster when configuring a Cluster Endpoint is now Route when the operations center is installed in OpenShift.
- Display admin monitor for Pipeline job durability and disable resume properties
-
An admin monitor now alerts administrators when the global default durability setting for Pipeline jobs is not set to the maximum durability, or when individual jobs override it with another setting. It also checks whether the disable resume property is set on any Pipeline jobs. These properties are not compatible with High Availability (HA) mode.
- Admin monitors for Proxy patterns for High Availability (HA) controllers
-
If a proxy is set up, there should be a pattern in the No Proxy Host list that matches the replica’s address. If a replica address does not match any of the proxy patterns in No Proxy Host, an administrative monitor is displayed to warn about potential issues.
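The No Proxy Host list itself is configured in the CloudBees CI UI; purely as an illustration of the pattern syntax involved, the equivalent standard Java property looks like this (all values below are hypothetical):

```shell
# Hypothetical patterns; adjust to match your replicas' addresses.
# Java's nonProxyHosts accepts '|'-separated host patterns with '*' wildcards.
JAVA_OPTS="$JAVA_OPTS -Dhttp.nonProxyHosts=localhost|*.svc.cluster.local|10.*"
```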
- New administrative monitor to inform when actively assigned Configuration as Code bundles have been removed from a remote bundle source
-
If a Configuration as Code bundle that is currently assigned to a controller is removed from the remote bundle store, an administrative monitor is displayed that lists the controllers impacted by the missing bundle.
Resolved Issues
- Unable to configure pod templates in the Operations center via the UI
-
In previous releases, the way pod templates were managed caused issues that prevented the operations center from managing pod templates from the UI.
Pod templates can now be configured from the UI.
- Fix High Availability (HA) discovery if agents are configured to be scheduled in another namespace
-
When you run a controller in High Availability (HA) mode, it needs access to the Kubernetes API to discover other nodes in the cluster.
When agents were configured to be scheduled in a separate namespace, the required access rights were missing.
This condition has been fixed, and a separate role and role binding are now provisioned when using High Availability (HA).
- Improve reliability of outbound agents connection
-
In some cases, a replica could claim ownership of an outbound agent configured with High Availability (HA) mode, causing the agent to become unavailable to other replicas.
This behavior has been corrected and the issue is resolved.
- Address speed issues with reconnection of outbound agents in High Availability (HA) mode
-
When using static outbound agents configured with High Availability (HA) mode, there were connection delays when a build was requesting them.
These connection delays have been eliminated and the agents should now connect as soon as needed, resulting in faster assignments to builds that request them.
- Fix High Availability (HA) compatibility of the Pipeline Template Catalogs
-
Interactions with the Pipeline Template Catalogs are now correctly applied to both replicas when running a controller in High Availability (HA) mode.
- Propagation of proxy settings in High Availability (HA) mode
-
Now, when a new proxy configuration is set, or an existing proxy configuration is edited, the proxy configuration is synced across the replicas without requiring a restart. This does not apply to deleting the proxy, because an onChange event is currently not fired in that case. If you want to delete a proxy, you must either restart, or manually remove the proxy from all replicas.
- The CloudBees Blue Ocean Default Theme plugin was forcibly installed on all controllers
-
Previously, this plugin (which adjusts the Blue Ocean theme: color and some other styling) was forcibly installed on all controllers, even if Blue Ocean was not in use or installed. This caused some confusion with an administrative monitor, active on High Availability (HA) controllers, that prompts you to remove Blue Ocean (which is not compatible with High Availability).
Now, the theme plugin can be uninstalled. Additionally, since the theme plugin is no longer forcibly installed, when you create a new controller with the setup wizard or Configuration as Code and include Blue Ocean, you should also include the theme plugin. Existing controllers and the operations center are unaffected, as are new team controllers, which always include the theme plugin.
- Failure to adopt running build during High Availability (HA) rolling restart
-
Under some circumstances, a High Availability (HA) controller replica could attempt to adopt a running build from another replica being cycled out by a rolling restart, but fail to do so within one minute. This allowed another replica to adopt the build instead, which could potentially lead to data corruption. Changes to the blocking behavior should now avoid some of these cases.
- Honor concurrent build setting on jobs on High Availability (HA) controllers
-
When you configured a Pipeline project on a High Availability (HA) controller to not allow concurrent builds (or did not configure a freestyle project to allow concurrent builds), multiple builds of the project could run on different replicas. Now, the controller tracks whether a build of the project is running on any replica, and if so, it blocks new builds from starting on any replica.
- CloudBees Pipeline Explorer Resolved Issues
-
- Related builds pane displayed a loading spinner when it auto-refreshed: The related builds loading spinner is now displayed only when the builds are initially loading. This issue is resolved.
- In tree view, several node steps were not displayed: node steps that were cancelled while waiting for the requested node in the queue were incorrectly hidden in the CloudBees Pipeline Explorer, even when the "show node steps in tree" preference or the "show steps" advanced option in the tree view was used. This issue has been fixed.
- Reduced the size of fonts required by the CloudBees Pipeline Explorer: The size of the fonts required by the CloudBees Pipeline Explorer was reduced to improve load times when the fonts are not cached.
- Line number input behavior in special cases was inconsistent: If you entered a line number past the last line of the log file, or one that was not part of an active filter, you could experience undesirable behavior; for example, an empty page of results was displayed. The CloudBees Pipeline Explorer now attempts to load the full page of lines closest to the line number you requested, and scrolls to the closest available line number in the results.
- Tree click targets were adjusted: Several tree click targets were not adjusted correctly to activate their trees. This issue has been resolved.
- Very large lines are now truncated instead of displaying an error: The CloudBees Pipeline Explorer now truncates individual log lines to a maximum of 10 KiB (10240 ASCII characters). Lines longer than this display the first 10240 characters, followed by a message such as [N bytes truncated] that notes how many additional bytes were truncated. If truncated lines are analyzed as part of a search, a warning icon is displayed with a tooltip that explains what occurred. Long lines are not truncated when you download the log file.
- Added a download button to the error message bar: There is now a download button on the error message bar that downloads the logs.
- Viewing the CloudBees Pipeline Explorer for an in-progress build could fail transiently: When you opened the CloudBees Pipeline Explorer for a Pipeline where a stage step was starting, an error message was displayed and a NullPointerException was logged. This issue has been fixed.
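The truncation rule above can be sketched as follows; this is an illustration of the described behavior, not the actual CloudBees implementation, and it counts characters, which matches bytes only for ASCII input:

```shell
# Build a 10300-character ASCII line, keep the first 10240 characters,
# and report how many bytes were dropped.
limit=10240
line=$(printf 'a%.0s' $(seq 1 10300))
extra=$(( ${#line} - limit ))
echo "${line:0:limit} [${extra} bytes truncated]"
```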
-
- Persistent volume claim created in incorrect namespace for High Availability (HA) managed controller
-
When a managed controller was configured to be provisioned in a distinct namespace from the operations center, the persistent volume claim jenkins-home-$name-0 was created in the same namespace as the operations center, preventing the controller's Deployment from starting. Provisioning has been corrected to create the PVC in the same namespace as the other resources.
- CAP plugins are installed in their CAP version when the plugin catalog defines a Beekeeper exception
-
For plugins defined as exceptions in the plugin catalog, the CAP version was installed, while the version defined in the exception was only offered as an upgrade.
Now the version indicated in the exception is installed.
- Controller bundle with invalid YAML blocks the operations center startup process
-
If the operations center checks out controller bundles that have structural errors, meaning a file is not valid YAML, the operations center becomes blocked and does not start. In addition, the error log did not provide enough information to identify the bundle causing the error.
Now the error is properly logged and the startup process continues.
- Invalid Bundle Assignment administrative monitor does not display bundle names
-
The Invalid Bundle Assignment administrative monitor did not identify the bundles that were in an incorrect state.
This issue is now fixed.
- Operations center server endpoint to load a user from the security realm returned a technical error instead of a functional one
-
The operations center endpoint /openid/loadUserByUsername returned an HTTP 403 status when it should have returned a 200 status with a load status of either USERNAME_NOT_FOUND or USER_MAY_OR_MAY_NOT_EXIST.
This condition is now fixed, and the endpoint returns a proper functional error.
- Fixed memory leak in Operations center
-
When a communication error occurred between the controller and the operations center, some resources associated with the HTTP exchange were not released. This resulted in a memory leak and thread retention.
Now, the resources associated with HTTP communication are systematically released, including in error cases.
- Operations center single sign-on broken on Java 17 when using an authenticated proxy
-
When running a connected controller on Java 17 with a proxy configured with username/password authentication, operations center single sign-on did not work due to a limitation in a Java HTTP client. Now the proxy configuration is ignored, and the controller contacts the operations center directly regardless of proxy settings.
NOTE: Saving the proxy configuration in the GUI currently saves a blank username. This will be fixed in a future release. Configuring the proxy via Configuration as Code does not enable authentication unless a username is actually defined.
- External authentication groups not set for API/CLI calls authenticated via Operations center SSO using username/password
-
An earlier regression fix enabled API/CLI calls to be made through the Operations center SSO using username/password (an API token is still strongly encouraged). However, the fix omitted external groups such as those from LDAP. These are now set correctly.
- NullPointerException leading to some RBAC permissions not being loaded when using a FINE logger
-
When configuring a FINE logger for RBAC and restarting a controller, a NullPointerException appeared in the logs and blocked the proper loading of RBAC group permissions.
This issue is now fixed.
- The SSH agents fail to launch at startup
-
SSH-based agents failed to launch. This was caused by a bug in Jenkins where a thread pool was not initialized correctly with a context ClassLoader, which made class loading dependent on the initialization order and on previous calls to the thread pool. This bug has been fixed.
Known Issues
- Failed parsing of data in the User Activity Monitoring plugin leads to incomplete data
-
Failed parsing of data from the User Activity Monitoring plugin overwrites the user activity database, and all user activity data logged up to that point in time is lost. To avoid this, refer to the knowledge base article Why is my user activity missing?.
- HTTP Client used for Operations Center to Controllers connection leads to performance issues
-
Because of known issues in the Java HTTP Client, there could be performance issues in Operations Center to Controllers interactions in heavily loaded environments.
More details about this issue and workarounds are documented in Operations Center Client leaks HTTP Clients since version 2.401.1.3.
- Duplicate Pipeline Template Catalogs in the Configuration as Code for controllers jenkins.yaml file on each instance restart
-
If a Pipeline Template Catalog is configured in the Configuration as Code jenkins.yaml file and the id property is not defined, the catalog is duplicated on each instance restart and in the exported Configuration as Code configuration.
- Inconsistencies between some Configuration as Code (CasC) features and High Availability (HA)
-
When using Configuration as Code on controllers that run in High Availability (HA) mode, the CloudBees Configuration as Code Export and Update screen may display inconsistent information about the bundle, along with two buttons: Restart and Reload. This is caused by information not being properly synchronized between replicas. Furthermore, users may experience the following problems when trying to use one of the two buttons on that page:
- Automatic reload bundle: clicking this button shows an error message.
- Skip new bundle version: clicking this button forces a restart, and the instance does not start again.
While the fix for this issue is being worked on, we recommend the following if you are using Configuration as Code on controllers that run in High Availability (HA) mode:
- Controllers that have automatic reload configured: disable it and configure the automatic restart instead.
- Controllers that do not have any automation (Bundle Update Timing): stop using the Reload button and use the Restart button instead.
- Error when renaming an existing EC2 cloud
-
When the name of an existing cloud is updated, the user receives a 404 error after selecting Save. This occurs because the cloud page uses the cloud name as part of its URL; when the user saves the new name, Jenkins sends the user to the URL with the old cloud name. Note that all changes to the cloud are successfully saved.