CloudBees CI release highlights

What’s new in CloudBees CI 2.516.2.28983

Critical Issues

Update to CloudBees CI 2.516.2.28991 – Critical fixes for provisioning and workspace caching

CloudBees strongly recommends upgrading your CloudBees CI environment to 2.516.2.28991 if you use workspace caching or provision controllers with subdomains, to prevent possible service disruption or data loss.

Issues Addressed in 2.516.2.28991:

  • Provisioning Fix: Fixed an issue where a managed controller with the same domain name but different endpoints failed to provision.

  • Caching Cleanup Correction: Fixed a workspace caching cleanup problem where an incorrect S3 fix triggered excessive memory consumption and deletion of artifacts.

Upgrade Notes

Operations center CloudBees Assurance Program plugin changes since 2.516.1.28669

The following plugins have been added to the operations center CloudBees Assurance Program since 2.516.1.28669:

  • Dev Tools Symbols API Plugin (oss-symbols-api)

The following plugins have been removed from the operations center CloudBees Assurance Program since 2.516.1.28669:

  • Operations Center Embedded elasticsearch (operations-center-embedded-elasticsearch)

  • Popper.js 2 API Plugin (popper2-api)


Controller CloudBees Assurance Program plugin changes since 2.516.1.28669

The following plugins have been added to the controller CloudBees Assurance Program since 2.516.1.28669:

  • Dev Tools Symbols API Plugin (oss-symbols-api)

The following plugins have been removed from the controller CloudBees Assurance Program since 2.516.1.28669:

  • Popper.js 2 API Plugin (popper2-api)


New Features

None.

Feature Enhancements

None.

Resolved Issues

Backups to Amazon Web Services S3 fail when larger than 5GB

Backups to Amazon Web Services S3 failed immediately with the message "Your proposed upload exceeds the maximum allowed size" when the backup size was greater than 5 GB. This was caused by a recent migration to the Amazon Web Services SDK v2, during which multipart upload support was not preserved. The upload now uses multipart upload and can handle large files.
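The 5 GB ceiling comes from S3's limit on a single PUT request; multipart upload splits the file into parts (roughly 5 MB to 5 GB each, at most 10,000 parts per object), which is what allows large backups to succeed. A minimal sketch of the part-planning arithmetic, not the actual plugin code:

```python
import math

# S3 constraints on single objects, per AWS documentation.
MIN_PART_SIZE = 5 * 1024**2   # ~5 MB: minimum multipart part size
MAX_PARTS = 10_000            # maximum parts per multipart upload

def plan_multipart(total_bytes, part_size=64 * 1024**2):
    """Return the (part_size, part_count) needed to upload total_bytes."""
    # Grow the part size if the default would exceed the 10,000-part cap.
    part_size = max(part_size, MIN_PART_SIZE,
                    math.ceil(total_bytes / MAX_PARTS))
    return part_size, math.ceil(total_bytes / part_size)

# A 6 GiB backup, which a single PUT would reject, fits easily
# in 96 parts of the default 64 MiB size.
size, parts = plan_multipart(6 * 1024**3)
```

The part size here (64 MiB) is an illustrative default, not the value the plugin uses.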


High Availability (HA) build corruption under load

Under certain heavy load circumstances, a High Availability (HA) controller may have allowed multiple replicas to adopt the same build in close succession, leading to corrupted build metadata.


Race condition when an SSH agent connects to multiple High Availability (HA) replicas

Resolved a race condition that could occur when an SSH agent was simultaneously connected to multiple High Availability (HA) replicas. This issue was more likely to occur in environments with a large number of SSH agents and high parallel build activity. In such cases, multiple replicas could mistakenly schedule and run two builds concurrently on the same agent.


Offline cloud agents are now displayed in every replica

Offline cloud agents were previously displayed only in their owning replica. This led to a confusing user experience. Now, these agents are displayed in every replica.


Automatic termination of Pipeline build when agent remains offline

If a Pipeline build is using an agent that disconnects and then stays offline for five minutes, the build now aborts the sh step automatically. As usual, the retry step with conditions can be used to restart the stage if the disconnection was temporary.
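For reference, the retry step's conditions parameter can limit retries to agent-related failures, so the stage is rerun only when the agent was lost rather than on every error. A Scripted Pipeline sketch; the linux-agent label and the build command are placeholders:

```groovy
// Rerun the body up to 3 times, but only when the failure was caused
// by the agent being lost (the 'agent' condition), not by a build failure.
retry(count: 3, conditions: [agent()]) {
    node('linux-agent') {     // placeholder label
        sh 'make test'        // aborted automatically if the agent
                              // stays offline for five minutes
    }
}
```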


Backups to Amazon Web Services S3 fail due to token expiration

Backups to Amazon Web Services S3 could fail immediately when uploading the generated backup file, particularly when using AWS credentials with an IAM role. The client used to upload the backup was created at the beginning of the backup operation, so the issue occurred when generating the backup file took longer than the configured Security Token Service (STS) Token Expiration period. The session token used for uploading is now obtained right before the upload.
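The failure mode is a plain timing race and can be illustrated without AWS at all: a token minted at backup start is only valid for the STS TTL, so any packing phase longer than the TTL leaves an expired token at upload time. A self-contained sketch with a simulated clock (not the AWS SDK; the 1-hour TTL and 90-minute packing time are illustrative):

```python
TOKEN_TTL = 3600          # e.g. a 1-hour STS session token (illustrative)

def token_valid(minted_at, now, ttl=TOKEN_TTL):
    """A token is valid while less than `ttl` seconds old."""
    return now - minted_at < ttl

backup_start = 0
packing_time = 5400       # generating the backup file took 90 minutes

# Old behavior: the token minted at backup start has expired by upload time.
old_ok = token_valid(minted_at=backup_start, now=backup_start + packing_time)

# Fixed behavior: a token obtained right before the upload is still fresh.
new_ok = token_valid(minted_at=backup_start + packing_time,
                     now=backup_start + packing_time)
```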


FileNotFound exception from PluginListExpander

Using a file-protocol URL in a plugins.yaml file would typically result in a FileNotFoundException error in the system log. This behavior could have led to confusion or misconfiguration during plugin management.

Known Issues

S3 cache cleanup causing memory spike

In the CloudBees Cache Step Plugin (cloudbees-cache-step), if workspace caching is configured to use AWS S3, the plugin’s automated cleanup process can delete customer artifacts from the S3 bucket, even if you do not actively use workspace caching.

CloudBees advises disabling cache steps by selecting None (disable caching) from the Cache Manager field of the Workspace Caching section of the System configuration page. This action turns off workspace caching and prevents the cache cleanup process from deleting content in the S3 bucket.


Implemented periodic cleanup of orphaned EC2 Instances in Amazon EC2 plugin (ec2)

Resolved an issue where orphaned EC2 instances could accumulate in the Amazon Web Services (AWS) console when the Amazon EC2 plugin (ec2) was configured to avoid reusing orphaned nodes. This update introduces a periodic cleanup mechanism for EC2 nodes that are no longer attached to active controllers, preventing the unintentional buildup of EC2 instances that could lead to resource exhaustion and breaches of the plugin’s global caps.
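The cleanup described above amounts to reconciling the provider's instance list against the set of controllers that are still active: any instance attributed to a controller that no longer exists is a candidate for termination. A minimal sketch of that reconciliation, using hypothetical field names rather than the plugin's actual data model:

```python
def find_orphans(instances, active_controllers):
    """Return ids of instances whose owning controller is no longer active.

    `instances` is a list of dicts with hypothetical keys:
    'id' and 'controller' (the name of the owning controller).
    """
    active = set(active_controllers)
    return [i["id"] for i in instances if i["controller"] not in active]

instances = [
    {"id": "i-001", "controller": "team-a"},
    {"id": "i-002", "controller": "team-b"},   # controller was deleted
    {"id": "i-003", "controller": "team-a"},
]
orphans = find_orphans(instances, {"team-a"})
```

Running this reconciliation periodically, rather than only at provisioning time, is what prevents orphans from accumulating between controller restarts.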


Duplicate plugins in the Operations center Plugin Manager UI

When you search for a specific plugin under the Available tab in the Operations center Plugin Manager, the search results show duplicate entries for the plugin.


Configuration as Code items failed to load Pipeline Catalog based jobs

When jobs based on a Pipeline Catalog are defined with Configuration as Code (CasC), the controller fails to load the CasC bundle, which causes controller startup to fail.

As a workaround, disable CasC or avoid defining these items with CasC until a fix is available.