CloudBees Beta
This feature is available as a beta release and is subject to change without notice. CloudBees recommends stringent testing in a development environment and a complete review of the documentation and architecture before using it in production.
This content applies only to CloudBees CI on modern cloud platforms.
Follow these steps to migrate an existing CloudBees CI on modern cloud platforms deployment from Kubernetes Ingress to Gateway API.
This migration requires deprovisioning and reprovisioning all managed controllers. Controllers are unavailable during the migration window. Plan a maintenance window that accommodates the full deprovision and reprovision cycle. Do not combine this migration with a CloudBees CI on modern cloud platforms version upgrade. Upgrade to your target release first, verify the environment is stable, and then perform the Gateway API migration as a separate operation.
Prepare for the migration
Complete the following before starting the migration:
- Verify all Gateway API prerequisites, including:
  - A Gateway resource created in the `Programmed` state with appropriate TLS certificates.
  - Namespace labels applied to namespaces containing operations center or managed controller Services so that HTTPRoutes can attach to the Gateway. Refer to Verify namespace labels for details.
- Back up your operations center and managed controller configurations. Refer to Run backups using cluster operations for backup procedures.
- Lower your DNS TTL to a short value (for example, 60 seconds) at least 24 hours before the migration window. This ensures DNS changes propagate quickly during the cutover. Reset the TTL to its original value after the migration completes.
- For High Availability (HA) or High Scalability (HS) controllers, configure session persistence before the maintenance window. Refer to session persistence implementation support for details. Session persistence can often be configured in advance of the migration without downtime. The Helm upgrade automatically adds the required Role-Based Access Control (RBAC) permissions for HTTPRoute management.
- Review Clean up Helm values for Gateway API migration to understand which Ingress-specific Helm values to remove.
- Audit your existing Ingress resources for custom annotations. Behaviors configured through Ingress NGINX annotations (such as rate limiting, IP allowlisting, CORS, proxy timeouts, and client body size limits) are not migrated. Re-implement these behaviors using Gateway API policies or your gateway implementation’s extension mechanisms before or after the migration.
- If you used cert-manager Ingress annotations for TLS certificate management, those annotations are not migrated. Ensure TLS is configured on the Gateway resource directly, for example using a cert-manager `Certificate` resource or Gateway-level annotation.
- If you use external-dns with Ingress as a source, verify whether it is also configured to watch HTTPRoute resources.
- If your managed controllers are created and managed through an operations center CasC bundle, update your operations center CasC bundle before the migration. Refer to Update CasC-managed environments for details.
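To illustrate the namespace-label prerequisite, the sketch below shows a Gateway whose listener only admits HTTPRoutes from namespaces carrying a specific label. The Gateway name, namespace, gateway class, Secret name, and the label itself are hypothetical examples; match them to your own Gateway configuration.

```yaml
# Sketch only: a Gateway listener that admits HTTPRoutes from namespaces
# carrying a specific label. All names and the label are examples.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cloudbees-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: example-gateway-class
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: cloudbees-tls   # TLS Secret in the Gateway namespace
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              # Example label: namespaces hosting operations center or
              # controller Services must carry it for routes to attach.
              shared-gateway-access: "true"
```

With this selector in place, an HTTPRoute in a namespace without the label is silently ignored by the Gateway, which is why verifying namespace labels before the migration matters.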
Understand the migration approach
Enable Gateway API in your Helm values, and then cycle all managed controllers through a deprovision and reprovision. Reprovisioned controllers receive HTTPRoute resources instead of Ingress resources.
- `Ingress` and `HTTPRoute` are mutually exclusive per endpoint. CloudBees CI on modern cloud platforms creates either an `Ingress` or an `HTTPRoute` for each endpoint, but never both. All controllers sharing a Kubernetes endpoint must use the same routing type. If you have multiple endpoints, one endpoint can use Gateway API while another uses Ingress. For the simplest migration, migrate all endpoints in a single maintenance window.
- The Helm upgrade switches the operations center to Gateway API immediately. The Helm chart deletes the operations center Ingress and creates an HTTPRoute. The operations center is unreachable until you update the DNS records.
- Controller Ingress resources are not cleaned up automatically. The Helm chart manages the operations center Ingress, but when the service exposure switches from Ingress to Gateway API, the reprovision cycle creates new HTTPRoute resources without deleting the old Ingress resources. For manual cleanup steps, refer to Clean up old Ingress resources.
- HTTP-to-HTTPS redirect must be configured on the Gateway. With Ingress NGINX, the HTTP-to-HTTPS redirect was handled by the `ssl-redirect` annotation. With Gateway API, configure this redirect on the Gateway resource directly. The operations center root path redirect (`/` to `/cjoc/`) is handled automatically by the Helm chart HTTPRoute. Refer to Recommended Gateway-level routing configuration.
- JNLP/TCP agent routing is unaffected. JNLP agents (TCP port 50000) use direct Kubernetes Service connectivity, not Ingress or Gateway API. WebSocket-based agents route through the Gateway after migration.
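The Gateway-level HTTP-to-HTTPS redirect can be sketched as an HTTPRoute with a `RequestRedirect` filter attached to the plain-HTTP listener. The Gateway name, namespace, and the `http` listener name below are assumptions; adapt them to your Gateway.

```yaml
# Sketch only: redirect all plain-HTTP traffic to HTTPS at the Gateway.
# Gateway name, namespace, and the "http" listener name are examples.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect
  namespace: gateway-infra
spec:
  parentRefs:
    - name: cloudbees-gateway
      sectionName: http         # attach only to the port-80 listener
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
```

Because the route attaches only to the `http` listener via `sectionName`, HTTPS traffic on the other listeners is unaffected.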
Update CasC-managed environments
If your managed controllers are created and managed through an operations center CasC bundle, update the `serviceExposure` configuration in the operations center bundle before running the Helm upgrade in Update Helm values.
The Helm upgrade triggers a restart of the operations center, and the operations center applies the updated bundle on restart.
Ingress-based configuration (before):

```yaml
masterprovisioning:
  kubernetes:
    clusterEndpoints:
      - id: "default"
        name: "kubernetes"
        serviceExposure:
          ingress:
            ingressClass: "nginx"
```

Gateway-based configuration (after):

```yaml
masterprovisioning:
  kubernetes:
    clusterEndpoints:
      - id: "default"
        name: "kubernetes"
        serviceExposure:
          gateway:
            gatewayName: "cloudbees-gateway" (1)
            gatewayNamespace: "gateway-infra" (2)
            # sectionName: "https" (3)
```
| 1 | Name of the Gateway resource. |
| 2 | Namespace where the Gateway resource is deployed. |
| 3 | Optional. Target a specific Gateway listener by name. Required when the Gateway has multiple listeners. |
You must still deprovision and reprovision managed controllers for the routing change to take effect. The CasC change ensures controllers receive Gateway API routing when they are provisioned in Provision controllers.
Configure session persistence for HA/HS controllers
If you run HA/HS controllers, configure session persistence before the maintenance window. Refer to session persistence implementation support for the current status of each gateway implementation and configuration instructions.
- For gateway implementations that support `sessionPersistence` (such as Envoy Gateway), ensure you have installed the Gateway API experimental channel CRDs. For more information, refer to Verify experimental channel CRDs (HA/HS controllers only).
- For gateway implementations that do not support `sessionPersistence` (such as Istio), refer to Configure sticky sessions with Istio. Istio does not currently support GEP-1619.
Istio DestinationRules must be configured before deprovisioning
For HA/HS controllers on Istio, add the DestinationRule configuration to the controller custom YAML before you deprovision, so that session persistence is in place when controllers are reprovisioned.
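A cookie-based sticky-session DestinationRule can be sketched as follows. The controller name, namespace, Service host, and cookie name are hypothetical examples; use the values that match your controller deployment.

```yaml
# Sketch only: cookie-based sticky sessions for an HA/HS controller Service
# on Istio. Host, namespace, and cookie name are examples.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: team-a-sticky
  namespace: team-a
spec:
  host: team-a.team-a.svc.cluster.local   # the controller Service
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-affinity
          ttl: 0s   # session cookie; no fixed expiry
```

Consistent-hash load balancing on the cookie keeps each browser session pinned to the same controller replica, which is what HA/HS sign-in and CSRF crumb validation require.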
Perform the migration
The migration switches CloudBees CI on modern cloud platforms routing from Ingress to Gateway API through a Helm upgrade followed by a controller deprovision and reprovision cycle.
To perform the migration, you must complete the following steps:
Update Helm values
- Add the `Gateway` block to your Helm `values.yaml` and remove Ingress-specific values.

  ```yaml
  Gateway:
    Enabled: true
    Name: "cloudbees-gateway" (1)
    Namespace: "gateway-infra" (2)
    # SectionName: "https" (3)
  ```

  | 1 | Name of the Gateway resource in your cluster. |
  | 2 | Namespace where the Gateway resource is deployed. |
  | 3 | Optional. Target a specific Gateway listener by name. Required when the Gateway has multiple listeners. |

- Remove the `OperationsCenter.Ingress` block and any Ingress-specific annotations. For a complete list of values to remove, refer to Clean up Helm values for Gateway API migration.
- Run the Helm upgrade:

  ```shell
  helm upgrade <release-name> cloudbees/cloudbees-core \
    --namespace <namespace> \
    --values values.yaml
  ```

  The Helm upgrade deletes the operations center Ingress and creates an HTTPRoute. The operations center is unreachable until you complete Update DNS records. Existing managed controllers continue running with their current Ingress resources until deprovisioned and reprovisioned.
Update endpoint exposure
The Helm upgrade does not change existing saved endpoint configurations. You must update the endpoint exposure so that reprovisioned controllers receive HTTPRoute resources instead of Ingress resources.
If your managed controllers are created and managed through an operations center CasC bundle, and you updated the bundle in Update CasC-managed environments, the operations center applies the updated `serviceExposure` on restart and no manual step is needed.
For non-CasC environments, update the endpoint exposure in the operations center UI:
- In the operations center, navigate to .
- For each endpoint, change the Service Exposure from Ingress to Gateway.
- Set the Gateway Name and Gateway Namespace to match the applicable Gateway resource.
- (Optional) Set the Section Name if your Gateway has multiple listeners.
- Save the configuration.
Update DNS records
Update your DNS records to point to the Gateway’s external address instead of the Ingress NGINX load balancer.
- Retrieve the Gateway’s external address:

  ```shell
  kubectl get gateway <gateway-name> -n <gateway-namespace> \
    -o jsonpath='{.status.addresses[0].value}'
  ```

- Update your DNS A record or CNAME record to resolve to this address. For subdomain-based routing, update the wildcard DNS record (for example, `*.cloudbees.example.com`). For path-based routing, update the A or CNAME record for your single CloudBees CI on modern cloud platforms hostname.
- Verify DNS resolution:

  ```shell
  dig +short cjoc.cloudbees.example.com (1)
  ```

  | 1 | Replace with a hostname matching your CloudBees CI on modern cloud platforms domain. |

  The output must match the Gateway’s external address.

- Verify the operations center is accessible through the Gateway:

  ```shell
  curl -sk -o /dev/null -w '%{http_code}' https://cjoc.cloudbees.example.com/cjoc/ (1)
  ```

  | 1 | Replace with your operations center URL. |

  The expected output is 403.
Drain build queues
Before deprovisioning, minimize disruption to running builds. The build queue is persisted across deprovision and reprovision cycles, but running builds lose their connection when the controller is deprovisioned. Pipelines can resume after reprovisioning, but draining avoids unnecessary build interruptions.
- On each managed controller, navigate to or enable quiet mode to prevent new builds from starting.
- Wait for all running builds to complete, or cancel builds that cannot wait.
- Verify the build queue is empty on each controller.
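When draining many controllers, quiet mode can also be enabled over HTTP through the standard Jenkins `/quietDown` endpoint. The sketch below only prints the requests it would make; the base URL, controller names, and the `CB_USER`/`CB_TOKEN` credential variables are placeholders.

```shell
# Sketch: drain a list of controllers by enabling quiet mode through the
# Jenkins /quietDown endpoint. Names and URL are hypothetical examples.
BASE_URL="https://cjoc.cloudbees.example.com"
CONTROLLERS="team-a team-b"

for c in $CONTROLLERS; do
  echo "POST ${BASE_URL}/${c}/quietDown"
  # Uncomment to execute; requires an API token with Administer permission:
  # curl -fsS -X POST -u "$CB_USER:$CB_TOKEN" "${BASE_URL}/${c}/quietDown"
done
```

A matching `POST <controller-url>/cancelQuietDown` reverses quiet mode after the migration.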
Deprovision controllers
Deprovision all managed controllers using one of the following methods:
Option 1: Deprovision manually
Deprovision each managed controller individually in the operations center.
- In the operations center, navigate to each managed controller.
- Select .
- Wait for the controller to be fully deprovisioned before proceeding.
Option 2: Deprovision with a cluster operation
Use a cluster operation to deprovision all managed controllers.
- In the operations center, select New Item and create a Cluster Operations job.
- Add an operation targeting controllers.
- Under source, select From Operations Center Root.
- Add the Deprovision controller step.
- Run the cluster operation.
Provision controllers
Provision all deprovisioned managed controllers using one of the following methods. Each controller starts with HTTPRoute resources instead of Ingress resources.
After deprovisioning, the operations center UI displays Start for the controller action. Starting a deprovisioned controller provisions it with the current endpoint configuration.
Consider a canary approach: provision a single non-critical controller first, verify it is accessible through the Gateway and that builds can run, and then proceed with the remaining controllers. This reduces the blast radius if something is misconfigured.
Option 1: Provision manually
Provision each deprovisioned managed controller individually in the operations center.
- In the operations center, navigate to each deprovisioned managed controller.
- Select Start.
- Wait for the controller to be fully provisioned.
Option 2: Provision with a cluster operation
Use a cluster operation to provision all managed controllers.
- Create a new Cluster Operations job (or modify the existing one).
- Add the Provision controller step.
- Run the cluster operation.
Alternative: Reprovision in place
If `OperationsCenter.HostName` is unchanged after the Helm upgrade, you can migrate controllers by reprovisioning them directly without deprovisioning first.
This skips the Drain build queues, Deprovision controllers, and Provision controllers steps.
This approach:

- Avoids the full deprovision and provision cycle.
- Reduces downtime because controllers remain running until the reprovision.
- Requires that `OperationsCenter.HostName` remains unchanged, because reprovisioning reuses the endpoint URL cached at initial provisioning time.
Use one of the following options to reprovision in place:
- Manually: In the operations center, navigate to each running managed controller and select .
- Cluster operation: Create a cluster operation with the Reprovision controller step.
Verify the migration
After reprovisioning, verify that all components are accessible through the Gateway.
- Verify HTTPRoute resources are created:

  ```shell
  kubectl get httproute -n <namespace> (1)
  ```

  | 1 | Replace with the namespace where your operations center or managed controllers are deployed. |

  The output lists HTTPRoute resources for the operations center and each reprovisioned managed controller.

- Verify the Gateway has attached routes:

  ```shell
  kubectl get gateway <gateway-name> -n <gateway-namespace> \
    -o jsonpath='{.status.listeners[*].attachedRoutes}' (1)
  ```

  | 1 | If your Gateway has multiple listeners, replace `*` with a filter on the listener name (for example, `?(@.name=="https")`). |

  The output must be greater than `0`.

- Verify each managed controller is accessible through the Gateway.
- Verify that agents can connect to their controllers:
  - JNLP/TCP agents: Unaffected by routing changes. Verify they reconnect after reprovisioning.
  - WebSocket agents: Route through the Gateway. Verify the connection establishes correctly.
  - External inbound agents: Verify they can reach controllers through the new Gateway address.
- Reset your DNS TTL to its original value.
Clean up old Ingress resources
After a successful migration, remove orphaned Ingress resources.
- Export existing Ingress resources for backup:

  ```shell
  kubectl get ingress -n <namespace> -o yaml > ingress-backup.yaml (1)
  ```

  | 1 | Run for each namespace where CloudBees CI on modern cloud platforms components were deployed. Useful for reference during rollback. |

- List Ingress resources in CloudBees CI on modern cloud platforms namespaces:

  ```shell
  kubectl get ingress -n <namespace> (1)
  ```

  | 1 | Run for each namespace where CloudBees CI on modern cloud platforms components were deployed. |

- Delete each orphaned Ingress resource:

  ```shell
  kubectl delete ingress <ingress-name> -n <namespace>
  ```

- (Optional) Remove the Ingress NGINX controller if no other applications in the cluster depend on it.
Roll back the migration
If the migration fails, you can revert to Ingress-based routing:
- In your Helm `values.yaml`, remove the `Gateway` block and restore the `OperationsCenter.Ingress` block.
- Run the Helm upgrade:

  ```shell
  helm upgrade <release-name> cloudbees/cloudbees-core \
    --namespace <namespace> \
    --values values.yaml
  ```

  The operations center restarts with Ingress routing enabled.

- Update DNS records to point back to the Ingress NGINX load balancer address.
- Deprovision all managed controllers, and then reprovision them. Each controller creates Ingress resources on reprovision.
- Clean up orphaned HTTPRoute resources:

  ```shell
  kubectl delete httproute -l app.kubernetes.io/managed-by=cloudbees-core -n <namespace>
  ```

- If you configured Istio DestinationRules for session persistence, remove them from the controller custom YAML and delete the DestinationRule resources from controller namespaces.
- If you updated your CasC bundle, revert the `serviceExposure` configuration to the Ingress-based settings.
Troubleshoot common issues
HTTPRoutes exist but do not attach to the Gateway
Verify that the namespace has the label expected by the Gateway’s `allowedRoutes` selector.
Run `kubectl get namespace <ns> --show-labels` and compare the labels with the Gateway selector.
Refer to Verify namespace labels for details.
Gateway shows `Accepted: True` but `Programmed: False`
The gateway controller cannot program the data plane.
Check the gateway controller logs and the Gateway events with `kubectl describe gateway <name> -n <ns>`.
Operations center is unreachable after Helm upgrade
The Helm upgrade deletes the operations center Ingress and creates an HTTPRoute. Verify DNS records point to the Gateway’s external address. Refer to Update DNS records.
Controllers are not accessible after reprovisioning
Verify DNS records point to the Gateway’s external address, not the old Ingress NGINX load balancer. Refer to Update DNS records. If you recently updated DNS, wait for TTL expiration or flush your local DNS cache.
Sticky sessions fail on HA/HS controllers with Istio
Verify the DestinationRule exists in the controller namespace and that the operations center service account has RBAC permissions for `destinationrules` in the `networking.istio.io` API group.
Refer to Configure session persistence for HA/HS controllers.
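The RBAC requirement above can be sketched as a namespaced Role and RoleBinding granting the operations center service account access to DestinationRules. The role name, controller namespace, service account name, and operations center namespace below are all examples.

```yaml
# Sketch only: grant the operations center service account permission to
# manage DestinationRules in a controller namespace. All names are examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: destinationrule-manager
  namespace: team-a               # controller namespace (example)
rules:
  - apiGroups: ["networking.istio.io"]
    resources: ["destinationrules"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: destinationrule-manager
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: cjoc                    # operations center service account (example)
    namespace: ci                 # operations center namespace (example)
roleRef:
  kind: Role
  name: destinationrule-manager
  apiGroup: rbac.authorization.k8s.io
```

Note the document states the Helm upgrade adds the RBAC permissions for HTTPRoute management automatically; a manual grant like this is only relevant for the Istio DestinationRule case.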
TLS errors when accessing CloudBees CI on modern cloud platforms through the Gateway
Verify the TLS certificate Secret exists in the Gateway namespace and covers your domain. Refer to Verify TLS certificate (Gateway-level TLS termination only).
HTTP requests are not redirected to HTTPS
With Gateway API, HTTP-to-HTTPS redirect is no longer managed by CloudBees CI on modern cloud platforms. Configure a redirect HTTPRoute on the Gateway. Refer to Recommended Gateway-level routing configuration.
Builds were interrupted during the migration
Running builds lose their connection when a controller is deprovisioned, but pipelines can resume after reprovisioning. Drain build queues before deprovision to avoid interruptions. Refer to Drain build queues.
WebSocket agents fail to connect after migration
Verify the Gateway supports WebSocket upgrade headers and that no intermediate proxy strips them. With Ingress NGINX, WebSocket support was enabled through annotations; with Gateway API, it must be supported natively by the gateway implementation.
HA/HS controller shows sign-in loops or CRUMB validation errors
Session persistence is not configured.
Verify the DestinationRule (Istio) or GEP-1619 session persistence (Envoy Gateway) is in place.
Refer to Configure session persistence for HA/HS controllers.