Migrate from Ingress to Gateway API

CloudBees Beta

This feature is available as a beta release and is subject to change without notice. CloudBees recommends stringent testing in a development environment and a complete review of the documentation and architecture before using it in production.

Modern Cloud Platforms

This content applies only to CloudBees CI on modern cloud platforms.

Follow these steps to migrate an existing CloudBees CI on modern cloud platforms deployment from Kubernetes Ingress to Gateway API.

This migration requires deprovisioning and reprovisioning all managed controllers. Controllers are unavailable during the migration window. Plan a maintenance window that accommodates the full deprovision and reprovision cycle.

Do not combine this migration with a CloudBees CI on modern cloud platforms version upgrade. Upgrade to your target release first, verify the environment is stable, and then perform the Gateway API migration as a separate operation.

Prepare for the migration

Complete the following before starting the migration:

  • Verify all Gateway API prerequisites, including:

    • Gateway resource created in the Programmed state with appropriate TLS certificates.

    • Namespace labels applied to namespaces containing operations center or managed controller Services so that HTTPRoutes can attach to the Gateway. Refer to Verify namespace labels for details.

  • Back up your operations center and managed controller configurations. Refer to Run backups using cluster operations for backup procedures.

  • Lower your DNS TTL to a short value (for example, 60 seconds) at least 24 hours before the migration window. This ensures DNS changes propagate quickly during the cutover. Reset the TTL to its original value after the migration completes.

  • For High Availability (HA) or High Scalability (HS) controllers, configure session persistence before the maintenance window. Refer to session persistence implementation support for details.

    Session persistence can often be configured in advance of the migration without downtime. The Helm upgrade automatically adds the required Role-Based Access Control (RBAC) permissions for HTTPRoute management.
  • Review Clean up Helm values for Gateway API migration to understand which Ingress-specific Helm values to remove.

  • Audit your existing Ingress resources for custom annotations. Behaviors configured through Ingress NGINX annotations (such as rate limiting, IP allowlisting, CORS, proxy timeouts, and client body size limits) are not migrated. Re-implement these behaviors using Gateway API policies or your gateway implementation’s extension mechanisms before or after the migration.

  • If you used cert-manager Ingress annotations for TLS certificate management, those annotations are not migrated. Ensure TLS is configured on the Gateway resource directly, for example using a cert-manager Certificate resource or Gateway-level annotation.

  • If you use external-dns with Ingress as a source, verify that it is also configured to watch HTTPRoute resources; otherwise, external-dns stops managing DNS records for the migrated endpoints.

  • If your managed controllers are created and managed through an operations center CasC bundle, update your operations center CasC bundle before the migration. Refer to Update CasC-managed environments for details.
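The namespace-label prerequisite above can be sketched as follows: a Gateway whose allowedRoutes selector admits HTTPRoutes only from labeled namespaces, alongside a namespace carrying the matching label. The label key `shared-gateway-access`, the `gatewayClassName`, the certificate Secret name, and the namespace name are illustrative assumptions; use the values your Gateway actually defines.

```yaml
# Illustrative sketch only: label key, gatewayClassName, Secret name,
# and namespace name are assumptions, not required values.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cloudbees-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: example-gateway-class   # assumption: your implementation's class
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: cloudbees-tls             # assumption: your TLS certificate Secret
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              shared-gateway-access: "true" # assumption: your chosen label key
---
apiVersion: v1
kind: Namespace
metadata:
  name: cloudbees-ci                        # namespace containing CloudBees CI Services
  labels:
    shared-gateway-access: "true"           # must match the Gateway selector above
```

With this selector in place, an HTTPRoute in any unlabeled namespace is rejected by the Gateway, which is the most common cause of routes that exist but never attach.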

Understand the migration approach

Enable Gateway API in your Helm values, and then cycle all managed controllers through a deprovision and reprovision. Reprovisioned controllers receive HTTPRoute resources instead of Ingress resources.

  • Ingress and HTTPRoute are mutually exclusive per endpoint. CloudBees CI on modern cloud platforms creates either an Ingress or an HTTPRoute for each endpoint, but never both. All controllers sharing a Kubernetes endpoint must use the same routing type. If you have multiple endpoints, one endpoint can use Gateway API while another uses Ingress. For the simplest migration, migrate all endpoints in a single maintenance window.

  • The Helm upgrade switches the operations center to Gateway API immediately. The Helm chart deletes the operations center Ingress and creates an HTTPRoute. The operations center is unreachable until you update the DNS records.

  • Controller Ingress resources are not cleaned up automatically. The Helm chart manages the operations center Ingress, but when the service exposure switches from Ingress to Gateway API, the reprovision cycle creates new HTTPRoute resources but does not delete old Ingress resources. For manual cleanup steps, refer to Clean up old Ingress resources.

  • HTTP-to-HTTPS redirect must be configured on the Gateway. With Ingress NGINX, the HTTP-to-HTTPS redirect was handled by the ssl-redirect annotation. With Gateway API, configure this redirect on the Gateway resource directly. The operations center root path redirect (/ to /cjoc/) is handled automatically by the Helm chart HTTPRoute. Refer to Recommended Gateway-level routing configuration.

  • JNLP/TCP agent routing is unaffected. JNLP agents (TCP port 50000) use direct Kubernetes Service connectivity, not Ingress or Gateway API. WebSocket-based agents route through the Gateway after migration.
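The Gateway-level HTTP-to-HTTPS redirect described above can be implemented with the standard Gateway API RequestRedirect filter. A minimal sketch, assuming the Gateway is named cloudbees-gateway in the gateway-infra namespace and its port 80 listener is named http (both names are assumptions):

```yaml
# Sketch: attaches to the Gateway's HTTP listener and redirects all
# traffic to HTTPS. Gateway name, namespace, and listener name are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect
  namespace: gateway-infra
spec:
  parentRefs:
    - name: cloudbees-gateway
      sectionName: http        # assumption: the port 80 listener is named "http"
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
```

Because the route carries only a redirect filter and no backendRefs, it never forwards traffic; plain HTTP requests receive a 301 to the HTTPS listener.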

Update CasC-managed environments

If your managed controllers are created and managed through an operations center CasC bundle, update the serviceExposure configuration in the operations center bundle before running the Helm upgrade in Update Helm values. The Helm upgrade triggers a restart of the operations center. The operations center applies the updated bundle upon restart.

Ingress-based configuration (before):
masterprovisioning:
  kubernetes:
    clusterEndpoints:
      - id: "default"
        name: "kubernetes"
        serviceExposure:
          ingress:
            ingressClass: "nginx"
Gateway API configuration (after):
masterprovisioning:
  kubernetes:
    clusterEndpoints:
      - id: "default"
        name: "kubernetes"
        serviceExposure:
          gateway:
            gatewayName: "cloudbees-gateway"(1)
            gatewayNamespace: "gateway-infra"(2)
            # sectionName: "https"(3)
1 Name of the Gateway resource.
2 Namespace where the Gateway resource is deployed.
3 Optional. Target a specific Gateway listener by name. Required when the Gateway has multiple listeners.

You must still deprovision and reprovision managed controllers for the routing change to take effect. The CasC change ensures controllers receive Gateway API routing when they are provisioned in Provision controllers.

Configure session persistence for HA/HS controllers

If you run HA/HS controllers, configure session persistence before the maintenance window. Refer to session persistence implementation support for the current status of each gateway implementation and configuration instructions.

Istio DestinationRules must be configured before deprovisioning

For HA/HS controllers on Istio, add the DestinationRule to each controller’s custom YAML before stopping the controller. When you save the configuration, the DestinationRule is created immediately. If you use cluster operations, pre-configure all DestinationRule resources first, as cluster operations cannot inject custom YAML during the migration. Alternatively, a Groovy script executed on operations center can pre-configure DestinationRules on controllers in bulk before running the cluster operation.
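As an illustrative sketch of such a DestinationRule (the resource name, host, and cookie name are assumptions; follow the session persistence documentation referenced above for authoritative settings), cookie-based consistent hashing pins each session to one controller replica:

```yaml
# Sketch only: one DestinationRule per HA/HS controller.
# Resource name, host, and cookie name below are assumptions.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: my-controller-sticky                # assumption: one rule per controller
spec:
  host: my-controller.cloudbees-ci.svc.cluster.local  # assumption: controller Service DNS name
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-affinity            # assumption: affinity cookie name
          ttl: 0s                           # session cookie, no explicit expiry
```

Applying this before the controller is stopped ensures sticky sessions take effect as soon as the reprovisioned controller starts receiving traffic through the Gateway.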

Perform the migration

The migration switches CloudBees CI on modern cloud platforms routing from Ingress to Gateway API through a Helm upgrade followed by a controller deprovision and reprovision cycle.

To perform the migration, complete the steps in the following sections.

If OperationsCenter.HostName is unchanged after the Helm upgrade, you can reprovision controllers in place instead of performing the full deprovision and provision cycle. This reduces downtime because controllers remain running until the reprovision.

Update Helm values

  1. Add the Gateway block to your Helm values.yaml and remove Ingress-specific values.

    Gateway:
      Enabled: true
      Name: "cloudbees-gateway"(1)
      Namespace: "gateway-infra"(2)
      # SectionName: "https"(3)
    1 Name of the Gateway resource in your cluster.
    2 Namespace where the Gateway resource is deployed.
    3 Optional. Target a specific Gateway listener by name. Required when the Gateway has multiple listeners.
  2. Remove the OperationsCenter.Ingress block and any Ingress-specific annotations. For a complete list of values to remove, refer to Clean up Helm values for Gateway API migration.

  3. Run the Helm upgrade:

    helm upgrade <release-name> cloudbees/cloudbees-core \
      --namespace <namespace> \
      --values values.yaml

    The Helm upgrade deletes the operations center Ingress and creates an HTTPRoute. The operations center is unreachable until you complete Update DNS records. Existing managed controllers continue running with their current Ingress resources until deprovisioned and reprovisioned.

Update endpoint exposure

The Helm upgrade does not change existing saved endpoint configurations. You must update the endpoint exposure so that reprovisioned controllers receive HTTPRoute resources instead of Ingress resources.

If your managed controllers are created and managed through an operations center CasC bundle, and you updated the bundle in Update CasC-managed environments, the operations center applies the updated serviceExposure on restart and no manual step is needed.

For non-CasC environments, update the endpoint exposure in the operations center UI:

  1. In the operations center, navigate to Manage Jenkins > Kubernetes > Cluster Endpoints.

  2. For each endpoint, change the Service Exposure from Ingress to Gateway.

  3. Set the Gateway Name and Gateway Namespace to match the applicable Gateway resource.

  4. (Optional) Set the Section Name if your Gateway has multiple listeners.

  5. Save the configuration.

Update DNS records

Update your DNS records to point to the Gateway’s external address instead of the Ingress NGINX load balancer.

  1. Retrieve the Gateway’s external address:

    kubectl get gateway <gateway-name> -n <gateway-namespace> \
      -o jsonpath='{.status.addresses[0].value}'
  2. Update your DNS A record or CNAME record to resolve to this address. For subdomain-based routing, update the wildcard DNS record (for example, *.cloudbees.example.com). For path-based routing, update the A or CNAME record for your single CloudBees CI on modern cloud platforms hostname.

  3. Verify DNS resolution:

    dig +short cjoc.cloudbees.example.com(1)
    1 Replace with a hostname matching your CloudBees CI on modern cloud platforms domain. The output must match the Gateway’s external address.
  4. Verify the operations center is accessible through the Gateway:

    curl -sk -o /dev/null -w '%{http_code}' https://cjoc.cloudbees.example.com/cjoc/(1)
    1 Replace with your operations center URL. The expected output is 403.

Drain build queues

Before deprovisioning, minimize disruption to running builds. The build queue is persisted across deprovision and reprovision cycles, but running builds lose their connection when the controller is deprovisioned. Pipelines can resume after reprovisioning, but draining avoids unnecessary build interruptions.

  1. On each managed controller, navigate to Manage Jenkins > Prepare for Shutdown or enable quiet mode to prevent new builds from starting.

  2. Wait for all running builds to complete, or cancel builds that cannot wait.

  3. Verify the build queue is empty on each controller.

Deprovision controllers

Deprovision all managed controllers using one of the following methods:

Option 1: Deprovision manually

Deprovision each managed controller individually in the operations center.

  1. In the operations center, navigate to each managed controller.

  2. Select Manage > Deprovision.

  3. Wait for the controller to be fully deprovisioned before proceeding.

Option 2: Deprovision with a cluster operation

Use a cluster operation to deprovision all managed controllers.

  1. In the operations center, select New Item and create a Cluster Operations job.

  2. Add an operation targeting controllers.

  3. Under source, select From Operations Center Root.

  4. Add the Deprovision controller step.

  5. Run the cluster operation.

Provision controllers

Provision all deprovisioned managed controllers using one of the following methods. Each controller starts with HTTPRoute resources instead of Ingress resources.

After deprovisioning, the operations center UI displays Start for the controller action. Starting a deprovisioned controller provisions it with the current endpoint configuration.

Consider a canary approach: provision a single non-critical controller first, verify it is accessible through the Gateway and that builds can run, and then proceed with the remaining controllers. This reduces blast radius if something is misconfigured.

Option 1: Provision manually

Provision each deprovisioned managed controller individually in the operations center.

  1. In the operations center, navigate to each deprovisioned managed controller.

  2. Select Start.

  3. Wait for the controller to be fully provisioned.

Option 2: Provision with a cluster operation

Use a cluster operation to provision all managed controllers.

  1. Create a new Cluster Operations job (or modify the existing one).

  2. Add the Provision controller step.

  3. Run the cluster operation.

Alternative: Reprovision in place

If OperationsCenter.HostName is unchanged after the Helm upgrade, you can migrate controllers by reprovisioning them directly without deprovisioning first. This skips the Drain build queues, Deprovision controllers, and Provision controllers steps.

This approach:

  • Avoids the full deprovision and provision cycle.

  • Reduces downtime because controllers remain running until the reprovision.

  • Requires that OperationsCenter.HostName remains unchanged, because reprovisioning reuses the endpoint URL cached at initial provisioning time.

If OperationsCenter.HostName changed during the Helm upgrade, use the full deprovision and provision cycle instead. Reprovisioning without deprovisioning uses the cached hostname, resulting in HTTPRoutes with the old hostname.

Use one of the following options to reprovision in place:

  • Manually: In the operations center, navigate to each running managed controller and select Manage > Reprovision.

  • Cluster operation: Create a cluster operation with the Reprovision controller step.

Verify the migration

After reprovisioning, verify that all components are accessible through the Gateway.

  1. Verify HTTPRoute resources are created:

    kubectl get httproute -n <namespace>(1)
    1 Replace with the namespace where your operations center or managed controllers are deployed. The output lists HTTPRoute resources for the operations center and each reprovisioned managed controller.
  2. Verify the Gateway has attached routes:

    kubectl get gateway <gateway-name> -n <gateway-namespace> \
      -o jsonpath='{.status.listeners[*].attachedRoutes}'(1)
    1 If your Gateway has multiple listeners, replace * with the listener name (for example, ?(@.name=="https")).

    The output must be greater than 0.

  3. Verify each managed controller is accessible through the Gateway.

  4. Verify that agents can connect to their controllers:

    • JNLP/TCP agents: Unaffected by routing changes. Verify they reconnect after reprovisioning.

    • WebSocket agents: Route through the Gateway. Verify the connection establishes correctly.

    • External inbound agents: Verify they can reach controllers through the new Gateway address.

  5. Reset your DNS TTL to its original value.

Clean up old Ingress resources

After a successful migration, remove orphaned Ingress resources.

  1. Export existing Ingress resources for backup:

    kubectl get ingress -n <namespace> -o yaml > ingress-backup.yaml(1)
    1 Run for each namespace where CloudBees CI on modern cloud platforms components were deployed. Useful for reference during rollback.
  2. List Ingress resources in CloudBees CI on modern cloud platforms namespaces:

    kubectl get ingress -n <namespace>(1)
    1 Run for each namespace where CloudBees CI on modern cloud platforms components were deployed.
  3. Delete each orphaned Ingress resource:

    kubectl delete ingress <ingress-name> -n <namespace>
  4. (Optional) Remove the Ingress NGINX controller if no other applications in the cluster depend on it.

Roll back the migration

If the migration fails, you can revert to Ingress-based routing:

  1. In your Helm values.yaml, remove the Gateway block and restore the OperationsCenter.Ingress block.

  2. Run the Helm upgrade:

    helm upgrade <release-name> cloudbees/cloudbees-core \
      --namespace <namespace> \
      --values values.yaml

    The operations center restarts with Ingress routing enabled.

  3. Update DNS records to point back to the Ingress NGINX load balancer address.

  4. Deprovision all managed controllers, and then reprovision them. Each controller creates Ingress resources on reprovision.

  5. Clean up orphaned HTTPRoute resources:

    kubectl delete httproute -l app.kubernetes.io/managed-by=cloudbees-core -n <namespace>
  6. If you configured Istio DestinationRules for session persistence, remove them from controller custom YAML and delete the DestinationRule resources from controller namespaces.

  7. If you updated your CasC bundle, revert the serviceExposure configuration to the Ingress-based settings.
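For step 7, the reverted bundle restores the Ingress-based serviceExposure block shown earlier in Update CasC-managed environments, for example:

```yaml
# Ingress-based serviceExposure, as used before the migration.
masterprovisioning:
  kubernetes:
    clusterEndpoints:
      - id: "default"
        name: "kubernetes"
        serviceExposure:
          ingress:
            ingressClass: "nginx"
```

The operations center applies the reverted bundle on its next restart, after which reprovisioned controllers receive Ingress resources again.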

Troubleshoot common issues

HTTPRoutes exist but do not attach to the Gateway

Verify that the namespace has the label expected by the Gateway’s allowedRoutes selector. Run kubectl get namespace <ns> --show-labels and compare with the Gateway selector. Refer to Verify namespace labels for details.

Gateway shows Accepted: True but Programmed: False

The gateway controller cannot program the data plane. Check controller logs and Gateway events with kubectl describe gateway <name> -n <ns>.

Operations center is unreachable after Helm upgrade

The Helm upgrade deletes the operations center Ingress and creates an HTTPRoute. Verify DNS records point to the Gateway’s external address. Refer to Update DNS records.

Controllers are not accessible after reprovisioning

Verify DNS records point to the Gateway’s external address, not the old Ingress NGINX load balancer. Refer to Update DNS records. If you recently updated DNS, wait for TTL expiration or flush your local DNS cache.

Sticky sessions fail on HA/HS controllers with Istio

Verify the DestinationRule exists in the controller namespace and that the operations center service account has RBAC permissions for destinationrules in the networking.istio.io API group. Refer to Configure session persistence for HA/HS controllers.

TLS errors when accessing CloudBees CI on modern cloud platforms through the Gateway

Verify the TLS certificate Secret exists in the Gateway namespace and covers your domain. Refer to Verify TLS certificate (Gateway-level TLS termination only).

HTTP requests are not redirected to HTTPS

With Gateway API, HTTP-to-HTTPS redirect is no longer managed by CloudBees CI on modern cloud platforms. Configure a redirect HTTPRoute on the Gateway. Refer to Recommended Gateway-level routing configuration.

Builds were interrupted during the migration

Running builds lose their connection when a controller is deprovisioned, but pipelines can resume after reprovisioning. Drain build queues before deprovision to avoid interruptions. Refer to Drain build queues.

WebSocket agents fail to connect after migration

Verify the Gateway supports WebSocket upgrade headers and that no intermediate proxy strips them. With Ingress NGINX, WebSocket support was enabled through annotations; with Gateway API, it must be supported natively by the gateway implementation.

HA/HS controller shows sign-in loops or CRUMB validation errors

Session persistence is not configured. Verify the DestinationRule (Istio) or GEP-1619 session persistence (Envoy Gateway) is in place. Refer to Configure session persistence for HA/HS controllers.