Kubernetes platform-specific configurations

CloudBees CD/RO can be integrated with multiple third-party Kubernetes platforms and configured to meet your project-specific DevOps needs. For more information on which Kubernetes platforms CloudBees CD/RO supports, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

Because CloudBees CD/RO is so extensible and versatile, it is difficult for CloudBees to anticipate your project-specific configurations on third-party platforms. However, this section provides select integration information you may find helpful when configuring CloudBees CD/RO or its integration into your Kubernetes platform.

OpenShift

CloudBees CD/RO supports multiple versions of RedHat’s OpenShift Kubernetes platform. For more information on which versions are supported, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

This section provides helpful topics that may ease your CloudBees CD/RO integration with OpenShift.

Requirements for Red Hat OpenShift installations

The following CloudBees CD/RO configurations are required when running on Red Hat OpenShift.

Configure an OpenShift service account

Red Hat OpenShift installations require a service account named cbflow associated with your OpenShift project. Ensure the account is created in the same OpenShift project where CloudBees CD/RO is installed. To create the account:

  1. Navigate to User Management > Service Accounts > Create Service Account.

  2. Enter cbflow as the name, and select Create.

Configure Helm chart values

When running CloudBees CD/RO on Red Hat OpenShift, you must configure the following required values in your cloudbees-flow values file:

  1. The CloudBees Analytics server requires OpenShift node tuning. Set the following value to activate it:

    • dois.openshiftNodeTuning=true

  2. CloudBees CD/RO depends on OpenShift routes to expose services. To use routes and disable the built-in ingress controller, set the following values:

    • ingress.route=true

    • nginx-ingress.enabled=false (Kubernetes versions 1.21 and earlier)

    • ingress-nginx.enabled=false (Kubernetes versions 1.22 and later)

  3. CloudBees CD/RO containers are, by default, configured to run in privileged mode, which is not compatible with OpenShift. To disable running CloudBees CD/RO containers in privileged mode, configure the following values:

    • dois.sysctlInitContainer.enabled=false

    • volumePermissions.enabled=false

    • mariadb.securityContext.enabled=false

    • mariadb.volumePermissions.enabled=false
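Taken together, the settings above can be sketched as a single cloudbees-flow values-file fragment. This is a hedged illustration only; it assumes Kubernetes 1.22 or later (so ingress-nginx rather than nginx-ingress), and the key layout simply mirrors the --set paths listed above.

```yaml
# Sketch of the required OpenShift values, assuming Kubernetes 1.22+.
dois:
  openshiftNodeTuning: true    # OpenShift node tuning for the CloudBees Analytics server
  sysctlInitContainer:
    enabled: false             # no privileged init container
ingress:
  route: true                  # expose services via OpenShift routes
ingress-nginx:
  enabled: false               # disable the built-in ingress controller (use nginx-ingress for 1.21 and earlier)
volumePermissions:
  enabled: false
mariadb:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false
```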

Configure ZooKeeper security context

The CloudBees CD/RO ZooKeeper integration requires a UID to be set for the zookeeper.securityContext parameters. To set the ZooKeeper security context when installing CloudBees CD/RO:

  1. Run the following command to retrieve your existing namespace UID:

    UID=$(kubectl get ns $NAMESPACE -o=jsonpath="{.metadata.annotations.openshift\.io\/sa\.scc\.uid-range}" | cut -f1 -d/)
    When running this command, ensure you use a namespace where a service account named cbflow is configured. For more information, refer to Configure an OpenShift service account.
  2. To set the zookeeper.securityContext with the UID, include the following directives with your helm install or helm upgrade command:

    ... \
      --set zookeeper.securityContext.fsGroup=$UID \
      --set zookeeper.securityContext.runAsUser=$UID \
    ...
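The annotation retrieved in step 1 has the form <range start>/<range size>, and the cut in that command keeps only the range start. A minimal sketch with a hypothetical annotation value (the variable is named ZK_UID here because bash reserves UID as a readonly variable; the example value is made up):

```shell
# Hypothetical uid-range annotation value, in the format returned by
# the kubectl command in step 1: <range start>/<range size>.
UID_RANGE="1000680000/10000"

# Keep only the first '/'-separated field, i.e. the range start.
# This is the value passed to zookeeper.securityContext in step 2.
ZK_UID=$(echo "$UID_RANGE" | cut -f1 -d/)
echo "$ZK_UID"
```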

Access CloudBees CD/RO components inside an OpenShift cluster

The following endpoints must be accessible by external clients.

  • flow-repository:8200

  • flow-server:8443

Clients running outside a cluster access these endpoints via gateway agents. However, on platforms like OpenShift, where Ingress is either unsupported or cannot expose non-web TCP ports, you must configure the required endpoints on an external load balancer service using the following parameters.

--set server.externalService.enabled=true \
--set repository.externalService.enabled=true

By default, these parameters are false.

To get the DNS name of the externally exposed load balancer, run the following commands, where <namespace> is the release namespace.

export DEPLOY_NAMESPACE=<namespace>
SERVER_LB_HOSTNAME=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$SERVER_LB_HOSTNAME:8443/"
REPO_LB_HOSTNAME=$(kubectl get service flow-repository-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$REPO_LB_HOSTNAME:8200/"

Amazon Elastic Kubernetes Service (EKS)

CloudBees CD/RO supports multiple versions of Amazon Web Services' EKS platform and subcomponents. For more information on which versions are supported, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

This section provides helpful topics that may ease your CloudBees CD/RO integration with EKS.

Load balancing with AWS Application Load Balancer (ALB)

CloudBees CD/RO v10.3.2 and later support AWS Application Load Balancer (ALB). ALB provides application layer (layer 7) load balancing and several features useful for CloudBees CD/RO deployments on EKS, including:

  • Simplified deployment of microservices and container-based applications

  • Content-based routing

  • Redirects

Prerequisites

ALB support requires access to AWS with administrative privileges as it involves IAM configuration.

  1. Follow the steps from AWS Load Balancer Controller to install the ALB controller in the EKS cluster.

    ALB support requires EKS version 1.19 or later and AWS Load Balancer Controller 2.2.0 or later.

  2. Create a certificate in AWS Certificate Manager for the required domain name. For example, cloudbees-cd.example.com.

  3. Set up External-DNS using the tutorial for AWS.

Configuring values.yaml for ALB support

The following stanza from the values.yaml configuration enables support for ALB.

ingress:
  enabled: true
  host: cloudbees-cd.example.com
  class: alb
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>
platform: eks
nginx-ingress:
  enabled: false
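The <certificate-arn> placeholder is the ARN of the certificate created in AWS Certificate Manager during the prerequisites. As a hedged illustration, an ACM certificate ARN has the shape shown below; the region, account ID, and certificate ID are made up.

```yaml
ingress:
  annotations:
    # Hypothetical ARN: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111122223333:certificate/12345678-1234-1234-1234-123456789012
```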

Configuring external client component access with ALB enabled

The following endpoints are needed for user command-line access to the CD artifact repository and CD server:

  • flow-repository:8200

  • flow-server:8443

  • flow-server:61613

Agents running outside a cluster access these endpoints via the external gateway (bidirectional TCP port 7800). However, ALB does not support exposing non-web TCP ports. You must configure the required endpoints on an external load balancer service (ELB) using the following parameters, which are false by default:

--set server.externalService.enabled=true \
--set repository.externalService.enabled=true

To get the DNS name of the externally exposed load balancer, run the following commands, where <namespace> is the release namespace.

export DEPLOY_NAMESPACE=<namespace>
SERVER_LB_HOSTNAME=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$SERVER_LB_HOSTNAME:8443/"
REPO_LB_HOSTNAME=$(kubectl get service flow-repository-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$REPO_LB_HOSTNAME:8200/"

Google Cloud Platform (GCP)

CloudBees CD/RO supports multiple versions of Google’s GCP platform and subcomponents. For more information on which versions are supported, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

This section provides helpful topics that may ease your CloudBees CD/RO integration with GCP.

Using CloudBees CD/RO with Google Cloud SQL

Your CloudBees CD/RO instances running in the Google Kubernetes Engine (GKE) can access and use database instances from Cloud SQL. To access a Cloud SQL database instance from CloudBees CD/RO, you can use either:

  • The Cloud SQL Auth proxy with a public or private IP, or

  • A direct connection using a private IP address.

It is strongly recommended to use the Cloud SQL Auth proxy to connect to Cloud SQL, even when using a private IP. The Cloud SQL Auth proxy provides strong, IAM-based access control to help maintain your database security. For more information, refer to the Cloud SQL Auth proxy documentation.

Using a direct connection is not recommended and may expose your database to multiple threats. Because of the security issues involved with direct connections, the following information only covers connecting with the Cloud SQL Auth proxy.
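With the proxy in place, the CloudBees CD/RO server reaches the database through a proxy listening inside its own pod, so the database endpoint in the values file is simply a local address. A minimal sketch (the full sidecar configuration appears later in this section):

```yaml
# Minimal sketch: with the Cloud SQL Auth proxy running as a sidecar,
# the CloudBees CD/RO server reaches the database on localhost.
database:
  clusterEndpoint: 127.0.0.1
```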

Creating a Cloud SQL service account for CloudBees CD/RO

Creating a dedicated service account for CloudBees CD/RO is strongly recommended to reduce vulnerabilities. Using a shared service account with broad permissions may grant individual applications much broader access than intended. A dedicated service account for CloudBees CD/RO is more secure and allows you to grant and limit permissions as needed.

The CloudBees CD/RO service account must meet the following criteria:

  • The Cloud SQL Admin API must be enabled for your project.

  • The CloudBees CD/RO service account requires the cloudsql.instances.connect permission. By default, new service accounts are created with the cloudsql.editor role. Instead, you can assign the cloudsql.client role when creating the account; it includes cloudsql.instances.connect and avoids over-assigning editor permissions.

The steps to satisfy these requirements are included in Configuring Cloud SQL instances for CloudBees CD/RO.

Configuring Cloud SQL instances for CloudBees CD/RO

The Cloud SQL instance you connect to must either have a public IPv4 address or be configured to use a private IP. To connect using a private IP, your GKE cluster must be VPC-native and peered with the same VPC network as the Cloud SQL instance. For more information, refer to the Cloud SQL private IP documentation.

Configuring Cloud SQL Auth proxy to connect to CloudBees CD/RO

The following steps describe how to add the Cloud SQL Auth proxy to the CloudBees CD/RO flow-server pod using the sidecar container pattern. Adding the Cloud SQL Auth proxy container to the same pod as the CloudBees CD/RO server enables the server to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.

  1. Set the following values based on your environment:

    export CLUSTER_NAME=<CLUSTER_WHERE_CDRO_IS_INSTALLED>
    export PROJECT_ID=<YOUR_GCP_PROJECT_ID>
    export GSA_NAME=<YOUR_GLOBAL_SERVICE_ACCOUNT_NAME>
  2. To set the default project for gcloud commands, run:

    gcloud config set project $PROJECT_ID
  3. To enable Cloud SQL Admin API for the project, run:

    gcloud services enable sqladmin.googleapis.com
  4. To create a Google Service Account (GSA) for CloudBees CD/RO, run:

    gcloud iam service-accounts create $GSA_NAME --project=$PROJECT_ID
    This account will be used to connect to the database instance using the Cloud SQL Auth proxy.
  5. To grant the Cloud SQL Client IAM role to the service account, run:

    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
      --role=roles/cloudsql.client
  6. Configure GKE to provide the service account to the Cloud SQL Auth proxy using GKE’s Workload Identity feature.

    1. To add the workload identity to a new cluster:

      1. Set the following values based on your environment:

        export CLUSTER_NAME=<CLUSTER_WHERE_CDRO_IS_INSTALLED>
        export KSA_NAME=<YOUR_KUBERNETES_SERVICE_ACCOUNT_NAME>
        export NAMESPACE=<NAMESPACE_WHERE_CDRO_SERVER_IS_INSTALLED>
      2. To create a cluster for your containers, run:

        gcloud container clusters create $CLUSTER_NAME --project=$PROJECT_ID --workload-pool=$PROJECT_ID.svc.id.goog
      3. To create the new namespace, run:

        kubectl create namespace $NAMESPACE
      4. To create a Kubernetes service account (KSA) for the new namespace, run:

        kubectl create serviceaccount $KSA_NAME --namespace $NAMESPACE
      5. To enable the IAM binding between the KSA and GSA, run:

        kubectl annotate serviceaccount $KSA_NAME iam.gke.io/gcp-service-account=$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com --namespace $NAMESPACE
        gcloud iam service-accounts add-iam-policy-binding \
          --role="roles/iam.workloadIdentityUser" \
          --member="serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/$KSA_NAME]" \
          $GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com
    2. To add the workload identity to an existing cluster, refer to Google documentation to update an existing cluster.

  7. Run the Cloud SQL Auth proxy as a sidecar. If you have not configured the Cloud SQL Auth proxy in the CloudBees CD/RO installation, refer to Running the Cloud SQL Auth proxy as a sidecar.

The Cloud SQL Auth proxy is now added to the CloudBees CD/RO flow-server pod using the sidecar container pattern.
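The Workload Identity --member string used in step 6 is easy to get wrong. The following sketch shows how it is assembled from the environment variables set earlier; the values here are hypothetical, so substitute your own project, namespace, and KSA name.

```shell
# Hypothetical values; in the steps above these come from your environment.
PROJECT_ID="my-project"
NAMESPACE="cdro"
KSA_NAME="cdro-ksa"

# Workload Identity member format:
#   serviceAccount:<project>.svc.id.goog[<namespace>/<ksa>]
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
echo "$MEMBER"
```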

Running the Cloud SQL Auth proxy as a sidecar

Running the Cloud SQL Auth proxy using a sidecar pattern (as an additional container sharing a pod with the CloudBees CD/RO flow-server) is recommended over running it as a separate service for several reasons:

  • The Cloud SQL Auth proxy provides encryption on outgoing connections, which may help prevent your SQL traffic from being exposed locally.

    Even though the Cloud SQL Auth proxy provides encryption on outgoing connections, you still need to limit exposure for incoming connections.
  • Each CloudBees CD/RO flow-server replica’s access to the database is independent of the others, which prevents a single point of failure and makes your cluster more resilient.

  • Using the sidecar pattern limits access to the Cloud SQL Auth proxy. This allows you to use IAM permissions per application rather than exposing the database to the entire cluster.

  • Because the Cloud SQL Auth proxy's resource consumption scales linearly with usage, this pattern allows you to more accurately scope and request resources to match your CloudBees CD/RO installation as it scales.

To run Cloud SQL Auth proxy as a sidecar with flow-server, configure the following values in your values.yaml when installing CloudBees CD/RO.

Only the relevant values for configuring the Cloud SQL Auth proxy as a sidecar are shown in the following example.
database:
  # Connect to the database on 127.0.0.1 using Cloud SQL proxy running as sidecar
  clusterEndpoint: 127.0.0.1
rbac:
  # Replace KSA_NAME with the Kubernetes service account
  serviceAccountName: "<KSA_NAME>"
server:
  # Enables the workloads to be scheduled on nodes that use Workload Identity
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"
  additionalContainers:
    - name: cloud-sql-proxy
      image: "gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0-alpine"
      command: ["/bin/sh", "-c"]
      # In `args`:
      # Replace <DB_PORT> with the port the proxy should listen on.
      # Defaults: MySQL: 3306, Postgres: 5432, SQLServer: 1433
      #
      # Replace <INSTANCE_CONNECTION_NAME> with the instance connection name of your Cloud SQL instance.
      # The instance connection name can be found on the Google Cloud console *Cloud SQL Instance details*
      # page or returned by running `gcloud sql instances describe <INSTANCE_ID>`.
      args:
        - |
          /cloud-sql-proxy --private-ip --structured-logs --port=<DB_PORT> <INSTANCE_CONNECTION_NAME> &
          CHILD_PID=$!
          (i=15; while [ $i -gt 0 ]; do if [ $(wget localhost:8000 2>&1 | grep 404 | wc -l) -eq 0 ]; then echo "Wait till Server endpoint started $(( i-- )) more minutes"; sleep 60; else break; fi; done)
          (while true; do if [ $(wget localhost:8000 2>&1 | grep 404 | wc -l) -eq 0 ]; then kill $CHILD_PID; echo "Killed $CHILD_PID because the main container terminated."; else echo "Server answers on 8000 port"; fi; sleep 5; done) &
          wait $CHILD_PID
          if [ $(wget localhost:8000 2>&1 | grep 404 | wc -l) -eq 0 ]; then echo "Job completed. Exiting..."; exit 0; fi
      securityContext:
        runAsNonRoot: true
      resources:
        requests:
          # The proxy's memory use scales linearly with the number of active
          # connections. Fewer open connections will use less memory. Adjust
          # this value based on your application's requirements.
          memory: "1Gi"
          # The proxy's CPU use scales linearly with the amount of IO between
          # the database and the application. Adjust this value based on your
          # application's requirements.
          cpu: "1"