Kubernetes platform-specific configurations

CloudBees CD/RO can be integrated with multiple third-party Kubernetes platforms and configured to meet your project-specific DevOps needs. For more information on which Kubernetes platforms CloudBees CD/RO supports, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

Because CloudBees CD/RO is highly extensible and versatile, CloudBees cannot anticipate every project-specific configuration on third-party platforms. However, this section provides select integration information you may find helpful when configuring CloudBees CD/RO or integrating it into your Kubernetes platform.

OpenShift

CloudBees CD/RO supports multiple versions of Red Hat’s OpenShift Kubernetes platform. For more information on which versions are supported, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

This section provides helpful topics that may ease your CloudBees CD/RO integration with OpenShift.

Requirements for Red Hat OpenShift installations

The following CloudBees CD/RO configurations are required when running on Red Hat OpenShift.

Configure an OpenShift service account

Red Hat OpenShift installations require a service account named cbflow associated with your OpenShift project. Ensure the account is created in the same project as CloudBees CD/RO. To create the account:

  1. Navigate to User Management → Service Accounts → Create Service Account.

  2. Enter cbflow as the name, and select Create.
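
Alternatively, a minimal command-line equivalent using the oc client (assuming you are logged in and <project> is your OpenShift project):

# Create the cbflow service account in your OpenShift project.
oc create serviceaccount cbflow -n <project>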

Configure Helm chart values

When running CloudBees CD/RO on Red Hat OpenShift, you must configure the following required values in your cloudbees-flow values file (a combined example follows this list):

  1. The CloudBees Analytics server requires OpenShift node tuning. Set the following value to activate it:

    • analytics.openshiftNodeTuning=true

  2. CloudBees CD/RO depends on OpenShift routes to expose services. To use routes and disable the built-in ingress controller, set the following values:

    • ingress.route=true

    • nginx-ingress.enabled=false (Kubernetes versions 1.21 and earlier)

    • ingress-nginx.enabled=false (Kubernetes versions 1.22 and later)

  3. By default, CloudBees CD/RO containers are configured to run in privileged mode, which is not compatible with OpenShift. To disable running CloudBees CD/RO containers in privileged mode, configure the following values:

    • analytics.sysctlInitContainer.enabled=false

    • volumePermissions.enabled=false

    • mariadb.securityContext.enabled=false

    • mariadb.volumePermissions.enabled=false

  4. By default, the CloudBees CD/RO bound agent (cbflow-agent) is configured with volume permissions, boundAgent.volumePermissions.enabled=true. This is not compatible with OpenShift. This default setting returns the following error during OpenShift installation or upgrade:

    Please make boundAgent.volumePermissions.enabled=false \
    to install/upgrade on OpenShift platform

    1. To disable volume permissions, open your cloudbees-flow values file and search for boundAgent:.

    2. Find volumePermissions.enabled=true and update it to volumePermissions.enabled=false.

      If the boundAgent configuration section has been removed entirely from your cloudbees-flow values file, add the following lines to disable volume permissions:

      boundAgent:
        volumePermissions:
          enabled: false
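
Taken together, these OpenShift-specific settings might appear in your cloudbees-flow values file as the following sketch (the key layout is inferred from the flag names above; verify it against your chart version):

analytics:
  openshiftNodeTuning: true
  sysctlInitContainer:
    enabled: false
ingress:
  route: true
# For Kubernetes 1.21 and earlier, use nginx-ingress instead.
ingress-nginx:
  enabled: false
volumePermissions:
  enabled: false
mariadb:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false
boundAgent:
  volumePermissions:
    enabled: false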

Configure ZooKeeper security context

The CloudBees CD/RO ZooKeeper integration requires a UID for the zookeeper.securityContext parameters. To set the ZooKeeper security context when installing CloudBees CD/RO:

  1. Run the following command to retrieve your existing namespace UID:

    UID=$(kubectl get ns $NAMESPACE -o=jsonpath="{.metadata.annotations.openshift\.io\/sa\.scc\.uid-range}" | cut -f1 -d/)
    When running this command, ensure you use a namespace where a service account named cbflow is configured. For more information, refer to Configure an OpenShift service account.
  2. To set the zookeeper.securityContext with the UID, include the following directives with your helm install or helm upgrade command:

    ... \
      --set zookeeper.securityContext.fsGroup=$UID \
      --set zookeeper.securityContext.runAsUser=$UID \
    ...
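
    For context, a sketch of how these directives might fit into a complete command (the release name and values file are hypothetical):

    helm upgrade <release-name> cloudbees/cloudbees-flow \
      -f <values-file>.yaml \
      --set zookeeper.securityContext.fsGroup=$UID \
      --set zookeeper.securityContext.runAsUser=$UID \
      -n $NAMESPACE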

Access CloudBees CD/RO components inside an OpenShift cluster

The following endpoints must be accessible by external clients.

  • flow-repository:8200

  • flow-server:8443

Clients running outside a cluster access these endpoints via gateway agents. However, on platforms like OpenShift, where Ingress is not supported or does not support exposing non-web TCP ports, you must configure the required endpoints on an external load balancer service using the following parameters.

--set server.externalService.enabled=true \
--set repository.externalService.enabled=true

By default, these parameters are false.

To get the DNS name of the externally exposed load balancer, run the following commands, where <namespace> is the release namespace.

export DEPLOY_NAMESPACE=<namespace>
SERVER_LB_HOSTNAME=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$SERVER_LB_HOSTNAME:8443/"
REPO_LB_HOSTNAME=$(kubectl get service flow-repository-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$REPO_LB_HOSTNAME:8200/"
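
Depending on your platform, the load balancer may publish an IP address instead of a hostname; if the commands above return empty values, query the ip field instead. For example:

kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].ip}"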

Amazon Elastic Kubernetes Service (EKS)

CloudBees CD/RO supports multiple versions of Amazon Web Services' EKS platform and subcomponents. For more information on which versions are supported, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

This section provides helpful topics that may ease your CloudBees CD/RO integration with EKS.

Load balancing with AWS Application Load Balancer (ALB)

Beginning with CloudBees CD/RO v10.3.2, AWS Application Load Balancer (ALB) is supported. ALB provides application layer (layer 7) load balancing and offers several features for CloudBees CD/RO deployments on EKS, including:

  • Simplified deployment of microservices and container-based applications

  • Content-based routing

  • Redirects

Prerequisites

ALB support requires access to AWS with administrative privileges, as it involves IAM configuration.

  1. Follow the steps from AWS Load Balancer Controller to install the ALB controller in the EKS cluster.

    ALB support requires EKS version 1.19 or later and AWS Load Balancer Controller 2.2.0 or later.

  2. Create a certificate in AWS Certificate Manager for the required domain name. For example, cloudbees-cd.example.com.

  3. Set up External-DNS using the tutorial for AWS.
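
Before continuing, you may want to confirm the controller is running; a minimal check, assuming the default kube-system install location:

kubectl get deployment aws-load-balancer-controller -n kube-system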

Configuring values.yaml for ALB support

The following stanza from the values.yaml configuration enables support for ALB.

ingress:
  enabled: true
  host: cloudbees-cd.example.com
  class: alb
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>
platform: eks
nginx-ingress:
  enabled: false

Configuring external client component access with ALB enabled

The following endpoints are needed for user command-line access to the CD artifact repository and CD server:

  • flow-repository:8200

  • flow-server:8443

  • flow-server:61613

Agents running outside a cluster access these endpoints via the external gateway (bidirectional TCP port 7800). However, ALB does not support exposing non-web TCP ports. You must configure the required endpoints on an external load balancer service (ELB) using the following parameters, which are false by default:

--set server.externalService.enabled=true \
--set repository.externalService.enabled=true

To get the DNS name of the externally exposed load balancer, run the following commands, where <namespace> is the release namespace.

export DEPLOY_NAMESPACE=<namespace>
SERVER_LB_HOSTNAME=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$SERVER_LB_HOSTNAME:8443/"
REPO_LB_HOSTNAME=$(kubectl get service flow-repository-external -n $DEPLOY_NAMESPACE \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$REPO_LB_HOSTNAME:8200/"

Google Cloud Platform (GCP)

CloudBees CD/RO supports multiple versions of Google’s GCP platform and subcomponents. For more information on which versions are supported, refer to Supported platforms for CloudBees CD/RO on Kubernetes.

Examples of installing CloudBees CD/RO within GKE demo and production clusters can be found in the CloudBees examples GitHub repository.

This section provides helpful topics that may ease your CloudBees CD/RO integration with GCP.

Using CloudBees CD/RO with Google Cloud SQL

Your CloudBees CD/RO instances running in Google Kubernetes Engine (GKE) can access and use database instances from Cloud SQL. To access a Cloud SQL database instance from CloudBees CD/RO, you can use either:

  • The Cloud SQL connectors with a public or private IP.

  • A direct connection using a private IP address.

It is strongly recommended to use the Cloud SQL connectors to connect to Cloud SQL, even when using a private IP. The connectors use IAM-based authorization to help maintain your database security. For more information, refer to Cloud SQL connectors documentation.

Using a direct connection is not recommended and may expose your database to multiple threats. Because of the security issues involved with direct connections, the following information only covers connecting with the Cloud SQL connectors.

Creating a Cloud SQL service account for CloudBees CD/RO

Creating a dedicated service account for CloudBees CD/RO is strongly recommended to reduce vulnerabilities. A shared service account with broad permissions may grant individual applications much wider access than intended. By contrast, a dedicated service account for CloudBees CD/RO is more secure and lets you grant and limit permissions as needed.

The CloudBees CD/RO service account must meet the following criteria:

  • The Cloud SQL Admin API must be enabled for your project.

  • The CloudBees CD/RO service account requires the cloudsql.instances.connect permission. By default, new service accounts are created with the cloudsql.editor role. However, you can assign the cloudsql.client role instead when creating the account. The cloudsql.client role includes cloudsql.instances.connect and avoids granting overly broad editor permissions.

Steps to meet these requirements are included in Connect to CloudBees CD/RO using Cloud SQL connectors.

Configuring Cloud SQL instances for CloudBees CD/RO

The Cloud SQL instance you connect to must either have a public IPv4 address or be configured to use private IP. If you connect using a private IP, your GKE cluster must be VPC-native and peered with the same VPC network as the Cloud SQL instance. For more information, refer to Cloud SQL private IP documentation.

Connect to CloudBees CD/RO using Cloud SQL connectors

The following steps describe how to add the Cloud SQL connectors to the CloudBees CD/RO flow-server.

  1. Set the following values based on your environment:

    export CLUSTER_NAME=<CLUSTER_WHERE_CDRO_IS_INSTALLED>
    export PROJECT_ID=<YOUR_GCP_PROJECT_ID>
    export GSA_NAME=<YOUR_GLOBAL_SERVICE_ACCOUNT_NAME>
  2. To set the default project for gcloud commands, run:

    gcloud config set project $PROJECT_ID
  3. To enable Cloud SQL Admin API for the project, run:

    gcloud services enable sqladmin.googleapis.com
  4. To create a Google Service Account (GSA) for CloudBees CD/RO, run:

    gcloud iam service-accounts create $GSA_NAME --project=$PROJECT_ID
    This account will be used to connect to the database instance using the Cloud SQL connectors.
  5. To grant the Cloud SQL Client IAM role to the service account, run:

    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
      --role=roles/cloudsql.client
  6. Configure GKE to provide the service account to the Cloud SQL connectors using GKE’s Workload Identity feature.

    1. To add the workload identity to a new cluster:

      1. Set the following values based on your environment:

        export CLUSTER_NAME=<CLUSTER_WHERE_CDRO_IS_INSTALLED>
        export KSA_NAME=<YOUR_KUBERNETES_SERVICE_ACCOUNT_NAME>
        export NAMESPACE=<NAMESPACE_WHERE_CDRO_SERVER_IS_INSTALLED>
      2. To create a cluster for your containers, run:

        gcloud container clusters create $CLUSTER_NAME \
          --project=$PROJECT_ID \
          --workload-pool=$PROJECT_ID.svc.id.goog
      3. To create the new namespace, run:

        kubectl create namespace $NAMESPACE
      4. To create a Kubernetes service account (KSA) for the new namespace, run:

        kubectl create serviceaccount $KSA_NAME --namespace $NAMESPACE
      5. To enable the IAM binding between the KSA and GSA, run:

        kubectl annotate serviceaccount $KSA_NAME \
          --namespace $NAMESPACE \
          iam.gke.io/gcp-service-account=$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com

        gcloud iam service-accounts add-iam-policy-binding \
          --role="roles/iam.workloadIdentityUser" \
          --member="serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/$KSA_NAME]" \
          $GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com
    2. To add the workload identity to an existing cluster, refer to Google documentation to update an existing cluster.

  7. Configure your database authentication using either built-in database authentication or IAM database authentication, as described in the following sections.

Configure built-in database authentication for GCP

Using built-in database authentication allows you to directly authenticate database users based on usernames and passwords, and to manage database access for specific users. To configure built-in database authentication for GCP:

  1. Create a database user by following the GCP steps specific to your database type.

  2. Configure the following values in your myvalues.yaml when installing CloudBees CD/RO:

    Ensure you replace the value placeholders, shown as <value>, in the following YAML.
    database:
      dbType: <postgresql|mysql|sqlserver>
      existingSecret: <secret-with-db-user-and-pwd>
      dbName: <DB_NAME>
      customDatabaseUrl: jdbc:<DB_TYPE>:///<DB_NAME>?cloudSqlInstance=<YOUR_GCP_PROJECT_ID>:<REGION_ID>:<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.<DB_TYPE>.SocketFactory&ipTypes=PRIVATE
      # IMPORTANT: If dbType: postgresql, use:
      # socketFactory=com.google.cloud.sql.postgres.SocketFactory&ipTypes=PRIVATE
    rbac:
      # Replace KSA_NAME with the Kubernetes service account
      serviceAccountName: "<KSA_NAME>"
    server:
      # Enables the workloads to be scheduled on nodes that use Workload Identity
      nodeSelector:
        iam.gke.io/gke-metadata-server-enabled: "true"
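
The existingSecret value references a Kubernetes secret containing the database credentials. A hypothetical way to create it is shown below; the exact key names the chart expects (shown here as DB_USER and DB_PASSWORD) depend on your chart version, so verify them against the chart's values documentation:

kubectl create secret generic <secret-with-db-user-and-pwd> \
  --namespace <cbflow-namespace> \
  --from-literal=DB_USER=<database-user> \
  --from-literal=DB_PASSWORD=<database-password>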

To access your GCP database instance, install or upgrade CloudBees CD/RO with these values. For more information, refer to the CloudBees CD/RO installation and upgrade documentation.

Configure IAM database authentication for GCP

Using IAM database authentication allows you to authenticate database users based on user roles and role principals, and to manage database access for groups of users.

GCP supports IAM database authentication only for PostgreSQL and MySQL, not SQL Server. Additionally, only the principal types user account and service account are supported.

To configure IAM database authentication for GCP:

  1. Assign the following variables:

    INSTANCE_ID="<your-gcp-instance-id>"
    GSA_NAME="<your-gsa-name>"
    PROJECT_ID="<your-project-id>"
  2. Configure the database instance for IAM database authentication:

    gcloud sql instances patch $INSTANCE_ID \
      --database-flags=cloudsql.iam_authentication=on
  3. Create a new database user for the IAM service account:

    gcloud sql users create $GSA_NAME@$PROJECT_ID.iam \
      --instance=$INSTANCE_ID \
      --type=cloud_iam_service_account
  4. Assign the Cloud SQL instance user role to the service account:

    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
      --role=roles/cloudsql.instanceUser
  5. Configure the following values in your myvalues.yaml when installing CloudBees CD/RO:

    Ensure you replace the value placeholders shown as <value> in the following YAML.
    database:
      dbType: <postgresql|mysql>
      dbUser: <GSA-NAME>@<PROJECT-ID>.iam
      dbPassword: <DB-PASSWORD>
      dbName: <DB-NAME>
      customDatabaseUrl: jdbc:<DB-TYPE>:///<DB_NAME>?cloudSqlInstance=<YOUR_GCP_PROJECT_ID>:<REGION_ID>:<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.<DB-TYPE>.SocketFactory&ipTypes=PRIVATE&enableIamAuth=true&sslmode=disable
    rbac:
      # Replace KSA_NAME with the Kubernetes service account
      serviceAccountName: "<KSA_NAME>"
    server:
      # Enables the workloads to be scheduled on nodes that use Workload Identity
      nodeSelector:
        iam.gke.io/gke-metadata-server-enabled: "true"
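
To confirm the IAM authentication flag took effect, you can inspect the instance's database flags; a quick check using gcloud:

gcloud sql instances describe $INSTANCE_ID \
  --format="value(settings.databaseFlags)"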

To access your GCP database instance, install or upgrade CloudBees CD/RO with these values. For more information, refer to the CloudBees CD/RO installation and upgrade documentation.

Configure CloudBees Analytics backups using GKE Workload Identity

Using GKE Workload Identity allows your Kubernetes service account to impersonate your GCP service account. This enables you to pass snapshot backups of CloudBees Analytics to GCP buckets without including GCP service account credentials in CloudBees Analytics configuration files.

Prerequisites

The following prerequisites must be met to enable GKE Workload Identity:

  • You must be using CloudBees CD/RO v2023.08.0 or later, which includes Elasticsearch 7.17.10.

  • Your CloudBees CD/RO cluster must be running on GKE.

  • Several commands in the following steps use the gcloud CLI. If you do not have it installed, refer to Google’s Install the gcloud CLI documentation, or perform these steps using the Google Cloud Console.

  • Several commands in the following steps use the gsutil CLI. If you do not have it installed, refer to Google’s Install gsutil documentation, or perform these steps using the Google Cloud Console.

Configure CloudBees Analytics backups

To enable CloudBees Analytics to send snapshot backups to your GCP buckets:

  1. Update the cluster where you are running CloudBees Analytics on GKE to use Workload Identity, as described in the GKE Use Workload Identity documentation.

  2. In your existing CloudBees CD/RO namespace, create a Kubernetes service account:

    # Create a Kubernetes service account in the
    # K8s namespace running CloudBees Analytics.
    # Replace <K8s-service-account> with a K8s service account name.
    # Replace <cbflow-namespace> with the namespace where you have cbflow installed.
    kubectl create serviceaccount <K8s-service-account> -n <cbflow-namespace>
  3. Create a GCP bucket using either the Google Cloud Console or the following gsutil command (see the note on bucket permissions after this procedure):

    # Create a GCP bucket.
    # Replace <CloudBees-Analytics-bucket-name> with
    # a name for the CloudBees Analytics bucket.
    gsutil mb gs://<CloudBees-Analytics-bucket-name>
  4. Create a Google Cloud service account using the Google Cloud Console or the gcloud CLI, and then allow your Kubernetes service account to impersonate it with the following gcloud command:

    # Allow the K8s service account to impersonate the GCP service account.
    # Replace <GCP_SERVICE_ACCOUNT> with your GCP account name.
    # Replace <PROJECT_ID> with your GCP project ID.
    # Replace <K8s-service-account> with a K8s service account name.
    # Replace <cbflow-namespace> with the namespace where you have cbflow installed.
    gcloud iam service-accounts add-iam-policy-binding <GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:<PROJECT_ID>.svc.id.goog[<cbflow-namespace>/<K8s-service-account>]"
    <GCP_SERVICE_ACCOUNT> and <PROJECT_ID> must be the same as those used in Update existing cluster to use Workload Identity.
  5. Add the iam.gke.io/gcp-service-account annotation to your Kubernetes service account:

    # Replace <GCP_SERVICE_ACCOUNT> with your GCP account name.
    # Replace <PROJECT_ID> with your GCP project ID.
    # Replace <K8s-service-account> with a K8s service account name.
    # Replace <cbflow-namespace> with the namespace where you have cbflow installed.
    kubectl annotate serviceaccount <K8s-service-account> \
      --namespace <cbflow-namespace> \
      iam.gke.io/gcp-service-account=<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com
  6. In your cb-flow myvalues.yaml, add the following GCP information:

    analytics:
      nodeSelector:
        iam.gke.io/gke-metadata-server-enabled: "true"
      backup:
        enabled: true
        externalRepo:
          enabled: true
          serviceAccountsIdentity: true
          type: gcs
          # Replace <CloudBees-Analytics-bucket-name>
          # with the bucket name you created.
          bucketName: <CloudBees-Analytics-bucket-name>
          # Replace <your-GCP-region> with GCP region of your cluster.
          region: <your-GCP-region>
  7. Update the CloudBees CD/RO installation to apply the changes from your myvalues.yaml. For example:

    # Replace <cbflow-release-name> with the
    # release name where you have cbflow installed.
    # Replace <cbflow-namespace> with the namespace
    # where you have cbflow installed.
    # Replace <myvalues.yaml> with the path to
    # the cb-flow values file where you added GCP information.
    helm upgrade <cbflow-release-name> cloudbees/cloudbees-flow \
      -n <cbflow-namespace> \
      -f <myvalues.yaml> \
      --timeout 10000s
    This is only an example helm upgrade command. Your installation may require many more directives to install correctly.
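
As noted in step 3, the GCP service account must also be able to write snapshot objects to the backup bucket. If it does not already have access through project-level roles, a hypothetical bucket-level grant using gsutil:

# Grant the GCP service account read/write access to objects in the bucket.
gsutil iam ch \
  serviceAccount:<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://<CloudBees-Analytics-bucket-name>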