Installation


CloudBees CD/RO server installation and upgrade

This option installs the CloudBees CD/RO server, built-in database, web server, and repository server into one cluster. A local bound agent and CloudBees CD/RO tools are also installed.

Make sure you are able to satisfy the Kubernetes cluster and storage requirements before proceeding.

In a production configuration, connect CloudBees CD/RO to a separate database. If CloudBees CD/RO is initially installed with the built-in database, you can reconfigure it to use a separate database at any time. Refer to Database configuration for further details.

Using a separate database requires a CloudBees CD/RO enterprise license. You must configure an external database at the same time as you install your enterprise license to prevent error messages about an unsupported configuration or a license requirement.
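As an illustration, a production values file might point CloudBees CD/RO at an external database with a stanza along the following lines. The key names follow the chart's database section, but the endpoint, database name, and credentials shown here are placeholders; confirm the exact keys against the defaults file for your chart version.

```yaml
# Illustrative sketch only -- confirm key names against
# cloudbees-cd-defaults.yaml for your chart version.
database:
  dbType: mysql                       # assumed value; use your database type
  externalEndpoint: db.example.com    # hypothetical external database host
  dbPort: 3306
  dbName: flow
  dbUser: flow
  dbPassword: <your-password>

mariadb:
  enabled: false                      # disable the built-in database
```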

Installation

Create your custom values file to override Helm chart values for your installation. For your convenience, preconfigured values files are available here to get you started. Use these files as is, or use them as a starting point.

The best practice is to make a local copy of one of the preconfigured files and modify the copy with your settings, as needed. This provides a record of your configuration when it comes time to update your installation.
If all of your Kubernetes nodes are tainted, you need one node without a taint on which to run the flow-server-init-job pod. After the installation is complete, that node can be tainted or deleted.
The following values files are available:

cloudbees-cd-demo.yaml

File for use with non-production installations. It configures a single node CloudBees CD/RO server installation with a non-production license and installs MariaDB as the database.

cloudbees-cd-prod.yaml

File for use with production installations. You must configure your database, storage and CloudBees CD/RO credentials in a local copy of this file before it can be used.

cloudbees-cd-defaults.yaml

File listing all Helm chart values along with their default values. Use as a reference when specifying additional configuration in your local values file.

Add the public CloudBees repository as follows:

helm repo add cloudbees https://public-charts.artifacts.cloudbees.com/repository/public/
helm repo update

Run the following Helm command to install CloudBees CD/RO.

helm install <releaseName> cloudbees/cloudbees-flow \
          -f <valuesFile> --namespace <nameSpace> --timeout 10000s

To upgrade an existing installation:

helm upgrade <releaseName> cloudbees/cloudbees-flow \
          -f <valuesFile> --namespace <nameSpace> --timeout 10000s

where:

releaseName

A user-defined name for this particular installation. If not specified, one is generated for you.

valuesFile

The local YAML-formatted values file with your specific chart override values. Specify something like my-values.yaml. If not specified, default chart values are used, as defined in the cloudbees-example repo.

nameSpace

A user-defined name for the namespace under which CloudBees CD/RO components are installed.
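For example, with concrete (and purely illustrative) names substituted for the placeholders, the install command could look like the following. The snippet only assembles and prints the command so you can inspect it before running it against your cluster; the release name, values file, and namespace are assumptions.

```shell
# Illustrative names; substitute your own.
RELEASE_NAME=cd-prod        # user-defined release name
VALUES_FILE=my-values.yaml  # your local values file
NAMESPACE=cbcd              # namespace for CloudBees CD/RO components

# Print the fully substituted command for review before running it.
echo "helm install $RELEASE_NAME cloudbees/cloudbees-flow -f $VALUES_FILE --namespace $NAMESPACE --timeout 10000s"
```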

CloudBees CD/RO installation takes several minutes. Once all the pods are running, you can track the progress of the CloudBees CD/RO server initialization via the flow-server-init-job log in the Kubernetes dashboard.

CloudBees Analytics server certificates

CloudBees CD/RO uses transport layer security (TLS) encryption and certificates to protect data between nodes on the CloudBees Analytics server. Required certificates are generated automatically for single node deployments. You must specify the common node certificate infrastructure for multi-node deployments by defining pre-generated certificates before installation.

CloudBees recommends that you specify certificates by generating a key store containing a root certificate authority (CA) certificate, an intermediate CA signing certificate, and its private key. With this method, all additional required certificates are automatically generated.

Use the cbflow-tools image and the following command to create the key store:

$ docker run --rm cloudbees/cbflow-tools:<docker tag> generate-certificates

This command returns the key store in base64 format. You must set this string as the value of the dois.certificates.bundle property in your local values file.
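Because a truncated copy-paste of the bundle is a common failure mode, it can help to confirm the string decodes as base64 before adding it to your values file. The BUNDLE value below is a stand-in for the real output of the generate-certificates command.

```shell
# Stand-in value for illustration; in practice, capture the output of the
# cbflow-tools generate-certificates command shown above.
BUNDLE="Y2VydGlmaWNhdGUtZGF0YQ=="

# Decode to verify the string is intact base64; a garbled or truncated
# bundle fails here rather than at server startup.
DECODED=$(echo "$BUNDLE" | base64 -d) && echo "bundle decodes cleanly"
```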

You can also create a Kubernetes secret with the corresponding data. Use the following command to create a secret named cbcd-analytics-certificates:

$ kubectl create secret generic cbcd-analytics-certificates --from-literal=CBF_DOIS_CRT_BUNDLE=$(docker run --rm cloudbees/cbflow-tools:<docker tag> generate-certificates)

You can specify this secret in the value of the dois.certificates.existingSecret property in your local values file, or as a parameter for the installation command.

--set dois.certificates.existingSecret=cbcd-analytics-certificates

If the recommended configuration is not suitable for your environment, CloudBees CD/RO also supports the following alternative certificate sets:

  • A root CA certificate and its private key

  • A root CA certificate, an intermediate CA signing certificate, and its private key

  • A root CA certificate, an intermediate CA signing certificate, and node and administrative certificates with their private keys

Refer to the comments in the values file for the properties corresponding to each certificate set.

CloudBees CD/RO agent installation and upgrade

This option installs only the CloudBees CD/RO agent, which allows you to configure multiple agents separately to fit your particular needs. You must have already installed the CloudBees CD/RO server in the cluster. If you need to install the CloudBees CD/RO server, see CloudBees CD/RO server installation and upgrade before proceeding with the agent installation below.

Installation

Create your custom values file to override Helm chart values to use for installing the agents. For your convenience, preconfigured values files are available here to get you started. Use these files as is, or use them as a starting point.

Best practice is to make a local copy of one of the preconfigured files and modify that local copy with your settings, as needed. That way, you have a record of your configuration when it comes time to update your installation.

When multiple replicas are configured in the values file, via the replicas value, pay particular attention to the way in which the agent resource name template is defined so that all replicas are registered properly. See Agent resource name templates for complete information.
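For instance, a multi-replica agent stanza might look like the sketch below. The resourceName key and the hostname-based template shown here are assumptions for illustration; check Agent resource name templates and the agent defaults file for the variables your chart version actually supports.

```yaml
# Illustrative sketch; confirm against cloudbees-cd-agent-defaults.yaml.
replicas: 3
# Each replica must register under a unique resource name; a template
# such as this (hypothetical) one derives the name from the pod hostname.
resourceName: "my-agent-{{ hostname }}"
```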
The following values files are available:

cloudbees-cd-agent-example.yaml

Example file to use for an agent installation. You must configure your agent settings in a local copy of this file before it can be used.

cloudbees-cd-agent-defaults.yaml

File listing all Helm chart values along with their default values. Use as a reference when specifying additional configuration in your local values file. Refer to cloudbees-flow-agent chart configuration values for details about values in this file.

Add the public CloudBees repository as follows:

helm repo add cloudbees https://public-charts.artifacts.cloudbees.com/repository/public/
helm repo update

Run the following Helm command to install or upgrade.

Before initiating an installation operation, you must create a namespace, as follows.

kubectl create namespace <nameSpace>

Then proceed with the installation.

helm install <agentReleaseName> cloudbees/cloudbees-flow-agent \
      -f <agentValuesFile> --namespace <nameSpace> --timeout 10000s \
      [--set-file customScript=<customScript>]

helm upgrade <agentReleaseName> cloudbees/cloudbees-flow-agent \
      -f <agentValuesFile> --namespace <nameSpace> --timeout 10000s \
      [--set-file customScript=<customScript>]

where:

agentReleaseName

A user-defined name for this particular installation.

agentValuesFile

The local YAML-formatted values file with your specific chart override values. Specify something like my-values.yaml. If not specified, default chart values are used, as defined in the cloudbees-example repo.

nameSpace

A user-defined name for the namespace under which the CloudBees agent is installed. Specify the same name used for CloudBees CD/RO server namespace.

customScript

The local file path for a user-defined script to be installed in the agent container. See Configuring third-party tools.

Configuring third-party tools

CloudBees CD/RO agents typically do work using third-party tools installed on, or accessible to, the agent. In a traditional deployment, the tools are installed locally on the agent machine. For agents running in a Kubernetes cluster, third-party tools can be deployed in an agent container so they are immediately accessible by the agent. Configuration can be accomplished in one of two ways: making a custom image or dynamically configuring the tools.

Making a custom container

For installations where the set of third-party tools is well-defined and static, create your own container using the CloudBees CD/RO agent image as the base image to which the third-party tools are added. Below is an example Dockerfile configuring the custom container.

Replace 10.6.0.155529_3.2.18_20220627 in the FROM directive below with the current tag found in the CloudBees CD/RO agent Helm chart default values file, cloudbees-cd-agent-defaults.yaml, found here.
FROM cloudbees/cbflow-agent:10.6.0.155529_3.2.18_20220627

USER root
RUN yum update -y && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
    chmod +x ./kubectl && \
    mv ./kubectl /usr/local/bin
USER cbflow

Creating a custom script

For installations needing a more flexible method, where the set of tools may vary, use the --set-file option to the helm command to load the installation script into the container using the customScript key. The example custom script below updates packages and installs kubectl in the pod using the yum package manager.

yum update -y && \
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x ./kubectl && \
mv ./kubectl /usr/local/bin

By default, agents do not run as root. To run custom scripts that require root access, such as package installers, it is necessary to change the Helm security context using the following settings with the helm command.

--set securityContext.enabled=true --set securityContext.runAsUser="0"
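Equivalently, the same security context settings can live in your local agent values file rather than on the command line:

```yaml
securityContext:
  enabled: true
  runAsUser: 0   # run as root so package installers in the custom script work
```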

Custom settings required for Red Hat OpenShift installations

Red Hat OpenShift installations require a service account named cbflow that is associated with the OpenShift project.

Service Account:

  • Ensure the account is created in the project.

  • The account can be configured by navigating to User Management > Service Accounts > Create Service Account.

  • Enter cbflow as the name and select Create.

Required settings:

  • dois.openshiftNodeTuning=true

  • ingress.route=true

  • nginx-ingress.enabled=false (Kubernetes versions 1.21 and earlier)

  • ingress-nginx.enabled=false (Kubernetes versions 1.22 and later)

Settings to disable running CloudBees CD/RO containers in privileged mode:

  • dois.sysctlInitContainer.enabled=false

  • volumePermissions.enabled=false

  • mariadb.securityContext.enabled=false

  • mariadb.volumePermissions.enabled=false

Setting to disable Ingress controller:

  • Since CloudBees CD/RO depends on OpenShift routes to expose services, you can disable the creation of an Ingress controller by setting nginx-ingress.enabled=false for Kubernetes versions 1.21 and earlier or ingress-nginx.enabled=false for Kubernetes versions 1.22 and later.
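Collected into values-file form, the OpenShift settings above (for Kubernetes 1.22 and later) look like this sketch; as always, verify the key paths against the defaults file for your chart version:

```yaml
dois:
  openshiftNodeTuning: true
  sysctlInitContainer:
    enabled: false

ingress:
  route: true

ingress-nginx:
  enabled: false

volumePermissions:
  enabled: false

mariadb:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false
```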

Load balancing with AWS Application Load Balancer (ALB)

Beginning with CloudBees CD/RO v10.3.2, support has been added for AWS Application Load Balancer (ALB). ALB can be used for application layer (layer 7) load balancing and includes several features for CloudBees CD/RO deployments on EKS, including:

  • Simplified deployment of microservices and container-based applications

  • Content-based routing

  • Redirects

Prerequisites

ALB support requires access to AWS with administrative privileges as it involves IAM configuration.

  1. Follow the steps from AWS Load Balancer Controller to install the ALB controller in the EKS cluster.

    ALB support requires EKS version 1.19 or later and AWS Load Balancer Controller 2.2.0 or later.

  2. Create a certificate in AWS Certificate Manager for the required domain name. For example, cloudbees-cd.example.com.

  3. Set up External-DNS using the tutorial for AWS.

Configuring values.yaml for ALB support

The following stanza from the values.yaml configuration enables support for ALB.

ingress:
  enabled: true
  host: cloudbees-cd.example.com
  class: alb
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>

platform: eks

nginx-ingress:
  enabled: false

Configuring external client component access with ALB enabled

The following endpoints are needed for user command-line access to the CD artifact repository and CD server:

  • flow-repository:8200

  • flow-server:8443

  • flow-server:61613

Agents running outside the cluster access these endpoints via the external gateway (bidirectional TCP port 7800). However, ALB does not support exposing non-web TCP ports. You must configure the required endpoints on an external load balancer (ELB) service using the following parameters, which are set to false by default:

--set server.externalService.enabled=true
--set repository.externalService.enabled=true
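The same two flags can be set in your values file instead of on the command line:

```yaml
server:
  externalService:
    enabled: true      # exposes flow-server ports 8443 and 61613 via an ELB
repository:
  externalService:
    enabled: true      # exposes flow-repository port 8200 via an ELB
```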

To get the DNS of the external exposed load balancer, run the following commands, where <namespace> is the release namespace.

export DEPLOY_NAMESPACE=<namespace>

SERVER_LB_HOSTNAME=$(kubectl get service flow-server-external -n $DEPLOY_NAMESPACE  \
                   -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$SERVER_LB_HOSTNAME:8443/"


REPO_LB_HOSTNAME=$(kubectl get service flow-repository-external -n $DEPLOY_NAMESPACE  \
                 -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo "Available at: https://$REPO_LB_HOSTNAME:8200/"

Resolving an ingress class name conflict

If you have an ingress class name conflict during your upgrade, update the default ingress class name for the ingress-nginx controller with the additional values below.

ingress:
  enabled: true
  host: yourhost.example.com
  class: <ingress-class-name>

ingress-nginx:
  enabled: true
  controller:
    ingressClassResource:
      name: <ingress-class-name>
nginx-ingress:
  enabled: false

After you update the class name, patch the existing ingress by running the command below.

kubectl patch ingress/flow-ingress -p '{"spec": {"ingressClassName":"<ingress-class-name>" }}' -n <namespace>

If you have a DNS/SSL issue, restart your ingress-nginx pod to mount the local certificates by running the command below.

kubectl get deployments -n <namespace> | grep ingress-nginx
XXXXX-ingress-nginx-controller   1/1     1            1           56m
kubectl rollout restart deployment XXXXX-ingress-nginx-controller -n <namespace>

Your upgrade is complete.

Load balancing with ingress

This CloudBees CD/RO chart provides support for load balancing with the NGINX Ingress controller, which allows simple host- and URL-based HTTP routing. Note that an Ingress controller typically does not eliminate the need for an external load balancer; it simply adds a layer of routing and control behind the load balancer. With Ingress configured, all service endpoints, such as web, server, and repository, are exposed from the same domain name and load balancer endpoint.

As a best practice, configure the Ingress controller so that all CloudBees CD/RO services can be exposed through a single load balancer. By default, Ingress is enabled in the CloudBees CD/RO chart. Following is a summary of the settings:

To run CloudBees CD/RO with Kubernetes 1.22 and later, you must use the ingress-nginx controller with the following required settings:

  • ingress-nginx.enabled=true

  • ingress.class=nginx

  • nginx-ingress.enabled=false
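In values-file form, the required settings above for Kubernetes 1.22 and later are:

```yaml
ingress:
  class: nginx

ingress-nginx:
  enabled: true

nginx-ingress:
  enabled: false
```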

Kubernetes versions 1.21 and earlier

nginx-ingress.controller.ingressClass

Default: flow-ingress

nginx-ingress.controller.publishService.enabled

Default: true

nginx-ingress.controller.scope.enabled

Default: true

nginx-ingress.enabled

Default: true

nginx-ingress.tcp.61613

CloudBees CD/RO server

Default: {{ .Release.Namespace }}/flow-server:61613

nginx-ingress.tcp.8200

CloudBees CD/RO repository

Default: {{ .Release.Namespace }}/flow-repository:8200

nginx-ingress.tcp.8443

CloudBees CD/RO web server

Default: {{ .Release.Namespace }}/flow-server:8443

nginx-ingress.tcp.9200

CloudBees Analytics Elasticsearch database

Default: {{ .Release.Namespace }}/flow-devopsinsight:9200

nginx-ingress.tcp.9500

CloudBees Analytics server

Default: {{ .Release.Namespace }}/flow-devopsinsight:9500

Kubernetes versions 1.22 and later

ingress-nginx.controller.ingressClass

Default: flow-ingress

ingress-nginx.controller.publishService.enabled

Default: true

ingress-nginx.controller.scope.enabled

Default: true

ingress-nginx.enabled

Default: true

ingress-nginx.tcp.61613

CloudBees CD/RO server

Default: {{ .Release.Namespace }}/flow-server:61613

ingress-nginx.tcp.8200

CloudBees CD/RO repository

Default: {{ .Release.Namespace }}/flow-repository:8200

ingress-nginx.tcp.8443

CloudBees CD/RO web server

Default: {{ .Release.Namespace }}/flow-server:8443

ingress-nginx.tcp.9200

CloudBees Analytics Elasticsearch database

Default: {{ .Release.Namespace }}/flow-devopsinsight:9200

ingress-nginx.tcp.9500

CloudBees Analytics server

Default: {{ .Release.Namespace }}/flow-devopsinsight:9500

Horizontal pod autoscaler

A HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.

HPA responds to an increased load by deploying more Pods. This is different from vertical scaling, which for Kubernetes means assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.

If the load decreases, and the number of Pods is above the configured minimum, the HPA instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.

For more information, refer to Horizontal Pod Autoscaling.

CloudBees CD/RO includes horizontal pod autoscaling support for the following deployment components:

  • CloudBees CD/RO server

  • Web server

  • Repository server

CloudBees CD/RO server

The CloudBees CD/RO server supports HPA only when clusteredMode is true.

To enable HPA for the CloudBees CD/RO server, add the following parameter values:

server:
  autoscaling:
    enabled: true # Set to true to enable HPA for the CloudBees CD/RO server
    minReplicas: 1 # Min Number of Replicas
    maxReplicas: 3 # Max Number of Replicas to scale
    targetCPUUtilizationPercentage: 80 # CPU Threshold to scale up
    targetMemoryUtilizationPercentage: 80 # Memory Threshold to scale up
    templates: []
    # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: repository_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m
server.autoscaling.minReplicas must match server.replicas.

Web server

The web server supports scaling in both cluster and non-cluster modes.

To enable HPA for the web server, add the following parameter values:

web:
  autoscaling:
    enabled: true # Set to true to enable HPA for the web server
    minReplicas: 1 # Min Number of Replicas
    maxReplicas: 3 # Max Number of Replicas to scale
    targetCPUUtilizationPercentage: 80 # CPU Threshold to scale up
    targetMemoryUtilizationPercentage: 80 # Memory Threshold to scale up
    templates: []
    # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: repository_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m
web.autoscaling.minReplicas must match web.replicas.

Repository server

The repository server supports scaling in both cluster and non-cluster modes.

To enable HPA for the repository server, add the following parameter values:

repository:
  autoscaling:
    enabled: true # Set to true to enable HPA for the repository server
    minReplicas: 1 # Min Number of Replicas
    maxReplicas: 3 # Max Number of Replicas to scale
    targetCPUUtilizationPercentage: 80 # CPU Threshold to scale up
    targetMemoryUtilizationPercentage: 80 # Memory Threshold to scale up
    templates: []
    # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: repository_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m
repository.autoscaling.minReplicas must match repository.replicas.