CloudBees Core on Google Kubernetes Engine (GKE) installation guide

Helm is the recommended method for installing and managing CloudBees Core. If your current CloudBees Core installation was not installed using Helm, you can migrate it to Helm management. For Helm installation instructions, see Installing CloudBees Core on Kubernetes using Helm.

Please be aware that as of CloudBees Core version 2.176.3.2, the upgrade process has changed.

This document explains the cluster requirements, points you to the Google Cloud Platform documentation you need to create a cluster, and explains how to install CloudBees Core in your Kubernetes cluster.

The GKE cluster requirements must be satisfied before CloudBees Core can be installed.

Before you install CloudBees Core on GKE, decide which installation method you want to use.

GKE cluster requirements

For details about supported platforms for CloudBees Core on modern cloud platforms, such as supported Kubernetes, Helm, and NFS versions, refer to Supported platforms for CloudBees Core on modern cloud platforms.

The CloudBees Core installer requires:

  • On your local computer or a bastion host:

    • The Kubernetes client (kubectl), version 1.10 or later, installed and configured

    • gcloud (See Installing Google Cloud SDK for instructions)

  • A GKE cluster running Kubernetes 1.10 or later, as long as that version is actively supported by the Kubernetes distribution provider and generally available

    • With nodes that have at least 2 CPUs and 4 GiB of memory (so each node has 1 full CPU / 1 GiB available after running a master with default settings)

    • With network access to container images (public Docker Hub or a private Docker registry)

  • The NGINX Ingress Controller installed in the cluster.

    • Load balancer configured and pointing to the NGINX Ingress Controller

    • A DNS record that points to the NGINX Ingress Controller load balancer

    • TLS certificates (needed when you deploy CloudBees Core)

  • A namespace in the cluster (provided by your admin) with permissions to create Role and RoleBinding objects

  • A default storage class defined in the Kubernetes cluster and ready to use (a quick verification sketch follows this list)
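
The following commands are a minimal verification sketch, not part of the official installer: they confirm the client tools and cluster prerequisites listed above are in place. The namespace name my-namespace is a placeholder for the namespace provided by your admin, and ingress-nginx is the namespace typically used by the NGINX Ingress Controller; adjust both as needed.

Verifying the prerequisites
kubectl version --client
gcloud version
kubectl get storageclass
kubectl get pods --namespace ingress-nginx
kubectl auth can-i create role --namespace my-namespace
kubectl auth can-i create rolebinding --namespace my-namespace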

Kubernetes beta releases are not supported. Use production releases.

Creating your GKE Cluster

To create a Google Kubernetes Engine (GKE) cluster, refer to the official Google documentation Create a GKE cluster.
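
As a starting point, the following gcloud command is a minimal sketch of a cluster that satisfies the node requirements above; the cluster name, zone, machine type, and node count are placeholder values you should adapt to your environment.

Creating a cluster (example values)
gcloud container clusters create cloudbees-core-cluster \
  --zone us-east1-b \
  --machine-type n1-standard-2 \
  --num-nodes 3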

More information on administering a Google Kubernetes cluster is available from the Kubernetes Engine How-to Guides.

More information on Kubernetes concepts is available from the Kubernetes site.

Cluster Admin Permissions

CloudBees Core utilizes Kubernetes RBAC to create roles with limited privileges. The kubeconfig created automatically by the gcloud tool does not have the cluster-admin role assigned. During the CloudBees Core installation process, you must have the cluster-admin role assigned.

To bind a user account to the cluster-admin role use the following command:

Assigning the cluster-admin role
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)

Cluster-admin (full) permission is only needed during installation; services run using the created roles with limited privileges. For more information on GKE RBAC, see GKE Role-Based Access Control.
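
To confirm the binding took effect, you can ask the API server whether your account is now allowed to perform any action; this optional check should print yes for a cluster-admin:

Verifying cluster-admin access
kubectl auth can-i '*' '*' --all-namespaces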

Creating a new SSD persistent storageclass

The built-in GKE storage class uses magnetic disks, which don’t provide sufficient IOPS for CloudBees Core.

To use SSD persistent disks, you must create a new storage class of type pd-ssd; the following steps guide you through creating it and making it the default.

For multi-zone environments, the volumeBindingMode attribute, available since Kubernetes 1.12, must be set to WaitForFirstConsumer. Otherwise, a volume may be provisioned in a zone where the pod that requests it cannot be scheduled. This field is immutable, so if it is not already set, a new storage class must be created.

To do so, run the following command in a terminal.

echo "apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
# Allows volumes to be expanded after their creation.
allowVolumeExpansion: true
# Uncomment the following for multi zone clusters
# volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-ssd" | kubectl create -f -

Then, make it the default instead of the standard storage class.

kubectl annotate storageclass standard storageclass.kubernetes.io/is-default-class-
kubectl annotate storageclass ssd storageclass.kubernetes.io/is-default-class=true

Then you can check the result:

kubectl get storageclass

The expected output looks like this.

NAME                 PROVISIONER            AGE
standard             kubernetes.io/gce-pd   1d
ssd (default)        kubernetes.io/gce-pd   1d
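
As a final smoke test, you could create a throwaway PersistentVolumeClaim and confirm it is served by the new default class; the claim name ssd-test is a placeholder. If you enabled volumeBindingMode: WaitForFirstConsumer, the claim will stay Pending until a pod consumes it, which is expected.

echo "apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi" | kubectl create -f -

kubectl get pvc ssd-test
kubectl delete pvc ssd-test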