Kubernetes on AWS EKS


This document is designed to help you ensure that your AWS EKS Kubernetes cluster is optimally configured for running CloudBees CI in a secure and efficient way.

These are not requirements, and they do not replace the official Kubernetes and cloud provider documentation. They are recommendations based on experience running CloudBees CI on Kubernetes. Use them as guidelines for your deployment.

For more information on Kubernetes, refer to the official Kubernetes Documentation.

Terms and definitions

Jenkins

Jenkins is an open-source automation server. With Jenkins, organizations can accelerate the software development process through automation. Jenkins manages and controls software delivery processes throughout the entire lifecycle, including build, documentation, test, packaging, staging, deployment, static code analysis, and much more. You can find more information about Jenkins and CloudBees contributions on the CloudBees site.

CloudBees CI

With CloudBees CI, organizations can embrace rather than replace their existing DevOps toolchains while scaling Jenkins to deliver enterprise-wide secure and compliant software.

operations center

An operations console for Jenkins that allows you to manage multiple Jenkins controllers.

Architectural overview

This section provides a high-level architectural overview of CloudBees CI, designed to help you understand how CloudBees CI works, how it integrates with Kubernetes, its network architecture and how managed controllers and build agents are provisioned.

CloudBees CI is essentially a set of Docker containers that can be deployed to a cluster of machines managed by the Kubernetes container management system. Customers are expected to provision and configure their Kubernetes cluster before installing CloudBees CI.

CloudBees CI includes the operations center that provisions and manages CloudBees managed controllers and team controllers. CloudBees CI also enables managed controllers and team controllers to perform dynamic provisioning of build agents via Kubernetes.

Machines and roles

CloudBees CI is designed to run in a Kubernetes cluster. For the purposes of this section, you need to know that a Kubernetes cluster is a set of machines (virtual or bare-metal) that run Kubernetes. Some of these machines provide the Kubernetes control plane as Kubernetes Masters. They control the containers that run on the other type of machine, known as Kubernetes Nodes. The CloudBees CI containers run on the Kubernetes Nodes.

The Kubernetes Masters provide an HTTP-based API that can be used to manage the cluster, configure it, deploy containers, etc. A command-line client called kubectl can be used to interact with Kubernetes via this API. You should refer to the Kubernetes documentation for more information on Kubernetes.
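For example, two basic kubectl commands you can use to confirm connectivity to the cluster (assuming kubectl has already been configured with credentials for your EKS cluster):

$ kubectl cluster-info
$ kubectl get nodes

The first prints the API server endpoint; the second lists the Kubernetes Nodes and their status.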

CloudBees CI Docker containers

The Docker containers in CloudBees CI are:

  • cloudbees-cloud-core-oc: operations center

  • cloudbees-core-mm: CloudBees CI managed controller

The Docker containers used as Jenkins build agents are specified on a per Pipeline basis and are not included in CloudBees CI. Refer to the example Pipeline in the Agent provisioning section for more details.

The cloudbees-cloud-core-oc, cloudbees-core-mm, and build agent container images can be pulled from the public Docker Hub repository or from a private Docker Registry that you deploy and manage. If you wish to use a private registry, you will have to configure your Kubernetes cluster to do that.
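As a minimal sketch of the private-registry case, you can store the registry credentials in a Kubernetes Secret and attach it to the ServiceAccount used to pull images; the registry host, credentials, and secret name below are placeholders:

# Create a Docker registry credential secret (placeholder values)
$ kubectl create secret docker-registry my-registry-creds \
    --docker-server=registry.example.com \
    --docker-username=myuser \
    --docker-password=mypassword

# Attach the secret to the jenkins ServiceAccount so agent pods can pull images
$ kubectl patch serviceaccount jenkins \
    -p '{"imagePullSecrets": [{"name": "my-registry-creds"}]}'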

CloudBees CI Kubernetes resources

Kubernetes terminology

The following terms are useful to understand. This is not a comprehensive list. For full details on these and other terms, refer to the Kubernetes documentation.

Pod

A set of containers that share storage volumes and a network interface.

ServiceAccount

Defines an account for accessing the Kubernetes API.

Role

Defines a set of permission rules for access to the Kubernetes APIs.

RoleBinding

Binds a ServiceAccount to a Role.

ConfigMap

A set of configuration files and values that can be made available to Pods in the cluster.

StatefulSet

Manages the deployment and scaling of a set of Pods.

Service

Provides access to a set of Pods at one or more TCP ports.

Ingress

Uses hostname and path of an incoming request to map the request to a specific Service.

CloudBees CI Kubernetes resources

CloudBees CI defines the following Kubernetes resources:

Resource type | Resource value | Definition
ServiceAccount | jenkins | Account used to manage Jenkins build agents.
ServiceAccount | cjoc | Account used by operations center to manage managed controllers.
Role | master-management | Defines permissions needed by operations center to manage Jenkins controllers.
RoleBinding | cjoc | Binds the operations center ServiceAccount to the master-management Role.
RoleBinding | jenkins | Binds the jenkins ServiceAccount to the pods-all Role.
ConfigMap | cjoc-config | Defines the configuration used to start the cjoc Java process within the cjoc container.
ConfigMap | cjoc-configure-jenkins-groovy | Defines location.groovy, which is executed on startup by cjoc to define its own hostname.
ConfigMap | jenkins-agent | Defines the Bash script that starts the Jenkins agent within a build agent container.
StatefulSet | cjoc | Defines a Pod for the cjoc container, allocates a persistent volume for its JENKINS_HOME directory, and ensures that one such Pod is always running.
Service | cjoc | Defines a Service front end for the cjoc Pod, exposing TCP port 80 for HTTP and port 50000 for JNLP.
Ingress | default | Maps requests for the CloudBees CI hostname and the path /cjoc to the cjoc Pod.
Ingress | cjoc | Maps requests for the CloudBees CI hostname to the path /cjoc.

Setting pod resource limits

You can specify default limits in Kubernetes namespaces. These default limits will constrain the amount of CPU or memory a given Pod can use unless the defaults are explicitly overridden by the Pod’s configuration.

For example, the following configuration gives containers in the master-0 namespace a default memory request of 256 MiB and a default memory limit of 512 MiB:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: master-0
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

Overriding default pod resource limits

To override the default configuration on a pod-by-pod basis, configure the controller that needs more resources:

  1. Log in to operations center.

  2. Navigate to Manage Jenkins > Kubernetes Pod Templates.

  3. Select Add a pod template.

    1. Locate the template you want to edit.

    2. If the template you want to edit does not exist, create it.

  4. On the Containers tab, select Add Containers and select container.

  5. Select Advanced and modify the resource constraints for the template.

Visualizing CloudBees CI architecture

The diagram below illustrates the CloudBees CI architecture on Kubernetes. The diagram shows three Kubernetes Master Nodes, which are the three dotted-line and overlapping rectangles on the left. The diagram also shows two Kubernetes Worker Nodes, which are the two dotted-line large rectangles in the center and right.

Here’s the key for the colors used in the diagram:

  • Green: processes which are part of Kubernetes

  • Pink: Kubernetes resources created by installing and running CloudBees CI

  • Yellow: Kubernetes resources required by CloudBees CI

Architecture diagram
Figure 1. CloudBees CI architecture

Kubernetes Master

Running on each Kubernetes Master node, there are the Kubernetes processes that manage the cluster: the API Server, the Controller Manager and the Scheduler. In the bottom left of the diagram are resources that are created as part of the CloudBees CI installation, but that are not really tied to any one node in the system.

Kubernetes Nodes

On the Kubernetes Nodes and shown in green above is the kubelet process, which is part of Kubernetes and is responsible for communicating with the Kubernetes API Server and starting and stopping Kubernetes Pods on the Node.

On one Node, you see the operations center Pod, which includes a Controller Provisioning plugin that is responsible for starting new Master Pods. On the other Node, you see a Master Pod, which includes the Jenkins Kubernetes Plugin and uses that plugin to manage Jenkins Build Agents.

Each operations center and Master Pod has a Kubernetes Persistent Volume Claim where it stores its Jenkins Home directory. Each Persistent Volume Claim is backed by some form of storage service, such as an EBS volume on AWS or an NFS drive in an OpenShift environment. When a Master Pod is moved to a new Node, its storage volume must be detached from its old Node and then attached to the Pod’s new node.

Pod scheduling best practice

Prevent operations center and managed controller pods from being moved during scale-down operations by adding the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: "false":

apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

Controller provisioning

One of the benefits of CloudBees CI is the easy provisioning of new Jenkins managed controllers from the operations center web interface. This feature is provided by the CloudBees CI Controller Provisioning Plugin for Jenkins. When you provision a new controller, you must specify the amount of memory and CPU to be allocated to the new controller, and the provisioning plugin will call upon the Kubernetes API to create a controller.

The diagram below shows what happens when a new controller is launched via operations center. First, CJOC’s Controller Provisioning Kubernetes Plugin calls Kubernetes to provision a new StatefulSet to run the managed controller Pod.

Controller Provisioning Diagram
Figure 2. Controller Provisioning

Agent provisioning

Agents are created and destroyed in CloudBees CI by the Jenkins Kubernetes Plugin. A Jenkins Pipeline can specify the build agent using the standard Pipeline syntax. For example, below is a CloudBees CI Pipeline that builds and tests a Java project from a GitHub repository using a Maven and Java Docker image:

Pipeline example
podTemplate(label: 'kubernetes', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5.2-jdk-8-alpine', ttyEnabled: true, command: 'cat')
]) {
    stage('Preparation') {
        node('kubernetes') {
            container('maven') {
                git 'https://github.com/jglick/simple-maven-project-with-tests.git'
                sh 'mvn -Dmaven.test.failure.ignore clean package'
                junit '**/target/surefire-reports/TEST-*.xml'
                archive 'target/*.jar'
            }
        }
    }
}

In the above example, the build agent container image is maven:3.5.2-jdk-8-alpine. It will be pulled from the Docker Registry configured for the Kubernetes cluster.

The diagram below shows how build agent provisioning works. First, when the Pipeline runs, the Kubernetes Plugin on the managed controller calls Kubernetes to provision a new pod to run the build agent container. Second, Kubernetes launches the build agent pod to execute the Pipeline.

Agent Provisioning Diagram
Figure 3. Agent Provisioning

CloudBees CI required ports

CloudBees CI requires the following open ports:

  • 80 for http access to the web interface of operations center and managed controllers

  • 443 for https access to the web interface of operations center and managed controllers

  • 50000 for Java Network Launch Protocol (JNLP) access to operations center and managed controllers

Refer to the Kubernetes documentation for its port requirements.
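To check that the operations center Service exposes the HTTP and JNLP ports, you can inspect it with kubectl (assuming the default Service name, cjoc, from the resource table above):

# The PORT(S) column should include 80/TCP and 50000/TCP
$ kubectl get service cjoc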

Network encryption

Network communication between Kubernetes clients (such as kubectl), Kubernetes masters, and nodes is encrypted via the TLS protocol. The Kubernetes Managing TLS in a Cluster document explains how certificates are obtained and managed by a cluster.

Communication between application containers running on a Kubernetes cluster, such as operations center and managed controllers, can be encrypted as well but this requires the deployment of a Network Overlay technology.

End-to-end Web Browser to CloudBees CI communications can be TLS encrypted by configuring the Kubernetes Ingress that provides access to CloudBees CI to be the termination point for SSL. Network Overlay and SSL termination configuration is covered in a separate section.

Persistence

operations center and managed controllers store their data in a file-system directory known as the Jenkins Home. operations center has its own Jenkins Home, and each controller also has one.

CloudBees CI uses a Kubernetes feature known as Persistent Volume Claims to dynamically provision persistent storage for operations center, each managed controller and build agents.

Cluster sizing and scaling

This section provides general recommendations about sizing and scaling a Kubernetes cluster for CloudBees CI, starting with some general notes about minimum requirements and ending with a table of more concrete sizing guidelines recommended by CloudBees.

General notes

When sizing and scaling a cluster you should consider the operational characteristics of Jenkins. The relevant ones are:

  • Jenkins controllers are memory and disk IOPS bound, with some CPU requirements as well. Low IOPS results in longer startup times and worse general performance. Low memory results in slow response times.

  • Build agent requirements depend on the kind of tasks being executed on them.

Pods are defined by their CPU and memory requirements, and a Pod cannot be split across multiple hosts.

It is recommended to use hosts large enough to run several pods at once (rule of thumb: 3-5 pods per host) to maximize their utilization.

Example: You are running builds that each require 2 GB of memory. You need to configure pods with 2 GB each to support such builds. The rule of thumb suggests hosts with 6-10 GB of memory (3 × 2 GB to 5 × 2 GB).

Depending on your cloud provider, it may be possible to enable auto-scaling in Kubernetes to match capacity to actual requirements and reduce operational costs.

If you don’t have auto-scaling in your environment, we recommend planning extra capacity to tolerate hardware failures.

Storage

Each managed controller is provisioned on a separate Persistent Volume (PV). It is recommended to use a storage class with the highest IOPS available (see the sketch at the end of this section).

Host storage is not used by managed controllers, but depending on the instance type you may have restrictions on the kind of block storage you can use (for example, on Azure, you need an instance type whose name ends with s).

Disk space on the hosts is needed to store Docker images, containers, and volumes. Build workspaces reside on host storage, so there must be enough free disk space available on the nodes.
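As an illustrative sketch, a storage class backed by provisioned-IOPS EBS volumes can be defined with the io1 type and the iopsPerGB parameter of the kubernetes.io/aws-ebs provisioner; the class name and values below are placeholders to tune for your workload:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: high-iops
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1       # provisioned-IOPS SSD volume type
  iopsPerGB: "50" # IOPS provisioned per GiB of requested storage
  fsType: ext4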

CPU

CloudBees CI uses the notion of CPU defined by Kubernetes.

By default, a managed controller requires 1 CPU. Each build agent also requires CPU, so the total CPU requirement is determined by:

  • (mostly static) The number of managed controllers multiplied by the number of CPUs each of them requires.

  • (dynamic) The number of concurrent build agents used by the cluster multiplied by the CPU requirement of each pod template. A minimum of 1 CPU is recommended per pod template, but you can allocate more CPUs if the task requires parallel processing.

Most build tasks are CPU-bound (compilation, test execution), so when defining pod templates it is important not to underestimate the number of CPUs to allocate if you want good performance.

Memory

By default, a managed controller requires 3 GB of RAM.

To determine the total memory requirement, take into account:

  • (mostly static) The number of managed controllers multiplied by the amount of RAM each of them requires.

  • (dynamic) The number of concurrent build agents used by the cluster multiplied by the memory requirement of each pod template.

Memory also impacts performance. Not giving a managed controller enough memory causes additional garbage collection and reduced performance.
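To keep the dynamic portion of the CPU and memory budget predictable, you can set explicit requests and limits on each container template. A minimal sketch using the Jenkins Kubernetes plugin's resource parameters (the label, image, and values are illustrative):

podTemplate(label: 'sized-agent', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5.2-jdk-8-alpine',
        ttyEnabled: true, command: 'cat',
        resourceRequestCpu: '1',      // guaranteed CPU used for scheduling
        resourceLimitCpu: '2',        // hard CPU cap
        resourceRequestMemory: '1Gi', // guaranteed memory used for scheduling
        resourceLimitMemory: '2Gi')   // hard memory cap
]) {
    node('sized-agent') {
        container('maven') {
            sh 'mvn -version'
        }
    }
}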

Controller Sizing Guidelines

Below are some more concrete sizing guidelines compiled by CloudBees Support Engineers:

Table 1. Controller sizing guidelines

Average Weekly Users: 20

Besides the team itself, other non-team collaborators often need access to the team’s Jenkins to download artifacts or otherwise collaborate with the team. This includes API clients.

Serving the Jenkins user interface impacts IO and CPU consumption and also increases memory usage due to the caching of build results.

CPU Cores: 4

A Jenkins instance of this size should have at least 4 CPU cores available.

Maximum Concurrent Builds: 50

Healthy agile teams push changes multiple times per day and may have a large test suite including unit, integration, and automated system tests.

We generally observe that Jenkins easily handles up to 50 simultaneous builds, with some Jenkins instances regularly running many multiples of this number. However, poorly written or complicated pipeline code can significantly affect the performance and scalability of Jenkins, since the pipeline script is compiled and executed on the controller.

To increase the scalability and throughput of your Jenkins controller, we recommend that Pipeline scripts and libraries be as short and simple as possible. This is the number one mistake teams make. If build logic can be done in a Bash script, Makefile, or other project artifact, Jenkins will be more scalable and reliable. Changes to such artifacts are also easier to test than changes to the Pipeline script.

Maximum Number of Pipelines (Multi-branch projects): 75

Well-designed systems are often composed of many individual components. The microservices architecture accelerates this trend, as does the maintenance of legacy modules.

Each pipeline can have multiple branches, each with its own build history. If your team has a high number of pipeline jobs, you should consider splitting your Jenkins further.

Recommended Java Heap Size: 4 GB

We regularly see Jenkins instances of this size performing well with 4 gigabytes of heap. This means setting -Xmx4g as recommended in option B of this Knowledge Base article: Java Heap settings Best Practice.

If you observe that your Jenkins instance requires more than 8 gigabytes of heap, your Jenkins likely needs to be split further. Such high usage could be due to buggy pipelines or non-verified plugins your teams may be using.

Team Size: 10

Most agile resources warn against going above 10 team members. Keeping the team size at 10 or below facilitates the sharing of knowledge about Jenkins and pipeline best practices.

AWS Auto-scaling groups

With a cluster set up on AWS (including EKS), it is possible to define one or several auto-scaling groups. This can be useful for assigning some pods to specific nodes based on their specifications.

There is a known issue where the AWS Auto Scaling group can move nodes to another Availability Zone, which can conflict with the Kubernetes Cluster Autoscaler and result in unexpected pod terminations.

There are two solutions:

  1. Suspend the AZRebalance process on the Auto Scaling groups to keep nodes from changing Availability Zones (see the example after this list).

  2. Use a single Availability Zone for the Auto Scaling groups; however, this reduces fault tolerance.
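For the first option, a sketch of suspending the AZRebalance process with the AWS CLI (the Auto Scaling group name is a placeholder):

$ aws autoscaling suspend-processes \
    --auto-scaling-group-name my-eks-node-asg \
    --scaling-processes AZRebalance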

Targeting specific nodes / segregating pods

When defining pod templates using the Jenkins Kubernetes plugin, it is possible to assign pods to nodes with particular labels.

For example, the pipeline code below creates a pod template restricted to the m4.2xlarge instance type.

def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
], nodeSelector: 'beta.kubernetes.io/instance-type=m4.2xlarge') {
    node(label) {
        // some block
    }
}

If you are configuring a Kubernetes Pod Template using the Jenkins UI, you can set this option in the Node Selector field (select the Advanced button at the end of the pod template to reveal this option).

Assigning pods to particular nodes is very useful if you wish to use particular instance types for certain types of workloads.

Refer to Assigning Pods to Nodes to understand this feature in more detail.

Ingress TLS Termination

Ingress TLS Termination should be used to ensure that network communication to the CloudBees CI web interfaces is encrypted from end to end.

If you would like to ensure that web browser to CloudBees CI communications are encrypted end to end, you need to change the Kubernetes Ingress used by CloudBees CI to use your TLS certificates, making it the termination point for TLS.

This section provides an overview of the changes you’ll need to make, but the definitive guide to setting this up is in the Kubernetes Ingress TLS documentation.

Store your TLS Certs in a Kubernetes Secret

To make your TLS certificates available to Kubernetes, you must create a Kubernetes Secret using the Kubernetes command-line tool kubectl. For example, if your certificates are in /etc/mycerts, you would issue this command to create a secret named my-certs:

kubectl create secret tls my-certs \
  --cert=/etc/mycerts/domain.crt --key=/etc/mycerts/privkey.pem

For more information and options, refer to the definitive guide to secrets in the Kubernetes Secrets documentation.

Change the two CloudBees CI Ingresses to be the TLS termination point

Next, you must configure the CloudBees CI Ingress to use your TLS certificates. Do this by editing the Ingress resource named "cjoc" in the cloudbees-core.yml configuration file: uncomment the lines around tls:, set the correct hostname, and set secretName to the secret that holds your TLS certificates. For example, if your CloudBees CI hostname is cje.example.org and your TLS certificates are in the secret named my-certs, the updated resource should look like this:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cjoc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # "413 Request Entity Too Large" uploading plugins, increase client_max_body_size
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  tls:
  - hosts:
    - cje.example.org
    secretName: my-certs
  rules:
  - host: cje.example.org
    http:
      paths:
      - path: /cjoc
        backend:
          serviceName: cjoc
          servicePort: 80

And make the same change to the Ingress named "default".

Next, deploy CloudBees CI or update your existing CloudBees CI deployment to use your new Ingress definition. Either way, run this command:

kubectl apply -f cloudbees-core.yml

If CloudBees CI has not been deployed, this command deploys it. If CloudBees CI has already been deployed, it updates the existing configuration.

Domain name change

  • Stop all managed controllers/team controllers from your operations center dashboard. You can do this either automatically with a cluster operation or manually using Managed controller/Team controller > Manage.

  • Add the new domain name by modifying the hostname in ingress/cjoc and cm/cjoc-configure-jenkins-groovy. There are two ways to do this:

    • By changing those hostname values in the cloudbees-core.yml file.

    • By editing the operations center Ingress resource and modifying the domain name:

      $ kubectl edit ingress/cjoc

      Then modify the operations center configuration map to change the cjoc URL:

      $ kubectl edit cm/cjoc-configure-jenkins-groovy

  • Delete the operations center pod and wait until it is terminated.

    $ kubectl delete pod/cjoc

  • Verify that Operations Center > Manage Jenkins > Configure System > Jenkins Location > Jenkins URL has been properly updated. If it has not, change it to the new domain name and select Save.

  • Start all managed controllers/team controllers from the operations center dashboard. You can do this either automatically with a cluster operation or manually using Managed controller/Team controller > Manage.

The new domain name must appear in all of these resources:

$ kubectl get statefulset/<master> -o=jsonpath='{.spec.template.spec.containers[?(@.name=="jenkins")].env}'
$ kubectl get cm/cjoc-configure-jenkins-groovy -o json
$ kubectl get ingress -o wide

The domain name must be the same as the one used in the browser; otherwise, a default backend - 404 error is returned.

Configuring persistent storage

For persistence of operations center and managed controller data, CloudBees CI must be able to dynamically provision persistent storage. When deployed, the system provisions storage for the operations center’s JENKINS_HOME directory, and whenever a new managed controller is provisioned, operations center provisions storage for that controller’s JENKINS_HOME.

On Kubernetes, dynamic provisioning of storage is accomplished by creating a Persistent Volume Claim, which uses a Storage Class to coordinate with a storage provisioner to provision that storage and make it available to CloudBees CI.

If applicable, refer to the next section to set up a storage class for your environment.

A detailed explanation of Kubernetes storage concepts is beyond the scope of this document. For additional background information, refer to the Kubernetes Dynamic Provisioning and Storage Classes blog post and the Persistent Volumes and Storage Classes sections of the Kubernetes documentation.
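For illustration only, a dynamically provisioned claim looks roughly like the sketch below; CloudBees CI creates these claims automatically, so you do not normally write them by hand (the claim name, storage class, and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-example
spec:
  accessModes:
    - ReadWriteOnce        # an EBS volume can be mounted by one node at a time
  storageClassName: gp2    # the storage class that provisions the volume
  resources:
    requests:
      storage: 20Gi        # requested size of the JENKINS_HOME volume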

Storage requirements

Because pipelines typically read and write many files during execution, CloudBees CI requires high-speed storage. When running CloudBees CI on EKS, CloudBees recommends using Solid-State Disk (SSD) storage.

Although other disk types and AWS Elastic File System (EFS) work and are supported, the same level of performance cannot be guaranteed.

By default, CloudBees CI uses whatever class is configured to be the default storage class. There are two ways you can provide an SSD-based storage class for CloudBees CI:

  • Create a new SSD-based storage class and make it the default. This is the easiest because you don’t have to change the CloudBees CI configuration.

  • Create a new SSD-based storage class and then, before you deploy, change the CloudBees CI configuration file to use the new storage class that you created.

Storage class considerations for multiple Availability Zones

For multi-zone environments, the volumeBindingMode attribute, which has been available since version 1.12 of Kubernetes, must be set to WaitForFirstConsumer. Otherwise, volumes may be provisioned in a zone where the pod that requests them cannot be scheduled. This field is immutable, so if it is not already set, a new storage class must be created.
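As a sketch, a multi-zone-safe storage class with delayed volume binding might look like this (the class name is illustrative):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-delayed-binding
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
# Delay provisioning until a pod using the claim is scheduled,
# so the volume is created in that pod's Availability Zone
volumeBindingMode: WaitForFirstConsumer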

An alternative is to use EFS, which can be accessed across the Availability Zones within a single region.

Check the storage class configuration

After you create your Kubernetes cluster, list the storage classes. If the default storage class is not gp2, you are not using SSD storage and need to create an SSD-based storage class.

$ kubectl get storageclass
NAME                PROVISIONER             AGE
default (default)   kubernetes.io/aws-ebs   14d

Create a new SSD-based storage class

To create a new SSD-based storage class you must create a YAML file specifying the class and then run a series of kubectl commands to create the class and make it the default. See Kubernetes AWS storage class documentation for more information on all supported parameters.

Create a gp2-storage.yaml file with the following content for the gp2 type with encryption enabled:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
# Uncomment the following for multi zone clusters
# volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2
  encrypted: "true"

Create the storage class:

$ kubectl create -f gp2-storage.yaml

Making the SSD-based storage class the default

If you want your cluster and CloudBees CI to use your new storage class as the default, then you will need to take a couple of additional steps:

$ kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl patch storageclass default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

List the storage classes and you should see your new class is the default:

$ kubectl get sc
NAME            PROVISIONER             AGE
gp2 (default)   kubernetes.io/aws-ebs   1d
default         kubernetes.io/aws-ebs   14d

Enabling Storage Encryption

Storage encryption should be used to ensure that all CloudBees CI data is encrypted at rest. To set this up, you must configure storage encryption in your Kubernetes cluster before you install CloudBees CI.

This is done by configuring Kubernetes to use a default Kubernetes Storage Class that implements encryption. Refer to the Kubernetes documentation for Storage Classes and your cloud provider’s documentation for more information about the available Storage Classes and how to configure them.

Configuring AWS EBS Encryption

To enable AWS EBS encryption, you must create a new Storage Class that uses the kubernetes.io/aws-ebs provisioner, enable encryption in that Storage Class, and then set it as the default Storage Class for your cluster.

The instructions below explain one way to do this. Refer to the Kubernetes documentation for the complete details of the AWS Storage Class.

Examine the default storage class

First, examine the existing Storage Class configuration of your cluster.

$ kubectl get storageclass
NAME            PROVISIONER             AGE
default         kubernetes.io/aws-ebs   14d
gp2 (default)   kubernetes.io/aws-ebs   1d

Look at the existing storage class to make sure it does not already use encryption, and to verify the name of the is-default-class annotation. The official documentation says the name should be storageclass.kubernetes.io/is-default-class, but as you can see below, the name used in this particular cluster is storageclass.beta.kubernetes.io/is-default-class:


$ kubectl get storageclass gp2 -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  creationTimestamp: 2018-02-13T21:37:05Z
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: gp2
  resourceVersion: "1823083"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2
  uid: 0959b194-1106-11e8-b6ad-0ea2187dbbe6
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
# Uncomment the following for multi zone clusters
# volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Create a new encrypted Storage Class

Create a new storage class in a YAML file with contents like the example below. Pick a new name; encrypted-gp2 is used below. Note the new parameter encrypted: "true".

If you want to specify the keys used to encrypt the EBS volumes created by Kubernetes for CloudBees CI, make sure to also specify the kmsKeyId, which, according to the documentation, is "the full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but encrypted is true, a key is generated by AWS. See AWS docs for valid ARN value."

Here is an example Storage Class definition that specifies encryption:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: encrypted-gp2
parameters:
  type: gp2
  encrypted: "true"
provisioner: kubernetes.io/aws-ebs
# Uncomment the following for multi zone clusters
# volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Save your Storage Class to a file named, for example, sc-new.yml.

Next, use kubectl to create that storage class.

$ kubectl create -f sc-new.yml
storageclass "encrypted-gp2" created

Look at the existing storage classes again and you should see the new one:

$ kubectl get storageclass
NAME            PROVISIONER             AGE
default         kubernetes.io/aws-ebs   14d
encrypted-gp2   kubernetes.io/aws-ebs   13s
gp2 (default)   kubernetes.io/aws-ebs   14d

Set your new Storage Class as the default

Refer to the Kubernetes documentation for changing the default storage class. In summary, you need to mark the existing default as non-default, then mark your new storage class as the default. The steps are below.

Mark the existing default storage class as not default, being sure to use the annotation name noted above:

kubectl patch storageclass gp2 \
  -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'

Mark the new encrypted storage class as the default:

kubectl patch storageclass encrypted-gp2 \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now, verify that the new encrypted storage class is the default:

$ kubectl get storageclass
NAME                      PROVISIONER             AGE
default                   kubernetes.io/aws-ebs   14d
encrypted-gp2 (default)   kubernetes.io/aws-ebs   50m
gp2                       kubernetes.io/aws-ebs   14d

Now you can proceed to deploy CloudBees CI. With the encrypted storage class in place, all EBS volumes created by CloudBees CI will be encrypted.

Integrating Single Sign-on

Once your CloudBees CI cluster is up and running, you can integrate it with a SAML-based single sign-on (SSO) system and configure role-based access control (RBAC). This is done by installing the Jenkins SAML plugin, configuring it to communicate with your IDP, and configuring your IDP to communicate with CloudBees CI.

Prerequisites for this task

You will need the following items before you set up SAML-based SSO and RBAC:

  • Jenkins SAML Plugin must be installed in operations center

  • A SAML-based Identity Provider (IDP)

  • IDP metadata (an XML file provided by your IDP administrator).

  • The SAML attribute names used by the IDP for these fields (ask your IDP administrator):

    • Username

    • Email

    • Group

When you make changes to the security configuration, you may lock yourself out of the system. If this happens, you can recover by following the instructions in the CloudBees Knowledge Base article How do I login into Jenkins after I’ve logged myself out.

Steps to Perform

Install the SAML plugin on operations center:

  • Log in to operations center and select Manage Jenkins > Manage Plugins > Available.

  • Enter 'SAML' in the search box.

  • Select the checkbox to choose the SAML plugin.

  • Select the Download now and install after restart button.

  • Select the checkbox labeled Restart Jenkins when installation is complete and no jobs are running. (You do not need to install the plugin on managed controllers, just on operations center.)

Enable and configure SAML authentication

Log in to operations center and select Manage Jenkins > Configure Global Security.

Select the Enable security checkbox and confirm there is a SAML 2.0 option in the Security Realm setting. If it is not there, the Jenkins SAML plugin is not installed and you need to install it.

Read and carefully follow the Jenkins SAML Plugin instructions.

Enter the IDP metadata (XML data) and specify the attribute names that your IDP uses for username, email, and group membership. When you are ready, select Save to store the new security settings.

Export Service Provider Metadata to your IDP

After you save your security settings, operations center reports your Service Provider metadata (XML data). Copy this data and give it to your IDP administrator, who will add it to the IDP configuration.

You can find the Service Provider metadata by following a link on the Configure Global Security page at the end of the SAML section. The link looks like this:

Service Provider Metadata which may be required to configure your Identity
Provider (based on last saved settings).

Login to operations center

Once your IDP administrator confirms that your Service Provider metadata has been added to the IDP, attempt to log in via the Login link in operations center.

Setup RBAC

Refer to the "Role-based matrix authorization strategy" help in Manage Jenkins  Configure Global Security to enable Role Based Access Controls (RBAC).