Accelerator includes an integration to support Kubernetes on Ubuntu Linux platforms. This section describes the Kubernetes configuration that CloudBees supports for running agents on a Kubernetes cluster as well as running Cluster Manager and eMake either in the Kubernetes cluster or outside of it.
CloudBees supports a cluster of one Kubernetes master and one or more Kubernetes worker nodes that are running the third-party software versions described below. These instructions assume that you are familiar with Kubernetes concepts and the tooling required to establish a Kubernetes cluster.
Third-Party Software Requirements for Running Accelerator on Kubernetes
Running Accelerator on a Kubernetes cluster requires identical versions of the following software (installed in the following order) on the master and each worker node that you want in your Kubernetes cluster:
- Ubuntu Linux
- Docker
- Kubernetes
Not all combinations of Ubuntu Linux, Docker, and Kubernetes are supported. The following matrix shows the compatible versions:
Ubuntu Linux | Docker | Kubernetes
---|---|---
18.04 | 18.06.0 | 1.17.1, 1.16.3, 1.12.2
16.04 | 18.06.0 | 1.17.1, 1.12.2
16.04 | 17.03.0 | 1.11.0
Installing Linux on the Master and Worker Nodes
On each machine to be used as the master and the worker nodes, install a compatible Ubuntu Linux version as listed in Third-Party Software Requirements for Running Accelerator on Kubernetes. See your operating system documentation for installation instructions.
Installing Docker on the Master and Worker Nodes
On each machine to be used as the master and the worker nodes, install Docker by entering
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt-get install docker-ce=<version>~ce~3-0~ubuntu
where <version> is a Docker version such as 18.06.0. See Third-Party Software Requirements for Running Accelerator on Kubernetes for a list of compatible versions.
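Because the package spec encodes the version, it can help to build it from a shell variable before installing. The following sketch (the DOCKER_VERSION value is just an example taken from the compatibility matrix) shows the resulting package argument:

```shell
# Build the pinned docker-ce package spec from a version number taken from
# the compatibility matrix (18.06.0 here is an example value).
DOCKER_VERSION="18.06.0"
PKG="docker-ce=${DOCKER_VERSION}~ce~3-0~ubuntu"
echo "$PKG"   # -> docker-ce=18.06.0~ce~3-0~ubuntu
# The install step then becomes: sudo apt-get install "$PKG"
```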
Disabling Swapping and the Firewall on the Master and Worker Nodes
If you installed Ubuntu Linux with swapping enabled, you must disable it on each machine to be used as the master and the worker nodes. You also must turn off the firewall on each of these machines.
-
On each machine to be used as the master and the worker nodes, enter
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo ufw disable
The second command ensures that swapping stays disabled upon system reboot. The third command disables the firewall.
-
Reboot the master and all worker nodes.
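To see what the sed expression does without touching a real /etc/fstab, you can run it on a sample line (the swapfile entry below is illustrative, not from any real system):

```shell
# Apply the same sed expression to a sample fstab line instead of /etc/fstab.
fstab_line='/swapfile none swap sw 0 0'
commented=$(echo "$fstab_line" | sed '/ swap / s/^\(.*\)$/#\1/g')
echo "$commented"   # -> #/swapfile none swap sw 0 0
# Any line containing " swap " is commented out, so it is ignored at reboot.
```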
Installing Kubernetes on the Master and Worker Nodes
Kubernetes is required on the master and all worker nodes in your Kubernetes cluster. On each machine to be used as the master and the worker nodes, complete the following steps:
-
Download the apt-key.gpg file and add the key to the APT sources keyring by entering
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-
Create a kubernetes.list file and insert the deb https://apt.kubernetes.io/ kubernetes-xenial main string into the file by entering
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
-
Get a list of packages that are available to install by entering
sudo apt-get update -q
This list is based on criteria such as your operating system version.
-
Install the required Kubernetes packages by entering
sudo apt-get install -qy kubelet=<version>-00 kubectl=<version>-00 kubeadm=<version>-00 kubernetes-cni=<cni_version>
where:
-
<version> is a Kubernetes version such as 1.12.2. See Third-Party Software Requirements for Running Accelerator on Kubernetes for a list of compatible versions.
-
<cni_version> is 0.7.5-00 for Kubernetes 1.17.1 or 0.6.0-00 for supported Kubernetes versions 1.16.3 or earlier.
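The <cni_version> rule above can be captured in a small helper. This sketch (version strings taken from the text) prints the pinned package arguments for a chosen Kubernetes version:

```shell
# Pick the kubernetes-cni version that matches the Kubernetes version,
# following the rule stated above: 0.7.5-00 for 1.17.1, else 0.6.0-00.
K8S_VERSION="1.17.1"
if [ "$K8S_VERSION" = "1.17.1" ]; then
  CNI_VERSION="0.7.5-00"
else
  CNI_VERSION="0.6.0-00"   # supported versions 1.16.3 and earlier
fi
PKGS="kubelet=${K8S_VERSION}-00 kubectl=${K8S_VERSION}-00 kubeadm=${K8S_VERSION}-00 kubernetes-cni=${CNI_VERSION}"
echo "$PKGS"
# The install step then becomes: sudo apt-get install -qy $PKGS
```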
Initializing the Master
On the machine to be used as the master, enter
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<ip_of_master_node>
This command initializes Kubernetes on the master machine and sets the IP address of the master node. The worker nodes will be on the subnet of this IP address.
In this example:
-
--pod-network-cidr=10.244.0.0/16
sets the pod network CIDR, which specifies the range of IP addresses in the virtual network for the pods to use. This is a subnetwork for the “real” network. -
<ip_of_master_node>
is the IP address of the physical machine of the master.
The following message appears in the shell after you run the above command:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <ip address:port of master> --token <token for security connection> \
    --discovery-token-ca-cert-hash <discovery token hash>
Creating a Configuration File on the Master for Using kubectl without sudo
You must create a configuration file to allow the current user to use kubectl
without sudo
. On the master, enter
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands create the .kube
directory, copy the admin.conf
file into this directory, and change the ownership to the current user.
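A quick way to confirm the chown step worked is to compare the file's owner with the current user. The sketch below demonstrates the check on a temporary stand-in file rather than the real $HOME/.kube/config:

```shell
# Stand-in for $HOME/.kube/config; a file you just created is owned by you,
# which is the state the chown command above produces for the real config.
f=$(mktemp)
owner=$(ls -l "$f" | awk '{print $3}')
rm -f "$f"
if [ "$owner" = "$(id -un)" ]; then
  echo "config owned by current user"
fi
```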
Configuring the Kubernetes Network Interface on the Master
On the master, enter
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This command configures the Kubernetes network interface. This example uses Flannel (although there are other options).
Joining Each Worker Node to the Master
On each worker node, use the output from the kubeadm init
command from the previous step by entering
sudo kubeadm join <ip_address:port_of_master> --token <token_for_security_connection> --discovery-token-ca-cert-hash <discovery_token_hash>
This command joins a worker node to the master.
Checking that the Worker Nodes are Joined to the Master
You should verify that each worker node is joined to the master. On the master, enter
kubectl get nodes
This command displays a list of nodes with IP addresses that are successfully joined. For example:
NAME              STATUS   ROLES    AGE     VERSION
ip-172-31-23-14   Ready    master   6h19m   v1.12.2
ip-172-31-29-52   Ready    <none>   6h18m   v1.12.2
At this point, the Kubernetes cluster is created.
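If you want to script this check, you can count the Ready nodes in the kubectl get nodes output. This sketch runs the same parsing on the sample output above instead of querying a live cluster:

```shell
# Sample output of `kubectl get nodes` (copied from the example above).
nodes_output='NAME STATUS ROLES AGE VERSION
ip-172-31-23-14 Ready master 6h19m v1.12.2
ip-172-31-29-52 Ready <none> 6h18m v1.12.2'
# Count lines whose STATUS column is Ready; the header line never matches.
ready_count=$(echo "$nodes_output" | awk '$2 == "Ready"' | wc -l | tr -d ' ')
echo "$ready_count"   # -> 2
```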
Creating a Secure Token for Agent Cloud Bursting Credentials on the Master
You must create a token for use as Kubernetes credentials.
Creating a Service Account
You can choose any service account name. To create a service account, enter
kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <service_account_name>
  labels:
    k8s-app: <service_account_name>
EOF
This command creates a service account and uses the text between the EOF
(end-of-file) markers as input to the command.
Creating a Cluster Role
Create a “cluster role” type of role with the appropriate rules by entering
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: <service_account_name>
  labels:
    k8s-app: <service_account_name>
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - pods/log
  - services
  - deployments
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - patch
  - delete
EOF
This command creates a cluster role using the text between the EOF
markers as input to the command.
Binding the Service Account to the Cluster Role
Bind the service account that you created above to the cluster role that you created above by entering
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: <service_account_name>
subjects:
- kind: ServiceAccount
  name: <service_account_name>
  namespace: default
roleRef:
  kind: ClusterRole
  name: <service_account_name>
  apiGroup: rbac.authorization.k8s.io
EOF
This command binds your service account to your cluster role using the text between the EOF
markers as input to the command.
Retrieving the Token
Complete the following steps to get the token:
-
Get the address of the Kubernetes API server by entering
APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
-
Set the ACCOUNTSERVICE variable to your service account name by entering
ACCOUNTSERVICE=<service_account_name>
-
Retrieve the Kubernetes secret token for the service account that you just specified by entering
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep ^${ACCOUNTSERVICE} | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d " ")
-
Display the token to use by entering
echo $TOKEN
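The grep/cut/tr pipelines in the APISERVER and TOKEN commands above only slice text out of kubectl output, so you can try them on sample output before running them against a cluster. In this sketch the config and secret text are hard-coded samples (the addresses and token value are fake placeholders):

```shell
# Sample fragment of `kubectl config view --minify` output (address is an example).
config_view='clusters:
- cluster:
    server: https://172.31.23.14:6443'
APISERVER=$(echo "$config_view" | grep server | cut -f 2- -d ":" | tr -d " ")
echo "$APISERVER"   # -> https://172.31.23.14:6443

# Sample `kubectl describe secret` output (the token is a fake placeholder).
describe_output='Name: mysa-token-abcde
Type: kubernetes.io/service-account-token

token: eyJhbGciOiJSUzI1NiJ9.sample
ca.crt: 1066 bytes'
TOKEN=$(echo "$describe_output" | grep -E '^token' | cut -f2 -d':' | tr -d " ")
echo "$TOKEN"   # -> eyJhbGciOiJSUzI1NiJ9.sample
```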
Installing the Electric Agent/EFS on the Worker Nodes
You must install the Electric Agent/EFS on each worker node in your Kubernetes cluster. To do so:
-
Upload the Electric Agent/Electric File System (EFS) installer to one of the worker nodes.
You must use the same installer version as the Cluster Manager that you will install below. For details about obtaining the Electric Agent/EFS installer, see Downloading the Accelerator Software.
-
Install the Electric Agent/EFS on the worker node.
For installation instructions, see Installing Electric Agent/EFS on Linux or Solaris.
-
Repeat the previous steps on the other worker nodes.
Installing and Configuring the Cluster Manager
You must install the prerequisite packages and the Cluster Manager software on the server that you want to use as the Cluster Manager.
This server can be the master or a server outside of your Kubernetes cluster.
Installing the Required Packages for the Cluster Manager
On the server that you plan to use as the Cluster Manager, install the prerequisite library packages as listed in Linux Prerequisites.
Installing the Cluster Manager
-
Upload the Cluster Manager installer to the server to be used as the Cluster Manager.
You must use the same installer version as the Electric Agent/EFS that you installed on the worker nodes above. For details about obtaining the Cluster Manager installer, see Downloading the Accelerator Software.
-
Install the Cluster Manager with the default settings.
For installation instructions, see Installing the Cluster Manager.
Setting the IP Address of the Cluster Manager Server for Cloud Burst Agent Connections
You must set the IP address of the Cluster Manager server that the cloud burst agents will use to connect to the Cluster Manager.
Setting the IP Address If the Cluster Manager Is on the Master
If you installed your Cluster Manager on the master, complete the following steps.
-
On the master, sign in to the Cluster Manager web UI.
For instructions, see Signing In To the Cluster Manager Web UI.
The default administrator account user name is admin, and the password is changeme. You should change the default password as soon as possible. -
Click Administration > Server Settings.
-
In the IP address field, enter the IP address of the Kubernetes Container Networking Interface (CNI).
The default name of this interface is cni0. -
Click OK.
Creating an Agent Resource in the Cluster Manager
You must create a resource in the Cluster Manager. A resource is a container for the cluster hosts that will participate in a build. Resources let you define groups of agents that you can specify when running builds to narrow down which agents the cluster can use in those builds.
These instructions use the Accelerator cmtool commands. As an alternative, you can use the Cluster Manager web UI (for details, see Creating Resources).
-
On the server where you installed the Cluster Manager as instructed above, create a Docker image of the Accelerator Agent component that you want to install.
For instructions for creating a Docker image of the agent component, see the KBEA-00170 Configuring ElectricAccelerator 11.0 and Newer Versions to Run in Docker Containers KB article.
-
Upload the Docker image to Docker Hub.
-
Sign in to cmtool by entering
cd /tmp
/opt/ecloud/i686_Linux/bin/cmtool login admin changeme
admin is the default username, and changeme is the default password. You should change the default password as soon as possible. -
Create credentials by entering
/opt/ecloud/i686_Linux/bin/cmtool createCloudCredential "<kubernetes_credential_name>" --cloudProviderType "kubernetes" \
--cloudCredentialData "{ \"user\" : \"kube\", \"token\" : \"${TOKEN}\", \"endpoint\" : \"https://<ip_of_master_node>:<port>\" }"
This command creates credentials using
<kubernetes_credential_name>
and the token that you generated above. You can get the <ip_of_master_node>:<port> from the output of the
APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
command that you entered above. -
Set the
IMAGENAME
environment variable with the path to your Docker image by entering
IMAGENAME="<docker_image_name_with_agent_on_Docker_Hub>"
-
Create a resource using your Kubernetes resource name and Kubernetes credential name by entering
/opt/ecloud/i686_Linux/bin/cmtool createResource "<your_k8s_resource_name>" --cloudProviderType "kubernetes" --cloudCredential "<kubernetes_credential_name>" \
--cloudConfigData "{ \"namespace\": \"default\", \"imageName\" : \"$IMAGENAME\", \"imagePullPolicy\" : \"Always\" }" --cloudIdleTimeout 2
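Because the --cloudCredentialData argument is escaped JSON, it can help to assemble and inspect it in a variable before passing it to cmtool. A sketch with placeholder TOKEN and endpoint values (not real credentials):

```shell
# Placeholder values; in a real run, use the token from `echo $TOKEN` and
# the API server address from the APISERVER command above.
TOKEN="sample-token"
ENDPOINT="https://172.31.23.14:6443"
CRED_JSON="{ \"user\" : \"kube\", \"token\" : \"${TOKEN}\", \"endpoint\" : \"${ENDPOINT}\" }"
echo "$CRED_JSON"
# -> { "user" : "kube", "token" : "sample-token", "endpoint" : "https://172.31.23.14:6443" }
```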
Running a Test Build
You should run a build to test that the setup is working correctly. For simplicity, the following test takes place on the same server where you installed the Cluster Manager as instructed above.
-
On the server where you installed the Cluster Manager, change directories to your project directory (the one containing your makefile and source files) by entering
cd <project_dir>
-
Start a build by entering
/opt/ecloud/i686_Linux/bin/emake --emake-cm=localhost --emake-resource="<your_kubernetes_resource_name>" --emake-maxagents=<max_desired_number_of_agents> all
By default, eMake virtualizes just the current working directory across the Accelerator build cluster. If your source code, output files, and build tools are in different directories, specify those directories by adding the
--emake-root=<path1>:<path2> … :<pathN>
option. For example, enter
/opt/ecloud/i686_Linux/bin/emake --emake-cm=localhost --emake-root=/home/bill/proj_q3:/src/foo:/src/baz --emake-resource="<your_kubernetes_resource_name>" --emake-maxagents=<max_desired_number_of_agents> all
A message Starting build: <build_number> appears; for example, Starting build: 1. When the build is finished, a message such as the following appears:
Finished build: 1
Duration: 0:41 (m:s)
Cluster availability: 100%
Cluster availability: 100%
indicates that the cluster was fully available for the build duration.
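The --emake-root value is just a colon-separated list of directories, so it can be assembled from a shell list when the paths are held in a variable. A sketch using the example paths from above:

```shell
# Join the example directories into the colon-separated --emake-root value.
dirs="/home/bill/proj_q3 /src/foo /src/baz"
EMAKE_ROOT=$(echo "$dirs" | tr ' ' ':')
echo "--emake-root=$EMAKE_ROOT"   # -> --emake-root=/home/bill/proj_q3:/src/foo:/src/baz
```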