To provision managed controllers in multiple Kubernetes clusters, you must do the following:
- Ensure the Kubernetes cluster meets the CloudBees CI pre-installation requirements. See the installation guide for your Kubernetes platform for the pre-installation requirements.
- Add a namespace using Helm. The Helm executable must be installed locally; Tiller does not need to be installed on the Kubernetes cluster. The Kubernetes cluster you want to work with must be set as the current context in your local kubeconfig.
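For example, assuming your local kubeconfig already contains a context for the target cluster (my-cluster is a placeholder name), a minimal sketch:

# List the contexts available in the local kubeconfig
kubectl config get-contexts

# Make the target cluster the current context (my-cluster is a placeholder)
kubectl config use-context my-cluster

# Confirm which cluster subsequent kubectl and helm commands target
kubectl config current-context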
Preparing a namespace to install a managed controller
- To prepare a namespace to install a managed controller, use the following example values.yaml file:

OperationsCenter:
  Enabled: false
Master:
  Enabled: true
  OperationsCenterNamespace: ci (1)
Agents:
  Enabled: true
(1) Replace ci with the actual namespace where the operations center is installed. If the operations center is located in another cluster, set it to the same value as the current namespace, and then create an operations center service account for authentication.
- Use the following example to perform the installation:

export NAMESPACE=my-team (1)
kubectl create namespace $NAMESPACE || true
helm install ci-controllers-$NAMESPACE cloudbees/cloudbees-core --namespace $NAMESPACE -f values.yaml
(1) Replace my-team with the actual namespace where the managed controllers will be created.
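To confirm the release was created, a minimal check (the release and namespace names match the example above):

# Show the status of the release installed above
helm status ci-controllers-my-team --namespace my-team

# List all releases in the namespace
helm list --namespace my-team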
Controller provisioning configuration
To provision controllers in their own namespaces, each controller must use a specific sub-domain. For example, if the operations center domain is ci.example.org and the URL is https://ci.example.org/cjoc/, a controller dev1 should use the sub-domain dev1.ci.example.org or dev1-ci.example.org. It is preferable to use dev1-ci.example.org if using wildcard certificates for the domain example.org, because a certificate for *.example.org covers the single-level name dev1-ci.example.org but not the two-level name dev1.ci.example.org.
To configure each controller to use a specific sub-domain, set the Controller URL Pattern in the operations center configuration. For the example above, the Controller URL Pattern should be https://*-ci.example.org/*/, so that a controller named dev1 is served at https://dev1-ci.example.org/dev1/.
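As a quick sanity check of this setup (a sketch, assuming dig and openssl are available locally and using the dev1-ci.example.org name from the example above):

# The controller sub-domain should resolve to the Ingress controller
dig +short dev1-ci.example.org

# Inspect the certificate served for the sub-domain; a wildcard certificate
# for *.example.org covers dev1-ci.example.org but not dev1.ci.example.org
openssl s_client -connect dev1-ci.example.org:443 -servername dev1-ci.example.org </dev/null 2>/dev/null | openssl x509 -noout -subject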
Provision controllers
The namespace for the controller resources can be configured as the default namespace for all managed controllers, using the namespace parameter in the main operations center configuration screen. The namespace can also be set for a specific managed controller, using the namespace parameter in that controller's configuration screen. Leave the namespace value empty to use the value defined by the Kubernetes cluster endpoint.
Adding a ServiceAccount for operations center
- Create a ServiceAccount named cjoc in the namespace provided as .Master.OperationsCenterNamespace:

kubectl create serviceaccount cjoc -n ci (1)
(1) Replace ci with the actual value used for .Master.OperationsCenterNamespace.
- Create a kubeconfig for the operations center service account:

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <CLUSTER_CA_BASE64_ENCODED> (1)
    server: <KUBE_APISERVER_ENDPOINT> (2)
contexts:
- name: cjoc
  context:
    cluster: kubernetes
    namespace: <NAMESPACE> (3)
    user: cjoc
current-context: cjoc
users:
- name: cjoc
  user:
    token: <CJOC_SERVICEACCOUNT_TOKEN> (4)
(1) Replace <CLUSTER_CA_BASE64_ENCODED> with the base64-encoded value of the cluster Certificate Authority. See Kubernetes - Service Account Tokens.
(2) Replace <KUBE_APISERVER_ENDPOINT> with the URL of the Kubernetes API server.
(3) Replace <NAMESPACE> with the namespace where the cjoc ServiceAccount has been created.
(4) Replace <CJOC_SERVICEACCOUNT_TOKEN> with the decoded value of the cjoc ServiceAccount token. Refer to Kubernetes - Service Account Tokens.

Another way of generating a kubeconfig file for the operations center service account is to use the view-serviceaccount-kubeconfig kubectl plugin and run kubectl view-serviceaccount-kubeconfig cjoc -n ci.
- Make sure it works before trying to use it in the following steps:

KUBECONFIG=<PATH_TO_KUBECONFIG_FILE> kubectl version (1)
(1) Replace <PATH_TO_KUBECONFIG_FILE> with the location of the kubeconfig file you created.
Adding a new Kubernetes cluster endpoint
- In the operations center, browse to the main configuration screen. Under Kubernetes Cluster endpoints, the list of endpoints is displayed. The first one corresponds to the current cluster and the namespace in which the operations center is running.
- Select Add.
- Set the API endpoint URL. You can find it in your Cloud console, or alternatively in your kubeconfig file (see the sketch after these steps).
- Pick a name to identify the cluster endpoint.
- Set up credentials. To use a kubeconfig file, create a Secret file credential.
- If needed, provide the server certificate, or disable the certificate check.
- Pick a namespace. If left empty, the namespace where the operations center is deployed is used.
- Set the Controller URL Pattern to the domain name configured to target the Ingress controller on the Kubernetes cluster.
- The Jenkins URL can be left empty, unless there is a specific link between clusters that requires accessing the operations center using a different name than the public one.
- Select Use WebSocket to enable WebSockets for all managed controllers that use this endpoint. See Using WebSockets to connect controllers to operations center for details about WebSockets.
- Select Validate to make sure the connection to the remote Kubernetes cluster can be established and the credentials are valid.
- Select Save.
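As referenced in the API endpoint step above, a minimal sketch for reading API server URLs out of your kubeconfig file:

# Print each cluster name and its API server URL from the current kubeconfig
kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'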
Adding a managed controller to a Kubernetes cluster
To add a new managed controller to a Kubernetes cluster:
- Select the cluster endpoint to use.
- Adjust the controller configuration to your needs.
- If you do not want to use the default storage class from the cluster, set the storage class name through the dedicated field.
- If the Ingress controller is not the default one, use the YAML field to set an alternate Ingress class:

kind: "Ingress"
metadata:
  annotations:
    kubernetes.io/ingress.class: 'nginx'
- Save and provision.
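Once provisioned, you can check the controller resources from the command line (a sketch; my-team is the example namespace used above):

# Pods, services, and ingresses created for controllers in the namespace
kubectl get pods,svc,ingress -n my-team

# Events can help diagnose a controller that fails to start
kubectl get events -n my-team --sort-by=.lastTimestamp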