Issue
- I have connected external Client controllers to CloudBees Core Modern Operations Center and I would like the client controllers to be able to use the kubernetes shared cloud like any of the Managed controllers.
Explanation
The default "kubernetes shared cloud" is created at the root level and is visible to any connected controller. So as soon as an external Client controller that has the kubernetes plugin installed connects to Operations Center, the kubernetes cloud configuration is pushed to that controller and can be used.
There are however 2 problems for client controllers:
- Authentication: the kubernetes shared cloud configuration does not define any means of authentication to kubernetes. In that case, it relies on the behavior of the fabric8/kubernetes-client, which can infer authentication details from the file system:
  - either from a kubeconfig file, whose default location is $HOME/.kube/config
  - or from ServiceAccount details under /var/run/secrets/kubernetes.io/serviceaccount/

  This works out of the box inside a pod, where service account details are automatically injected at /var/run/secrets/kubernetes.io/serviceaccount/. But from a client controller, this cannot work unless the client controller is running inside a pod in the same kubernetes cluster.
- Routing: the Client controller URL must be reachable from inside the kubernetes cluster. If it is not, the KUBERNETES_JENKINS_URL system property or environment variable can be used to define a URL that takes precedence over the controller URL and that kubernetes pod agents should use to connect to the controller. Managed controllers, for example, are started with the KUBERNETES_JENKINS_URL environment variable pointing to their internal endpoint (i.e. their kubernetes service URL), and agents connect directly to the controllers through the kubernetes network. A quick way to check this reachability from inside the cluster is sketched below.
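To quickly check the routing requirement, one option is to start a throwaway pod inside the cluster and see whether it can reach the controller URL. The namespace and controller URL below are placeholders to adapt to your environment:

```sh
# Placeholders: adjust the namespace and the controller URL.
# Start a temporary pod in the cluster and check that the controller answers over HTTP(S).
kubectl run reachability-test -n cloudbees-core -it --rm --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sSIk https://client-controller.example.com/login
```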
Resolution
There are 2 viable solutions to leverage the "kubernetes shared cloud" from a client controller:
- either provide a kubeconfig file to the client controller
- or inject the service account details in the expected location /var/run/secrets/kubernetes.io/serviceaccount/
The advantage of these solutions is that they do not require any specific configuration in Jenkins or in Operations Center.
Pre-Requisites
- A Service Account that has the permissions required by the kubernetes plugin in the namespace where agents must be spun up. By default, CloudBees Core defines a jenkins service account in the cloudbees-core namespace, which is used for the examples here. A sketch of creating such a service account in another namespace is shown below.
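If the target namespace does not already provide a suitable service account, the following is a minimal sketch of creating one with the pod-related permissions the kubernetes plugin typically needs. The build-agents namespace, the role names, and the exact verb/resource lists are assumptions to adapt to your cluster and plugin version:

```sh
# Hypothetical namespace and role names; adjust to your environment.
kubectl create serviceaccount jenkins -n build-agents
kubectl create role jenkins-agents -n build-agents \
  --verb=create,delete,get,list,watch \
  --resource=pods,pods/exec,pods/log
kubectl create rolebinding jenkins-agents -n build-agents \
  --role=jenkins-agents \
  --serviceaccount=build-agents:jenkins
```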
Solution 1: Create a kubeconfig file for the Client controller
- Create a kubeconfig file in the $HOME directory of the user running the Client controller ($HOME/.kube/config) and populate it:

  ```yaml
  apiVersion: v1
  kind: Config
  current-context: default-context
  clusters:
  - cluster:
      certificate-authority-data: ${CLUSTER_CA_CERT_BASE64}
      server: ${KUBERNETES_API_SERVER_URL}
    name: remote-cluster
  contexts:
  - context:
      cluster: remote-cluster
      namespace: ${NAMESPACE}
      user: jenkins
    name: default-context
  users:
  - name: jenkins
    user:
      token: ${SERVICEACCOUNT_TOKEN_CLEAR}
  ```
Replace the variables with the appropriate values (a quick validation of the resulting kubeconfig is sketched after this list):
- $KUBERNETES_API_SERVER_URL is the Kubernetes server URL:

  ```sh
  kubectl config view --minify | grep server
  ```
- $SERVICEACCOUNT_TOKEN_CLEAR is the Service Account token in clear:

  ```sh
  kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.token} | base64 --decode
  ```
- $CLUSTER_CA_CERT_BASE64 is the Kubernetes API Server CA Certificate in base64:

  ```sh
  kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.'ca\.crt'}
  ```
- $NAMESPACE is the default namespace (where agents should be spun up by default): cloudbees-core
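Optionally, the resulting kubeconfig can be validated as the user that will run the Client controller, for example:

```sh
# Verify that the kubeconfig authenticates and is allowed to create pods
# in the target namespace (cloudbees-core in this example).
kubectl --kubeconfig $HOME/.kube/config auth can-i create pods -n cloudbees-core
kubectl --kubeconfig $HOME/.kube/config get pods -n cloudbees-core
```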
- Add the system property -DKUBERNETES_JENKINS_URL=$CONTROLLER_URL to the client controller startup arguments, where $CONTROLLER_URL is the URL of the Client controller, and restart the controller. A sketch of where this property can go is shown below.

Make sure that the user running the Client controller has permissions to read its $HOME/.kube/config.
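How the property is passed depends on how the Client controller is installed; the following is only a sketch with a placeholder URL:

```sh
# Sketch for a direct WAR launch (placeholder URL):
java -DKUBERNETES_JENKINS_URL=https://client-controller.example.com/ -jar jenkins.war

# For packaged installations, the property typically goes into the service's
# existing Java options (for example a JENKINS_JAVA_OPTIONS entry) instead,
# followed by a restart of the controller service.
```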
Solution 2: Inject the Service Account details in '/var/run/secrets/kubernetes.io/serviceaccount/'
- On the Client controller’s host, create the directory /var/run/secrets/kubernetes.io/serviceaccount and make sure that the user running the client controller’s service, for example jenkins, has permissions to read the files that will be created there:

  ```sh
  sudo mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
  sudo chown -R jenkins:jenkins /var/run/secrets/kubernetes.io/serviceaccount
  ```
- Then create the following files (a quick check of the injected details is sketched after this list):
  - /var/run/secrets/kubernetes.io/serviceaccount/token: must contain the jenkins service account token in clear

    ```sh
    kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.token} | base64 --decode > /var/run/secrets/kubernetes.io/serviceaccount/token
    ```
  - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: must contain the Kubernetes API Server CA Certificate in clear

    ```sh
    kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.'ca\.crt'} | base64 --decode
    ```
  - /var/run/secrets/kubernetes.io/serviceaccount/namespace: must contain the name of the default namespace, for example cloudbees-core
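Optionally, the injected details can be verified against the Kubernetes API server, running as the user that runs the Client controller. In this sketch, $KUBERNETES_API_SERVER_URL is the same API server URL retrieved in Solution 1:

```sh
# List pods in the default namespace using the injected token and CA certificate.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert $SA_DIR/ca.crt \
  -H "Authorization: Bearer $(cat $SA_DIR/token)" \
  "$KUBERNETES_API_SERVER_URL/api/v1/namespaces/$(cat $SA_DIR/namespace)/pods"
```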
- Add the system property -DKUBERNETES_JENKINS_URL=$CONTROLLER_URL to the client controller startup arguments, where $CONTROLLER_URL is the URL of the Client controller, and restart the controller (see the startup sketch in Solution 1).
Tested product/plugin versions
- CloudBees CI (CloudBees Core) on modern cloud Platforms - Operations Center 2.204.3.7
- CloudBees CI (CloudBees Core) on traditional platforms - Client controller v2.204.3.7
- Kubernetes Plugin 1.23.1
- GKE v1.15.9-gke.12
Resources
- See the different System Properties and Environment variables provided by the io.fabric8.kubernetes.client.Config
- Kubernetes Plugin: Authenticate with a ServiceAccount to a remote cluster