Issue kubectl commands to your k8s cluster from your CloudBees CodeShip Pro build
1. Distill your current k8s configuration to a single file
With a configured k8s cluster context on your local machine, run the following command in your project directory:
kubectl config view --flatten > kubeconfigdata # add --minify flag to reduce info to current context
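Before moving on, you can optionally confirm that the generated file is self-contained by pointing kubectl at it explicitly with the standard --kubeconfig flag. This is only a sanity check and assumes the file sits in your project root as shown above:

kubectl --kubeconfig ./kubeconfigdata config current-context   # should print the context you expect your build to use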
2. Copy contents of generated k8s config file to env var file
We have a Docker container built for taking the plaintext, flattened k8s config file and storing its contents in a CodeShip Pro env var file. The /root/.kube/config path specifies exactly where we want the contents of the kubeconfigdata file securely placed inside the codeship/kubectl container at runtime.
docker run --rm -it -v $(pwd):/files codeship/env-var-helper cp kubeconfigdata:/root/.kube/config k8s-env
Check out the codeship/env-var-helper README for more information.
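After running the helper, a quick sanity check confirms that the env var file was written to your project directory. This is a minimal sketch; the exact contents of the file are an implementation detail of codeship/env-var-helper, so only its presence and size are checked here:

test -s k8s-env && echo "k8s-env created ($(wc -c < k8s-env) bytes)"   # a non-empty file means the copy succeeded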
3. Encrypt the env file, remove plaintext and/or add to .gitignore
jet encrypt k8s-env k8s-env.encrypted
rm kubeconfigdata k8s-env
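If you would rather keep the plaintext files around locally instead of deleting them, make sure they can never be committed. A minimal sketch, assuming kubeconfigdata and k8s-env live in your project root:

printf 'kubeconfigdata\nk8s-env\n' >> .gitignore   # keep the plaintext kubeconfig and env file out of version control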
4. Configure your services and steps files with the following as guidance
## codeship-services.yml
kubectl:
  image: codeship/kubectl
  encrypted_env_file: k8s-env.encrypted
## codeship-steps.yml
- name: check response to kubectl config
  service: kubectl
  command: kubectl config view
#- name: attempt to connect to live k8s cluster
#  service: kubectl
#  command: kubectl cluster-info
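Once the configuration check passes, a real deployment step usually follows the same pattern. The sketch below is only an illustration: it assumes a deployment.yaml manifest exists in your repository and that your kubeconfig's current context points at the target cluster; adjust the tag restriction and command to match your workflow.

## codeship-steps.yml (hypothetical additional step)
- name: deploy to k8s cluster
  tag: master                      # only run this step for the master branch
  service: kubectl
  command: kubectl apply -f deployment.yaml

You can exercise these steps locally with jet steps before pushing, which is a convenient way to confirm the encrypted kubeconfig is decrypted and mounted as expected.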
If you’re still largely unfamiliar with CloudBees CodeShip Pro, then check out our step-by-step walk-through on issuing kubectl commands in CloudBees CodeShip Pro.