Pre-installation setup for EKS


Before installing CloudBees CI, complete the following setup tasks:

Setting up a Kubernetes namespace

CloudBees recommends using Kubernetes namespaces when you install CloudBees CI.

When combined with Kubernetes RBAC security, Kubernetes namespaces help a Kubernetes administrator restrict who has access to a namespace and its data.
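For illustration only (the role names, resource list, and user are hypothetical placeholders, not part of the CloudBees documentation), a namespace-scoped Role and RoleBinding that restrict a user to read access within the cloudbees-core namespace might look like this:

```yaml
# Hypothetical example: grant user jane@example.com read-only access
# limited to the cloudbees-core namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cloudbees-core-reader
  namespace: cloudbees-core
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "statefulsets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cloudbees-core-reader-binding
  namespace: cloudbees-core
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cloudbees-core-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, the binding grants no access to resources in other namespaces.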

To set up a Kubernetes namespace:

  1. Run the following commands to create the namespace and set it as the default for your current context:

kubectl create namespace cloudbees-core
kubectl config set-context $(kubectl config current-context) --namespace=cloudbees-core
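As a quick sanity check (not a required step), you can confirm that the namespace exists and is now the default for your context:

```shell
# Verify the namespace was created
kubectl get namespace cloudbees-core
# Show the namespace associated with the current context
kubectl config view --minify --output 'jsonpath={..namespace}'
```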

Adding the CloudBees Helm Chart Repository

CloudBees hosts its Helm charts in the CloudBees public Helm Chart Repository. Before you can use this repository, you must add it to your Helm environment with the helm repo add command.

To add the CloudBees Public Helm Chart Repository to your Helm environment:

helm repo add cloudbees https://charts.cloudbees.com/public/cloudbees (1)
helm repo update (2)

(1) The helm repo add command adds a new Helm Chart Repository to your Helm installation.
(2) The helm repo update command updates your local Helm Chart Repository cache. Helm commands such as helm search use this cache to improve performance.

Always run helm repo update before you execute helm search. This ensures your cache is up to date.
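For example, assuming Helm 3, you can confirm the repository was added and list the charts it provides (the exact chart versions shown will vary):

```shell
# List configured chart repositories; "cloudbees" should appear
helm repo list
# Search the local cache for charts in the cloudbees repository
helm search repo cloudbees
```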

Setting up an Ingress Controller

An Ingress Controller is required to access CloudBees CI on modern cloud platforms. AWS offers different load balancers that can be implemented using different Ingress Controllers. Pick the appropriate one depending on your requirements.

Read Application load balancing on Amazon EKS and follow the steps to install the AWS load balancer controller in the cluster.

With this setup, you can use the ExternalDNS project to allocate DNS names dynamically. Otherwise, you must create DNS records after deploying CloudBees CI on modern cloud platforms, based on the resulting ALB host name.
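For example, if you deploy ExternalDNS, an Ingress resource can request a DNS name through an annotation such as the following (a fragment only; ci.example.com is a placeholder, and this assumes ExternalDNS is configured with access to the matching Route 53 hosted zone):

```yaml
# Fragment: annotation on the CloudBees CI Ingress resource.
# ExternalDNS watches Ingress objects and creates the matching DNS record.
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: ci.example.com
```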

Using Elastic Load Balancing (ELB) or Network Load Balancers (NLB)

CloudBees CI on modern cloud platforms also supports using NGINX Ingress Controller.

TLS termination can be performed either at the Ingress level or at the load balancer level, the latter using AWS Certificate Manager.

Depending on what kind of AWS Load Balancer you want to use and where you want to perform TLS termination, select the appropriate section below.

Creating an NGINX Ingress Controller with NLB with TLS termination at Ingress level

  1. This configuration can be used if you don’t plan to use TLS at all (don’t do this in production), if you use cert-manager to provision your TLS certificates, or if you want to provide your own TLS certificates to Kubernetes resources without relying on AWS Certificate Manager.

  2. Create the file values.yaml with the following content.

    controller:
      service:
        externalTrafficPolicy: "Local"
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
  3. Deploy the configuration:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm upgrade ingress-nginx \
                 ingress-nginx/ingress-nginx \
                 --install \
                 --namespace ingress-nginx \
                 --values values.yaml \
                 --version 3.3.0

Creating an NGINX Ingress Controller with NLB with TLS Termination at Load Balancer level

  1. This configuration lets you provide a TLS certificate from AWS Certificate Manager. The load balancer handles TLS termination and redirects HTTP requests to HTTPS.

NLB support is not complete in Kubernetes (Kubernetes issue #57250). To obtain a working setup, you must apply a post-install script using kubectl and the AWS CLI.

  2. Create the file values.yaml with the following content, assuming cloudbees-core is the namespace where CloudBees CI is deployed. Replace the placeholder ARN arn:aws:acm:REGION:NNNNNNNNNNNN:certificate/AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE with the actual ARN of the ACM certificate used for TLS termination.

    controller:
      config:
        use-proxy-protocol: "true"
        http-snippet: |
          server {
            listen 8080 proxy_protocol;
            return 308 https://$host$request_uri;
          }
      service:
        targetPorts:
          http: 8080
          https: http
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:REGION:NNNNNNNNNNNN:certificate/AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE
          service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
  3. Deploy the configuration:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm upgrade ingress-nginx \
                 ingress-nginx/ingress-nginx \
                 --install \
                 --namespace ingress-nginx \
                 --values values.yaml \
                 --version 3.3.0
  4. Apply the following post-install script to enable Proxy Protocol v2 on the NLB target groups. You will need:

    • A valid Kubernetes context pointing to the cluster you installed ingress-nginx in.

    • A valid AWS context to use the CLI.

#!/bin/bash -eu
export AWS_PAGER=""
# Host name of the load balancer created for the ingress-nginx controller service
hostname=$(kubectl get -n ingress-nginx services ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Resolve the load balancer ARN from its DNS name
loadBalancerArn=$(aws elbv2 describe-load-balancers --query "LoadBalancers[?DNSName==\`$hostname\`].LoadBalancerArn" --output text)
# Enable Proxy Protocol v2 on every target group attached to the load balancer
targetGroupsArn=$(aws elbv2 describe-target-groups --load-balancer-arn "$loadBalancerArn" --query 'TargetGroups[*].TargetGroupArn' --output text)
for targetGroupArn in $targetGroupsArn; do
  aws elbv2 modify-target-group-attributes --target-group-arn "$targetGroupArn" --attributes Key=proxy_protocol_v2.enabled,Value=true --output text
done

Alternatively, you can enable Proxy Protocol v2 on the target groups from the AWS Console.
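Before pasting a certificate ARN into the aws-load-balancer-ssl-cert annotation, it can help to sanity-check its format. The following is a hypothetical helper, not part of the official steps; the region, account ID, and certificate ID shown are placeholders, not real resources:

```shell
# Hypothetical sanity check: does the value look like a well-formed
# ACM certificate ARN (arn:aws:acm:REGION:ACCOUNT:certificate/UUID)?
acm_arn="arn:aws:acm:us-west-2:123456789012:certificate/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
if printf '%s\n' "$acm_arn" | grep -Eq '^arn:aws:acm:[a-z0-9-]+:[0-9]{12}:certificate/[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  echo "ACM ARN format looks valid"
else
  echo "ACM ARN format looks wrong" >&2
fi
```

This only validates the shape of the string; it does not confirm that the certificate exists in your account.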

Creating an NGINX Ingress Controller with ELB Layer 4 with TLS termination at Ingress level

  1. This configuration can be used if you don’t plan to use TLS at all (don’t do this in production), if you use cert-manager to provision your TLS certificates, or if you want to provide your own TLS certificates to Kubernetes resources without relying on AWS Certificate Manager.

  2. Create an empty values.yaml for the next step.

  3. Deploy the configuration:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm upgrade ingress-nginx \
                 ingress-nginx/ingress-nginx \
                 --install \
                 --namespace ingress-nginx \
                 --values values.yaml \
                 --version 3.3.0

Creating an NGINX Ingress Controller with ELB Layer 4 with TLS Termination at Load Balancer level

  1. This configuration lets you provide a TLS certificate from AWS Certificate Manager. The load balancer handles TLS termination and redirects HTTP requests to HTTPS.

  2. Create the file values.yaml with the following content. Replace the placeholder ARN arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX with the actual ARN of the ACM certificate used for TLS termination.

    controller:
      config:
        use-proxy-protocol: "true"
        http-snippet: |
          server {
            listen 8080 proxy_protocol;
            return 308 https://$host$request_uri;
          }
      service:
        targetPorts:
          http: 8080
          https: http
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
          service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        externalTrafficPolicy: "Local"
  3. Deploy the configuration:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm upgrade ingress-nginx \
                 ingress-nginx/ingress-nginx \
                 --install \
                 --namespace ingress-nginx \
                 --values values.yaml \
                 --version 3.3.0

Creating a DNS record

Create a DNS record for the domain you want to use for CloudBees CI, pointing to your NGINX Ingress Controller load balancer.

When using Route53 to manage domain names, create an alias record pointing to the load balancer created by the NGINX Ingress Controller.

You can retrieve the host name of the load balancer using the following command:

kubectl get -n ingress-nginx services ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'
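As a sketch (the domain name, hosted zone IDs, and load balancer DNS name below are all placeholders), a Route53 alias record can be created with the AWS CLI from a change batch file like the following. The load balancer's canonical hosted zone ID, needed for the AliasTarget, can be obtained from aws elbv2 describe-load-balancers:

```json
{
  "Comment": "Alias record for the CloudBees CI ingress load balancer",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "ci.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZXXXXXXXXXXXXX",
          "DNSName": "abcdef1234567890.elb.us-west-2.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

You would then apply it with aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://change-batch.json, where <your-zone-id> is the hosted zone that serves your domain.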