Troubleshooting CloudBees Core on modern cloud platforms on Amazon Web Services (AWS)
There are a number of resources that you can use to troubleshoot a CloudBees Core failure.
In this section we will cover each of these approaches.
Consult the Knowledge Base
The Knowledge Base can be very helpful in troubleshooting problems with CloudBees Core and can be accessed on the CloudBees Support site.
Instance provisioning
Operations Center Provisioning
- Check the pod status
- Check that all associated objects have been created: pod, svc, statefulset (1 - 1), ingress, pvc and pv
- Check the events related to the pod and its associated objects (see the events table below)
- Check the Jenkins logs
CloudBees Core basic operations
Viewing Cluster Resources
# Gives you quick readable detail
$ kubectl get -a pod,statefulset,svc,ingress,pvc,pv -o wide
# Gives you a high level of detail
$ kubectl get -a pod,statefulset,svc,ingress,pvc,pv -o yaml
# Describe commands with verbose output
$ kubectl describe <TYPE> <NAME>
# Gives you backend storage information
$ kubectl get storageclass
Pod Access
# Open an interactive bash shell in the pod
$ kubectl exec <POD_NAME> -i -t -- bash -li
master2-0:/$ ps -ef
PID USER TIME COMMAND
1 jenkins 0:00 /sbin/tini -- /usr/local/bin/launch.sh
5 jenkins 1:53 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slav
481 jenkins 0:00 bash -li
485 jenkins 0:00 ps -ef
# Run a single command in the pod without an interactive shell
$ kubectl exec <POD_NAME> -- ps -ef
PID USER TIME COMMAND
1 jenkins 0:00 /sbin/tini -- /usr/local/bin/launch.sh
5 jenkins 2:05 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slaveAgentPort=50000 -DMASTER_GRANT_ID=270bd80c-3e5c-498c-88fe-35ac9e11f3d3 -Dcb.IMProp.warProfiles.cje=kubernetes.json -DMASTER_INDEX=1 -Dcb.IMProp.warProfiles=kubernetes.json -DMASTER_OPERATIONSCENTER_ENDPOINT=http://cjoc/cjoc -DMASTER_NAME=master2 -DMASTER_ENDPOINT=http://cje.support-core.beescloud.k8s.local/master2/ -jar -Dcb.distributable.name=Docker Common CJE -Dcb.distributable.commit_sha=888f01a54c12cfae5c66ec27fd4f2a7346097997 /usr/share/jenkins/jenkins.war --webroot=/tmp/jenkins/war --pluginroot=/tmp/jenkins/plugins --prefix=/master2/
645 jenkins 0:00 ps -ef
Pod Scale Down/Up
$ kubectl scale statefulset/master2 --replicas=0
statefulset "master2" scaled
$ kubectl get -a statefulset -o wide
NAME DESIRED CURRENT AGE CONTAINERS IMAGES
cjoc 1 1 1d jenkins cloudbees/cloudbees-cloud-core-oc:2.107.1.2
master1 1 1 2h jenkins cloudbees/cloudbees-core-mm:2.107.1.2
master2 0 0 36m jenkins cloudbees/cloudbees-core-mm:2.107.1.2
CloudBees Core Cluster Resources
During the installation of CloudBees Core, the following service accounts, roles and role bindings are created.
$ kubectl get sa,role,rolebinding
NAME SECRETS AGE
sa/cjoc 1 21h
sa/default 1 21h
sa/jenkins 1 21h
NAME AGE
roles/master-management 21h
roles/pods-all 21h
NAME AGE
rolebindings/cjoc 21h
rolebindings/jenkins 21h
Once the installation is done and the CloudBees Core cluster is up and running, we can easily check the status of the most important CloudBees Core resources: pod, statefulset, svc, ingress, pvc and pv.
$ kubectl get pod,statefulset,svc,ingress,pvc,pv
NAME READY STATUS RESTARTS AGE
po/cjoc-0 1/1 Running 0 21h
po/master1-0 1/1 Running 0 14h
NAME DESIRED CURRENT AGE
statefulsets/cjoc 1 1 21h
statefulsets/master1 1 1 14h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/cjoc ClusterIP 100.66.207.191 <none> 80/TCP,50000/TCP 21h
svc/master1 ClusterIP 100.67.1.49 <none> 80/TCP,50000/TCP 14h
NAME HOSTS ADDRESS PORTS AGE
ing/cjoc cje.support-core.beescloud.k8s.local af9463f6a2b68... 80 21h
ing/default cje.support-core.beescloud.k8s.local af9463f6a2b68... 80 21h
ing/master1 cje.support-core.beescloud.k8s.local af9463f6a2b68... 80 14h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/jenkins-home-cjoc-0 Bound pvc-c5cad012-2b69-11e8-80fc-12582571ed5c 20Gi RWO gp2 21h
pvc/jenkins-home-master1-0 Bound pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c 50Gi RWO gp2 14h
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-c5cad012-2b69-11e8-80fc-12582571ed5c 20Gi RWO Delete Bound cje-on-support-core/jenkins-home-cjoc-0 gp2 21h
pv/pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c 50Gi RWO Delete Bound cje-on-support-core/jenkins-home-master1-0 gp2 14h
NOTE: See Kubernetes Troubleshoot Clusters for more information.
The following sections describe the expected state of each relevant Kubernetes resource. The definition of each resource is taken from the official Kubernetes documentation.
Pods
A pod is the smallest and simplest Kubernetes object, which represents a set of running containers on your cluster. A Pod is typically set up to run a single primary container, although a pod can also run optional sidecar containers that add supplementary features like logging. Pods are commonly managed by a Deployment.
The `get pod` command lists the applications currently running in the cluster. Applications that are stopped or not deployed will not appear as a pod in the cluster.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
po/cjoc-0 1/1 Running 0 21h
po/master1-0 1/1 Running 0 14h
NOTE: See Kubernetes Debugging Pods for more information.
Pods Events
Pod events provide insights into why a specific pod is failing to start in the cluster. In other words, pod events tell you why a specific application cannot start or be deployed in the cluster.
The table below summarizes the most common pod events that might occur in CloudBees Core.
To get the list of events associated with a given pod, run:
$ kubectl describe pod <POD_NAME>
For example:
$ kubectl describe pod cjoc-0
Status | Events | Cause |
---|---|---|
ErrImagePull / ImagePullBackOff | Failed to pull image | The image you are using cannot be found in the Docker registry, or, when using a private registry, there is no pull secret configured |
 | | Node issues. See below; get node info with `kubectl describe node <NODE_NAME>` |
Pending | Insufficient memory | Not enough memory: either increase the number or size of nodes in the cluster, or reduce the memory requirement of Operations Center (YAML file) or of the master (under its configuration) |
Pending | Insufficient cpu | Not enough CPUs: either increase the number or size of nodes in the cluster, or reduce the CPU requirement of Operations Center (YAML file) or of the master (under its configuration) |
Pending | NoVolumeZoneConflict | There are no nodes available in the zone where the persistent volume was created; start more nodes in that zone |
CrashLoopBackOff | Back-off restarting failed container | Find out why the Docker container crashes. The easiest and first check is whether there are any errors in the output of the previous startup, e.g. `kubectl logs <POD_NAME> --previous` |
Unknown | | This usually indicates a bad node, if there are several pods on that node in the same state. Check with `kubectl get pods --all-namespaces -o wide` |
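Individual events can also be listed without the full describe output. A quick sketch, assuming a kubectl version that supports field selectors on events (the pod name cjoc-0 is just an example):
# List the events for a single pod, oldest first
$ kubectl get events --field-selector involvedObject.name=cjoc-0 --sort-by='.lastTimestamp'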
StatefulSet
A StatefulSet manages the deployment and scaling of a set of Pods and provides guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
A StatefulSet operates under the same pattern as any other Controller. You define your desired state in a StatefulSet object and the StatefulSet controller makes any necessary updates to get there from the current state.
$ kubectl get statefulset
NAME DESIRED CURRENT AGE
statefulsets/cjoc 1 1 21h
statefulsets/master1 1 1 14h
In CloudBees Core, the expected DESIRED and CURRENT value for any application is 1. Neither Jenkins nor build agents support more than one instance running at the same time.
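To check those values across all statefulsets at a glance, the desired and current replica counts can be pulled out with custom columns; a quick sketch:
# Any row where DESIRED and CURRENT are not both 1 deserves a closer look with kubectl describe
$ kubectl get statefulset -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas,CURRENT:.status.replicas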
Service
A service is the API object that describes how to access applications (such as a set of Pods) and can describe ports and load-balancers.
The access point can be internal or external to the cluster.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/cjoc ClusterIP 100.66.207.191 <none> 80/TCP,50000/TCP 21h
svc/master1 ClusterIP 100.67.1.49 <none> 80/TCP,50000/TCP 14h
A service must exist for each application running in the cluster; otherwise, the application will not be accessible.
NOTE: See Kubernetes Debugging Services for more information.
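If a service exists but the application is still unreachable, a useful first check is whether the service actually has endpoints, i.e. whether its selector matches a running, ready pod; a quick sketch:
# An empty ENDPOINTS column means no ready pod matches the service selector
$ kubectl get endpoints cjoc master1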
Ingress
Ingresses represent the routes used to access the applications; an ingress can be thought of as a load balancer.
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ing/cjoc cje.support-core.beescloud.k8s.local af9463f6a2b68... 80 21h
ing/default cje.support-core.beescloud.k8s.local af9463f6a2b68... 80 21h
ing/master1 cje.support-core.beescloud.k8s.local af9463f6a2b68... 80 14h
The required ingresses for CloudBees Core to work are:
- ing/default, the default entry point to the cluster
- ing/cjoc, for access to Operations Center
- ing/<MASTER_ID>, one per master, for access to that master
The product expects these ingresses to be present, so they must not be modified, even to reduce their scope or complexity. Modifying ingresses at the Kubernetes level might cause issues in the product, such as Managed Masters becoming unable to communicate correctly with Operations Center.
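To review an ingress without modifying it, describe is safe and shows the rules and the backends each rule routes to; for example:
$ kubectl describe ingress cjoc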
Persistent Volume Claims (PVC)
Persistent volume claims (PVCs) represent the volumes associated with each application running in the cluster.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/jenkins-home-cjoc-0 Bound pvc-c5cad012-2b69-11e8-80fc-12582571ed5c 20Gi RWO gp2 21h
pvc/jenkins-home-master1-0 Bound pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c 50Gi RWO gp2 14h
PVCs events
The table below summarizes the most common PVC-related events that might occur in CloudBees Core.
To obtain the list of events associated with a given PVC, run:
$ kubectl describe pvc <PVC_NAME>
For example:
$ kubectl describe pvc jenkins-home-cjoc-0
Status | Events | Cause |
---|---|---|
Pending | no persistent volumes available for this claim and no storage class is set | There is no default storageclass; follow these instructions to set a default storageclass |
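Assuming gp2 is the storage class you want as the default (as in the outputs above), it can be marked default with a patch; a minimal sketch following the standard Kubernetes procedure:
$ kubectl get storageclass
$ kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'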
Persistent Volume (PV)
Persistent volumes (PVs) represent the volumes created in the cluster.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-c5cad012-2b69-11e8-80fc-12582571ed5c 20Gi RWO Delete Bound cje-on-support-core/jenkins-home-cjoc-0 gp2 21h
pv/pvc-e4b5e473-2ba2-11e8-80fc-12582571ed5c 50Gi RWO Delete Bound cje-on-support-core/jenkins-home-master1-0 gp2 14h
Accessing $JENKINS_HOME
Accessing Jenkins Home Directory (Pod Running)
By running the following sequence of commands, you can determine the path of the $JENKINS_HOME inside a given pod for a specific CloudBees Core instance.
# Get the location of the $JENKINS_HOME
$ kubectl describe pod master2-0 | grep " jenkins-home " | awk '{print $1}'
/var/jenkins_home
# Access the bash of a given pod
$ kubectl exec master2-0 -i -t -- bash -i -l
master2-0:/$ cd /var/jenkins_home/
master2-0:~$ ps -ef
PID USER TIME COMMAND
1 jenkins 0:00 /sbin/tini -- /usr/local/bin/launch.sh
5 jenkins 1:46 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slav
516 jenkins 0:00 bash -i -l
524 jenkins 0:00 ps -ef
master2-0:~$ ps -ef | grep java
5 jenkins 1:46 java -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Duser.home=/var/jenkins_home -Xmx1433m -Xms1433m -Djenkins.model.Jenkins.slaveAgentPortEnforce=true -Djenkins.model.Jenkins.slaveAgentPort=50000 -DMASTER_GRANT_ID=270bd80c-3e5c-498c-88fe-35ac9e11f3d3 -Dcb.IMProp.warProfiles.cje=kubernetes.json -DMASTER_INDEX=1 -Dcb.IMProp.warProfiles=kubernetes.json -DMASTER_OPERATIONSCENTER_ENDPOINT=http://cjoc/cjoc -DMASTER_NAME=master2 -DMASTER_ENDPOINT=http://cje.support-core.beescloud.k8s.local/master2/ -jar -Dcb.distributable.name=Docker Common CJE -Dcb.distributable.commit_sha=888f01a54c12cfae5c66ec27fd4f2a7346097997 /usr/share/jenkins/jenkins.war --webroot=/tmp/jenkins/war --pluginroot=/tmp/jenkins/plugins --prefix=/master2/
528 jenkins 0:00 grep java
# Perform the required operations. For example, copy the jobs directory out of the pod:
$ kubectl cp master2-0:/var/jenkins_home/jobs/ ./jobs/
tar: removing leading '/' from member names
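While inside the pod, or directly through exec, it is also worth checking how much space is left on the $JENKINS_HOME volume (assuming the image ships the usual df utility):
# Check free space on the Jenkins home volume
$ kubectl exec master2-0 -- df -h /var/jenkins_home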
Accessing Jenkins Home Directory (Pod Not Running)
# Stop a pod
$ kubectl scale statefulset/master2 --replicas=0
statefulset "master2" scaled
# Create a rescue pod that mounts the $JENKINS_HOME volume
# without affecting its contents
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: rescue-pod
spec:
  volumes:
    - name: rescue-storage
      persistentVolumeClaim:
        claimName: jenkins-home-master2-0
  containers:
    - name: rescue-container
      image: nginx
      volumeMounts:
        - mountPath: "/tmp/jenkins-home"
          name: rescue-storage
EOF
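# Optionally, wait until the rescue pod is ready before opening a shell
# (kubectl wait requires kubectl v1.11+; on older clients, poll kubectl get pod instead)
$ kubectl wait --for=condition=Ready pod/rescue-pod --timeout=120s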
# Open a bash shell in the rescue pod
$ kubectl exec rescue-pod -i -t -- bash -i -l
mesg: ttyname failed: Success
root@rescue-pod:/# cd /tmp/jenkins-home/
root@rescue-pod:/tmp/jenkins-home#
# Perform the required operations. For example, copy the jobs directory out of the volume:
$ kubectl cp rescue-pod:/tmp/jenkins-home/jobs/ ./jobs/
tar: removing leading '/' from member names
# Delete the rescue pod
$ kubectl delete pod rescue-pod
pod "rescue-pod" deleted
# Start the pod
$ kubectl scale statefulset/master2 --replicas=1
statefulset "master2" scaled
Operations Center Setup Customization
The Operations Center instance can be configured either by editing cloudbees-core.yml or by using the Kubernetes command line.
# Set the memory to 2G
$ kubectl patch statefulset cjoc -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","resources":{"limits":{"memory": "2G"}}}]}}}}'
statefulset "cjoc" patched
# Set initialDelay to 320 seconds
$ kubectl patch statefulset cjoc -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","livenessProbe":{"initialDelaySeconds":320}}]}}}}'
statefulset "cjoc" patched
# Set timeout to 10 seconds
$ kubectl patch statefulset cjoc -p '{"spec":{"template":{"spec":{"containers":[{"name":"jenkins","livenessProbe":{"timeoutSeconds":10}}]}}}}'
statefulset "cjoc" patched
Performance Issues - High CPU / Blocked Threads
# export cluster information
$ kubectl get pod,svc,endpoints,statefulset,ingress,pvc,pv,sa,role,rolebinding -o yaml > to-el-cluster.yml
# Copy the jenkinshangWithJstack.sh script into the pod
$ kubectl cp ~/Downloads/jenkinshangWithJstack.sh master1-0:/tmp/
$ kubectl exec master1-0 -- jps
5 jenkins.war
8807 Jps
$ kubectl exec master1-0 -- chmod u+x /tmp/jenkinshangWithJstack.sh
# The script currently has to be run from inside the pod/container
$ kubectl exec master1-0 -it -- bash -il
master1-0:/$ /tmp/jenkinshangWithJstack.sh 5 60 5
$ kubectl cp master1-0:/tmp/jenkinshangWithJstack.5.output.tar ./
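If the helper script is not available, a single thread dump can be captured directly, assuming the container image ships a full JDK with jstack on the PATH (PID 5 is the Jenkins process found with jps above):
$ kubectl exec master1-0 -- jstack -l 5 > master1-jstack.txt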
Expanding Managed Master Disk Space
A Managed Master may run out of disk space when the provisioned storage is insufficient for the users of that master. Kubernetes persistent volumes are fixed sizes defined when the volume is created. Expanding the space for a Managed Master requires a restore of the existing Managed Master into a persistent volume for a new (larger) Managed Master.
Use these steps to restore a backup of an existing Managed Master to a new (larger) Managed Master:
1. On the master that needs more space, run the backup job.
2. Stop the master in Operations Center.
3. Delete the persistent volume claim in Kubernetes:
$ kubectl delete pvc jenkins-home-<MASTER_NAME>-0
4. Change the size of the master's volume in its configuration.
5. Start the master again in Operations Center.
6. Create a restore job on the master to restore from the backup made in step 1.
7. Run the restore job and watch the console output of the job; there will be a link to restart the master after the restore completes.
8. After the expanded master restarts, verify the jobs on it.
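After the restore completes, the new capacity can be confirmed from Kubernetes; a quick check (replace <MASTER_NAME> as in step 3):
$ kubectl get pvc jenkins-home-<MASTER_NAME>-0 -o jsonpath='{.status.capacity.storage}'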
Troubleshooting steps for diagnosing an unhealthy Kubernetes node
While CloudBees is not a Kubernetes vendor, our clients often encounter problems in their Kubernetes Cluster that present themselves as issues with CloudBees software.
Follow these steps to help diagnose a Kubernetes node that is not healthy:
1. Run the following command to check which Kubernetes node each pod is running on, and see whether there is a correlation between the node running a pod and the pods encountering the issue:
$ kubectl -n cloudbees-core get node,pod -o wide
2. Next, the following commands provide more information about a node or pod that is experiencing problems:
$ kubectl describe node <NODE_NAME>
$ kubectl describe pod <POD_NAME>
3. Use the CloudBees Core requirements validation tool to check for common Kubernetes issues that could impact your usage of CloudBees Core:
$ cloudbees check kubernetes --host-name example.com
Here is an example of the expected output.
4. Use the steps from the Kubernetes Monitor Node Health documentation to run the node problem detector.
If you are able to determine which Kubernetes node has the issue, follow the documentation to Safely Drain a Node while respecting the PodDisruptionBudget.
$JENKINS_HOME recovery
Backup in an EBS Volume
The following section describes the actions required to back up a $JENKINS_HOME directory tree to an EBS volume.
Backups can be performed with the CloudBees Backup plugin or with the AWS CLI. In cases where an interactive backup is needed, log in as a CloudBees Core administrator and follow this procedure:
# Stop the master before backup (either from the UI or from the command line)
$ kubectl scale statefulset/master2 --replicas=0
statefulset "master2" scaled
# Find the current persistent volume (pv)
$ kubectl get pv/pvc-1ad36f90-2607-11e8-aa97-128b794e99f4 -o go-template={{.spec.awsElasticBlockStore.volumeID}}
aws://us-east-1e/vol-0263c2e40981587ed
Then, go to AWS Console / EBS Volumes, create a snapshot of the volume, and add the following new tags to it:
- KubernetesCluster
- Name
- kubernetes.io/cluster/<clustername> (replace with your cluster name)
- kubernetes.io/created-for/pv/name
- kubernetes.io/created-for/pvc/name
- kubernetes.io/created-for/pvc/namespace
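Alternatively, assuming the AWS CLI is installed and configured for the cluster's account, the snapshot can be created from the command line using the volume ID found above; a minimal sketch (newer CLI versions accept --tag-specifications; on older ones, tag the snapshot afterwards with aws ec2 create-tags):
$ aws ec2 create-snapshot \
    --volume-id vol-0263c2e40981587ed \
    --description "master2 jenkins_home backup" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=KubernetesCluster,Value=<CLUSTER_NAME>},{Key=Name,Value=<SNAPSHOT_NAME>}]'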
# Start the instance (either from the UI or from the command line)
$ kubectl scale statefulset/master2 --replicas=1
statefulset "master2" scaled
Restore to an EBS Volume
This section provides the steps to perform a disaster recovery operation given a snapshot in AWS.
First, stop the instance we would like to restore.
# Stop the instance (either from the Operations Center UI or from command line)
$ kubectl scale statefulset <MM_STATEFULSET_NAME> --replicas=0
Then, export the current configuration of the PV and PVC of the Managed Master you would like to restore.
# Export the PVC and the PV
$ kubectl get pv <PV> --export -o=yaml > mm-pv.yaml
$ kubectl get pvc <PVC> --export -o=yaml > mm-pvc.yaml
Now, we can proceed to delete the PVC and the PV.
# Delete the PVC and the PV
$ kubectl delete pvc <PVC>
$ kubectl delete pv <PV>
Go to AWS Console / EBS / Snapshots and create a volume from the snapshot. Before doing this, ensure that all tags from the snapshot are copied so they can be applied to the new volume; note that by default the new volume will often be created in a different availability zone.
The new volume must be in the same availability zone as the old one.
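Assuming the AWS CLI is configured, the volume can also be created from the snapshot on the command line, with the availability zone stated explicitly (the zone and snapshot ID below are placeholders):
$ aws ec2 create-volume \
    --snapshot-id <SNAPSHOT_ID> \
    --availability-zone us-east-1b \
    --volume-type gp2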
Before manually creating the PV and the PVC, edit mm-pv.yaml and mm-pvc.yaml to remove dynamic information such as selfLink and uid.
In mm-pv.yaml, the volumeID must contain the new volume reference (in case it changed), and the claimRef section must be deleted.
In mm-pvc.yaml, ensure that volumeName corresponds to the PV we just created.
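Once the two files have been edited, recreate the PV and the PVC from them:
# Recreate the PV and the PVC from the edited files
$ kubectl create -f mm-pv.yaml
$ kubectl create -f mm-pvc.yaml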
At this point, we can start the instance again.
# Start the instance (either from the Operations Center UI or from command line)
$ kubectl scale statefulset <MM_STATEFULSET_NAME> --replicas=1
Below are two examples of mm-pv.yaml and mm-pvc.yaml before being applied.
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: us-east-1
    failure-domain.beta.kubernetes.io/zone: us-east-1b
  name: pvc-cf8db87b-34de-11e9-86e4-1201fa18b3da
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-1b/<VOLUME_ID>
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
status: {}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    com.cloudbees.cje.tenant: mm-1
    com.cloudbees.cje.type: master
    com.cloudbees.pse.tenant: mm-1
    com.cloudbees.pse.type: master
  name: jenkins-home-mm-1-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: gp2
  volumeName: pvc-cf8db87b-34de-11e9-86e4-1201fa18b3da
status: {}
Creating a support request
You can call on CloudBees to help resolve your problems. To do this, submit a support request through CloudBees Support Portal. In your request, state the problem and the steps to reproduce the problem. Include support bundles and a cluster description as noted below.
To learn how to create a support bundle, refer to Generating a support bundle.
Required Data
Cluster and Operations Center Data
# Create required data folder
$ mkdir cloudbees-core-required-data
$ cd cloudbees-core-required-data
# Dump cluster info
$ kubectl cluster-info dump --output-directory=./cluster-state/
# Copy the Operations Center bundles
$ kubectl cp cjoc-0:/var/jenkins_home/support/ ./cjoc-support/
# Cluster information
$ kubectl cluster-info > 000-cluster-info.txt
# cluster description
$ kubectl get node,pod,statefulset,svc,endpoints,ingress,pvc,pv,sa,role,rolebinding -o wide > cloudbees-core-cluster-wide.txt
$ kubectl get node,pod,statefulset,svc,endpoints,ingress,pvc,pv,sa,role,rolebinding -o yaml > cloudbees-core-cluster-wide.yml
Master Data
# Also grab the cluster and Operations Center data
# Copy master1 bundles
$ kubectl cp master1-0:/var/jenkins_home/support/ ./master1-support/
Connectivity Checks
# Also grab the cluster, Operations Center and master data
$ kubectl exec -ti cjoc-0 -- curl localhost:50000 > 001-cjoc-curl-local-50000.txt
$ kubectl exec -ti master1-0 -- curl cjoc:50000 > 002-master1-curl-cjoc-50000.txt
$ kubectl exec -ti master1-0 -- curl 100.66.207.191:50000 > 003-master1-curl-cjoc-ip-50000.txt
$ kubectl exec -ti cjoc-0 -- curl -Iv http://master1.default.svc.cluster.local/master1/ > 004-cjoc-curl-master1.txt
$ kubectl exec -ti master1-0 -- curl -Iv http://cjoc/cjoc/ > 005-master1-curl-cjoc.txt
$ kubectl exec -ti cjoc-0 -- curl -Iv http://100.67.1.49:8080/master1/ > 006-cjoc-curl-master1-ip.txt
$ kubectl exec -ti master1-0 -- curl -Iv http://100.66.207.191:8080/cjoc/ > 007-master1-curl-cjoc-ip.txt