[~]$ juju status --color
Model                        Controller          Cloud/Region   Version  SLA
conjure-kubernetes-core-c71  conjure-up-aws-cfb  aws/us-east-2  2.2.4    unsupported

App                Version  Status   Scale  Charm              Store       Rev  OS      Notes
easyrsa            3.0.1    active       1  easyrsa            jujucharms   19  ubuntu
etcd               2.3.8    active       1  etcd               jujucharms   53  ubuntu
flannel            0.9.0    waiting      2  flannel            jujucharms   32  ubuntu
kubernetes-master  1.8.0    waiting      1  kubernetes-master  jujucharms   55  ubuntu  exposed
kubernetes-worker  1.8.0    active       1  kubernetes-worker  jujucharms   59  ubuntu  exposed

Unit                  Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*            active    idle   1/lxd/0  10.0.135.15                     Certificate Authority connected.
etcd/0*               active    idle   1        13.58.179.90    2379/tcp        Healthy with 1 known peer
kubernetes-master/0*  waiting   idle   1        13.58.179.90    6443/tcp        Waiting for kube-system pods to start
  flannel/1           waiting   idle            13.58.179.90                    Waiting for Flannel
kubernetes-worker/0*  active    idle   0        18.221.183.95   80/tcp,443/tcp  Kubernetes worker running.
  flannel/0*          active    idle            18.221.183.95                   Flannel subnet 10.1.46.1/24

Machine  State    DNS            Inst id              Series  AZ          Message
0        started  18.221.183.95  i-0e3732fedd2e29eef  xenial  us-east-2a  running
1        started  13.58.179.90   i-08e7af80c7f9dc876  xenial  us-east-2b  running
1/lxd/0  started  10.0.135.15    juju-9daee5-1-lxd-0  xenial  us-east-2b  Container started

Relation provider                    Requirer                             Interface         Type
easyrsa:client                       etcd:certificates                    tls-certificates  regular
easyrsa:client                       kubernetes-master:certificates       tls-certificates  regular
easyrsa:client                       kubernetes-worker:certificates       tls-certificates  regular
etcd:cluster                         etcd:cluster                         etcd              peer
etcd:db                              flannel:etcd                         etcd              regular
etcd:db                              kubernetes-master:etcd               etcd              regular
kubernetes-master:cni                flannel:cni                          kubernetes-cni    subordinate
kubernetes-master:kube-api-endpoint  kubernetes-worker:kube-api-endpoint  http              regular
kubernetes-master:kube-control       kubernetes-worker:kube-control       kube-control      regular
kubernetes-worker:cni                flannel:cni                          kubernetes-cni    subordinate

[~]$ juju ssh 1 'cat /var/snap/kube-apiserver/current/args'
--admission-control "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DefaultTolerationSeconds"
--allow-privileged=false
--basic-auth-file "/root/cdk/basic_auth.csv"
--etcd-cafile "/root/cdk/etcd/client-ca.pem"
--etcd-certfile "/root/cdk/etcd/client-cert.pem"
--etcd-keyfile "/root/cdk/etcd/client-key.pem"
--etcd-servers "https://172.31.25.174:2379"
--insecure-bind-address "127.0.0.1"
--insecure-port 8080
--kubelet-certificate-authority "/root/cdk/ca.crt"
--kubelet-client-certificate "/root/cdk/client.crt"
--kubelet-client-key "/root/cdk/client.key"
--logtostderr
--min-request-timeout 300
--service-account-key-file "/root/cdk/serviceaccount.key"
--service-cluster-ip-range "10.152.183.0/24"
--storage-backend "etcd2"
--tls-cert-file "/root/cdk/server.crt"
--tls-private-key-file "/root/cdk/server.key"
--token-auth-file "/root/cdk/known_tokens.csv"
--v 4
Connection to 13.58.179.90 closed.

[~]$ aws ec2 create-volume --availability-zone=us-east-2a --size=10 --volume-type=gp2 --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=postgres-pv},{Key=purpose,Value=test}]'
{
    "AvailabilityZone": "us-east-2a",
    "Tags": [
        {
            "Value": "postgres-pv",
            "Key": "Name"
        },
        {
            "Value": "test",
            "Key": "purpose"
        }
    ],
    "Encrypted": false,
    "VolumeType": "gp2",
    "VolumeId": "vol-0445bc1b6acae0622",
    "State": "creating",
    "Iops": 100,
    "SnapshotId": "",
    "CreateTime": "2017-10-19T15:20:08.892Z",
    "Size": 10
}

$ kubectl get pv,pvc,sc
NAME            CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                 STORAGECLASS  REASON  AGE
pv/postgres-pv  10Gi      RWO           Retain          Bound   default/postgres-pvc  postgres-sc           14s

NAME              STATUS  VOLUME       CAPACITY  ACCESS MODES  STORAGECLASS  AGE
pvc/postgres-pvc  Bound   postgres-pv  10Gi      RWO           postgres-sc   10s

NAME                        PROVISIONER
storageclasses/postgres-sc  kubernetes.io/aws-ebs

$ cat aws-postgres-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: postgres-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: us-east-2a
  iopsPerGB: "10"

$ cat aws-postgres-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: postgres-sc
  awsElasticBlockStore:
    # volumeID: VolumeId
    volumeID: vol-0445bc1b6acae0622
    fsType: ext4

$ cat aws-postgres-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: postgres-sc
  resources:
    requests:
      storage: 8Gi

$ cat pod-ebs.yml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: postgres-pv
  volumes:
    - name: postgres-pv
      persistentVolumeClaim:
        claimName: postgres-pvc

$ kubectl get pod mypod
NAME   READY  STATUS             RESTARTS  AGE
mypod  0/1    ContainerCreating  0         41s

$ kubectl describe pod mypod
Name:         mypod
Namespace:    default
Node:         ip-172-31-5-169/172.31.5.169
Start Time:   Thu, 19 Oct 2017 10:25:29 -0500
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
Containers:
  myfrontend:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9hx5l (ro)
      /var/www/html from postgres-pv (rw)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  postgres-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-pvc
    ReadOnly:   false
  default-token-9hx5l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9hx5l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age  From               Message
  ----     ------     ---- ----               -------
  Normal   Scheduled  18s  default-scheduler  Successfully assigned mypod to ip-172-31-5-169
  Warning
FailedMount  18s  kubelet, ip-172-31-5-169  MountVolume.SetUp failed for volume "postgres-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/bd29a483-b4e1-11e7-8752-06b7fdf65ee2/volumes/kubernetes.io~aws-ebs/postgres-pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-0445bc1b6acae0622 /var/lib/kubelet/pods/bd29a483-b4e1-11e7-8752-06b7fdf65ee2/volumes/kubernetes.io~aws-ebs/postgres-pv
Output: Running scope as unit run-r0b74d5153d4b4fdba2064195dffbe053.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-0445bc1b6acae0622 does not exist
  Normal   SuccessfulMountVolume  18s  kubelet, ip-172-31-5-169  MountVolume.SetUp succeeded for volume "default-token-9hx5l"
  [each retry below logs the identical systemd-run mounting command and the same "special device ... does not exist" error; only the transient scope unit name changes]
  Warning  FailedMount  17s  kubelet, ip-172-31-5-169  MountVolume.SetUp failed for volume "postgres-pv" : mount failed: exit status 32
  Warning  FailedMount  16s  kubelet, ip-172-31-5-169  MountVolume.SetUp failed for volume "postgres-pv" : mount failed: exit status 32
  Warning  FailedMount  14s  kubelet, ip-172-31-5-169  MountVolume.SetUp failed for volume "postgres-pv" : mount failed: exit status 32
  Warning  FailedMount  10s  kubelet, ip-172-31-5-169  MountVolume.SetUp failed for volume "postgres-pv" : mount failed: exit status 32
  Warning  FailedMount  2s   kubelet, ip-172-31-5-169  MountVolume.SetUp failed for volume "postgres-pv" : mount failed: exit status 32
Output: Running scope as unit run-r4e369e33e38c41e887cd6dac25e4bd8d.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-0445bc1b6acae0622 does not exist

$ kubectl get po --all-namespaces
NAMESPACE    NAME                                     READY  STATUS             RESTARTS  AGE
default      default-http-backend-tbb59               1/1    Running            0         19m
default      mypod                                    0/1    ContainerCreating  0         2m
default      nginx-ingress-controller-x4kc8           1/1    Running            0         19m
kube-system  heapster-v1.5.0-beta.0-5c65bd6446-bzgmv  4/4    Running            0         15m
kube-system  kube-dns-778977457c-hz8sm                3/3    Running            0         20m
kube-system  kubernetes-dashboard-76c679977c-jxf85    1/1    Running            0         20m
kube-system  monitoring-influxdb-grafana-v4-g5zcn     2/2    Running            0         20m

[~]$ juju ssh 1 sudo snap list
Name                     Version    Rev   Developer       Notes
cdk-addons               1.8.0      169   canonical       -
core                     16-2.28.1  3017  canonical       core
etcd                     2.3.8      55    tvansteenburgh  -
kube-apiserver           1.8.0      173   canonical       -
kube-controller-manager  1.8.0      164   canonical       -
kube-scheduler           1.8.0      173   canonical       -
kubectl                  1.8.0      173   canonical       classic
Connection to 13.58.179.90 closed.

[~]$ juju ssh 1 "sudo snap get kube-apiserver cloud-provider"
error: snap "kube-apiserver" has no "cloud-provider" configuration option
Connection to 13.58.179.90 closed.
[~]$ juju ssh 1 "sudo snap get kube-controller-manager cloud-provider"
error: snap "kube-controller-manager" has no "cloud-provider" configuration option
Connection to 13.58.179.90 closed.
[~]$ juju ssh 1 "sudo snap set kube-controller-manager cloud-provider=aws"
Connection to 13.58.179.90 closed.
[~]$ juju ssh 1 "sudo snap set kube-apiserver cloud-provider=aws"
Connection to 13.58.179.90 closed.
[~]$ juju ssh 1 "sudo snap get kube-apiserver cloud-provider"
aws
Connection to 13.58.179.90 closed.
[~]$ juju ssh 1 "sudo snap get kube-controller-manager cloud-provider"
aws
Connection to 13.58.179.90 closed.

[~]$ juju ssh 0 sudo snap list
Name        Version    Rev   Developer  Notes
core        16-2.28.1  3017  canonical  core
kube-proxy  1.8.0      173   canonical  classic
kubectl     1.8.0      173   canonical  classic
kubelet     1.8.0      173   canonical  classic
Connection to 18.221.183.95 closed.
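(Editor's note on the failure above: without `cloud-provider=aws`, kubelet never asks AWS to attach the EBS volume to the instance, so the device it later tries to bind-mount under /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/ is never created — hence "special device ... does not exist". Setting the snap option only records it; the daemons pick it up on restart, after which the rendered args file would be expected to carry the matching flag. A hypothetical fragment, assuming the snap renders options one flag per line as in the args file shown earlier:)

```
# assumed addition to /var/snap/kube-apiserver/current/args (and likewise to the
# kube-controller-manager and kubelet args files) after `snap set ... cloud-provider=aws`
# and a service restart:
--cloud-provider "aws"
```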
[~]$ juju ssh 0 "sudo snap set kubelet cloud-provider=aws"
Connection to 18.221.183.95 closed.
[~]$ juju ssh 0 "sudo snap get kubelet cloud-provider"
aws
Connection to 18.221.183.95 closed.

[~]$ juju ssh 0 sudo reboot
[~]$ juju ssh 1 sudo reboot
[~]$ juju debug-log
unit-kubernetes-master-0: 10:52:03 INFO juju.worker.uniter.operation ran "config-changed" hook
machine-1: 10:52:03 INFO juju.utils.packaging.manager Running: apt-get --option=Dpkg::Options::=--force-confold --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install lxd
machine-1: 10:52:04 INFO juju.tools.lxdclient using LXD API version "1.0"
machine-1: 10:52:04 INFO juju.tools.lxdclient using LXD API version "1.0"
machine-1: 10:52:04 INFO juju.worker start "lxd-provisioner"
machine-1: 10:52:04 INFO juju.worker stopped "1-container-watcher", err: <nil>
machine-1: 10:52:04 INFO juju.tools.lxdclient using LXD API version "1.0"
machine-1: 10:52:04 INFO juju.provisioner machine 1/lxd/0 already started as instance "juju-9daee5-1-lxd-0"
machine-1: 10:52:04 INFO juju.provisioner provisioner-harvest-mode is set to destroyed; unknown instances not stopped []
machine-1: 10:52:04 INFO juju.provisioner maintainMachines: 1/lxd/0
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed The connection to the server localhost:8080 was refused - did you specify the right host or port?
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed Traceback (most recent call last):
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/snap/cdk-addons/169/apply", line 93, in <module>
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     main()
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/snap/cdk-addons/169/apply", line 12, in main
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     render_templates()
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/snap/cdk-addons/169/apply", line 23, in render_templates
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     "num_nodes": get_node_count()
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/snap/cdk-addons/169/apply", line 77, in get_node_count
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     output = kubectl("get", "nodes", "-o", "name")
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/snap/cdk-addons/169/apply", line 73, in kubectl
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     return subprocess.check_output(cmd)
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     **kwargs).stdout
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed   File "/usr/lib/python3.5/subprocess.py", line 708, in run
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed     output=stdout, stderr=stderr)
unit-kubernetes-master-0: 10:51:21 DEBUG unit.kubernetes-master/0.config-changed subprocess.CalledProcessError: Command '['/snap/cdk-addons/169/kubectl', 'get', 'nodes', '-o', 'name']' returned non-zero exit status 1
unit-kubernetes-master-0: 10:51:21 INFO unit.kubernetes-master/0.juju-log Addons are not ready yet.
[the same "connection to the server localhost:8080 was refused" message and identical traceback repeat at 10:51:41 and 10:52:02, each followed by "Addons are not ready yet."]
unit-kubernetes-master-0: 10:52:02 INFO unit.kubernetes-master/0.juju-log Invoking reactive handler: reactive/kubernetes_master.py:437:send_data
unit-kubernetes-master-0: 10:52:02 INFO unit.kubernetes-master/0.juju-log Invoking reactive handler: reactive/kubernetes_master.py:515:create_self_config
unit-kubernetes-master-0: 10:52:02 DEBUG unit.kubernetes-master/0.config-changed Cluster "juju-cluster" set.
unit-kubernetes-master-0: 10:52:02 DEBUG unit.kubernetes-master/0.config-changed Property "users" unset.
unit-kubernetes-master-0: 10:52:02 DEBUG unit.kubernetes-master/0.config-changed User "admin" set.
unit-kubernetes-master-0: 10:52:02 DEBUG unit.kubernetes-master/0.config-changed Context "juju-context" modified.
unit-kubernetes-master-0: 10:52:03 DEBUG unit.kubernetes-master/0.config-changed Switched to context "juju-context".
unit-kubernetes-master-0: 10:52:03 INFO unit.kubernetes-master/0.juju-log Invoking reactive handler: reactive/kubernetes_master.py:430:push_service_data
unit-kubernetes-master-0: 10:52:03 INFO unit.kubernetes-master/0.juju-log Invoking reactive handler: reactive/kubernetes_master.py:376:etcd_data_change
^C
[~]$ juju status
Model                        Controller          Cloud/Region   Version  SLA
conjure-kubernetes-core-c71  conjure-up-aws-cfb  aws/us-east-2  2.2.4    unsupported

App                Version  Status   Scale  Charm              Store       Rev  OS      Notes
easyrsa            3.0.1    active       1  easyrsa            jujucharms   19  ubuntu
etcd               2.3.8    active       1  etcd               jujucharms   53  ubuntu
flannel            0.9.0    active       2  flannel            jujucharms   32  ubuntu
kubernetes-master  1.8.0    waiting      1  kubernetes-master  jujucharms   55  ubuntu  exposed
kubernetes-worker  1.8.0    waiting      1  kubernetes-worker  jujucharms   59  ubuntu  exposed

Unit                  Workload  Agent      Machine  Public address  Ports           Message
easyrsa/0*            active    idle       1/lxd/0  10.0.135.15                     Certificate Authority connected.
etcd/0*               active    idle       1        13.58.179.90    2379/tcp        Healthy with 1 known peer
kubernetes-master/0*  waiting   executing  1        13.58.179.90    6443/tcp        (update-status) Waiting to retry addon deployment
  flannel/1           active    idle                13.58.179.90                    Flannel subnet 10.1.25.1/24
kubernetes-worker/0*  waiting   idle       0        18.221.183.95   80/tcp,443/tcp  Waiting for kubelet to start.
  flannel/0*          active    idle                18.221.183.95                   Flannel subnet 10.1.46.1/24

Machine  State    DNS            Inst id              Series  AZ          Message
0        started  18.221.183.95  i-0e3732fedd2e29eef  xenial  us-east-2a  running
1        started  13.58.179.90   i-08e7af80c7f9dc876  xenial  us-east-2b  running
1/lxd/0  started  10.0.135.15    juju-9daee5-1-lxd-0  xenial  us-east-2b  Container started

Relation provider                    Requirer                             Interface         Type
easyrsa:client                       etcd:certificates                    tls-certificates  regular
easyrsa:client                       kubernetes-master:certificates       tls-certificates  regular
easyrsa:client                       kubernetes-worker:certificates       tls-certificates  regular
etcd:cluster                         etcd:cluster                         etcd              peer
etcd:db                              flannel:etcd                         etcd              regular
etcd:db                              kubernetes-master:etcd               etcd              regular
kubernetes-master:cni                flannel:cni                          kubernetes-cni    subordinate
kubernetes-master:kube-api-endpoint  kubernetes-worker:kube-api-endpoint  http              regular
kubernetes-master:kube-control       kubernetes-worker:kube-control       kube-control      regular
kubernetes-worker:cni                flannel:cni                          kubernetes-cni    subordinate

2017-10-19 15:51:00 DEBUG config-changed Traceback (most recent call last):
2017-10-19 15:51:00 DEBUG config-changed   File "/snap/cdk-addons/169/apply", line 93, in <module>
2017-10-19 15:51:00 DEBUG config-changed     main()
2017-10-19 15:51:00 DEBUG config-changed   File "/snap/cdk-addons/169/apply", line 12, in main
2017-10-19 15:51:00 DEBUG config-changed     render_templates()
2017-10-19 15:51:00 DEBUG config-changed   File "/snap/cdk-addons/169/apply", line 23, in render_templates
2017-10-19 15:51:00 DEBUG config-changed     "num_nodes": get_node_count()
2017-10-19 15:51:00 DEBUG config-changed   File "/snap/cdk-addons/169/apply", line 77, in get_node_count
2017-10-19 15:51:00 DEBUG config-changed     output = kubectl("get", "nodes", "-o", "name")
2017-10-19 15:51:00 DEBUG config-changed   File "/snap/cdk-addons/169/apply", line 73, in kubectl
2017-10-19 15:51:00 DEBUG config-changed     return subprocess.check_output(cmd)
2017-10-19 15:51:00 DEBUG config-changed   File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
2017-10-19 15:51:00 DEBUG config-changed     **kwargs).stdout
2017-10-19 15:51:00 DEBUG config-changed   File "/usr/lib/python3.5/subprocess.py", line 708, in run
2017-10-19 15:51:00 DEBUG config-changed     output=stdout, stderr=stderr)
2017-10-19 15:51:00 DEBUG config-changed subprocess.CalledProcessError: Command '['/snap/cdk-addons/169/kubectl', 'get', 'nodes', '-o', 'name']' returned non-zero exit status 1
2017-10-19 15:51:00 INFO juju-log Addons are not ready yet.
2017-10-19 15:51:21 DEBUG config-changed The connection to the server localhost:8080 was refused - did you specify the right host or port?
2017-10-19 15:51:21 DEBUG config-changed Traceback (most recent call last):
2017-

Oct 19 15:51:27 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Main process exited, code=exited, status=1/FAILURE
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908112    4246 flags.go:52] FLAG: --tls-cert-file="/root/cdk/server.crt"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908118    4246 flags.go:52] FLAG: --tls-private-key-file="/root/cdk/server.key"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908174    4246 flags.go:52] FLAG: --tls-sni-cert-key="[]"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908188    4246 flags.go:52] FLAG: --token-auth-file="/root/cdk/known_tokens.csv"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908195    4246 flags.go:52] FLAG: --v="4"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908200    4246 flags.go:52] FLAG: --version="false"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908209    4246 flags.go:52] FLAG: --vmodule=""
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908215    4246 flags.go:52] FLAG: --watch-cache="true"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908220    4246 flags.go:52] FLAG: --watch-cache-sizes="[]"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908478    4246 server.go:114] Version: v1.8.0
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908746    4246 interface.go:360] Looking for default routes with IPv4 addresses
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908756    4246 interface.go:365] Default route transits interface "ens3"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.908957    4246 interface.go:174] Interface ens3 is up
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909010    4246 interface.go:222] Interface "ens3" has 2 addresses :[172.31.25.174/20 fe80::4b7:fdff:fef6:5ee2/64].
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909028    4246 interface.go:189] Checking addr 172.31.25.174/20.
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909037    4246 interface.go:196] IP found 172.31.25.174
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909047    4246 interface.go:228] Found valid IPv4 address 172.31.25.174 for interface "ens3".
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909054    4246 interface.go:371] Found active IP 172.31.25.174
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909065    4246 services.go:51] Setting service IP to "10.152.183.1" (read-write).
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909079    4246 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider.
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909186    4246 aws.go:847] Building AWS cloudprovider
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909220    4246 aws.go:810] Zone not specified in configuration file; querying AWS metadata service
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909877    4246 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/placement/avail
Oct 19 15:51:27 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Unit entered failed state.
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909899    4246 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/pla
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909940    4246 regions.go:74] found AWS region "us-east-2"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909952    4246 aws_credentials.go:90] registering credentials provider for AWS region "us-east-2"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.909960    4246 plugins.go:41] Registered credential provider "aws-ecr-us-east-2"
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.910484    4246 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/instance-id <ni
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.910497    4246 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/ins
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: I1019 15:51:27.910582    4246 log_handler.go:27] AWS request: ec2 DescribeInstances
Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: error setting the external host value: "aws" cloud provider could not be initialized: could not init cloud provider "aws": error fi
Oct 19 15:51:27 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Failed with result 'exit-code'.
Oct 19 15:51:28 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Service hold-off time over, scheduling restart.
Oct 19 15:51:28 ip-172-31-25-174 systemd[1]: Stopped Service for snap application kube-apiserver.daemon.
Oct 19 15:51:28 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Start request repeated too quickly.
Oct 19 15:51:28 ip-172-31-25-174 systemd[1]: Failed to start Service for snap application kube-apiserver.daemon.
Oct 19 15:51:35 ip-172-31-25-174 systemd[1]: Stopped Service for snap application kube-apiserver.daemon.
Oct 19 15:51:35 ip-172-31-25-174 systemd[1]: Started Service for snap application kube-apiserver.daemon.
Oct 19 15:51:35 ip-172-31-25-174 kub

ubuntu@ip-172-31-25-174:/var/log/juju$ journalctl -xn --no-pager -u snap.kube-apiserver.daemon
-- Logs begin at Thu 2017-10-19 15:51:27 UTC, end at Thu 2017-10-19 16:11:33 UTC. --
Oct 19 16:00:39 ip-172-31-25-174 kube-apiserver.daemon[12379]: I1019 16:00:38.988854   12379 flags.go:52] FLAG: --tls-sni-cert-key="[]"
Oct 19 16:00:39 ip-172-31-25-174 kube-apiserver.daemon[12379]: I1019 16:00:38.988863   12379 flags.go:52] FLAG: --token-auth-file="/root/cdk/known_tokens.csv"
Oct 19 16:00:39 ip-172-31-25-174 kube-apiserver.daemon[12379]: I1019 16:00:38.988868   12379 flags.go:52] FLAG: --v="4"
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Main process exited, code=exited, status=1/FAILURE
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Unit entered failed state.
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Failed with result 'exit-code'.
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Service hold-off time over, scheduling restart.
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: Stopped Service for snap application kube-apiserver.daemon.
-- Subject: Unit snap.kube-apiserver.daemon.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit snap.kube-apiserver.daemon.service has finished shutting down.
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: snap.kube-apiserver.daemon.service: Start request repeated too quickly.
Oct 19 16:00:39 ip-172-31-25-174 systemd[1]: Failed to start Service for snap application kube-apiserver.daemon.
-- Subject: Unit snap.kube-apiserver.daemon.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit snap.kube-apiserver.daemon.service has failed.
--
-- The result is failed.
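(Editor's note: the crash loop above happens while the AWS cloud provider initializes — the last API call before the "could not init cloud provider \"aws\"" error (truncated in the paste) is `ec2 DescribeInstances`. With Kubernetes of this era, that failure mode is typically caused by the instance's IAM role lacking EC2 permissions, or by the instances missing the tag the provider uses to find its cluster. The following IAM policy fragment is a sketch of the kind of permissions commonly attached to the master's instance profile for this; it is an assumption for illustration, not something taken from this transcript:)

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeVolumes",
        "ec2:CreateVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```

(The legacy in-tree AWS provider also commonly expects a `KubernetesCluster` tag on the instances so it can identify cluster resources; whether that applies here depends on the deployment.)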