@udyvish
Last active May 15, 2020 19:33
TASK [kubernetes/master : kubeadm | Initialize first master] **********************************************************************************************************************************
task path: /home/centos/kubespray/roles/kubernetes/master/tasks/kubeadm-setup.yml:137
fatal: [two-k8sm-0]: FAILED! => {
"attempts": 3,
"changed": true,
"cmd": [
"timeout",
"-k",
"300s",
"300s",
"/usr/local/bin/kubeadm",
"init",
"--config=/etc/kubernetes/kubeadm-config.yaml",
"--ignore-preflight-errors=all",
"--skip-phases=addon/coredns",
"--upload-certs"
],
"delta": "0:05:00.003894",
"end": "2020-05-15 15:07:02.274517",
"failed_when_result": true,
"invocation": {
"module_args": {
"_raw_params": "timeout -k 300s 300s /usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --skip-phases=addon/coredns --upload-certs",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 124,
"start": "2020-05-15 15:02:02.270623",
"stderr": "W0515 15:02:02.319345 1187 defaults.go:186] The recommended value for \"clusterDNS\" in \"KubeletConfiguration\" is: [10.233.0.10]; the provided value is: [169.254.25.10]\nW0515 15:02:02.319467 1187 validation.go:28] Cannot validate kube-proxy config - no validator is available\nW0515 15:02:02.319474 1187 validation.go:28] Cannot validate kubelet config - no validator is available\n\t[WARNING Firewalld]: firewalld is active, please ensure ports [443 10250] are open or your cluster may not function correctly\n\t[WARNING Port-10259]: Port 10259 is in use\n\t[WARNING Port-10257]: Port 10257 is in use\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists\n\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/\n\t[WARNING FileExisting-tc]: tc not found in system path\n\t[WARNING ExternalEtcdVersion]: Get https://192.168.1.125:2379/version: dial tcp 192.168.1.125:2379: connect: no route to host\nW0515 15:02:18.900511 1187 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nW0515 15:02:18.905422 1187 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nW0515 15:02:18.907232 1187 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"",
"stderr_lines": [
"W0515 15:02:02.319345 1187 defaults.go:186] The recommended value for \"clusterDNS\" in \"KubeletConfiguration\" is: [10.233.0.10]; the provided value is: [169.254.25.10]",
"W0515 15:02:02.319467 1187 validation.go:28] Cannot validate kube-proxy config - no validator is available",
"W0515 15:02:02.319474 1187 validation.go:28] Cannot validate kubelet config - no validator is available",
"\t[WARNING Firewalld]: firewalld is active, please ensure ports [443 10250] are open or your cluster may not function correctly",
"\t[WARNING Port-10259]: Port 10259 is in use",
"\t[WARNING Port-10257]: Port 10257 is in use",
"\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists",
"\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists",
"\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists",
"\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/",
"\t[WARNING FileExisting-tc]: tc not found in system path",
"\t[WARNING ExternalEtcdVersion]: Get https://192.168.1.125:2379/version: dial tcp 192.168.1.125:2379: connect: no route to host",
"W0515 15:02:18.900511 1187 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"",
"W0515 15:02:18.905422 1187 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"",
"W0515 15:02:18.907232 1187 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\""
],
"stdout": "[init] Using Kubernetes version: v1.17.0\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] External etcd mode: Skipping etcd/ca certificate authority generation\n[certs] External etcd mode: Skipping etcd/server certificate generation\n[certs] External etcd mode: Skipping etcd/peer certificate generation\n[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation\n[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/admin.conf\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/kubelet.conf\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/controller-manager.conf\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/scheduler.conf\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"\n[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s\n[kubelet-check] Initial timeout of 40s passed.",
"stdout_lines": [
"[init] Using Kubernetes version: v1.17.0",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Starting the kubelet",
"[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"",
"[certs] Using existing ca certificate authority",
"[certs] Using existing apiserver certificate and key on disk",
"[certs] Using existing apiserver-kubelet-client certificate and key on disk",
"[certs] Using existing front-proxy-ca certificate authority",
"[certs] Using existing front-proxy-client certificate and key on disk",
"[certs] External etcd mode: Skipping etcd/ca certificate authority generation",
"[certs] External etcd mode: Skipping etcd/server certificate generation",
"[certs] External etcd mode: Skipping etcd/peer certificate generation",
"[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation",
"[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation",
"[certs] Using the existing \"sa\" key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/admin.conf\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/kubelet.conf\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/controller-manager.conf\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/scheduler.conf\"",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[controlplane] Adding extra host path mount \"etc-pki-tls\" to \"kube-apiserver\"",
"[controlplane] Adding extra host path mount \"etc-pki-ca-trust\" to \"kube-apiserver\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s",
"[kubelet-check] Initial timeout of 40s passed."
]
}
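The task fails with rc=124: the 300-second timeout expired while kubeadm was still waiting for the control plane to come up ("[kubelet-check] Initial timeout of 40s passed." is the last stdout line). The preflight warnings point at two likely causes: the external etcd member at 192.168.1.125:2379 is unreachable ("no route to host") and firewalld is active. A minimal set of checks from the failing master, assuming kubespray's usual etcd client cert layout under /etc/ssl/etcd/ssl/ (those paths and file names are assumptions, not taken from this log; adjust to your inventory):

# Raw TCP reachability to the etcd member flagged in the preflight warning
timeout 3 bash -c '</dev/tcp/192.168.1.125/2379' && echo "etcd port reachable" || echo "etcd port unreachable"

# TLS-authenticated version check; cert file names below are kubespray's usual defaults, not taken from this log
curl --cacert /etc/ssl/etcd/ssl/ca.pem \
     --cert   /etc/ssl/etcd/ssl/node-$(hostname).pem \
     --key    /etc/ssl/etcd/ssl/node-$(hostname)-key.pem \
     https://192.168.1.125:2379/version

# firewalld is reported active; confirm the ports kubeadm warned about (443, 10250) are actually open
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-services

The kubelet journal from the same node follows.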
sudo journalctl -u kubelet -n 2000 --no-pager
-- Logs begin at Fri 2020-05-15 07:56:11 EDT, end at Fri 2020-05-15 15:32:58 EDT. --
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.203342 4523 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.276972 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277007 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277028 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277046 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277098 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277145 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277151 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277160 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277166 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277184 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277212 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277224 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277234 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277254 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277273 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.277292 4523 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277311 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277333 4523 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277447 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277551 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277591 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277626 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277652 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277696 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: I0515 15:26:13.277750 4523 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.378433 4523 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.403463 4523 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: Get https://192.168.1.36:443/apis/storage.k8s.io/v1/csinodes/two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.477550 4523 controller.go:135] failed to ensure node lease exists, will retry in 800ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.479032 4523 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:26:13 two-k8sm-0 kubelet[4523]: E0515 15:26:13.579193 4523 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:17 two-k8sm-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 15 15:27:17 two-k8sm-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 15:27:27 two-k8sm-0 systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 15 15:27:27 two-k8sm-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16.
May 15 15:27:27 two-k8sm-0 systemd[1]: Stopped Kubernetes Kubelet Server.
May 15 15:27:27 two-k8sm-0 systemd[1]: Started Kubernetes Kubelet Server.
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041177 4712 flags.go:33] FLAG: --add-dir-header="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041384 4712 flags.go:33] FLAG: --address="0.0.0.0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041392 4712 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041402 4712 flags.go:33] FLAG: --alsologtostderr="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041408 4712 flags.go:33] FLAG: --anonymous-auth="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041414 4712 flags.go:33] FLAG: --application-metrics-count-limit="100"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041420 4712 flags.go:33] FLAG: --authentication-token-webhook="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041425 4712 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041432 4712 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041442 4712 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041448 4712 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041453 4712 flags.go:33] FLAG: --azure-container-registry-config=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041458 4712 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041464 4712 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041469 4712 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041475 4712 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041480 4712 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041488 4712 flags.go:33] FLAG: --cgroup-root=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041493 4712 flags.go:33] FLAG: --cgroups-per-qos="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041498 4712 flags.go:33] FLAG: --chaos-chance="0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041505 4712 flags.go:33] FLAG: --client-ca-file=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041510 4712 flags.go:33] FLAG: --cloud-config=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041515 4712 flags.go:33] FLAG: --cloud-provider=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041520 4712 flags.go:33] FLAG: --cluster-dns="[]"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041527 4712 flags.go:33] FLAG: --cluster-domain=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041532 4712 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041537 4712 flags.go:33] FLAG: --cni-cache-dir="/var/lib/cni/cache"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041543 4712 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041548 4712 flags.go:33] FLAG: --config="/etc/kubernetes/kubelet-config.yaml"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041554 4712 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041559 4712 flags.go:33] FLAG: --container-log-max-files="5"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041565 4712 flags.go:33] FLAG: --container-log-max-size="10Mi"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041571 4712 flags.go:33] FLAG: --container-runtime="docker"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041578 4712 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041583 4712 flags.go:33] FLAG: --containerd="/run/containerd/containerd.sock"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041589 4712 flags.go:33] FLAG: --contention-profiling="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041594 4712 flags.go:33] FLAG: --cpu-cfs-quota="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041599 4712 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041605 4712 flags.go:33] FLAG: --cpu-manager-policy="none"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041610 4712 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041615 4712 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041621 4712 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041626 4712 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041632 4712 flags.go:33] FLAG: --docker-only="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041637 4712 flags.go:33] FLAG: --docker-root="/var/lib/docker"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041642 4712 flags.go:33] FLAG: --docker-tls="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041648 4712 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041653 4712 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041658 4712 flags.go:33] FLAG: --docker-tls-key="key.pem"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041679 4712 flags.go:33] FLAG: --dynamic-config-dir=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041686 4712 flags.go:33] FLAG: --enable-cadvisor-json-endpoints="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041691 4712 flags.go:33] FLAG: --enable-controller-attach-detach="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041697 4712 flags.go:33] FLAG: --enable-debugging-handlers="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041702 4712 flags.go:33] FLAG: --enable-load-reader="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041707 4712 flags.go:33] FLAG: --enable-server="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041712 4712 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041721 4712 flags.go:33] FLAG: --event-burst="10"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041726 4712 flags.go:33] FLAG: --event-qps="5"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041732 4712 flags.go:33] FLAG: --event-storage-age-limit="default=0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041737 4712 flags.go:33] FLAG: --event-storage-event-limit="default=0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041743 4712 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041755 4712 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041761 4712 flags.go:33] FLAG: --eviction-minimum-reclaim=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041768 4712 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041774 4712 flags.go:33] FLAG: --eviction-soft=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041782 4712 flags.go:33] FLAG: --eviction-soft-grace-period=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041830 4712 flags.go:33] FLAG: --exit-on-lock-contention="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041836 4712 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041841 4712 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041847 4712 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041853 4712 flags.go:33] FLAG: --experimental-dockershim="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041858 4712 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041863 4712 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041868 4712 flags.go:33] FLAG: --experimental-mounter-path=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041874 4712 flags.go:33] FLAG: --fail-swap-on="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041879 4712 flags.go:33] FLAG: --feature-gates=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041886 4712 flags.go:33] FLAG: --file-check-frequency="20s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041891 4712 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041897 4712 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041902 4712 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041907 4712 flags.go:33] FLAG: --healthz-port="10248"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041916 4712 flags.go:33] FLAG: --help="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041921 4712 flags.go:33] FLAG: --hostname-override="two-k8sm-0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041927 4712 flags.go:33] FLAG: --housekeeping-interval="10s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041932 4712 flags.go:33] FLAG: --http-check-frequency="20s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041938 4712 flags.go:33] FLAG: --image-gc-high-threshold="85"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041943 4712 flags.go:33] FLAG: --image-gc-low-threshold="80"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041948 4712 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041954 4712 flags.go:33] FLAG: --image-service-endpoint=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041959 4712 flags.go:33] FLAG: --iptables-drop-bit="15"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041964 4712 flags.go:33] FLAG: --iptables-masquerade-bit="14"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041969 4712 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041974 4712 flags.go:33] FLAG: --kube-api-burst="10"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041981 4712 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041987 4712 flags.go:33] FLAG: --kube-api-qps="5"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041992 4712 flags.go:33] FLAG: --kube-reserved=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.041998 4712 flags.go:33] FLAG: --kube-reserved-cgroup=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042005 4712 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042011 4712 flags.go:33] FLAG: --kubelet-cgroups=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042016 4712 flags.go:33] FLAG: --lock-file=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042021 4712 flags.go:33] FLAG: --log-backtrace-at=":0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042027 4712 flags.go:33] FLAG: --log-cadvisor-usage="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042032 4712 flags.go:33] FLAG: --log-dir=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042037 4712 flags.go:33] FLAG: --log-file=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042042 4712 flags.go:33] FLAG: --log-file-max-size="1800"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042048 4712 flags.go:33] FLAG: --log-flush-frequency="5s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042053 4712 flags.go:33] FLAG: --logtostderr="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042058 4712 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042064 4712 flags.go:33] FLAG: --make-iptables-util-chains="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042070 4712 flags.go:33] FLAG: --manifest-url=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042075 4712 flags.go:33] FLAG: --manifest-url-header=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042082 4712 flags.go:33] FLAG: --master-service-namespace="default"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042087 4712 flags.go:33] FLAG: --max-open-files="1000000"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042095 4712 flags.go:33] FLAG: --max-pods="110"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042101 4712 flags.go:33] FLAG: --maximum-dead-containers="-1"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042106 4712 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042111 4712 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042116 4712 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042122 4712 flags.go:33] FLAG: --network-plugin="cni"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042127 4712 flags.go:33] FLAG: --network-plugin-mtu="0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042132 4712 flags.go:33] FLAG: --node-ip="192.168.1.36"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042137 4712 flags.go:33] FLAG: --node-labels=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042144 4712 flags.go:33] FLAG: --node-status-max-images="50"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042149 4712 flags.go:33] FLAG: --node-status-update-frequency="10s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042154 4712 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042159 4712 flags.go:33] FLAG: --oom-score-adj="-999"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042164 4712 flags.go:33] FLAG: --pod-cidr=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042170 4712 flags.go:33] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.1"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042175 4712 flags.go:33] FLAG: --pod-manifest-path=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042181 4712 flags.go:33] FLAG: --pod-max-pids="-1"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042187 4712 flags.go:33] FLAG: --pods-per-core="0"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042192 4712 flags.go:33] FLAG: --port="10250"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042199 4712 flags.go:33] FLAG: --protect-kernel-defaults="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042204 4712 flags.go:33] FLAG: --provider-id=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042209 4712 flags.go:33] FLAG: --qos-reserved=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042215 4712 flags.go:33] FLAG: --read-only-port="10255"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042220 4712 flags.go:33] FLAG: --really-crash-for-testing="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042225 4712 flags.go:33] FLAG: --redirect-container-streaming="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042231 4712 flags.go:33] FLAG: --register-node="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042236 4712 flags.go:33] FLAG: --register-schedulable="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042241 4712 flags.go:33] FLAG: --register-with-taints=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042247 4712 flags.go:33] FLAG: --registry-burst="10"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042252 4712 flags.go:33] FLAG: --registry-qps="5"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042257 4712 flags.go:33] FLAG: --reserved-cpus=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042262 4712 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042270 4712 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042275 4712 flags.go:33] FLAG: --rotate-certificates="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042280 4712 flags.go:33] FLAG: --rotate-server-certificates="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042285 4712 flags.go:33] FLAG: --runonce="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042291 4712 flags.go:33] FLAG: --runtime-cgroups="/systemd/system.slice"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042306 4712 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042312 4712 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042317 4712 flags.go:33] FLAG: --serialize-image-pulls="true"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042322 4712 flags.go:33] FLAG: --skip-headers="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042328 4712 flags.go:33] FLAG: --skip-log-headers="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042333 4712 flags.go:33] FLAG: --stderrthreshold="2"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042338 4712 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042344 4712 flags.go:33] FLAG: --storage-driver-db="cadvisor"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042349 4712 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042354 4712 flags.go:33] FLAG: --storage-driver-password="root"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042359 4712 flags.go:33] FLAG: --storage-driver-secure="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042367 4712 flags.go:33] FLAG: --storage-driver-table="stats"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042372 4712 flags.go:33] FLAG: --storage-driver-user="root"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042377 4712 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042383 4712 flags.go:33] FLAG: --sync-frequency="1m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042388 4712 flags.go:33] FLAG: --system-cgroups=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042393 4712 flags.go:33] FLAG: --system-reserved=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042399 4712 flags.go:33] FLAG: --system-reserved-cgroup=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042404 4712 flags.go:33] FLAG: --tls-cert-file=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042409 4712 flags.go:33] FLAG: --tls-cipher-suites="[]"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042415 4712 flags.go:33] FLAG: --tls-min-version=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042422 4712 flags.go:33] FLAG: --tls-private-key-file=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042427 4712 flags.go:33] FLAG: --topology-manager-policy="none"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042432 4712 flags.go:33] FLAG: --v="2"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042438 4712 flags.go:33] FLAG: --version="false"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042445 4712 flags.go:33] FLAG: --vmodule=""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042450 4712 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042458 4712 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.042491 4712 feature_gate.go:243] feature gates: &{map[]}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.043971 4712 feature_gate.go:243] feature gates: &{map[]}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.044017 4712 feature_gate.go:243] feature gates: &{map[]}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.052811 4712 mount_linux.go:168] Detected OS with systemd
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.053074 4712 server.go:416] Version: v1.17.0
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.053116 4712 feature_gate.go:243] feature gates: &{map[]}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.053163 4712 feature_gate.go:243] feature gates: &{map[]}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.053242 4712 plugins.go:100] No cloud provider specified.
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.053259 4712 server.go:532] No cloud provider specified: "" from the config file: ""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.053268 4712 server.go:821] Client rotation is on, will bootstrap in background
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.054967 4712 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.055033 4712 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.055270 4712 server.go:848] Starting client certificate rotation.
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.055294 4712 certificate_manager.go:275] Certificate rotation is enabled.
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.055691 4712 certificate_manager.go:531] Certificate expiration is 2021-05-15 18:47:03 +0000 UTC, rotation deadline is 2021-02-03 07:49:21.658080568 +0000 UTC
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.055733 4712 certificate_manager.go:281] Waiting 6324h21m53.602350015s for next certificate rotation
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.055967 4712 manager.go:146] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.085336 4712 fs.go:125] Filesystem UUIDs: map[0afd049e-abbf-49e2-b847-0e9046cde0cf:/dev/dm-1 2020-01-03-21-30-07-00:/dev/sr0 7fae6dd8-527f-4d7d-936c-bf746c046171:/dev/dm-0 99925032-09b5-4554-9086-bc29de56007c:/dev/sda1 e6241972-79f7-4ac6-8641-d5712ff6037d:/dev/dm-2]
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.085378 4712 fs.go:126] Filesystem partitions: map[/dev/mapper/cl-home:{mountpoint:/home major:253 minor:2 fsType:xfs blockSize:0} /dev/mapper/cl-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:21 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:23 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:96 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm:{mountpoint:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm major:0 minor:49 fsType:tmpfs blockSize:0} /var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm:{mountpoint:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm:{mountpoint:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-45:{mountpoint:/var/lib/docker/overlay2/3886f3d42267aa8d9dd19e817c93d5dc894e535350fd15200e5e2f7d757833a6/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/docker/overlay2/5f2386dd31df6ce041c9a607509a0a7f56a2f1b4c78803b5f1582cf2604f7018/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-47:{mountpoint:/var/lib/docker/overlay2/727ab1d060392df0276aae84fd58fa1cac5b7eacf7c41c10d36ee33d34d69843/merged major:0 minor:47 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/docker/overlay2/cfac2d408bba3f0ab0849ae474dc7c293063abebe40d26762e9065a402e9588a/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/docker/overlay2/76fc19cd5877a98fed3377c691930648fcdb14d5f1eadcdbc9d72094a0f78e9a/merged major:0 minor:77 fsType:overlay blockSize:0}]
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.087124 4712 manager.go:193] Machine: {NumCores:4 CpuFrequency:3000000 MemoryCapacity:8191901696 HugePages:[{PageSize:2048 NumPages:0}] MachineID:9a2992777ad949c1a3078c47702b85ff SystemUUID:9a299277-7ad9-49c1-a307-8c47702b85ff BootID:8112a837-fb0a-4340-b8fb-15ec6b6f8efc Filesystems:[{Device:/dev/mapper/cl-root DeviceMajor:253 DeviceMinor:0 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-47 DeviceMajor:0 DeviceMinor:47 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:21 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:23 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:1023303680 Type:vfs Inodes:65536 HasInodes:true} {Device:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:96 Capacity:819187712 Type:vfs Inodes:999988 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:24 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/mapper/cl-home DeviceMajor:253 DeviceMinor:2 Capacity:74140049408 Type:vfs Inodes:36218880 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:8497659904 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:74176266240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:137438953472 Scheduler:mq-deadline}] NetworkDevices:[{Name:ens18 MacAddress:96:45:9f:36:4d:cb Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:8191901696 HugePages:[{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.087885 4712 manager.go:199] Version: {KernelVersion:4.18.0-147.8.1.el8_1.x86_64 ContainerOsVersion:CentOS Linux 8 (Core) DockerVersion:18.09.9 DockerAPIVersion:1.39 CadvisorVersion: CadvisorRevision:}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.087964 4712 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088265 4712 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088307 4712 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088403 4712 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088411 4712 container_manager_linux.go:305] Creating device plugin manager: true
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088421 4712 manager.go:126] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088441 4712 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1c0d0 0x6e95c50 0x1b1c9a0 map[] map[] map[] map[] map[] 0xc00088de30 [0] 0x6e95c50}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088470 4712 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088535 4712 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088544 4712 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088552 4712 state_checkpoint.go:101] [cpumanager] state checkpoint: restored state from checkpoint
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088558 4712 state_checkpoint.go:102] [cpumanager] state checkpoint: defaultCPUSet:
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088565 4712 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x6e95c50 10000000000 0xc00095f380 <nil> <nil> <nil> <nil> map[cpu:{{200 -3} {<nil>} DecimalSI} memory:{{616857600 0} {<nil>} DecimalSI}] 0x6e95c50}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088634 4712 server.go:1055] Using root directory: /var/lib/kubelet
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088652 4712 kubelet.go:286] Adding pod path: /etc/kubernetes/manifests
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088668 4712 file.go:68] Watching path "/etc/kubernetes/manifests"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.088679 4712 kubelet.go:311] Watching apiserver
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.090125 4712 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.090150 4712 client.go:104] Start docker client with request timeout=2m0s
May 15 15:27:28 two-k8sm-0 kubelet[4712]: E0515 15:27:28.091510 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:28 two-k8sm-0 kubelet[4712]: E0515 15:27:28.091785 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:28 two-k8sm-0 kubelet[4712]: E0515 15:27:28.091788 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:28 two-k8sm-0 kubelet[4712]: W0515 15:27:28.100374 4712 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.100403 4712 docker_service.go:240] Hairpin mode set to "hairpin-veth"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: W0515 15:27:28.100510 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:28 two-k8sm-0 kubelet[4712]: W0515 15:27:28.103206 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.103236 4712 plugins.go:166] Loaded network plugin "cni"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.103268 4712 docker_service.go:255] Docker cri networking managed by cni
May 15 15:27:28 two-k8sm-0 kubelet[4712]: W0515 15:27:28.103377 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.111437 4712 docker_service.go:260] Docker Info: &{ID:NZW7:UXGA:JFOA:JV25:5YG7:IPSN:OPFV:3IQO:FYBW:2FAP:6MPA:W7DK Containers:6 ContainersRunning:5 ContainersPaused:0 ContainersStopped:1 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:61 SystemTime:2020-05-15T15:27:28.104313331-04:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-147.8.1.el8_1.x86_64 OperatingSystem:CentOS Linux 8 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00096e5b0 NCPU:4 MemTotal:8191901696 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:two-k8sm-0 Labels:[] ExperimentalBuild:false ServerVersion:18.09.9 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.111514 4712 docker_service.go:273] Setting cgroupDriver to cgroupfs
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.111653 4712 kubelet.go:642] Starting the GRPC server for the docker CRI shim.
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.111799 4712 container_manager_linux.go:118] Configure resource-only container "/systemd/system.slice" with memory limit: 5734331187
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.111832 4712 docker_server.go:59] Start dockershim grpc server
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120590 4712 remote_runtime.go:59] parsed scheme: ""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120639 4712 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120665 4712 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120674 4712 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120699 4712 remote_image.go:50] parsed scheme: ""
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120708 4712 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120717 4712 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120724 4712 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.120991 4712 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000135580, CONNECTING
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.121117 4712 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000135680, CONNECTING
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.121222 4712 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000135580, READY
May 15 15:27:28 two-k8sm-0 kubelet[4712]: I0515 15:27:28.121271 4712 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000135680, READY
May 15 15:27:29 two-k8sm-0 kubelet[4712]: E0515 15:27:29.092105 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:29 two-k8sm-0 kubelet[4712]: E0515 15:27:29.101167 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:29 two-k8sm-0 kubelet[4712]: E0515 15:27:29.102106 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:30 two-k8sm-0 kubelet[4712]: E0515 15:27:30.092753 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:30 two-k8sm-0 kubelet[4712]: E0515 15:27:30.101660 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:30 two-k8sm-0 kubelet[4712]: E0515 15:27:30.102636 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:31 two-k8sm-0 kubelet[4712]: E0515 15:27:31.093359 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:31 two-k8sm-0 kubelet[4712]: E0515 15:27:31.102221 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:31 two-k8sm-0 kubelet[4712]: E0515 15:27:31.103868 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:32 two-k8sm-0 kubelet[4712]: E0515 15:27:32.093867 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:32 two-k8sm-0 kubelet[4712]: E0515 15:27:32.102594 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:32 two-k8sm-0 kubelet[4712]: E0515 15:27:32.104194 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:33 two-k8sm-0 kubelet[4712]: E0515 15:27:33.094377 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:33 two-k8sm-0 kubelet[4712]: E0515 15:27:33.102996 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:33 two-k8sm-0 kubelet[4712]: W0515 15:27:33.103526 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:33 two-k8sm-0 kubelet[4712]: E0515 15:27:33.104578 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:34 two-k8sm-0 kubelet[4712]: E0515 15:27:34.095652 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:34 two-k8sm-0 kubelet[4712]: E0515 15:27:34.103596 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:34 two-k8sm-0 kubelet[4712]: E0515 15:27:34.105041 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:35 two-k8sm-0 kubelet[4712]: E0515 15:27:35.096198 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:35 two-k8sm-0 kubelet[4712]: E0515 15:27:35.104047 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:35 two-k8sm-0 kubelet[4712]: E0515 15:27:35.105503 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:36 two-k8sm-0 kubelet[4712]: E0515 15:27:36.096808 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:36 two-k8sm-0 kubelet[4712]: E0515 15:27:36.104448 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:36 two-k8sm-0 kubelet[4712]: E0515 15:27:36.105915 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:37 two-k8sm-0 kubelet[4712]: E0515 15:27:37.097368 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:37 two-k8sm-0 kubelet[4712]: E0515 15:27:37.104882 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:37 two-k8sm-0 kubelet[4712]: E0515 15:27:37.106302 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:38 two-k8sm-0 kubelet[4712]: E0515 15:27:38.097968 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:38 two-k8sm-0 kubelet[4712]: W0515 15:27:38.103723 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:38 two-k8sm-0 kubelet[4712]: E0515 15:27:38.105358 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:38 two-k8sm-0 kubelet[4712]: E0515 15:27:38.106660 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:39 two-k8sm-0 kubelet[4712]: E0515 15:27:39.098466 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:39 two-k8sm-0 kubelet[4712]: E0515 15:27:39.105690 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:39 two-k8sm-0 kubelet[4712]: E0515 15:27:39.106991 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:40 two-k8sm-0 kubelet[4712]: E0515 15:27:40.099035 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:40 two-k8sm-0 kubelet[4712]: E0515 15:27:40.106189 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:40 two-k8sm-0 kubelet[4712]: E0515 15:27:40.107397 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:41 two-k8sm-0 kubelet[4712]: E0515 15:27:41.099706 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:41 two-k8sm-0 kubelet[4712]: E0515 15:27:41.106608 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:41 two-k8sm-0 kubelet[4712]: E0515 15:27:41.107785 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:42 two-k8sm-0 kubelet[4712]: E0515 15:27:42.100272 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:42 two-k8sm-0 kubelet[4712]: E0515 15:27:42.107080 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:42 two-k8sm-0 kubelet[4712]: E0515 15:27:42.108155 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:43 two-k8sm-0 kubelet[4712]: E0515 15:27:43.101658 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:43 two-k8sm-0 kubelet[4712]: W0515 15:27:43.104087 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:43 two-k8sm-0 kubelet[4712]: E0515 15:27:43.107443 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:43 two-k8sm-0 kubelet[4712]: E0515 15:27:43.108482 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:44 two-k8sm-0 kubelet[4712]: E0515 15:27:44.102251 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:44 two-k8sm-0 kubelet[4712]: E0515 15:27:44.107935 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:44 two-k8sm-0 kubelet[4712]: E0515 15:27:44.110603 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:45 two-k8sm-0 kubelet[4712]: E0515 15:27:45.102859 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:45 two-k8sm-0 kubelet[4712]: E0515 15:27:45.108423 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:45 two-k8sm-0 kubelet[4712]: E0515 15:27:45.111010 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:46 two-k8sm-0 kubelet[4712]: E0515 15:27:46.103388 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:46 two-k8sm-0 kubelet[4712]: E0515 15:27:46.108805 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:46 two-k8sm-0 kubelet[4712]: E0515 15:27:46.111444 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:47 two-k8sm-0 kubelet[4712]: E0515 15:27:47.104498 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:47 two-k8sm-0 kubelet[4712]: E0515 15:27:47.109411 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:47 two-k8sm-0 kubelet[4712]: E0515 15:27:47.111925 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: W0515 15:27:48.104338 4712 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.104911 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.109800 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.112368 4712 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
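Up to this point the kubelet is only retrying two things: the API endpoint at https://192.168.1.36:443 refuses every connection, and no CNI config has been written to /etc/cni/net.d yet. A quick way to confirm both from the node itself might look like the sketch below; the IP/port comes straight from the log above, everything else is a generic check I'd run by hand, not something kubespray does:

    # Is anything actually listening on the endpoint the kubelet keeps dialing?
    ss -tlnp | grep ':443'
    curl -k https://192.168.1.36:443/healthz || echo "apiserver endpoint not answering"

    # Has any CNI plugin dropped a config yet? kubelet warns until this directory is non-empty.
    ls -l /etc/cni/net.d/

If the curl is refused as well, the problem is on the apiserver side rather than the kubelet side, which matches the "connection refused" lines above.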
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.422086 4712 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 15 15:27:48 two-k8sm-0 kubelet[4712]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423488 4712 kuberuntime_manager.go:211] Container runtime docker initialized, version: 18.09.9, apiVersion: 1.39.0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423642 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/aws-ebs"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423662 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/gce-pd"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423672 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/cinder"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423680 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-disk"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423687 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-file"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423695 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/vsphere-volume"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423706 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/empty-dir"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423714 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/git-repo"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423722 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/host-path"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423730 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/nfs"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423738 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/secret"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423746 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/iscsi"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423756 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/glusterfs"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423768 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/rbd"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423776 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/quobyte"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423784 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/cephfs"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423791 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/downward-api"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423799 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/fc"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423806 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/flocker"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423813 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/configmap"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423821 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/projected"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423835 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/portworx-volume"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423846 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/scaleio"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423854 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/local-volume"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423864 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/storageos"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.423884 4712 plugins.go:629] Loaded volume plugin "kubernetes.io/csi"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.424487 4712 server.go:1113] Started kubelet
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.424664 4712 server.go:143] Starting to listen on 0.0.0.0:10250
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.424944 4712 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.425188 4712 event.go:272] Unable to write event: 'Post https://192.168.1.36:443/api/v1/namespaces/default/events: dial tcp 192.168.1.36:443: connect: connection refused' (may retry after sleeping)
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.425431 4712 server.go:354] Adding debug handlers to kubelet server.
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.426082 4712 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.426369 4712 volume_manager.go:263] The desired_state_of_world populator starts
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.426385 4712 volume_manager.go:265] Starting Kubelet Volume Manager
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.426695 4712 desired_state_of_world_populator.go:138] Desired state populator starts to run
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.427094 4712 controller.go:135] failed to ensure node lease exists, will retry in 200ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.427169 4712 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: Get https://192.168.1.36:443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.430727 4712 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.430834 4712 clientconn.go:104] parsed scheme: "unix"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.430846 4712 clientconn.go:104] scheme "unix" not registered, fallback to default scheme
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.430891 4712 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.430901 4712 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.430930 4712 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000f40170, CONNECTING
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.431253 4712 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000f40170, READY
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.432069 4712 factory.go:137] Registering containerd factory
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.446156 4712 factory.go:356] Registering Docker factory
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.446178 4712 factory.go:54] Registering systemd factory
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.446335 4712 factory.go:101] Registering Raw factory
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.446544 4712 manager.go:1158] Started watching for new ooms in manager
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.447579 4712 manager.go:272] Starting recovery of all containers
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.453461 4712 status_manager.go:157] Starting to sync pod status with apiserver
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.453485 4712 kubelet.go:1820] Starting kubelet main sync loop.
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.453573 4712 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.454580 4712 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.36:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.475399 4712 manager.go:277] Recovery completed
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.526944 4712 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.526951 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.527277 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.528663 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.528702 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.528715 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.528736 4712 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.528989 4712 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.537043 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.537194 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.538615 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.538654 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.538665 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.538689 4712 cpu_manager.go:173] [cpumanager] starting with none policy
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.538695 4712 cpu_manager.go:174] [cpumanager] reconciling every 10s
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.538703 4712 policy_none.go:43] [cpumanager] none policy: Start
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.539420 4712 manager.go:226] Starting Device Plugin manager
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.539731 4712 manager.go:268] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.539800 4712 plugin_watcher.go:54] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.539862 4712 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.539868 4712 plugin_manager.go:114] Starting Kubelet Plugin Manager
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.540477 4712 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "two-k8sm-0" not found
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.540570 4712 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /systemd/system.slice
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.555380 4712 kubelet.go:1906] SyncLoop (ADD, "file"): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846), kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625), kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.555423 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.555610 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.558060 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.558088 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.558100 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.559102 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.559218 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.559235 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.559358 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.560708 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.560727 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.560737 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.560797 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.560812 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.560821 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: W0515 15:27:48.561263 4712 status_manager.go:530] Failed to get status for pod "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-apiserver-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.562158 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.562307 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.562432 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.562535 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.563380 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.563398 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.563408 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.563548 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.563565 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.563575 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: W0515 15:27:48.563944 4712 status_manager.go:530] Failed to get status for pod "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-controller-manager-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566358 4712 kubelet.go:1951] SyncLoop (PLEG): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)", event: &pleg.PodLifecycleEvent{ID:"92f790874ae3c4b92635dc49da119ddd", Type:"ContainerStarted", Data:"d4fed961e50fa2588106faee9d7d84f6ecaafca1c783d09f3ab01364cc5f5079"}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566407 4712 kubelet.go:1951] SyncLoop (PLEG): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)", event: &pleg.PodLifecycleEvent{ID:"92f790874ae3c4b92635dc49da119ddd", Type:"ContainerStarted", Data:"33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871"}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566425 4712 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"76fe24efc34cd06b01da32779d1c8d3575c2edb2e2f1d0b86e38fdf68d51e925"}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566442 4712 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516"}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566458 4712 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerDied", Data:"f7426af69a95f3886b90a7457f623988e7418b9b9aed9bb30066f8665a6f75c2"}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566490 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566615 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566745 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.566863 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.567946 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.567966 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.568006 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.568014 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.568029 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.568039 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.568036 4712 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerStarted", Data:"16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21"}
May 15 15:27:48 two-k8sm-0 kubelet[4712]: W0515 15:27:48.568422 4712 status_manager.go:530] Failed to get status for pod "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-scheduler-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.627098 4712 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627276 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627337 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627398 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627469 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627494 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627519 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627543 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627563 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.627567 4712 controller.go:135] failed to ensure node lease exists, will retry in 400ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627581 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627602 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627620 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.627648 4712 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.727264 4712 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727865 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727894 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727915 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727933 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727951 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727969 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.727986 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728004 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728023 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728093 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728102 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728104 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728147 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728175 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728187 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728201 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728223 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728232 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728237 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728242 4712 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728202 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728152 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728270 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.728276 4712 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.729139 4712 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.729312 4712 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.730689 4712 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.730709 4712 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.730719 4712 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: I0515 15:27:48.730741 4712 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.730984 4712 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.827428 4712 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.906459 4712 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: Get https://192.168.1.36:443/apis/storage.k8s.io/v1/csinodes/two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:27:48 two-k8sm-0 kubelet[4712]: E0515 15:27:48.927500 4712 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:49 two-k8sm-0 kubelet[4712]: E0515 15:27:49.027728 4712 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:27:49 two-k8sm-0 kubelet[4712]: E0515 15:27:49.028447 4712 controller.go:135] failed to ensure node lease exists, will retry in 800ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:28:48 two-k8sm-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 15 15:28:48 two-k8sm-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 15:28:58 two-k8sm-0 systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 15 15:28:58 two-k8sm-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
May 15 15:28:58 two-k8sm-0 systemd[1]: Stopped Kubernetes Kubelet Server.
May 15 15:28:58 two-k8sm-0 systemd[1]: Started Kubernetes Kubelet Server.
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541643 4901 flags.go:33] FLAG: --add-dir-header="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541715 4901 flags.go:33] FLAG: --address="0.0.0.0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541723 4901 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541731 4901 flags.go:33] FLAG: --alsologtostderr="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541737 4901 flags.go:33] FLAG: --anonymous-auth="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541743 4901 flags.go:33] FLAG: --application-metrics-count-limit="100"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541749 4901 flags.go:33] FLAG: --authentication-token-webhook="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541754 4901 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541761 4901 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541767 4901 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541772 4901 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541777 4901 flags.go:33] FLAG: --azure-container-registry-config=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541783 4901 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541789 4901 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541794 4901 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541800 4901 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541805 4901 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541814 4901 flags.go:33] FLAG: --cgroup-root=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541819 4901 flags.go:33] FLAG: --cgroups-per-qos="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541824 4901 flags.go:33] FLAG: --chaos-chance="0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541832 4901 flags.go:33] FLAG: --client-ca-file=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541837 4901 flags.go:33] FLAG: --cloud-config=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541842 4901 flags.go:33] FLAG: --cloud-provider=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541847 4901 flags.go:33] FLAG: --cluster-dns="[]"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541855 4901 flags.go:33] FLAG: --cluster-domain=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541860 4901 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541865 4901 flags.go:33] FLAG: --cni-cache-dir="/var/lib/cni/cache"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541871 4901 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541876 4901 flags.go:33] FLAG: --config="/etc/kubernetes/kubelet-config.yaml"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541927 4901 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541933 4901 flags.go:33] FLAG: --container-log-max-files="5"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541940 4901 flags.go:33] FLAG: --container-log-max-size="10Mi"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541946 4901 flags.go:33] FLAG: --container-runtime="docker"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541953 4901 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541960 4901 flags.go:33] FLAG: --containerd="/run/containerd/containerd.sock"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541966 4901 flags.go:33] FLAG: --contention-profiling="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541971 4901 flags.go:33] FLAG: --cpu-cfs-quota="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541977 4901 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541982 4901 flags.go:33] FLAG: --cpu-manager-policy="none"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541988 4901 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541993 4901 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.541999 4901 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542005 4901 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542010 4901 flags.go:33] FLAG: --docker-only="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542016 4901 flags.go:33] FLAG: --docker-root="/var/lib/docker"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542021 4901 flags.go:33] FLAG: --docker-tls="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542027 4901 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542032 4901 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542038 4901 flags.go:33] FLAG: --docker-tls-key="key.pem"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542045 4901 flags.go:33] FLAG: --dynamic-config-dir=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542052 4901 flags.go:33] FLAG: --enable-cadvisor-json-endpoints="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542057 4901 flags.go:33] FLAG: --enable-controller-attach-detach="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542063 4901 flags.go:33] FLAG: --enable-debugging-handlers="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542068 4901 flags.go:33] FLAG: --enable-load-reader="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542073 4901 flags.go:33] FLAG: --enable-server="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542079 4901 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542087 4901 flags.go:33] FLAG: --event-burst="10"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542092 4901 flags.go:33] FLAG: --event-qps="5"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542098 4901 flags.go:33] FLAG: --event-storage-age-limit="default=0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542103 4901 flags.go:33] FLAG: --event-storage-event-limit="default=0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542109 4901 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542121 4901 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542127 4901 flags.go:33] FLAG: --eviction-minimum-reclaim=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542133 4901 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542139 4901 flags.go:33] FLAG: --eviction-soft=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542148 4901 flags.go:33] FLAG: --eviction-soft-grace-period=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542154 4901 flags.go:33] FLAG: --exit-on-lock-contention="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542159 4901 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542165 4901 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542171 4901 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542176 4901 flags.go:33] FLAG: --experimental-dockershim="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542182 4901 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542187 4901 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542193 4901 flags.go:33] FLAG: --experimental-mounter-path=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542198 4901 flags.go:33] FLAG: --fail-swap-on="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542205 4901 flags.go:33] FLAG: --feature-gates=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542224 4901 flags.go:33] FLAG: --file-check-frequency="20s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542230 4901 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542235 4901 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542241 4901 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542247 4901 flags.go:33] FLAG: --healthz-port="10248"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542255 4901 flags.go:33] FLAG: --help="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542260 4901 flags.go:33] FLAG: --hostname-override="two-k8sm-0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542266 4901 flags.go:33] FLAG: --housekeeping-interval="10s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542271 4901 flags.go:33] FLAG: --http-check-frequency="20s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542277 4901 flags.go:33] FLAG: --image-gc-high-threshold="85"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542282 4901 flags.go:33] FLAG: --image-gc-low-threshold="80"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542287 4901 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542308 4901 flags.go:33] FLAG: --image-service-endpoint=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542315 4901 flags.go:33] FLAG: --iptables-drop-bit="15"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542320 4901 flags.go:33] FLAG: --iptables-masquerade-bit="14"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542326 4901 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542331 4901 flags.go:33] FLAG: --kube-api-burst="10"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542336 4901 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542342 4901 flags.go:33] FLAG: --kube-api-qps="5"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542347 4901 flags.go:33] FLAG: --kube-reserved=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542353 4901 flags.go:33] FLAG: --kube-reserved-cgroup=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542360 4901 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542366 4901 flags.go:33] FLAG: --kubelet-cgroups=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542371 4901 flags.go:33] FLAG: --lock-file=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542376 4901 flags.go:33] FLAG: --log-backtrace-at=":0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542382 4901 flags.go:33] FLAG: --log-cadvisor-usage="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542388 4901 flags.go:33] FLAG: --log-dir=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542393 4901 flags.go:33] FLAG: --log-file=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542399 4901 flags.go:33] FLAG: --log-file-max-size="1800"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542404 4901 flags.go:33] FLAG: --log-flush-frequency="5s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542409 4901 flags.go:33] FLAG: --logtostderr="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542415 4901 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542421 4901 flags.go:33] FLAG: --make-iptables-util-chains="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542426 4901 flags.go:33] FLAG: --manifest-url=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542431 4901 flags.go:33] FLAG: --manifest-url-header=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542438 4901 flags.go:33] FLAG: --master-service-namespace="default"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542443 4901 flags.go:33] FLAG: --max-open-files="1000000"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542452 4901 flags.go:33] FLAG: --max-pods="110"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542459 4901 flags.go:33] FLAG: --maximum-dead-containers="-1"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542465 4901 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542470 4901 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542475 4901 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542480 4901 flags.go:33] FLAG: --network-plugin="cni"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542486 4901 flags.go:33] FLAG: --network-plugin-mtu="0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542491 4901 flags.go:33] FLAG: --node-ip="192.168.1.36"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542496 4901 flags.go:33] FLAG: --node-labels=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542503 4901 flags.go:33] FLAG: --node-status-max-images="50"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542508 4901 flags.go:33] FLAG: --node-status-update-frequency="10s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542513 4901 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542519 4901 flags.go:33] FLAG: --oom-score-adj="-999"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542524 4901 flags.go:33] FLAG: --pod-cidr=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542529 4901 flags.go:33] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.1"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542534 4901 flags.go:33] FLAG: --pod-manifest-path=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542541 4901 flags.go:33] FLAG: --pod-max-pids="-1"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542547 4901 flags.go:33] FLAG: --pods-per-core="0"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542552 4901 flags.go:33] FLAG: --port="10250"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542558 4901 flags.go:33] FLAG: --protect-kernel-defaults="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542563 4901 flags.go:33] FLAG: --provider-id=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542568 4901 flags.go:33] FLAG: --qos-reserved=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542574 4901 flags.go:33] FLAG: --read-only-port="10255"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542579 4901 flags.go:33] FLAG: --really-crash-for-testing="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542585 4901 flags.go:33] FLAG: --redirect-container-streaming="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542590 4901 flags.go:33] FLAG: --register-node="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542595 4901 flags.go:33] FLAG: --register-schedulable="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542600 4901 flags.go:33] FLAG: --register-with-taints=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542607 4901 flags.go:33] FLAG: --registry-burst="10"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542612 4901 flags.go:33] FLAG: --registry-qps="5"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542617 4901 flags.go:33] FLAG: --reserved-cpus=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542622 4901 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542630 4901 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542636 4901 flags.go:33] FLAG: --rotate-certificates="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542641 4901 flags.go:33] FLAG: --rotate-server-certificates="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542646 4901 flags.go:33] FLAG: --runonce="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542651 4901 flags.go:33] FLAG: --runtime-cgroups="/systemd/system.slice"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542657 4901 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542662 4901 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542668 4901 flags.go:33] FLAG: --serialize-image-pulls="true"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542674 4901 flags.go:33] FLAG: --skip-headers="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542680 4901 flags.go:33] FLAG: --skip-log-headers="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542685 4901 flags.go:33] FLAG: --stderrthreshold="2"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542691 4901 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542696 4901 flags.go:33] FLAG: --storage-driver-db="cadvisor"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542702 4901 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542707 4901 flags.go:33] FLAG: --storage-driver-password="root"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542712 4901 flags.go:33] FLAG: --storage-driver-secure="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542719 4901 flags.go:33] FLAG: --storage-driver-table="stats"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542725 4901 flags.go:33] FLAG: --storage-driver-user="root"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542730 4901 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542736 4901 flags.go:33] FLAG: --sync-frequency="1m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542741 4901 flags.go:33] FLAG: --system-cgroups=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542746 4901 flags.go:33] FLAG: --system-reserved=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542752 4901 flags.go:33] FLAG: --system-reserved-cgroup=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542757 4901 flags.go:33] FLAG: --tls-cert-file=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542762 4901 flags.go:33] FLAG: --tls-cipher-suites="[]"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542770 4901 flags.go:33] FLAG: --tls-min-version=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542775 4901 flags.go:33] FLAG: --tls-private-key-file=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542781 4901 flags.go:33] FLAG: --topology-manager-policy="none"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542786 4901 flags.go:33] FLAG: --v="2"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542791 4901 flags.go:33] FLAG: --version="false"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542798 4901 flags.go:33] FLAG: --vmodule=""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542804 4901 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542812 4901 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.542842 4901 feature_gate.go:243] feature gates: &{map[]}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.552019 4901 feature_gate.go:243] feature gates: &{map[]}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.552072 4901 feature_gate.go:243] feature gates: &{map[]}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.560487 4901 mount_linux.go:168] Detected OS with systemd
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.560845 4901 server.go:416] Version: v1.17.0
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.560899 4901 feature_gate.go:243] feature gates: &{map[]}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.560946 4901 feature_gate.go:243] feature gates: &{map[]}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.561040 4901 plugins.go:100] No cloud provider specified.
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.561058 4901 server.go:532] No cloud provider specified: "" from the config file: ""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.561069 4901 server.go:821] Client rotation is on, will bootstrap in background
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.562651 4901 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.562743 4901 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.562999 4901 server.go:848] Starting client certificate rotation.
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.563013 4901 certificate_manager.go:275] Certificate rotation is enabled.
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.563158 4901 certificate_manager.go:531] Certificate expiration is 2021-05-15 18:47:03 +0000 UTC, rotation deadline is 2021-02-12 21:46:09.310966248 +0000 UTC
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.563187 4901 certificate_manager.go:281] Waiting 6554h17m10.747781796s for next certificate rotation
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.563606 4901 manager.go:146] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.582326 4901 fs.go:125] Filesystem UUIDs: map[0afd049e-abbf-49e2-b847-0e9046cde0cf:/dev/dm-1 2020-01-03-21-30-07-00:/dev/sr0 7fae6dd8-527f-4d7d-936c-bf746c046171:/dev/dm-0 99925032-09b5-4554-9086-bc29de56007c:/dev/sda1 e6241972-79f7-4ac6-8641-d5712ff6037d:/dev/dm-2]
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.582380 4901 fs.go:126] Filesystem partitions: map[/dev/mapper/cl-home:{mountpoint:/home major:253 minor:2 fsType:xfs blockSize:0} /dev/mapper/cl-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:21 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:23 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:96 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm:{mountpoint:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm major:0 minor:49 fsType:tmpfs blockSize:0} /var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm:{mountpoint:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm:{mountpoint:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-45:{mountpoint:/var/lib/docker/overlay2/3886f3d42267aa8d9dd19e817c93d5dc894e535350fd15200e5e2f7d757833a6/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/docker/overlay2/5f2386dd31df6ce041c9a607509a0a7f56a2f1b4c78803b5f1582cf2604f7018/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-47:{mountpoint:/var/lib/docker/overlay2/727ab1d060392df0276aae84fd58fa1cac5b7eacf7c41c10d36ee33d34d69843/merged major:0 minor:47 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/docker/overlay2/cfac2d408bba3f0ab0849ae474dc7c293063abebe40d26762e9065a402e9588a/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/docker/overlay2/76fc19cd5877a98fed3377c691930648fcdb14d5f1eadcdbc9d72094a0f78e9a/merged major:0 minor:77 fsType:overlay blockSize:0}]
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.584098 4901 manager.go:193] Machine: {NumCores:4 CpuFrequency:3000000 MemoryCapacity:8191901696 HugePages:[{PageSize:2048 NumPages:0}] MachineID:9a2992777ad949c1a3078c47702b85ff SystemUUID:9a299277-7ad9-49c1-a307-8c47702b85ff BootID:8112a837-fb0a-4340-b8fb-15ec6b6f8efc Filesystems:[{Device:overlay_0-47 DeviceMajor:0 DeviceMinor:47 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:1023303680 Type:vfs Inodes:65536 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/dev/mapper/cl-home DeviceMajor:253 DeviceMinor:2 Capacity:74140049408 Type:vfs Inodes:36218880 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:24 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/mapper/cl-root DeviceMajor:253 DeviceMinor:0 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:96 Capacity:819187712 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:21 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:23 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:8497659904 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:74176266240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:137438953472 Scheduler:mq-deadline}] NetworkDevices:[{Name:ens18 MacAddress:96:45:9f:36:4d:cb Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:8191901696 HugePages:[{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.584720 4901 manager.go:199] Version: {KernelVersion:4.18.0-147.8.1.el8_1.x86_64 ContainerOsVersion:CentOS Linux 8 (Core) DockerVersion:18.09.9 DockerAPIVersion:1.39 CadvisorVersion: CadvisorRevision:}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.584798 4901 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585081 4901 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585099 4901 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585179 4901 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585186 4901 container_manager_linux.go:305] Creating device plugin manager: true
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585200 4901 manager.go:126] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585219 4901 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1c0d0 0x6e95c50 0x1b1c9a0 map[] map[] map[] map[] map[] 0xc00066bd10 [0] 0x6e95c50}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585248 4901 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585379 4901 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585388 4901 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585397 4901 state_checkpoint.go:101] [cpumanager] state checkpoint: restored state from checkpoint
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585403 4901 state_checkpoint.go:102] [cpumanager] state checkpoint: defaultCPUSet:
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585409 4901 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x6e95c50 10000000000 0xc000a0ca80 <nil> <nil> <nil> <nil> map[cpu:{{200 -3} {<nil>} DecimalSI} memory:{{616857600 0} {<nil>} DecimalSI}] 0x6e95c50}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585469 4901 server.go:1055] Using root directory: /var/lib/kubelet
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585487 4901 kubelet.go:286] Adding pod path: /etc/kubernetes/manifests
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585500 4901 file.go:68] Watching path "/etc/kubernetes/manifests"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.585511 4901 kubelet.go:311] Watching apiserver
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.587014 4901 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.587041 4901 client.go:104] Start docker client with request timeout=2m0s
May 15 15:28:58 two-k8sm-0 kubelet[4901]: E0515 15:28:58.588472 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:28:58 two-k8sm-0 kubelet[4901]: E0515 15:28:58.588660 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:28:58 two-k8sm-0 kubelet[4901]: E0515 15:28:58.588795 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:28:58 two-k8sm-0 kubelet[4901]: W0515 15:28:58.590692 4901 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.590715 4901 docker_service.go:240] Hairpin mode set to "hairpin-veth"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: W0515 15:28:58.590844 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:28:58 two-k8sm-0 kubelet[4901]: W0515 15:28:58.595452 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.595513 4901 plugins.go:166] Loaded network plugin "cni"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.595549 4901 docker_service.go:255] Docker cri networking managed by cni
May 15 15:28:58 two-k8sm-0 kubelet[4901]: W0515 15:28:58.595605 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.603737 4901 docker_service.go:260] Docker Info: &{ID:NZW7:UXGA:JFOA:JV25:5YG7:IPSN:OPFV:3IQO:FYBW:2FAP:6MPA:W7DK Containers:6 ContainersRunning:5 ContainersPaused:0 ContainersStopped:1 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:61 SystemTime:2020-05-15T15:28:58.596464929-04:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-147.8.1.el8_1.x86_64 OperatingSystem:CentOS Linux 8 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006b0070 NCPU:4 MemTotal:8191901696 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:two-k8sm-0 Labels:[] ExperimentalBuild:false ServerVersion:18.09.9 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.603854 4901 docker_service.go:273] Setting cgroupDriver to cgroupfs
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.603997 4901 kubelet.go:642] Starting the GRPC server for the docker CRI shim.
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.604073 4901 container_manager_linux.go:118] Configure resource-only container "/systemd/system.slice" with memory limit: 5734331187
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.604096 4901 docker_server.go:59] Start dockershim grpc server
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618150 4901 remote_runtime.go:59] parsed scheme: ""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618174 4901 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618197 4901 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618207 4901 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618233 4901 remote_image.go:50] parsed scheme: ""
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618242 4901 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618252 4901 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618259 4901 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618350 4901 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000506370, CONNECTING
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618405 4901 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0000844a0, CONNECTING
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618653 4901 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0000844a0, READY
May 15 15:28:58 two-k8sm-0 kubelet[4901]: I0515 15:28:58.618714 4901 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000506370, READY
May 15 15:28:59 two-k8sm-0 kubelet[4901]: E0515 15:28:59.589041 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:28:59 two-k8sm-0 kubelet[4901]: E0515 15:28:59.589966 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:28:59 two-k8sm-0 kubelet[4901]: E0515 15:28:59.591576 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:00 two-k8sm-0 kubelet[4901]: E0515 15:29:00.589707 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:00 two-k8sm-0 kubelet[4901]: E0515 15:29:00.590561 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:00 two-k8sm-0 kubelet[4901]: E0515 15:29:00.592005 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:01 two-k8sm-0 kubelet[4901]: E0515 15:29:01.590303 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:01 two-k8sm-0 kubelet[4901]: E0515 15:29:01.591249 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:01 two-k8sm-0 kubelet[4901]: E0515 15:29:01.592306 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:02 two-k8sm-0 kubelet[4901]: E0515 15:29:02.590867 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:02 two-k8sm-0 kubelet[4901]: E0515 15:29:02.591729 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:02 two-k8sm-0 kubelet[4901]: E0515 15:29:02.592812 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:03 two-k8sm-0 kubelet[4901]: E0515 15:29:03.591407 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:03 two-k8sm-0 kubelet[4901]: E0515 15:29:03.592410 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:03 two-k8sm-0 kubelet[4901]: E0515 15:29:03.593434 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:03 two-k8sm-0 kubelet[4901]: W0515 15:29:03.595772 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:29:04 two-k8sm-0 kubelet[4901]: E0515 15:29:04.592049 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:04 two-k8sm-0 kubelet[4901]: E0515 15:29:04.592921 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:04 two-k8sm-0 kubelet[4901]: E0515 15:29:04.594006 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:05 two-k8sm-0 kubelet[4901]: E0515 15:29:05.592536 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:05 two-k8sm-0 kubelet[4901]: E0515 15:29:05.593521 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:05 two-k8sm-0 kubelet[4901]: E0515 15:29:05.594551 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:06 two-k8sm-0 kubelet[4901]: E0515 15:29:06.593023 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:06 two-k8sm-0 kubelet[4901]: E0515 15:29:06.593962 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:06 two-k8sm-0 kubelet[4901]: E0515 15:29:06.594969 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:07 two-k8sm-0 kubelet[4901]: E0515 15:29:07.593550 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:07 two-k8sm-0 kubelet[4901]: E0515 15:29:07.594428 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:07 two-k8sm-0 kubelet[4901]: E0515 15:29:07.595579 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:08 two-k8sm-0 kubelet[4901]: E0515 15:29:08.594111 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:08 two-k8sm-0 kubelet[4901]: E0515 15:29:08.595028 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:08 two-k8sm-0 kubelet[4901]: W0515 15:29:08.595903 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:29:08 two-k8sm-0 kubelet[4901]: E0515 15:29:08.596138 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:09 two-k8sm-0 kubelet[4901]: E0515 15:29:09.594682 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:09 two-k8sm-0 kubelet[4901]: E0515 15:29:09.595603 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:09 two-k8sm-0 kubelet[4901]: E0515 15:29:09.596662 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:10 two-k8sm-0 kubelet[4901]: E0515 15:29:10.595301 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:10 two-k8sm-0 kubelet[4901]: E0515 15:29:10.596116 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:10 two-k8sm-0 kubelet[4901]: E0515 15:29:10.597280 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:11 two-k8sm-0 kubelet[4901]: E0515 15:29:11.595939 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:11 two-k8sm-0 kubelet[4901]: E0515 15:29:11.596769 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:11 two-k8sm-0 kubelet[4901]: E0515 15:29:11.597848 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:12 two-k8sm-0 kubelet[4901]: E0515 15:29:12.596412 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:12 two-k8sm-0 kubelet[4901]: E0515 15:29:12.597426 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:12 two-k8sm-0 kubelet[4901]: E0515 15:29:12.598623 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:13 two-k8sm-0 kubelet[4901]: W0515 15:29:13.596199 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:29:13 two-k8sm-0 kubelet[4901]: E0515 15:29:13.596896 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:13 two-k8sm-0 kubelet[4901]: E0515 15:29:13.597824 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:13 two-k8sm-0 kubelet[4901]: E0515 15:29:13.599003 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:14 two-k8sm-0 kubelet[4901]: E0515 15:29:14.597373 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:14 two-k8sm-0 kubelet[4901]: E0515 15:29:14.598397 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:14 two-k8sm-0 kubelet[4901]: E0515 15:29:14.599365 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:15 two-k8sm-0 kubelet[4901]: E0515 15:29:15.597943 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:15 two-k8sm-0 kubelet[4901]: E0515 15:29:15.598760 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:15 two-k8sm-0 kubelet[4901]: E0515 15:29:15.599919 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:16 two-k8sm-0 kubelet[4901]: E0515 15:29:16.598667 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:16 two-k8sm-0 kubelet[4901]: E0515 15:29:16.599505 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:16 two-k8sm-0 kubelet[4901]: E0515 15:29:16.600591 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:17 two-k8sm-0 kubelet[4901]: E0515 15:29:17.599347 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:17 two-k8sm-0 kubelet[4901]: E0515 15:29:17.600173 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:17 two-k8sm-0 kubelet[4901]: E0515 15:29:17.601208 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:18 two-k8sm-0 kubelet[4901]: W0515 15:29:18.596381 4901 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.599867 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.600773 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.601988 4901 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.980655 4901 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 15 15:29:18 two-k8sm-0 kubelet[4901]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.981933 4901 kuberuntime_manager.go:211] Container runtime docker initialized, version: 18.09.9, apiVersion: 1.39.0
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982143 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/aws-ebs"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982219 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/gce-pd"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982229 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/cinder"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982236 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-disk"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982244 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-file"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982252 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/vsphere-volume"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982263 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/empty-dir"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982271 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/git-repo"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982311 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/host-path"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982321 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/nfs"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982330 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/secret"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982337 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/iscsi"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982348 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/glusterfs"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982361 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/rbd"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982382 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/quobyte"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982389 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/cephfs"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982396 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/downward-api"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982403 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/fc"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982425 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/flocker"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982433 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/configmap"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982441 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/projected"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982455 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/portworx-volume"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982467 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/scaleio"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982475 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/local-volume"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982486 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/storageos"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.982506 4901 plugins.go:629] Loaded volume plugin "kubernetes.io/csi"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.983490 4901 server.go:1113] Started kubelet
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.983807 4901 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.984022 4901 event.go:272] Unable to write event: 'Post https://192.168.1.36:443/api/v1/namespaces/default/events: dial tcp 192.168.1.36:443: connect: connection refused' (may retry after sleeping)
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.984156 4901 server.go:143] Starting to listen on 0.0.0.0:10250
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.984976 4901 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.985056 4901 server.go:354] Adding debug handlers to kubelet server.
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.985533 4901 volume_manager.go:263] The desired_state_of_world populator starts
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.985544 4901 volume_manager.go:265] Starting Kubelet Volume Manager
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.985727 4901 desired_state_of_world_populator.go:138] Desired state populator starts to run
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.986144 4901 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: Get https://192.168.1.36:443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.986190 4901 controller.go:135] failed to ensure node lease exists, will retry in 200ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:18 two-k8sm-0 kubelet[4901]: E0515 15:29:18.987916 4901 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.988222 4901 clientconn.go:104] parsed scheme: "unix"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.988238 4901 clientconn.go:104] scheme "unix" not registered, fallback to default scheme
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.988301 4901 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.988312 4901 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.988341 4901 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000e519a0, CONNECTING
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.988749 4901 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000e519a0, READY
May 15 15:29:18 two-k8sm-0 kubelet[4901]: I0515 15:29:18.989218 4901 factory.go:137] Registering containerd factory
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.003327 4901 factory.go:356] Registering Docker factory
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.003344 4901 factory.go:54] Registering systemd factory
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.003491 4901 factory.go:101] Registering Raw factory
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.003625 4901 manager.go:1158] Started watching for new ooms in manager
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.004674 4901 manager.go:272] Starting recovery of all containers
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.026131 4901 status_manager.go:157] Starting to sync pod status with apiserver
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.026157 4901 kubelet.go:1820] Starting kubelet main sync loop.
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.026195 4901 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.026689 4901 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.36:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.035148 4901 manager.go:277] Recovery completed
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.085700 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.085879 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.085927 4901 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.087470 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.087500 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.087511 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.087532 4901 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.087798 4901 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.095715 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.095888 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.098295 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.098320 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.098331 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.098356 4901 cpu_manager.go:173] [cpumanager] starting with none policy
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.098363 4901 cpu_manager.go:174] [cpumanager] reconciling every 10s
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.098370 4901 policy_none.go:43] [cpumanager] none policy: Start
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.099125 4901 manager.go:226] Starting Device Plugin manager
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.099402 4901 manager.go:268] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.099464 4901 plugin_watcher.go:54] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.099523 4901 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.099530 4901 plugin_manager.go:114] Starting Kubelet Plugin Manager
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.102606 4901 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.102692 4901 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /systemd/system.slice
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.126394 4901 kubelet.go:1906] SyncLoop (ADD, "file"): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd), kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846), kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.126440 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.126581 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.127950 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.127974 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.127984 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.131702 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.131761 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.131848 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.131900 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.132855 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.132879 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.132890 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.133343 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.133365 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.133375 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: W0515 15:29:19.133690 4901 status_manager.go:530] Failed to get status for pod "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-scheduler-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.134031 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.134167 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.134316 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.134429 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135135 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135158 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135168 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135520 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135542 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135552 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135836 4901 kubelet.go:1951] SyncLoop (PLEG): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)", event: &pleg.PodLifecycleEvent{ID:"92f790874ae3c4b92635dc49da119ddd", Type:"ContainerStarted", Data:"d4fed961e50fa2588106faee9d7d84f6ecaafca1c783d09f3ab01364cc5f5079"}
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135879 4901 kubelet.go:1951] SyncLoop (PLEG): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)", event: &pleg.PodLifecycleEvent{ID:"92f790874ae3c4b92635dc49da119ddd", Type:"ContainerStarted", Data:"33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871"}
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135898 4901 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"76fe24efc34cd06b01da32779d1c8d3575c2edb2e2f1d0b86e38fdf68d51e925"}
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135914 4901 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516"}
May 15 15:29:19 two-k8sm-0 kubelet[4901]: W0515 15:29:19.135921 4901 status_manager.go:530] Failed to get status for pod "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-apiserver-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135930 4901 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerDied", Data:"e5f918fdce024d45046cfd54456aa68a61444b7704051105caf59935259a16f7"}
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.135962 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.136007 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.136112 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.136142 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137096 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137132 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137142 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137166 4901 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerStarted", Data:"16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21"}
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137237 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137252 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.137261 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: W0515 15:29:19.137574 4901 status_manager.go:530] Failed to get status for pod "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-controller-manager-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.147367 4901 event.go:272] Unable to write event: 'Post https://192.168.1.36:443/api/v1/namespaces/default/events: dial tcp 192.168.1.36:443: connect: connection refused' (may retry after sleeping)
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.186049 4901 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.192644 4901 controller.go:135] failed to ensure node lease exists, will retry in 400ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.286224 4901 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286273 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286323 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286346 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286426 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286448 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286466 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286484 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286502 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286521 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286539 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286557 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.286575 4901 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.287966 4901 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.288150 4901 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.289459 4901 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.289478 4901 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.289487 4901 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.289507 4901 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.289749 4901 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.386363 4901 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386729 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386764 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386784 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386803 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386821 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386839 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386857 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386874 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386891 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386909 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386927 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.386944 4901 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387050 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387133 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387164 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387203 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387242 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387283 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387368 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387419 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387455 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387484 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387458 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: I0515 15:29:19.387508 4901 operation_generator.go:634] MountVolume.SetUp succeeded for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.400019 4901 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: Get https://192.168.1.36:443/apis/storage.k8s.io/v1/csinodes/two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.486499 4901 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.587393 4901 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:29:19 two-k8sm-0 kubelet[4901]: E0515 15:29:19.593384 4901 controller.go:135] failed to ensure node lease exists, will retry in 800ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:20 two-k8sm-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 15 15:30:20 two-k8sm-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 15:30:30 two-k8sm-0 systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 15 15:30:30 two-k8sm-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
May 15 15:30:30 two-k8sm-0 systemd[1]: Stopped Kubernetes Kubelet Server.
May 15 15:30:30 two-k8sm-0 systemd[1]: Started Kubernetes Kubelet Server.
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049147 5361 flags.go:33] FLAG: --add-dir-header="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049222 5361 flags.go:33] FLAG: --address="0.0.0.0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049249 5361 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049258 5361 flags.go:33] FLAG: --alsologtostderr="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049264 5361 flags.go:33] FLAG: --anonymous-auth="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049270 5361 flags.go:33] FLAG: --application-metrics-count-limit="100"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049276 5361 flags.go:33] FLAG: --authentication-token-webhook="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049294 5361 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049302 5361 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049321 5361 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049327 5361 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049332 5361 flags.go:33] FLAG: --azure-container-registry-config=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049339 5361 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049345 5361 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049350 5361 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049355 5361 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049361 5361 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049369 5361 flags.go:33] FLAG: --cgroup-root=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049374 5361 flags.go:33] FLAG: --cgroups-per-qos="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049380 5361 flags.go:33] FLAG: --chaos-chance="0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049387 5361 flags.go:33] FLAG: --client-ca-file=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049392 5361 flags.go:33] FLAG: --cloud-config=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049397 5361 flags.go:33] FLAG: --cloud-provider=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049402 5361 flags.go:33] FLAG: --cluster-dns="[]"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049409 5361 flags.go:33] FLAG: --cluster-domain=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049414 5361 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049420 5361 flags.go:33] FLAG: --cni-cache-dir="/var/lib/cni/cache"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049425 5361 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049430 5361 flags.go:33] FLAG: --config="/etc/kubernetes/kubelet-config.yaml"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049436 5361 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049441 5361 flags.go:33] FLAG: --container-log-max-files="5"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049447 5361 flags.go:33] FLAG: --container-log-max-size="10Mi"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049452 5361 flags.go:33] FLAG: --container-runtime="docker"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049459 5361 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049465 5361 flags.go:33] FLAG: --containerd="/run/containerd/containerd.sock"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049471 5361 flags.go:33] FLAG: --contention-profiling="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049476 5361 flags.go:33] FLAG: --cpu-cfs-quota="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049481 5361 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049486 5361 flags.go:33] FLAG: --cpu-manager-policy="none"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049491 5361 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049496 5361 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049502 5361 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049507 5361 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049512 5361 flags.go:33] FLAG: --docker-only="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049518 5361 flags.go:33] FLAG: --docker-root="/var/lib/docker"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049523 5361 flags.go:33] FLAG: --docker-tls="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049535 5361 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049555 5361 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049573 5361 flags.go:33] FLAG: --docker-tls-key="key.pem"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049580 5361 flags.go:33] FLAG: --dynamic-config-dir=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049586 5361 flags.go:33] FLAG: --enable-cadvisor-json-endpoints="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049593 5361 flags.go:33] FLAG: --enable-controller-attach-detach="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049598 5361 flags.go:33] FLAG: --enable-debugging-handlers="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049603 5361 flags.go:33] FLAG: --enable-load-reader="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049608 5361 flags.go:33] FLAG: --enable-server="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049613 5361 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049620 5361 flags.go:33] FLAG: --event-burst="10"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049625 5361 flags.go:33] FLAG: --event-qps="5"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049630 5361 flags.go:33] FLAG: --event-storage-age-limit="default=0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049636 5361 flags.go:33] FLAG: --event-storage-event-limit="default=0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049641 5361 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049652 5361 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049657 5361 flags.go:33] FLAG: --eviction-minimum-reclaim=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049664 5361 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049669 5361 flags.go:33] FLAG: --eviction-soft=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049677 5361 flags.go:33] FLAG: --eviction-soft-grace-period=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049683 5361 flags.go:33] FLAG: --exit-on-lock-contention="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049688 5361 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049693 5361 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049699 5361 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049704 5361 flags.go:33] FLAG: --experimental-dockershim="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049709 5361 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049714 5361 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049719 5361 flags.go:33] FLAG: --experimental-mounter-path=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049724 5361 flags.go:33] FLAG: --fail-swap-on="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049729 5361 flags.go:33] FLAG: --feature-gates=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049735 5361 flags.go:33] FLAG: --file-check-frequency="20s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049741 5361 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049746 5361 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049751 5361 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049757 5361 flags.go:33] FLAG: --healthz-port="10248"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049764 5361 flags.go:33] FLAG: --help="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049769 5361 flags.go:33] FLAG: --hostname-override="two-k8sm-0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049787 5361 flags.go:33] FLAG: --housekeeping-interval="10s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049793 5361 flags.go:33] FLAG: --http-check-frequency="20s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049798 5361 flags.go:33] FLAG: --image-gc-high-threshold="85"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049803 5361 flags.go:33] FLAG: --image-gc-low-threshold="80"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049821 5361 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049826 5361 flags.go:33] FLAG: --image-service-endpoint=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049832 5361 flags.go:33] FLAG: --iptables-drop-bit="15"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049838 5361 flags.go:33] FLAG: --iptables-masquerade-bit="14"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049843 5361 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049848 5361 flags.go:33] FLAG: --kube-api-burst="10"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049853 5361 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049858 5361 flags.go:33] FLAG: --kube-api-qps="5"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049863 5361 flags.go:33] FLAG: --kube-reserved=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049869 5361 flags.go:33] FLAG: --kube-reserved-cgroup=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049876 5361 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049881 5361 flags.go:33] FLAG: --kubelet-cgroups=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049886 5361 flags.go:33] FLAG: --lock-file=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049891 5361 flags.go:33] FLAG: --log-backtrace-at=":0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049898 5361 flags.go:33] FLAG: --log-cadvisor-usage="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049903 5361 flags.go:33] FLAG: --log-dir=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049908 5361 flags.go:33] FLAG: --log-file=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049913 5361 flags.go:33] FLAG: --log-file-max-size="1800"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049918 5361 flags.go:33] FLAG: --log-flush-frequency="5s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049924 5361 flags.go:33] FLAG: --logtostderr="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049929 5361 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049934 5361 flags.go:33] FLAG: --make-iptables-util-chains="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049939 5361 flags.go:33] FLAG: --manifest-url=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049944 5361 flags.go:33] FLAG: --manifest-url-header=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049951 5361 flags.go:33] FLAG: --master-service-namespace="default"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049957 5361 flags.go:33] FLAG: --max-open-files="1000000"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049965 5361 flags.go:33] FLAG: --max-pods="110"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049970 5361 flags.go:33] FLAG: --maximum-dead-containers="-1"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049975 5361 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049980 5361 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049985 5361 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049990 5361 flags.go:33] FLAG: --network-plugin="cni"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.049995 5361 flags.go:33] FLAG: --network-plugin-mtu="0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050000 5361 flags.go:33] FLAG: --node-ip="192.168.1.36"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050006 5361 flags.go:33] FLAG: --node-labels=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050012 5361 flags.go:33] FLAG: --node-status-max-images="50"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050017 5361 flags.go:33] FLAG: --node-status-update-frequency="10s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050023 5361 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050028 5361 flags.go:33] FLAG: --oom-score-adj="-999"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050033 5361 flags.go:33] FLAG: --pod-cidr=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050038 5361 flags.go:33] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.1"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050044 5361 flags.go:33] FLAG: --pod-manifest-path=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050052 5361 flags.go:33] FLAG: --pod-max-pids="-1"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050057 5361 flags.go:33] FLAG: --pods-per-core="0"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050063 5361 flags.go:33] FLAG: --port="10250"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050068 5361 flags.go:33] FLAG: --protect-kernel-defaults="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050073 5361 flags.go:33] FLAG: --provider-id=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050078 5361 flags.go:33] FLAG: --qos-reserved=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050083 5361 flags.go:33] FLAG: --read-only-port="10255"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050089 5361 flags.go:33] FLAG: --really-crash-for-testing="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050094 5361 flags.go:33] FLAG: --redirect-container-streaming="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050099 5361 flags.go:33] FLAG: --register-node="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050104 5361 flags.go:33] FLAG: --register-schedulable="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050109 5361 flags.go:33] FLAG: --register-with-taints=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050115 5361 flags.go:33] FLAG: --registry-burst="10"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050120 5361 flags.go:33] FLAG: --registry-qps="5"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050125 5361 flags.go:33] FLAG: --reserved-cpus=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050130 5361 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050137 5361 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050142 5361 flags.go:33] FLAG: --rotate-certificates="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050148 5361 flags.go:33] FLAG: --rotate-server-certificates="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050152 5361 flags.go:33] FLAG: --runonce="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050158 5361 flags.go:33] FLAG: --runtime-cgroups="/systemd/system.slice"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050163 5361 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050168 5361 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050173 5361 flags.go:33] FLAG: --serialize-image-pulls="true"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050178 5361 flags.go:33] FLAG: --skip-headers="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050184 5361 flags.go:33] FLAG: --skip-log-headers="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050189 5361 flags.go:33] FLAG: --stderrthreshold="2"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050194 5361 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050199 5361 flags.go:33] FLAG: --storage-driver-db="cadvisor"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050204 5361 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050209 5361 flags.go:33] FLAG: --storage-driver-password="root"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050214 5361 flags.go:33] FLAG: --storage-driver-secure="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050222 5361 flags.go:33] FLAG: --storage-driver-table="stats"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050227 5361 flags.go:33] FLAG: --storage-driver-user="root"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050232 5361 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050238 5361 flags.go:33] FLAG: --sync-frequency="1m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050243 5361 flags.go:33] FLAG: --system-cgroups=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050248 5361 flags.go:33] FLAG: --system-reserved=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050254 5361 flags.go:33] FLAG: --system-reserved-cgroup=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050260 5361 flags.go:33] FLAG: --tls-cert-file=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050265 5361 flags.go:33] FLAG: --tls-cipher-suites="[]"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050273 5361 flags.go:33] FLAG: --tls-min-version=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050278 5361 flags.go:33] FLAG: --tls-private-key-file=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050283 5361 flags.go:33] FLAG: --topology-manager-policy="none"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050288 5361 flags.go:33] FLAG: --v="2"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050293 5361 flags.go:33] FLAG: --version="false"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050300 5361 flags.go:33] FLAG: --vmodule=""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050306 5361 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050313 5361 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.050358 5361 feature_gate.go:243] feature gates: &{map[]}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.051749 5361 feature_gate.go:243] feature gates: &{map[]}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.051793 5361 feature_gate.go:243] feature gates: &{map[]}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061115 5361 mount_linux.go:168] Detected OS with systemd
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061351 5361 server.go:416] Version: v1.17.0
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061418 5361 feature_gate.go:243] feature gates: &{map[]}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061709 5361 feature_gate.go:243] feature gates: &{map[]}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061801 5361 plugins.go:100] No cloud provider specified.
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061822 5361 server.go:532] No cloud provider specified: "" from the config file: ""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.061834 5361 server.go:821] Client rotation is on, will bootstrap in background
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.064058 5361 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.064134 5361 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.064438 5361 server.go:848] Starting client certificate rotation.
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.064460 5361 certificate_manager.go:275] Certificate rotation is enabled.
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.065354 5361 certificate_manager.go:531] Certificate expiration is 2021-05-15 18:47:03 +0000 UTC, rotation deadline is 2021-02-12 17:00:58.942551911 +0000 UTC
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.065381 5361 certificate_manager.go:281] Waiting 6549h30m27.877174397s for next certificate rotation
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.066170 5361 manager.go:146] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.096191 5361 fs.go:125] Filesystem UUIDs: map[0afd049e-abbf-49e2-b847-0e9046cde0cf:/dev/dm-1 2020-01-03-21-30-07-00:/dev/sr0 7fae6dd8-527f-4d7d-936c-bf746c046171:/dev/dm-0 99925032-09b5-4554-9086-bc29de56007c:/dev/sda1 e6241972-79f7-4ac6-8641-d5712ff6037d:/dev/dm-2]
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.096224 5361 fs.go:126] Filesystem partitions: map[/dev/mapper/cl-home:{mountpoint:/home major:253 minor:2 fsType:xfs blockSize:0} /dev/mapper/cl-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:21 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:23 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:96 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm:{mountpoint:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm major:0 minor:49 fsType:tmpfs blockSize:0} /var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm:{mountpoint:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm:{mountpoint:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-45:{mountpoint:/var/lib/docker/overlay2/3886f3d42267aa8d9dd19e817c93d5dc894e535350fd15200e5e2f7d757833a6/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/docker/overlay2/5f2386dd31df6ce041c9a607509a0a7f56a2f1b4c78803b5f1582cf2604f7018/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-47:{mountpoint:/var/lib/docker/overlay2/727ab1d060392df0276aae84fd58fa1cac5b7eacf7c41c10d36ee33d34d69843/merged major:0 minor:47 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/docker/overlay2/cfac2d408bba3f0ab0849ae474dc7c293063abebe40d26762e9065a402e9588a/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/docker/overlay2/76fc19cd5877a98fed3377c691930648fcdb14d5f1eadcdbc9d72094a0f78e9a/merged major:0 minor:77 fsType:overlay blockSize:0}]
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.098025 5361 manager.go:193] Machine: {NumCores:4 CpuFrequency:3000000 MemoryCapacity:8191901696 HugePages:[{PageSize:2048 NumPages:0}] MachineID:9a2992777ad949c1a3078c47702b85ff SystemUUID:9a299277-7ad9-49c1-a307-8c47702b85ff BootID:8112a837-fb0a-4340-b8fb-15ec6b6f8efc Filesystems:[{Device:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:21 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:24 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:1023303680 Type:vfs Inodes:65536 HasInodes:true} {Device:/dev/mapper/cl-home DeviceMajor:253 DeviceMinor:2 Capacity:74140049408 Type:vfs Inodes:36218880 HasInodes:true} {Device:/dev/mapper/cl-root DeviceMajor:253 DeviceMinor:0 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-47 DeviceMajor:0 DeviceMinor:47 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:96 Capacity:819187712 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:23 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:8497659904 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:74176266240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:137438953472 Scheduler:mq-deadline}] NetworkDevices:[{Name:ens18 MacAddress:96:45:9f:36:4d:cb Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:8191901696 HugePages:[{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.098753 5361 manager.go:199] Version: {KernelVersion:4.18.0-147.8.1.el8_1.x86_64 ContainerOsVersion:CentOS Linux 8 (Core) DockerVersion:18.09.9 DockerAPIVersion:1.39 CadvisorVersion: CadvisorRevision:}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.098890 5361 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099237 5361 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099256 5361 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099397 5361 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099405 5361 container_manager_linux.go:305] Creating device plugin manager: true
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099416 5361 manager.go:126] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099435 5361 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1c0d0 0x6e95c50 0x1b1c9a0 map[] map[] map[] map[] map[] 0xc00039a870 [0] 0x6e95c50}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099465 5361 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099534 5361 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099543 5361 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099551 5361 state_checkpoint.go:101] [cpumanager] state checkpoint: restored state from checkpoint
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099558 5361 state_checkpoint.go:102] [cpumanager] state checkpoint: defaultCPUSet:
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099565 5361 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x6e95c50 10000000000 0xc000824c00 <nil> <nil> <nil> <nil> map[cpu:{{200 -3} {<nil>} DecimalSI} memory:{{616857600 0} {<nil>} DecimalSI}] 0x6e95c50}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099625 5361 server.go:1055] Using root directory: /var/lib/kubelet
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099643 5361 kubelet.go:286] Adding pod path: /etc/kubernetes/manifests
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099664 5361 file.go:68] Watching path "/etc/kubernetes/manifests"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.099675 5361 kubelet.go:311] Watching apiserver
May 15 15:30:31 two-k8sm-0 kubelet[5361]: E0515 15:30:31.101288 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:31 two-k8sm-0 kubelet[5361]: E0515 15:30:31.101373 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:31 two-k8sm-0 kubelet[5361]: E0515 15:30:31.101425 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
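
Every list/watch call above is being refused at https://192.168.1.36:443, which points at nothing listening on that endpoint yet rather than a TLS or RBAC problem. A minimal diagnostic sketch, assuming shell access on two-k8sm-0 (the address and port are taken from the log lines above; adjust them if your apiserver endpoint differs):

    $ ss -tlnp | grep -E ':(443|6443)\b'         # is anything bound to the apiserver port?
    $ docker ps -a | grep kube-apiserver         # is the static apiserver container up, exited, or restarting?
    $ curl -k https://192.168.1.36:443/healthz   # any HTTP response means something is listening; "connection refused" means nothing is
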
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.109891 5361 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.109920 5361 client.go:104] Start docker client with request timeout=2m0s
May 15 15:30:31 two-k8sm-0 kubelet[5361]: W0515 15:30:31.111317 5361 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.111335 5361 docker_service.go:240] Hairpin mode set to "hairpin-veth"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: W0515 15:30:31.111422 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:30:31 two-k8sm-0 kubelet[5361]: W0515 15:30:31.116947 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.116978 5361 plugins.go:166] Loaded network plugin "cni"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.117009 5361 docker_service.go:255] Docker cri networking managed by cni
May 15 15:30:31 two-k8sm-0 kubelet[5361]: W0515 15:30:31.117056 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
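
The recurring cni.go:237 warning only means /etc/cni/net.d is still empty. With kubespray the network plugin (Calico, Flannel, etc.) is normally rolled out after the control plane is up, so this warning is expected during init and should clear once the plugin writes its config. A quick check, assuming the default CNI paths:

    $ ls -l /etc/cni/net.d/    # stays empty until the network plugin drops its conflist here
    $ ls /opt/cni/bin/         # CNI binaries staged by kubespray
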
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.126093 5361 docker_service.go:260] Docker Info: &{ID:NZW7:UXGA:JFOA:JV25:5YG7:IPSN:OPFV:3IQO:FYBW:2FAP:6MPA:W7DK Containers:6 ContainersRunning:5 ContainersPaused:0 ContainersStopped:1 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:61 SystemTime:2020-05-15T15:30:31.117993669-04:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-147.8.1.el8_1.x86_64 OperatingSystem:CentOS Linux 8 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00091a690 NCPU:4 MemTotal:8191901696 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:two-k8sm-0 Labels:[] ExperimentalBuild:false ServerVersion:18.09.9 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.126169 5361 docker_service.go:273] Setting cgroupDriver to cgroupfs
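
Per the Docker Info line above, Docker is running with the cgroupfs driver on a host that was detected as systemd earlier in this log. That mismatch is not what is refusing connections here, but if you want both Docker and the kubelet on the systemd driver, a generic (not kubespray-specific) sketch is:

    $ cat /etc/docker/daemon.json   # check the current exec-opts, if any
    # add "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json,
    # set the kubelet's cgroupDriver to systemd to match, then:
    $ systemctl restart docker && systemctl restart kubelet
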
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.126310 5361 kubelet.go:642] Starting the GRPC server for the docker CRI shim.
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.126393 5361 container_manager_linux.go:118] Configure resource-only container "/systemd/system.slice" with memory limit: 5734331187
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.126406 5361 docker_server.go:59] Start dockershim grpc server
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135635 5361 remote_runtime.go:59] parsed scheme: ""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135656 5361 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135687 5361 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135697 5361 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135723 5361 remote_image.go:50] parsed scheme: ""
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135732 5361 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135744 5361 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135751 5361 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135866 5361 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000c00b60, CONNECTING
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.135910 5361 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc00048fb10, CONNECTING
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.136161 5361 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc00048fb10, READY
May 15 15:30:31 two-k8sm-0 kubelet[5361]: I0515 15:30:31.136183 5361 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000c00b60, READY
May 15 15:30:32 two-k8sm-0 kubelet[5361]: E0515 15:30:32.101853 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:32 two-k8sm-0 kubelet[5361]: E0515 15:30:32.102739 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:32 two-k8sm-0 kubelet[5361]: E0515 15:30:32.110068 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:33 two-k8sm-0 kubelet[5361]: E0515 15:30:33.102468 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:33 two-k8sm-0 kubelet[5361]: E0515 15:30:33.103381 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:33 two-k8sm-0 kubelet[5361]: E0515 15:30:33.110481 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:34 two-k8sm-0 kubelet[5361]: E0515 15:30:34.102973 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:34 two-k8sm-0 kubelet[5361]: E0515 15:30:34.103924 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:34 two-k8sm-0 kubelet[5361]: E0515 15:30:34.110907 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:35 two-k8sm-0 kubelet[5361]: E0515 15:30:35.103712 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:35 two-k8sm-0 kubelet[5361]: E0515 15:30:35.104476 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:35 two-k8sm-0 kubelet[5361]: E0515 15:30:35.111223 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:36 two-k8sm-0 kubelet[5361]: E0515 15:30:36.104253 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:36 two-k8sm-0 kubelet[5361]: E0515 15:30:36.105319 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:36 two-k8sm-0 kubelet[5361]: E0515 15:30:36.111643 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:36 two-k8sm-0 kubelet[5361]: W0515 15:30:36.117244 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:30:37 two-k8sm-0 kubelet[5361]: E0515 15:30:37.104913 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:37 two-k8sm-0 kubelet[5361]: E0515 15:30:37.105711 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:37 two-k8sm-0 kubelet[5361]: E0515 15:30:37.112080 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:38 two-k8sm-0 kubelet[5361]: E0515 15:30:38.105471 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:38 two-k8sm-0 kubelet[5361]: E0515 15:30:38.106507 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:38 two-k8sm-0 kubelet[5361]: E0515 15:30:38.112500 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:39 two-k8sm-0 kubelet[5361]: E0515 15:30:39.106058 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:39 two-k8sm-0 kubelet[5361]: E0515 15:30:39.107008 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:39 two-k8sm-0 kubelet[5361]: E0515 15:30:39.112891 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:40 two-k8sm-0 kubelet[5361]: E0515 15:30:40.106650 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:40 two-k8sm-0 kubelet[5361]: E0515 15:30:40.107447 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:40 two-k8sm-0 kubelet[5361]: E0515 15:30:40.113268 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:41 two-k8sm-0 kubelet[5361]: E0515 15:30:41.107193 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:41 two-k8sm-0 kubelet[5361]: E0515 15:30:41.108322 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:41 two-k8sm-0 kubelet[5361]: E0515 15:30:41.113657 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:41 two-k8sm-0 kubelet[5361]: W0515 15:30:41.117429 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:30:42 two-k8sm-0 kubelet[5361]: E0515 15:30:42.107755 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:42 two-k8sm-0 kubelet[5361]: E0515 15:30:42.108748 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:42 two-k8sm-0 kubelet[5361]: E0515 15:30:42.114082 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:43 two-k8sm-0 kubelet[5361]: E0515 15:30:43.108313 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:43 two-k8sm-0 kubelet[5361]: E0515 15:30:43.109201 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:43 two-k8sm-0 kubelet[5361]: E0515 15:30:43.114566 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:44 two-k8sm-0 kubelet[5361]: E0515 15:30:44.108874 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:44 two-k8sm-0 kubelet[5361]: E0515 15:30:44.109719 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:44 two-k8sm-0 kubelet[5361]: E0515 15:30:44.114973 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:45 two-k8sm-0 kubelet[5361]: E0515 15:30:45.109351 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:45 two-k8sm-0 kubelet[5361]: E0515 15:30:45.110311 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:45 two-k8sm-0 kubelet[5361]: E0515 15:30:45.115327 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:46 two-k8sm-0 kubelet[5361]: E0515 15:30:46.109932 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:46 two-k8sm-0 kubelet[5361]: E0515 15:30:46.110833 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:46 two-k8sm-0 kubelet[5361]: E0515 15:30:46.115710 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:46 two-k8sm-0 kubelet[5361]: W0515 15:30:46.118490 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:30:47 two-k8sm-0 kubelet[5361]: E0515 15:30:47.110485 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:47 two-k8sm-0 kubelet[5361]: E0515 15:30:47.111576 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:47 two-k8sm-0 kubelet[5361]: E0515 15:30:47.116010 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:48 two-k8sm-0 kubelet[5361]: E0515 15:30:48.111015 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:48 two-k8sm-0 kubelet[5361]: E0515 15:30:48.111993 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:48 two-k8sm-0 kubelet[5361]: E0515 15:30:48.116416 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:49 two-k8sm-0 kubelet[5361]: E0515 15:30:49.111608 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:49 two-k8sm-0 kubelet[5361]: E0515 15:30:49.112514 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:49 two-k8sm-0 kubelet[5361]: E0515 15:30:49.116741 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:50 two-k8sm-0 kubelet[5361]: E0515 15:30:50.112229 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:50 two-k8sm-0 kubelet[5361]: E0515 15:30:50.113071 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:50 two-k8sm-0 kubelet[5361]: E0515 15:30:50.117080 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.112671 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.113623 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.118171 5361 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: W0515 15:30:51.118784 5361 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.390837 5361 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 15 15:30:51 two-k8sm-0 kubelet[5361]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.391879 5361 kuberuntime_manager.go:211] Container runtime docker initialized, version: 18.09.9, apiVersion: 1.39.0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392148 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/gce-pd"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392164 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/cinder"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392173 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-disk"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392181 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-file"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392189 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/aws-ebs"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392197 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/vsphere-volume"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392209 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/empty-dir"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392217 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/git-repo"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392226 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/host-path"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392234 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/nfs"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392242 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/secret"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392250 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/iscsi"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392260 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/glusterfs"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392272 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/rbd"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392280 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/quobyte"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392288 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/cephfs"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392309 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/downward-api"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392331 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/fc"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392339 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/flocker"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392346 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/configmap"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392355 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/projected"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392369 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/portworx-volume"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392380 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/scaleio"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392388 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/local-volume"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392396 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/storageos"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.392415 5361 plugins.go:629] Loaded volume plugin "kubernetes.io/csi"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.393310 5361 server.go:1113] Started kubelet
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.393362 5361 server.go:143] Starting to listen on 0.0.0.0:10250
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.393805 5361 event.go:272] Unable to write event: 'Post https://192.168.1.36:443/api/v1/namespaces/default/events: dial tcp 192.168.1.36:443: connect: connection refused' (may retry after sleeping)
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.393838 5361 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.394254 5361 server.go:354] Adding debug handlers to kubelet server.
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.395068 5361 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.395435 5361 volume_manager.go:263] The desired_state_of_world populator starts
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.395442 5361 volume_manager.go:265] Starting Kubelet Volume Manager
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.395731 5361 desired_state_of_world_populator.go:138] Desired state populator starts to run
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.395811 5361 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: Get https://192.168.1.36:443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.395973 5361 controller.go:135] failed to ensure node lease exists, will retry in 200ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.398124 5361 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407103 5361 factory.go:356] Registering Docker factory
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407125 5361 factory.go:54] Registering systemd factory
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407224 5361 clientconn.go:104] parsed scheme: "unix"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407236 5361 clientconn.go:104] scheme "unix" not registered, fallback to default scheme
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407253 5361 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407262 5361 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407309 5361 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000e6cae0, CONNECTING
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.407747 5361 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000e6cae0, READY
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.408185 5361 factory.go:137] Registering containerd factory
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.408369 5361 factory.go:101] Registering Raw factory
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.408524 5361 manager.go:1158] Started watching for new ooms in manager
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.409366 5361 manager.go:272] Starting recovery of all containers
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.422248 5361 status_manager.go:157] Starting to sync pod status with apiserver
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.422272 5361 kubelet.go:1820] Starting kubelet main sync loop.
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.422353 5361 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.423384 5361 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.36:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.437551 5361 manager.go:277] Recovery completed
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.495005 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.495155 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.495527 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.495660 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.496142 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496763 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496802 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496817 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496827 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496805 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496847 5361 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496861 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496889 5361 cpu_manager.go:173] [cpumanager] starting with none policy
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496896 5361 cpu_manager.go:174] [cpumanager] reconciling every 10s
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.496902 5361 policy_none.go:43] [cpumanager] none policy: Start
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.497095 5361 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.497742 5361 manager.go:226] Starting Device Plugin manager
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.498247 5361 manager.go:268] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.498384 5361 plugin_watcher.go:54] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.498463 5361 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.498471 5361 plugin_manager.go:114] Starting Kubelet Plugin Manager
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.498576 5361 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /systemd/system.slice
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.498845 5361 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "two-k8sm-0" not found
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.522571 5361 kubelet.go:1906] SyncLoop (ADD, "file"): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625), kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd), kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.522691 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.522835 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.524348 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.524369 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.524381 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.525441 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.525573 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.526332 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.526449 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.527276 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.527308 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.527319 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.527778 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.527797 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.527808 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: W0515 15:30:51.528185 5361 status_manager.go:530] Failed to get status for pod "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-controller-manager-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.528235 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.528380 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.528398 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.528519 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.529848 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.529867 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.529876 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.530085 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.530100 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.530127 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: W0515 15:30:51.530431 5361 status_manager.go:530] Failed to get status for pod "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-scheduler-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531361 5361 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"76fe24efc34cd06b01da32779d1c8d3575c2edb2e2f1d0b86e38fdf68d51e925"}
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531394 5361 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516"}
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531412 5361 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerDied", Data:"07cf52ebad38592b6e404c7be51c5745be522f05706d33993cd61ab7fd735dc4"}
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531447 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531448 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531589 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.531718 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533649 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533722 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533735 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533766 5361 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerStarted", Data:"16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21"}
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533822 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533834 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.533842 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: W0515 15:30:51.534198 5361 status_manager.go:530] Failed to get status for pod "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-apiserver-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.596246 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.596437 5361 controller.go:135] failed to ensure node lease exists, will retry in 400ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696142 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696196 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696218 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696253 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696274 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696293 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696328 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.696344 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696347 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696382 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696476 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696513 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.696546 5361 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.697216 5361 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.697407 5361 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.698789 5361 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.698816 5361 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.698827 5361 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.698845 5361 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.712816 5361 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.796524 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796704 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796737 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796758 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796777 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796817 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796867 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796874 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796871 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796891 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796926 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796929 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796950 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796968 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796986 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.796998 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797005 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797040 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797049 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797050 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797118 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797122 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797209 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797232 5361 reconciler.go:254] operationExecutor.MountVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: I0515 15:30:51.797275 5361 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.898296 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.913472 5361 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: Get https://192.168.1.36:443/apis/storage.k8s.io/v1/csinodes/two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.996827 5361 controller.go:135] failed to ensure node lease exists, will retry in 800ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:30:51 two-k8sm-0 kubelet[5361]: E0515 15:30:51.998421 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:30:52 two-k8sm-0 kubelet[5361]: E0515 15:30:52.098561 5361 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:31:56 two-k8sm-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 15 15:31:56 two-k8sm-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 15:32:06 two-k8sm-0 systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
May 15 15:32:06 two-k8sm-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
May 15 15:32:06 two-k8sm-0 systemd[1]: Stopped Kubernetes Kubelet Server.
May 15 15:32:06 two-k8sm-0 systemd[1]: Started Kubernetes Kubelet Server.
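[note] The systemd lines above show the kubelet unit exiting with status 255 and being restarted for the 19th time, i.e. a crash loop while the API endpoint at 192.168.1.36:443 stays unreachable. A minimal sketch of commands to confirm the loop and see the last lines before each exit (run on two-k8sm-0 with root/sudo; these are standard systemd/journalctl calls, not anything kubespray-specific):

    # Shows Restart counter, last exit code and the most recent unit state
    systemctl status kubelet --no-pager

    # The last kubelet log lines before the previous exit
    journalctl -u kubelet -n 100 --no-pager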
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543425 5548 flags.go:33] FLAG: --add-dir-header="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543545 5548 flags.go:33] FLAG: --address="0.0.0.0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543553 5548 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543562 5548 flags.go:33] FLAG: --alsologtostderr="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543567 5548 flags.go:33] FLAG: --anonymous-auth="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543597 5548 flags.go:33] FLAG: --application-metrics-count-limit="100"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543605 5548 flags.go:33] FLAG: --authentication-token-webhook="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543611 5548 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543617 5548 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543623 5548 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543628 5548 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543634 5548 flags.go:33] FLAG: --azure-container-registry-config=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543639 5548 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543644 5548 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543649 5548 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543654 5548 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543660 5548 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543665 5548 flags.go:33] FLAG: --cgroup-root=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543670 5548 flags.go:33] FLAG: --cgroups-per-qos="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543675 5548 flags.go:33] FLAG: --chaos-chance="0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543682 5548 flags.go:33] FLAG: --client-ca-file=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543687 5548 flags.go:33] FLAG: --cloud-config=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543694 5548 flags.go:33] FLAG: --cloud-provider=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543698 5548 flags.go:33] FLAG: --cluster-dns="[]"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543707 5548 flags.go:33] FLAG: --cluster-domain=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543712 5548 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543717 5548 flags.go:33] FLAG: --cni-cache-dir="/var/lib/cni/cache"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543722 5548 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543727 5548 flags.go:33] FLAG: --config="/etc/kubernetes/kubelet-config.yaml"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543733 5548 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543738 5548 flags.go:33] FLAG: --container-log-max-files="5"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543745 5548 flags.go:33] FLAG: --container-log-max-size="10Mi"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543763 5548 flags.go:33] FLAG: --container-runtime="docker"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543769 5548 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543787 5548 flags.go:33] FLAG: --containerd="/run/containerd/containerd.sock"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543792 5548 flags.go:33] FLAG: --contention-profiling="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543798 5548 flags.go:33] FLAG: --cpu-cfs-quota="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543803 5548 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543810 5548 flags.go:33] FLAG: --cpu-manager-policy="none"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543815 5548 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543820 5548 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543826 5548 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543831 5548 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543836 5548 flags.go:33] FLAG: --docker-only="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543841 5548 flags.go:33] FLAG: --docker-root="/var/lib/docker"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543847 5548 flags.go:33] FLAG: --docker-tls="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543852 5548 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543857 5548 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543862 5548 flags.go:33] FLAG: --docker-tls-key="key.pem"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543868 5548 flags.go:33] FLAG: --dynamic-config-dir=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543874 5548 flags.go:33] FLAG: --enable-cadvisor-json-endpoints="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543879 5548 flags.go:33] FLAG: --enable-controller-attach-detach="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543884 5548 flags.go:33] FLAG: --enable-debugging-handlers="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543889 5548 flags.go:33] FLAG: --enable-load-reader="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543896 5548 flags.go:33] FLAG: --enable-server="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543901 5548 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543908 5548 flags.go:33] FLAG: --event-burst="10"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543913 5548 flags.go:33] FLAG: --event-qps="5"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543919 5548 flags.go:33] FLAG: --event-storage-age-limit="default=0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543924 5548 flags.go:33] FLAG: --event-storage-event-limit="default=0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543929 5548 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543940 5548 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543946 5548 flags.go:33] FLAG: --eviction-minimum-reclaim=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543952 5548 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543958 5548 flags.go:33] FLAG: --eviction-soft=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543963 5548 flags.go:33] FLAG: --eviction-soft-grace-period=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543969 5548 flags.go:33] FLAG: --exit-on-lock-contention="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543974 5548 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543979 5548 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543985 5548 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543992 5548 flags.go:33] FLAG: --experimental-dockershim="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.543997 5548 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544003 5548 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544008 5548 flags.go:33] FLAG: --experimental-mounter-path=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544012 5548 flags.go:33] FLAG: --fail-swap-on="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544017 5548 flags.go:33] FLAG: --feature-gates=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544024 5548 flags.go:33] FLAG: --file-check-frequency="20s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544029 5548 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544034 5548 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544039 5548 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544045 5548 flags.go:33] FLAG: --healthz-port="10248"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544050 5548 flags.go:33] FLAG: --help="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544055 5548 flags.go:33] FLAG: --hostname-override="two-k8sm-0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544060 5548 flags.go:33] FLAG: --housekeeping-interval="10s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544065 5548 flags.go:33] FLAG: --http-check-frequency="20s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544070 5548 flags.go:33] FLAG: --image-gc-high-threshold="85"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544078 5548 flags.go:33] FLAG: --image-gc-low-threshold="80"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544084 5548 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544103 5548 flags.go:33] FLAG: --image-service-endpoint=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544111 5548 flags.go:33] FLAG: --iptables-drop-bit="15"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544118 5548 flags.go:33] FLAG: --iptables-masquerade-bit="14"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544123 5548 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544128 5548 flags.go:33] FLAG: --kube-api-burst="10"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544133 5548 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544139 5548 flags.go:33] FLAG: --kube-api-qps="5"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544144 5548 flags.go:33] FLAG: --kube-reserved=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544149 5548 flags.go:33] FLAG: --kube-reserved-cgroup=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544154 5548 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544160 5548 flags.go:33] FLAG: --kubelet-cgroups=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544165 5548 flags.go:33] FLAG: --lock-file=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544170 5548 flags.go:33] FLAG: --log-backtrace-at=":0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544176 5548 flags.go:33] FLAG: --log-cadvisor-usage="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544184 5548 flags.go:33] FLAG: --log-dir=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544189 5548 flags.go:33] FLAG: --log-file=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544195 5548 flags.go:33] FLAG: --log-file-max-size="1800"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544201 5548 flags.go:33] FLAG: --log-flush-frequency="5s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544207 5548 flags.go:33] FLAG: --logtostderr="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544212 5548 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544218 5548 flags.go:33] FLAG: --make-iptables-util-chains="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544223 5548 flags.go:33] FLAG: --manifest-url=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544228 5548 flags.go:33] FLAG: --manifest-url-header=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544235 5548 flags.go:33] FLAG: --master-service-namespace="default"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544253 5548 flags.go:33] FLAG: --max-open-files="1000000"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544259 5548 flags.go:33] FLAG: --max-pods="110"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544265 5548 flags.go:33] FLAG: --maximum-dead-containers="-1"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544270 5548 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544275 5548 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544280 5548 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544287 5548 flags.go:33] FLAG: --network-plugin="cni"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544292 5548 flags.go:33] FLAG: --network-plugin-mtu="0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544307 5548 flags.go:33] FLAG: --node-ip="192.168.1.36"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544313 5548 flags.go:33] FLAG: --node-labels=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544320 5548 flags.go:33] FLAG: --node-status-max-images="50"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544325 5548 flags.go:33] FLAG: --node-status-update-frequency="10s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544330 5548 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544335 5548 flags.go:33] FLAG: --oom-score-adj="-999"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544340 5548 flags.go:33] FLAG: --pod-cidr=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544345 5548 flags.go:33] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.1"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544350 5548 flags.go:33] FLAG: --pod-manifest-path=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544355 5548 flags.go:33] FLAG: --pod-max-pids="-1"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544361 5548 flags.go:33] FLAG: --pods-per-core="0"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544366 5548 flags.go:33] FLAG: --port="10250"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544371 5548 flags.go:33] FLAG: --protect-kernel-defaults="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544376 5548 flags.go:33] FLAG: --provider-id=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544383 5548 flags.go:33] FLAG: --qos-reserved=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544389 5548 flags.go:33] FLAG: --read-only-port="10255"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544394 5548 flags.go:33] FLAG: --really-crash-for-testing="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544399 5548 flags.go:33] FLAG: --redirect-container-streaming="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544404 5548 flags.go:33] FLAG: --register-node="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544409 5548 flags.go:33] FLAG: --register-schedulable="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544414 5548 flags.go:33] FLAG: --register-with-taints=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544420 5548 flags.go:33] FLAG: --registry-burst="10"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544425 5548 flags.go:33] FLAG: --registry-qps="5"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544431 5548 flags.go:33] FLAG: --reserved-cpus=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544437 5548 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544442 5548 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544447 5548 flags.go:33] FLAG: --rotate-certificates="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544452 5548 flags.go:33] FLAG: --rotate-server-certificates="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544457 5548 flags.go:33] FLAG: --runonce="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544462 5548 flags.go:33] FLAG: --runtime-cgroups="/systemd/system.slice"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544469 5548 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544474 5548 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544480 5548 flags.go:33] FLAG: --serialize-image-pulls="true"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544485 5548 flags.go:33] FLAG: --skip-headers="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544490 5548 flags.go:33] FLAG: --skip-log-headers="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544496 5548 flags.go:33] FLAG: --stderrthreshold="2"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544501 5548 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544506 5548 flags.go:33] FLAG: --storage-driver-db="cadvisor"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544511 5548 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544516 5548 flags.go:33] FLAG: --storage-driver-password="root"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544521 5548 flags.go:33] FLAG: --storage-driver-secure="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544527 5548 flags.go:33] FLAG: --storage-driver-table="stats"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544532 5548 flags.go:33] FLAG: --storage-driver-user="root"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544537 5548 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544542 5548 flags.go:33] FLAG: --sync-frequency="1m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544547 5548 flags.go:33] FLAG: --system-cgroups=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544554 5548 flags.go:33] FLAG: --system-reserved=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544559 5548 flags.go:33] FLAG: --system-reserved-cgroup=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544564 5548 flags.go:33] FLAG: --tls-cert-file=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544569 5548 flags.go:33] FLAG: --tls-cipher-suites="[]"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544577 5548 flags.go:33] FLAG: --tls-min-version=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544582 5548 flags.go:33] FLAG: --tls-private-key-file=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544587 5548 flags.go:33] FLAG: --topology-manager-policy="none"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544592 5548 flags.go:33] FLAG: --v="2"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544597 5548 flags.go:33] FLAG: --version="false"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544604 5548 flags.go:33] FLAG: --vmodule=""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544610 5548 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544615 5548 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.544645 5548 feature_gate.go:243] feature gates: &{map[]}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.546151 5548 feature_gate.go:243] feature gates: &{map[]}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.546194 5548 feature_gate.go:243] feature gates: &{map[]}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555523 5548 mount_linux.go:168] Detected OS with systemd
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555733 5548 server.go:416] Version: v1.17.0
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555791 5548 feature_gate.go:243] feature gates: &{map[]}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555837 5548 feature_gate.go:243] feature gates: &{map[]}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555922 5548 plugins.go:100] No cloud provider specified.
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555940 5548 server.go:532] No cloud provider specified: "" from the config file: ""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.555952 5548 server.go:821] Client rotation is on, will bootstrap in background
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.557548 5548 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.557615 5548 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.557828 5548 server.go:848] Starting client certificate rotation.
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.557846 5548 certificate_manager.go:275] Certificate rotation is enabled.
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.558062 5548 certificate_manager.go:531] Certificate expiration is 2021-05-15 18:47:03 +0000 UTC, rotation deadline is 2021-02-03 14:53:24.309811107 +0000 UTC
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.558092 5548 certificate_manager.go:281] Waiting 6331h21m17.751721774s for next certificate rotation
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.558487 5548 manager.go:146] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.577442 5548 fs.go:125] Filesystem UUIDs: map[0afd049e-abbf-49e2-b847-0e9046cde0cf:/dev/dm-1 2020-01-03-21-30-07-00:/dev/sr0 7fae6dd8-527f-4d7d-936c-bf746c046171:/dev/dm-0 99925032-09b5-4554-9086-bc29de56007c:/dev/sda1 e6241972-79f7-4ac6-8641-d5712ff6037d:/dev/dm-2]
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.577471 5548 fs.go:126] Filesystem partitions: map[/dev/mapper/cl-home:{mountpoint:/home major:253 minor:2 fsType:xfs blockSize:0} /dev/mapper/cl-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:21 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:23 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:96 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm:{mountpoint:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm major:0 minor:49 fsType:tmpfs blockSize:0} /var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm:{mountpoint:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm:{mountpoint:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-45:{mountpoint:/var/lib/docker/overlay2/3886f3d42267aa8d9dd19e817c93d5dc894e535350fd15200e5e2f7d757833a6/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/docker/overlay2/5f2386dd31df6ce041c9a607509a0a7f56a2f1b4c78803b5f1582cf2604f7018/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-47:{mountpoint:/var/lib/docker/overlay2/727ab1d060392df0276aae84fd58fa1cac5b7eacf7c41c10d36ee33d34d69843/merged major:0 minor:47 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/docker/overlay2/cfac2d408bba3f0ab0849ae474dc7c293063abebe40d26762e9065a402e9588a/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/docker/overlay2/76fc19cd5877a98fed3377c691930648fcdb14d5f1eadcdbc9d72094a0f78e9a/merged major:0 minor:77 fsType:overlay blockSize:0}]
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.579240 5548 manager.go:193] Machine: {NumCores:4 CpuFrequency:3000000 MemoryCapacity:8191901696 HugePages:[{PageSize:2048 NumPages:0}] MachineID:9a2992777ad949c1a3078c47702b85ff SystemUUID:9a299277-7ad9-49c1-a307-8c47702b85ff BootID:8112a837-fb0a-4340-b8fb-15ec6b6f8efc Filesystems:[{Device:/dev/mapper/cl-home DeviceMajor:253 DeviceMinor:2 Capacity:74140049408 Type:vfs Inodes:36218880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21/mounts/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:96 Capacity:819187712 Type:vfs Inodes:999988 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:24 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:21 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:23 Capacity:4095950848 Type:vfs Inodes:999988 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:1023303680 Type:vfs Inodes:65536 HasInodes:true} {Device:overlay_0-47 DeviceMajor:0 DeviceMinor:47 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/dev/mapper/cl-root DeviceMajor:253 DeviceMinor:0 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true} {Device:/var/lib/docker/containers/33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871/mounts/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:/var/lib/docker/containers/8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516/mounts/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:999988 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:53660876800 Type:vfs Inodes:26214400 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:8497659904 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:74176266240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:137438953472 Scheduler:mq-deadline}] NetworkDevices:[{Name:ens18 MacAddress:96:45:9f:36:4d:cb Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:8191901696 HugePages:[{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:2 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]} {Id:3 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580037 5548 manager.go:199] Version: {KernelVersion:4.18.0-147.8.1.el8_1.x86_64 ContainerOsVersion:CentOS Linux 8 (Core) DockerVersion:18.09.9 DockerAPIVersion:1.39 CadvisorVersion: CadvisorRevision:}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580105 5548 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580442 5548 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580460 5548 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580544 5548 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580553 5548 container_manager_linux.go:305] Creating device plugin manager: true
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580566 5548 manager.go:126] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580611 5548 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1c0d0 0x6e95c50 0x1b1c9a0 map[] map[] map[] map[] map[] 0xc000822cc0 [0] 0x6e95c50}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580653 5548 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580718 5548 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580727 5548 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580735 5548 state_checkpoint.go:101] [cpumanager] state checkpoint: restored state from checkpoint
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580741 5548 state_checkpoint.go:102] [cpumanager] state checkpoint: defaultCPUSet:
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580747 5548 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0x6e95c50 10000000000 0xc000838a20 <nil> <nil> <nil> <nil> map[cpu:{{200 -3} {<nil>} DecimalSI} memory:{{616857600 0} {<nil>} DecimalSI}] 0x6e95c50}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580824 5548 server.go:1055] Using root directory: /var/lib/kubelet
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580844 5548 kubelet.go:286] Adding pod path: /etc/kubernetes/manifests
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580864 5548 file.go:68] Watching path "/etc/kubernetes/manifests"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.580876 5548 kubelet.go:311] Watching apiserver
May 15 15:32:06 two-k8sm-0 kubelet[5548]: E0515 15:32:06.581855 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:06 two-k8sm-0 kubelet[5548]: E0515 15:32:06.581941 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:06 two-k8sm-0 kubelet[5548]: E0515 15:32:06.582192 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.584385 5548 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.584409 5548 client.go:104] Start docker client with request timeout=2m0s
May 15 15:32:06 two-k8sm-0 kubelet[5548]: W0515 15:32:06.587581 5548 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.587601 5548 docker_service.go:240] Hairpin mode set to "hairpin-veth"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: W0515 15:32:06.587689 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:32:06 two-k8sm-0 kubelet[5548]: W0515 15:32:06.590416 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.590442 5548 plugins.go:166] Loaded network plugin "cni"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.590490 5548 docker_service.go:255] Docker cri networking managed by cni
May 15 15:32:06 two-k8sm-0 kubelet[5548]: W0515 15:32:06.590531 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
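[note] The repeated "no networks found in /etc/cni/net.d" warnings above are usually expected at this stage of a kubespray run: the CNI plugin (Calico/Flannel/etc.) is only deployed after the control plane is up, so the directory stays empty until then. They only point to a real problem if the directory is still empty once the apiserver is reachable. A quick check, assuming shell access on the node:

    # Should contain a CNI config file once the network-plugin daemonset has run;
    # until then the warning above simply repeats and is harmless
    ls -la /etc/cni/net.d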
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.598938 5548 docker_service.go:260] Docker Info: &{ID:NZW7:UXGA:JFOA:JV25:5YG7:IPSN:OPFV:3IQO:FYBW:2FAP:6MPA:W7DK Containers:6 ContainersRunning:5 ContainersPaused:0 ContainersStopped:1 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:61 SystemTime:2020-05-15T15:32:06.591699957-04:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-147.8.1.el8_1.x86_64 OperatingSystem:CentOS Linux 8 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00057e0e0 NCPU:4 MemTotal:8191901696 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:two-k8sm-0 Labels:[] ExperimentalBuild:false ServerVersion:18.09.9 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.599009 5548 docker_service.go:273] Setting cgroupDriver to cgroupfs
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.599103 5548 kubelet.go:642] Starting the GRPC server for the docker CRI shim.
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.599176 5548 container_manager_linux.go:118] Configure resource-only container "/systemd/system.slice" with memory limit: 5734331187
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.599190 5548 docker_server.go:59] Start dockershim grpc server
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.607979 5548 remote_runtime.go:59] parsed scheme: ""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608002 5548 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608022 5548 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608031 5548 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608056 5548 remote_image.go:50] parsed scheme: ""
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608064 5548 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608074 5548 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 <nil>}] <nil>}
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608081 5548 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608431 5548 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc00037e4d0, CONNECTING
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608572 5548 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc00037e820, CONNECTING
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608690 5548 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc00037e4d0, READY
May 15 15:32:06 two-k8sm-0 kubelet[5548]: I0515 15:32:06.608725 5548 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc00037e820, READY
May 15 15:32:07 two-k8sm-0 kubelet[5548]: E0515 15:32:07.582465 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:07 two-k8sm-0 kubelet[5548]: E0515 15:32:07.583260 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:07 two-k8sm-0 kubelet[5548]: E0515 15:32:07.584444 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:08 two-k8sm-0 kubelet[5548]: E0515 15:32:08.583048 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:08 two-k8sm-0 kubelet[5548]: E0515 15:32:08.584119 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:08 two-k8sm-0 kubelet[5548]: E0515 15:32:08.585122 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:09 two-k8sm-0 kubelet[5548]: E0515 15:32:09.583505 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:09 two-k8sm-0 kubelet[5548]: E0515 15:32:09.584633 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:09 two-k8sm-0 kubelet[5548]: E0515 15:32:09.585574 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:10 two-k8sm-0 kubelet[5548]: E0515 15:32:10.584247 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:10 two-k8sm-0 kubelet[5548]: E0515 15:32:10.585088 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:10 two-k8sm-0 kubelet[5548]: E0515 15:32:10.586181 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:11 two-k8sm-0 kubelet[5548]: E0515 15:32:11.584786 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:11 two-k8sm-0 kubelet[5548]: E0515 15:32:11.585911 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:11 two-k8sm-0 kubelet[5548]: E0515 15:32:11.586792 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:11 two-k8sm-0 kubelet[5548]: W0515 15:32:11.590759 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:32:12 two-k8sm-0 kubelet[5548]: E0515 15:32:12.585432 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:12 two-k8sm-0 kubelet[5548]: E0515 15:32:12.586375 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:12 two-k8sm-0 kubelet[5548]: E0515 15:32:12.587356 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:13 two-k8sm-0 kubelet[5548]: E0515 15:32:13.585966 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:13 two-k8sm-0 kubelet[5548]: E0515 15:32:13.586980 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:13 two-k8sm-0 kubelet[5548]: E0515 15:32:13.588105 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:14 two-k8sm-0 kubelet[5548]: E0515 15:32:14.586550 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:14 two-k8sm-0 kubelet[5548]: E0515 15:32:14.587360 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:14 two-k8sm-0 kubelet[5548]: E0515 15:32:14.588590 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:15 two-k8sm-0 kubelet[5548]: E0515 15:32:15.587175 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:15 two-k8sm-0 kubelet[5548]: E0515 15:32:15.587988 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:15 two-k8sm-0 kubelet[5548]: E0515 15:32:15.589123 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:16 two-k8sm-0 kubelet[5548]: E0515 15:32:16.587650 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:16 two-k8sm-0 kubelet[5548]: E0515 15:32:16.588654 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:16 two-k8sm-0 kubelet[5548]: E0515 15:32:16.589812 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:16 two-k8sm-0 kubelet[5548]: W0515 15:32:16.591012 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:32:17 two-k8sm-0 kubelet[5548]: E0515 15:32:17.588230 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:17 two-k8sm-0 kubelet[5548]: E0515 15:32:17.589342 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:17 two-k8sm-0 kubelet[5548]: E0515 15:32:17.590178 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:18 two-k8sm-0 kubelet[5548]: E0515 15:32:18.588749 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:18 two-k8sm-0 kubelet[5548]: E0515 15:32:18.589783 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:18 two-k8sm-0 kubelet[5548]: E0515 15:32:18.590815 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:19 two-k8sm-0 kubelet[5548]: E0515 15:32:19.589393 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:19 two-k8sm-0 kubelet[5548]: E0515 15:32:19.590204 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:19 two-k8sm-0 kubelet[5548]: E0515 15:32:19.591216 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:20 two-k8sm-0 kubelet[5548]: E0515 15:32:20.590029 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:20 two-k8sm-0 kubelet[5548]: E0515 15:32:20.590872 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:20 two-k8sm-0 kubelet[5548]: E0515 15:32:20.592047 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:21 two-k8sm-0 kubelet[5548]: E0515 15:32:21.590653 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:21 two-k8sm-0 kubelet[5548]: W0515 15:32:21.591301 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:32:21 two-k8sm-0 kubelet[5548]: E0515 15:32:21.591622 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:21 two-k8sm-0 kubelet[5548]: E0515 15:32:21.592533 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:22 two-k8sm-0 kubelet[5548]: E0515 15:32:22.591201 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:22 two-k8sm-0 kubelet[5548]: E0515 15:32:22.592155 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:22 two-k8sm-0 kubelet[5548]: E0515 15:32:22.593384 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:23 two-k8sm-0 kubelet[5548]: E0515 15:32:23.591761 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:23 two-k8sm-0 kubelet[5548]: E0515 15:32:23.592716 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:23 two-k8sm-0 kubelet[5548]: E0515 15:32:23.593790 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:24 two-k8sm-0 kubelet[5548]: E0515 15:32:24.592379 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:24 two-k8sm-0 kubelet[5548]: E0515 15:32:24.593345 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:24 two-k8sm-0 kubelet[5548]: E0515 15:32:24.594384 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:25 two-k8sm-0 kubelet[5548]: E0515 15:32:25.592975 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:25 two-k8sm-0 kubelet[5548]: E0515 15:32:25.593880 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:25 two-k8sm-0 kubelet[5548]: E0515 15:32:25.595092 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: W0515 15:32:26.591481 5548 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.593489 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://192.168.1.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.595259 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://192.168.1.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.595525 5548 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://192.168.1.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dtwo-k8sm-0&limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.833432 5548 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 15 15:32:26 two-k8sm-0 kubelet[5548]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834632 5548 kuberuntime_manager.go:211] Container runtime docker initialized, version: 18.09.9, apiVersion: 1.39.0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834880 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/cinder"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834899 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-disk"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834907 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/azure-file"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834915 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/aws-ebs"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834922 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/gce-pd"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834931 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/vsphere-volume"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834942 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/empty-dir"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834950 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/git-repo"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834959 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/host-path"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834967 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/nfs"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834989 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/secret"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.834997 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/iscsi"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835007 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/glusterfs"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835053 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/rbd"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835063 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/quobyte"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835071 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/cephfs"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835078 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/downward-api"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835086 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/fc"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835093 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/flocker"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835105 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/configmap"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835113 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/projected"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835127 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/portworx-volume"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835138 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/scaleio"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835147 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/local-volume"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835157 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/storageos"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.835178 5548 plugins.go:629] Loaded volume plugin "kubernetes.io/csi"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.836076 5548 server.go:1113] Started kubelet
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.836413 5548 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.837344 5548 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.837513 5548 event.go:272] Unable to write event: 'Post https://192.168.1.36:443/api/v1/namespaces/default/events: dial tcp 192.168.1.36:443: connect: connection refused' (may retry after sleeping)
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.837765 5548 server.go:143] Starting to listen on 0.0.0.0:10250
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.839896 5548 server.go:354] Adding debug handlers to kubelet server.
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.845617 5548 volume_manager.go:263] The desired_state_of_world populator starts
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.845630 5548 volume_manager.go:265] Starting Kubelet Volume Manager
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.846022 5548 desired_state_of_world_populator.go:138] Desired state populator starts to run
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.846120 5548 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: Get https://192.168.1.36:443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.846331 5548 controller.go:135] failed to ensure node lease exists, will retry in 200ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.848739 5548 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.848861 5548 clientconn.go:104] parsed scheme: "unix"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.848879 5548 clientconn.go:104] scheme "unix" not registered, fallback to default scheme
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.849101 5548 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.849117 5548 clientconn.go:577] ClientConn switching balancer to "pick_first"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.849155 5548 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000ec5050, CONNECTING
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.849954 5548 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc000ec5050, READY
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.850430 5548 factory.go:137] Registering containerd factory
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.859009 5548 status_manager.go:157] Starting to sync pod status with apiserver
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.859039 5548 kubelet.go:1820] Starting kubelet main sync loop.
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.859078 5548 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.859625 5548 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.36:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.863763 5548 factory.go:356] Registering Docker factory
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.863781 5548 factory.go:54] Registering systemd factory
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.863919 5548 factory.go:101] Registering Raw factory
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.864042 5548 manager.go:1158] Started watching for new ooms in manager
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.864934 5548 manager.go:272] Starting recovery of all containers
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.889536 5548 manager.go:277] Recovery completed
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.946629 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.946771 5548 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.946821 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.948837 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.948859 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.948870 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.948890 5548 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.949178 5548 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.950848 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.950977 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952077 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952100 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952110 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952133 5548 cpu_manager.go:173] [cpumanager] starting with none policy
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952139 5548 cpu_manager.go:174] [cpumanager] reconciling every 10s
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952146 5548 policy_none.go:43] [cpumanager] none policy: Start
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.952843 5548 manager.go:226] Starting Device Plugin manager
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.953143 5548 manager.go:268] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.953211 5548 plugin_watcher.go:54] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.953275 5548 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.953311 5548 plugin_manager.go:114] Starting Kubelet Plugin Manager
May 15 15:32:26 two-k8sm-0 kubelet[5548]: E0515 15:32:26.953608 5548 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "two-k8sm-0" not found
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.953670 5548 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /systemd/system.slice
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.961273 5548 kubelet.go:1906] SyncLoop (ADD, "file"): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846), kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625), kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.961323 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.961672 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.962885 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.962904 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.962913 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.967112 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.967257 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.967391 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.967536 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.968523 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.968587 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.968624 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.968785 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.968802 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.968812 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: W0515 15:32:26.969298 5548 status_manager.go:530] Failed to get status for pod "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-apiserver-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.969725 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.969785 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.969867 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.969918 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.970875 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.970894 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.970905 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.973537 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.973555 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.973564 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.974005 5548 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerDied", Data:"9963a91f8244d439320e63bfa94492c7252498b55f476d2c65e3b3be261dc174"}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.974049 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.974173 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.974327 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.974452 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:26 two-k8sm-0 kubelet[5548]: W0515 15:32:26.974586 5548 status_manager.go:530] Failed to get status for pod "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-controller-manager-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975413 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975436 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975446 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975458 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975473 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975471 5548 kubelet.go:1951] SyncLoop (PLEG): "kube-apiserver-two-k8sm-0_kube-system(a2126bea5a82d3a47ce85181aeeb2846)", event: &pleg.PodLifecycleEvent{ID:"a2126bea5a82d3a47ce85181aeeb2846", Type:"ContainerStarted", Data:"16893b5c3d85677af8ec7ad8505a3298676f2db95cbd173e2fdd09840e41cd21"}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975483 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975493 5548 kubelet.go:1951] SyncLoop (PLEG): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)", event: &pleg.PodLifecycleEvent{ID:"92f790874ae3c4b92635dc49da119ddd", Type:"ContainerStarted", Data:"d4fed961e50fa2588106faee9d7d84f6ecaafca1c783d09f3ab01364cc5f5079"}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975510 5548 kubelet.go:1951] SyncLoop (PLEG): "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)", event: &pleg.PodLifecycleEvent{ID:"92f790874ae3c4b92635dc49da119ddd", Type:"ContainerStarted", Data:"33e25844eee90e6916e9acb7c27238f4f8f393360a68e72f683fb9732cdd1871"}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975529 5548 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"76fe24efc34cd06b01da32779d1c8d3575c2edb2e2f1d0b86e38fdf68d51e925"}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: I0515 15:32:26.975545 5548 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-two-k8sm-0_kube-system(06cefdfd9da3719520e6220b3609d625)", event: &pleg.PodLifecycleEvent{ID:"06cefdfd9da3719520e6220b3609d625", Type:"ContainerStarted", Data:"8b2745938e0f4fee2a85abfa372359aeb32e91c5c934a2e16c478c8f4a77c516"}
May 15 15:32:26 two-k8sm-0 kubelet[5548]: W0515 15:32:26.993570 5548 status_manager.go:530] Failed to get status for pod "kube-scheduler-two-k8sm-0_kube-system(92f790874ae3c4b92635dc49da119ddd)": Get https://192.168.1.36:443/api/v1/namespaces/kube-system/pods/kube-scheduler-two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.046741 5548 controller.go:135] failed to ensure node lease exists, will retry in 400ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.046841 5548 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.046885 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.046912 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.046939 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.046959 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.046978 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.046996 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.047014 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.047036 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.047056 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.047074 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.047092 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.047110 5548 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.147043 5548 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147264 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147322 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147343 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147362 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147382 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147400 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147418 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147436 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147454 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147472 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147490 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147508 5548 reconciler.go:254] operationExecutor.MountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147581 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-ca-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147642 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-ca-trust" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-ca-trust") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147685 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-k8s-certs") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147724 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-etc-pki") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147766 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-flexvolume-dir") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147803 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147835 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etc-pki-tls" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etc-pki-tls") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147877 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "etcd-certs-0" (UniqueName: "kubernetes.io/host-path/a2126bea5a82d3a47ce85181aeeb2846-etcd-certs-0") pod "kube-apiserver-two-k8sm-0" (UID: "a2126bea5a82d3a47ce85181aeeb2846")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147917 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-ca-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147955 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-k8s-certs") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.147997 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/06cefdfd9da3719520e6220b3609d625-kubeconfig") pod "kube-controller-manager-two-k8sm-0" (UID: "06cefdfd9da3719520e6220b3609d625")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.148054 5548 operation_generator.go:634] MountVolume.SetUp succeeded for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/92f790874ae3c4b92635dc49da119ddd-kubeconfig") pod "kube-scheduler-two-k8sm-0" (UID: "92f790874ae3c4b92635dc49da119ddd")
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.149311 5548 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.149474 5548 setters.go:73] Using node IP: "192.168.1.36"
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.150834 5548 kubelet_node_status.go:486] Recording NodeHasSufficientMemory event message for node two-k8sm-0
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.150856 5548 kubelet_node_status.go:486] Recording NodeHasNoDiskPressure event message for node two-k8sm-0
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.150866 5548 kubelet_node_status.go:486] Recording NodeHasSufficientPID event message for node two-k8sm-0
May 15 15:32:27 two-k8sm-0 kubelet[5548]: I0515 15:32:27.150888 5548 kubelet_node_status.go:70] Attempting to register node two-k8sm-0
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.193704 5548 kubelet_node_status.go:92] Unable to register node "two-k8sm-0" with API server: Post https://192.168.1.36:443/api/v1/nodes: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.247270 5548 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.347411 5548 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.393647 5548 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: Get https://192.168.1.36:443/apis/storage.k8s.io/v1/csinodes/two-k8sm-0: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.447526 5548 kubelet.go:2263] node "two-k8sm-0" not found
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.447575 5548 controller.go:135] failed to ensure node lease exists, will retry in 800ms, error: Get https://192.168.1.36:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/two-k8sm-0?timeout=10s: dial tcp 192.168.1.36:443: connect: connection refused
May 15 15:32:27 two-k8sm-0 kubelet[5548]: E0515 15:32:27.547701 5548 kubelet.go:2263] node "two-k8sm-0" not found
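
Every retry above fails the same way: the kubelet on two-k8sm-0 gets "connection refused" when it dials the API server endpoint https://192.168.1.36:443, so node registration, lease renewal, and the CSINode update all fail in turn. A minimal diagnostic sketch follows, assuming shell access on two-k8sm-0; the address and port come straight from the log lines above, while the k8s_kube-apiserver name filter is only an assumption about the kubelet's usual container naming and should be adjusted to whatever docker ps actually shows.

# Minimal diagnostic sketch (assumes shell access on two-k8sm-0; the
# k8s_kube-apiserver name filter is an assumption, adjust to your `docker ps -a` output)
ss -tlnp | grep ':443 '                      # is anything listening on the port the kubelet keeps dialing?
docker ps -a | grep kube-apiserver           # the PLEG events above show the container starting; is it still up?
docker logs --tail 50 \
  "$(docker ps -aq --filter name=k8s_kube-apiserver | head -n 1)"   # why it exits, if it does
curl -k https://192.168.1.36:443/healthz     # probe the exact endpoint from the failed Get requests

If nothing is listening on 443, the kube-apiserver container logs are the next place to look; the kubelet errors above are a symptom of the API server not serving, not the root cause.
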
kube_network_plugin: weave
weave_password: "&%^gvhbjnkhgf$#$%^&"
weave_version: 2.5.2
etcd_metrics: basic
kube_api_anonymous_auth: True
kubeconfig_localhost: True
kubectl_localhost: True
loadbalancer_apiserver_localhost: True
loadbalancer_apiserver_type: nginx
download_container: True
download_run_once: True
download_force_cache: True
download_localhost: True
download_keep_remote_cache: True
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5 --log-driver json-file --log-opt labels=rc,test"
dashboard_enabled: False
metrics_server_enabled: True
metrics_server_version: "v0.3.6"
cert_manager_enabled: False
kube_version: v1.17.0
container_manager: docker
kube_image_repo: "k8s.gcr.io"
gcr_image_repo: "gcr.io"
docker_image_repo: "docker.io"
quay_image_repo: "quay.io"
local_release_dir: /home/centos/etc/kubespray/releases
download_cache_dir: /home/centos/etc/kubespray/cache
etcd_version: v3.3.12
kube_router_version: "v0.4.0"
coredns_version: "1.6.9"
helm_version: "v3.1.0"
kube_apiserver_port: 443
{"docker_insecure_registries":["192.168.1.112:5000"],"kube_network_node_prefix":24,"kube_service_addresses":"10.233.0.0/17","kube_pods_subnet":"10.233.128.0/17","node_labels":{"node-role.kubernetes.io/node":"\"\""}}