@rpothier
Created June 7, 2017 19:58
Kubelet.log with IPv6 CIDR
Flag --enable-cri has been deprecated, The non-CRI implementation will be deprecated and removed in a future version.
Flag --rkt-stage1-image has been deprecated, Will be removed in a future version. The default stage1 image will be specified by the rkt configurations, see https://github.com/coreos/rkt/blob/master/Documentation/configuration.md for more details.
I0607 13:23:23.628530 9970 feature_gate.go:144] feature gates: map[DynamicVolumeProvisioning:true ExperimentalCriticalPodAnnotation:true AllAlpha:true DynamicKubeletConfig:true TaintBasedEvictions:true AffinityInAnnotations:true Accelerators:true]
I0607 13:23:23.640549 9970 server.go:236] Starting Kubelet configuration sync loop
E0607 13:23:23.640568 9970 server.go:410] failed to init dynamic Kubelet configuration sync: cloud provider was nil, and attempt to use hostname to find config resulted in: configmaps "kubelet-127.0.0.1" not found
I0607 13:23:23.640581 9970 plugins.go:101] No cloud provider specified.
I0607 13:23:23.640589 9970 server.go:432] No cloud provider specified: "" from the config file: ""
I0607 13:23:23.642399 9970 docker.go:370] Connecting to docker on unix:///var/run/docker.sock
I0607 13:23:23.642414 9970 docker.go:390] Start docker client with request timeout=2m0s
W0607 13:23:23.672951 9970 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
I0607 13:23:23.676178 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:23.682546 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:23.685712 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:23.686187 9970 manager.go:143] cAdvisor running in container: "/"
W0607 13:23:23.811581 9970 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0607 13:23:23.942427 9970 fs.go:117] Filesystem partitions: map[/dev/mapper/cl-root:{mountpoint:/ major:253 minor:0 fsType:xfs blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:xfs blockSize:0}]
I0607 13:23:23.945019 9970 manager.go:198] Machine: {NumCores:1 CpuFrequency:2294688 MemoryCapacity:3975294976 MachineID:cb525bab91664203b1ee4d288565cc5c SystemUUID:9F4E2005-1D94-4ADF-B58F-8065C25A01CC BootID:cb7252c9-e00e-4398-8330-57b235a5862b Filesystems:[{Device:/dev/mapper/cl-root Capacity:18238930944 Type:vfs Inodes:5357272 HasInodes:true} {Device:/dev/sda1 Capacity:1063256064 Type:vfs Inodes:524288 HasInodes:true}] DiskMap:map[253:1:{Name:dm-1 Major:253 Minor:1 Size:2147483648 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:107374182400 Scheduler:none} 253:3:{Name:dm-3 Major:253 Minor:3 Size:10737418240 Scheduler:none} 253:6:{Name:dm-6 Major:253 Minor:6 Size:10737418240 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:cfq} 253:0:{Name:dm-0 Major:253 Minor:0 Size:18249416704 Scheduler:none}] NetworkDevices:[{Name:cbr0 MacAddress:0a:58:0a:01:00:01 Speed:0 Mtu:1500} {Name:enp0s3 MacAddress:08:00:27:81:d2:99 Speed:1000 Mtu:1500} {Name:virbr0 MacAddress:52:54:00:e8:ba:8f Speed:0 Mtu:1500} {Name:virbr0-nic MacAddress:52:54:00:e8:ba:8f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:4294500352 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0607 13:23:24.048705 9970 manager.go:204] Version: {KernelVersion:3.10.0-514.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.12.5 DockerAPIVersion:1.24 CadvisorVersion: CadvisorRevision:}
I0607 13:23:24.049224 9970 server.go:351] Sending events to api server.
I0607 13:23:24.049273 9970 server.go:512] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
W0607 13:23:24.050892 9970 container_manager_linux.go:217] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I0607 13:23:24.051238 9970 container_manager_linux.go:244] container manager verified user specified cgroup-root exists: /
I0607 13:23:24.051253 9970 container_manager_linux.go:249] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
I0607 13:23:24.051355 9970 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I0607 13:23:24.051391 9970 server.go:805] Using root directory: /var/lib/kubelet
I0607 13:23:24.051412 9970 kubelet.go:254] Adding manifest file: /var/run/kubernetes/static-pods
I0607 13:23:24.051430 9970 file.go:48] Watching path "/var/run/kubernetes/static-pods"
I0607 13:23:24.051435 9970 kubelet.go:264] Watching apiserver
I0607 13:23:24.051459 9970 reflector.go:187] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
I0607 13:23:24.051516 9970 reflector.go:187] Starting reflector *v1.Service (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:370
I0607 13:23:24.051555 9970 reflector.go:187] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:378
I0607 13:23:24.053941 9970 config.go:293] Setting pods for source file
I0607 13:23:24.053998 9970 reflector.go:236] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
I0607 13:23:24.054368 9970 reflector.go:236] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:370
I0607 13:23:24.054545 9970 reflector.go:236] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:378
I0607 13:23:24.059403 9970 config.go:293] Setting pods for source api
I0607 13:23:24.061331 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:24.061956 9970 kubelet.go:482] Hairpin mode set to "promiscuous-bridge"
I0607 13:23:24.064914 9970 iptables.go:367] running iptables -C [POSTROUTING -t nat -m comment --comment kubenet: SNAT for outbound traffic from cluster -m addrtype ! --dst-type LOCAL ! -d 10.0.0.0/8 -j MASQUERADE]
I0607 13:23:24.066800 9970 plugins.go:192] Loaded network plugin "kubenet"
W0607 13:23:24.098811 9970 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
I0607 13:23:24.101764 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:24.128194 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:24.131563 9970 iptables.go:535] couldn't get iptables-restore version; assuming it doesn't support --wait
I0607 13:23:24.134097 9970 iptables.go:367] running iptables -C [POSTROUTING -t nat -m comment --comment kubenet: SNAT for outbound traffic from cluster -m addrtype ! --dst-type LOCAL ! -d 10.0.0.0/8 -j MASQUERADE]
I0607 13:23:24.135881 9970 plugins.go:192] Loaded network plugin "kubenet"
I0607 13:23:24.135908 9970 docker_service.go:209] Docker cri networking managed by kubenet
I0607 13:23:24.169439 9970 docker_service.go:226] Setting cgroupDriver to systemd
I0607 13:23:24.170797 9970 docker_legacy.go:200] Unable to convert legacy container {ID:ff4a313bf3a1ab35e3e1f039cb8b6762e73fdc1aa4d09006acd15a6c8d02d65f Names:[] Image:hello-world ImageID:sha256:48b5124b2768d2b917edcb640435044a97967015485e812545546cbed5cf0233 Command:/hello Created:1496680932 Ports:[] SizeRw:0 SizeRootFs:0 Labels:map[] State:exited Status:Exited (0) 2 days ago HostConfig:{NetworkMode:default} NetworkSettings:0xc420446a38 Mounts:[]}: failed to parse Docker container name "goofy_jennings" into parts
I0607 13:23:24.172279 9970 docker_legacy.go:255] Unable to convert legacy container {ID:ff4a313bf3a1ab35e3e1f039cb8b6762e73fdc1aa4d09006acd15a6c8d02d65f Names:[] Image:hello-world ImageID:sha256:48b5124b2768d2b917edcb640435044a97967015485e812545546cbed5cf0233 Command:/hello Created:1496680932 Ports:[] SizeRw:0 SizeRootFs:0 Labels:map[] State:exited Status:Exited (0) 2 days ago HostConfig:{NetworkMode:default} NetworkSettings:0xc420446ab8 Mounts:[]}: failed to parse Docker container name "goofy_jennings" into parts
I0607 13:23:24.172304 9970 docker_legacy.go:151] No legacy containers found, stop performing legacy cleanup.
I0607 13:23:24.172328 9970 kubelet.go:571] Starting the GRPC server for the docker CRI shim.
I0607 13:23:24.172343 9970 docker_server.go:60] Start dockershim grpc server
I0607 13:23:24.256219 9970 oom_linux.go:65] attempting to set "/proc/7840/oom_score_adj" to "-999"
I0607 13:23:24.256368 9970 oom_linux.go:65] attempting to set "/proc/7843/oom_score_adj" to "-999"
I0607 13:23:24.261562 9970 remote_runtime.go:41] Connecting to runtime service /var/run/dockershim.sock
I0607 13:23:24.261640 9970 remote_image.go:39] Connecting to image service /var/run/dockershim.sock
I0607 13:23:24.261734 9970 plugins.go:56] Registering credential provider: .dockercfg
I0607 13:23:24.291314 9970 kuberuntime_manager.go:166] Container runtime docker initialized, version: 1.12.5, apiVersion: 1.24.0
I0607 13:23:24.291824 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/aws-ebs"
I0607 13:23:24.291838 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/empty-dir"
I0607 13:23:24.291845 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/gce-pd"
I0607 13:23:24.291851 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/git-repo"
I0607 13:23:24.291858 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/host-path"
I0607 13:23:24.291863 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/nfs"
I0607 13:23:24.291872 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/secret"
I0607 13:23:24.291879 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/iscsi"
I0607 13:23:24.291889 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/glusterfs"
I0607 13:23:24.291895 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/rbd"
I0607 13:23:24.291904 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/cinder"
I0607 13:23:24.291910 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/quobyte"
I0607 13:23:24.291915 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/cephfs"
I0607 13:23:24.291925 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/downward-api"
I0607 13:23:24.291932 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/fc"
I0607 13:23:24.291938 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/flocker"
I0607 13:23:24.291945 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/azure-file"
I0607 13:23:24.291951 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/configmap"
I0607 13:23:24.291958 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0607 13:23:24.291964 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/azure-disk"
I0607 13:23:24.291971 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/photon-pd"
I0607 13:23:24.291977 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/projected"
I0607 13:23:24.291983 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/portworx-volume"
I0607 13:23:24.291990 9970 plugins.go:362] Loaded volume plugin "kubernetes.io/scaleio"
I0607 13:23:24.324106 9970 server.go:872] Started kubelet v1.7.0-alpha.2.740+6bd0215b5bc51d-dirty
E0607 13:23:24.324884 9970 kubelet.go:1127] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
I0607 13:23:24.325059 9970 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
I0607 13:23:24.325181 9970 server.go:128] Starting to listen on 127.0.0.1:10250
I0607 13:23:24.325707 9970 server.go:295] Adding debug handlers to kubelet server.
I0607 13:23:24.326748 9970 server.go:145] Starting to listen read-only on 127.0.0.1:10255
I0607 13:23:24.327273 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet.
I0607 13:23:24.332935 9970 kuberuntime_container.go:740] Removing container "7b12a57bd8eb5d7b4a4506b4ff308e05101752ac902d6273a6d0a1174d45ec35"
E0607 13:23:24.517064 9970 kubelet.go:1620] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0607 13:23:24.517098 9970 kubelet.go:1628] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0607 13:23:24.517104 9970 kubelet_node_status.go:379] Recording NodeHasSufficientDisk event message for node 127.0.0.1
I0607 13:23:24.517135 9970 kubelet_node_status.go:379] Recording NodeHasSufficientMemory event message for node 127.0.0.1
I0607 13:23:24.517151 9970 kubelet_node_status.go:379] Recording NodeHasNoDiskPressure event message for node 127.0.0.1
I0607 13:23:24.518380 9970 node_container_manager.go:68] Attempting to enforce Node Allocatable with config: {KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]}
I0607 13:23:24.518977 9970 qos_container_manager_linux.go:286] [ContainerManager]: Updated QoS cgroup configuration
I0607 13:23:24.519047 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node 127.0.0.1 status is now: NodeHasSufficientDisk
I0607 13:23:24.519059 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node 127.0.0.1 status is now: NodeHasSufficientMemory
I0607 13:23:24.519068 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node 127.0.0.1 status is now: NodeHasNoDiskPressure
I0607 13:23:24.519075 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeAllocatableEnforced' Updated Node Allocatable limit across pods
I0607 13:23:24.607612 9970 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0607 13:23:24.607646 9970 status_manager.go:140] Starting to sync pod status with apiserver
I0607 13:23:24.607667 9970 kubelet.go:1700] Starting kubelet main sync loop.
I0607 13:23:24.607684 9970 kubelet.go:1711] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
I0607 13:23:24.607800 9970 container_manager_linux.go:390] [ContainerManager]: Adding periodic tasks for docker CRI integration
W0607 13:23:24.608164 9970 container_manager_linux.go:750] CPUAccounting not enabled for pid: 7840
W0607 13:23:24.608169 9970 container_manager_linux.go:753] MemoryAccounting not enabled for pid: 7840
I0607 13:23:24.608172 9970 container_manager_linux.go:396] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
I0607 13:23:24.608205 9970 oom_linux.go:65] attempting to set "/proc/9970/oom_score_adj" to "-999"
W0607 13:23:24.608241 9970 container_manager_linux.go:750] CPUAccounting not enabled for pid: 9970
W0607 13:23:24.608244 9970 container_manager_linux.go:753] MemoryAccounting not enabled for pid: 9970
I0607 13:23:24.608258 9970 volume_manager.go:242] The desired_state_of_world populator starts
I0607 13:23:24.608261 9970 volume_manager.go:244] Starting Kubelet Volume Manager
I0607 13:23:24.608372 9970 iptables.go:367] running iptables -N [KUBE-MARK-DROP -t nat]
I0607 13:23:24.633476 9970 iptables.go:367] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
I0607 13:23:24.665367 9970 iptables.go:367] running iptables -N [KUBE-FIREWALL -t filter]
I0607 13:23:24.683710 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/da4e5b99481c616cce9c774b8a546b74ad56ecd621f99272a775b5965665901b: non-existent -> exited
I0607 13:23:24.683780 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/728e28a0b2e8f8147dfaa9a258b25b62d98ed46e4d18c105fd036ccad8f70928: non-existent -> exited
I0607 13:23:24.683787 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/7b12a57bd8eb5d7b4a4506b4ff308e05101752ac902d6273a6d0a1174d45ec35: non-existent -> unknown
I0607 13:23:24.683792 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421: non-existent -> running
I0607 13:23:24.683797 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76: non-existent -> running
I0607 13:23:24.685962 9970 iptables.go:367] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
I0607 13:23:24.694502 9970 iptables.go:367] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
I0607 13:23:24.700430 9970 kuberuntime_manager.go:832] getSandboxIDByPodUID got sandbox IDs ["8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"] for pod "kube-dns-336463160-sw697_kube-system(2e65909a-4ba4-11e7-9e49-08002781d299)"
I0607 13:23:24.703998 9970 iptables.go:367] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
I0607 13:23:24.713845 9970 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
I0607 13:23:24.716084 9970 iptables.go:367] running iptables -N [KUBE-MARK-MASQ -t nat]
I0607 13:23:24.734596 9970 iptables.go:367] running iptables -N [KUBE-POSTROUTING -t nat]
I0607 13:23:24.736431 9970 iptables.go:367] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
I0607 13:23:24.748110 9970 iptables.go:367] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I0607 13:23:24.758260 9970 iptables.go:367] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
I0607 13:23:24.795064 9970 kubelet.go:2023] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
E0607 13:23:24.795108 9970 kubelet.go:2026] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
E0607 13:23:24.921196 9970 kubelet.go:1620] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0607 13:23:24.921471 9970 kubelet.go:1628] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0607 13:23:24.921485 9970 kubelet_node_status.go:379] Recording NodeHasSufficientDisk event message for node 127.0.0.1
I0607 13:23:24.921498 9970 kubelet_node_status.go:379] Recording NodeHasSufficientMemory event message for node 127.0.0.1
I0607 13:23:24.921511 9970 kubelet_node_status.go:379] Recording NodeHasNoDiskPressure event message for node 127.0.0.1
I0607 13:23:24.921525 9970 kubelet_node_status.go:78] Attempting to register node 127.0.0.1
I0607 13:23:24.922178 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node 127.0.0.1 status is now: NodeHasSufficientDisk
I0607 13:23:24.922190 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node 127.0.0.1 status is now: NodeHasSufficientMemory
I0607 13:23:24.922196 9970 server.go:349] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"127.0.0.1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node 127.0.0.1 status is now: NodeHasNoDiskPressure
I0607 13:23:24.927566 9970 kubelet_node_status.go:81] Successfully registered node 127.0.0.1
E0607 13:23:25.075690 9970 kubelet.go:1620] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0607 13:23:25.075977 9970 kubelet.go:1628] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0607 13:23:25.181303 9970 kubelet.go:1115] Container garbage collection succeeded
I0607 13:23:25.277734 9970 generic.go:343] PLEG: Write status for kube-dns-336463160-sw697/kube-system: &container.PodStatus{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Name:"kube-dns-336463160-sw697", Namespace:"kube-system", IP:"10.1.0.25", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc420cc1b20), (*container.ContainerStatus)(0xc420cc1dc0), (*container.ContainerStatus)(0xc42116e000)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc420f401e0)}} (err: <nil>)
E0607 13:23:25.300650 9970 factory.go:330] devicemapper filesystem stats will not be reported: usage of thin_ls is disabled to preserve iops
I0607 13:23:25.300674 9970 factory.go:342] Registering Docker factory
W0607 13:23:25.300682 9970 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0607 13:23:25.300690 9970 factory.go:54] Registering systemd factory
I0607 13:23:25.300808 9970 factory.go:86] Registering Raw factory
I0607 13:23:25.300914 9970 manager.go:1106] Started watching for new ooms in manager
I0607 13:23:25.306396 9970 oomparser.go:185] oomparser using systemd
I0607 13:23:25.306725 9970 factory.go:115] Factory "docker" was unable to handle container "/"
I0607 13:23:25.306751 9970 factory.go:104] Error trying to work out if we can handle /: / not handled by systemd handler
I0607 13:23:25.306761 9970 factory.go:115] Factory "systemd" was unable to handle container "/"
I0607 13:23:25.306771 9970 factory.go:111] Using factory "raw" for container "/"
I0607 13:23:25.307373 9970 manager.go:898] Added container: "/" (aliases: [], namespace: "")
I0607 13:23:25.307925 9970 handler.go:325] Added event &{/ 2017-01-10 16:33:46.501 -0500 EST containerCreation {<nil>}}
I0607 13:23:25.307976 9970 manager.go:288] Starting recovery of all containers
I0607 13:23:25.323689 9970 container.go:407] Start housekeeping for container "/"
I0607 13:23:25.326849 9970 factory.go:111] Using factory "docker" for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421.scope"
I0607 13:23:25.477652 9970 manager.go:898] Added container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421.scope" (aliases: [k8s_sidecar_kube-dns-336463160-sw697_kube-system_2e65909a-4ba4-11e7-9e49-08002781d299_0 491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421], namespace: "docker")
I0607 13:23:25.477818 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421.scope 2017-06-07 13:10:36.373330499 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.477847 9970 factory.go:115] Factory "docker" was unable to handle container "/system.slice"
I0607 13:23:25.477855 9970 factory.go:104] Error trying to work out if we can handle /system.slice: /system.slice not handled by systemd handler
I0607 13:23:25.477860 9970 factory.go:115] Factory "systemd" was unable to handle container "/system.slice"
I0607 13:23:25.477865 9970 factory.go:111] Using factory "raw" for container "/system.slice"
I0607 13:23:25.478001 9970 manager.go:898] Added container: "/system.slice" (aliases: [], namespace: "")
I0607 13:23:25.478122 9970 handler.go:325] Added event &{/system.slice 2017-06-05 12:42:13.319751499 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.478134 9970 factory.go:115] Factory "docker" was unable to handle container "/kube-proxy"
I0607 13:23:25.478139 9970 factory.go:104] Error trying to work out if we can handle /kube-proxy: /kube-proxy not handled by systemd handler
I0607 13:23:25.478141 9970 factory.go:115] Factory "systemd" was unable to handle container "/kube-proxy"
I0607 13:23:25.478145 9970 factory.go:111] Using factory "raw" for container "/kube-proxy"
I0607 13:23:25.478277 9970 manager.go:898] Added container: "/kube-proxy" (aliases: [], namespace: "")
I0607 13:23:25.478395 9970 handler.go:325] Added event &{/kube-proxy 2017-06-05 12:46:13.004119499 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.478409 9970 factory.go:115] Factory "docker" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
I0607 13:23:25.478414 9970 factory.go:104] Error trying to work out if we can handle /kubepods.slice/kubepods-besteffort.slice: /kubepods.slice/kubepods-besteffort.slice not handled by systemd handler
I0607 13:23:25.478418 9970 factory.go:115] Factory "systemd" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
I0607 13:23:25.478423 9970 factory.go:111] Using factory "raw" for container "/kubepods.slice/kubepods-besteffort.slice"
I0607 13:23:25.478550 9970 manager.go:898] Added container: "/kubepods.slice/kubepods-besteffort.slice" (aliases: [], namespace: "")
I0607 13:23:25.478652 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice 2017-06-05 12:46:17.830836499 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.478672 9970 factory.go:115] Factory "docker" was unable to handle container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice"
I0607 13:23:25.478679 9970 factory.go:104] Error trying to work out if we can handle /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice not handled by systemd handler
I0607 13:23:25.478682 9970 factory.go:115] Factory "systemd" was unable to handle container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice"
I0607 13:23:25.478689 9970 factory.go:111] Using factory "raw" for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice"
I0607 13:23:25.478931 9970 manager.go:898] Added container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice" (aliases: [], namespace: "")
I0607 13:23:25.479058 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice 2017-06-07 13:10:33.369000499 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.479077 9970 factory.go:115] Factory "docker" was unable to handle container "/user.slice"
I0607 13:23:25.479083 9970 factory.go:104] Error trying to work out if we can handle /user.slice: /user.slice not handled by systemd handler
I0607 13:23:25.479086 9970 factory.go:115] Factory "systemd" was unable to handle container "/user.slice"
I0607 13:23:25.479089 9970 factory.go:111] Using factory "raw" for container "/user.slice"
I0607 13:23:25.479213 9970 manager.go:898] Added container: "/user.slice" (aliases: [], namespace: "")
I0607 13:23:25.479306 9970 handler.go:325] Added event &{/user.slice 2017-06-05 12:42:13.332344499 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.479318 9970 factory.go:115] Factory "docker" was unable to handle container "/kubepods.slice"
I0607 13:23:25.479323 9970 factory.go:104] Error trying to work out if we can handle /kubepods.slice: /kubepods.slice not handled by systemd handler
I0607 13:23:25.479326 9970 factory.go:115] Factory "systemd" was unable to handle container "/kubepods.slice"
I0607 13:23:25.479330 9970 factory.go:111] Using factory "raw" for container "/kubepods.slice"
I0607 13:23:25.479444 9970 manager.go:898] Added container: "/kubepods.slice" (aliases: [], namespace: "")
I0607 13:23:25.479598 9970 handler.go:325] Added event &{/kubepods.slice 2017-06-05 12:46:17.795755999 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.479618 9970 factory.go:115] Factory "docker" was unable to handle container "/kubepods.slice/kubepods-burstable.slice"
I0607 13:23:25.479624 9970 factory.go:104] Error trying to work out if we can handle /kubepods.slice/kubepods-burstable.slice: /kubepods.slice/kubepods-burstable.slice not handled by systemd handler
I0607 13:23:25.479627 9970 factory.go:115] Factory "systemd" was unable to handle container "/kubepods.slice/kubepods-burstable.slice"
I0607 13:23:25.479631 9970 factory.go:111] Using factory "raw" for container "/kubepods.slice/kubepods-burstable.slice"
I0607 13:23:25.479761 9970 manager.go:898] Added container: "/kubepods.slice/kubepods-burstable.slice" (aliases: [], namespace: "")
I0607 13:23:25.479862 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice 2017-06-05 12:46:17.8182435 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.479977 9970 container.go:407] Start housekeeping for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421.scope"
I0607 13:23:25.481704 9970 container.go:407] Start housekeeping for container "/system.slice"
I0607 13:23:25.482864 9970 container.go:407] Start housekeeping for container "/kube-proxy"
I0607 13:23:25.483913 9970 container.go:407] Start housekeeping for container "/kubepods.slice/kubepods-besteffort.slice"
I0607 13:23:25.484528 9970 container.go:407] Start housekeeping for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice"
I0607 13:23:25.485122 9970 container.go:407] Start housekeeping for container "/user.slice"
I0607 13:23:25.485801 9970 container.go:407] Start housekeeping for container "/kubepods.slice"
I0607 13:23:25.486432 9970 container.go:407] Start housekeeping for container "/kubepods.slice/kubepods-burstable.slice"
I0607 13:23:25.487506 9970 factory.go:111] Using factory "docker" for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76.scope"
I0607 13:23:25.651901 9970 manager.go:898] Added container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76.scope" (aliases: [k8s_POD_kube-dns-336463160-sw697_kube-system_2e65909a-4ba4-11e7-9e49-08002781d299_0 8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76], namespace: "docker")
I0607 13:23:25.652060 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76.scope 2017-06-07 13:10:34.138072999 -0400 EDT containerCreation {<nil>}}
I0607 13:23:25.652083 9970 manager.go:293] Recovery completed
I0607 13:23:25.678082 9970 container.go:407] Start housekeeping for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76.scope"
I0607 13:23:25.683008 9970 eviction_manager.go:198] eviction manager: synchronize housekeeping
I0607 13:23:25.695286 9970 summary.go:389] Missing default interface "eth0" for node:127.0.0.1
I0607 13:23:25.695356 9970 helpers.go:747] eviction manager: observations: signal=memory.available, available: 1677116Ki, capacity: 3882124Ki, time: 2017-06-07 13:23:25.324267407 -0400 EDT
I0607 13:23:25.695386 9970 helpers.go:747] eviction manager: observations: signal=nodefs.available, available: 2585392Ki, capacity: 17394Mi, time: 2017-06-07 13:23:25.324267407 -0400 EDT
I0607 13:23:25.695392 9970 helpers.go:747] eviction manager: observations: signal=nodefs.inodesFree, available: 5175203, capacity: 5369888, time: 2017-06-07 13:23:25.324267407 -0400 EDT
I0607 13:23:25.695398 9970 helpers.go:747] eviction manager: observations: signal=imagefs.available, available: 2585392Ki, capacity: 17394Mi, time: 2017-06-07 13:23:25.324267407 -0400 EDT
I0607 13:23:25.695403 9970 helpers.go:747] eviction manager: observations: signal=imagefs.inodesFree, available: 5175203, capacity: 5369888, time: 2017-06-07 13:23:25.324267407 -0400 EDT
I0607 13:23:25.695408 9970 helpers.go:749] eviction manager: observations: signal=allocatableMemory.available, available: 3767552Ki, capacity: 3779724Ki
I0607 13:23:25.695424 9970 eviction_manager.go:293] eviction manager: no resources are starved
I0607 13:23:26.280306 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/7b12a57bd8eb5d7b4a4506b4ff308e05101752ac902d6273a6d0a1174d45ec35: unknown -> non-existent
I0607 13:23:26.281068 9970 kuberuntime_manager.go:832] getSandboxIDByPodUID got sandbox IDs ["8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"] for pod "kube-dns-336463160-sw697_kube-system(2e65909a-4ba4-11e7-9e49-08002781d299)"
I0607 13:23:26.288393 9970 generic.go:343] PLEG: Write status for kube-dns-336463160-sw697/kube-system: &container.PodStatus{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Name:"kube-dns-336463160-sw697", Namespace:"kube-system", IP:"10.1.0.25", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42130e700), (*container.ContainerStatus)(0xc42130e9a0), (*container.ContainerStatus)(0xc42130eb60)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc421308f00)}} (err: <nil>)
I0607 13:23:29.607889 9970 kubelet.go:1767] SyncLoop (ADD, "file"): ""
I0607 13:23:29.607945 9970 kubelet.go:1767] SyncLoop (ADD, "api"): ""
I0607 13:23:29.607978 9970 kubelet.go:1843] SyncLoop (housekeeping)
I0607 13:23:29.613072 9970 kubelet_pods.go:917] Killing unwanted pod "kube-dns-336463160-sw697"
I0607 13:23:29.616528 9970 kubelet_pods.go:1575] Orphaned pod "2e65909a-4ba4-11e7-9e49-08002781d299" found, removing pod cgroups
I0607 13:23:29.616566 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerDied", Data:"da4e5b99481c616cce9c774b8a546b74ad56ecd621f99272a775b5965665901b"}
I0607 13:23:29.616599 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerDied", Data:"728e28a0b2e8f8147dfaa9a258b25b62d98ed46e4d18c105fd036ccad8f70928"}
I0607 13:23:29.616610 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerStarted", Data:"491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421"}
I0607 13:23:29.616624 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerStarted", Data:"8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"}
I0607 13:23:29.616633 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerDied", Data:"7b12a57bd8eb5d7b4a4506b4ff308e05101752ac902d6273a6d0a1174d45ec35"}
W0607 13:23:29.616643 9970 pod_container_deletor.go:77] Container "7b12a57bd8eb5d7b4a4506b4ff308e05101752ac902d6273a6d0a1174d45ec35" not found in pod's containers
I0607 13:23:29.620477 9970 pod_container_manager_linux.go:140] Attempt to kill process with pid: 6198
I0607 13:23:29.621291 9970 pod_container_manager_linux.go:140] Attempt to kill process with pid: 6481
I0607 13:23:29.621310 9970 pod_container_manager_linux.go:147] successfully killed all unwanted processes.
I0607 13:23:29.621601 9970 kuberuntime_container.go:535] Killing container "docker://491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421" with 30 second grace period
I0607 13:23:29.736584 9970 manager.go:955] Destroyed container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421.scope" (aliases: [k8s_sidecar_kube-dns-336463160-sw697_kube-system_2e65909a-4ba4-11e7-9e49-08002781d299_0 491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421], namespace: "docker")
I0607 13:23:29.736650 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421.scope 2017-06-07 13:23:29.736642569 -0400 EDT containerDeletion {<nil>}}
I0607 13:23:29.736672 9970 manager.go:955] Destroyed container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76.scope" (aliases: [k8s_POD_kube-dns-336463160-sw697_kube-system_2e65909a-4ba4-11e7-9e49-08002781d299_0 8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76], namespace: "docker")
I0607 13:23:29.736683 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice/docker-8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76.scope 2017-06-07 13:23:29.736681849 -0400 EDT containerDeletion {<nil>}}
I0607 13:23:29.736693 9970 manager.go:955] Destroyed container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice" (aliases: [], namespace: "")
I0607 13:23:29.736700 9970 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e65909a_4ba4_11e7_9e49_08002781d299.slice 2017-06-07 13:23:29.736698252 -0400 EDT containerDeletion {<nil>}}
I0607 13:23:29.885671 9970 kuberuntime_container.go:554] Container "docker://491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421" exited normally
W0607 13:23:29.885702 9970 kuberuntime_container.go:436] No ref for container {"docker" "491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421"}
I0607 13:23:29.932755 9970 plugins.go:410] Calling network plugin kubenet to tear down pod "kube-dns-336463160-sw697_kube-system"
I0607 13:23:29.932785 9970 kubenet_linux.go:526] TearDownPod took 1.718µs for kube-system/kube-dns-336463160-sw697
E0607 13:23:29.940134 9970 remote_runtime.go:109] StopPodSandbox "8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin kubenet failed to teardown pod "kube-dns-336463160-sw697_kube-system" network: Kubenet needs a PodCIDR to tear down pods
E0607 13:23:29.940197 9970 kuberuntime_manager.go:779] Failed to stop sandbox {"docker" "8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"}
E0607 13:23:29.940220 9970 kubelet_pods.go:920] Failed killing the pod "kube-dns-336463160-sw697": failed to "KillPodSandbox" for "2e65909a-4ba4-11e7-9e49-08002781d299" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin kubenet failed to teardown pod \"kube-dns-336463160-sw697_kube-system\" network: Kubenet needs a PodCIDR to tear down pods"
I0607 13:23:30.299525 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421: running -> exited
I0607 13:23:30.299551 9970 generic.go:146] GenericPLEG: 2e65909a-4ba4-11e7-9e49-08002781d299/8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76: running -> exited
I0607 13:23:30.300209 9970 kuberuntime_manager.go:832] getSandboxIDByPodUID got sandbox IDs ["8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"] for pod "kube-dns-336463160-sw697_kube-system(2e65909a-4ba4-11e7-9e49-08002781d299)"
I0607 13:23:30.309318 9970 generic.go:343] PLEG: Write status for kube-dns-336463160-sw697/kube-system: &container.PodStatus{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Name:"kube-dns-336463160-sw697", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc42171f500), (*container.ContainerStatus)(0xc42171f7a0), (*container.ContainerStatus)(0xc42171f960)}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4216f3e50)}} (err: <nil>)
I0607 13:23:30.309474 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerDied", Data:"491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421"}
I0607 13:23:30.309500 9970 kubelet.go:1805] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"2e65909a-4ba4-11e7-9e49-08002781d299", Type:"ContainerDied", Data:"8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"}
W0607 13:23:30.309510 9970 pod_container_deletor.go:77] Container "8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76" not found in pod's containers
I0607 13:23:30.608076 9970 kubelet.go:1843] SyncLoop (housekeeping)
I0607 13:23:30.610796 9970 kubelet_pods.go:917] Killing unwanted pod "kube-dns-336463160-sw697"
E0607 13:23:30.613902 9970 kubelet_volumes.go:129] Orphaned pod "2e65909a-4ba4-11e7-9e49-08002781d299" found, but volume paths are still present on disk. : There were a total of 1 errors similar to this. Turn up verbosity to see them.
I0607 13:23:30.613983 9970 kuberuntime_container.go:535] Killing container "docker://491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421" with 30 second grace period
I0607 13:23:30.615473 9970 kuberuntime_container.go:554] Container "docker://491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421" exited normally
W0607 13:23:30.615490 9970 kuberuntime_container.go:436] No ref for container {"docker" "491e006f99a5e869ab58e808e1d6ae7b9eb5f738b7d35958e6640285e5e70421"}
I0607 13:23:30.616464 9970 plugins.go:410] Calling network plugin kubenet to tear down pod "kube-dns-336463160-sw697_kube-system"
I0607 13:23:30.616480 9970 kubenet_linux.go:526] TearDownPod took 1.398µs for kube-system/kube-dns-336463160-sw697
E0607 13:23:30.617617 9970 remote_runtime.go:109] StopPodSandbox "8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin kubenet failed to teardown pod "kube-dns-336463160-sw697_kube-system" network: Kubenet needs a PodCIDR to tear down pods
E0607 13:23:30.617645 9970 kuberuntime_manager.go:779] Failed to stop sandbox {"docker" "8f269bc1c121b97d7e8c37a73b414f465ddb6772faffb613c21e5fb3af700f76"}
E0607 13:23:30.617664 9970 kubelet_pods.go:920] Failed killing the pod "kube-dns-336463160-sw697": failed to "KillPodSandbox" for "2e65909a-4ba4-11e7-9e49-08002781d299" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin kubenet failed to teardown pod \"kube-dns-336463160-sw697_kube-system\" network: Kubenet needs a PodCIDR to tear down pods"
I0607 13:23:30.718764 9970 kubelet.go:2023] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
E0607 13:23:30.718800 9970 kubelet.go:2026] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
I0607 13:23:32.608120 9970 kubelet.go:1843] SyncLoop (housekeeping)
E0607 13:23:32.613830 9970 kubelet_volumes.go:129] Orphaned pod "2e65909a-4ba4-11e7-9e49-08002781d299" found, but volume paths are still present on disk. : There were a total of 1 errors similar to this. Turn up verbosity to see them.
I0607 13:23:34.620092 9970 kubelet.go:1843] SyncLoop (housekeeping)
E0607 13:23:34.624692 9970 kubelet_volumes.go:129] Orphaned pod "2e65909a-4ba4-11e7-9e49-08002781d299" found, but volume paths are still present on disk. : There were a total of 1 errors similar to this. Turn up verbosity to see them.
I0607 13:23:35.083506 9970 kuberuntime_manager.go:897] updating runtime config through cri with podcidr 2001:beef::/64
I0607 13:23:35.083661 9970 docker_service.go:304] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:2001:beef::/64,},}
panic: runtime error: index out of range
goroutine 625 [running]:
k8s.io/kubernetes/pkg/kubelet/network/kubenet.(*kubenetNetworkPlugin).Event(0xc4203bf440, 0x462ebd1, 0xf, 0xc4219deae0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/network/kubenet/kubenet_linux.go:260 +0x889
k8s.io/kubernetes/pkg/kubelet/network.(*PluginManager).Event(0xc4203e9aa0, 0x462ebd1, 0xf, 0xc4219deae0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/network/plugins.go:323 +0x51
k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).UpdateRuntimeConfig(0xc420b634a0, 0xc420447df0, 0x1, 0xc420447de8)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_service.go:308 +0x1ac
k8s.io/kubernetes/pkg/kubelet/dockershim/remote.(*dockerService).UpdateRuntimeConfig(0xc42024a860, 0x7f8aa5459b70, 0xc4219dea20, 0xc420447de8, 0xc42024a860, 0x42be0e, 0x0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/remote/docker_service.go:180 +0x50
k8s.io/kubernetes/pkg/kubelet/api/v1alpha1/runtime._RuntimeService_UpdateRuntimeConfig_Handler(0x452f2a0, 0xc42024a860, 0x7f8aa5459b70, 0xc4219dea20, 0xc4219dc280, 0x0, 0x0, 0x0, 0x360e501, 0x0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/api/v1alpha1/runtime/api.pb.go:3635 +0x290
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4201ab380, 0x826a860, 0xc4206b2cf0, 0xc421913ef0, 0xc4201d4690, 0x820fd80, 0xc4219de9f0, 0x0, 0x0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:638 +0xb5c
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4201ab380, 0x826a860, 0xc4206b2cf0, 0xc421913ef0, 0xc4219de9f0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:796 +0x1261
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4202f6b50, 0xc4201ab380, 0x826a860, 0xc4206b2cf0, 0xc421913ef0)
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:449 +0xa9
created by k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/root/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:450 +0xa1
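
Note on the crash above: the kubelet runs without incident until the CRI pushes the IPv6 PodCIDR 2001:beef::/64 into kubenet (the two log lines just before the panic), and kubenet's Event handler then dies with an index-out-of-range at kubenet_linux.go:260, which evidently assumes an IPv4 PodCIDR at that point. The standalone Go sketch below is not the kubelet source; it only illustrates, under that assumption, how IPv4-only handling reproduces the same runtime error: net.IP.To4() returns nil for an IPv6 address, and indexing the resulting nil slice panics. The [3] index and the +1 gateway step are illustrative guesses, not verified kubelet code.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Parse the PodCIDR that the kubelet received over the CRI.
	_, cidr, err := net.ParseCIDR("2001:beef::/64")
	if err != nil {
		panic(err)
	}

	// net.IP.To4() returns nil when the address is not IPv4,
	// so for an IPv6 CIDR this yields a nil slice.
	ip4 := cidr.IP.To4()
	fmt.Println("To4() result:", ip4) // prints "<nil>"

	// Hypothetical IPv4-style gateway derivation (bump the last octet).
	// Indexing the nil slice panics with "runtime error: index out of
	// range", matching the stack trace above.
	ip4[3]++
	fmt.Println("gateway:", ip4) // never reached
}

A kubenet fix would presumably need to branch on address family (or derive the bridge/gateway address with a family-agnostic helper) rather than calling To4() unconditionally on the PodCIDR.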