Cluster autoscaler logs:
kubectl -n kube-system logs --selector="app.kubernetes.io/name=aws-cluster-autoscaler"
Pods that are not in the Running phase, sorted by phase:
kubectl get pods --all-namespaces --field-selector status.phase!=Running --sort-by status.phase
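To get a quick count of pods per phase instead of a long listing, the same data can be aggregated with jq (a sketch, assuming jq is installed; the field paths match the JSON kubectl emits):

```shell
# Count pods in each phase across all namespaces
kubectl get pods --all-namespaces -o json \
  | jq '[ .items[].status.phase ] | group_by(.) | map({ phase: .[0], count: length })'
```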
Get only the tainted nodes, along with their taints:
kubectl get nodes -o json | jq '.items[] | select( .spec.taints ) | { name: .metadata.name, taints: .spec.taints }'
{
  "name": "ip-10-17-0-xxx.us-east-2.compute.internal",
  "taints": [
    {
      "effect": "NoExecute",
      "key": "node.konghq.com/workload-isolation-group",
      "value": "kong-cp"
    },
    {
      "effect": "NoSchedule",
      "key": "node.konghq.com/workload-isolation-group",
      "value": "kong-cp"
    }
  ]
}
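The same filter can be collapsed to one line per node by joining each taint into key=value:effect form (a sketch of the jq expression only; node names and taints will differ per cluster):

```shell
# One line per tainted node: "<node> key=value:effect,..."
kubectl get nodes -o json \
  | jq -r '.items[] | select( .spec.taints ) | .metadata.name + " " + ( .spec.taints | map( .key + "=" + .value + ":" + .effect ) | join(",") )'
```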
Show only the untainted nodes:
kubectl get nodes --sort-by .metadata.creationTimestamp | grep -wvf <( kubectl get nodes -o json | jq -r '.items[] | select( .spec.taints ) | .metadata.name' )
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-17-35-xxx.us-east-2.compute.internal   Ready    <none>   9d    v1.19.15-eks-9c63c4
ip-10-17-21-xxx.us-east-2.compute.internal   Ready    <none>   9d    v1.19.15-eks-9c63c4
ip-10-17-26-xxx.us-east-2.compute.internal   Ready    <none>   9d    v1.19.15-eks-9c63c4
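An alternative that stays entirely inside jq, selecting nodes with no .spec.taints instead of inverting the tainted list with grep -wvf (a sketch; the creation-timestamp sort is dropped):

```shell
# Untainted node names only, no grep inversion needed
kubectl get nodes -o json \
  | jq -r '.items[] | select( .spec.taints | not ) | .metadata.name'
```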