TS=$(date -u +"%Y-%m-%d_%H%M")
pprofs=( goroutine heap threadcreate block mutex )
for pod in $(kubectl -n cattle-system get pods --no-headers -l app=rancher -o custom-columns=":.metadata.name"); do
  echo "getting profiles for $pod..."
  for pp in "${pprofs[@]}"; do
    echo "--> generating $pp..."
    kubectl -n cattle-system exec "$pod" -c rancher -- curl -s "http://localhost:6060/debug/pprof/$pp" -o "$pp"
  done
done
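The profiles above land inside the container under their bare names. A sketch for copying them back to the workstation afterwards (the pod name and destination are placeholders, and the in-container path is an assumption):

```shell
# Copy the generated pprof files out of a rancher pod.
# Assumption: the profiles were written to the container's working directory.
collect_profiles() {
  local pod="$1" ts
  ts=$(date -u +"%Y-%m-%d_%H%M")
  for pp in goroutine heap threadcreate block mutex; do
    kubectl -n cattle-system cp "${pod}:${pp}" "./${pod}_${pp}_${ts}" -c rancher
  done
}
# Example (pod name is a placeholder):
#   collect_profiles rancher-6d9c7c8f4-abcde
# Then inspect locally, e.g.: go tool pprof -top ./<pod>_goroutine_<ts>
```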
WireGuard can be enabled using just a Calico felixconfiguration CRD patch, instead of a vxlan tunnel.
calicoctl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true}}'
This should only be enabled at the direction of your professional services consultant, on an as-needed basis, after a thorough examination of the specific environmental factors.
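To confirm WireGuard actually came up after the patch, a quick check (a sketch; the node annotation name is an assumption based on what Calico publishes the WireGuard public key under, and the node name is a placeholder):

```shell
# Verify WireGuard status after patching felixconfiguration.
check_wireguard() {
  local node="$1"
  # The Felix setting that was just patched:
  calicoctl get felixconfiguration default -o yaml | grep -i wireguard
  # Assumption: Calico records the node's WireGuard public key in this annotation.
  kubectl get node "$node" -o jsonpath='{.metadata.annotations.projectcalico\.org/WireguardPublicKey}{"\n"}'
}
# Example (node name is a placeholder): check_wireguard lab-node-1
```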
This took a bit of digging to figure out. As it turns out, my lab ingress has evolved a little. This env may have had several ingress classes in the past.
- alpha env behind a LB with Let's Encrypt enabled
- the error is misleading: the api-resources are OK & pathType IS specified
- helm install works OK but won't touch the ingress; release status stays failed
- even backing up and deleting the ingress, to make it net-new, fails
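Since this env may have had several ingress classes in the past, one diagnostic worth running compares the legacy annotation against the newer spec field (a sketch; the mismatch theory is an assumption, not a confirmed root cause):

```shell
# Compare the legacy ingress-class annotation with the newer spec.ingressClassName.
# A stale or mismatched value is one plausible reason helm won't adopt an
# existing ingress (an assumption, not a confirmed root cause here).
ingress_classes() {
  kubectl get ingressclass
  kubectl get ingress -A -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,ANNOTATION:.metadata.annotations.kubernetes\.io/ingress\.class,CLASS:.spec.ingressClassName'
}
# Example: ingress_classes
```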
... or readlink might require a flag like -f, and/or the which command might not include root's $PATH, so perhaps run under sudo.
This is a little hacky; a better bet for your operations might be NeuVector.
readlink $(awk '/liblzma.so.5/{print $3}' <(ldd $(which sshd))) | grep -qE "liblzma.so.5.6.0|liblzma.so.5.6.1" && echo "Affected by CVE-2024-3094" || echo "Not affected."
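A simpler cross-check against the installed xz itself (a sketch with a caveat: it checks only the upstream version string, which distro backports can make misleading, so the library check above is more direct):

```shell
# Check the installed xz-utils version for the known-bad releases.
xz_affected() {
  local v
  v=$(xz --version 2>/dev/null | awk 'NR==1{print $NF}')
  case "$v" in
    5.6.0|5.6.1) echo "Affected by CVE-2024-3094" ;;
    "")          echo "xz not found" ;;
    *)           echo "Not affected ($v)" ;;
  esac
}
# Example: xz_affected
```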
Recent versions of MetalLB include FRR as a software router.
If you are comfortable with a cisco-like CLI, try it out ...
kubectl exec -it -n metallb-system ds/speaker -c frr -- vtysh
Ask your qualified consultant to analyze the specifics of the FRR configuration. The above applies to the manifest install only.
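For one-shot queries without an interactive session, vtysh takes -c; a couple of starting points (a sketch for peeking at state, not a substitute for that config analysis):

```shell
# One-shot FRR state queries against the MetalLB speaker daemonset.
frr_peek() {
  kubectl exec -n metallb-system ds/speaker -c frr -- vtysh -c "show running-config"
  kubectl exec -n metallb-system ds/speaker -c frr -- vtysh -c "show bgp summary"
}
# Example: frr_peek
```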
iostat -o JSON | jq -r '.[] | paths | select(length>=2) | map(tostring) | join(".")'
... (wip)
Taking notes from here: https://unix.stackexchange.com/questions/549570/list-all-keys-excluding-arrays
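Following the linked Q&A, dropping the numeric path components collapses array indices, so each distinct key path lists exactly once (a sketch; assumes sysstat's iostat with JSON output support):

```shell
# Unique key paths in the iostat JSON, with array indices collapsed:
# numeric path components are filtered out before joining.
iostat_keys() {
  iostat -o JSON | jq -r '[paths | map(select(type=="string")) | join(".")] | unique | .[]'
}
# Example: iostat_keys
```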
- domain names need to match ssh host ids
- the dynamic inventory ansible module for community.libvirt.libvirt requires qemu-ga running in the guest - why not just create a temporary ad-hoc inventory of currently running vms?
- use as an alias or function, or with Taskfile; prevent Task from throwing an error for non-running vms
echo "[running]" $(virsh list --name) | xargs printf "%s\n" > ~/.local/tmp/inventory
ansible -i ~/.local/tmp/inventory -a 'uptime' running
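As a function (per the alias/function note above), with a guard so that zero running VMs means a clean exit instead of a Task error (inventory path as in the note; the guard is my addition):

```shell
# Build an ad-hoc inventory of running libvirt guests and run ansible against it.
# Returns cleanly when nothing is running, so a Taskfile task won't fail.
vms_running() {
  local inv="$HOME/.local/tmp/inventory"
  mkdir -p "$(dirname "$inv")"
  virsh list --name | grep -q . || { echo "no running vms"; return 0; }
  { echo "[running]"; virsh list --name; } > "$inv"
  ansible -i "$inv" -a 'uptime' running
}
```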
- find the calico-node container; the binary is located at /calicoctl
- [Edit]: Rancher canal includes it, but generic calico does not
$ kubectl exec -it -n kube-system $(kubectl get pods -n kube-system -l k8s-app=canal --no-headers -o custom-columns=":metadata.name") -c calico-node -- /calicoctl version
Client Version: v3.26.3
Git commit: bdb7878af
Cluster Version: v3.26.3