@martinmalek
Created August 28, 2024 18:06
K8s + MetalLB + Nginx - External IP - port accessible but not serving
This gist accompanies a request for help in the Kubernetes Slack #metallb channel (https://kubernetes.slack.com/archives/CANQGM8BA/p1724849137265479).
Background
==========
- Setting up a homelab k8s cluster running on Proxmox VMs.
- Following https://blog.andreev.it/2023/10/install-metallb-on-kubernetes-cluster-running-on-vmware-vms-or-bare-metal-server/
Status
======
- MetalLB and ingress-nginx are running without errors.
- An external IP (192.168.70.30) is assigned and port 80 accepts connections.
- The httpd and nginx services respond on the external IP when curled from the k8s node, but not from my computer on the LAN.
- /etc/hosts entries, present on both the k8s node and on my computer:
192.168.70.30 nginx.homelab.local
192.168.70.30 httpd.homelab.local
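To confirm the client resolves these names from /etc/hosts (rather than DNS), a quick check on a Linux client (macOS lacks getent; the commented dscacheutil line is the equivalent there):

```shell
# getent consults /etc/hosts before DNS per nsswitch.conf;
# both names should map to the MetalLB VIP 192.168.70.30.
getent hosts httpd.homelab.local nginx.homelab.local

# macOS equivalent:
# dscacheutil -q host -a name httpd.homelab.local
```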
Collecting data
===============
As requested in the Slack thread, I am pasting the results of the following:
- kubectl get all -n ingress-nginx
- kubectl -n ingress-nginx describe po,svc
- kubectl -n appnamespace describe po,svc,ing
- curl command with -v exactly as used and response
- logs of the MetalLB controller pod
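For reproducibility, the collection steps above can be wrapped in a small script. This is only a sketch: it assumes the apps live in the `default` namespace (as the outputs below show) and a stock ingress-nginx install with a `deploy/ingress-nginx-controller` deployment.

```shell
#!/usr/bin/env bash
# Gather the diagnostics requested in the Slack thread into one file.
set -euo pipefail
APP_NS=default                      # namespace the httpd/nginx apps run in
OUT=metallb-nginx-debug.txt
{
  echo "=== kubectl get all -n ingress-nginx ==="
  kubectl get all -n ingress-nginx
  echo "=== kubectl -n ingress-nginx describe po,svc ==="
  kubectl -n ingress-nginx describe po,svc
  echo "=== kubectl -n $APP_NS describe po,svc,ing ==="
  kubectl -n "$APP_NS" describe po,svc,ing
  echo "=== curl -v http://httpd.homelab.local/ ==="
  curl -v --max-time 10 http://httpd.homelab.local/ || true
  echo "=== MetalLB controller logs ==="
  kubectl -n metallb-system logs deploy/controller --tail=200
} > "$OUT" 2>&1
echo "wrote $OUT"
```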
Command outputs
===============
ubuntu@k8s-master-01:~$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-n7mv7        0/1     Completed   0          39m
pod/ingress-nginx-admission-patch-785fj         0/1     Completed   0          39m
pod/ingress-nginx-controller-7d8d8c7b4c-qfwtp   1/1     Running     0          39m

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.108.119.176   192.168.70.30   80:31577/TCP,443:32317/TCP   39m
service/ingress-nginx-controller-admission   ClusterIP      10.108.9.24      <none>          443/TCP                      39m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           39m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-7d8d8c7b4c   1         1         1       39m

NAME                                       STATUS     COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   Complete   1/1           4s         39m
job.batch/ingress-nginx-admission-patch    Complete   1/1           3s         39m
====================================
ubuntu@k8s-master-01:~$ kubectl -n ingress-nginx describe po,svc
Name: ingress-nginx-admission-create-n7mv7
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx-admission
Node: k8s-worker-02/192.168.70.36
Start Time: Wed, 28 Aug 2024 17:12:41 +0000
Labels: app.kubernetes.io/component=admission-webhook
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.11.2
batch.kubernetes.io/controller-uid=7f4196fc-ba83-4259-a365-c13757986480
batch.kubernetes.io/job-name=ingress-nginx-admission-create
controller-uid=7f4196fc-ba83-4259-a365-c13757986480
job-name=ingress-nginx-admission-create
Annotations: cni.projectcalico.org/containerID: 2eff0597654e486e8ffbd7aeb672bd7940a1b39ae8a10b80fa0f79c6a897ba78
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
Status: Succeeded
IP: 172.16.118.79
IPs:
IP: 172.16.118.79
Controlled By: Job/ingress-nginx-admission-create
Containers:
create:
Container ID: containerd://5619d0499a376bcc89bf2605e13e59d8a574b1a4c86c7bda4c4f345db754a4ea
Image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3
Image ID: registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Args:
create
--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
--namespace=$(POD_NAMESPACE)
--secret-name=ingress-nginx-admission
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 28 Aug 2024 17:12:41 +0000
Finished: Wed, 28 Aug 2024 17:12:41 +0000
Ready: False
Restart Count: 0
Environment:
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8ffl4 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-8ffl4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-admission-create-n7mv7 to k8s-worker-02
Normal Pulled 40m kubelet Container image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3" already present on machine
Normal Created 40m kubelet Created container create
Normal Started 40m kubelet Started container create
Name: ingress-nginx-admission-patch-785fj
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx-admission
Node: k8s-worker-01/192.168.70.35
Start Time: Wed, 28 Aug 2024 17:12:41 +0000
Labels: app.kubernetes.io/component=admission-webhook
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.11.2
batch.kubernetes.io/controller-uid=85040b0d-35e5-4656-bd0e-99fca34d5f15
batch.kubernetes.io/job-name=ingress-nginx-admission-patch
controller-uid=85040b0d-35e5-4656-bd0e-99fca34d5f15
job-name=ingress-nginx-admission-patch
Annotations: cni.projectcalico.org/containerID: f0f06700f06193dfa0bd3305906f3a5a777de1a6d91295212066f88115c9ee1b
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
Status: Succeeded
IP: 172.16.36.202
IPs:
IP: 172.16.36.202
Controlled By: Job/ingress-nginx-admission-patch
Containers:
patch:
Container ID: containerd://6927a753754bcef781cca0055eeec0c7f4eb31a1b2df479dfa0b219e056f1840
Image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3
Image ID: registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Args:
patch
--webhook-name=ingress-nginx-admission
--namespace=$(POD_NAMESPACE)
--patch-mutating=false
--secret-name=ingress-nginx-admission
--patch-failure-policy=Fail
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 28 Aug 2024 17:12:41 +0000
Finished: Wed, 28 Aug 2024 17:12:41 +0000
Ready: False
Restart Count: 0
Environment:
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7g5z9 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-7g5z9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-admission-patch-785fj to k8s-worker-01
Normal Pulled 40m kubelet Container image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3" already present on machine
Normal Created 40m kubelet Created container patch
Normal Started 40m kubelet Started container patch
Name: ingress-nginx-controller-7d8d8c7b4c-qfwtp
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx
Node: k8s-worker-01/192.168.70.35
Start Time: Wed, 28 Aug 2024 17:12:41 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.11.2
pod-template-hash=7d8d8c7b4c
Annotations: cni.projectcalico.org/containerID: f5cf71284021186d178fb0b97f9a9eb13fcc55d7aaeabd790f506799a317fa24
cni.projectcalico.org/podIP: 172.16.36.203/32
cni.projectcalico.org/podIPs: 172.16.36.203/32
Status: Running
IP: 172.16.36.203
IPs:
IP: 172.16.36.203
Controlled By: ReplicaSet/ingress-nginx-controller-7d8d8c7b4c
Containers:
controller:
Container ID: containerd://103bcbc03ea0ddc981e12af7783767b426a874bb64880f8ffb0d8942420d5a8e
Image: registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--enable-metrics=false
State: Running
Started: Wed, 28 Aug 2024 17:12:46 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-7d8d8c7b4c-qfwtp (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-th27p (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-th27p:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-7d8d8c7b4c-qfwtp to k8s-worker-01
Warning FailedMount 40m (x2 over 40m) kubelet MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
Normal Pulling 40m kubelet Pulling image "registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce"
Normal Pulled 40m kubelet Successfully pulled image "registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce" in 3.512s (3.512s including waiting). Image size: 105405235 bytes.
Normal Created 40m kubelet Created container controller
Normal Started 40m kubelet Started container controller
Normal RELOAD 38m (x3 over 40m) nginx-ingress-controller NGINX reload triggered due to a change in configuration
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.11.2
Annotations: metallb.universe.tf/ip-allocated-from-pool: first-pool
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.119.176
IPs: 10.108.119.176
LoadBalancer Ingress: 192.168.70.30 (VIP)
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31577/TCP
Endpoints: 172.16.36.203:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32317/TCP
Endpoints: 172.16.36.203:443
Session Affinity: None
External Traffic Policy: Local
Internal Traffic Policy: Cluster
HealthCheck NodePort: 31727
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 40m metallb-controller Assigned IP ["192.168.70.30"]
Normal nodeAssigned 40m metallb-speaker announcing from node "k8s-worker-01" with protocol "layer2"
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.11.2
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.9.24
IPs: 10.108.9.24
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 172.16.36.203:8443
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
====================================
ubuntu@k8s-master-01:~$ kubectl -n default describe po,svc,ing
Name: httpd-599bf897b4-tm489
Namespace: default
Priority: 0
Service Account: default
Node: k8s-worker-01/192.168.70.35
Start Time: Wed, 28 Aug 2024 17:14:15 +0000
Labels: app=httpd
pod-template-hash=599bf897b4
Annotations: cni.projectcalico.org/containerID: 9cc78efe5fca9646771c227678a358e41386780accef0cf9b7d781d4a6e30db0
cni.projectcalico.org/podIP: 172.16.36.204/32
cni.projectcalico.org/podIPs: 172.16.36.204/32
Status: Running
IP: 172.16.36.204
IPs:
IP: 172.16.36.204
Controlled By: ReplicaSet/httpd-599bf897b4
Containers:
httpd:
Container ID: containerd://a878c05cbb5571fb0a68f750b3cc5ec9fbe1ae299914306a13b4ac5179395a46
Image: httpd
Image ID: docker.io/library/httpd@sha256:3f71777bcfac3df3aff5888a2d78c4104501516300b2e7ecb91ce8de2e3debc7
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 28 Aug 2024 17:14:20 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6mzg6 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-6mzg6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned default/httpd-599bf897b4-tm489 to k8s-worker-01
Normal Pulling 40m kubelet Pulling image "httpd"
Normal Pulled 40m kubelet Successfully pulled image "httpd" in 3.739s (3.739s including waiting). Image size: 59389120 bytes.
Normal Created 40m kubelet Created container httpd
Normal Started 40m kubelet Started container httpd
Name: nginx-7769f8f85b-gq2ns
Namespace: default
Priority: 0
Service Account: default
Node: k8s-worker-02/192.168.70.36
Start Time: Wed, 28 Aug 2024 17:14:10 +0000
Labels: app=nginx
pod-template-hash=7769f8f85b
Annotations: cni.projectcalico.org/containerID: 3e15022efa1cf9dd2bde9b598c5d37be6c9816a8d22bfa53affb0b0a5194d6f8
cni.projectcalico.org/podIP: 172.16.118.80/32
cni.projectcalico.org/podIPs: 172.16.118.80/32
Status: Running
IP: 172.16.118.80
IPs:
IP: 172.16.118.80
Controlled By: ReplicaSet/nginx-7769f8f85b
Containers:
nginx:
Container ID: containerd://171553a12690b010397545f7f9625c6ae735b1a197fa4cd74196f5501669d27f
Image: nginx
Image ID: docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 28 Aug 2024 17:19:59 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5m29j (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5m29j:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned default/nginx-7769f8f85b-gq2ns to k8s-worker-02
Warning FailedCreatePodSandBox 39m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7c49b795d01cb41b16bcf54ad031f320510e6393cdd998e6829af3f7f6b7af21": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
Normal SandboxChanged 34m (x15 over 39m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 34m kubelet Pulling image "nginx"
Normal Pulled 34m kubelet Successfully pulled image "nginx" in 4.379s (4.379s including waiting). Image size: 71026652 bytes.
Normal Created 34m kubelet Created container nginx
Normal Started 34m kubelet Started container nginx
Name: httpd
Namespace: default
Labels: app=httpd
Annotations: <none>
Selector: app=httpd
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.38.168
IPs: 10.96.38.168
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 172.16.36.204:80
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.0.1
IPs: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 192.168.70.30:6443
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.200.222
IPs: 10.101.200.222
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 172.16.118.80:80
Session Affinity: None
Internal Traffic Policy: Cluster
Events: <none>
Name: httpd
Labels: <none>
Namespace: default
Address: 192.168.70.30
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
httpd.homelab.local
/ httpd:80 (172.16.36.204:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 39m (x2 over 40m) nginx-ingress-controller Scheduled for sync
Name: nginx
Labels: <none>
Namespace: default
Address: 192.168.70.30
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.homelab.local
/ nginx:80 (172.16.118.80:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 39m (x2 over 40m) nginx-ingress-controller Scheduled for sync
====================================
curl on k8s node
================
ubuntu@k8s-master-01:~$ curl httpd.homelab.local -vvvv
* Host httpd.homelab.local:80 was resolved.
* IPv6: (none)
* IPv4: 192.168.70.30
* Trying 192.168.70.30:80...
* Connected to httpd.homelab.local (192.168.70.30) port 80
> GET / HTTP/1.1
> Host: httpd.homelab.local
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 28 Aug 2024 17:57:57 GMT
< Content-Type: text/html
< Content-Length: 45
< Connection: keep-alive
< Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
< ETag: "2d-432a5e4a73a80"
< Accept-Ranges: bytes
<
<html><body><h1>It works!</h1></body></html>
* Connection #0 to host httpd.homelab.local left intact
======
ubuntu@k8s-master-01:~$ curl nginx.homelab.local -vvvv
* Host nginx.homelab.local:80 was resolved.
* IPv6: (none)
* IPv4: 192.168.70.30
* Trying 192.168.70.30:80...
* Connected to nginx.homelab.local (192.168.70.30) port 80
> GET / HTTP/1.1
> Host: nginx.homelab.local
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 28 Aug 2024 18:00:41 GMT
< Content-Type: text/html
< Content-Length: 615
< Connection: keep-alive
< Last-Modified: Mon, 12 Aug 2024 14:21:01 GMT
< ETag: "66ba1a4d-267"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host nginx.homelab.local left intact
curl on my computer
===================
[main●●] % curl httpd.homelab.local -vvvv
* Host httpd.homelab.local:80 was resolved.
* IPv6: (none)
* IPv4: 192.168.70.30, 192.168.70.30
* Trying 192.168.70.30:80...
* connect to 192.168.70.30 port 80 from 192.168.1.3 port 49309 failed: Operation timed out
* Trying 192.168.70.30:80...
* connect to 192.168.70.30 port 80 from 192.168.1.3 port 49313 failed: Operation timed out
* Failed to connect to httpd.homelab.local port 80 after 23483 ms: Couldn't connect to server
* Closing connection
curl: (28) Failed to connect to httpd.homelab.local port 80 after 23483 ms: Couldn't connect to server
[main●●] % curl nginx.homelab.local -vvvv
* Host nginx.homelab.local:80 was resolved.
* IPv6: (none)
* IPv4: 192.168.70.30, 192.168.70.30
* Trying 192.168.70.30:80...
* connect to 192.168.70.30 port 80 from 192.168.1.3 port 49283 failed: Operation timed out
* Trying 192.168.70.30:80...
* connect to 192.168.70.30 port 80 from 192.168.1.3 port 49289 failed: Operation timed out
* Failed to connect to nginx.homelab.local port 80 after 23468 ms: Couldn't connect to server
* Closing connection
curl: (28) Failed to connect to nginx.homelab.local port 80 after 23468 ms: Couldn't connect to server
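Since the node-local curl succeeds but the client at 192.168.1.3 (a different subnet from the 192.168.70.0/24 VIP) times out, some client-side checks could narrow down where the path breaks. These are hedged next steps, not a diagnosis; `arping` is only meaningful if the client shares the VIP's L2 segment, since MetalLB's layer2 ARP announcements do not cross routers.

```shell
# 1. Which route (and source interface) does the Linux client use for the VIP?
ip route get 192.168.70.30

# 2. Does anything answer ARP for the VIP? Only works from the same L2 segment.
arping -c 3 192.168.70.30 || true

# 3. Raw TCP reachability to the VIP on port 80, bypassing HTTP entirely:
nc -vz -w 5 192.168.70.30 80
```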
MetalLB controller logs
=======================
(⎈|kubernetes-admin@kubernetes:default)3_proxmox_ansible_vm_provisioning [main●●] % kubectl logs -n metallb-system -p controller-8694df9d9b-hskhx
{"branch":"dev","caller":"main.go:167","commit":"dev","goversion":"gc / go1.22.4 / amd64","level":"info","msg":"MetalLB controller starting version 0.14.8 (commit dev, branch dev)","ts":"2024-08-28T16:58:42Z","version":"0.14.8"}
{"action":"setting up cert rotation","caller":"webhook.go:31","level":"info","op":"startup","ts":"2024-08-28T16:58:42Z"}
{"caller":"k8s.go:400","level":"info","msg":"secret successfully created","op":"CreateMlSecret","ts":"2024-08-28T16:58:42Z"}
{"caller":"k8s.go:423","level":"info","msg":"Starting Manager","op":"Run","ts":"2024-08-28T16:58:42Z"}
{"level":"info","ts":"2024-08-28T16:58:42Z","logger":"cert-rotation","msg":"starting cert rotator controller","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).Start\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:276\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"cert-rotator","source":"kind source: *v1.Secret","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"cert-rotator","source":"kind source: *unstructured.Unstructured","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"cert-rotator","source":"kind source: *unstructured.Unstructured","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting Controller","controller":"cert-rotator","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"ipaddresspool","controllerGroup":"metallb.io","controllerKind":"IPAddressPool","source":"kind source: *v1beta1.IPAddressPool","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"ipaddresspool","controllerGroup":"metallb.io","controllerKind":"IPAddressPool","source":"kind source: *v1beta1.Community","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"service","controllerGroup":"","controllerKind":"Service","source":"kind source: *v1.Service","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"ipaddresspool","controllerGroup":"metallb.io","controllerKind":"IPAddressPool","source":"kind source: *v1.Namespace","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting Controller","controller":"ipaddresspool","controllerGroup":"metallb.io","controllerKind":"IPAddressPool","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting EventSource","controller":"service","controllerGroup":"","controllerKind":"Service","source":"channel source: 0xc00024ee00","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:173\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:42Z","msg":"Starting Controller","controller":"service","controllerGroup":"","controllerKind":"Service","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"refreshing CA and server certs","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:320\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).Start\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:278\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:43Z","msg":"Starting workers","controller":"cert-rotator","worker count":1,"stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:215\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"refreshing CA and server certs","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:320\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:765\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"server certs refreshed","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:326\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:765\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"no cert refresh needed","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:347\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:765\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"Ensuring CA cert","name":"metallb-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration","name":"metallb-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).ensureCerts\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:827\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:784\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"Ensuring CA cert","name":"bgppeers.metallb.io","gvk":"apiextensions.k8s.io/v1, Kind=CustomResourceDefinition","name":"bgppeers.metallb.io","gvk":"apiextensions.k8s.io/v1, Kind=CustomResourceDefinition","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).ensureCerts\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:827\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:784\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"no cert refresh needed","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:347\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:765\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"Ensuring CA cert","name":"metallb-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration","name":"metallb-webhook-configuration","gvk":"admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).ensureCerts\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:827\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:784\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"Ensuring CA cert","name":"bgppeers.metallb.io","gvk":"apiextensions.k8s.io/v1, Kind=CustomResourceDefinition","name":"bgppeers.metallb.io","gvk":"apiextensions.k8s.io/v1, Kind=CustomResourceDefinition","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).ensureCerts\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:827\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:784\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222"}
{"level":"error","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"could not refresh CA and server certs","error":"Operation cannot be fulfilled on secrets \"metallb-webhook-cert\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:322\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).Start\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:278\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:43Z","logger":"cert-rotation","msg":"no cert refresh needed","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded.func1\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:347\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:461\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).refreshCertIfNeeded\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:350\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).Start\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:278\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:44Z","msg":"Starting workers","controller":"ipaddresspool","controllerGroup":"metallb.io","controllerKind":"IPAddressPool","worker count":1,"stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:215\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/calico-apiserver","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:99","controller":"PoolReconciler","event":"force service reload","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:112","controller":"PoolReconciler","event":"config reloaded","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:113","controller":"PoolReconciler","end reconcile":"/calico-apiserver","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/default","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/default","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/kubernetes-dashboard","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"calico-system/calico-kube-controllers-metrics","ts":"2024-08-28T16:58:44Z"}
{"level":"info","ts":"2024-08-28T16:58:44Z","msg":"Starting workers","controller":"service","controllerGroup":"","controllerKind":"Service","worker count":1,"stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:215\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:229\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"calico-system/calico-kube-controllers-metrics","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/kubernetes-dashboard","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"calico-system/calico-typha","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/tigera-operator","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"calico-system/calico-typha","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"default/kubernetes","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"default/kubernetes","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"kubernetes-dashboard/dashboard-metrics-scraper","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/tigera-operator","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"kubernetes-dashboard/dashboard-metrics-scraper","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"metallb-system/metallb-webhook-service","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/calico-system","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"metallb-system/metallb-webhook-service","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"calico-apiserver/calico-api","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"calico-apiserver/calico-api","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"kube-system/kube-dns","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"kube-system/kube-dns","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/calico-system","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"kube-system/metrics-server","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/kube-node-lease","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"kube-system/metrics-server","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"kubernetes-dashboard/kubernetes-dashboard","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller.go:72","controller":"ServiceReconciler","end reconcile":"kubernetes-dashboard/kubernetes-dashboard","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/kube-node-lease","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller_reload.go:63","controller":"ServiceReconciler - reprocessAll","level":"info","start reconcile":"metallbreload/reload","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/kube-public","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/kube-public","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/kube-system","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/kube-system","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/metallb-system","ts":"2024-08-28T16:58:44Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/metallb-system","level":"info","ts":"2024-08-28T16:58:44Z"}
{"caller":"service_controller_reload.go:119","controller":"ServiceReconciler - reprocessAll","end reconcile":"metallbreload/reload","level":"info","ts":"2024-08-28T16:58:44Z"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"cert-rotation","msg":"certs are ready in /tmp/k8s-webhook-server/serving-certs","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).ensureCertsMounted\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:866"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"cert-rotation","msg":"CA certs are injected to webhooks","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*CertRotator).ensureReady\n\t/go/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.2-0.20240531181455-2649f121ab97/pkg/rotator/rotator.go:886"}
{"action":"webhooks enabled","caller":"webhook.go:53","level":"info","op":"startup","ts":"2024-08-28T16:58:45Z"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-metallb-io-v1beta1-ipaddresspool","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1.(*IPAddressPoolValidator).SetupWebhookWithManager\n\t/go/go.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1/ipaddresspool_webhook.go:41\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:65\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-metallb-io-v1beta2-bgppeer","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta2.(*BGPPeerValidator).SetupWebhookWithManager\n\t/go/go.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta2/bgppeer_webhook.go:41\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:70\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-metallb-io-v1beta1-bgpadvertisement","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1.(*BGPAdvertisementValidator).SetupWebhookWithManager\n\t/go/go.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1/bgpadvertisement_webhook.go:42\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:75\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Starting webhook server","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:191\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-metallb-io-v1beta1-l2advertisement","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1.(*L2AdvertisementValidator).SetupWebhookWithManager\n\t/go/go.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1/l2advertisement_webhook.go:41\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:80\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-metallb-io-v1beta1-community","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1.(*CommunityValidator).SetupWebhookWithManager\n\t/go/go.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1/community_webhook.go:41\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:85\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-metallb-io-v1beta1-bfdprofile","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1.(*BFDProfileValidator).SetupWebhookWithManager\n\t/go/go.universe.tf/metallb/internal/k8s/webhooks/webhookv1beta1/bfdprofile_webhook.go:40\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:90\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/convert","stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Register\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:183\ngo.universe.tf/metallb/internal/k8s.enableWebhook\n\t/go/go.universe.tf/metallb/internal/k8s/webhook.go:96\ngo.universe.tf/metallb/internal/k8s.New.func3\n\t/go/go.universe.tf/metallb/internal/k8s/k8s.go:306"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate","stacktrace":"sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).ReadCertificate\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/certwatcher/certwatcher.go:161\nsigs.k8s.io/controller-runtime/pkg/certwatcher.New\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/certwatcher/certwatcher.go:62\nsigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:207\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.webhook","msg":"Serving webhook server","host":"","port":9443,"stacktrace":"sigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:242\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/manager/runnable_group.go:226"}
{"level":"info","ts":"2024-08-28T16:58:45Z","logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher","stacktrace":"sigs.k8s.io/controller-runtime/pkg/certwatcher.(*CertWatcher).Start\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/certwatcher/certwatcher.go:115\nsigs.k8s.io/controller-runtime/pkg/webhook.(*DefaultServer).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/webhook/server.go:214"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"metallb-system/first-pool","ts":"2024-08-28T16:59:14Z"}
{"caller":"pool_controller.go:99","controller":"PoolReconciler","event":"force service reload","level":"info","ts":"2024-08-28T16:59:14Z"}
{"caller":"pool_controller.go:112","controller":"PoolReconciler","event":"config reloaded","level":"info","ts":"2024-08-28T16:59:14Z"}
{"caller":"pool_controller.go:113","controller":"PoolReconciler","end reconcile":"metallb-system/first-pool","level":"info","ts":"2024-08-28T16:59:14Z"}
{"caller":"service_controller_reload.go:63","controller":"ServiceReconciler - reprocessAll","level":"info","start reconcile":"metallbreload/reload","ts":"2024-08-28T16:59:14Z"}
{"caller":"service_controller_reload.go:119","controller":"ServiceReconciler - reprocessAll","end reconcile":"metallbreload/reload","level":"info","ts":"2024-08-28T16:59:14Z"}
{"caller":"pool_controller.go:48","controller":"PoolReconciler","level":"info","start reconcile":"/ingress-nginx","ts":"2024-08-28T17:00:07Z"}
{"caller":"pool_controller.go:88","controller":"PoolReconciler","end reconcile":"/ingress-nginx","level":"info","ts":"2024-08-28T17:00:07Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"ingress-nginx/ingress-nginx-controller","ts":"2024-08-28T17:00:08Z"}
{"caller":"service.go:150","event":"ipAllocated","ip":["192.168.70.30"],"level":"info","msg":"IP address assigned by controller","ts":"2024-08-28T17:00:08Z"}
{"caller":"main.go:116","event":"serviceUpdated","level":"info","msg":"updated service object","ts":"2024-08-28T17:00:08Z"}
{"caller":"service_controller.go:115","controller":"ServiceReconciler","end reconcile":"ingress-nginx/ingress-nginx-controller","level":"info","ts":"2024-08-28T17:00:08Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"ingress-nginx/ingress-nginx-controller","ts":"2024-08-28T17:00:08Z"}
{"caller":"main.go:116","event":"serviceUpdated","level":"info","msg":"updated service object","ts":"2024-08-28T17:00:08Z"}
{"caller":"service_controller.go:115","controller":"ServiceReconciler","end reconcile":"ingress-nginx/ingress-nginx-controller","level":"info","ts":"2024-08-28T17:00:08Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"ingress-nginx/ingress-nginx-controller-admission","ts":"2024-08-28T17:00:08Z"}
{"caller":"service_controller.go:115","controller":"ServiceReconciler","end reconcile":"ingress-nginx/ingress-nginx-controller-admission","level":"info","ts":"2024-08-28T17:00:08Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"ingress-nginx/ingress-nginx-controller","ts":"2024-08-28T17:02:23Z"}
{"caller":"main.go:58","event":"serviceDeleted","level":"info","msg":"service deleted","ts":"2024-08-28T17:02:23Z"}
{"caller":"service_controller.go:107","controller":"ServiceReconciler","event":"force service reload","level":"info","ts":"2024-08-28T17:02:23Z"}
{"caller":"service_controller.go:109","controller":"ServiceReconciler","end reconcile":"ingress-nginx/ingress-nginx-controller","level":"info","ts":"2024-08-28T17:02:23Z"}
{"caller":"service_controller_reload.go:63","controller":"ServiceReconciler - reprocessAll","level":"info","start reconcile":"metallbreload/reload","ts":"2024-08-28T17:02:23Z"}
{"caller":"service_controller_reload.go:119","controller":"ServiceReconciler - reprocessAll","end reconcile":"metallbreload/reload","level":"info","ts":"2024-08-28T17:02:23Z"}
{"caller":"service_controller.go:64","controller":"ServiceReconciler","level":"info","start reconcile":"ingress-nginx/ingress-nginx-controller-admission","ts":"2024-08-28T17:02:23Z"}
{"caller":"service_controller.go:115","controller":"ServiceReconciler","end reconcile":"ingress-nginx/ingress-nginx-controller-admission","level":"info","ts":"2024-08-28T17:02:23Z"}
martinmalek commented Aug 28, 2024

Telnet results

Outside
% telnet 192.168.70.30 80
Trying 192.168.70.30...
telnet: connect to address 192.168.70.30: Operation timed out
telnet: Unable to connect to remote host

Inside
ubuntu@k8s-master-01:~$ telnet 192.168.70.30 80
Trying 192.168.70.30...
Connected to 192.168.70.30.
Escape character is '^]'.

=================

nginx controller logs


NGINX Ingress controller
Release: v1.11.2
Build: 46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5


W0828 17:12:46.598498 7 client_config.go:659] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0828 17:12:46.598671 7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
I0828 17:12:46.603814 7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e
I0828 17:12:46.737193 7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0828 17:12:46.748384 7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0828 17:12:46.756142 7 nginx.go:271] "Starting NGINX Ingress controller"
I0828 17:12:46.769015 7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"dbb571c9-b333
I0828 17:12:47.958395 7 nginx.go:317] "Starting NGINX process"
I0828 17:12:47.958580 7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I0828 17:12:47.960585 7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/ke
I0828 17:12:47.960688 7 controller.go:193] "Configuration changes detected, backend reload required"
I0828 17:12:47.963929 7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
I0828 17:12:47.963939 7 status.go:85] "New leader elected" identity="ingress-nginx-controller-7d8d8c7b4c-qfwtp"
I0828 17:12:47.993398 7 controller.go:213] "Backend successfully reloaded"
I0828 17:12:47.993432 7 controller.go:224] "Initial sync, sleeping for 1 second"
I0828 17:12:47.993481 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7d8d8c7b4c-qfwtp", UID:"fc
W0828 17:14:10.759091 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
I0828 17:14:10.780423 7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx"
I0828 17:14:10.783520 7 store.go:440] "Found valid IngressClass" ingress="default/nginx" ingressclass="nginx"
I0828 17:14:10.783705 7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx", UID:"a8a99bff-28f7-499c-bb16-713186db85c6", A
W0828 17:14:14.042501 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
I0828 17:14:14.042606 7 controller.go:193] "Configuration changes detected, backend reload required"
I0828 17:14:14.086020 7 controller.go:213] "Backend successfully reloaded"
I0828 17:14:14.086204 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7d8d8c7b4c-qfwtp", UID:"fc
W0828 17:14:16.004492 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
W0828 17:14:16.004520 7 controller.go:1216] Service "default/httpd" does not have any active Endpoint.
I0828 17:14:16.027123 7 main.go:107] "successfully validated configuration, accepting" ingress="default/httpd"
I0828 17:14:16.029813 7 store.go:440] "Found valid IngressClass" ingress="default/httpd" ingressclass="nginx"
I0828 17:14:16.030053 7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"httpd", UID:"f457e2fa-0b63-49a5-8991-5ea3158e90bd", A
W0828 17:14:17.375717 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
W0828 17:14:17.375754 7 controller.go:1216] Service "default/httpd" does not have any active Endpoint.
I0828 17:14:17.375798 7 controller.go:193] "Configuration changes detected, backend reload required"
I0828 17:14:17.419559 7 controller.go:213] "Backend successfully reloaded"
I0828 17:14:17.419767 7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7d8d8c7b4c-qfwtp", UID:"fc
W0828 17:14:20.711200 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
W0828 17:14:20.711257 7 controller.go:1216] Service "default/httpd" does not have any active Endpoint.
W0828 17:14:24.042891 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
I0828 17:14:47.969819 7 status.go:304] "updating Ingress status" namespace="default" ingress="httpd" currentValue=null newValue=[{"ip":"192.168.70.30"}]
I0828 17:14:47.969826 7 status.go:304] "updating Ingress status" namespace="default" ingress="nginx" currentValue=null newValue=[{"ip":"192.168.70.30"}]
I0828 17:14:47.975621 7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"httpd", UID:"f457e2fa-0b63-49a5-8991-5ea3158e90bd", A
W0828 17:14:47.975733 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
I0828 17:14:47.975821 7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx", UID:"a8a99bff-28f7-499c-bb16-713186db85c6", A
W0828 17:14:51.312089 7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
192.168.70.30 - - [28/Aug/2024:17:15:30 +0000] "GET / HTTP/1.1" 200 45 "-" "curl/8.5.0" 82 0.001 [default-httpd-80] [] 172.16.36.204:80 45 0.002 200 e61f9f531b8da56e
192.168.70.30 - - [28/Aug/2024:17:57:57 +0000] "GET / HTTP/1.1" 200 45 "-" "curl/8.5.0" 82 0.001 [default-httpd-80] [] 172.16.36.204:80 45 0.001 200 130a5de6eaee8694
192.168.70.30 - - [28/Aug/2024:18:00:38 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 82 0.002 [default-nginx-80] [] 172.16.118.80:80 615 0.002 200 4dd12d91da0728
192.168.70.30 - - [28/Aug/2024:18:00:41 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 82 0.002 [default-nginx-80] [] 172.16.118.80:80 615 0.002 200 0554881b26de2a
192.168.70.30 - - [28/Aug/2024:18:24:06 +0000] "\xFF\xF4\xFF\xFD\x06" 400 150 "-" "-" 0 0.001 [] [] - - - - fa63c4d8395c7a05229460631e8b4bda

ubuntu@k8s-master-01:~$ kubectl get nodes -o wide
NAME            STATUS   ROLES           AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-worker-01   Ready    <none>          166m   v1.31.0   192.168.70.35   <none>        Ubuntu 24.04.1 LTS   6.8.0-41-generic   containerd://1.7.21
k8s-worker-02   Ready    <none>          166m   v1.31.0   192.168.70.36   <none>        Ubuntu 24.04.1 LTS   6.8.0-41-generic   containerd://1.7.21
kubernetes      Ready    control-plane   3h8m   v1.31.0   192.168.70.30   <none>        Ubuntu 24.04.1 LTS   6.8.0-41-generic   containerd://1.7.21

ubuntu@k8s-master-01:~$ kubectl get IPAddressPool -n metallb-system
NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES
first-pool true false ["192.168.70.30-192.168.70.30"]

Solved with the help of @long in the Kubernetes slack #ingress-nginx-users channel.

The problem was that my MetalLB pool IP (192.168.70.30) was the same as my control-plane node's IP, so traffic to the "external" IP was answered by the node itself rather than by MetalLB.

Changing the MetalLB IPAddressPool to an unused range "192.168.70.31-192.168.70.31" got everything working.
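For reference, a corrected pool manifest would look roughly like this. This is a sketch, not the exact manifest I applied: it reuses the pool name (first-pool) and namespace (metallb-system) shown in the kubectl output above, with the field layout of the metallb.io/v1beta1 IPAddressPool CRD.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    # Must NOT overlap with any node IP
    # (192.168.70.30 is the control-plane node's INTERNAL-IP)
    - 192.168.70.31-192.168.70.31
```

After applying the updated pool, the LoadBalancer service may need to be deleted and recreated (or have its IP annotation cleared) before MetalLB assigns an address from the new range.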
