@pydevops
Last active August 20, 2024 00:55
How to set up kubectl on a laptop for a private-endpoint-ONLY k8s cluster (AWS/GCP/on-prem)

HTTP tunnel

On-prem k8s cluster set up with a bastion vm

  1. Create a bastion vm in your data center, or in the cloud with connectivity (usually VPN) to the on-prem data center.
  2. Install tinyproxy on the bastion vm and pick a random port, since the default 8888 is too easy a target for spam bots. Set it up as a systemd service according to https://nxnjz.net/2019/10/how-to-setup-a-simple-proxy-server-with-tinyproxy-debian-10-buster/. Verify it works with curl --proxy http://127.0.0.1:<tinyproxy-port> https://httpbin.org/ip. I don't use any proxy authentication, so I locked the firewall rules down to my laptop's IP/32. (A minimal config sketch follows this list.)
  3. Download the kubeconfig file for the k8s cluster to your laptop
  4. From your laptop, run
HTTPS_PROXY=<bastion-external-ip>:<tinyproxy-port> KUBECONFIG=my-kubeconfig kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-node-0   Ready    control-plane,master   32h   v1.20.4
k8s-node-1   Ready    <none>                 32h   v1.20.4
k8s-node-2   Ready    <none>                 32h   v1.20.4
k8s-node-3   Ready    <none>                 32h   v1.20.4
k8s-node-4   Ready    <none>                 32h   v1.20.4
k8s-node-5   Ready    <none>                 32h   v1.20.4
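
As referenced in step 2, a minimal tinyproxy sketch; the port 18888, the ConnectPort choices, and the firewall rule are illustrative assumptions, adjust to your environment:

# /etc/tinyproxy/tinyproxy.conf (excerpt; 18888 is a placeholder port)
Port 18888
# No Allow directives: tinyproxy then accepts any client IP,
# so rely on the firewall to restrict access to your laptop /32.
# Limit CONNECT (HTTPS) tunneling to the API server ports
ConnectPort 443
ConnectPort 6443

# enable and start the service
sudo systemctl enable --now tinyproxy

# lock the firewall down to your laptop instead of proxy auth
# (GCP example; for an on-prem bastion use your own firewall instead)
gcloud compute firewall-rules create allow-tinyproxy --network ${NETWORK} --allow tcp:18888 --source-ranges <laptop-ip>/32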

Private GKE cluster with HTTP proxy solutions

According to the private GKE cluster documentation, at this point these are the only IP addresses that have access to the control plane:

  • The primary range of my-subnet-0.
  • The secondary range used for Pods.

Hence, we can use a bastion vm in the primary range or a pod from the secondary range.

My own hackish way

Given a private GKE cluster with public endpoint access disabled, here is one hack I did with Cloud IAP SSH forwarding via an internal bastion vm. This workaround uses no HTTP proxy and no external IP address in the user VPC. It works well for one cluster, but for more than one cluster I would deploy tinyproxy instead, as it is a cleaner solution that avoids dealing with the TLS SANs.

create a private GKE cluster

Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp for the latest info. e.g.

gcloud container clusters create "$CLUSTER_NAME" \
  --region ${REGION} \
  --network ${NETWORK} \
  --subnetwork ${SUBNET} \
  --machine-type "${GKE_NODE_TYPE}" \
  --num-nodes=1 \
  --enable-autoupgrade \
  --enable-autorepair \
  --preemptible \
  --enable-ip-alias \
  --cluster-secondary-range-name=pod-range \
  --services-secondary-range-name=service-range \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks  \
  --master-ipv4-cidr=172.16.0.32/28

# Get the kubectl credentials for the GKE cluster.
KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION"

create a private compute instance "bastion"

with only an internal IP, e.g.
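
A minimal sketch (assuming the same ${ZONE}/${NETWORK}/${SUBNET} variables as above; --no-address keeps the instance internal-only):

gcloud compute instances create bastion --zone ${ZONE} --network ${NETWORK} --subnet ${SUBNET} --no-address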

enable and set up Cloud IAP

in the GCP console, grant the users/groups that can access the private instance from the previous step
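
Equivalently from the CLI, a sketch (the rule name and member are placeholders): open SSH from Cloud IAP's TCP-forwarding range, and grant the IAP tunnel role:

# allow Cloud IAP's source range to reach the bastion on tcp:22
gcloud compute firewall-rules create allow-iap-ssh --network ${NETWORK} --direction INGRESS --allow tcp:22 --source-ranges 35.235.240.0/20

# grant the user permission to tunnel through IAP
gcloud projects add-iam-policy-binding ${PROJECT} --member "user:alice@example.com" --role roles/iap.tunnelResourceAccessor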

on the laptop, start the SSH forwarding proxy at local port 8443 via the Cloud IAP tunnel

e.g. 172.16.0.66 is the private master endpoint. The SSH traffic is tunnelled via Cloud IAP over TLS, then port-forwarded to the k8s master API endpoint.

gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "bastion" --tunnel-through-iap --ssh-flag="-L 8443:172.16.0.66:443"
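
With the tunnel up, a quick sanity check from another terminal; /version is served to unauthenticated clients on most clusters, so either version JSON or a 403 proves the tunnel reaches the API server:

curl -k https://127.0.0.1:8443/version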

on the laptop, modify the .kube/dev

kubernetes.default and kubernetes are valid SANs on the API server's certificate, so point the server at the locally forwarded port: server: https://kubernetes.default:8443
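
You can edit the file by hand, or rewrite the cluster entry with kubectl config (a sketch; the gke_<project>_<region>_<cluster> entry name is gcloud's naming convention, adjust to yours):

KUBECONFIG=~/.kube/dev kubectl config set-cluster "gke_${PROJECT}_${REGION}_${CLUSTER_NAME}" --server=https://kubernetes.default:8443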

on the laptop, modify the /etc/hosts

Please append the following line:

127.0.0.1 kubernetes kubernetes.default
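
e.g. in one shot (requires sudo):

echo '127.0.0.1 kubernetes kubernetes.default' | sudo tee -a /etc/hosts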

on the laptop, happy kubectl from here.

KUBECONFIG=~/.kube/dev kubectl get po --all-namespaces

Private only EKS cluster

Very much like the GCP Cloud IAP approach, except it uses AWS SSM and a bastion to create a tunnel. This assumes the bastion's subnet (or security group) is added to the inbound rules of the EKS control plane's cluster security group on TCP port 443, as sketched below.
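
A sketch of that inbound rule, authorizing the bastion's security group (both group IDs are placeholders):

aws ec2 authorize-security-group-ingress --group-id <cluster-sg-id> --protocol tcp --port 443 --source-group <bastion-sg-id>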

# start a tunnel, local port 4443, traffic will be forwarded to the private EKS endpoint 
aws ssm start-session --target i-bastion --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host=<eks-controlplane-endpoint>,portNumber=443,localPortNumber=4443
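
The <eks-controlplane-endpoint> host can be looked up from the cluster description; strip the https:// scheme since SSM wants a bare hostname (the cluster name is a placeholder):

aws eks describe-cluster --name my-cluster --query 'cluster.endpoint' --output text | sed 's|https://||'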

on the laptop's /etc/hosts

127.0.0.1 localhost kubernetes kubernetes.default.svc kubernetes.default.svc.cluster.local

modify kubeconfig with the server pointing to the local port

server: https://kubernetes.default.svc.cluster.local:4443

Bharathkumarraju commented Jun 14, 2021

@pydevops I get the below error... I have followed the same steps as mentioned.

bharathdasaraju@private-vm:~$ KUBECONFIG=~/.kube/dev kubectl get nodes -o wide
The connection to the server kubernetes.default:8443 was refused - did you specify the right host or port?
bharathdasaraju@private-vm:~$

@Bharathkumarraju

@pydevops sorry, it worked after executing the gcloud container clusters get-credentials command again, like below. Thanks a lot for the nice article.


bharathdasaraju@MacBook-Pro k8s-samples $ gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "private-vm" --tunnel-through-iap --ssh-flag="-L 8443:192.168.0.2:443"
Linux private-vm 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jun 14 23:41:55 2021 from 35.235.240.162
-bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
bharathdasaraju@private-vm:~$ KUBECONFIG=~/.kube/dev kubectl get nodes -o wide
The connection to the server kubernetes.default:8443 was refused - did you specify the right host or port?
bharathdasaraju@private-vm:~$ KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials test-cluster --zone asia-southeast1-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for test-cluster.
bharathdasaraju@private-vm:~$ KUBECONFIG=~/.kube/dev kubectl get nodes -o wide
NAME                                          STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-test-cluster-default-pool-ef06052c-dmtq   Ready    <none>   28m   v1.19.9-gke.1900   10.180.16.4   <none>        Container-Optimized OS from Google   5.4.89+          containerd://1.4.3
gke-test-cluster-default-pool-ef06052c-r2d2   Ready    <none>   28m   v1.19.9-gke.1900   10.180.16.5   <none>        Container-Optimized OS from Google   5.4.89+          containerd://1.4.3
gke-test-cluster-default-pool-ef06052c-s1rm   Ready    <none>   28m   v1.19.9-gke.1900   10.180.16.3   <none>        Container-Optimized OS from Google   5.4.89+          containerd://1.4.3
bharathdasaraju@private-vm:~$

@pydevops (Author)

The original note is written to run kubectl from a laptop (macOS) and tunnel via an SSH (Cloud IAP) bastion host.
When running directly from the bastion, the --ssh-flag="-L 8443:192.168.0.2:443" isn't needed: you can SSH into the bastion and run the gcloud command there as you did, assuming the bastion's service account (or the gcloud user) has sufficient permission to pull the kubeconfig from the GKE cluster (e.g. gcloud container clusters get-credentials test-cluster).



Bharathkumarraju commented Jun 15, 2021

@pydevops yes that's right, but the command KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION" doesn't work from local even after port-forwarding (even though I have admin creds on my GCP user... I think it's because of the private endpoint), so I used the private bastion vm itself...


Bharathkumarraju commented Jun 15, 2021

@pydevops

below are the steps I followed and it worked... previously I had exited the private VM, sorry for that. One thing we could improve: can we run the port-forward tunnel below in the background?

  1. port-forward, and it automatically logs in to the private-vm
bharathdasaraju@MacBook-Pro k8s-samples $ gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "private-vm" --tunnel-through-iap --ssh-flag="-L 8443:192.168.0.2:443"
Linux private-vm 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jun 15 00:14:15 2021 from 35.235.240.162
-bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
bharathdasaraju@private-vm:~$

on the other terminal window
  2. get the credentials like below

bharathdasaraju@MacBook-Pro ~ $ KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials test-cluster --zone asia-southeast1-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for test-cluster.
bharathdasaraju@MacBook-Pro ~ $
  3. updated the kubeconfig file with server: https://kubernetes.default:8443 and Boooooom, it worked like a charm. Thanks once again for the prompt reply.
bharathdasaraju@MacBook-Pro ~ $ KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials test-cluster --zone asia-southeast1-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for test-cluster.
bharathdasaraju@MacBook-Pro ~ $ vim ~/.kube/dev
bharathdasaraju@MacBook-Pro ~ $ KUBECONFIG=~/.kube/dev kubectl get nodes -o wide
NAME                                          STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-test-cluster-default-pool-ef06052c-dmtq   Ready    <none>   67m   v1.19.9-gke.1900   10.180.16.4   <none>        Container-Optimized OS from Google   5.4.89+          containerd://1.4.3
gke-test-cluster-default-pool-ef06052c-r2d2   Ready    <none>   67m   v1.19.9-gke.1900   10.180.16.5   <none>        Container-Optimized OS from Google   5.4.89+          containerd://1.4.3
gke-test-cluster-default-pool-ef06052c-s1rm   Ready    <none>   67m   v1.19.9-gke.1900   10.180.16.3   <none>        Container-Optimized OS from Google   5.4.89+          containerd://1.4.3
bharathdasaraju@MacBook-Pro ~ $
bharathdasaraju@MacBook-Pro ~ $


devops-expanse commented Jul 13, 2021

Interesting, I have found this gcp recommended practice as well:
https://cloud.google.com/architecture/creating-kubernetes-engine-private-clusters-with-net-proxies

Essentially, deploy a pod that provides an nginx proxy to the API server residing in the Google-managed control plane. An example of how the proxy is used in this particular case:

$  https_proxy=10.244.128.9:8118 kubectl -n qbert-dev get secrets

@pydevops (Author) commented Jul 13, 2021


Good point, thanks. I had it linked in the gist as well. The way it works is exposing a k8s LoadBalancer Service implemented as an ILB (internal load balancer), with the privproxy deployment as the backend. The https_proxy address needs to be an internal RFC 1918 address.
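
A sketch of that Service, assuming the proxy pods are labeled app: privproxy and listen on 8118 (the names are illustrative; the annotation is the classic GKE way to request an internal LB):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: privproxy
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: privproxy
  ports:
  - port: 8118
    targetPort: 8118
EOF

The Service's internal IP is then what https_proxy points at, as in the example above.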
