Quick Setup tutorial with Istio and K8s

Setup

Install minikube by referring to the instructions here

Start a minikube cluster with a higher resource configuration:

➜  minikube start --cpus 6 --memory 8192
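
You can confirm that the cluster came up with a quick status check (assuming kubectl is already installed and pointing at the minikube context):

➜  minikube status
➜  kubectl get nodes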

Remember to enable the ingress addon on minikube using:

➜  minikube addons enable ingress

Now you can access the ingress gateway via 127.0.0.1 by tunnelling (run this in a separate terminal):

➜  minikube tunnel

If you see errors, ensure that you bump up the CPU and memory settings of the Rancher Desktop VM.
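
If you are running minikube directly, one way to bump the resources is to recreate the cluster with higher limits; a rough sketch (the new values only take effect once the cluster is recreated):

➜  minikube config set cpus 6
➜  minikube config set memory 8192
➜  minikube delete && minikube start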

For more useful minikube commands, refer here

Install Istio

Follow the instructions from the official documentation and add the path to istioctl to the PATH variable.
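
For reference, a typical download-and-PATH setup looks like the sketch below (this assumes Istio 1.16.1, the version used later in this gist; adjust the path to wherever you extracted it):

➜  curl -L https://istio.io/downloadIstio | sh -
➜  export PATH=$PWD/istio-1.16.1/bin:$PATH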

Install Istio onto the k8s cluster using istioctl:

➜  istioctl install
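
Optionally, sanity-check the installation with istioctl itself:

➜  istioctl verify-install
➜  istioctl version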

After installation, the namespaces should look like this:

➜  kubectl get ns
NAME              STATUS   AGE
default           Active   13m
istio-system      Active   3m
kube-node-lease   Active   14m
kube-public       Active   14m
kube-system       Active   14m

Some Kubernetes commands

  1. Show the namespaces
➜  kubectl get ns
  2. List the pods
➜  kubectl get pod
  3. List the pods that are part of the istio-system namespace
➜  kubectl get pod -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-546585745f-pnlpb   1/1     Running   0          3m55s
istiod-7f8c8bb8c8-bws45                 1/1     Running   0          4m39s

Demo project

We will use microservices-demo from GCP for this exercise.

From this project, find the release/kubernetes-manifests.yaml file and deploy it to the k8s cluster using

➜  kubectl apply -f release/kubernetes-manifests.yaml

To list the running deployments run

➜  kubectl get deploy

To check the status of the deployments, run the command below (it will take some time for all these services to come up; eventually the output will look like this):

➜  kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
redis-cart-7f557cdb55-2fzgh              1/1     Running   0          48m
checkoutservice-8694cb6b85-psvbc         1/1     Running   0          39m
emailservice-674b4c9599-tj5hr            1/1     Running   0          39m
productcatalogservice-8674897889-95dsl   1/1     Running   0          39m
currencyservice-7fc5c9887c-bpcch         1/1     Running   0          39m
paymentservice-98f69dd99-bp4cl           1/1     Running   0          39m
recommendationservice-6c4644d6cf-727xv   1/1     Running   0          39m
frontend-57b65c777c-lwqrf                1/1     Running   0          39m
shippingservice-64fd5cc9c9-4gfzd         1/1     Running   0          39m
cartservice-8656547fc4-rbrmx             1/1     Running   0          39m
adservice-bbf6c8b7-2ljhl                 1/1     Running   0          39m
loadgenerator-57fc96dcb6-gsllp           1/1     Running   0          39m

To delete a deployment created by a manifest file run

➜  kubectl delete -f release/kubernetes-manifests.yaml

To delete a specific deployment (let's say shippingservice) run

➜  kubectl delete deploy shippingservice

The above output tells us that every pod has exactly 1 container, so Istio didn't inject the Envoy proxies (this doesn't happen by default).

Configure Envoy proxy injection

First list the labels that are associated with the namespace (default in our case)

➜  kubectl get ns default --show-labels
NAME      STATUS   AGE   LABELS
default   Active   75m   kubernetes.io/metadata.name=default
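
Another quick way to check just the injection status is to print the istio-injection label as a column for all namespaces:

➜  kubectl get ns -L istio-injection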

Now we add a special label called istio-injection=enabled to our namespace. Once Istio sees this label, it auto-injects the Envoy proxy into our pods.

➜  kubectl label ns default istio-injection=enabled
namespace/default labeled

To check that the label was applied, query the labels of our namespace:

➜  kubectl get ns default --show-labels            
NAME      STATUS   AGE   LABELS
default   Active   77m   istio-injection=enabled,kubernetes.io/metadata.name=default
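
Should you ever need to turn auto-injection off again, removing the label is enough (the trailing dash removes a label):

➜  kubectl label ns default istio-injection-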

Now let's delete the existing deployment and re-deploy so that Istio starts auto-injecting the Envoy proxies.

To delete a deployment based on a manifest file run the command

➜  kubectl delete -f microservices-demo/release
gateway.networking.istio.io "frontend-gateway" deleted
virtualservice.networking.istio.io "frontend-ingress" deleted
serviceentry.networking.istio.io "allow-egress-googleapis" deleted
serviceentry.networking.istio.io "allow-egress-google-metadata" deleted
virtualservice.networking.istio.io "frontend" deleted
deployment.apps "emailservice" deleted
service "emailservice" deleted
deployment.apps "checkoutservice" deleted
service "checkoutservice" deleted
deployment.apps "recommendationservice" deleted
service "recommendationservice" deleted
deployment.apps "frontend" deleted
service "frontend" deleted
service "frontend-external" deleted
deployment.apps "paymentservice" deleted
service "paymentservice" deleted
deployment.apps "productcatalogservice" deleted
service "productcatalogservice" deleted
deployment.apps "cartservice" deleted
service "cartservice" deleted
deployment.apps "loadgenerator" deleted
deployment.apps "currencyservice" deleted
service "currencyservice" deleted
deployment.apps "shippingservice" deleted
service "shippingservice" deleted
deployment.apps "redis-cart" deleted
service "redis-cart" deleted
deployment.apps "adservice" deleted
service "adservice" deleted

Check if deletion completed.

➜  kubectl get pod
No resources found in default namespace.

Now re-apply the deployment using our manifest file.

➜  kubectl apply -f release/kubernetes-manifests.yaml
deployment.apps/apache created
deployment.apps/catalog created
deployment.apps/customer created
deployment.apps/order created
service/apache created
service/catalog created
service/customer created
service/order created

Now when you list the pods, you will see that there are two containers per pod (the additional container is the Envoy proxy).

➜  kubectl get pod
NAME                                     READY   STATUS            RESTARTS   AGE
productcatalogservice-8674897889-28v5l   0/2     Init:0/1          0          13s
cartservice-8656547fc4-tjhzs             0/2     Init:0/1          0          11s
frontend-57b65c777c-zq6nh                0/2     PodInitializing   0          15s
loadgenerator-57fc96dcb6-gv4r8           0/2     Init:0/2          0          9s
currencyservice-7fc5c9887c-mw9hc         0/2     Init:0/1          0          8s
checkoutservice-8694cb6b85-mx8hs         0/2     Running           0          16s
emailservice-674b4c9599-hnkqv            0/2     Running           0          16s
shippingservice-64fd5cc9c9-h6zhg         0/2     Init:0/1          0          6s
adservice-bbf6c8b7-4bcpm                 0/2     Pending           0          3s
paymentservice-98f69dd99-ldmgd           0/2     PodInitializing   0          14s
recommendationservice-6c4644d6cf-77rrk   2/2     Running           0          16s
redis-cart-7f557cdb55-g422p              0/2     Init:0/1          0          4s
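
To confirm that the extra container really is the Envoy sidecar, print the container names of one of the pods (substitute a pod name from your own cluster; the sidecar shows up as istio-proxy):

➜  kubectl get pod frontend-57b65c777c-zq6nh -o jsonpath='{.spec.containers[*].name}'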

Let's describe one of the pods:

➜  kubectl describe pod cartservice-8656547fc4-tjhzs
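
You can also ask Istio directly which workloads have a sidecar that is synced with istiod:

➜  istioctl proxy-status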

Adding addons to a cluster

Here's how we add Kiali to our cluster:

➜  kubectl apply -f ~/tools/istio/istio-1.16.1/samples/addons/kiali.yaml 
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created

Now let's list the pods that are part of the istio-system namespace.

➜  kubectl get pod -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-546585745f-pnlpb   1/1     Running   0          83m
istiod-7f8c8bb8c8-bws45                 1/1     Running   0          84m
kiali-64df7bf7cc-khnqc                  0/1     Running   0          55s 

Let's install Prometheus in the same way:

➜  kubectl apply -f ~/tools/istio/istio-1.16.1/samples/addons/prometheus.yaml 
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

Ensure that it's running

➜  kubectl get pod -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-546585745f-pnlpb   1/1     Running   0          88m
istiod-7f8c8bb8c8-bws45                 1/1     Running   0          89m
kiali-64df7bf7cc-khnqc                  1/1     Running   0          5m57s
prometheus-6549d6bdcc-ptvcf             2/2     Running   0          95s

To install all the addons in one shot, you can do:

➜  kubectl apply -f ~/tools/istio/istio-1.16.1/samples/addons -n istio-system

Here addons is the parent directory that can contain one or more manifests.

Port forwarding

First list the services

➜  kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.106.15.0      <pending>     15021:31078/TCP,80:31602/TCP,443:30222/TCP   100m
istiod                 ClusterIP      10.96.238.192    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        101m
kiali                  ClusterIP      10.98.19.7       <none>        20001/TCP,9090/TCP                           18m
prometheus             ClusterIP      10.101.150.228   <none>        9090/TCP                                     13m

To launch the Kiali dashboard:

➜  kubectl port-forward svc/kiali -n istio-system 20001
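
As an alternative to manual port-forwarding, istioctl can open the Kiali dashboard for you (it sets up the port-forward under the hood):

➜  istioctl dashboard kiali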

To launch the Jaeger UI:

➜  kubectl port-forward svc/tracing 8081:80 -n istio-system

To launch Grafana dashboards:

➜  kubectl port-forward svc/grafana 3000 -n istio-system

To launch the demo application's UI, first list the deployments:

➜  kubectl get deployment
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
redis-cart              1/1     1            1           44m
checkoutservice         1/1     1            1           44m
emailservice            1/1     1            1           44m
productcatalogservice   1/1     1            1           44m
currencyservice         1/1     1            1           44m
paymentservice          1/1     1            1           44m
recommendationservice   1/1     1            1           44m
frontend                1/1     1            1           44m
shippingservice         1/1     1            1           44m
cartservice             1/1     1            1           44m
adservice               1/1     1            1           44m
loadgenerator           1/1     1            1           44m

Now you can run the port-forwarding command below to launch the UI:

➜  kubectl port-forward deployment/frontend 8080:8080
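
With the port-forward running, the storefront should be reachable at http://localhost:8080, e.g.:

➜  curl -s http://localhost:8080 | head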

Advanced commands

Delete all the pods in a single namespace:

➜  kubectl delete --all pods --namespace=foo

Delete all deployments in a namespace, which will also delete all the pods attached to those deployments:

➜  kubectl delete --all deployments --namespace=foo

Delete all the namespaces except kube-system, which might be useful:

for each in $(kubectl get ns -o jsonpath="{.items[*].metadata.name}" | grep -v kube-system);
do
  kubectl delete ns "$each"
done

To delete everything

➜  kubectl delete all --all --all-namespaces
  • The first all means the common resource kinds (pods, replicasets, deployments, ...)
    • kubectl get all == kubectl get pods,rs,deployments, ...
  • The second --all means to select all resources of the selected kinds

Note that all does not include:

  • non-namespaced resources (e.g., clusterrolebindings, clusterroles, ...)
  • configmaps
  • rolebindings
  • roles
  • secrets
  • ...
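
If you also want to clear out some of the resource kinds that all does not cover, you can target them explicitly; a rough sketch for a single namespace (destructive, so double-check the namespace first):

➜  kubectl delete configmaps,secrets,roles,rolebindings --all --namespace=foo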
