First I created a k3d cluster and installed our old Istio (1.6.11) there using kubectl apply from the sm-configuration dir. I also added a couple of pods with sidecar injection enabled.
Started a recurring curl just to see whether there's any disruption:
while true; do sleep 0.5; curl --write-out '%{url_effective} - %{http_code} == ' --silent --output /dev/null -X POST -d '{"foo":"foo"}' http://localhost:8081/hello | pv -N "$(date +"%T.%N")" -t; done
http://localhost:8081/hello - 200 == 12:14:08.501771075: 0:00:00
http://localhost:8081/hello - 200 == 12:14:09.013122221: 0:00:00
http://localhost:8081/hello - 200 == 12:14:09.523396936: 0:00:00
http://localhost:8081/hello - 200 == 12:14:10.033961411: 0:00:00
http://localhost:8081/hello - 503 == 12:14:10.558230496: 0:00:00
http://localhost:8081/hello - 200 == 12:14:11.082789656: 0:00:00
http://localhost:8081/hello - 200 == 12:14:11.593750175: 0:00:00
http://localhost:8081/hello - 200 == 12:14:12.105312659: 0:00:00
http://localhost:8081/hello - 200 == 12:14:12.629518911: 0:00:00
http://localhost:8081/hello - 200 == 12:14:13.139959634: 0:00:00
http://localhost:8081/hello - 200 == 12:14:13.650731534: 0:00:00
http://localhost:8081/hello - 200 == 12:14:14.161452244: 0:00:00
http://localhost:8081/hello - 200 == 12:14:14.672984722: 0:00:00
http://localhost:8081/hello - 200 == 12:14:15.183644889: 0:00:00
http://localhost:8081/hello - 200 == 12:14:15.695060226: 0:00:00
http://localhost:8081/hello - 200 == 12:14:16.205855630: 0:00:00
http://localhost:8081/hello - 200 == 12:14:16.716869116: 0:00:00
http://localhost:8081/hello - 200 == 12:14:17.227709852: 0:00:00
http://localhost:8081/hello - 503 == 12:14:17.738772154: 0:00:00
http://localhost:8081/hello - 200 == 12:14:18.264202396: 0:00:00
http://localhost:8081/hello - 200 == 12:14:18.787389319: 0:00:00
http://localhost:8081/hello - 200 == 12:14:19.298372978: 0:00:00
http://localhost:8081/hello - 200 == 12:14:19.809620084: 0:00:00
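To quantify the disruption rather than eyeballing the scroll, the loop's output can be summarised. A minimal sketch, assuming the loop was amended with `| tee -a curl.log` (the file name and the sample lines below are illustrative, not the real capture):

```shell
# Stand-in sample of the loop's output; in practice this file would be
# written by tee from the curl loop above.
cat > curl.log <<'EOF'
http://localhost:8081/hello - 200 == 12:14:10.033961411: 0:00:00
http://localhost:8081/hello - 503 == 12:14:10.558230496: 0:00:00
http://localhost:8081/hello - 200 == 12:14:11.082789656: 0:00:00
EOF

# Count how many requests returned each status code.
grep -o -- '- [0-9]\{3\}' curl.log | sort | uniq -c
```

This prints one line per status code with its occurrence count, which makes it easy to compare error rates before and after the upgrade.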
The istioctl version had to match the target Istio version; istioctl 1.8 started throwing errors against this control plane, so I pinned 1.7.7:
asdf install istioctl 1.7.7
asdf global istioctl 1.7.7
git checkout istio-1.7.7
istioctl upgrade -d https://github.com/istio/istio/releases/download/1.7.7/istio-1.7.7-linux-amd64.tar.gz -f ~/salemove/sm-configuration/kubernetes/environment/_shared/helm/istio_values.yaml
2021-02-08T12:58:39.526386Z info proto: tag has too few fields: "-"
Control Plane - pilot pod - istiod-cb76c5d49-9r5q5 - version: 1.6.11
Control Plane - pilot pod - istiod-cb76c5d49-2lgh4 - version: 1.6.11
2021-02-08T12:58:52.947239Z warn found 4 CRD of unsupported v1alpha1 security policy: [clusterrbacconfigs.rbac.istio.io rbacconfigs.rbac.istio.io serviceroles.rbac.istio.io servicerolebindings.rbac.istio.io]. The v1alpha1 security policy is no longer supported starting 1.6. It's strongly recommended to delete the CRD of the v1alpha1 security policy to avoid applying any of the v1alpha1 security policy in the unsupported version
Upgrade version check passed: 1.6.11 -> 1.7.7.
Upgrade check: Warning!!! The following IOPS will be changed as part of upgrade. Please double check they are correct:
addonComponents:
grafana:
k8s: map[replicaCount:1] ->
kiali:
k8s: map[replicaCount:1] ->
prometheus:
k8s: map[replicaCount:1] ->
components:
ingressGateways:
'[#0]':
enabled: true -> false
pilot:
k8s:
env:
'[?->0]': -> map[name:PILOT_HTTP10 value:1]
'[?->1]': -> map[name:GODEBUG value:http2server=0]
'[0->?]': map[name:POD_NAME valueFrom:map[fieldRef:map[apiVersion:v1 fieldPath:metadata.name]]] ->
'[1->?]': map[name:POD_NAMESPACE valueFrom:map[fieldRef:map[apiVersion:v1 fieldPath:metadata.namespace]]] ->
hpaSpec: -> map[minReplicas:2]
resources: -> map[limits:map[cpu:100m memory:256Mi] requests:map[cpu:20m memory:128Mi]]
installPackagePath: /tmp/istio-install-packages/istio-1.6.11/manifests -> /tmp/istio-install-packages/istio-1.7.7/manifests
meshConfig:
enablePrometheusMerge: false -> true
values:
base:
enableCRDTemplates: -> false
gateways:
istio-ingressgateway:
meshExpansionPorts:
'[0->?]': map[name:tcp-pilot-grpc-tls port:15011 targetPort:15011] ->
'[2->?]': map[name:tcp-citadel-grpc-tls port:8060 targetPort:8060] ->
global:
istiod:
enabled: true ->
proxy:
envoyStatsd: map[enabled:false host:<nil> port:<nil>] ->
lifecycle: |-
-> map[preStop:map[exec:map[command:[bash -c pilot-agent request POST /healthcheck/fail;
sleep 10;
pilot-agent request POST /drain_listeners?inboundonly;
while [ $(pilot-agent request GET /stats?filter=^cluster\.inbound.*upstream_cx_active$ | grep -v mgmtCluster | awk '{total += $2} END {print total}') -ne 0 ]; do sleep 1; done
]]]]
proxy_init:
resources:
limits:
cpu: 100m -> 2000m
memory: 50Mi -> 1024Mi
grafana:
image:
tag: 6.7.4 -> 7.0.5
kiali:
tag: v1.18 -> v1.22
meshConfig: -> map[accessLogEncoding:JSON enableAutoMtls:false enableTracing:true policyCheckFailOpen:true]
prometheus:
tag: v2.15.1 -> v2.19.2
sidecarInjectorWebhook:
neverInjectSelector: -> [map[matchExpressions:[map[key:app operator:DoesNotExist]]]]
telemetry:
v2:
metadataExchange:
wasmEnabled: -> false
prometheus:
wasmEnabled: -> false
tracing:
jaeger:
tag: 1.16 -> 1.18
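The `lifecycle` entry in the diff above is hard to read in istioctl's flattened map notation. Reconstructed as a values fragment (indentation and YAML layout assumed; the content is taken verbatim from the printed map), it corresponds to roughly:

```yaml
# Reconstruction of the global.proxy.lifecycle value from the diff above.
global:
  proxy:
    lifecycle:
      preStop:
        exec:
          command:
            - bash
            - -c
            - |
              pilot-agent request POST /healthcheck/fail;
              sleep 10;
              pilot-agent request POST /drain_listeners?inboundonly;
              while [ $(pilot-agent request GET /stats?filter=^cluster\.inbound.*upstream_cx_active$ | grep -v mgmtCluster | awk '{total += $2} END {print total}') -ne 0 ]; do sleep 1; done
```

I.e. on pod termination the sidecar fails its health check, waits, drains inbound listeners, and then blocks until no inbound upstream connections remain active.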
Confirmed the prompt to proceed:
Confirm to proceed [y/N]? y
✔ Istio core installed
✔ Istiod installed
- Pruning removed resources Removed HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
Removed PodDisruptionBudget:istio-system:istio-ingressgateway.
Removed Service:istio-system:istio-ingressgateway.
Removed ServiceAccount:istio-system:istio-ingressgateway-service-account.
Removed RoleBinding:istio-system:istio-ingressgateway-sds.
Removed Role:istio-system:istio-ingressgateway-sds.
✔ Installation complete ..........
Upgrade rollout completed. All Istio control plane pods are running on the target version.
Control Plane - pilot pod - istiod-6d468f6849-45rh4 - version: 1.7.7
Control Plane - pilot pod - istiod-6d468f6849-br6zf - version: 1.7.7
Success. Now the Istio control plane is running at version 1.7.7.
To upgrade the Istio data plane, you will need to re-inject it.
If you’re using automatic sidecar injection, you can upgrade the sidecar by doing a rolling update for all the pods:
kubectl rollout restart deployment --namespace <namespace with auto injection>
If you’re using manual injection, you can upgrade the sidecar by executing:
kubectl apply -f <(istioctl kube-inject -f <original application deployment yaml>)
kubectl rollout restart deployment --namespace default
^ the rollout caused two 503 errors, but I think that's because the hello-node pod didn't have any probes or a preStop hook set up.
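What that fix might look like, as a sketch: a readinessProbe so traffic only reaches a container that can serve, plus a preStop sleep so in-flight requests can drain before the container is killed. All names, ports, paths, and timings below are hypothetical, not taken from the actual hello-node manifest:

```yaml
# Hypothetical container spec for hello-node; values are illustrative.
containers:
  - name: hello-node
    image: hello-node:latest
    ports:
      - containerPort: 8080
    readinessProbe:          # keep traffic away until the app can serve
      httpGet:
        path: /hello
        port: 8080
    lifecycle:
      preStop:               # give in-flight requests time to drain
        exec:
          command: ["sh", "-c", "sleep 10"]
```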
v1.7.7 successfully running on k3d.