@gilangvperdana
Last active August 20, 2024 09:34
CEPH CLI :
ceph health detail
ceph -s
ceph tell osd.* config get osd_max_backfills
ceph tell osd.* config set osd_max_backfills 2
ceph tell osd.* config get osd_recovery_max_active
ceph tell osd.* config set osd_recovery_max_active 4
ceph pg deep-scrub $pg_id
ceph osd pool ls detail
ceph health detail | grep "not deep-scrubbed since"|awk '{print $2}'|xargs -I@ bash -c 'ceph pg deep-scrub @'
ceph health detail | grep "not scrubbed since"|awk '{print $2}'|xargs -I@ bash -c 'ceph pg scrub @'
ceph pg $pg_id mark_unfound_lost delete
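The two one-liners above pull PG ids out of `ceph health detail` and queue a (deep-)scrub for each. A minimal offline sketch of the extraction step, run against a hypothetical health report (the PG ids and dates below are made up, not real cluster output):

```shell
#!/bin/sh
# Hypothetical excerpt of "ceph health detail" output (assumption, not a real cluster).
health='    pg 2.1a not deep-scrubbed since 2024-01-01T00:00:00.000000
    pg 3.4f not deep-scrubbed since 2024-01-02T00:00:00.000000'

# Same extraction as the one-liner: match the warning line, keep the PG id (2nd field).
echo "$health" | grep "not deep-scrubbed since" | awk '{print $2}'
# On a live cluster, pipe the ids into: xargs -I@ ceph pg deep-scrub @
```

The `xargs -I@ bash -c 'ceph pg deep-scrub @'` form in the original can be shortened to `xargs -I@ ceph pg deep-scrub @`; both run one scrub command per PG.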
ARCHIVE CRASH REPORTS :
ceph health detail
ceph crash ls
ceph crash info <crash-id>
ceph crash archive <crash-id>
IN OSD :
ceph osd in osd.22
AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED
ceph config set mon mon_warn_on_insecure_global_id_reclaim false
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
ceph config set mon mon_warn_on_insecure_global_id_reclaim true
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed true
ceph config set mon auth_allow_insecure_global_id_reclaim false
CHECK REPLICAS :
ceph osd pool set [pool] size 3
ceph osd pool set [pool] min_size 2
ceph osd dump | grep 'replicated size'
HOW TO TRACE A CEPH OSD :
ceph osd tree # find the one that is down
ceph osd find osd.XX # shows which host it lives on
ssh $host # log in to the host with the broken OSD
systemctl -a | grep osd.XX # grep the OSD's unit; if the OSD is really broken, the unit has usually auto-failed
systemctl status ceph-osd@XX # check since when it failed; if it just failed, the OSD's disk is likely broken
# Trace which disk the OSD is on
## Older Ceph
lsblk # then match the output against the broken OSD
## Newer Ceph
ceph-volume lvm list # find the broken OSD; the listing shows the physical disk location
## Check the disk health
smartctl -H /dev/XXX
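The trace above can be checked step by step without a cluster. A sketch that exercises the parsing against canned text; the JSON shape, host name, and the failed-unit line are hypothetical samples, not real `ceph` or `systemctl` output:

```shell
#!/bin/sh
# Hypothetical "ceph osd find osd.22" output (assumption; real output is JSON like this).
find_out='{"osd": 22, "host": "ceph-node-3", "crush_location": {"root": "default"}}'

# Pull out the host the OSD lives on.
host=$(echo "$find_out" | sed -n 's/.*"host": "\([^"]*\)".*/\1/p')
echo "osd.22 lives on: $host"

# Hypothetical "systemctl -a | grep osd.22" line: a failed unit usually means a dead disk.
unit='ceph-osd@22.service loaded failed failed Ceph object storage daemon osd.22'
case "$unit" in
  *" failed "*) echo "unit failed -- check the disk with smartctl -H" ;;
esac
```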
SELECT CLUSTER :
kubectl config get-contexts
kubectl config use-context my-context-name
SYNTAX:
View NODES
kubectl get nodes
kubectl get nodes --show-labels
kubectl get pods --all-namespaces
kubectl describe node NODE_NAME
INITIAL INSTALLATION:
after Kubernetes is installed on the master and the nodes, run this on the master:
kubeadm init
kubeadm token create --print-join-command (paste the printed join command on each node to be joined)
DEPLOY DASHBOARD:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy
DEPLOY AN IMAGE WITHOUT YAML:
kubectl create deployment nginx-web --image=gilangvperdana/apps:github
Create a Deployment:
kubectl create deployment DEPLOYMENT_NAME --image=IMAGE_NAME
kubectl get deploy
kubectl delete deploy DEPLOYMENT_NAME
Scale a Deployment's replicas:
kubectl scale --replicas=4 deployment/DEPLOYMENT_NAME
Create a Namespace:
kubectl create namespace NAMESPACE_NAME
Exec into a POD:
kubectl exec -it POD_NAME -- bash
View PODs:
kubectl get pods
kubectl get pods -o wide
kubectl describe pod POD_NAME
kubectl delete pod POD_NAME
DELETE SERVICE/POD/DEPLOYMENT FROM YAML:
kubectl delete -f FILE_THAT_WAS_CREATED.yaml
Create a POD:
kubectl run POD_NAME --image=IMAGE_NAME --port=80
DELETE A REPLICA SET:
kubectl get replicaset
kubectl delete replicaset REPLICASET_NAME
kubectl get service
kubectl delete svc SERVICE_NAME
kubectl describe svc SERVICE_NAME
DEPLOY AN APPLICATION ON KUBERNETES:
Create a deployment from a YAML config:
create a .yaml file with the usual Kubernetes deployment configuration.
kubectl create -f yaml_file_name.yaml
Create a service to be exposed, from a YAML config:
kubectl create -f yaml_file_name.yaml
Expose without YAML:
kubectl expose pod POD_NAME --name=SERVICE_NAME --port=LISTEN_PORT
kubectl expose pod POD_NAME --name=SERVICE_NAME --port=LISTEN_PORT --type=SERVICE_TYPE (NodePort, ClusterIP, LoadBalancer, etc.)
Manually scale a Kubernetes deployment:
kubectl scale --replicas=5 deployment/DEPLOYMENT_NAME
Update a Deployment with a new image:
kubectl rollout restart deployment DEPLOYMENT_NAME
Autoscaling with a YAML file / HPA (HorizontalPodAutoscaler):
kubectl create -f hpa_file.yaml
kubectl get hpa
kubectl describe hpa
kubectl delete hpa HPA_NAME
CHECK PODS:
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl get pods --namespace kube-system
https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
MISC:
kubectl apply -f file.yaml
INSTALL KOMPOSE TO TRANSLATE DOCKER COMPOSE FOR KUBECTL:
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
kompose convert
kubectl apply -f file1.yaml,file2.yaml
kompose --file ./examples/docker-guestbook.yml up
COPY POD TO LOCAL:
kubectl cp name-of-your-pod:/path/to/your_folder /path/on_your_host/to/your_folder
CHECK NODE/POD RESOURCES:
kubectl top pod
kubectl top node
CONFIG FILE K8s:
sudo cat /root/.kube/config
kubectl config view --minify --raw
LENS FOR UBUNTU DESKTOP:
sudo snap install kontena-lens --classic
HOSTS FILE LOCATION:
Windows 10 - "C:\Windows\System32\drivers\etc\hosts"
Linux - "/etc/hosts"
ADD A LABEL TO A NODE:
kubectl label nodes <node-name> <label-key>=<label-value>
The connection to the server 10.148.0.5:6443 was refused - did you specify the right host or port?
systemctl restart kubelet
ASSIGN A ROLE:
kubectl label nodes NODE_NAME node-role.kubernetes.io/worker=worker
INSTALL INGRESS ON v1.21.0
kubectl apply -f https://raw.githubusercontent.com/gilangvperdana/K8s-BareMetal-Ubuntu21.04/master/3.Ingress-Nginx-Controller/2.%20nginx-controller-baremetal-NP
kubectl get pod -n ingress-nginx --watch
kubectl edit svc -n ingress-nginx ingress-nginx-controller
Add under spec:
  externalIPs:
  - LOCAL-MASTER-IP
TO USE THE NGINX INGRESS, ADD THE NGINX ANNOTATION METADATA TO YOUR INGRESS YAML :
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
save, exit.
FIX NODE TAINTS :
kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule-
FIX STUCK TERMINATING NAMESPACE :
NS=`kubectl get ns |grep Terminating | awk 'NR==1 {print $1}'` && kubectl get namespace "$NS" -o json | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" | kubectl replace --raw /api/v1/namespaces/$NS/finalize -f -
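The core of the one-liner above is blanking the `finalizers` array before posting the object to the `/finalize` endpoint. The sed step can be sanity-checked offline against a hypothetical namespace JSON (the namespace name below is made up):

```shell
#!/bin/sh
# Hypothetical "kubectl get namespace <ns> -o json" fragment (assumption).
ns_json='{"metadata": {"name": "stuck-ns"},
 "spec": {"finalizers": ["kubernetes"]}}'

# Same transformation as the one-liner: flatten, then empty the finalizers list.
stripped=$(echo "$ns_json" | tr -d "\n" | sed 's/"finalizers": \[[^]]*\]/"finalizers": []/')
echo "$stripped"
# On a live cluster, pipe this JSON into:
#   kubectl replace --raw /api/v1/namespaces/stuck-ns/finalize -f -
```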
CHANGE THE KUBECTL EDITOR TO NANO :
KUBE_EDITOR="nano" kubectl edit svc/docker-registry
CONTAINERD KUBERNETES: PRUNE UNUSED IMAGES :
crictl rmi --prune
VIEW KUBELET STATUS/LOGS :
journalctl -u kubelet -f
ISSUE :
kube-node1 kubelet[6774]: E0807 15:09:16.627200 6774 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: failed to create kubelet: get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory\""
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
==================================
NETPLAN: BOND 802.3AD + VLANS
==================================
network:
  version: 2
  ethernets:
    enp197s0f0:
      dhcp4: no
      mtu: 9000
    enp197s0f1:
      dhcp4: no
      mtu: 9000
    br-ex:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
      - enp197s0f0
      - enp197s0f1
      mtu: 9000
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4
  vlans:
    bond0.11:
      id: 11
      link: bond0
      mtu: 1500
      addresses:
      - 10.24.11.11/24
      gateway4: 10.24.11.1
      nameservers:
        addresses:
        - 167.205.23.1
        - 167.205.22.123
        search: []
    bond0.12:
      id: 12
      link: bond0
      mtu: 1500
      addresses:
      - 10.24.12.11/24
    bond0.13:
      id: 13
      link: bond0
      mtu: 9000
      addresses:
      - 10.24.13.11/24
    bond0.14:
      id: 14
      link: bond0
      mtu: 9000
      addresses:
      - 10.24.14.11/24
    bond0.15:
      id: 15
      link: bond0
      mtu: 9000
      addresses:
      - 10.24.15.11/24
==================================
NETPLAN: MULTIPLE NICS MATCHED BY MAC
==================================
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
      - 10.0.3.186/16
      match:
        macaddress: fa:16:3e:16:26:ea
      mtu: 1450
      set-name: ens3
    ens7:
      dhcp4: true
      match:
        macaddress: fa:16:3e:62:7c:50
      mtu: 1450
      set-name: ens4
    ens8:
      dhcp4: false
      addresses:
      - 10.0.2.227/16
      match:
        macaddress: fa:16:3e:18:a1:06
      mtu: 1450
      set-name: ens5
==================================
NETPLAN: DHCP + STATIC MIX
==================================
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses:
      - 192.168.17.128/24
      match:
        macaddress: 00:0c:29:99:1c:fe
      mtu: 1500
      set-name: ens3
    ens38:
      dhcp4: true
      match:
        macaddress: 00:0c:29:99:1c:12
      mtu: 1500
      addresses:
      - 10.20.0.116/16
      gateway4: 10.20.0.1
      nameservers:
        addresses: [10.254.11.5, 10.252.252.106, 10.254.11.4, 1.1.1.1]
      set-name: ens4
      addresses:
      - 10.20.1.222/16
      gateway4: 192.168.10.1
      nameservers:
        addresses: [10.254.11.5, 10.252.252.106, 10.254.11.4, 1.1.1.1]
===================
WITH METRICS
===================
sudo nano /etc/netplan/50-cloud-init.yaml
---
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: true
      match:
        macaddress: fa:16:3e:c0:91:6d
      mtu: 1500
      set-name: ens3
      dhcp4-overrides:
        route-metric: 100
    ens4:
      dhcp4: true
      match:
        macaddress: fa:16:3e:ed:49:98
      mtu: 1442
      set-name: ens4
      dhcp4-overrides:
        route-metric: 200
---
sudo netplan apply
==============================
MULTIPLE GATEWAY WITH METRICS
==============================
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      match:
        macaddress: fa:16:3e:d2:41:f6
      mtu: 8950
      set-name: ens3
      addresses:
      - 10.0.2.115/16
      routes:
      - to: 0.0.0.0/0
        via: 10.0.0.1
        metric: 50
      nameservers:
        addresses: [10.0.0.1]
    ens7:
      dhcp4: false
      addresses:
      - 172.20.2.139/16
      match:
        macaddress: fa:16:3e:3f:2a:59
      mtu: 8950
      set-name: ens7
    ens8:
      dhcp4: false
      addresses:
      - 10.0.4.0/16
      match:
        macaddress: fa:16:3e:3b:ad:21
      mtu: 8950
      set-name: ens8
    ens9:
      dhcp4: no
      addresses:
      - 172.20.1.162/16
      match:
        macaddress: fa:16:3e:21:34:4f
      mtu: 8950
      set-name: ens9
      routes:
      - to: 0.0.0.0/0
        via: 172.20.0.1
        metric: 100
      nameservers:
        addresses: [172.20.0.1]
==========================================
BONDING ON INSTANCE ACTIVE BACKUP DHCP :
==========================================
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      match:
        macaddress: fa:16:3e:71:ff:01
      mtu: 8950
    ens7:
      dhcp4: no
      match:
        macaddress: fa:16:3e:8f:f3:1d
      mtu: 8950
  bonds:
    bond0:
      interfaces:
      - ens3
      - ens7
      mtu: 8950
      dhcp4: yes
      parameters:
        mode: active-backup
        primary: ens3
==========================================
BONDING STATIC :
==========================================
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      match:
        macaddress: fa:16:3e:71:ff:01
      mtu: 8950
    ens7:
      dhcp4: no
      match:
        macaddress: fa:16:3e:8f:f3:1d
      mtu: 8950
  bonds:
    bond0:
      interfaces: [ens3, ens7]
      mtu: 8950
      dhcp4: no
      addresses:
      - 172.20.0.140/16
      - 172.20.2.255/16
      gateway4: 172.20.0.1
      nameservers:
        addresses: [172.20.0.1]
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
IF NOT HA :
echo "100" > /sys/class/net/bond0/bonding/miimon
CLI OPENSTACK CHEAT-SHEET
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html
https://docs.openstack.org/murano/pike/reference/appendix/articles/image_builders/upload.html
openstack floating ip list
openstack server list
openstack image list
openstack image show IMAGE_NAME
openstack network list
openstack flavor list
openstack endpoint list
openstack server create --image IMAGE --flavor m1.tiny --network NETWORK_ID INSTANCE_NAME
openstack server create --image IMAGE --flavor m1.tiny --network NETWORK_ID --key-name KEYPAIR_NAME INSTANCE_NAME
openstack server add floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
openstack server delete INSTANCE_NAME
openstack server show INSTANCE_NAME
openstack floating ip create NETWORK_NAME
openstack server remove floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
openstack floating ip delete FLOATING_IP_ADDRESS
openstack volume service list
systemctl enable snap.microstack.cinder-volume.service
systemctl start snap.microstack.cinder-volume.service
CREATE INSTANCE ON SPECIFIC COMPUTE NODE :
openstack server create \
--image 2004-ne \
--flavor small \
--security-group 464ac0d1-89fb-4c06-a109-dcc860011ce1 \
--key-name key-idcomstack-0 \
--availability-zone nova:glcomp2 \
--network internal \
test-instance
See which external network an internal network is attached to :
openstack router list
openstack router show ROUTER_NAME # look at the subnet ID and match it against the subnets in: openstack network list
HOW TO ENABLE AN OPENSTACK COMPUTE SERVICE :
openstack compute service set --enable idcommstack-1 nova-compute
openstack compute service set --up idcommstack-1 nova-compute
openstack compute service list
openstack availability zone list --long
DELETE INSTANCE :
openstack server list
openstack server delete INSTANCE_NAME
HARD DELETE :
nova force-delete <id_instance>
CHECK SOFT DELETE/HARD DELETE :
openstack server list --deleted
VIEW THE CONSOLE LOG VIA CLI :
openstack console log show INSTANCE_ID
IF CLOUD-INIT FAILED, THE INSTANCE IS BUILT BUT GETS NO ADAPTER; CHECK :
openstack console log show INSTANCE_ID
or the nova compute log (/var/log/kolla/)
or the hostname / public key: if they differ from what you set, cloud-init failed.
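A quick way to spot a cloud-init failure is to grep the console log for its warning/error markers. A sketch against a hypothetical log excerpt (the lines below are made up; on a real cloud you would feed it the output of `openstack console log show`):

```shell
#!/bin/sh
# Hypothetical console-log excerpt (assumption, not real nova/cloud-init output).
log='[   12.3] cloud-init[812]: util.py[WARNING]: Failed to fetch metadata
[   13.0] cloud-init[812]: Cloud-init v. 23.1 finished'

found=no
# Flag the instance when cloud-init logged a warning/error/failure.
if echo "$log" | grep -qiE 'cloud-init.*(fail|warning|error)'; then
  found=yes
  echo "cloud-init reported problems -- adapter/hostname/key may be missing"
fi
```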
VIEW A HYPERVISOR SUMMARY VIA CLI :
nova hypervisor-stats
-------------------------------------------------------------------------------------------------------------------------
HOW TO VIEW THE EXTERNAL PROVIDER :
sudo cat /etc/kolla/neutron-server/ml2_conf.ini | grep flat_network | awk '{print $3}'
UPLOAD IMAGES :
ubuntu: https://cloud-images.ubuntu.com/
centos: https://cloud.centos.org/
wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
openstack image create --disk-format qcow2 \
--container-format bare --public \
--file ./cirros-0.5.1-x86_64-disk.img cirros
openstack image create --disk-format qcow2 \
--container-format bare --public \
--file ./ubuntu-20.10-server-cloudimg-amd64.img ubuntu2010
OR
openstack image create --file cirros-0.3.4-x86_64-disk.raw --container-format bare --disk-format raw --public cirros
openstack image list
openstack image show cirros-xx
CREATE AN EXTERNAL NETWORK :
openstack network create --share --external \
--provider-physical-network physnet1 \
--provider-network-type flat external-net-1
#with external-net-1 is name of network.
CREATE AN EXTERNAL SUBNET :
openstack subnet create --network external-net-1 \
--gateway 20.1.1.1 --no-dhcp \
--subnet-range 20.1.1.0/24 external-subnet-1
CREATE AN INTERNAL NETWORK :
openstack network create internal-net-1
CREATE AN INTERNAL SUBNET :
openstack subnet create --network internal-net-1 \
--allocation-pool start=192.168.1.10,end=192.168.1.254 \
--dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
--subnet-range 192.168.1.0/24 internal-subnet-1
# with internal-subnet-1 the name of the subnet & internal-net-1 the name of the network.
VIEW A NETWORK :
openstack network show external-net-1 #external-net-1 is name of network.
VIEW A SUBNET :
openstack subnet show external-subnet-1 #external-subnet-1 is the name of the subnet.
CREATE A ROUTER :
openstack router create router-1
openstack router set --external-gateway external-net-1 router-1
openstack router add subnet router-1 internal-subnet-1
openstack router list
openstack router show router-1
SECURITY GROUP :
openstack security group create security-group-1 --description 'Allow SSH and ICMP'
openstack security group rule create --protocol icmp security-group-1
openstack security group rule create --protocol tcp --ingress --dst-port 22 security-group-1
openstack security group list
openstack security group rule list security-group-1
Add a security group to an instance:
openstack server add security group INSTANCE_ID SECGROUP_NAME
KEYPAIR :
openstack keypair create --public-key ~/.ssh/id_rsa.pub controller-key-XX
openstack keypair list
openstack keypair show controller-key-XX
FLAVOR :
openstack flavor create --ram 1024 --disk 8 --vcpus 1 --public small-XX
openstack flavor list
openstack flavor show small-XX
LAUNCH INSTANCE :
openstack server create --flavor small-1 \
--image cirros-1 \
--key-name controller-key-1 \
--security-group security-group-1 \
--network internal-net-1 \
cirros-instance-1
openstack server list
openstack server show cirros-instance-XX
CREATE INSTANCE FROM VOLUME :
openstack image list
openstack volume create --image IMAGE_ID --size SIZE_IN_GB bootable_volume
openstack volume unset --image-property 'min_disk' <volume name>
FLOATING IP :
openstack floating ip create --floating-ip-address 20.XX.XX.100 external-net-XX
openstack floating ip list
openstack server add floating ip cirros-instance-XX 20.XX.XX.100
openstack server list
SSH / TEST INSTANCE :
ssh cirros@floatingip
cat /etc/os-release
ping -c 4 google.com
VIEW AN INSTANCE'S VNC ENDPOINT :
openstack console url show cirros-instance-horizon-XX
PROJECT :
openstack project list
openstack project create --description 'my own project' myproject-1 --domain default
openstack project set myproject-1 --disable
openstack project set myproject-1 --enable
openstack project set myproject-1 --name mynewproject-1
openstack project show mynewproject-1
USER :
openstack user list
openstack user create --project mynewproject-1 --password "123" gilang
openstack user set gilang --disable
openstack user set gilang --enable
openstack user set gilang --name gilang-1 --email gilangvirga271@gmail.com
USER ROLE :
openstack role list
openstack role add --user gilang-1 --project mynewproject-1 _member_
openstack role assignment list --user gilang-1 --project mynewproject-1 --names
QUOTA :
openstack quota show mynewproject-1
openstack quota set --instances 20 mynewproject-1
openstack quota show mynewproject-1 -c instances
CINDER VOLUME :
openstack volume create --size 5 myvolume-24
openstack volume list
openstack volume show myvolume-24
ATTACH VOLUME TO INSTANCE :
openstack server add volume cirros-instance-1 myvolume-24 --device /dev/vdb
openstack volume list
openstack volume delete myvolume-24
DETACH VOLUME FROM INSTANCE :
openstack server remove volume SERVER_ID VOLUME_ID
openstack server remove volume 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 \
573e024d-5235-49ce-8332-be1576d323f8
MOUNT VOLUME TO FOLDER (on ATTACHED INSTANCE) :
$ sudo mkdir /data
$ sudo mkfs.ext4 /dev/vdb
$ sudo mount /dev/vdb /data
$ sudo lsblk
MAKE THE VOLUME MOUNT PERSISTENT!
sudo blkid /dev/vdb
/dev/vdb: UUID="edeb4bae-32f9-4279-bf8f-b9ad64d3bc16" TYPE="ext4"
nano /etc/fstab
UUID=edeb4bae-32f9-4279-bf8f-b9ad64d3bc16 /data ext4 defaults 0 0
shutdown -r now
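The fstab line above can be generated from `blkid` output instead of copied by hand. A sketch parsing a canned blkid line (the sample reuses the UUID from the example above; on a real host you would run `blkid /dev/vdb` instead):

```shell
#!/bin/sh
# Hypothetical "blkid /dev/vdb" output (assumption; UUID taken from the example above).
blkid_out='/dev/vdb: UUID="edeb4bae-32f9-4279-bf8f-b9ad64d3bc16" TYPE="ext4"'

# Extract the UUID and filesystem type.
uuid=$(echo "$blkid_out" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
fstype=$(echo "$blkid_out" | sed -n 's/.*TYPE="\([^"]*\)".*/\1/p')

# Emit the fstab entry; on a real host append it with: ... | sudo tee -a /etc/fstab
echo "UUID=$uuid /data $fstype defaults 0 0"
```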
GROW THE FILESYSTEM ON THE INSTANCE (after resizing the volume) :
sudo growpart /dev/vdb
sudo e2fsck -f /dev/vdb
sudo resize2fs /dev/vdb
lsblk
df -hT
NOVA QUOTA EXCEEDED :
nova quota-defaults
nova quota-class-update --instances 35 default
OPENSTACK HEAT YAML :
openstack stack create -t <template name> <stack name>
openstack stack create -t basic-stack.yaml basic-stack
openstack server show heat-instance
COLD MIGRATION (NO LIVE MIGRATE) :
openstack server migrate ${VM_ID} --host HostC --os-compute-api-version 2.56
openstack server resize confirm ${VM_ID}
LIVE MIGRATION INSTANCE :
$ openstack server list
$ openstack server show id_instance
$ openstack compute service list
$ openstack server migrate ${VM_ID} --live HostC
or
$ openstack server migrate <server> --live-migration --host <target host> --os-compute-api-version 2.30
$ openstack server show d1df1b5a-70c4-4fed-98b7-423362f2c47c
RESIZE INSTANCE :
$ openstack server resize --flavor nama_flavor nama_instance
$ openstack server resize --confirm nama_instance
CHECK INTERNET FROM A ROUTER NAMESPACE :
ip netns ls
ip netns exec qrouter-xxxx ping 1.1.1.1
ip netns exec qrouter-xxxx ip route
ip netns exec qrouter-xxxx bash
ip netns exec qdhcp-xxxx ssh ubuntu@internal_instance_ip
SAVE IMAGE :
openstack image list
openstack image save --file snapshot.raw 168160e4-faf0-44a9-bfc7-98bf38e54805
openstack image create 2004-pass123-ne --container-format bare --disk-format qcow2 --file snapshot.raw
https://stackbees.co/post/migrate-vm-using-snapshot/
if the process gets Killed, use the Glance CLI
apt install python3-glanceclient
openstack image list
glance image-download --file snapshot.raw f30b204e-1ce6-40e7-b8d9-b353d4d84e7d
openstack image create 2004-ne --container-format bare --disk-format qcow2 --public --file snapshot.raw
UPLOAD USING THE GLANCE CLI
glance image-create --name cirros2 --visibility public \
--disk-format qcow2 \
--container-format bare < cirros-0.6.0-x86_64-disk.img
RESTART CONTAINERS ON MANY NODES :
for i in {11..16}; do ssh 10.24.12.$i docker restart nova_novncproxy nova_conductor nova_api nova_scheduler nova_compute nova_libvirt nova_ssh;done
RESET INSTANCE STATE :
nova reset-state INSTANCE_ID
or
openstack server set --state error INSTANCE_ID
CREATE INSTANCE :
openstack server create \
--image Ubuntu18.04.img \
--flavor medium \
--key-name controller1-key \
--availability-zone general:bdg01r01cmpt03 \
--network demo-net \
test5
nova reset-state --active id
ATTACH VOLUME TO INSTANCE :
openstack server add volume cirros-instance-1 myvolume-24 --device /dev/vdb
openstack volume list
openstack volume delete myvolume-24
CREATE INSTANCE WITH EXTERNAL PORT :
openstack port create --vnic-type direct \
--network ext-net test-port-a
openstack port list | grep test-port-a
openstack server create \
--flavor small \
--image Ubuntu18.04 \
--port test-port-a \
--key-name controller2-key \
--availability-zone general:bdg01r01cmpt02 \
btech-ext-port1
OPENSTACK ROLE ASSIGNMENT :
openstack role assignment list --names
delete volumes & snapshots that are stuck and refuse to be deleted
# get the OpenStack mariadb root password
cat /etc/kolla/passwords.yml | grep database # look for the admin_database entry
#Volumes
docker exec -ti mariadb bash
mysql -u root -p
use cinder;
show tables;
update volumes set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='54b6ae0f-bafb-413d-b4d7-17969daffad7';
update volumes set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='c9f7cc63-490e-4362-b64d-5f570ce28c13';
# IMAGES
update images set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='272c622e-ac88-4c27-b013-f16b9cf49ec0';
#Snapshot
docker exec -ti mariadb bash
mysql -u root -p
use cinder;
show tables;
update snapshots set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='c1a5ba8b-3f45-4bdf-bb31-5741cf0ec9c2';
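The UPDATE statements above differ only in table name and row id, so they can be templated. A sketch that prints the SQL for review before you paste it into the mariadb shell (the helper function name is my own, and the example reuses a volume id from above):

```shell
#!/bin/sh
# Template the soft-delete UPDATEs above so only the table and row id vary.
soft_delete_sql() {
  table=$1; id=$2
  echo "update $table set deleted=1,status='deleted',deleted_at=now(),updated_at=now() where deleted=0 and id='$id';"
}

# Example with the volume id used above; run the printed SQL inside the mariadb container.
soft_delete_sql volumes 54b6ae0f-bafb-413d-b4d7-17969daffad7
```

Printing the statement first, instead of piping it straight into mysql, leaves a chance to double-check the table and id before touching the database.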
BR-EX MISSING (KOLLA) :
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eno1
ovs-vsctl show
DELETE BR-EX :
ovs-vsctl del-br br-ex1
TURN ON PROMISC :
ip link set br-ex up promisc on
RESTART NOVA :
for i in {11..18}; do ssh 10.24.12.$i docker restart nova_compute nova_libvirt nova_ssh nova_novncproxy nova_conductor nova_api nova_scheduler; done
## OVN CLEANUP :
docker exec -it ovn_nb_db bash
ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/kick OVN_Northbound <id>
docker exec -it ovn_sb_db bash
ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/kick OVN_Southbound <id>
for i in {11..13}; do ssh 192.168.1.$i docker restart ovn_northd ovn_nb_db ovn_sb_db ovn_controller;done
for i in {11..13}; do ssh 10.24.12.$i docker restart neutron_server;done
OVN :
/openstack/venvs/neutron-22.0.0.0rc2.dev111/bin/neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair
/var/lib/kolla/venv/bin/neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair
DETERMINE A VM'S VIRTUALIZATION TYPE :
sudo dmidecode -s system-product-name
VIEW THE HYPERVISOR VIRTUALIZATION :
systemd-detect-virt
================================
OVN
================================
RESYNC OVN NEUTRON DB :
docker exec -it -u 0 neutron_server bash
/var/lib/kolla/venv/bin/neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair
OVN CHECK STATUS :
ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
ovn-nbctl --if-exists get NB_GLOBAL . options:northd_probe_interval
ovn-nbctl list connection
ovn-sbctl list connection
ovn-nbctl set connection . inactivity_probe='60000'
ovn-sbctl set connection . inactivity_probe='60000'
ovn-nbctl list connection
ovs-vsctl set open . external_ids:ovn-remote-probe-interval=100000
ovs-vsctl get open . external_ids:ovn-remote-probe-interval
================================
INCREASE INACTIVITY PROBE OVN :
================================
ovn-nbctl set-connection ptcp:6641:0.0.0.0 -- set connection . inactivity_probe=60000
ovn-nbctl set-connection ptcp:6641:10.24.12.11 -- set connection . inactivity_probe=60000
ovn-nbctl list connection
ovn-sbctl set-connection ptcp:6642:0.0.0.0 -- set connection . inactivity_probe=60000
ovn-sbctl set-connection ptcp:6642:10.24.12.11 -- set connection . inactivity_probe=60000
ovn-sbctl list connection
ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
================================
CHECK INACTIVITY PROBE OPENVSWITCH :
================================
docker exec -it -u 0 openvswitch_vswitchd bash
ovs-vsctl list open
ovs-vsctl get open . external_ids:ovn-remote-probe-interval
ovs-vsctl --inactivity-probe=60000 set-manager tcp:127.0.0.1:6640
ovs-vsctl list-br
ovs-vsctl get-controller br-ex
ovs-vsctl get-controller br-int
ovs-vsctl get-controller br-tun
ovs-vsctl --inactivity-probe=60000 set-controller br-ex
ovs-vsctl --inactivity-probe=60000 set-controller br-int
ovs-vsctl --inactivity-probe=60000 set-controller br-tun
================================
OVS INCREASE INACTIVITY PROBE :
================================
docker exec -it -u 0 neutron_openvswitch_agent bash
ovs-vsctl list manager
ovs-vsctl set manager 7d034a5c-c09b-4eeb-9d1d-7cbe75f48168 inactivity_probe=60000
https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/tshooting/tshoot-mcp-openstack/ovs-timeouts.html
=================================
CHANGE RESOLVER PERMANENTLY :
=================================
nano /etc/systemd/resolved.conf
OR MASK IT :
Tips to make /etc/resolv.conf permanent :
systemctl stop systemd-resolved
systemctl disable systemd-resolved
systemctl mask systemd-resolved
To re-enable it later :
systemctl unmask systemd-resolved
systemctl enable systemd-resolved
systemctl start systemd-resolved
METHOD II
nano /etc/systemd/resolved.conf
DNS=1.1.1.1 8.8.8.8
systemctl restart systemd-resolved.service
systemctl status systemd-resolved.service
==================================
CHECK PORT OPEN
==================================
nmap -p0-10000 IP_ADDRESS
==================================
OPENSSL CHECK
==================================
openssl x509 -in cert.pem -text
==================================
NGINX LOGROTATE
==================================
sudo logrotate -v -f /etc/logrotate.d/nginx
==================================
MIGRATE
==================================
https://docs.openstack.org/charm-guide/latest/admin/ops-live-migrate-routers.html
https://docs.openstack.org/kolla-ansible/xena/user/adding-and-removing-hosts.html
DHCP AGENT :
openstack network agent list --host adaptivehpe --agent-type dhcp
openstack network list --agent 52552a7a-675d-42ad-9510-88d7081c4889
L3 :
openstack network agent list --agent-type l3
openstack router list --agent 0542cada-22f8-4c2c-a2bf-577a3b435e57
openstack network agent list
openstack network agent list --host adaptivehpe --agent-type l3
openstack router list --agent $ID_AGENT_L3
openstack network agent remove router $ID_AGENT_L3 $ID_ROUTER --l3
openstack network agent add router $ID_AGENT_L3_HOST_BARU $ID_ROUTER --l3
openstack router list --agent $ID_AGENT_L3_HOST_BARU
==================================
INCREASE MTU
==================================
openstack network set --mtu 8950 NETWORK_ID
https://platform9.com/kb/openstack/change-mtu-size-for-existing-network-using-openstack-cli-api
CINDER RESET STATE
cinder reset-state --state in-use ID_VOLUME