@mbukatov, last active August 2, 2019 16:08
ocs-ci-pull-499-deployment-testing
Attached file: cluster-2019-08-02.02.tar.gz

This is a note about deployment testing for red-hat-storage/ocs-ci#499.

At ocs-ci commit 9fe0552a0f1ee6da20752bbd1dc807633f858acd, deployment was executed via:

(venv-setup-teardown) [ocsqe@localhost ocs-ci]$ run-ci --cluster-conf conf/examples/monitoring.yaml --cluster-path ~/data/cluster-2019-08-02.02 --cluster-name mbukatov-ocsqe --deploy -m deployment | tee ~/data/cluster-2019-08-02.02.deploy.log

and teardown via:

(venv-setup-teardown) [ocsqe@localhost ocs-ci]$ run-ci --cluster-path ~/data/cluster-2019-08-02.02 --teardown -m deployment | tee ~/data/cluster-2019-08-02.02.teardown.log

Version of deployed cluster:

cluster channel: stable-4.1
cluster version: 4.1.4
cluster image: quay.io/openshift-release-dev/ocp-release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
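
For reference, the channel/version/image values above can be read back from a running cluster with plain oc queries like these (a quick sketch, assuming KUBECONFIG points at the deployed cluster):

    # channel, desired version and release image from the ClusterVersion resource
    oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'
    oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
    oc get clusterversion version -o jsonpath='{.status.desired.image}{"\n"}'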

storage namespace openshift-cluster-storage-operator
image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33061a64e0a7aaf25b2a1fde79bda3ad63fcfb9fdb36712a046e891fd9f55c47
 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33061a64e0a7aaf25b2a1fde79bda3ad63fcfb9fdb36712a046e891fd9f55c47

storage namespace openshift-storage
image quay.io/cephcsi/cephcsi:canary
 * quay.io/cephcsi/cephcsi@sha256:3305f8a306da85f0304f99c7b634f5b077be581492e3b74a73ce96b893c4db54
image quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
 * quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599
image quay.io/k8scsi/csi-attacher:v1.1.1
 * quay.io/k8scsi/csi-attacher@sha256:e4db94969e1d463807162a1115192ed70d632a61fbeb3bdc97b40fe9ce78c831
image quay.io/k8scsi/csi-provisioner:v1.2.0
 * quay.io/k8scsi/csi-provisioner@sha256:0dffe9a8d39c4fdd49c5dd98ca5611a3f9726c012b082946f630e36988ba9f37
image quay.io/k8scsi/csi-snapshotter:v1.1.0
 * quay.io/k8scsi/csi-snapshotter@sha256:a49e0da1af6f2bf717e41ba1eee8b5e6a1cbd66a709dd92cc43fe475fe2589eb
image docker.io/rook/ceph:master
 * docker.io/rook/ceph@sha256:478d29b6b7c8207c11e540af03ea9b2571f54f39b85a54fea0cb73f9e3faf9e0
image docker.io/ceph/ceph:v14.2.2-20190722
 * docker.io/ceph/ceph@sha256:567fe78d90a63ead11deadc2cbf5a912e42bfcc6ef4b1d6154f4b4fea4019052
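
A listing like the one above can be reproduced with oc alone (a minimal sketch, not necessarily the exact script used for this note; it prints the images from the pod specs and the digests the runtime actually resolved them to):

    NS=openshift-storage
    # images as declared in the pod specs
    oc -n "$NS" get pods -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' | sort -u
    # image digests actually pulled by the container runtime
    oc -n "$NS" get pods -o jsonpath='{range .items[*].status.containerStatuses[*]}{.imageID}{"\n"}{end}' | sort -u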

Additional notes:

  • no failures during installation
  • Prometheus target for the openshift-storage namespace connected
  • ceph_* metrics available in Prometheus
  • rook-ceph-mgr ServiceMonitor deployed (see the check commands after this list)
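
These notes can be double-checked with commands along the following lines (a sketch, assuming the cluster monitoring stack scrapes openshift-storage; ceph_health_status stands in for any ceph_* metric):

    # ServiceMonitor created by the deployment
    oc -n openshift-storage get servicemonitor rook-ceph-mgr -o yaml
    # namespace label that opts openshift-storage into cluster monitoring
    oc get namespace openshift-storage --show-labels
    # spot-check a ceph_* metric through the Prometheus API
    # (port-forward in one shell, run the curl query in another)
    oc -n openshift-monitoring port-forward pod/prometheus-k8s-0 9090:9090 &
    curl -s 'http://localhost:9090/api/v1/query?query=ceph_health_status'
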
============================= test session starts ==============================
platform linux -- Python 3.7.4, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: /home/ocsqe/projects/ocs-ci, inifile: pytest.ini, testpaths: tests
plugins: reportportal-1.0.5, logger-0.5.1, metadata-1.8.0, html-1.21.1, marker-bugzilla-0.9.1.dev2
collected 72 items / 71 deselected / 1 selected
tests/ecosystem/deployment/test_ocs_basic_install.py::test_cluster_is_running
-------------------------------- live log setup --------------------------------
17:01:49 - MainThread - tests.conftest - INFO - All logs located at /tmp/ocs-ci-logs-1564758108
17:01:49 - MainThread - ocs_ci.deployment.factory - INFO - Deployment key = aws_ipi
17:01:49 - MainThread - ocs_ci.deployment.factory - INFO - Current deployment platform: AWS,deployment type: ipi
17:01:49 - MainThread - ocs_ci.utility.utils - INFO - Downloading openshift installer (4.1.4).
17:01:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: tar xzvf openshift-install.tar.gz openshift-install
17:01:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ./bin/openshift-install version
17:01:56 - MainThread - ocs_ci.utility.utils - INFO - OpenShift Installer version: ./bin/openshift-install v4.1.4-201906271212-dirty
built from commit bf47826c077d16798c556b1bd143a5bbfac14271
release image quay.io/openshift-release-dev/ocp-release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
17:01:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ./bin/oc version
17:01:56 - MainThread - ocs_ci.utility.utils - INFO - OpenShift Client version: Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.2-201906121056+3569a06-dirty", GitCommit:"3569a06", GitTreeState:"dirty", BuildDate:"2019-06-12T15:47:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
17:01:56 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Testing access to cluster with /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig
17:01:56 - MainThread - ocs_ci.ocs.openshift_ops - WARNING - The kubeconfig file /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig doesn't exist!
17:01:56 - MainThread - ocs_ci.deployment.deployment - INFO - A testing cluster will be deployed and cluster information stored at: /home/ocsqe/data/cluster-2019-08-02.02
17:01:56 - MainThread - ocs_ci.deployment.deployment - INFO - Generating install-config
17:01:56 - MainThread - ocs_ci.deployment.deployment - INFO - Install config:
apiVersion: v1
baseDomain: qe.rh-ocs.com
compute:
- name: worker
platform: {}
replicas: 3
controlPlane:
name: master
platform: {}
replicas: 3
metadata:
creationTimestamp: null
name: 'mbukatov-ocsqe'
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineCIDR: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-east-2
pullSecret: ''
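
Aside: the install-config above is generated by ocs-ci from its own templates (note the pullSecret is printed empty in the log). A comparable file could also be produced interactively with the installer itself, standard openshift-install usage shown for context:

    # write an install-config.yaml into the cluster directory
    ./bin/openshift-install create install-config --dir /home/ocsqe/data/cluster-2019-08-02.02
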
17:01:56 - MainThread - ocs_ci.deployment.aws - INFO - Deploying OCP cluster
17:01:56 - MainThread - ocs_ci.deployment.aws - INFO - Openshift-installer will be using loglevel:INFO
17:01:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ./bin/openshift-install create cluster --dir /home/ocsqe/data/cluster-2019-08-02.02 --log-level INFO
17:28:58 - MainThread - ocs_ci.utility.utils - WARNING - Command warning:: level=info msg="Consuming \"Install Config\" from target directory"
level=info msg="Creating infrastructure resources..."
level=info msg="Waiting up to 30m0s for the Kubernetes API at https://api.mbukatov-ocsqe.qe.rh-ocs.com:6443..."
level=info msg="API v1.13.4+c62ce01 up"
level=info msg="Waiting up to 30m0s for bootstrapping to complete..."
level=info msg="Destroying the bootstrap resources..."
level=info msg="Waiting up to 30m0s for the cluster at https://api.mbukatov-ocsqe.qe.rh-ocs.com:6443 to initialize..."
level=info msg="Waiting up to 10m0s for the openshift-console route to be created..."
level=info msg="Install complete!"
level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig'"
level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mbukatov-ocsqe.qe.rh-ocs.com"
level=info msg="Login to the console with user: kubeadmin, password: ZwfjG-p3crq-wupWH-K56av"
17:28:58 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Testing access to cluster with /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig
17:28:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc cluster-info
17:29:00 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Access to cluster is OK!
17:29:00 - MainThread - ocs_ci.deployment.aws - INFO - Worker pattern: mbukatov-ocsqe-qqzcv-worker*
17:29:00 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentials
17:29:01 - MainThread - ocs_ci.deployment.aws - INFO - Creating and attaching 100 GB volume to mbukatov-ocsqe-qqzcv-worker-us-east-2c-9dl59
17:29:01 - MainThread - ocs_ci.deployment.aws - INFO - Creating and attaching 100 GB volume to mbukatov-ocsqe-qqzcv-worker-us-east-2b-qqxnh
17:29:01 - MainThread - ocs_ci.deployment.aws - INFO - Creating and attaching 100 GB volume to mbukatov-ocsqe-qqzcv-worker-us-east-2a-6kq9g
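
Aside: what ocs-ci does in the three lines above, expressed by hand with the aws CLI, would look roughly like this (volume type, device name and IDs are illustrative placeholders):

    # create a 100 GiB volume in the worker's availability zone (gp2 assumed here)
    aws ec2 create-volume --availability-zone us-east-2a --size 100 --volume-type gp2
    # attach it to the worker instance; VolumeId comes from the previous output
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf
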
17:29:20 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get CephCluster -o yaml
17:29:22 - MainThread - ocs_ci.deployment.deployment - INFO - Running OCS basic installation
17:29:22 - MainThread - ocs_ci.ocs.utils - INFO - Creating rook resource from common.yaml
17:29:22 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /home/ocsqe/data/cluster-2019-08-02.02/common.yaml -o yaml
17:29:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc label namespace openshift-storage "openshift.io/cluster-monitoring=true"
17:29:29 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc policy add-role-to-user view system:serviceaccount:openshift-monitoring:prometheus-k8s -n openshift-storage
17:29:30 - MainThread - ocs_ci.ocs.utils - INFO - Applying rook resource from rbac.yaml
17:29:30 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig apply -f /home/ocsqe/data/cluster-2019-08-02.02/rbac.yaml
17:29:35 - MainThread - ocs_ci.ocs.utils - INFO - Applying rook resource from csi-nodeplugin-rbac_rbd.yaml
17:29:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig apply -f /home/ocsqe/data/cluster-2019-08-02.02/csi-nodeplugin-rbac_rbd.yaml
17:29:38 - MainThread - ocs_ci.ocs.utils - INFO - Applying rook resource from csi-provisioner-rbac_rbd.yaml
17:29:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig apply -f /home/ocsqe/data/cluster-2019-08-02.02/csi-provisioner-rbac_rbd.yaml
17:29:41 - MainThread - ocs_ci.ocs.utils - INFO - Applying rook resource from csi-nodeplugin-rbac_cephfs.yaml
17:29:41 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig apply -f /home/ocsqe/data/cluster-2019-08-02.02/csi-nodeplugin-rbac_cephfs.yaml
17:29:44 - MainThread - ocs_ci.ocs.utils - INFO - Applying rook resource from csi-provisioner-rbac_cephfs.yaml
17:29:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig apply -f /home/ocsqe/data/cluster-2019-08-02.02/csi-provisioner-rbac_cephfs.yaml
17:29:47 - MainThread - ocs_ci.deployment.deployment - INFO - Waiting 15 seconds...
17:30:02 - MainThread - ocs_ci.ocs.utils - INFO - Creating rook resource from operator-openshift-with-csi.yaml
17:30:02 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /home/ocsqe/data/cluster-2019-08-02.02/operator-openshift-with-csi.yaml -o yaml
17:30:03 - MainThread - ocs_ci.deployment.deployment - INFO - Waiting 15 seconds...
17:30:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-operator -n openshift-storage --timeout=120s
17:30:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-discover -n openshift-storage --timeout=120s
17:31:36 - MainThread - ocs_ci.ocs.utils - INFO - Creating rook resource from cluster.yaml
17:31:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /home/ocsqe/data/cluster-2019-08-02.02/cluster.yaml -o yaml
17:31:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-agent -n openshift-storage --timeout=120s
17:31:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get Pod --selector=app=rook-ceph-mon -o yaml
[... the same rook-ceph-mon query repeated roughly every 4 seconds until 17:33:03 ...]
17:33:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get Pod --selector=app=rook-ceph-mgr -o yaml
[... the same rook-ceph-mgr query repeated roughly every 4 seconds until 17:33:26 ...]
17:33:27 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get Pod --selector=app=rook-ceph-osd -o yaml
[... the same rook-ceph-osd query repeated roughly every 4 seconds until 17:34:56 ...]
17:34:58 - MainThread - ocs_ci.ocs.utils - INFO - Creating rook resource from toolbox.yaml
17:34:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /home/ocsqe/data/cluster-2019-08-02.02/toolbox.yaml -o yaml
17:34:58 - MainThread - ocs_ci.deployment.deployment - INFO - Waiting 15 seconds...
17:35:13 - MainThread - ocs_ci.ocs.utils - INFO - Creating rook resource from storage-manifest.yaml
17:35:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /home/ocsqe/data/cluster-2019-08-02.02/storage-manifest.yaml -o yaml
17:35:14 - MainThread - ocs_ci.ocs.utils - INFO - Creating rook resource from prometheus-rules.yaml
17:35:14 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /home/ocsqe/data/cluster-2019-08-02.02/prometheus-rules.yaml -o yaml
17:35:15 - MainThread - ocs_ci.deployment.deployment - INFO - Waiting 15 seconds...
17:35:30 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding CephFilesystem with name ocsci-cephfs
17:35:30 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig create -f /tmp/CephFilesystemzd8y2ym7 -o yaml
17:35:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get CephFilesystem ocsci-cephfs -o yaml
17:35:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get Pod --selector=app=rook-ceph-mds -o yaml
[... the same rook-ceph-mds query repeated roughly every 4 seconds until 17:35:54 ...]
17:35:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get CephFileSystem -o yaml
17:35:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get Pod --selector=app=rook-ceph-tools -o yaml
17:35:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get CephFileSystem ocsci-cephfs -o yaml
17:35:57 - MainThread - tests.helpers - INFO - Filesystem ocsci-cephfs got created from Openshift Side
17:35:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig rsh rook-ceph-tools-76c7d559b6-z5m6j ceph fs ls --format json-pretty
17:35:59 - MainThread - tests.helpers - INFO - FileSystem ocsci-cephfs got created from Ceph Side
17:35:59 - MainThread - ocs_ci.deployment.deployment - INFO - MDS deployment is successful!
17:35:59 - MainThread - ocs_ci.deployment.deployment - INFO - Done creating rook resources, waiting for HEALTH_OK
17:35:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=120s
17:36:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}'
17:36:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage exec rook-ceph-tools-76c7d559b6-z5m6j ceph health
17:36:02 - MainThread - ocs_ci.utility.utils - INFO - HEALTH_OK, install successful.
17:36:02 - MainThread - ocs_ci.deployment.deployment - INFO - Patch gp2 storageclass as non-default
17:36:02 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' --request-timeout=120s
-------------------------------- live log call ---------------------------------
17:36:03 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Testing access to cluster with /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig
17:36:03 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc cluster-info
17:36:04 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Access to cluster is OK!
PASSED [100%]
================== 1 passed, 71 deselected in 2055.38 seconds ==================
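
Between the deploy and teardown runs, Ceph health can be re-checked manually at any time through the toolbox pod, the same way the deployment does it at the end of the setup phase:

    # find the toolbox pod and ask Ceph for its status (or just "ceph health", as in the log)
    TOOLS=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
    oc -n openshift-storage exec "$TOOLS" -- ceph status
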
============================= test session starts ==============================
platform linux -- Python 3.7.4, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: /home/ocsqe/projects/ocs-ci, inifile: pytest.ini, testpaths: tests
plugins: reportportal-1.0.5, logger-0.5.1, metadata-1.8.0, html-1.21.1, marker-bugzilla-0.9.1.dev2
collected 72 items / 71 deselected / 1 selected
tests/ecosystem/deployment/test_ocs_basic_install.py::test_cluster_is_running
-------------------------------- live log setup --------------------------------
18:02:59 - MainThread - tests.conftest - INFO - All logs located at /tmp/ocs-ci-logs-1564761777
18:02:59 - MainThread - ocs_ci.deployment.factory - INFO - Deployment key = aws_ipi
18:02:59 - MainThread - ocs_ci.deployment.factory - INFO - Current deployment platform: AWS,deployment type: ipi
18:02:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ./bin/openshift-install version
18:02:59 - MainThread - ocs_ci.utility.utils - INFO - OpenShift Installer version: ./bin/openshift-install v4.1.4-201906271212-dirty
built from commit bf47826c077d16798c556b1bd143a5bbfac14271
release image quay.io/openshift-release-dev/ocp-release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
18:02:59 - MainThread - tests.conftest - INFO - Will teardown cluster because --teardown was provided
18:02:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ./bin/oc version
18:03:00 - MainThread - ocs_ci.utility.utils - INFO - OpenShift Client version: Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.2-201906121056+3569a06-dirty", GitCommit:"3569a06", GitTreeState:"dirty", BuildDate:"2019-06-12T15:47:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+c62ce01", GitCommit:"c62ce01", GitTreeState:"clean", BuildDate:"2019-06-27T18:14:14Z", GoVersion:"go1.11.6", Compiler:"gc", Platform:"linux/amd64"}
18:03:00 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Testing access to cluster with /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig
18:03:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc cluster-info
18:03:01 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Access to cluster is OK!
18:03:01 - MainThread - ocs_ci.deployment.aws - WARNING - OCP cluster is already running, skipping installation
18:03:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig get CephCluster -o yaml
18:03:01 - MainThread - ocs_ci.deployment.deployment - WARNING - OCS cluster already exists
-------------------------------- live log call ---------------------------------
18:03:01 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Testing access to cluster with /home/ocsqe/data/cluster-2019-08-02.02/auth/kubeconfig
18:03:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc cluster-info
18:03:02 - MainThread - ocs_ci.ocs.openshift_ops - INFO - Access to cluster is OK!
PASSED [100%]
------------------------------ live log teardown -------------------------------
18:03:02 - MainThread - ocs_ci.deployment.aws - INFO - Destroying the cluster
18:03:02 - MainThread - ocs_ci.deployment.aws - INFO - Destroying cluster defined in /home/ocsqe/data/cluster-2019-08-02.02
18:03:02 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ./bin/openshift-install destroy cluster --dir /home/ocsqe/data/cluster-2019-08-02.02 --log-level INFO
18:05:59 - MainThread - ocs_ci.utility.utils - WARNING - Command warning:: level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:instance/i-0effc3337dc428e42" id=i-0effc3337dc428e42 name=mbukatov-ocsqe-qqzcv-master-profile role=mbukatov-ocsqe-qqzcv-master-role
level=info msg=Deleted InstanceProfileName=mbukatov-ocsqe-qqzcv-master-profile arn="arn:aws:iam::861790564636:instance-profile/mbukatov-ocsqe-qqzcv-master-profile" id=i-0effc3337dc428e42
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:instance/i-0effc3337dc428e42" id=i-0effc3337dc428e42
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:natgateway/nat-02879643032892faf" id=nat-02879643032892faf
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-05d1f0fb9556eb389" id=rtbassoc-0d5725a0457c7f249
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-05d1f0fb9556eb389" id=rtb-05d1f0fb9556eb389
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:natgateway/nat-0767175b4079f85e6" id=nat-0767175b4079f85e6
level=info msg=Deleted arn="arn:aws:elasticloadbalancing:us-east-2:861790564636:loadbalancer/net/mbukatov-ocsqe-qqzcv-ext/cabeefab89358f5d" id=net/mbukatov-ocsqe-qqzcv-ext/cabeefab89358f5d
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-05be68c480453e966" id=rtbassoc-0d8380a23672fe081
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-05be68c480453e966" id=rtb-05be68c480453e966
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-0111d53592828a081" id=rtbassoc-0b4ee6d12fad44f1f
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-0111d53592828a081" id=rtbassoc-04eaa4c5014953c63
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-0111d53592828a081" id=rtbassoc-08c8e7a376fb21ca8
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:instance/i-0aece2ed7d8f51714" id=i-0aece2ed7d8f51714 name=mbukatov-ocsqe-qqzcv-worker-profile role=mbukatov-ocsqe-qqzcv-worker-role
level=info msg=Deleted InstanceProfileName=mbukatov-ocsqe-qqzcv-worker-profile arn="arn:aws:iam::861790564636:instance-profile/mbukatov-ocsqe-qqzcv-worker-profile" id=i-0aece2ed7d8f51714
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:instance/i-0aece2ed7d8f51714" id=i-0aece2ed7d8f51714
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:natgateway/nat-0aa4ba9645d7a1c2b" id=nat-0aa4ba9645d7a1c2b
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:image/ami-0ce7875d23e68a1a1" id=ami-0ce7875d23e68a1a1
level=info msg=Deleted arn="arn:aws:elasticloadbalancing:us-east-2:861790564636:loadbalancer/net/mbukatov-ocsqe-qqzcv-int/21d5e0698f0f1647" id=net/mbukatov-ocsqe-qqzcv-int/21d5e0698f0f1647
level=info msg=Deleted arn="arn:aws:elasticloadbalancing:us-east-2:861790564636:targetgroup/mbukatov-ocsqe-qqzcv-aint/2e7b2f21752d6bfe" id=mbukatov-ocsqe-qqzcv-aint/2e7b2f21752d6bfe
level=info msg=Deleted arn="arn:aws:elasticloadbalancing:us-east-2:861790564636:loadbalancer/a4053f058b53911e98a530205eae1ab1" id=a4053f058b53911e98a530205eae1ab1
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:instance/i-03ae1d1033d65afab" id=i-03ae1d1033d65afab
level=info msg=Deleted arn="arn:aws:elasticloadbalancing:us-east-2:861790564636:targetgroup/mbukatov-ocsqe-qqzcv-aext/8f161bb09f6d32e1" id=mbukatov-ocsqe-qqzcv-aext/8f161bb09f6d32e1
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:instance/i-0c52259c0c1974402" id=i-0c52259c0c1974402
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:instance/i-026637b823ef2899f" id=i-026637b823ef2899f
level=info msg=Disassociated arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-0836917f1d44f776a" id=rtbassoc-02d6289465f8d4803
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:route-table/rtb-0836917f1d44f776a" id=rtb-0836917f1d44f776a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:instance/i-044bbd67314484777" id=i-044bbd67314484777
level=info msg=Deleted NAT gateway=nat-0aa4ba9645d7a1c2b arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-02879643032892faf arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-0767175b4079f85e6 arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:s3:::image-registry-us-east-2-c17e3429503d49798b92f1c6019ddb8b-3cd2"
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="SRV _etcd-server-ssl._tcp.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="A api-int.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 public zone=/hostedzone/ZQ6XFE6BKI2L record set="A api.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="A api.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 public zone=/hostedzone/ZQ6XFE6BKI2L record set="A \\052.apps.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="A \\052.apps.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="A etcd-0.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="A etcd-1.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9 record set="A etcd-2.mbukatov-ocsqe.qe.rh-ocs.com."
level=info msg=Deleted arn="arn:aws:route53:::hostedzone/Z5HHE5EVV32M9" id=Z5HHE5EVV32M9
level=info msg=Deleted arn="arn:aws:iam::861790564636:role/mbukatov-ocsqe-qqzcv-master-role" id=mbukatov-ocsqe-qqzcv-master-role name=mbukatov-ocsqe-qqzcv-master-role policy=mbukatov-ocsqe-qqzcv-master-policy
level=info msg=Deleted arn="arn:aws:iam::861790564636:role/mbukatov-ocsqe-qqzcv-master-role" id=mbukatov-ocsqe-qqzcv-master-role name=mbukatov-ocsqe-qqzcv-master-role
level=info msg=Deleted arn="arn:aws:iam::861790564636:role/mbukatov-ocsqe-qqzcv-worker-role" id=mbukatov-ocsqe-qqzcv-worker-role name=mbukatov-ocsqe-qqzcv-worker-role policy=mbukatov-ocsqe-qqzcv-worker-policy
level=info msg=Deleted arn="arn:aws:iam::861790564636:role/mbukatov-ocsqe-qqzcv-worker-role" id=mbukatov-ocsqe-qqzcv-worker-role name=mbukatov-ocsqe-qqzcv-worker-role
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-cloud-credential-operator-iam-ro-58f87" id=mbukatov-ocsqe-qqzcv-cloud-credential-operator-iam-ro-58f87 policy=mbukatov-ocsqe-qqzcv-cloud-credential-operator-iam-ro-58f87-policy
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-cloud-credential-operator-iam-ro-58f87" id=mbukatov-ocsqe-qqzcv-cloud-credential-operator-iam-ro-58f87
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-openshift-image-registry-hxt5r" id=mbukatov-ocsqe-qqzcv-openshift-image-registry-hxt5r policy=mbukatov-ocsqe-qqzcv-openshift-image-registry-hxt5r-policy
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-openshift-image-registry-hxt5r" id=mbukatov-ocsqe-qqzcv-openshift-image-registry-hxt5r
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-openshift-ingress-9kfr9" id=mbukatov-ocsqe-qqzcv-openshift-ingress-9kfr9 policy=mbukatov-ocsqe-qqzcv-openshift-ingress-9kfr9-policy
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-openshift-ingress-9kfr9" id=mbukatov-ocsqe-qqzcv-openshift-ingress-9kfr9
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-openshift-machine-api-tmh8w" id=mbukatov-ocsqe-qqzcv-openshift-machine-api-tmh8w policy=mbukatov-ocsqe-qqzcv-openshift-machine-api-tmh8w-policy
level=info msg=Deleted arn="arn:aws:iam::861790564636:user/mbukatov-ocsqe-qqzcv-openshift-machine-api-tmh8w" id=mbukatov-ocsqe-qqzcv-openshift-machine-api-tmh8w
level=info msg=Released arn="arn:aws:ec2:us-east-2:861790564636:elastic-ip/eipalloc-02228f02eb18e64eb" id=eipalloc-02228f02eb18e64eb
level=info msg=Deleted arn="arn:aws:elasticloadbalancing:us-east-2:861790564636:targetgroup/mbukatov-ocsqe-qqzcv-sint/737e285f0acf9566" id=mbukatov-ocsqe-qqzcv-sint/737e285f0acf9566
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:subnet/subnet-02c54ad91c342e077" id=subnet-02c54ad91c342e077
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:subnet/subnet-016fdbd3c48fbc826" id=subnet-016fdbd3c48fbc826
level=info msg=Released arn="arn:aws:ec2:us-east-2:861790564636:elastic-ip/eipalloc-03942c26a0c63106b" id=eipalloc-03942c26a0c63106b
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:security-group/sg-0954a0af42a636c8f" id=sg-0954a0af42a636c8f
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:snapshot/snap-0984103f979b26418" id=snap-0984103f979b26418
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:internet-gateway/igw-0d799bc936e2724b3" id=igw-0d799bc936e2724b3
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:subnet/subnet-003725e98049bd457" id=subnet-003725e98049bd457
level=info msg=Released arn="arn:aws:ec2:us-east-2:861790564636:elastic-ip/eipalloc-0859d2f4495ac7dd5" id=eipalloc-0859d2f4495ac7dd5
level=info msg=Deleted NAT gateway=nat-0aa4ba9645d7a1c2b arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-02879643032892faf arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-0767175b4079f85e6 arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a network interface=eni-02160631304a7eaaf
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:volume/vol-0cee9ec114d9cbfaf" id=vol-0cee9ec114d9cbfaf
level=info msg=Deleted NAT gateway=nat-0aa4ba9645d7a1c2b arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-02879643032892faf arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-0767175b4079f85e6 arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a network interface=eni-04d77f802dc522a3d
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:subnet/subnet-0dcedefd39216f6a6" id=subnet-0dcedefd39216f6a6
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:subnet/subnet-0500cd157990af539" id=subnet-0500cd157990af539
level=info msg=Deleted NAT gateway=nat-0aa4ba9645d7a1c2b arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-02879643032892faf arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-0767175b4079f85e6 arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a network interface=eni-086b51e5f5eeb3493
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a table=rtb-05870f012078cd884
level=info msg=Deleted VPC endpoint=vpce-0436039b58bef55b5 arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:security-group/sg-0bedc23e0d183884a" id=sg-0bedc23e0d183884a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:security-group/sg-0ba4f8d7bfeb81bce" id=sg-0ba4f8d7bfeb81bce
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:subnet/subnet-07790eb81577eb96a" id=subnet-07790eb81577eb96a
level=info msg=Deleted NAT gateway=nat-0aa4ba9645d7a1c2b arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-02879643032892faf arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted NAT gateway=nat-0767175b4079f85e6 arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:vpc/vpc-035308449db43cc8a" id=vpc-035308449db43cc8a
level=info msg=Deleted arn="arn:aws:ec2:us-east-2:861790564636:dhcp-options/dopt-0ba7550d4fcbc8b62" id=dopt-0ba7550d4fcbc8b62
18:05:59 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentials
18:06:00 - MainThread - /home/ocsqe/projects/ocs-ci/ocs_ci/utility/aws.py - INFO - Deleting volume: vol-0e3b77d3e8239f72c
18:06:01 - MainThread - /home/ocsqe/projects/ocs-ci/ocs_ci/utility/aws.py - INFO - Deleting volume: vol-03707182feba73bd2
18:06:01 - MainThread - /home/ocsqe/projects/ocs-ci/ocs_ci/utility/aws.py - INFO - Deleting volume: vol-0ad12d8ac87d41731
================== 1 passed, 71 deselected in 182.18 seconds ===================
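
After a teardown like this, it is worth confirming that nothing tagged for the cluster survived; a quick check with the aws CLI might look like this (assuming the usual kubernetes.io/cluster/<infra-id> resource tag and the infra id mbukatov-ocsqe-qqzcv seen in the logs above):

    # any instances or volumes still carrying the destroyed cluster's tag?
    aws ec2 describe-instances --region us-east-2 \
        --filters "Name=tag-key,Values=kubernetes.io/cluster/mbukatov-ocsqe-qqzcv"
    aws ec2 describe-volumes --region us-east-2 \
        --filters "Name=tag-key,Values=kubernetes.io/cluster/mbukatov-ocsqe-qqzcv"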