Once run_upgrade.sh has completed successfully, you can clean up the pre-Queens per-service containers, leaving only the consolidated containers in place. To do so, shell into alice and become root:
$ ssh alice
$ sudo -i
Next, enumerate all configured containers with the lxc-ls -1 command (that’s the numeral one, not a lowercase L). You’ll see output similar to this:
# lxc-ls -1
alice_cinder_api_container-91100136
alice_cinder_scheduler_container-5f8e6b87
alice_galera_container-0af1feab
alice_galera_container-672e4007
alice_galera_container-ddbabcb9
alice_glance_container-ee2a5743
alice_heat_api_container-0ac38e9c
alice_heat_apis_container-0ff6a1cc
alice_heat_engine_container-21de86d8
alice_horizon_container-b6fa8f9e
alice_keystone_container-86d021a3
alice_memcached_container-345498cd
alice_neutron_server_container-5d8398e0
alice_nova_api_container-375837bd
alice_nova_api_metadata_container-2acf891d
alice_nova_api_os_compute_container-3ea33462
alice_nova_api_placement_container-d660d0f6
alice_nova_conductor_container-d0e18d24
alice_nova_console_container-302827ba
alice_nova_scheduler_container-ca4c580b
alice_rabbit_mq_container-15026ea4
alice_rabbit_mq_container-4c572062
alice_rabbit_mq_container-df253d18
alice_repo_container-7be6c70f
alice_utility_container-9d876148
As you’ll notice, there are two different container naming conventions for API containers in place:
<hostname>_<programname>_<servicename>_container-<suffix> (example: alice_nova_scheduler_container-ca4c580b)
<hostname>_<programname>_api_container-<suffix> (example: alice_nova_api_container-375837bd)
The former is the pre-Queens convention; the latter applies to Queens and later. Please note that the naming for the Heat API containers is particularly confusing: heat_apis_container is pre-Queens, while heat_api_container (no “s”) is for Queens and later releases.
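A quick way to pick out the consolidated Queens-style containers is to filter the listing for the new naming convention; this simply applies grep to the lxc-ls output shown above:
# lxc-ls -1 | grep '_api_container-'
alice_cinder_api_container-91100136
alice_heat_api_container-0ac38e9c
alice_nova_api_container-375837bd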
If your upgrade run has been successful, then the pre-Queens containers can now be removed. To do so, you will use the lxc-destroy command, running as root on alice:
# lxc-destroy -f -n alice_nova_api_metadata_container-<suffix>
# lxc-destroy -f -n alice_nova_api_os_compute_container-<suffix>
# lxc-destroy -f -n alice_nova_api_placement_container-<suffix>
# lxc-destroy -f -n alice_nova_conductor_container-<suffix>
# lxc-destroy -f -n alice_nova_console_container-<suffix>
# lxc-destroy -f -n alice_nova_scheduler_container-<suffix>
# lxc-destroy -f -n alice_heat_apis_container-<suffix>
# lxc-destroy -f -n alice_heat_engine_container-<suffix>
# lxc-destroy -f -n alice_cinder_scheduler_container-<suffix>
Please make sure that you substitute <suffix> with the suffix that applies to your own system’s containers. Note that you can use tab completion with the lxc-destroy command: for example, you can type lxc-destroy -f -n alice_nova_api_p<tab> to make sure you get the right container name for the old placement container. (A loop-based shortcut for this step is shown at the end of this section.)
Once you’ve shut down and destroyed these containers, you can remove their service entries. To do so, you’ll first need to exit your root session on alice, and come back to the deploy host:
# exit
$ logout
Connection to alice closed.
Now, continuing as the user training on deploy, source your openstackrc file again:
$ source ~/openstackrc
Next, enumerate your Nova services:
$ openstack compute service list
+----+------------------+-----------------------------------------+----------+---------+-------+
| ID | Binary | Host | Zone | Status | State |
+----+------------------+-----------------------------------------+----------+---------+-------+
| 4 | nova-conductor | alice-nova-conductor-container-d0e18d24 | internal | enabled | down |
| 7 | nova-scheduler | alice-nova-scheduler-container-ca4c580b | internal | enabled | down |
| 19 | nova-consoleauth | alice-nova-console-container-302827ba | internal | enabled | down |
| 22 | nova-compute | bob | nova | enabled | up |
| 28 | nova-conductor | alice-nova-api-container-375837bd | internal | enabled | up |
| 31 | nova-consoleauth | alice-nova-api-container-375837bd | internal | enabled | up |
| 34 | nova-scheduler | alice-nova-api-container-375837bd | internal | enabled | up |
+----+------------------+-----------------------------------------+----------+---------+-------+
In the example output above, you see your deleted containers’ services in the down state. (If you don’t see any services listed as down immediately after you destroy the containers, just wait a bit: it can take up to a minute for a service to be detected as down.)
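If you would rather not re-run the command by hand while you wait, you can poll it with the standard watch utility, for example every ten seconds:
$ watch -n 10 openstack compute service list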
These down services are now safe to remove from Nova’s configuration:
$ openstack compute service delete <id>
Please be sure to substitute the correct service IDs as they apply to your environment. With the service list from above, the correct IDs for the down services would be 4, 7, and 19.
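If you prefer to script this step, the following sketch deletes every compute service currently reported as down. It assumes, as in this example, that the only down services are the ones whose containers you just destroyed, so double-check the list before running it:
$ for id in $(openstack compute service list -f value -c ID -c State | awk '$2 == "down" {print $1}'); do openstack compute service delete "$id"; done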
When complete, the Nova service list should contain only the nova-conductor, nova-consoleauth, and nova-scheduler services, all running in the one remaining API container, and the nova-compute service on bob:
$ openstack compute service list
+----+------------------+-----------------------------------+----------+---------+-------+
| ID | Binary | Host | Zone | Status | State |
+----+------------------+-----------------------------------+----------+---------+-------+
| 22 | nova-compute | bob | nova | enabled | up |
| 28 | nova-conductor | alice-nova-api-container-375837bd | internal | enabled | up |
| 31 | nova-consoleauth | alice-nova-api-container-375837bd | internal | enabled | up |
| 34 | nova-scheduler | alice-nova-api-container-375837bd | internal | enabled | up |
+----+------------------+-----------------------------------+----------+---------+-------+
You will now proceed (almost) identically for Cinder. Enumerate your Cinder services:
$ openstack volume service list
+------------------+-------------------------------------------+------+---------+-------+
| Binary | Host | Zone | Status | State |
+------------------+-------------------------------------------+------+---------+-------+
| cinder-scheduler | alice-cinder-scheduler-container-5f8e6b87 | nova | enabled | down |
| cinder-volume | daisy@lvm | nova | enabled | up |
| cinder-scheduler | alice-cinder-api-container-91100136 | nova | enabled | up |
+------------------+-------------------------------------------+------+---------+-------+
Unfortunately, there is no openstack volume service delete command to match the Nova one, but at least you can disable services in the down state, so that only one API container and the cinder-volume service on daisy remain enabled. Note that the host argument must match the Host column from the service list above, which uses hyphens rather than the underscores in the container name:
$ openstack volume service set \
--disable \
--disable-reason "Removed during Queens upgrade" \
alice-cinder-scheduler-container-<suffix> \
cinder-scheduler
Finally, re-verify your service list:
$ openstack volume service list
+------------------+-------------------------------------------+------+----------+-------+
| Binary | Host | Zone | Status | State |
+------------------+-------------------------------------------+------+----------+-------+
| cinder-scheduler | alice-cinder-scheduler-container-5f8e6b87 | nova | disabled | down |
| cinder-volume | daisy@lvm | nova | enabled | up |
| cinder-scheduler | alice-cinder-api-container-91100136 | nova | enabled | up |
+------------------+-------------------------------------------+------+----------+-------+
The final remaining task is to remove the deleted containers from the OpenStack-Ansible inventory, so that they are not recreated on the next playbook run. You can accomplish this with the inventory-manage.py script.
Acting as the user training on deploy, become root, then change into the openstack-ansible/scripts directory:
$ sudo -i
# cd /home/training/openstack-ansible/scripts
List the current inventory:
# ./inventory-manage.py -l
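To double-check the exact names (including suffixes) of the containers you are about to remove, you can filter that listing; the pattern below assumes the container names used throughout this section:
# ./inventory-manage.py -l | grep -E 'nova_(api_metadata|api_os_compute|api_placement|conductor|console|scheduler)_|heat_apis_|heat_engine_|cinder_scheduler_'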
Now, remove the redundant containers:
# ./inventory-manage.py -r alice_nova_api_metadata_container-<suffix>
Success. . .
# ./inventory-manage.py -r alice_nova_api_os_compute_container-<suffix>
# ./inventory-manage.py -r alice_nova_api_placement_container-<suffix>
# ./inventory-manage.py -r alice_nova_conductor_container-<suffix>
# ./inventory-manage.py -r alice_nova_console_container-<suffix>
# ./inventory-manage.py -r alice_nova_scheduler_container-<suffix>
# ./inventory-manage.py -r alice_heat_apis_container-<suffix>
# ./inventory-manage.py -r alice_heat_engine_container-<suffix>
# ./inventory-manage.py -r alice_cinder_scheduler_container-<suffix>
Finally, re-list your inventory:
# ./inventory-manage.py -l
And lastly, re-run the haproxy-install.yml playbook with the haproxy_server-config tag so that the redundant entries are dropped from your HAProxy configuration. This will be a very quick playbook run; it should take no more than 20 seconds:
# cd ../playbooks
# openstack-ansible -t haproxy_server-config haproxy-install.yml
As always, check for the [Playbook execution success] message at the end.
Finally, here’s a slightly more efficient way of cleaning up the containers: rather than destroying them one by one, you can filter the lxc-ls listing for the pre-Queens container names and feed the result to lxc-destroy. The grep pattern below is a sketch that assumes the container names shown earlier in this section; run the first command on its own and verify its output before running the second, destructive one:
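# lxc-ls -1 | grep -E 'nova_(api_metadata|api_os_compute|api_placement|conductor|console|scheduler)_|heat_apis_|heat_engine_|cinder_scheduler_'
# lxc-ls -1 | grep -E 'nova_(api_metadata|api_os_compute|api_placement|conductor|console|scheduler)_|heat_apis_|heat_engine_|cinder_scheduler_' | xargs -n 1 lxc-destroy -f -n
Because lxc-ls prints the full container names, this approach also spares you from typing each container’s suffix by hand.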