In large environments, deploying Veeam Backup & Replication can fail because the deployment runs into a timeout while collecting information about all virtual machines in the cluster. Due to the permission design, it is not possible to limit the requesting user to a subset of virtual machines (e.g. a specific cluster).
For each virtual machine, the following additional details are requested:
tags
nics.reporteddevices
disk_attachments.disk
For tags, the administrative Tag Management permission is required. As a consequence, any limited user has access to all virtual machine objects by default. This is a design behavior of RHV/oVirt.
The corresponding API request is similar to the following:
$ curl -Lk -u 'api-user@internal:secret' https://rhvm.example.com/ovirt-engine/api/vms\?follow=tags,nics.reporteddevices,disk_attachments.disk
To reduce the number of virtual machines returned, the accessing user (e.g. api-user) should be limited to a specific cluster. In addition, the tags part of the request has to be removed because, as already described, it requires administrator permissions.
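With the tags part removed, the request that should reach the RHV Manager looks like the following (same hypothetical api-user credentials and hostname as in the example above):

```shell
# Same request as before, but without the tags follow-up,
# so no administrative Tag Management permission is needed.
curl -Lk -u 'api-user@internal:secret' \
  'https://rhvm.example.com/ovirt-engine/api/vms?follow=nics.reporteddevices,disk_attachments.disk'
```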
To run the container, a system with container-tools installed is required. The installation is a single command:
$ sudo dnf module install -y container-tools
If that's not sufficient, the documentation provides deeper insights.
The container can be executed as a rootful or rootless container. Running it rootless requires a few additional setup steps.
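This article uses a rootful setup; for a rootless run, the usual prerequisite is that user namespace mappings exist for the executing user. A minimal sanity check, assuming default Podman packaging, could look like this:

```shell
# Rootless Podman needs sub-UID/sub-GID ranges for the current user.
grep "^$(id -un):" /etc/subuid /etc/subgid || echo "no subuid/subgid mapping found"

# The host ports published later in this article (8080 and 8443) are
# unprivileged, so no extra sysctl tuning is needed for a rootless run.
```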
The workaround rewrites the API call and removes the tags part of the request URL. To achieve this, an nginx container with URL rewrite functionality is used. This nginx container serves as a reverse proxy and forwards the requests to the RHV Manager instance accordingly.
A few files are required for the container build. First, create the directory structure:
$ mkdir -p nginx-rhv/{nginx-certs,nginx-cfg}
Reusing the certificates of the RHV Manager is fine for testing purposes and quick results, but they should be replaced with your own certificates:
$ cd nginx-rhv/nginx-certs
$ scp root@rhv-m.example.com:/etc/pki/ovirt-engine/certs/apache.cer .
$ scp root@rhv-m.example.com:/etc/pki/ovirt-engine/keys/apache.key.nopass .
$ cd ..
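Before building the image, it can be worth verifying that the copied certificate and key actually belong together; a quick check, assuming openssl is installed:

```shell
# The modulus of the certificate and the private key must match.
openssl x509 -noout -modulus -in nginx-certs/apache.cer | openssl md5
openssl rsa  -noout -modulus -in nginx-certs/apache.key.nopass | openssl md5
```

If both checksums are identical, certificate and key match.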
$ cat << "EOF" > nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /opt/app-root/etc/nginx.d/*.conf;
}
EOF
$ cat << "EOF" > Containerfile
FROM registry.access.redhat.com/ubi8/nginx-120
# Add application sources
ADD nginx.conf "${NGINX_CONF_PATH}"
#ADD nginx-default-cfg/*.conf "${NGINX_DEFAULT_CONF_PATH}"
ADD nginx-cfg/*.conf "${NGINX_CONFIGURATION_PATH}"
ADD nginx-certs/* "${NGINX_CONFIGURATION_PATH}"
USER root
EXPOSE 80
EXPOSE 443
# Run script uses standard ways to run the application
CMD nginx -g "daemon off;"
EOF
$ cd nginx-cfg
$ cat << "EOF" > engine.conf
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name api-rhv.example.com;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl http2;
server_name api-rhv.example.com;
server_name_in_redirect off;
ssl_certificate /opt/app-root/etc/nginx.d/apache.cer;
ssl_certificate_key /opt/app-root/etc/nginx.d/apache.key.nopass;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1.2;
location / {
rewrite_log off;
error_log /var/log/nginx/error.log debug;
if ($args ~* "follow=tags,nics.reporteddevices,disk_attachments.disk" ) {
rewrite ^ /ovirt-engine/api/vms?follow=nics.reporteddevices,disk_attachments.disk? last;
}
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
proxy_http_version 1.1;
proxy_ssl_verify off;
proxy_pass https://rhv-m.example.com;
}
}
EOF
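The effect of the rewrite rule can be illustrated outside of nginx; the sed call below is only a simplified stand-in for what the location block does to the query string:

```shell
# Simulate the rewrite: strip the tags follow-up from the query string.
args='follow=tags,nics.reporteddevices,disk_attachments.disk'
printf '%s\n' "$args" | sed 's/follow=tags,/follow=/'
# follow=nics.reporteddevices,disk_attachments.disk
```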
Note: In the file engine.conf, the following two hostnames have to be adjusted according to the real environment:
api-rhv.example.com (twice) - the FQDN of the host where the container is executed.
https://rhv-m.example.com - the URL of the RHV Manager instance. This is the host where the real API is served.
During the container build, the base image nginx-120 is downloaded from the Red Hat registry. This requires logging in to the registry first:
$ podman login registry.access.redhat.com
After this, the container can be built using the following command:
$ podman build -t rhv-proxy-nginx:latest .
With the following command, the container is started in interactive mode (in the foreground):
$ podman run --rm -it --name rhv-proxy-nginx -p 8080:80 -p 8443:443 localhost/rhv-proxy-nginx
To run the container in background, use the following command.
$ podman run --rm -d --name rhv-proxy-nginx -p 8080:80 -p 8443:443 localhost/rhv-proxy-nginx
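Once the container is running, the rewrite can be verified by sending the original Veeam-style request through the proxy instead of directly to the RHV Manager (hostnames and credentials as assumed above):

```shell
# Query the API through the reverse proxy; the tags follow-up is
# stripped before the request is forwarded to the RHV Manager.
curl -Lk -u 'api-user@internal:secret' \
  'https://api-rhv.example.com:8443/ovirt-engine/api/vms?follow=tags,nics.reporteddevices,disk_attachments.disk'

# Inspect the proxy logs to confirm the rewrite happened.
podman logs rhv-proxy-nginx
```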