Consider a legacy application migration scenario with two macro components. A typical pre-containerization deployment may have hosted these on VMs, with a reverse proxy directing requests to a single logical host endpoint. Both macro components present REST interfaces and expect the simplest possible request URIs. One component can be considered the primary monolithic application; the other acts as a plug-in or supporting module.
The reverse proxy sent almost all of the traffic (e.g. /) to the primary application. For the supporting module, only requests matching /sub-module/(.*) were forwarded; for those, the reverse proxy rewrote the URI path to contain only the matched subpath and added a request header so that fully qualified paths could be built into responses. For example, given an incoming request like /sub-module/foo/bar, the reverse proxy sent /foo/bar as the rewritten path to the component and included a request header like X-Proxy-BaseUri: /sub-module.
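A minimal sketch of what such a legacy reverse proxy configuration might have looked like, assuming nginx; the server names and upstream addresses are illustrative, not from the original deployment:

```nginx
# Hypothetical legacy reverse proxy config; hostnames and ports are assumptions.
events {}
http {
  server {
    listen 80;
    server_name legacy-app.example.com;

    # Requests to /sub-module/... are rewritten to only the matched subpath,
    # and a header carries the original base path for building response URLs.
    location ~ ^/sub-module/(.*)$ {
      proxy_set_header X-Proxy-BaseUri /sub-module;
      proxy_pass http://sub-module-vm:8080/$1;
    }

    # Everything else goes to the primary monolithic application.
    location / {
      proxy_pass http://primary-app-vm:8080;
    }
  }
}
```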
When migrating this application to containers and Kubernetes, it is desirable to keep these macro components in their own deployments so that the number of replicas can be scaled independently. Using Helm charts, with templates for each deployment and its associated ingress path on the host, allows different annotations per path, including the addition of custom request headers.
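One way to structure the chart values for this pattern is a section per component, each carrying its own replica count, ingress path, and annotations. This is a hypothetical values.yaml sketch; the key names and chart layout are assumptions, not from an actual chart:

```yaml
# Hypothetical Helm values.yaml layout; keys and structure are illustrative.
cafe:
  replicaCount: 2
  ingress:
    path: /
    annotations: {}        # per-path ingress annotations go here
coffeebar:
  replicaCount: 1
  ingress:
    path: /coffee/
    annotations: {}        # e.g. routing annotations specific to this path
```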
Red Hat OpenShift includes a network abstraction called a Route
which has some very simple usage patterns including automatic hostname definition in combination with a wildcard domain for all hosted applications, which can simplify usage of TLS through wildcard certificates associated with the domain. OpenShift 4.6 with Kubernetes 1.19 also supports Ingress resources using the api version networking.k8s.api/v1
as well as backward support for deprecated apis.
It is possible to create multiple Ingress resources with different paths for the same host in the same namespace on OpenShift. Its also possible to change the default configuration to allow these Ingress resources to be created in multiple namespaces to support teams that are independently deploying microservice components for an application in different namespaces.
The OpenShift ingress controller (HAProxy-based) supports sticky session affinity out of the box and has a specific set of supported annotations, including one that allows cookie-based session affinity to be turned off. Significantly missing for this scenario is the ability to provide configuration snippets, as can be done with both the community Kubernetes NGINX-based ingress controller and the NGINX Inc. ingress controller. To overcome this, a sidecar based on nginx can be deployed to add the necessary header to all requests.
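For example, cookie-based session affinity can be turned off per route with an annotation on the route or Ingress metadata. The annotation name below is from the OpenShift router annotation set; verify it against the documentation for your OpenShift version:

```yaml
# Metadata fragment: disable cookie-based session affinity for one route.
metadata:
  annotations:
    haproxy.router.openshift.io/disable_cookies: "true"
```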
The example deployments will follow the general pattern used by NGINX examples around a cafe metaphor. These examples deploy a set of pods for the main application (the cafe) and a set of pods for the sub-module (a coffee bar at the cafe). The nginx sidecar is deployed in the coffee bar pod.
Begin with a Red Hat OpenShift 4.6 instance, either with CRC (CodeReady Containers) or a cloud or on-premise cluster. These examples use a host under the CRC built-in domain: my-ocp-cafe.apps-crc.testing. If using a non-CRC cluster, adjust the host value accordingly.
In a new project, create a config map from the nginx.conf:
oc create configmap nginx-conf --from-file=nginx.conf
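The contents of nginx.conf are not shown above; a minimal sketch of what the sidecar configuration might contain follows. The listen and upstream ports are assumptions (the sidecar and the app container share the pod network namespace, so the app is reached over localhost):

```nginx
# Hypothetical sidecar nginx.conf: listen on the service port, add the
# base-URI header, and forward everything to the app container on localhost.
events {}
http {
  server {
    listen 8080;
    location / {
      proxy_set_header X-Proxy-BaseUri /coffee;
      proxy_pass http://127.0.0.1:8081;
    }
  }
}
```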
After the configmap is added, create the deployments and services for the two components.
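A sketch of the coffee bar deployment and service, showing the sidecar pattern; the image names, ports, and labels are illustrative assumptions, not the exact manifests used here:

```yaml
# Coffee bar deployment with the nginx sidecar mounted from the configmap.
# Image names, ports, and labels are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: nginx-sidecar
        image: nginx:1.16          # receives traffic, adds the header
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      - name: coffee
        image: example/coffee-bar:latest   # placeholder app image
        ports:
        - containerPort: 8081
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  selector:
    app: coffee
  ports:
  - name: http
    port: 80
    targetPort: 8080   # the sidecar, not the app container, is the target
```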
Verify that all pods come up with oc get pods. You can also verify the defined services with oc get svc. Now it is time to set up the ingress for the host my-ocp-cafe.apps-crc.testing.
Create the ingress for the main cafe and then for the coffee bar.
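A sketch of the two Ingress resources using networking.k8s.io/v1; the resource names, pathType values, and the rewrite annotation placement are assumptions (the paths and service names match the route output below, and the rewrite-target annotation name is from the OpenShift router documentation, which should be verified for your version):

```yaml
# Ingress for the main cafe application (catch-all path).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-main
spec:
  rules:
  - host: my-ocp-cafe.apps-crc.testing
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cafe-svc
            port:
              name: http
---
# Ingress for the coffee bar sub-module path. The rewrite annotation
# strips the matched /coffee/ prefix before the request reaches the pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-coffeebar
  annotations:
    haproxy.router.openshift.io/rewrite-target: /
spec:
  rules:
  - host: my-ocp-cafe.apps-crc.testing
    http:
      paths:
      - path: /coffee/
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              name: http
```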
Even though Ingress resources are being added, OpenShift also creates and displays Routes (visible in the web UI as well) associated with the broader and the more specific paths on the host:
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
cafe-coffeebar-wjs9x my-ocp-cafe.apps-crc.testing /coffee/ coffee-svc http None
cafe-main-xb94q my-ocp-cafe.apps-crc.testing / cafe-svc http None
When these are defined, inspecting the server configuration file /var/lib/haproxy/conf/haproxy.config on the default router controller pod shows entries for both paths as backends to the respective sets of pod endpoints.
Expected output for these examples:
Request showing the session affinity cookie being set:
curl -I http://my-ocp-cafe.apps-crc.testing/main/cafe
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 08 Feb 2021 01:41:31 GMT
Content-Type: text/plain
Content-Length: 162
Expires: Mon, 08 Feb 2021 01:41:30 GMT
Cache-Control: no-cache
Set-Cookie: mycookie=76fc125486b0801b293ccfffa7f8dec1; path=/; HttpOnly
curl http://my-ocp-cafe.apps-crc.testing/main/cafe
Server address: 10.116.0.84:8080
Server name: cafe-684d654676-nfxnz
Date: 08/Feb/2021:01:40:42 +0000
URI: /main/cafe
Request ID: 844fc8f3d806e07070a22aa9ddff9f87
curl http://my-ocp-cafe.apps-crc.testing/coffee/mocha
<html><body>
<i>Request Host</i>: my-ocp-cafe.apps-crc.testing </br>
<i>Request URL</i>: /mocha </br>
<i>Context path</i>: mocha </br>
<i>Client IP (RemoteAddr)</i>: 127.0.0.1:34780 </br>
<i>Request TLS</i>: <nil> </br>
</br><b>Request Headers:</b></br>
<ul>
<li><i> Accept </i>: [*/*] </li>
<li><i> Connection </i>: [close] </li>
<li><i> Forwarded </i>: [for=192.168.130.1;host=my-ocp-cafe.apps-crc.testing;proto=http] </li>
<li><i> User-Agent </i>: [curl/7.58.0] </li>
<li><i> X-Forwarded-For </i>: [192.168.130.1] </li>
<li><i> X-Forwarded-Host </i>: [my-ocp-cafe.apps-crc.testing] </li>
<li><i> X-Forwarded-Port </i>: [80] </li>
<li><i> X-Forwarded-Proto </i>: [http] </li>
<li><i> X-Proxy-Baseuri </i>: [/coffee] </li>
</ul>
</br><b>Form parameters:</b></br>
<ul>
</ul>
</body></html>