Hello all!

I'll have to resubmit this issue as it's still a problem for my installed cluster.

My environment consists of two masters and one load-balancer running the default HAProxy set up by the automated install via the openshift-ansible project, so it's essentially the "default" configuration for a cluster with two masters. What I find is that the configmap for the web console is created by default with one of the masters as consolePublicURL and masterPublicURL, rather than the load-balancer entry point, as I would have expected.
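
For reference, this is how I checked what the installer generated (assuming the console configmap lives in the openshift-web-console namespace, as it does on my install):

      # Show the public URLs the installer wrote into the web-console config
      oc get configmap webconsole-config -n openshift-web-console -o yaml \
          | grep PublicURL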

So if I simulate a failure of the master that's configured in the configmap, I effectively cannot reach the web console, even though the second master is still available.
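
In case the test procedure matters, this is roughly how I simulated the failure (the service name assumes an RPM-based OCP install; on Origin it would be origin-master-api instead):

      # On the master named in the configmap, stop the API service
      systemctl stop atomic-openshift-master-api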

I have tried editing the configmap for the web console and the oauthclient openshift-web-console, but this results in "invalid request" errors when accessing the web console from a browser.

The important edits are:
- in configmap webconsole-config:

      consolePublicURL: https://loadbalancer.my.net:8443/console/
      masterPublicURL: https://loadbalancer.my.net:8443

- in oauthclient openshift-web-console:

      redirectURIs:
      - https://loadbalancer.my.net:8443/
      - https://master1.my.net:8443/console/
      - https://master2.my.net:8443/console/
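
For completeness, these are the commands I used to apply the edits above, plus a pod restart so the console re-reads its config (again assuming the openshift-web-console namespace, where only the console pods live on my cluster):

      # Apply the edits
      oc edit configmap webconsole-config -n openshift-web-console
      oc edit oauthclient openshift-web-console

      # Recreate the console pods so they pick up the new configmap
      oc delete pod --all -n openshift-web-console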

I've already exchanged some mails with Sam Padgett, who pointed out that it might be related to the load-balancer config, but, as I stated, the load-balancer is the default one configured by the installer, and the HAProxy config file seems to be the stock one provided by the openshift-ansible project.
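
If it helps, this is how I inspected the generated config on the load-balancer host (the backend name atomic-openshift-api is what the stock openshift-ansible template uses, as far as I can tell; on mine it's a plain mode tcp backend balancing across the two masters):

      # On the load-balancer, show the API backend the installer generated
      grep -A 6 'backend atomic-openshift-api' /etc/haproxy/haproxy.cfg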

I'd appreciate any ideas about where to look for this problem.

Thanks in advance!