As Phil mentioned, you can check if the iptables rule is blocking it.
A simple test would be to rsh into the router pod and use netcat to send a
message (the pod name and syslog host below are placeholders for your values):
$ oc rsh <router-pod-name>
pod> echo '<14> user test message from router pod' | nc -w 2 -u <syslog-host> 514
And maybe try from the host (openshift-node) or another node as well.
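For example, from a node (again, the syslog host is a placeholder for wherever
you are sending the messages):

$ echo '<14> user test message from openshift-node' | nc -w 2 -u <syslog-host> 514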
*Sorry for the duplicate email, Sebastian - the users list rejected the
original mail*
You would need a customized haproxy config template, but you could add an
acl along the lines of the sketch below in the 2 frontends public[_ssl] (or
to specific backends if you need more granular control on a per-backend
basis):
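The original snippet was cut off here, so this is only an illustration of the
shape - the acl name, source range, and deny action are placeholders, since
the thread doesn't show what the actual rule did:

  frontend public
    # ... existing template content ...
    # illustration only: reject requests from a given source range
    acl blocked_src src 192.0.2.0/24
    http-request deny if blocked_src

For the tcp-mode public_ssl frontend you'd use something like
"tcp-request connection reject if blocked_src" instead of the http-request
line.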
I couldn't figure out from the email thread whether you have a problem or
not (or whether it was just a question).
What does "ip addr show" on all the nodes show? This is the nodes where
your ipfailover pods are running.
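For reference, something like this on each of those nodes should show
whether a VIP is currently assigned there (the interface name and VIP are
placeholders for your values):

$ ip addr show <interface>

The VIP should show up as an additional "inet <VIP> ..." entry on whichever
node keepalived currently holds it on.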
If the VIPs are allocated to both nodes (which I'm assuming from the logs),
then it is likely
Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:
>
> On Tue, Jun 7, 2016 at 6:45 PM, Ram Ranganathan <rrang...@redhat.com>
> wrote:
>
>> Is your server always returning 503 - example for a GET/HEAD on / ? That
>> could cause haproxy to mark it as down.
>>
Hmm, so 503 is also returned by haproxy if no server is available to
service a request (for example, a backend with no servers, or a server that
is failing the health check and not available). As I recall, we serve the
error page on a request because that gives the ability to override it in a
custom template.
It's not clear whether you want the router to automatically serve the 503
page or not. If you do, this line in the haproxy config template:
https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L198
automatically sends a 503 page if your service is down (example
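If I remember right, the referenced line is an errorfile directive roughly
along these lines (the exact path and line number may differ between
versions):

  errorfile 503 /var/lib/haproxy/conf/error-page-503.http

Overriding that file (or that line, via a customized config template) is
what lets you serve your own 503 page.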
It is warning you that the router will be serving up the default cert for
test-prjtest.getup.io - which may or may not
be a problem depending on your env (config + domain + routes).
If your default certificate is a wild-card cert for *.getup.io, then it's ok
- at least for the
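If you want to double-check which cert the router actually presents for
that host, something like this should show it (the router address is a
placeholder):

$ echo | openssl s_client -connect <router-host>:443 -servername test-prjtest.getup.io 2>/dev/null | openssl x509 -noout -subject -issuer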
I hadn't seen this until you mentioned it. I can see the close calls in my
local env. It looks like it happens in a new process - after a clone()
syscall - roughly a couple of seconds apart. So it is likely part of the
script that does the health check:
script "
wrote:
> Using OSEv3.1.1
>
> I'm
>> - system:serviceaccount:default:router
>> - system:serviceaccount:default:registry
>>
>> router has always been a privileged user service account.
>>
>> On Thu, Mar 3, 2016 at 12:55 AM, Ram Ranganathan <rrang...@redhat.com>
>> wrote:
>>
>>> So you have no app level b