Hi Baptiste,

On Fri, Feb 8, 2019 at 6:10 PM Baptiste <bed...@gmail.com> wrote:

>
>
> On Fri, Feb 8, 2019 at 6:09 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> On Fri, Feb 8, 2019 at 2:29 PM Igor Cicimov <
>> ig...@encompasscorporation.com> wrote:
>>
>>> Hi,
>>>
>>> I have a Jetty frontend exposed for a couple of ActiveMQ servers behind
>>> an SSL-terminating HAProxy 1.8.18. The servers share the same storage and
>>> state via a lock file, so there is only one active AMQ at any given time.
>>> I'm now testing this with a dynamic backend using Consul DNS resolution:
>>>
>>> # dig +short @127.0.0.1 -p 8600 activemq.service.consul
>>> 10.140.4.122
>>> 10.140.3.171
>>>
>>> # dig +short @127.0.0.1 -p 8600 _activemq._tcp.service.consul SRV
>>> 1 1 61616 ip-10-140-4-122.node.dc1.consul.
>>> 1 1 61616 ip-10-140-3-171.node.dc1.consul.
>>>
>>> The backends' status; first the current "master":
>>>
>>> root@ip-10-140-3-171:~/configuration-management# netstat -tuplen | grep java
>>> tcp        0      0 0.0.0.0:8161            0.0.0.0:*               LISTEN      503        13749196    17256/java
>>> tcp        0      0 0.0.0.0:6161            0.0.0.0:*               LISTEN      503        13749193    17256/java
>>>
>>> and the "slave":
>>>
>>> root@ip-10-140-4-122:~# netstat -tuplen | grep java
>>>
>>> So the service ports are not available on the second one.
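>>>
>>> A quick way to confirm this from the HAProxy hosts (assuming nc is
>>> installed there) is a zero-I/O connect test against each node; the first
>>> should succeed and the second be refused:
>>>
>>> # nc -zvw2 10.140.3.171 8161
>>> # nc -zvw2 10.140.4.122 8161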
>>>
>>> This is the part of the HAProxy config that I think is relevant:
>>>
>>> global
>>>     server-state-base /var/lib/haproxy
>>>     server-state-file hap_state
>>>
>>> defaults
>>>     load-server-state-from-file global
>>>     default-server init-addr    last,libc,none
>>>
>>> listen amq
>>>     bind ... ssl crt ...
>>>     mode http
>>>
>>>     option prefer-last-server
>>>
>>>     # when this is enabled the backend goes down
>>>     #option tcp-check
>>>
>>>     default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 25 maxqueue 256 weight 100
>>>
>>>     # working but both show as up
>>>     server-template amqs 2 activemq.service.consul:8161 check
>>>
>>>     # working old static setup
>>>     #server ip-10-140-3-171 10.140.3.171:8161 check
>>>     #server ip-10-140-4-122 10.140.4.122:8161 check
>>>
>>> This is working, but I see both servers as UP in the HAProxy
>>> console:
>>> [image: amqs.png]
>>> Is this normal for this kind of setup, or am I doing something wrong?
>>>
>>> Another observation: when I have the TCP check enabled, like:
>>>
>>>     option tcp-check
>>>
>>> together with the static server lines the way I had them before:
>>>
>>>     server ip-10-140-3-171 10.140.3.171:8161 check
>>>     server ip-10-140-4-122 10.140.4.122:8161 check
>>>
>>> both servers show as DOWN.
>>> Thanks in advance for any kind of input.
>>> Igor
>>>
>> OK, the state has changed now: I have the correct state on one haproxy:
>>
>> [image: amqs_hap1.png]
>> but on the second the whole backend is down:
>>
>> [image: amqs_hap2.png]
>> I confirmed via telnet that I can connect to port 8161 of the running AMQ
>> server from both haproxy servers.
>>
>>
>
>
> Hi Igor,
>
> You're using the libc resolver function at startup time to resolve your
> backend; this is not the recommended way to integrate with Consul.
> You will find some good explanations in this blog article:
>
> https://www.haproxy.com/fr/blog/haproxy-and-consul-with-dns-for-service-discovery/
>
> Basically, you should first create a "resolvers" section, in order to
> allow HAProxy to perform DNS resolution at runtime too.
>
> resolvers consul
>   nameserver consul 127.0.0.1:8600
>   accepted_payload_size 8192
>
> Then, you need to adjust your server-template line, like this:
> server-template amqs 10 _activemq._tcp.service.consul resolvers consul resolve-prefer ipv4 check
>
> In the example above, I am deliberately using the SRV records, because
> HAProxy supports them and will use all the information available in the
> response to update each server's IP, port and weight.
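>
> Once that is in place, you can verify at runtime what HAProxy resolved
> (assuming a stats socket is configured, the path below being just an
> example):
>
> # echo "show servers state amq" | socat stdio /var/run/haproxy.sock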
>
> I hope this will help you.
>
> Baptiste
>

All sorted now. For the record, and for those interested, here is my setup:

Haproxy:
--------

global
    server-state-base /var/lib/haproxy
    server-state-file hap_state

defaults
    load-server-state-from-file global
    default-server init-addr    last,libc,none
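
Note that the state file only helps if something actually writes it on
reload; this is typically done in the reload wrapper with something like
(the socket path here is just an example):

# echo "show servers state" | socat stdio /var/run/haproxy/admin.sock > /var/lib/haproxy/hap_state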

resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192
    resolve_retries       30
    timeout resolve       1s
    timeout retry         2s
    hold valid            30s
    hold other            30s
    hold refused          30s
    hold nx               30s
    hold timeout          30s
    hold obsolete         30s
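
A quick way to confirm this resolvers section is being used at runtime
(again assuming a stats socket is configured in the global section, not
shown here) is to dump the resolver counters:

# echo "show resolvers consul" | socat stdio /var/run/haproxy/admin.sock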

listen jetty
    bind _port_ ssl crt ...
    mode http

    option forwardfor except 127.0.0.1 header X-Forwarded-For
    option http-ignore-probes
    option prefer-last-server

    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 25 maxqueue 256 weight 100
    server-template jettys 2 _jetty._tcp.service.consul resolvers consul resolve-prefer ipv4 check
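
To double check what the jettys server template gets fed, the SRV records
can be queried directly from Consul, same as before:

# dig +short @127.0.0.1 -p 8600 _jetty._tcp.service.consul SRV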

Consul:
-------
{
  "services": [
    {
        "name": "activemq",
        "port": 6161,
        "tags": ["activemq", "data"],
        "check": {
            "service_id": "activemq",
            "script": "[ $(sudo lsof -iTCP:6161 -sTCP:LISTEN -Fp) ] && nc
-4nzv -q0 -w1 127.0.0.1 6161 > /dev/null 2>&1 || exit 0",
            "interval": "10s"
        }
    },
    {
        "name": "jetty",
        "port": 8161,
        "tags": ["jetty", "ui"],
        "check": {
            "script": "curl -ksSNIL localhost:8161 > /dev/null 2>&1",
            "interval": "{{ consul_service_check_interval }}"
        }
    }
  ]
}
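
After dropping this file into the agent's configuration directory, the
services can be (re)registered and checked (assuming the HTTP API is on
its default port 8500):

# consul reload
# curl -s http://127.0.0.1:8500/v1/health/service/activemq?passing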

The approach like:

server-template amqs 2 _activemq._tcp.service.consul:8161 resolvers consul resolve-prefer ipv4 check

did not work: although the health checks were using port 8161 as specified
on the server-template line, the traffic was hitting the AMQ service port
6161, most likely because the configured port was overwritten by the SRV
records obtained from Consul.
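
For completeness, a plain A-record variant like:

server-template amqs 2 activemq.service.consul:8161 resolvers consul resolve-prefer ipv4 check

should keep the configured port 8161, since A records carry no port
information; only SRV responses override the server's port and weight.
Registering the UI as its own "jetty" service, as above, gives both the
right port and the SRV-based updates.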
Igor
