Hi Minh

Sorry for the long reply :-)

If you only have one load balancer in front of your NiFi cluster, you
introduce a single point of failure. For High Availability (HA), you can
have 2 nodes with load balancing in front of your NiFi cluster. proxy-01
will have one IP address and proxy-02 will have another. You can then
create 2 DNS records pointing nifi-proxy to both proxy-01 and proxy-02.
This will give you some kind of HA, but you will rely on DNS doing round
robin between the 2 records. If one node goes down, DNS doesn't know
anything about this and will keep handing out responses pointing to the
dead node. So you can have situations where half of the requests to your
nifi-proxy time out because of a dead node.
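
For example (the addresses here are made-up, just for illustration), the
two A records could look like this in the zone file:

    nifi-proxy.foo.bar.   300   IN   A   192.0.2.11   ; proxy-01
    nifi-proxy.foo.bar.   300   IN   A   192.0.2.12   ; proxy-02

Resolvers will keep rotating between both addresses whether the proxy
behind each one is alive or not.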

Instead of using DNS round robin you can use keepalived on Linux. This is
a small program which uses a third IP address, a so-called virtual IP
(VIP). You will have to look at the keepalived documentation for how to
configure this: https://www.keepalived.org/
You need to make a small adjustment to Linux to allow services to bind to
non-existing IP addresses.
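The setting usually meant here is net.ipv4.ip_nonlocal_bind, which lets
HAProxy bind to the VIP while the node is in backup state, e.g.:

    # allow binding to an address this node doesn't currently hold
    sysctl -w net.ipv4.ip_nonlocal_bind=1
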
You configure each node in keepalived with a starting weight (priority).
In my setup I have configured node-1 with a weight of 100, and node-2 with
a weight of 99. Keepalived is configured so that the two keepalived
instances on the nodes can talk together and send keepalive signals to
each other with their weight. Based on the weight it receives from the
other keepalived nodes and its own, each node decides whether it should
change state to master or backup. The node with the highest weight will be
master, and the master node will add the VIP to the node. Now you create
only one DNS record for nifi-proxy, pointing it to the VIP, and all
requests will go to only one HAProxy, which will load balance your traffic
to the NiFi nodes.
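
As a rough sketch only (the interface name, virtual_router_id and the VIP
below are made-up examples, not my actual values), the vrrp_instance for
node-1 could look like this; node-2 would use priority 99:

    vrrp_instance VI_1 {
        state MASTER
        interface eth0              # adjust to your NIC
        virtual_router_id 51        # must be identical on both nodes
        priority 100                # the start weight; node-2 gets 99
        advert_int 1
        virtual_ipaddress {
            192.0.2.10/24           # the vip
        }
    }
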
You can also configure keepalived to use a script to tell whether the
service you are running on the host is alive, in this case HAProxy. I have
created a small script which curls the stats page of HAProxy and checks
whether my "Backend/nifi-ui-nodes" is up. If it's up, the script just
exits with exit code 0 (OK); otherwise it exits with exit code 1 (error).
In the keepalived configuration, you configure what should happen to the
weight when the script fails. I have configured the check script to adjust
the weight by -10 in case of an error. So if HAProxy on node-01 dies, or
loses network connection to all NiFi nodes, the check script will fail and
the weight will be 90 (100-10). Node-01 will receive a keepalive signal
from node-02 with a weight of 99 and will therefore change state to backup
and remove the VIP from the host. Node-02 will receive a keepalive signal
from node-01 with a weight of 90, and since its own weight of 99 is
bigger, it will change state to master and add the VIP to the host. Now it
will be node-02 which receives all requests and load balances all traffic
to your NiFi nodes.
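
For illustration (the stats port, path and script location below are
assumptions, not my exact setup), such a check script could be as simple
as a one-liner against HAProxy's CSV stats export:

    #!/bin/sh
    # exit 0 if the nifi-ui-nodes backend is UP on the HAProxy stats page, 1 otherwise
    curl -sf "http://localhost:8404/stats;csv" | grep "^nifi-ui-nodes,BACKEND," | grep -q "UP"

and it would be hooked into keepalived roughly like this:

    vrrp_script chk_haproxy {
        script "/usr/local/bin/check_haproxy.sh"
        interval 5      # run every 5 seconds
        weight -10      # subtract 10 from the weight on failure
    }

    vrrp_instance VI_1 {
        ...
        track_script {
            chk_haproxy
        }
    }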

Once again, sorry for the long reply. You don't need 2 HAProxy nodes; one
can do the job, but it will be a single point of failure. You can also
just use DNS round robin to point to two HAProxy nodes, or dive into the
use of keepalived.

Kind regards
Jens M. Kofoed




On Thu, 7 Sep 2023 at 13:32, <e-soci...@gmx.fr> wrote:

>
> Hello Jens
>
> Thanks a lot for haproxy conf.
>
> Could you give more details about this point :
>
> - I have 2 proxy nodes, which are running in an HA setup with keepalived
> and with a vip.
> - I have a dns record nifi-cluster01.foo.bar pointing to the vip address
> of keepalived
>
> Thanks
>
> Minh
> *Sent:* Thursday, 7 September 2023 at 11:29
> *From:* "Jens M. Kofoed" <jmkofoed....@gmail.com>
> *To:* users@nifi.apache.org
> *Subject:* Re: Help : LoadBalancer
> Hi
>
> I have a 3 node cluster running behind a HAProxy setup.
> My haproxy.cfg looks like this:
> global
>     log stdout format iso local1 debug # rfc3164, rfc5424, short, raw, (iso)
>     log stderr format iso local0 err # rfc3164, rfc5424, short, raw, (iso)
>     hard-stop-after 30s
>
> defaults
>     log global
>     mode http
>     option httplog
>     option dontlognull
>     timeout connect 5s
>     timeout client 50s
>     timeout server 15s
>
> frontend nifi-ui
>     bind *:8443
>     bind *:443
>     mode tcp
>     option tcplog
>     default_backend nifi-ui-nodes
>
> backend nifi-ui-nodes
>     mode tcp
>     balance roundrobin
>     stick-table type ip size 200k expire 30m
>     stick on src
>     option httpchk
>     http-check send meth GET uri / ver HTTP/1.1 hdr Host nifi-cluster01.foo.bar
>     server C01N01 nifi-c01n01.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
>     server C01N02 nifi-c01n02.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
>     server C01N03 nifi-c01n03.foo.bar:8443 check check-ssl verify none inter 5s downinter 5s fall 2 rise 3
>
> I have 2 proxy nodes, which are running in an HA setup with keepalived
> and with a vip.
> I have a dns record nifi-cluster01.foo.bar pointing to the vip address of
> keepalived.
>
> In your nifi.properties file you would have to set a proxy host address:
> nifi.web.proxy.host=nifi-cluster01.foo.bar:8443
>
> This setup is working for me.
>
> Kind regards
> Jens M. Kofoed
>
>
>
On Wed, 6 Sep 2023 at 16:17, Minh HUYNH <e-soci...@gmx.fr> wrote:
>
>> Hello Juan
>>
>> Not sure if you understood my point of view?
>>
>> I've got a cluster nifi01/nifi02/nifi03
>>
>> I am trying to use a unique URL, for instance https://nifi_clu01:9091/nifi;
>> this link should point randomly to one of nifi01/nifi02/nifi03
>>
>> Regards
>>
>>
>> *Sent:* Wednesday, 6 September 2023 at 16:05
>> *From:* "Juan Pablo Gardella" <gardellajuanpa...@gmail.com>
>> *To:* users@nifi.apache.org
>> *Subject:* Re: Help : LoadBalancer
>> List all servers you need.
>>
>> server server1 "${NIFI_INTERNAL_HOST1}":8443 ssl
>> server server2 "${NIFI_INTERNAL_HOST2}":8443 ssl
>>
>> On Wed, Sep 6, 2023 at 10:35 AM Minh HUYNH <e-soci...@gmx.fr> wrote:
>>
>>> Thanks a lot for reply.
>>>
>>> Concerning the redirection to one node: that is OK, we got it.
>>>
>>> But how do we configure nifi and haproxy to point to the cluster nodes,
>>> for instance cluster nodes "nifi01, nifi02, nifi03"?
>>>
>>> regards
>>>
>>> Minh
>>>
>>>
>>>
>>> *Sent:* Wednesday, 6 September 2023 at 15:29
>>> *From:* "Juan Pablo Gardella" <gardellajuanpa...@gmail.com>
>>> *To:* users@nifi.apache.org
>>> *Subject:* Re: Help : LoadBalancer
>>> I did that multiple times. Below is how I configured it:
>>>
>>> frontend http-in
>>>     # bind ports section
>>>     acl prefixed-with-nifi path_beg /nifi
>>>     use_backend nifi if prefixed-with-nifi
>>>     option forwardfor
>>>
>>> backend nifi
>>>     server server1 "${NIFI_INTERNAL_HOST}":8443 ssl
>>>
>>>
>>>
>>> On Wed, Sep 6, 2023 at 9:40 AM Minh HUYNH <e-soci...@gmx.fr> wrote:
>>>
>>>>
>>>> Hello,
>>>>
>>>> I have been trying for a long time to configure a nifi cluster behind
>>>> haproxy/loadbalancer, but until now it has always failed.
>>>> I only get access to the welcome page of nifi; all other links fail.
>>>>
>>>> If someone has a working configuration, it would be helpful.
>>>>
>>>> Thanks a lot
>>>>
>>>> Regards
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
