Hi Imam,

On Tue, Jan 9, 2018 at 6:54 PM, Imam Toufique <techie...@gmail.com> wrote:
> Hi Lukas,
>
> thanks again for your continued help and support!  Here is my config file
> with updates now:
>
> frontend main
>    bind :2200
>    default_backend sftp
>    timeout client 5d
>
>
> listen stats
>    bind *:2200
>    mode tcp
>    maxconn 2000
>    option redis-check
>    retries 3
>    option redispatch
>    balance roundrobin
>
>
> Please correct me if you see something that is not right.

That's still wrong: you are again configuring two services on a single
port. With both sockets bound to port 2200, the kernel will distribute
incoming connections between them, so each connection ends up at one
service or the other unpredictably.

What is the "listen stats" section supposed to do anyway in your
configuration? Why do you need a main frontend and this listen
section?
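If what you want is HAProxy's statistics page, it needs its own
dedicated port and HTTP mode. A minimal sketch (port 8404 and the URI
are just examples, pick whatever suits your environment):

   listen stats
      bind *:8404
      mode http
      stats enable
      stats uri /stats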



> You asked about my SSH/SFTP use-case.  Basically, here is my use-case.  I
> have several SFTP servers that I would like to load-balance.  I was thinking
> about using HAProxy to load-balance SFTP connections between my SFTP
> servers.  As I was testing my setup yesterday, I was sending sftp file
> transfers to the HAproxy node, I noticed that HAProxy node CPU usage was
> pretty high.  I am beginning to wonder if it is the right setup for my
> environment.
> Is HAProxy is the right solution for SFTP server load-balancing?

Load-balancing SSH/SFTP should generally be easy to do, as SSH only
uses a single port and has no layering violations (as opposed to FTP,
which embeds addresses inside the protocol).
The only thing to be aware of is the host key issue: since you are
load-balancing between different servers, clients will see a different
host key depending on which backend they land on, and will warn about
a changed key. Use the same host key pair on all the backend servers
to avoid this problem.

As for the high CPU usage, I'd recommend fixing the configuration
first, before troubleshooting the CPU load. You may see strange
effects due to unintended load-balancing.


The rule is simple: if you specify the same listening port more than
once in the configuration, something is wrong and things will break.
There must be exactly one reference to port 2200.
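For example, merging your two sections into one frontend/backend pair
keeps port 2200 in exactly one place. A rough sketch, reusing the
settings from your config (the backend server names and addresses are
placeholders, fill in your real SFTP hosts):

   frontend main
      bind :2200
      mode tcp
      timeout client 5d
      default_backend sftp

   backend sftp
      mode tcp
      balance roundrobin
      option redispatch
      retries 3
      timeout server 5d
      server sftp1 192.0.2.10:22 check
      server sftp2 192.0.2.11:22 check

Note the matching "timeout server" on the backend side: with only
"timeout client" set, idle connections would still be cut by the
default server timeout.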



Lukas
