Re: TCP Port Forwarding

2014-08-23 Thread TK Lew
Hi Lukas :

- Thanks for the reply.
- We have a node (A), for example, that will stream TCP data towards a
mediation node (B).
- Node A can only support one destination IP address and TCP port.
- In our case we have 3 mediation nodes (B, C and D).

Scenario :

- Can haproxy support TCP forwarding to B, C and D simultaneously,
without any balancing algorithm?

So far I have managed to tell haproxy to perform TCP forwarding to a
backend group, but the packets are load balanced between the B, C and D
nodes.

This is just a TCP stream and does not have any sessions/cookies.
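For illustration, a minimal sketch of the kind of TCP backend described above (hypothetical addresses and ports): by default haproxy balances across all listed servers, whereas marking servers with the `backup` keyword gives pure failover instead of balancing, so only the first live server receives traffic:

```
listen tcp_in
    mode tcp
    bind 0.0.0.0:9000
    # without "backup": traffic is round-robined across B, C and D
    server nodeB 192.0.2.11:9000 check
    # with "backup": C and D only receive traffic once B is down
    server nodeC 192.0.2.12:9000 check backup
    server nodeD 192.0.2.13:9000 check backup
```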

Thanks !

BR//TK


On Sat, Aug 23, 2014 at 1:02 AM, Lukas Tribus  wrote:
> Hi,
>
>
>> - Can haproxy be use as a tcp proxy to forward traffic to many backend
>> server without any load balancing?
>
> You can certainly configure it to just failover from one server to another,
> is that what you mean?
>
>
>> - Just perform as tcp forwarding to many clients with no balancing
>> algorithms.
>
> Can you elaborate on what you mean by forwarding to many clients?
>
>
>
> Regards,
>
> Lukas
>
>
>
>



RE: TCP Port Forwarding

2014-08-23 Thread Lukas Tribus
Hi!


> Hi Lukas :
>
> - Thanks for the reply.
> - We have a node (A), for example, that will stream TCP data towards a
> mediation node (B).
> - Node A can only support one destination IP address and TCP port.
> - In our case we have 3 mediation nodes (B, C and D).

I see, but your requirement is not just "no load balancing": you need
one sender with multiple receivers, and that's not something
HAProxy can do.

HAProxy only forwards traffic from one source to one destination, not
to multiple destinations simultaneously.


Maybe you find something useful here:
http://serverfault.com/questions/570761/how-to-duplicate-tcp-traffic-to-one-or-multiple-remote-servers-for-benchmarking



Regards,

Lukas


  


failing health checks, when using unix sockets, with ssl server&binding, 1.5.3

2014-08-23 Thread PiBa-NL
Resending, as I didn't see a reply so far; I think it got lost among the
other conversations.
It would be nice if someone could tell me whether the problem is in my
config or in haproxy itself. Thanks.


Hi haproxy-list,

I am seeing some strange results when trying to use unix sockets to connect
backends to frontends.
I'm using 1.5.3 on FreeBSD 8.3 (pfSense).

With the config below, the result I get is that srv1, srv2, srv3 and srv5
serve requests correctly (I can put all the others into maintenance mode
and the stats keep working).

srv4, however, is down because of lastchk: "L6TOUT". This behavior seems
inconsistent to me.

If anyone could confirm whether this is indeed a problem in haproxy, or
explain the reason for this behavior, please let me know.

The config below is just what I narrowed the problem down to, so that there
is an easily reproducible case for why I was having trouble forwarding a
TCP backend to an SSL-offloading frontend.
What I wanted to have is a TCP frontend that uses SNI to forward connections
to the proper backends, plus a default backend that does SSL offloading and
then uses the Host header to send requests to the proper backend. The
purpose is to minimize the load on haproxy itself while maximizing the
range of supported clients (XP and older mobile devices).
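For reference, a minimal sketch of the SNI-based TCP routing described above (hypothetical names and ports, haproxy 1.5 syntax) might look something like:

```
frontend tcp_sni_in
    mode tcp
    bind 0.0.0.0:443
    # wait for the TLS ClientHello before routing
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # SNI-capable clients go straight to the matching backend
    use_backend bk_site1 if { req_ssl_sni -i site1.example.com }
    # everything else (e.g. XP, older mobiles) falls through
    # to a backend pointing at an SSL-offloading frontend
    default_backend bk_ssl_offload
```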

Thanks in advance.
PiBa-NL

global
    daemon
    gid 80
    ssl-server-verify none
    tune.ssl.default-dh-param 1024
    chroot /tmp/haproxy_chroot

defaults
    timeout connect 3
    timeout server 3

frontend 3in1
    bind 0.0.0.0:800
    mode tcp
    timeout client 3
    default_backend local84_tcp

backend local84_tcp
    mode tcp
    retries 3
    option httpchk GET /
    server srv1 127.0.0.1:1000 send-proxy check inter 1000
    server srv2 /stats1000.socket send-proxy check inter 1000
    server srv3 127.0.0.1:1001 send-proxy ssl check inter 1000 check-ssl
    server srv4 /stats1001.socket send-proxy ssl check inter 1000 check-ssl
    server srv5 /stats1001.socket send-proxy ssl

frontend stats23
    bind 0.0.0.0:1000 accept-proxy
    bind /tmp/haproxy_chroot/stats1000.socket accept-proxy
    bind 0.0.0.0:1001 accept-proxy ssl crt /var/etc/haproxy/stats23.85.pem
    bind /tmp/haproxy_chroot/stats1001.socket accept-proxy ssl crt /var/etc/haproxy/stats23.85.pem
    mode http
    timeout client 3
    default_backend stats_http

backend stats_http
    mode http
    retries 3
    stats enable
    stats uri /
    stats admin if TRUE
    stats refresh 1