sending traffic to one backend server based on another backend server (sticky session)

2014-09-26 Thread Joseph Hardeman
So I have a need to send a remote visitor to one specific server on another
port/backend based on the first backend server they logged in to. It's
really the same server, just different IPs.

Is this possible?

Joe
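No reply is archived for this thread, but one common pattern fits the question: share a stick table between two backends that list the same servers in the same order, so the server learned at login also selects the matching entry on the other port. A hedged sketch only; all backend names, addresses, and ports below are assumptions:

```
backend bk_login
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 10.0.0.1:8080
    server s2 10.0.0.2:8080

backend bk_app
    # reuse the association learned in bk_login; matching works because
    # both backends declare the same servers in the same order
    stick on src table bk_login
    server s1 10.0.0.1:9090
    server s2 10.0.0.2:9090
```

Note that sticking on src assumes clients are not behind a shared NAT; a cookie-based stick would be needed otherwise.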


Re: Master server outage this night (1wt.eu)

2014-09-03 Thread Joseph Hardeman
Lol. Know what you mean. Good going.
On Sep 3, 2014 5:05 PM, Willy Tarreau w...@1wt.eu wrote:

 On Wed, Sep 03, 2014 at 09:51:54PM +0200, Willy Tarreau wrote:
  I'll send another mail when it's back online.

 Done after 65 min. Not bad for a move of 6 servers, 2 switches and
 a UPS 25 km away after 410 days of uptime :-)

 The secret lies in not unplugging any wire, but on arrival it's
 a horrible mess!

 Willy





keep alive timeouts

2014-08-26 Thread Joseph Hardeman
Hi everyone,

I hope someone can help out. I have a customer with an IPsec tunnel,
using PAT so that our systems only see requests from a single IP. Traffic
from their facility passes through a firewall into haproxy (an old version
embedded in vShield, so I don't know the version or configuration). From
our packet captures, it appears the traffic is being sent through at
microsecond intervals and is fine until it hits HAProxy. The network guys
told me they don't see these packets coming out the other side of haproxy.

Could haproxy be dropping the second request because the packets appear
to come from the same IP (through the PAT) and arrive in such quick
succession?

Thanks for any ideas on this.

Joe


Re: keep alive timeouts

2014-08-26 Thread Joseph Hardeman
Hi Lukas,

Thank you for responding. I was told that the remote servers are sending
keep-alive requests with microseconds between calls. I have not actually
investigated this myself; I am going off what my network engineer is
telling me from running wireshark and looking at packet captures.

The second and biggest issue is that we are using the haproxy embedded
in the VMware vShield load balancer, so I don't have any access to its
configuration files. I actually recommended we set up a second haproxy
and move them off of that appliance for load balancing. They are trying
to insert an F5 load balancer instead. I was just wondering if anyone
might have seen something like this. :-)

Thanks for the reply and take care.

Joe


On Tue, Aug 26, 2014 at 3:31 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi Joseph,


  Hi everyone,
 
  I hope someone can help out. I have a customer who has an IPSEC
  tunnel, using PAT so that our systems only see requests from a single
  IP, from their facility to us, they are then passing through a firewall
  to go into haproxy, old version part of vShield so I don't know what
  version or configuration, but from our packet captures, it appears that
  the traffic is being sent through at microsecond intervals and its fine
  until it hits Haproxy. The network guys told me they don't see these
  packets coming out the other side of haproxy.

 I'm not sure I understand. What does "sent through at microsecond
 intervals" mean exactly?

 You also don't really explain the problem, just your environment. Do you
 see timeouts in haproxy? Can you enable logging (httplog) and post the
 exact log when this timeout occurs?



  Could haproxy be doing some sort of drop on the packets since they
  appear to be coming from the same IP, even through a PAT, and dropping
  the second packet request since they are coming so fast?

 No, haproxy doesn't care.


 Some things to check:
 - make sure tcp_tw_recycle is disabled
 - check for MTU problems, and if the incoming MSS is correct (or try
   lowering it on haproxy side for a blind test [1])


 If nothing pops, we will have to check those packet captures.



 Regards,

 Lukas


 [1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.1-mss
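For reference, the MSS suggestion above corresponds to a bind-line parameter; a hedged fragment only, where the port, names, and value are assumptions:

```
frontend fe_main
    # advertise a lower MSS to clients for a blind MTU test
    bind 0.0.0.0:80 mss 1400
    default_backend bk_app
```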




Re: Capturing Cookies

2013-10-17 Thread Joseph Hardeman
Hey Baptiste,

Very cool, thanks.  That is giving me what I needed.

Joe


On Thu, Oct 17, 2013 at 2:04 AM, Baptiste bed...@gmail.com wrote:

 Hi joseph,

 Add the following to your frontend:
 capture request header Cookie len 64

 Assuming you have already turned on option httplog.

 Baptiste
  On 16 Oct 2013 at 21:23, Joseph Hardeman jwharde...@gmail.com wrote:

 Hey Guys,

 Quick questions, I want to capture what cookies are making it to an
 haproxy system, I know I can capture a cookie based off its name, but is
 there a way to capture all cookies when a browser hits my proxy?

 Thanks

 Joe
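Putting Baptiste's one-liner in context, a hedged frontend fragment (section and backend names are assumptions):

```
frontend fe_web
    bind 0.0.0.0:80
    option httplog
    # record the whole Cookie request header, truncated to 64 bytes,
    # in the captured-headers field of the log line
    capture request header Cookie len 64
    default_backend bk_web
```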




Haproxy SSL Termination question

2013-05-15 Thread Joseph Hardeman
Hi Everyone,

I am in need of a little help, currently I need to send traffic to a
haproxy setup and terminate the SSL certificate there, which I have
working, but until I can get a backend application changed from redirecting
when it gets the https request to a login page, is there any way I can
connect to the backend server(s) over port 443 so it fakes it to the server
and the page redirection continues to work?  At least until we can get the
code updated to use say port 8443 on the server instead of 443?

Just curious and thought I would ask the experts out there. :-)

Thanks in advance.

Joe
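No answer is archived here, but 1.5-dev builds of that era could re-encrypt toward the backend with an `ssl` keyword on the server line, which would keep the application's https-based redirect logic working until the code change. A hedged sketch; the certificate path and addresses are assumptions:

```
frontend fe_https
    mode http
    bind 0.0.0.0:443 ssl crt /etc/haproxy/site.pem
    default_backend bk_iis

backend bk_iis
    mode http
    # reconnect to the backend over 443 so the app still sees TLS
    server iis1 192.168.0.206:443 ssl verify none check
```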


Re: build with static openssl

2013-05-11 Thread Joseph Hardeman
Hi Lukas

I am trying to follow the steps you mentioned and the OpenSSL install
goes fine, but I am getting the following when trying to build haproxy,
and I would appreciate any thoughts on why this may be happening. This is
a CentOS 5.3 32-bit system. I have tried with the target you mentioned
and also with linux26 and linux24; I removed the haproxy-1.5-dev18
directory and untarred the haproxy-1.5-dev18.tar.gz file each time, but I
am still getting the same sort of errors:


make TARGET=linux2628 USE_OPENSSL=1 ADDINC=-I$LIBSSLBUILD/include \
ADDLIB="-L$LIBSSLBUILD/lib -ldl"

gcc  -g -o haproxy src/haproxy.o src/sessionhash.o src/base64.o
src/protocol.o src/uri_auth.o src/standard.o src/buffer.o src/log.o
src/task.o src/chunk.o src/channel.o src/listener.o src/time.o src/fd.o
src/pipe.o src/regex.o src/cfgparse.o src/server.o src/checks.o src/queue.o
src/frontend.o src/proxy.o src/peers.o src/arg.o src/stick_table.o
src/proto_uxst.o src/connection.o src/proto_http.o src/raw_sock.o
src/appsession.o src/backend.o src/lb_chash.o src/lb_fwlc.o src/lb_fwrr.o
src/lb_map.o src/lb_fas.o src/stream_interface.o src/dumpstats.o
src/proto_tcp.o src/session.o src/hdr_idx.o src/ev_select.o src/signal.o
src/acl.o src/sample.o src/memory.o src/freq_ctr.o src/auth.o
src/compression.o src/payload.o src/ev_poll.o src/ev_epoll.o src/ssl_sock.o
src/shctx.o ebtree/ebtree.o ebtree/eb32tree.o ebtree/eb64tree.o
ebtree/ebmbtree.o ebtree/ebsttree.o ebtree/ebimtree.o ebtree/ebistree.o
-lcrypt -lssl -lcrypto -L/opt/libsslbuild/lib -ldl
src/listener.o: In function `listener_accept':
/root/haproxy-1.5-dev18/src/listener.c:314: undefined reference to `accept4'
src/shctx.o: In function `atomic_dec':
/root/haproxy-1.5-dev18/src/shctx.c:134: undefined reference to
`__sync_sub_and_fetch_4'
src/shctx.o: In function `cmpxchg':
/root/haproxy-1.5-dev18/src/shctx.c:129: undefined reference to
`__sync_val_compare_and_swap_4'
src/shctx.o: In function `atomic_dec':
/root/haproxy-1.5-dev18/src/shctx.c:134: undefined reference to
`__sync_sub_and_fetch_4'
/root/haproxy-1.5-dev18/src/shctx.c:134: undefined reference to
`__sync_sub_and_fetch_4'
src/shctx.o: In function `cmpxchg':
/root/haproxy-1.5-dev18/src/shctx.c:129: undefined reference to
`__sync_val_compare_and_swap_4'
src/shctx.o: In function `atomic_dec':
/root/haproxy-1.5-dev18/src/shctx.c:134: undefined reference to
`__sync_sub_and_fetch_4'
src/shctx.o: In function `cmpxchg':
/root/haproxy-1.5-dev18/src/shctx.c:129: undefined reference to
`__sync_val_compare_and_swap_4'
collect2: ld returned 1 exit status
make: *** [haproxy] Error 1


Thanks

Joe
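For what it's worth, link errors on `accept4` and the `__sync_*` builtins usually mean the `linux2628` target assumes a newer glibc and GCC atomic builtins than CentOS 5's 32-bit toolchain emits by default. A hedged workaround to try, not a verified recipe:

```
# use the older linux26 target (no accept4()), and target at least i686
# so gcc emits the __sync_* atomic builtins on 32-bit
make TARGET=linux26 CPU_CFLAGS="-O2 -march=i686" USE_OPENSSL=1 \
     ADDINC="-I$LIBSSLBUILD/include" ADDLIB="-L$LIBSSLBUILD/lib -ldl"
```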


On Fri, May 10, 2013 at 8:24 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi Bryan,


  What's required to build haproxy and statically link with openssl libs
  like can be done with pcre?

 The following procedure will install a static build of latest openssl
 in a directory of your choice without interfering with your OS headers
 and libraries:

  export LIBSSLBUILD=/tmp/libsslbuild
  mkdir $LIBSSLBUILD
  cd ~
  wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
  tar zxvf openssl-1.0.1e.tar.gz
  cd openssl-1.0.1e 
  ./config --prefix=$LIBSSLBUILD no-shared
  make
  make install_sw


 Then build haproxy by pointing to the proper path:
  make TARGET=linux2628 USE_OPENSSL=1 ADDINC=-I$LIBSSLBUILD/include \
  ADDLIB="-L$LIBSSLBUILD/lib -ldl"

 OpenSSL depends on libdl, so we need to pass -ldl along.


 When everything is compiled, check your openssl version (use a
 snapshot from Apr 27th or newer to see the build and runtime
 openssl versions). Both should say 1.0.1e in our case. Also check with
 ldd; it should not show any openssl libraries loaded dynamically.

  lukas@ubuntuvm:~/haproxy$ ./haproxy -vv | grep OpenSSL
  Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
  Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports prefer-server-ciphers : yes
  lukas@ubuntuvm:~/haproxy$ ldd haproxy
  linux-gate.so.1 => (0xb76e4000)
  libcrypt.so.1 => /lib/i386-linux-gnu/libcrypt.so.1 (0xb76ab000)
  libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xb76a6000)
  libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb74fb000)
  /lib/ld-linux.so.2 (0xb76e5000)
  lukas@ubuntuvm:~/haproxy$



 Regards,

 Lukas



Re: Virtual Hosting and logs

2012-01-13 Thread Joseph Hardeman
Hey Willy,

LOL. Then I was confused by other comments I got back when I posted about
analyzing the logs the other day. :-)

You're right about syslog-ng; I would definitely recommend it to anyone too.

Joe

On Fri, Jan 13, 2012 at 1:54 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi Joe,

 On Thu, Jan 12, 2012 at 08:40:01PM -0500, Joseph Hardeman wrote:
  Hey Chris,
 
  What flavor of linux will you be putting syslog-ng on?  Be sure the
  syslog-ng you install can handle multi-threading of its processes, so
  version 3.0 or newer I believe, otherwise it will eat up all of 1 CPU and
  could most certainly lose logs then if you have a lot of traffic going
  through haproxy.

 My experience with syslog-ng has already been extremely good since version
 1.4 around 10 years ago. I remember reaching 2 logs per second with
 zero losses on a pentium-3 933 MHz. You need to tune it to use large
 buffers to cover disk latency, and that's all. Syslog-ng is an excellent
 piece of software, which is why I always recommend it to everyone who needs
 high logging rates.

  We have it setup for one of our customers now, actually I just finished
  setting it up four days ago and I have syslog-ng splitting out logs per
  hour.  I don't really see much in the way of missing logs, if anything
 they
  now have more information than they were getting for the visits to their
  site from Google Analytics.
 
  But just as an idea, using option httplog clf in the listen section for
  mode http, yesterday I receiving around 12G of logs from a single haproxy
  box while today they are at 4.9G and the day isn't over yet.  So today
 may
  end up around 10G as the west coast is now getting off of work.  And the
  clf option sends through less data than the normal option httplog so the
  amount of data is a bit lower than if you log normal logs from haproxy.

 This point surprises me a little bit because CLF logs contain the same info
 with more delimiters. Maybe they compress better, but I'm surprised you find
 them smaller. For instance:

 normal:
  Jan 13 07:52:50 pcw haproxy[839]: 127.0.0.1:56837 [13/Jan/2012:07:52:46.258]
 echo echo/NOSRV 0/0/0/3325/3789 200 14 - -
  0/0/0/0/0 0/0 GET / HTTP/1.1
 clf:
  Jan 13 07:52:34 pcw haproxy[834]: 127.0.0.1 - - [13/Jan/2012:06:52:31
 +] GET / HTTP/1.1 200 14 - - 56835 759 echo echo NOSRV 0
 0 0 2285 2845  0 0 0 0 0 0 0 - -

 Regards,
 Willy




Re: Virtual Hosting and logs

2012-01-12 Thread Joseph Hardeman
Hi Chris,

If you have a spare nic, you can set this to a different subnet from the
other interfaces and set one on a syslog server, then in the global section
of haproxy setup the logging section, for example:

 log 192.168.5.5:514 local6

Make sure your syslog-ng is set for tcp and udp on 514; then you can use
filters to split out the different logs based on the receiving server's name
in the message in syslog-ng.

Joe
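A hedged syslog-ng 3.x fragment for the per-host splitting described above; the source name, match string, and file path are assumptions:

```
source s_haproxy {
    udp(ip(192.168.5.5) port(514));
    tcp(ip(192.168.5.5) port(514));
};
# split by the frontend/listener name haproxy puts in the message
filter f_site1 { match("site1_frontend" value("MESSAGE")); };
destination d_site1 { file("/var/log/haproxy/site1.log"); };
log { source(s_haproxy); filter(f_site1); destination(d_site1); };
```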


On Wed, Jan 11, 2012 at 7:34 PM, Chris Miller ct...@scratchspace.com wrote:


 We're looking to utilize access logs from HAProxy rather than from
 the backend application servers. It appears we can set logging
 directives to one syslog host per listen directive, this doesn't
 really help us split into separate logs per host. One thought it to
 use syslog-ng which has filters that would allow this, but at an
 unknown overhead for a high traffic load balancer. Before we
 reinvent the wheel, I just wanted to see if anyone has a recommended
 way of addressing this. I was unable to find anything on Google...

 Regards,
Chris

 Chris Miller
 President - Rocket Scientist
 ScratchSpace Inc.
 (831) 621-7928
 http://www.scratchspace.com





Re: Virtual Hosting and logs

2012-01-12 Thread Joseph Hardeman
Hey Chris,

What flavor of linux will you be putting syslog-ng on?  Be sure the
syslog-ng you install can handle multi-threading of its processes, so
version 3.0 or newer I believe, otherwise it will eat up all of 1 CPU and
could most certainly lose logs then if you have a lot of traffic going
through haproxy.

We have it setup for one of our customers now, actually I just finished
setting it up four days ago and I have syslog-ng splitting out logs per
hour.  I don't really see much in the way of missing logs, if anything they
now have more information than they were getting for the visits to their
site from Google Analytics.

But just as an idea: using option httplog clf in the listen section for
mode http, yesterday I received around 12G of logs from a single haproxy
box, while today they are at 4.9G and the day isn't over yet. So today may
end up around 10G, as the west coast is now getting off work. And the
clf option sends through less data than the normal option httplog, so the
amount of data is a bit lower than if you log normal logs from haproxy.

Joe

On Thu, Jan 12, 2012 at 7:03 PM, Chris Miller ct...@scratchspace.com wrote:

 On 1/12/2012 3:54 PM, Joseph Hardeman wrote:

 Hi Chris,

 If you have a spare nic, you can set this to a different subnet from the
 other interfaces and set one on a syslog server, then in the global section
 of haproxy setup the logging section, for example:

  log 192.168.5.5:514 local6

 Make sure your syslog-ng is set for tcp and udp on 514, then you can use
 filters to split out the different logs based on the receiving server's name in
 the message in syslog-ng.


 This was my thought, I'm just concerned about how syslog-ng will handle
 the traffic, as well as any related packet loss since syslog is all udp.
 Sounds like you've implemented this before, has the above been an issue?

 Regards,
   Chris

 Chris Miller
 President - Rocket Scientist
 ScratchSpace Inc.
 (831) 621-7928
 http://www.scratchspace.com




Parsing Logs

2012-01-09 Thread Joseph Hardeman
Hi Everyone,

I was wondering if anyone has a way to parse the logs and present them in a
friendly format?  Such as with AWStats or another log parser.

Thanks

Joe
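No parser is named in the archived thread; haproxy's source tree ships a `halog` tool for this. As a hedged illustration only, a minimal Python sketch that tallies status codes per server from httplog lines (the regex assumes the default httplog layout and is not from the thread):

```python
import re
from collections import Counter

# Pattern for the default httplog layout (an assumption; adjust to your
# log format): client [date] frontend backend/server timers status bytes
LOG = re.compile(
    r'haproxy\[\d+\]: (?P<client>\S+) \[(?P<ts>[^\]]+)\] '
    r'(?P<frontend>\S+) (?P<backend>[^/ ]+)/(?P<server>\S+) '
    r'(?P<timers>[-\d/+]+) (?P<status>\d{3}) (?P<bytes>\d+)'
)

def summarize(lines):
    """Tally HTTP status codes per backend server."""
    counts = Counter()
    for line in lines:
        m = LOG.search(line)
        if m:
            counts[(m.group('server'), m.group('status'))] += 1
    return counts
```

The resulting counter can then be dumped into whatever report format AWStats-style tooling expects.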


Re: Linux routing performace

2011-05-03 Thread Joseph Hardeman
Hi James,

I would agree with jw. If your internal network is all on the same subnet,
you don't need the second gateway. If you are routing to different
subnets on the internal network, you can simply add route statements
pointing those routes at the internal router instead of adding a second
gateway on the haproxy server.

For instance:

route add -net 192.168.1.16 netmask 255.255.255.240 gw 10.0.0.1

Joe

On Tue, May 3, 2011 at 10:39 PM, Jon Watte jwa...@imvu.com wrote:

 Does the internal network need a gateway at all?

 We run a very similar set-up, HAProxy listening on a public network, and
 forwarding TCP connections to servers on an internal network. Because all
 the servers are on the same 10/8 subnet, no default gateway is needed.

 Sincerely,

 jw


 Jon Watte, IMVU.com
 We're looking for awesome people! http://www.imvu.com/jobs/




 On Tue, May 3, 2011 at 7:41 AM, James Bardin jbar...@bu.edu wrote:

 Hello,

 This isn't necessarily an haproxy question, but I'm having trouble
 finding a good resource, so I'm hoping some of the other experienced
 people on this list may be able to help.

 Setup:
 I have a load balancer configuration that needs to be multi-homed
 across a private and public network. Both networks have strict reverse
 path checking, so packets must be routed out their corresponding
 interface, instead of a single default (each interface essentially has
 its own default gateway).

 The public net is eth0, so it gets the real default gateway. The
 routing rules take any private-net packets, and send them out the
 correct interface, to the private-net gateway.

 
 ip route add default via 10.0.0.1 dev eth1 table 10
 ip rule add from 10.0.0.0/8 table 10
 

 Result:
 What I've noticed is that any traffic handled by this one routing
 decision drops the overall throughput to about 30% (it also seems to add
 about .5ms to the rtt). Haproxy can handle about 1.5Gb/s of tcp
 traffic on the public network, but only about 500Mb/s through the
 private (there's an even greater skew when I remove haproxy, because
 my link is close to 3Gb/s). Adding another cpu, and using interrupt
 coalescing reduced the system cpu time, and brought down the
 context-switches, but didn't increase performance at all.

 Any other tuning options I might try? I'm running the latest RHEL5
 kernel at the moment (I haven't tried bringing up new machines with a
 newer kernel yet).


 Thanks,

 --
 James Bardin jbar...@bu.edu
 Systems Engineer
 Boston University IST





Re: using haproxy for https

2011-04-11 Thread Joseph Hardeman
Hi,

Considering these are for a customer and they have already purchased their
certs, I don't want to go through the hassle of converting them and causing
them any issues.

Now we can stick with the examples on the haproxy site using mode tcp, but I
was wondering: is there a way, via ACLs or something, to read the requested
domain name and send that traffic to a specific server or set of servers?

For example:

listen  cust1_443
mode tcp
bind 0.0.0.0:443
option ssl-hello-chk
balance roundrobin
timeout client 70s
timeout server 70s
timeout connect 30s
# some sort of check here for specific domain name
server IIS1-443 192.168.0.206:443 check inter 5000 fall 3 rise 1 maxconn 300
server IIS2-443 192.168.0.207:443 check inter 5000 fall 3 rise 1 maxconn 300
# some sort of check here for specific domain name
server IIS1-443 192.168.0.208:443 check inter 5000 fall 3 rise 1 maxconn 300
server IIS2-443 192.168.0.209:443 check inter 5000 fall 3 rise 1 maxconn 300

Just thinking that if I could do that, it would save wasting IPs on
applying a different one to the haproxy system and then another couple
to the IIS servers.

Anyway, I would appreciate some insight and advice on whether this can be
accomplished in this sort of fashion.

Thanks

Joe
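The thread predates it, but later 1.5-dev releases added exactly this check: reading the SNI name from the TLS ClientHello in mode tcp. A hedged sketch; the domains and backend names are assumptions:

```
frontend fe_https
    mode tcp
    bind 0.0.0.0:443
    # hold the connection briefly so the ClientHello can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_cust1 if { req_ssl_sni -i www.cust1.example }
    use_backend bk_cust2 if { req_ssl_sni -i www.cust2.example }
    default_backend bk_cust1
```

This only works when clients send SNI, which very old browsers did not.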


On Sun, Apr 10, 2011 at 5:14 PM, Brian Carpio bcar...@broadhop.com wrote:

 Of course you can export the cert and private keys from IIS and use them in
 stunnel. You will need to use OpenSSL to convert the certificate but it will
 work.

 Sent from my iPhone

 On Apr 10, 2011, at 11:59 AM, Joseph Hardeman jwharde...@gmail.com
 wrote:

 Hi Guys

 The problem is that this is for a customer who is running IIS and already
 has all their certs built for IIS; I don't know if the IIS cert would work
 with stunnel.

 I tried the following configuration, which I had found and was told was
 working, but I am getting "SSL too long" errors:

 #listen cust1_443
 #maxconn 32000
  #bind 0.0.0.0:443
 #mode http
 #cookie SERVERID insert indirect nocache
 ##cookie SERVERID rewrite nocache
 #timeout client 70s
 #timeout server 70s
 #timeout connect 30s
 #balance source
 #reqadd X-Forwarded-Proto:\ https
 #reqadd SSL-TERMINATION:\ ON
 #server IIS1-443 192.168.0.206:443 cookie iis1ssl check inter 5000
 fall 3 rise 1 maxconn 30
 ##server IIS2-443 192.168.0.207:443 cookie iis2ssl check inter
 5000 fall 3 rise 1 maxconn 30
 #option abortonclose
 #option httpclose
 #option forwardfor
 #retries 3
 #option redispatch
 #log global
 #option httplog
 #option ssl-hello-chk
 #option dontlognull


 With the second IIS server commented out, they are able to serve one of
 their largest customers with their SSL site, but I want to be able to load
 balance the requests and at least pin each visitor to the IIS server they
 are first sent to.

 listen  cust1_443
 mode tcp
 bind 0.0.0.0:443
 option ssl-hello-chk
 balance roundrobin
 server IIS1-443 192.168.0.206:443 check inter 5000 fall 3 rise 1
 maxconn 300
 #   server IIS2-443 192.168.0.207:443 check inter 5000 fall 3 rise 1
 maxconn 300
 timeout client 70s
 timeout server 70s
 timeout connect 30s

 Any ideas or thoughts on this?

 Thanks

 JOe


 On Sun, Apr 10, 2011 at 10:26 AM, Brian Carpio bcar...@broadhop.com wrote:

 You probably need to ask that question on the stunnel mailing list.


 Sent from my iPhone

 On Apr 10, 2011, at 8:20 AM, German Gutierrez germ...@olx.com wrote:

  BTW, will this patch ever go upstream? Why does stunnel not have this
 already?
 
  On Sat, Apr 9, 2011 at 11:43 PM, Vivek Malik vivek.ma...@gmail.com wrote:
  Joe,
  You need to run as many stunnel instances as the number of SSL
  certificates. If the sites share an SSL certificate, then one stunnel
  instance will do.
  I run stunnel 4.32 with patch from
 http://haproxy.1wt.eu/download/patches/
  on port 443 and forward it to port 81 on the same machine which is
 bound to
  haproxy.
  My stunnel config looks like
  cert = /etc/stunnel.pem
  sslVersion = all
  chroot = /var/lib/stunnel/
  setuid = stunnel
  setgid = stunnel
  pid = /stunnel.pid
  socket = l:TCP_NODELAY=1
  socket = r:TCP_NODELAY=1
  [https]
  accept  = 443
  connect = 127.0.0.1:81
  TIMEOUTclose = 0
  xforwardedfor = yes
  Note that xforwardedfor option only works after the patch is installed.
  My
  haproxy config looks like
  frontend http
  bind 0.0.0.0:80
  reqidel ^X-Forwarded-Proto:.*
  reqadd X-Forwarded-Proto:\ HTTP
  option forwardfor
  frontend

using haproxy for https

2011-04-09 Thread Joseph Hardeman
Hi Guys,

I was wondering if someone has a good example I could use for proxying https
traffic.  We are trying to proxy multiple sites that use https and I was
hoping for a way to see how to proxy that traffic between multiple IIS
servers without having to setup many different backend sections.  The way
the sites are setup they use a couple of cookies but mostly session
variables to track the user as they do their thing.  Either I need to be
able to pin the user to a single server using the mode tcp function when
they come in or be able to use some form of mode http that doesn't break the
SSL function.

This morning around 5am, I got one site running with only 1 backend using
tcp but I really need to be able to load balance it between multiple
servers.

Thanks

Joe


Question about passing traffic

2011-02-24 Thread Joseph Hardeman

Hi guys,

I have been asked whether it is possible for Haproxy to receive NFS-over-TCP
traffic from servers, pass that traffic to a storage cluster, and then have
the cluster send the data directly back to the servers, in much the same way
LVS-DR does.


So the flow would go something like this:

Server Pool --> Haproxy --> Storage Cluster VIPs
     ^                             |
     +-----------------------------+

I know that Willy has tested Haproxy to 10G throughput, but they don't
want Haproxy to become an I/O bottleneck as they scale their
application. If someone has a recommendation for something other than
Haproxy or round-robin DNS for this sort of connection to the storage
cluster, I would love to hear it. They enjoy Haproxy for the other
applications they are using it for, and it is working great.


Thanks

Joe



Re: Source IP instead of Haproxy server IP

2010-04-07 Thread Joseph Hardeman
Willy,

Thank you for the response. It's interesting that I can't do this with
haproxy; I was successful in doing this with LVS before.

   Web Visitor
   ^  |
   |  |
   |  V
   |Haproxy
   |  /|\
   | / | \
Cluster of servers

I understand that haproxy is a layer 7 proxy, and I am looking at using it
as a transparent forwarding load balancer, at least for this step.

Even with haproxy compiled with tproxy, you mentioned this won't work.

I want to stay with haproxy, but at this first step I need haproxy to pass
the visitor's IP as the source to the next set of systems instead of
replacing it with the haproxy server's IP address.

Thanks again.

Joe


 From: Willy Tarreau w...@1wt.eu
 Date: Tue, 6 Apr 2010 07:10:04 +0200
 To: Joseph Hardeman jharde...@colocube.com
 Cc: haproxy@formilux.org
 Subject: Re: Source IP instead of Haproxy server IP
 
 On Tue, Apr 06, 2010 at 07:02:20AM +0200, Willy Tarreau wrote:
 They are wanting their systems to send the data back to the visitor instead
 of passing it back through haproxy.
 
 Oops, sorry, I did not notice the end of the question. It is not
 possible to send the data back to the client because it is not the
 same TCP connection, so it's not a matter of using one address or
 the other.
 
 There is one connection from the client to haproxy and another one
 from haproxy to the server. And even if you use the TPROXY feature,
 the return traffic must still pass through haproxy.
 
 This will be true for any layer7 load balancer BTW : the LB must
 first accept the client's connection to find the contents, and by
 doing so, it chooses TCP sequence numbers that will be different
 from those that the final server will choose (and a lot of other
 settings differ). So the server needs to pass through the LB for
 the return traffic so that the LB can respond to the client with
 its own settings.
 
 If your customer is worried about the bandwidth, you should build
 with LINUX_SPLICE and use a kernel >= 2.6.27.x which adds support
 for TCP splicing. This is basically as fast as IP forwarding and
 can even be faster on large objects. With this I reached 10 Gbps
 in labs tests, but someone on the list has already reached 5 Gbps
 of production traffic and is permanently above 3 Gbps.
 
 So maybe this is what you're looking for. And yes, this is compatible
 with LINUX_TPROXY, though the few iptables rules may noticeably
 impact performance.
 
 Regards,
 Willy
 




Re: ACL Question

2009-11-09 Thread Joseph Hardeman




Hi Guys,

I appreciate the responses. Over the weekend I decided to test using NFS
and a single caching server for the application caching module, and it
worked great, so I don't have to make haproxy try to send the same request
to multiple servers *S* I just have to send it to a single box now.

I was just curious if it could be done. *S*

Love Haproxy and I recommend it to every one now. 

Joe

Willy Tarreau wrote:

  Hi,

On Fri, Nov 06, 2009 at 11:35:24AM +0100, XANi wrote:
  
  
Hi,

On Thu, 05 Nov 2009 19:44:03 -0500, Joseph Hardeman
jharde...@colocube.com wrote:


  Hi Everyone,

I know you can use acl's to take a request for a file and send it to
a different backend than the normal requests go to, but I was
wondering can an acl be setup so that when a request for a file, say
update.php, is called via the external url, for example:

http://www.example.com/update.php

Instead of sending it to a single server, can you send it to all of
the backend servers at the same time?

(...)
AFAIK there isn't any possibility to do "send request to that backend
AND do something else" (I'd love having the possibility to use external
rewriting software, like squid can).
indeed, it is not possible to play a request multiple times (and this
has nothing to do with ACLs).
What kind of cache do u use? If it's memcached u can make one big
"global" cache quite easily (in most client libs u just need to specify
all servers in the same order), and with other types of cache you would
have to have a script so that when the cache gets updated on one backend
it sends updates to the other ones.
It's often quite common to see people send remote actions to directed
target servers, most often it's just to verify that all servers are
up to date. For this they simply use cookies. If you set a passive
cookie for each of your cache servers, you can decide which one you
use and your script can simply use that :

	cookie SRV
	server cache1 1.1.1.1 cookie c1 ...
	server cache2 1.1.1.2 cookie c2 ...
	server cache3 1.1.1.3 cookie c3 ...

Regards,
Willy

-- 
This message has been scanned for viruses and dangerous content by
MailScanner, and is believed to be clean.





MySQL + Haproxy Question

2009-10-24 Thread Joseph Hardeman

Hey Guys,

I was wondering if there was a way to have Haproxy handle mysql 
requests.  I know that I can use the TCP option instead of HTTP and it 
will work, but I was wondering if anyone has a way to make haproxy send 
all requests for Select statements to a set of servers and all Insert, 
Updates, and Deletes to a master MySQL server.


I was just thinking about it and was wondering if this was possible and 
if anyone has done it.  If you have would you be willing to share how 
your setup is.


Thanks

Joe

--
This message has been scanned for viruses by Colocube's AV Scanner




Re: MySQL + Haproxy Question

2009-10-24 Thread Joseph Hardeman

Hi Mariusz

That's actually what I thought, but I wanted to ask to be sure. *S* I am
going to look into that solution again; the last time I tried it, many
months ago now, I couldn't get it to work right, and I would have had to
replace all of the libmysql*.so files on my web servers.


Thanks for the reply.

Joe

XANi wrote:

Hi
On Sat, 24 Oct 2009 16:01:26 -0400, Joseph Hardeman
jharde...@colocube.com wrote:
  

Hey Guys,

I was wondering if there was a way to have Haproxy handle mysql 
requests.  I know that I can use the TCP option instead of HTTP and

it will work, but I was wondering if anyone has a way to make haproxy
send all requests for Select statements to a set of servers and all
Insert, Updates, and Deletes to a master MySQL server.

I was just thinking about it and was wondering if this was possible
and if anyone has done it.  If you have would you be willing to share
how your setup is.

U can't do that, u either have to use something like 
http://forge.mysql.com/wiki/MySQL_Proxy_RW_Splitting

or (better) rewrite ur app to split write and read requests

Regards
Mariusz
  





Re: drain backend nodes ?

2009-03-26 Thread Joseph Hardeman

Very cool.

Welcome to the community. :-)


Jan-Frode Myklebust wrote:

On 2009-03-26, Joseph Hardeman jharde...@colocube.com wrote:
  
Yes it can, there is an haproxy.conf file which contains the hosts that 
you are proxying the traffic for.  To remove a host, you would edit this 
file, put a # in front of the server(s) you want taken off line and then 
run the following command:





Not quite what I was looking for.. but now I found section 4 (soft stop)
of the architecture.txt which seems exactly like what I was looking for.
Guess it's time to phase out mod_proxy_balancer in favor of HAProxy for
our web-loadbalancing :-)

http://haproxy.1wt.eu/download/1.3/doc/architecture.txt


  -jf



  



begin:vcard
fn:Joseph Hardeman
n:Hardeman;Joseph
org:Colocube, LLC;Operations
adr:;;4311 Communications Dr;Norcross;GA;30093;US
email;internet:jharde...@colocube.com
title:Data Center Manager
tel;work:678-427-5890
tel;cell:678-427-5890
note:This email message is intended for the use of the person to whom it has been sent, and may contain information that is confidential or legally protected. If you are not the intended recipient or have received this message in error, you are not authorized to copy, distribute, or otherwise use this message or its attachments. Please notify the sender immediately by return e-mail and permanently delete this message and any attachments.  Thank you.
x-mozilla-html:FALSE
url:http://www.colocube.com
version:2.1
end:vcard



Re: haproxy + memcached

2009-03-20 Thread Joseph Hardeman

James,

Thank you for the info and the link.  You are right, I don't need to 
even think about load balancing our memcached servers.


Thanks again.

Joe


James Satterfield wrote:
The client should be using a hash to determine which memcached to use 
for a given key. You should not be attempting to load balance 
memcached nodes.
See 
http://code.google.com/p/memcached/wiki/FAQ#Cluster_Architecture_Questions for 
explanations.


James.

On Mar 4, 2009, at 11:12 PM, Joseph Hardeman wrote:


Hi Everyone,

I was wondering if anyone has put a haproxy system in front of 
memcached and how it performed.  I am considering putting a haproxy 
server between our 12 web servers and 2 memcached servers, to spread 
the calls to memcached between the two memcached 
systems.  Does anyone have any recommendations on how this can be 
set up?  Using multiple ports in memcached.


Thanks everyone.

Joe












Re: Multiple Proxies

2009-03-17 Thread Joseph Hardeman

Scott,

John is right; the way to do this is to use either heartbeat or 
keepalived and fail a VIP over to a secondary machine in case the first 
has issues.  Make sure your haproxy config files are identical, and then 
test the failover. 

We use heartbeat for one of our clients and so far any time I have had 
to either fail it over or it failed over on its own, we only lost 1 - 2 
packets.
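For anyone wanting a concrete starting point, here is a bare-bones keepalived VRRP sketch for floating the VIP between the two HAProxy boxes. The interface, router id, and address are all made up; the second box runs the mirror config with `state BACKUP` and a lower priority:

```
# Hedged keepalived sketch for the MASTER box. The BACKUP box uses
# state BACKUP and e.g. priority 100 but the same virtual_router_id.
vrrp_instance HAPROXY_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # the VIP that clients connect to
    }
}
```

When the master stops advertising, the backup claims the VIP within a few advert intervals, which matches the 1-2 lost packets described above.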


If your web servers require visitors to be pinned to one system for 
application reasons, make sure you have cookies set up in haproxy so that 
when it fails over, the secondary haproxy server knows where to send the 
visitor.
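A hedged sketch of that cookie setup: because the cookie value names the backend server, whichever HAProxy box holds the VIP after a failover can route the visitor back to the same server. All names and IPs here are invented:

```
# Cookie persistence sketch: HAProxy inserts a SERVERID cookie naming
# the chosen backend, so either proxy box can honour the pinning.
listen web-sticky
    bind 0.0.0.0:80
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 10.0.0.21:80 cookie web1 check
    server web2 10.0.0.22:80 cookie web2 check
```

`indirect` keeps the cookie from being forwarded to the backends, and `nocache` marks the response so shared caches don't store it.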


Joe

John Lauro wrote:


Not built into Haproxy, but you can use heartbeat or keepalived along 
with haproxy for IP takeover on a pair of physical boxes (or VMs).


 


*From:* Scott Pinhorne [mailto:scott.pinho...@voxit.co.uk]
*Sent:* Tuesday, March 17, 2009 10:52 AM
*To:* haproxy@formilux.org
*Subject:* Multiple Proxies

 


Hi All

 

I am using haproxy to load balance/failover on a  couple of my dev 
HTTP servers and it works really well.


I would like to introduce hardware redundancy for the haproxy server, 
is this possible with the software?


 


Best Regards

Scott Pinhorne

 


Tel: 0845 862 0371

 



 


http://www.voxit.co.uk

 


Please consider the environment before printing this email.

PRIVACY AND CONFIDENTIALITY NOTICE

The information in this email is for the named addressee only. As this 
email may contain confidential or privileged information if you are 
not, or suspect that you are not, the named addressee or the person 
responsible for delivering the message to the named addressee, please 
contact us immediately. Please note that we cannot guarantee that this 
message has not been intercepted and amended. The views of the author 
may not necessarily reflect those of VoxIT Ltd.


 


VIRUS NOTICE

The contents of any attachment may contain software viruses, which 
could damage your own computer. While VoxIT Ltd has taken reasonable 
precautions to minimise the risk of software viruses, it cannot accept 
liability for any damage, which you may suffer as a result of such 
viruses. We recommend that you carry out your own virus checks before 
opening any attachment.


 



--
This message has been scanned for viruses and
dangerous content by *VOXIT LIMITED* http://www.voxit.co.uk/, and is
believed to be clean.








Stats Page Explanation

2009-03-04 Thread Joseph Hardeman

Hi

I was wondering if there is a document explaining what the sessions 
counter actually counts. 

I have been doing some testing, and when I go to a single page, the Max 
Sessions counter tends to go up by more than the single request I 
made.  I looked in my logs and saw that the page I requested pulled in 
several js files and a css file along with the main page, plus multiple 
images (which may have been cached on my system), and the max sessions 
counter went up to 6.  I am trying to understand how the counters work 
and what they call a session, so that I can explain it to my supervisors 
and they in turn can explain it to others.


Does haproxy count every request sent through haproxy to get a page, 
image, streaming file, or some other object as a session?


Thanks for any help.

Joe
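In HAProxy terms a session is essentially one accepted client connection, so a browser fetching a page plus its js/css/images over several parallel connections will register several sessions, which matches the count of 6 above. A hedged sketch of enabling the built-in stats page to watch those counters (the port, URI, and credentials are invented):

```
# Stats page sketch: exposes the live/max session counters per
# frontend, backend, and server.
listen stats
    bind 0.0.0.0:8080
    mode http
    stats enable
    stats uri /haproxy?stats
    stats auth admin:changeme   # replace before exposing anywhere
```

Browsing to http://<proxy>:8080/haproxy?stats then shows the Cur/Max session columns updating as objects are fetched.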





haproxy + memcached

2009-03-04 Thread Joseph Hardeman

Hi Everyone,

I was wondering if anyone has put a haproxy system in front of memcached 
and how it performed.  I am considering putting a haproxy server between 
our 12 web servers and 2 memcached servers, to spread the 
calls to memcached between the two memcached systems.  Does anyone have 
any recommendations on how this can be set up?  Using multiple ports in 
memcached.


Thanks everyone.

Joe


