Re: performance quick fix?

2012-01-24 Thread James Bardin
On Tue, Jan 24, 2012 at 1:43 PM, Coates, James jcoa...@icgcommerce.com wrote:
 We recently moved to Exchange 2010 and decided to balance the exchange
 servers behind haproxy.  We’re currently running haproxy on an old Dell
 server with a Pentium D 915 2.8GHz and we’re starting to pin the CPU now
 that most users have migrated to Exchange 2010.


Are your balancers handling SSL as well, or are you just using HAProxy
in TCP mode?
What kind of numbers do you see for concurrent connections and traffic?

I'm not sure if I can compare with the hardware you're currently on,
but I do have HAProxy in front of some Exchange2010 servers running in
our VMware infrastructure (I'm not sure of the hardware at the
moment).



-jim



Re: source ip - tcp mode

2012-01-18 Thread James Bardin
On Wed, Jan 18, 2012 at 5:43 AM, Karthik Iyer karthiksz...@gmail.com wrote:
 Is there any way to get the source ip exposed to the nodes for tcp mode in
 some way while running haproxy as non-tproxy, for haproxy 1.4 ?

The most common use for TCP mode is balancing SSL traffic, where
having the IP would do you no good, since you can't insert it into the
stream.

The patch that you're thinking of is for stunnel[1], to add an
x-forwarded-for header.

You could also use stud[2], which can add a header, or use the proxy-protocol.

1. http://haproxy.1wt.eu/download/patches/
2. https://github.com/bumptech/stud

I've also used nginx with great success as an ssl wrapper, which can
give you some added functionality.
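
For reference, a minimal nginx ssl-wrapper sketch (certificate paths
and the upstream address are placeholders), passing the client address
along in a header:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        # forward the real client address to the plain-http backend
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://127.0.0.1:8080;
    }
}
```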


-jim



Re: HAProxy and TIME_WAIT

2011-11-28 Thread James Bardin
On Mon, Nov 28, 2011 at 11:50 AM, Daniel Rankov daniel.ran...@gmail.com wrote:
 And on a
 loaded server this will cause trouble. Isn't there a chance for HAProxy to
 send RST, so that the connection will be dropped ?

An RST packet won't make the TIME_WAIT socket disappear. It's part of
the TCP protocol, and a socket will sit in that state for a fixed
period (60 seconds on Linux) after closing.


You can put `net.ipv4.tcp_tw_reuse = 1` in your sysctl.conf to allow
sockets in TIME_WAIT to be reused as needed.
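
For reference, a sketch of the sysctl change (it takes effect after
`sysctl -p`, or immediately via `sysctl -w`):

```
# /etc/sysctl.conf
# Allow new outgoing connections to reuse sockets stuck in TIME_WAIT
# (applies to the connections haproxy opens toward the backends).
net.ipv4.tcp_tw_reuse = 1
```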

-jim



Re: HAProxy and TIME_WAIT

2011-11-28 Thread James Bardin
On Mon, Nov 28, 2011 at 12:28 PM, Daniel Rankov daniel.ran...@gmail.com wrote:
 Yeap, I'm aware of net.ipv4.tcp_tw_reuse and the need of TIME_WAIT state,
 but still if there is a way to send a RST /either configuration or compile
 parameter/ the connection will be destroyed.


TIME_WAIT is usually not a problem if port reuse is enabled (I haven't
seen an example otherwise), and you will usually have FIN_WAIT1
sockets if there is a problem with connections terminating badly.

Now that I recall it, the socket behavior of always sending an RST on
close is set with SO_LINGER (a zero linger time), and I noticed that
haproxy has an 'option nolinger' for both frontends and backends.
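
A minimal sketch of where that option goes (the backend name and
server address are made up; note the haproxy docs caution that
nolinger can lose data on close, so enable it only when needed):

```
backend bk_exchange
    option nolinger          # close with an RST instead of lingering
    server ex1 10.0.0.11:80 check
```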


-jim



Re: unknown keyword 'userlist' in '****' section

2011-08-05 Thread James Bardin
On Fri, Aug 5, 2011 at 1:10 PM, Tom Sztur tsz...@gmail.com wrote:
 correction,
 Version is HA-Proxy version 1.3.15.2

Userlist is not an option in 1.3.
See your version's documentation:
http://haproxy.1wt.eu/download/1.3/doc/configuration.txt



Re: maintenance mode and server affinity

2011-08-02 Thread James Bardin
On Tue, Aug 2, 2011 at 2:52 AM, Willy Tarreau w...@1wt.eu wrote:


 Are you sure your server was set in maintenance mode, did you not just
 set its weight to zero ?


Yes. I've confirmed that when using a stick-table for persistence,
putting a server in maintenance mode does not block traffic from
existing sessions.

I'm using the latest stable 1.4.15, built on centos5.



 I think the later can be done on the stats socket using clear table,
 because you can specify a rule to select which entries to clear, so you
 can clear any entry matching your server's ID. But it's only in 1.5, not
 in a stable release.

I saw the clear table command in the dev version after I sent this.
Since it seems that I'm experiencing a bug in maintenance mode, the
proper behavior combined with clear table would be everything I need.
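
For reference, a sketch of the 1.5-dev stats-socket usage (the table
name, socket path, and server ID are made up for illustration):

```shell
# Remove every stick-table entry whose affinity points at server ID 2,
# so those clients get re-dispatched on their next connection.
echo "clear table bk_app data.server_id eq 2" | \
    socat stdio unix-connect:/var/run/haproxy.stat
```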


If you need any more info to help troubleshoot this, let me know.

-jim



Re: maintenance mode and server affinity

2011-08-02 Thread James Bardin
On Tue, Aug 2, 2011 at 2:44 PM, Willy Tarreau w...@1wt.eu wrote:


 OK thanks for confirming. Could you check if you have option persist
 somewhere in your config ? From what I can tell from the code, this is
 the only reason why a server set in maintenance mode would be selected :

        if ((srv->state & SRV_RUNNING) ||
            (px->options & PR_O_PERSIST) ||
            (s->flags & SN_FORCE_PRST)) {
                s->flags |= SN_DIRECT | SN_ASSIGNED;
                set_target_server(&s->target, srv);
        }

 - the server does not have the SRV_RUNNING flag in maintenance mode
 - the persist option on the backend might be one reason
 - I'm assuming there is no force-persist rule


OK, that's it.

I didn't realize that was the same code path as for manually disabled
servers. I had option persist in there to prevent a server that misses
a few healthchecks under load from dumping all its clients. Graceful
maintenance is more important than that edge case though, so I'll
remove it.


Thanks!
-jim



maintenance mode and server affinity

2011-08-01 Thread James Bardin
I have a number of instances using tcp mode, and a stick-table on src
ip for affinity. When a server is in maintenance mode, clients with an
existing affinity will still connect to the disabled server, and only
be re-dispatched if the connection fails (and error responses from the
backend are still successful tcp connections).

I've done a few things to stop this traffic when needed:
 - drop the packets on the load balancer with a null route or iptables.
 - block the packets with the firewall on the backend server, and let
the clients get re-dispatched.
 - shut down the services on the backend that would respond, and re-dispatch.
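
For reference, a sketch of the first stop-gap on the load balancer
(address and port are made up):

```shell
# Reject the LB's own connections to the disabled backend with an RST,
# so clients with stale affinity fail fast and get re-dispatched.
iptables -A OUTPUT -d 192.168.1.21 -p tcp --dport 80 \
    -j REJECT --reject-with tcp-reset
```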


Have I missed any configuration in haproxy that will completely stop
traffic to a backend? I have no problem managing this as-is myself,
but having fewer pieces involved makes delegating administration
responsibilities easier.

Willy, is a "block server" option (or maybe a "drop table" to get rid
of affinity sessions) something that could be implemented?


Thanks,
-jim



Re: https from source to destination

2011-07-14 Thread James Bardin
On Thu, Jul 14, 2011 at 4:44 AM, Brane F. Gračnar
brane.grac...@najdi.si wrote:
 I guess your only option is nginx, which supports https upstreams.

I mentioned this earlier, but you can use stunnel in client mode to
connect to a remote https server.

It's unfortunate that nginx doesn't yet support http/1.1 in proxy
mode, as it otherwise makes a pretty good ssl wrapper.

-jim



Re: https from source to destination

2011-07-13 Thread James Bardin
On Wed, Jul 13, 2011 at 5:57 PM, Craig cr...@haquarter.de wrote:

 I hereby request the feature to do https to backends
 Sometimes it's really troublesome not being able to do that, even more
 so if a different party administrates the servers.


I'm not sure if you're serious or not, but if another party is
administrating the backend servers, it seems likely that you won't
have the private key for the ssl certificate.

If you don't trust the transport between you and the backend, you can
use stunnel in client mode to tunnel local traffic to a remote server.
This isn't really a function for the load-balancer itself to handle.
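
For reference, a minimal stunnel client-mode sketch (the service name,
ports, and host are placeholders):

```
; stunnel.conf -- accept plain traffic locally, re-encrypt it toward
; the remote https backend.
[wrap-backend]
client  = yes
accept  = 127.0.0.1:8080
connect = backend.example.com:443
```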

-jim



Re: https from source to destination

2011-07-13 Thread James Bardin
On Wed, Jul 13, 2011 at 8:20 PM, Craig cr...@haquarter.de wrote:

 I'm not sure if you're serious or not, but If another party as
 administrating the backend servers, it seems likely that you won't
 have the private key for the ssl certificate.

 Yea I am, I wouldn't dare to write shitty semi-joke mails on Willy's list.


No prob (and I hope no offense)

 In a big company, the loadbalancer could be managed by the network team,
 and the servers by the application team, that's what I meant; you will
 have the keys. Making HTTPS connections to backends would be really
 nice, because quite often you have rules on your webservers that will
 redirect HTTP traffic to HTTPS ... which causes an endless loop, if you
 terminate that traffic on the loadbalancer and send it via HTTP to your
 backend, of course. Surely, you can add headers with the loadbalancer,
 so that the backend knows if the connection is already secure or needs
 to get redirected, but then there are sometimes also funny application
 servers that still go nuts at you. Or your apache config is being send
 to you by a managed hosting customer and you have to patch it all the
 time for the header check. It's much nicer to just tell haproxy that
 those backend servers are HTTPS instead of HTTP. Sure, it takes more
 ressources, and might slow things down a bit, but if it's a system that
 runs at 5% of available ressources anyways, you won't care much. Even if
 so, you might rather invest 5000$ in hardware to keep the performance
 as-is than to create a sucky workflow and/or piss off your customers
 because you have a sucky loadbalancer that cannot loadbalance https
 properly and makes us change our apache config which took us three days
 and no one pays us that precious time.
 Surely, you could just layer-3 balance, but that takes a lot of features
 away and you might have to run a caching instance like varnish or squid
 running, too.

OK, I see the use case. IMHO though, I'd like to see these things
remain separate (I did learn my trade on unix, and that philosophy has
stuck). I could see combining them if there's a compelling performance
case, probably because of a shorter data pipeline, but SSL is the cpu
cost here, not the extra memory copies or buffering (we'll just have
to wait for some tests ;).

 Some IT contracts suck. ;)


Yes, they do :)

-- 
James Bardin jbar...@bu.edu
Systems Engineer
Boston University IST



Re: more than one haproxy instance on one host/IP

2011-07-11 Thread James Bardin
On Mon, Jul 11, 2011 at 2:18 PM, Alexander Hollerith
alex.holler...@gmail.com wrote:
 Thank you very much for pointing me into that direction. I think that 
 definitely answers my question. Since haproxy itself might keep more than one 
 process alive after dealing with an -sf (at least for as long as it takes 
 to finish the work) I assume that keeping alive more than one process, in 
 principle, can't be a problem :)


Another FYI: the included init script does this automatically on
reload, and prefaces it with a config check to prevent killing the
running process altogether.
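
For reference, the reload amounts to something like this (paths are
the usual defaults and may differ on your system):

```shell
# Validate the new config first; only then start a new process that
# takes over the sockets and tells the old PIDs (-sf) to finish up.
haproxy -c -f /etc/haproxy/haproxy.cfg && \
    haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
```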

-jim



roundrobin vs leastconn

2011-06-17 Thread James Bardin
This is more for my own curiosity (I'm not advocating a change in the
haproxy defaults) -
Is there any inherent drawback to always using leastconn instead of
roundrobin? Since it uses roundrobin internally when servers are
equally loaded, it seems that this would be the most fair algorithm in
most cases, even in plain http, where it avoids servers getting stuck
with a number of slow connections.
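
For context, the switch itself is a one-line change per backend
(backend and server names are made up):

```
backend app
    balance leastconn        # instead of the default roundrobin
    server app1 10.0.0.21:80 check
    server app2 10.0.0.22:80 check
```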

thanks,
-jim



Re: roundrobin vs leastconn

2011-06-17 Thread James Bardin
On Fri, Jun 17, 2011 at 2:32 PM, Willy Tarreau w...@1wt.eu wrote:

 The round robin of the leastconn will not apply weights, it's only
 used between servers which have the exact same amount of connections
 in order to avoid the common syndrome of the low load always hitting
 the same server because there's either 0 or 1 connection.


Ahh, that's the piece I was missing, and hadn't found yet in the code.

 Also there are situations where you really want to ensure that only
 round robin will be used. For instance, if you place your visitors
 on servers and then do cookie-based persistence, you absolutely want
 to ensure the smoothest possible distribution, which round robin
 achieves. If you'd do leastconn on that, sometimes you'd place a
 user on an apparently less loaded server at the moment you have to
 select the server, resulting in an imbalance between all servers.


Makes perfect sense, I just hadn't thought it through enough.

Thanks!



Re: nice wiki doc of haproxy

2011-06-15 Thread James Bardin
Just throwing in my $.02: how about converting the documentation to
something more easily parse-able, like markdown?

--
-jim



Re: Help on SSL termination and balance source

2011-06-09 Thread James Bardin
On Thu, Jun 9, 2011 at 7:33 AM, habeeb rahman pk.h...@gmail.com wrote:

 apache rewrite rule:
  RewriteRule ^/(.*)$ http://127.0.0.1:2443%{REQUEST_URI} [P,QSA,L]


Why are you using a rewrite instead of mod_proxy's ProxyPass?
ProxyPass does some nice things by default, like adding the
X-Forwarded-For header, which provides the address of the client.
Otherwise, you will need to do this manually with rewrite rules.
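
For reference, the mod_proxy equivalent of that rewrite (assuming the
same local upstream as above):

```
# mod_proxy adds X-Forwarded-For (plus X-Forwarded-Host/-Server) itself
ProxyPass        / http://127.0.0.1:2443/
ProxyPassReverse / http://127.0.0.1:2443/
```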

-jim



Re: Linux routing performace

2011-05-05 Thread James Bardin
On Thu, May 5, 2011 at 7:02 AM, Willy Tarreau w...@1wt.eu wrote:


 I have no idea why ip rules impact performance that much for you.
 Anyway, since you're dealing with two interfaces, you can explicitly
 bind haproxy to each of them and still have a default route on each
 interface. The trick is to use a different metric so that you can have
 two default routes.

 For instance :

  ip route add default via 10.0.0.1 dev eth0
  ip route add default via 192.168.0.1 dev eth1 metric 2


I hadn't tried a default with a different metric, but no, it still
doesn't work. Packets outside of the local subnets still end up
leaving through the first default route, which is why I have to move
the packets through another routing table with its own default. Note
that this, and the previous suggestions, do work on most people's
networks, but our strict reverse path checking makes this more
complex.
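
For reference, the policy-routing workaround looks something like this
(addresses and the table number are made up):

```shell
# Give the second interface its own table and default route, and send
# traffic sourced from its address through it, so replies leave the
# interface they arrived on and pass strict reverse-path checks.
ip route add default via 192.168.0.1 dev eth1 table 100
ip rule  add from 192.168.0.10 lookup 100
```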


Thanks Willy,
-jim



Re: Linux routing performace

2011-05-04 Thread James Bardin
Thanks guys,

On Tue, May 3, 2011 at 10:50 PM, Joseph Hardeman jwharde...@gmail.com wrote:


 route add -net 192.168.1.16 netmask 255.255.255.240 gw 10.0.0.1


A simple route doesn't work in this case, as the packets have to leave
via the correct interface as well, or they will be dropped by the
reverse-path checking. Linux will route them correctly by default, but
they will still always leave via the interface with the default
gateway.



 On Tue, May 3, 2011 at 10:39 PM, Jon Watte jwa...@imvu.com wrote:

 Does the internal network need a gateway at all?

The internal network is routed throughout the campus, so I may have
backend servers with private IPs, which aren't in my subnet.


This isn't the end of the world if it's unsolvable, as I can request
that all load-balancing service IPs be public for now, and spin up
another haproxy pair for private services if there is a specific
requirement.

I was just hoping there was some kernel sysctl or ip parameter that
could affect routing performance. I'm kind of curious as to why this
ip rule impacts performance so much. Maybe reassigning the outgoing
interface is expensive?

Thanks,
-jim



Re: Transparent front end

2011-04-10 Thread James Bardin
Hi Sara,

What you've described is basically what haproxy (or any reverse proxy
for that matter) does. Have you tried using it? Did you have any
problems?

-jim


2011/4/10 sara fahmy geila...@hotmail.com:


 Hi every one
 I want to know is it possible to create a transparent front end? so that if
 the client wants to request the server it would call the back end server
 directly without knowing that his request is passed first to the front end
 then redirected to the back end? if yes, how?
 thanks!




Re: Build error on CentOS 5.5 x86_64 with PCRE support

2011-03-31 Thread James Bardin
On Thu, Mar 31, 2011 at 1:37 PM, g...@desgames.com g...@desgames.com wrote:

 /usr/bin/ld: skipping incompatible /usr/lib/libpcre.so when searching for 
 -lpcre
 /usr/bin/ld: skipping incompatible /usr/lib/libpcre.a when searching for 
 -lpcre


It's looking in /usr/lib, which only has 32-bit libraries.
Try forcing it with USE_PCRE=1 and PCREDIR=/usr/lib64
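
For reference, the full build invocation would look something like
this (the TARGET value is an assumption for a CentOS 5 kernel; the
PCRE flags are as suggested above):

```shell
make TARGET=linux26 USE_PCRE=1 PCREDIR=/usr/lib64
```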

-jim



counter reset on hot reconfiguration

2011-03-25 Thread James Bardin
Is the answer here correct?
http://serverfault.com/questions/205093/restarting-haproxy-without-losing-counters

I would love for the counters to be saved across reloads, but I
haven't seen this in my testing (most extensively on 1.4.11).

Thanks,
-jim



minconn, maxconn and fullconn

2011-03-23 Thread James Bardin
Hello,

I've been going through haproxy in depth recently, but I can't quite
figure out the details with full, min, and maxconn.

First of all, fullconn confuses me, and this example doesn't help

  Example :
 # The servers will accept between 100 and 1000 concurrent connections each
 # and the maximum of 1000 will be reached when the backend reaches 10000
 # connections.
 backend dynamic
    fullconn   10000
    server srv1   dyn1:80 minconn 100 maxconn 1000
    server srv2   dyn2:80 minconn 100 maxconn 1000

What's the point of the fullconn 10000 here? Won't the servers
already be maxed out at 2000 connections, and already at their
respective maximums long before 10000 connections are made?

Is using minconn+maxconn+fullconn simply to give finer-grained control
over resource allocation than you could get with the load-balancing
algo + weights? Is there a common use case for minconn, or is it one
of those options the majority of users never need?


Maxconn can be declared in defaults, frontend, listen, under server,
and global as well. Does the first limit hit take priority; e.g. if I
set maxconn 10 in global, are my *total* connections for everything
limited to 10? Should I set:
  (global maxconn) >= sum(frontend maxconns) >= sum(server maxconns)



Thanks!
-jim