Re: VM Power Control/Elasticity

2015-05-11 Thread Ben Timby
On Mon, May 11, 2015 at 3:47 PM, Nick Couchman 
wrote:

> Thanks for the hints, Ben.  I'll defer to those who are experts about
> whether or not something like that should be part of the core
> functionality; however, it seems that even though this case might not be a
> great one for adding that to the core, there are a multitude of reasons why
> you'd want some sort of trigger mechanism on events within HAProxy.  I
> think people have asked here before about e-mail notifications for downed
> hosts, and I can think of a few other cases outside of my corner case
> that would seem to warrant some generic trigger mechanism within HAProxy.
> Seems if a generic one is implemented the flexibility is there for everyone
> who needs a trigger of some sort or another to use it for whatever purposes
> suit their needs.
>

Especially for down-host notification, there are tools to do that, Nagios
for one. A tool like Nagios can run a command (like a simple shell script
to read stats from HAProxy) and then conditionally run another command
(like one that starts some VMs).

You may find that another tool better suited to your needs already
exists. The shell scripting required to get the needed metrics into the
tool and run hypervisor commands to control your VMs may be minimal. It
may even be a tool more closely related to your hypervisor, the software
that manages your VMs. It may be able to run a shell script and use the
exit code to provision more or fewer machines of a certain class. Then
all you need is a script to read stats and produce an exit code.


Re: VM Power Control/Elasticity

2015-05-11 Thread Ben Timby
Nick,

Here is some information on using socat to interact with the stats socket.
This might be useful for shell scripting.

http://www.mgoff.in/2010/07/14/haproxy-gathering-stats-using-socat/
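To make that concrete, here is a hedged sketch of the kind of pipeline the
article describes; the socket path, backend/server names, and CSV column
positions are all assumptions (verify them against the "stats socket"
directive and the CSV header your haproxy build emits):

```shell
# Dump the stats CSV over the admin socket (path is an assumption; it
# must match the "stats socket" directive in your global section):
#   echo "show stat" | socat stdio /var/run/haproxy.sock
#
# Parsing a captured stats line: field 1 is the proxy name, field 2 the
# server name, and field 18 the status (column positions can vary by
# haproxy version).
line='be_app,web1,0,0,1,5,,100,1024,4096,0,0,,0,0,0,0,UP'
printf '%s\n' "$line" | awk -F, '{print $2, $18}'
```

Wrapped in a cron job, the parsed status can drive whatever hypervisor
command you need.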


Re: VM Power Control/Elasticity

2015-05-11 Thread Ben Timby
Nick,

HAProxy provides statistics via socket or HTTP interface. You can easily
monitor these stats and run scripts. Some cron jobs and regex should
suffice. Specific cases like this are not something I would imagine
belongs in the HAProxy core, since they are not directly related to load
balancing but are more of a site-specific requirement.

You could simply configure all 10 servers, 5 of which would mostly be in
the down state until your script brought them UP. HAProxy will happily
balance traffic to all UP servers, and shift it away from DOWN servers.

You can also use the stats socket to mark machines as up or down, so that
traffic can be gracefully shifted before and after VM power up/down. However,
this may or may not be necessary depending on the services you are load
balancing.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2
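For illustration only, the enable/disable commands look roughly like this
(the socket path and backend/server names are assumptions, and the stats
socket must be bound with "level admin" for these commands to be accepted):

```shell
echo "disable server www-backend/vm05" | socat stdio /var/run/haproxy.sock
# ... power the VM down, later power it back up ...
echo "enable server www-backend/vm05" | socat stdio /var/run/haproxy.sock
```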


Re: Is FTP through haproxy at all viable?

2015-05-08 Thread Ben Timby
With some iptables rules you can use FTP active and passive mode via
haproxy.

The key is to assign a unique passive port range to each backend, then
port forward those ranges. You must be able to configure each FTP server
daemon with its own range.

You must also be able to configure your FTP daemon to masquerade as the
load balancer so that it sends the proper address for port commands etc.
Most FTP servers support the necessary options.
On May 8, 2015 10:20 AM, "Baptiste"  wrote:

> On Fri, May 8, 2015 at 4:02 PM, Shawn Heisey  wrote:
> > I have a load balancer setup with both haproxy and LVS-NAT.  The LVS-NAT
> > is giving us high availability for FTP.
> >
> > When I tried migrating everything from CentOS 5, where it all works, to
> > Ubuntu 14 (for the newer kernel and because I find debian-based systems
> > far easier to use), everything worked except passive FTP.
> >
> > Is there a viable solution for FTP through haproxy?  The machine has
> > public IP addresses on one side and private on the other, and is
> > configured with ip forwarding turned on, so the redundant pair acts as
> > the default gateway for the backend machines.  Everything is behind a
> > Cisco firewall, so I have disabled the ufw firewall that Ubuntu includes.
> >
> > Alternatively, if someone can help me make passive FTP work through
> > LVS-NAT like it does on CentOS, I am fine with that.  I've asked for
> > help on that here:
> >
> >
> http://askubuntu.com/questions/620853/lvs-nat-doesnt-work-with-passive-ftp-active-ftp-is-fine
> >
> > Thanks,
> > Shawn
> >
>
>
> Hi Shawn,
>
> Well, FTP can work in active mode only.
> To configure it, you must open port 21 and the active ports where your
> FTP server expects the user to get connected to.
>
> Baptiste
>
>


Re: HAproxy and Mysql

2014-04-24 Thread Ben Timby
My only feedback is that haproxy has a lot of features that make it useful
as a MySQL frontend. The stats are great for sizing and monitoring
purposes. Timeouts and queuing are also great for managing load etc. I used
to run haproxy in front of a single MySQL instance for those features alone
ala:

http://flavio.tordini.org/a-more-stable-mysql-with-haproxy

If you are looking to load balance multiple database servers, I think
haproxy is a good choice for doing that.

It will work great as long as everything is functioning normally, but you
will need to put a lot of work into handling failures, master migration,
etc.; haproxy has nothing directly to do with those things. Here is some
information on handling failure cases using a simple agent along with
haproxy. It is old information, but should be useful.

http://www.alexwilliams.ca/blog/2009/08/10/using-haproxy-for-mysql-failover-and-redundancy/
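As an illustration of running haproxy in front of a single MySQL instance
for the stats, timeout, and queuing benefits, a minimal sketch (addresses,
timeouts, and the check user are assumptions; `option mysql-check` needs a
user on the MySQL server that is allowed to connect):

```
listen mysql-front
    bind 127.0.0.1:3307
    mode tcp
    option tcplog
    balance leastconn
    timeout connect 5s
    timeout client  10m
    timeout server  10m
    option mysql-check user haproxy_check
    server mysql01 10.0.0.11:3306 check inter 5s maxconn 150
```

The per-server maxconn is what gives you queuing: excess connections wait
in haproxy rather than piling onto MySQL.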




On Thu, Apr 24, 2014 at 9:46 AM, Alexandre  wrote:

> Hello everyone,
>
> I'm looking for documentation to make a load balancer for mysql.
>
> I found this article :
> https://www.digitalocean.com/community/articles/how-to-use-haproxy-to-set-up-mysql-load-balancing--3
>
> What do you think?
>
> We are also testing LVS load balancing for mysql.
>
> Do you have feedback on this load balancer?
>
>
> Thank you
>
> Alexandre
>
>


Re: check works on one backend but not another

2014-02-13 Thread Ben Timby
Baptiste gave you the proper answer already. The SSL backend is using TCP
mode, so the check is a TCP check without the `option httpchk` defined on
the backend, which just checks that the port is open. Add the httpchk
option without check-ssl and you will be all set. Or you can use track to
skip the duplicate check against port 9700.


Re: check works on one backend but not another

2014-02-13 Thread Ben Timby
While this does not answer your question per se, you can use the track
option to eliminate the duplicate check.

In other words, the SSL backend can track the checks done by the non-SSL
backend.

backend nginx-ssl
    mode tcp
    balance leastconn

    server app1 app1.prod:444 track nginx/app1
    server app2 app2.prod:444 track nginx/app2

    server down localhost:81 backup


Re: speeding up failover

2014-02-13 Thread Ben Timby
Read the manual about `rise` and `fall` parameters. These allow you to
control how many successive checks must pass or fail before the server
transitions up or down (rises / falls). The check interval is used as the
check timeout unless you specify a check timeout. See "timeout check" in
the manual.

http://haproxy.1wt.eu/download/1.5/doc/configuration.txt
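A hedged sketch of how those parameters fit together (the server name,
address, and the specific values are assumptions, not recommendations):

```
backend app
    # fail fast: each check must answer within 1s instead of the interval
    timeout check 1s
    # DOWN after 2 consecutive failed checks, UP again after 3 successes
    server web1 10.0.0.1:80 check inter 2s fall 2 rise 3
```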


Re: HAProxy Question

2014-02-06 Thread Ben Timby
TCP mode load balancing treats each TCP quad (source ip/source port,
dest ip/dest port), stream, or flow as a "session"; in other words, the
TCP stream is the basic unit of TCP load balancing.

You can enable the stats http interface and monitor that in your browser
for some useful metrics such as session count etc. There are also tools
such as hatop that will monitor the stats socket (unix domain socket) and
print a summary on the console.

See "stats *" directives in manual...
http://haproxy.1wt.eu/download/1.5/doc/configuration.txt


Re: HA Proxy FTP Load Balancing Timeout

2013-05-01 Thread Ben Timby
Alok,

Sorry have been out of the office for a while.

You could try increasing the clitimeout and srvtimeout values in your
defaults section. These values are ninety and one hundred twenty seconds
respectively. My guess is that tcpka has no effect on "activity" from
haproxy's point of view, as this tcp traffic would be generated by
haproxy itself.

Also, after seeing your config, I realize why I was confused. I load
balance FTP as well, but I ONLY load balance the command channel via
haproxy. The data channels are handled directly by NAT rules. I wrote up my
method here:

http://ben.timby.com/?page_id=210

The nice thing about this method is that haproxy is still able to
distribute load pretty evenly by "user session" not by individual
connection. Also, the heavy lifting of transferring large files is then
handled in the kernel by netfilter rather than by haproxy. Additionally
this means that time outs enforced by haproxy only apply to the command
channel, and do not affect the data channels.


Re: HA Proxy FTP Load Balancing Timeout

2013-04-18 Thread Ben Timby
On Thu, Apr 18, 2013 at 3:38 PM, Alok Kumar  wrote:

> Hi Ben,
> In my case we are load balancing across FTP servers.
>
> FTP uses two data channel and command channel port for data transfer.
>

I use haproxy for the same purpose. Closing the command channel will not
affect a transfer in any way, unless you have something else set up wrong.

In other words, the command channel is needed only to START the upload;
once started, the data channel will complete the upload. If the command
channel closes in the meantime, what does it matter?

I am trying to understand WHY this is a problem, as in my experience
closing the idle command channel is a GOOD thing with no negative
side-effects.


Re: HA Proxy FTP Load Balancing Timeout

2013-04-17 Thread Ben Timby
Alok,


On Tue, Apr 16, 2013 at 8:26 PM, Alok Kumar  wrote:

> I have a HA Proxy server (1.4), that is load balancing FTP traffic to six
> FTP
> servers.
>
> I noticed that Load Balancer is dropping traffic after 50 sec, where as
> there
> was a valid ftp control port and Large file transfer was in progress over
> data
> port.
>

I have the same behavior on my cluster (by design) by setting the "timeout
client" (or old clitimeout) directive in haproxy. For me it is desirable, as
most (all?) FTP clients will re-open the command channel if needed. The
command channel is not needed for ongoing transfers.

http://code.google.com/p/haproxy-docs/wiki/timeout_client

You might also look for "timeout server" in your configuration.

It may be that this is unintentionally enabled in your configuration.


> I tried using tcpka in defaults section, but it didn't make any difference.
>
> In my particular case, using tcpka option on the backend side could have
> solved the issue of the control channel timing out due to inactivity.
>
> Am I missing something like setting tcp keepalive time, interval and probe
> frequency setup on the HAProxy Linux server.
>

If the answer is not so simple, I suggest you provide your configuration to
the list (obscure anything sensitive) to receive more in-depth help.


Layer4 connection problem: Resource temporarily unavailable

2013-04-16 Thread Ben Timby
I run about 50 FTP server clusters, each consisting of 3 backend FTP
servers, and I am using haproxy to load balance each of these clusters
across its three backends. I am using smtpchk to verify the FTP banner. I
run the HTTP admin interface, which shows the status of all the
frontends/backends.

Running a haproxy 1.5 development snapshot from 12/30/2012, and using the
same load balancer for HTTPS (thanks for that), everything works fine
except for the FTP checks. Below is a snippet from our configuration that
shows the
check-related options we are using. This snippet happens to be for FTPS
(implicit) but we experience the same check failures with plain old FTP
(port 21, minus check-ssl).

option smtpchk HELO ftp-check.org
server ftp00 111.111.111.111:990 check check-ssl send-proxy inter 1m fastinter 10s fall 1 rise 3

Any time I load the web interface, haproxy will report several FTP backends
with the status "Layer4 connection problem: resource temporarily
unavailable". They always recover after a failed check, but there are
always a handful in failed state. I would like to know what conditions
might cause this particular state. The failure happens in a few tens of
milliseconds, so it is not a time out, but an active refusal or other
failure.

I AM NOT indicating that the problem is haproxy, it may very well be a
problem with the FTP server, I am looking for some pointers on how to track
down the root problem. Any information about the sequence of events that
would lead to this failure are much appreciated.

I have tried disabling iptables on the FTP backend servers. I have also
tried increasing the global maxconn and ulimit-n in haproxy. The FTP daemon does
not specify a TCP backlog value for the listening socket, so my
understanding is that the backlog would be (the default) 16 under Linux.


Re: CSS not displayed

2013-01-22 Thread Ben Timby
On Tue, Jan 22, 2013 at 9:57 AM, Olivier Desport
 wrote:
> I use Haproxy with two web servers. The CSS are not well displayed (images,
> fonts...). The look of the page is different every time I refresh ! It works
> correctly when Haproxy is not used. Is there something to set up in haproxy
> or Apache configurations ?

It may not be anything specific to haproxy. I would suggest looking at
your apache log files (for 403, 404 errors or similar). Also, you can
try a tool like Firebug (for Firefox, or developer tools for whatever
browser you use) such tools will have a network panel, you can see
which specific resources did not load and why. This might help you
track down the issue.



Re: A backend per application - or backend per server group?

2013-01-21 Thread Ben Timby
On Mon, Jan 21, 2013 at 7:30 PM, Sölvi Páll Ásgeirsson  wrote:
> Hello!
>
> I have a small question on 'idiomatic' haproxy configuration, when
> serving multiple independent applications from
> a shared group of webservers, and if I should define each
> application/virtual directory as a dedicated backend or not.
> I apologize if I'm missing something obvious from the documentation.

No reason to choose one or the other ;-). I run two web applications
on the same group of servers; each application has a dedicated backend
that defines the same servers. There are individual checks for each
application, allowing them to have different statuses.

--
frontend vip00
    mode http
    bind 0.0.0.0:80

    acl is-apptwo dst 172.16.1.102

    use_backend apptwo if is-apptwo
    default_backend appone

backend appone
    mode http
    option httplog
    option httpchk GET / HTTP/1.1\r\nHost:\ www
    option http-server-close

    balance roundrobin

    server http00 http00:80 check inter 2s
    server http01 http01:80 check inter 2s
    server localhost 127.0.0.1: check inter 2s backup

backend apptwo
    mode http
    option httplog
    option httpchk GET /robots.txt
    option http-server-close

    balance roundrobin

    server http00 http00:81 check inter 2s
    server http01 http01:81 check inter 2s
    server localhost 127.0.0.1: check inter 2s backup
--

I used different ports, but you could easily use the same server/port
for each backend. The frontend uses an acl to sort out the backend,
for my set-up I have a specific IP address for each app. You may very
well use host headers etc. for your acls. The third server in each
backend is a "sorry, we are down" page that is displayed only if all
other servers in the backend are down.



Re: Tilde in haproxy 1.5 log

2013-01-08 Thread Ben Timby
On Tue, Jan 8, 2013 at 11:14 AM, Baptiste  wrote:
> that said, I'm not sure that you can remove this char.

Jeremy, it is not pretty, but we run analytics on a bunch of log
files. We format them as best we can in the producers, but some still
need transformation. Our analytics software is able to read logs from
files, or from a pipe. Thus we have commands (sed/awk) that do some
final format changes just before analysis. If your stats package does
not have similar functionality, you may be able to use cron or
logrotate to run commands against the log files before analysis. We
like to retain the original logs, so the just-in-time (JIT) approach is
nice for us. Without JIT, you can instead retain separate logs (for
archival and for analysis) and remove one set after analysis.
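A hedged sketch of such a just-in-time transformation; the sample log line
and the assumption that the tilde is appended to the frontend name (as it
is for SSL frontends in 1.5) are illustrative:

```shell
# Strip the "~" appended to SSL frontend names before feeding the log
# to an analyzer. The field layout here is an illustrative assumption.
line='haproxy[123]: 1.2.3.4:5678 [08/Jan/2013:10:00:00.123] fe-https~ be/web1'
printf '%s\n' "$line" | sed 's/\([^ ]*\)~ /\1 /'
```

In practice this would sit in the pipe your stats package reads from, e.g.
`sed 's/\([^ ]*\)~ /\1 /' /var/log/haproxy.log |`.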



Re: Health check for FTP, smtpchk + send-proxy.

2012-11-21 Thread Ben Timby
On Wed, Nov 21, 2012 at 6:51 PM, Willy Tarreau  wrote:
> What version are you using ? This was changed in dev12 so that the send-proxy
> directive also works with health checks. However be careful with dev12,
> we have found and fixed many issues since then. I'm about to issue dev13
> (finishing last tests), so you'd better use that.

I should have known. I am running dev11, I know a lot of changes went
in to dev12, so I have been wary. I will try dev13 when available.

> From reports I've had, just using "option smtpchk" was giving good results
> on FTP and POP, which is why there are no such checks right now.

Sounded like it would work for FTP; if it works with PROXY, then I am all set.

Thanks Willy.



Health check for FTP, smtpchk + send-proxy.

2012-11-20 Thread Ben Timby
I am trying to find a health check suitable for FTP servers. Sometimes
the FTP server is in a state where it accepts a connection, but does
not respond for several seconds. I would like to be able to simply
ensure that the FTP server banner is returned by a server, ensuring
its healthy operation.

Issue 1:

Is there a way to do a generic expect-style TCP check? It seems like
it would be fairly trivial to allow something like:

option tcpchk send foo expect bar

Issue 2:

I could not find anything suitable, so I decided to try smtpchk, it
could perhaps be bent to my will. However, the problem I ran into is
that my servers expect the proxy protocol, and even though the servers
are configured as:

listen ftp-vip00
    bind 1.1.1.1:21
    mode tcp
    option tcplog
    balance leastconn
    option smtpchk HELO ftp.org
    server beta-ftp00.ftphosting.net 2.2.2.2:21 check send-proxy

No PROXY line was sent before the HELO ftp.org smtpchk, thus the check fails.
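For the record, HAProxy 1.5 later gained a generic send/expect TCP check
much like the one wished for above. A hedged sketch of using it to verify
the FTP banner (addresses reused from the snippet above; 220 is the
standard FTP greeting code):

```
listen ftp-vip00
    bind 1.1.1.1:21
    mode tcp
    option tcplog
    balance leastconn
    option tcp-check
    tcp-check expect string 220
    server beta-ftp00.ftphosting.net 2.2.2.2:21 check send-proxy
```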



Re: Graceful handling of garbage collecting servers?

2012-10-24 Thread Ben Timby
I am not familiar with Java application servers, so please excuse my ignorance.

Is it possible to schedule the garbage collection? If so, you could
temporarily disable the server, kick off GC, then re-enable the
server. HAProxy has a stats socket that would allow you to adjust the
server's weight to 0 temporarily. If you could make a JSP to kick off
GC, then you could have a simple cron job that uses socat to disable
the server, curl to hit that page, then socat to re-enable the server.
Do each server in turn (or on separate intervals). If you can do this
more often than it would happen "naturally" then you can control the
process and lose 0 requests.
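A rough sketch of such a cron job; the socket path, backend/server names,
and the /force-gc endpoint are all assumptions for illustration, and "set
weight" requires the stats socket to be bound at admin level:

```shell
SOCK=/var/run/haproxy.sock
echo "set weight app/tomcat1 0" | socat stdio "$SOCK"     # stop new traffic
sleep 30                                  # let in-flight requests drain
curl -s http://tomcat1:8080/force-gc      # hypothetical GC-trigger JSP
echo "set weight app/tomcat1 100%" | socat stdio "$SOCK"  # restore traffic
```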



Re: Fastest response

2012-10-22 Thread Ben Timby
Willy,

On Tue, Oct 23, 2012 at 2:10 AM, Willy Tarreau  wrote:
> Some of us have already been discussing about the possibility to adapt the
> HTTP checks to report a header to modulate the server's weight (in fact it
> was planned for 1.3.14 but skipped because of no use at this time). But we
> can bring this back on the table.
>
> The agent would be very simple in my opinion, it would accept incoming
> connections from the load balancer on a specific port, would check that
> the server is available and will adjust a weight between 0 and 100%
> depending on the number of available connection slots relative to a
> configured max on the servers.
>
> So a server which supports 1000 concurrent connections and runs at 150
> would have 850/1000 == 85% weight. Then haproxy will still be able to
> use leastconn depending on that weight, to distribute the load across
> all servers.
>
> Does that sound good to you ?

Yes, that sounds exactly like what I am looking for.

I suppose if/when this feature hits, I would configure an HTTP check
for each FTP backend (in addition to the port 21 tcp check). The HTTP
check would connect to a simple agent that emitted a header containing
the server's desired weight. HAproxy would handle the rest.

I could get started today by writing a simple poller that would sit on
the load balancer, poll the FTP servers via HTTP and update weights
using the HAProxy control socket. Eventually, HAProxy would handle the
polling via the HTTP check and I could discard the interim poller.
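As a rough sketch of the interim poller's core logic (names and the
command format are assumptions; the weight formula mirrors Willy's
850/1000 == 85% example):

```python
def weight_for(cur_conns: int, max_conns: int) -> int:
    """Map a server's load to a 0-100 weight: free slots as a percentage."""
    free = max(max_conns - cur_conns, 0)
    return round(100 * free / max_conns)

def set_weight_cmd(backend: str, server: str, cur: int, limit: int) -> str:
    """Build the stats-socket command the poller would write to haproxy."""
    return f"set weight {backend}/{server} {weight_for(cur, limit)}%"

# A server at 150 of 1000 connections gets 85% weight, per the example above.
print(set_weight_cmd("ftp", "ftp00", 150, 1000))  # set weight ftp/ftp00 85%
```

The command string would then be written to the stats socket (e.g. piped
through socat) on each polling interval.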



Fastest response

2012-10-22 Thread Ben Timby
I am using haproxy to load balance a pool of FTP servers. Since
haproxy only handles the command channel and I am using leastconn, it
is able to pretty much keep the load balanced between all servers.

However, not all users (or command channels) are equal. For example a
specific user may open a lot of file transfers, which are opened
independently of the command channel, and are not subject to load
balancing. Once a user opens a command channel, the way my FTP servers
are configured, all data channels will be established with the same
backend server.

What I have going on is that a particular server will become very
busy, which is the reason for load balancing in the first place. It
will respond slowly and HAProxy will continue to send it traffic. A
portion of users will experience slowdown even while the majority have
no issue.

Ideally, it would be great if haproxy would try to connect for
something like 2 seconds, and then transparently switch to another
backend that is responding more quickly. Optionally, I would like to
balance based on the backend's response time, rather than the number
of connections (since connections are so unequal with FTP).

I read the manual a few times looking for inspiration, the best I have
been able to come up with is to have an agent that dynamically adjusts
weights given some criteria I am yet to define.

Any ideas on how to achieve better balancing? Is anyone using a scheme
like the above or did I miss a more obvious solution?



Re: SSL Backends

2012-07-16 Thread Ben Timby
On Mon, Jul 16, 2012 at 4:39 PM, Gabriel Sosa  wrote:
> IMHO
>
> if you run your servers in a trusted network, **haproxy ==> stunnel
> ==> server** part adds a lot of overhead

I see your point but have to chime in with this: A trusted network is
one small step away from being an untrusted network.

Your web servers, and/or load balancers generally live in some kind of
DMZ, which should be the least trusted part of the network. However,
even the non-DMZ should not be completely trusted. It is very sane and
sensible to encrypt protocols on ANY network. If you can afford to do
so, then you probably should.

My favorite common phrase about this is: "hard and crunchy on the
outside, soft and chewy on the inside."



With SSH Load balancing, haproxy not responding.

2012-07-11 Thread Ben Timby
I use haproxy for HTTP(S) and SSH.

I am running version: haproxy-1.5-dev11

My pool of backend servers are different for each protocol.

I am having a problem with SSH, periodically (every day) haproxy stops
accepting connections. My Nagios check (tcp port 22) receives:

CRITICAL - Socket timeout after 10 seconds

This condition persists until I restart haproxy. At the same time, my
HTTP(S) virtual servers are unaffected. Also, the backends are just
fine, I can open a connection directly to them without issue. I don't
have access to the haproxy status page just now, but when I do, I can
provide information from it.

Here is my configuration for the SSH load balancer:

listen ssh-vip0
    bind ??.??.??.??:22
    mode tcp
    option tcplog
    balance leastconn
    server ssh0 ssh0:22 minconn 10 maxconn 256 send-proxy
    server ssh1 ssh1:22 minconn 10 maxconn 256 send-proxy

I don't see anything in the haproxy log about this virtual server,
just traffic from the other working ones.

Any ideas? What other information would be useful?



Re: Haproxy notifications

2011-09-22 Thread Ben Timby
On Thu, Sep 22, 2011 at 11:30 AM, Guillaume Bourque
 wrote:
>         option          log-health-checks

:-) I took notification to mean something other than logging.



Re: Haproxy notifications

2011-09-22 Thread Ben Timby
On Thu, Sep 22, 2011 at 10:24 AM, İbrahim Ercan
 wrote:
> Hi, I am a new haproxy user. I wonder, is there a way to make haproxy send
> notifications when a server goes down or up?
> Thank you for your interest...

Hi Ibrahim,

Use Nagios or a similar monitoring tool. These tools can either
monitor your servers directly, or even monitor the server status via
HAProxy using its web stats interface, or log file etc. Nagios can
then alert via email, sms, etc. You can even configure actions to take
when a server goes down (restart httpd for example).

http://www.nagios.org/

There may be simpler tools better suited to your needs; I know of
another named Monit. But my experience is mostly with Nagios.

http://mmonit.com/monit/

The point is that I would look for a monitoring tool to use for sending alerts.



Re: Can't bind to Virtual IP

2011-08-11 Thread Ben Timby
On Thu, Aug 11, 2011 at 10:16 AM, Ran S  wrote:
> But the majority of the guides are not relevant to my problem. as far as I
> understand, in order for a frontend to use a different IP than the machine's
> IP (in an internal network), all is needed is:

Why do you think that? You can certainly bind to any IP address you
want to, but that does not mean that other machines on your network
will know that you have. What I mean is that if your machine does not
advertise the IP (via ARP) other machines won't know to send traffic
its way.

The only way I know to get an IP address reachable from other machines
on your network is to assign the address to the machine. You can use
an alias IP (eth0:0), or add the ip address using ifconfig/ip.

http://www.cyberciti.biz/faq/linux-creating-or-adding-new-network-alias-to-a-network-card-nic/

Once you do that, your machine will advertise the availability of that
address. Unless you are using static routes or some funny business on
a router, the above is what you need to do.



Re: Help me please, with haproxy.cfg for FTP Server.

2011-05-29 Thread Ben Timby
> Le samedi 28 mai 2011 08:05:59, Jirapong Kijkiat a écrit :
>> Dear. w...@1wt.eu, haproxy@formilux.org
>>
>>     How can I configure haproxy to load balance my ftp server?  Now my
>> haproxy.cnf

FTP is not easy to load balance. Here is the solution I use.

1. HAProxy machine is the NAT gateway for FTP servers.

2. HAProxy load balances only the control connection (port 21).

The hard part is the data connection. The FTP protocol works by
opening a control channel which exchanges commands and responses.
Whenever data needs to be transferred, another connection (a data
channel) is opened. Files, directory listings and similar bulk data are
transferred over the data channel. In this way, FTP allows simultaneous
transfer of multiple files. Rather than multiplex channels on a single
connection, FTP uses a connection per channel. The data channel works
in two modes.

1. Active mode (the default) means that the server will connect to the
client. When a data channel is needed, the client and server negotiate
a TCP address and port for the server to connect to the client on. The
client opens this port and awaits the connection. Usually NAT routers
and firewall on the client end rely on packet inspection to observe
this negotiation, they then allow this connection to take place. Often
they will modify the negotiation to inject the public IP address in
place of the private (RFC 1918) address of the client. The exception
to this is when SSL is used. SSL prevents packet inspection and breaks
active mode.

2. Passive mode means that the client will open an additional
connection to the server. Generally this works better as the FTP
server admin can open the port range that will be used for passive
connections. Most NAT routers and firewalls allow any outbound
traffic, so they will not stand in the way of a passive connection.
This allows connections to work without packet inspection even with
SSL.

So, once HAProxy is load balancing the control channel, you have to
work out how to allow both active and passive connections to work.

-- Allowing active mode to work --

1. You must SNAT the FTP server's (private) address to the same
address that accepted the control channel connection (HAProxy bind
address). Otherwise the client machine will sometimes balk at a
connection from an address other than the server's (the one it opened
the command channel to). Also, without this SNAT rule in place, any
NAT router or firewall will expect the connection to come from the
server, and will block it if it does not.

-- Allowing passive mode to work --

1. You must allocate a unique port range for each backend FTP server,
and DNAT each range to the various servers. You must also configure
each server to use its own unique port space for passive connections.
Most FTP servers allow you to specify the passive port range.

If you are using proftpd, here is how you configure the passive port range.

http://www.proftpd.org/docs/directives/linked/config_ref_PassivePorts.html
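A sketch of the relevant proftpd.conf line, using the first range from the
example below (the range itself is arbitrary):

```
# in proftpd.conf on Server A: pin passive data ports to its DNAT-ed range
PassivePorts 2048 4096
```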

Example:

DNAT rule/passive range -> backend server.
2048-4096 -> Server A.
4097-6145 -> Server B.

This way, any client connected to server A will connect to its
dedicated passive port range and be forwarded by NAT to the correct
backend server (which is awaiting its connection).
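A hedged sketch of the netfilter rules implied by the example above (the
private addresses, and 1.1.1.1 as the HAProxy/public address, are
illustrative assumptions):

```shell
# passive mode: DNAT each dedicated passive range to its backend
iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 2048:4096 \
    -j DNAT --to-destination 10.0.0.1
iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 4097:6145 \
    -j DNAT --to-destination 10.0.0.2
# active mode: SNAT data connections from the servers (source port 20)
# so they appear to come from the address the client connected to
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -p tcp --sport 20 \
    -j SNAT --to-source 1.1.1.1
```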

2. You must also configure the FTP server to masquerade as the same
address used for making the control connection (the IP address HAProxy
is listening on port 21 on). This is so that passive connections hit
the NAT server and are correctly forwarded. Bypassing NAT by directing
the client to connect to the backend server directly does not work in
all FTP clients, so it is best to simply masquerade as the main FTP
service IP address. Most FTP servers allow you to configure a
masquerade or public IP address to use in passive connection
negotiations with clients.

If you are using proftpd, here is how you configure the masquerade address:

http://www.proftpd.org/docs/directives/linked/config_ref_MasqueradeAddress.html
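The corresponding proftpd.conf line would look like this (the address is an
illustrative stand-in for the address HAProxy listens on):

```
MasqueradeAddress 1.1.1.1
```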

-- Client IP address --

* At this point you have a working setup, the next section is about
fine-tuning it. I would get to this point before tackling the next
steps...

The last issue is that now FTP works great, but the FTP server sees
all connections coming from the proxy machine's IP address instead of
the client's address. To solve this you have two options.

1. Use TPROXY kernel support to perform transparent proxying.

2. Use the PROXY protocol and write a plugin for your FTP server to
accept the PROXY protocol.

http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt

I personally use option 2, as I prefer a user-space solution to a kernel
solution. Also, it is much easier to set up my FTP servers without a
custom kernel package that I have to maintain (a simple "yum update"
instead); let upstream do that for you.



Re: option for not logging ip adresses

2011-05-18 Thread Ben Timby
On 2011-05-17 17:21, Johannes Smith wrote:
> Hi, are there chances to get something like "option dontlogip" which
> dumps all logged ips as 0.0.0.0 (in order to stay compatible with log
> analyzing tools)?

Another option is to look at your log analyzer. The one I use allows
the log file config option to include a pipe, thus you can do the
following:

LogFile = "/usr/local/bin/remove_ip.sh /var/log/haproxy.log |"

Where remove_ip.sh is a script that uses awk to drop the IP address
column from the file right before it is processed. If a pipe is
present the log analyzer executes the command and reads the output. If
not, it opens it as a regular file. Your software may have a similar
feature.



Re: MySQL LB / Backup Config

2011-05-07 Thread Ben Timby
On Fri, May 6, 2011 at 5:41 PM, Brian Carpio  wrote:

> Hi,
>
>
>
> I have a very simple setup for doing load balancing for MySQL DBs.
>
>
>
> listen mysql_proxy vip01:3306
>
> mode tcp
>
> option tcpka
>
> balance roundrobin
>
> server mysql01 mysql01:3306 weight 1 check inter 10s rise 1 fall 1
>
> server mysql02 mysql02:3306 weight 1 check inter 10s rise 1 fall 1
> backup
>
>
>
> I am using the backup option so that mysql02 ONLY begins to receive traffic
> if mysql01 is down. The problem with this however is that once mysql01 is
> back online it begins to receive traffic gain. I would like mysql02 to stay
> as the “primary” until mysql02 fails, so basically if mysql01 goes down
> mysql01 becomes “backup”.
>
>
>
> I didn’t see much in the docs on how to do this, however i could have
> missed it
>

Brian, while HAProxy can load balance any protocol, my suggestion to you
would be to look into Heartbeat to perform this task for you. It does not
load balance like HAProxy, but allows a shared IP address to be migrated
between your two nodes. Once you are using Heartbeat, you can adjust the
"stickyness" of the MySQL resource to keep it from immediately failing back
to the original primary node. For me, Heartbeat has worked very well with
both MySQL and PostgreSQL. Not only can it migrate the IP address, but you
can also put other scripts or services under its control so that failing
over can also toggle replication settings or anything else you need done.

I think in this case Heartbeat is the tool better suited for the job than
HAProxy.

I personally use Heartbeat with the Pacemaker cluster resource manager.
There are a ton of how-to articles for MySQL+Heartbeat out there.
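For example, with Pacemaker's crm shell the failback behavior is
controlled by resource stickiness; something along these lines (the
value is illustrative, and resource names are whatever you defined):

--
# Make resources stay where they are after a failover instead of
# failing back as soon as the old primary returns:
crm configure rsc_defaults resource-stickiness=100

# Or pin them effectively forever:
crm configure rsc_defaults resource-stickiness=INFINITY
--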


Re: Rate Limiting Blog Link

2011-04-19 Thread Ben Timby
Simplification is not always possible; you must use the tools at hand.
Reading the article you linked to, everything seemed pretty
straightforward to me. A feature like rate limiting can only be
simplified so much.

That said, look into using stunnel for your SSL decryption. There is a
patch that will allow it to implement the PROXY protocol. HAProxy can
then securely receive the client IP address from stunnel without the
worry of spoofed X-Forwarded-For headers.

http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt

I use this method and it works great.

With that out of the way, you can continue to deal with HTTP traffic
in haproxy. Rather than focusing on simplification, focus instead on
documentation.



Re: using haproxy for https

2011-04-12 Thread Ben Timby
On Tue, Apr 12, 2011 at 12:15 AM, Joseph Hardeman  wrote:
> HI,
>
> Considering these are for a customer and they have already purchased their
> certs, I don't want to go through the hassle of converting them and causing
> them any issues.

I don't see how this would inconvenience anybody; it is a pretty
straightforward operation, done server-side, and won't impact the
customer or the CA.

https://support.servertastic.com/entries/323869-moving-ssl-certificate-from-iis-to-apache

You are simply exporting the cert/key from IIS, which will insist on
encrypting them. Then you are decrypting them using openssl to a PEM
format file so it can be used by software other than IIS.
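The whole round trip can be sketched with openssl alone (file names
and the password are made up for the example; step 1 simulates the
IIS export by building a password-protected .pfx from a throwaway
self-signed cert, step 2 is the conversion you would actually run on
the exported file):

```shell
# Step 1 (simulated IIS export): create a throwaway key/cert and pack
# them into a password-protected PKCS#12 file, as IIS would produce.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" -keyout demo.key -out demo.crt
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -passout pass:export-password -out customer.pfx

# Step 2 (the actual conversion): decrypt the .pfx into a PEM file
# holding the unencrypted key and certificate, usable by software
# other than IIS (stunnel, apache, nginx, ...).
openssl pkcs12 -in customer.pfx -passin pass:export-password \
    -nodes -out customer.pem
```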

> Now we can stick with the examples on the haproxy site using mode tcp, but I
> was wondering is there a way via ACL's or something to do something along
> the lines of reading the requested domain name and sending that traffic to a
> specific server or set of servers?

Of course not: if you are doing TCP mode with SSL traffic, how are you
going to inspect the traffic at the proxy? Remember, it is encrypted.



Re: using haproxy for https

2011-04-09 Thread Ben Timby
On Sat, Apr 9, 2011 at 2:07 PM, Joseph Hardeman  wrote:
> Hi Guys,
>
> I was wondering if someone has a good example I could use for proxying https
> traffic.  We are trying to proxy multiple sites that use https and I was
> hoping for a way to see how to proxy that traffic between multiple IIS
> servers without having to setup many different backend sections.  The way
> the sites are setup they use a couple of cookies but mostly session
> variables to track the user as they do their thing.  Either I need to be
> able to pin the user to a single server using the mode tcp function when
> they come in or be able to use some form of mode http that doesn't break the
> SSL function.
>
> This morning around 5am, I got one site running with only 1 backend using
> tcp but I really need to be able to load balance it between multiple
> servers.

Joe, haproxy itself does not do SSL. That said, you can set up an SSL
server in front of it. Myself, I use stunnel. Stunnel strips the SSL
and forwards the traffic to haproxy. I have many instances of stunnel
(one per cert/ip) which all feed a single haproxy http listener.

http://www.stunnel.org/

You could also use another server like nginx, apache, etc. to strip the
SSL. However, I find stunnel well suited, as all it does is SSL and it
is fast and efficient at it (similar to how haproxy does proxying
very well).
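A minimal stunnel section looks something like this (paths and
addresses are illustrative; you repeat the accept/connect service
section once per certificate/IP, all pointing at the same haproxy
listener):

--
; stunnel.conf -- strip SSL for one site, hand plain HTTP to haproxy
cert = /etc/stunnel/site1.pem

[https-site1]
accept  = 203.0.113.10:443
connect = 127.0.0.1:80
--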



Re: Is it possible for haproxy to connect to backend server specified in an http header

2011-04-01 Thread Ben Timby
On Fri, Apr 1, 2011 at 12:24 PM, Delta Yeh  wrote:
> Hi all,
>  When setting up a web hosting service with haproxy, there is a requirement.
> The case is :
> clientnginx---haproxy---wwws
>
> client :1.1.1.1
> nginx  2.2.2.1
> haorxy:2.2.2.2
> wwws:3.3.3.1
>
> nginx sit between client and  haproxy, haproxy work in transparnt mode.
>
> client send request to  www.abc.com( 3.3.3.1) ,the request is redirect
> to nginx box.
> nginx get the original server 3.3.3.1 and add a header  x-original-to:3.3.3.1 
> ,
> nginx proxy request to haproxy, haproxy proxy request to the original
> www server 3.3.3.1 specified in http header "x-original-to"
>
>
> the config fragment of haproxy may like:
>
> backend wwws
>        mode http
>        server  virtual-host-server  0.0.0.0 addr_header x-original-to
>        source 0.0.0.0 usesrc hdr_ip(x-forwarded-for,-1)
>
>
> so server option "addr_header x-original-to" tell haproxy  the real
> www server address to be connected to is
> specified in header "x-original-to"
>
> So my question is it possible for haproxy to do this?

Someone else can correct me if I am wrong, but I don't think this will
work exactly as described. However, a similar method that would work
if you have a number of known backends is to use acls to route the
request.

Given two possible backends:

www1: 3.3.3.1
www2: 3.3.3.2

A configuration like:

listen http
    bind    2.2.2.2:80
    mode    http
    option  httplog
    balance roundrobin
    acl     use-www1 hdr(x-original-to) 3.3.3.1
    acl     use-www2 hdr(x-original-to) 3.3.3.2
    use_backend www1 if use-www1
    use_backend www2 if use-www2
    server  www1 3.3.3.1:80
    server  www2 3.3.3.2:80

The acl will be set to true when the IP address matches that of the
backend. The acl then determines which backend to use. You would have
to write an acl and use_backend rule for each backend. It would not
work for arbitrary backends.



Re: Redirect Loop when using X-Forwarded-Proto header.

2011-03-29 Thread Ben Timby
I found the issue. From the haproxy manual:

By default HAProxy operates in a tunnel-like mode with regards to persistent
connections: for each connection it processes the first request and forwards
everything else (including additional requests) to selected server. Once
established, the connection is persisted both on the client and server
sides. Use "option http-server-close" to preserve client persistent connections
while handling every incoming request individually, dispatching them one after
another to servers, in HTTP close mode. Use "option httpclose" to switch both
sides to HTTP close mode. "option forceclose" and "option
http-pretend-keepalive" help working around servers misbehaving in HTTP close
mode.

So:
option http-server-close

disables persistent connections to the backends, while keeping them
for the frontend. This allows haproxy to modify each request to the
backend and inject the needed headers.
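In config terms the fix is a single line in the listen section
(fragment only; everything else stays as it was):

--
listen http-vip00
    mode    http
    # handle each request individually so headers are injected on
    # every request, not just the first one of the connection
    option  http-server-close
--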

Sorry for the waste of bandwidth :-).



Redirect Loop when using X-Forwarded-Proto header.

2011-03-29 Thread Ben Timby
I am using haproxy in combination with stunnel to perform SSL. My
backend servers expect an X-Forwarded-Proto: https header to indicate
that the request was sent over SSL. If this header is missing, the
request is redirected to the https:// flavor of the URL.

However, with haproxy-1.5-dev5, I am seeing that the header is only
added to the first request of the connection. Subsequent requests are
missing this header. Below is an example from a tcpdump.

--
GET /private/ HTTP/1.1
Host: beta.mysite.com
Connection: keep-alive
Cache-Control: max-age=0
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.24 (KHTML,
like Gecko) Chrome/11.0.696.16 Safari/534.24
Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: sessionid=03412c52b518e63558dc3d2418b52dc9
X-Forwarded-Proto: http
X-Forwarded-For: 10.10.10.10

HTTP/1.1 302 FOUND
Date: Tue, 29 Mar 2011 16:28:45 GMT
Server: Apache/2.2.3 (CentOS)
Set-Cookie: sessionid=03412c52b518e63558dc3d2418b52dc9; expires=Tue,
29-Mar-2011 16:48:45 GMT; Max-Age=1200; Path=/
Location: https://beta.mysite.com/private/
Content-Length: 0
Keep-Alive: timeout=3, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8

GET /private/ HTTP/1.1
Host: beta.mysite.com
Connection: keep-alive
Cache-Control: max-age=0
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.24 (KHTML,
like Gecko) Chrome/11.0.696.16 Safari/534.24
Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: sessionid=03412c52b518e63558dc3d2418b52dc9

HTTP/1.1 302 FOUND
Date: Tue, 29 Mar 2011 16:28:45 GMT
Server: Apache/2.2.3 (CentOS)
Set-Cookie: sessionid=03412c52b518e63558dc3d2418b52dc9; expires=Tue,
29-Mar-2011 16:48:45 GMT; Max-Age=1200; Path=/
Location: https://beta.mysite.com/private/
Content-Length: 0
Keep-Alive: timeout=3, max=99
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8

...
--

This redirect loop runs until the browser interrupts it and displays
an error. My configuration follows:

--
listen http-vip00
    bind    192.168.1.1:80
    bind    127.0.0.1:81 accept-proxy
    mode    http
    option  httplog
    balance roundrobin
    reqidel ^X-Forwarded-For:.*
    acl     is-ssl dst_port 81
    reqadd  X-Forwarded-Proto:\ https if is-ssl
    reqadd  X-Forwarded-Proto:\ http unless is-ssl
    option  forwardfor
    server  www1 10.19.78.18:80
--

I have not yet had a chance to see if the same thing happens with
previous versions of haproxy. Is this expected behavior or do I have
something misconfigured?



Re: Strange behavior from HAProxy 1.5-dev.

2011-03-28 Thread Ben Timby
On Thu, Mar 24, 2011 at 7:02 PM, Ben Timby  wrote:
> On Thu, Mar 24, 2011 at 6:03 PM, Willy Tarreau  wrote:
>> Hi Ben,
>>
>> I'm sure you hit the issue that David has fixed a few days ago.
>> In short, due to a parsing issue on the server address, haproxy
>> is reconnecting to IP 0.0.0.0 on the target port. IP 0.0.0.0 is
>> any IP, and the system connects to whatever IP it is listening
>> on. Thus you have a loop.
>
> Great, thanks Willy. I must have missed that one on the list.

Indeed, this was the problem. Thanks Willy.



Re: Strange behavior from HAProxy 1.5-dev.

2011-03-24 Thread Ben Timby
On Thu, Mar 24, 2011 at 6:03 PM, Willy Tarreau  wrote:
> Hi Ben,
>
> I'm sure you hit the issue that David has fixed a few days ago.
> In short, due to a parsing issue on the server address, haproxy
> is reconnecting to IP 0.0.0.0 on the target port. IP 0.0.0.0 is
> any IP, and the system connects to whatever IP it is listening
> on. Thus you have a loop.

Great, thanks Willy. I must have missed that one on the list.

> I merged the fix into the git tree, it is in the 20110324 snapshot
> if you want to give it a try again.
>
> I need to quickly release an 1.5-dev5 with this fix, but as I have
> found another very minor one, I'd like to see if I can fix it too
> before the release.

I will give it a try!



Re: Half--NAT

2011-03-24 Thread Ben Timby
On Thu, Mar 24, 2011 at 4:59 PM, Jason J. W. Williams
 wrote:
> Hi All,
> I'm trying to find documentation on configuring HAProxy to do half-NAT, but
> can't seem to find any. Does HAProxy not support half-NAT or does it call it
> something else? Thank you in advance for your help.

If you mean something like half-NAT described below...

http://lbwiki.com/index.php/NAT

Then you are looking for the TPROXY support of HAProxy...

http://blog.loadbalancer.org/configure-haproxy-with-tproxy-kernel-for-full-transparent-proxy/



Re: X-Forwarded-For header

2011-03-24 Thread Ben Timby
On Thu, Mar 24, 2011 at 5:01 PM, Ben Timby  wrote:
> Delete any existing headers using reqdel/reqidel.
>
> reqidel X-Forwarded-For
> option forwardfor
>
> This will ensure the only one the backend sees is the one you added.

Sorry, more like:

reqidel ^X-Forwarded-For:.*

Found that in the docs after sending my first reply :-).



Re: X-Forwarded-For header

2011-03-24 Thread Ben Timby
On Thu, Mar 24, 2011 at 4:35 PM, bradford  wrote:
> I know there have been several emails about this, but what is the most
> secure way of logging the client's IP address in the application code?
>  Do you just log the full X-Forwarded-For comma delimited value?
> Also, can't they manipulate the X-Forwarded-For header in the HTTP
> request?

Delete any existing headers using reqdel/reqidel.

reqidel X-Forwarded-For
option forwardfor

This will ensure the only one the backend sees is the one you added.



Re:

2011-03-18 Thread Ben Timby
On Fri, Mar 18, 2011 at 2:00 PM, Antony  wrote:
> Hi guys!
>
> I'm new to HAProxy and currently I'm testing it.
> So I've read this on the main page of the web site:
> "The reliability can significantly decrease when the system is pushed to its 
> limits. This is why finely tuning the sysctls is important. There is no 
> general rule, every system and every application will be specific. However, 
> it is important to ensure that the system will never run out of memory and 
> that it will never swap. A correctly tuned system must be able to run for 
> years at full load without slowing down nor crashing."
> And now have the question.
>
> How do you usually prevent system to swap? I use Linux but solutions for any 
> other OSes are interesting for me too.
>
> I think it isn't just to "swapoff -a" and to del appropriate line in 
> /etc/fstab. Because some people say that it isn't good choise..

Prevent swapping by ensuring your resource limits (max connections)
etc. keep the application from exceeding the amount of physical
memory.

Or conversely by ensuring that your physical memory is sufficient to
handle the load you will be seeing.

This is what is referred to in the documentation: you need to tune
your limits and available memory for the workload you are seeing. Of
course, simple things like not running other memory-hungry
applications on the same machine apply as well. This is an iterative process
whereby you observe the application, make adjustments and repeat. You
must generate test load within the range of normal operations for this
tweaking to be true-to-life. Of course once you go into production the
tweaking will continue, no simulation is a replacement for production
usage.

The reason running without swap is bad is because if you hit the limit
of your physical memory, the OOM killer is invoked. Any process is
subject to termination by the OOM killer, so in most cases decreased
performance is more acceptable than loss of a critical system process.
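As a back-of-envelope illustration of sizing maxconn to fit memory
(the ~34 KB-per-connection figure is a rough assumption, two 16 KB
buffers plus session overhead at the default tune.bufsize; check the
buffer sizes of your own build):

```shell
# Rough sketch: derive a maxconn that keeps HAProxy inside the memory
# you have set aside for it. All figures are illustrative.
mem_for_haproxy_kb=$((512 * 1024))   # e.g. 512 MB reserved for haproxy
per_conn_kb=34                       # assumed cost per connection
maxconn=$((mem_for_haproxy_kb / per_conn_kb))
echo "suggested global maxconn: $maxconn"
```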



Re: proto_ftp.c

2011-02-26 Thread Ben Timby
On Sat, Feb 26, 2011 at 12:04 PM, Willy Tarreau  wrote:
> It has been implemented on the client side in haproxy but not yet on the
> server side, though it should not be difficult at all. You can find
> information on the protocol here :
>
>    http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
>
> The goal was to make it very strict and simple to parse in order to
> encourage a broader adoption than just the stunnel+haproxy tandem.

Thanks Willy, I will look into this on Monday.



Re: proto_ftp.c

2011-02-26 Thread Ben Timby
On Sat, Feb 26, 2011 at 9:34 AM, Willy Tarreau  wrote:
> If you maintain your own servers, wouldn't you be interested in making
> them support the proxy protocol we've added between stunnel and haproxy ?
> It provides the server with a first line containing the protocol (TCPv4,
> TCPv6), source and destination addresses and ports, and does not require
> a state to consume a response. Also since by definition it can only appear
> on the first line of the connection, there is no risk a client would send
> it. It would work like this :
>
>> Client                 HAProxy                 Backend
>> *******************************************************
>> connect -------------->|
>>                        | connect ------------->|
>>                        | PROXY TCP4 ... ------>|
>> *=====================================================*
>>                        | <------ 220 Ready ----|
>> USER ----------------->|
>>                        | USER ---------------->|
>>                        | <------- 250 OK ------|
>> <------ 250 OK --------|
>> *******************************************************
>
> I'm just checking how we could implement something simple, reliable and
> durable.

As am I. I was not aware of that protocol, but that sounds like it
would fit the bill. Is there any other information about that? Is
HAProxy able to insert that protocol line or is that an extension to
stunnel?



Re: proto_ftp.c

2011-02-26 Thread Ben Timby
OK, first off, the FTP SITE command is reserved for specific FTP
server extensions. It is commonly used for banning IP addresses, so
that the user can, via their FTP client, issue a command such as:

SITE ADDIP XXX.XXX.XXX.XXX

The server knows what to do with this IP address because it has an
extension loaded that stores the provided IP into a ban list. This is
of course implementation specific, some servers will handle this
extension, some don't.

SITE Command description:
http://www.nsftools.com/tips/RawFTP.htm#SITE

Apache FTP Server SITE command:
http://incubator.terra-intl.com/projects/ftpserver/site_cmd.html

Relevant RFC:
http://www.faqs.org/rfcs/rfc959.html
--
 SITE PARAMETERS (SITE)

This command is used by the server to provide services
specific to his system that are essential to file transfer
but not sufficiently universal to be included as commands in
the protocol.  The nature of these services and the
specification of their syntax can be stated in a reply to
the HELP SITE command.
--

With that in mind, the sequence I am thinking of would be:

Client                 HAProxy                 Backend
*******************************************************
connect -------------->|
                       | connect ------------->|
                       | SITE ... ------------>|
                       | <------- 250 OK ------|
*=====================================================*
USER ----------------->|
                       | USER ---------------->|
                       | <------- 250 OK ------|
<------ 250 OK --------|
*******************************************************

Everything below the horizontal (==) line is as usual, HAProxy just
sends an initial SITE command to the backend FTP server to let it know
the client's real IP address. It then starts shoveling data from the
client to the backend as usual.

The fly in the ointment is that the backend FTP server will need to be
able to handle this SITE command. I maintain my own FTP server daemon,
so mine will of course support this. I will contribute patches back to
the community for it.

Other FTP daemons like proftpd can easily support this SITE extension
using add-on modules. The module simply looks for the client IP
provided by the SITE command, then overwrites the variable containing
the remote IP address so that the server can make active FTP
connections to the right place. Also the logs would then contain the
correct client IP address. It is kinda like the X-Forwarded-For header
on HTTP, but using the SITE command on FTP (which is the right place
for this according to the RFCs involved).

I am investigating the feasibility and interest in a feature such as
this at this point.



Re: proto_ftp.c

2011-02-25 Thread Ben Timby
2011/2/25 Krzysztof Olędzki :
> Proxing FTP is much more complicated than simply providing one additional
> command for passing client's IP address.
>
> Please note that FTP is based on two independent TCP connections: control
> and data. You need to analyze a control stream and modify on-fly data (port
> numbers and ip addresses) and set up additional sockets and initiate
> additional connections to handle data stream. To do this you also need to
> handle both PASV/EPSV (passive) and PORT/EPRT (active) modes.
>
> It is of course doable but the amount of work is quite big. I even was
> recently asked to implement such function as a sponsored feature. After a
> short conversation with my possible employer we decided that it would took
> too much time to be profitable and cost effective. Instead another solution
> was chosen - LVS DR.

I have all of that figured out. I simply would like to have the
client's IP address.

I only use HAProxy for the command channel. Data channel is handled
simply by choosing a different PASV port range for each backend
server, and NATing the right range to the right server.
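As an illustration (addresses and port ranges are made up for the
example), the per-backend PASV NAT rules are along these lines:

--
# Each backend gets its own passive-mode port range on the VIP,
# DNAT'd to that backend:
iptables -t nat -A PREROUTING -d 2.2.2.2 -p tcp --dport 60000:60099 \
    -j DNAT --to-destination 3.3.3.1
iptables -t nat -A PREROUTING -d 2.2.2.2 -p tcp --dport 60100:60199 \
    -j DNAT --to-destination 3.3.3.2
--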

Outbound Active connections are similarly S-NAT'd to the appropriate
outbound address.

I just want the last piece of the puzzle.

As always, in parallel I am building a mainline kernel 2.6.37.2, while
I am investigating other options.



proto_ftp.c

2011-02-25 Thread Ben Timby
First of all, sorry for the previous list spam. I pasted the wrong
address while subscribing.

I am setting up FTP load balancing using HAProxy. The rub is that I
want something similar to the X-Forwarded-For header supported in
HTTP.

I am aware of TPROXY, but I don't wish to maintain my own packages for
the kernel, xen and all the dependencies this entails.

A simpler user-space solution would suit me much better. I would like
to patch HAProxy so that it provides specialized FTP handling in the
form of an FTP SITE command, such that, when optionally enabled, it
will inject the following FTP command at the beginning of the TCP
stream.

SITE IP=XXX.XXX.XXX.XXX

My backend FTP server will know how to deal with this site command and
store the IP address for use internally.
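On the server side the parsing is trivial; a hypothetical sketch
(command syntax and address are just the example from above, nothing
here is an existing server feature):

```shell
# Pull the real client address out of the injected control-channel
# line; an unrecognized line leaves client_ip empty.
line='SITE IP=203.0.113.7'          # as injected by the proxy
case "$line" in
    "SITE IP="*) client_ip=${line#SITE IP=} ;;
    *)           client_ip='' ;;
esac
echo "real client address: $client_ip"
```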

This would negate the need for TPROXY and seems fairly
straightforward. Any feedback or thoughts on this topic?

Thanks.



subscribe

2011-02-25 Thread Ben Timby
subscribe