Re: New Open source SNMP MIB and agent for HAProxy:

2020-08-27 Thread Malcolm Turnbull
Willy,

Thanks for that tip, sounds like a really good idea.
I talked to Pete and he's added it to the list for the next version, along with:
code clean-ups, 64-bit values, reading settings from the Net-SNMP
configuration files, better HAProxy version support, and so on...



Malcolm Turnbull

Loadbalancer.org Ltd.
www.loadbalancer.org
 +44 (0)330 380 1064



On Thu, 27 Aug 2020 at 03:20, Willy Tarreau  wrote:
>
> Hi Malcolm,
>
> On Wed, Aug 26, 2020 at 02:43:34PM +0100, Malcolm Turnbull wrote:
> > One of our best Geeks Peter Statham has put a fair bit of work into a
> > new agent and MIB for HAProxy to make the lives of some of our
> > customers easier...
> >
> > He's somewhat nervous that the community might rip his code to shreds...
> > So please be nice with your comments!
> >
> > But it would be great to get constructive feedback + any tips and
> > advice on the best monitoring/graphing tools to use with HAProxy SNMP
> > data:
> >
> > https://www.loadbalancer.org/blog/loadbalancer-org-releases-open-source-snmp-mib-and-agent-for-haproxy/
>
> Thanks for sharing this. Peter should have a look at "show stat typed",
> which provides a lot of information for each metric, indicating its type,
> size, how to aggregate it, etc. This typically lets you know whether it's a
> counter/gauge/max/avg and so on, which could significantly simplify the
> switch/case parts. The output is larger, however (one line per metric), but
> it would allow the code to adapt automatically to new metrics as they are added.
>
> Cheers,
> Willy
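
For reference, the typed output Willy mentions can be pulled straight from
the stats socket (the socket path below is only an example; use whatever
your "stats socket" directive points at):

# one line per metric, each annotated with its type and aggregation rules
echo "show stat typed" | socat unix-connect:/var/run/haproxy.sock stdio

# the classic CSV output, for comparison
echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio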



New Open source SNMP MIB and agent for HAProxy:

2020-08-26 Thread Malcolm Turnbull
One of our best Geeks Peter Statham has put a fair bit of work into a
new agent and MIB for HAProxy to make the lives of some of our
customers easier...

He's somewhat nervous that the community might rip his code to shreds...
So please be nice with your comments!

But it would be great to get constructive feedback + any tips and
advice on the best monitoring/graphing tools to use with HAProxy SNMP
data:

https://www.loadbalancer.org/blog/loadbalancer-org-releases-open-source-snmp-mib-and-agent-for-haproxy/

Please don't hesitate to ask questions straight on the blog comment section...

And while I'm in semi-advertising mode, the other useful open source
tool we have is our Windows GUI/Service feedback agent:

https://www.loadbalancer.org/blog/open-source-windows-service-for-reporting-server-load-back-to-haproxy-load-balancer-feedback-agent/

It's particularly useful in Remote Desktop server scenarios...



-- 
Malcolm Turnbull

Loadbalancer.org Ltd.
www.loadbalancer.org
+44 (0)330 380 1064



Just a quick update on our Windows Feedback Agent-Check for HAProxy

2018-11-09 Thread Malcolm Turnbull
Hi,

I'm not sure how many people use the feedback functionality in HAProxy,
but I just wanted to let anyone interested know that we've written a new
version (and why).

The HAProxy server feedback mechanism is compatible with the
LVS/Ldirectord one (also open source).

With our previous open source Windows-based agent we had some annoying
performance issues under load (the Windows WMI library, etc.).
The new Go agent is MUCH faster, so we're very happy with it.

We keep this (somewhat messy) blog updated with information about the
Windows feedback agent:
https://www.loadbalancer.org/blog/open-source-windows-service-for-reporting-server-load-back-to-haproxy-load-balancer-feedback-agent/

All the source code for both versions is available on Github:
NEW:
https://github.com/loadbalancer-org/go-feedback-agent
OLD:
https://github.com/loadbalancer-org/windows_feedback_agent

Most of our Linux users just build a custom agent by hand (it's not hard...).
I've also heard of Herald:
https://medium.com/helpshift-engineering/herald-haproxy-load-feedback-and-check-agent-1b8749a13f02
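
For anyone who wants to roll their own: the agent protocol is just a short
ASCII reply on a TCP port that HAProxy polls - a percentage (which scales the
server's weight) and/or keywords such as "up", "down" or "drain". A minimal
sketch of a hand-rolled Linux agent, using socat as the listener (the port,
path and load-based formula are purely illustrative):

#!/bin/bash
# /usr/local/bin/report-load.sh - print a weight percentage derived from the
# 1-minute load average; a real agent would use whatever metric suits the app.
cores=$(nproc)
load=$(cut -d' ' -f1 /proc/loadavg)
# 100% when idle, approaching 1% as the load average reaches the core count
# (never report 0%, which some HAProxy versions treat as a drain request)
pct=$(awk -v l="$load" -v c="$cores" 'BEGIN { p = int(100 - (l / c) * 100); if (p < 1) p = 1; print p }')
echo "${pct}%"

Run it with something like:

socat TCP4-LISTEN:3333,fork,reuseaddr EXEC:/usr/local/bin/report-load.sh

and point the server line at it:

server web1 192.168.0.10:80 weight 100 check agent-check agent-port 3333 agent-inter 5s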

Does anyone know of any other options for server-side agents?


Thanks,

Malcolm Turnbull

Loadbalancer.org Ltd.

www.loadbalancer.org
 +44 (0)330 380 1064



Re: Haproxy client ip

2018-06-25 Thread Malcolm Turnbull
Daniel,

Yes, that's expected :-).

It normally scares me when people say they are going to use TPROXY...
It's awesome but needs a bit of thought to implement properly.

This blog may help; it's a bit old, so ignore the kernel stuff - you
don't need it any more:

https://www.loadbalancer.org/blog/configure-haproxy-with-tproxy-kernel-for-full-transparent-proxy/
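
In short, the HAProxy side is just a "source" line in the backend, and the
rest is routing: the real servers must use the load balancer as their default
gateway, and the load balancer needs a policy route for the intercepted
traffic. A rough sketch (addresses are illustrative):

backend web-servers
    mode http
    source 0.0.0.0 usesrc clientip
    server web1 10.0.0.10:80 check
    server web2 10.0.0.11:80 check

# on the load balancer itself
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100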






On 25 June 2018 at 17:59, Daniel Augusto Esteves
 wrote:
> Hi
>
>
> When configuring source 0.0.0.0 usesrc clientip the backend stops
> responding.
>
>
> Best Regards
>
> Daniel
>
>
>
> 
> De: Daniel Augusto Esteves 
> Enviado: segunda-feira, 25 de junho de 2018 08:37
> Para: Jarno Huuskonen; simos.li...@googlemail.com
> Cc: haproxy@formilux.org
> Assunto: Re: Haproxy client ip
>
> Thank you for the tips guys.
>
>
> Obter o Outlook para Android
>
> 
> From: Jarno Huuskonen 
> Sent: Monday, June 25, 2018 8:24:11 AM
> To: Daniel Augusto Esteves
> Cc: haproxy@formilux.org
> Subject: Re: Haproxy client ip
>
> Hi,
>
> On Mon, Jun 25, Simos Xenitellis wrote:
>> On Sat, Jun 23, 2018 at 1:43 AM, Daniel Augusto Esteves
>>  wrote:
>> > Hi
>> >
>> > I am setting up haproxy with keepalived and i need to know if is
>> > possible
>> > pass client ip for destination log server using haproxy in tcp mode?
>> >
>>
>> That can be done with the "proxy protocol". See more at
>> https://www.haproxy.com/blog/haproxy/proxy-protocol/
>
> There's also source usesrc clientip:
> http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-source
> if your backend servers don't support proxy-protocol.
>
> -Jarno
>
> --
> Jarno Huuskonen
>



Re: WAF with HA Proxy.

2018-05-09 Thread Malcolm Turnbull
Dhaval,

As far as I'm concerned, almost everyone on the planet uses mod_security...
But most use it with Apache & some use it with Nginx...
So you can either put it on all of your web servers...
Or put it in front of HAProxy...
Or make an HAProxy[1] sandwich (which is what we do at Loadbalancer.org[2]).

[1] 
https://www.haproxy.com/blog/scalable-waf-protection-with-haproxy-and-apache-with-modsecurity/
[2] 
https://www.loadbalancer.org/blog/blocking-invalid-range-headers-using-modsecurity-and-haproxy-ms15-034-cve-2015-1635/
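
The "sandwich" boils down to an HAProxy tier that load balances a pool of
Apache + mod_security reverse proxies, which in turn forward the cleaned
traffic on to the real web servers (or to a second HAProxy tier). A minimal
sketch of the front tier, with purely illustrative addresses:

frontend www
    bind 192.168.0.1:80
    mode http
    default_backend waf-tier

backend waf-tier
    mode http
    balance leastconn
    # each of these is an Apache + mod_security instance proxying to the apps
    server waf1 192.168.0.21:80 check
    server waf2 192.168.0.22:80 check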


Malcolm Turnbull

Loadbalancer.org Ltd.

www.loadbalancer.org

 +44 (0)330 380 1064
malc...@loadbalancer.org




On 9 May 2018 at 19:21, DHAVAL JAISWAL <dhava...@gmail.com> wrote:
> Looking for open source.
>
> On Wed, May 9, 2018 at 11:10 PM, Mark Lakes <mla...@signalsciences.com>
> wrote:
>>
>> For commercial purposes, see Signal Sciences Next Gen WAF solution:
>> https://www.signalsciences.com/waf-web-application-firewall/
>>
>>
>>
>> Mark Lakes
>> Sr Software Engineer
>> (555) 555-
>> Winner: InfoWorld Technology of the Year 2018
>>
>>
>> On Wed, May 9, 2018 at 2:23 AM, DHAVAL JAISWAL <dhava...@gmail.com> wrote:
>>>
>>> I am looking for WAF solution with HA Proxy.
>>>
>>> One which I come to know is with HA Proxy version 1.8.8 + mode security.
>>> However, I feel its still on early stage.
>>>
>>> Any other recommendation for WAF with HA Proxy.
>>>
>>>
>>> --
>>> Thanks & Regards
>>> Dhaval Jaiswal
>>
>>
>
>
>
> --
> Thanks & Regards
> Dhaval Jaiswal



Re: Scaling HAProxy over multiple cores with session stickyness

2017-09-28 Thread Malcolm Turnbull
Peter,

If you can use cookies, then don't track session cookies with a stick
table - just put the SERVERID in the cookie, i.e.:
cookie SERVERID insert indirect nocache
That will work fine with nbproc > 1.
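
A minimal sketch of that cookie-based approach (server names and addresses
are illustrative):

backend app
    balance leastconn
    cookie SERVERID insert indirect nocache
    server web1 192.168.0.11:80 check cookie web1
    server web2 192.168.0.12:80 check cookie web2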

Or if you really want a stick table, then move the IRQs to different
core(s) from the one HAProxy runs on.
You should be able to get A LOT of traffic through HAProxy on one core
if you move the IRQ handling away...




On 28 September 2017 at 16:05, Peter Kenens  wrote:
> Thanks, for pointing me to the replacement sequence, however I still seem to
> miss an element.
>
> Whenever I put in my backend
>
> stick-table type string len 52 size 2m expire 3h
> stick on req.cook(JSESSIONID)
> stick store-response res.cook(JSESSIONID)
>
> I have session stickyness as long as nbproc is 1. Whenever I increase this,
> I get a warning on "stick on" that this does not work correctly in multi
> process mode. Also the documentation says this. What would be the correct
> way to achieve this with nbproc > 1?
>
> Thanks in advance,
> Peter
>



Any more detail on the plan for multi-threading and or multi-process support?

2017-07-04 Thread Malcolm Turnbull
It looks like some exciting stuff is on the way in haproxy-1.8-dev!

Could I ask for a bit more detail on the plans for multi-threading or
multi-process options?
For the vast majority of HTTP applications, binding one cluster to one
front end & process is fine.

But for high-traffic sites on newer hardware (think 24 cores at 2 GHz,
not 4 cores at 4 GHz) you can quickly run into performance issues.

Will it be possible for one front end to access one stick table - and
yet use multiple threads or processes?

And if it is planned, we'd be very interested in helping with testing
and/or code.

Sorry if this has already been discussed and I missed the conversation.


--
Regards,

Malcolm Turnbull

Loadbalancer.org Ltd.
 +44 (0)330 380 1064



Re: Reverse Gateway Through Security Zones

2017-06-23 Thread Malcolm Turnbull
Lukas,

Ha, I like the comment about DMZs being a concept from 1999 :-).
Sorry if I'm going slightly off topic.
We put a comic-style picture at the bottom of this blog, captioned “Our
DMZ is so secure we can’t even get into it!”:
https://www.loadbalancer.org/blog/what-exactly-is-a-reverse-proxy
I find people are constantly trying to 'work around the DMZ' rather
than just getting rid of it.
And don't get me started on bridges:
https://www.loadbalancer.org/blog/transparent-vs-explicit-proxy-which-method-should-i-use#bridge-mode


Malcolm Turnbull

Loadbalancer.org Ltd.

www.loadbalancer.org

 +44 (0)330 380 1064
malc...@loadbalancer.org



On 23 June 2017 at 00:05, Lukas Tribus <lu...@gmx.net> wrote:
> Hello Himer,
>
>
> this is probably not the response you wanna hear ...
>
>
>
> On 22.06.2017 at 22:47, Himer Martinez wrote:
>> Hello Guys,
>>
>> Sorry to bother you with my specific questions :-)
>>
>> Let's imagine a paranoid security team who forbid HTTP and TCP flows
>> between the DMZ zone and the green zone. They estimate that if a hacker can
>> take control of the DMZ zone server then they can access the green zone from
>> that server, so flows going from the DMZ zone to the green zone are
>> forbidden and blocked by network firewalls.
>>
>> First idea : So what I need is to create something like a reverse tunnel 
>> between the green zone and HAProxy,
>
> Clarification: what you or your security team is saying is:
>
> - a DMZ host establishing a TCP connection to the green zone
>   is insecure (even if the only open port is HTTP)
> - a green zone host establishing whatever bidirectional connections
>   to the DMZ servers is secure
>
> Is that a correct interpretation?
>
>
>
>>
>> (requests are going from the dmz zone to the green zone with a reverse 
>> connection)
>
> So by reverse tunneling you basically circumvent your
> firewalls and any security policies that may be in place.
>
> You are opening the "DMZ --> Green Zone" path, just in a less
> direct way, and most likely without or with less considerations
> regarding security.
>
>
>
>> Forbidden :
>> Internet --> DMZ --> Green Zone
>>
>> Authorized :
>> Internet --> DMZ <--- Green Zone
>
> This is a ridiculous concept. DMZ needs Green Zone data, either move
> your Green Zone hosts into the DMZ or make the service you need
> reachable (considering security aspects, of course).
>
> By reverse-tunneling you don't gain any security advantage, instead, you
> are over complicating your setup, bypassing most likely restrictive firewalls,
> opening an attack surface you are not considering.
>
>
>
>> First idea : So what I need is to create something like a reverse tunnel 
>> between the green zone and HAProxy,
>
> What you need to do is analyze your *REAL* requirements, from a security 
> perspective
> and otherwise, and then build a concept around it.
>
> Instead you are slamming a 1999 "perimeter security" concept on your network 
> which
> doesn't match your requirements and are now trying to circumvent the 
> perimeters,
> because otherwise you are unable to provide whatever service you need to run.
>
>
> Now to the part that you do wanna hear:
>
> How can one best bypass a perimeter firewall that is blocking one direction
> of traffic but not the other? Use any VPN that you are familiar with, as that
> is exactly what they are built for. OpenVPN, strongSwan, etc.
>
>
>
> cheers,
> lukas
>
>
>
>



Would you be interested in helping us integrate Alexa voice control into HAProxy?

2017-04-01 Thread Malcolm Turnbull
Willy et al.

Would you be interested in helping us integrate Alexa voice control
into HAProxy?
Maybe we could re-write the whole thing in LUA?

Initially I was sceptical but I'm now really impressed with what our
customers are achieving with our new product:

https://www.loadbalancer.org/blog/5988
https://www.loadbalancer.org/products/hardware/enterprise-val

It's awesome: 92.3% of the time it does exactly what you ask, and you
can also control the office music with it!

I hope you are as excited as I am by the possibilities.




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Malcolm Turnbull
Georg,

That's a timely reminder, thanks.
I just had another chat with Simon Horman, who has kindly offered to
take a look at this again.
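
(For anyone finding this thread later: the work did land, and HAProxy 1.7
onwards can emit JSON directly from the stats socket, e.g.:

echo "show stat json" | socat unix-connect:/var/run/haproxy.sock stdio
echo "show schema json" | socat unix-connect:/var/run/haproxy.sock stdio

The schema command describes the structure of the JSON stat output.)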




On 10 November 2016 at 10:54, ge...@riseup.net <ge...@riseup.net> wrote:
> Hi all,
>
> On 16-07-05 10:05:13, Mark Brookes wrote:
>> I wondered if we could start a discussion about the possibility of
>> having the stats socket return stats data in JSON format.
>
> After the discussion we had in July, I'm wondering what's the current
> status regarding this topic?
>
> Thanks and all the best,
> Georg



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: SSL acceleration

2016-01-30 Thread Malcolm Turnbull
Now this is where I probably look stupid, but...

Am I correct in stating that AES-NI is only really useful for file
encryption... and of bugger all use for HTTPS/SSL encryption (which is
what we really want)?

Very happy to be told I'm wrong, because it would be great if it were.





On 29 January 2016 at 18:21, Björn Zettergren
<bjorn.zetterg...@deltaprojects.com> wrote:
> Hi Eric,
>
> If you use a hardware device supported by openssl library you'll have
> hardware acceleration, for example AES-NI extension is available on
> recent cpu's and recent versions of openssl.
>
> I don't know about your Coleto creek device, but i'm sure you can
> check with openssl :)
>
> /Björn
>
>
>
> On Fri, Jan 29, 2016 at 5:56 PM, Eric Chan <eric.c...@fireeye.com> wrote:
>> Hi HAproxy team,
>>
>>
>>
>> Is there a plan to add HW acceleration to your SSL proxy?
>>
>> I am thinking of using HAproxy with Intel Coleto Creek in asynchronous mode,
>> wonder if anyone has done the patch work that needs to make that work.
>>
>>
>>
>> Thanks,
>>
>> Eric
>>
>



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: TCP connections rdp

2015-11-20 Thread Malcolm Turnbull
Douglas,

I'm a bit confused on the question but an example Session Broker
Config would be:

listen Test
bind 192.168.64.28:3389 transparent
mode tcp
balance leastconn
persist rdp-cookie
server backup 192.168.64.15:3389 backup  non-stick
timeout client 12h
timeout server 12h
tcp-request inspect-delay 5s
tcp-request content reject if { req_ssl_hello_type 1 }
tcp-request content accept if RDP_COOKIE
option tcpka
option redispatch
option abortonclose
maxconn 4
server Accounts 192.168.64.15:3389 weight 100 check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions


And a config for using the Microsoft RDP client cookie would be:

listen Test
bind 192.168.64.28:3389 transparent
mode tcp
balance leastconn
server backup 192.168.64.15:3389 backup  non-stick
timeout client 12h
timeout server 12h
tcp-request inspect-delay 5s
tcp-request content reject if { req_ssl_hello_type 1 }
tcp-request content accept if RDP_COOKIE
stick-table type string size 10240k expire 30m
stick on rdp_cookie(mstshash) upper
stick on src
option tcpka
option redispatch
option abortonclose
maxconn 4
server Accounts 192.168.64.15:3389 weight 100 check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions

More information here:
http://www.haproxy.com/doc/aloha/7.0/deployment_guides/microsoft_remote_desktop_services.html
and here:
http://www.loadbalancer.org/blog/category/microsoft-terminal-services-blog-posts


On 20 November 2015 at 10:43, Douglas Fabiano Specht
<douglasfabi...@gmail.com> wrote:
> Hello..
> I'm actually using the config below, but I'm not managing to capture the logs
> and analyze them in HTML.
>
> listen tse-farm
> bind 0.0.0.0:3389
> balance leastconn
> persist rdp-cookie
> timeout server 1h
> timeout client 1h
> timeout connect 4s
> tcp-request inspect-delay 5s
> tcp-request content accept if RDP_COOKIE
> persist rdp-cookie
> stick-table type string size 204800
> stick on req.rdp_cookie(mstshash)
> server srv1 10.240.0.3:3389
> server srv2 10.240.0.4:3389
>
>
> 2015-11-20 5:38 GMT-02:00 Malcolm Turnbull <malc...@loadbalancer.org>:
>>
>> Douglas,
>>
>> Why not use RDP cookies or Session Broker support with HAProxy?:
>>
>> http://www.loadbalancer.org/blog/load-balancing-windows-terminal-server-haproxy-and-rdp-cookies
>>
>>
>>
>>
>> On 19 November 2015 at 15:08, Douglas Fabiano Specht
>> <douglasfabi...@gmail.com> wrote:
>> > staff,
>> > who would have a sample configuration file to use TCP connections on
>> > port
>> > 3389 RDP and could share?
>> >
>> > --
>> >
>> > Douglas Fabiano Specht
>>
>>
>>
>> --
>> Regards,
>>
>> Malcolm Turnbull.
>>
>> Loadbalancer.org Ltd.
>> Phone: +44 (0)330 1604540
>> http://www.loadbalancer.org/
>
>
>
>
> --
>
> Douglas Fabiano Specht



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: TCP connections rdp

2015-11-20 Thread Malcolm Turnbull
Douglas,

The stats page will only show the number of connections and which
server they are going to...
I think you mean you want to see the contents of the stick table i.e.:

echo "show table VIP1" | socat unix-connect:/var/run/haproxy.stat stdio
# table: VIP1, type: ip, size:10485760, used:1
0x113ce14: key=192.168.0.1 use=0 exp=812299 server_id=2

That would show the RDP cookie (i.e. the user name) and the server it
is attached to.



On 20 November 2015 at 13:40, Douglas Fabiano Specht
<douglasfabi...@gmail.com> wrote:
> Hello..
> thanks for the answer.
> I would like to monitor the User connected on port 3389, tried as it is
> below, but I could not make it work.
>
> listen stats
> bind: 
> enable stats
> connect timeout 4000
> client timeout 42000
> server timeout 43000
> stats uri /
> option httpclose
> stats auth loadbalancer: loadbalancer
>
> 2015-11-20 9:17 GMT-02:00 Malcolm Turnbull <malc...@loadbalancer.org>:
>>
>> Douglas,
>>
>> I'm a bit confused on the question but an example Session Broker
>> Config would be:
>>
>> listen Test
>> bind 192.168.64.28:3389 transparent
>> mode tcp
>> balance leastconn
>> persist rdp-cookie
>> server backup 192.168.64.15:3389 backup  non-stick
>> timeout client 12h
>> timeout server 12h
>> tcp-request inspect-delay 5s
>> tcp-request content reject if { req_ssl_hello_type 1 }
>> tcp-request content accept if RDP_COOKIE
>> option tcpka
>> option redispatch
>> option abortonclose
>> maxconn 4
>> server Accounts 192.168.64.15:3389  weight 100  check port 80
>> inter 4000  rise 2  fall 2  minconn 0  maxconn 0  on-marked-down
>> shutdown-sessions
>>
>>
>> And a config for using the Microsoft RDP client  cookie would be :
>>
>> listen Test
>> bind 192.168.64.28:3389 transparent
>> mode tcp
>> balance leastconn
>> server backup 192.168.64.15:3389 backup  non-stick
>> timeout client 12h
>> timeout server 12h
>> tcp-request inspect-delay 5s
>> tcp-request content reject if { req_ssl_hello_type 1 }
>> tcp-request content accept if RDP_COOKIE
>> stick-table type string size 10240k expire 30m
>> stick on rdp_cookie(mstshash) upper
>> stick on src
>> option tcpka
>> option redispatch
>> option abortonclose
>> maxconn 4
>> server Accounts 192.168.64.15:3389  weight 100  check port 80
>> inter 4000  rise 2  fall 2  minconn 0  maxconn 0  on-marked-down
>> shutdown-sessions
>>
>> More information here:
>>
>> http://www.haproxy.com/doc/aloha/7.0/deployment_guides/microsoft_remote_desktop_services.html
>> and here:
>>
>> http://www.loadbalancer.org/blog/category/microsoft-terminal-services-blog-posts
>>
>>
>> On 20 November 2015 at 10:43, Douglas Fabiano Specht
>> <douglasfabi...@gmail.com> wrote:
>> > Hello..
>> > I'm actually using this down, but I'm not coseguindo set to capture the
>> > logs
>> > and analyze them by html.
>> >
>> > listen tse-farm
>> > bind 0.0.0.0:3389
>> > balance leastconn
>> > persist rdp-cookie
>> > timeout server 1h
>> > timeout client 1h
>> > timeout connect 4s
>> > tcp-request inspect-delay 5s
>> > tcp-request content accept if RDP_COOKIE
>> > persist rdp-cookie
>> > stick-table type string size 204800
>> > stick on req.rdp_cookie(mstshash)
>> > server srv1 10.240.0.3:3389
>> > server srv2 10.240.0.4:3389
>> >
>> >
>> > 2015-11-20 5:38 GMT-02:00 Malcolm Turnbull <malc...@loadbalancer.org>:
>> >>
>> >> Douglas,
>> >>
>> >> Why not use RDP cookies or Session Broker support with HAProxy?:
>> >>
>> >>
>> >> http://www.loadbalancer.org/blog/load-balancing-windows-terminal-server-haproxy-and-rdp-cookies
>> >>
>> >>
>> >>
>> >>
>> >> On 19 November 2015 at 15:08, Douglas Fabiano Specht
>> >> <douglasfabi...@gmail.com> wrote:
>> >> > staff,
>> >> > who would have a sample configuration file to use TCP connections on
>> >> > port
>> >> > 3389 RDP and could share?
>> >> >
>> >> > --
>> >> >
>> >> > Douglas Fabiano Specht
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >>
>> >> Malcolm Turnbull.
>> >>
>> >> Loadbalancer.org Ltd.
>> >> Phone: +44 (0)330 1604540
>> >> http://www.loadbalancer.org/
>> >
>> >
>> >
>> >
>> > --
>> >
>> > Douglas Fabiano Specht
>>
>>
>>
>> --
>> Regards,
>>
>> Malcolm Turnbull.
>>
>> Loadbalancer.org Ltd.
>> Phone: +44 (0)330 380 1064
>> http://www.loadbalancer.org/
>
>
>
>
> --
>
> Douglas Fabiano Specht



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: SSL and Piranha conversion

2015-09-08 Thread Malcolm Turnbull
Piranha is a front end for LVS (layer 4 load balancing),
so I'm assuming that all your Piranha box was doing was forwarding
ports 443 & 80 to your two servers...

So just set up HAProxy in TCP mode for ports 80 & 443.

Test it, and then when you are happy point your DNS at it.
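
A bare-bones starting point would look something like this (addresses and
names are only placeholders for your own):

listen http-in
    bind 0.0.0.0:80
    mode tcp
    balance leastconn
    server web1 192.168.0.11:80 check
    server web2 192.168.0.12:80 check

listen https-in
    bind 0.0.0.0:443
    mode tcp
    balance leastconn
    server web1 192.168.0.11:443 check
    server web2 192.168.0.12:443 check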



On 8 September 2015 at 20:23, Jonathan Matthews <cont...@jpluscplusm.com> wrote:
> On 8 Sep 2015 20:07, "Daniel Zenczak" <dani...@zoosociety.org> wrote:
>>
>> Hello All,
>>
>> First time caller, short time listener. So this is the
>> deal.  My organization was running a CentOS box with Piranha on it to work
>> as our load balancer between our two web servers.  Well the CentOS box was a
>> Gateway workstation from 2000 and it finally gave up the ghost.
>
> May I suggest you reconsider migrating your hardware and software at the
> same time, both whilst under pressure? It will be massively simpler to
> install your preexisting choice of (known "good") software on your new
> hardware.
>
> Jonathan



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: SSL and Piranha conversion

2015-09-08 Thread Malcolm Turnbull
Daniel,

All load balancers work in roughly the same way:

You have a Virtual IP on the load balancer that the clients talk to,
and the load balancer is configured to talk to multiple Real
IPs/Backend Servers.
Your old config probably had one VIP for HTTP and one for HTTPS.

HAProxy is very easy to use, but you will need to read the manual or one of
the many blogs explaining how to use it.
Once you have studied / installed / configured / tested / understood it,
then if necessary come back to the list for help.



On 8 September 2015 at 20:59, Daniel Zenczak <dani...@zoosociety.org> wrote:
> Malcolm,
> The Piranha gui had some configurations about Virtual IPs and I am 
> not sure how that works or how it is different than HAProxy.  The firewall 
> had some rules that pointed website requests to the virtual ips.
>
> Daniel
> -Original Message-----
> From: Malcolm Turnbull [mailto:malc...@loadbalancer.org]
> Sent: Tuesday, September 8, 2015 2:55 PM
> To: Jonathan Matthews <cont...@jpluscplusm.com>
> Cc: Daniel Zenczak <dani...@zoosociety.org>; haproxy <haproxy@formilux.org>
> Subject: Re: SSL and Piranha conversion
>
> Piranha is a front end for LVS (layer 4 load balancing)
> So I'm assuming that all your Piranha box was doing was forwarding port 443 & 
> 80 to your two servers...
>
> So just set up HAProxy in TCP mode for port 80 & 443.
>
> Test it , and then when you are happy point your DNS at it.
>
>
>
> On 8 September 2015 at 20:23, Jonathan Matthews <cont...@jpluscplusm.com> 
> wrote:
>> On 8 Sep 2015 20:07, "Daniel Zenczak" <dani...@zoosociety.org> wrote:
>>>
>>> Hello All,
>>>
>>> First time caller, short time listener. So this is
>>> the deal.  My organization was running a CentOS box with Piranha on
>>> it to work as our load balancer between our two web servers.  Well
>>> the CentOS box was a Gateway workstation from 2000 and it finally gave up 
>>> the ghost.
>>
>> May I suggest you reconsider migrating your hardware and software at
>> the same time, both whilst under pressure? It will be massively
>> simpler to install your preexisting choice of (known "good") software
>> on your new hardware.
>>
>> Jonathan
>
>
>
> --
> Regards,
>
> Malcolm Turnbull.
>
> Loadbalancer.org Ltd.
> Phone: +44 (0)330 1604540
> http://www.loadbalancer.org/



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: SPICE Proxy with haproxy

2015-06-10 Thread Malcolm Turnbull
Kevin,

Simply remove the port and HAProxy will use the original one:

server OVIR1 172.20.69.21 weight 10
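
Putting that together with a port range on the bind line (reusing the
addresses from your snippet; an untested sketch):

frontend fe_spice_proxy
    bind 172.18.1.99:5900-6123
    mode tcp
    option tcpka
    default_backend bk_OVIR

backend bk_OVIR
    mode tcp
    option tcpka
    balance roundrobin
    server OVIR1 172.20.69.21 weight 10 check
    server OVIR2 172.20.69.22 weight 10 check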



On 10 June 2015 at 09:29, Kevin C ki...@kiven.fr wrote:
 Hi list,

 Is it possible to use HAproxy instead of Squid for a SPICE Proxy (I already
 use Haproxy on this server, I'd rather avoir to install Squid) ?

 I try this

  oVirt  +SPICE
 frontend fe_spice_proxy
 bind 172.18.1.99:8080
 #bind 172.18.1.99:5900-6123
 option tcpka
 default_backend bk_OVIR
 ##
 backend bk_OVIR
 option tcpka
 balance roundrobin
 server OVIR1 172.20.69.21:5900-6123 weight 10
 server OVIR2 172.20.69.22:5900-6123 weight 10


 But it seems I can't set a port range in the server directive. Somebody have
 an idea how  can I setup ?

 Thanks a lot
 --
 Kevin




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: [PATCH 1/2] MEDIUM: Only report drain state in stats if server has SRV_ADMF_DRAIN set

2015-04-09 Thread Malcolm Turnbull
Willy / Simon,

I think I've got a bit confused myself -

I've just installed the patches and tested them, and they fix the issues we
were seeing with HAProxy getting stuck in DRAIN mode when the agent
temporarily responds with 0% idle;
i.e. when the load on the server decreases, HAProxy should
automatically start passing traffic again.

It would however probably make sense to keep the current behaviour of
having a 0 weight highlighted in blue (but not having the DRAIN state
activated).
I think it is potentially important to know that the DRAIN state or
AGENT DRAIN state is different from just weight=0 and UP.

I'm happy with any colours, basically, as long as just setting a 0 weight
from the agent doesn't force agent drain.

Does that make any sense?
And sorry if I have caused even more confusion!




On 9 April 2015 at 12:42, Willy Tarreau w...@1wt.eu wrote:
 Hi Simon!

 On Thu, Apr 09, 2015 at 03:47:13PM +0900, Simon Horman wrote:
 There are some similarities between a weight of zero and the
 administratively set drain state: both allow existing connections
 to continue while not accepting any new ones.

 However, when reporting a server state generally a distinction is made
 between state=UP,weight=0 and state=DRAIN,weight=*. This patch makes
 stats reporting consistent in this regard.

 Just to be sure, doesn't it undo what the two following patches tried to
 do :

   commit cc8bb92f32128a603d4206f066e99944e4049681
   Author: Geoff Bucar viralb...@gmail.com
   Date:   Thu Apr 18 13:53:16 2013 -0700

 MINOR: stats: show soft-stopped servers in different color

 A soft-stopped server (weight 0) can be missed easily in the
 webinterface. Fix this by using a specific class (and color).

 and :

   commit 6b7764a983a7dd97af6a5398da40c63353698328
   Author: Willy Tarreau w...@1wt.eu
   Date:   Wed Dec 4 00:43:21 2013 +0100

 MINOR: stats: remove some confusion between the DRAIN state and NOLB

 We now have to report 2 conflicting information on the stats page :
   - NOLB  = server which returns 404 and stops load balancing ;
   - DRAIN = server with a weight forced to zero

 The DRAIN state was previously detected from eweight==0 and represented in
 blue so that a temporarily disabled server was noticed. This was done by
 commit cc8bb92 (MINOR: stats: show soft-stopped servers in different 
 color).
 This choice suffered from a small defect however, which is that a server
 with a zero weight was reported in this color whatever its state (even 
 down
 or switching).

 Also, one of the motivations for the color above was because the NOLB 
 state
 is barely detectable as it's very close to the UP state.

 Since commit 8c3d0be (MEDIUM: Add DRAIN state and report it on the stats 
 page),
 we have the new DRAIN state to show servers with a zero weight. The 
 colors are
 unfortunately very close to those of the MAINT state, and some users were
 confused by the disappearance of the blue bars.

 ?

 It seems important for many users that weight=0 is clearly visible on
 the stats page, which I tend to agree with. I agree that reporting the
 word DRAIN is not suited, but functionally it's exactly the same : the
 server doesn't take new traffic anymore. So for people heavily relying on
 the stats page to monitor their server states during deployments, it seems
 important to me that the color remains at least similar. I suspect that
 the method we used to have with both the color and the same being reported
 based on srvstate is a bit outdated now, and maybe we should try to address
 this first (eg: by using a function to get the color to report and another
 one to report the description).

 If your patch maintains the blue line for weight==0, then please ignore my
 comment above.

 Best regards,
 Willy




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: Complete rewrite of HAProxy in Lua

2015-04-01 Thread Malcolm Turnbull
Ok. My immediate thought was..
Oh crap we are going to have to fork haproxy and hire loads of C developers!

Then when someone mentioned what day it was I felt incredible relief :-).

Nice joke. Well executed.

On 1 Apr 2015 10:06, Baptiste bed...@gmail.com wrote:

 I'll have to find a way to code buffer overflows in LUA!

 Baptiste




Re: Agent-check not working with backend HTTPS

2015-04-01 Thread Malcolm Turnbull
Claudio,

I just tested this on HAProxy 1.6-dev0 and the bug is fixed (along
with several others)...
Someone spotted a few months ago that an SSL re-encrypted
real server would (incorrectly) force agent checks to HTTPS.




On 1 April 2015 at 16:21, Claudio Ruggieri
claudio.ruggi...@inetworking.it wrote:
 I check with tcpdump: it seems that agent-check in the https backend try to 
 do a SSL connection.
 My agent is a simple TCP socket without SSL.

 However I managed to open an SSL socket, but I still have errors:
 E SSL_accept(): error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer 
 did not return a certificate

 But I'm not interested in having a secure connection for agent check. It is 
 possible to disable SSL and have a simple tcp connection?

 Thank you
 Bye

 -Messaggio originale-
 Da: Baptiste [mailto:bed...@gmail.com]
 Inviato: mercoledì 1 aprile 2015 16.48
 A: Claudio Ruggieri
 Cc: haproxy@formilux.org
 Oggetto: Re: Agent-check not working with backend HTTPS

 On Wed, Apr 1, 2015 at 4:13 PM, Claudio Ruggieri 
 claudio.ruggi...@inetworking.it wrote:
 Hi all,

 I have a problem with agent-check, in my haproxy installation.

 Ubuntu Server 14.04 LTS with haproxy 1.5.3-1~ubuntu14.04.1



 HAProxy is configured with 2 backends: one http e one https.

 Agent-check is a script bash that simply return a percentage.



 HTTP backend works fine. HTTPS backend doesn't work. In the web
 Statistic Report I see no weight is updated and I don't have errors in log.



 This is the HTTPS backend configuration:



 backend application-https

 description HTTPS Application backend

 cookie SRV insert indirect maxidle 24h maxlife 24h



 server rp1-test-https 192.168.170.181:443 maxconn 100 weight
 100 fall 2 rise 2 check inter 2s agent-check agent-port 4321
 agent-inter 5s cookie rp1-test-https ssl verify none

 server rp2-test-https 192.168.170.182:443 maxconn 100 weight
 100 fall 2 rise 2 check inter 2s agent-check agent-port 4321
 agent-inter 5s cookie rp2-test-https ssl verify none



 Any idea?


 Hi Claudio,

 What does a tcpdump on port 4321 tells you?
 and what type of content do you see from the server to haproxy in the packet 
 captured?

 Baptiste




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: Stop new connections from hitting nodes

2014-12-05 Thread Malcolm Turnbull
What's wrong with just setting the weight to 0?
i.e.

echo "set weight VIP_Name/rip2 0" | socat unix-connect:/var/run/haproxy.stat stdio
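
You can check the current value first with "get weight", and put the server
back into rotation later by restoring its original weight (100 here is just
an example):

echo "get weight VIP_Name/rip2" | socat unix-connect:/var/run/haproxy.stat stdio
echo "set weight VIP_Name/rip2 100" | socat unix-connect:/var/run/haproxy.stat stdio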


On 5 December 2014 at 17:17, Nenad Merdanovic ni...@nimzo.info wrote:
 Hello Jakov,

 On 12/05/2014 06:02 PM, Jakov Sosic wrote:

 Hi guys

 I'm wondering what ways of stopping *new* connections from hitting
 backend nodes?




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: Better understanding of nbproc vs distributing interrupts for cpu load management

2014-11-25 Thread Malcolm Turnbull
Chris,

At Loadbalancer.org we did some fairly heavy load testing of HAProxy
on various bits of hardware and got excellent results with a
combination of irqbalance + iptables NOTRACK.
We tend to keep nbproc=1 unless that CPU is totally overloaded.

It's not the most efficient solution, but it is reliable/stable and easy
to configure.
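
The NOTRACK part is just a couple of raw-table rules so the balanced traffic
skips connection tracking entirely (ports are illustrative; newer iptables
spells the target "-j CT --notrack"):

iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK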

On 24 November 2014 at 16:46, Chris Allen ch...@cjx.com wrote:
 We have a load balancer running haproxy 1.5.8. At times of heavy load we're
 getting dangerously close
 to running out of CPU. I'd be really grateful for some definitive opinions
 on the relative merits
 of the two possible solutions detailed below - as I'm having trouble finding
 detailed and consistent
 information on them.


 1. Nbproc

 Our server has 8 cores. We have two primary frontends. We could run Nbproc=8
 assigning
 cores 1,2,3,4 to the first frontend and 5,6,7,8 to the other one. But
 several documents
 say that running Nbproc is not such a great idea (we totally understand the
 issue with
 stats and don't do admin socket updates or any of the dangerous stuff). If
 we're ok
 with that, is Nbproc a reasonable choice?

 Related to nbproc:

 - If we do run with nbproc, should we be pinning network interrupts or just
 letting irqbalance
 handle them? The loadbalancer has 3 bonded NICS generating 15 IRQs - but no
 particular NIC
 is related to any particular frontend.

 - If I bind a frontend to multiple processes, only one of those processes
 listens on the TCP
 port that I have specified. Yet the other processes seem to receive requests
 too. How does
 that work? And is the primary process any more loaded for doing the
 listening?



 2. Distributing interrupts

 One of our techops team found an old haproxy.com blog that suggested a good
 solution for
 balancing cpu load was simply by distributing network interrupts across
 available cores
 but leaving haproxy to run as a single process. We have tried this and
 indeed the *average*
 cpu load appears shared across all proceses - but logic tells me that
 haproxy must still be
 loading each cpu on which it runs and will saturate them (even though the
 overall average
 looks good). Can anybody comment on this approach as our team member is
 convinced
 he has a silver bullet for performance.


 Many thanks for any help/insight into this!








-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Can I insert a prefix cookie rather than read an existing one?

2014-11-14 Thread Malcolm Turnbull
I was just playing around with the configuration from the excellent
blog entry on e-commerce overload protection:
http://blog.haproxy.com/2012/09/19/application-delivery-controller-and-ecommerce-websites/

If you have a PHPSession or ASPsessionID cookie then you can track the
total number of users as follows:

listen L7-Test
bind 192.168.64.27:80 transparent
mode http
acl maxcapacity table_cnt ge 5000
acl knownuser hdr_sub(cookie) MYCOOK
http-request deny if maxcapacity !knownuser

stick-table type string len 32 size 10K expire 10m nopurge
stick store-response set-cookie(MYCOOK)
stick store-request cookie(MYCOOK)


balance leastconn
cookie SERVERID insert nocache
server backup 127.0.0.1:9081 backup non-stick
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 4
server Test1 192.168.64.12:80 weight 100 cookie Test1
server Test2 192.168.64.13:80 weight 100 cookie Test2

But what if you only have a single source IP (so you still want to use
cookies to track the usage AND stickiness) but the application doesn't
have its own unique session ID?

Can you do something like using gpc0 to store a random haproxy session
id (for overload) and yet still using cookie SERVERID for the
persistence?
Or using a big IP stick table even though all the IPs will be the same?

Or am I just being really stupid today, which is not unusual :-).

Thanks in advance.


-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: Can I insert a prefix cookie rather than read an existing one?

2014-11-14 Thread Malcolm Turnbull
Baptiste,

I was hoping that was not the case :-).

My main goal was to make it completely application agnostic; never
mind, I'll stick with the application cookie version.

Thanks very much.


On 14 November 2014 15:40, Baptiste bed...@gmail.com wrote:
 On Fri, Nov 14, 2014 at 1:01 PM, Malcolm Turnbull
 malc...@loadbalancer.org wrote:
 I was just playing around with the configuration from the excellent
 blog entry on e-commerce overload protection:
 http://blog.haproxy.com/2012/09/19/application-delivery-controller-and-ecommerce-websites/

 If you have a PHPSession or ASPsessionID cookie then you can track the
 total number of users as follows:

 listen L7-Test
 bind 192.168.64.27:80 transparent
 mode http
 acl maxcapacity table_cnt ge 5000
 acl knownuser hdr_sub(cookie) MYCOOK
 http-request deny if maxcapacity !knownuser

 stick-table type string len 32 size 10K expire 10m nopurge
 stick store-response set-cookie(MYCOOK)
 stick store-request cookie(MYCOOK)


 balance leastconn
 cookie SERVERID insert nocache
 server backup 127.0.0.1:9081 backup non-stick
 option http-keep-alive
 option forwardfor
 option redispatch
 option abortonclose
 maxconn 4
 server Test1 192.168.64.12:80 weight 100 cookie Test1
 server Test2 192.168.64.13:80 weight 100 cookie Test2

 But what if you only have a single source IP (so you still want to use
 cookies to track the usage AND stickyness) but the application doesn't
 have its own unique session id?

 Can you do something like using gpc0 to store a random haproxy session
 id (for overload) and yet still using cookie SERVERID for the
 persistence?
 Or using a big IP stick table even though all the IPs will be the same?

 Or am I just being really stupid today , which is not unusual :-).

 Thanks in advance.


 --
 Regards,

 Malcolm Turnbull.

 Loadbalancer.org Ltd.
 Phone: +44 (0)330 1604540
 http://www.loadbalancer.org/


 Hi Malcolm,

 I don't understand the question about the IP address.
 You don't use it at all in your conf, since you're using the cookie,
 which is a layer above.

 That said, the case you mentionned is very rare: all users behind a
 single IP and no cookie set by the application.
 Are you sure there is no X-Forwarded-For headers, or whatever other
 you could use to identify a user?

 There is no way for now in HAProxy to generate a random cookie...
 well, no clean way :)

 Baptiste



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Just had a thought about the poodle issue....

2014-10-18 Thread Malcolm Turnbull
I was thinking HAProxy could be used to block any non-TLS connection,
like you can with iptables:
https://blog.g3rt.nl/take-down-sslv3-using-iptables.html

However, for users trying to connect via IE6/7 etc. on XP, it would be nice
to display a friendly message like "please upgrade to a secure browser such
as Chrome or Firefox".

Is that easy to do?
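
Two rough ways to do it in the config (certificate path and addresses are
placeholders):

# hard block: refuse the SSLv3 handshake outright
frontend https-in
    bind :443 ssl crt /etc/haproxy/site.pem no-sslv3

# softer option: accept the handshake but steer SSLv3 clients to a page
# that tells them to upgrade
frontend https-in
    bind :443 ssl crt /etc/haproxy/site.pem
    acl sslv3 ssl_fc_protocol SSLv3
    use_backend upgrade-notice if sslv3
    default_backend web-servers

backend upgrade-notice
    # anything that serves a static "please upgrade your browser" page
    server notice 127.0.0.1:8080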




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: Just had a thought about the poodle issue....

2014-10-18 Thread Malcolm Turnbull
Doh!

I'm getting old... thanks :-).


On 18 October 2014 15:37, David Coulson da...@davidcoulson.net wrote:
 You mean like this?

 http://blog.haproxy.com/2014/10/15/haproxy-and-sslv3-poodle-vulnerability/



 On 10/18/14, 10:34 AM, Malcolm Turnbull wrote:

 I was thinking Haproxy could be used to block any non-TLS connection
 Like you can with iptables:
 https://blog.g3rt.nl/take-down-sslv3-using-iptables.html

 However it would be nice if you had users trying to connect via IE6/7
 etc on XP to display a nice message like, please upgrade to a secure
 browser chrome or firefox etc?

 Is that easy to do?








-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: connections to HaP

2014-09-15 Thread Malcolm Turnbull
Andrey,

Off the top of my head I think you need to set maxconn on each front
end (not just global).
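
If a frontend has no maxconn of its own it falls back to a small built-in
default (2000), which matches the ceiling HATop is showing you. Something
along these lines (values are illustrative, not a recommendation):

global
    maxconn 40000

defaults
    maxconn 10000

frontend fe_www
    bind :80
    maxconn 10000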


On 15 September 2014 12:28, Andrey Zakabluk a.zakab...@velcom.by wrote:
 Hi. Use HA-Proxy version 1.5.1 2014/06/24 with next GLOBAL settings



 daemon

 maxconn 4

 stats socket  /opt/haproxy/socet/haproxy.sock mode 0600 level admin



 In while testing I have problems. I see in HATOP what my LB not take new
 connections after - Connections  : [ 2000/4] . Why ? In my config file I
 set maxconn 4000. In this time I use nbproc 3 for work around this problems.





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Re: Problem with external healthchecks and haproxy-ss-20140720

2014-08-15 Thread Malcolm Turnbull
I agree as well... :-)

Our original specification was to match the way that ldirectord does its
external health checks (so that customer scripts are compatible).
We could just change ldirectord to be compatible with the new style, as it
sounds more extensible (keeping it backwards compatible as well would be
nice).





On 15 August 2014 08:11, Simon Horman ho...@verge.net.au wrote:

 [Cc Malcolm Turnbull]

 On Fri, Aug 15, 2014 at 12:29:36AM +0200, Willy Tarreau wrote:
  Hi Cyril!
 
  On Thu, Aug 14, 2014 at 10:30:52PM +0200, Cyril Bonté wrote:
   Hi all,
  
   Le 07/08/2014 01:16, Cyril Bonté a écrit :
   Hi Bjoern,
   
   Le 06/08/2014 22:16, bjun...@gmail.com a écrit :
   (...)
   [ALERT] 216/205611 (1316) : Starting [be_test:node01] check: no
 listener.
   Segmentation fault (core dumped)
   
   OK, I could reproduce it. This is happening for several reasons :
   1. External checks can only be used in listen sections.
   This is not clearly documented but it can be guessed by the arguments
   passed to the command : the proxy address and port are required (see
 [1]).
   I think this is annoying because it's only usable in some specific use
   cases. Maybe we should rework this part of the implementation : I see
   that for unix sockets, the port argument is set to NOT_USED, we
 could
   do the same for checks in a backend section. Willy, Simon, is it OK
 for
   you ?
  
   After some thoughts, I'd like to suggest a new implementation.
  
   I tried to imagine in which use case sysadmins in my company would need
   the first listener address/port of the proxy but I couldn't find one,
 or
   when I almost found one, other information could be more useful.
   It's really too specific to be used by everyone, and I'm afraid it will
   be error-prone in a configuration lifecycle.
   Based from the current implementation, it imposes an artificial order
 in
   the bind list, which didn't exist before. If the external healthcheck
   checks thoses values but someone modifies the bind order (by
   adding/removing one, or because a automated script generates the
   configuration without any guaranty on the order, ...), it will
   mysteriously fail until someone remembers to re-read the healthcheck
 script.
  
   Also, some years ago, I developed a tool which called external scripts
   which were (supposed to be) simple with only 2-3 arguments. Over the
   time, users wanted new arguments, some of them were optional, some
   others not. After several years, some calls got nearly 20 arguments, I
   let you imagine how a nightmare it is to maintain this. I fear this
 will
   happen to the haproxy external healthcheck (someone will want to
   retrieve the backend name, someone else will want to retrieve the
   current number of sessions, ...).
  
   So, I came to another solution : let's use environment variables
   instead. This will permit to add new ones easily in the future.
 
  That's a really clever idea!
 
   And currently, instead of providing the bind address/port, we could
 pass
   the backend name and id.
  
   For example, as a current minimal implementation :
   - Backend variables
   HAPROXY_BACKEND_NAME=my_backend
   HAPROXY_BACKEND_ID=2
   - Server variables
   HAPROXY_SERVER_ADDR=203.0.113.1
   HAPROXY_SERVER_PORT=80
   or maybe in a single variable, such as HAPROXY_SERVER=203.0.113.1:80
  
   And if we still want to provide the listener address, why not adding
   optional variables :
   HAPROXY_BIND_ADDR=127.0.0.1
   HAPROXY_BIND_PORT=80
   or as the server address, we could use a single variable (HAPROXY_BIND)
  
   For an unix socket :
   HAPROXY_BIND_ADDR=/path/to/haproxy.socket (without HAPROXY_BIND_PORT
   variable)
  
   If we want to provide all the listeners, we can add a new environment
   variable HAPROXY_BIND_COUNT=number of listeners)
   and list the listeners in environement variables suffixed by n where
   n = [1..number of listeners].
   Example :
   HAPROXY_BIND_COUNT=2
   HAPROXY_BIND_ADDR_1=127.0.0.1
   HAPROXY_BIND_PORT_1=80
   HAPROXY_BIND_ADDR_2=/path/to/haproxy.socket
  
   Any suggestion ?
   If it's ok, I can work on a patch.
 
  I have to say I like it. It's by far much more extensible and will not
  require users to upgrade their scripts each time a new variable is added.

 I agree that is quite a clever idea, and I have no objections to it.

 However, I'd like to allow Malcolm Turnbull to comment as I wrote
 the current code to meet his needs.




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/
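
To make the environment-variable idea discussed in this thread concrete,
wiring up an external check ends up looking roughly like this. The
directives are real, but the script, paths and addresses are illustrative,
depending on the version you may also need to enable "external-check" in
the global section, and the variable names are the ones proposed above, so
check your version's documentation for what finally shipped:

backend app
    option external-check
    external-check command /usr/local/bin/check-app.sh
    server web1 192.168.0.11:80 check

#!/bin/bash
# /usr/local/bin/check-app.sh - exit 0 if the server is healthy, non-zero otherwise
addr="${HAPROXY_SERVER_ADDR}"
port="${HAPROXY_SERVER_PORT}"
# trivial TCP probe; a real script would test the application itself
if timeout 3 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
    exit 0
fi
exit 1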


Re: HAProxy - Load Balancing + GeoIP

2014-06-29 Thread Malcolm Turnbull
Marius,

Yes, that would be a nice potential feature, but...
Surely you are a lot better off using an external GSLB-capable resilient
DNS service, i.e.,
in ascending price order:

Amazon Route 53
Dyn
Neustar
Akamai, etc.?

http://blog.loadbalancer.org/gslb-why-do-global-server-load-balancers-suck/






On 29 June 2014 15:46, Marius Jankunas j.mar...@inbox.com wrote:

 Hello,

 First of all, congratulations on the HAProxy 1.5.0 release - glad you finally
 finished. :)


 If you have some free time maybe could advise or give any hints which
 could me?
 I'm interested in HAProxy, and would like to know is it possible do load
 balancing to servers which are nearest to clients? And even if yes, so
 could this reduce latency, and improve e.g. website loading speed? I tried
 to draw an datagram(see attachment) which shows how i would like to do load
 balancing.

 About Datagram:

 Example there are 6 users: 2 from Asia, 2 from Europe, 2 from United
 states.
 All 6 users connecting to main haproxy server first, which stands in EU.

 For Asia users ping to main haproxy server is ~ 175ms
 For Europe users ping to main haproxy server ~22ms
 For United states users ping to main haproxy ~ 76ms

 Asia users to Haproxy (Asia) has ping of 15ms
 Europe users to Haproxy (EU) has ping of ~17ms
 United states users to Haproxy (US) has of ~12ms

 All 3 Haproxy (Asia),(EU),(US) servers has ping of +/-35ms to Application
 Server.

 I don't know, but I feel this would only create additional latency
 for users. If so, how can we make users connect directly to the (Asia), (EU), (US)
 HAProxy servers based on their geo location? Thank you for any reply.


 Marius,

 




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/


Re: [PATCH v7 0/3] MEDIUM: Add external check

2014-06-13 Thread Malcolm Turnbull
Willy,

Much as I'd love to have it right now,
it is probably more sensible to go for 1.6, considering the huge effort
you have put into getting the 1.5 release out and bug-free.
I wouldn't want to be the one holding you up.

So I'm happy with whatever your decision is.





On 13 June 2014 09:41, Willy Tarreau w...@1wt.eu wrote:

 Hi Simon!

 On Fri, Jun 13, 2014 at 04:18:14PM +0900, Simon Horman wrote:
  Add an external check which makes use of an external process to
  check the status of a server.
 
  v7 updates this patchset as per the feedback received for v6
  (a very long time ago).

 (...)

 Thanks for this. I'm just wondering if we should try to merge it into
 1.5 now of if we should postpone this for 1.6, given that all reported
 bugs have been fixed and we're polishing the last details. Malcolm,
 maybe you have an opinion on this ?

 Regards,
 Willy




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/


Re: SSL hardware acceleration

2014-05-29 Thread Malcolm Turnbull
John-Paul,

Nice to have some stats, thanks.

However, the most CPU-intensive part of the SSL transaction on a load
balancer is the handshake (that's why we measure TPS), and as far as I'm
aware AES-NI is not used in the handshake?
We don't use it in our product because we couldn't find any benefit:
http://blog.loadbalancer.org/ssl-offload-testing/
Very happy for someone to prove us wrong, though?
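
One quick way to see both sides of that distinction on a given box is
OpenSSL's built-in benchmark (commands only; interpret the numbers with care):

openssl speed rsa2048            # public-key operations, i.e. the handshake cost
openssl speed -evp aes-256-cbc   # bulk cipher through EVP (uses AES-NI when available)
openssl speed aes-256-cbc        # bulk cipher through the legacy path (no AES-NI)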





On 27 May 2014 05:52, John-Paul Bader john-paul.ba...@wooga.net wrote:

 Here some Benchmarks with aes-256-cbc:

 ##OpenSSL 0.9.8
   16 bytes 64 bytes256 bytes   1024 bytes   8192 bytes
 165967.40k   176138.69k   178376.08k   165082.46k   178232.41k

 ### OpenSSL 1.0.1 without AES-NI (without kernel extension loaded)
   16 bytes 64 bytes256 bytes   1024 bytes   8192 bytes
 240935.91k   258555.73k   261316.44k   266033.49k   260849.66k

 ### OpenSSL 1.0.1 with AES-NI (without kernel extension loaded)
   16 bytes 64 bytes256 bytes   1024 bytes   8192 bytes
 525472.77k   545694.68k   560349.27k   557427.03k   557694.98k

 ### OpenSSL 1.0.1 with AES-NI (with kernel extension loaded)
   16 bytes 64 bytes256 bytes   1024 bytes   8192 bytes
 524809.01k   548448.30k   560363.26k   555793.62k   557424.64k

 So you can see that OpenSSL will use AES-NI without the kernel extension.
 I think the kernel extension is only needed on FreeBSD if you want a
 /dev/aesni device.

 Kind regards,

 John



 Aristedes Maniatis wrote:

 On 27/05/2014 6:59pm, Lukas Tribus wrote:

 Hi,


  Without purchasing specific expensive add-on cards [1], is there
 something specific to some modern CPUs which will accelerate SSL
 handling in haproxy 1.5?

 That is, should I be looking for something in a CPU which will
 improve performance considerably? There is an Intel instruction
 set called AES-NI but I don't know if that applies to HTTPS#
 traffic. As I understand, the initial negotiation in SSL is rsa/dsa
 but then the payload is transported using symmetric key encryption
 (like AES?).

 I'm only looking to handle about 50Mb/s of SSL traffic, so I'm not
 aiming very high. But it would be nice to know the headroom is there.

 Bandwidth is not really the limiting factor, handshakes per second is.
 AES-NI gives you a nice performance boost but doesn't help with
 handshakes
 afaik.

 Whats important, among other points, is having enough entropy, and the
 RDRAND
 feature of modern CPUs can help you there (if you trust your CPU vendor).

 Otherwise, there some software projects like haveged or audio entropy
 daemon
 that can feed random data in the kernel.


 Keep-alive and session id resumption are very important features to scale
 a SSL enabled site, so double check that those things are working
 properly.



 Right, so then it isn't about AES at all, but the public key negotiation
 and key generation. We are running on Freebsd 10 which feeds /dev/random
 from yarrow and that in turn grabs entropy from the CPU and other places.
 So I think we should be good since we are unlikely to run out of entropy
 there.

 aesni_load=YES in loader.conf should take care of the AES side of things

 If the NSA wanted credit card numbers they could just go get them from
 Mastercard directly, and there isn't really much else of great espionage
 interest in the transactional data. So I'm not overly concerned about the
 backdoors in the Intel CPUs.


 Thanks for the useful information.


 Ari




 --
 John-Paul Bader | Software Development

 www.wooga.com
 wooga GmbH | Saarbruecker Str. 38 | D-10405 Berlin
 Sitz der Gesellschaft: Berlin; HRB 117846 B
 Registergericht Berlin-Charlottenburg
 Geschaeftsfuehrung: Jens Begemann, Philipp Moeser




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/


Re: Spam

2014-04-14 Thread Malcolm Turnbull
I love these little political spats :-).
Just wanted to see how long we could make the thread.

+1 for Willy.

Initially I must admit I thought the non-subscribe was odd...
But after years of happy use I finally get the reasoning: it's not the
list that is the problem but the spammers - deal with spam in the
usual fashion (at the client end).
In my case Google does it for me - 3,500 spams in the last 30 days, apparently.

PS: HAProxy is and always will be the best open source load balancing
proxy solution - thanks very much.

On 14 April 2014 18:07, Juan  Jimenez jjime...@electric-cloud.com wrote:

 On 4/14/14, 12:00 PM, Willy Tarreau w...@1wt.eu wrote:

On Mon, Apr 14, 2014 at 04:39:21PM +0100, Kobus Bensch wrote:
 I'd like to say something as a user of the software and and avid
 follower of each conversation via this list. The few spam messages that
 do come through IS NO ISSUE. Unless it is so bad it is wearing your
 delete key out. Seriously, there are other things to complain about.

Thank you for confirming my beliefs Kobus :-)

Willy

 That's anecdotal evidence. LOL!





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: weights

2014-03-04 Thread Malcolm Turnbull
Willy,

Exactly right, but it is a common misunderstanding.

Out of interest, how hard would it be to get a least-connection
scheduler to take account of cumulated connections?
It would/might make it far more useful for HTTP... Off the top of my
head I think least conns in LVS is based on cumulative connections for 60
seconds (which again causes a lot of confusion).

Just had a quick look here:
http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.ipvsadm.html
and to calculate active conns for LC:
active connections = ActConn * K + InActConn
Where K is between 32 and 50?

So probably way more confusing, and yet most of our customers prefer
the LeastConnection handling for HTTP in LVS rather than in HAProxy.

I also slightly think that they just instinctively like the bigger
numbers for connection count ;-).
http://blog.loadbalancer.org/look-why-cant-you-just-tell-me-how-many-people-are-connected-to-the-load-balancer/

Just thinking the new keepalive functionality will probably affect this as well?






















On 4 March 2014 14:59, Willy Tarreau w...@1wt.eu wrote:
 On Tue, Mar 04, 2014 at 08:27:03PM +0530, vijeesh vijayan wrote:
 Thanks. please check my last reply

  Thanks. I am talking about the weights: if one server (x) is assigned
 weight 125 and another server (y) weight 12 (added twice in the file),
 we see x getting half of the traffic compared to y. Does that mean weight has
 no effect here?

 In this case, server x should ideally be getting 5 times the connections of y,
 but something is preventing this. Am I right? In our case x is
 getting only 50 percent of y (we are calculating the number of
 connections/sec). How do we know how many connections haproxy keeps open
 for a particular server?

 No, unfortunately you definitely don't understand the difference between
 *concurrent* connections and *cumulated* connections. You're measuring
 the number of connections distributed over time. I'm talking about
 concurrent connections, which is what leastconn is about.

 Willy





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Just a simple thought on health checks after a soft reload of HAProxy....

2014-02-23 Thread Malcolm Turnbull
Neil,

Yes, peers are great for passing stick tables to the new HAProxy
instance and any current connections bound to the old process will be
fine.
However, any new connections will hit the new HAProxy process, and if
the backend server is down but haproxy hasn't health checked it yet,
then the user will hit a failed server.



On 23 February 2014 10:38, Neil n...@iamafreeman.com wrote:
 Hello

 Regarding restarts, rather that cold starts, if you configure peers the
 state from before the restart should be kept. The new process haproxy
 creates is automatically a peer to the existing process and gets the state
 as was.

 Neil

 On 23 Feb 2014 03:46, Patrick Hemmer hapr...@stormcloud9.net wrote:




 
 From: Sok Ann Yap sok...@gmail.com
 Sent: 2014-02-21 05:11:48 E
 To: haproxy@formilux.org
 Subject: Re: Just a simple thought on health checks after a soft reload of
 HAProxy

 Patrick Hemmer haproxy@... writes:

   From: Willy Tarreau w at 1wt.eu

   Sent:  2014-01-25 05:45:11 E

 Till now that's exactly what's currently done. The servers are marked
 almost dead, so the first check gives the verdict. Initially we had
 all checks started immediately. But it caused a lot of issues at several
 places where there were a high number of backends or servers mapped to
 the same hardware, because the rush of connection really caused the
 servers to be flagged as down. So we started to spread the checks over
 the longest check period in a farm.

 Is there a way to enable this behavior? In my
 environment/configuration, it causes absolutely no issue that all
 the checks be fired off at the same time.
 As it is right now, when haproxy starts up, it takes it quite a
 while to discover which servers are down.
 -Patrick

 I faced the same problem in http://thread.gmane.org/
 gmane.comp.web.haproxy/14644

 After much contemplation, I decided to just patch away the initial spread
 check behavior: https://github.com/sayap/sayap-overlay/blob/master/net-
 proxy/haproxy/files/haproxy-immediate-first-check.diff



 I definitely think there should be an option to disable the behavior. We
 have an automated system which adds and removes servers from the config, and
 then bounces haproxy. Every time haproxy is bounced, we have a period where
 it can send traffic to a dead server.


 There's also a related bug on this.
 The bug is that when I have a config with inter 30s fastinter 1s and no
 httpchk enabled, when haproxy first starts up, it spreads the checks over
 the period defined as fastinter, but the stats output says UP 1/3 for the
 full 30 seconds. It also says L4OK in 30001ms, when I know it doesn't take
 the server 30 seconds to simply accept a connection.
 Yet you get different behavior when using httpchk. When I add option
 httpchk, it still spreads the checks over the 1s fastinter value, but the
 stats output goes full UP immediately after the check occurs, not UP
 1/3. It also says L7OK/200 in 0ms, which is what I expect to see.

 -Patrick





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Just a simple thought on health checks after a soft reload of HAProxy....

2014-01-14 Thread Malcolm Turnbull
Just a simple thought on health checks after a soft reload of HAProxy

If for example you had several backend servers one of which had crashed...
Then you make a configuration change to HAProxy and soft reload,
for instance adding a new backend server.

All the servers are instantly brought up and available for traffic
(including the crashed one).
So traffic will possibly be sent to a broken server...

Obviously it's only a small problem as it is fixed as soon as the
health check actually runs...

But I was just wondering: is there a way of saying don't bring up a
server until it passes a health check?



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: example of agent-check ?

2014-01-11 Thread Malcolm Turnbull
Sorry only just got around to looking at this and updating my blog entry:

Yes the important bit missing was agent-check

But my testing with Dev21 seems to bring the servers back fine with
any percentage reading i.e. 10% 75% etc. Please let me know if anyone
else is having an issue, thanks.

server Win2008R2 192.168.64.50:3389  weight 100  check agent-check
agent-port  inter 2000  rise 2  fall 3 minconn 0  maxconn 0
on-marked-down shutdown-sessions



On 27 December 2013 22:44, PiBa-NL piba.nl@gmail.com wrote:
 Simon Drake schreef op 27-12-2013 17:07:



 Would it be possible to post an example showing the correct haproxy config
 to use with the agent-check.

 By the way I saw the mailing list post recently about the changes to the
 agent-check, using state and percentage, and I think that the right way to
 go.

 For me this config works:
 serverMyServer 192.168.0.40:80  check inter 5000 agent-check
 agent-inter  agent-port 2123  weight 32

 I've tried a few small tests with it, and while bringing a server to 'down'
 or 'drain' seemed to work, I was missing the 'up' keyword; only 100% seems
 to bring a server back alive. So if you're monitoring 100 - %CPUusage and
 sending that 1-on-1 back via the agent, a server with 99% cpu capacity
 available won't come back up...



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Health check hell

2013-12-04 Thread Malcolm Turnbull
Hi Willy,

Sorry for the lack of response from the Loadbalancer.org end, I must
confess we were getting a bit confused by the descriptions :-).

The only thing in my mind to be aware of is the design decision of the
agent to report DOWN or DRAIN on every agent request until the agent
starts responding with x% again.
This was because, if you send an UP response from the agent, how does the
agent know that HAProxy has read that value and acted on it? It would
need to know when it was safe to start responding with x% again.

Our primary requirement at Loadbalancer.org is for the first scenario
i.e. dynamic weight adjustment and uses standard health checks:

  - inform the load balancer about the server's load to adjust the
weights, but not interact with the service's state which is
monitored using regular checks. It basically replaces the job
of the admin who would constantly re-adjust weights depending
on the servers load.

The following usage case makes sense, but isn't really a priority for us:

  - offer a complete health check system to services which are not
easily checkable. In this case they would simply be used without
a regular check. This is more a service-level approach and not
a server-level one.

The third logical function for us was:

For a Windows administrator to have a simple GUI DRAIN/HALT button in
the agent, to enable quick local maintenance on the Windows backend
server without having to log into the load balancer in order to set
maintenance mode.
But again this is not really a priority with us as you say it clashes
with the CLI DRAIN logic









On 2 December 2013 14:30, Willy Tarreau w...@1wt.eu wrote:
 Hi Simon,

 thank you for your response, I felt a little bit alone :-)


-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Health check hell

2013-12-04 Thread Malcolm Turnbull
Forgot to reply to all:

Willy,

This looks good to me and make sense.
Long term it will be more flexible this way.









On 4 December 2013 18:17, Willy Tarreau w...@1wt.eu wrote:
 Hi Malcolm,

 On Wed, Dec 04, 2013 at 03:05:41PM +, Malcolm Turnbull wrote:
 Hi Willy,

 Sorry for the lack of response from the Loadbalancer.org end, I must
 confess we were getting a bit confused by the descriptions :-).

 I'm not surprised! I got even more confused when trying to debug some
 of the issues Igor reported and not understanding what would act on
 what, what would be propagated from tracked servers, etc... Anyway,
 writing the design limitations here and explaining them helps us
 get rid of them.

 The only thing in my mind to be aware of is the design decision of the
 agent to report DOWN or DRAIN on every agent request until the agent
 starts responding with x% again..
 Was because if you send an UP response from the agent how does the
 agent know that HAProxy has read that value and acted on it? It would
 need to know when it was safe to start responding with x% again?

 OK I get your point. My point was to emit two things at once.
 Eg: UP 10%.

 We could have the agent specification state that the response format
 may include optional state words, optionally followed by a weight.
 That way we can have agents which return state only, weight only or
 both.

 Our primary requirement at Loadbalancer.org is for the first scenario
 i.e. dynamic weight adjustment and uses standard health checks:

   - inform the load balancer about the server's load to adjust the
 weights, but not interact with the service's state which is
 monitored using regular checks. It basically replaces the job
 of the admin who would constantly re-adjust weights depending
 on the servers load.

 I agree that this should be by far the most common use especially in
 combination with the service check. That's the reason why I'm embarrassed
 by the fact that we put the server UP when returning a percentage because
 it means the agent returning the load has to be aware of the service state
 which is not logical.

 The following usage case makes sense, but isn't really a priority for us:

   - offer a complete health check system to services which are not
 easily checkable. In this case they would simply be used without
 a regular check. This is more a service-level approach and not
 a server-level one.

 It's not my priority either though I know some people will want it when
 they already have to use an agent and need to deploy a second script to
 check the health of a specific service : they won't find it convenient
 to run two scripts on different ports, one for the state and one for the
 load.

 The third logical function for us was:

 For a Windows administrator to have a simple GUI DRAIN/HALT button in
 the agent, to enable quick local maintenance on the Windows backend
 server without having to log into the load balancer in order to set
 maintenance mode.

 Hehe, just like the 404 feature in HTTP :-)

 But again this is not really a priority with us as you say it clashes
 with the CLI DRAIN logic

 It does not exactly clash, it depends how we define it. I discovered there
 are 3 dimensions which are managed by a single agent while we initially
 thought there were only two. The agent can :

 - declare a service's state (up or down)
 - declare an administrative state (drain/ready)
 - declare a system load (weight)

 But at the moment with the language we defined, each action changes two
 of them at once, which is a big problem.

 And depending on what system the agent will be deployed on, not all these
 features will be used together. I expect that admin state and load will be
 the more common ones for an agent. Your enumeration tends to support this.

 So let's try with something like this for the agent syntax :

   [keywords]* [weight]

   Where [keywords] are optional and made of :

  up : report that the service is UP.
  down, stopped, fail : report the service down with these causes
  drain : don't change the state, nor the weight, just set DRAIN mode.
  maint : don't change the state, nor the weight, just set MAINT mode
  ready : don't change the state, nor the weight, just leave MAINT and 
 DRAIN modes.

   And [weight] is optional and in the form xxx% to report the desired
   weight for this server relative to the configured one in the config.

 Thus the following examples might illustrate it better :

up: declare the server up, don't change the configured weight
up 50%: declare the server up, set weight to 50%
50%   : don't touch the server state, just set the weight to 50%
drain : don't touch the state, nor weight, just switch to drain mode.
maint : force maintenance mode.
drain 20% : drain mode, adjust weight to 20% (not used in this mode but
  will avoid complex logic in agent scripts)
ready

Re: Is it possible to view the contents of a stick table on a running HAProxy 1.5 instance?

2013-11-29 Thread Malcolm Turnbull
Duncan,

Try something like:


[root@lbmaster loadbalancer.org]# echo show table V1 | socat stdio
/var/run/haproxy.stat

# table: V1, type: ip, size:10485760, used:1

0x6c11b4: key=192.168.64.10 use=0 exp=1670531 server_id=1
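If you need to drop a single entry while testing, I believe the same socket
also accepts a clear command (key taken from the output above):

echo clear table V1 key 192.168.64.10 | socat stdio /var/run/haproxy.stat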



On 28 November 2013 16:08, Duncan Mason dma...@thinkanalytics.com wrote:
 Hi,



 Is it possible to view the contents of a stick table on a running HAProxy
 1.5 instance?



 I have a simple 2 node peers configuration as



 peers mypeers

 peer ip-10-31-18-197 10.31.18.197:1024

 peer ip-10-31-35-19 10.31.35.19:1024



 But the 2 nodes seem to be choosing different target servers for the same
 stick on value.



 I was hoping I could view the contents of the stick tables on both nodes so
 I could troubleshoot this.



 Cheers



 Duncan



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: [PATCH v6 00/22] Agent Check Enhancements and External Check

2013-10-31 Thread Malcolm Turnbull
Willy / Simon,

We haven't implemented it in the Loadbalancer.org appliance yet so we
are happy for lb-agent-chk to be replaced.
We are very keen to add the new agent and SSL support as soon as possible.
It would be great if it got into dev20 - no pressure Simon :-).




On 31 October 2013 08:55, Willy Tarreau w...@1wt.eu wrote:
 On Thu, Oct 31, 2013 at 09:37:01AM +0100, Baptiste wrote:
 Well, no Exceliance customers use this feature yet, but some of them
 asked for it (or something similar).
 It would make HAProxy and the ALOHA more valuable.

 I think you misunderstood us Baptiste. Right now there is already
 lb-agent-chk, but it's limited and makes it harder to implement the
 full-feature version (eg because both of them compete to give the
 result of a check). So the discussion is about getting rid of
 lb-agent-chk and completely replacing it with the full-feature
 version that Simon has been working on.

 And yes, we'd all benefit from the new version !

 Willy




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: rdp cookie

2013-09-06 Thread Malcolm Turnbull
Please try sending us your actual configuration file, and a detailed
explanation of how you are testing.

The cookie comes from the client, and why would you not want to go to
the same server?
If you put the cookies in a stick table you can control the expiry,
take a look at the following blog for various ways to configure RDP
Terminal Server Loadbalancing:
http://blog.loadbalancer.org/load-balancing-windows-terminal-server-haproxy-and-rdp-cookies/
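A rough sketch of the stick-table approach (addresses, ports and the 8h
expiry are just examples - untested as written here):

listen rdp 192.168.0.10:3389
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if RDP_COOKIE
    balance leastconn
    stick-table type string len 32 size 10k expire 8h
    stick on rdp_cookie(mstshash)
    server ts1 192.168.0.11:3389 check
    server ts2 192.168.0.12:3389 check

The expire value on the stick-table is what controls how long the
cookie-to-server mapping is remembered.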




On 6 September 2013 09:13, vaibhav pol vaibhav4...@gmail.com wrote:
 Can anyone tell me where RDP cookies are actually stored: client side, front
 end (load balancer) or backend side? We are using haproxy for
 RDP connections with the persist rdp-cookie option.
   The problem is the client always connects to the same server. Can we specify
 the expiry of the RDP cookie so the connection can be load balanced after
 the cookie times out?


 Vaibhav  Pol
 National PARAM Supercomputing Facility
 Centre for Development of Advanced   Computing
 Ganeshkhind Road
 Pune University Campus
 PUNE-Maharastra




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: agent-port / loadbalance with CPU usage

2013-09-05 Thread Malcolm Turnbull
Sebastien,

I would have thought that you would get good results with:
balance leastconn
or
balance roundrobin

Try and see which you prefer...
They are both affected by the weight adjustments made by the agent.

However, I don't believe there is a scheduler that always uses the lowest weight.



On 4 September 2013 23:39, Sebastien Estienne
sebastien.estie...@gmail.com wrote:
 Hello,

 I'm testing the patch of simon implementing agent-port.

 I'd like to load balance RTMP servers based on the CPU usage, so I
 implemented a small tcp server that returns the percentage of free CPU and
 uses the agent-port feature.

 I want new connection to always go to the server with the lowest CPU usage,
 which balancing algorithm should i use to achieve this?

 thanx,
 Sebastien Estienne



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Suggested 10GB card?

2013-08-15 Thread Malcolm Turnbull
Troy,

We've found the Intel cards and drivers consistently good with our
customers doing 10G load balancing:
http://www.loadbalancer.org/10g.php



On 15 August 2013 17:09, Troy Klein spiders...@gmail.com wrote:
 I'm working on a 10Gb haproxy configuration and am wondering what network
 card is suggested to put into the hosts?  I have seen a
 post that says the ixgbe driver is difficult to configure and am wondering what
 card/driver would not be difficult.

 Troy Klein




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



If anyone is interested in testing Simon Hormans patches for the server feedback agent

2013-08-15 Thread Malcolm Turnbull
I've put up a quick blog entry here:
http://blog.loadbalancer.org/open-source-windows-service-for-reporting-server-load-back-to-haproxy-load-balancer-feedback-agent/

With links to a source code tarball for haproxy including Simon's patches.

I've also just open sourced (GPL'd) our Windows based service and GUI
for sending load information back to HAProxy. Links included in the
post.

It's not fully debugged or tested yet, but it's getting very close if
anyone wants to give it a go.

Any feedback appreciated , thanks.

Ps. We've also got ldirectord patches to work with it if anyone wants those...



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: [PATCH 5/5] dynamic health check

2012-12-24 Thread Malcolm Turnbull
Willy.

Yes. That sounds good to me.

Thanks. And have a nice Christmas...


On 24 December 2012 09:23, Willy Tarreau w...@1wt.eu wrote:
 Hi Malcolm,

 On Mon, Dec 24, 2012 at 09:06:25AM +, Malcolm Turnbull wrote:
 Willy / Simon,

 I'm very happy to add a down option, my original thought was that you
 would use the standard health checks as well as the dynamic agent for
 changing the weight.

 That's what I thought I initially understood from our discussion a few
 months ago but then your post of the specs last week slightly confused
 me as I understood you needed this as a dedicated check. I think it was
 the same for Simon.

 As you may for example want a specific HAproxy SMTP health check + use
 the dynamic weighting agent.

 Exactly. But then we have two options :
   - retrieve the information from the checked port (easy for HTTP or TCP)
   - retrieve the information from a dedicated port = this involves a
 second task to do this, with its own check intervals.

 The latter doesn't seem stupid at all, quite the opposite in fact, but
 it will require more settings on the server line. However it comes with
 a benefit, it is that when the agent returns disable, checks are
 disabled on the real port, but then we could have the agent continue to
 be checked and later return a valid result again.

 I'm not sure if that would cause some coding issues if the health
 checks say 'Down' and the agent says 50%? (I would assume haproxy
 health checks take priority?)

 Status and weights are orthogonal. The real check should have precedence.

 Or if the agent says Down but the HAProxy health check says up?

 I think it should be ANDed. This could help provide a first implementation
 of multi-port checks after all.

I'm certainly happy for Down to be added as an option with a
 description string.
 Also I'm assuming that later (the dynamic agent) could easily be
 extended to an http style get check rather than TCP (lb-agent-chk)  if
 users prefer to write an HTTP server application to integrate with it
 (Kemp and Barracuda support this method).

 That's what I'm commonly observing too. Even right now, there are a lot
 of users who use httpchk for services that are not HTTP at all, but they
 have a very simple agent responding to checks.

 So now we have to decide what to do. I think Simon's code already provides
 some useful features (assuming we support down). It should probably be
 extended later to support combined checks.

 In my opinion, this could be done in three steps :

   1) we merge Simon's work with the option lb-agent-chk directive which
  *replaces* the health check method with this one ;

   2) we implement agent-port and agent-interval on the server lines to
  automatically enable the agent to be run on another port even when a
  different check is running ;

   3) we implement http-check agent-hdr name to retrieve the agent string
  from an HTTP header for HTTP checks ;

 That way we always support exactly the same syntax but can retrieve the
 required information at different places depending on the checks. Does
 that sound good to you ?

 Best regards,
 Willy




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Using set weight to automate auto weight setting based on backend loading

2012-12-17 Thread Malcolm Turnbull
Thomas,

Simon Horms is working on some patches for us that will allow HAProxy
to poll an agent on each real server.
We also have a Windows Service that works with it:
http://www.loadbalancer.org/download/agent/Windows/LBCPUMonInstallation.msi
I'm sure he will post it to the list when he is done in the next few weeks:

HA-Proxy Dynamic Health Check Implementation

1. Goal

Design and implement dynamic health checks for HA-Proxy.
The health check should be performed by opening a TCP socket to a
pre-defined port and reading an ascii string. The string should have one
of the following forms:

i. An ascii representation of a positive integer with a maximum value of 256,
   e.g: “42”.
   ◦ This represents an absolute weight. A weight of 0 will be translated to 1.
   ◦ Values in this format other than 0 are ignored when slowstart is in
     effect, in keeping with the behaviour of the set weight command.

ii. An ascii representation of a positive integer percentage, e.g. “75%”.
   ◦ Values in this format will set the weight proportional to the initial
     weight of a server as configured when haproxy starts.

iii. The string “drain”.
   ◦ This will cause the weight of a server to be set to 0, and thus it
     will not accept any new connections other than those that are accepted
     via persistence.

iv. The string “disable”.
   ◦ Put the server into maintenance mode. The server must be re-enabled
     before any further health checks will be performed.

2. Deliverables

i. Patches posted to the appropriate haproxy mailing list.
ii. A list of patches and implementation overview via email to LoadBalancer.Org
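As a throwaway illustration only (not part of the spec above), an agent
along these lines can be knocked together with socat and a small shell
script; the port, script path and vmstat column are assumptions:

#!/bin/sh
# /usr/local/bin/report-load.sh (path assumed)
# Print rough CPU headroom as a percentage, in the string format above, e.g. 75%
idle=$(vmstat 1 2 | tail -1 | awk '{print $15}')   # idle-CPU column on typical Linux vmstat
echo "${idle}%"

and then serve it on a TCP port, one process per check:

socat TCP-LISTEN:3333,reuseaddr,fork EXEC:/usr/local/bin/report-load.sh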


 So well, how about controlling weight via service checks?


 cheers
 thomas



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: HAProxy Hardware LB

2012-05-08 Thread Malcolm Turnbull
Hi,

I'd agree with that choice; they don't look very pretty but we have
found them very reliable, especially with Intel SSDs.
We have a good 500+ Loadbalancer.org customers on that platform:
http://uk.loadbalancer.org/r16.php







On 8 May 2012 09:21, Timh Bergström timh.bergst...@quickvz.com wrote:
 Hi,

 I would highly recommend Supermicro's Atom-boxes, they do have
 Intel-chips (dual-gig) on-board in their mini-19 servers (if you find
 the right one). You can use a SSD-drive and you're down to very few
 moving parts.

 Link: http://www.supermicro.com/products/nfo/atom.cfm

 Good luck!

 Timh Bergström
 www.quickvz.com



 On Wed, May 2, 2012 at 1:07 PM, Sebastian Fohler i...@far-galaxy.de wrote:
 Hi,

 I'm trying to build a small-size load balancing machine which fits into a
 small 19 rackmountable case.
 Are there any experiences with some specific hardware, for example Atom
 boards or something similar?
 Can someone recommend anything special?

 Best regards
 Sebastian





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Using haproxy for tcp-port forwarding/mapping

2012-04-02 Thread Malcolm Turnbull
Timh,

If you don't specify the destination port on the backend servers then
it defaults to using the port the traffic requested.
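Something like this should do it (addresses assumed, untested; port ranges
on bind need a reasonably recent build):

listen vnc_range
    bind 0.0.0.0:5900-6000
    mode tcp
    balance source
    # no port on the server lines, so the client's incoming port is reused
    server vnc1 192.168.10.10
    server vnc2 192.168.10.11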



2012/4/2 Timh Bergström timh.bergst...@quickvz.com:
 Hi,

 I'm looking for an easy way to setup simple tcp port-forwarding with
 haproxy alongside normal load-balancing. Thing is, the incoming ports
 are random within 5900-6000 so I need to forward the same incoming
 port to the backend(s). I tried doing this with a listen directive
 and with frontend/backend setups but couldn't get it to work
 properly. The haproxy server can bind on these ports just fine, I
 just need to make sure that the incoming connection is forwarded to
 the same port on the backend-servers. Has someone else done something
 like this?

 TCP: 5972 - haproxy-server - TCP: 5972 backend-server

 As you may see it's for pushing VNC-connections through haproxy
 because all other traffic is going through there as well. I'm using
 balance src so I'm not that fussed about sessions or state at
 this point. Any info or pointers are much appreciated.

 Timh Bergström
 www.quickvz.com




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: observe layer7 acceptable response codes

2012-03-07 Thread Malcolm Turnbull
And now I've re-read the section about what observe layer 7 actually
does, so I am almost certainly talking rubbish...


On 7 March 2012 21:29, Malcolm Turnbull malc...@loadbalancer.org wrote:
 Jonathan,

 Correct me if I'm wrong but:

 The httpchk is sourced from HAProxy as an application level health
 check, so how can it be affected by a client request?
 If a client gets a 404 then HAProxy doesn't really care (it just
 passes on the 404).

 I am quite often very wrong though.. :-).




 On 7 March 2012 14:40, Jonathan Matthews cont...@jpluscplusm.com wrote:
 Hi all -

 It seems to me that there's a trivial DoS available whenever observe
 layer7 is enabled if, as I'm imagining, the set of acceptable
 response codes for observe layer7 is derived from those configured
 for the httpchk.
 Please could someone suggest either what I'm assuming wrongly, or how
 to mitigate against this.

 I need to run with the defaults: a health check must not respond with
 a 4xx or 5xx. This is to guard against a back-end server bombing (5xx)
 or someone making a deployment-time error and either removing the
 health check code (404) or perhaps removing the host header
 configuration from the origin server (400). Don't say that last one
 won't happen - it just did ;-)

 If I do run in this mode, then (what I perceive as) the lack of
 configurability around the acceptable response codes for observe
 layer7 means that anyone can DoS me: just repeatedly hit a
 non-existent page and force a 404 to be served, thereby taking my
 back-end servers out, one by one.

 What am I missing? Is there a way to say httpchk must not be 4xx or
 5xx; observe-layer7 only catches 5xx?

 I'm aware of observe layer4, of course. This is unhelpful in this
 scenario, as we're vhosting to a single IP on the origin servers. It
 will only guard against the entire HTTPd dying - not a specific vhost
 having problems.

 Any ideas?
 Cheers,
 Jonathan

 PS Thanks to all involved for HAProxy - an awesome bit of kit :-)
 --
 Jonathan Matthews
 London, UK
 http://www.jpluscplusm.com/contact.html




 --
 Regards,

 Malcolm Turnbull.

 Loadbalancer.org Ltd.
 Phone: +44 (0)870 443 8779
 http://www.loadbalancer.org/



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re:

2012-02-08 Thread Malcolm Turnbull
John,

As you are using cookies it is safe to use the standard soft reload on HAProxy.
So just change the configuration file as required and restart.

We have a (very) simple API script on our EC2 (HAProxy based)
appliance which allows
auto-scaling servers in the cluster to register their IP address with
the HAProxy load balancer and automatically join the cluster when they
boot:
Assuming your dynamic servers have the same SSH key, they can just
locate the load balancer by DNS and run the API command remotely,
passing their own IP details.
http://www.loadbalancer.org/ec2.php

You would put something like the following in the init script on your
dynamic (auto scaling) servers:

#!/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
AMI_KEY_PAIR=path-to-ssh-key
EC2_LOADBALANCER_IP=ip-address-of-ec2-loadbalancer
CURL=`which curl`
SSH=`which ssh`
# Fetch this instance's AMI id and local IP from the EC2 metadata service
AMI_ID=`$CURL -s http://169.254.169.254/latest/meta-data/ami-id`
AMI_IP=`$CURL -s http://169.254.169.254/latest/meta-data/local-ipv4`
case "$1" in
start)
    # Register this server with the load balancer's lb_modify API script
    $SSH -i $AMI_KEY_PAIR root@$EC2_LOADBALANCER_IP "lb_modify -l $AMI_ID -d $AMI_IP"
    exit 0
    ;;
stop)
    # Remove this server from the load balancer again
    $SSH -i $AMI_KEY_PAIR root@$EC2_LOADBALANCER_IP "lb_modify -l $AMI_ID -d $AMI_IP -r"
    ;;
*)
    exit 1
    ;;
esac
exit 0





On 8 February 2012 01:09, John Langley dige...@gmail.com wrote:
 We are looking for a solution for sticky bit routing based on
 cookies that will run on Amazon's EC2 cloud.

 I've looked at the architecture guide for HAProxy (although not the
 source yet) and it ~may~ be capable of doing what we need, but I
 thought I'd ask the mailing list to see if anyone else has already
 tried this solution. (Without knowing the implementation, it's
 impossible to say if our requirements can be met by the
 implementation)

 The challenge that we have is that unlike a traditional system where
 the sticky bit routing would be to one of a set of predefined servers,
 in our case, the servers will be created dynamically in the cloud. We
 can't configure them when we start the HAProxy routing layer.
 Although we may have some back up servers, that can be used if no
 cookie is in the request OR if the cookie specifies a server that has
 died, in general the servers that the cookie will be specifying will
 be dynamically created and we will assign them to the requests
 ourselves (not needing the nginx layer to round-robin assign them to
 one of a pool of fixed address servers).

 So my question may come down to: Can HAProxy route to servers not
 predefined in the initial configuration? I can easily imagine an
 implementation that could handle this, but wanted to ask if HAProxy
 already does this.

 Thanks in advance

 -- Langley




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Transparent Proxy

2011-09-24 Thread Malcolm Turnbull
Jason,

No, that option is not relevant for TPROXY (client source IP transparency).

It's an old blog but take a look at:
http://blog.loadbalancer.org/configure-haproxy-with-tproxy-kernel-for-full-transparent-proxy/

Ignore the kernel re-compile stuff, as it's all pretty standard in
modern kernels.
But it should show you how to construct the haproxy.cfg file.
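Very roughly, it boils down to two pieces (addresses assumed, and haproxy
needs to be built with TPROXY support):

backend web_servers
    mode http
    # make outgoing connections from the client's own address
    source 0.0.0.0 usesrc clientip
    server web1 10.0.0.10:80 check

plus the usual kernel-side plumbing so return traffic is delivered back to
the local proxy:

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100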





On 23 September 2011 22:53, Jason J. W. Williams
jasonjwwilli...@gmail.com wrote:
 Hello,

 My understanding has been that HAProxy can be set up in conjunction
 with TPROXY support in the Linux kernel so that the backend servers
 see the original client's source IP address on incoming packets?

 So is the option transparent
 (http://code.google.com/p/haproxy-docs/wiki/transparent) not related
 to that type of transparent proxying or am I mistaken and there's no
 way to make HAProxy preserve the original client IP on the way to the
 backend servers?

 Thank you in advance.

 -J





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Automate backend registration

2011-08-06 Thread Malcolm Turnbull
EC2 auto registration

Common practice is to do this the RightScale / IBM way.

When a backend server is launched via autoscaling, it makes an SSH
remote command call to the load balancer, calling an API script that adds the
backend to the running haproxy configuration.

By all means steal the one from the Loadbalancer.org EC2 appliance and copy
or modify it... We're happy to release the script as GPL.

You can get the details from our web site.



On Thursday, 4 August 2011, Jens Bräuer jens.brae...@numberfour.eu wrote:
 Hi Holger,

 thanks for the answer. I already assemble the HA-Proxy configuration with the
help of Puppet. Now things require a bit more logic, e.g. I don't want to
restart HA-Proxy if the config that ships with the RPM does not change. I'll
definitely have a look at your script - thanks for the pointer.

 CU,
 Jens

 On 03.08.2011, at 23:22, Holger Just wrote:

 Jens,

 Many people have a script that builds a working configuration file from
 various bits and pieces. As the actual needed configuration typically
 isn't something which follows a common path but depends on the
 environment and the actual applications and a thousand other bits, there
 isn't a standard here.

 But it really isn't hard to throw together a small shell/perl/python
 whatever script which concatenates the final config file from various
 pieces or uses some templating language of your chosen language.

 An script we use is https://github.com/finnlabs/haproxy. It consists of
 a python script which assembles the config file from a certain directory
 structure. This script is then called before a start/reload of the
 haproxy in the init script.

 So basically, you need to create your script for generating your Haproxy
 configuration, hook it into your init script and then, as a post-install
 in your RPMs put the configuration in place for your
 configuration-file-creating-script and reload Haproxy.

 To enable/disable previously registered backend components, you might be
 able to use the socket, but that usage is rather limited and mainly
 intended for maintenance, not for actual configuration changes.

 Hope that helps and sorry if that was a bit recursive :)
 Holger

 On 2011-08-03 22:52, Jens Bräuer wrote:
 Hi Baptiste,

 sorry for my wording. But you are right, with registration I mean
 - add ACL
 - add use_backend
 - add backend section
 so to sum it up make haproxy aware of a new application.

 There might be cases there I want to only add a server to existing
backend, but that would be the second/third step.
 The use-case is that I have HA-Proxy running and do a yum/apt-get
install and the RPM should come with everything to integrate with HA-Proxy.
I am sure that there must be some tool out there.. ;-)

 Cheers,
 Jens


 On 03.08.2011, at 20:24, Baptiste wrote:
 Hi Jens,

 What do you mean by registration?
 Is that make haproxy aware of the freshly deployed application  ?

 cheers

 On Wed, Aug 3, 2011 at 5:46 PM, Jens Bräuer jens.brae...@numberfour.eu
wrote:
 Hi HA-Proxy guys,

 I wonder whats the current state of the art to automate the
registration of backend. My setup runs in on EC2 and I run HA-Proxy in front
of local applications to easy administration. So a typical config file would
be like this.

 frontend http
   bind *:8080
   acl is-auth path_beg /auth
   acl is-core path_beg /core
   use_backend authif is-auth
   use_backend coreif is-core

 backend auth
   server auth-1 localhost:7778 check

 backend core
   server core-1 localhost:1 check

 All applications are installed via RPMs and I would like couple the
installation with the backend registration. I like to do this as want to
configure everything in one place (the RPM) and the number of installed
applications may vary from host to host.

 I'd really appreciate hint where I can find tools or whats the current
state to handle this kind of task.

 Cheers,
 Jens











-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/


Re: Does haproxy support wccp(Web Cache Communication Protocol) ?

2011-07-12 Thread Malcolm Turnbull
At Loadbalancer.org we simulate WCCP support (i.e. put your proxy in
WCCP mode and it will automatically accept traffic from the load
balancer) by using:
LVS in DR mode with firewall marks + forcing ALL traffic for any IP to
local_in...

This gives a full transparent proxy, a.k.a. WCCP (without the Cisco hash stuff).

You also need to configure the routing of the clients to go via the
load balancer transparently.

The firewall rules required are explained in this doc:
http://loadbalancer.org/pdffiles/Web_Proxy_Deployment_Guide.pdf
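Grossly simplified, the firewall-mark piece looks something like this for
port 80 only (mark value and real server IPs assumed; the guide above covers
the full catch-all setup):

iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1
ipvsadm -A -f 1 -s rr
ipvsadm -a -f 1 -r 192.168.0.20 -g
ipvsadm -a -f 1 -r 192.168.0.21 -g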






On 12 July 2011 06:43, Baptiste bed...@gmail.com wrote:

 Hi,

 You don't need a load balancer to load-balance WCCP.
 This protocol has already some builtin healthchecks and has a nice URL
 hash algorithm.

 cheers


 On Tue, Jul 12, 2011 at 5:13 AM, 岳强 yueqiang.da...@gmail.com wrote:
  Hello!
      I am doing some work with a cache (squid), which supports WCCP, but I
  don't know if haproxy supports it.
     Because I didn't find any information about WCCP in haproxy, I
  think you can give me a good answer!
 
      Thank you very much!!
 
  Regards,
  YueQiang




--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Re[2]:

2011-03-19 Thread Malcolm Turnbull
On 19 March 2011 10:58, Antony ddj...@mail.ru wrote:
 Hi all,

 Actually I asked this question because I saw, many times, systems that had
 more than 10GB of free physical memory and still used the swap
 partition (about 1-5 MB). I saw that happen on FreeBSD and on Linux, so I
 thought it's possible to see that again when I run HAProxy.

Antony,

The argument has come up on the kernel mailing list a few times,
people tend to get religious about it.
Personally I never have a swap partition on a server (and it's always
worked well for me).
Yes, if something goes hideously wrong then the OOM killer will be
invoked, but swap would only slow the system down even more before it
dies; at least the OOM killer has a small chance of taking out the offending
process...


-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: State sync

2011-03-16 Thread Malcolm Turnbull
On 16 March 2011 10:45, Jaime Nebrera jnebr...@eneotecnologia.com wrote:

  Hi Malcom,

 Source IP has a new replication feature, example here:

 peers mypeers
         peer haproxy1 192.168.0.1:1024
         peer haproxy2 192.168.0.2:1024
         peer haproxy3 10.2.0.1:1024

 backend mybackend
         mode tcp
         balance roundrobin
         stick-table type ip size 20k peers mypeers
         stick on src

          server srv1 192.168.0.30:80
          server srv2 192.168.0.31:80

  Sorry for my ignorance, but where is the replication of state between two
 different haproxy servers, one acting as active and the other as passive?

  I mean, it seems this is internal to a single server; you are stating: maintain
 flow affinity based on source ip. Great, but how does the other balancer know
 about this decision?

 BTW: LVS can easily do session replication just use:
 ipvsadm --start-daemon master
 ipvsadm --start-daemon backup

  Yep, I'm aware this can be done with lvs, my doubt is if it can be done with 
 haproxy too.



stick-table type ip size 20k peers mypeers

This part of the statement means: replicate this session table to ALL
of the haproxy instances listed in mypeers.



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: State sync

2011-03-15 Thread Malcolm Turnbull
Jaime,

Cookies don't need state replication.

Source IP has a new replication feature, example here:

peers mypeers
peer haproxy1 192.168.0.1:1024
peer haproxy2 192.168.0.2:1024
peer haproxy3 10.2.0.1:1024

backend mybackend
mode tcp
balance roundrobin
stick-table type ip size 20k peers mypeers
stick on src

server srv1 192.168.0.30:80
server srv2 192.168.0.31:80

BTW: LVS can easily do session replication just use:
ipvsadm --start-daemon master
ipvsadm --start-daemon backup



On 15 March 2011 14:45, Jaime Nebrera jnebr...@eneotecnologia.com wrote:

  Hi all list members,

  I'm wondering if it's possible to share the connection affinity / state
 between a couple of HA proxies, in such a way that, in case of failure, the backup
 server will redirect the user to the same end server as it was using when
 going through the main haproxy.

  I'm aware this can be done with LVS and with netfilter too (for connection
 tracking, not affinity related though), but I don't know if it's possible with
 haproxy.

  Is it? How? Is this open source or commercial?

  Very thankful in advance.

  Kind regards

 --
 Jaime Nebrera - jnebr...@eneotecnologia.com
 Consultor TI - ENEO Tecnologia SL
 C/ Manufactura 2, Edificio Euro, Oficina 3N
 Mairena del Aljarafe - 41927 - Sevilla
 Telf.- 955 60 11 60 / 619 04 55 18





--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: HAProxy Session affinity for PHP web application

2011-03-11 Thread Malcolm Turnbull
Thomas,

This post really made me smile :-).

HAProxy allows:
non-sticky
Source IP Hash
Source IP Stick table
Cookie (self managed self inserted transparent etc.)
Cookie (based on reading the web servers session cookie)

If your web server(s) are scalable, i.e. they handle their own sessions in
a SHARED backend database or memcached, then you don't need ANY sticky
on the load balancer.

Any form of sticky on the load balancer means that a single server
failure will break your users' sessions (which may not be a major
problem).
If your app handles its own state then any server failure doesn't really matter.

For your case I would go with:
Cookie (self managed self inserted transparent etc.)
As suggested by the previous example config.





On 11 March 2011 15:44, Thomas Manson dev.mansontho...@gmail.com wrote:

 Hi Gabriel,

    I've read that HAProxy is capable of keeping a set of http requests directed
  to the same webserver. (I think the feature is called 'Sticky Session' on
  WebSphere Cluster.)



 So, can anyone confirm that it's possible or not possible to have a sticky 
 session feature with HAProxy  ?
 If possible : howto/best practice?
 If not : well I'll try the memcache solution ;)



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Is there a way of getting the number of sessions/connections even when you are using force close?

2011-03-08 Thread Malcolm Turnbull
OK this may sound like a daft question but:

Is there a way of getting the number of sessions/connections even when you
are using force close?

What I mean is that by default we close all connections in HTTP mode.
So say you have 100 users on 2 servers.
Session rate at any one time may be : 5
Sessions may be: 5

So my question is: can you get actual connections/users per server, i.e.
find out if:
Server1 = 48
Server2 = 52
?

And as a side question, does leastconn use the value of 5 or 50? i.e.
what time period does leastconn use?

Thanks in advance.


--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Stick on src defaults to option persist?

2011-02-22 Thread Malcolm Turnbull
Just a small issue with the stick-table feature, with the following
example config:

listen L7vip  192.168.2.60:80
mode tcp
balance leastconn
stick-table type ip size 10240k expire 30m
stick on src
server rip1 192.168.2.26:80 weight 1 check  inter 2000 rise 2 fall 3
server rip2 192.168.2.111:80 weight 1 check  inter 2000 rise 2 fall 3

Works great, but if you put one of the backend servers in maintenance
mode or set the weight to 0

echo set weight L7vip/rip2 0 | socat unix-connect:/var/run/haproxy.stat stdio
echo disable server L7vip/rip2 0 | socat
unix-connect:/var/run/haproxy.stat stdio

Nothing happens - or rather the persistence template still takes
effect, like option persist.

Setting weight=0 and soft restarting works, but potentially breaks all
the other entries in the persistence table? (Or does it survive a soft
restart?)

Is it possible to do something like:

stick on pattern [table table] [{if | unless} condition not in
maintenance mode...


--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: SSL read failed - closing connection during benchmarking haproxy SSL

2011-02-16 Thread Malcolm Turnbull
On 15 February 2011 16:49, Amol mandm_z...@yahoo.com wrote:

 I was benchmarking my stunnel -- haproxy -- apache webserver configuration
 from an Ubuntu server, and when I run this test I keep getting the SSL read
 failed - closing connection error.
 Here is the snippet:

 $ ab -n 1 -c 10 https://xxx.xxx.com/xxx/xxx.php
 This is ApacheBench, Version 2.3 $Revision: 655654 $
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/

 Benchmarking  (be patient)
 Completed 1000 requests
 Completed 2000 requests
 Completed 3000 requests
 SSL read failed - closing connection
 SSL read failed - closing connection
 SSL read failed - closing connection


Amol,

We get the same problem with Pound (SSL) -> HAProxy and ApacheBench.
It seems to be upset about the SSL handshake; I didn't investigate much
but used httperf instead (which works).
Let me know if you get ab working though please!



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



HAproxy can't seem to bind to more than 1000 ports?

2011-02-15 Thread Malcolm Turnbull
HAproxy can't seem to bind to more than 1000 ports? (well about 1017
which is suspiciously close to 1024...)
I'm probably being really stupid but I saw the question earlier on FTP
and I was playing with binding large port ranges
my ulimit -n is 2
Am I missing something obvious? (usually)

Thanks.

--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: HAproxy can't seem to bind to more than 1000 ports?

2011-02-15 Thread Malcolm Turnbull
On 15 February 2011 22:23, Willy Tarreau w...@1wt.eu wrote:

 Hi Malcolm,

 On Tue, Feb 15, 2011 at 08:35:05PM +, Malcolm Turnbull wrote:
  HAproxy can't seem to bind to more than 1000 ports? (well about 1017
  which is suspiciously close to 1024...)
  I'm probably being really stupid but I saw the question earlier on FTP
  and I was playing with binding large port ranges
  my ulimit -n is 2
  Am I missing something obvious? (usually)

 that looks rather strange. I remember having bound very large port ranges
 for test purposes. The largest configs I know are around 6-700 ports but
 I've never encountered anything like that.

 What does happen when you try to bind more ports ? Do you get an error,
 do they just not respond ?

 Willy


When you restart HAProxy, just the attempted binds > 1017 fail with the error:

[ALERT] 045/222808 (22746) : Starting proxy VIP_Name: cannot create
listening socket

However on further testing this seems to be because I am calling the
restart script from within Apache/PHP:

From the command prompt:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat
/var/run/haproxy.pid) WORKS
haproxy -f /etc/haproxy/haproxy.cfg start
haproxy -f /etc/haproxy/haproxy.cfg stop

ALL work fine...

But from within Apache/PHP/Sudo
`sudo /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
/var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) 2>&1`;
`haproxy -f /etc/haproxy/haproxy.cfg start`;
`haproxy -f /etc/haproxy/haproxy.cfg stop`;
FAILS for more than ~1000 ports

So shouldn't effect other people in the same way... I will investigate
offlist















--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Can you do a HTTPS redirect to multiple sub domains?

2011-01-19 Thread Malcolm Turnbull
Willy et al.

I'm being lazy and asking the list without experimenting first, but I
quite often get asked about redirecting port 80 -> HTTPS and I suggest:
// where 192.168.6.146 is the local stunnel/pound termination
acl secure src 192.168.6.146
redirect prefix https://www.foo.com if !secure

But a customer just asked:

Malcolm, we have a wildcard cert, which means there are a bunch of
domains terminating at the load balancer. I need to redirect each
subdomain independently:

http://dev1-foo.com -> https://dev1.foo.com

http://dev2-foo.com -> https://dev2.foo.com

http://dev3-foo.com -> https://dev3.foo.com

Is there no way to just do something like the below? (I know the asterisk is
not a variable in this case.)


acl secure src 192.168.6.146
redirect prefix https://*.foo.com if !secure

So all of the subdomains were caught in this rule?


So the question is: do I need to do a hdr ACL test on each individual entry?
acl foo1 hdr dev-1.foo.com
if acl foo1 
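i.e. something like this, using the hostnames from the example above
(untested sketch):

acl secure src 192.168.6.146
acl host_dev1 hdr(host) -i dev1-foo.com
acl host_dev2 hdr(host) -i dev2-foo.com
acl host_dev3 hdr(host) -i dev3-foo.com
redirect prefix https://dev1.foo.com if host_dev1 !secure
redirect prefix https://dev2.foo.com if host_dev2 !secure
redirect prefix https://dev3.foo.com if host_dev3 !secure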




--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: [RFC] Thinking about RDP-cookie

2010-12-15 Thread Malcolm Turnbull
On 15 December 2010 14:33, L. Alberto Giménez
agimenez-hapr...@sysvalve.homelinux.net wrote:


 I'm thinking about something like having a connection table where each new 
 connection gets inserted upon first session, and somehow assigned an internal 
 persistent association with a roundrobin-elected backend.

 Upon further connections, the proxy would recognize that there was a 
 session in the past and that it would reuse the same backend.

 What do you think?


Alberto,

RDP and HTTP have cookies in the application protocol, therefore you
can insert or modify a marker/cookie to keep track...
How would you insert the marker in standard TCP traffic?
The only method I'm aware of is source IP for TCP persistence.








--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Quick question where the answer is probably no :-).

2010-12-08 Thread Malcolm Turnbull
The new stunnel proxy functionality got me thinking (which is normally
a bad thing):

With HAProxy is it possible to insert an x-forwarded (or similar) in SMTP?

F5 suggest:
You could insert the IP in the optional comments field, assuming
exchange can access it there.
Comments: [IP::client_addr]

Or from the postfix documentation:
XFORWARD Example

In the following example, information sent by the client is shown in bold font.

220 server.example.com ESMTP Postfix
EHLO client.example.com
250-server.example.com
250-PIPELINING
250-SIZE 1024
250-VRFY
250-ETRN
250-XFORWARD NAME ADDR PROTO HELO
250 8BITMIME
XFORWARD NAME=spike.porcupine.org ADDR=168.100.189.2 PROTO=ESMTP
250 Ok
XFORWARD HELO=spike.porcupine.org
250 Ok
MAIL FROM:wie...@porcupine.org
250 Ok
RCPT TO:u...@example.com
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
. . .message content. . .
.
250 Ok: queued as 3CF6B2AAE8
QUIT
221 Bye





-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Any chance of a brief introduction and example of the new PROXY protocol?

2010-12-07 Thread Malcolm Turnbull
Willy,
The new PROXY protocol sounds great; I've just patched stunnel 4.34 and
am running it with HAProxy 1.5-dev3... but what commands do I need?
How would you use it for SMTPS or IMAPS, for example?
Any chance of a brief introduction and example of the new PROXY protocol?
Thanks very much in advance.
I did trawl through the 1.5 docs, but couldn't seem to spot it...
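My best guess so far is that the haproxy side is just the new accept-proxy
bind option, with stunnel pointed at that listener - something like this
(addresses/ports assumed, untested):

listen smtp_from_stunnel
    bind 127.0.0.1:8025 accept-proxy
    mode tcp
    server mail1 192.168.0.40:25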

--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: How to achieve application driven load balancing scheme using HAProxy?

2010-11-25 Thread Malcolm Turnbull
On 25 November 2010 19:27, Sebastien Tardif sebtar...@ncf.ca wrote:

 How to achieve application driven load balancing scheme using HAProxy?

 What I want to achieve is “user affinity between sessions” based on user 
 login. So that, between http/s sessions, and even between days, user is more 
 likely to reach the same server.


 Any idea how a server can re-stick http session to another server by setting 
 something in the response that HAProxy will act on?

 Any comments?

Sebastian,

It sounds a bit overly complex to me, why not just implement sharding?

i.e.

1) Every application server can handle the login page (no persistence required).
2) Login page once verified redirects to www.bigsite.com/sharda/ (or
just sets an appsession cookie called 'sharda') or BOTH!
3) HAProxy reads the appsession cookie or URL and redirects to a
smaller cluster (sharda), which could be anything, such as all logins starting
with 'a'.
4) All servers in each shard use the same database/storage/memcached to
handle just the users starting with 'a', etc...

Should give fairly unlimited scalability?
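The HAProxy piece of step 3 would look something like this (backend names
assumed):

frontend www
    bind 192.168.0.10:80
    acl shard_a path_beg /sharda
    acl shard_b path_beg /shardb
    use_backend cluster_a if shard_a
    use_backend cluster_b if shard_b
    default_backend login_servers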

I notice my googlemail client is doing it right now:
mail.google.com/a/loadbalancer.org



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



unsubscribe

2010-04-23 Thread Malcolm Turnbull
unsubscribe


RDP cookies with balance leastconn?

2010-04-12 Thread Malcolm Turnbull
Hi,

I managed to convince myself that balance leastconn would work if I
replaced balance rdp-cookie in the following config:

defaults
clitimeout 1h
srvtimeout 1h
   listen VIP1 192.168.0.10:3389
mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if RDP_COOKIE
persist rdp-cookie
balance rdp-cookie
option tcpka
option tcplog
server Win2k8-1 192.168.0.11:3389 weight 1 check   inter 2000
rise 2 fall 3
server Win2k8-2 192.168.0.12:3389 weight 1 check   inter 2000
rise 2 fall 3
option redispatch

However, after further testing I realised my initial success was just
luck... as it seems to break the cookie persistence completely?

How easy would it be to make new connections use leastconn?

Otherwise you can end up with VERY uneven load balancing if you have
enough long sessions.



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: SNMP for latest HAProxy version...?

2010-03-29 Thread Malcolm Turnbull
Oops, I was being a complete idiot: the haproxy config was broken and not
restarting, and therefore SNMP wasn't working either.

haproxy.pl-0.27 + haproxy-1.4 does indeed work very well.

:-).




2010/3/28 Krzysztof Olędzki o...@ans.pl

 Hello Malcolm,

 Has anyone already done this? Perl is not my strong point :-).


 Yes, haproxy.pl-0.27 + haproxy-1.4 works.

 Best regards,

Krzysztof Olędzki






-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/


SNMP for latest HAProxy version...?

2010-03-26 Thread Malcolm Turnbull
I was just checking out the SNMP support for the latest version of HAProxy:


Older versions gave:

snmpwalk -c public -v2c 127.0.0.1 1.3.6.1.4.1.29385.106.1.0
SNMPv2-SMI::enterprises.29385.106.1.0.0.1.0 = STRING: stats
SNMPv2-SMI::enterprises.29385.106.1.0.1.1.0 = STRING: FRONTEND
SNMPv2-SMI::enterprises.29385.106.1.0.2.1.0 = 

etc.

But the current version gives:

snmpwalk -c public -v2c 127.0.0.1 1.3.6.1.4.1.29385.106.1.0
SNMPv2-SMI::enterprises.29385.106.1.0 = No Such Instance currently
exists at this OID


I'm using: http://haproxy.1wt.eu/download/contrib/netsnmp-perl/haproxy.pl

I assume the mapping has changed?

my %info_vars = (
0   => 'Name',
1   => 'Version',
2   => 'Release_date',
3   => 'Nbproc',
4   => 'Process_num',
5   => 'Pid',
6   => 'Uptime',
7   => 'Uptime_sec',
8   => 'Memmax_MB',
9   => 'Ulimit-n',
10  => 'Maxsock',
11  => 'Maxconn',
12  => 'CurrConns',
);


Has anyone already done this? Perl is not my strong point :-).

thanks.







--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Can HAProxy work around a PSP3 and IIS7 issue?

2009-11-24 Thread Malcolm Turnbull
We have a customer with a PSP3 web browser client sending TWO Referer headers:

Referer: http://localhost/NetFrontBrowser/
Referer: http://localhost/Flash/

And this is breaking IIS7...
http://forums.iis.net/t/1162919.aspx


Could HAProxy be configured to strip extra referer headers? (i.e. only
allow the first one?)


Thanks in advance.
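
One hedged idea, not tested against that client: the reqidel directive deletes
any request header line matching a regex, so if the spurious second header
always looks the same it can be targeted specifically, e.g.:

# in the frontend/listen section handling this traffic:
# drop only the duplicate Referer sent by the PSP3 browser
reqidel ^Referer:\ http://localhost/Flash/

If the application does not need the Referer header at all, reqidel ^Referer:
would simply remove every occurrence.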


--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: We have been playing around with the new RDP cookie feature in 1.4-dev4 and it works really well...

2009-11-09 Thread Malcolm Turnbull
Guillaume,

I think you would need to increase the clitimeout and srvtimeout to 2 hours
if you wanted it to be seamless.
Otherwise you would need to reconnect with the same username to join the
same session if those timeouts had expired.

It's not like HTTP cookies that are remembered on the client (the login ID is
only sent when establishing a connection).
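
i.e. something along these lines in the defaults section (values illustrative):

defaults
    clitimeout 2h
    srvtimeout 2h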





Nick, can you run this specific test tomorrow, please?


2009/11/9 Guillaume Bourque guillaume.bour...@gmail.com

 Hi Malcolm,

 I'm using haproxy as an RDP dispatcher, but in tcp mode with balance
 source.

 This setup allows a laptop user which goes into sleep mode to go back to
 the same server when it wakes up 2 hours later.

 I would be interested to hear if you have laptop users in your setup and if
 the user will end up on the same backend server after a 1 hour sleep period?
 Will the RDP cookie be the same after a wakeup?

 Thanks for sharing this !

 Guillaume





 Malcolm Turnbull wrote:

  We have been playing around with the new RDP cookie feature in 1.4-dev4
 and it works really well...
 One of our guys Nick has written a blog about his configuration and
 testing of Windows Terminal Servers with Windows and Linux RDP clients.
 We would welcome any feedback from anyone using a similar configuration.

 http://blog.loadbalancer.org/
 or

 http://blog.loadbalancer.org/load-balancing-windows-terminal-server-%E2%80%93-haproxy-and-rdp-cookies/

 Thanks.


 --
 Regards,

 Malcolm Turnbull.

 Loadbalancer.org Ltd.
 Phone: +44 (0)870 443 8779
 http://www.loadbalancer.org/



 --
 Guillaume Bourque, B.Sc.,
 consultant, open-source technology infrastructures!
 514 576-7638




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/


Question relating to the errors show on the HAProxy stats screen and 503 errors.

2009-10-16 Thread Malcolm Turnbull
Under the errors section, what exactly does the Resp. section mean?
Does it mean any response that is not a 200 OK?

I have a customer with a lot of check errors, a lot of Resp. errors
and too many Down errors.

Original config was:

    server mm5 173.45.238.119:80 weight 1 cookie mm5
    server mm6 173.45.227.82:80 weight 1 cookie mm6

I've changed it to the following for more testing and enabled logging
at info level.

    server mm5 173.45.238.119:80 weight 1 cookie mm5 check inter 2 fall 3 rise 1
    server mm6 173.45.227.82:80 weight 1 cookie mm6 check inter 2 fall 2 rise 1

They are getting 503 errors, and I assume that means that HAProxy is
passing traffic to the real server, getting an error, then telling the
client 503?
Or do I misunderstand that?

Current load is between 200-400 concurrent sessions.

Also I just realised that the

option redispatch

was not set, would this option reduce the number of 503 errors?
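
For what it's worth, redispatch only kicks in when the connection to the chosen
server fails and retries remain, so it is usually paired with a retries setting;
a hedged sketch (values illustrative):

defaults
    retries 3
    option redispatch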





--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Transparent proxy of SSL traffic using Pound to HAProxy backend patch and howto

2009-07-20 Thread Malcolm Turnbull
Many thanks to Ivansceó Krisztián for working on the TPROXY patch for
Pound for us; we can finally do SSL termination -> HAProxy -> backend
with TPROXY.

http://blog.loadbalancer.org/transparent-proxy-of-ssl-traffic-using-pound-to-haproxy-backend-patch-and-howto/

Patches to Pound are here:
http://www.loadbalancer.org/download/PoundSSL-Tproxy/poundtp-2.4.5.tgz

Willy,

You mentioned that it may be more sensible to do something like:

source 0.0.0.0 usesrc hdr(x-forwarded-for)

rather than having two sets of TPROXY set up... but I don't think this is
possible yet?
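
For the record, later HAProxy releases did gain a form of this via hdr_ip; a
hedged sketch, assuming a version whose source keyword supports it (check the
configuration manual for the exact release):

backend web_servers
    # spoof the client address taken from the X-Forwarded-For header
    source 0.0.0.0 usesrc hdr_ip(x-forwarded-for)
    server web1 192.168.1.10:80 check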






--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



kernel: Neighbour table overflow?

2009-06-23 Thread Malcolm Turnbull
We have a customer using haproxy with the following problem; is this
anything obvious?

We are encountering a server timeout problem from the client PC when
accessing the servers through the load balancer. We have two (2) VIPs and
six (6) real servers, and all of those servers are Oracle.

I checked /var/log/messages, and it shows the following:

Jun 19 11:26:48 lbmaster kernel: printk: 157 messages suppressed.
Jun 19 11:26:48 lbmaster kernel: Neighbour table overflow.
Jun 19 11:26:48 localhost haproxy[2139]: Connect from 10.32.85.18:1190 to
10.101.24.25:8000 (VIP_10_101_24_25/HTTP)
Jun 19 11:26:48 localhost haproxy[2139]: Connect from 10.32.85.18:1191 to
10.101.24.25:8000 (VIP_10_101_24_25/HTTP)
Jun 19 11:26:48 localhost haproxy[2139]: Connect from 10.9.84.69:1327 to
10.101.24.25:8000 (VIP_10_101_24_25/HTTP)
Jun 19 11:26:49 localhost haproxy[2139]: Connect from 10.4.2.129:1165 to
10.101.24.25:8000 (VIP_10_101_24_25/HTTP)
Jun 19 11:26:49 localhost haproxy[2139]: Connect from 10.32.85.18:1192 to
10.101.24.25:8000 (VIP_10_101_24_25/HTTP)
Jun 19 11:26:49 localhost haproxy[2139]: Connect from 10.134.95.110:1344
to 10.101.24.25:8000 (VIP_10_101_24_25/HTTP)
Jun 19 11:26:53 lbmaster kernel: printk: 156 messages suppressed.
Jun 19 11:26:53 lbmaster kernel: Neighbour table overflow.
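
"Neighbour table overflow" normally means the kernel ARP cache is too small for
the number of hosts talking to the box, rather than an haproxy problem; the
usual remedy is to raise the neighbour table thresholds, e.g. a hedged sketch
(values illustrative, size them for the network):

# /etc/sysctl.conf
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
# apply with: sysctl -p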


-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Troubleshoot Starting proxy http_proxy: cannot bind socket

2009-06-12 Thread Malcolm Turnbull
Aaron,

Either the address:port you are trying to bind is already in use... or the IP
doesn't exist on the host.
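
A couple of quick checks usually narrow it down (commands illustrative):

# anything already listening on the address:port from the bind/listen line?
netstat -lntp | grep ':80 '

# is the IP in the bind/listen line actually configured on this host?
ip addr show

# if haproxy must bind to a VIP that is not yet local (e.g. on a standby node),
# allow non-local binds:
sysctl -w net.ipv4.ip_nonlocal_bind=1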

2009/6/12 Aaron Rosenthal aa...@idealernetwork.com

 I don't know how to troubleshoot this issue, any suggestions would be 
 helpful. Thanks

 [r...@lb1 haproxy-1.3.17]# /etc/init.d/haproxy start
 Starting HAproxy: [ALERT] 161/110810 (22011) : Starting proxy http_proxy: 
 cannot bind socket
                                                           [FAILED]







--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: Transparent proxy

2009-05-11 Thread Malcolm Turnbull
Carlo,

Sorry, I got busy and forgot to post back to you.
I was going to ask: what's your output from:

iptables -L -t mangle

Chain PREROUTING (policy ACCEPT)
target     prot opt source           destination
MARK       tcp  --  192.168.2.0/24   anywhere      tcp dpt:http MARK set 0x1
DIVERT     tcp  --  anywhere         anywhere      socket


Is the divert to socket in place?
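
For comparison, the rules that normally create that DIVERT chain look roughly
like this (the standard TPROXY plumbing; mark value and routing table number
are illustrative):

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100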





2009/5/11 Carlo Granisso c.grani...@dnshosting.it

 Hello everybody, I have a problem with haproxy (1.3.17) and kernel 2.6.29.

 I have successfully recompiled my kernel with the TPROXY modules, installed
 haproxy (compiled from source with the tproxy option enabled) and installed
 iptables 1.4.3 (which has the tproxy patch).
 Now I can't use the transparent proxy function: if I leave this line in
 haproxy.cfg, source 0.0.0.0 usesrc clientip, haproxy says 503 - Service unavailable.
 If I comment out the line, everything works fine (without transparent proxy).

 My situation:

 haproxy with two ethernet devices: the first one for a public IP, the second
 one for a private IP (192.168.XX.XX);
 two web servers, each with one ethernet interface connected to my private network.

 Have you got any ideas, or can you provide me with examples?


 Thanks,


 Carlo


--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Just a small inconsistency in the docs for listening on multiple ports?

2009-02-26 Thread Malcolm Turnbull
I'm using haproxy-1.3.15.7.tar.gz for some testing and looking at the
options to bind multiple ports.

The docs imply that you can use a line such as:

listen    VIP_Name :80,:81,:8080-8089

But this gives me :

[ALERT] 056/114217 (18173) : Invalid server address: ':80,'
[ALERT] 056/114217 (18173) : Error reading configuration file :
/etc/haproxy/haproxy.cfg

However if I break up the ports using 3 per line it works fine:

listen    VIP_Name 192.168.2.83:80
    bind 192.168.2.83:103,192.168.2.83:102
    bind 192.168.2.83:104,192.168.2.83:105

Is this a deliberate feature change? (not a major issue but just
wanted to check)

PS: I forgot to mention that I have patched with the request/receive health
check posted earlier, but I don't think that's the issue.



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: A patch for haproxy 1.3.15.7 (HTTP-ECV)

2009-02-18 Thread Malcolm Turnbull
FinalBSD,

Ace, I was hoping someone would add that feature.
Just tested it and it works perfectly.

My tests were:
1) check index.html for value 'mytesttext' - result: OK, site up -
CORRECT (text is present on my page)
2) check index.html for value 'not found' - result: site down -
CORRECT (text not present on my page)
3) check doesntexist.html for value 'not found' - result: site down -
CORRECT (even though the text is present on the apache error page)

Ldirectord / LVS also has a separate interval time for http health
checks (so you can make it longer),
and also the option to do x TCP checks followed by 1 HTTP/grep check
(to lower the impact of health checks on the cluster).
These are just ideas for improvements, but the core feature is very useful.
Thanks again.
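
As an aside for anyone reading this later: mainline HAProxy 1.4 gained
http-check expect, which covers much the same ground as this patch; a hedged
sketch:

backend www
    option httpchk GET /http-ecv.php HTTP/1.0
    # only mark the server up when the body contains the marker text
    http-check expect string mytesttext
    server www1 192.168.1.2:80 check port 80 inter 2000 rise 2 fall 2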






2009/2/18 匡萃彪 final...@gmail.com

 Hi folks,
 I wrote a patch and added two new features for haproxy (1.3.15.7) yesterday.

  1. HTTP-ECV (Extended Content Verification) monitor

     Adds a receive keyword for the HTTP-ECV monitor.
 ECV monitors use the specified request URI and receive string settings in
 an attempt to retrieve explicit content from backend nodes. The check is
 successful when the content matches the receive string value.
 Syntax: option httpchk GET /uri [receive "receive string"] [HTTP/1.0]
 ----------------------------------------------------------------------
 backend www
 balance source
 cookie SERVERID insert indirect
 option httpchk GET /http-ecv.php receive "Hello World!" HTTP/1.0
 server www1 192.168.1.2:80 cookie A check port 80 inter 2000 rise 2 fall 2
 ----------------------------------------------------------------------

  2. When checking the reply of services, only 2xx is OK.

     If 2xx and 3xx were both OK, assume the following:

     Apache configuration:
 ----------------------------------------------------------------------
     ErrorDocument 404 http://www.example.com/404.html
 ----------------------------------------------------------------------
     Haproxy config:
 ----------------------------------------------------------------------
     option httpchk GET /check.php HTTP/1.0
 ----------------------------------------------------------------------
     So, if the file (check.php) does not exist on the server(s), the
 check will be redirected to
     http://www.example.com/404.html and get response code 302; it would
 still be considered OK, but actually it's not.

    Please forgive my poor English and my limited C programming skills.

  Regards!
  FinalBSD





--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re: I Finally got HAProxy TPROXY working so wrote a blog entry howto

2009-02-12 Thread Malcolm Turnbull
Willy,

Just to confirm that I'm using

  server backup 127.0.0.1:80 backup source 0.0.0.0

instead of

  server backup 127.0.0.1:80 backup

and it works fine now, thanks.



--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Question about source IP persistence (balance source) when a server goes down:

2009-01-16 Thread Malcolm Turnbull
The manual states that when using balance source:

The source IP address is hashed and divided by the total
weight of the running servers to designate which server will
receive the request. This ensures that the same client IP
address will always reach the same server as long as no
server goes down or up. If the hash result changes due to the
number of running servers changing, many clients will be
directed to a different server. This algorithm is generally
used in TCP mode where no cookie may be inserted. It may also
be used on the Internet to provide a best-effort stickyness
to clients which refuse session cookies. This algorithm is
static, which means that changing a server's weight on the
fly will have no effect.


Does this mean that if, say, 1 server out of a cluster of 5 servers
fails, then it is likely
that the hash result changes and lots of the clients could potentially
lose their session? (i.e. hit the wrong server because the hash has
changed)
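
In practice yes: with the default map-based hash the mapping is computed over
the set of running servers, so one failure remaps a large share of clients.
Later HAProxy releases (1.4 onwards, if memory serves) added hash-type
consistent, which keeps most clients on their existing server when one node
drops; a hedged sketch:

backend app
    balance source
    hash-type consistent   # only clients of the failed server get remapped
    server app1 192.168.0.11:80 check
    server app2 192.168.0.12:80 check
    server app3 192.168.0.13:80 check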




--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/