Re: [PATCH] MINOR: cfgparse: Allow disable of stats

2016-12-16 Thread Willy Tarreau
Hi Robin,

On Thu, Dec 15, 2016 at 10:49:21PM -0800, Robin H. Johnson wrote:
> Add a 'stats disable' option that can be used to explicitly disable the
> stats, without issuing the warning message as seen on TCP proxies.
> 
> If any stats options are present in a default block, there is presently
> no way to explicitly disable them for a single proxy, other than
> defining a new default block with all of the options repeated EXCEPT the
> stats options.
> 
> This normally generates a warning:
>   [WARNING] ... 'stats' statement ignored for proxy 'my-proxy' as it
> requires HTTP mode.

I think we could improve this situation by checking whether the stats
statement was inherited from the defaults section and automatically
disabling it instead of emitting a warning.

> After the warning is issued, the stats for that proxy are disabled
> anyway.
> 
> The new 'stats disable' option just disables the stats without
> generating the warning message; it uses the exact same means to disable
> the stats as used by the warning path.

I think it can be useful to disable stats on certain proxies working in
HTTP mode, in fact. The way stats work is a mess: declaring any stats
directive is enough to automatically enable them. If you want to put the
stats auth or uri in the defaults section, you may still want to disable
them for certain proxies, and "stats disable" can be useful for this.
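
For illustration, here is a minimal config sketch of the intended use,
assuming the patch is applied (names and addresses are made up):

    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s
        # any stats directive here implicitly enables stats
        # for every proxy inheriting from this section
        stats uri /stats
        stats auth admin:secret

    listen my-proxy
        bind :8080
        # opt this proxy out of the stats inherited from defaults
        stats disable
        server s1 192.0.2.10:80 check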

I'm just slightly concerned about a risk of memory leak if people alternate
"stats enable" and "stats disable": while we don't have a generic
uri_auth_free() function, we should probably at least add a FIXME comment
in the code regarding this. Or we can implement the free function this way;
I think it will work:

/* Sketch of a generic destructor for a struct uri_auth: release the
 * strings and sub-structures it owns, then the structure itself.
 */
void uri_auth_free(struct uri_auth *ua)
{
	if (!ua)
		return;
	free(ua->uri_prefix);
	free(ua->auth_realm);
	free(ua->node);
	free(ua->desc);
	userlist_free(ua->userlist);
	free_http_req_rules(&ua->http_req_rules);
	free(ua);
}

Then in your patch you may try to do this:

+	} else if (!strcmp(args[1], "disable")) {
+		if (curproxy == &defproxy || curproxy->uri_auth != defproxy.uri_auth)
+			uri_auth_free(curproxy->uri_auth);
+		curproxy->uri_auth = NULL;
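
(The condition is what keeps this from double-freeing: we only release a
uri_auth that this proxy actually owns, i.e. we are the defaults proxy
itself, or the pointer differs from the one inherited from defaults.)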

Regarding the current warning, I'd like to remove it automatically, but we
still have the issue that by the time check_config_validity() runs, we've
lost the information that the statement was inherited from a defaults
section (and I think some checks are wrong there, by the way), and we still
don't have post-section checks.

I'll look into adding certain such checks and possibly backport them to
improve the situation.

> This patch should be back-ported to 1.7.

I'm fine with backporting your fix, but please check whether the changes
above work fine for you so that we at least make it more reliable. If that
doesn't work, just add a FIXME comment instead of the call to
uri_auth_free() so that we can rework it later.

Thanks,
Willy



Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
I didn't say that if one can hit it, they all can.

However, if you want to use that logic, then I'd counter with: if
it's not currently the active instance, it doesn't matter whether it can or
not. Thus, why do the health check?

The only time it'd matter whether the inactive/standby server can hit the
backend is if it became the active/hot server.


Now mind you, I'm not saying this functionality needs to be added.
I'm merely saying that if someone else has figured out a decent workaround,
I'd love to hear about it (and apparently so would others on the list).

On Fri, Dec 16, 2016 at 4:53 PM, Neil - HAProxy List wrote:
> So because one loadbal can reach the service, the others can?
>
> Log spam needs getting rid of anyway. Filter it out whether it's the
> in-service or one of the out-of-service loadbals.
>
> If you have a complex health check that creates load, make it a little
> smarter and cache its result for a while
>
> On Fri, 16 Dec 2016 at 19:56, Jeff Palmer wrote:
>>
>> backend health should be in the stick-tables that are shared between
>> all instances, right?
>>
>> With that in mind, the inactive servers would know the backend states
>> if a failover were to occur. No sense in having the log spam, network
>> traffic, and load from health checks that are essentially useless
>> (IMO, of course)
>>
>> On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List wrote:
>> > Stephan,
>> >
>> > I'm curious...
>> >
>> > Why would you want the inactive loadbal not to check the services?
>> >
>> > If you really really did want that, you could do something horrid like
>> > telling keepalived to block access to the backends with iptables when
>> > it does not own the service IP
>> >
>> > but why? Your health checks should be fairly lightweight?
>> >
>> > Neil
>> >
>> > On 16 Dec 2016 15:44, "Marco Corte" wrote:
>> >>
>> >> Hi!
>> >>
>> >> I use keepalived for IP management.
>> >>
>> >> I use Ansible on another host to deploy the configuration on the
>> >> haproxy nodes.
>> >> This setup gives me better control over the configuration: it is split
>> >> into several files on the Ansible host, but assembled into a single
>> >> config file on the nodes.
>> >> This also gives the opportunity to deploy the configuration on one
>> >> node only.
>> >> On the Ansible host, the configuration changes are tracked with git.
>> >>
>> >> I also considered an automatic replication of the config between the
>> >> nodes but... I did not like the idea.
>> >>
>> >> .marcoc
>>
>> --
>> Jeff Palmer
>> https://PalmerIT.net



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread Neil - HAProxy List
So because one loadbal can reach the service, the others can?

Log spam needs getting rid of anyway. Filter it out whether it's the
in-service or one of the out-of-service loadbals.

If you have a complex health check that creates load, make it a little
smarter and cache its result for a while.
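
Something like this sketch, say (backend name, endpoint and addresses are
made up), where the expensive work happens behind an endpoint that caches
its result server-side:

    backend app
        # probe a cheap endpoint whose result is cached server-side,
        # and stretch the interval so checks stay lightweight
        option httpchk GET /health/cached
        server app1 192.0.2.10:80 check inter 10s fall 3 rise 2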

On Fri, 16 Dec 2016 at 19:56, Jeff Palmer wrote:

> backend health should be in the stick-tables that are shared between
> all instances, right?
>
> With that in mind, the inactive servers would know the backend states
> if a failover were to occur. No sense in having the log spam, network
> traffic, and load from health checks that are essentially useless
> (IMO, of course)
>
> On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List wrote:
> > Stephan,
> >
> > I'm curious...
> >
> > Why would you want the inactive loadbal not to check the services?
> >
> > If you really really did want that, you could do something horrid like
> > telling keepalived to block access to the backends with iptables when
> > it does not own the service IP
> >
> > but why? Your health checks should be fairly lightweight?
> >
> > Neil
> >
> > On 16 Dec 2016 15:44, "Marco Corte" wrote:
> >>
> >> Hi!
> >>
> >> I use keepalived for IP management.
> >>
> >> I use Ansible on another host to deploy the configuration on the
> >> haproxy nodes.
> >> This setup gives me better control over the configuration: it is split
> >> into several files on the Ansible host, but assembled into a single
> >> config file on the nodes.
> >> This also gives the opportunity to deploy the configuration on one
> >> node only.
> >> On the Ansible host, the configuration changes are tracked with git.
> >>
> >> I also considered an automatic replication of the config between the
> >> nodes but... I did not like the idea.
> >>
> >> .marcoc
>
> --
> Jeff Palmer
> https://PalmerIT.net


Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
backend health should be in the stick-tables that are shared between
all instances, right?

With that in mind, the inactive servers would know the backend states
if a failover were to occur. No sense in having the log spam, network
traffic, and load from health checks that are essentially useless
(IMO, of course)

On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List wrote:
> Stephan,
>
> I'm curious...
>
> Why would you want the inactive loadbal not to check the services?
>
> If you really really did want that, you could do something horrid like
> telling keepalived to block access to the backends with iptables when it
> does not own the service IP
>
> but why? Your health checks should be fairly lightweight?
>
> Neil
>
>
> On 16 Dec 2016 15:44, "Marco Corte" wrote:
>>
>> Hi!
>>
>> I use keepalived for IP management.
>>
>> I use Ansible on another host to deploy the configuration on the haproxy
>> nodes.
>> This setup gives me better control over the configuration: it is split
>> into several files on the Ansible host, but assembled into a single
>> config file on the nodes.
>> This also gives the opportunity to deploy the configuration on one node
>> only.
>> On the Ansible host, the configuration changes are tracked with git.
>>
>> I also considered an automatic replication of the config between the
>> nodes but... I did not like the idea.
>>
>> .marcoc
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread Guillaume Bourque
Hello Marco,

I would be very interested in how you build your haproxy config; you must
have per-server settings and then a global config?

If time permits, and if you can share some sanitized config, I would be
very happy to look into this.

Thanks
---
Guillaume Bourque, B.Sc.,
Architect of robust technology infrastructures

> On 2016-12-16 at 10:42, Marco Corte wrote:
> 
> Hi!
> 
> I use keepalived for IP management.
> 
> I use Ansible on another host to deploy the configuration on the haproxy 
> nodes.
> This setup gives me better control over the configuration: it is split
> into several files on the Ansible host, but assembled into a single config
> file on the nodes.
> This also gives the opportunity to deploy the configuration on one node only.
> On the Ansible host, the configuration changes are tracked with git.
> 
> I also considered an automatic replication of the config between the nodes
> but... I did not like the idea.
> 
> 
> .marcoc
> 



Re: HAProxy clustering

2016-12-16 Thread Neil - HAProxy List
Stephan,

I'm curious...

Why would you want the inactive loadbal not to check the services?

If you really really did want that, you could do something horrid like
telling keepalived to block access to the backends with iptables when it
does not own the service IP
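
Something like this, say (interface, VRID, addresses and the
block-backends.sh helper are all made up; the helper would insert or
remove an iptables rule dropping traffic to the backend subnet):

    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.0.2.100/24
        }
        # hypothetical helper: add the blocking rule when we lose the VIP,
        # remove it when we take the VIP over
        notify_backup "/usr/local/bin/block-backends.sh add"
        notify_master "/usr/local/bin/block-backends.sh del"
    }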

but why? Your health checks should be fairly lightweight?

Neil


On 16 Dec 2016 15:44, "Marco Corte"  wrote:

> Hi!
>
> I use keepalived for IP management.
>
> I use Ansible on another host to deploy the configuration on the haproxy
> nodes.
> This setup gives me better control on the configuration: it is split in
> several files on the Ansible host, but assembled to a single config file on
> the nodes.
> This gives also the opportunity to deploy the configuration on one node
> only.
> On the Ansible host, the configuration changes are tracked with git.
>
> I also considered an automatic replication of the config, between the
> nodes but... I did not like the idea.
>
>
> .marcoc
>
>


Fw: Amazon Business - Guaranteed Results

2016-12-16 Thread Brandon

Hello,

If you've ever thought of starting a business on Amazon, we can help you.

We're based in San Diego and we've helped more than 100 sellers reach an
income of $30k to $80k per month within just a few months.

Even if you're completely new to online business, you can earn $10k per
month within 60 days. It's our guarantee.

Call Us Toll Free for More Info (855) 271-0184

Best,

Brandon





Re: HAProxy clustering

2016-12-16 Thread Marco Corte

Hi!

I use keepalived for IP management.

I use Ansible on another host to deploy the configuration on the haproxy 
nodes.
This setup gives me better control over the configuration: it is split
into several files on the Ansible host, but assembled into a single
config file on the nodes.
This also gives the opportunity to deploy the configuration on one node
only.
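
The assembly step is essentially a single task; a rough sketch (fragment
path, destination and handler name are made up):

    - name: Assemble haproxy configuration from fragments
      assemble:
        src: files/haproxy.d/
        dest: /etc/haproxy/haproxy.cfg
        validate: haproxy -c -f %s
      notify: reload haproxy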

On the Ansible host, the configuration changes are tracked with git.

I also considered an automatic replication of the config between the
nodes but... I did not like the idea.



.marcoc



Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
I would be interested in seeing the Ansible playbook, if it's sanitized?

On Fri, Dec 16, 2016 at 10:19 AM, Michel blanc wrote:
> On 16/12/2016 at 16:08, Jeff Palmer wrote:
>
>>> Hi
>>> I would like to know what is the best way to have multiple instances of
>>> haproxy and have or share the same configuration file between these
>>> instances.
>
>
>> If you find a solution to the health checks from unused instances, let us
>> know!
>
> Hi,
>
> Here I use pacemaker+corosync and 2 VIPs (+ round-robin DNS), so all
> haproxy instances are active. In case of failure, the failed VIP is "moved"
> to the remaining instance (which then holds the 2 VIPs).
>
> The configuration is deployed using ansible.
>
>
> M
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread ge...@riseup.net
On 16-12-16 16:19:09, Michel blanc wrote:
> Here I use pacemaker+corosync and 2 VIPs (+ round-robin DNS), so all
> haproxy instances are active. In case of failure, the failed VIP is
> "moved" to the remaining instance (which then holds the 2 VIPs).

Doing this as well. Also, pacemaker/corosync enables the use of STONITH
/ fencing, which is critical if doing HA.

Cheers,
Georg




Re: HAProxy clustering

2016-12-16 Thread Michel blanc
On 16/12/2016 at 16:08, Jeff Palmer wrote:

>> Hi
>> I would like to know what is the best way to have multiple instances of
>> haproxy and have or share the same configuration file between these
>> instances.


> If you find a solution to the health checks from unused instances, let us
> know!

Hi,

Here I use pacemaker+corosync and 2 VIPs (+ round-robin DNS), so all
haproxy instances are active. In case of failure, the failed VIP is "moved"
to the remaining instance (which then holds the 2 VIPs).
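
With pcs, the VIP part boils down to something like this (resource names,
addresses and node names are made up):

    pcs resource create vip1 ocf:heartbeat:IPaddr2 ip=192.0.2.11 cidr_netmask=24
    pcs resource create vip2 ocf:heartbeat:IPaddr2 ip=192.0.2.12 cidr_netmask=24
    # prefer one VIP per node while both nodes are up
    pcs constraint location vip1 prefers lb1=100
    pcs constraint location vip2 prefers lb2=100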

The configuration is deployed using ansible.


M



Re: HAProxy clustering

2016-12-16 Thread Jeff Palmer
If you find a solution to the health checks from unused instances, let us know!



On Fri, Dec 16, 2016 at 10:05 AM, Stephan Müller wrote:
>
>
> On 16.12.2016 14:58, shouldbeq931 wrote:
>>
>>
>>
>>> On 16 Dec 2016, at 13:22, Allan Moraes  wrote:
>>>
>>> Hi
>>> I would like to know what is the best way to have multiple instances of
>>> haproxy and have or share the same configuration file between these
>>> instances.
>>
>>
>> I use keepalived to present clustered addresses, and incrond with unison
>> to keep configs in sync.
>>
>> I'm quite sure there are better methods :-)
>
>
> I also use keepalived to float IPs around. Google tells you this setup is
> quite common. For me it works very well.
>
> Currently I am looking for a method to prevent the unused haproxys from
> doing health checks. I'll check "incrond with unison", thanks for the
> pointer.
>
>  ~stephan
>



-- 
Jeff Palmer
https://PalmerIT.net



Re: HAProxy clustering

2016-12-16 Thread Stephan Müller

On 16.12.2016 14:58, shouldbeq931 wrote:
>
>> On 16 Dec 2016, at 13:22, Allan Moraes wrote:
>>
>> Hi
>> I would like to know what is the best way to have multiple instances of
>> haproxy and have or share the same configuration file between these
>> instances.
>
> I use keepalived to present clustered addresses, and incrond with unison
> to keep configs in sync.
>
> I'm quite sure there are better methods :-)

I also use keepalived to float IPs around. Google tells you this setup is
quite common. For me it works very well.


Currently I am looking for a method to prevent the unused haproxy
instances from doing health checks. I'll check "incrond with unison",
thanks for the pointer.


 ~stephan



Re: HAProxy clustering

2016-12-16 Thread shouldbeq931


> On 16 Dec 2016, at 13:22, Allan Moraes  wrote:
> 
> Hi
> I would like to know what is the best way to have multiple instances of 
> haproxy and have or share the same configuration file between these instances.

I use keepalived to present clustered addresses, and incrond with unison to 
keep configs in sync.

I'm quite sure there are better methods :-)

Cheers


HAProxy clustering

2016-12-16 Thread Allan Moraes
Hi
I would like to know what is the best way to have multiple instances of
haproxy and have or share the same configuration file between these
instances.


Re: [ANNOUNCE] haproxy-1.7.1

2016-12-16 Thread Igor Pav
Cool, so even the TLS 1.3 0-RTT feature requires no changes?

On Fri, Dec 16, 2016 at 3:03 AM, Lukas Tribus  wrote:
> Hi Igor,
>
>
> Am 14.12.2016 um 20:47 schrieb Igor Pav:
>>
>> Hi Lukas, in fact, openssl already has early TLS 1.3 support in dev; it
>> will be released in 1.1.1, and BoringSSL supports TLSv1.3 already.
>
>
> That's nice, and in fact since 1.1.1 will be API-compatible with 1.1.0 [1]
> *and* support TLS 1.3 (or whatever we end up calling it [2]), this
> shouldn't require any changes in haproxy at all.
>
>
>
> [1] https://www.openssl.org/blog/blog/2016/10/24/f2f-roadmap/
> [2] https://www.ietf.org/mail-archive/web/tls/current/msg21888.html