Re: HAProxy doesn't respect the `hold valid 1s` setting

2016-11-10 Thread Tao Wang
Can anyone help fix this bug? I reported it here a long time ago.

On 14 August 2016 at 15:06, Tao Wang  wrote:
> Hi,
>
> As issue reporting on the GitHub haproxy repo is disabled, I am transferring
> the issue here.
>
> I created a Github repo for this issue,
> https://github.com/twang2218/haproxy-dns-issue
>
> All the information is there. Here is the issue description.
>
> # The issue
>
> HAProxy doesn't respect the `hold valid 1s` setting, so after
> `docker-compose scale app=10` the reverse proxy will not distribute
> traffic to the other `app` containers, only to the originally resolved IP
> address. [haproxy/haproxy#74](https://github.com/haproxy/haproxy/issues/74)
>
> # Setup
>
> Here is a Docker Compose project to reproduce the issue.
>
> 3 services are defined in the `docker-compose.yml`.
>
> ```yaml
> version: '2'
> services:
>   haproxy:
>     image: haproxy:latest
>     volumes:
>       - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
>     ports:
>       - "80:80"
>     depends_on:
>       - app
>       - syslog
>   app:
>     image: node:latest
>     volumes:
>       - ./index.js:/app/index.js
>     command: node /app/index.js
>   syslog:
>     image: bobrik/syslog-ng:latest
>     volumes:
>       - ./log:/var/log/syslog-ng
> ```
>
> `haproxy` is an HAProxy instance acting as the reverse proxy. Its
> `haproxy.cfg` is the following:
>
> ```
> global
>     log syslog daemon
>
> defaults
>     timeout connect 5000ms
>     timeout client 5ms
>     timeout server 5ms
>
> resolvers docker_dns
>     nameserver dns "127.0.0.11:53"
>     timeout retry   1s
>     hold valid 1s
>
> listen tcp_proxy
>     mode tcp
>     bind :80
>     option tcplog
>     log global
>     server app app:8000 check resolvers docker_dns resolve-prefer ipv4
> ```
>
> `app` is just a simple Node.js web service which returns the client IP
> along with the IP and hostname of the serving container, so it is easy to
> know which server served the request.
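>
> The actual `index.js` is part of this project; a minimal sketch of such a
> service (for illustration only, not necessarily the exact file) looks like
> this:
>
> ```js
> // Minimal echo service: reports the client address plus the serving
> // container's hostname and IPv4 addresses, so every response identifies
> // which backend handled it.
> const http = require('http');
> const os = require('os');
>
> function localIPs() {
>   const ifaces = os.networkInterfaces();
>   const ips = [];
>   Object.keys(ifaces).forEach(function (name) {
>     ifaces[name].forEach(function (iface) {
>       if (iface.family === 'IPv4' && !iface.internal) ips.push(iface.address);
>     });
>   });
>   return ips;
> }
>
> http.createServer(function (req, res) {
>   res.end(req.connection.remoteAddress + ' → ' + os.hostname() +
>           ' @ [' + localIPs().join(', ') + ']\n');
> }).listen(8000);
> ```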
>
> # Reproduce the problem
>
> Start up the stack.
>
> ```bash
> $ docker-compose up -d
> Creating network "haproxyissue74_default" with the default driver
> Creating haproxyissue74_app_1
> Creating haproxyissue74_syslog_1
> Creating haproxyissue74_haproxy_1
> ```
>
> Scale the `app` to `10`.
>
> ```bash
> $ docker-compose scale app=10
> Creating and starting haproxyissue74_app_2 ... done
> Creating and starting haproxyissue74_app_3 ... done
> Creating and starting haproxyissue74_app_4 ... done
> Creating and starting haproxyissue74_app_5 ... done
> Creating and starting haproxyissue74_app_6 ... done
> Creating and starting haproxyissue74_app_7 ... done
> Creating and starting haproxyissue74_app_8 ... done
> Creating and starting haproxyissue74_app_9 ... done
> Creating and starting haproxyissue74_app_10 ... done
> ```
>
> And check the load-balancing result.
>
> ```bash
> $ curl http://localhost/
> :::172.30.0.4 → 7d10f99e3d60 @ [172.30.0.2]%
> $ curl http://localhost/
> :::172.30.0.4 → 7d10f99e3d60 @ [172.30.0.2]%
> $ curl http://localhost/
> :::172.30.0.4 → 7d10f99e3d60 @ [172.30.0.2]%
> $ curl http://localhost/
> :::172.30.0.4 → 7d10f99e3d60 @ [172.30.0.2]%
> $
> ```
>
> If the `hold valid 1s` setting worked, HAProxy should resolve the `app`
> hostname to different IP addresses over time; however, it does not appear
> to work.
>
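> As a side note, one way to see which address HAProxy itself currently holds
> for the server is the runtime stats socket. This is not part of the
> reproduction above; it assumes a `stats socket` line is added to the
> `global` section and that `socat` is installed in the container:
>
> ```bash
> # assumed addition to haproxy.cfg:
> #   global
> #       stats socket /var/run/haproxy.sock level admin
> root@a2243d2c5e7c:/# apt-get update && apt-get install -y socat
> root@a2243d2c5e7c:/# echo "show stat resolvers docker_dns" | socat stdio /var/run/haproxy.sock
> ```
>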
> To verify that the DNS resolution itself is randomized, we can query the
> `app` name from within the `haproxy` container.
>
> Enter the container
>
> ```bash
> $ docker-compose exec haproxy bash
> root@a2243d2c5e7c:/#
> ```
>
> Install `dnsutils` for the `nslookup` and `dig` commands.
>
> ```bash
> root@a2243d2c5e7c:/# apt-get update && apt-get install -y dnsutils
> Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
> Get:2 http://security.debian.org jessie/updates/main amd64 Packages [366 kB]
> ...
> Setting up rename (0.20-3) ...
> update-alternatives: using /usr/bin/file-rename to provide
> /usr/bin/rename (rename) in auto mode
> Setting up xml-core (0.13+nmu2) ...
> Processing triggers for libc-bin (2.19-18+deb8u4) ...
> Processing triggers for sgml-base (1.26+nmu4) ...
> root@a2243d2c5e7c:/#
> ```
>
> And then, we can use the `dig` command to check the DNS query result.
>
> ```bash
> root@a2243d2c5e7c:/# dig app | grep app
> ; <<>> DiG 9.9.5-9+deb8u6-Debian <<>> app
> ;app. IN A
> app. 600 IN A 172.30.0.5
> app. 600 IN A 172.30.0.9
> app. 600 IN A 172.30.0.7
> app. 600 IN A 172.30.0.12
> app. 600 IN A 172.30.0.2
> app. 600 IN A 172.30.0.11
> app. 600 IN A 172.30.0.8
> app. 600 IN A 172.30.0.10
> app. 600 IN A 172.30.0.13
> app. 600 IN A 172.30.0.6
> root@a2243d2c5e7c:/# dig app | grep app
> ; <<>> DiG 9.9.5-9+deb8u6-Debian <<>> app
> ;app. IN A
> app. 600 IN A 172.30.0.6
> app. 600 IN A 172.30.0.10
> app. 600 IN A 172.30.0.13
> app. 600 IN A 172.30.0.11
> app. 600 IN A 172.30.0.7
> app. 600 IN A 172.30.0.2
> app. 600 IN A 172.30.0.5
> app. 600 IN A 172.30.0.9
> app. 600 IN A 172.30.0.12
> 

Re: [ANNOUNCE] haproxy-1.7-dev6

2016-11-10 Thread Aleksandar Lazic

Hi Willy.

Thank you as always for the detailed answer.

On 10-11-2016 06:51, Willy Tarreau wrote:

Hi Aleks,

On Thu, Nov 10, 2016 at 12:52:22AM +0100, Aleksandar Lazic wrote:

> http://www.haproxy.org/download/1.7/doc/SPOE.txt

I have read the doc. Very interesting.

If I understand this sentence correctly, it is currently only possible to
check some headers, right?



###
Actually, for now, the SPOE can offload the processing before "tcp-request
content", "tcp-response content", "http-request" and "http-response" rules.
###


In theory, since you pass sample fetch results as arguments, it should
be possible to pass anything. For example, you can already parse the
beginning of a body in http-request if you have enabled the correct
option to wait for the body (http-buffer-request or something like
this). So in theory you could even pass a part of it right now.
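
For instance, something along these lines (an untested sketch, names made up)
already lets an http-request rule act on the buffered request body, here only
its length via req.body_len:

    frontend www
        bind :80
        mode http
        option http-buffer-request   # wait for the request body before evaluating rules
        http-request deny if { req.body_len gt 1048576 }
        default_backend apps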


Interesting idea.

So a header-only WAF is now "easily" possible instead of the full stack with
mod_security.
http://blog.haproxy.com/2012/10/12/scalable-waf-protection-with-haproxy-and-apache-with-modsecurity/


In theory yes. And that's one of the goals. My initial intent regarding
this protocol was to be able to delegate some heavy processing outside
of the process, to do it for blocking stuff (eg: ldap libs are always
at least a little bit blocking), as well as for anything requiring
threads.

Then I realized that it would solve other problems. For example we have
3 device detection engines, none of them is ever built in by default
because they have external dependencies, so users who want to use them
have to rebuild haproxy and will not be able to use their distro packages
anymore. Such components could possibly be moved to external agents.

Another point is the WAF. People have a love-and-hate relationship with their
WAF, whatever it is. When you deploy your first WAF, you start by loving
it because you see in the logs that it blocks a lot of stuff. Then your
customers complain about breakage and you have to tune it and find a
tradeoff between protection and compatibility. And one day they get
hacked and they declare this WAF useless and you want to change
everything. Having the WAF built into haproxy would mean that users
would have to switch to another LB just to use a different WAF! With
SPOP we can imagine having various WAF implementations in external
processes that users can choose from.


Well, nothing to add!


A last motive is stability. At haptech we have implemented support for
loadable modules (and you know how much I don't want to see this in our
version here). Developing these modules requires extreme care and lots
of skills regarding haproxy's internals. We currently have a few such
modules, providing nice improvements but whose usage will be debatable
depending on users. Thus supporting modules is interesting because not
everyone is forced to load some code they don't necessarily need or
want, and it saves us from having to maintain patches. However we have
to enforce a very tight check on the internal API to ensure a module is
not loaded on a different version, which means that users have to update
their modules at the same time they update the haproxy executable. But
despite this there's always the risk that a bug in some experimental
code we put there corrupts the whole process and does nasty stuff
(random crashes, corrupted responses, never-ending connections, etc).
With an external process it's much easier for anyone to develop
experimental code without taking any risk for the main process. And if
something crashes, you know on which side it crashes thus you can guess
why. And typically a WAF is not something I would like to see implemented
as a module; I would fear support escalations for crashed processes!

So you see, there are plenty of good reasons for being able to move some
content processing outside of haproxy, and these reasons have driven the
protocol design. The first implementation focuses on having something
usable first, even if not optimal (eg: we didn't implement pipelining of
requests yet, but given there are connection pools it's not a big deal).


Some attacks are also in the POST body; I assume this will come in the
future after some good tests.


Yes, that's planned. You should already be able to pass a full buffer of
data using req.body (not tested). This is even why the protocol supports
fragmented payloads. It's more complicated to implement though. We could
even imagine doing some compression outside (eg: sdch, snappy, or
whatever). In 1.7, the compression was moved to filters so it's quite
possible to move it to an external process as well.

We'll be very interested in getting feedback such as "I tried to implement
this and failed". The protocol currently looks nice and extensible. But I
know from experience that you can plan for everything and still the first
feature someone requests cannot be fulfilled and will require a protocol
update :-)


I will start to create a Dockerfile for 1.7 

Re: problem building haproxy 1.6.9 on ar71xx

2016-11-10 Thread Lukas Tribus

Hi Thomas,


On 10.11.2016 at 22:20, Thomas Heil wrote:



Also see:
https://www.openssl.org/docs/man1.1.0/crypto/ERR_remove_state.html


Hmm, did I read correctly that this function does nothing?


It does nothing in OpenSSL 1.1.0, as it isn't required in that branch.
However, it is required in earlier OpenSSL branches like 1.0.1 or 1.0.2.
So in your case (1.0.2) it's badly needed.









OpenSSL version is 1.0.2j.

I assume this is a non-standard build, maybe with the no-deprecated
option or something?


Could you define "standard build"? I am cross-compiling with Lede / OpenWrt.



Lede has OPENSSL_WITH_DEPRECATED menuconfig [1], which defaults to yes 
(so a default LEDE build should be fine).


Can you confirm your config has OPENSSL_WITH_DEPRECATED = y?

Also can you post the output of "openssl version -a" please? That would 
have to come from the executable though; so on the destination machine 
in your cross-compile situation.





Regards,
Lukas


[1] 
https://github.com/lede-project/source/commit/db11695aa66ac49b8a52f97059697f52b6a3a893







Re: problem building haproxy 1.6.9 on ar71xx

2016-11-10 Thread Thomas Heil
Hey,
On 10.11.2016 19:13, Lukas Tribus wrote:
> Hi,
> 
> 
> On 10.11.2016 at 18:27, Thomas Heil wrote:
>> Hi,
>>
>> I am facing a problem when building haproxy 1.6.9 with SSL for mips_24kc
>> with musl 1.1.15.
>> OpenSSL was building fine, but the function "ERR_remove_state(0)" does
>> not exist, while ERR_remove_thread_state(0) is available.
>>
>> So does anybody know what the difference between them is?
> 
> Also see:
> https://www.openssl.org/docs/man1.1.0/crypto/ERR_remove_state.html
> 
Hmm, did I read correctly that this function does nothing?
> 
> ERR_remove_state() was deprecated in OpenSSL 1.0.0
> ERR_remove_thread_state() was deprecated in OpenSSL 1.1.0
> 
> 
> By just switching from one call to the other we break 0.9.8
> compatibility, which is kind-of OK for haproxy 1.7 but not at all for
> haproxy 1.6.
> 

ah okay.

> 
> 
>> OpenSSL version is 1.0.2j.
> 
> I assume this is a non-standard build, maybe with the no-deprecated
> option or something?
> 

Could you define "standard build"? I am cross-compiling with Lede / OpenWrt.

> Deprecated calls are still supposed to work; and OpenSSL 1.0.2 is widely
> used.

I don't think so.

> In fact, I assume haproxy 1.7-dev6 with openssl 1.1.0 support still
> builds fine even with ERR_remove_state(), otherwise Dirkjan would have
> patched this already.
> 

So maybe it's a cross-compiling issue. I just want to be sure.

> 
> 
> Lukas
> 
> 
cheers
thomas
> 
> 
> 
> 



Re: Haproxy subdomain going to wrong backend

2016-11-10 Thread Bryan Talbot

> On Nov 9, 2016, at 4:45 AM, Azam Mohammed wrote:
> 
> Also, we have the exact same HAProxy config in the QA and UAT environments
> and it works fine.
> 
> QA Environment:
> Haproxy Version: HA-Proxy version 1.5.4
> OS Version: CentOS release 6.3 (Final)
> 
> UAT Environment:
> Haproxy Version: HA-Proxy version 1.3.26
> OS Version: CentOS release 5.6 (Final)
> 

I didn’t notice before, but both of these versions are quite old. You should 
consider upgrading them when possible. I’m sure there are many critical 
security issues that have been fixed in the years since these were released.

-Bryan




Re: Haproxy subdomain going to wrong backend

2016-11-10 Thread Bryan Talbot
… please include the list in your responses too


> On Nov 10, 2016, at 4:09 AM, Azam Mohammed wrote:
> 
> Hi Bryan,
> 
> Thanks for your reply.
> 
> Putting "use_backend test" first in the HAProxy config worked fine.
> 
> But I have a few more questions based on the solution.
> 
> As you said, we get this issue because both the url_subdomain and url_test
> acls match the string 'subdomain.domain.com'. But the full URL is specified
> in the ACL section, so why is the url_subdomain acl catching requests for
> test.subdomain.domain.com? I believe url_subdomain should always match
> subdomain.domain.com only.
> 

hdr_dom matches domains (strings terminated by a dot (.) or whitespace). Since
you seem to be expecting an exact string match, just use hdr.

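For example, something like this (a sketch reusing the hostnames from your
config; untested):

    acl  url_subdomain   hdr(host)   -i  subdomain.domain.com
    acl  url_test        hdr(host)   -i  test.subdomain.domain.com

    use_backend test        if url_test
    use_backend subdomain   if url_subdomain

With hdr, the Host header has to match the whole string, so
test.subdomain.domain.com can no longer fall into the subdomain backend, and
the rule order only matters as a safety net.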

> If this has anything to do with the levels of subdomains, is the rule
> precedence mentioned in the HAProxy documentation? Could you please point me
> to where to look in the HAProxy documentation for this?
> 


The documentation is quite extensive and you can find specifics about req.hdr 
at https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-req.hdr 


-Bryan


> --
> 
> Thanks & Regards, 
>  
> Azam Sheikh Mohammed
> IT Network & System Admin
>  
> D a n a t
> Al-Shatha Tower Office 1305, Dubai Internet City | P.O.Box: 502113, Dubai, 
> UAE | Tel: +971 4 368 8468 Ext. 133 | Fax:  +971 4 368 8232 | Mobile:  +971 
> 55 498 8089 | Email: a...@danatev.com 
> On Thu, Nov 10, 2016 at 12:46 AM, Bryan Talbot wrote:
> 
>> On Nov 9, 2016, at 4:45 AM, Azam Mohammed wrote:
>> 
>> Hello,
>> 
>>  
>>  
>> acl  url_subdomain   hdr_dom(host)   -i  subdomain.domain.com
>> acl  url_test        hdr_dom(host)   -i  test.subdomain.domain.com
>> 
>> use_backend subdomain  if url_subdomain
>> use_backend test       if url_test
>> 
>>  
>>  
>> Both subdomains serve different web pages. Now if we enter
>> test.subdomain.domain.com in the browser, it goes to the
>> subdomain.domain.com backend. We have no idea what is causing this issue.
>> 
>>  
> 
> 
> Both the url_subdomain and url_test acls match the string
> 'subdomain.domain.com'.
> 
> Make the ACL matches more specific, or put the "use_backend test" rule first
> since it is already more specific.
> 
> -Bryan
> 
> 
> 



Re: problem building haproxy 1.6.9 on ar71xx

2016-11-10 Thread Lukas Tribus

Hi,


On 10.11.2016 at 18:27, Thomas Heil wrote:

Hi,

I am facing a problem when building haproxy 1.6.9 with SSL for mips_24kc
with musl 1.1.15.
OpenSSL was building fine, but the function "ERR_remove_state(0)" does
not exist, while ERR_remove_thread_state(0) is available.

So does anybody know what the difference between them is?


Also see:
https://www.openssl.org/docs/man1.1.0/crypto/ERR_remove_state.html


ERR_remove_state() was deprecated in OpenSSL 1.0.0
ERR_remove_thread_state() was deprecated in OpenSSL 1.1.0


By just switching from one call to the other we break 0.9.8 
compatibility, which is kind-of OK for haproxy 1.7 but not at all for 
haproxy 1.6.
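
To illustrate the point (just a sketch, not what haproxy actually ships), a
version guard along these lines keeps all branches happy:

```c
#include <openssl/err.h>
#include <openssl/opensslv.h>

/* Sketch of a version-guarded per-thread error-state cleanup. */
void cleanup_thread_error_state(void)
{
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
    /* OpenSSL >= 1.1.0: per-thread error state is freed automatically. */
#elif OPENSSL_VERSION_NUMBER >= 0x10000000L
    ERR_remove_thread_state(NULL);   /* 1.0.0 - 1.0.2 */
#else
    ERR_remove_state(0);             /* 0.9.8 and older */
#endif
}
```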





OpenSSL version is 1.0.2j.


I assume this is a non-standard build, maybe with the no-deprecated 
option or something?


Deprecated calls are still supposed to work; and OpenSSL 1.0.2 is widely 
used.
In fact, I assume haproxy 1.7-dev6 with openssl 1.1.0 support still 
builds fine even with ERR_remove_state(), otherwise Dirkjan would have 
patched this already.




Lukas












Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread ge...@riseup.net
Hi,

On 16-11-10 16:56:33, Willy Tarreau wrote:
> I removed you from the To in this response, but just as a hint we
> generally recommend to keep people CCed since most of us subscribed
> to lists have filters to automatically place them in the right box,
> and some people may participate without being subscribed. 

Yeah, I'm using filtering as well, but this doesn't deal with getting
the same mail(s) multiple times.

> On most lists, when people don't want to be automatically CCed on
> replies, they simply set their Reply-To header to the list's address.

Thanks, wasn't aware of this. I did so now.

> OK but just so that there's no misunderstanding, next release will be
> in approx one year. However if the patch is merged early, it will very
> likely apply well to the stable release meaning you can easily add it
> to your own packages.

Ah, I see, wasn't aware of this. Well then...this is fine as well.. :)

Cheers,
Georg


signature.asc
Description: Digital signature


Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Willy Tarreau
On Thu, Nov 10, 2016 at 04:30:57PM +0100, ge...@riseup.net wrote:
> (Please don't Cc: me, I'm subscribed to the list.)

I removed you from the To in this response, but just as a hint we
generally recommend to keep people CCed since most of us subscribed
to lists have filters to automatically place them in the right box,
and some people may participate without being subscribed. On most
lists, when people don't want to be automatically CCed on replies,
they simply set their Reply-To header to the list's address.

> Even if I'm not Simon, I'll say a word, hope that's okay, because I've
> dug out this old thread: It's fine for me if it will go into 1.7 or
> 1.8. I don't need this within the next two weeks, but I'm looking forward to
> using it. If it will take another four, six or eight weeks, this is
> completely fine with me.

OK but just so that there's no misunderstanding, next release will be in
approx one year. However if the patch is merged early, it will very likely
apply well to the stable release meaning you can easily add it to your own
packages.

Cheers,
Willy



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Willy Tarreau
Hi Simon!

On Thu, Nov 10, 2016 at 04:27:15PM +0100, Simon Horman wrote:
> My preference is to take things calmly as TBH I am only just getting
> started on this and I think the schema could take a little time to get
> a consensus on.

I totally agree with you. I think the most difficult thing is not to
run over a few arrays and dump them but manage to make everyone agree
on the schema. And that will take more than a few days I guess. Anyway
I'm fine with being proven wrong :-)

Cheers,
Willy




Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread ge...@riseup.net
(Please don't Cc: me, I'm subscribed to the list.)

On 16-11-10 16:12:31, Willy Tarreau wrote:
> That's cool!
> 
> The only thing is that I don't want to delay the release only for this,
> and at the same time I'm pretty sure it's possible to do something which
> will not impact existing code within a reasonable time frame. I just
> don't know how long it takes to make everyone agree on the schema. My
> intent is to release 1.7 by the end of next week *if we don't discover
> new scary bugs*. So if you think it's doable by then, that's fine. Or
> if you want to buy more time, you need to discover a big bug which will
> keep me busy and cause the release to be delayed ;-) Otherwise I think
> it will have to be in 1.8.
> 
> Note, to be clear, if many people insist on having this, we don't have an
> emergency to release by the end of next week, but it's just a policy we
> cannot pursue forever, at least by respect for those who were pressured
> to send their stuff in time. So I think that we can negociate one extra
> week if we're sure to have something completed, but only if people here
> insist on having it in 1.7.
> 
> Thus the first one who has a word to say is obviously Simon : if you
> think that even two weeks are not achievable, let's calmly postpone
> and avoid any stress.

Even if I'm not Simon, I'll say a word, hope that's okay, because I've
dug out this old thread: It's fine for me if it will go into 1.7 or
1.8. I don't need this within the next two weeks, but I'm looking forward to
using it. If it will take another four, six or eight weeks, this is
completely fine with me.

All the best,
Georg


signature.asc
Description: Digital signature


Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Simon Horman
On Thu, Nov 10, 2016 at 04:12:31PM +0100, Willy Tarreau wrote:
> Hi Malcolm,
> 
> On Thu, Nov 10, 2016 at 12:53:13PM +, Malcolm Turnbull wrote:
> > Georg,
> > 
> > That's a timely reminder thanks:
> > I just had another chat with Simon Horman who has kindly offered to
> > take a look at this again.
> 
> That's cool!
> 
> The only thing is that I don't want to delay the release only for this,
> and at the same time I'm pretty sure it's possible to do something which
> will not impact existing code within a reasonable time frame. I just
> don't know how long it takes to make everyone agree on the schema. My
> intent is to release 1.7 by the end of next week *if we don't discover
> new scary bugs*. So if you think it's doable by then, that's fine. Or
> if you want to buy more time, you need to discover a big bug which will
> keep me busy and cause the release to be delayed ;-) Otherwise I think
> it will have to be in 1.8.
> 
> Note, to be clear, if many people insist on having this, we don't have an
> emergency to release by the end of next week, but it's just a policy we
> cannot pursue forever, at least out of respect for those who were pressured
> to send their stuff in time. So I think that we can negotiate one extra
> week if we're sure to have something completed, but only if people here
> insist on having it in 1.7.
> 
> Thus the first one who has a word to say is obviously Simon : if you
> think that even two weeks are not achievable, let's calmly postpone and
> avoid any stress.

My preference is to take things calmly as TBH I am only just getting
started on this and I think the schema could take a little time to get
a consensus on.



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Willy Tarreau
Hi Malcolm,

On Thu, Nov 10, 2016 at 12:53:13PM +, Malcolm Turnbull wrote:
> Georg,
> 
> That's a timely reminder thanks:
> I just had another chat with Simon Horman who has kindly offered to
> take a look at this again.

That's cool!

The only thing is that I don't want to delay the release only for this,
and at the same time I'm pretty sure it's possible to do something which
will not impact existing code within a reasonable time frame. I just
don't know how long it takes to make everyone agree on the schema. My
intent is to release 1.7 by the end of next week *if we don't discover
new scary bugs*. So if you think it's doable by then, that's fine. Or
if you want to buy more time, you need to discover a big bug which will
keep me busy and cause the release to be delayed ;-) Otherwise I think
it will have to be in 1.8.

Note, to be clear, if many people insist on having this, we don't have an
emergency to release by the end of next week, but it's just a policy we
cannot pursue forever, at least out of respect for those who were pressured
to send their stuff in time. So I think that we can negotiate one extra
week if we're sure to have something completed, but only if people here
insist on having it in 1.7.

Thus the first one who has a word to say is obviously Simon : if you
think that even two weeks are not achievable, let's calmly postpone and
avoid any stress.

Thanks,
Willy



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Dave Cottlehuber
On Thu, 10 Nov 2016, at 13:53, Malcolm Turnbull wrote:
> Georg,
> 
> That's a timely reminder thanks:
> I just had another chat with Simon Horman who has kindly offered to
> take a look at this again.

Sounds great!

I'm very interested in logging this continually via a chrooted unix socket,
into both riemann & rsyslog and into graylog/splunk. I'm happy to help test
and contribute documentation as well.

I was planning to use riemann-tools with csv format
 https://github.com/riemann/riemann-tools/blob/master/bin/riemann-haproxy 

A+
Dave



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Malcolm Turnbull
Georg,

That's a timely reminder thanks:
I just had another chat with Simon Horman who has kindly offered to
take a look at this again.




On 10 November 2016 at 10:54, ge...@riseup.net  wrote:
> Hi all,
>
> On 16-07-05 10:05:13, Mark Brookes wrote:
>> I wondered if we could start a discussion about the possibility of
>> having the stats socket return stats data in JSON format.
>
> After the discussion we had in July, I'm wondering what's the current
> status regarding this topic?
>
> Thanks and all the best,
> Georg



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: HAProxy 1.5 vs 1.6

2016-11-10 Thread Markus Rietzler
On 10.11.16 at 10:24, Pavlos Parissis wrote:
> On 09/11/2016 09:20 PM, Steven Le Roux wrote:
>> Hi a first good coverage for a comparison between 1.5 and 1.6 would be
>> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
>>
>> 1.6 is considered perfectly stable and hasn't seen any maintenance
>> release for more than 2 months. It's being widely used so I would be
>> confident with it. It brings many improvements and features (libslz,
>> lua, server states checkpointing,...) over 1.5
>>
> 
> Same story here. 1.6 is a rock solid release and works fine.
> 
> Cheers,
> Pavlos
> 
> 
We have been using 1.6.x for quite a long time and it runs perfectly!

Markus



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread ge...@riseup.net
Hi all,

On 16-07-05 10:05:13, Mark Brookes wrote:
> I wondered if we could start a discussion about the possibility of
> having the stats socket return stats data in JSON format.

After the discussion we had in July, I'm wondering what's the current
status regarding this topic?

Thanks and all the best,
Georg


signature.asc
Description: Digital signature


Re: Herald - Generic and extensible agent-check service for Haproxy

2016-11-10 Thread Willy Tarreau
Hello Raghu,

On Thu, Nov 10, 2016 at 01:30:05PM +0530, Raghu Udiyar wrote:
> Hello
> 
> We have developed a generic service (using python gevent) to serve as an
> agent check service for Haproxy. It's extensible via plugins for application
> feedback, and supports result caching, JSON expressions, arithmetic, regex
> matching, and fallback in case of failure.
> 
> The source is here : https://github.com/helpshift/herald
> 
> Blog post here :
> https://engineering.helpshift.com/2016/herald-haproxy-loadfeedback-agent/
> 
> Hope this is useful for the haproxy community.

This is excellent, thank you for contributing this!

We used to work on something more or less similar a few years ago but
missed the time needed to complete it. We wanted to combine multiple
checks and report aggregated statuses. The haproxy check code was not
ready by then so we had to postpone. It's true that the agent is much
more versatile for such use cases now.

I have a few questions: did you miss anything from haproxy when
doing this? For example, did you have to work around the impossibility
of doing something via the agent just because of the agent protocol or
because of some validity checks performed by haproxy? I'm asking
because if we have to perform very minor tweaks, it's better to do them before
the release. Do you think you could benefit from "agent-send" so that
the same agent is used for multiple servers, where haproxy would then
indicate through the connection which server it is connecting for? If so,
would an automatic string such as "agent-send-name" sending the backend
and the server name be a useful improvement for this?
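
(For reference, the haproxy-side wiring I have in mind is roughly the sketch
below; backend name, address and port are made up:)

    backend app
        server app1 10.0.0.11:8000 check agent-check agent-port 5555 agent-inter 2s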

Do you want me to add a link to the main haproxy page, and if so, to
what location?

Thanks,
Willy



Re: HAProxy 1.5 vs 1.6

2016-11-10 Thread Pavlos Parissis
On 09/11/2016 09:20 PM, Steven Le Roux wrote:
> Hi a first good coverage for a comparison between 1.5 and 1.6 would be
> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
> 
> 1.6 is considered perfectly stable and hasn't seen any maintenance
> release for more than 2 months. It's being widely used so I would be
> confident with it. It brings many improvements and features (libslz,
> lua, server states checkpointing,...) over 1.5
> 

Same story here. 1.6 is a rock solid release and works fine.

Cheers,
Pavlos




signature.asc
Description: OpenPGP digital signature


Herald - Generic and extensible agent-check service for Haproxy

2016-11-10 Thread Raghu Udiyar
Hello

We have developed a generic service (using python gevent) to serve as an
agent check service for Haproxy. It's extensible via plugins for application
feedback, and supports result caching, JSON expressions, arithmetic, regex
matching, and fallback in case of failure.

The source is here : https://github.com/helpshift/herald

Blog post here :
https://engineering.helpshift.com/2016/herald-haproxy-loadfeedback-agent/

Hope this is useful for the haproxy community.

Thanks,
-- Raghu