Re: HAProxy feature request.

2020-08-27 Thread Jarno Huuskonen
Hello,

On Thu, 2020-08-27 at 13:33 +0530, Roshan Singh wrote:
> Dear HAProxy Technical Support Team,
> 
> REQUEST: HAProxy supports IPv4 Header manipulation for QoS.
> 
> ISSUE: I have been trying to pass the ToS value received from client to
> backend server for DSCP. But I can't manipulate the DSCP value.
> 
> STEPS:
> 1. Request from client: # curl HAProxy_node_IP -H 'x-tos:0x48'
> 2. Below is the log captured from wireshark on the HAProxy node.
> 3. The DSCP value should be updated to 'af21', but it only goes out in the
> HTTP header when the line below is added in the frontend: http-request
> set-header x-ipheader %[req.hdr(x-tos)]

Try http-request set-tos: 
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-http-request%20set-tos
(or http-response set-tos 
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-http-response%20set-tos
)
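
As a hedged sketch of the suggestion above (frontend name, backend name and the matched value are illustrative): ToS byte 0x48 is decimal 72 (DSCP af21), and set-tos takes a numeric value that can be guarded by a condition:

```haproxy
# Illustrative sketch: apply ToS 0x48 (decimal 72, DSCP af21) when the
# client sent "x-tos: 0x48". set-tos takes a number, not an expression,
# so each expected value needs its own guarded rule.
frontend fe_main
    bind :80
    http-request set-tos 72 if { req.hdr(x-tos) -m str 0x48 }
    default_backend be_app
```

Whether this marks packets toward the client or the server is exactly the documentation ambiguity discussed below.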

I'm not sure the documentation is correct here: it says "packets sent to the
client" for both http-request and http-response set-tos, but http-request
probably should say "packets sent to the server"?

-Jarno

> Please let me know if this feature has been already implemented or can be
> used by any third party tool.

-- 
Jarno Huuskonen


Re: check rewrite feature request

2020-01-02 Thread Aleksandar Lazic

Hi Willy.

On 02-01-2020 10:49, Willy Tarreau wrote:

Hi Aleks,

On Thu, Dec 26, 2019 at 12:11:31PM +0100, Aleksandar Lazic wrote:

>   - rewrite *all* health checks to internally use tcp-check sequences only
>     so that we don't have to maintain this horribly broken mess anymore and
>     can more easily implement new ones ;

Well, I think we will also need udp-check, for example for DNS, QUIC and
some other protocols.


These would then be DNS, QUIC, ping or whatever. There's no such thing as
a UDP check given that by default UDP doesn't respond, so there's no way to
know whether a generic service works or not. You cannot even count on ICMP
port unreachable, which can be rate-limited or filtered out. In fact TCP was
the particular case here, since it's possible to at least check that a port
is bound without knowing what protocol is spoken on top of it.


Oh yes, you are right. There should be protocol-specific tests, as we
already have for some protocols.


>   - implement centralized check definitions that can be reused in multiple
>     backends. E.g. tcp-check sequences or htx check sequences should only
>     be defined once. This implies that some of the headers will likely need
>     to support variables so that a backend may define a host header for
>     example and the checks use it.

But we already have such a possibility IMHO, it's the named defaults
section, isn't it?


tcp-checks cannot be put into defaults sections. Also, even with defaults
sections it makes sense to be able to define a few favorite checks that
are used a lot. With this said, we've already talked about named defaults
that frontends/backends could explicitly designate. It could be convenient
for example to have:

   defaults tcp
   ...

   defaults http
   ...

   defaults admin
   ...

   frontend foo
   use-defaults http

and so on. This, combined with the ability to put tcp-check sequences in
the defaults sections, could actually address a huge number of limitations.
This could also work to designate which log format to use, while I was more
thinking about a named log profile. This proves that a bit more thinking is
still needed in this area.


Okay, I think my example with 'defaults' wasn't the right one. I thought of
having check-app sections, similar to the fcgi-app and cache sections:

check-app tcp01
  ...

check-app tcp02
  ...

check-app http01
  ...

check-app http02
  ...

backend foo
  use-check http01

I think this makes more sense, right?



It would be nice to be able to reuse the feature
(tcp|http)-(request|response)
for the checks.


Maybe, maybe not, I really don't know to be honest, because it can
also add a lot of confusion compared to a send/expect model for
example.


Well, when every check-app has its own parts then maybe it is more
usable.


And you know there will be some distributed setups where the status from a
backend should be shared with different haproxy instances, maybe via the
peers protocol; this will maybe only be possible in the commercial version
;-).


This was something that I intended many years ago already, even before
we had the peers protocol, and that we've even discussed during the
protocol's design to be sure to leave provisions for such an extension,
and v2 of the protocol with shared connections made a giant step forward
in this direction. I even wrote down on paper a distributed check
algorithm that will take less time to re-design than to figure on what
sheet of paper it was :-)


;-)

But in the mean time the definition of a "server" has changed a lot. For
the LB algos, it's a position in the list of servers. For stickiness it
used to be a numeric ID (defaults to the position in the list) and is a
name now. For those who used to rely on SNMP-based monitoring it also
was this numeric ID. For some people it's the server's name (hence the
recent change). Now with service meshes it tends to move to IP:port
where the server's name is irrelevant. Users are able to adapt their
monitoring, stats and reporting to all these conditions, but when it
comes to defining what to exchange over the wire and what is
authoritative it's a completely different story!

In addition, in such new environments, it's common to see a central
service decide what server is up or down and advertise it either via
the API or the DNS, in which case checks just become very basic again.

Last but not least, in highly distributed environments you really do
not want your neighbor to tell you what server is up when it uses a
different path than you use.

So while I was initially really fond of the idea and wanted to see it
done at least for the beauty of the design, I must confess that I'm
far less impatient nowadays because I predict that it will require lots
of tunables that most users will consider unwelcome. And I really doubt
it will provide that much value in modern environments in the end.


Sounds fairly reasonable, it was just an idea.


Just my two cents,
Willy

Re: check rewrite feature request

2020-01-02 Thread Willy Tarreau
Hi Aleks,

On Thu, Dec 26, 2019 at 12:11:31PM +0100, Aleksandar Lazic wrote:
> >   - rewrite *all* health checks to internally use tcp-check sequences only
> >     so that we don't have to maintain this horribly broken mess anymore and
> >     can more easily implement new ones ;
> 
> Well, I think we will also need udp-check, for example for DNS, QUIC and some
> other protocols.

These would then be DNS, QUIC, ping or whatever. There's no such thing as
a UDP check given that by default UDP doesn't respond so there's no way to
know whether a generic service works or not. You cannot even count on ICMP
port unreachable which can be rate-limited or filtered out. In fact TCP was
the particular case here since it's possible to at least check that a port
is bound without knowing what protocol is spoken on top of it.

> >   - implement centralized check definitions that can be reused in multiple
> >     backends. E.g. tcp-check sequences or htx check sequences should only
> >     be defined once. This implies that some of the headers will likely need
> >     to support variables so that a backend may define a host header for
> >     example and the checks use it.
> 
> But we already have such a possibility IMHO, it's the named defaults
> section, isn't it?

tcp-checks cannot be put into defaults sections. Also even with defaults
sections it makes sense to be able to define a few favorite checks that
are used a lot. With this said, we've already talked about named defaults
that frontends/backends could explicitly designate. It could be convenient
for example to have :

   defaults tcp
   ...

   defaults http
   ...

   defaults admin
   ...

   frontend foo
   use-defaults http

and so on. This combined with the ability to put tcp-check sequences in
the defaults sections could actually address a huge number of limitations.
This could also work to designate which log format to use, while I was more
thinking about a named log profile. This proves that a bit more thinking is
still needed in this area.
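
For illustration, here is the kind of per-backend tcp-check sequence (a standard Redis probe, with an illustrative address) that currently has to be repeated in every backend, and that named defaults could factor out:

```haproxy
# Illustrative backend: this send/expect sequence is exactly what one
# would like to define once and reuse across backends.
backend be_redis
    option tcp-check
    tcp-check connect
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    server redis1 192.0.2.10:6379 check
```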

> It would be nice to be able to reuse the feature
> (tcp|http)-(request|response)
> for the checks.

Maybe, maybe not, I really don't know to be honest, because it can
also add a lot of confusion compared to a send/expect model for
example.

> And you know there will be some distributed setups where the status from a
> backend
> should be shared with different haproxy instances, maybe via peers protocol,
> this
> will be maybe only possible in the commercial version ;-).

This was something that I intended many years ago already, even before
we had the peers protocol, and that we've even discussed during the
protocol's design to be sure to leave provisions for such an extension,
and v2 of the protocol with shared connections made a giant step forward
in this direction. I even wrote down on paper a distributed check
algorithm that will take less time to re-design than to figure on what
sheet of paper it was :-)

But in the mean time the definition of a "server" has changed a lot. For
the LB algos, it's a position in the list of servers. For stickiness it
used to be a numeric ID (defaults to the position in the list) and is a
name now. For those who used to rely on SNMP-based monitoring it also
was this numeric ID. For some people it's the server's name (hence the
recent change). Now with service meshes it tends to move to IP:port
where the server's name is irrelevant. Users are able to adapt their
monitoring, stats and reporting to all these conditions, but when it
comes to defining what to exchange over the wire and what is
authoritative it's a completely different story!

In addition, in such new environments, it's common to see a central
service decide what server is up or down and advertise it either via
the API or the DNS, in which case checks just become very basic again.

Last but not least, in highly distributed environments you really do
not want your neighbor to tell you what server is up when it uses a
different path than you use.

So while I was initially really fond of the idea and wanted to see it
done at least for the beauty of the design, I must confess that I'm
far less impatient nowadays because I predict that it will require lots
of tunables that most users will consider unwelcome. And I really doubt
it will provide that much value in modern environments in the end.

Just my two cents,
Willy



Re: check rewrite feature request

2019-12-26 Thread Aleksandar Lazic

Hi Willy.

On 26-12-2019 06:44, Willy Tarreau wrote:

Hi Aleks,

On Tue, Dec 24, 2019 at 10:29:44AM +0100, Aleksandar Lazic wrote:
I have created a feature request for the check rewrite, as we have more and
more requests for checks which are quite difficult to set up.

https://github.com/haproxy/haproxy/issues/426


Thanks for this. We definitely need to rework them for 2.2. At a minimum,
what I'd like to see, in order of realization:

  - rewrite *all* health checks to internally use tcp-check sequences only
    so that we don't have to maintain this horribly broken mess anymore and
    can more easily implement new ones ;


Well, I think we will also need udp-check, for example for DNS, QUIC and
some other protocols.

  - implement a new htx check mechanism that uses the muxes and that will be
    able to seamlessly deal with H1/H2/FCGI, and likely plug onto idle
    connections; these ones will need to support adding headers and probably
    do more;


Full Ack.

  - implement centralized check definitions that can be reused in multiple
    backends. E.g. tcp-check sequences or htx check sequences should only
    be defined once. This implies that some of the headers will likely need
    to support variables so that a backend may define a host header for
    example and the checks use it.


But we already have such a possibility IMHO, it's the named defaults
section, isn't it?


If we can do all this it will be great.

Note that in your example above you're using %[host] for example, though
the log-format syntax is only valid for traffic being processed, but I do
get the idea anyway. For me it's the same principle as having variables or
check parameters defined in the backend (or maybe even per server).


Yes, the log-format syntax was used to show that in some way a rewrite
from some variables will be necessary.

It would be nice to be able to reuse the feature (tcp|http)-(request|response)
for the checks.

And you know there will be some distributed setups where the status from a
backend should be shared with different haproxy instances, maybe via the
peers protocol; this will maybe only be possible in the commercial version
;-).


Cheers,
Willy


Regards
Aleks



Re: check rewrite feature request

2019-12-25 Thread Willy Tarreau
Hi Aleks,

On Tue, Dec 24, 2019 at 10:29:44AM +0100, Aleksandar Lazic wrote:
> I have created a feature request for the check rewrite, as we have more and
> more requests for checks which are quite difficult to set up.
> 
> https://github.com/haproxy/haproxy/issues/426

Thanks for this. We definitely need to rework them for 2.2. At a minimum,
what I'd like to see, in order of realization:

  - rewrite *all* health checks to internally use tcp-check sequences only
so that we don't have to maintain this horribly broken mess anymore and
can more easily implement new ones ;

  - implement a new htx check mechanism that uses the muxes and that will be
able to seamlessly deal with H1/H2/FCGI, and likely plug onto idle
connections; these ones will need to support adding headers and probably
do more;

  - implement centralized check definitions that can be reused in multiple
backends. E.g. tcp-check sequences or htx check sequences should only
be defined once. This implies that some of the headers will likely need
to support variables so that a backend may define a host header for
example and the checks use it.

If we can do all this it will be great.

Note that in your example above you're using %[host] for example, though
the log-format syntax is only valid for traffic being processed, but I do
get the idea anyway. For me it's the same principle as having variables or
check parameters defined in the backend (or maybe even per server).

Cheers,
Willy



check rewrite feature request

2019-12-24 Thread Aleksandar Lazic

Hi.

I have created a feature request for the check rewrite as we have more and more
requests for checks which are quite difficult to set up.


https://github.com/haproxy/haproxy/issues/426

Regards
aleks



Re: feature request: http-check with backend weight control possibility

2019-08-05 Thread Jiri Tulach

Hello!
It would also be nice to have an option to control the maximum number of
connections to the backend, for instance via an X-HAPROXY-MAXCONN header.


Best Regards,
Jiří Tulach

On 6.8.2019 at 6:51, Максим Куприянов wrote:

Hi!

It would be nice to add some backend weight control option to http-checks.
For example, a backend could add some X-WEIGHT http header to its
health-check responses, and haproxy could use them instead of a separate
haproxy-agent instance on a backend to control backend weight or even
maintenance.


--
Best regards,
Maksim Kupriianov






feature request: http-check with backend weight control possibility

2019-08-05 Thread Максим Куприянов
Hi!

It would be nice to add some backend weight control option to
http-checks.
For example, a backend could add some X-WEIGHT http header to its
health-check responses, and haproxy could use them instead of a separate
haproxy-agent instance on a backend to control backend weight or even
maintenance.
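
For comparison, the existing agent-check mechanism already lets a backend steer its own weight, at the cost of the separate agent this request wants to avoid; a minimal sketch, with illustrative addresses and ports:

```haproxy
# Illustrative sketch: an agent listening on port 9999 replies with
# strings such as "up", "down", "drain" or "50%" (a relative weight),
# and haproxy applies them to the server.
backend be_app
    server app1 192.0.2.20:80 check weight 100 agent-check agent-port 9999 agent-inter 5s
```

Recent versions also accept a "maxconn:" keyword in the agent reply, which would similarly cover the X-HAPROXY-MAXCONN idea raised elsewhere in this thread.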

--
Best regards,
Maksim Kupriianov


Re: [Feature request] Call fan-out to all endpoints.

2018-06-10 Thread Patrick Hemmer


On 2018/6/10 13:27, Aleksandar Lazic wrote:
> Hi.
>
> On 10/06/2018 17:56, amotz wrote:
>> Baptiste wrote:
>>> Hi,
>>>
>>> what's the use case?
>>> Is this API gateway kind of thing?
>>>
>>> Baptiste
>>
>> From my experience this is mostly needed for operations/management API.
>>
>> Some examples:
>> getStatus (i.e. get the status/health from all endpoints)
>> flushCache (make all endpoints flush their cache)
>> setConfig (you get the point ...)
>> more...
>>
>> with regard to the fan-in question by Jonathan.
>> Maybe return 207 (multi-status)  https://httpstatuses.com/207 ?
>> IMO, the most intuitive response would be a json array of all the
>> endpoints' responses, but I'm open for suggestions.
>
> Let's say you have a `option allendpoints /_fan-out`.
>
> When you now call `curl -sS https://haproxy:8080/_fan-out/health` then
> you will receive a json from *all active* endpoints (pods,
> app-server,...) with the result of their `/health`, something like this?
>
> That sounds interesting. Maybe it's possible with a
> `external-check command ` or some lua code?
>
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#external-check%20%28Alphabetically%20sorted%20keywords%20reference%29
>

Just throwing out my own $0.02, I don't think this is a good
responsibility for haproxy to support. This is very specific application
level logic.
Haproxy doesn't care about content types (json). What if I want to use
this feature, but with some other encoding?
How should haproxy respond if a server sends a 1GB response? It can't
buffer the whole thing in memory so it can encode it and add it to the
response message.
What about the non-happy-path cases? What if one of the servers times
out, what should haproxy put in the response? What if a server sends a
partial response?
How should the headers from a server response be encoded?

This is basically the invention of a new protocol.

Don't get me wrong, the underlying goal, having a client send a single
request and that request getting duplicated amongst the servers, is a
good one. In fact we do this at my work. But we use a custom application
that is specifically designed to handle the protocol we are wrapping.

I think this might be reasonable to do in LUA, and maybe even possible
already, but there's still going to be lots of the aforementioned
difficulties.
However to put some measure of positive spin on things, I think HTTP/2
would fit very well with this use case. HTTP/2 supports server push
messages. Meaning it's built in to the protocol that the client can send
a single request, and receive multiple responses. Haproxy doesn't
fully support H2 passthrough right now, but that may not be necessary. I
think LUA really only needs a few things to be able to support this: The
ability to receive H2 requests & generate responses (LUA already has
http/1.1 response capabilities, but I have no idea if they work with H2
requests), and then the ability to trigger a request to a server, and
have that sent back to the client as a server-push message.

-Patrick


Re: [Feature request] Call fan-out to all endpoints.

2018-06-10 Thread Aleksandar Lazic

Hi.

On 10/06/2018 17:56, amotz wrote:

Baptiste wrote:

Hi,

what's the use case?
Is this API gateway kind of thing?

Baptiste


From my experience this is mostly needed for operations/management API.

Some examples:
getStatus (i.e. get the status/health from all endpoints)
flushCache (make all endpoints flush their cache)
setConfig (you get the point ...)
more...

with regard to the fan-in question by Jonathan.
Maybe return 207 (multi-status)  https://httpstatuses.com/207 ?
IMO, the most intuitive response would be a json array of all the endpoints'
responses, but I'm open for suggestions.


Let's say you have a `option allendpoints /_fan-out`.

When you now call `curl -sS https://haproxy:8080/_fan-out/health` then
you will receive a json from *all active* endpoints (pods,
app-server,...) with the result of their `/health`, something like this?

That sounds interesting. Maybe it's possible with a
`external-check command ` or some lua code?

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#external-check%20%28Alphabetically%20sorted%20keywords%20reference%29
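
A hedged sketch of that external-check route (paths and names are illustrative; the script itself would have to do the fan-out and aggregation, since haproxy only consumes its exit code):

```haproxy
# Illustrative sketch: external-check runs an arbitrary program for each
# checked server; a custom script could poll every endpoint's /health and
# aggregate the results. Exit code 0 means the check passed.
global
    external-check

backend be_app
    option external-check
    external-check command /usr/local/bin/fanout-health.sh
    server app1 192.0.2.30:80 check
```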


Thanks,
Amotz


Best regards
Aleks



On Sun, Jun 10, 2018 at 14:23, Baptiste <bed...@gmail.com> wrote:




On Sun, Jun 10, 2018 at 12:36 PM, Jonathan Matthews <cont...@jpluscplusm.com> wrote:


On 10 June 2018 at 08:44, amotz  wrote:
> I found myself needing the option to do "fan-out" for a call. Meaning
> making 1 call to haproxy and have it pass that call to all of the
> endpoints currently active.
> I don't mind implementing this myself and pushing it to code review. Is this
> a feature you would be interested in?

Hey Amotz,

I'm merely an haproxy user (not a dev and nothing to do with the
project from a feature/code/merging point of view), but I'd be
interested in using this.

I feel like an important part of it would be how you'd handle the
merge of the different server responses. I.e. the fan-in part.

I can see various merge strategies which would be useful in different
situations.

e.g. "Reply with *this* backend's response but totally ignore this
other backend's response" could be useful for in a logging/audit
scenario.

"Merge the response bodies in this defined order" could be useful for
structured data/responses being assembled.

"Merge the response bodies in any order, so long as they gave an HTTP
response code in the range of X-Y" could be useful for unstructured or
self-contained data (e.g. a catalog API).

"Merge these N distinct JSON documents into one properly formed JSON
response" could be really handy, but would obviously move haproxy's
job up the stack somewhat, and might well be an anti-feature!

I could have used all the above strategies at various points in my career.

I think all but the first strategy might well be harder to implement,
as you'll have to cater for a situation where you've received a
response but the admin's configured merging strategy dictates that you
can't serve the response to the requestor yet. You'll have to find
somewhere to cache entire individual response bodies for an amount of
time. I don't have any insight into doing that - I can just see that
it might be ... interesting :-)

If Willy and the rest of the folks who'd have to support this in the
future feel like this feature is worth it, please take this as an
enthusiastic "yes please!" from a user!

Jonathan









Re: [Feature request] Call fan-out to all endpoints.

2018-06-10 Thread amotz
From my experience this is mostly needed for operations/management API.
Some examples:
getStatus (i.e. get the status/health from all endpoints)
flushCache (make all endpoints flush their cache)
setConfig (you get the point ...)
more...

with regard to the fan-in question by Jonathan.
Maybe return 207 (multi-status)  https://httpstatuses.com/207 ?
IMO, the most intuitive response would be a json array of all the endpoints'
responses, but I'm open for suggestions.

Thanks,
Amotz

On Sun, Jun 10, 2018 at 14:23, Baptiste <bed...@gmail.com> wrote:

>
>
> On Sun, Jun 10, 2018 at 12:36 PM, Jonathan Matthews <cont...@jpluscplusm.com> wrote:
>
>> On 10 June 2018 at 08:44, amotz  wrote:
>> > I found myself needing the option to do "fan-out" for a call. Meaning
>> > making 1 call to haproxy and have it pass that call to all of the
>> > endpoints currently active.
>> > I don't mind implementing this myself and pushing it to code review. Is
>> > this a feature you would be interested in?
>>
>> Hey Amotz,
>>
>> I'm merely an haproxy user (not a dev and nothing to do with the
>> project from a feature/code/merging point of view), but I'd be
>> interested in using this.
>>
>> I feel like an important part of it would be how you'd handle the
>> merge of the different server responses. I.e. the fan-in part.
>>
>> I can see various merge strategies which would be useful in different
>> situations.
>>
>> e.g. "Reply with *this* backend's response but totally ignore this
>> other backend's response" could be useful for in a logging/audit
>> scenario.
>>
>> "Merge the response bodies in this defined order" could be useful for
>> structured data/responses being assembled.
>>
>> "Merge the response bodies in any order, so long as they gave an HTTP
>> response code in the range of X-Y" could be useful for unstructured or
>> self-contained data (e.g. a catalog API).
>>
>> "Merge these N distinct JSON documents into one properly formed JSON
>> response" could be really handy, but would obviously move haproxy's
>> job up the stack somewhat, and might well be an anti-feature!
>>
>> I could have used all the above strategies at various points in my career.
>>
>> I think all but the first strategy might well be harder to implement,
>> as you'll have to cater for a situation where you've received a
>> response but the admin's configured merging strategy dictates that you
>> can't serve the response to the requestor yet. You'll have to find
>> somewhere to cache entire individual response bodies for an amount of
>> time. I don't have any insight into doing that - I can just see that
>> it might be ... interesting :-)
>>
>> If Willy and the rest of the folks who'd have to support this in the
>> future feel like this feature is worth it, please take this as an
>> enthusiastic "yes please!" from a user!
>>
>> Jonathan
>>
>>
>
> Hi,
>
> what's the use case?
> Is this API gateway kind of thing?
>
> Baptiste
>


Re: [Feature request] Call fan-out to all endpoints.

2018-06-10 Thread Baptiste
On Sun, Jun 10, 2018 at 12:36 PM, Jonathan Matthews  wrote:

> On 10 June 2018 at 08:44, amotz  wrote:
> > I found myself needing the option to do "fan-out" for a call. Meaning
> > making 1 call to haproxy and have it pass that call to all of the
> > endpoints currently active.
> > I don't mind implementing this myself and pushing it to code review. Is
> > this a feature you would be interested in?
>
> Hey Amotz,
>
> I'm merely an haproxy user (not a dev and nothing to do with the
> project from a feature/code/merging point of view), but I'd be
> interested in using this.
>
> I feel like an important part of it would be how you'd handle the
> merge of the different server responses. I.e. the fan-in part.
>
> I can see various merge strategies which would be useful in different
> situations.
>
> e.g. "Reply with *this* backend's response but totally ignore this
> other backend's response" could be useful for in a logging/audit
> scenario.
>
> "Merge the response bodies in this defined order" could be useful for
> structured data/responses being assembled.
>
> "Merge the response bodies in any order, so long as they gave an HTTP
> response code in the range of X-Y" could be useful for unstructured or
> self-contained data (e.g. a catalog API).
>
> "Merge these N distinct JSON documents into one properly formed JSON
> response" could be really handy, but would obviously move haproxy's
> job up the stack somewhat, and might well be an anti-feature!
>
> I could have used all the above strategies at various points in my career.
>
> I think all but the first strategy might well be harder to implement,
> as you'll have to cater for a situation where you've received a
> response but the admin's configured merging strategy dictates that you
> can't serve the response to the requestor yet. You'll have to find
> somewhere to cache entire individual response bodies for an amount of
> time. I don't have any insight into doing that - I can just see that
> it might be ... interesting :-)
>
> If Willy and the rest of the folks who'd have to support this in the
> future feel like this feature is worth it, please take this as an
> enthusiastic "yes please!" from a user!
>
> Jonathan
>
>

Hi,

what's the use case?
Is this API gateway kind of thing?

Baptiste


Re: [Feature request] Call fan-out to all endpoints.

2018-06-10 Thread Jonathan Matthews
On 10 June 2018 at 08:44, amotz  wrote:
> I found myself needing the option to do "fan-out" for a call. Meaning
> making 1 call to haproxy and have it pass that call to all of the endpoints
> currently active.
> I don't mind implementing this myself and pushing it to code review. Is this
> a feature you would be interested in?

Hey Amotz,

I'm merely an haproxy user (not a dev and nothing to do with the
project from a feature/code/merging point of view), but I'd be
interested in using this.

I feel like an important part of it would be how you'd handle the
merge of the different server responses. I.e. the fan-in part.

I can see various merge strategies which would be useful in different
situations.

e.g. "Reply with *this* backend's response but totally ignore this
other backend's response" could be useful for in a logging/audit
scenario.

"Merge the response bodies in this defined order" could be useful for
structured data/responses being assembled.

"Merge the response bodies in any order, so long as they gave an HTTP
response code in the range of X-Y" could be useful for unstructured or
self-contained data (e.g. a catalog API).

"Merge these N distinct JSON documents into one properly formed JSON
response" could be really handy, but would obviously move haproxy's
job up the stack somewhat, and might well be an anti-feature!

I could have used all the above strategies at various points in my career.

I think all but the first strategy might well be harder to implement,
as you'll have to cater for a situation where you've received a
response but the admin's configured merging strategy dictates that you
can't serve the response to the requestor yet. You'll have to find
somewhere to cache entire individual response bodies for an amount of
time. I don't have any insight into doing that - I can just see that
it might be ... interesting :-)

If Willy and the rest of the folks who'd have to support this in the
future feel like this feature is worth it, please take this as an
enthusiastic "yes please!" from a user!

Jonathan



[Feature request] Call fan-out to all endpoints.

2018-06-10 Thread amotz
I found myself needing the option to do "fan-out" for a call. Meaning making
1 call to haproxy and have it pass that call to *all* of the endpoints
currently active.
I don't mind implementing this myself and pushing it to code review. Is this
a feature you would be interested in?

Thanks,
Amotz


Feature request: smtpchk additional output check

2018-02-09 Thread Stu M
Hi,

I have a small but hopefully simple request that would be useful in
Exchange SMTP load balancing situations.

When Exchange 2013+ hub transport service is put into maintenance mode it
keeps the SMTP service running but responds to mail from commands with "421
4.3.2 Service not active" - I believe this is for "smart redirect" of
Outlook clients that can relay thru another server in a cluster.

However, where other MTAs are concerned, when relaying thru a load balanced
Exchange this becomes problematic because they simply fail to submit
messages - HAproxy still sees the backend server is up because SMTP is
still alive, despite using smtpchk (Exchange still responds to the ehlo
command, see below).

E.g. typical output when a hub transport host is in service mode..

220 xyzserver Microsoft ESMTP MAIL Service ready at Fri, 9 Feb 2018
09:32:20 +
EHLO
250-xyzserver Hello [x.x.x.x]
250-SIZE 37748736
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-8BITMIME
250-BINARYMIME
250 CHUNKING
mail from: m...@hotmail.com
421 4.3.2 Service not active

My question is, would it be possible to add an additional option for
smtpchk that causes HAP to look for additional data in the SMTP response,
specifically the last line above, "service not active"?

So after doing ehlo, if HAP then does a dummy "mail from:" it will get the
result code at that point, perhaps something like the following would
suffice?

   option smtpchk ehlo 
   option smtpchk-exch 
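
Until something like that exists, a tcp-check sequence can approximate it today by driving the SMTP dialogue up to the MAIL FROM stage (hostname, sender and addresses below are illustrative, and note tcp-check replaces option smtpchk rather than extending it):

```haproxy
# Illustrative sketch: go far enough into the SMTP dialogue to see the
# "421 4.3.2 Service not active" reply that an EHLO-only check never reaches.
backend be_exchange
    option tcp-check
    tcp-check connect port 25
    tcp-check expect rstring ^220
    tcp-check send EHLO\ healthcheck.example.com\r\n
    tcp-check expect rstring ^250
    tcp-check send MAIL\ FROM:<hc@example.com>\r\n
    tcp-check expect rstring ^250
    tcp-check send QUIT\r\n
    server exch1 192.0.2.40:25 check
```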

Appreciate any thoughts on this.

Best regards,
Stuart.
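For what it's worth, recent HAProxy versions can already express this kind of deeper SMTP probe with tcp-check sequences instead of a new smtpchk option. A hedged sketch only - the backend name, server address and HELO domain below are hypothetical, and the exact expect patterns would need tuning against a real Exchange banner:

```
backend exchange_smtp
    mode tcp
    option tcp-check
    tcp-check expect rstring ^220
    tcp-check send EHLO\ healthcheck.local\r\n
    tcp-check expect rstring ^250
    tcp-check send MAIL\ FROM:<probe@healthcheck.local>\r\n
    tcp-check expect rstring ^250
    tcp-check send QUIT\r\n
    server ex1 192.0.2.10:25 check
```

With this, a hub transport host answering "421 4.3.2 Service not active" to MAIL FROM fails the `expect rstring ^250` step and is marked down, which is the behaviour requested above.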


Re: feature request

2018-01-19 Thread Vladimír Houba ml .
Thank you Marc for the fast reply,

yes we have monitoring deployed in place and are aware of this possibility.

Though we thought implementation in haproxy would have several benefits

   - easy configuration (no need to integrate scripts with nagios on every
   node)
   - as we use permanent connections, the periodic nature of monitoring
   checks can miss (and probably will, as the connections are immediately
   re-established if possible) a connection break-up

HAProxy is already great at monitoring and health-checking backends, and
extending mailers would give a new range of interesting possibilities.

This was intended more as a feature idea, just to describe our use-case in case
others share it.

Thank you again,
Vladimir


On Fri, Jan 19, 2018 at 12:07 PM, Marc Fournier <
marc.fourn...@camptocamp.com> wrote:

> Vladimír Houba ml.  writes:
>
> Hello,
>
> > we have many backends with few permanent connections/each and I was
> > wondering if it is possible to send an email alert when no connection is
> > active on the backend. It is not possible to implement this feature on
> the
> > application server as they are load-balanced and the connection may be
> > routed to any of them.
> >
> > Also, it would be nice feature to be able to send the notifications via a
> > rest service to make it more flexible.
>
> This sounds like the sort of thing a monitoring system does. Fortunately
> HAProxy already offers a nice hook for this class of tools:
> http://cbonte.github.io/haproxy-dconv/1.8/management.html#9
>
> Basically, fetch the stats, parse the CSV format and extract the
> per-server/backend "current sessions" field, trigger an alert if the
> value is below a certain threshold.
>
> HTH,
> Marc
>
>


-- 
S pozdravom / Best regards
Vladimír Houba jr.
Prosoft , Slovakia
+421 915 708 171


Re: feature request

2018-01-19 Thread Marc Fournier
Vladimír Houba ml.  writes:

Hello,

> we have many backends with few permanent connections/each and I was
> wondering if it is possible to send an email alert when no connection is
> active on the backend. It is not possible to implement this feature on the
> application server as they are load-balanced and the connection may be
> routed to any of them.
>
> Also, it would be nice feature to be able to send the notifications via a
> rest service to make it more flexible.

This sounds like the sort of thing a monitoring system does. Fortunately
HAProxy already offers a nice hook for this class of tools:
http://cbonte.github.io/haproxy-dconv/1.8/management.html#9

Basically, fetch the stats, parse the CSV format and extract the
per-server/backend "current sessions" field, trigger an alert if the
value is below a certain threshold.

HTH,
Marc
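Marc's recipe can be sketched in a few lines of Python. This is a hedged illustration, not an official tool: the backend names and the alert threshold are invented, and fetching the CSV (from the stats socket or the `;csv` stats URL) is left out; only the CSV layout (a header line starting with "# pxname,svname,..." and an "scur" current-sessions column) comes from the management doc linked above.

```python
import csv
import io

def backends_below(stats_csv, threshold=1):
    """Return (backend, scur) pairs whose current sessions < threshold.

    stats_csv is the text returned by "show stat" / the ;csv stats URL,
    whose header line starts with "# pxname,svname,...".
    """
    # Strip the leading "# " so DictReader sees a normal header row.
    reader = csv.DictReader(io.StringIO(stats_csv.lstrip("# ")))
    alerts = []
    for row in reader:
        # One BACKEND summary row per backend; scur = current sessions.
        if row.get("svname") == "BACKEND" and row.get("scur", "").isdigit():
            if int(row["scur"]) < threshold:
                alerts.append((row["pxname"], int(row["scur"])))
    return alerts

# Example with a trimmed sample of the CSV layout:
sample = "# pxname,svname,scur\nwww,BACKEND,0\napi,BACKEND,7\n"
print(backends_below(sample))  # -> [('www', 0)]
```

A cron job or monitoring plugin could call `backends_below()` on the fetched stats and send the alert (mail, REST call, ...) itself, which is essentially what Marc suggests.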



feature request

2018-01-19 Thread Vladimír Houba ml .
Hello,

we have many backends with few permanent connections/each and I was
wondering if it is possible to send an email alert when no connection is
active on the backend. It is not possible to implement this feature on the
application server as they are load-balanced and the connection may be
routed to any of them.

Also, it would be a nice feature to be able to send the notifications via a
rest service to make it more flexible.

Thank you
Vladimir

-- 
S pozdravom / Best regards
Vladimír Houba jr.
Prosoft , Slovakia
+421 915 708 171


Re: Feature request: disable CA/distinguished names.

2017-07-28 Thread Emmanuel Hocdet
Hi Willy

thanks!

> On 28 July 2017 at 15:23, Willy TARREAU wrote:
> 
> Hi Manu,
> 
> thank you!
> 
> I've just applied a minor change below :
> 
> - int verify:2;  /* verify method (set of SSL_VERIFY_* flags) 
> */
> + int verify:3;  /* verify method (set of SSL_VERIFY_* flags) 
> */
> 
> I've put 3 bits for verify instead of 2 because while apparently haproxy only
> uses values 0, 1, and 2, openssl defines 0x04 as well in all versions I have
> here and I prefer to avoid jokes in the future.
> 
okay.
Not a silent joke: in this case gcc generates "warning: overflow in implicit
constant conversion [-Woverflow]".

++
Manu




Re: Feature request: disable CA/distinguished names.

2017-07-28 Thread Willy TARREAU
Hi Manu,

thank you!

I've just applied a minor change below :

-   int verify:2;  /* verify method (set of SSL_VERIFY_* flags) 
*/
+   int verify:3;  /* verify method (set of SSL_VERIFY_* flags) 
*/

I've put 3 bits for verify instead of 2 because while apparently haproxy only
uses values 0, 1, and 2, openssl defines 0x04 as well in all versions I have
here and I prefer to avoid jokes in the future.

Thanks,
Willy



Re: Feature request: disable CA/distinguished names.

2017-07-28 Thread Emmanuel Hocdet
Hi Emeric

Thanks for the review

patch with '{}' included

++
Manu



0001-MINOR-ssl-add-no-ca-names-parameter-for-bind.patch
Description: Binary data



> On 27 July 2017 at 18:47, Emeric Brun wrote:
> 
> Hi Manu,
> 
> 
> Could you add a block '{ }' or move the comment onto the following
> lines:
> 
> + if (!((ssl_conf && ssl_conf->no_ca_names) || 
> bind_conf->ssl_conf.no_ca_names))
> + /* set CA names fo client cert request, 
> function returns void */
> + SSL_CTX_set_client_CA_list(ctx, 
> SSL_load_client_CA_file(ca_file));
> 
> It is quite confusing, and we want to avoid further mistakes.
> 
> 
> A second point, i don't know which is the current policy about the keyword 
> prefix "no-" in configuration statements, but
> we usually take care using this word.
> 
> Willy, would you clarify that point?
> 
> R,
> Emeric
> 
> On 07/10/2017 05:45 PM, Emmanuel Hocdet wrote:
>> 
>> Hi Bas,
>> 
>>> On 10 July 2017 at 17:05, Wolvers, Bas wrote:
>>> 
>>> Hi Emmanuel,
>>> 
>>> I finally found time to test your patch.
>>> 
>>> It works, but you can't seem to turn it off.
>>> no-ca-names seems to be active regardless of the option in the config file.
>>> 
>> 
>> oops i fail the double negation.
>> fix patch include.
>> 
>>> I think I'll find time tomorrow to find out if it’s the global option or 
>>> not, but my time is a bit limited unfortunately.
>>> 
>>> Best regards,
>>> 
>>> Bas
>> 
>> Thanks for testing!
>> 
>> Manu
>> 
>> 
>> 
>> 
>> 
>> 
> 



Re: Feature request: disable CA/distinguished names.

2017-07-27 Thread Willy TARREAU
On Thu, Jul 27, 2017 at 06:47:38PM +0200, Emeric Brun wrote:
> A second point, i don't know which is the current policy about the keyword 
> prefix "no-" in configuration statements, but
> we usually take care using this word.
> 
> Willy, would you clarify that point?

In fact we were very careful to avoid them for the "server" lines
to limit the trouble in prevision for the server-template directive
and the "no-whatever" that came with it. Other than that we already
have a few "no-" on bind lines, and I think that when they disable
part of a feature that is on by default due to another option it
makes sense as there's little confusion.

Willy



Re: Feature request: disable CA/distinguished names.

2017-07-27 Thread Emeric Brun
Hi Manu,


Could you add a block '{ }' or move the comment onto the following
lines:

+   if (!((ssl_conf && ssl_conf->no_ca_names) || 
bind_conf->ssl_conf.no_ca_names))
+   /* set CA names fo client cert request, 
function returns void */
+   SSL_CTX_set_client_CA_list(ctx, 
SSL_load_client_CA_file(ca_file));

It is quite confusing, and we want to avoid further mistakes.


A second point, i don't know which is the current policy about the keyword 
prefix "no-" in configuration statements, but
we usually take care using this word.

Willy, would you clarify that point?

R,
Emeric

On 07/10/2017 05:45 PM, Emmanuel Hocdet wrote:
> 
> Hi Bas,
> 
>> On 10 July 2017 at 17:05, Wolvers, Bas wrote:
>>
>> Hi Emmanuel,
>>
>> I finally found time to test your patch.
>>
>> It works, but you can't seem to turn it off.
>> no-ca-names seems to be active regardless of the option in the config file.
>>
> 
> oops i fail the double negation.
> fix patch include.
> 
>> I think I'll find time tomorrow to find out if it’s the global option or 
>> not, but my time is a bit limited unfortunately.
>>
>> Best regards,
>>
>> Bas
> 
> Thanks for testing!
> 
> Manu
> 
> 
> 
> 
> 
> 




Re: Feature request: disable CA/distinguished names.

2017-07-27 Thread Emmanuel Hocdet

Hi Bas,

> On 11 July 2017 at 11:24, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> Hi Emmanuel,
> 
> This seems to work fine.
> I've tested with 1 CA certs, without the option on I get "tcp window 
> full" followed by tls fatal alerts, with the option on the connection works 
> fine.
> 
> I haven't tested the crt-list option.
> 

good!

> 
> Do you know if it is possible to add this to stable (1.5/1.6)?
> My guess would be 'no' because it is a new feature, but I'm not sure what
> your policies are.
> 

A merge into 1.8-dev would be a good first step.
Emeric or Willy will need to find time to review it and consider the merge.

++
Manu

> Best regards,
> 
> Bas
> 
> -Original Message-
> From: Emmanuel Hocdet [mailto:m...@gandi.net] 
> Sent: Monday 10 July 2017 17:46
> To: Wolvers, Bas
> Cc: haproxy@formilux.org
> Subject: Re: Feature request: disable CA/distinguished names.
> 
> 
> Hi Bas,
> 
>> On 10 July 2017 at 17:05, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
>> 
>> Hi Emmanuel,
>> 
>> I finally found time to test your patch.
>> 
>> It works, but you can't seem to turn it off.
>> no-ca-names seems to be active regardless of the option in the config file.
>> 
> 
> oops i fail the double negation.
> fix patch include.
> 
>> I think I'll find time tomorrow to find out if it’s the global option or 
>> not, but my time is a bit limited unfortunately.
>> 
>> Best regards,
>> 
>> Bas
> 
> Thanks for testing!
> 
> Manu
> 
> 




RE: Feature request: disable CA/distinguished names.

2017-07-11 Thread Wolvers, Bas
Hi Emmanuel,

This seems to work fine.
I've tested with 1 CA certs: without the option on I get "tcp window full"
followed by TLS fatal alerts; with the option on, the connection works fine.
I haven't tested the crt-list option.


Do you know if it is possible to add this to stable (1.5/1.6)?
My guess would be 'no' because it is a new feature, but I'm not sure what your
policies are.

Best regards,

Bas

-Original Message-
From: Emmanuel Hocdet [mailto:m...@gandi.net] 
Sent: Monday 10 July 2017 17:46
To: Wolvers, Bas
Cc: haproxy@formilux.org
Subject: Re: Feature request: disable CA/distinguished names.


Hi Bas,

> On 10 July 2017 at 17:05, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> Hi Emmanuel,
> 
> I finally found time to test your patch.
> 
> It works, but you can't seem to turn it off.
> no-ca-names seems to be active regardless of the option in the config file.
> 

oops i fail the double negation.
fix patch include.

> I think I'll find time tomorrow to find out if it’s the global option or not, 
> but my time is a bit limited unfortunately.
> 
> Best regards,
> 
> Bas

Thanks for testing!

Manu




Re: Feature request: disable CA/distinguished names.

2017-07-10 Thread Emmanuel Hocdet

Hi Bas,

> On 10 July 2017 at 17:05, Wolvers, Bas wrote:
> 
> Hi Emmanuel,
> 
> I finally found time to test your patch.
> 
> It works, but you can't seem to turn it off.
> no-ca-names seems to be active regardless of the option in the config file.
> 

oops, I got the double negation wrong.
Fixed patch included.

> I think I'll find time tomorrow to find out if it’s the global option or not, 
> but my time is a bit limited unfortunately.
> 
> Best regards,
> 
> Bas

Thanks for testing!

Manu




0001-MINOR-ssl-add-no-ca-names-parameter-for-bind.patch
Description: Binary data




RE: Feature request: disable CA/distinguished names.

2017-07-10 Thread Wolvers, Bas
Hi Emmanuel,

I finally found time to test your patch.

It works, but you can't seem to turn it off.
no-ca-names seems to be active regardless of the option in the config file.

I think I'll find time tomorrow to find out if it’s the global option or not, 
but my time is a bit limited unfortunately.

Best regards,

Bas

-Original Message-
From: Emmanuel Hocdet [mailto:m...@gandi.net] 
Sent: Tuesday 13 June 2017 15:39
To: Wolvers, Bas
Cc: haproxy@formilux.org
Subject: Re: Feature request: disable CA/distinguished names.


> On 13 June 2017 at 14:13, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> That would do nicely.
> 
> Is there something useful I can do to help?
> 

Can you test with this patch? :



Re: Feature request: disable CA/distinguished names.

2017-06-13 Thread Emmanuel Hocdet

> On 13 June 2017 at 14:13, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> That would do nicely.
> 
> Is there something useful I can do to help?
> 

Can you test with this patch? :



0001-MINOR-ssl-add-no-ca-names-parameter-for-bind.patch
Description: Binary data

> -Original Message-
> From: Emmanuel Hocdet [mailto:m...@gandi.net] 
> Sent: Monday 12 June 2017 17:58
> To: Wolvers, Bas
> Cc: haproxy@formilux.org
> Subject: Re: Feature request: disable CA/distinguished names.
> 
> Thanks for the explanation.
> I think a parameter like ‘no-ca-names’ could do the job, or you have a better 
> name?
> 
> Manu
> 
>> On 12 June 2017 at 14:32, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
>> 
>> If you connect to a haproxy TLS server with CA names on (verify optional or 
>> required) part of the server hello message is the list of CA's that are 
>> accepted.
>> The client can use this list to decide which certificate to send as its 
>> client certificate.
>> 
>> The problem arises when this list if long, the server hello message gets 
>> really long as well.
>> If the list if very long the server hello becomes prohibitively big, making 
>> client connections fail.
>> 
>> So disabling the list of CA names in the server hello message reduces the 
>> message size.
>> Lots of clients don’t need to be told which certificate to send, and this 
>> list is optional since TLS1.1 if memory serves me well.
>> 
>> I'm running a system which (for good reason) runs on self-signed 
>> certificates, so technically I have a CA for every client. 
>> With more than 30 CA's I had client that have problems connecting because 
>> the server hello is too big.
>> With CA names turned off I tested with 1 CA's loaded without problems.
>> 
>> -Original Message-
>> From: Emmanuel Hocdet [mailto:m...@gandi.net]
>> Sent: Monday 12 June 2017 14:22
>> To: Wolvers, Bas
>> Cc: haproxy@formilux.org
>> Subject: Re: Feature request: disable CA/distinguished names.
>> 
>> I don't understand.
>> CA certs are loaded by haproxy when needed: i.e if 'ca-file’ parameter is 
>> used and ‘verify’ is set to ‘optional’ or ‘required’.
>> 
>>> On 12 June 2017 at 13:00, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
>>> 
>>> For setups with large amounts of CA certs it can be a really good idea to 
>>> turn off CA names in the key exchange.
>>> As far as I understand it is optional to send CA names, and it works fine 
>>> with these turned off.
>>> This is also called distinguished names.
>>> 
>>> To do this a single line should not be executed.
>>> SSL_CTX_set_client_CA_list(ctx, 
>>> SSL_load_client_CA_file(ca_file));
>>> (in ssl_sock.c, function ssl_sock_prepare_ctx).
>>> 
>>> I currently disable this with a LD_PRELOAD shim, but I think it would be a 
>>> good idea to make this an ssl option, similar to force_tls12 etc.
>>> 
>>> /*
>>> This shim disables 2 openssl functions.
>>> The effect of this is that no client CA names, also known as
>>> distinguished names, are loaded; this reduces ssl traffic with large
>>> numbers of CA certificates.
>>> 
>>> This is made to be used with HAPROXY since it does not have a
>>> setting to disable this in the configuration.
>>> */
>>> #include <stdio.h>
>>> 
>>> void SSL_CTX_set_client_CA_list(void *one, void *two) {
>>> printf("SSL_CTX_set_client_CA_list called but disabled by shim.\n");
>>> return;
>>> }
>>> void *SSL_load_client_CA_file(void *one) {
>>> printf("SSL_load_client_CA_file called but disabled by shim.\n");
>>> return 0;
>>> }
>>> 
>> 
> 



RE: Feature request: disable CA/distinguished names.

2017-06-13 Thread Wolvers, Bas
That would do nicely.

Is there something useful I can do to help?

-Original Message-
From: Emmanuel Hocdet [mailto:m...@gandi.net] 
Sent: Monday 12 June 2017 17:58
To: Wolvers, Bas
Cc: haproxy@formilux.org
Subject: Re: Feature request: disable CA/distinguished names.

Thanks for the explanation.
I think a parameter like 'no-ca-names' could do the job, or do you have a
better name?

Manu

> On 12 June 2017 at 14:32, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> If you connect to a haproxy TLS server with CA names on (verify optional or 
> required) part of the server hello message is the list of CA's that are 
> accepted.
> The client can use this list to decide which certificate to send as its 
> client certificate.
> 
> The problem arises when this list if long, the server hello message gets 
> really long as well.
> If the list if very long the server hello becomes prohibitively big, making 
> client connections fail.
> 
> So disabling the list of CA names in the server hello message reduces the 
> message size.
> Lots of clients don’t need to be told which certificate to send, and this 
> list is optional since TLS1.1 if memory serves me well.
> 
> I'm running a system which (for good reason) runs on self-signed 
> certificates, so technically I have a CA for every client. 
> With more than 30 CA's I had client that have problems connecting because the 
> server hello is too big.
> With CA names turned off I tested with 1 CA's loaded without problems.
> 
> -Original Message-
> From: Emmanuel Hocdet [mailto:m...@gandi.net]
> Sent: Monday 12 June 2017 14:22
> To: Wolvers, Bas
> Cc: haproxy@formilux.org
> Subject: Re: Feature request: disable CA/distinguished names.
> 
> I don't understand.
> CA certs are loaded by haproxy when needed: i.e if 'ca-file’ parameter is 
> used and ‘verify’ is set to ‘optional’ or ‘required’.
> 
>> On 12 June 2017 at 13:00, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
>> 
>> For setups with large amounts of CA certs it can be a really good idea to 
>> turn off CA names in the key exchange.
>> As far as I understand it is optional to send CA names, and it works fine 
>> with these turned off.
>> This is also called distinguished names.
>> 
>> To do this a single line should not be executed.
>>  SSL_CTX_set_client_CA_list(ctx, 
>> SSL_load_client_CA_file(ca_file));
>> (in ssl_sock.c, function ssl_sock_prepare_ctx).
>> 
>> I currently disable this with a LD_PRELOAD shim, but I think it would be a 
>> good idea to make this an ssl option, similar to force_tls12 etc.
>> 
>> /*
>> This shim disables 2 openssl functions.
>> The effect of this is that no client CA names, also known as
>> distinguished names, are loaded; this reduces ssl traffic with large
>> numbers of CA certificates.
>> 
>> This is made to be used with HAPROXY since it does not have a
>> setting to disable this in the configuration.
>> */
>> #include <stdio.h>
>> 
>> void SSL_CTX_set_client_CA_list(void *one, void *two) {
>> printf("SSL_CTX_set_client_CA_list called but disabled by shim.\n");
>> return;
>> }
>> void *SSL_load_client_CA_file(void *one) {
>> printf("SSL_load_client_CA_file called but disabled by shim.\n");
>> return 0;
>> }
>> 
> 



Re: Feature request: disable CA/distinguished names.

2017-06-12 Thread Emmanuel Hocdet
Thanks for the explanation.
I think a parameter like 'no-ca-names' could do the job, or do you have a
better name?

Manu

> On 12 June 2017 at 14:32, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> If you connect to a haproxy TLS server with CA names on (verify optional or 
> required) part of the server hello message is the list of CA's that are 
> accepted.
> The client can use this list to decide which certificate to send as its 
> client certificate.
> 
> The problem arises when this list if long, the server hello message gets 
> really long as well.
> If the list if very long the server hello becomes prohibitively big, making 
> client connections fail.
> 
> So disabling the list of CA names in the server hello message reduces the 
> message size.
> Lots of clients don’t need to be told which certificate to send, and this 
> list is optional since TLS1.1 if memory serves me well.
> 
> I'm running a system which (for good reason) runs on self-signed 
> certificates, so technically I have a CA for every client. 
> With more than 30 CA's I had client that have problems connecting because the 
> server hello is too big.
> With CA names turned off I tested with 1 CA's loaded without problems.
> 
> -Original Message-
> From: Emmanuel Hocdet [mailto:m...@gandi.net] 
> Sent: Monday 12 June 2017 14:22
> To: Wolvers, Bas
> Cc: haproxy@formilux.org
> Subject: Re: Feature request: disable CA/distinguished names.
> 
> I don't understand.
> CA certs are loaded by haproxy when needed: i.e if 'ca-file’ parameter is 
> used and ‘verify’ is set to ‘optional’ or ‘required’.
> 
>> On 12 June 2017 at 13:00, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
>> 
>> For setups with large amounts of CA certs it can be a really good idea to 
>> turn off CA names in the key exchange.
>> As far as I understand it is optional to send CA names, and it works fine 
>> with these turned off.
>> This is also called distinguished names.
>> 
>> To do this a single line should not be executed.
>>  SSL_CTX_set_client_CA_list(ctx, 
>> SSL_load_client_CA_file(ca_file));
>> (in ssl_sock.c, function ssl_sock_prepare_ctx).
>> 
>> I currently disable this with a LD_PRELOAD shim, but I think it would be a 
>> good idea to make this an ssl option, similar to force_tls12 etc.
>> 
>> /*
>> This shim disables 2 openssl functions.
>> The effect of this is that no client CA names, also known as
>> distinguished names, are loaded; this reduces ssl traffic with large
>> numbers of CA certificates.
>> 
>> This is made to be used with HAPROXY since it does not have a
>> setting to disable this in the configuration.
>> */
>> #include <stdio.h>
>> 
>> void SSL_CTX_set_client_CA_list(void *one, void *two) {
>> printf("SSL_CTX_set_client_CA_list called but disabled by shim.\n");
>> return;
>> }
>> void *SSL_load_client_CA_file(void *one) {
>> printf("SSL_load_client_CA_file called but disabled by shim.\n");
>> return 0;
>> }
>> 
> 




RE: Feature request: disable CA/distinguished names.

2017-06-12 Thread Wolvers, Bas
If you connect to a haproxy TLS server with CA names on (verify optional or 
required) part of the server hello message is the list of CA's that are 
accepted.
The client can use this list to decide which certificate to send as its client 
certificate.

The problem arises when this list is long: the server hello message gets really
long as well.
If the list is very long the server hello becomes prohibitively big, making
client connections fail.

So disabling the list of CA names in the server hello message reduces the 
message size.
Lots of clients don’t need to be told which certificate to send, and this list 
is optional since TLS1.1 if memory serves me well.

I'm running a system which (for good reason) runs on self-signed certificates,
so technically I have a CA for every client.
With more than 30 CA's I had clients that had problems connecting because the
server hello is too big.
With CA names turned off I tested with 1 CA's loaded without problems.

-Original Message-
From: Emmanuel Hocdet [mailto:m...@gandi.net] 
Sent: Monday 12 June 2017 14:22
To: Wolvers, Bas
Cc: haproxy@formilux.org
Subject: Re: Feature request: disable CA/distinguished names.

I don't understand.
CA certs are loaded by haproxy when needed: i.e if 'ca-file’ parameter is used 
and ‘verify’ is set to ‘optional’ or ‘required’.

> On 12 June 2017 at 13:00, Wolvers, Bas <bas.wolv...@alliander.com> wrote:
> 
> For setups with large amounts of CA certs it can be a really good idea to 
> turn off CA names in the key exchange.
> As far as I understand it is optional to send CA names, and it works fine 
> with these turned off.
> This is also called distinguished names.
> 
> To do this a single line should not be executed.
>   SSL_CTX_set_client_CA_list(ctx, 
> SSL_load_client_CA_file(ca_file));
> (in ssl_sock.c, function ssl_sock_prepare_ctx).
> 
> I currently disable this with a LD_PRELOAD shim, but I think it would be a 
> good idea to make this an ssl option, similar to force_tls12 etc.
> 
> /*
>  This shim disables 2 openssl functions.
>  The effect of this is that no client CA names, also known as
>  distinguished names, are loaded; this reduces ssl traffic with large
>  numbers of CA certificates.
> 
>  This is made to be used with HAPROXY since it does not have a
>  setting to disable this in the configuration.
> */
> #include <stdio.h>
> 
> void SSL_CTX_set_client_CA_list(void *one, void *two) {
>  printf("SSL_CTX_set_client_CA_list called but disabled by shim.\n");
>  return;
> }
> void *SSL_load_client_CA_file(void *one) {
>  printf("SSL_load_client_CA_file called but disabled by shim.\n");
>  return 0;
> }
> 



Re: Feature request: disable CA/distinguished names.

2017-06-12 Thread Emmanuel Hocdet
I don't understand.
CA certs are loaded by haproxy when needed, i.e. if the 'ca-file' parameter is
used and 'verify' is set to 'optional' or 'required'.

> On 12 June 2017 at 13:00, Wolvers, Bas wrote:
> 
> For setups with large amounts of CA certs it can be a really good idea to 
> turn off CA names in the key exchange.
> As far as I understand it is optional to send CA names, and it works fine 
> with these turned off.
> This is also called distinguished names.
> 
> To do this a single line should not be executed.
>   SSL_CTX_set_client_CA_list(ctx, 
> SSL_load_client_CA_file(ca_file));
> (in ssl_sock.c, function ssl_sock_prepare_ctx).
> 
> I currently disable this with a LD_PRELOAD shim, but I think it would be a 
> good idea to make this an ssl option, similar to force_tls12 etc.
> 
> /*
>  This shim disables 2 openssl functions.
>  The effect of this is that no client CA names,
>  also known as distinguished names, are loaded;
>  this reduces ssl traffic with large numbers of
>  CA certificates.
> 
>  This is made to be used with HAPROXY since it
>  does not have a setting to disable this in the
>  configuration.
> */
> #include <stdio.h>
> 
> void SSL_CTX_set_client_CA_list(void *one, void *two) {
>  printf("SSL_CTX_set_client_CA_list called but disabled by shim.\n");
>  return;
> }
> void *SSL_load_client_CA_file(void *one) {
>  printf("SSL_load_client_CA_file called but disabled by shim.\n");
>  return 0;
> }
> 




Feature request: disable CA/distinguished names.

2017-06-12 Thread Wolvers, Bas
For setups with large amounts of CA certs it can be a really good idea to turn 
off CA names in the key exchange.
As far as I understand it is optional to send CA names, and it works fine with 
these turned off.
This is also called distinguished names.

To do this a single line should not be executed.
SSL_CTX_set_client_CA_list(ctx, 
SSL_load_client_CA_file(ca_file));
(in ssl_sock.c, function ssl_sock_prepare_ctx).

I currently disable this with a LD_PRELOAD shim, but I think it would be a good 
idea to make this an ssl option, similar to force_tls12 etc.

/*
  This shim disables 2 openssl functions.
  The effect of this is that no client CA names,
  also known as distinguished names, are loaded;
  this reduces ssl traffic with large numbers of
  CA certificates.

  This is made to be used with HAPROXY since it
  does not have a setting to disable this in the
  configuration.
*/
#include <stdio.h>

void SSL_CTX_set_client_CA_list(void *one, void *two) {
  printf("SSL_CTX_set_client_CA_list called but disabled by shim.\n");
  return;
}
void *SSL_load_client_CA_file(void *one) {
  printf("SSL_load_client_CA_file called but disabled by shim.\n");
  return 0;
}
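This thread later converged on a `no-ca-names` bind parameter (see the 0001-MINOR-ssl-add-no-ca-names-parameter-for-bind.patch discussed above), which removes the need for the LD_PRELOAD shim. A hedged sketch of how such a bind line would look - certificate paths and names are hypothetical:

```
frontend fe_mtls
    # verify client certs against a large CA bundle, but omit the CA
    # (distinguished) names list from the certificate request to keep
    # the handshake small
    bind :443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-cas.pem verify optional no-ca-names
    default_backend bk_app
```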



Re: New feature request

2017-05-30 Thread Pavlos Parissis
On 05/30/2017 11:56 AM, Willy Tarreau wrote:
> On Tue, May 30, 2017 at 11:04:35AM +0200, Pavlos Parissis wrote:
>> On 05/29/2017 02:58 PM, John Dison wrote:
>>> Hello,
>>>
>>> in ROADMAP I see:
>>> - spare servers : servers which are used in LB only when a minimum farm
>>> weight threshold is not satisfied anymore. Useful for inter-site LB with
>>> local pref by default.
>>>
>>>
>>> Is it possible to push this item priority to get it done for 1.8 please?  
>>> It looks like it should not require major code refactoring, just another LB 
>>> scheme.
>>>
>>> What I want to achieve is an ability to route request to "local" pool until 
>>> is get some
>>> pre-defined maximum load, and route extra request to "remote" pool of 
>>> servers.
>>>
>>> Thanks in advance.
>>>
>>
>>
>> +1 as I also find it very useful. But I am afraid it is too late for 1.8.
> 
> I'd love to have it as well for the same reasons. I think by now it
> shouldn't be too complicated to implement anymore, but all the usual
> suspects are busy on more important devs. I'm willing to take a look
> at it before 1.8 is released if we're in time with everything planned,
> but not more. However if someone wants to give it a try and doesn't
> need too much code review (which is very time consuming), I think this
> could get merged if the impact on existing code remains low (otherwise
> postponed to 1.9-dev).
> 
> In the mean time it's quite possible to achieve something more or less
> similar using two backends, one with the local servers, one with all
> servers, and to only use the second backend when the first one is full.
> It's not exactly the same, but can sometimes provide comparable results.
> 
> Willy
> 

True. I use the following to achieve it; it also avoids flipping users between
data centers:

# Data center availability logic.
# Based on the destination IP we select the pool.
# NOTE: Destination IP is the public IP of a site and for each data center
# we use different IP address. So, in case we see IP address of dc1
# arriving in dc2 we know that dc is broken
http-request set-header X-Pool
%[str(www.foo.bar)]%[dst,map_ip(/etc/haproxy/dst_ip_dc.map,env(DATACENTER))]
use_backend %[hdr(X-Pool)] if { hdr(X-Pool),nbsrv ge 1 }

# Check for the availability of app in a data center.
# NOTE: Two acl's with the same name produce a logical OR.
acl www.foo.bardc1_down nbsrv(www.foo.bardc1) lt 1
acl www.foo.bardc1_down queue(www.foo.bardc1) ge 1
acl www.foo.bardc2_down nbsrv(www.foo.bardc2) lt 1
acl www.foo.bardc2_down queue(www.foo.bardc2) ge 1
acl www.foo.bardc3_down nbsrv(www.foo.bardc3) lt 1
acl www.foo.bardc3_down queue(www.foo.bardc3) ge 1

# We end up here if the selected pool of a data center is down.
# We don't want to use the all_dc pool as it would flip users between data
# centers, thus we are going to balance traffic across the two remaining
# data centers using a hash against the client IP. Unfortunately, we will
# check again for the availability of the data center, for which we know
# already is down. I should try to figure out a way to somehow dynamically
# know the remaining two data centers, so if dc1 is down then I should
# only check dc2 and dc3.

http-request set-var(req.selected_dc_backup) src,djb2,mod(2)

#Balance if www.foo.bardc1 is down
use_backend www.foo.bardc2 if www.foo.bardc1_down !www.foo.bardc2_down { 
var(req.selected_dc_backup)
eq 0 }
use_backend www.foo.bardc3 if www.foo.bardc1_down !www.foo.bardc3_down { 
var(req.selected_dc_backup)
eq 1 }

#Balance if www.foo.bardc2 is down
use_backend www.foo.bardc1 if www.foo.bardc2_down !www.foo.bardc1_down { 
var(req.selected_dc_backup)
eq 0 }
use_backend www.foo.bardc3 if www.foo.bardc2_down !www.foo.bardc3_down { 
var(req.selected_dc_backup)
eq 1 }

#Balance if www.foo.bardc3 is down
use_backend www.foo.bardc1 if www.foo.bardc3_down !www.foo.bardc1_down { 
var(req.selected_dc_backup)
eq 0 }
use_backend www.foo.bardc2 if www.foo.bardc3_down !www.foo.bardc2_down { 
var(req.selected_dc_backup)
eq 1 }

# If two data centers are down, then for simplicity just use the all_dc pool
default_backend www.foo.barall_dc
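
As an aside, the "src,djb2,mod(2)" converter chain above hashes the client
address and reduces it to one of two buckets. A plain Python re-implementation
of the classic djb2 hash (an illustration of the idea only, not HAProxy's
exact code path) looks like this:

```python
def djb2(data: bytes) -> int:
    """Classic djb2 hash: h = h * 33 + byte, seeded with 5381."""
    h = 5381
    for b in data:
        h = (h * 33 + b) & 0xFFFFFFFF  # keep the value to 32 bits
    return h

# A client IPv4 address as 4 raw bytes, e.g. 192.0.2.7 (made-up example)
ip = bytes([192, 0, 2, 7])
bucket = djb2(ip) % 2  # picks one of the two surviving data centers
```

Since the hash depends only on the source address, a given client always
lands in the same bucket, which is exactly the stickiness wanted here.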

Cheers,
Pavlos



signature.asc
Description: OpenPGP digital signature


Re: New feature request

2017-05-30 Thread Willy Tarreau
On Tue, May 30, 2017 at 11:04:35AM +0200, Pavlos Parissis wrote:
> On 05/29/2017 02:58 PM, John Dison wrote:
> > Hello,
> > 
> > in ROADMAP I see:
> > - spare servers : servers which are used in LB only when a minimum farm
> > weight threshold is not satisfied anymore. Useful for inter-site LB with
> > local pref by default.
> > 
> > 
> > Is it possible to push this item priority to get it done for 1.8 please?  
> > It looks like it should not require major code refactoring, just another LB 
> > scheme.
> > 
> > What I want to achieve is the ability to route requests to a "local" pool
> > until it reaches some pre-defined maximum load, and route extra requests
> > to a "remote" pool of servers.
> > 
> > Thanks in advance.
> > 
> 
> 
> +1 as I also find it very useful. But I am afraid it is too late for 1.8.

I'd love to have it as well for the same reasons. I think by now it
shouldn't be too complicated to implement anymore, but all the usual
suspects are busy on more important devs. I'm willing to take a look
at it before 1.8 is released if we're in time with everything planned,
but not more. However if someone wants to give it a try and doesn't
need too much code review (which is very time consuming), I think this
could get merged if the impact on existing code remains low (otherwise
postponed to 1.9-dev).

In the mean time it's quite possible to achieve something more or less
similar using two backends, one with the local servers, one with all
servers, and to only use the second backend when the first one is full.
It's not exactly the same, but can sometimes provide comparable results.

Willy
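
For the record, the two-backend workaround Willy describes might be sketched
like this (a rough illustration only; the backend names, addresses, maxconn
values and the connslots threshold are invented here):

```haproxy
backend be_local
    # local servers only, each capped so that "full" is well defined
    server local1 10.0.0.1:80 check maxconn 100
    server local2 10.0.0.2:80 check maxconn 100

backend be_all
    # local plus remote servers
    server local1  10.0.0.1:80 check maxconn 100
    server local2  10.0.0.2:80 check maxconn 100
    server remote1 10.1.0.1:80 check maxconn 100

frontend fe_www
    bind :80
    # spill over to the wider pool once no local connection slots remain
    use_backend be_all if { connslots(be_local) le 0 }
    default_backend be_local
```

As far as I understand, connslots counts the connection slots still free
across the backend's servers, so requests stay local until both local
servers are saturated.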



Re: New feature request

2017-05-30 Thread Pavlos Parissis
On 05/29/2017 02:58 PM, John Dison wrote:
> Hello,
> 
> in ROADMAP I see:
> - spare servers : servers which are used in LB only when a minimum farm
> weight threshold is not satisfied anymore. Useful for inter-site LB with
> local pref by default.
> 
> 
> Is it possible to push this item priority to get it done for 1.8 please?  It 
> looks like it should not require major code refactoring, just another LB 
> scheme.
> 
> What I want to achieve is the ability to route requests to a "local" pool
> until it reaches some pre-defined maximum load, and route extra requests to
> a "remote" pool of servers.
> 
> Thanks in advance.
> 


+1 as I also find it very useful. But I am afraid it is too late for 1.8.

Cheers,
Pavlos



signature.asc
Description: OpenPGP digital signature


New feature request

2017-05-29 Thread John Dison
Hello,

in ROADMAP I see:
- spare servers : servers which are used in LB only when a minimum farm
weight threshold is not satisfied anymore. Useful for inter-site LB with
local pref by default.


Is it possible to push this item priority to get it done for 1.8 please?  It 
looks like it should not require major code refactoring, just another LB scheme.

What I want to achieve is the ability to route requests to a "local" pool until
it reaches some pre-defined maximum load, and route extra requests to a
"remote" pool of servers.

Thanks in advance.



Re: Feature request: routing a TCP stream based on Cipher Suites in a TLS ClientHello

2017-02-24 Thread Lukas Tribus

Hi,


Am 24.02.2017 um 08:29 schrieb Pavlos Parissis:


That means users of RedHat 7, which comes with openssl 1.0.1, can't use this
functionality!


Yes; the affected functionality is using both RSA and ECC certificates at the
same time, and only if you stay on vanilla openssl 1.0.1 (you could link
haproxy against a static openssl 1.0.2 build).





Is this because the openssl 1.0.1 version doesn't support ECC
certificates?


No, openssl 1.0.1 supports ECC certificates, and you can use ECC certificates
in haproxy on RedHat 7 just fine; what doesn't work is the "multi cert" mode,
when using *both* RSA and ECC certificates at the same time.

Verify whether your use case still warrants RSA certificates. If not, just
stick to ECC only and openssl 1.0.1/RedHat 7 will be enough.


cheers,
Lukas




Re: Feature request: routing a TCP stream based on Cipher Suites in a TLS ClientHello

2017-02-23 Thread Pavlos Parissis
On 23/02/2017 07:38 μμ, Lukas Tribus wrote:
> Hi,
> 
> Am 23.02.2017 um 04:02 schrieb James Brown:
>> Unfortunately, that feature only works with OpenSSL 1.0.2 (which,
>> incidentally, would be a good thing to note in the documentation)...
> 
> Good point; I did not remember this either ... we have to fix the docs.
> 
> 
> Lukas
> 

That means users of RedHat 7, which comes with openssl 1.0.1, can't use this
functionality! Is this because the openssl 1.0.1 version doesn't support ECC
certificates?

Cheers,
Pavlos




signature.asc
Description: OpenPGP digital signature


Re: Feature request: routing a TCP stream based on Cipher Suites in a TLS ClientHello

2017-02-23 Thread Lukas Tribus

Hi,

Am 23.02.2017 um 04:02 schrieb James Brown:

Unfortunately, that feature only works with OpenSSL 1.0.2 (which,
incidentally, would be a good thing to note in the documentation)...


Good point; I did not remember this either ... we have to fix the docs.


Lukas



Re: Feature request: routing a TCP stream based on Cipher Suites in a TLS ClientHello

2017-02-22 Thread James Brown
Unfortunately, that feature only works with OpenSSL 1.0.2 (which,
incidentally, would be a good thing to note in the documentation)...

On Wed, Feb 22, 2017 at 4:39 PM, Lukas Tribus  wrote:

> Hello James,
>
>
> Am 23.02.2017 um 01:11 schrieb James Brown:
>
>> Right now, the "best" way I'm aware of to serve both an RSA and an ECDSA
>> certificate on the same IP to different clients is to use req.ssl_ec_ext <
>> http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.5-req.ssl_ec_ext>
>> to determine if a set of supported elliptic curves was passed in the
>> ClientHello.
>>
>
> No, you don't have to do this anymore.
>
> Forget the TCP frontend with req.ssl_ec_ext, you can configure multiple
> cert types
> directly as per [1].
>
> It's as simple as naming the actual files "example.pem.rsa" and
> "example.pem.ecdsa" and pointing to them by their base name
> "ssl crt example.pem".
>
>
> Regards,
> Lukas
>
> [1] http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.1-crt
>



-- 
James Brown
Engineer


Re: Feature request: routing a TCP stream based on Cipher Suites in a TLS ClientHello

2017-02-22 Thread Lukas Tribus

Hello James,


Am 23.02.2017 um 01:11 schrieb James Brown:

Right now, the "best" way I'm aware of to serve both an RSA and an ECDSA
certificate on the same IP to different clients is to use req.ssl_ec_ext
to determine if a set of supported elliptic curves was passed in the
ClientHello.


No, you don't have to do this anymore.

Forget the TCP frontend with req.ssl_ec_ext, you can configure multiple 
cert types

directly as per [1].

It's as simple as naming the actual files "example.pem.rsa" and
"example.pem.ecdsa" and pointing to them by their base name
"ssl crt example.pem".


Regards,
Lukas

[1] http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.1-crt



Feature request: routing a TCP stream based on Cipher Suites in a TLS ClientHello

2017-02-22 Thread James Brown
Right now, the "best" way I'm aware of to serve both an RSA and an ECDSA
certificate on the same IP to different clients is to use req.ssl_ec_ext
to determine if a set of supported elliptic curves was passed in the
ClientHello. Unfortunately, if clients disable ECDSA cipher suites (either
manually or through poor defaults), the EC extension block will still be
present, but the user will be unable to negotiate a handshake with an
ECDSA-using server. It would be nice to be able to direct users with no
ECDSA cipher suites to the RSA backend instead.

It would be nice to have a set of booleans available at the same level as
req.ssl_ec_ext for determining if various families of cipher suites are
present. I envision something like req.ssl_rsa_supported,
req.ssl_dsa_supported, and req.ssl_ecdsa_supported. I suppose we could also
just add a fetcher that exposes the entire client cipher-suite list as a
string and then use a regexp to determine if, e.g., the string "-ECDSA"
occurs in that list, but that seems somewhat failure-prone.
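
The idea behind a req.ssl_ecdsa_supported-style fetch boils down to
intersecting the cipher-suite code points offered in the ClientHello with the
set of ECDSA-authenticated ones. A sketch of that check (parsing the raw
ClientHello itself is out of scope here; the constants are a small
illustrative subset taken from the IANA cipher-suite registry):

```python
# A few ECDSA-authenticated suite code points from the IANA TLS registry
# (an illustrative subset, not an exhaustive list).
ECDSA_SUITES = {
    0xC009,  # TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
    0xC00A,  # TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
    0xC02B,  # TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    0xC02C,  # TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
}

def supports_ecdsa(offered):
    """True if any cipher suite offered in the ClientHello uses ECDSA auth."""
    return any(cs in ECDSA_SUITES for cs in offered)

# A client offering only RSA suites would be routed to the RSA backend:
rsa_only = [0x002F, 0x0035]  # TLS_RSA_WITH_AES_128/256_CBC_SHA
```

This avoids the brittleness of string-matching suite names: code points are
stable, while presentation names vary between libraries.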

Thoughts?

-- 
James Brown
Engineer


Feature-request: tcp-request connection requeue

2017-02-21 Thread Charlie Elgholm
Hi!

Sorry for sending this to the mailing list directly; perhaps it's the wrong
forum. I did not find any other "feature request" link on the haproxy site.
If this is the wrong forum, please let me know.

I would love it if there was a "requeue" action for "tcp-request connection",
which just puts this request "last" in the queue, to be processed again at a
later time.

That way, we can easily handle NAT-connections, in a fairly "safe" way
without having to reject them completely if they're playing nice.

Like this:
tcp-request connection requeue if { src_conn_cur ge 3 }
tcp-request connection reject if { src_conn_cur ge 20 }

Also, a src_sess_cur variable showing the number of currently
backend-handled sessions from the source IP-address would be lovely,
instead of just src_conn_cur - which could be in the frontend-queue, right?
Then we could do "tcp-request connection requeue if { src_sess_cur ge 3 }"
instead.

I am sorry if this already can be easily accomplished, and I'm too stupid
to understand it.

What I am aiming for is being able to configure NAT clients (companies) in a
very easy and safe way. In my examples above we handle a maximum of 3
requests in the backend from a single IP, and keep requeueing new requests
from that IP, which will eventually be handled at a future point. If we get a
lot of connections from the IP, say 20, well, OK, drop them.

Many users behind a single NAT-IP-address will probably see our site as
normally working, even if they are opening a lot of connections. They will
just be queued (the site will appear "slower" for them). If we drop a bunch
of requests, some users will perhaps (depending on how they have their
browser/proxy configured) see the site as broken.

-- 
Regards
Charlie Elgholm
Brightly AB


[Feature Request] Expose UDP socket from luasocket

2017-02-03 Thread Dave Marion
Any chance of exposing the UDP functionality[1] from lua socket?


[1] http://w3.impa.br/~diego/software/luasocket/udp.html



Re: Feature Request for log stdout ...

2016-02-28 Thread Aleksandar Lazic

Hi.

Am 18-02-2016 15:22, schrieb Willy Tarreau:

Hi Aleks,

On Thu, Feb 18, 2016 at 02:53:29PM +0100, Aleksandar Lazic wrote:


[snipp]


For openshift I will try to use 2 container in 1 pod.

If there any interests I can write here if this works ;-)


Sure, please report anyway.


You can find my solution at this repo

https://github.com/git001/haproxy

I use socklog ( http://smarden.org/socklog/ ) instead of socat.
socklog is in Debian but not in CentOS/RHEL, so I just build it in the
Dockerfile.


BR Aleks



Re: Feature Request for log stdout ...

2016-02-18 Thread Aleksandar Lazic

Hi Bryan.

Am 18-02-2016 21:18, schrieb Bryan Talbot:

Sorry I'm a bit late to this party but when running in a container it's
also easy to configure haproxy to log to a unix socket and bind mount
that socket to the host.

in haproxy.cfg


log /dev/log local2


Then when launching the container an option like "-v /var/log:/var/log"
works quite well to get container syslogs to the host.


Well this way is not possible in openshift due to the fact that the pods 
are not running as root!



-Bryan

On Thu, Feb 18, 2016 at 6:22 AM, Willy Tarreau  wrote:


Hi Aleks,

On Thu, Feb 18, 2016 at 02:53:29PM +0100, Aleksandar Lazic wrote:

But this moves just the stdout handling to other tools and does not
solve the problem with blocking handling of std*, as far as I have
understood right.


Yes it does because if the logging daemon blocks, logs are simply lost
on the UDP socket between haproxy and the daemon without blocking
haproxy.


It also 'violates' the best practice of docker.





https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run-only-one-process-per-container


Well it's written "in almost all cases". Otherwise you would not even
be allowed to use nbproc or the systemd wrapper. If you consider your
daemon as the log-dedicated process, it's OK :-)


Okay this could be solved with the linking as described in the link.

For openshift I will try to use 2 container in 1 pod.

If there any interests I can write here if this works ;-)


Sure, please report anyway.

Cheers,
Willy




Re: Feature Request for log stdout ...

2016-02-18 Thread Bryan Talbot
Sorry I'm a bit late to this party but when running in a container it's
also easy to configure haproxy to log to a unix socket and bind mount that
socket to the host.

in haproxy.cfg

log /dev/log local2


Then when launching the container an option like "-v /var/log:/var/log"
works quite well to get container syslogs to the host.

-Bryan



On Thu, Feb 18, 2016 at 6:22 AM, Willy Tarreau  wrote:

> Hi Aleks,
>
> On Thu, Feb 18, 2016 at 02:53:29PM +0100, Aleksandar Lazic wrote:
> > But this moves just the stdout handling to other tools and does not
> > solve the problem with blocking handling of std*, as far as I have
> > understood right.
>
> Yes it does because if the logging daemon blocks, logs are simply lost
> on the UDP socket between haproxy and the daemon without blocking
> haproxy.
>
> > It also 'violates' the best practice of docker.
> >
> >
> https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run-only-one-process-per-container
>
> Well it's written "in almost all cases". Otherwise you would not even
> be allowed to use nbproc or the systemd wrapper. If you consider your
> daemon as the log-dedicated process, it's OK :-)
>
> > Okay this could be solved with the linking as described in the link.
> >
> > For openshift I will try to use 2 container in 1 pod.
> >
> > If there any interests I can write here if this works ;-)
>
> Sure, please report anyway.
>
> Cheers,
> Willy
>
>
>


Re: Feature Request for log stdout ...

2016-02-18 Thread Willy Tarreau
Hi Aleks,

On Thu, Feb 18, 2016 at 02:53:29PM +0100, Aleksandar Lazic wrote:
> But this moves just the stdout handling to other tools and does not 
> solve the problem with blocking handling of std*, as far as I have 
> understood right.

Yes it does because if the logging daemon blocks, logs are simply lost
on the UDP socket between haproxy and the daemon without blocking
haproxy.

> It also 'violates' the best practice of docker.
> 
> https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run-only-one-process-per-container

Well it's written "in almost all cases". Otherwise you would not even
be allowed to use nbproc or the systemd wrapper. If you consider your
daemon as the log-dedicated process, it's OK :-)

> Okay this could be solved with the linking as described in the link.
> 
> For openshift I will try to use 2 container in 1 pod.
> 
> If there any interests I can write here if this works ;-)

Sure, please report anyway.

Cheers,
Willy




Re: Feature Request for log stdout ...

2016-02-18 Thread Alberto



On 02/18/2016 07:53 AM, Aleksandar Lazic wrote:


Thanks for answers and suggestions.

But this moves just the stdout handling to other tools and does not 
solve the problem with blocking handling of std*, as far as I have 
understood right.


haproxy will not block. If your logging system hangs it will
drop the UDP packets.

I've seen it and had discussions before when people use TCP
for logging systems. A hiccup in the logging systems brings
the whole infrastructure down.


It also 'violates' the best practice of docker.


Your application should be more important than docker's best
practices. These guides are suggestions not mandates.

Having said that, you should run your logging system in its own
container.

A process in this context is a micro-service, not a single kernel
process (i.e. a pid).

Hope this helps,

Alberto




Re: Feature Request for log stdout ...

2016-02-18 Thread Aleksandar Lazic

Hi.

Am 18-02-2016 11:47, schrieb Conrad Hoffmann:

Two more cents from my side:

socklog [1] also works pretty well...

[1] http://smarden.org/socklog/

Conrad

On 02/18/2016 11:28 AM, Baptiste wrote:

On Thu, Feb 18, 2016 at 10:57 AM, Willy Tarreau  wrote:

Hi Aleks,

On Wed, Feb 17, 2016 at 04:30:06PM +0100, Aleksandar Lazic wrote:

Hi.

how difficult is it to be able to add "log stdout;" to haproxy?


[snipp]


It's been discussed a few times in the past. The response is "no".
It's totally insane to emit logs to a blocking destination. Your
whole haproxy process will run at the speed of the logs consumer
and the log processing will add its latency to the process.



[snipp]



My 2 cents: Some tools may be used for this purpose:

Configure HAProxy to send logs to port 2000, then use:

- socat:
socat -u UDP-RECV:2000 -


[snipp]


- netcat:
netcat -l -k -u 2000


[snipp]

Thanks for answers and suggestions.

But this moves just the stdout handling to other tools and does not 
solve the problem with blocking handling of std*, as far as I have 
understood right.


It also 'violates' the best practice of docker.

https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run-only-one-process-per-container

Okay this could be solved with the linking as described in the link.

For openshift I will try to use 2 container in 1 pod.

If there any interests I can write here if this works ;-)

BR Aleks



Re: Feature Request for log stdout ...

2016-02-18 Thread Willy Tarreau
On Thu, Feb 18, 2016 at 11:47:03AM +0100, Conrad Hoffmann wrote:
> Two more cents from my side:
> 
> socklog [1] also works pretty well...
> 
> [1] http://smarden.org/socklog/

Thanks for the links guys. These should probably be added to the
management doc so that people find them more easily.

willy




Re: Feature Request for log stdout ...

2016-02-18 Thread Conrad Hoffmann
Two more cents from my side:

socklog [1] also works pretty well...

[1] http://smarden.org/socklog/

Conrad

On 02/18/2016 11:28 AM, Baptiste wrote:
> On Thu, Feb 18, 2016 at 10:57 AM, Willy Tarreau  wrote:
>> Hi Aleks,
>>
>> On Wed, Feb 17, 2016 at 04:30:06PM +0100, Aleksandar Lazic wrote:
>>> Hi.
>>>
>>> how difficult is it to be able to add "log stdout;" to haproxy?
>>>
>>> I ask because in some PaaS environments it is difficult to set up a
>>> dedicated user just for haproxy.
>>>
>>> It also fits a little bit better with http://12factor.net/logs
>>
>> It's been discussed a few times in the past. The response is "no".
>> It's totally insane to emit logs to a blocking destination. Your
>> whole haproxy process will run at the speed of the logs consumer
>> and the log processing will add its latency to the process.
>>
>> If one day we implement an asynchronous stream logging task, this
>> could change, but for now we send immediate logs as datagrams in
>> order never to block.
>>
>> To get an idea about what it can look like with blocking logs,
>> simply run "haproxy -d 2>&1 | more" and don't press any key.
>> You'll quickly see that the system continues to accept new
>> connections and that they will randomly freeze at various steps.
>>
>> Regards,
>> Willy
>>
>>
> 
> My 2 cents: Some tools may be used for this purpose:
> 
> Configure HAProxy to send logs to port 2000, then use:
> 
> - socat:
> socat -u UDP-RECV:2000 -
> <133>Feb 18 11:27:02 haproxy[4134]: Proxy f started.
> <133>Feb 18 11:27:02 haproxy[4134]: Proxy b started.
> <133>Feb 18 11:27:02 haproxy[4134]: Proxy stats started.
> <129>Feb 18 11:27:02 haproxy[4134]: Server b/s is DOWN, reason: Layer4
> connection problem, info: "Connection refused", check duration: 0ms. 0
> active and 0 backup servers left. 0 sessions active, 0 requeued, 0
> remaining in queue.
> <128>Feb 18 11:27:02 haproxy[4134]: backend b has no server available!
> 
> - netcat:
> netcat -l -k -u 2000
> <133>Feb 18 11:28:17 haproxy[4303]: Proxy f started.
> <133>Feb 18 11:28:17 haproxy[4303]: Proxy b started.
> <133>Feb 18 11:28:17 haproxy[4303]: Proxy stats started.
> <129>Feb 18 11:28:17 haproxy[4303]: Server b/s is DOWN, reason: Layer4
> connection problem, info: "Connection refused", check duration: 0ms. 0
> active and 0 backup servers left. 0 sessions active, 0 requeued, 0
> remaining in queue.
> <128>Feb 18 11:28:17 haproxy[4303]: backend b has no server available!
> 
> 
> 
> Baptiste
> 

-- 
Conrad Hoffmann
Traffic Engineer

SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany

Managing Director: Alexander Ljung | Incorporated in England & Wales
with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
HRB 110657B



Re: Feature Request for log stdout ...

2016-02-18 Thread Baptiste
On Thu, Feb 18, 2016 at 10:57 AM, Willy Tarreau  wrote:
> Hi Aleks,
>
> On Wed, Feb 17, 2016 at 04:30:06PM +0100, Aleksandar Lazic wrote:
>> Hi.
>>
>> how difficult is it to be able to add "log stdout;" to haproxy?
>>
>> I ask because in some PaaS environments it is difficult to set up a
>> dedicated user just for haproxy.
>>
>> It also fits a little bit better with http://12factor.net/logs
>
> It's been discussed a few times in the past. The response is "no".
> It's totally insane to emit logs to a blocking destination. Your
> whole haproxy process will run at the speed of the logs consumer
> and the log processing will add its latency to the process.
>
> If one day we implement an asynchronous stream logging task, this
> could change, but for now we send immediate logs as datagrams in
> order never to block.
>
> To get an idea about what it can look like with blocking logs,
> simply run "haproxy -d 2>&1 | more" and don't press any key.
> You'll quickly see that the system continues to accept new
> connections and that they will randomly freeze at various steps.
>
> Regards,
> Willy
>
>

My 2 cents: Some tools may be used for this purpose:

Configure HAProxy to send logs to port 2000, then use:

- socat:
socat -u UDP-RECV:2000 -
<133>Feb 18 11:27:02 haproxy[4134]: Proxy f started.
<133>Feb 18 11:27:02 haproxy[4134]: Proxy b started.
<133>Feb 18 11:27:02 haproxy[4134]: Proxy stats started.
<129>Feb 18 11:27:02 haproxy[4134]: Server b/s is DOWN, reason: Layer4
connection problem, info: "Connection refused", check duration: 0ms. 0
active and 0 backup servers left. 0 sessions active, 0 requeued, 0
remaining in queue.
<128>Feb 18 11:27:02 haproxy[4134]: backend b has no server available!

- netcat:
netcat -l -k -u 2000
<133>Feb 18 11:28:17 haproxy[4303]: Proxy f started.
<133>Feb 18 11:28:17 haproxy[4303]: Proxy b started.
<133>Feb 18 11:28:17 haproxy[4303]: Proxy stats started.
<129>Feb 18 11:28:17 haproxy[4303]: Server b/s is DOWN, reason: Layer4
connection problem, info: "Connection refused", check duration: 0ms. 0
active and 0 backup servers left. 0 sessions active, 0 requeued, 0
remaining in queue.
<128>Feb 18 11:28:17 haproxy[4303]: backend b has no server available!



Baptiste
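
Along the same lines as the socat/netcat one-liners above, a tiny Python
receiver can turn the UDP log datagrams into stdout lines (the port and bind
address here are arbitrary, matching Baptiste's port 2000 example):

```python
import socket
import sys

def run_receiver(port=2000):
    """Receive HAProxy syslog datagrams over UDP and echo them to stdout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        data, _addr = sock.recvfrom(65535)  # one syslog message per datagram
        sys.stdout.write(data.decode("utf-8", errors="replace") + "\n")

# Usage: call run_receiver(2000), then point HAProxy at it with
#   log 127.0.0.1:2000 local2
```

Because the transport is UDP, a slow or hung consumer drops datagrams instead
of blocking haproxy, which is exactly the behaviour Willy describes.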



Re: Feature Request for log stdout ...

2016-02-18 Thread Willy Tarreau
Hi Aleks,

On Wed, Feb 17, 2016 at 04:30:06PM +0100, Aleksandar Lazic wrote:
> Hi.
> 
> how difficult is it to be able to add "log stdout;" to haproxy?
> 
> I ask because in some PaaS environments it is difficult to set up a
> dedicated user just for haproxy.
>
> It also fits a little bit better with http://12factor.net/logs

It's been discussed a few times in the past. The response is "no".
It's totally insane to emit logs to a blocking destination. Your
whole haproxy process will run at the speed of the logs consumer
and the log processing will add its latency to the process.

If one day we implement an asynchronous stream logging task, this
could change, but for now we send immediate logs as datagrams in
order never to block.

To get an idea about what it can look like with blocking logs,
simply run "haproxy -d 2>&1 | more" and don't press any key.
You'll quickly see that the system continues to accept new
connections and that they will randomly freeze at various steps.

Regards,
Willy




Feature Request for log stdout ...

2016-02-17 Thread Aleksandar Lazic

Hi.

how difficult is it to be able to add "log stdout;" to haproxy?

I ask because in some PaaS environments it is difficult to set up a
dedicated user just for haproxy.


It also fits a little bit better with http://12factor.net/logs

BR Aleks



Feature Request

2014-10-18 Thread Brent Kennedy
Not sure if this is the right place for this, but I was wondering if a
select-all check box could be added to the statistics page for each section.
Right now, you check off the selection boxes for each server you want to
perform an action on, which is fine.  But if you have 20 (or more) servers
in the list and you want to take 19 down for a code upgrade, you have to
click each box.  I would be really grateful if a select-all box could be
added to the top of each section.  Then I could select that, which would
check all the boxes, and then uncheck the one server (two clicks instead of
19).  It's pretty standard web functionality, but there might be a reason it
was never added, or it was just overlooked, so I thought I would ask.

 

Really liking HAProxy 1.5.4 though; with built-in SSL, things are more
streamlined now!

 

Thanks for everything!

 

Brent Kennedy

 



Re: Feature Request: Maxconn in CSV status

2014-10-02 Thread Willy Tarreau
Hi Kyle,

On Wed, Sep 24, 2014 at 12:54:52PM -0400, Kyle Brandt wrote:
 I just noticed slim (session limit) - this seems like it might be what I'm
 looking for - can anyone confirm that this is all I need to monitor to make
 sure I don't hit this limit?

Yes that's it! You have it for frontends, backends (=fullconn) and
servers (=maxconn).

Hoping this helps,
Willy




Feature Request: Maxconn in CSV status

2014-09-24 Thread Kyle Brandt
Hi All.

After having an outage for our websockets (it doesn't take the site down, just
some functionality) due to hitting maxconn, I realized I need to be
monitoring the current connections as a percentage of max connections. I
already gather stats from the stats CSV page, but there doesn't seem to be
an entry for the maxconn setting. The regular stats page has this, but it
seems to be global only.

What do you all think about making it a field in the csv file for
frontends, servers, listen etc? That would make this easy to monitor.

Thanks,
Kyle
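
Following up on Willy's pointer to slim: a rough sketch of the monitoring
Kyle describes, assuming the CSV export's "scur" and "slim" columns (the
sample below is a made-up subset of the real columns, which are far more
numerous):

```python
import csv
import io

def conn_usage(stats_csv):
    """Yield (proxy, service, scur/slim) from HAProxy's CSV stats export.

    Rows with an empty 'slim' (no session limit configured) are skipped.
    """
    # The header line starts with "# "; strip that so DictReader sees
    # plain column names.
    for row in csv.DictReader(io.StringIO(stats_csv.lstrip("# "))):
        if row.get("slim"):
            yield row["pxname"], row["svname"], int(row["scur"]) / int(row["slim"])

# Hypothetical extract of a stats CSV (column subset for illustration):
sample = "# pxname,svname,scur,slim\nwww,FRONTEND,450,500\n"
# list(conn_usage(sample)) would report www/FRONTEND at 0.9 of its limit.
```

Alerting when the ratio crosses, say, 0.8 would have flagged the websocket
frontend before it hit its maxconn.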


Re: Feature Request: Maxconn in CSV status

2014-09-24 Thread Kyle Brandt
I just noticed slim (session limit) - this seems like it might be what I'm
looking for - can anyone confirm that this is all I need to monitor to make
sure I don't hit this limit?

On Wed, Sep 24, 2014 at 12:46 PM, Kyle Brandt k...@stackexchange.com
wrote:

 Hi All.

 After having an outage for our websockets (doesn't take the site down,
 just some functionality) due to hitting maxconn, I realized I need to be
 monitoring the current connections as a percentage of max connections. I
 already gather stats from the stats csv page, but it doesn't seem there is
 an entry for the maxconn setting. The regular stats page has this, but it
 seems only globally.

 What do you all think about making it a field in the csv file for
 frontends, servers, listen etc? That would make this easy to monitor.

 Thanks,
 Kyle



Feature request: redispatch-on-5xx

2014-06-23 Thread Dmitry Sivachenko
Hello!

One more thing which can be very useful in some setups: if a backend server
returns an HTTP 5xx status code, it would be nice to have the ability to retry
the same request on another server before reporting an error to the client
(when you know for sure the same request can be sent multiple times without
side effects).

Is it possible to make some configuration switch to allow such retries?

Thanks.


Re: Feature request: redispatch-on-5xx

2014-06-23 Thread Willy Tarreau
Hi Dmitry,

On Mon, Jun 23, 2014 at 06:16:28PM +0400, Dmitry Sivachenko wrote:
 Hello!
 
 One more thing which can be very useful in some setups: if a backend server
 returns an HTTP 5xx status code, it would be nice to have the ability to retry
 the same request on another server before reporting an error to the client
 (when you know for sure the same request can be sent multiple times without
 side effects).
 
 Is it possible to make some configuration switch to allow such retries?

No it is not because if the server has responded, it means that haproxy does
not have the request anymore. That's precisely one of the difficulties of
implementing server-side multiplexing.

Willy




Re: Feature Request: Extract IP from TCP Options Header

2014-05-09 Thread Jim Rippon
 

Hi Willy,

On 2014-05-07 10:54, Willy Tarreau wrote:

> Hi Jim,
>
> On Fri, May 02, 2014 at 04:13:40PM +0100, Jim Rippon wrote:
>> Hi all, As mentioned on the IRC channel today, I have a requirement to
>> extract an end user's IP address from the TCP Options Header (in my case
>> with key 34 or 0x22, but there are other similar implementations using
>> 28 or 0x1C). This header is being added by some Application Delivery
>> Optimisation solutions by providers such as Akamai (with their IPA
>> product line) and CDNetworks (with their DNA product) though there are
>> likely others out there hijacking the TCP headers this way.
>
> Cool, I'm happy that some people start to use TCP options for this, it
> could drive forward improved APIs in various operating systems to help
> retrieve these options. We designed the PROXY protocol precisely as an
> alternative for the lack of ability to access these.
>
>> Because the options headers won't be forwarded by haproxy to the
>> back-end servers, the most useful way to deal with this for our http
>> services would be to extract the IP address encoded and place it into
>> either the X-Forwarded-For or X-Real-IP headers, so that it can be
>> understood and handled by the upstream servers. Sample implementations
>> can be found in documentation from F5 [1] and Citrix [2] below. In the
>> TCP SYN packet (and some later packets, but always in the initial SYN)
>> we see the option at the end of the options field like so in our packet
>> capture: 22 06 ac 10 05 0a. Broken down, we have: 22 = TCP Options
>> Header key (34 in this case with CDNetworks); 06 = field size - this
>> appears to include the key, this size field and the option value;
>> ac 10 05 0a = the IP address of the end user - faked in this example to
>> private address 172.16.5.10. This would be hugely useful functionality -
>> it would allow us to avoid the expense of high-end load balancer devices
>> and licenses to support testing of our CDN implementations before going
>> into production.
>
> Sure it would be great, and even better if we could set them. The only
> problem is that there is no way to retrieve this information from
> userland.
>
> The option is present in the incoming SYN packet, is not recognized by
> the kernel which skips it, and as soon as the system responds with the
> SYN/ACK, the information is lost. Are you aware of kernel patches to
> retrieve these options? If at least one of them is widely deployed, we
> could consider implementing support for it, just like we did in the past
> with the cttproxy or tcpsplicing patches.
>
> Best regards,
> Willy

The closest I have come so far is to have an NFQUEUE hook in iptables on my
Linux servers which can extract the details from the raw packets. I don't
see a way I could use this on its own, however, to mangle the packets and
insert an http header, as changing the size of the payload will lead to the
TCP sequence numbers becoming incorrect when the server replies.

My simple cli script using Python and SCAPY can be found here:
http://bit.ly/1kSeZV7

My NFQUEUE proof-of-concept script can be found here, though this has the
known flaw that it breaks tcp sequencing: http://bit.ly/1kSeGtu

Don't know if that would help anyone - I don't really know where to go next
with this; presumably a KV store of source ip/port vs the IP from the
header, populated by the nfqueue handler, that can be referred to when
populating the XFF field?

Jim

Re: Feature Request: Extract IP from TCP Options Header

2014-05-07 Thread Willy Tarreau
Hi Jim,

On Fri, May 02, 2014 at 04:13:40PM +0100, Jim Rippon wrote:
 Hi all, 
 
 As mentioned on the IRC channel today, I have a
 requirement to extract an end user's IP address from the TCP Options
 Header (in my case with key 34 or 0x22, but there are other similar
 implementations using 28 or 0x1C). This header is being added by some
 Application Delivery Optimisation solutions by providers such as Akamai
 (with their IPA product line) and CDNetworks (with their DNA product)
 though there are likely others out there hijacking the TCP headers this
 way. 

Cool, I'm happy that some people start to use TCP options for this, it
could drive forward improved APIs in various operating systems to help
retrieve these options. We designed the PROXY protocol precisely as an
alternative for the lack of ability to access these.

 Because the options headers won't be forwarded by haproxy to the
 back-end servers, the most useful way to deal with this for our http
 services would be to extract the IP address encoded and place it into
 either the X-Forwarded-For or X-Real-IP headers, so that it can be
 understood and handled by the upstream servers. 
 
 Sample implementations can be found in documentation from F5 [1] and
 Citrix [2] below. In the TCP SYN packet (and some later packets, but
 always in the initial SYN) we see the option at the end of the options
 header field like so in our packet capture:

 22 06 ac 10 05 0a

 Broken down, we have:

 22 = TCP Options Header key (34 in this case with CDNetworks)
 06 = Field size - this appears to include the key, this size field
 and the option value
 ac 10 05 0a = the IP address of the end-user - faked in this example
 to private address 172.16.5.10

 This would be hugely useful functionality - it would allow us to avoid
 the expense of high-end load balancer devices and licenses to support
 testing of our CDN implementations before going into production.

Sure it would be great, and even better if we could set them. The only
problem is that there is no way to retrieve these information from userland.

The option is present in the incoming SYN packet, is not recognized by the
kernel which skips it, and as soon as the system responds with the SYN/ACK,
the information is lost. Are you aware of kernel patches to retrieve these
options ? If at least one of them is widely deployed, we could consider
implementing support for it, just like we did in the past with the cttproxy
or tcpsplicing patches.

Best regards,
Willy




Feature Request: Reset down time on 'clear counters all'

2014-05-07 Thread Dimitris Baltas
Hello,

I am frequently running the 'show stat' command and use the CSV-formatted
output in a custom monitoring tool.
Running 'clear counters all' resets all numbers except the down time of
servers and services.

I understand that down time is a critical element, but given that
down_time does reset anyway when HAProxy reloads or restarts, it would
make sense to also reset it on 'clear counters all'.
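Until such an option exists, a monitoring tool can emulate the reset on the client side by snapshotting downtimes when counters are cleared and reporting deltas afterwards. A minimal sketch - the column subset is illustrative only, as real 'show stat' output carries ~80 fields:

```python
import csv
import io

def downtimes(show_stat_csv):
    """Map (proxy, server) -> downtime in seconds from 'show stat' CSV
    output. Only three of the many columns are needed here."""
    # The CSV header line starts with "# ", which csv would otherwise
    # treat as part of the first field name.
    reader = csv.DictReader(io.StringIO(show_stat_csv.lstrip("# ")))
    return {(r["pxname"], r["svname"]): int(r["downtime"]) for r in reader}

# Tiny illustrative sample; a real dump has many more fields and rows.
SAMPLE = "# pxname,svname,downtime\nbk,srv1,120\nbk,FRONTEND,0\n"
baseline = downtimes(SAMPLE)                        # snapshot at "clear" time
later = downtimes(SAMPLE.replace("120", "150"))     # a later poll
delta = {k: later[k] - baseline[k] for k in later}  # downtime since "clear"
print(delta[("bk", "srv1")])  # 30
```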

Best,
Dimitris Baltas


Dimitris Baltas
R&D Manager

Address: 4, Karageorgi Servias str, 105 62, Athens, Greece
Reservations: 14824 (0,37/min land line - 0,46/min mobile)
Phone: +30 211 1079680
Fax: +30 210 7299664
Email: dbal...@travelplanet24.gr
Website: www.travelplanet24.com







Feature Request: Extract IP from TCP Options Header

2014-05-02 Thread Jim Rippon
 

Hi all, 

As mentioned on the IRC channel today, I have a
requirement to extract an end user's IP address from the TCP Options
Header (in my case with key 34 or 0x22, but there are other similar
implementations using 28 or 0x1C). This header is being added by some
Application Delivery Optimisation solutions by providers such as Akamai
(with their IPA product line) and CDNetworks (with their DNA product)
though there are likely others out there hijacking the TCP headers this
way. 

Because the options headers won't be forwarded by haproxy to the
back-end servers, the most useful way to deal with this for our http
services would be to extract the IP address encoded and place it into
either the X-Forwarded-For or X-Real-IP headers, so that it can be
understood and handled by the upstream servers. 

Sample implementations can be found in documentation from F5 [1] and
Citrix [2] below. In the TCP SYN packet (and some later packets, but
always in the initial SYN) we see the option at the end of the options
header field like so in our packet capture:

22 06 ac 10 05 0a

Broken down, we have:

22 = TCP Options Header key (34 in this case with CDNetworks)
06 = Field size - this appears to include the key, this size field and
the option value
ac 10 05 0a = the IP address of the end-user - faked in this example
to private address 172.16.5.10

This would be hugely useful functionality - it would allow us to avoid
the expense of high-end load balancer devices and licenses to support
testing of our CDN implementations before going into production.

Regards, 

Jim Rippon


1:
https://devcentral.f5.com/articles/accessing-tcp-options-from-irules


2:
http://blogs.citrix.com/2012/08/31/using-tcp-options-for-client-ip-insertion/
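As a sketch of the decoding such a feature would need, following the kind/length/value layout in the capture above (the option kind 0x22 is provider-specific, and real code would still have to obtain the raw SYN somehow, which is exactly the hard part discussed in this thread):

```python
import ipaddress

def client_ip_from_tcp_options(blob, wanted_kind=0x22):
    """Walk a raw TCP options field and return the IPv4 address carried
    in the option of the given kind, or None if it is absent."""
    i = 0
    while i < len(blob):
        kind = blob[i]
        if kind == 0:         # EOL: end of option list
            return None
        if kind == 1:         # NOP: single padding byte, no length field
            i += 1
            continue
        length = blob[i + 1]  # length covers kind + length + value
        if kind == wanted_kind and length == 6:
            return str(ipaddress.IPv4Address(blob[i + 2:i + 6]))
        i += length
    return None

# The capture from the mail: 22 06 ac 10 05 0a
print(client_ip_from_tcp_options(bytes.fromhex("2206ac10050a")))  # 172.16.5.10
```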


 

Feature request bind add fib option

2014-01-17 Thread Ge Jin
Hi, all!

Referenced http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#bind

"bind" has a lot of bind options.

Can you add another option, setfib=<number>, for us FreeBSD users? Thanks!
We have some situations which require it.

setfib=<number> : this parameter sets the associated routing table, FIB
(the SO_SETFIB option), for the listening socket on FreeBSD.



Re: Feature request: TOS based ACL.

2014-01-06 Thread Willy Tarreau
Hi guys,

On Thu, Jan 02, 2014 at 04:56:14PM +0100, Lukas Tribus wrote:
  acl bad_guys tos-acl 0x20
  block if bad_guys
 
 Ah ok, you want to match incoming TOS.
 
 That is indeed not supported currently.
 
 
 Also, not all *nixes provide an API for this. Linux has
 IP_RECVTOS/IPV6_RECVTCLASS to do it, but BSD hasn't, also see:
 http://stackoverflow.com/questions/1029849/what-is-the-bsd-or-portable-way-to-get-tos-byte-like-ip-recvtos-from-linux
 
 
 Not sure what effort it would be to implement this.

I just checked and it's really not worth it for several reasons :

  - there can be as many TOS values as there are packets. On load balanced
    links, it's very likely that half the packets may arrive with one TOS and
    half with another one (and maybe a third one for the SYN).

  - I found no way to *query* the last known TOS seen on a received packet
for an existing socket without transfering data ;

  - it requires that we change *all* recv() calls for the slower recvmsg()
and always enable the option to retrieve this TOS in responses ; and
we'd need to store these values somewhere in the connection just for
the hypothetical case it would be used by some ACLs.

We could still check if it's possible to use recvmsg(MSG_PEEK) out of the
data stream, but I doubt it since we should get a standard EAGAIN response
because there are no more data pending.

Also, I would not rely much on TOS marking for security purposes, considering
that anyone along the path may modify it, I'd fear a lot of false positives...

Regards,
Willy




Re: Feature request: TOS based ACL.

2014-01-02 Thread Ge Jin
Hi, all!

What I want to do is use an ACL to capture the TOS field on
http-request traffic.

On Thu, Jan 2, 2014 at 10:29 AM, Ge Jin altman87...@gmail.com wrote:
 Hi, Lukas!

 That's great, but could there be something like this?

 acl bad_guys tos-acl   0x20
 block if bad_guys

 On Tue, Dec 31, 2013 at 7:14 PM, Lukas Tribus luky...@hotmail.com wrote:
 Hi,


 Could haproxy add a tos based acl? 
 http://en.wikipedia.org/wiki/Type_of_service
 We want to do some action on the traffic based on the tos field.


 Should work already with something like this:
  acl local_net src 192.168.0.0/16
  http-response set-tos 46 if local_net

 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-http-response



 Regards,

 Lukas



RE: Feature request: TOS based ACL.

2014-01-02 Thread Lukas Tribus
Hi,


 That's great, but could there be something like this?

 acl bad_guys tos-acl 0x20
 block if bad_guys

Ah ok, you want to match incoming TOS.

That is indeed not supported currently.


Also, not all *nixes provide an API for this. Linux has
IP_RECVTOS/IPV6_RECVTCLASS to do it, but BSD hasn't, also see:
http://stackoverflow.com/questions/1029849/what-is-the-bsd-or-portable-way-to-get-tos-byte-like-ip-recvtos-from-linux


Not sure what effort it would be to implement this.



Regards,

Lukas 


Re: Feature request: TOS based ACL.

2014-01-02 Thread k simon

From 'man ip' on a FreeBSD box:

If the IP_RECVTTL option is enabled on a SOCK_DGRAM socket, the
recvmsg(2) call will return the IP TTL (time to live) field for a UDP
datagram. The msg_control field in the msghdr structure points to a
buffer that contains a cmsghdr structure followed by the TTL. The
cmsghdr fields have the following values:

cmsg_len = CMSG_LEN(sizeof(u_char))
cmsg_level = IPPROTO_IP
cmsg_type = IP_RECVTTL

If the IP_RECVTOS option is enabled on a SOCK_DGRAM socket, the
recvmsg(2) call will return the IP TOS (type of service) field for a UDP
datagram. The msg_control field in the msghdr structure points to a
buffer that contains a cmsghdr structure followed by the TOS. The
cmsghdr fields have the following values:

cmsg_len = CMSG_LEN(sizeof(u_char))
cmsg_level = IPPROTO_IP
cmsg_type = IP_RECVTOS
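On Linux, the IP_RECVTOS mechanism mentioned earlier in the thread can be exercised from userland with recvmsg(). A sketch under the assumption of Linux semantics (constant value 13 where the socket module does not expose it, and the TOS byte reported back with cmsg_type IP_TOS):

```python
import socket

# IP_RECVTOS is 13 on Linux; the Python socket module does not always
# expose the constant, so fall back to the raw value (an assumption).
IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)

def recv_with_tos(sock, bufsize=2048):
    """Receive one UDP datagram plus the ToS byte delivered as
    ancillary data; IP_RECVTOS must already be enabled on the socket."""
    data, ancdata, _flags, addr = sock.recvmsg(bufsize, socket.CMSG_SPACE(1))
    tos = None
    for level, ctype, cdata in ancdata:
        # On Linux the kernel reports the value with cmsg_type IP_TOS.
        if level == socket.IPPROTO_IP and ctype == socket.IP_TOS:
            tos = cdata[0]
    return data, tos, addr
```

As the man page excerpt shows, this only works for datagram sockets, which is precisely why it does not help for matching the TOS of TCP requests.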


FreeBSD only supports receiving the TOS or TTL for UDP packets. If you
want to split some TCP request traffic for a special purpose, maybe you
can set the TTL or TOS on the front router/firewall, then capture it
with the ipfw tool and redirect it to a dedicated frontend. But that
leads to complex configurations.


Simon


On 2/1/14, 11:56 PM, Lukas Tribus wrote:

Hi,



That's great, but could there be something like this?

acl bad_guys tos-acl 0x20
block if bad_guys

Ah ok, you want to match incoming TOS.

That is indeed not supported currently.


Also, not all *nixes provide an API for this. Linux has
IP_RECVTOS/IPV6_RECVTCLASS to do it, but BSD hasn't, also see:
http://stackoverflow.com/questions/1029849/what-is-the-bsd-or-portable-way-to-get-tos-byte-like-ip-recvtos-from-linux


Not sure what effort it would be to implement this.



Regards,

Lukas   





Re: Feature request: TOS based ACL.

2014-01-01 Thread Ge Jin
Hi, Lukas!

That's great, but could there be something like this?

acl bad_guys tos-acl   0x20
block if bad_guys

On Tue, Dec 31, 2013 at 7:14 PM, Lukas Tribus luky...@hotmail.com wrote:
 Hi,


 Could haproxy add a tos based acl? 
 http://en.wikipedia.org/wiki/Type_of_service
 We want to do some action on the traffic based on the tos field.


 Should work already with something like this:
  acl local_net src 192.168.0.0/16
  http-response set-tos 46 if local_net

 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-http-response



 Regards,

 Lukas



Feature request: TOS based ACL.

2013-12-31 Thread Ge Jin
Hi, all!

Could haproxy add a tos based acl? http://en.wikipedia.org/wiki/Type_of_service
We want to do some action on the traffic based on the tos field.
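For reference on the values involved throughout this thread: the DSCP code point occupies the top six bits of the ToS byte, so e.g. a ToS of 0x48 corresponds to DSCP 18 (AF21). A quick sketch (the name table is a small illustrative subset):

```python
# Small, illustrative subset of DSCP code point names.
DSCP_NAMES = {0: "BE", 18: "AF21", 46: "EF"}

def dscp_from_tos(tos):
    """The DSCP code point is the upper six bits of the ToS byte."""
    return tos >> 2

print(dscp_from_tos(0x48), DSCP_NAMES.get(dscp_from_tos(0x48)))  # 18 AF21
```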



RE: Feature request: TOS based ACL.

2013-12-31 Thread Lukas Tribus
Hi,


 Could haproxy add a tos based acl? 
 http://en.wikipedia.org/wiki/Type_of_service
 We want to do some action on the traffic based on the tos field.


Should work already with something like this:
 acl local_net src 192.168.0.0/16
 http-response set-tos 46 if local_net

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-http-response



Regards,

Lukas 


[feature request] stats auth based on userlist

2013-08-06 Thread Christian Becker
Hello,

we've just updated the statistics settings for our HAProxy
environment, when I noticed that HAProxy has support for userlists.

In the past, we've used the stats configuration in every backend and
stats auth with plain-text passwords.

Now the whole stats related config is in the defaults section, but
still using plaintext passwords.

From what I've found so far, it is possible to authenticate stats
against userlists and crypted hashes, but this requires an acl and
an if in every backend - this is a lot of config if you have 50
backends or more.

So it would be great if you would add a new config option, e.g.
stats userlist <userlist>, which could be placed in the defaults
section.
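For comparison, the per-backend workaround being described looks roughly like this (acl and realm names are made up for illustration):

```haproxy
userlist adminlist
    group G1 users tiger
    user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91

backend foo
    stats enable
    stats uri /admin?stats
    # This pair is what has to be repeated in every backend today:
    acl stats_auth_ok http_auth(adminlist)
    http-request auth realm Stats if !stats_auth_ok
```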

This would give us an easy config, since we have to define the
authentication once and it is available in every backend and
additionally we have the security of sha512 hashes.

Here is an example how it might be:

defaults
  stats enable
  stats uri /admin?stats
  stats show-legends
  stats show-node
  stats userlist adminlist

userlist adminlist
  group G1 users tiger,scott

  user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91
  user scott insecure-password elgato

backend foo
  - no stats config here


Thank you,
Regards,
Christian



[feature request] stats auth based on userlist

2013-07-19 Thread Christian Becker
Hello,

we've just updated the statistics settings for our HAProxy
environment, when I noticed that HAProxy has support for userlists.

In the past, we've used the stats configuration in every backend and
stats auth with plain-text passwords.

Now the whole stats related config is in the defaults section, but
still using plaintext passwords.

From what I've found so far, it is possible to authenticate stats
against userlists and crypted hashes, but this requires an acl and
an if in every backend - this is a lot of config if you have 50
backends or more.

So it would be great if you would add a new config option, e.g.
stats userlist <userlist>, which could be placed in the defaults
section.

This would give us an easy config, since we have to define the
authentication once and it is available in every backend and
additionally we have the security of sha512 hashes.

Here is an example how it might be:

defaults
  stats enable
  stats uri /admin?stats
  stats show-legends
  stats show-node
  stats userlist adminlist

userlist adminlist
  group G1 users tiger,scott

  user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91
  user scott insecure-password elgato

backend foo
  - no stats config here


Thank you,
Regards,
Christian



Funding Feature Request Management For Open Source

2012-10-07 Thread james . clark
Hi,

We are building a web based tool to help open source projects with feature 
requests & funding (www.catincan.com). We're hoping you might provide us with 
some insight by answering the 5 questions below.

1. Do you have a problem managing feature requests (time it takes/prioritizing)?
2. Do you have any funding problems?
3. If you have the above problems, what would the perfect solution have?
4. If there was a solution specifically for open source projects that tied 
feature requests to funding, would you use it? If no, why not?
5. What is your most requested feature and how much funding would be required 
for you to make it a priority to develop?

I really appreciate your time in helping us. If you have any questions, just 
let me know.

Cheers,

James

Re: feature request - slowdeath

2010-12-20 Thread David Birdsong
On Mon, Dec 20, 2010 at 12:25 AM, Willy Tarreau w...@1wt.eu wrote:
 On Sun, Dec 19, 2010 at 11:35:37PM +0100, Bedis 9 wrote:
 Hey,

 A slowdown would be interesting if you want to avoid any huge traffic
 being redirected too quickly to other backends.

 That's the only use I can think of, and I can't find this useful for a
 simple reason : the remaining servers already have to be able to deal
 with the load implied by losing a server. So when shutting it down, the
 same thing as a failure happens to the other ones.

Ok, that's a pretty resounding no.  My use case isn't for when servers
are failing but during some sort of maintenance event.  I can see why
this is not a compelling reason as the server farm should always be
able to handle the sudden onrush of a downed server's load since that
is what happens in any failure situation and shouldn't require
warming.

I sometimes like to be nicer to a server farm than a failure would be,
but reading about Netflix's Chaos Monkey makes me reconsider.
http://techblog.netflix.com/2010/12/5-lessons-weve-learned-using-aws.html

 A nice feature, maybe already implemented, would be the graceful
 shutdown of a backend.
 In case you use sticky sessions, a graceful shutdown would balance new
 incoming requests to other backends and wait for the established
 sessions to finish before considering the backend completely down.
 It's longer to shut down a backend, but it's safer for your business ;)

 Already exists, check disable-on-404.

 Cheers,
 Willy




feature request - slowdeath

2010-12-19 Thread David Birdsong
Hey Willy,

Haproxy rocks!

I've been using slowstart lately and was wondering if it would be
possible to add the opposite--something like 'slowdeath'.  Some
external event would trigger, perhaps a very specific HTTP code, or a
command coming through the control socket, haproxy would un-weight the
server over a preconfigured interval down to zero.

I have a script that essentially does this--before I discovered
slowstart it took care to bring the server up slowly.  If slowdeath
existed, I could delete more code from this script!
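Such a script's ramp itself is trivial to compute; the interesting part is pushing each value out at the configured interval. A minimal sketch of the schedule:

```python
def ramp_down(initial_weight, steps):
    """Yield a linearly decreasing weight schedule from initial_weight
    down to zero - 'slowstart' in reverse. Each value would be pushed to
    the server via the stats socket at the configured interval."""
    for n in range(steps, -1, -1):
        yield initial_weight * n // steps

print(list(ramp_down(100, 4)))  # [100, 75, 50, 25, 0]
```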



Re: FEATURE REQUEST: Add ability to mark server as 'backup' via socket control

2010-09-26 Thread Cyril Bonté
Hi Brett,

Le vendredi 24 septembre 2010 12:49:20, Brett Delle Grazie a écrit :
 Hi,
 
 When using a socket to control haproxy there is already the capability
 to mark a server as 'down' for maintenance. Is there any possibility of
 adding the capability to mark it as 'backup'?
 
 This feature would permit greater automation control via scripting (i.e.
 progressively marking a server as 'backup' and then 'down' once the
 sessions have reduced to at/near zero.
 
 Is this a good idea? Are there good reasons why this hasn't been done
 already?

It can be very confusing to change the behaviour of a server at runtime. If 
a main server can become a backup one, that also means that a backup server 
can become a main one. And backup servers can be used to provide different 
pages than the main servers. This can be annoying if backup pages are 
delivered.

I think that for your need, you can already change the weight of your servers; 
this will have the same effect. The load balancing algorithm will ignore each 
server with a weight set to 0 (but requests with a persistence cookie will 
still go to the server).
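For the scripted drain being discussed, the runtime 'set weight' command on the stats socket is enough. A sketch of the client side - the socket path and backend/server names are assumptions to be adapted:

```python
import socket

def set_server_weight(socket_path, backend, server, weight):
    """Send 'set weight <backend>/<server> <weight>' to HAProxy's
    stats socket and return the response text."""
    cmd = f"set weight {backend}/{server} {weight}\n"
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        s.sendall(cmd.encode())
        return s.recv(1024).decode()
```

A script could call this repeatedly with decreasing weights, then mark the server down once its session count reaches zero.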

-- 
Cyril Bonté



FEATURE REQUEST: Add ability to mark server as 'backup' via socket control

2010-09-24 Thread Brett Delle Grazie
Hi,

When using a socket to control haproxy there is already the capability
to mark a server as 'down' for maintenance. Is there any possibility of
adding the capability to mark it as 'backup'?

This feature would permit greater automation control via scripting (i.e.
progressively marking a server as 'backup' and then 'down' once the
sessions have reduced to at/near zero.

Is this a good idea? Are there good reasons why this hasn't been done
already?

Thanks,

-- 
Best Regards,

Brett Delle Grazie
