Re: Can not set or clear a table when the Key contains "\"

2014-12-08 Thread Jonathan Matthews
On 5 December 2014 at 07:05, Nick  wrote:
> when I try the command echo -e "set table RD01-CSN-1 key PVG\\PENGZ
> data.server_id 3 " | socat /var/run/haproxy.stat stdio, the unix socket
> seems to exclude the backslash "\\", so I cannot successfully edit the
> Haproxy tables.
> The same problem occurs when I try the command echo -e "clear table RD01-CSN-1
> key PVG\\PENGZ data.server_id 3 " | socat /var/run/haproxy.stat stdio.

I think you're having a generic shell escaping problem, which has
nothing to do with haproxy or the unix socket.
Try using single quotes around the string you pass in, and without
giving echo that "-e" parameter.
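
Something like this, for example (an untested sketch; the table name and key
are just taken from your message):

  echo 'set table RD01-CSN-1 key PVG\PENGZ data.server_id 3' | socat /var/run/haproxy.stat stdio

With single quotes the shell passes the backslash through literally, and
plain echo (without -e) does not try to interpret it as an escape sequence.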

Jonathan



Re: Disable HTTP logging for specific backend in HAProxy

2014-12-08 Thread Jonathan Matthews
On 7 December 2014 at 20:54, Alexander Minza  wrote:
> How does one adjust logging level or disable logging altogether for specific
> backends in HAProxy?
>
> In the example below, both directives "http-request set-log-level err" and
> "no log" seem to have no effect - the logs are swamped with lines of
> successful HTTP status 200 OK records.
[snip]
>> backend static
>>   http-request set-log-level err
>>   no log

Are you /absolutely/ sure that these log lines aren't being emitted by
the frontend or listener through which your backend must have received
the request? Are you expecting that "no log" to percolate back to the
frontend? I don't /think/ it works that way ... (though I've not
tested).

[ As an aside, the way I read what you've written above is "mark *all*
logs from the static backend as 'err' level". Whereas your global
section's "log /dev/log local1 notice" line says "log everything that
is notice-or-more-severe to /dev/log". I know your "no log" looks
like it should override this logging, but I just thought I'd mention
it as it looks a little odd. ]

Regards,
Jonathan



Re: Performance implications of using dynamic maps

2014-12-08 Thread Sachin Shetty
Hi Willy,

I need one more clarification, I need the value in multiple acls

acl is_a_v-1 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-1

acl is_a_v-2 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-2

acl is_a_v-3 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-3
..
..
acl is_a_v-10 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-10


Is there a way I could look up once and use the value in multiple ACLs?
Unfortunately I cannot refer to an ACL in another ACL's conditions, which
would have worked for me.


Thanks
Sachin

On 12/2/14 2:15 PM, "Sachin Shetty"  wrote:

>Thanks a lot Willy.
>
>Yes, I tried my luck with sticky tables, but could not find a way to
>store 
>key value mapping for 1000s of host names.
>
>I will move this to testing, thanks for you help as always :)
>
>Thanks
>Sachin
>
>On 12/2/14 1:01 PM, "Willy Tarreau"  wrote:
>
>>Hi Sachin,
>>
>>On Sat, Nov 29, 2014 at 04:19:54PM +0530, Sachin Shetty wrote:
>>> Hi,
>>> 
>>> In our architecture, we have thousands of host names resolving to a single
>>> haproxy, and we dynamically decide a sticky backend based on our own custom
>>> sharding. To determine the shard info, we let the request flow in to a
>>> default apache proxy that processes the requests and also responds with
>>> the shard info. To be able to serve the subsequent requests directly,
>>> bypassing the apache, we want to store the shard info received in the first
>>> request in a map and use it for subsequent requests.
>>> 
>>> 1. Store the shard info from apache
>>> backend apache_l1
>>> mode http
>>> http-response set-map(/opt/haproxy/current/conf/proxy.map)
>>> %[res.hdr(X-Request-Host)]  %[res.hdr(X-Backend-Id)]
>>> server apache_l1 :80
>>> 
>>> 2. Use the backend directly for subsequent requests:
>>> acl is_a_v-1 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-1
>>> use_backend l2_haproxy if is_a_v-1
>>> 
>>> I have tested this config and it works well, but I am not sure about the
>>> performance. For every request sent to Apache, we will be adding a key,
>>> value to the map and we will be looking up the key value for every request
>>> that is coming in to haproxy - is that ok considering that this is a very
>>> high performance stack? The haproxy servers are pretty powerful and
>>> dedicated to just doing proxy.
>>
>>Here you're using string-to-string mapping; it's one of the cheapest ones
>>since there's no conversion of text to patterns. The string lookups are
>>performed in a few tens of nanoseconds so that does not count. The update
>>here will require:
>>  - building a new key : log-format + strdup(result)
>>  - building a new value : log-format + strdup(new)
>>  - lookup of the key in the tree
>>  - replacement or insertion of the key in the tree
>>  - free(old_key)
>>  - free(old_value)
>>
>>I suspect that below 10-2 req/s you will not notice a significant
>>difference. Above it can cost a few percent extra CPU usage.
>>
>>It's interesting to see that you have basically reimplemented stickiness
>>using maps :-)
>>
>>Regards,
>>Willy





Re: Performance implications of using dynamic maps

2014-12-08 Thread Willy Tarreau
Hi Sachin,

On Mon, Dec 08, 2014 at 06:04:35PM +0530, Sachin Shetty wrote:
> Hi Willy,
> 
> I need one more clarification, I need the value in multiple acls
> 
> acl is_a_v-1 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-1
> 
> acl is_a_v-2 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-2
> 
> acl is_a_v-3 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-3
> ..
> ..
> acl is_a_v-10 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-10
> 
> 
> is there a way I could lookup once and use the values in multiple acls?

There would be an option for this. Using a capture would permit you to
have a temporary variable containing the result of your map. Something
like this, approximately:

tcp-request inspect-delay 10s
tcp-request capture %[hdr(host),map(/opt/haproxy/current/conf/proxy.map)] len 40

Then your ACLs can refer to capture.req.hdr(0) (assuming it's the first
"capture" rule) :

  acl is_a_v-1 capture.req.hdr(0) a_v-1
  acl is_a_v-2 capture.req.hdr(0) a_v-2
  acl is_a_v-3 capture.req.hdr(0) a_v-3
  ...

Note that with rules like yours above (string-to-string mapping), the
lookup is very fast; only the header extraction costs a little bit, so you
should not be worried about these few rules. If you were using case-insensitive
or regex matching, it would be different and you'd really need this
optimization.

If you only want to use these rules to select a proper backend, you could
also use the dynamic use_backend rules (but please carefully read the doc
about use_backend and maps for the details) :

 use_backend %[hdr(host),map(proxy.map)]

And you don't need any acl anymore, and everything is done in a single lookup.

Regards,
Willy




Re: Three questions about stick-tables and request rate limiting

2014-12-08 Thread Baptiste
Hi Dennis,

Answering inline in your email.


> Question 1: Is there a better way to reset the gpc0 counter other than
> waiting for the stick-table entry to expire?
>
> In my test if I hit haproxy with the load-testing tool apache bench to
> trigger the 10 req/s limit for two seconds and then follow that up with
> a pattern of 1 req/s for a minute these requests will never succeed
> because gpc0 is greater than zero, will never reset and the stick-table
> entry will never expire because the timer will always get reset by the 1
> req/s pattern so the user is effectively locked out forever even though
> he is no longer exceeding the request/s limit.
>
> Wouldn't it be better to reset the gpc0 counter to zero once
> http_req_rate has dropped below 10 again to not create this kind of
> perma-block?


Yes, you can: there is a sample fetch called sc0_clr_gpc0 whose purpose is
to clear the value of gpc0.
Another solution would be not to measure gpc0 itself but its growth
rate using sc0_gpc0_rate; the growth rate would be very low at 1 request
per minute.
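
A rough sketch of the first idea (untested; the table layout, ACL names and
thresholds are only placeholders to adapt to your existing config):

  stick-table type ip size 100k expire 10m store gpc0,http_req_rate(10s)
  tcp-request connection track-sc0 src
  acl abuse       sc0_http_req_rate gt 10
  acl flag_abuse  sc0_inc_gpc0 gt 0
  acl is_flagged  sc0_get_gpc0 gt 0
  acl clear_flag  sc0_clr_gpc0 ge 0
  # back under the limit: clear the flag and let the request through
  http-request allow if !abuse is_flagged clear_flag
  # over the limit: set/keep the flag and deny
  http-request deny  if abuse flag_abuse
  # still flagged from an earlier burst: deny
  http-request deny  if is_flagged

The allow rule clears the flag as soon as the client drops back under the
limit, so nobody stays blocked forever.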


> Question 2: When I use wrk instead of ab it seems the request limiting
> doesn't work at all. What wrk does is it doesn't create new connections
> for each request but only creates a bunch of connections initially and
> then sends all requests using these permanent connections. These are a
> couple of stick-table dumps I did after starting the wrk test:
>
> 0xe5e854: key=10.99.0.1 use=10 exp=7791 gpc0=15771 conn_cur=10
> http_req_rate(1)=15780
> 0xe5e854: key=10.99.0.1 use=10 exp=7247 gpc0=19767 conn_cur=10
> http_req_rate(1)=19776
> 0xe5e854: key=10.99.0.1 use=10 exp=6727 gpc0=23606 conn_cur=10
> http_req_rate(1)=23615
> 0xe5e854: key=10.99.0.1 use=10 exp=6247 gpc0=26718 conn_cur=10
> http_req_rate(1)=26727
> 0xe5e854: key=10.99.0.1 use=10 exp=5823 gpc0=29760 conn_cur=10
> http_req_rate(1)=29769
> 0xe5e854: key=10.99.0.1 use=10 exp=5424 gpc0=32622 conn_cur=10
> http_req_rate(1)=32631
> 0xe5e854: key=10.99.0.1 use=10 exp=4967 gpc0=35964 conn_cur=10
> http_req_rate(1)=35973
> 0xe5e854: key=10.99.0.1 use=10 exp=4567 gpc0=38779 conn_cur=10
> http_req_rate(1)=38788
>
> Notice how the http_req_rate keeps going up as does the gpc0 counter yet
> wrk doesn't report any failed requests and a result of several thousand
> requests per second.
>
> The impression I get here is that this configuration doesn't *really*
> limit the number of requests but only the number of connections based on
> the request rate which is semantically a bit different and still allows
> a potential abuser to send as many requests as he wants as long as he
> keeps using an existing connection.
> Is this impressions correct and is the a way to truly limit the number
> of requests/s even when no new connections are made?


instead of flagging a request, you can simply deny it.
HAProxy will then close the TCP connection and the user won't be
allowed to establish a new one.
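
For example (a sketch only, threshold arbitrary), a deny at the HTTP level is
evaluated for every request, including requests reusing an existing
keep-alive connection:

  http-request deny if { sc0_http_req_rate gt 10 }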


> Question 3: As you can see in the configuration I'm using a https
> frontend that proxies the traffic to the http frontend so that I can get
> the combined stats in the single-process http frontend while still being
> able to put the https frontend on independent processes to distribute
> the load among cores.
>
> What I noticed though is that when I do the above tests on the SSL
> frontend I don't get any stick-table entries in the regular http
> frontend. Apparently the proxied connection aren't registered by the
> stick-table. Is there a way to get these connections to show up as well
> or do I have to copy+paste the stick-table and abuse settings and keep
> them manually in sync between the two frontends?

There should be no difference between SSL and clear traffic.
I can reproduce the behavior: there might be a bug when passing through a
unix socket.
As a workaround, you can fall back to a loopback IP address.

In order to populate a blacklist between the clear and SSL frontends, you
can use 'http-response add-acl'.
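
A very rough sketch of that idea (untested; the file path and the 'abuser'
condition are placeholders):

  # clear-text side: remember offending sources in a shared acl file
  http-response add-acl(/etc/haproxy/abusers.lst) %[src] if abuser
  # SSL side: reject sources already present in that file
  tcp-request connection reject if { src -f /etc/haproxy/abusers.lst }

Since the SSL frontend checks the same file, a source flagged on the clear
side gets rejected on the SSL side too.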

Hope this helps.

Baptiste



Re: Disable HTTP logging for specific backend in HAProxy

2014-12-08 Thread Baptiste
On Mon, Dec 8, 2014 at 1:29 PM, Jonathan Matthews
 wrote:
> On 7 December 2014 at 20:54, Alexander Minza  
> wrote:
>> How does one adjust logging level or disable logging altogether for specific
>> backends in HAProxy?
>>
>> In the example below, both directives "http-request set-log-level err" and
>> "no log" seem to have no effect - the logs are swamped with lines of
>> successful HTTP status 200 OK records.
> [snip]
>>> backend static
>>>   http-request set-log-level err
>>>   no log
>
> Are you /absolutely/ sure that these log lines aren't being emitted by
> the frontend or listener through which your backend must have received
> the request? Are you expecting that "no log" to percolate back to the
> frontend? I don't /think/ it works that way ... (though I've not
> tested).
>
> [ As an aside, the way I read what you've written above is "mark *all*
> logs from the static backend as 'err' level". Whereas your global
> section's "log /dev/log local1 notice" line says "log everything that
> is notice-or-more-severe to /dev/log". I know your "no log" looks
> like it should override this logging, but I just thought I'd mention
> it as it looks a little odd. ]
>
> Regards,
> Jonathan
>

Hi Alexander,

You don't disable logging in a backend, since the frontend is
responsible for generating the log line.

If you don't want to log static content, you can do something like this:

acl static ###put your acl rule here
http-request set-log-level silent if static

Baptiste



Can't find an old example of haproxy failover setup with 2 locations

2014-12-08 Thread Aleksandr Vinokurov
I saw it 2 years ago. If I remember it right, Willy Tarreau was the
author and it had ASCII graphics for the network schema. It depicted, step
by step, the configuration from one location and one server to 2 locations
and 4 (or only 2) HAProxy servers.

I will be **very** glad if somebody can share a link to it.

Aleksandr Vinokurov
+7 (921) 982-21-43
@aleksandrvin


Re: Can't find an old example of haproxy failover setup with 2 locations

2014-12-08 Thread david rene comba lareu
Maybe this can help you:
http://brokenhaze.com/blog/2014/03/25/how-stack-exchange-gets-the-most-out-of-haproxy/

2014-12-08 12:10 GMT-03:00 Aleksandr Vinokurov :
>
> I've seen it 2 years ago. If I remember it right, Willy Tarreau was the
> author and it had ASCII graphics for network schema. It depicts step by step
> the configuration from one location and one server to 2 locations and 4 (or
> only 2) Haproxy servers.
>
> Will be **very** glad if smb. can share a link to it.
>
> Aleksandr Vinokurov
> +7 (921) 982-21-43
> @aleksandrvin



Re: Performance implications of using dynamic maps

2014-12-08 Thread Sachin Shetty
Thanks Willy, I need to do more than just pick a backend. So you feel that
even with a map of 10K keys, multiple lookups should be OK?

Thanks
Sachin

On 12/8/14 6:15 PM, "Willy Tarreau"  wrote:

>Hi Sachin,
>
>On Mon, Dec 08, 2014 at 06:04:35PM +0530, Sachin Shetty wrote:
>> Hi Willy,
>> 
>> I need one more clarification, I need the value in multiple acls
>> 
>> acl is_a_v-1 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-1
>> 
>> acl is_a_v-2 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-2
>> 
>> acl is_a_v-3 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-3
>> ..
>> ..
>> acl is_a_v-10 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-10
>> 
>> 
>> is there a way I could lookup once and use the values in multiple acls?
>
>There would be an option for this. Using a capture would permit you to
>have a temporary variable containing the result of your map. Something
>like this, approximately:
>
>tcp-request inspect-delay 10s
>tcp-request capture
>%[hdr(host),map(/opt/haproxy/current/conf/proxy.map)] len 40
>
>Then your ACLs can refer to capture.req.hdr(0) (assuming it's the first
>"capture" rule) :
>
>  acl is_a_v-1 capture.req.hdr(0) a_v-1
>  acl is_a_v-2 capture.req.hdr(0) a_v-2
>  acl is_a_v-3 capture.req.hdr(0) a_v-3
>  ...
>
>Note that when using rules as yours above (string-to-string mapping), the
>lookup is very fast, only the header extraction costs a little bit, so you
>should not be worried by these few rules. If you would use
>case-insensitive
>match or regex match, it would be different and you'd really need this
>optimization.
>
>If you only want to use these rules to select a proper backend, you could
>also use the dynamic use_backend rules (but please carefully read the doc
>about use_backend and maps for the details) :
>
> use_backend %[hdr(host),map(proxy.map)]
>
>And you don't need any acl anymore, and everything is done in a single
>lookup.
>
>Regards,
>Willy
>





Re: Performance implications of using dynamic maps

2014-12-08 Thread Willy Tarreau
On Mon, Dec 08, 2014 at 10:46:07PM +0530, Sachin Shetty wrote:
> Thanks willy, I need to do more than just pick a backend. So you feel even
> with a map of 10K keys, multiple look ups should be ok?

Sure. Fetching the header is more expensive than performing a lookup
in a binary tree of "only" 10k keys. You should avoid performing too
many lookups of course, but that's true for any rule in general.

Willy




rand(x) output limited to x/2

2014-12-08 Thread Vivek Malik
Hi,

I am using rand(x) in configuration to make some routing decisions. I
am basically load balancing between backends and using the following
configuration

use_backend bk_1 if { rand(100) le 50 }
default_backend bk_2

However, I am not seeing any traffic going to bk_2 and all traffic
goes to bk_1. It seems that there is a bug in smp_fetch_rand function
around reduction.

I did some further testing by setting up a header using

http-request set-header X-RAND %[rand(200)]

and printing that header in a file. I am unable to see the random
value going above arg/2.

Here is my haproxy build information.

HA-Proxy version 1.5.9 2014/11/26
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.



Re: Disable HTTP logging for specific backend in HAProxy

2014-12-08 Thread Alexander Minza
Baptiste  writes:

> You don't disable logging in a backend, since the frontend is
> responsible to generate the log line.
> 
> If you don't want to log static content, you can do something like this:
> 
> acl static ###put your acl rule here
> http-request set-log-level silent if static
> 
> Baptiste

Thanks for your idea, Baptiste - I was trying those directives in the
backend sections. After I moved them to the frontend and set the log
level to silent - it worked.

However, I would like to log just the errors, but after setting the log level
to err it seems that it is again logging all the requests, not just those
resulting in an HTTP error from the backend response.

What am I doing wrong? Any ideas? Thanks so much for your help!






Re: Disable HTTP logging for specific backend in HAProxy

2014-12-08 Thread Alexander Minza
Alexander Minza  writes:

> However, I would like to log just the errors, thus after setting the log level
> to err it seems that it is logging again all the requests, not just those
> resulting in a  HTTP error from the backend response.

Adding the following lines to the backend config section:

no log
log /dev/log local1 err

does not seem to have any effect - the log is still populated with HTTP 200 OK 
requests.




Re: Can't find an old example of haproxy failover setup with 2 locations

2014-12-08 Thread Jonathan Matthews
On 8 Dec 2014 15:10, "Aleksandr Vinokurov"  wrote:
>
>
> I've seen it 2 years ago. If I remember it right, Willy Tarreau was the
author and it had ASCII graphics for network schema. It depicts step by
step the configuration from one location and one server to 2 locations and
4 (or only 2) Haproxy servers.
>
> Will be **very** glad if smb. can share a link to it.

Might you be referring to
www.haproxy.com/static/media/uploads/eng/resources/art-2006-making_applications_scalable_with_lb.pdf
?

J


Re: rand(x) output limited to x/2

2014-12-08 Thread Vincent Bernat
 ❦  8 December 2014 11:30 -0600, Vivek Malik  :

> I am using rand(x) in configuration to make some routing decisions. I
> am basically load balancing between backends and using the following
> configuration
>
> use_backend bk_1 if { rand(100) le 50 }
> default_backend bk_2
>
> However, I am not seeing any traffic going to bk_2 and all traffic
> goes to bk_1. It seems that there is a bug in smp_fetch_rand function
> around reduction.
>
> I did some further testing by setting up a header using
>
> http-request set-header X-RAND %[rand(200)]
>
> and printing that header in a file. I am unable to see the random
> value going above arg/2.

You are right. HAProxy is doing that:

#+begin_src c
unsigned int uint = random();
uint = ((uint64_t)uint * 100) >> 32;
#+end_src

However, random() is returning an integer between 0 and RAND_MAX,
RAND_MAX being (usually?) equal to INT_MAX. This means that the most
significant bit is always 0.

It seems that there is nothing preventing RAND_MAX from being smaller. The
GNU libc manual says it can be as small as 32767. So, we should shift
only by the position of the highest set bit of RAND_MAX.

Assuming that RAND_MAX is always a power of two - 1, 32 could be
replaced by a precomputed value of ffs(RAND_MAX+1)-1.
-- 
Don't patch bad code - rewrite it.
- The Elements of Programming Style (Kernighan & Plauger)



Can't get HAProxy to support Forward Secrecy FS

2014-12-08 Thread Sander Rijken
System is Ubuntu 12.04 LTS server, with openssl 1.0.1 and haproxy 1.5.9

    OpenSSL> version
    OpenSSL 1.0.1 14 Mar 2012


I'm currently using the following, started with the suggested [stanzas][1] 
(formatted for readability, it is one long line in my config):

    bind 0.0.0.0:443 ssl crt mycert.pem no-tls-tickets ciphers \
        ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384: \
        ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: \
        ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256: \
        AES128-SHA:AES256-SHA256:AES256-SHA no-sslv3

[1]: https://gist.github.com/rnewson/8384304

ssllabs.com indicates FS is not used. When I disable all algorithms except the 
ECDHE ones, I get SSL connection error (ERR_SSL_PROTOCOL_ERROR), so something 
on the system doesn't support FS.

Any ideas?


-- 
Sander Rijken



Re: rand(x) output limited to x/2

2014-12-08 Thread Vincent Bernat
 ❦  8 December 2014 23:20 +0100, Vincent Bernat  :

> Assuming that RAND_MAX is always a power of two - 1, 32 could be
> replaced by a precomputed value of ffs(RAND_MAX+1)-1.

ebtree defines a fls64() function which seems best suited (RAND_MAX+1
could overflow). Here is a proposed patch for this:

From 960ad6d49541ffe81c9048398201d307fd2c20cb Mon Sep 17 00:00:00 2001
From: Vincent Bernat 
Date: Mon, 8 Dec 2014 23:37:40 +0100
Subject: [PATCH] BUG/MEDIUM: sample: fix random number upper-bound

random() will generate a number between 0 and RAND_MAX. POSIX mandates
RAND_MAX to be at least 32767. GNU libc uses (1<<31 - 1) as
RAND_MAX.

In smp_fetch_rand(), a reduction is done with a multiply and shift to
avoid skewing the results. However, the shift was always 32 and hence
the numbers were not distributed uniformly in the specified range. We
fix that by computing the highest bit of RAND_MAX and use it to shift.
---
 src/sample.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/sample.c b/src/sample.c
index 0ffc76daf3a9..569f7b387c50 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -1813,6 +1813,7 @@ smp_fetch_proc(struct proxy *px, struct session *s, void *l7, unsigned int opt,
 	return 1;
 }
 
+static int random_bits;
 /* generate a random 32-bit integer for whatever purpose, with an optional
  * range specified in argument.
  */
@@ -1824,7 +1825,7 @@ smp_fetch_rand(struct proxy *px, struct session *s, void *l7, unsigned int opt,
 
 	/* reduce if needed. Don't do a modulo, use all bits! */
 	if (args && args[0].type == ARGT_UINT)
-		smp->data.uint = ((uint64_t)smp->data.uint * args[0].data.uint) >> 32;
+		smp->data.uint = ((uint64_t)smp->data.uint * args[0].data.uint) >> random_bits;
 
 	smp->type = SMP_T_UINT;
 	smp->flags |= SMP_F_VOL_TEST | SMP_F_MAY_CHANGE;
@@ -1883,4 +1884,7 @@ static void __sample_init(void)
 	/* register sample fetch and format conversion keywords */
 	sample_register_fetches(&smp_kws);
 	sample_register_convs(&sample_conv_kws);
+
+	/* Setup the number of random bits we can expect with random() */
+	random_bits = fls64(RAND_MAX);
 }
-- 
2.1.3



-- 
Let the machine do the dirty work.
- The Elements of Programming Style (Kernighan & Plauger)


Re: Can't get HAProxy to support Forward Secrecy FS

2014-12-08 Thread Jonathan Matthews
On 8 December 2014 at 22:44, Sander Rijken  wrote:
> System is Ubuntu 12.04 LTS server, with openssl 1.0.1 and haproxy 1.5.9
>
> OpenSSL> version
> OpenSSL 1.0.1 14 Mar 2012
>
>
> I'm currently using the following, started with the suggested [stanzas][1]
> (formatted for readability, it is one long line in my config):
>
> bind 0.0.0.0:443 ssl crt mycert.pem no-tls-tickets ciphers \
> ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384: \
>
> ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: \
>
> ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256: \
> AES128-SHA:AES256-SHA256:AES256-SHA no-sslv3
>
> [1]: https://gist.github.com/rnewson/8384304
>
> ssllabs.com indicates FS is not used. When I disable all algorithms except
> the ECDHE ones, I get SSL connection error (ERR_SSL_PROTOCOL_ERROR), so
> something on the system doesn't support FS.
>
> Any ideas?

I'm not best placed to help you debug your setup, but you might diff
your versions and setup against what I have on my personal site, which
SSLlabs says has "Robust" forward secrecy. I followed the server-side
recommendations of the "Modern" setup, here:
https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility

Here's some data you can check against, along with the commands I used
to generate it:


user:~$ /usr/sbin/haproxy -vv
HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
-Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

user:~$ ldd /usr/sbin/haproxy
linux-gate.so.1 =>  (0xe000)
libcrypt.so.1 => /lib/i386-linux-gnu/i686/cmov/libcrypt.so.1 (0xb76b4000)
libz.so.1 => /lib/i386-linux-gnu/libz.so.1 (0xb769b000)
libssl.so.1.0.0 =>
/usr/lib/i386-linux-gnu/i686/cmov/libssl.so.1.0.0 (0xb7641000)
libcrypto.so.1.0.0 =>
/usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0 (0xb7483000)
libpcre.so.3 => /lib/i386-linux-gnu/libpcre.so.3 (0xb7445000)
libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb72e)
libdl.so.2 => /lib/i386-linux-gnu/i686/cmov/libdl.so.2 (0xb72dc000)
/lib/ld-linux.so.2 (0xb76f9000)

user:~$ apt-cache policy openssl haproxy | grep -i -e install -e ^[a-z]
openssl:
  Installed: 1.0.1e-2+deb7u13
haproxy:
  Installed: 1.5.8-1~bpo70+1

user:~$ openssl version
OpenSSL 1.0.1e 11 Feb 2013

user:~$ openssl ciphers
ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:SRP-DSS-AES-256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DHE-RSA-CAMELLIA256-SHA:DHE-DSS-CAMELLIA256-SHA:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-RSA-AES256-SHA:ECDH-ECDSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:CAMELLIA256-SHA:PSK-AES256-CBC-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:SRP-DSS-3DES-EDE-CBC-SHA:SRP-RSA-3DES-EDE-CBC-SHA:SRP-3DES-EDE-CBC-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:ECDH-RSA-DES-CBC3-SHA:ECDH-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA:PSK-3DES-EDE-CBC-SHA:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:SRP-DSS-AES-128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:DHE-RSA-SEED-SHA:DHE-DSS-SEED-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-DSS-CAMELLIA128-SHA:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:ECDH-ECDSA-AES128-SHA256:ECDH-RSA-AES128-SHA:ECDH-ECDSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:SEED-SHA:CAMELLIA128-SHA:PSK-AES128-CBC-SHA:ECDHE-

Re: Can't get HAProxy to support Forward Secrecy FS

2014-12-08 Thread Vivek Malik
Are you putting in DH parameters in mycert.pem?

PFS depends on using the DH algorithm to exchange and create a secret for
the connection.

openssl dhparam 2048 >> mycert.pem should add the DH parameters to the
cert file.

Regards,
Vivek

On Mon, Dec 8, 2014 at 4:44 PM, Sander Rijken  wrote:
> System is Ubuntu 12.04 LTS server, with openssl 1.0.1 and haproxy 1.5.9
>
> OpenSSL> version
> OpenSSL 1.0.1 14 Mar 2012
>
>
> I'm currently using the following, started with the suggested [stanzas][1]
> (formatted for readability, it is one long line in my config):
>
> bind 0.0.0.0:443 ssl crt mycert.pem no-tls-tickets ciphers \
> ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384: \
>
> ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: \
>
> ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256: \
> AES128-SHA:AES256-SHA256:AES256-SHA no-sslv3
>
> [1]: https://gist.github.com/rnewson/8384304
>
> ssllabs.com indicates FS is not used. When I disable all algorithms except
> the ECDHE ones, I get SSL connection error (ERR_SSL_PROTOCOL_ERROR), so
> something on the system doesn't support FS.
>
> Any ideas?
>
>
> --
> Sander Rijken
>



Re: Can't get HAProxy to support Forward Secrecy FS

2014-12-08 Thread Sander Rijken
I didn't have DH parameters, added those, but it's still not working yet.
Is there any way to check with openssl why it isn't working?

On Tue, Dec 9, 2014 at 12:11 AM, Vivek Malik  wrote:

> Are you putting in DH parameters in mycert.pem?
>
> PFS depends on using DH algorithm to exchange and create a secret for
> the connection.
>
> openssl dhparam 2048 >> mycert.pem should add the DH parameters to the
> cert file.
>
> Regards,
> Vivek
>
> On Mon, Dec 8, 2014 at 4:44 PM, Sander Rijken 
> wrote:
> > System is Ubuntu 12.04 LTS server, with openssl 1.0.1 and haproxy 1.5.9
> >
> > OpenSSL> version
> > OpenSSL 1.0.1 14 Mar 2012
> >
> >
> > I'm currently using the following, started with the suggested
> [stanzas][1]
> > (formatted for readability, it is one long line in my config):
> >
> > bind 0.0.0.0:443 ssl crt mycert.pem no-tls-tickets ciphers \
> > ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384: \
> >
> > ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: \
> >
> > ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256: \
> > AES128-SHA:AES256-SHA256:AES256-SHA no-sslv3
> >
> > [1]: https://gist.github.com/rnewson/8384304
> >
> > ssllabs.com indicates FS is not used. When I disable all algorithms
> except
> > the ECDHE ones, I get SSL connection error (ERR_SSL_PROTOCOL_ERROR), so
> > something on the system doesn't support FS.
> >
> > Any ideas?
> >
> >
> > --
> > Sander Rijken
> >
>


Re: Disable HTTP logging for specific backend in HAProxy

2014-12-08 Thread Baptiste
On Mon, Dec 8, 2014 at 10:20 PM, Alexander Minza
 wrote:
> Alexander Minza  writes:
>
>> However, I would like to log just the errors, thus after setting the log 
>> level
>> to err it seems that it is logging again all the requests, not just those
>> resulting in a  HTTP error from the backend response.
>
> Adding the following lines to the backend config section:
>
> no log
> log /dev/log local1 err
>
> does not seem to have any effect - the log is still populated with HTTP 200 OK
> requests.
>
>

There is a nice option called "dontlog-normal" which logs only errors.
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20dontlog-normal
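
For example, in the frontend (the frontend name here is just an example):

  frontend ft_web
      log global
      option dontlog-normal   # only log sessions that hit errors, retries or timeouts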

Baptiste



RE: Can't get HAProxy to support Forward Secrecy FS

2014-12-08 Thread Lukas Tribus
> PFS depends on using DH algorithm to exchange and create a secret for
> the connection.

This is not entirely correct: *DHE* ciphers depend on it, but ECDHE ciphers
don't. Since he disabled all DHE ciphers manually in the configuration,
that's not it.



> I didn't have DH parameters, added those, but it's still not working
> yet. Is there any way to check with openssl why it isn't working?

First of all, post the output of "haproxy -vv". Second, try a
simpler list of ciphers like 'HIGH:@STRENGTH'. If that works, try the
Mozilla recommendation [1].
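
e.g. something like (a sketch; keep your existing certificate, only the
cipher string changes):

  bind 0.0.0.0:443 ssl crt mycert.pem no-tls-tickets no-sslv3 ciphers HIGH:@STRENGTH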



Regards,

Lukas




[1] https://wiki.mozilla.org/Security/Server_Side_TLS


  


RE: eliminate per-server queuing?

2014-12-08 Thread Lukas Tribus
> Why is this wrong?

Because you still serve existing sessions from the broken backend,
and if the session rate on that server is below maxconn, even new
sessions will be served from that backend. On the other hand, if
the session rate is at maxconn and you somehow managed to disable
queuing, then the proxy is likely to generate a 50x code itself
if there are no backends left to serve due to zero queuing.
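
If the concern is requests waiting too long, bounding the time spent in the
queue is usually a better knob than removing the queue entirely (values below
are only examples):

  backend bk_app
      timeout queue 5s               # give up queuing after 5s instead of disabling it
      server app1 10.0.0.1:80 check maxconn 100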



> We do use health-checking, but we can generate a lot of 503s in 2s.

Then try some other things, like:
- "on-error mark-down" [1]
- "on-marked-down shutdown-sessions" [2]


Queuing really is important.




Regards,

Lukas


[1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-on-error
[2] 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-on-marked-down