Re: Compression does not seem to work in my setup

2015-04-07 Thread Igor Cicimov
On Wed, Apr 8, 2015 at 3:47 PM, Krishna Kumar Unnikrishnan (Engineering) <
krishna...@flipkart.com> wrote:

> Hi all,
>
> I am trying to use the compression feature, but don't seem to get it
> working when
> trying to curl some text files (16K containing a-zA-Z, also smaller files
> like 1024
> bytes):
>
> $ curl -o/dev/null -D - "http://192.168.122.110:80/TEXT_16K" -H
> "Accept-Encoding: gzip"
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0HTTP/1.1 200 OK
> Server: nginx/1.6.2
> Date: Wed, 08 Apr 2015 05:00:35 GMT
> *Content-Type: application/octet-stream*
>^
>^
>
Well, compare the Content-Type of the file you are returning with the types
specified in your config:


*compression type text/html text/plain text/javascript
application/javascript application/xml text/css*
It is not on the list, is it? That is why the response comes back
uncompressed (see the sketch at the end of this message).

Content-Length: 16384
> Last-Modified: Wed, 08 Apr 2015 04:45:12 GMT
> ETag: "5524b258-4000"
> Accept-Ranges: bytes
>
> 100 16384  100 16384    0     0  4274k      0 --:--:-- --:--:-- --:--:-- 5333k
>
> My configuration file has these parameters:
>
> 
> compression algo gzip
> *compression type text/html text/plain text/javascript
> application/javascript application/xml text/css*
> server nginx-1 192.168.122.101:80 maxconn 15000 check
> server nginx-2 192.168.122.102:80 maxconn 15000 check
> .
> ..
>
> Tcpdump at the proxy shows:
>
> GET /TEXT_16K HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 192.168.122.110
> Accept: */*
> Accept-Encoding: gzip
> X-Forwarded-For: 192.168.122.1
>
>
> HTTP/1.1 200 OK
> Server: nginx/1.6.2
> Date: Wed, 08 Apr 2015 05:25:09 GMT
> Content-Type: application/octet-stream
> Content-Length: 16384
> Last-Modified: Wed, 08 Apr 2015 04:28:01 GMT
> Connection: keep-alive
> ETag: "5524ae51-4000"
> Accept-Ranges: bytes
>
> HTTP/1.1 200 OK
> Server: nginx/1.6.2
> Date: Wed, 08 Apr 2015 05:25:09 GMT
> Content-Type: application/octet-stream
> Content-Length: 16384
> Last-Modified: Wed, 08 Apr 2015 04:28:01 GMT
> Connection: keep-alive
> ETag: "5524ae51-4000"
> Accept-Ranges: bytes
>
> haproxy build info:
> HA-Proxy version 1.5.8 2014/10/31
> Copyright 2000-2014 Willy Tarreau 
>
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
> -Werror=format-security -D_FORTIFY_SOURCE=2
>   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
>
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.7
> Compression algorithms supported : identity, deflate, gzip
> Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
> Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.30 2012-02-04
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> How can I fix this? Thanks for any help,
>
> Regards,
> - KK
>
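
For illustration only, a rough sketch of the two usual ways out (backend name
and type list are made up, and this assumes the files really are plain text):
either add the type nginx actually sends to the compression list, or fix the
MIME type on the nginx side so the existing list matches.

    backend static_pool
        compression algo gzip
        # hypothetical: extend the list with the type seen on the wire ...
        compression type text/html text/plain application/octet-stream
        # ... or instead configure nginx to serve these files as text/plain
        # and keep the original type list unchanged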



-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com 
w. encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Compression does not seem to work in my setup

2015-04-07 Thread Krishna Kumar Unnikrishnan (Engineering)
Hi all,

I am trying to use the compression feature, but don't seem to get it
working when
trying to curl some text files (16K containing a-zA-Z, also smaller files
like 1024
bytes):

$ curl -o/dev/null -D - "http://192.168.122.110:80/TEXT_16K" -H
"Accept-Encoding: gzip"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Wed, 08 Apr 2015 05:00:35 GMT
Content-Type: application/octet-stream
Content-Length: 16384
Last-Modified: Wed, 08 Apr 2015 04:45:12 GMT
ETag: "5524b258-4000"
Accept-Ranges: bytes

100 16384  100 16384    0     0  4274k      0 --:--:-- --:--:-- --:--:-- 5333k

My configuration file has these parameters:


compression algo gzip
compression type text/html text/plain text/javascript
application/javascript application/xml text/css
server nginx-1 192.168.122.101:80 maxconn 15000 check
server nginx-2 192.168.122.102:80 maxconn 15000 check
.
..

Tcpdump at the proxy shows:

GET /TEXT_16K HTTP/1.1
User-Agent: curl/7.26.0
Host: 192.168.122.110
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.122.1


HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Wed, 08 Apr 2015 05:25:09 GMT
Content-Type: application/octet-stream
Content-Length: 16384
Last-Modified: Wed, 08 Apr 2015 04:28:01 GMT
Connection: keep-alive
ETag: "5524ae51-4000"
Accept-Ranges: bytes

HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Wed, 08 Apr 2015 05:25:09 GMT
Content-Type: application/octet-stream
Content-Length: 16384
Last-Modified: Wed, 08 Apr 2015 04:28:01 GMT
Connection: keep-alive
ETag: "5524ae51-4000"
Accept-Ranges: bytes

haproxy build info:
HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

How can I fix this? Thanks for any help,

Regards,
- KK


Re: [PATCH] Configurable http result codes for http-request deny

2015-04-07 Thread Willy Tarreau
Hi,

On Tue, Apr 07, 2015 at 12:03:37PM -0400, CJ Ess wrote:
> This is my first time submitting a modification to haproxy, so I would
> appreciate feedback.
> 
> We've been experimenting with using the stick tables feature in Haproxy to
> do rate limiting by IP at the edge. We know from past experience that we
> will need to maintain a whitelist because schools and small ISPs (in
> particular) have a habit of proxying a significant number of requests
> through a handful of addresses without providing x-forwarded-for to
> differentiate between actual origins. My employer has a strict "we talk to
> our customers" policy (what a unique concept!) so when we do rate limit
> someone we want to return a custom error page which explains in a positive
> way why we are not serving the requested page and how our support group will
> be happy to add them to the whitelist if they contact us.
> 
> This patch adds support for error codes 429 and 405 to Haproxy and a
> "deny_status XXX" option to "http-request deny" where you can specify which
> code is returned with 403 being the default. We really want to do this the
> "haproxy way" and hope to have this patch included in the mainline. We'll
> be happy to address any feedback on how this is implemented.

That's the good approach. At first glance your work looks fine. I'll review
it deeper probably tomorrow if time permits.

Thanks,
Willy




Re: [PATCH] Add a new log format variable "%p" that spits out the sanitized request path

2015-04-07 Thread Willy Tarreau
Hi Andrew,

On Tue, Apr 07, 2015 at 04:52:38PM -0500, Andrew Hayworth wrote:
> It's often undesirable to log query params - and in some cases, it can
> create legal compliance problems. This commit adds a new log format
> variable that logs the HTTP verb and the path requested sans query
> string (and additionally omitting the protocol). For example, the
> following HTTP request line:
> 
>   GET /foo?bar=baz HTTP/1.1
> 
> becomes:
> 
>   GET /foo
> 
> with this log format variable.
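
For illustration, a rough sketch of how the proposed variable could be used
once the patch is applied (the surrounding format string is only an example,
and %p here is the patch's proposal, not an existing keyword):

    # hypothetical custom format logging method + path without the query string
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B %p"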

Just a question, did you find any benefit in doing it with a new tag
compared to %[path] ? It may just be a matter of convenience, I'm just
wondering.

Other comments on the code below :

> +char* strip_uri_params(char *str) {
> + int spaces = 0;
> + int end = strlen(str);
> +
> + int i;
> + int path_end = end;
> + for (i = 0; i < end; i++) {
> + if (str[i] == ' ' && spaces == 0) {
> + spaces++;
> + } else if (str[i] == '?' || (str[i] == ' ' && spaces > 0)) {
> + path_end = i;
> + break;
> + }
> + }

What's the purpose of counting spaces to stop at the second one ? You
cannot have any in the path, so I'm a bit puzzled.

> + char *temp = malloc(path_end + 1);

Please avoid inline declarations, not only they're prone to many bugs,
but they don't build on some older compilers.

Also please avoid mallocs as much as possible. In your case this function
returns a string for immediate use, there's no need for allocating, you
can copy the string directly into the trash.

>  /* return the name of the directive used in the current proxy for which we're
>   * currently parsing a header, when it is known.
>   */
> @@ -1539,6 +1564,27 @@ int build_logline(struct session *s, char *dst, size_t 
> maxsize, struct list *lis
>   last_isspace = 0;
>   break;
>  
> + case LOG_FMT_PATH: // %p
> + if (tmp->options & LOG_OPT_QUOTE)
> + LOGCHAR('"');
> + uri = txn->uri ? txn->uri : "";
> + ret = encode_string(tmplog, dst + maxsize,
> +'#', url_encode_map, 
> uri);
> + if (ret == NULL || *ret != '\0')
> + goto out;
> +
> + char *sanitized = strip_uri_params(tmplog);
> + if (sanitized == NULL)
> + goto out;
> +
> + tmplog += strlen(sanitized);
> + free(sanitized);

Here I don't understand, you seem to be doing malloc+strncpy+strlen+free
just to get an integer which is the position of the question mark in the
string, which is determined very early in your strip function, is that it ?
If so, that's totally overkill and not even logical!

In this case I'd simply do something like this which looks much much
cleaner and more efficient :

case LOG_FMT_PATH: // %p
if (tmp->options & LOG_OPT_QUOTE)
LOGCHAR('"');
uri = txn->uri ? txn->uri : "";
ret = encode_string(tmplog, dst + maxsize,
   '#', url_encode_map, uri);
if (ret == NULL || *ret != '\0')
goto out;

+   uri = strchr(tmplog, '?');
+   tmplog = uri ? uri : ret;

if (tmp->options & LOG_OPT_QUOTE)
LOGCHAR('"');
last_isspace = 0;
break;

Or am I missing something ?

Thanks,
Willy




Re: [PATCH] BUG/MINOR: Display correct filename in error message

2015-04-07 Thread Willy Tarreau
Hello Alexander,

On Tue, Apr 07, 2015 at 04:02:17PM +0200, Alexander Rigbo wrote:
> Hello,
> 
> I noticed an error in the output when crl-file is non-existent (or other).
> 
> Tested with this config:
> global
> tune.ssl.default-dh-param 2048
> 
> defaults
> timeout server  10s
> timeout client  10s
> timeout connect 10s
> 
> frontend foo
> bind *: ssl crt /etc/ssl/certs/combo.pem ca-file /ca.crt
> crl-file /crlfile verify required
> default_backend bar
> 
> backend bar
> server baz 127.0.0.1:80
> 
> Gives:
> [ALERT] 096/145558 (11605) : Proxy 'foo': unable to configure CRL file
> '/ca.crt' for bind '*:' at [haproxy.conf:10].
> 
> If ca-file is not set at all it gives:
> [ALERT] 096/150029 (14284) : Proxy 'foo': unable to configure CRL file
> '(null)' for bind '*:' at [haproxy.conf:10].

Good catch! Patch applied, thank you!
willy




[PATCH] Add a new log format variable "%p" that spits out the sanitized request path

2015-04-07 Thread Andrew Hayworth
It's often undesirable to log query params - and in some cases, it can
create legal compliance problems. This commit adds a new log format
variable that logs the HTTP verb and the path requested sans query
string (and additionally omitting the protocol). For example, the
following HTTP request line:

  GET /foo?bar=baz HTTP/1.1

becomes:

  GET /foo

with this log format variable.
---
 include/types/log.h |  1 +
 src/log.c   | 46 ++
 2 files changed, 47 insertions(+)

diff --git a/include/types/log.h b/include/types/log.h
index c7e47ea..3205ce6 100644
--- a/include/types/log.h
+++ b/include/types/log.h
@@ -90,6 +90,7 @@ enum {
LOG_FMT_HDRREQUESTLIST,
LOG_FMT_HDRRESPONSLIST,
LOG_FMT_REQ,
+   LOG_FMT_PATH,
LOG_FMT_HOSTNAME,
LOG_FMT_UNIQUEID,
LOG_FMT_SSL_CIPHER,
diff --git a/src/log.c b/src/log.c
index 1a5ad25..e9c1b10 100644
--- a/src/log.c
+++ b/src/log.c
@@ -108,6 +108,7 @@ static const struct logformat_type logformat_keywords[] = {
{ "hs", LOG_FMT_HDRRESPONS, PR_MODE_TCP, LW_RSPHDR, NULL },  /* header 
response */
{ "hsl", LOG_FMT_HDRRESPONSLIST, PR_MODE_TCP, LW_RSPHDR, NULL },  /* 
header response list */
{ "ms", LOG_FMT_MS, PR_MODE_TCP, LW_INIT, NULL },   /* accept date 
millisecond */
+   { "p", LOG_FMT_PATH, PR_MODE_HTTP, LW_REQ, NULL },  /* path */
{ "pid", LOG_FMT_PID, PR_MODE_TCP, LW_INIT, NULL }, /* log pid */
{ "r", LOG_FMT_REQ, PR_MODE_HTTP, LW_REQ, NULL },  /* request */
{ "rc", LOG_FMT_RETRIES, PR_MODE_TCP, LW_BYTES, NULL },  /* retries */
@@ -161,6 +162,30 @@ struct logformat_var_args var_args_list[] = {
{  0,  0 }
 };
 
+char* strip_uri_params(char *str) {
+   int spaces = 0;
+   int end = strlen(str);
+
+   int i;
+   int path_end = end;
+   for (i = 0; i < end; i++) {
+   if (str[i] == ' ' && spaces == 0) {
+   spaces++;
+   } else if (str[i] == '?' || (str[i] == ' ' && spaces > 0)) {
+   path_end = i;
+   break;
+   }
+   }
+
+   char *temp = malloc(path_end + 1);
+   if (temp == NULL)
+   return temp;
+
+   strncpy(temp, str, path_end);
+   temp[path_end] = '\0';
+   return temp;
+}
+
 /* return the name of the directive used in the current proxy for which we're
  * currently parsing a header, when it is known.
  */
@@ -1539,6 +1564,27 @@ int build_logline(struct session *s, char *dst, size_t 
maxsize, struct list *lis
last_isspace = 0;
break;
 
+   case LOG_FMT_PATH: // %p
+   if (tmp->options & LOG_OPT_QUOTE)
+   LOGCHAR('"');
+   uri = txn->uri ? txn->uri : "";
+   ret = encode_string(tmplog, dst + maxsize,
+  '#', url_encode_map, 
uri);
+   if (ret == NULL || *ret != '\0')
+   goto out;
+
+   char *sanitized = strip_uri_params(tmplog);
+   if (sanitized == NULL)
+   goto out;
+
+   tmplog += strlen(sanitized);
+   free(sanitized);
+
+   if (tmp->options & LOG_OPT_QUOTE)
+   LOGCHAR('"');
+   last_isspace = 0;
+   break;
+
case LOG_FMT_PID: // %pid
if (tmp->options & LOG_OPT_HEXA) {
iret = snprintf(tmplog, dst + maxsize - 
tmplog, "%04X", pid);
-- 
2.1.3




Re: how to make HAproxy itself reply to a health check from another load balancer?

2015-04-07 Thread Pavlos Parissis
On 07/04/2015 09:55 PM, Florin Andrei wrote:
> Let's say HAproxy is used for a second layer of load balancers, with the
> first layer being AWS ELBs.
> 
> When you create an ELB, you can specify a health check. This should
> actually check the health of the HAproxy instances that the ELB is
> pointing at.
> 
> Is there a way to make HAproxy answer a health check from an ELB? This
> health check cannot be passed all the way to the backend web servers,
> because they all answer different URL prefixes.
> 
> 

You can use monitor-uri, here is an example

acl site_dead nbsrv(foo_backend) lt 2
monitor-uri   /site_alive
monitor fail  if site_dead

then point healthcheck from ELB to /site_alive
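
For completeness, a rough sketch of where those lines usually live (frontend,
backend and port are made-up names, not a tested config):

    frontend www
        bind :80
        acl site_dead nbsrv(foo_backend) lt 2
        monitor-uri /site_alive
        monitor fail if site_dead
        default_backend foo_backend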

Cheers,
Pavlos






signature.asc
Description: OpenPGP digital signature


how to make HAproxy itself reply to a health check from another load balancer?

2015-04-07 Thread Florin Andrei
Let's say HAproxy is used for a second layer of load balancers, with the 
first layer being AWS ELBs.


When you create an ELB, you can specify a health check. This should 
actually check the health of the HAproxy instances that the ELB is 
pointing at.


Is there a way to make HAproxy answer a health check from an ELB? This 
health check cannot be passed all the way to the backend web servers, 
because they all answer different URL prefixes.



--
Florin Andrei
http://florin.myip.org/



'acl' and 'use_backend' in defaults section?

2015-04-07 Thread Florin Andrei
I have a few ACLs that are identical for several frontends. I tried to 
define the ACLs in the defaults section, but I got an error (quote at 
the end).


Is there a way around this? I'd like to not have to repeat identical 
configuration lines for many frontends.



Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:56] : 'acl' not allowed in 
'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:57] : 'acl' not allowed in 
'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:58] : 'acl' not allowed in 
'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:59] : 'acl' not allowed in 
'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:60] : 'acl' not allowed in 
'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:62] : 'use_backend' not 
allowed in 'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:63] : 'use_backend' not 
allowed in 'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:64] : 'use_backend' not 
allowed in 'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:65] : 'use_backend' not 
allowed in 'defaults' section.
Apr  7 19:05:49 haproxy-test haproxy-systemd-wrapper: [ALERT] 096/190549 
(20038) : parsing [/etc/haproxy/haproxy.cfg:66] : 'use_backend' not 
allowed in 'defaults' section.




--
Florin Andrei
http://florin.myip.org/



[PATCH] Configurable http result codes for http-request deny

2015-04-07 Thread CJ Ess
This is my first time submitting a modification to haproxy, so I would
appreciate feedback.

We've been experimenting with using the stick tables feature in Haproxy to
do rate limiting by IP at the edge. We know from past experience that we
will need to maintain a whitelist because schools and small ISPs (in
particular) have a habit of proxying a significant number of requests
through a handful of addresses without providing x-forwarded-for to
differentiate between actual origins. My employer has a strict "we talk to
our customers" policy (what a unique concept!) so when we do rate limit
someone we want to return a custom error page which explains in a positive
way why we are not serving the requested page and how our support group will
be happy to add them to the whitelist if they contact us.

This patch adds support for error codes 429 and 405 to Haproxy and a
"deny_status XXX" option to "http-request deny" where you can specify which
code is returned with 403 being the default. We really want to do this the
"haproxy way" and hope to have this patch included in the mainline. We'll
be happy to address any feedback on how this is implemented.
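
To make the option concrete, a rough usage sketch (table size, window and
threshold are made-up values, and deny_status only exists with this patch
applied):

    frontend edge
        bind :80
        stick-table type ip size 1m expire 10m store http_req_rate(10s)
        http-request track-sc0 src
        # return 429 instead of the default 403 when the rate limit trips
        http-request deny deny_status 429 if { sc0_http_req_rate gt 100 }
        default_backend app
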
diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9a04200..daba1b9 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2612,7 +2612,8 @@ errorfile  
  yes   |yes   |   yes  |   yes
   Arguments :
 is the HTTP status code. Currently, HAProxy is capable of
-  generating codes 200, 400, 403, 408, 500, 502, 503, and 504.
+  generating codes 200, 400, 403, 405, 408, 429, 500, 502, 503, and
+  504.
 
 designates a file containing the full HTTP response. It is
   recommended to follow the common practice of appending ".http" to
diff --git a/include/types/proto_http.h b/include/types/proto_http.h
index 5a4489d..d649fdd 100644
--- a/include/types/proto_http.h
+++ b/include/types/proto_http.h
@@ -309,7 +309,9 @@ enum {
HTTP_ERR_200 = 0,
HTTP_ERR_400,
HTTP_ERR_403,
+   HTTP_ERR_405,
HTTP_ERR_408,
+   HTTP_ERR_429,
HTTP_ERR_500,
HTTP_ERR_502,
HTTP_ERR_503,
@@ -417,6 +419,7 @@ struct http_req_rule {
struct list list;
struct acl_cond *cond; /* acl condition to meet */
unsigned int action;   /* HTTP_REQ_* */
+   short deny_status; /* HTTP status to return to user 
when denying */
int (*action_ptr)(struct http_req_rule *rule, struct proxy *px, struct 
session *s, struct http_txn *http_txn);  /* ptr to custom action */
union {
struct {
@@ -484,6 +487,7 @@ struct http_txn {
unsigned int flags; /* transaction flags */
enum http_meth_t meth;  /* HTTP method */
/* 1 unused byte here */
+   short rule_deny_status; /* HTTP status from rule when denying */
short status;   /* HTTP status from the server, 
negative if from proxy */
 
char *uri;  /* first line if log needed, NULL 
otherwise */
diff --git a/src/proto_http.c b/src/proto_http.c
index 611a8c1..989d399 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -131,7 +131,9 @@ const int http_err_codes[HTTP_ERR_SIZE] = {
[HTTP_ERR_200] = 200,  /* used by "monitor-uri" */
[HTTP_ERR_400] = 400,
[HTTP_ERR_403] = 403,
+   [HTTP_ERR_405] = 405,
[HTTP_ERR_408] = 408,
+   [HTTP_ERR_429] = 429,
[HTTP_ERR_500] = 500,
[HTTP_ERR_502] = 502,
[HTTP_ERR_503] = 503,
@@ -163,6 +165,14 @@ static const char *http_err_msgs[HTTP_ERR_SIZE] = {
"\r\n"
"403 Forbidden\nRequest forbidden by 
administrative rules.\n\n",
 
+   [HTTP_ERR_405] =
+   "HTTP/1.0 405 Method Not Allowed\r\n"
+   "Cache-Control: no-cache\r\n"
+   "Connection: close\r\n"
+   "Content-Type: text/html\r\n"
+   "\r\n"
+   "405 Method Not Allowed\nA request was made of a 
resource using a request method not supported by that 
resource\n\n",
+
[HTTP_ERR_408] =
"HTTP/1.0 408 Request Time-out\r\n"
"Cache-Control: no-cache\r\n"
@@ -171,6 +181,14 @@ static const char *http_err_msgs[HTTP_ERR_SIZE] = {
"\r\n"
"408 Request Time-out\nYour browser didn't send a 
complete request in time.\n\n",
 
+   [HTTP_ERR_429] =
+   "HTTP/1.0 429 Too Many Requests\r\n"
+   "Cache-Control: no-cache\r\n"
+   "Connection: close\r\n"
+   "Content-Type: text/html\r\n"
+   "\r\n"
+   "429 Too Many Requests\nYou have sent too many 
requests in a given amount of time.\n\n",
+
[HTTP_ERR_500] =
"HTTP/1.0 500 Server Error\r\n"
"Cache-Control: no-cache\r\n"
@@ -3408,10 +3426,12 @@ resume_execution:
return HTTP_RULE_RES_STOP;
 
case HTTP_REQ_ACT_DENY:
+   txn->rule_deny

[PATCH] BUG/MINOR: Display correct filename in error message

2015-04-07 Thread Alexander Rigbo
Hello,

I noticed an error in the output when crl-file is non-existent (or other).

Tested with this config:
global
tune.ssl.default-dh-param 2048

defaults
timeout server  10s
timeout client  10s
timeout connect 10s

frontend foo
bind *: ssl crt /etc/ssl/certs/combo.pem ca-file /ca.crt
crl-file /crlfile verify required
default_backend bar

backend bar
server baz 127.0.0.1:80

Gives:
[ALERT] 096/145558 (11605) : Proxy 'foo': unable to configure CRL file
'/ca.crt' for bind '*:' at [haproxy.conf:10].

If ca-file is not set at all it gives:
[ALERT] 096/150029 (14284) : Proxy 'foo': unable to configure CRL file
'(null)' for bind '*:' at [haproxy.conf:10].

Best regards,
Alexander Rigbo


0001-BUG-MINOR-Display-correct-filename-in-error-message.patch
Description: Binary data


250 euros offerts pour parier sur le GNT à Lyon

2015-04-07 Thread ZEturf
Title: Grand National du Trot - Lyon la Soie

[HTML newsletter; only scattered text fragments survive the archive rendering. Translated from French:]

If this message does not display correctly, view our online version.
To be sure of receiving all our emails, add newslet...@email.zeturf.com to your address book.
To stop receiving messages from us, visit this page.

Forgotten password?  |  Newsletter unsubscribe  |  Responsible gaming  |  Contact us

*Offer valid for the Lyon Parilly race meeting, Wednesday 8 April 2015. See conditions on the site.
**See conditions on the site. ***Offer valid for new ZEturf customers only. See conditions on the site.
At any time, you have a right to access, modify, rectify and delete the data concerning you.

Gambling carries risks: debt, isolation... For help, call 09-74-75-13-13 (non-premium-rate call).

You must be over 18 to bet on ZEturf.








Re: global maxconn limit in pure TCP mode

2015-04-07 Thread Tom Keyser
Hi Florin,

> I suspect I cannot increase the global maxconn indefinitely. At some
> point, I'll run into some limits. What will dictate those limits? In other
> words, how should I design the instance running HAproxy to make sure I can
> increase maxconn to a very high value?

I'm far from an expert, but from lurking on this list, I can tell you that
your maxconn will be limited by available RAM on your server, and the
amount of RAM each connection consumes is constant unless you're doing SSL
termination, which it doesn't sound like you are.

Have a look at this response from Willy to a similar question a few years
ago; it should answer your question:

https://www.mail-archive.com/haproxy@formilux.org/msg03205.html
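
As a very rough, assumption-based illustration only (the per-connection
figures are ballpark, not measurements):

    global
        # each proxied connection holds roughly two bufsize buffers
        # (2 x 16 kB with the default bufsize) plus per-session and kernel
        # socket overhead, so 100k concurrent connections is on the order
        # of a few GB of RAM before SSL enters the picture
        maxconn 100000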



Keys


Re: limiting conn-curs per-ip using x-forwarded-for

2015-04-07 Thread Klavs Klavsen

Back from easter vacation :)

Baptiste wrote on 03/25/2015 10:30 AM:

Hi,

some useful examples can be taken from this blog post:
http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/

Just replace src by hdr(X-Forwarded-For).



Tried:

frontend nocache
  mode  http
..
  option  httplog
  option  accept-invalid-http-request
  stick-table  type ip size 100k expire 30s store conn_cur
  tcp-request connection reject  if { src_conn_cur ge 10 }
  tcp-request connection track-sc1  hdr(X-Forwarded-For)
..

but haproxy complains:
'tcp-request connection track-sc1' : fetch method 'hdr(X-Forwarded-For)' 
extracts information from 'HTTP request headers,HTTP response headers', 
none of which is available here


I took the example from 
http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/


:(
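
For what it's worth, a rough sketch of the direction that error message
points to (not a tested fix; hdr_ip is used only because the stick-table
key is an IP):

    frontend nocache
        mode http
        stick-table type ip size 100k expire 30s store conn_cur
        tcp-request inspect-delay 5s
        # header fetches only become available once the request is parsed,
        # so track at the content level instead of the connection level
        tcp-request content track-sc1 hdr_ip(X-Forwarded-For) if HTTP
        tcp-request content reject if { sc1_conn_cur ge 10 }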

--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

"Those who do not understand Unix are condemned to reinvent it, poorly."
  --Henry Spencer




Re: CPU saturated with 250Mbps traffic on frontend

2015-04-07 Thread Evgeniy Sudyr
Willy,

I will post results when available.

--
Evgeniy

On Mon, Apr 6, 2015 at 3:24 PM, Willy Tarreau  wrote:
> On Mon, Apr 06, 2015 at 02:54:13PM +0200, Evgeniy Sudyr wrote:
>> this is server with 2x Intel I350-T4 1G Quad port NICs, where on first
>> card each NIC is connected to uplink provider and 2nd NIC 4 ports are
>> used for trunk interface with lacp connected to internal 1Gb switch
>> with lacp configured as well. I've tested uplinks and internal link
>> with iperf and was able to see at least 900Mbps for TCP tests.
>
> You may want to retry without LACP. A long time ago on Linux, the bonding
> driver used not to propagate NIC-specific optimizations and resulted in
> worse performance sometimes than without. Also I don't know if you're
> using VLANs, and I don't know if openbsd supports checksum offloading
> on VLANs, but that could as well be something which limits the list of
> possible optimizations/offloadings that normally result in lower CPU
> usage.
>
>> Card seems to be OK. Haproxy definitely needs to be moved to separate
>> servers in inside network.
>
> Makes sense. Then make sure to use a distro with a kernel 3.10 or above,
> that's where you'll get the best performance.
>
>> Btw, where did Pavlos report his test results? On this list or somewhere else?
>
> It was posted one or two weeks ago on this list, yes. I must say I was
> quite happy to see someone else post results in the order of magnitude
> I encounter in my own tests, because at least I won't be suspected of
> cheating anymore :-)
>
> Cheers,
> Willy
>



-- 
--
With regards,
Eugene Sudyr



Re: Trouble with getting ocsp response to work

2015-04-07 Thread Jarno Huuskonen
Hi,

On Mon, Apr 06, Vasileios Tzimourtos wrote:
> It was the issue that you mentioned with the 300sec SKEW. I compiled
> haproxy with a smaller value (30 :) ) and it returns the response :)

30s is probably too small: if the client's clock is off by > 30s then
it's possible that haproxy sends an OCSP response that the client thinks has
expired.

Maybe HARICA could issue responses that are valid for longer than 5m ?
Or, if this is not possible, maybe something like 200 for SKEW and
update responses every 90s (< 100s) ?

-Jarno
 
> The test with the openssl command that you mentioned returns Verified OK.
> The problem was the reference to the past
> 
> Finally, to ease your curiosity, the CA is HARICA ( harica.gr )
> 
> Thanks again!
> 
> 
> >On 6/4/2015 12:50 PM, Jarno Huuskonen wrote:
> >Hi,
> >
> >On Mon, Apr 06, Vasileios Tzimourtos wrote:
> >>/usr/bin/openssl ocsp -noverify -issuer $ROOT_CERT_FILE -cert
> >>$SERVER_CERT_FILE -url "$OCSP_URL" -no_nonce -header Host `echo
> >>"$OCSP_URL" | cut -d"/" -f3` -respout $OCSP_FILE
> >>echo "set ssl ocsp-response $(/usr/bin/base64 -w 1
> >>$OCSP_FILE)" | socat $HAPROXY_SOCKET stdio
> >Can you run openssl ocsp w/out -noverify (and maybe -VAfile) ?
> >So something like:
> >/usr/bin/openssl ocsp -issuer $ROOT_CERT_FILE \
> >  -cert $SERVER_CERT_FILE -url "$OCSP_URL" -no_nonce \
> >  -header Host `echo "$OCSP_URL" | cut -d"/" -f3` -respout $OCSP_FILE \
> >  [ -VAfile $ROOT_CERT_FILE [-validity_period 300] ]
> >
> >>Running the above script returns that all is OK and that ocsp
> >>response was updated
> >Do you get any messages about ocsp response if you reload haproxy/check
> >configuration sometime after creating the ocsp response ?
> >>/etc/haproxy/certs/mycertificate.crt.pem: good
> >>This Update: Apr  6 08:28:46 2015 GMT
> >>Next Update: Apr  6 08:33:46 2015 GMT
> >>OCSP Response updated!
> >Out of curiosity which CA issues responses for only 5min ?
> >
> >Haproxy defaults.h has:
> >#define OCSP_MAX_RESPONSE_TIME_SKEW 300
> >
> >In commit 4f3c87a5d942d4d0649c35805ff4e335970b87d4 there's:
> >"   Haproxy stops serving OCSP response if nextupdate date minus
> > the supported time skew (#define OCSP_MAX_RESPONSE_TIME_SKEW) is
> > in the past.
> >"
> >
> >Your problem may be that the ocsp response is valid for 5 min (300s).
> >A quick check to test this could be to compile haproxy with a
> >different OCSP_MAX_RESPONSE_TIME_SKEW (< 300) ?
> >
> >-Jarno
> >
> 
> -- 
> Vassilis Tzimourtos
> 
> 

-- 
Jarno Huuskonen