Re: [PATCH] MEDIUM: init: allow directory as argument of -f

2016-05-10 Thread Willy Tarreau
Hi Maxime,

On Wed, May 11, 2016 at 12:37:36AM +0200, Maxime de Roucy wrote:
> Hello Willy,
> 
> I don't receive all the mails from haproxy@formilux.org.
> For example I didn't receive:
> http://article.gmane.org/gmane.comp.web.haproxy/27795

Well, this one was sent to you directly by Cyril, so you should have
received it.

> However I am sure I sent a blank mail to haproxy+subscr...@formilux.org
>  (I rechecked).
> Can you check on the server side ?

Indeed you're not subscribed here. You might have some anti-spam filtering
which blocked the validation e-mail. Some people have already had trouble
with gmail's spam filtering. It seems to vary in time, but we have a bit
more than 300 subscribers using gmail here so I guess overall it works.
Ah and there's this well-known rant from Linus about a huge amount of
false positives; lots of people are experiencing two-digit false
positive rates:

   https://plus.google.com/+LinusTorvalds/posts/DiG9qANf5PA

I've subscribed you by hand now.

> > > I forgot to free the memory allocated at 'filename = calloc' (why
> > > valgrind
> > > didn't warn...). Forget this patch. I will send another one
> > > tomorrow.
> > 
> > Yes I noticed, and there's this one as well :
> > 
> > > > +   wlt = calloc(1, sizeof(*wlt));
> > > > +   tmp_str = strdup(filename);
> > > > +   if (!wlt || !tmp_str) {
> > > > +   Alert("Cannot load configuration files %s : out of memory.\n",
> > > > +         filename);
> > > > +   exit(1);
> > > > +   }
> > 
> > If tmp_str fails and wlt succeeds, wlt is not freed.
> 
> If tmp_str fails and wlt succeeds we still get the Alert and everything
> is freed on exit.

Yes I know, but as I said, if/when such code later moves to its own
function, this function might initially decide to exit, then later let the
caller take the decision, and one day all of this will be used dynamically
or from the CLI and then people will discover a memory leak. And there are
the valgrind users who send
patches very often to fix such warnings that annoy them. I mean we spent a
lot of time killing some such old issues that were not bugs initially and
that became bugs later, so we try to be careful. We don't want to be the
next openssl if you see what I mean :-)

> Anyway the problem isn't here anymore as I got rid of strdup.
> See the end of this mail.

OK.

> I created the function "void cfgfiles_expand_directories(void)", but not
> the "load_config_file" one.
> I am not accustomed to using goto and it's hard for me to use it here
> as I don't actually see the point of it (in
> cfgfiles_expand_directories).

That's the best way to deal with error unrolling. I'm sad that teachers at
school tell students not to use it, because:
  1) it's what the compiler implements anyway for all other constructs
  2) it's the only safe way to perform unrolling that resists code
     additions.

We used to have some leaks in the past because we were not using it. When
you have some session initialization code like this :

s = malloc(sizeof(*s));
if (!s)
return;

s->req = malloc(sizeof(*s->req));
if (!s->req) {
   free(s);
   return;
}

s->res = malloc(sizeof(*s->res));
if (!s->res) {
   free(s->req);
   free(s);
   return;
}

s->txn = malloc(sizeof(*s->txn));
if (!s->txn) {
   free(s->res);
   free(s->req);
   free(s);
   return;
}

s->log = malloc(sizeof(*s->log));
if (!s->log) {
   free(s->txn);
   free(s->res);
   free(s->req);
   free(s);
   return;
}

s->req_capture = malloc(sizeof(*s->req_capture));
if (!s->req_capture) {
   free(s->log);
   free(s->txn);
   free(s->res);
   free(s->req);
   free(s);
   return;
}

s->res_capture = malloc(sizeof(*s->res_capture));
if (!s->res_capture) {
   free(s->req_capture);
   free(s->log);
   free(s->txn);
   free(s->res);
   free(s->req);
   free(s);
   return;
}

... and so on for 10 entries ...

Then you may already have bugs above due to the inevitable copy-paste,
and once you insert a new field in the middle (eg: s->vars) you're
pretty sure that you will miss it in one of the next "if" blocks because
they are never as clear as above but themselves enclosed within other
"if" blocks. And when you need to switch allocation order, that's even
worse. But the horrible thing above can be safely turned into this :

s = malloc(sizeof(*s));
if (!s)
goto fail_s;

s->req = malloc(sizeof(*s->req));
if (!s->req)
goto fail_req;

s->res = malloc(sizeof(*s->res));
if (!s->res)
goto fail_res;

s->txn = malloc(sizeof(*s->txn));
if (!s->txn)
goto fail_txn;
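The example is cut off above; for completeness, here is a minimal
standalone sketch (struct fields and label names are assumed, this is not
haproxy's actual session code) of how such a goto chain typically ends,
with the unwind labels listed in reverse allocation order:

```c
#include <assert.h>
#include <stdlib.h>

struct session {
    void *req, *res, *txn;
};

/* Each failure jumps to a label that frees exactly what was already
 * allocated; inserting a new field only requires adding one allocation
 * and one label, at matching positions. */
static struct session *session_new(void)
{
    struct session *s = malloc(sizeof(*s));
    if (!s)
        goto fail_s;

    s->req = malloc(16);
    if (!s->req)
        goto fail_req;

    s->res = malloc(16);
    if (!s->res)
        goto fail_res;

    s->txn = malloc(16);
    if (!s->txn)
        goto fail_txn;

    return s;

    /* unwind in reverse order of allocation */
 fail_txn:
    free(s->res);
 fail_res:
    free(s->req);
 fail_req:
    free(s);
 fail_s:
    return NULL;
}
```

Because the unwinding mirrors the allocation order, each label only has to
free what was allocated before its own step, so reordering or inserting
allocations cannot silently leak.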




HA Proxy - Dummy Frontend redirection

2016-05-10 Thread Lucas Teligioridis
Hi,



Is the following configuration viable and a workable solution? Or am I
going to run into some cookie/session persistent issues?

The idea behind this is so I can separate all my services into different
frontends and have a cleaner configuration of settings to use.



Should the http-keep-alive option be used if this works, or do I need the
http-server-close option so that the connection between the dummy
frontend and backends is severed?

Any help or guidance would be appreciated.



…

#####################
# CONTENT SWITCHING #
#####################



frontend FT-ALL-HTTP

bind 192.168.2.120:80

mode http

redirect scheme https code 301 if !{ ssl_fc }



frontend FT-CS-HTTPS

bind 192.168.2.120:443 ssl crt /path/to/cert.pem

mode http



# Rules for header inspections

use_backend BK-ONE-CS if { hdr(Host) -i example.one.com.au }

use_backend BK-TWO-CS if { hdr(Host) -i example.two.com.au }

use_backend BK-THREE-CS if { hdr(Host) -i example.three.com.au }



backend BK-ONE-CS

option http-keep-alive

option forwardfor

server DUMMY1 192.168.2.120:1443 check ssl verify none



backend BK-TWO-CS

option http-keep-alive

option forwardfor

server DUMMY2 192.168.2.120:2443 check ssl verify none



backend BK-THREE-CS

option http-keep-alive

option forwardfor

server DUMMY3 192.168.2.120:3443 check ssl verify none





####################
# ROUTED FRONTENDS #
####################





frontend FT-ONE-HTTPS

bind 192.168.2.120:1443



frontend FT-TWO-HTTPS

bind 192.168.2.120:2443



frontend FT-THREE-HTTPS

bind 192.168.2.120:3443

…


Kind Regards,

Lucas


Re: [PATCH] MEDIUM: init: allow directory as argument of -f

2016-05-10 Thread Maxime de Roucy
Hello Willy,

I don't receive all the mails from haproxy@formilux.org.
For example I didn't receive:
http://article.gmane.org/gmane.comp.web.haproxy/27795

However I am sure I sent a blank mail to haproxy+subscr...@formilux.org
 (I rechecked).
Can you check on the server side ?

> > I forgot to free the memory allocated at 'filename = calloc' (why
> > valgrind
> > didn't warn...). Forget this patch. I will send another one
> > tomorrow.
> 
> Yes I noticed, and there's this one as well :
> 
> > > +   wlt = calloc(1, sizeof(*wlt));
> > > +   tmp_str = strdup(filename);
> > > +   if (!wlt || !tmp_str) {
> > > +   Alert("Cannot load configuration files %s : out of memory.\n",
> > > +         filename);
> > > +   exit(1);
> > > +   }
> 
> If tmp_str fails and wlt succeeds, wlt is not freed.

If tmp_str fails and wlt succeeds we still get the Alert and everything
is freed on exit.
Anyway the problem isn't here anymore as I got rid of strdup.
See the end of this mail.

> Overall I still think that writing this into a dedicated function
> will make the controls and unrolling easier. I'd suggest something
> like this which is much more conventional, auditable and maintainable
> :
> 
> /* loads config file/dir , returns 0 on failure with errmsg
>  * filled with an error message.
>  */
> int load_config_file(const char *arg, char **errmsg)
> {
> ...
> wlt = calloc(1, sizeof(*wlt));
> if (!wlt) {
>  memprintf(errmsg, "out of memory");
>  goto fail_wlt;
> }
> 
> tmp_str = strdup(filename);
> if (!tmp_str) {
>  memprintf(errmsg, "out of memory");
>  goto fail_tmp_str;
> }
> ...
> return 1;
> ...
> fail_tmp_str:
> free(wlt);
> fail_wlt:
> ...
> return 0;
> }
> 
> Then in init() :
> 
> if (!load_config_file(argv, &errmsg)) {
> Alert("Cannot load configuration files %s : %s.\n", filename,
> errmsg);
> exit(1);
> }

I created the function "void cfgfiles_expand_directories(void)", but not
the "load_config_file" one.
I am not accustomed to using goto and it's hard for me to use it here
as I don't actually see the point of it (in
cfgfiles_expand_directories).

It doesn't reduce the number of lines and, as after every alert we call
exit, there is no need to clean anything.

> Also, please don't use sprintf() but snprintf() as I mentioned in
> the earlier mail.

I used sprintf at first because the man page says snprintf is not C89
compatible:
  fprintf(), printf(), sprintf(), vprintf(), vfprintf(), vsprintf(): 
POSIX.1-2001, POSIX.1-2008, C89, C99.
  snprintf(), vsnprintf(): POSIX.1-2001, POSIX.1-2008, C99.

and the CONTRIBUTING file says :
  It is important to keep in mind that certain facilities offered by
  recent versions must not be used in the code :
    - …
    - in general, anything which requires C99 (such as declaring
      variables in "for" statements)

I changed it to snprintf, yet I didn't check the output as it shouldn't
be able to fail (the string size == copied size == calloc size).
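Checking is cheap, though: snprintf() returns the length it *would* have
written, so a return value greater than or equal to the buffer size means
the output was truncated. A small illustrative helper (not taken from the
patch, names assumed):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build "dir/file" into a fixed buffer and detect truncation.
 * Returns the number of characters written, or -1 on error or
 * if the result did not fit. */
static int build_path(char *dst, size_t dstsz,
                      const char *dir, const char *file)
{
    int ret = snprintf(dst, dstsz, "%s/%s", dir, file);

    if (ret < 0 || (size_t)ret >= dstsz)
        return -1; /* output error or truncated */
    return ret;
}
```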

Here is a new version of the patch, only for the src/haproxy.c file.
I will send a complete patch once everything is settled.
I also free the memory allocated at the "wlt = calloc(1, sizeof(*wlt));"
line, in the deinit function.
I don't think it's necessary as deinit is followed by exit, but I
thought it would be cleaner as we also free the cfg_cfgfiles elements.

Thank you very much for all your comments !
Maxime de Roucy

#

diff --git a/src/haproxy.c b/src/haproxy.c
index 0c223e5..3fae428 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -33,10 +33,12 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -423,7 +425,7 @@ void usage(char *name)
 {
display_version();
fprintf(stderr,
-   "Usage : %s [-f ]* [ -vdV"
+   "Usage : %s [-f ]* [ -vdV"
"D ] [ -n  ] [ -N  ]\n"
"[ -p  ] [ -m  ] [ -C  ] [-- 
*]\n"
"-v displays version ; -vv shows known build options.\n"
@@ -551,6 +553,95 @@ void dump(struct sig_handler *sh)
pool_gc2();
 }
 
+/* This function checks if cfg_cfgfiles contains directories.
+ * If it finds one, it adds all the files (and only the files) it contains
+ * to cfg_cfgfiles in place of the directory (and removes the directory).
+ * It adds the files in lexical order.
+ */
+void cfgfiles_expand_directories(void)
+{
+   struct wordlist *wl, *wlb;
+
+   list_for_each_entry_safe(wl, wlb, &cfg_cfgfiles, list) {
+   struct stat file_stat;
+   struct dirent **dir_entries;
+

Re: dynamically choosing back-end port

2016-05-10 Thread Derek Brown
Sure.

I have a setup where we're using HAProxy to front hundreds of different
services, each service running on a different port.

I can figure out from the request (for example, the SNI information), which
service we want to use.  However, for maintenance of the haproxy config
file,
it's very desirable to not have several hundred back-ends.

A frontend with a server configuration whose port is dynamically chosen seems
ideal.  It would be the logical equivalent of

server svc1 host:1001 if svc_1
server svc2 host:1002 if svc_2

etc.

Thanks,

On Tue, May 10, 2016 at 2:53 PM, Baptiste  wrote:

> On Tue, May 10, 2016 at 8:13 PM, Derek Brown 
> wrote:
> > Hello-
> >
> > I am trying to write a configuration which will allow me to choose the
> > back-end port dynamically.
> >
> > Specifically, I'd like to listen on port 443, and then choose the backend
> > port based on an http header
> > in the request.  Something like
> >
> > frontend myserver
> > bind 443
> > mode http
> >
> > server real-server 192.168.1.1:req.hdr(X-My-Header)
> >
> > --OR--
> >server realserver 192.168.1.1:%[req.ssl_sni,lower,map(mapfile)]
> >
> >
> > where mapfile contains
> > hosta.domain.com 1001
> > hostb.domain.com 1002
> >
> > or similar.
> >
> > Is there any facility which would allow this, including the new(er) Lua
> > capabilities?
> >
> > Thanks, in advance
>
>
> Hi Derek,
>
> Could you please explain us your use case?
>
> Baptiste
>




dynamically choosing back-end port

2016-05-10 Thread Derek Brown
Hello-

I am trying to write a configuration which will allow me to choose the
back-end port dynamically.

Specifically, I'd like to listen on port 443, and then choose the backend
port based on an http header
in the request.  Something like

frontend myserver
bind 443
mode http

server real-server 192.168.1.1:req.hdr(X-My-Header)

--OR--
   server realserver 192.168.1.1:%[req.ssl_sni,lower,map(mapfile)]


where mapfile contains
hosta.domain.com 1001
hostb.domain.com 1002

or similar.

Is there any facility which would allow this, including the new(er) Lua
capabilities?

Thanks, in advance





[ANNOUNCE] haproxy-1.5.18

2016-05-10 Thread Willy Tarreau
Hi,

HAProxy 1.5.18 was released on 2016/05/10. It added 15 new commits
after version 1.5.17.

This version fixes several painful bugs that were recently discussed
here on the list, oscillating between random spikes of 100% CPU load
and timeouts when downloading files between 2 and 4 GB (+ N * 4G) when
using content-length. Note that the last one was introduced in 1.5.17
as part of the fix for one of the former 100% CPU issues. Another one
is of interest for users of nbproc, as there was an oddity in the way
the maxaccept value is computed when dealing with multiple processes.
In short, a listener bound to a single process would still see its
maxaccept divided by the global number of processes. It's not much
visible for people using nbproc 2, but those running with nbproc 32
didn't find it fun. The other issue where some tools built with Go 1.6
were disabling signals was worked around despite not being an haproxy
bug. It used to prevent old processes from stopping during a reload!
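To illustrate the maxaccept arithmetic with assumed numbers (a default
maxaccept of 64 is an assumption here, not a value taken from this
announcement): dividing by the global process count instead of the number
of processes the listener is actually bound to starves single-process
listeners:

```c
#include <assert.h>

/* Before the fix: a listener bound to a single process still saw
 * its maxaccept divided by the global nbproc. */
static int old_maxaccept(int maxaccept, int global_nbproc)
{
    return maxaccept / global_nbproc;
}

/* After the fix: divide by the number of processes the listener
 * is actually bound to. */
static int new_maxaccept(int maxaccept, int bound_procs)
{
    return maxaccept / bound_procs;
}
```

Under these assumed values, with nbproc 32 a single-process listener would
accept only 2 connections per wakeup instead of 64, which matches why
nbproc 2 users barely noticed while nbproc 32 users did.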

People who use 1.5 definitely need to upgrade to get rid of all this
mess, and more importantly those running 1.5.17 if they try to serve
large files. Just as I stated for 1.6, considering the severity of some
of the bugs, there's no point wasting time investigating bug reports for
1.5 versions older than 1.5.18 from now on.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.5/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.5.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.5.git
   Changelog: http://www.haproxy.org/download/1.5/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
  - DOC: Clarify IPv4 address / mask notation rules
  - CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
  - BUG/MEDIUM: fix maxaccept computation on per-process listeners
  - BUG/MINOR: listener: stop unbound listeners on startup
  - BUG/MINOR: fix maxaccept computation according to the frontend process range
  - MEDIUM: unblock signals on startup.
  - BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
  - BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
  - BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
  - MINOR: channel: add new function channel_congested()
  - BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead 
client
  - BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
  - BUG/MEDIUM: stream: ensure the SI_FL_DONT_WAKE flag is properly cleared
  - BUG/MEDIUM: channel: fix inconsistent handling of 4GB-1 transfers
  - MINOR: stats: fix typo in help messages
---



[ANNOUNCE] haproxy-1.6.5

2016-05-10 Thread Willy Tarreau
Hi,

HAProxy 1.6.5 was released on 2016/05/10. It added 53 new commits
after version 1.6.4.

This version fixes several painful bugs that were recently discussed
here on the list, oscillating between random spikes of 100% CPU load
and frozen transfers depending on the fix. Another one is of interest
for users of nbproc, as there was an oddity in the way the maxaccept
value is computed when dealing with multiple processes. In short, a
listener bound to a single process would still see its maxaccept divided
by the global number of processes. It's not much visible for people using
nbproc 2, but those running with nbproc 32 didn't find it fun.

Some of the CLI commands could return truncated data, Cyril fixed this
(show backends and show servers state). There were some DNS fixes that
Vincent worked on. I also happened to spot and fix a bug which could
have been responsible for the zombie processes some users have been
facing and for some of the CLOSE_WAIT state affecting some peers
connections. Ah and this other issue where some tools built with Go 1.6
were disabling signals was worked around. It used to prevent old processes
from stopping during a reload! I also introduced a minor regression in
1.6.4 when fixing the crash when logging layer 7 info at the connection
level. Some HTTP fields would report "", it was fixed here.

I'd say that all users of 1.6 definitely need to upgrade. At least at
this point considering the severity of some of the bugs, there's no point
wasting time investigating bug reports for 1.6 versions older than 1.6.5.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.6/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.6.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.6.git
   Changelog: http://www.haproxy.org/download/1.6/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
  - BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
  - BUILD: namespaces: fix a potential build warning in namespaces.c
  - DOC: add encoding to json converter example
  - BUG/MINOR: conf: "listener id" expects integer, but its not checked
  - DOC: Clarify tunes.vars.xxx-max-size settings
  - BUG/MEDIUM: peers: fix incorrect age in frequency counters
  - BUG/MEDIUM: Fix RFC5077 resumption when more than TLS_TICKETS_NO are present
  - BUG/MAJOR: Fix crash in http_get_fhdr with exactly MAX_HDR_HISTORY headers
  - BUG/MINOR: lua: can't load external libraries
  - DOC: "addr" parameter applies to both health and agent checks
  - DOC: timeout client: pointers to timeout http-request
  - DOC: typo on stick-store response
  - DOC: stick-table: amend paragraph blaming the loss of table upon reload
  - DOC: typo: ACL subdir match
  - DOC: typo: maxconn paragraph is wrong due to a wrong buffer size
  - DOC: regsub: parser limitation about the inability to use closing square 
brackets
  - DOC: typo: req.uri is now replaced by capture.req.uri
  - DOC: name set-gpt0 mismatch with the expected keyword
  - BUG/MEDIUM: stick-tables: some sample-fetch doesn't work in the connection 
state.
  - DOC: fix "needed" typo
  - BUG/MINOR: dns: inapropriate way out after a resolution timeout
  - BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
  - BUG/MINOR : allow to log cookie for tarpit and denied request
  - OPTIM/MINOR: session: abort if possible before connecting to the backend
  - BUG/MEDIUM: trace.c: rdtsc() is defined in two files
  - BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
  - BUG/MINOR: cfgparse: couple of small memory leaks.
  - BUG/MEDIUM: sample: initialize the pointer before parse_binary call.
  - DOC: fix discrepancy in the example for http-request redirect
  - DOC: Clarify IPv4 address / mask notation rules
  - CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
  - BUG/MEDIUM: fix maxaccept computation on per-process listeners
  - BUG/MINOR: listener: stop unbound listeners on startup
  - BUG/MINOR: fix maxaccept computation according to the frontend process range
  - MEDIUM: unblock signals on startup.
  - BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
  - BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
  - BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
  - BUG/MEDIUM: log: fix risk of segfault when logging HTTP fields in TCP mode
  - BUG/MEDIUM: lua: protects the upper boundary of the argument list for 
converters/fetches.
  - BUG/MINOR: log: fix a typo that would cause %HP to log 
  - MINOR: channel: add new function channel_congested()
  - BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead 
client
  - BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
  - 

[ANNOUNCE] haproxy-1.7-dev3

2016-05-10 Thread Willy Tarreau
Hi,

HAProxy 1.7-dev3 was released on 2016/05/10. It added 101 new commits
after version 1.7-dev2.

This is mainly a bugfix release so that everyone can play with it. The
last few weeks have been a bit painful with a few unkillable bugs and your
beloved maintainer introducing regressions when trying to fix them. But the
nice news is that some of them have been alive for several years (since
1.4-dev for some).

Aside the numerous bugs, there were a few updates to the Lua core making
it possible from Lua to consult statistics. I know that Thierry managed
to reimplement the equivalent of the stats page in Lua, but I can't go into
the details as I don't understand everything there, it's too high-level
for me :-)

Those who currently test 1.7-dev should definitely update (though most
often they're on a snapshot already). If you found no interest in 1.7 yet,
this one will not change your mind. Please refer to the complete changelog
below for more details.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.7/src/
   Git repository   : http://git.haproxy.org/git/haproxy.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy.git
   Changelog: http://www.haproxy.org/download/1.7/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
  - MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.
  - BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
  - BUILD: namespaces: fix a potential build warning in namespaces.c
  - MINOR: da: Using ARG12 macro for the sample fetch and the convertor.
  - DOC: add encoding to json converter example
  - BUG/MINOR: conf: "listener id" expects integer, but its not checked
  - DOC: Clarify tunes.vars.xxx-max-size settings
  - CLEANUP: chunk: adding NULL check to chunk_dup allocation.
  - CLEANUP: connection: fix double negation on memcmp()
  - BUG/MEDIUM: peers: fix incorrect age in frequency counters
  - BUG/MEDIUM: Fix RFC5077 resumption when more than TLS_TICKETS_NO are present
  - BUG/MAJOR: Fix crash in http_get_fhdr with exactly MAX_HDR_HISTORY headers
  - BUG/MINOR: lua: can't load external libraries
  - BUG/MINOR: prevent the dump of uninitialized vars
  - CLEANUP: map: it seems that the map were planed to be chained
  - MINOR: lua: move class registration facilities
  - MINOR: lua: remove some useless checks
  - CLEANUP: lua: Remove two same functions
  - MINOR: lua: refactor the Lua object registration
  - MINOR: lua: precise message when a critical error is catched
  - MINOR: lua: post initialization
  - MINOR: lua: Add internal function which strip spaces
  - MINOR: lua: convert field to lua type
  - DOC: "addr" parameter applies to both health and agent checks
  - DOC: timeout client: pointers to timeout http-request
  - DOC: typo on stick-store response
  - DOC: stick-table: amend paragraph blaming the loss of table upon reload
  - DOC: typo: ACL subdir match
  - DOC: typo: maxconn paragraph is wrong due to a wrong buffer size
  - DOC: regsub: parser limitation about the inability to use closing square 
brackets
  - DOC: typo: req.uri is now replaced by capture.req.uri
  - DOC: name set-gpt0 mismatch with the expected keyword
  - MINOR: http: sample fetch which returns unique-id
  - MINOR: dumpstats: extract stats fields enum and names
  - MINOR: dumpstats: split stats_dump_info_to_buffer() in two parts
  - MINOR: dumpstats: split stats_dump_fe_stats() in two parts
  - MINOR: dumpstats: split stats_dump_li_stats() in two parts
  - MINOR: dumpstats: split stats_dump_sv_stats() in two parts
  - MINOR: dumpstats: split stats_dump_be_stats() in two parts
  - MINOR: lua: dump general info
  - MINOR: lua: add class proxy
  - MINOR: lua: add class server
  - MINOR: lua: add class listener
  - BUG/MEDIUM: stick-tables: some sample-fetch doesn't work in the connection 
state.
  - MEDIUM: proxy: use dynamic allocation for error dumps
  - CLEANUP: remove unneeded casts
  - CLEANUP: uniformize last argument of malloc/calloc
  - DOC: fix "needed" typo
  - BUG/MINOR: dumpstats: fix write to global chunk
  - BUG/MINOR: dns: inapropriate way out after a resolution timeout
  - BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
  - CLEANUP: proto_http: few corrections for gcc warnings.
  - BUG/MINOR: DNS: resolution structure change
  - BUG/MINOR : allow to log cookie for tarpit and denied request
  - BUG/MEDIUM: ssl: rewind the BIO when reading certificates
  - OPTIM/MINOR: session: abort if possible before connecting to the backend
  - DOC: http: rename the unique-id sample and add the documentation
  - BUG/MEDIUM: trace.c: rdtsc() is defined in two files
  - BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
  - BUG/MINOR: server: risk of over reading the pref_net array.
  - BUG/MINOR: cfgparse: couple of small 

Re: AWS ELB with SSL backend adds proxy protocol inside SSL stream

2016-05-10 Thread Hector Rivas Gandara
Hello,

On 10 May 2016 at 14:23, Jonathan Matthews  wrote:

> On 5 May 2016 at 12:11, Hector Rivas Gandara
>  wrote:
>>  * If not, is there a better way to 'chain' the config as I did above.
> Take a look at the "abns@" syntax and feature documented here:
> https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#bind.
> It's excellent for HAP->HAP links, as you're using. I'm using it in
> production *inside* Cloud Foundry, for the record :-)

I did not try the `abns@` thing because I did not really understand
it, but I think it is a nice proposal.

Our case is also for Cloud Foundry.

> As an aside, I'd be interested in even a brief summary of how/if you
> resolved your problem, given that I've not seen it described on the
> list before. I wonder if you're the first to run into this specific
> problem ...

As commented, I implemented it first using two frontends chained:

https://github.com/alphagov/paas-haproxy-release/commit/394a7ccf4dfe9b495f671bd3f971e4b91653e58b

Then we discussed it internally and we decided to drop the requirement of
encrypting the traffic between the ELB and HAProxy for the time being.



-- 
Regards
Hector Rivas | GDS / Multi-Cloud PaaS



Re: Stale UNIX sockets after reload

2016-05-10 Thread Willy Tarreau
On Tue, May 10, 2016 at 04:01:24PM +0200, Mehdi Ahmadi wrote:
> In the case of `stop` - I imagine that the stale / former sockets can be
> listed using:
> `sudo lsof | grep SOMETHING` ?

No since there's no more process. What you see is just a file system
entry. If you try to connect to it using socat, you'll see "connection
refused" which indicates there's nobody there anymore.

> I'm wondering if additional shell-level checks / cleanup can be done in
> such cases where related PIDs would be killed? - or perhaps there's a core
> part to how sockets are created and managed that I'm missing.

It could, but it's dangerous as the script would have to guess the paths
correctly from parsing the haproxy config... Or you could put all of them
into a dedicated subdir and remove them on stop. But even then it will not
scale well with multiple instances.

I'd really suggest to place them somewhere it cannot bother you and ignore
them. /var/run is quite common for this, example :

  $ find /var/ -type s 2>/dev/null 
  /var/run/acpid.socket
  /var/run/rpcbind.sock
  /var/run/dbus/system_bus_socket
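The "connection refused" behaviour described above follows from how UNIX
listening sockets are usually set up. A sketch of the common daemon
pattern (assumed names, not haproxy's actual code): the path is only a
filesystem entry, and the usual approach is to unlink() any stale entry
before bind() — which a stopped, chrooted process can no longer do for a
path outside its chroot:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Bind a UNIX stream socket at the given path, dropping any stale
 * entry first. Returns the listening fd, or -1 on error. A connect()
 * to a stale path fails with ECONNREFUSED because no process is
 * listening behind the filesystem entry. */
static int bind_unix(const char *path)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    unlink(path); /* remove a stale entry left by a previous instance */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

This is why, on reload, the *new* process can clean up the old socket (it
re-binds the same path), while a plain stop leaves the entry behind.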

Willy




Re: Stale UNIX sockets after reload

2016-05-10 Thread Mehdi Ahmadi
In the case of `stop` - I imagine that the stale / former sockets can be
listed using:
`sudo lsof | grep SOMETHING` ?

I'm wondering if additional shell-level checks / cleanup can be done in
such cases where related PIDs would be killed? - or perhaps there's a core
part to how sockets are created and managed that I'm missing.

Thanks for the input Willy! :-)


On Tue, May 10, 2016 at 2:15 PM, Willy Tarreau  wrote:

> On Mon, May 09, 2016 at 04:12:32PM +0200, Pavlos Parissis wrote:
> > On 09/05/2016 02:26 , Christian Ruppert wrote:
> > > Hi,
> > >
> > > it seems that HAProxy does not remove the UNIX sockets after reloading
> > > (also restarting?) even though they have been removed from the
> > > configuration and thus are stale afterwards.
> > > At least 1.6.4 seems to be affected. Can anybody else confirm that?
> It's
> > > a multi-process setup in this case but it also happens with binds bound
> > > to just one process.
> > >
> >
> > I can confirm this behavior. I don't think it is easy for haproxy to
> > clean up stale UNIX socket files, as their names can change or be stored
> > in a directory which is shared with other services.
>
> In fact it's not exact, it does its best for this. The thing is that upon
> a reload it's the new process which takes care of removing the old socket
> and it does so pretty well. But if you perform a stop there's no way to
> do it if the process is chrooted. In practice many daemons have the same
> issue, that's how you end up with /dev/log even when syslogd is not running
> or with /tmp/.X11-unix/X0 just to give a few examples.
>
> Hoping this helps,
> Willy
>
>
>


Re: AWS ELB with SSL backend adds proxy protocol inside SSL stream

2016-05-10 Thread Jonathan Matthews
Hello Hector -

On 5 May 2016 at 12:11, Hector Rivas Gandara wrote:
>  * If not, is there a better way to 'chain' the config as I did above.

I don't have any insight into the protocol layering problem you're
having, I'm afraid, but if you do end up with the chained solution you
describe, I have a suggestion.

Take a look at the "abns@" syntax and feature documented here:
https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#bind.
It's excellent for HAP->HAP links, as you're using. I'm using it in
production *inside* Cloud Foundry, for the record :-)
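
For anyone finding this in the archives, here is a minimal sketch of the kind
of HAP->HAP link meant here. All names, ports and paths are invented for
illustration, not taken from Hector's setup:

```
frontend public
    bind :443 ssl crt /etc/haproxy/site.pem
    default_backend to_inner

backend to_inner
    # abns@ = abstract-namespace socket: no filesystem entry to clean
    # up, and reachable only from the same host (Linux only)
    server inner abns@inner-hap send-proxy

frontend inner
    bind abns@inner-hap accept-proxy
    default_backend real_servers

backend real_servers
    server app1 192.0.2.10:8080
```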

As an aside, I'd be interested in even a brief summary of how/if you
resolved your problem, given that I've not seen it described on the
list before. I wonder if you're the first to run into this specific
problem ...

All the best,
Jonathan
-- 
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html



Re: Haproxy running on 100% CPU and slow downloads

2016-05-10 Thread Willy Tarreau
On Tue, May 10, 2016 at 11:10:14AM +0530, Sachin Shetty wrote:
> We deployed the latest and we saw throughput still dropped a bit around peak
> hours, then we switched to nbproc 4 which is holding up ok.

So probably you were reaching the processing limits of a single process,
which can easily happen with SSL if a lot of rekeying has to be done.

> Note that
> 4 CPUs were not sufficient earlier, so I believe the latest version is
> scaling better.

Good, that confirms that you're not facing those bugs anymore. I'm currently
starting a new release, which will make it easier for you to deploy.

Thanks for the report,
Willy
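
For reference, the kind of multi-process setup mentioned in this thread can be
pinned to CPUs along these lines. This is an illustrative sketch, not Sachin's
actual configuration; the certificate path and addresses are invented:

```
global
    nbproc 4
    # pin each process to its own core so SSL handshake
    # load spreads evenly and processes do not migrate
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

frontend fe_ssl
    bind :443 ssl crt /etc/haproxy/site.pem process 1-4
    default_backend servers

backend servers
    server app1 192.0.2.10:8080
```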




Re: Stale UNIX sockets after reload

2016-05-10 Thread Willy Tarreau
On Mon, May 09, 2016 at 04:12:32PM +0200, Pavlos Parissis wrote:
> On 09/05/2016 02:26 , Christian Ruppert wrote:
> > Hi,
> > 
> > it seems that HAProxy does not remove the UNIX sockets after reloading
> > (also restarting?) even though they have been removed from the
> > configuration and thus are stale afterwards.
> > At least 1.6.4 seems to be affected. Can anybody else confirm that? It's
> > a multi-process setup in this case but it also happens with binds bound
> > to just one process.
> > 
> 
> I can confirm this behavior. I don't think it is easy for haproxy to
> clean up stale UNIX socket files as their names can change or they can be
> stored in a directory which is shared with other services.

In fact that's not quite accurate: haproxy does its best here. The thing is
that upon a reload it's the new process which takes care of removing the old
socket, and it does so pretty well. But if you perform a stop there's no way
to do it if the process is chrooted. In practice many daemons have the same
issue; that's how you end up with /dev/log even when syslogd is not running,
or with /tmp/.X11-unix/X0, just to give a few examples.

Hoping this helps,
Willy
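
One pragmatic workaround for the chrooted-stop case described in this thread
is to unlink known socket paths from the init script just before the next
start. A minimal sketch - the paths you pass are site-specific assumptions,
and this is not something haproxy does itself:

```shell
# Remove leftover UNIX socket files from a previous unclean stop.
# Only paths that actually are sockets are touched.
clean_stale_sockets() {
    for p in "$@"; do
        # -S: path exists and is a socket; never delete anything else
        if [ -S "$p" ]; then
            rm -f "$p"
        fi
    done
}
```

e.g. `clean_stale_sockets /var/run/haproxy/admin.sock` right before launching
the new process.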




Re: Forwarding unique auth to squid servers

2016-05-10 Thread Lukas Erlacher
Hi,

On 10.05.2016 08:16, Justin Rhodes wrote:
> Hi
> 
> I'm using HAProxy to forward requests to a pool of squid proxies. 
> 
> With reqadd Proxy-Authorization you can handle the auth side for the proxies
> as a whole, like:
> 
> backend rotateproxy
>    server proxy1 ip1:
>    server proxy2 ip2:
>    reqadd Proxy-Authorization:\ Basic\ dXNlcjpwYXNz
> 
> 
> How would I handle differing user:pass for each squid proxy in the pool?
> 
> -Jus

You can get the current server id and put it into an ACL: 
https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#7.3.3-srv_id

Then you can make several reqadd Proxy-Authorization statements conditional on 
the ACL, or select from a mapping table indexed on the ACL.
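
A hedged sketch of what that could look like. It is untested, the credentials
and ACL names are invented (the Base64 values encode "user1:pass1" and
"user2:pass2"), and since the docs present srv_id as a fetch that is known
once a server has been chosen, you should verify it can drive request-side
rules like these on your version:

```
backend rotateproxy
    server proxy1 ip1:
    server proxy2 ip2:
    # srv_id follows declaration order unless an explicit "id" is set
    acl via_proxy1 srv_id 1
    acl via_proxy2 srv_id 2
    reqadd Proxy-Authorization:\ Basic\ dXNlcjE6cGFzczE= if via_proxy1
    reqadd Proxy-Authorization:\ Basic\ dXNlcjI6cGFzczI= if via_proxy2
```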






Forwarding unique auth to squid servers

2016-05-10 Thread Justin Rhodes
Hi

I'm using HAProxy to forward requests to a pool of squid proxies.

With reqadd Proxy-Authorization you can handle the auth side for the proxies as
a whole, like:

backend rotateproxy
   server proxy1 ip1:
   server proxy2 ip2:
   reqadd Proxy-Authorization:\ Basic\ dXNlcjpwYXNz


How would I handle differing user:pass for each squid proxy in the pool?

-Jus