Could have thought of that before…
Here’s the valgrind output after installing the debug symbols. The invalid read now resolves to smp_fetch_sc_inc_gpc0, reached via acl_exec_cond, which seems to match the sc1_inc_gpc0(be_tbl_search) ACL in the config quoted below.


root@haproxy-1:/var/crash# valgrind haproxy -d -f /vagrant/configs/crasht-test.cfg
==4802== Memcheck, a memory error detector
==4802== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==4802== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==4802== Command: haproxy -d -f /vagrant/configs/crasht-test.cfg
==4802== 
Note: setting global.maxconn to 2000.
Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[WARNING] 088/121911 (4802) : [haproxy.main()] Cannot raise FD limit to 4031.
00000000:fe_http.accept(0005)=0007 from [192.168.0.154:59442]
00000000:fe_http.clireq[0007:ffffffff]: POST /v2/documents HTTP/1.1
00000000:fe_http.clihdr[0007:ffffffff]: Host: api.centerdevice.de
00000000:fe_http.clihdr[0007:ffffffff]: Content-Type: application/json
00000000:fe_http.clihdr[0007:ffffffff]: Connection: close
00000000:fe_http.clihdr[0007:ffffffff]: Accept: application/json
00000000:fe_http.clihdr[0007:ffffffff]: User-Agent: Paw/2.3.2 (Macintosh; OS X/10.11.4) ASIHTTPRequest/v1.8.1-61
00000000:fe_http.clihdr[0007:ffffffff]: Authorization: Bearer d9bf4d6d-945e-4cd1-a760-92a96739f260
00000000:fe_http.clihdr[0007:ffffffff]: Accept-Encoding: gzip
00000000:fe_http.clihdr[0007:ffffffff]: Content-Length: 118
==4802== Invalid read of size 8
==4802==    at 0x19AAF3: smp_fetch_sc_inc_gpc0 (in /usr/sbin/haproxy)
==4802==    by 0x1A0CB6: sample_process (in /usr/sbin/haproxy)
==4802==    by 0x19DA43: acl_exec_cond (in /usr/sbin/haproxy)
==4802==    by 0x1654F8: http_req_get_intercept_rule (in /usr/sbin/haproxy)
==4802==    by 0x16A556: http_process_req_common (in /usr/sbin/haproxy)
==4802==    by 0x197E0D: process_stream (in /usr/sbin/haproxy)
==4802==    by 0x12CCE4: process_runnable_tasks (in /usr/sbin/haproxy)
==4802==    by 0x1232CC: run_poll_loop (in /usr/sbin/haproxy)
==4802==    by 0x11FB5A: main (in /usr/sbin/haproxy)
==4802==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==4802== 
==4802== 
==4802== Process terminating with default action of signal 11 (SIGSEGV)
==4802==  Access not within mapped region at address 0x0
==4802==    at 0x19AAF3: smp_fetch_sc_inc_gpc0 (in /usr/sbin/haproxy)
==4802==    by 0x1A0CB6: sample_process (in /usr/sbin/haproxy)
==4802==    by 0x19DA43: acl_exec_cond (in /usr/sbin/haproxy)
==4802==    by 0x1654F8: http_req_get_intercept_rule (in /usr/sbin/haproxy)
==4802==    by 0x16A556: http_process_req_common (in /usr/sbin/haproxy)
==4802==    by 0x197E0D: process_stream (in /usr/sbin/haproxy)
==4802==    by 0x12CCE4: process_runnable_tasks (in /usr/sbin/haproxy)
==4802==    by 0x1232CC: run_poll_loop (in /usr/sbin/haproxy)
==4802==    by 0x11FB5A: main (in /usr/sbin/haproxy)
==4802==  If you believe this happened as a result of a stack
==4802==  overflow in your program's main thread (unlikely but
==4802==  possible), you can try to increase the size of the
==4802==  main thread stack using the --main-stacksize= flag.
==4802==  The main thread stack size used in this run was 8388608.
==4802== 
==4802== HEAP SUMMARY:
==4802==     in use at exit: 589,450 bytes in 1,347 blocks
==4802==   total heap usage: 1,642 allocs, 295 frees, 659,781 bytes allocated
==4802== 
==4802== LEAK SUMMARY:
==4802==    definitely lost: 0 bytes in 0 blocks
==4802==    indirectly lost: 0 bytes in 0 blocks
==4802==      possibly lost: 84,028 bytes in 1,032 blocks
==4802==    still reachable: 505,422 bytes in 315 blocks
==4802==         suppressed: 0 bytes in 0 blocks
==4802== Rerun with --leak-check=full to see details of leaked memory
==4802== 
==4802== For counts of detected and suppressed errors, rerun with: -v
==4802== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Segmentation fault (core dumped)





> On 29.03.2016, at 14:16, Daniel Schneller <daniel.schnel...@centerdevice.com> wrote:
> 
> Hi!
> 
> I am seeing a segfault upon the first request coming through the 
> configuration below.
> 
> My intention is to enforce a) a total request limit per minute and b) a 
> separate limit for certain API paths. For that purpose, in addition to the 
> be_api_external table, which I intend to use for the total request rate, I 
> created a separate dummy backend to get another table (be_tbl_search) for 
> search API calls. In the real config, there would be a handful of these.
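>
> For completeness, in the real config each of these extra tables would be fed behind a path ACL, roughly like this (the path is made up here, and this part is not needed to reproduce the crash):
>
>   acl is_search  path_beg /v2/search
>   http-request track-sc1 hdr(Authorization) table be_tbl_search if is_search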
> 
> I reduced the config as far as I could to demonstrate.
> 
> ===================
> ...
> 
> frontend fe_http
>   bind 192.168.1.3:80
>   http-request capture hdr(Authorization)   len 64   # id 2
>   default_backend be_api_external
> 
> backend be_tbl_search
>   stick-table type string len 64 size 50k expire 60s store gpc0_rate(60s)
> 
> backend be_api_external
>   balance leastconn
>   option httplog
>   option http-buffer-request
> 
>   stick-table type string len 64 size 50k expire 60s store http_req_rate(60s)
> 
>   http-request track-sc1 hdr(Authorization) table be_api_external
>   http-request track-sc1 hdr(Authorization) table be_tbl_search 
> 
>   acl do_count_search  sc1_inc_gpc0(be_tbl_search) gt 0
>   http-request add-header X-Rate-All    %[hdr(Authorization),table_http_req_rate(be_api_external)]
>   http-request add-header X-Rate-Search %[hdr(Authorization),table_gpc0_rate(be_tbl_search)] if do_count_search
> 
>   server s1 app-server-01:8081
> =================
> 
> 
> The first request I make crashes haproxy 1.6.4 (on Ubuntu 14.04, from 
> https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-1.6).
> 
> It will not crash if I remove the “if do_count_search” ACL or use track-sc2.
> Just removing the ACL, though, leaves the be_tbl_search table empty.
> Using track-sc2 fills both tables, even with the ACL in place.
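>
> For reference, the track-sc2 variant I mean is roughly this (only the changed lines, everything else as above; sketched here rather than copied, so treat it as approximate):
>
>   http-request track-sc1 hdr(Authorization) table be_api_external
>   http-request track-sc2 hdr(Authorization) table be_tbl_search
>
>   acl do_count_search  sc2_inc_gpc0(be_tbl_search) gt 0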
> 
> Is what I am trying to do even possible? From some older mailing list 
> postings I was under the impression I could use multiple tables to track 
> requests in a more fine-grained fashion, at the expense of memory and CPU, of 
> course.
> 
> From what I see here, it would seem I am limited to at most three tables 
> (using all of sc0, sc1 and sc2)? 
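>
> To illustrate what I mean by three tables, something like this (be_tbl_upload is a made-up name, just to show the pattern):
>
>   http-request track-sc0 hdr(Authorization) table be_api_external
>   http-request track-sc1 hdr(Authorization) table be_tbl_search
>   http-request track-sc2 hdr(Authorization) table be_tbl_upload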
> 
> I would much appreciate a clarification/correction of my understanding of how 
> these two concepts play together. Still, a segfaulting crash at runtime 
> should not happen anyway, IMO.
> 
> 
> Not sure if it helps without symbols, but this is what Valgrind produces:
> root@haproxy-1:/var/crash# valgrind haproxy -d -f /vagrant/configs/crasht-test.cfg
> ==4628== Memcheck, a memory error detector
> ==4628== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
> ==4628== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> ==4628== Command: haproxy -d -f /vagrant/configs/crasht-test.cfg
> ==4628== 
> Note: setting global.maxconn to 2000.
> Available polling systems :
>       epoll : pref=300,  test result OK
>        poll : pref=200,  test result OK
>      select : pref=150,  test result FAILED
> Total: 3 (2 usable), will use epoll.
> Using epoll() as the polling mechanism.
> [WARNING] 088/120512 (4628) : [haproxy.main()] Cannot raise FD limit to 4031.
> 00000000:fe_http.accept(0005)=0007 from [192.168.0.154:59269]
> 00000000:fe_http.clireq[0007:ffffffff]: POST /v2/documents HTTP/1.1
> 00000000:fe_http.clihdr[0007:ffffffff]: Host: api.centerdevice.de
> 00000000:fe_http.clihdr[0007:ffffffff]: Content-Type: application/json
> 00000000:fe_http.clihdr[0007:ffffffff]: Connection: close
> 00000000:fe_http.clihdr[0007:ffffffff]: Accept: application/json
> 00000000:fe_http.clihdr[0007:ffffffff]: User-Agent: Paw/2.3.2 (Macintosh; OS X/10.11.4) ASIHTTPRequest/v1.8.1-61
> 00000000:fe_http.clihdr[0007:ffffffff]: Authorization: Bearer d9bf4d6d-945e-4cd1-a760-92a96739f260
> 00000000:fe_http.clihdr[0007:ffffffff]: Accept-Encoding: gzip
> 00000000:fe_http.clihdr[0007:ffffffff]: Content-Length: 118
> ==4628== Invalid read of size 8
> ==4628==    at 0x19AAF3: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x1A0CB6: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x19DA43: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x1654F8: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x16A556: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x197E0D: process_stream (in /usr/sbin/haproxy)
> ==4628==    by 0x12CCE4: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x1232CC: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x11FB5A: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x5D58EC4: (below main) (libc-start.c:287)
> ==4628==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
> ==4628== 
> ==4628== 
> ==4628== Process terminating with default action of signal 11 (SIGSEGV)
> ==4628==  Access not within mapped region at address 0x0
> ==4628==    at 0x19AAF3: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x1A0CB6: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x19DA43: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x1654F8: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x16A556: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x197E0D: process_stream (in /usr/sbin/haproxy)
> ==4628==    by 0x12CCE4: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x1232CC: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x11FB5A: ??? (in /usr/sbin/haproxy)
> ==4628==    by 0x5D58EC4: (below main) (libc-start.c:287)
> ==4628==  If you believe this happened as a result of a stack
> ==4628==  overflow in your program's main thread (unlikely but
> ==4628==  possible), you can try to increase the size of the
> ==4628==  main thread stack using the --main-stacksize= flag.
> ==4628==  The main thread stack size used in this run was 8388608.
> ==4628== 
> ==4628== HEAP SUMMARY:
> ==4628==     in use at exit: 589,331 bytes in 1,345 blocks
> ==4628==   total heap usage: 1,641 allocs, 296 frees, 659,752 bytes allocated
> ==4628== 
> ==4628== LEAK SUMMARY:
> ==4628==    definitely lost: 0 bytes in 0 blocks
> ==4628==    indirectly lost: 0 bytes in 0 blocks
> ==4628==      possibly lost: 83,909 bytes in 1,030 blocks
> ==4628==    still reachable: 505,422 bytes in 315 blocks
> ==4628==         suppressed: 0 bytes in 0 blocks
> ==4628== Rerun with --leak-check=full to see details of leaked memory
> ==4628== 
> ==4628== For counts of detected and suppressed errors, rerun with: -v
> ==4628== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
> Segmentation fault (core dumped)
> 
> 
> 
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
>  
> CenterDevice GmbH                  | Merscheider Straße 1
>                                    | 42699 Solingen
> tel: +49 1754155711                | Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> 
> 
> 
