It crashed now on the first test in process_stream:

struct task *process_stream(struct task *t, void *context, unsigned short state)
{
	struct server *srv;
	struct stream *s = context;
	struct session *sess = s->sess;
	unsigned int rqf_last, rpf_last;
	unsigned int rq_prod_last, rq_cons_last;
	unsigned int rp_cons_last, rp_prod_last;
	unsigned int req_ana_back;
	struct channel *req, *res;
	struct stream_interface *si_f, *si_b;
	unsigned int rate;
	TEST_STRM(s);
	[...]

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000055f4cda7b5f9 in testcorrupt (ptr=0x7f75ac1ed990) at src/mux_h2.c:6238
[Current thread is 1 (Thread 0x7f75a98b9700 (LWP 5860))]
(gdb) bt full
#0  0x000055f4cda7b5f9 in testcorrupt (ptr=0x7f75ac1ed990) at src/mux_h2.c:6238
        cs = 0x7f75ac1ed990
        h2s = 0x7f7584244510
#1  0x000055f4cdad8993 in process_stream (t=0x7f75ac139d70, context=0x7f7588066540, state=260) at src/stream.c:1499
        srv = 0x7f75a9896390
        s = 0x7f7588066540
        sess = 0x7f759c071b80
        rqf_last = 4294967294
        rpf_last = 2217468112
        rq_prod_last = 32629
        rq_cons_last = 2217603024
        rp_cons_last = 32629
        rp_prod_last = 2217182865
        req_ana_back = 2217603025
        req = 0x7f75a9896350
        res = 0x55f4cdbed618 <__task_queue+92>
        si_f = 0x55f4ce03c680 <task_per_thread+896>
        si_b = 0x7f75842def80
        rate = 2217603024
#2  0x000055f4cdbeddb2 in run_tasks_from_list (list=0x55f4ce03c6c0 <task_per_thread+960>, max=150) at src/task.c:371
        process = 0x55f4cdad892d <process_stream>
        t = 0x7f75ac139d70
        state = 260
        ctx = 0x7f7588066540
        done = 3
[...]
subs is 0xffffffff like before, BUT dummy1 is also changed, to 0xffff:

(gdb) p *(struct h2s*)(0x7f7584244510)
$1 = {cs = 0x7f75ac1ed990, sess = 0x55f4ce02be40 <pool_cache+7328>, h2c = 0x7f758417abd0,
  h1m = {state = H1_MSG_RPBEFORE, flags = 12, curr_len = 0, body_len = 0, next = 0,
    err_pos = -1, err_state = 0},
  by_id = {node = {branches = {b = {0x7f758428e430, 0x7f7584244550}}, node_p = 0x7f758428e431,
      leaf_p = 0x7f7584244551, bit = 1, pfx = 33828}, key = 23},
  id = 23, flags = 16385, sws = 0, errcode = H2_ERR_NO_ERROR, st = H2_SS_HREM, status = 0,
  body_len = 0, rxbuf = {size = 16384, area = 0x7f75780a2210 "Ð?", data = 16384, head = 0},
  dummy0 = 0x0, dummy1 = 0xffff, subs = 0xffffffff,
  list = {n = 0x7f75842445c8, p = 0x7f75842445c8}, shut_tl = 0x7f75842df0d0}

Mon, 9 Nov 2020 at 15:07 Christopher Faulet <cfau...@haproxy.com> wrote:

> On 09/11/2020 at 13:10, Maciej Zdeb wrote:
> > I've played a little bit with the patch and it led me to backend.c and
> > the connect_server() function:
> >
> > int connect_server(struct stream *s)
> > {
> > [...]
> >     if (!conn_xprt_ready(srv_conn) && !srv_conn->mux) {
> >         /* set the correct protocol on the output stream interface */
> >         if (srv)
> >             conn_prepare(srv_conn,
> >                          protocol_by_family(srv_conn->dst->ss_family), srv->xprt);
> >         else if (obj_type(s->target) == OBJ_TYPE_PROXY) {
> >             /* proxies exclusively run on raw_sock right now */
> >             conn_prepare(srv_conn,
> >                          protocol_by_family(srv_conn->dst->ss_family), xprt_get(XPRT_RAW));
> >             if (!(srv_conn->ctrl)) {
> >                 conn_free(srv_conn);
> >                 return SF_ERR_INTERNAL;
> >             }
> >         }
> >         else {
> >             conn_free(srv_conn);
> >             return SF_ERR_INTERNAL; /* how did we get there ? */
> >         }
> >
> >         // THIS ONE IS OK
> >         TEST_STRM(s);
> >         //////////////////////////////
> >
> >         srv_cs = si_alloc_cs(&s->si[1], srv_conn);
> >
> >         // FAIL
> >         TEST_STRM(s);
> >         //////////////////////////////
> >
> >         if (!srv_cs) {
> >             conn_free(srv_conn);
> >             return SF_ERR_RESOURCE;
> >         }
>
> Hi,
>
> In fact, this crash occurs because of Willy's patch. It was not designed to
> handle non-h2 connections. Here the crash happens on a TCP connection, used
> by a SPOE applet for instance.
>
> I updated his patch. First, I added some calls to TEST_STRM() in the SPOE
> code, to be sure. I also explicitly set the stream task to NULL in
> stream_free() to catch late wakeups in the SPOE. Finally, I modified
> testcorrupt(). I hope this one is correct. But if I missed something, you
> may keep only the last ABORT_NOW() in testcorrupt() and replace the others
> with a return statement, just like in Willy's patch.
>
> --
> Christopher Faulet