I've played a little bit with the patch and it led me to the backend.c file
and the connect_server() function:

int connect_server(struct stream *s)
{
[...]
	if (!conn_xprt_ready(srv_conn) && !srv_conn->mux) {
		/* set the correct protocol on the output stream interface */
		if (srv)
			conn_prepare(srv_conn, protocol_by_family(srv_conn->dst->ss_family), srv->xprt);
		else if (obj_type(s->target) == OBJ_TYPE_PROXY) {
			/* proxies exclusively run on raw_sock right now */
			conn_prepare(srv_conn, protocol_by_family(srv_conn->dst->ss_family), xprt_get(XPRT_RAW));
			if (!(srv_conn->ctrl)) {
				conn_free(srv_conn);
				return SF_ERR_INTERNAL;
			}
		}
		else {
			conn_free(srv_conn);
			return SF_ERR_INTERNAL; /* how did we get there ? */
		}

		// THIS ONE IS OK
		TEST_STRM(s);
		//////////////////////////////

		srv_cs = si_alloc_cs(&s->si[1], srv_conn);

		// FAIL
		TEST_STRM(s);
		//////////////////////////////

		if (!srv_cs) {
			conn_free(srv_conn);
			return SF_ERR_RESOURCE;
		}
[...]
}
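
So the stream still passes TEST_STRM() right before si_alloc_cs() and already
fails it right after, which suggests the corruption happens while the
conn_stream is being allocated and attached. For context, here is a minimal
sketch (not the actual patch, which isn't shown in this thread) of the kind of
probe Willy describes below: a valid, aligned pointer can never have its
lowest bit set, so finding one proves corruption. The helper name
check_ptr_tag is made up for illustration.

#include <stdint.h>
#include <stdlib.h>

/* Sketch only: heap and struct pointers are at least 2-byte aligned, so
 * a set lowest bit in a pointer such as h2s->subs can only come from
 * memory corruption. Aborting on the spot makes the core dump point at
 * the code path that did the damage.
 */
static inline void check_ptr_tag(const void *p)
{
	if ((uintptr_t)p & 1)
		abort();
}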

Mon, 9 Nov 2020 at 11:51 Maciej Zdeb <mac...@zdeb.pl> wrote:

> Hi,
>
> This time h2s = 0x30 ;)
>
> it crashed here:
> void testcorrupt(void *ptr)
> {
> [...]
>         if (h2s->cs != cs)
>                 return;
> [...]
>
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x0000556b617f0562 in testcorrupt (ptr=0x7f99741d85a0) at
> src/mux_h2.c:6228
> 6228 src/mux_h2.c: No such file or directory.
> [Current thread is 1 (Thread 0x7f99a484d700 (LWP 28658))]
> (gdb) bt full
> #0  0x0000556b617f0562 in testcorrupt (ptr=0x7f99741d85a0) at
> src/mux_h2.c:6228
>         cs = 0x7f99741d85a0
>         h2s = 0x30
> #1  0x0000556b61850b1a in process_stream (t=0x7f99741d8c60,
> context=0x7f99682cd7b0, state=1284) at src/stream.c:2147
>         srv = 0x556b622770e0
>         s = 0x7f99682cd7b0
>         sess = 0x7f9998057170
>         rqf_last = 9469954
>         rpf_last = 2151677952
>         rq_prod_last = 8
>         rq_cons_last = 0
>         rp_cons_last = 8
>         rp_prod_last = 0
>         req_ana_back = 0
>         req = 0x7f99682cd7c0
>         res = 0x7f99682cd820
>         si_f = 0x7f99682cdae8
>         si_b = 0x7f99682cdb40
>         rate = 1
> #2  0x0000556b61962a5f in run_tasks_from_list (list=0x556b61db1600
> <task_per_thread+832>, max=150) at src/task.c:371
>         process = 0x556b6184d8e6 <process_stream>
>         t = 0x7f99741d8c60
>         state = 1284
>         ctx = 0x7f99682cd7b0
>         done = 2
> [...]
>
>
Fri, 6 Nov 2020 at 20:00 Willy Tarreau <w...@1wt.eu> wrote:
>
>> Maciej,
>>
>> I wrote this ugly patch to try to crash as soon as possible when a corrupt
>> h2s->subs is detected. The patch was written for 2.2. I only instrumented
>> roughly 30 places in process_stream() which is a fairly likely candidate.
>> I just hope it happens within the context of the stream itself otherwise
>> it will become really painful.
>>
>> You can apply this patch on top of your existing changes. It will try to
>> detect the presence of a non-zero lowest bit in the subs pointer (which
>> should never happen). If we're lucky it will crash inside process_stream()
>> between two points and we'll be able to narrow it down. If we're unlucky
>> it will crash when entering it and that will not be fun.
>>
>> If you want to play with it, you can apply TEST_SI() on stream_interface
>> pointers (often called "si"), TEST_STRM() on stream pointers, and
>> TEST_CS() on conn_stream pointers (often called "cs").
>>
>> Please just let me know how it goes. Note, I tested it, it passes all
>> regtests for me so I'm reasonably confident it should not crash by
>> accident. But I can't be sure, I'm just using heuristics, so please do
>> not put it in sensitive production!
>>
>> Thanks,
>> Willy
>>
>