> On 29.10.2019 at 14:45, Rainer Jung <rainer.j...@kippdata.de> wrote:
> 
> Aha, and this is due to the fact that r1656259 ("mod_proxy_http: don't
> connect or reuse backend before prefetching request body."), or parts of
> it, was backported from trunk to 2.4 as part of r1860166.
> 
> So I think (not yet verified) that the same problem has applied to trunk
> since r1656259 in February 2015 :(

trunk is the root of all problems!

> Regards,
> 
> Rainer
> 
> On 29.10.2019 at 14:21, Rainer Jung wrote:
>> The reason this fails now is that in 2.4.41 we prefetch the request body
>> before doing the connection check to the backend. In 2.4.39 we did the
>> prefetch after that check, so the body was still there when the final
>> request was sent.
>> 
>> 2.4.39, in proxy_http_handler():
>> - ap_proxy_determine_connection()
>> - ap_proxy_check_connection()
>> - optionally ap_proxy_connect_backend(), which might fail
>> - ap_proxy_connection_create_ex()
>> - ap_proxy_http_request(), which does the prefetch late!
>> 
>> 2.4.41, in proxy_http_handler():
>> - ap_proxy_determine_connection()
>> - early ap_proxy_http_prefetch(), which does the prefetch!
>> - optionally again ap_proxy_determine_connection()
>> - ap_proxy_check_connection()
>> - optionally ap_proxy_connect_backend(), which might fail
>> - ap_proxy_connection_create_ex()
>> - ap_proxy_http_request()
>> 
>> So we either need to remember the prefetch result for later retries or 
>> switch back to the old order.
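>> 
>> Untested sketch of the first option, i.e. remembering the prefetch
>> result: after the early ap_proxy_http_prefetch(), set copies of the
>> body buckets aside into a brigade allocated from r->pool, which
>> outlives the per-attempt req, and refill each attempt's
>> req->input_brigade from it before sending. saved_body is a made-up
>> name; the brigade/bucket calls are the existing APR ones:
>> 
>>     apr_status_t rv;
>>     apr_bucket *e;
>>     apr_bucket_brigade *saved_body;
>> 
>>     /* Once, after the early prefetch: copy the prefetched body into
>>      * a brigade in r->pool so it survives all balancer attempts. */
>>     saved_body = apr_brigade_create(r->pool, r->connection->bucket_alloc);
>>     for (e = APR_BRIGADE_FIRST(req->input_brigade);
>>          e != APR_BRIGADE_SENTINEL(req->input_brigade);
>>          e = APR_BUCKET_NEXT(e)) {
>>         apr_bucket *copy;
>>         if ((rv = apr_bucket_copy(e, &copy)) != APR_SUCCESS
>>             || (rv = apr_bucket_setaside(copy, r->pool)) != APR_SUCCESS) {
>>             return rv; /* error handling shortened for the sketch */
>>         }
>>         APR_BRIGADE_INSERT_TAIL(saved_body, copy);
>>     }
>> 
>>     /* Per attempt, before ap_proxy_http_request(): restore the body
>>      * into this attempt's (possibly new) req->input_brigade. */
>>     apr_brigade_cleanup(req->input_brigade);
>>     for (e = APR_BRIGADE_FIRST(saved_body);
>>          e != APR_BRIGADE_SENTINEL(saved_body);
>>          e = APR_BUCKET_NEXT(e)) {
>>         apr_bucket *copy;
>>         if ((rv = apr_bucket_copy(e, &copy)) != APR_SUCCESS) {
>>             return rv;
>>         }
>>         APR_BRIGADE_INSERT_TAIL(req->input_brigade, copy);
>>     }
>> 
>> apr_bucket_copy() can return APR_ENOTIMPL for some bucket types, and
>> metadata buckets (EOS in particular) need a second look, so take this
>> only as a starting point, not as a tested patch.
>> 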
>> Regards,
>> Rainer
>> On 29.10.2019 at 12:46, Rainer Jung wrote:
>>> This happens in the case of a small body. We read the body into
>>> req->input_brigade in ap_proxy_http_prefetch() before trying the first
>>> node, but then lose it on the second node, because we then use another
>>> req and thus also another req->input_brigade.
>>> 
>>> Not sure how we could best save the already-read input_brigade for the
>>> second attempt after failover. Will try some experiments.
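>>> 
>>> As a debugging aid for those experiments, the brigade length can be
>>> logged right before each attempt sends the request, e.g. (exact
>>> placement to be found by experimentation):
>>> 
>>>     apr_off_t len = 0;
>>>     apr_brigade_length(req->input_brigade, 1, &len);
>>>     ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
>>>                   "body brigade length before send: %" APR_OFF_T_FMT,
>>>                   len);
>>> 
>>> On the failover attempt this should show 0 instead of the real body
>>> length.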
>>> 
>>> If you try to reproduce this yourself, make sure you use a small POST
>>> (here: 30 bytes) and also exclude /favicon.ico from forwarding when
>>> using a browser. Otherwise some of the failovers will be triggered by
>>> favicon.ico and you won't notice the problem in the POST request:
>>> 
>>> ProxyPass /favicon.ico !
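>>> 
>>> For the small POST itself, repeating something like
>>> 
>>>     curl -d 'x=0123456789012345678901234567' http://localhost/some/path
>>> 
>>> against the proxy should do (URL adjusted to your setup; -d makes
>>> curl send a POST, and this body is exactly 30 bytes).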
>>> 
>>> Regards,
>>> 
>>> Rainer
>>> 
>>> On 29.10.2019 at 11:15, Rainer Jung wrote:
>>>> A first heads-up: it seems this commit broke failover for POST requests.
>>>> Most (or all?) of the time a balancer failover happens for a POST
>>>> request, the request sent to the failover node has a Content-Length of
>>>> "0" instead of the real content length.
>>>> 
>>>> I use a trivial setup like this:
>>>> 
>>>> <Proxy balancer://backends/>
>>>>    ProxySet lbmethod=byrequests
>>>>    BalancerMember http://localhost:5680
>>>>    BalancerMember http://localhost:5681
>>>> </Proxy>
>>>> 
>>>> ProxyPass / balancer://backends/
>>>> 
>>>> where one backend node is up and the second node is down.
>>>> 
>>>> I will investigate further.
>>>> 
>>>> Regards,
>>>> 
>>>> Rainer
> 
> -- 
> kippdata
> informationstechnologie GmbH   Tel: 0228 98549 -0
> Bornheimer Str. 33a            Fax: 0228 98549 -50
> 53111 Bonn                     www.kippdata.de
> 
> HRB 8018 Amtsgericht Bonn / VAT ID DE 196 457 417
> Managing directors: Dr. Thomas Höfer, Rainer Jung, Sven Maurmann
