It looks like the same bug.

We cannot have a file larger than 5GB (the backend is an OpenStack Swift 
installation).

But if the client is using keep-alive, I guess it can transfer more than 16GB 
over the same connection.

If we manage to reproduce the bug that way, we will try the proposed patch.
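The reproduction idea above, several sub-5GB responses accumulating past 16GB on one keep-alive connection, can be sketched with a minimal local server. This is only an illustration of the mechanism (the server, path, and sizes are placeholders, not the actual haproxy/Swift setup):

```python
import http.server
import http.client
import threading

# Stand-in backend; in the real setup this would be haproxy in front of
# an OpenStack Swift installation. All names here are illustrative.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 => keep-alive by default

    def do_GET(self):
        body = b"x" * 1024  # stand-in for a multi-GB Swift object
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, several sequential requests: the bytes transferred on
# the connection accumulate across responses, which is how a client can
# exceed 16GB on a single connection even though each object is under 5GB.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
total = 0
for _ in range(4):
    conn.request("GET", "/object")
    resp = conn.getresponse()
    total += len(resp.read())
conn.close()
server.shutdown()
print(total)  # cumulative bytes over the single connection
```

With 4 responses of 1024 bytes each, the single connection carries 4096 body bytes in total; scaling the object count and size is what would push a real connection past 16GB.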

Thx.

On Friday, 29 August 2014 at 17:57 CEST, Remi Gacogne <[email protected]> wrote:

> Hi,
>
> >>> (gdb) bt
> >>> #0  http_skip_chunk_crlf (msg=0x388ba50) at src/proto_http.c:2113
> >>> #1  http_request_forward_body (s=s@entry=0x388b9a0, 
> >>> req=req@entry=0x343ddf0, an_bit=an_bit@entry=8192) at 
> >>> src/proto_http.c:5398
> >>> #2  0x0000000000467298 in process_session (t=0x39eb8d0) at 
> >>> src/session.c:1955
> >>> #3  0x0000000000411ff5 in process_runnable_tasks (next=0x7fff5584ca1c) at 
> >>> src/task.c:238
> >>> #4  0x000000000040a25c in run_poll_loop () at src/haproxy.c:1304
> >>> #5  0x000000000040799f in main (argc=<optimized out>, argv=<optimized 
> >>> out>) at src/haproxy.c:1638
>
> This reminds me of [1]. Do you have responses larger than 16GB?
> If so, maybe you could try the patch suggested by Willy in [2].
>
> [1]: http://marc.info/?l=haproxy&m=140727447831648&w=2
> [2]: http://marc.info/?l=haproxy&m=140803767115539&w=2
>
>
> --
> Rémi
>

--
Romain LE DISEZ
[email protected]
+33 (0)6 78 77 99 18
