[squid-users] Serious problem with read_timeout
Hi,

I'm encountering a serious outage with squid 3.HEAD-20120307-r12077. Every time I download some test files, the download stops after 15 minutes. If I lower read_timeout to 1 minute, the download stops after 1 minute. Is this a known issue, or must I raise read_timeout to an excessively long value?

The relevant configuration is as follows:

workers 4
cpu_affinity_map process_numbers=1,2,3,4 cores=6,7,8,9

Regards.

--
Jean-Philippe Menil - Pôle réseau Service IRTS DSI Université de Nantes
jean-philippe.me...@univ-nantes.fr
Tel : 02.53.48.49.27 - Fax : 02.53.48.49.09
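For readers reproducing the setup, a minimal squid.conf fragment covering the directives mentioned in this thread might look roughly like the following (the lowered read_timeout is only for testing; the shipped default is 15 minutes, which matches the 15-minute cutoff reported above):

```conf
# SMP setup from the report: 4 workers pinned to cores 6-9
workers 4
cpu_affinity_map process_numbers=1,2,3,4 cores=6,7,8,9

# Lowered from the 15-minute default to make the stall easy to observe
read_timeout 1 minute
```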
Re: [squid-users] Serious problem with read_timeout
On 03/04/2012 11:06, Jean-Philippe Menil wrote:
> I'm encountering a serious outage with squid 3.HEAD-20120307-r12077. Every time I download some test files, the download stops after 15 minutes. [...]

Has nobody ever observed this phenomenon?

Regards.
Re: [squid-users] Serious problem with read_timeout
On 04.04.2012 02:46, Jean-Philippe Menil wrote:
> Has nobody ever observed this phenomenon?

Not many production networks (squid-users people) use 3.HEAD (alpha) code. The developers and alpha/beta testers hang out in squid-dev ;)

And no, you are the first to mention this particular behaviour.

Amos
Re: [squid-users] Serious problem with read_timeout
On 03/04/2012 23:53, Amos Jeffries wrote:
> Not many production networks (squid-users people) use 3.HEAD (alpha) code. The developers and alpha/beta testers hang out in squid-dev ;)
> And no, you are the first to mention this particular behaviour.

Hi, yes, I know, but I think it is present in 3.2 too (I will test this afternoon to confirm). I think I can reproduce it only when downloading a file from an https site. Does that help?

Regards.
Re: [squid-users] Serious problem with read_timeout
On 04/04/2012 09:00, Jean-Philippe Menil wrote:
> I think I can reproduce it only when downloading a file from an https site. Does that help?

Hi,

So I have run tests with squid 3.2.0.14, and it appears I can reproduce the problem only with https sites; why, I don't know yet. For the tests, I set a lower value for read_timeout (I don't want to wait 15 minutes between each test), and I download an iso file through an https site:

https://nzdis.org/projects/projects/perfnet/repository/revisions/4/raw/vendor/Vyatta/Vyatta/vyatta-livecd-vc5.0.2.iso

Every time, the download stops at the read_timeout value. Any ideas?

Regards.
Re: [squid-users] Serious problem with read_timeout
On 05/04/2012 16:50, Jean-Philippe Menil wrote:
> Every time, the download stops at the read_timeout value. Any ideas?

Hi, sorry to bump this thread, but I can't understand why the read_timeout isn't zeroed with https communication. Am I missing something?

Best regards.
Re: [squid-users] Serious problem with read_timeout
On 11/04/2012 6:47 p.m., Jean-Philippe Menil wrote:
> Sorry to bump this thread, but I can't understand why the read_timeout isn't zeroed with https communication. Am I missing something?

The read timeout should be reset to its full value on every packet read. It is never zeroed during a transfer.

It sounds like either the transfer is stalling for more than the read timeout, or the timeout is not being reset like it should be.

Amos
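The semantics Amos describes can be illustrated with a small self-contained sketch (hypothetical names, not squid's actual API): a per-read timeout only fires when the gap between two consecutive reads exceeds read_timeout; the total transfer time is irrelevant.

```cpp
#include <cassert>
#include <vector>

// Hypothetical model of per-read timeout semantics: the transfer aborts only
// if no data arrives within readTimeout of the previous read; otherwise the
// timer is re-armed after every successful read.
bool transferCompletes(const std::vector<int>& gapsBetweenReads, int readTimeout)
{
    for (int gap : gapsBetweenReads) {
        if (gap > readTimeout)
            return false;  // no data arrived within read_timeout: abort
        // data arrived in time, so the timeout is re-armed for the next read
    }
    return true;
}
```

Under this model, a 40-second transfer delivered in 10-second chunks survives a 30-second read_timeout, while a single 31-second stall does not — which is why a healthy https download dying at exactly read_timeout suggests the timer is not being re-armed.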
Re: [squid-users] Serious problem with read_timeout
On 11/04/2012 14:49, Amos Jeffries wrote:
> The read timeout should be reset to its full value on every packet read. It is never zeroed during a transfer.

My fault, bad choice of words; I did mean reset.

> It sounds like either the transfer is stalling for more than the read timeout, or the timeout is not being reset like it should be.

Do you know how I can troubleshoot this issue?

Many thanks. Regards.
Re: [squid-users] Serious problem with read_timeout
On 11/04/2012 20:37, Jean-Philippe Menil wrote:
> Do you know how I can troubleshoot this issue?

Hi Amos,

I think I've got it. I still hit the problem with 3.2.0.17, and I can easily reproduce it with the following setting:

read_timeout 30 seconds

wget --no-check-certificate https://nzdis.org/projects/projects/perfnet/repository/revisions/4/raw/vendor/Vyatta/Vyatta/vyatta-livecd-vc5.0.2.iso -O /dev/null

The download stops at the configured read_timeout every time. With the following (obvious) patch, the download no longer stops:

01-build64:/home/menil-jp/squid-3.2.0.17# diff src/tunnel.cc /tmp/squid-3.2.0.17/src/tunnel.cc
321,326d320
<     if (Comm::IsConnOpen(to.conn)) {
<         AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "tunnelTimeout",
<                                                     CommTimeoutCbPtrFun(tunnelTimeout, this));
<         commSetConnTimeout(to.conn, Config.Timeout.read, timeoutCall);
<     }
<
333,336c327,330
<     /* Only close the remote end if we've finished queueing data to it */
<     if (from.len == 0 && Comm::IsConnOpen(to.conn)) {
<         to.conn->close();
<     }
---
>     /* Only close the remote end if we've finished queueing data to it */
>     if (from.len == 0 && Comm::IsConnOpen(to.conn)) {
>         to.conn->close();
>     }

Can you confirm the problem?

Best regards.
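The patch above re-arms the tunnel's read timeout (via commSetConnTimeout) each time data arrives from the server. A toy model (hypothetical names, not squid code) shows why the unpatched tunnel kills any long CONNECT transfer: the deadline is armed once at setup and never moves while data flows.

```cpp
#include <cassert>

// Toy model of the reported bug: without re-arming, the deadline set at
// tunnel setup stays fixed, so any transfer longer than read_timeout is cut
// off even though data is still flowing.
struct Tunnel {
    int deadline;      // absolute time at which the timeout fires
    int timeout;       // configured read_timeout
    bool rearmOnRead;  // true once the patch is applied
};

// Advance a simulated clock chunk by chunk; true if the transfer survives.
bool download(Tunnel t, int chunkInterval, int chunks)
{
    int now = 0;
    for (int i = 0; i < chunks; ++i) {
        now += chunkInterval;                 // wait for the next data chunk
        if (now > t.deadline)
            return false;                     // timeout fired: tunnel closed
        if (t.rearmOnRead)
            t.deadline = now + t.timeout;     // the patch: re-arm on each read
    }
    return true;
}
```

With read_timeout 30 and data arriving every 10 ticks, the unpatched tunnel dies once the clock passes the initial deadline, while the patched one survives indefinitely — matching the reported behaviour of downloads stopping at exactly the read_timeout value.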