Re: [squid-users] Memory leak
On 19/08/2017 at 22:08, Eliezer Croitoru wrote:
> Hey Emmanuel,
>
> Something is not clear to me.
> Are you using url_rewrite or store_id helpers in any form?

No.

> Also what DNS lookups squid does exactly?
> - Reverse
> - Forward

Mostly forward.

> Also:
> - internal clients
> - external domains

External domains.

For the record, below is the original report, and the reply of Amos:

> Hello,
>
> I'm in a context where I have a lot of Squid installations without
> direct internet access. All queries are forwarded to an
> Internet-connected peer.
>
> Recently, I migrated my old 2.x Squid to 3.x and took responsibility
> for some other existing 3.x installations:
> - my Debian-based Squid 3.4.8 started doing DNS requests for each
>   requested domain
> - Ubuntu 14.04-based Squid 3.3.8 behaves the same
> - Ubuntu 16.04-based Squid 3.5.12 behaves the same
> The internal DNS setup is completely private, with its own hierarchy
> and no Internet link/relation. Internet-"like" requests are banned on
> this infrastructure and could raise alarms.
>
> On the Ubuntu installations, the problem was worked around with a
> local nsd daemon responsible for answering "nxdomain" to all requests.
>
> All was carefully checked and nothing in my configuration (acl etc.)
> explains why Squid insists on doing DNS requests for requests
> forwarded to the peer(s).
>
> I was able to reproduce the "bug" with all squid versions up to 3.5.23
> with this minimalist test config file:
>
> http_access allow all
>
> http_port 3128
> cache_peer 10.xx.xx.xx parent 8000 0 default no-query no-digest login=login:password
> never_direct allow all
>
> cache_mem 256 MB
> maximum_object_size_in_memory 16384 KB
> cache_dir aufs /var/spool/squid3 10 32 256
> maximum_object_size 400 MB
> access_log stdio:/var/log/squid/access.log squid
>
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
>
> quick_abort_pct 55
> read_ahead_gap 128 KB
> hosts_file none
> coredump_dir /var/spool/squid3
>
> #bug #4575
> url_rewrite_extras XXX
> store_id_extras XXX
>
> Since the switch from 3.5.12 to 3.5.19/23, I am able to use a simpler
> workaround (I switched directly from 3.5.12 to 3.5.19 so I don't know
> when the behavior changed):
> Instead of installing a fake local DNS server and using
>   dns_nameservers 127.0.0.1
> I could use
>   dns_nameservers none
> Squid warns about non-usable DNS and proceeds normally. Before (tested
> with 3.5.12 and lower), Squid hung.
>
> So, am I missing something? Is it a known problem?
> With the workaround things work, but I cannot log things based on
> internal DNS for the client side, and that is something that was
> working in the old 2.x versions.
> Should I open a bug report?
>
> Thank you,
> Emmanuel.

> On 24/01/2017 3:58 a.m., FUSTE Emmanuel wrote:
>> All was carefully checked and nothing in my configuration (acl etc.)
>> explains why Squid insists on doing DNS requests for requests
>> forwarded to the peer(s).
>>
>> #bug #4575
>> url_rewrite_extras XXX
>> store_id_extras XXX
> I don't think that workaround is working.
>
>> Since the switch from 3.5.12 to 3.5.19/23, I am able to use a simpler
>> workaround (I switched directly from 3.5.12 to 3.5.19 so I don't know
>> when the behavior changed):
>> Instead of installing a fake local DNS server and using
>>   dns_nameservers 127.0.0.1
>> I could use
>>   dns_nameservers none
>> Squid warns about non-usable DNS and proceeds normally. Before
>> (tested with 3.5.12 and lower), Squid hung.
>
> Nice.
>
> I'm pretty sure this is still bug 4575. I've added a comment there to
> mention how the workaround is broken, and your improved one.
>
> Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
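The two DNS workarounds discussed in this thread amount to a one-line choice in squid.conf. A minimal sketch of the peer-only setup (directives and values are taken from the messages above; the comments are mine):

```
# All traffic goes never_direct to a cache_peer; no internal DNS
# may be queried.

# Old workaround: point Squid at a fake local resolver (an nsd
# daemon answering NXDOMAIN to everything):
#dns_nameservers 127.0.0.1

# Simpler workaround, usable on 3.5.19 and later (3.5.12 and
# lower hang instead of proceeding):
dns_nameservers none
```

As the follow-up messages show, with either workaround Squid still warns about unusable DNS, and client-side logging based on internal DNS names is lost compared to Squid 2.x.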
Re: [squid-users] Memory leak (was: Squid 3.x never_direct and DNS requests problem.)
Hey Emmanuel,

Something is not clear to me.
Are you using url_rewrite or store_id helpers in any form?

Also what DNS lookups squid does exactly?
- Reverse
- Forward

Also:
- internal clients
- external domains

Thanks,
Eliezer

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il

-----Original Message-----
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of FUSTE Emmanuel
Sent: Friday, August 18, 2017 14:53
To: Amos Jeffries <squ...@treenet.co.nz>; squid-users@lists.squid-cache.org
Subject: [squid-users] Memory leak (was: Squid 3.x never_direct and DNS requests problem.)

On 24/01/2017 at 10:55, FUSTE Emmanuel wrote:
> On 23/01/2017 at 23:41, Amos Jeffries wrote:
>> On 24/01/2017 3:58 a.m., FUSTE Emmanuel wrote:
>>> All was carefully checked and nothing in my configuration (acl etc.)
>>> explains why Squid insists on doing DNS requests for requests
>>> forwarded to the peer(s).
>>>
>>> #bug #4575
>>> url_rewrite_extras XXX
>>> store_id_extras XXX
>> I don't think that workaround is working.
>>
>>> Since the switch from 3.5.12 to 3.5.19/23, I am able to use a simpler
>>> workaround (I switched directly from 3.5.12 to 3.5.19 so I don't know
>>> when the behavior changed):
>>> Instead of installing a fake local DNS server and using
>>>   dns_nameservers 127.0.0.1
>>> I could use
>>>   dns_nameservers none
>>> Squid warns about non-usable DNS and proceeds normally. Before
>>> (tested with 3.5.12 and lower), Squid hung.
>>>
>> :-) nice.
>>
>> I'm pretty sure this is still bug 4575. I've added a comment there to
>> mention how the workaround is broken, and your improved one.
>>
> Thank you!
> If there's anything I can help with to solve this bug, I'd be happy to.
>
> Emmanuel.

It seems that using this workaround is a bad idea.
It exposes or induces a HUGE memory leak:

Current memory usage (column groups: Pool | Obj Size, bytes | Allocated: # KB high(KB) high(hrs) %Tot | In Use: # KB high(KB) high(hrs) %alloc | Idle: # KB high(KB) | Allocations Saved: # %cnt %vol | Rate: #/sec; the Chunks columns are empty for these pools and omitted):

cbdata idns_query (6)       | 8696  | 689173 5852587 5852587 0.00 87.949 | 689173 5852587 5852587 0.00 100.000 | 0 0 0 | 0 0.000 0.000 | 0.000
mem_node                    | 4136  | 129945 524856 534368 3.85 7.887 | 129801 524275 534368 3.85 99.889 | 144 582 19715 | 4096894 0.873 9.316 | 0.003
fqdncache_entry             | 160   | 340128 53145 53145 0.00 0.799 | 340128 53145 53145 0.00 100.000 | 0 0 0 | 0 0.000 0.000 | 0.000
ipcache_entry               | 128   | 343083 42886 42886 0.00 0.644 | 343083 42886 42886 0.00 100.000 | 0 0 1 | 1 0.000 0.000 | 0.000
Short Strings               | 40    | 822953 32147 34373 0.28 0.483 | 819903 32028 34373 0.28 99.629 | 3050 120 605 | 247675706 52.776 5.447 | 0.166
cbdata generic_cbdata (14)  | 32    | 1026262 32071 32071 0.00 0.482 | 1026262 32071 32071 0.00 100.000 | 0 0 1 | 60 0.000 0.000 | 0.001
16KB Strings                | 16384 | 1306 20896 42624 3.15 0.314 | 1215 19440 42624 3.15 93.032 | 91 1456 13488 | 1976589 0.421 17.805 | 0.001
HttpHeaderEntry             | 56    | 372842 20390 21773 0.26 0.306 | 371478 20316 21773 0.26 99.634 | 1364 75 385 | 51897666 11.059 1.598 | 0.035
MemObject                   | 328   | 35177 11268 12032 0.27 0.169 | 34892 11177 12032 0.27 99.190 | 285 92 233 | 1599153 0.341 0.288 | 0.001
HttpReply                   | 280   | 35179 9620 10274 0.26 0.145 | 34893 9542 10274 0.26 99.187 | 286 79 199 | 4877844 1.039 0.751 | 0.003
Long Strings                | 512   | 18426 9213 9967 0.30 0.138 | 18176 9088 9967 0.30 98.643 | 250 125 203 | 5984528 1.275 1.685 | 0.004
Digest Scheme nonce's       | 72    | 109609 7707 14004 3.17 0.116 | 109609 7707 14004 3.17 100.000 | 0 0 7186 | 29051 0.006 0.001 | 0.001
Medium Strings              | 128   | 53288 6661 7158 0.26 0.100 | 53025 6629 7158 0.26 99.506 | 263 33 157 | 11608569 2.474 0.817 | 0.008
4KB Strings                 | 4096  | 1104 4416 4916 0.28 0.066 | 1077 4308 4916 0.28 97.554 | 27 108 304 | 435808 0.093 0.981 | 0.000
StoreEntry
[squid-users] Memory leak (was: Squid 3.x never_direct and DNS requests problem.)
On 24/01/2017 at 10:55, FUSTE Emmanuel wrote:
> On 23/01/2017 at 23:41, Amos Jeffries wrote:
>> On 24/01/2017 3:58 a.m., FUSTE Emmanuel wrote:
>>> All was carefully checked and nothing in my configuration (acl etc.)
>>> explains why Squid insists on doing DNS requests for requests
>>> forwarded to the peer(s).
>>>
>>> #bug #4575
>>> url_rewrite_extras XXX
>>> store_id_extras XXX
>> I don't think that workaround is working.
>>
>>> Since the switch from 3.5.12 to 3.5.19/23, I am able to use a simpler
>>> workaround (I switched directly from 3.5.12 to 3.5.19 so I don't know
>>> when the behavior changed):
>>> Instead of installing a fake local DNS server and using
>>>   dns_nameservers 127.0.0.1
>>> I could use
>>>   dns_nameservers none
>>> Squid warns about non-usable DNS and proceeds normally. Before
>>> (tested with 3.5.12 and lower), Squid hung.
>>>
>> :-) nice.
>>
>> I'm pretty sure this is still bug 4575. I've added a comment there to
>> mention how the workaround is broken, and your improved one.
>>
> Thank you!
> If there's anything I can help with to solve this bug, I'd be happy to.
>
> Emmanuel.

It seems that using this workaround is a bad idea.
It exposes or induces a HUGE memory leak:

Current memory usage (column groups: Pool | Obj Size, bytes | Allocated: # KB high(KB) high(hrs) %Tot | In Use: # KB high(KB) high(hrs) %alloc | Idle: # KB high(KB) | Allocations Saved: # %cnt %vol | Rate: #/sec; the Chunks columns are empty for these pools and omitted):

cbdata idns_query (6)         | 8696  | 689173 5852587 5852587 0.00 87.949 | 689173 5852587 5852587 0.00 100.000 | 0 0 0 | 0 0.000 0.000 | 0.000
mem_node                      | 4136  | 129945 524856 534368 3.85 7.887 | 129801 524275 534368 3.85 99.889 | 144 582 19715 | 4096894 0.873 9.316 | 0.003
fqdncache_entry               | 160   | 340128 53145 53145 0.00 0.799 | 340128 53145 53145 0.00 100.000 | 0 0 0 | 0 0.000 0.000 | 0.000
ipcache_entry                 | 128   | 343083 42886 42886 0.00 0.644 | 343083 42886 42886 0.00 100.000 | 0 0 1 | 1 0.000 0.000 | 0.000
Short Strings                 | 40    | 822953 32147 34373 0.28 0.483 | 819903 32028 34373 0.28 99.629 | 3050 120 605 | 247675706 52.776 5.447 | 0.166
cbdata generic_cbdata (14)    | 32    | 1026262 32071 32071 0.00 0.482 | 1026262 32071 32071 0.00 100.000 | 0 0 1 | 60 0.000 0.000 | 0.001
16KB Strings                  | 16384 | 1306 20896 42624 3.15 0.314 | 1215 19440 42624 3.15 93.032 | 91 1456 13488 | 1976589 0.421 17.805 | 0.001
HttpHeaderEntry               | 56    | 372842 20390 21773 0.26 0.306 | 371478 20316 21773 0.26 99.634 | 1364 75 385 | 51897666 11.059 1.598 | 0.035
MemObject                     | 328   | 35177 11268 12032 0.27 0.169 | 34892 11177 12032 0.27 99.190 | 285 92 233 | 1599153 0.341 0.288 | 0.001
HttpReply                     | 280   | 35179 9620 10274 0.26 0.145 | 34893 9542 10274 0.26 99.187 | 286 79 199 | 4877844 1.039 0.751 | 0.003
Long Strings                  | 512   | 18426 9213 9967 0.30 0.138 | 18176 9088 9967 0.30 98.643 | 250 125 203 | 5984528 1.275 1.685 | 0.004
Digest Scheme nonce's         | 72    | 109609 7707 14004 3.17 0.116 | 109609 7707 14004 3.17 100.000 | 0 0 7186 | 29051 0.006 0.001 | 0.001
Medium Strings                | 128   | 53288 6661 7158 0.26 0.100 | 53025 6629 7158 0.26 99.506 | 263 33 157 | 11608569 2.474 0.817 | 0.008
4KB Strings                   | 4096  | 1104 4416 4916 0.28 0.066 | 1077 4308 4916 0.28 97.554 | 27 108 304 | 435808 0.093 0.981 | 0.000
StoreEntry                    | 120   | 35177 4123 4402 0.27 0.062 | 34892 4089 4402 0.27 99.190 | 285 34 85 | 1599153 0.341 0.106 | 0.001
cbdata clientReplyContext (17) | 4320 | 913 3852 5932 1.00 0.058 | 840 3544 5932 1.00 92.004 | 73 308 1047 | 1805429 0.385 4.288 | 0.001
cbdata ClientSocketContext (16) | 4256 | 913 3795 5844 1.00 0.057 | 840 3492 5844 1.00 92.004 | 73 304 1031 | 1805429 0.385 4.225 | 0.001
1KB Strings                   | 1024  | 3793 3793 4299 0.29 0.057 | 3766 3766 4299 0.29 99.288 | 27 27 155 | 421553 0.090 0.237 | 0.000
cbdata MemBuf (11)            | 64    | 35210 2201 2354 0.27 0.033 | 34920 2183 2354 0.27 99.176 | 290 19 48 | 9673363 2.061 0.340 | 0.006
HttpHdrCc                     | 96
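A quick way to triage a report like the one above is to rank the pools by allocated KB. A minimal shell sketch (the `pool:allocated-KB` pairs are copied from the report; real `mgr:mem` output has many more columns, so this two-field input is only a stand-in):

```shell
#!/bin/sh
# Rank pools by allocated KB, largest first, to spot the leak suspect.
# Field 2 (after the colon) is the "Allocated (KB)" value.
printf '%s\n' \
  'cbdata idns_query (6):5852587' \
  'mem_node:524856' \
  'fqdncache_entry:53145' \
  'ipcache_entry:42886' \
  | sort -t: -k2,2nr | head -2
```

Here the idns_query pool dwarfs everything else (roughly 88% of all allocated memory), which is what points at the DNS code, i.e. bug 4575, rather than a general allocator problem.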
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Hi.

On 12.01.2015 19:06, Amos Jeffries wrote:
> I am confident that those types of leaks do not exist at all in Squid
> 3.4. These rounds of memory exhaustion problems are caused by
> pseudo-leaks, where Squid incorrectly holds onto memory (it has not
> forgotten it, though) far longer than it should.

Could you please clarify for me what the Long Strings pool is, and how can I manage its size?

After startup the largest consuming pool is mem_node, but it usually stops increasing after a few days (somewhere around the cache_mem border; I don't know if that is the cause or just a coincidence). Long Strings, however, keeps rising and rising, and after some days it becomes the largest one.

I'm using the following settings:

cache_mem 512 MB
cache_dir diskd /var/squid/cache 1100 16 256

After a few days SNMP reports that the client count is around 1700.

Thanks.
Eugene.
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Hi.

On 09.01.2015 06:12, Amos Jeffries wrote:
> Grand total: = 9.5 GB of RAM just for Squid.
> ... then there is whatever memory the helper programs, other software
> on the server and operating system all need.

I'm now also having a strong impression that squid is leaking memory. Now, when 3.4.x is able to handle hundreds of users during several hours, I notice that its memory usage is constantly increasing. My patience always ends at the point of 1.5 GB of memory usage, where server memory starts to be exhausted (squid is running along with lots of other stuff) and I restart it. This is happening on exactly the same config the 3.3.13 was running, so... I have cache_mem set to 512 MB, diskd, a medium-sized cache_dir and lots of users.

Has something changed drastically in 3.4.x compared to 3.3.13, or is it, as it seems, a memory leak?

Thanks.
Eugene.
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Yep. If it really is a memory leak, it will occur on all platforms. If not, this is an OS-specific issue: a libc / malloc library problem, but not squid itself.

On 12.01.2015 18:06, Eugene M. Zheganin wrote:
> Hi.
>
> On 12.01.2015 16:41, Eugene M. Zheganin wrote:
>> I'm now also having a strong impression that squid is leaking memory.
>> Now, when 3.4.x is able to handle hundreds of users during several
>> hours, I notice that its memory usage is constantly increasing. My
>> patience always ends at the point of 1.5 GB of memory usage, where
>> server memory starts to be exhausted (squid is running along with
>> lots of other stuff) and I restart it. This is happening on exactly
>> the same config the 3.3.13 was running, so... I have cache_mem set to
>> 512 MB, diskd, a medium-sized cache_dir and lots of users. Has
>> something changed drastically in 3.4.x compared to 3.3.13, or is it,
>> as it seems, a memory leak?
>
> Squid 3.4 on FreeBSD is by default compiled with the
> --enable-debug-cbdata option, and when the 45th log selector is at its
> default level 1, cache.log fills with CBData memory-leak alarms. Here
> is the list for the last 40 minutes, with occurrence counts:
>
> 104136 Checklist.cc:160
>  81438 Checklist.cc:187
> 177226 Checklist.cc:320
>  84861 Checklist.cc:45
>  89151 CommCalls.cc:21
>  22069 DiskIO/DiskDaemon/DiskdIOStrategy.cc:353
>    120 UserRequest.cc:166
>     29 UserRequest.cc:172
>  55814 clientStream.cc:235
>   5966 client_side_reply.cc:93
>   4516 client_side_request.cc:134
>   5568 dns_internal.cc:1131
>   4859 dns_internal.cc:1140
>     86 event.cc:90
>   7770 external_acl.cc:1426
>   1548 fqdncache.cc:340
>   7467 helper.cc:856
>  39905 ipcache.cc:353
>  11880 store.cc:1611
> 181959 store_client.cc:154
> 256951 store_client.cc:337
>   6835 ufs/UFSStoreState.cc:333
>
> Are those all false alarms?
>
> Thanks.
> Eugene.
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Yep. If it really is a memory leak, it will occur on all platforms. If not, this is an OS-specific issue: a libc / malloc library problem, but not squid itself.

On 12.01.2015 18:06, Eugene M. Zheganin wrote:
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Looks like an OS-specific issue. I don't see any memory leaking on my boxes (running Solaris 10, yes ;)). Moreover, the helpers correctly get and release memory.

On 12.01.2015 17:41, Eugene M. Zheganin wrote:
> Hi.
>
> On 09.01.2015 06:12, Amos Jeffries wrote:
>> Grand total: = 9.5 GB of RAM just for Squid.
>> ... then there is whatever memory the helper programs, other software
>> on the server and operating system all need.
>
> I'm now also having a strong impression that squid is leaking memory.
> Now, when 3.4.x is able to handle hundreds of users during several
> hours, I notice that its memory usage is constantly increasing. My
> patience always ends at the point of 1.5 GB of memory usage, where
> server memory starts to be exhausted (squid is running along with lots
> of other stuff) and I restart it. This is happening on exactly the
> same config the 3.3.13 was running, so... I have cache_mem set to
> 512 MB, diskd, a medium-sized cache_dir and lots of users. Has
> something changed drastically in 3.4.x compared to 3.3.13, or is it,
> as it seems, a memory leak?
>
> Thanks.
> Eugene.
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Hi.

On 12.01.2015 16:41, Eugene M. Zheganin wrote:
> I'm now also having a strong impression that squid is leaking memory.
> Now, when 3.4.x is able to handle hundreds of users during several
> hours, I notice that its memory usage is constantly increasing. My
> patience always ends at the point of 1.5 GB of memory usage, where
> server memory starts to be exhausted (squid is running along with lots
> of other stuff) and I restart it. This is happening on exactly the
> same config the 3.3.13 was running, so... I have cache_mem set to
> 512 MB, diskd, a medium-sized cache_dir and lots of users. Has
> something changed drastically in 3.4.x compared to 3.3.13, or is it,
> as it seems, a memory leak?

Squid 3.4 on FreeBSD is by default compiled with the --enable-debug-cbdata option, and when the 45th log selector is at its default level 1, cache.log fills with CBData memory-leak alarms. Here is the list for the last 40 minutes, with occurrence counts:

104136 Checklist.cc:160
 81438 Checklist.cc:187
177226 Checklist.cc:320
 84861 Checklist.cc:45
 89151 CommCalls.cc:21
 22069 DiskIO/DiskDaemon/DiskdIOStrategy.cc:353
   120 UserRequest.cc:166
    29 UserRequest.cc:172
 55814 clientStream.cc:235
  5966 client_side_reply.cc:93
  4516 client_side_request.cc:134
  5568 dns_internal.cc:1131
  4859 dns_internal.cc:1140
    86 event.cc:90
  7770 external_acl.cc:1426
  1548 fqdncache.cc:340
  7467 helper.cc:856
 39905 ipcache.cc:353
 11880 store.cc:1611
181959 store_client.cc:154
256951 store_client.cc:337
  6835 ufs/UFSStoreState.cc:333

Are those all false alarms?

Thanks.
Eugene.
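A per-source-line summary like the one above can be produced by extracting the `file.cc:line` locations from the alarm lines and counting them. A sketch (the sample lines are invented stand-ins for real cache.log alarm text; on a live system you would `grep` the log file itself instead of the `printf`):

```shell
#!/bin/sh
# Count cbdata leak alarms per source location, most frequent first.
# The "... leak at file.cc:line" wording below is an assumption; only
# the file.cc:line token is actually matched.
printf '%s\n' \
  'cbdata leak at store_client.cc:337' \
  'cbdata leak at store_client.cc:337' \
  'cbdata leak at ipcache.cc:353' \
  | grep -o '[A-Za-z_/]*\.cc:[0-9]*' | sort | uniq -c | sort -rn
```

The same pipeline against `/var/log/squid/cache.log` reproduces a count list of the shape shown in the message.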
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
On 9/01/2015 8:10 a.m., Doug Sampson wrote:
>> Hi, I have the similar problem on FreeBSD 10.1-STABLE #1 r275861 with
>> squid-3.4.10. I also applied MEMPOOLS=1 when starting squid. I
>> experience the process slowing down and unacceptable performance.
>> Squid is configured to use Kerberos and NTLM authentication and LDAP
>> group authentication. Other settings:

~14 MB for Squid process memory.

>> cache_replacement_policy heap LFUDA
>> cache_mem 4096 MB

+ 4096 MB for RAM cache
+ 60 MB for RAM cache index

>> maximum_object_size 32 MB
>> cache_dir diskd /usr/local/squid/cache 32768 32 256

+ 540 MB for disk cache index
+ (600 * 16 * 0.5) MB for active client connection state == 4800 MB

NP: modern web browsers open up to 8 parallel connections to load a page (happy eyeballs makes that 16 TCP sockets) per client == ~8 MB per active client.

Grand total: ~9.5 GB of RAM just for Squid... then there is whatever memory the helper programs, other software on the server and the operating system all need.

>> I have seen the following errors in cache.log:
>> FATAL: Received Segment Violation...dying.
>> FATAL: Received Bus Error...dying.

We need a backtrace to tell what the Segment Violation is about. Bus Error is your CPU or RAM futzing up: the OS or hardware is broken. Quite possibly this is the result of running out of RAM and swap space.

>> after this the squid restarts. The system has 10GB of memory and is
>> working as an internal cache for ~600 users. Please point me in the
>> right direction. I have no problem running squid33-3.3.13 on FreeBSD
>> 9.3-STABLE #0 r270210. Thank you very much.
>> Regards, lk
>
> Man, I empathize with you. Have you tried running Squid 3.4.x on
> FreeBSD 9.3? Sometimes I wonder if it's FreeBSD 10.x that's causing
> the issue...
>
> I tried the shell variable MEMPOOLS=1 and that quickly made the
> situation a lot worse. Swap space would get filled up very quickly and
> the system would slow down quickly before crashing.

All MEMPOOLS=1 does is make Squid internally recycle the memory that gets freed, re-using it for other objects of the same type that are allocated right after it is freed. So new memory only gets allocated once the recycling pool for an object type is empty, with everything allocated so far actively in use.

The main point of using MEMPOOLS=1 is as a debugging aid to track down leaks, by a) reducing OS allocator calls, making frequent ones obvious, and b) having the mgr:mem report more clearly list which objects are being freed (a chunked pool contains recycled objects) and which are not (pool always empty).

Steve Hill has come up with some patches that resolve memory issues with the new 3.4 helper annotations feature when using Negotiate or NTLM auth helpers. For authenticated clients with long connection times or high traffic volumes that state can accumulate up to quite large amounts of memory. See the "Debugging slow proxy" thread of earlier this week.

HTH
Amos
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Doug Sampson <do...@dawnsign.com> writes:

>> Hi, I have the similar problem on FreeBSD 10.1-STABLE #1 r275861 with
>> squid-3.4.10. I also applied MEMPOOLS=1 when starting squid. I
>> experience the process slowing down and unacceptable performance.
>> Squid is configured to use Kerberos and NTLM authentication and LDAP
>> group authentication. Other settings:
>>
>> cache_replacement_policy heap LFUDA
>> cache_mem 4096 MB
>> maximum_object_size 32 MB
>> cache_dir diskd /usr/local/squid/cache 32768 32 256
>>
>> I have seen the following errors in cache.log:
>> FATAL: Received Segment Violation...dying.
>> FATAL: Received Bus Error...dying.
>>
>> after this the squid restarts. The system has 10GB of memory and is
>> working as an internal cache for ~600 users. Please point me in the
>> right direction. I have no problem running squid33-3.3.13 on FreeBSD
>> 9.3-STABLE #0 r270210. Thank you very much.
>> Regards, lk
>
> Man, I empathize with you. Have you tried running Squid 3.4.x on
> FreeBSD 9.3? Sometimes I wonder if it's FreeBSD 10.x that's causing
> the issue...

I am running FreeBSD 9.3-STABLE #1 r274084 with Squid 3.4.9 as a cache in a DMZ without authentication. There are no problems.

lk

> I tried the shell variable MEMPOOLS=1 and that quickly made the
> situation a lot worse. Swap space would get filled up very quickly and
> the system would slow down quickly before crashing.
>
> Any other ideas?
>
> ~Doug
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
> Hi, I have the similar problem on FreeBSD 10.1-STABLE #1 r275861 with
> squid-3.4.10. I also applied MEMPOOLS=1 when starting squid. I
> experience the process slowing down and unacceptable performance.
> Squid is configured to use Kerberos and NTLM authentication and LDAP
> group authentication. Other settings:
>
> cache_replacement_policy heap LFUDA
> cache_mem 4096 MB
> maximum_object_size 32 MB
> cache_dir diskd /usr/local/squid/cache 32768 32 256
>
> I have seen the following errors in cache.log:
> FATAL: Received Segment Violation...dying.
> FATAL: Received Bus Error...dying.
>
> after this the squid restarts. The system has 10GB of memory and is
> working as an internal cache for ~600 users. Please point me in the
> right direction. I have no problem running squid33-3.3.13 on FreeBSD
> 9.3-STABLE #0 r270210. Thank you very much.
> Regards, lk

Man, I empathize with you. Have you tried running Squid 3.4.x on FreeBSD 9.3? Sometimes I wonder if it's FreeBSD 10.x that's causing the issue...

I tried the shell variable MEMPOOLS=1 and that quickly made the situation a lot worse. Swap space would get filled up very quickly and the system would slow down quickly before crashing.

Any other ideas?

~Doug
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
I mean, cache_mem is too big.

First, 600 users is not a very big installation; you do not need a huge memory cache. 1-2 GB will be enough, with a fast disk cache.

Second, 4 GB of RAM is near the 32-bit address limit. Possibly a 32-bit library or something similar is hitting this limit. Are the OS and Squid completely 64-bit?

On 09.01.2015 1:10, Doug Sampson wrote:
>> Hi, I have the similar problem on FreeBSD 10.1-STABLE #1 r275861 with
>> squid-3.4.10. I also applied MEMPOOLS=1 when starting squid. I
>> experience the process slowing down and unacceptable performance.
>> Squid is configured to use Kerberos and NTLM authentication and LDAP
>> group authentication. Other settings:
>>
>> cache_replacement_policy heap LFUDA
>> cache_mem 4096 MB
>> maximum_object_size 32 MB
>> cache_dir diskd /usr/local/squid/cache 32768 32 256
>>
>> I have seen the following errors in cache.log:
>> FATAL: Received Segment Violation...dying.
>> FATAL: Received Bus Error...dying.
>>
>> after this the squid restarts. The system has 10GB of memory and is
>> working as an internal cache for ~600 users. Please point me in the
>> right direction. I have no problem running squid33-3.3.13 on FreeBSD
>> 9.3-STABLE #0 r270210. Thank you very much.
>> Regards, lk
>
> Man, I empathize with you. Have you tried running Squid 3.4.x on
> FreeBSD 9.3? Sometimes I wonder if it's FreeBSD 10.x that's causing
> the issue...
>
> I tried the shell variable MEMPOOLS=1 and that quickly made the
> situation a lot worse. Swap space would get filled up very quickly and
> the system would slow down quickly before crashing.
>
> Any other ideas?
>
> ~Doug
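In squid.conf terms, the sizing advice above amounts to shrinking cache_mem. An illustrative fragment (the exact sizes here are my assumption within the 1-2 GB range suggested, not a recommendation from the thread):

```
# ~600 users: a modest RAM cache plus a fast disk cache, instead of
# cache_mem 4096 MB, which (with per-connection and index overheads)
# pushed total demand near 9.5 GB on a 10 GB box.
cache_mem 1536 MB
cache_dir diskd /usr/local/squid/cache 32768 32 256
```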
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
> On 26/11/2014 8:59 a.m., Doug Sampson wrote:
>> Thanks, Amos, for your pointers. I've commented out all the
>> refresh_pattern lines appearing above the last two lines. I also have
>> dropped diskd in favor of using aufs exclusively, taking out the
>> min-size parameter. I've commented out the diskd_program support
>> option.
>>
>> In the previous version of squid (2.7) I had split the cache_dir into
>> two types with great success, using coss and aufs. Previously I had
>> only aufs and performance wasn't where I wanted it. Apparently coss
>> is no longer supported in the 3.x version of squid atop FreeBSD.
>
> COSS has been replaced with the Rock storage type in Squid-3. They
> should be used in roughly similar ways in terms of traffic
> optimization.
>
>> The pathname for the cache swap logs has been fixed. Apparently this
>> came from a squid.conf example that I copied in parts. Would this be
>> the reason why we are seeing the error messages in /var/log/messages
>> regarding swapping mentioned in my original post?
>
> No. I think that is coming out of the OS kernel memory management,
> which uses the term "swap" as well in regard to disk-backed virtual
> memory. If your system is swapping (using that disk-backed swap
> memory) while running Squid, then you will get terrible performance as
> a matter of course, since the Squid cache index and cache_mem are
> often very large in RAM and accessed often.
>
>> The hierarchy_stoplist line has been stripped out as you say it is
>> deprecated. The mem .TSV file is attached herewith.
>>
>> Currently I have the cache_dir located on the OS disk and all of the
>> cache logging files on a second drive. Is this the optimal setup of
>> cache_dir and logs?
>
> I would do it the other way around. Logs are appended with a small
> amount of data each transaction, whereas the main cache_dir has a
> relatively large % of the bandwidth throughput being written out to it
> constantly (less % in recent Squid, but still a lot). The disk most
> likely to die early is the one holding the cache_dir.

I'm still running into the issue of being out of available space in the swapfile on my system. I've attached another TSV file indicating the various types of memory in use. Is there anything in there that screams out?

Amos, you said earlier that it was the OS that needed to be tuned. Are there any references to where I can fine-tune it for Squid usage? I looked here http://oss.org.cn/man/newsoft/squid/Squid_FAQ/FAQ-8.html#how-much-ram and I'm unable to figure out a way to decrease the amount of memory in use. I tried limiting cache_mem to 1344 MB from some higher value, but that didn't work. What are some of the methods FreeBSD 10.0 users are using to limit the memory that Squid uses?

Short of fixing the memory leaks, it looks like I need to consider the possibility of restarting the Squid service on a regular basis (i.e. at least once a week) in order to enable Squid to perform at an acceptable level and to avoid clogging /var/log/messages with messages such as:

+swap_pager_getswapspace(15): failed
+swap_pager_getswapspace(2): failed
+swap_pager_getswapspace(16): failed
+swap_pager_getswapspace(16): failed

Is this a common practice among squid admins, to restart squid periodically?

Thank you!
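For what it's worth, where admins do schedule such restarts, cron is the usual mechanism. A sketch of a root crontab entry (the rc script path follows the FreeBSD port's convention and may differ on other systems; a restart drops in-memory state, so this is strictly a stopgap while the underlying leak is chased down):

```
# Restart Squid every Sunday at 04:00 as a leak-mitigation stopgap.
0 4 * * 0 /usr/local/etc/rc.d/squid restart
```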
~Doug

Current memory usage:

Pool               Obj size (B)  Allocated (#)  Allocated (KB)  In use (#)  In use (KB)  In use (%)
mem_node                   4136         292390         1180982      292191      1180178      99.932
Short Strings                40        1701206           66454     1699725        66396      99.913
HttpHeaderEntry              56         796862           43579      796197        43543      99.917
HttpReply                   280          74302           20317       74275        20310      99.964
MemObject                   240          74278           17409       74273        17408      99.993
StoreEntry                  104          79949            8120       79944         8120      99.994
HttpHdrCc                    96          50739            4757       50725         4756      99.972
Medium Strings              128          37543            4693       37426         4679      99.688
cbdata MemBuf (9)            64          74339            4647       74310         4645      99.961
Long Strings                512           4170            2085        4105         2053      98.441
LRU policy node              24          74220            1740       74217         1740      99.996
4KB Strings                4096            394            1576        ...
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Maybe your problem is related to sysctl MIB tuning around swap/overcommit etc. I did not observe a memory leak with squid 3.4.4, but FreeBSD 10 does swap more frequently than older versions. Simon ___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
I used an ugly tuning: set vm.defer_swapspace_pageouts to 1. But it may cause some issues when physical memory is really exhausted. I have not had much time to investigate the right way, but I think maybe vm.swap_idle_threshold1/vm.swap_idle_threshold2 or vm.overcommit etc. may be harmful. Simon

On 14/12/1 23:16, Doug Sampson wrote:
>> Maybe your problem is related to sysctl MIB tuning around swap/overcommit etc. I did not observe a memory leak with squid 3.4.4, but FreeBSD 10 does swap more frequently than older versions.
>
> Could you elaborate a bit more? That went over my head. What could I do in terms of tuning the system?
>
> ~Doug
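To make Simon's workaround concrete: the sysctls he names would go in /etc/sysctl.conf on FreeBSD. This is a sketch of an experimental tuning, not advice; as he says, deferring pageouts can misbehave once physical memory is truly exhausted, and the commented values below are only placeholders to experiment with:

```
# Simon's "ugly" workaround: defer pushing pages out to swap
vm.defer_swapspace_pageouts=1

# candidates he suspects of causing the aggressive swapping;
# uncomment and adjust experimentally, values here are placeholders
#vm.swap_idle_enabled=0
#vm.swap_idle_threshold1=2
#vm.swap_idle_threshold2=10
```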
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
On 25/11/2014 9:06 a.m., Doug Sampson wrote: Recently due to squid 2.7 being EOL'ed, we migrated our squid server to version 3.4.9 on a FreeBSD 10.0-RELEASE running on 64-bit hardware. We started seeing paging file being swapped out eventually running out of available memory. From the time squid gets started it usually takes about two days before we see these entries in /var/log/messages as follows: +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(12): failed +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(12): failed +swap_pager_getswapspace(6): failed +swap_pager_getswapspace(16): failed Looking at the 'top' results, I see that the swap file has been totally exhausted. Memory used by squid hovers around 2.3GB out of the total 3GB of system memory. I am not sure what is causing these memory leaks. After rebooting, squid-internal-mgr/info shows the following statistics: Squid Object Cache: Version 3.4.9 Build Info: Start Time: Mon, 24 Nov 2014 18:39:08 GMT Current Time: Mon, 24 Nov 2014 19:39:13 GMT Connection information for squid: Number of clients accessing cache: 18 Number of HTTP requests received:10589 Number of ICP messages received: 0 Number of ICP messages sent: 0 Number of queued ICP replies: 0 Number of HTCP messages received: 0 Number of HTCP messages sent: 0 Request failure ratio: 0.00 Average HTTP requests per minute since start:176.2 Average ICP messages per minute since start: 0.0 Select loop called: 763993 times, 4.719 ms avg Cache information for squid: Hits as % of all requests: 5min: 3.2%, 60min: 17.0% Hits as % of bytes sent: 5min: 2.0%, 60min: 6.7% Memory hits as % of hit requests: 5min: 0.0%, 60min: 37.2% Disk hits as % of hit requests: 5min: 22.2%, 60min: 33.2% Storage Swap size: 7361088 KB Storage Swap capacity: 58.5% used, 41.5% free Storage Mem size: 54348 KB Storage Mem capacity: 3.9% used, 96.1% free Mean Object Size:23.63 KB Requests 
given to unlinkd: 1 Median Service Times (seconds) 5 min60 min: HTTP Requests (All): 0.10857 0.19742 Cache Misses: 0.10857 0.32154 Cache Hits:0.08265 0.01387 Near Hits: 0.15048 0.12106 Not-Modified Replies: 0.00091 0.00091 DNS Lookups: 0.05078 0.05078 ICP Queries: 0.0 0.0 Resource usage for squid: UP Time: 3605.384 seconds CPU Time: 42.671 seconds CPU Usage: 1.18% CPU Usage, 5 minute avg: 0.72% CPU Usage, 60 minute avg: 1.17% Maximum Resident Size: 845040 KB Page faults with physical i/o: 20 Memory accounted for: Total accounted: 105900 KB memPoolAlloc calls: 2673353 memPoolFree calls:2676487 File descriptor usage for squid: Maximum number of file descriptors: 87516 Largest file desc currently in use:310 Number of file desc currently in use: 198 Files queued for open: 0 Available number of file descriptors: 87318 Reserved number of file descriptors: 100 Store Disk files open: 0 Internal Data Structures: 311543 StoreEntries 4421 StoreEntries with MemObjects 4416 Hot Object Cache Items 311453 on-disk objects I will post another one tomorrow that will indicate growing memory/swapfile consumption. Here is my squid.conf: # OPTIONS FOR AUTHENTICATION # - # 1st four lines for auth_param basic children 5 auth_param basic realm Squid proxy-caching web server auth_param basic credentialsttl 2 hours auth_param basic casesensitive off # next three lines for kerberos authentication (needed to use usernames) # used in conjunction with acl auth proxy_auth line below #auth_param negotiate program /usr/local/libexec/squid/negotiate_kerberos_auth -i #auth_param negotiate children 50 startup=10 idle=5 #auth_param negotiate keep_alive on # ACCESS CONTROLS # - # Example rule allowing access from your local networks. 
# Adapt to list your (internal) IP networks from where browsing # should be allowed #acl manager proto cache_object acl manager url_regex -i ^cache_object:// /squid-internal-mgr/ acl adminhost src 192.168.1.149 acl localnet src 192.168.1.0/24 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl webserver src 198.168.1.35 acl some_big_clients src 192.168.1.149/32 #CI53 # We want to limit downloads of these type of files # Put this all in one line acl magic_words url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .dmg .mp4
[squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
Recently, due to squid 2.7 being EOL'ed, we migrated our squid server to version 3.4.9 on a FreeBSD 10.0-RELEASE running on 64-bit hardware. We started seeing the paging file being swapped out, eventually running out of available memory. From the time squid gets started it usually takes about two days before we see these entries in /var/log/messages:

+swap_pager_getswapspace(16): failed
+swap_pager_getswapspace(16): failed
+swap_pager_getswapspace(16): failed
+swap_pager_getswapspace(12): failed
+swap_pager_getswapspace(16): failed
+swap_pager_getswapspace(12): failed
+swap_pager_getswapspace(6): failed
+swap_pager_getswapspace(16): failed

Looking at the 'top' results, I see that the swap file has been totally exhausted. Memory used by squid hovers around 2.3GB out of the total 3GB of system memory. I am not sure what is causing these memory leaks. After rebooting, squid-internal-mgr/info shows the following statistics:

Squid Object Cache: Version 3.4.9
Build Info:
Start Time: Mon, 24 Nov 2014 18:39:08 GMT
Current Time: Mon, 24 Nov 2014 19:39:13 GMT
Connection information for squid:
  Number of clients accessing cache: 18
  Number of HTTP requests received: 10589
  Number of ICP messages received: 0
  Number of ICP messages sent: 0
  Number of queued ICP replies: 0
  Number of HTCP messages received: 0
  Number of HTCP messages sent: 0
  Request failure ratio: 0.00
  Average HTTP requests per minute since start: 176.2
  Average ICP messages per minute since start: 0.0
  Select loop called: 763993 times, 4.719 ms avg
Cache information for squid:
  Hits as % of all requests: 5min: 3.2%, 60min: 17.0%
  Hits as % of bytes sent: 5min: 2.0%, 60min: 6.7%
  Memory hits as % of hit requests: 5min: 0.0%, 60min: 37.2%
  Disk hits as % of hit requests: 5min: 22.2%, 60min: 33.2%
  Storage Swap size: 7361088 KB
  Storage Swap capacity: 58.5% used, 41.5% free
  Storage Mem size: 54348 KB
  Storage Mem capacity: 3.9% used, 96.1% free
  Mean Object Size: 23.63 KB
  Requests given to unlinkd: 1
Median Service Times (seconds)  5 min  60 min:
  HTTP Requests (All): 0.10857 0.19742
  Cache Misses: 0.10857 0.32154
  Cache Hits: 0.08265 0.01387
  Near Hits: 0.15048 0.12106
  Not-Modified Replies: 0.00091 0.00091
  DNS Lookups: 0.05078 0.05078
  ICP Queries: 0.0 0.0
Resource usage for squid:
  UP Time: 3605.384 seconds
  CPU Time: 42.671 seconds
  CPU Usage: 1.18%
  CPU Usage, 5 minute avg: 0.72%
  CPU Usage, 60 minute avg: 1.17%
  Maximum Resident Size: 845040 KB
  Page faults with physical i/o: 20
Memory accounted for:
  Total accounted: 105900 KB
  memPoolAlloc calls: 2673353
  memPoolFree calls: 2676487
File descriptor usage for squid:
  Maximum number of file descriptors: 87516
  Largest file desc currently in use: 310
  Number of file desc currently in use: 198
  Files queued for open: 0
  Available number of file descriptors: 87318
  Reserved number of file descriptors: 100
  Store Disk files open: 0
Internal Data Structures:
  311543 StoreEntries
  4421 StoreEntries with MemObjects
  4416 Hot Object Cache Items
  311453 on-disk objects

I will post another one tomorrow that will indicate growing memory/swapfile consumption. Here is my squid.conf:

# OPTIONS FOR AUTHENTICATION
# -
# 1st four lines for
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
# next three lines for kerberos authentication (needed to use usernames)
# used in conjunction with acl auth proxy_auth line below
#auth_param negotiate program /usr/local/libexec/squid/negotiate_kerberos_auth -i
#auth_param negotiate children 50 startup=10 idle=5
#auth_param negotiate keep_alive on

# ACCESS CONTROLS
# -
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing # should be allowed #acl manager proto cache_object acl manager url_regex -i ^cache_object:// /squid-internal-mgr/ acl adminhost src 192.168.1.149 acl localnet src 192.168.1.0/24 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl webserver src 198.168.1.35 acl some_big_clients
Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 25/11/2014 9:06 a.m., Doug Sampson wrote: Recently due to squid 2.7 being EOL'ed, we migrated our squid server to version 3.4.9 on a FreeBSD 10.0-RELEASE running on 64-bit hardware. We started seeing paging file being swapped out eventually running out of available memory. From the time squid gets started it usually takes about two days before we see these entries in /var/log/messages as follows: +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(12): failed +swap_pager_getswapspace(16): failed +swap_pager_getswapspace(12): failed +swap_pager_getswapspace(6): failed +swap_pager_getswapspace(16): failed Looking at the 'top' results, I see that the swap file has been totally exhausted. Memory used by squid hovers around 2.3GB out of the total 3GB of system memory. I am not sure what is causing these memory leaks. After rebooting, squid-internal-mgr/info shows the following statistics: Squid Object Cache: Version 3.4.9 Build Info: Start Time: Mon, 24 Nov 2014 18:39:08 GMT Current Time: Mon, 24 Nov 2014 19:39:13 GMT Connection information for squid: Number of clients accessing cache:18 Number of HTTP requests received:10589 Number of ICP messages received:0 Number of ICP messages sent: 0 Number of queued ICP replies: 0 Number of HTCP messages received: 0 Number of HTCP messages sent: 0 Request failure ratio: 0.00 Average HTTP requests per minute since start: 176.2 Average ICP messages per minute since start: 0.0 Select loop called: 763993 times, 4.719 ms avg Cache information for squid: Hits as % of all requests: 5min: 3.2%, 60min: 17.0% Hits as % of bytes sent: 5min: 2.0%, 60min: 6.7% Memory hits as % of hit requests:5min: 0.0%, 60min: 37.2% Disk hits as % of hit requests: 5min: 22.2%, 60min: 33.2% Storage Swap size:7361088 KB Storage Swap capacity: 58.5% used, 41.5% free Storage Mem size:54348 KB Storage Mem capacity: 3.9% used, 96.1% free 
Mean Object Size: 23.63 KB Requests given to unlinkd: 1 Median Service Times (seconds) 5 min60 min: HTTP Requests (All): 0.10857 0.19742 Cache Misses: 0.10857 0.32154 Cache Hits:0.08265 0.01387 Near Hits: 0.15048 0.12106 Not-Modified Replies: 0.00091 0.00091 DNS Lookups: 0.05078 0.05078 ICP Queries: 0.0 0.0 Resource usage for squid: UP Time:3605.384 seconds CPU Time: 42.671 seconds CPU Usage: 1.18% CPU Usage, 5 minute avg: 0.72% CPU Usage, 60 minute avg: 1.17% Maximum Resident Size: 845040 KB Page faults with physical i/o: 20 Memory accounted for: Total accounted: 105900 KB memPoolAlloc calls: 2673353 memPoolFree calls:2676487 File descriptor usage for squid: Maximum number of file descriptors: 87516 Largest file desc currently in use:310 Number of file desc currently in use: 198 Files queued for open: 0 Available number of file descriptors: 87318 Reserved number of file descriptors: 100 Store Disk files open: 0 Internal Data Structures: 311543 StoreEntries 4421 StoreEntries with MemObjects 4416 Hot Object Cache Items 311453 on-disk objects I will post another one tomorrow that will indicate growing memory/swapfile consumption. Here is my squid.conf: # OPTIONS FOR AUTHENTICATION # - # 1st four lines for auth_param basic children 5 auth_param basic realm Squid proxy-caching web server auth_param basic credentialsttl 2 hours auth_param basic casesensitive off # next three lines for kerberos authentication (needed to use usernames) # used in conjunction with acl auth proxy_auth line below #auth_param negotiate program /usr/local/libexec/squid/negotiate_kerberos_auth -i #auth_param negotiate children 50 startup=10 idle=5 #auth_param negotiate keep_alive on # ACCESS CONTROLS # - # Example rule allowing access from your local networks. 
# Adapt to list your (internal) IP networks from where browsing # should be allowed #acl manager proto cache_object acl manager url_regex -i ^cache_object:// /squid-internal-mgr/ acl adminhost src 192.168.1.149 acl localnet src 192.168.1.0/24 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl webserver src 198.168.1.35 acl some_big_clients src 192.168.1.149/32 #CI53 # We want to limit downloads of these type of files # Put this all in one line acl magic_words url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .dmg .mp4 .img # We
[squid-users] Memory leak with reconfigure
* Various memory leaks
Re: [squid-users] Memory leak with reconfigure
On 27/06/2014 7:20 a.m., Alexandre wrote: * Various memory leaks ?? are you enjoying 3.4.6 or something? Amos
[squid-users] Memory leak in 3.2.5
I just upgraded from 2.7STABLE9 to 3.2.5, and now I'm battling a memory leak.

Squid Cache: Version 3.2.5
configure options: '--prefix=/local/proxy/squid' '--with-maxfd=8192' '--with-pthreads' '--enable-storeio=aufs' '--enable-removal-policies=heap' '--enable-cache-digests' '--enable-delay-pools' '--enable-wccpv2' '--disable-external-acl-helpers' '--disable-ipv6' --enable-ltdl-convenience

Here's the significant part of the configuration:

ident_lookup_access permit all
http_port 8000
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest round-robin weight=100
#cache_peer p01.sas.com parent 8080 0 name=p1 no-query no-digest round-robin
cache_peer p02.sas.com parent 8080 0 name=p2 no-query no-digest round-robin
cache_peer p03.sas.com parent 8080 0 name=p3 no-query no-digest round-robin
cache_peer p04.sas.com parent 8080 0 name=p4 no-query no-digest round-robin
#cache_peer p01.sas.com sibling 8000 3130 name=s1 proxy-only
cache_peer p02.sas.com sibling 8000 3130 name=s2 proxy-only
cache_peer p03.sas.com sibling 8000 3130 name=s3 proxy-only
cache_peer p04.sas.com sibling 8000 3130 name=s4 proxy-only
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /cache/squid/aufs 10 64 253
maximum_object_size 500 KB

I have four squid servers identically configured, each a sibling to the others. Each has a local parent proxy (virus scanner). If the local virus scanner is unresponsive, the request is forwarded to one of the other three in round-robin fashion. I am also performing ident queries just for logging purposes. Requests do not require authentication.

This configuration has worked without issue for several years with squid 2.7STABLE9. It doesn't work well with squid 3.2.5. Memory usage grows at about the same rate as the query rate. After 16 hours memory usage is about 6 GB. The query rate is about 200/sec during business hours. If I restart squid the memory usage goes back to ~600 MB and starts growing.
There are about 2.5 million objects in the cache. I thought the problem might be ICP, so I tried HTCP instead of ICP for the cache siblings but that did not make a difference. I do have another 3.2.5 system handling ~100 requests/second that does not exhibit the problem. The one without a problem uses these four as siblings, and I've tried both ICP and HTCP there, too. I've checked the cache.log file for clues. After ~5,000,000 queries I found ~1,500 messages like WARNING: Forwarding loop detected for and ~3,000 messages like Failed to select source for '[null_entry]' Those messages would have to leak about 1 MB each to account for the memory loss I'm seeing. The system without the problem does not perform the ident queries. Could the leak be there? Is anyone using ident? Mike Mitchell
[squid-users] Memory leak Squid Cache: Version 3.1.16 + FreeBSD 7.4-STABLE
Hallo,

After port cvsup, updating libs, packages and so on, I got a weird problem which I guess is causing a memory leak.

root:~# free
SYSTEM MEMORY SUMMARY:
mem_used:    574558208  ( 547MB) [ 53%] Logically used memory
mem_avail:  +499183616  ( 476MB) [ 46%] Logically available memory
mem_total: = 1073741824 (1024MB) [100%] Logically total memory

root:~# /usr/local/etc/rc.d/squid stop

mem_used: back to 23%.

root:~# valgrind -v --tool=memcheck --leak-check=yes squid
*snip*
==45738== 1 errors in context 2 of 39:
==45738== Mismatched free() / delete / delete []
==45738==    at 0x4B9D5: operator delete(void*) (in /usr/local/lib/valgrind/vgpreload_memcheck-x86-freebsd.so)
==45738==    by 0xE0518: std::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >::~basic_ostringstream() (in /usr/lib/libstdc++.so.6)
==45738==    by 0x8091369: Debug::finishDebug() (debug.cc:753)
==45738==    by 0x81036FD: PconnModule::PconnModule() (pconn.cc:348)
==45738==    by 0x8103739: PconnModule::GetInstance() (pconn.cc:356)
==45738==    by 0x8103CDE: PconnPool::PconnPool(char const*) (pconn.cc:241)
==45738==    by 0x80A5C0F: __static_initialization_and_destruction_0(int, int) (forward.cc:76)
==45738==    by 0x80A5C59: global constructors keyed to _ZN8FwdState15CBDATA_FwdStateE (forward.cc:1464)
==45738==    by 0x81ABDD7: ??? (in /usr/local/sbin/squid)
==45738==    by 0x804BAD4: ??? (in /usr/local/sbin/squid)
==45738==    by 0x804CBB7: (below main) (in /usr/local/sbin/squid)
==45738== Address 0x2e8180 is 0 bytes inside a block of size 180 alloc'd
==45738==    at 0x4C0F5: malloc (in /usr/local/lib/valgrind/vgpreload_memcheck-x86-freebsd.so)
==45738==    by 0x81AB1DD: xmalloc (util.c:508)
==45738==    by 0x8050DA0: operator new(unsigned int) (SquidNew.h:49)
==45738==    by 0x8090C14: Debug::getDebugOut() (debug.cc:735)
==45738==    by 0x81036E8: PconnModule::PconnModule() (pconn.cc:348)
==45738==    by 0x8103739: PconnModule::GetInstance() (pconn.cc:356)
==45738==    by 0x8103CDE: PconnPool::PconnPool(char const*) (pconn.cc:241)
==45738==    by 0x80A5C0F: __static_initialization_and_destruction_0(int, int) (forward.cc:76)
==45738==    by 0x80A5C59: global constructors keyed to _ZN8FwdState15CBDATA_FwdStateE (forward.cc:1464)
==45738==    by 0x81ABDD7: ??? (in /usr/local/sbin/squid)
==45738==    by 0x804BAD4: ??? (in /usr/local/sbin/squid)
==45738==    by 0x804CBB7: (below main) (in /usr/local/sbin/squid)
==45738==
==45738== ERROR SUMMARY: 39 errors from 39 contexts (suppressed: 0 from 0)
*snip*

Anyone have a clue?

TY
--
budsz
Re: [squid-users] Memory leak Squid Cache: Version 3.1.16 + FreeBSD 7.4-STABLE
On Tue, 1 Nov 2011 16:42:13 +0700, budsz wrote:

> Hallo, after port cvsup, updating libs, packages and so on, I got a weird problem which I guess is causing a memory leak.
>
> root:~# free
> SYSTEM MEMORY SUMMARY:
> mem_used:    574558208  ( 547MB) [ 53%] Logically used memory
> mem_avail:  +499183616  ( 476MB) [ 46%] Logically available memory
> mem_total: = 1073741824 (1024MB) [100%] Logically total memory
>
> root:~# /usr/local/etc/rc.d/squid stop
>
> mem_used: back to 23%.

Note that Squid uses memory for a lot of things, most of them in very large blocks or large numbers of small blocks. The OS does not account for memory released.

> root:~# valgrind -v --tool=memcheck --leak-check=yes squid
> *snip*
> ==45738== 1 errors in context 2 of 39:
> ==45738== Mismatched free() / delete / delete []
> ==45738==    at 0x4B9D5: operator delete(void*) (in /usr/local/lib/valgrind/vgpreload_memcheck-x86-freebsd.so)
> ==45738==    by 0xE0518: std::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >::~basic_ostringstream() (in /usr/lib/libstdc++.so.6)
> ==45738==    by 0x8091369: Debug::finishDebug() (debug.cc:753)
> ==45738==    by 0x81036FD: PconnModule::PconnModule() (pconn.cc:348)
> ==45738==    by 0x8103739: PconnModule::GetInstance() (pconn.cc:356)
> ==45738==    by 0x8103CDE: PconnPool::PconnPool(char const*) (pconn.cc:241)
> ==45738==    by 0x80A5C0F: __static_initialization_and_destruction_0(int, int) (forward.cc:76)
> ==45738==    by 0x80A5C59: global constructors keyed to _ZN8FwdState15CBDATA_FwdStateE (forward.cc:1464)
> ==45738==    by 0x81ABDD7: ??? (in /usr/local/sbin/squid)
> ==45738==    by 0x804BAD4: ??? (in /usr/local/sbin/squid)
> ==45738==    by 0x804CBB7: (below main) (in /usr/local/sbin/squid)
> ==45738== Address 0x2e8180 is 0 bytes inside a block of size 180 alloc'd
> ==45738==    at 0x4C0F5: malloc (in /usr/local/lib/valgrind/vgpreload_memcheck-x86-freebsd.so)
> ==45738==    by 0x81AB1DD: xmalloc (util.c:508)
> ==45738==    by 0x8050DA0: operator new(unsigned int) (SquidNew.h:49)
> ==45738==    by 0x8090C14: Debug::getDebugOut() (debug.cc:735)
> ==45738==    by 0x81036E8: PconnModule::PconnModule() (pconn.cc:348)
> ==45738==    by 0x8103739: PconnModule::GetInstance() (pconn.cc:356)
> ==45738==    by 0x8103CDE: PconnPool::PconnPool(char const*) (pconn.cc:241)
> ==45738==    by 0x80A5C0F: __static_initialization_and_destruction_0(int, int) (forward.cc:76)
> ==45738==    by 0x80A5C59: global constructors keyed to _ZN8FwdState15CBDATA_FwdStateE (forward.cc:1464)
> ==45738==    by 0x81ABDD7: ??? (in /usr/local/sbin/squid)
> ==45738==    by 0x804BAD4: ??? (in /usr/local/sbin/squid)
> ==45738==    by 0x804CBB7: (below main) (in /usr/local/sbin/squid)
> ==45738==
> ==45738== ERROR SUMMARY: 39 errors from 39 contexts (suppressed: 0 from 0)
> *snip*
>
> Anyone have a clue?

For some reason your compiler is using Squid's internal overloaded definition of new() and failing to use the matching definition for delete(), which is defined in an identical way right next to it. This could be a problem, since they go through two different memory accounting systems, and it will leave Squid's allocated-RAM counters forever climbing.

You snipped away the details about RAM impact. Reconstructing from the trace count and size, this appears to be leaking a total of only about 7 KB. It seems there is something else going on.

Amos
Re: [squid-users] Memory leak?
>> The squid process grows without bounds here. I've read the FAQ, and tried lowering the cache_mem setting and decreasing the cache_dir size. That server has 4GB physical memory, and with total cache_dir size set to 60G, squid resident size can still grow beyond bound and start eating swap.
>
> Note that cache_mem is not a bound on squid memory usage. Merely the RAM cache_dir.

I know that, thanks for mentioning. It seems like it is aufs which causes this memory leak problem. I've noticed that with FreeBSD 6.2 + diskd, everything went fine. Does anyone else using FreeBSD 7.1 + aufs have the same problem?

Thanks and regards,
Liu
Re: [squid-users] Memory leak?
Bin Liu wrote:

> Thanks for your reply.
>
> # /usr/local/squid/sbin/squid -v
> Squid Cache: Version 2.7.STABLE6
> configure options: '--prefix=/usr/local/squid' '--with-pthreads' '--with-aio' '--with-dl' '--with-large-files' '--enable-storeio=ufs,aufs,diskd,coss,null' '--enable-removal-policies=lru,heap' '--enable-htcp' '--enable-kill-parent-hack' '--enable-snmp' '--enable-freebsd-tproxy' '--disable-poll' '--disable-select' '--enable-kqueue' '--disable-epoll' '--disable-ident-lookups' '--enable-stacktraces' '--enable-cache-digests' '--enable-err-languages=English'
>
> The squid process grows without bounds here. I've read the FAQ, and tried lowering the cache_mem setting and decreasing the cache_dir size. That server has 4GB physical memory, and with total cache_dir size set to 60G, squid resident size can still grow beyond bound and start eating swap.

Note that cache_mem is not a bound on squid memory usage. Merely the RAM cache_dir.

> The OS is FreeBSD 7.1-RELEASE. Thanks.

Do you have access to any memory-tracing software (valgrind or similar)? Tracking actual memory usage while live can be done when built against valgrind, together with certain cachemgr reports. I'll have to look them up.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
Current Beta Squid 3.1.0.7
[squid-users] Memory leak?
I've already set memory_pools off in squid.conf, so squid should free all unused memory. But I can still see these lines in cachemgr after running for some time: memPoolAlloc calls: 549769495 memPoolFree calls: 545292412 Did I miss something? Regards
Re: [squid-users] Memory leak?
Bin Liu wrote: I've already set memory_pools off in squid.conf, so squid should free all unused memory. But I can still see these lines in cachemgr after running for some time: memPoolAlloc calls: 549769495 memPoolFree calls: 545292412 Did I miss something? Usually this turns out to be active state. All indexes are alloc'd, and there are many indexes for all types of things, some of which have their own sub-alloc'd data floating as well. What squid version? Amos -- Please be using Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14 Current Beta Squid 3.1.0.7
Re: [squid-users] Memory leak?
Thanks for your reply. # /usr/local/squid/sbin/squid -v Squid Cache: Version 2.7.STABLE6 configure options: '--prefix=/usr/local/squid' '--with-pthreads' '--with-aio' '--with-dl' '--with-large-files' '--enable-storeio=ufs,aufs,diskd,coss,null' '--enable-removal-policies=lru,heap' '--enable-htcp' '--enable-kill-parent-hack' '--enable-snmp' '--enable-freebsd-tproxy' '--disable-poll' '--disable-select' '--enable-kqueue' '--disable-epoll' '--disable-ident-lookups' '--enable-stacktraces' '--enable-cache-digests' '--enable-err-languages=English' The squid process grows without bounds here. I've read the FAQ, and tried lowering cache_mem setting, decreasing cache_dir size. That server has 4GB physical memory, and with total cache_dir size setting to 60G, squid resident size still can grow beyond bound and start eating swap. The OS is FreeBSD 7.1-RELEASE. Thanks and regards, Liu
[squid-users] Memory leak in squid 2.5STABLE13?
I have four servers running 2.5STABLE13, each handling ~100 requests/second. They're all running on RedHat Linux ES 2.1. Each of the four servers leaks about 100 MB a week. It's been that way for years. I restart squid once a month so I don't run out of memory. Here's some cachemgr output:

Squid Object Cache: Version 2.5.STABLE13
Start Time: Sat, 15 Apr 2006 21:43:47 GMT
Current Time: Mon, 24 Apr 2006 19:22:37 GMT
Connection information for squid:
  Number of clients accessing cache: 0
  Number of HTTP requests received: 12318603
  Number of ICP messages received: 3718919
  Number of ICP messages sent: 3719279
  Number of queued ICP replies: 0
  Request failure ratio: 0.00
  Average HTTP requests per minute since start: 961.0
  Average ICP messages per minute since start: 580.3
  Select loop called: 102978835 times, 7.469 ms avg
Cache information for squid:
  Request Hit Ratios: 5min: 76.4%, 60min: 66.9%
  Byte Hit Ratios: 5min: 19.1%, 60min: 9.3%
  Request Memory Hit Ratios: 5min: 4.6%, 60min: 6.4%
  Request Disk Hit Ratios: 5min: 6.3%, 60min: 9.9%
  Storage Swap size: 31866372 KB
  Storage Mem size: 98444 KB
  Mean Object Size: 14.16 KB
  Requests given to unlinkd: 0
Median Service Times (seconds)  5 min  60 min:
  HTTP Requests (All): 0.00562 0.00767
  Cache Misses: 0.12783 0.13498
  Cache Hits: 0.00463 0.00463
  Near Hits: 0.06640 0.07409
  Not-Modified Replies: 0.00379 0.00463
  DNS Lookups: 0.03223 0.03374
  ICP Queries: 0.00137 0.00185
Resource usage for squid:
  UP Time: 769130.108 seconds
  CPU Time: 49356.840 seconds
  CPU Usage: 6.42%
  CPU Usage, 5 minute avg: 21.48%
  CPU Usage, 60 minute avg: 26.74%
  Process Data Segment Size via sbrk(): 492179 KB
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 2439
Memory usage for squid via mallinfo():
  Total space in arena: 492179 KB
  Ordinary blocks: 454101 KB 344150 blks
  Small blocks: 0 KB 0 blks
  Holding blocks: 19968 KB 12 blks
  Free Small blocks: 0 KB
  Free Ordinary blocks: 38077 KB
  Total in use: 474069 KB 93%
  Total free: 38077 KB 7%
  Total size: 512147 KB
Memory accounted for:
  Total accounted: 258504 KB
  memPoolAlloc calls: 1436813623
  memPoolFree calls: 1431954945
File descriptor usage for squid:
  Maximum number of file descriptors: 4096
  Largest file desc currently in use: 389
  Number of file desc currently in use: 226
  Files queued for open: 0
  Available number of file descriptors: 3870
  Reserved number of file descriptors: 100
  Store Disk files open: 1
Internal Data Structures:
  2263720 StoreEntries
  18309 StoreEntries with MemObjects
  18288 Hot Object Cache Items
  2250084 on-disk objects

Notice that the memory usage "Total in use:" is 474 MB, while the "Memory accounted for" is only 258 MB. I've tried configuring with --enable-dlmalloc and that didn't have any effect on the memory leak. I've also tried replacing the version of dlmalloc (2.6.4) that's shipped with squid with a newer version (2.7.2), but that didn't have any effect either. Does anyone have an idea of what I should try next?

Mike Mitchell
SAS Institute Inc.
[EMAIL PROTECTED]
(919) 531-6793
Re: [squid-users] Memory leak in squid 2.5STABLE13?
mån 2006-04-24 klockan 15:47 -0400 skrev Mike Mitchell: Notice that the memory usage Total in Use: is 474 MB, while the Memory accounted for is only 258 MB. Does anyone have an idea of what I should try next? Start by making a graph of Total in Use and Memory accounted for. If the growth of the two follows similar patterns then there is good news and you can go to the Memory pools page to determine where the memory have gone. If Memory accounted for is stable and only Total in Use is growing then the situation is slightly trickier (but not hopeless). To continue from there check if a) memprof is available for your RedHat OS version. If it is then it can be used to collect a detailed memory usage profile with only a small impact on performance. b) If memprof isn't available then the next option to try is valgrind. This is even more detailed but have a significantly higher performance impact. I've tried configuring with --enable-dlmalloc and that didn't have any affect on the memory leak. I've also tried replacing the version of dlmalloc (2.6.4) that's shipped with squid with a newer version (2.7.2), but that didn't have any affect either. Switching the malloc implementation only helps if Total in use is stable but Total space in arena continues growing a lot.. Regards Henrik signature.asc Description: Detta är en digitalt signerad meddelandedel
Re: [squid-users] Memory leak?[Scanned]
On Mon, 2003-09-01 at 19:50, shpendi wrote:
> Hi there, I'm running a Squid 2.5.STABLE3 Squid on Redhat 9 linux installed on a dual P4 2.4GHz, 3GB RAM and 6xSCSI 320 harddisks.

Your cache size is too large. See the FAQ on memory use.

Cheers,
Rob
--
GPG key available at: http://members.aardvark.net.au/lifeless/keys.txt
[squid-users] Memory leak on epoll support in IA64
Hello Adam and all,

I am Muthukumar, working on epoll development for Squid on the IA64 platform. I have tested Squid with epoll at 300 req/sec, but it consumes more than 1.9 GB out of 2.0 GB. Following Adam's advice, I changed the swap from 1945.634 MB to 256.000 MB:

> Try shrinking your swap partition in half (to about 1 GB) and see if that improves your memory situation. You don't have to reboot the system - just use swapoff to turn off the swap space, use parted to resize it (check /etc/fstab for the minor partition # first), then use swapon to turn it back on.

But Squid still exits:

Sep 4 11:10:19 pandia squid[1184]: Squid Parent: child process 1186 started
Sep 4 11:15:40 pandia kernel: Out of Memory: Killed process 1186 (squid).
Sep 4 11:15:40 pandia squid[1184]: Squid Parent: child process 1186 exited due to signal 9

I have tuned the following kernel parameters:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
kernel.sysrq = 0
kernel.core_uses_pid = 1
fs.file-max = 16384
net.ipv4.ipfrag_low_thresh = 90
net.ipv4.ipfrag_high_thresh = 100
net.ipv4.ipfrag_time = 45
net.ipv4.tcp_rmem = 200 225 250
net.ipv4.tcp_wmem = 100 125 150
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 4096
net.ipv4.neigh.default.gc_thresh3 = 8192
net.core.rmem_max = 150
net.core.rmem_default = 150
net.core.wmem_max = 100
net.core.wmem_default = 100

Squid was built with these configure options: '--prefix=/usr/local/squidepoll' '--enable-epoll' '--disable-poll' '--disable-select' '--disable-kqueue' '--enable-storeio=null,aufs,ufs' '--with-file-descriptors=16384' '--enable-async-io=16' '--with-pthreads'

I have tried hard to satisfy more than 300 requests, but I cannot make it. Is there any other parameter that must be tuned for good performance?

Please suggest some information regarding this. As for the kernel, I am using sys_epoll-enabled 2.4.20 on IA64, built with the sys_epoll patch and the IA64 patch; I added the epoll syscall in entry.S to enable epoll support on IA64. I want to know whether the problem lies in the IA64 kernel or in squid-3.0-PRE3. The 300 req/sec test also consumed up to 1.8 GB, so please suggest a way to improve Squid's epoll support. The 2.5 and 2.6 kernel series support sys_epoll natively; I enabled epoll on 2.4.20 manually, so I don't think the kernel is the cause of the memory leak in Squid 3.0.

Is there any way to improve squid-3.0-PRE3's epoll support?

Thanks in advance.
Muthukumar
Re: [squid-users]Memory leak problem on epoll i/o squid on IA64
Hi,

MUTHUKUMAR KANDASAMY wrote:
> In the compilation of squid, I used
> cache_mem 1200 MB
> cache_dir null

The cache line looks fine; I doubt that is causing you problems.

> fs.file-max = 16384

That should probably be higher, like 32768 or something higher...

> net.ipv4.ipfrag_low_thresh = 90
> net.ipv4.ipfrag_high_thresh = 100
> net.ipv4.ipfrag_time = 45
> net.ipv4.tcp_rmem = 200 225 250
> net.ipv4.tcp_wmem = 100 125 150
> net.ipv4.neigh.default.gc_thresh1 = 1024
> net.ipv4.neigh.default.gc_thresh2 = 4096
> net.ipv4.neigh.default.gc_thresh3 = 8192
> net.core.rmem_max = 150
> net.core.rmem_default = 150
> net.core.wmem_max = 100
> net.core.wmem_default = 100

What the heck? Why are your settings set to such insane levels? The first number in net.ipv4.tcp_wmem and net.ipv4.tcp_rmem would cause you problems.

tcp_wmem - vector of 3 INTEGERs: min, default, max
  min: Amount of memory reserved for send buffers for a TCP socket. Each TCP socket has rights to use it due to the fact of its birth. Default: 4K

tcp_rmem - vector of 3 INTEGERs: min, default, max
  min: Minimal size of receive buffer used by TCP sockets. It is guaranteed to each TCP socket, even under moderate memory pressure. Default: 8K

Taking a wild stab, and saying 256 file descriptors open, that means something on the order of: 256 * (200 + 100) = 768 MB (and that is the MINIMUM even under moderate memory pressure). Lower all of your numbers down to a sane level.

--
David Nicklay
Location: CNN Center - SE0811A
Office: 404-827-2698  Cell: 404-545-6218
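David's back-of-the-envelope estimate can be sanity-checked in a few lines of Python. This is only a sketch (the kernel's real TCP memory accounting is more involved), and note that these sysctls are in bytes: taken literally, the quoted minimums work out to a tiny memory floor rather than 768 MB, so the more serious danger of 100-200 byte buffers is crippled TCP throughput, not exhausted RAM.

```python
def tcp_buffer_floor_bytes(n_sockets, rmem_min, wmem_min):
    """Lower bound on TCP buffer memory: each open socket is
    guaranteed at least the minimum receive and send buffer."""
    return n_sockets * (rmem_min + wmem_min)

# Sysctl minimums quoted in the thread (bytes): tcp_rmem=200, tcp_wmem=100.
print(tcp_buffer_floor_bytes(256, 200, 100))      # 76800 bytes (~75 KB)

# Stock Linux 2.4 minimums for comparison: 8 KB receive, 4 KB send.
print(tcp_buffer_floor_bytes(256, 8192, 4096))    # 3145728 bytes (3 MB)
```

Either way the conclusion stands: reset these sysctls to their defaults before looking elsewhere for the leak.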
RE: [squid-users] Memory leak?
> I have no other memory-eating processes on this machine, except the usual processes necessary for running squid... why is it more important for the kernel to keep the cache/buffers in memory and swap squid out? Can I alter this somehow?

I noticed this with my Squid box as well - the OS was using swap space instead of freeing up memory used for buffers and cache (RedHat Linux 7.3). I originally had a 1 GB swap partition and saw a similar problem - free memory was very low, but there was a large amount of buffers and cache, and some swap usage. Using parted, I resized this partition to 256 MB, and the system used less memory for buffers and cache.

Try shrinking your swap partition in half (to about 1 GB) and see if that improves your memory situation. You don't have to reboot the system - just use swapoff to turn off the swap space, use parted to resize it (check /etc/fstab for the minor partition # first), then use swapon to turn it back on.

Adam
[squid-users] Memory leak?
Hi there,

I'm running Squid 2.5.STABLE3 on Redhat 9 Linux installed on a dual P4 2.4GHz with 3GB RAM and 6x SCSI-320 hard disks. The proxy is configured to work as a transparent proxy.

The problem is that after the proxy starts being hit, memory consumption grows as time passes until it starts using swap. I have configured cache_mem to 192MB, and the squid process is using around 500MB of RAM, but the rest is being eaten somehow. When I shut the squid process down (regularly), only the memory used by Squid (500MB) is released, while around 2.5 GB is left allocated, and the only way to deallocate it is to restart the server.

I have upgraded Redhat to its latest packages and tried different malloc libraries (GNU, DL, ...) but it didn't help. I turned memory pools off, but that didn't help either. I applied the memory leak patches for Squid 2.5.3, although I didn't use those functions, but everything remained the same. I'm using the ReiserFS filesystem on the disks where the cache resides (notail, noatime, nodiratime).

Can somebody give me a hint on what could be going wrong here? Any hint is appreciated.

Compile options: --enable-cache-digests --with-aio --enable-snmp --enable-gnuregex --enable-removal-policies --enable-storeio=ufs,diskd --enable-linux-netfilter --disable-ident-lookups --enable-poll --enable-underscores --enable-xmalloc-statistics
ReiserFS: 3.6.25
Kernel: 2.4.20-20.9smp
glibc: 2.3.2-27.9

Hope somebody has an idea of what's going on, as I'm running out of options here :)

regards,
Shpend Bakalli
Re: [squid-users] Memory leak?
On Mon, 2003-09-01 at 19:50, shpendi wrote:
> Hi there, I'm running a Squid 2.5.STABLE3 Squid on Redhat 9 linux installed on a dual P4 2.4GHz, 3GB RAM and 6xSCSI 320 harddisks.

Your cache size is too large. See the FAQ on memory use.

Cheers,
Rob
--
GPG key available at: http://members.aardvark.net.au/lifeless/keys.txt
Re: [squid-users] Memory leak?
-- Original Message --
From: Robert Collins [EMAIL PROTECTED]
Date: Mon, 01 Sep 2003 21:07:44 +1000

> On Mon, 2003-09-01 at 19:50, shpendi wrote:
>> Hi there, I'm running a Squid 2.5.STABLE3 Squid on Redhat 9 linux installed on a dual P4 2.4GHz, 3GB RAM and 6xSCSI 320 harddisks.
> Your cache size is too large. See the FAQ on memory use.
> Cheers, Rob

For the moment my cache size is around 13 GB, and the maximum setting is 48 GB, which I'll probably lower a bit. I have around 1 million objects on disk and around 30-40,000 hot (in-memory) objects. This still does not explain why 2 and more gigs of memory are being eaten (it continues to eat the memory and swaps), and it is not accounted for in the squid process (which grows up to 500 MB). When squid is shut down cleanly, these 2 gigs of memory are not released to the system (only the 500 MB are). For other details, please read the previous post.

regards,
Shpendi
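Rob's pointer to the FAQ can be made concrete. A commonly cited rule of thumb (an approximation, not an exact figure) is that Squid's in-memory index costs roughly 10 MB of RAM per GB of cache_dir, on top of cache_mem itself; a hypothetical sketch using the numbers from this thread:

```python
def estimate_squid_ram_mb(cache_dir_gb, cache_mem_mb,
                          index_mb_per_gb=10):
    """Back-of-the-envelope Squid RAM estimate: the in-core index
    costs roughly 10 MB per GB of on-disk cache (a rule of thumb
    assuming a mean object size around 13 KB), plus cache_mem.
    Malloc overhead and I/O buffers come on top of this."""
    return cache_dir_gb * index_mb_per_gb + cache_mem_mb

# The configuration discussed here: 48 GB maximum cache_dir,
# cache_mem 192 MB.
print(estimate_squid_ram_mb(48, 192))   # -> 672
# Even the current ~13 GB of cache needs a sizeable index:
print(estimate_squid_ram_mb(13, 192))   # -> 322
```

So a full 48 GB cache would plausibly push the process well past the ~500 MB observed, which is why the FAQ advice is to size cache_dir against available RAM.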
Re: [squid-users] Memory leak?
Very simple: decrease your cache_mem to 32, not more than that, and let's see :)

--
Best Regs,
Masood Ahmad Shah
System Administrator
Fibre Net (Pvt) Ltd. Lahore, Pakistan
Tel: +92-42-6677024
Mobile: +92-300-4277367
http://www.fibre.net.pk

"Unix is very simple, but it takes a genius to understand the simplicity." (Dennis Ritchie)

- Original Message -
From: shpendi [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, September 01, 2003 2:50 PM
Subject: [squid-users] Memory leak?

> Hi there,
> I'm running a Squid 2.5.STABLE3 Squid on Redhat 9 linux installed on a dual P4 2.4GHz, 3GB RAM and 6xSCSI 320 harddisks. The proxy is configured to work as transparent.
> The problem is that after the proxy starts being hit, the memory consumption is going high as time passes until it starts using swap.
> I have configured cache_mem to 192MB, and the squid process is using around 500MB of RAM but the rest is being eaten somehow. When I turn the squid process down (regularly), only the memory used by Squid (500MB) is released while around 2.5 GB is left allocated, and the only way to deallocate it is to restart the server.
> I have upgraded redhat to its latest packages, and tried different malloc libraries (GNU, DL, ...) but it didn't help. I turned memory pools off, but that didn't help either. I applied the memory leak patches for squid 2.5.3 although I didn't use those functions but everything remained the same.
> I'm using ReiserFS filesystem on the disks where the cache resides (notail, noatime, nodiratime).
> Can somebody give me a hint on what could be going wrong here. Any hint is appreciated.
> Compile options: --enable-cache-digests --with-aio --enable-snmp --enable-gnuregex --enable-removal-policies --enable-storeio=ufs,diskd --enable-linux-netfilter --disable-ident-lookups --enable-poll --enable-underscores --enable-xmalloc-statistics
> ReiserFS: 3.6.25
> Kernel: 2.4.20-20.9smp
> glibc: 2.3.2-27.9
> Hope somebody has an idea of what's going on as I'm running out of options here :)
> regards,
> Shpend Bakalli
Re: [squid-users] Memory leak?
On Mon, 1 Sep 2003, Shpend Bakalli wrote:
> (it continues to eat the mem and swaps), and it is not accounted in squid process (which grows up to 500 MB). When squid is shut down

Forgive me for asking the obvious question: are you basing this on the output of free? You're aware that the OS uses unallocated pages as disk cache?

Rick.
Re: [squid-users] Memory leak?
Mon 2003-09-01 at 14:40, Shpend Bakalli wrote:
> For the moment my cache size is around 13 GB and the maximum setting is 48GB which I'll probably lower a bit. I have around 1 million objects on disk and around 30-40,000 hot (in-memory) objects. This still does not explain why 2 and more gigs of memory are being eaten (it continues to eat the mem and swaps), and it is not accounted in the squid process (which grows up to 500 MB). When squid is shut down cleanly, these 2 gigs of memory are not released to the system (only the 500MB are). For other details, please read the previous post.

If you find that the "-/+ buffers/cache" Used value (the one below the plain Used value) is increasing and it is not used by the Squid process, then you have some other software on the same server using a lot of memory. Use ps to find out what process is using the memory and get rid of it.

--
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org
Please consult the Squid FAQ and other available documentation before asking Squid questions, and use the squid-users mailing list when no answer can be found. Private support questions are only answered for a fee or as part of a commercial Squid support contract.
If you need commercial Squid support or cost-effective Squid and firewall appliances please refer to MARA Systems AB, Sweden http://www.marasystems.com/, [EMAIL PROTECTED]
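Henrik's "use ps" step can also be done straight from /proc. A small Linux-specific Python sketch that lists the processes with the largest resident set size (reading the Name and VmRSS fields of /proc/PID/status):

```python
import os
import re

def top_rss(n=5):
    """Return the n processes with the largest resident set size,
    as (rss_kb, pid, name) tuples, read from /proc (Linux only)."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/status" % pid) as f:
                status = f.read()
        except OSError:
            continue  # process exited or is not readable; skip it
        name = re.search(r"^Name:\s*(\S+)", status, re.M)
        rss = re.search(r"^VmRSS:\s*(\d+)\s*kB", status, re.M)
        if name and rss:
            procs.append((int(rss.group(1)), int(pid), name.group(1)))
    return sorted(procs, reverse=True)[:n]

for rss_kb, pid, name in top_rss():
    print("%8d KB  %6d  %s" % (rss_kb, pid, name))
```

If the top entry is not squid, some other process is eating the memory, exactly as Henrik suggests.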
Re: [squid-users] Memory leak?
From: Richard Lyons [EMAIL PROTECTED]
Date: Mon, 1 Sep 2003 23:12:14 +1000

> On Mon, 1 Sep 2003, Shpend Bakalli wrote:
>> (it continues to eat the mem and swaps), and it is not accounted in squid process (which grows up to 500 MB). When squid is shut down
> Forgive me for asking the obvious question: are you basing this on the output of free? You're aware that the OS uses unallocated pages as disk cache?
> Rick.

I am aware, but I don't think that squid would start swapping if the OS were using 2 gigs of disk cache... the cache/buffer memory supposedly should be freed to the applications asking for it, right?

Output from free:

             total       used       free     shared    buffers     cached
Mem:       3098532    3087732      10800          0     197972    1902248
-/+ buffers/cache:     987512    2111020
Swap:      2040244      19948    2020296

and after a while...

             total       used       free     shared    buffers     cached
Mem:       3098532    3087804      10728          0     171668    1798304
-/+ buffers/cache:    1117832    1980700
Swap:      2040244      43728    1996516

I have no other memory-eating processes on this machine, except the usual processes necessary for running squid... why is it more important for the kernel to keep the cache/buffers in memory and swap squid out? Can I alter this somehow?

regards,
Shpendi
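The "-/+ buffers/cache" line that Rick and Henrik refer to can be derived by hand from the Mem: line; a small sketch checked against the first free output quoted above:

```python
def app_memory(total, used, free, buffers, cached):
    """Reproduce free(1)'s '-/+ buffers/cache' line: memory actually
    held by applications, and memory effectively available to them
    (free plus reclaimable buffers and page cache).  All values in KB."""
    used_by_apps = used - buffers - cached
    available = free + buffers + cached
    return used_by_apps, available

# Numbers from the first `free` output above (KB):
print(app_memory(3098532, 3087732, 10800, 197972, 1902248))
# -> (987512, 2111020), matching the -/+ buffers/cache line
```

Between the two samples, used-by-apps grows from ~987 MB to ~1117 MB while the Squid process stays near 500 MB, which is what makes the question of where the rest went interesting.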
[squid-users] Memory Leak tools
Hello all,

I have compiled squid-3.0-PRE on IA64. From testing with Polygraph, I suspect there is a memory leak in the kernel. Please suggest which tool can be used to find memory leaks on the IA64 platform. I am using kernel 2.4.20 with epoll support.

Your help will be appreciated.

Muthukumar