Re: [squid-users] Squid 3.3x: UNLNK id(232) Error: no filename in shm buffer
Hello,

I was going to ask about the same thing. I'm running 3.2 and I also see tons of these errors filling my cache.log.

On Tue, 12 Feb 2013 13:29:07 +0100 David Touzeau da...@articatech.com wrote:

> Dear
>
> I have these errors on Squid 3.3. What do they mean?
>
> 26988 UNLNK id(232) Error: no filename in shm buffer
> 26989 UNLNK id(547) Error: no filename in shm buffer
> 26988 UNLNK id(233) Error: no filename in shm buffer
> 26989 UNLNK id(548) Error: no filename in shm buffer
> 26988 UNLNK id(234) Error: no filename in shm buffer
> 26989 UNLNK id(549) Error: no filename in shm buffer
> 2013/02/12 09:12:23| Error sending to ICMPv6 packet to [2a02:13a8:102:1:40::83]. ERR: (101) Network is unreachable
> 26992 UNLNK id(300) Error: no filename in shm buffer
> 26992 UNLNK id(301) Error: no filename in shm buffer
> 26989 UNLNK id(550) Error: no filename in shm buffer
> 26991 UNLNK id(276) Error: no filename in shm buffer
> 26989 UNLNK id(551) Error: no filename in shm buffer
> 26988 UNLNK id(235) Error: no filename in shm buffer
> 26989 UNLNK id(552) Error: no filename in shm buffer
> 26991 UNLNK id(277) Error: no filename in shm buffer
> 26988 UNLNK id(236) Error: no filename in shm buffer
> 26989 UNLNK id(553) Error: no filename in shm buffer
> 26992 UNLNK id(302) Error: no filename in shm buffer
> 26992 UNLNK id(303) Error: no filename in shm buffer
> 26992 UNLNK id(304) Error: no filename in shm buffer
> 26989 UNLNK id(554) Error: no filename in shm buffer
> 26991 UNLNK id(278) Error: no filename in shm buffer
> 26989 UNLNK id(555) Error: no filename in shm buffer
> 26991 UNLNK id(279) Error: no filename in shm buffer
> 26989 UNLNK id(556) Error: no filename in shm buffer
> 26991 UNLNK id(280) Error: no filename in shm buffer
> 26989 UNLNK id(557) Error: no filename in shm buffer
> 26992 UNLNK id(305) Error: no filename in shm buffer
> 26991 UNLNK id(281) Error: no filename in shm buffer
> 26989 UNLNK id(558) Error: no filename in shm buffer
> 26991 UNLNK id(282) Error: no filename in shm buffer
> 26989 UNLNK id(559) Error: no filename in shm buffer
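If it helps with triage: the "N UNLNK id(M)" format looks like output from the diskd helper processes, which print their own PID as the first field (that reading is an assumption on my part). A quick tally would show whether one helper or all of them are affected; the cache.log path below is illustrative:

  # Count "no filename in shm buffer" errors per reporting PID
  # (assumes the first field of each line is the helper's PID):
  grep 'no filename in shm buffer' /var/log/squid/cache.log \
    | awk '{print $1}' | sort | uniq -c | sort -rn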
Re: [squid-users] 2.7 STABLE 9 responds very slowly or stops responding
Hello again :D

I have spent some time debugging my squid. Yep, it runs as a transparent proxy. The transparent port is different from the 'user' port, but it was not blocked in any way. I changed that and also tweaked a few more things. So far everything works.

Could the problem be caused by hardware failure? One of my SATA disks died today: it suddenly crashed and does not work anymore. Of course, this disk contained Squid's cache. Perhaps it had been quietly dying for the last few days; however, I checked all the logs I could find and didn't find any complaints about disk reads/writes.

Btw, is there any way to find out how many FDs are in use? I have a feeling that my cache does not use more than 100-200 at any time (unless something is wrong). From time to time Squid dumps a complaint to cache.log about a request being too long or something, and it shows the FD; it was never higher than 100 during normal operation. (A sketch of how to check this follows Amos's reply below.)

On Tue, 10 Aug 2010 23:45:09 +1200 Amos Jeffries squ...@treenet.co.nz wrote:

> TJM wrote:
>> Hello,
>>
>> For the last 3 days I have had weird problems with Squid 2.7 STABLE 9. I have been running that version since it was released and it worked just fine for those few months. Then suddenly users behind the proxy started to report serious slowdowns or downtimes. This is a very low-volume proxy with fewer than 30 users, set up mostly to save bandwidth during peak hours. I use it myself every day, so when the last slowdown started I was able to look at the logs almost immediately.
>>
>> When a slowdown event starts, the proxy usually almost stops responding - for example, if I open a new tab in a browser and enter a URL, it might take several minutes until the page starts loading and then another couple of minutes before it loads completely (if it does at all). It lasts for a while; sometimes it goes away without restarting the proxy, sometimes not. During the slowdown, requests do not appear in the access log until Squid handles them, which as I mentioned might take several minutes.
>>
>> Also, during the last slowdown I found weird entries in the access.log. A sample (10.0.0.4 is the cache's IP address):
>>
>> 1281379957.060 899446 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
>> 1281379957.060 899446 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
>> 1281379957.060 899445 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
>> 1281379957.060 899445 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
>> 1281379957.060 899445 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
>>
>> Also, the cache.log complains about the cache running out of file descriptors. Where should I look to find the cause of this
>
> Yes. That is to be expected if Squid is being forced to loop requests to itself for extremely long durations. (Squid holds 3+ FDs per request.)
>
>> problem? I doubt that it's the config itself, because the proxy ran fine for some 7-8 years, upgraded every time a stable version came out.
>
> The strange requests are the proxy machine sending a POST to its own public listening port. Which relays through to ... one guess.
>
> So the question is: what other POST requests are there that match that path but do not come from the proxy machine itself? It is highly likely that a client is performing these requests. Check that your via directive is turned on.
>
> Something is permitting these requests to last for over 10 minutes.
> The config needs to be corrected to catch and block them quickly. Then monitor the cache.log for loop warnings to see when it happens.
>
> If you have a transparent proxy, check that the port your firewall passes traffic to is NOT accessible to general users. Separate ports for interception and for regular access are good.
>
> Amos
> --
> Please be using
>   Current Stable Squid 2.7.STABLE9 or 3.1.6
>   Beta testers wanted for 3.2.0.1
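Two sketches for the points raised above, both illustrative rather than taken from the thread.

First, the FD question: assuming squidclient is installed and the cache manager is reachable from localhost, current file descriptor usage can be read out of mgr:info (the /proc variant assumes Linux):

  # Cache manager FD summary ("Number of file desc currently in use"):
  squidclient mgr:info | grep -i 'file desc'

  # Or count open FDs directly; pidof may return several PIDs, take the first:
  ls /proc/$(pidof squid | awk '{print $1}')/fd | wc -l

Second, the kind of squid.conf Amos describes, with 10.0.0.4 standing in for the proxy's own address from the log sample and 3129 as an example interception port ("transparent" is the 2.7 spelling of the option; 3.1+ calls it "intercept"):

  # Cut forwarding loops off immediately instead of letting them time out.
  # Place this deny before the general allow rules.
  acl to_self dst 10.0.0.4
  http_access deny to_self

  # Keep Via headers on so Squid can detect its own loops (this is the default).
  via on

  # Separate ports: 3128 for configured browsers; 3129 should be reachable
  # only through the firewall's redirect rule, never directly by users.
  http_port 3128
  http_port 3129 transparent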
[squid-users] 2.7 STABLE 9 responds very slowly or stops responding
Hello,

For the last 3 days I have had weird problems with Squid 2.7 STABLE 9. I have been running that version since it was released and it worked just fine for those few months. Then suddenly users behind the proxy started to report serious slowdowns or downtimes. This is a very low-volume proxy with fewer than 30 users, set up mostly to save bandwidth during peak hours. I use it myself every day, so when the last slowdown started I was able to look at the logs almost immediately.

When a slowdown event starts, the proxy usually almost stops responding - for example, if I open a new tab in a browser and enter a URL, it might take several minutes until the page starts loading and then another couple of minutes before it loads completely (if it does at all). It lasts for a while; sometimes it goes away without restarting the proxy, sometimes not. During the slowdown, requests do not appear in the access log until Squid handles them, which as I mentioned might take several minutes.

Also, during the last slowdown I found weird entries in the access.log. A sample (10.0.0.4 is the cache's IP address):

1281379957.060 899446 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
1281379957.060 899446 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
1281379957.060 899445 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
1281379957.060 899445 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html
1281379957.060 899445 10.0.0.4 TCP_MISS/504 227 POST http://10.0.0.4:3128/p4s - DIRECT/10.0.0.4 text/html

Also, the cache.log complains about the cache running out of file descriptors. Where should I look to find the cause of this problem? I doubt that it's the config itself, because the proxy ran fine for some 7-8 years, upgraded every time a stable version came out.
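A quick way to spot this kind of looping traffic in the sample above: in the native access.log format the third field is the client address, so requests apparently made by the proxy to itself stand out. A sketch, assuming 10.0.0.4 is the proxy's own address and the usual log path (both illustrative):

  # List recent requests whose *client* address is the proxy itself:
  awk '$3 == "10.0.0.4"' /var/log/squid/access.log | tail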