[squid-users] Squid - cache_peer
hello everyone,

I have four Squid servers with a WCCP configuration, and it is working well. I would like to improve caching performance. Right now server1 simply uses cache_peer as follows:

cache_peer server2 sibling 3128 3130 proxy-only
cache_peer server3 sibling 3128 3130 proxy-only
cache_peer server4 sibling 3128 3130 proxy-only

May I ask what the best cache_peer options for this setup would be?

Thanks
-Viswa
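For reference, a hedged sketch of commonly tuned sibling options (server names from the post; the option mix is a suggestion to experiment with, not a verified "best" set):

```
# squid.conf on server1 -- illustrative only; tune per your traffic.
# proxy-only : don't keep a local copy of objects fetched from a sibling
#              (already in use above; avoids storing the same object twice)
# weight=1   : equal preference among the three siblings
# no-digest  : skip cache-digest exchange if you rely on ICP (port 3130)
cache_peer server2 sibling 3128 3130 proxy-only weight=1 no-digest
cache_peer server3 sibling 3128 3130 proxy-only weight=1 no-digest
cache_peer server4 sibling 3128 3130 proxy-only weight=1 no-digest
```

Whether digests or ICP queries perform better depends on the hit ratio between the siblings, so it is worth measuring both ways.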
Re: [squid-users] File Descriptors
Mr Ed,

Changing the FD value in limits.conf and including "ulimit -HSn 4096" in the squid init script does not change Squid's built-in FD limit. That limit is fixed at compile time: recompile Squid, running "ulimit -HSn 4096" in the same shell before you configure and build, and it will work.

Regards,
Bal

> Hello,
>
> Got an odd problem with file descriptors I'm hoping you guys could help me
> out with.
>
> Background
>
> I'm running CentOS 5.5 and Squid 3.0.STABLE5.
> The system is configured with 4096 file descriptors with the following:
>
> /etc/security/limits.conf
> * - nofile 4096
>
> /etc/sysctl.conf
> fs.file-max = 4096
>
> Also /etc/init.d/squid has "ulimit -HSn 4096" at the start.
>
> Problem
>
> Running "ulimit -n" on the box does indeed show 4096 descriptors, but Squid
> states it is using 1024 despite the above. I noticed this because I'm
> starting to get warnings in the logs about file descriptors...
>
> Any help greatly appreciated.
>
> Thanks
>
> Ed
> --
> View this message in context:
> http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
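Bal's recipe can be sketched as a build-shell transcript (the configure options are placeholders; the key point is that the ulimit must be raised in the same shell that runs ./configure, since Squid records the limit at build time):

```shell
# Raise the hard and soft descriptor limit for this shell first:
ulimit -HSn 4096
ulimit -n     # verify the new limit: prints 4096

# Then configure and build Squid in the SAME shell
# (placeholder options -- use your own):
# ./configure --prefix=/usr/local/squid
# make && make install
```

Note also that fs.file-max in /etc/sysctl.conf is a system-wide total for all processes combined, so setting it to 4096 effectively caps the whole box at the same number as one Squid process; it normally should be much larger than the per-process nofile limit.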
Re: [squid-users] File Descriptors
I used this how-to, and it did not require a recompile:
http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

cheers
Ivan

On Tue, Jul 6, 2010 at 1:43 PM, Mellem, Dan wrote:
>
> Did you set the limit before you compiled it? The upper limit is set at
> compile time. I ran into this problem myself.
>
> -Dan
>
> -----Original Message-----
> From: Superted666 [mailto:ruckafe...@gmail.com]
> Sent: Mon 7/5/2010 3:33 PM
> To: squid-users@squid-cache.org
> Cc:
> Subject: [squid-users] File Descriptors
>
> Hello,
>
> Got an odd problem with file descriptors I'm hoping you guys could help me
> out with.
>
> Background
>
> I'm running CentOS 5.5 and Squid 3.0.STABLE5.
> The system is configured with 4096 file descriptors with the following:
>
> /etc/security/limits.conf
> * - nofile 4096
>
> /etc/sysctl.conf
> fs.file-max = 4096
>
> Also /etc/init.d/squid has "ulimit -HSn 4096" at the start.
>
> Problem
>
> Running "ulimit -n" on the box does indeed show 4096 descriptors, but Squid
> states it is using 1024 despite the above. I noticed this because I'm
> starting to get warnings in the logs about file descriptors...
>
> Any help greatly appreciated.
>
> Thanks
>
> Ed
> --
> View this message in context:
> http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
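For the archives: the linked how-to takes the no-recompile route on CentOS by raising the limit the init script hands to Squid. A hedged sketch (the file path and variable name are as described in that article, not verified here; newer Squid releases also accept a max_filedescriptors directive in squid.conf):

```
# /etc/sysconfig/squid -- read by the CentOS init script before it
# runs "ulimit" and starts Squid (per the linked how-to):
SQUID_MAXFD=4096
```

This only helps when the packaged binary was already compiled with a high enough maximum, which is why Dan's compile-time answer and this runtime answer can both be right depending on the build.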
RE: [squid-users] File Descriptors
Did you set the limit before you compiled it? The upper limit is set at compile time. I ran into this problem myself.

-Dan

-----Original Message-----
From: Superted666 [mailto:ruckafe...@gmail.com]
Sent: Mon 7/5/2010 3:33 PM
To: squid-users@squid-cache.org
Cc:
Subject: [squid-users] File Descriptors

Hello,

Got an odd problem with file descriptors I'm hoping you guys could help me out with.

Background

I'm running CentOS 5.5 and Squid 3.0.STABLE5.
The system is configured with 4096 file descriptors with the following:

/etc/security/limits.conf
* - nofile 4096

/etc/sysctl.conf
fs.file-max = 4096

Also /etc/init.d/squid has "ulimit -HSn 4096" at the start.

Problem

Running "ulimit -n" on the box does indeed show 4096 descriptors, but Squid states it is using 1024 despite the above. I noticed this because I'm starting to get warnings in the logs about file descriptors...

Any help greatly appreciated.

Thanks

Ed
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Antwort: Re: [squid-users] Memory and CPU usage squid-3.1.4
The memory leak is caused by idnsGrokReply(): the caller of rfc1035MessageUnpack() should free the memory using rfc1035MessageDestroy(). idnsGrokReply() has several changes between 3.0.x and 3.1.x, and it is obvious that not all paths call rfc1035MessageDestroy(), but I do not know this code and feel uncomfortable making a patch, so I suggest that the person who maintains this part of the code look at it.

Marcus

martin.pichlma...@continental-corporation.com wrote:

Hello list,

I just wanted to post the results with valgrind. Unfortunately the memcheck thread needs so much CPU that I could not put a high load on the Squid; at most only about 5-10 req/s.

# ./squid -v
Squid Cache: Version 3.1.3
configure options: '--prefix=/appl' '--localstate=/var' '--with-filedescriptors=16384' '--enable-storeio=aufs' '--enable-auth=ntlm,basic' '--enable-external-acl-helpers=wbinfo_group' '--enable-icap-client' --enable-ltdl-convenience

also recompiled and tried with:

# squid -v
Squid Cache: Version 3.1.3
configure options: '--prefix=/appl' '--localstate=/var' '--with-filedescriptors=16384' '--enable-storeio=aufs' '--enable-auth=ntlm,basic' '--enable-external-acl-helpers=wbinfo_group' '--enable-icap-client' '--with-valgrind-debug' 'CFLAGS=-g -O2' --enable-ltdl-convenience

I ran valgrind repeatedly with "valgrind --leak-check=yes -v squid -N &" and found:

==24141== 3,311,957 bytes in 3,784 blocks are definitely lost in loss record 26 of 27
==24141==    at 0x4A05809: malloc (vg_replace_malloc.c:149)
==24141==    by 0x5ABAA7: xmalloc (util.c:508)
==24141==    by 0x5AA35A: rfc1035MessageUnpack (rfc1035.c:433)
==24141==    by 0x4B15A7: idnsGrokReply(char const*, unsigned long) (dns_internal.cc:939)
==24141==    by 0x4B22F0: idnsRead(int, void*) (dns_internal.cc:1178)
==24141==    by 0x4AC154: comm_select (comm_epoll.cc:308)
==24141==    by 0x5455AC: CommSelectEngine::checkEvents(int) (comm.cc:2682)
==24141==    by 0x4B712D: EventLoop::checkEngine(AsyncEngine*, bool) (EventLoop.cc:51)
==24141==    by 0x4B7282: EventLoop::runOnce() (EventLoop.cc:125)
==24141==    by 0x4B7377: EventLoop::run() (EventLoop.cc:95)
==24141==    by 0x4FB36C: SquidMain(int, char**) (main.cc:1379)
==24141==    by 0x4FB975: main (main.cc:1141)

I looked a bit in the source code but didn't really find what could cause this. Sometimes DNS did not seem to lose memory, but I found this instead:

==29780== 987,870 (987,046 direct, 824 indirect) bytes in 1,321 blocks are definitely lost in loss record 27 of 28
==29780==    at 0x4A05809: malloc (vg_replace_malloc.c:149)
==29780==    by 0x5ABAA7: xmalloc (util.c:508)
==29780==    by 0x5ABBAB: xstrdup (util.c:756)
==29780==    by 0x4B3E15: errorTryLoadText(char const*, char const*, bool) (errorpage.cc:313)
==29780==    by 0x4B494F: ErrorState::BuildContent() (errorpage.cc:1007)
==29780==    by 0x4B551D: ErrorState::BuildHttpReply() (errorpage.cc:881)
==29780==    by 0x4B58E5: errorAppendEntry (errorpage.cc:432)
==29780==    by 0x51D656: store_client::callback(long, bool) (store_client.cc:164)
==29780==    by 0x51DA2F: store_client::scheduleMemRead() (store_client.cc:448)
==29780==    by 0x51E567: storeClientCopy2(StoreEntry*, store_client*) (store_client.cc:331)
==29780==    by 0x51E8D3: store_client::copy(StoreEntry*, StoreIOBuffer, void (*)(void*, StoreIOBuffer), void*) (store_client.cc:264)
==29780==    by 0x4A0D0E: clientReplyContext::doGetMoreData() (client_side_reply.cc:1675)

When running valgrind with 3.0.STABLE23 I did not find similar lost blocks, only some KB lost when initializing, but 3.1 loses some KB as well at that point. I monitored Squid 3.0.STABLE25 and 3.1.3/3.1.4 over a longer period and found out that both need more memory over time, but 3.0 eventually stops growing. 3.1 continues to grow until CPU rises to nearly 100%; then the memory consumption seems to stop.

Has someone an idea where the problem could be?

Martin

Marcus Kool wrote on 17.06.2010 16:15:09:

Martin,

Valgrind is a memory-leak detection tool. You need some developer skills to run it. If you have a test environment with low load you may want to give it a try:

- download the Squid sources
- run configure with CFLAGS="-g -O2"
- run Squid with valgrind
- wait
- kill Squid with a TERM signal and look at the valgrind log file

Valgrind uses a lot of memory for its own administration and has a lot of CPU overhead, so reduce cache_mem to a small value like 32MB. Most likely you will see many memory leaks because Squid does not free everything when it exits; this is normal. You need to look at the repeated memory leaks — the leaks that occur often — and file a bug report. Please do not post the whole valgrind output to this list.

Marcus

martin.pichlma...@continental-corporation.com wrote:

Hello,

I just wanted to report back the last tests: after the memory cache is filled to 100%, the Squid (3.1.4 or 3.1.3) still needs more memory over time when under load, about 1-2 G
[squid-users] File Descriptors
Hello,

Got an odd problem with file descriptors I'm hoping you guys could help me out with.

Background

I'm running CentOS 5.5 and Squid 3.0.STABLE5.
The system is configured with 4096 file descriptors with the following:

/etc/security/limits.conf
* - nofile 4096

/etc/sysctl.conf
fs.file-max = 4096

Also /etc/init.d/squid has "ulimit -HSn 4096" at the start.

Problem

Running "ulimit -n" on the box does indeed show 4096 descriptors, but Squid states it is using 1024 despite the above. I noticed this because I'm starting to get warnings in the logs about file descriptors...

Any help greatly appreciated.

Thanks

Ed
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
Sent from the Squid - Users mailing list archive at Nabble.com.
[squid-users] Re: Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)
Hi,

squid_kerb_auth is not required for squid_kerb_ldap to work, but you have to use -g GROUP and provide an LDAP URL, as squid_kerb_ldap won't be able to "automagically" determine the LDAP server.

Regards
Markus

"GIGO ." wrote in message news:snt134-w356b42d425f0504c922352b9...@phx.gbl...

Hi,

please, some more guidance required. Can squid_kerb_ldap be used alone, independently of calling squid_kerb_auth or any other helper? If it is a must to use both squid_kerb_auth and squid_kerb_ldap, then is it correct that we are not using the following directives?

acl auth proxy_auth REQUIRED  #used
#http_access deny !auth       # Not used
#http_access allow auth       # not used, as instead LDAP-based directives of the following form are used...

external_acl_type squid_kerb_ldap ttl=3600 negative_ttl=3600 %LOGIN /usr/sbin/squid_kerb_ldap -g GROUP@
acl ldap_group_check external squid_kerb_ldap
http_access allow ldap_group_check

thanking you & regards,
Bilal

To: squid-users@squid-cache.org
From: hua...@moeller.plus.com
Date: Thu, 1 Jul 2010 21:31:13 +0100
Subject: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)

Hi

1) 1.2.1a is just a minor patch version to 1.2.1.
2) This happens only when you use the -d debug option.
3) You can use the options -u BIND_DN -p BIND_PW -b BIND_PATH -l LDAP_URL
4) If they have different access needs then that is the only way. If they have the same access rights you can use -g inetgrl...@mailserver.v.local:inetgrl...@mailserver.v.local:inetgrl...@mailserver.v.local

Regards
Markus

----- Original Message -----
From: "GIGO ."
To: "squidsuperuser2" ; "SquidHelp"
Sent: Thursday, July 01, 2010 11:31 AM
Subject: RE: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)

Dear Markus,

Thank you so much for your help; I diagnosed the problem back to KRB5_KTNAME not being exported properly by my startup script. For completion's sake and your analysis I have appended the cache.log at the bottom.
Please, I have a few queries:

1. I am using squid_kerb_ldap version 1.2.1a as per your recommendation, which is the latest, but does the "a" in 1.2.1a mean alpha? Can I use this latest version in production, or should I switch back to 1.2.1?

2. I have just figured out that squid_kerb_ldap gets all the groups for the user in question even if the first group it finds matches. Is this the normal behaviour?

3. Is there a way to bind to a specific or multiple (chosen) LDAP servers rather than using DNS? (What is the syntax, and how?)

4. As I have different categories of users, I have defined the following directives. Is it OK to do it this way? It does not look very neat to me, and it looks like squid_kerb_ldap is being called redundantly.

----- Portion of squid.conf -----
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on

# basic auth ACL controls to make use of it (if and only if squid_kerb_ldap (authorization) is not used)
#acl auth proxy_auth REQUIRED
#http_access deny !auth
#http_access allow auth

# Groups from Mailserver Domain:
external_acl_type squid_kerb_ldap_msgroup1 ttl=3600 negative_ttl=3600 %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
external_acl_type squid_kerb_ldap_msgroup2 ttl=3600 negative_ttl=3600 %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
external_acl_type squid_kerb_ldap_msgroup3 ttl=3600 negative_ttl=3600 %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local

acl msgroup1 external squid_kerb_ldap_msgroup1
acl msgroup2 external squid_kerb_ldap_msgroup2
acl msgroup3 external squid_kerb_ldap_msgroup3

http_access deny msgroup2 msn
http_access deny msgroup3 msn
http_access deny msgroup2 ym
http_access deny msgroup3 ym

### Most restricted settings, exclusive for normal users ###
http_access deny msgroup3 Movies
http_access deny msgroup3 downloads
http_access deny msgroup3 torrentSeeds
http_access deny all
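Markus's answer 4 above (colon-separated groups for one helper) would collapse the three identical helper definitions into one. A hedged sketch with placeholder group and realm names:

```
# One helper instance checking several groups (colon-separated), per
# Markus's answer 4. GROUP1@REALM1:GROUP2@REALM2 are placeholders.
external_acl_type kerb_ldap ttl=3600 negative_ttl=3600 %LOGIN \
    /usr/libexec/squid/squid_kerb_ldap -g GROUP1@REALM1:GROUP2@REALM2
acl ldap_group external kerb_ldap
http_access allow ldap_group
http_access deny all
```

As Markus notes, this only applies when the groups share the same access rights; with different access needs per group, separate external_acl_type lines remain necessary.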
RE: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)
Hi,

please, some more guidance required. Can squid_kerb_ldap be used alone, independently of calling squid_kerb_auth or any other helper? If it is a must to use both squid_kerb_auth and squid_kerb_ldap, then is it correct that we are not using the following directives?

acl auth proxy_auth REQUIRED  #used
#http_access deny !auth       # Not used
#http_access allow auth       # not used, as instead LDAP-based directives of the following form are used...

external_acl_type squid_kerb_ldap ttl=3600 negative_ttl=3600 %LOGIN /usr/sbin/squid_kerb_ldap -g GROUP@
acl ldap_group_check external squid_kerb_ldap
http_access allow ldap_group_check

thanking you & regards,
Bilal

> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Thu, 1 Jul 2010 21:31:13 +0100
> Subject: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed
> with rc=102)
>
> Hi
>
> 1) 1.2.1a is just a minor patch version to 1.2.1.
> 2) This happens only when you use the -d debug option.
> 3) You can use the options -u BIND_DN -p BIND_PW -b BIND_PATH -l LDAP_URL
> 4) If they have different access needs then that is the only way. If they
> have the same access rights you can use -g
> inetgrl...@mailserver.v.local:inetgrl...@mailserver.v.local:inetgrl...@mailserver.v.local
>
> Regards
> Markus
>
> ----- Original Message -----
> From: "GIGO ."
> To: "squidsuperuser2" ; "SquidHelp"
> Sent: Thursday, July 01, 2010 11:31 AM
> Subject: RE: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit
> failed with rc=102)
>
> Dear Markus,
>
> Thank you so much for your help; I diagnosed the problem back to
> KRB5_KTNAME not being exported properly by my startup script. For
> completion's sake and your analysis I have appended the cache.log at the
> bottom.
>
> Please, I have a few queries:
>
> 1. I am using squid_kerb_ldap version 1.2.1a as per your recommendation,
> which is the latest, but does the "a" in 1.2.1a mean alpha? Can I use this
> latest version in production, or should I switch back to 1.2.1?
>
> 2. I have just figured out that squid_kerb_ldap gets all the groups for
> the user in question even if the first group it finds matches. Is this the
> normal behaviour?
>
> 3. Is there a way to bind to a specific or multiple (chosen) LDAP servers
> rather than using DNS? (What is the syntax, and how?)
>
> 4. As I have different categories of users, I have defined the following
> directives. Is it OK to do it this way? It does not look very neat to me,
> and it looks like squid_kerb_ldap is being called redundantly.
>
> ----- Portion of squid.conf -----
> auth_param negotiate program /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
> auth_param negotiate children 10
> auth_param negotiate keep_alive on
>
> # basic auth ACL controls to make use of it (if and only if
> # squid_kerb_ldap (authorization) is not used)
> #acl auth proxy_auth REQUIRED
> #http_access deny !auth
> #http_access allow auth
>
> # Groups from Mailserver Domain:
> external_acl_type squid_kerb_ldap_msgroup1 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
> external_acl_type squid_kerb_ldap_msgroup2 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
> external_acl_type squid_kerb_ldap_msgroup3 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
>
> acl msgroup1 external squid_kerb_ldap_msgroup1
> acl msgroup2 external squid_kerb_ldap_msgroup2
> acl msgroup3 external squid_kerb_ldap_msgroup3
>
> http_access deny msgroup2 msn
> http_access deny msgroup3 msn
> http_access deny msgroup2 ym
> http_access deny msgroup3 ym
>
> ### Most restricted settings, exclusive for normal users ###
> http_access deny msgroup3 Movies
> http_access deny msgroup3 downloads
> http_access deny msgroup3 torrentSeeds
> http_access deny all
Re: [squid-users] Squid + Apache
Mr. John,

First of all, how are you redirecting the traffic to Squid — TCS, WCCP, or as the gateway proxy?

The most probable cause: since you redirect all port-80 traffic to 3128 (the Squid port), the HTTP requests for www.a.com are also served by Squid. Either modify your firewall so that traffic whose destination is your proxy server's own IP is not redirected to 3128, or change Apache's listening port to something other than 80.

> Hello all.
>
> I have Squid with Apache on the same server.
>
> Squid is transparent, with PF forwarding all port-80 traffic to 3128.
>
> I have one domain on Apache, www.a.com, but it is not working when Squid is
> transparent:
>
> The requested URL could not be retrieved
>
> Any idea how to fix that?
>
> __ Information from ESET NOD32 Antivirus, version of virus signature
> database 5251 (20100704) __
>
> The message was checked by ESET NOD32 Antivirus.
>
> http://www.eset.com
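The firewall-side fix suggested above might look like this in pf.conf (a hedged sketch: 192.0.2.10 is a placeholder for the server's own IP, $int_if for the internal interface, and the rdr syntax shown is for older PF releases):

```
# pf.conf: don't intercept traffic addressed to the local web server...
no rdr on $int_if proto tcp from any to 192.0.2.10 port 80
# ...but redirect all other port-80 traffic into Squid:
rdr on $int_if proto tcp from any to any port 80 -> 127.0.0.1 port 3128
```

PF evaluates "no rdr" exceptions before the matching rdr rule, so requests for www.a.com reach Apache directly while everything else still passes through Squid.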
[squid-users] Squid + Apache
Hello all.

I have Squid with Apache on the same server.

Squid is transparent, with PF forwarding all port-80 traffic to 3128.

I have one domain on Apache, www.a.com, but it is not working when Squid is transparent:

The requested URL could not be retrieved

Any idea how to fix that?
[squid-users] Blocking SSL Port does not work
Hi,

I'm trying to block SSL port 443 on my Squid server, but have had no luck after several tries. My Squid server is running in transparent mode.

Thanks,
Malvin
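For context, a hedged squid.conf sketch of the usual CONNECT-based block. Note that this only affects clients configured to use the proxy explicitly; in transparent mode Squid only ever sees the intercepted port-80 traffic, so port 443 typically has to be blocked at the firewall or router instead:

```
# Deny HTTPS tunnelling through the proxy (explicit-proxy clients only):
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT SSL_ports
```

If the goal is to stop transparently routed clients from reaching 443 at all, that rule belongs on the packet filter in front of Squid, not in squid.conf.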
Re: [squid-users] Question regarding stats
Markus Meyer wrote:
> Hi all,
>
> via the Cachemgr there is the item "Near Hits" under "Median Service
> Time". AFAIK this tells me the connection times between siblings.

Should be the time of service for If-Modified-Since requests that returned a changed object. TCP_REFRESH_HIT in access.log.

> Now I'd like to get this value via SNMP but am unsure about its name.
> Can someone please tell me which SNMP value corresponds to "Near Hits"?
>
> Thanks and regards,
>
> Markus

"HTTP refresh hit service time"
http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs
1.3.6.1.4.1.3495.1.3.2.2.1.11.*

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.5
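Querying that OID requires Squid's SNMP agent to be enabled first; a hedged squid.conf sketch (port and community string are illustrative defaults, not a recommendation):

```
# Enable Squid's built-in SNMP agent so cachemgr counters are queryable:
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all
```

The value can then be fetched with a standard snmpget against port 3401, with the final component of the OID selecting the median interval (e.g. .5 for the 5-minute median, per the wiki page linked above).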
Re: [squid-users] ACL blocks, browser retries constantly
David Parks wrote:
> I have a simple ACL helper that fails whenever a user should no longer
> have access (I need a way of dynamically blocking access to the proxy
> on a per-user basis). But when the ACL fails the request, the browser
> goes into a vicious cycle of continuing to re-try the same request
> indefinitely, just hammering the proxy. Bad for the proxy, and it looks
> bad to the user (it's not clear why the browser is going crazy).
>
> Any thoughts on how I should deal with this problem?

It's not clear what your config is. Or what browser is showing the problem.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.5
Re: [squid-users] ACL blocks, browser retries constantly
From: David Parks
> But when the ACL fails the request, the browser
> goes into a vicious cycle of continuing to re-try
> the same request indefinitely

Did you set 'negative_ttl' on your external_acl_type line?

JD
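A hedged sketch showing where negative_ttl sits on an external_acl_type definition (helper path and ACL names are placeholders, not from the thread):

```
# Cache "allowed" answers for 60s and "denied" answers for 10s, so a
# retrying browser is answered from cache instead of re-running the helper:
external_acl_type user_check ttl=60 negative_ttl=10 children=5 %LOGIN /usr/local/bin/check_user
acl allowed_users external user_check
http_access allow allowed_users
http_access deny all
```

Caching the negative result protects the helper from the retry storm; it does not by itself stop the browser from retrying, which depends on what error page or status the denial returns.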
Re: [squid-users] Re: Squid tcp_outgoing_address.
On 5/07/10 7:00 PM, "Notino" wrote:
>
> Hello, thanks for your response.
>
> Yes, this is exactly what is needed: each client needs to point to a
> particular IP address and port to connect and use the proxy. Is there any
> kind of guide on this at all? Also, is it possible to still use nsa_auth
> with this method?
>
> Thanks again.

Sorry, should have been more specific. The "password" ACL is your auth ACL that would use nsa_auth. You would need to check the docs for the external authentication configuration (never used nsa before). This might help a little:

http://www.visolve.com/squid/squid30/externalsupport.php#auth_param

This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. Please notify the sender immediately by email if you have received this email by mistake and delete this email from your system. Please note that any views or opinions presented in this email are solely those of the author and do not necessarily represent those of the organisation. Finally, the recipient should check this email and any attachments for the presence of viruses. The organisation accepts no liability for any damage caused by any virus transmitted by this email.
[squid-users] Re: Squid tcp_outgoing_address.
Hello, thanks for your response.

Yes, this is exactly what is needed: each client needs to point to a particular IP address and port to connect and use the proxy. Is there any kind of guide on this at all? Also, is it possible to still use nsa_auth with this method?

Thanks again.
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-tcp-outgoing-address-tp2278077p2278163.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Squid tcp_outgoing_address.
On 5/07/10 4:07 PM, "Notino" wrote:
>
> Hello Everyone,
>
> I have set up Squid on a VPS and successfully got the main IP to work with
> nsa_auth, and all is working 100% fine. My question: I am about to purchase
> 50 IP addresses because I would like to add them on so private clients can
> use them.
>
> How would this be set up? I have looked around and heard I need to set up
> tcp_outgoing_address, so in this case would it be like so:
>
> 80.90.90.01:port
> 80.90.90.02:port
> 80.90.90.03:port
> and so on.
>
> So if a client uses 80.90.90.01:port he/she will log in with the username
> and password from nsa_auth and start browsing? Then the next client can use
> 80.90.90.02:port, and so on?
>
> Thanks.

This is for Squid 3 and may or may not be the best approach, but it might help. You could probably set up an ACL that uses "myip".

Manual: http://www.visolve.com/squid/squid30/accesscontrols.php#myip

acl myip1 myip 172.16.1.53/32

Then use tcp_outgoing_address to match it:

http_port 80.90.90.03:port
acl myip1 myip 80.90.90.03/32
tcp_outgoing_address 80.90.90.03 password myip1
http_access allow password myip1

Hope that helps. Each client would still need to point at a particular IP address that you have configured, though.
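Cleaned up and generalised, the per-IP pattern sketched above might look like this (a hedged sketch: addresses and the :3128 port are placeholders, and "password" stands for whatever proxy_auth ACL the nsa_auth setup defines):

```
# One listening IP per client, each mapped to a matching outgoing IP:
http_port 80.90.90.1:3128
http_port 80.90.90.2:3128

acl password proxy_auth REQUIRED

acl fromip1 myip 80.90.90.1/32
acl fromip2 myip 80.90.90.2/32

tcp_outgoing_address 80.90.90.1 fromip1
tcp_outgoing_address 80.90.90.2 fromip2

http_access allow password fromip1
http_access allow password fromip2
http_access deny all
```

The myip ACL matches the local address the client connected to, so whichever listening IP a client uses also becomes the outgoing address for its requests; repeating the block for each of the 50 addresses (or generating it with a script) extends the scheme.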
[squid-users] Question regarding stats
Hi all,

via the Cachemgr there is the item "Near Hits" under "Median Service Time". AFAIK this tells me the connection times between siblings.

Now I'd like to get this value via SNMP but am unsure about its name. Can someone please tell me which SNMP value corresponds to "Near Hits"?

Thanks and regards,

Markus