Re[2]: [squid-users] Digest auth trouble
Hello, Henrik. You wrote at 9/08/2007 15:55:23:

HN> On ons, 2007-08-08 at 10:34 +0500, Sergey Svyatkin wrote:
>> Hello.
>>
>> There are problems when using digest auth with a Perl script that
>> fetches user data from a PostgreSQL database. Roughly every 40
>> minutes squid core dumps. In the logs (with debug_options ALL, 9):

HN> Please get a stack trace and file a bug report.

See this:

[EMAIL PROTECTED] /usr/local/squid/cache]# gdb squid squid.core
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-marcel-freebsd"...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.
Reading symbols from /lib/libcrypt.so.3...done.
Loaded symbols for /lib/libcrypt.so.3
Reading symbols from /lib/libm.so.4...done.
Loaded symbols for /lib/libm.so.4
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x28218ecb in kill () from /lib/libc.so.6
(gdb) where
#0  0x28218ecb in kill () from /lib/libc.so.6
#1  0x28218e68 in raise () from /lib/libc.so.6
#2  0x28217b78 in abort () from /lib/libc.so.6
#3  0x281f3db8 in __assert () from /lib/libc.so.6
#4  0x080d456f in hash_remove_link (hid=0x80e29aa, hl=0x28229d80) at hash.c:277
#5  0x080d1143 in authDigestNoncePurge (nonce=0x9724b00) at digest/auth_digest.c:426
#6  0x080d213b in authenticateDigestNonceCacheCleanup (data=0x0) at digest/auth_digest.c:281
#7  0x0807e9c0 in eventRun () at event.c:148
#8  0x0809e353 in main (argc=3, argv=0xbfbfec78) at main.c:832
(gdb) quit

[EMAIL PROTECTED] /usr/local/squid/cache]# uname -a
FreeBSD proxy.svgc.ru 6.2-RELEASE FreeBSD 6.2-RELEASE #1: Tue Jun 5 12:59:59 SAMST 2007 [EMAIL PROTECTED]:/usr/src/sys/i386/compile/PROXY i386

--
WBR, Sergey Svyatkin mailto:[EMAIL PROTECTED]
Re: [squid-users] error during make
Alex Rousskov wrote:
> On Fri, 2007-08-10 at 09:39 +0700, zen wrote:
>
> I think the above command failed because I (or you) missed a space
> before the -E option:
>
> ... ".deps/MemPool.Tpo" -E -o MemPool.E MemPool.cc

My mistake:

core# g++ -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include
-Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -D_REENTRANT
-g -O2 -MT MemPool.o -MD -MP -MF ".deps/MemPool.Tpo" -E -o MemPool.E MemPool.cc
core# fgrep -3 mallopt MemPool.E
core#

TIA
Zen
Re: [squid-users] error during make
On Fri, 2007-08-10 at 09:39 +0700, zen wrote:
> Alex Rousskov wrote:
>
> > Just go ahead and cut-and-paste those commands into your shell, starting
> > from the top Squid source directory. The last command may produce some
> > output. You do not need to cut-and-paste comments (lines starting with
> > '#').
> >
> > The first command places you into Squid's lib directory. The second
> > precompiles MemPool.cc into MemPool.E using g++. The third searches for
> > mallopt in that precompiled file.
>
> core# g++ -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include
> -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -D_REENTRANT
> -g -O2 -MT MemPool.o -MD -MP -MF ".deps/MemPool.Tpo"-E -o MemPool.E
> MemPool.cc

I think the above command failed because I (or you) missed a space before
the -E option:

... ".deps/MemPool.Tpo" -E -o MemPool.E MemPool.cc

Alex.
Re: [squid-users] error during make
Alex Rousskov wrote:
> Just go ahead and cut-and-paste those commands into your shell, starting
> from the top Squid source directory. The last command may produce some
> output. You do not need to cut-and-paste comments (lines starting with
> '#').
>
> The first command places you into Squid's lib directory. The second
> precompiles MemPool.cc into MemPool.E using g++. The third searches for
> mallopt in that precompiled file.

core# g++ -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include
-Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -D_REENTRANT
-g -O2 -MT MemPool.o -MD -MP -MF ".deps/MemPool.Tpo"-E -o MemPool.E MemPool.cc
/usr/lib/crt1.o(.text+0x85): In function `_start':
: undefined reference to `main'
/var/tmp//ccu9mk8h.o(.text+0x32): In function `memPoolIterateDone(MemPoolIterator**)':
../include/SquidNew.h:50: undefined reference to `xassert'
/var/tmp//ccu9mk8h.o(.text+0x6d): In function `memPoolIterateNext(MemPoolIterator*)':
../include/splay.h:265: undefined reference to `xassert'
/var/tmp//ccu9mk8h.o(.text+0x1fe): In function `MemPools::GetInstance()':
../include/SquidNew.h:45: undefined reference to `xmalloc'
/var/tmp//ccu9mk8h.o(.text+0x35f): In function `MemImplementingAllocator::flushMeters()':
/data/source/squid-3.0.PRE6/lib/MemPool.cc:477: undefined reference to `squid_curtime'
/var/tmp//ccu9mk8h.o(.text+0x408):/data/source/squid-3.0.PRE6/lib/MemPool.cc:471: undefined reference to `squid_curtime'
/var/tmp//ccu9mk8h.o(.text+0x4b7): In function `MemPools::flushMeters()':
/data/source/squid-3.0.PRE6/lib/MemPool.cc:517: undefined reference to `squid_curtime'
/var/tmp//ccu9mk8h.o(.text+0x4f6):/data/source/squid-3.0.PRE6/lib/MemPool.cc:518: undefined reference to `squid_curtime'
/var/tmp//ccu9mk8h.o(.text+0x5b5):/data/source/squid-3.0.PRE6/lib/MemPool.cc:516: undefined reference to `squid_curtime'
/var/tmp//ccu9mk8h.o(.text+0x61e): In function `MemMalloc::allocate()':
snip---
>::insert(MemChunk*, int (*)(MemChunk* const&, MemChunk* const&))':
/data/source/squid-3.0.PRE6/lib/MemPool.cc:170: undefined reference to `xfree'
/var/tmp//ccu9mk8h.o(.gnu.linkonce.t._ZN5SplayIP8MemChunkE6insertERKS1_PFiS4_S4_E+0x4d): In function `Splay::insert(MemChunk* const&, int (*)(MemChunk* const&, MemChunk* const&))':
/data/source/squid-3.0.PRE6/lib/MemPool.cc:147: undefined reference to `xassert'
/var/tmp//ccu9mk8h.o(.gnu.linkonce.t._ZN9MemMallocD0Ev+0x11): In function `MemMalloc::~MemMalloc()':
../include/splay.h:316: undefined reference to `xfree'
core# fgrep -3 mallopt MemPool.E
fgrep: MemPool.E: No such file or directory
core#

and g++ --version

core# g++ --version
g++ (GCC) 3.4.4 [FreeBSD] 20050518

TIA
Zen
Re: [squid-users] username and password in TRANSPARENT mode
On Thu, Aug 09, 2007, Henrik Nordstrom wrote:
> On mån, 2007-08-06 at 18:26 +0800, Adrian Chadd wrote:
>
> > Look at how a browser talks directly to an origin server when presenting
> > (HTTP Basic) authentication credentials, and what a proxy ends up doing
> > with those.
>
> What about it?

It doesn't work reliably? :)

Adrian
Re: [squid-users] access.log issues with WCCP
On Thu, Aug 09, 2007, Chad Harrelson wrote:
> Hello list,
> I am running squid-2.6-STABLE6 with WCCP version 1. My problem is
> with access logging. If I configure my browser to manually point to
> my squid box, I see log data in /var/log/squid/access.log. However,
> if I do not manually configure the browser I only get TCP_DENIED 400
> messages (very few), but web browsing works, and if I sniff eth0 and
> gre1 I see web traffic. My router also reports that WCCP is working
> (with the exception of a 0.0.0.0 for my webcache ID).

So, what does your squid config look like? Specifically, the http_port
lines?

Adrian
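For context on why Adrian is asking: with WCCP-intercepted traffic, Squid 2.6 needs its http_port marked transparent so it will accept requests that arrive without a proxy-style URL; without that, intercepted requests tend to show up exactly as sparse TCP_DENIED/400 log entries. A minimal sketch (the router address is a placeholder, and these are real Squid 2.6 directive names but not a drop-in config):

```
# squid.conf (Squid 2.6) -- hypothetical values
http_port 3128 transparent   # accept WCCP/GRE-redirected (interception) traffic
wccp_router 192.0.2.1        # the router speaking WCCP version 1 to this cache
wccp_version 4               # WCCPv1 protocol version field (4 is the usual value)
```

Manually configured browsers work either way, which would explain why pointing the browser at the proxy logs normally while intercepted traffic does not.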
Re: [squid-users] Slow connection through proxy
Julian Pilfold-Bagwell wrote:
> Hi All,
>
> I have a problem with my proxy and Windows clients on certain IP ranges
> on my network. I've just upgraded my network from a single LDAP/Samba
> server running on Mandriva 2007 to a dual redundant setup with DNS, NTP
> and LDAP master/slave on two servers, with a separate PDC and BDC pair
> authenticating and providing file shares. Authentication on the network
> for users is fast as lightning.
>
> On the old network I had a Mandriva 2007 box with Squid proxying and
> NTLM auth, and this machine has been moved to the new setup. Clients are
> spread across three IP ranges, 172.20.0.*, 172.20.1.* and 172.20.2.*,
> with the 0 range being assigned static IPs and the 1 and 2 ranges
> collecting an IP from DHCPD.
>
> If I connect a client to the network, it obtains an address from the
> DHCP server along with DNS, gateway and WINS server settings, but the
> connection via Squid is slow, e.g. 30-120 seconds to obtain a page. If I
> take the settings from ipconfig and enter them manually but with an IP
> in the 172.20.0 range, it works perfectly, with pages appearing within
> 1-2 seconds.

Perhaps it's an issue with reverse DNS for the 172.20.1.0/23 subnet. Squid
may be trying to perform reverse DNS lookups on clients on that netblock
and hanging there...

> nslookup returns IPs within a second on the proxy and clients, and
> su'ing to a user account on the proxy takes a split second, suggesting
> that nss and pam_smb are authenticating OK.

If you've specified that the clients use the proxy, their access to DNS
should have little effect on surfing speed (barring client proxy
exceptions).

> On the old network, the proxy worked fine across all three IP ranges; on
> the new one it behaves as above. Is there anywhere I should be looking
> in particular for clues on this one?

Watch a network trace between a DHCP client and the proxy. Check the
access.log for how long it takes to "register" the completed request (and
how long the request took to complete). Check whether the proxy server can
perform RDNS queries on all three subnets.

> I'll be out of the office until Monday but I'll check the mail as soon
> as I can for a reply.
>
> Many thanks,
>
> Julian PB

Chris
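One Squid-side setting worth ruling out here is per-request reverse lookups of the client address. A sketch of the relevant squid.conf pieces (the directive and ACL types are real Squid 2.x options; the addresses are taken from the post, and this is illustrative rather than a recommended config):

```
# Don't resolve client IPs to hostnames for the access log;
# if this is "on" and RDNS for a client subnet is broken, every
# request can stall on the lookup.
log_fqdn off

# srcdomain / srcdom_regex ACLs also force a reverse lookup of the
# client address; plain src ACLs avoid that entirely:
acl localnet src 172.20.0.0/23 172.20.2.0/24
http_access allow localnet
```

If the slow ranges become fast with these in place, broken reverse DNS for the DHCP subnets is the likely culprit.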
Re: [squid-users] Opinions sought on best storage type for FreeBSD
On Thu, Aug 09, 2007, Michel Santos wrote:
> > the bug, I am curious what others have been using or prefer as their
> > alternative to diskd and why?
>
> diskd for sure is the fastest specially on SMP machines but there are not
> so much people sharing my opinion ...

Just supply real-world numbers showing which is faster. Remember - the
overlap between the people doing the development and the people
saving/making money using Squid is almost 0..

Adrian
Re: [squid-users] mixing ntlm and non-ntlm auth
Gavin White wrote:
> Hi,
>
> I'm running 2.6.STABLE6 on RHEL4.5, and I have NTLM authentication
> working via smb/winbind. My problem is that I have a mixed client base
> of Windows PCs, which can do NTLM, and Linux servers, which cannot. All
> the Linux servers are on their own IP network, 192.168.0.0/24, while the
> Windows PCs are in 192.168.1.0/24. I would like to use NTLM auth for the
> Windows PCs, and allow the Linux machines to use the proxy without NTLM
> authentication.
>
> I have tried various combinations of ACLs, but I always end up in a
> position where all requests succeed without authentication, or the
> Windows clients work but the Linux clients fail with '407 authfail'.
>
> My current config is:
>
> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
>
> acl ntlmauth src 192.168.1.0/24
> acl nonntlm src 192.168.0.0/24
> acl mynet src 192.168.0.0/23
> acl AuthorizedUsers proxy_auth   # with and without REQUIRED, no difference
>
> http_access allow nonntlm
> http_access allow AuthorizedUsers ntlmauth   # have also tried 'ntlmauth AuthorizedUsers'
>
> Is this possible? Can I configure squid to require NTLM auth for some
> source addresses, but not for others?

Try:

http_access deny ntlmauth !AuthorizedUsers   # Prevent the 192.168.1.0/24 netblock from surfing without authentication
http_access allow mynet                      # Allow my network to use the proxy
http_access deny all                         # Keep the riff-raff out

> Thanks,
> Gavin

Chris
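Putting Chris's rules together with Gavin's own ACL definitions, the whole access section would look roughly like this (a sketch; the addresses are the ones from the post, and ordering is the whole point, since Squid stops at the first http_access rule that matches):

```
acl ntlmauth src 192.168.1.0/24      # Windows PCs (NTLM-capable)
acl nonntlm  src 192.168.0.0/24      # Linux servers (no NTLM)
acl mynet    src 192.168.0.0/23
acl AuthorizedUsers proxy_auth REQUIRED

http_access allow nonntlm                    # Linux subnet: in without auth
http_access deny  ntlmauth !AuthorizedUsers  # Windows subnet: 407 challenge until authenticated
http_access allow mynet
http_access deny  all
```

The deny-with-!AuthorizedUsers idiom is what triggers the authentication challenge only for the NTLM subnet; putting the plain "allow nonntlm" first keeps the Linux machines from ever being challenged.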
Re: [squid-users] squid 2.6stable14 + miranda + ntlm: doesn't work
On tor, 2007-08-09 at 19:19 +0300, Maxim Britov wrote:
> On Thu, 09 Aug 2007 14:54:58 +0200 Henrik Nordstrom wrote:
>
> > On tor, 2007-08-09 at 15:04 +0300, Maxim Britov wrote:
> > > I found squid-2.6stable14 doesn't work with miranda/ntlm_auth
> > > I have 407 only.
> > >
> > > Without patchset #11529 it works fine now here.
> >
> > Odd.. that change should not break anything.
> >
> > Can you provide an ethereal packet trace of miranda use both with and
> > without the changeset?
>
> tcpdump -s0

Thanks. Please try the attached patch.

Regards
Henrik

Index: src/auth/ntlm/auth_ntlm.c
===
RCS file: /cvsroot/squid/squid/src/auth/ntlm/auth_ntlm.c,v
retrieving revision 1.40
diff -u -p -r1.40 auth_ntlm.c
--- src/auth/ntlm/auth_ntlm.c	4 Jul 2007 00:18:45 -	1.40
+++ src/auth/ntlm/auth_ntlm.c	9 Aug 2007 22:49:44 -
@@ -661,12 +661,6 @@ authenticateNTLMAuthenticateUser(auth_us
 	debug(29, 1) ("authenticateNTLMAuthenticateUser: attempt to perform authentication without a connection!\n");
 	return;
     }
-    if (!request->flags.proxy_keepalive) {
-	debug(29, 2) ("authenticateNTLMAuthenticateUser: attempt to perform authentication without a persistent connection!\n");
-	ntlm_request->auth_state = AUTHENTICATE_STATE_FAILED;
-	request->flags.must_keepalive = 1;
-	return;
-    }
     if (ntlm_request->waiting) {
 	debug(29, 1) ("authenticateNTLMAuthenticateUser: waiting for helper reply!\n");
 	return;
@@ -708,6 +702,12 @@ authenticateNTLMAuthenticateUser(auth_us
     /* we should have recieved a blob from the clien. pass it to the same
      * helper process */
     debug(29, 9) ("authenticateNTLMAuthenticateUser: auth state challenge with header %s.\n", proxy_auth);
+    if (!request->flags.proxy_keepalive) {
+	debug(29, 2) ("authenticateNTLMAuthenticateUser: attempt to perform ntlm authentication without a persistent connection!\n");
+	ntlm_request->auth_state = AUTHENTICATE_STATE_FAILED;
+	request->flags.must_keepalive = 1;
+	return;
+    }
     /* do a cache lookup here. If it matches it's a successful ntlm
      * challenge - release the helper and use the existing auth_user
      * details. */
[squid-users] access.log issues with WCCP
Hello list,

I am running squid-2.6-STABLE6 with WCCP version 1. My problem is with
access logging. If I configure my browser to manually point to my squid
box, I see log data in /var/log/squid/access.log. However, if I do not
manually configure the browser I only get TCP_DENIED 400 messages (very
few), but web browsing works, and if I sniff eth0 and gre1 I see web
traffic. My router also reports that WCCP is working (with the exception
of a 0.0.0.0 for my webcache ID).

Here's my log statement from squid.conf:

access_log /var/log/squid/access.log squid

Any thoughts? Thanks,
-- Chad
[squid-users] few questions around multiple cache_dirs
Hi. I'm in the early stages of designing and testing a config with
multiple aufs cache_dirs on squid-2.6.STABLE3 as an httpd accelerator for
a lot of content, and have a few questions based on what I've observed
thus far:

* "x-squid-internal/vary" stubs appear to be able to wind up on a
different cache_dir than the object itself. Is this a bug? Or a tradeoff
in favor of performance in the "cache_dir being available 99% of the
time" case, rather than storing the stubs on the same cache_dir so a
failure of a disk containing one or the other doesn't invalidate the
object? (Note: I'm using max-size, which may have contributed to the
splitting, as the stubs are small and the objects large.)

* How does squid determine which of several cache_dirs has an object
after a restart... is the complete url->cachefile mapping stored in
swap.state and each completely loaded into memory at startup, or are N
lookups performed, where N is the number of cache_dirs? Does an unclean
shutdown/interrupted flush to swap.state completely invalidate all
objects in a cache_dir, or does it attempt to "fsck" the objects? Also,
if the mapping is entirely in memory, is it exempt from cache_mem limits?

* Although I admittedly can't reproduce it now, I earlier saw object
files in the aufs cache_dir occasionally getting renamed (rewritten?) in
the same cache_dir, incrementing the filename by 1 on each of multiple
successive identical requests (same client). Any idea what could account
for this behavior?

thanks,
-neil
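For reference, a size-split multi-cache_dir setup of the kind described above uses the max-size option on the cache_dir line. A minimal sketch with hypothetical paths and sizes (Squid 2.6 aufs syntax is cache_dir aufs <dir> <Mbytes> <L1> <L2> [options]):

```
# objects up to 64 KB are eligible for the "small" dir;
# larger objects can only go to the "large" dir
cache_dir aufs /cache/small 10240 16 256 max-size=65536
cache_dir aufs /cache/large 81920 32 512
```

With a split like this, the tiny vary stubs satisfy the max-size constraint of the small dir while the real objects do not, which is consistent with the stub-and-object-on-different-dirs behavior observed in the first question.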
Re: [squid-users] File Descriptors causing an issue in OpenBSD
> > Odd.. are you sure you are really running the new binary, and that the
> > ulimit setting is done correctly in the start script?

#Squid startup/shutdown
if [ -z $1 ] ; then
	echo -n "Syntax is: $0 start stop"
	exit
fi
if [ $1 != start -a $1 != stop ]; then
	echo -n "Wrong command"
	exit
fi
if [ -x /usr/local/sbin/squid ]; then
	if [ $1 = 'start' ] ; then
		echo -n 'Running Squid: '; ulimit -HSn 8192; /usr/local/sbin/squid
	else
		echo -n 'Killing Squid: '; /usr/local/sbin/squid -k shutdown
	fi
else
	echo -n 'Squid not found'
fi

> What do you get when you issue the following 2 commands:
> limits

No command limit.

> and
> ulimit -n

1024

> kern.maxfiles
> kern.maxfilesperproc

I did:

sysctl -w kern.maxfiles=8192
sysctl -w kern.maxfilesperproc=8192   ---> this gives an error

Then I even changed the options in /etc/login.def:

{{
default:\
	:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin:\
	:umask=022:\
	:datasize-max=512M:\
	:datasize-cur=512M:\
	:maxproc-max=512:\
	:maxproc-cur=64:\
	:openfiles-cur=8192:\
	:stacksize-cur=4M:\
	:localcipher=blowfish,6:\
	:ypcipher=old:\
	:tc=auth-defaults:\
	:tc=auth-ftp-defaults:
}}

and

{{
daemon:\
	:ignorenologin:\
	:datasize=infinity:\
	:maxproc=infinity:\
	:openfiles-cur=8192:\
	:stacksize-cur=8M:\
	:localcipher=blowfish,8:\
	:tc=default:
}}

After doing all these changes I uninstalled squid completely, along with
all its files, then recompiled and installed it again... but it gave me
the same number of file descriptors. So now I have reduced the cache to
10 GB. I found the Squid Definitive Guide, where it says to recompile the
kernel after editing the kernel configuration file.

Squid Object Cache: Version 2.6.STABLE13
Start Time:   Thu, 09 Aug 2007 19:09:36 GMT
Current Time: Thu, 09 Aug 2007 19:11:13 GMT

Connection information for squid:
	Number of clients accessing cache:  321
	Number of HTTP requests received:   2649
	Number of ICP messages received:    0
	Number of ICP messages sent:        0
	Number of queued ICP replies:       0
	Request failure ratio:              0.00
	Average HTTP requests per minute since start: 1638.4
	Average ICP messages per minute since start:  0.0
	Select loop called: 34876 times, 2.782 ms avg

Cache information for squid:
	Request Hit Ratios:        5min: 15.1%, 60min: 15.1%
	Byte Hit Ratios:           5min: 29.4%, 60min: 29.4%
	Request Memory Hit Ratios: 5min: 9.7%,  60min: 9.7%
	Request Disk Hit Ratios:   5min: 44.4%, 60min: 44.4%
	Storage Swap size:         23806 KB
	Storage Mem size:          2516 KB
	Mean Object Size:          7.57 KB
	Requests given to unlinkd: 0

Median Service Times (seconds):    5 min    60 min
	HTTP Requests (All):       0.68577  0.68577
	Cache Misses:              1.24267  1.24267
	Cache Hits:                0.00179  0.00179
	Near Hits:                 0.68577  0.68577
	Not-Modified Replies:      0.00091  0.00091
	DNS Lookups:               0.00190  0.00190
	ICP Queries:               0.0      0.0

:(((

Preetish
Re: [squid-users] Opinions sought on best storage type for FreeBSD
Nicole wrote in the last message:
> Hello
> I run a large number of FreeBSD based servers as cache accelerators for
> large scale image serving. (amd64 and most with dual core)
>
> Each server has (3) 147G disks and 36G of the boot disk.
> Although I have some older servers that have 36G and (3) 72G disks.
>
> The older (smaller) servers seem mostly fine with FreeBSD 6.1-STABLE
> using diskd on Version 2.6.STABLE12. However the larger servers on
> 6.2-STABLE and (Version 2.6.STABLE12 and up) seem to be falling over
> themselves every so often.

Hi, could you explain better what happens?

> I assume due to the diskD bug with FreeBSD.

What bug is it you found?

> (enormous disk usage and swapfiles as compared to AUFS for instance)

If your server uses swap you are short on memory (RAM).

> I have been testing both AUFS and COSS as an alternative and both with
> mixed

AFAIK aufs works well but is slower in comparison to diskd.

> the bug, I am curious what others have been using or prefer as their
> alternative to diskd and why?

diskd for sure is the fastest, especially on SMP machines, but there are
not so many people sharing my opinion ...

Michel
...
Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
Re: [squid-users] endless growing swap.state after reboot
Henrik Nordstrom wrote in the last message:
> On tor, 2007-08-09 at 10:39 -0300, Michel Santos wrote:
>
> > hmm, what can I say else than asking you for suggestions. Just
> > thinking: you say you do not have it on your linux box, but me and
> > others are having it on freebsd, so where do we go hunting it?
>
> Start with trying to find as simple a test case as possible, not
> requiring a live populated cache..
>
> Quite likely the swap.state from an unclean shutdown triggering the
> problem is sufficient.

ok, the first is easy; the latter, you mean what, you want the file?

> May also be dependent on the number of cache_dir you have, or other
> configuration details (esp the cache_swap_state directive), but not
> sure.

good, normally I use 64 64 (up to 15G), or if the cache_dirs are bigger I
use 64 128 (up to 40G), or even 128 128 for larger ones

> > In your other reply you say unlikely a fs problem. What else can it
> > be?
>
> It does smell like there may be a Squid bug lurking here. But without
> being able to reproduce it, or it sticking out when reading the source,
> hunting it down is a bit problematic..

ok, so whatever you need I will try to help

thanks
Michel
[squid-users] Opinions sought on best storage type for FreeBSD
Hello,

I run a large number of FreeBSD based servers as cache accelerators for
large scale image serving (amd64, and most with dual core).

Each server has (3) 147G disks plus a 36G boot disk, although I have some
older servers that have 36G and (3) 72G disks.

The older (smaller) servers seem mostly fine with FreeBSD 6.1-STABLE
using diskd on Version 2.6.STABLE12. However, the larger servers on
6.2-STABLE (Version 2.6.STABLE12 and up) seem to be falling over
themselves every so often. I assume this is due to the diskd bug with
FreeBSD (enormous disk usage and swapfiles as compared to AUFS, for
instance).

I have been testing both AUFS and COSS as an alternative, both with mixed
success. As some have pointed out, it's a shame diskd is horked, since it
seemed to be nice and fast. However, since I have not heard of any
progress on fixing the bug, I am curious what others have been using or
prefer as their alternative to diskd, and why?

Thank you!
Nicole
Re: [squid-users] File Descriptors causing an issue in OpenBSD
Preetish wrote:
> Hi Everybody
>
> I have recompiled Squid the way I saw in one of the howtos. This is
> what I did:
>
> 1) I uninstalled Squid
> 2) #ulimit -HSn 8192
>    # then recompiled squid with --with-maxfd=8192
>
> Then in my squid start script I added ulimit -HSn 8192.
>
> But still it shows the same number of file descriptors:
>
> File descriptor usage for squid:
>	Maximum number of file descriptors:   1024
>	Largest file desc currently in use:   939
>	Number of file desc currently in use: 929
>	Files queued for open:                1
>	Available number of file descriptors: 94
>	Reserved number of file descriptors:  100
>	Store Disk files open:                19
>	IO loop method:                       kqueue
>
> There is something fishy about it because my cache is only 1.1G.
> Moreover, there is a file squid.core in my /etc/squid and I do not
> understand its purpose. I searched for it online but still didn't
> understand it. Is my squidclient giving me stale results? I had even
> cleaned the cache before reinstalling squid. Is there some different
> way to increase the file descriptors in OpenBSD? Kindly help.
>
> Regards
> Preetish

Hi Preetish,

On a Linux box, that should have worked right away. I assume it should
also work for BSD boxes. By the way, as Henrik mentioned, did you verify
the binary?

run /path/to/sbin/squid -v

What do you get when you issue the following 2 commands:

limits

and

ulimit -n

On your OpenBSD machine, I was wondering why your file descriptor limit
is only 1024 in the first place. On BSD systems, I think increasing the
following sysctl tunables might help in general for a busy machine:

kern.maxfiles
kern.maxfilesperproc

Set those values to say 8192 or higher and save them in either your
/boot/loader.conf or /etc/sysctl.conf in case of a reboot.

Hope it helps. Thanking you...

--
With best regards and good wishes,
Yours sincerely,
Tek Bahadur Limbu (TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np
Re: [squid-users] Squid too slow. Please Help. Urgent
Preetish wrote:
> On 8/9/07, Francesco Perillo <[EMAIL PROTECTED]> wrote:
> > Probably squidGuard can be of help
>
> I read about it and will definitely try it. Thanks.
>
> Well, now the CPU utilization is in check and the internet speed is
> better than before, though not great. I will check the link speed
> tonight again. There is still another issue about file descriptors,
> which I will mail about in a new thread.

Hi Preetish,

Good to know that your CPU utilization has gone down, your squid box is
serving more clients, and the internet speed is better. In your case, I
guess you have to trade off a few things between complex filtering and
speed.

By the way, doesn't your ISP provide you with some sort of MRTG or RRD
graphs showing how your bandwidth link is being utilized? Providing
bandwidth consumption graphs to its clients should be the responsibility
of every ISP. This could help you determine if your bandwidth is a
dedicated 4 Mbps or just burstable.

Thanking you...

> Thanks Guys
> Cheers
> Preetish \m/O\m/~

--
With best regards and good wishes,
Yours sincerely,
Tek Bahadur Limbu (TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np
Re: [squid-users] Squid Wiki down????
On tor, 2007-08-09 at 15:35 +0200, Enrico Popp wrote:
> Is the URL wiki.squid-cache.org down? I cannot reach it.

Seems down indeed. Reason unknown, but expect it to return shortly (or
when Kinkie is back).

Regards
Henrik
Re: [squid-users] endless growing swap.state after reboot
On tor, 2007-08-09 at 10:39 -0300, Michel Santos wrote:
> hmm, what can I say else than asking you for suggestions. Just thinking:
> you say you do not have it on your linux box, but me and others are
> having it on freebsd, so where do we go hunting it?

Start with trying to find as simple a test case as possible, not
requiring a live populated cache..

Quite likely the swap.state from an unclean shutdown triggering the
problem is sufficient.

May also be dependent on the number of cache_dir you have, or other
configuration details (esp the cache_swap_state directive), but not sure.

> In your other reply you say unlikely a fs problem. What else can it be?

It does smell like there may be a Squid bug lurking here. But without
being able to reproduce it, or it sticking out when reading the source,
hunting it down is a bit problematic..

Regards
Henrik
Re: [squid-users] error during make
On Thu, 2007-08-09 at 11:41 +0700, zen wrote:
> core# grep -2 MALLOPT include/autoconf.h
>
> /* Define to 1 if you have the `mallopt' function. */
> /* #undef HAVE_MALLOPT */
>
> /* Define to 1 if you have the header file. */

This matches my understanding.

> > and the output of
> >
> > cd lib/
> > # this is the compilation command from your original email except I
> > # told GCC to stop at the preprocessing step and save the results
> > # into MemPool.E
> > g++ -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include
> > -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -D_REENTRANT -g
> > -O2 -MT MemPool.o -MD -MP -MF ".deps/MemPool.Tpo" -E -o MemPool.E MemPool.cc
> > fgrep -3 mallopt MemPool.E
>
> i dont understand what these mean.. sorry

Just go ahead and cut-and-paste those commands into your shell, starting
from the top Squid source directory. The last command may produce some
output. You do not need to cut-and-paste comments (lines starting with
'#').

The first command places you into Squid's lib directory. The second
precompiles MemPool.cc into MemPool.E using g++. The third searches for
mallopt in that precompiled file.

> > and
> > g++ --version
>
> core# g++ --version
> g++ (GCC) 3.4.4 [FreeBSD] 20050518

Noted. Thank you,

Alex.
[squid-users] Squid Wiki down????
Is the URL wiki.squid-cache.org down? I cannot reach it.

Regards
Enrico
Re: [squid-users] endless growing swap.state after reboot
Henrik Nordstrom wrote in the last message:
> On ons, 2007-08-08 at 07:12 -0300, Michel Santos wrote:
> > I am coming back with this issue again since it is still persistent.
> >
> > This problem is real and easy to repeat, and destroys the complete
> > cache_dir content. The squid version is 2.6-STABLE14, and it certainly
> > happens with all 2.6 versions I have tested so far. This problem is
> > not as easy to trigger with 2.5, where it happens in a different way
> > after an unclean shutdown.
>
> And my problem is that I have not been able to reproduce the problem,
> and nothing apparent sticks out when reading the source.

hmm, what can I say else than asking you for suggestions. Just thinking:
you say you do not have it on your linux box, but me and others are
having it on freebsd, so where do we go hunting it?

In your other reply you say unlikely a fs problem. What else can it be?

Michel
Re: [squid-users] FD problem
--- Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On tor, 2007-08-09 at 05:20 -0700, squid learner wrote:
> > checking Default FD_SETSIZE value... 2048
>
> ok.
>
> > checking Maximum number of filedescriptors we can open... 1024
>
> this is most likely limited by ulimit.
>
> > I tried different ways but was unable to change the 1024 maximum
> > number. please give some help
>
> --with-maxfd=2048
>
> skips the "checking Maximum number of filedescriptors.." check
> entirely, trusting what you say.
>
> Regards
> Henrik

Thanks BIG BROTHER. It is DONE.
Re: [squid-users] FW: Allowing streaming media through NTLM Authentication
On tor, 2007-08-09 at 09:25 -0300, Mauricio Silveira wrote:
> Ok, the playback plugin or application might not support NTLM... but
> why doesn't it happen with native MS proxy implementations, ISA BTW?

Have to look at network traces to answer that, assuming you are using ISA
as an HTTP proxy and not a winsocks proxy with the firewall client..

But a guess is that ISA is HTTP/1.1, which changes things a bit..

Regards
Henrik
Re: [squid-users] FD problem
On tor, 2007-08-09 at 05:20 -0700, squid learner wrote: > checking Default FD_SETSIZE value... 2048 ok. > checking Maximum number of filedescriptors we can > open... 1024 this is most likely limited by ulimit. > I tried different ways but was unable to change the 1024 > maximum number > please give some help --with-maxfd=2048 skips the "checking Maximum number of filedescriptors.." check entirely, trusting what you say. Regards Henrik
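A minimal sketch of the rebuild Henrik is suggesting (the install prefix is an assumption; use whatever your existing build used):

```
ulimit -HSn 2048            # raise the shell's fd limit first (optional when --with-maxfd is given)
./configure --prefix=/usr/local/squid --with-maxfd=2048
make && make install
```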
Re: [squid-users] File Descriptors causing an issue in OpenBSD
On tor, 2007-08-09 at 17:00 +0530, Preetish wrote: > Hi Everybody > > I have recompiled Squid the way I saw in one of the howtos. This is what I did: > > 1) I uninstalled Squid > 2) > #ulimit -HSn 8192 > #then recompiled squid with --with-maxfd=8192 > then in my squid start script I have added ulimit -HSn 8192 Sounds right. Actually the ulimit when compiling isn't needed when you use the configure option. > But still it shows the same number of file descriptors > File descriptor usage for squid: > Maximum number of file descriptors: 1024 Odd.. are you sure you are really running the new binary, and that the ulimit setting is done correctly in the start script? To verify the binary run /path/to/sbin/squid -v > There is something fishy about it because my cache is only 1.1G, and > moreover there is a file squid.core in my /etc/squid and I do not > understand its purpose. The squid.core is a coredump from a fatal error. You can remove it. > I searched for it online but still did not > understand it. Is my squidclient giving me stale results? I had even > cleaned the cache before reinstalling squid. Is there some different > way to increase the file descriptors in OpenBSD? Kindly help. What you did should work from what I can tell. Regards Henrik
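A quick way to check both of the points Henrik raises (the binary path is an assumption; substitute your install location):

```
# 1) Confirm the running binary was built with the new limit:
/usr/local/sbin/squid -v        # look for --with-maxfd=8192 in the configure options shown

# 2) Confirm the start script raises the limit in the same shell that launches squid:
ulimit -HSn 8192
ulimit -n                       # should print 8192 before squid is started
```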
Re: [squid-users] squid 2.6stable14 + miranda + ntlm: doesn't work
On tor, 2007-08-09 at 15:04 +0300, Maxim Britov wrote: > I found squid-2.6stable14 doesn't work with miranda/ntlm_auth > I have 407 only. > > Without patchset #11529 it works fine now here. Odd.. that change should not break anything. Can you provide an ethereal packet trace of miranda use both with and without the changeset? Regards Henrik
RE: [squid-users] High CPU usage for large object
That explains it. I experimented using proxy-only to serve this big object through squid from an apache on the same server. The CPU usage of squid+apache is cut nearly in half. Perhaps a combination of squid+apache would do well here, with just a bit of complication to save this big object once squid has it so that apache can serve it too. Thanks, Khanh -Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] Sent: Thursday, August 09, 2007 6:51 AM To: NGUYEN, KHANH, ATTSI Cc: squid-users@squid-cache.org Subject: Re: [squid-users] High CPU usage for large object On tis, 2007-08-07 at 13:25 -0400, NGUYEN, KHANH, ATTSI wrote: > So perhaps the squid has some extra overhead. However, 3 times more > seems unusual 3 times more is probably quite right for disk hits. Apache uses sendfile() while Squid reads in the object from disk to write it out again on the socket. And if you are using aufs or diskd this is being bounced via threads, further adding overhead. Regards Henrik
Re: [squid-users] FW: Allowing streaming media through NTLM Authentication
The JVM part is also true; I've had problems with it for some time until I figured out the matter. Internet Banking was my headache. I had to add these within my config: acl JVM browser Java/1.4 Java/1.5 Java/1.6 http_access allow JVM all Ok, the playback plugin or application might not support NTLM... but why doesn't it happen with native MS proxy implementations, ISA BTW ? Mauricio Henrik Nordstrom wrote: On tis, 2007-08-07 at 00:54 -0300, Mauricio Silveira wrote: The former uses http as protocol, so it will ask for user/password, the latter uses mms as protocol, so it won't ask for user/password. As far as my small brain knows... it's mms that should be giving headaches, not the http one! The mms isn't even using the proxy. The problem with NTLM authentication is not the streaming media or the proxy, but the playback application or plugin. These applications quite often do not support NTLM proxy authentication, only Basic, if any proxy authentication at all. Also seen quite frequently with different Java Virtual Machine versions. Regards Henrik
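The workaround above, laid out as it would appear in squid.conf (the Java user-agent patterns are the poster's own; extend them as needed):

```
# Let Java clients (which often cannot do NTLM proxy auth) through without authentication
acl JVM browser Java/1\.4 Java/1\.5 Java/1\.6
http_access allow JVM
```

Note that the `browser` acl matches the User-Agent header with regular expressions, which is why the dots are escaped here even though the poster's unescaped version also matches in practice.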
[squid-users] FD problem
checking Default FD_SETSIZE value... 2048 checking Maximum number of filedescriptors we can open... 1024 I tried different ways but was unable to change the 1024 maximum number please give some help it must be checking Maximum number of filedescriptors we can open... 2048
[squid-users] squid 2.6stable14 + miranda + ntlm: doesn't work
I found squid-2.6stable14 doesn't work with miranda/ntlm_auth I have 407 only. Without patchset #11529 it works fine now here. -- Maxim Britov GnuPG KeyID 0x4580A6D66F3DB1FB xmpp:[EMAIL PROTECTED] Fingerprint: 4059 B5C5 8985 5A47 8F5A 8623 4580 A6D6 6F3D B1FB GnuPG-ru Team (http://lists.gnupg.org/mailman/listinfo/gnupg-ru xmpp:[EMAIL PROTECTED])
[squid-users] File Descriptors causing an issue in OpenBSD
Hi Everybody I have recompiled Squid the way I saw in one of the howtos. This is what I did: 1) I uninstalled Squid 2) #ulimit -HSn 8192 #then recompiled squid with --with-maxfd=8192 then in my squid start script I have added ulimit -HSn 8192 But still it shows the same number of file descriptors: File descriptor usage for squid: Maximum number of file descriptors: 1024 Largest file desc currently in use: 939 Number of file desc currently in use: 929 Files queued for open: 1 Available number of file descriptors: 94 Reserved number of file descriptors: 100 Store Disk files open: 19 IO loop method: kqueue There is something fishy about it because my cache is only 1.1G, and moreover there is a file squid.core in my /etc/squid and I do not understand its purpose. I searched for it online but still did not understand it. Is my squidclient giving me stale results? I had even cleaned the cache before reinstalling squid. Is there some different way to increase the file descriptors in OpenBSD? Kindly help. Regards Preetish
Re: [squid-users] Squid too slow.Please Help.Urgent
On tor, 2007-08-09 at 10:23 +0200, Francesco Perillo wrote: > Probably squidGuard can be of help squidGuard doesn't help with invalid use of acl types. squidGuard performs just as badly if you throw everything into a regex instead of using the structured acl types (dstdomain in Squid, urllist in squidGuard). Regards Henrik
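A sketch of the distinction Henrik is drawing, in squid.conf terms (the domain lists are made-up examples):

```
# Structured acl type: indexed lookup, scales well to long domain lists
acl blocked_sites dstdomain .example.com .ads.example.net

# Regex acl type: every request is matched against every pattern - slow for big lists
acl blocked_regex url_regex -i example\.com

http_access deny blocked_sites
```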
Re: [squid-users] endless growing swap.state after reboot
On ons, 2007-08-08 at 10:53 -0300, Michel Santos wrote: > still I do not know if the fs is the problem or not, because since squid is > running and compiling well on freebsd it should handle the default ufs2 without > any problem Very unlikely to be an fs problem. Regards Henrik
Re: [squid-users] endless growing swap.state after reboot
On ons, 2007-08-08 at 07:12 -0300, Michel Santos wrote: > I am coming back with this issue again since it is still persistent > > This problem is real and easy to repeat and destroys the complete > cache_dir content. The squid version is 2.6-Stable14 and certainly it is > with all 2.6 versions I tested so far. This problem is not as easy to > trigger with 2.5, where it happens in a different way after an unclean > shutdown. And my problem is that I have not been able to reproduce the problem, and nothing apparent sticks out when reading the source. Regards Henrik
Re: [squid-users] Digest auth trouble
On ons, 2007-08-08 at 10:34 +0500, Sergey Svyatkin wrote: > Hello. > > There are problems using digest auth by means of a perl script > which takes user data from a postgresql database. Roughly every > 40 minutes squid core dumps. In the logs (with > debug_options ALL, 9): Please get a stack trace and file a bug report. Regards Henrik
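A minimal recipe for the stack trace Henrik asks for (the binary and core file paths are assumptions; substitute your own install):

```
# allow core dumps in the shell that starts squid, then reproduce the crash
ulimit -c unlimited

# load the core into gdb and print the backtrace
gdb /usr/local/squid/sbin/squid /usr/local/squid/cache/squid.core
(gdb) where
(gdb) quit
```

Attach the `where` output to the bug report.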
Re: [squid-users] High CPU usage for large object
On tis, 2007-08-07 at 13:25 -0400, NGUYEN, KHANH, ATTSI wrote: > So perhaps the squid has some extra overhead. However, 3 times more > seems unusual 3 times more is probably quite right for disk hits. Apache uses sendfile() while Squid reads in the object from disk to write it out again on the socket. And if you are using aufs or diskd this is being bounced via threads, further adding overhead. Regards Henrik
Re: [squid-users] Compiling issue with 2.6-STABLE14
On tis, 2007-08-07 at 08:16 -0700, SirWING wrote: > Hi. > > I'm trying to compile Squid 2.6-STABLE14 on Linux running: > Kernel 2.4.26 > Gcc version 2.96 > Glibc 2.2.5-44 > (I know, really old versions) > > When the compiler gets to the HttpHeaderTools.c file, the following errors > occur: > > HttpHeaderTools.c: In function `strIsSubstr': > HttpHeaderTools.c:198: parse error before `const' > HttpHeaderTools.c:199: `p' undeclared (first use in this function) Bug #2023. Fixed 2007/07/21 21:15:31. Only seen with old compilers. http://www.squid-cache.org/Versions/v2/2.6/changesets/11548.patch Regards Henrik
Re: [squid-users] Multiple Upstream Proxy
On tis, 2007-08-07 at 16:24 +0300, [EMAIL PROTECTED] wrote: > I am using Squid-2.5Stable13 and everything is working smoothly. My > Squid proxy was configured to connect to the internet directly (transparent > upstream proxy), meaning I have not defined a cache_peer entry on my Squid. > Now the problem is I need to forward one site, let's say www.abc.com, to > proxy2.xyz.com and all other requests will go directly to the internet (via > transparent upstream proxy). Can you please give an idea on how to do > this? cache_peer + cache_peer_access + never_direct. Regards Henrik
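A sketch of Henrik's three-directive answer for this setup (the peer port 8080 and the acl name are assumptions; adjust to the real upstream proxy):

```
# Define the upstream proxy used for the one site
cache_peer proxy2.xyz.com parent 8080 0 no-query

# Only requests for www.abc.com may use that peer
acl abc dstdomain www.abc.com
cache_peer_access proxy2.xyz.com allow abc
cache_peer_access proxy2.xyz.com deny all

# Force those requests through the peer; everything else still goes direct
never_direct allow abc
```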
Re: [squid-users] more bluecoat proxying
On tis, 2007-08-07 at 13:37 +0100, Michael Pye wrote: > We now have another issue whereby the bluecoat will occasionally not > cache > documents because it seems to think they have already expired, and the > squid > gets every request that comes in as the bluecoat is constantly doing a > cache MISS. That's bug #7 in our bug database, for which act-as-origin is kind of a workaround. > The bluecoat should be using the Expires: and Cache-Control: headers to > cache that image for 1 hour, and not make any more requests for it until > it expires, but the requests keep coming in. Yes it should. This is a question for Bluecoat support. > Anybody seen this behaviour before ? Henrik perhaps you know of > some issue with squid again? The response from Squid looks right, assuming the Date is current. Regards Henrik
Re: [squid-users] NTLM_Auth & LDAP_Group help needed.
On tis, 2007-08-07 at 15:06 +1000, nick w wrote: > Hi Henrik, > > Could you advise why the session hangs then? Not without more details on the hang. Regards Henrik
RE: [squid-users] FW: Allowing streaming media through NTLM Authentication
On tis, 2007-08-07 at 00:54 -0300, Mauricio Silveira wrote: > The former uses http as protocol, so it will ask for user/password, the > latter uses mms as protocol, so it won't ask for user/password. > > As far as my small brain knows... it's mms that should be giving > headaches, not the http one! The mms isn't even using the proxy. The problem with NTLM authentication is not the streaming media or the proxy, but the playback application or plugin. These applications quite often do not support NTLM proxy authentication, only Basic, if any proxy authentication at all. Also seen quite frequently with different Java Virtual Machine versions. Regards Henrik
Re: [squid-users] Caching authenticated documents
On mån, 2007-08-06 at 16:24 +0200, René GARCIA wrote: > I > had to force the webserver to send the Cache-Control header on each reply. That's the correct thing to do. An HTTP/1.1 server must always respond with an HTTP/1.1 response, minus the small things HTTP/1.1 says MUST NOT be used in response to HTTP/1.0 requests (mainly transfer-encoding). Regards Henrik
Re: [squid-users] Squid too slow.Please Help.Urgent
On 8/9/07, Francesco Perillo <[EMAIL PROTECTED]> wrote: > > Probably squidGuard can be of help I read about it and will definitely try it, thanks. Well, now the CPU utilization is in check and the internet speed is better than before, though not great. I will check the link speed tonight again. There is still another issue about file descriptors which I will mail about using a new thread. Thanks Guys Cheers Preetish \m/O\m/~
Re: [squid-users] performance problem or not ?
On mån, 2007-08-06 at 10:50 +0200, Jan-Frode Myklebust wrote: > I'm running squid (squid-2.6.STABLE6-4.el5) on an old IBM x330 > server (2x 1266MHZ PIII, 1GB RAM, 2 mirrored 36GB disks for OS > and 20GB squid-spool), serving set-top-boxes' access to the > internet. > > We have a feel for the proxy maybe being slow, but can't really > pinpoint what the problem might be. First thing to check is memory usage. Make sure there is no swap activity. Regards Henrik
Re: [squid-users] username and password in TRANSPARENT mode
On mån, 2007-08-06 at 18:26 +0800, Adrian Chadd wrote: > Look at how a browser talks directly to an origin server when presenting > (HTTP Basic) authentication credentials, and what a proxy ends up doing > with those. What about it? Regards Henrik
Re: [squid-users] username and password in TRANSPARENT mode
On mån, 2007-08-06 at 16:57 +0800, Adrian Chadd wrote: > I don't know why this isn't better documented Not sure how it can be better documented. It's both in squid.conf and the FAQ, and additionally Squid emits a quite clear warning in cache.log if you try to use it. But yes, it probably could be placed better in the squid.conf comments. Currently it is under the proxy_auth acl; it should be under auth_params. > alas. No, transparent > interception doesn't function with proxy authentication. Its a shortcoming > of the HTTP RFC spec. I wouldn't say it's a shortcoming. It's a very reasonable security restriction to not allow random web servers to fish for proxy authentication credentials, and to only allow proxy authentication to known proxies. > I hear rumours about commercial products supporting > cookie-type hacks to do authentication but I've never seen it live. I have done it for Squid earlier. It requires a web server which maintains logins and tracks the cookie sessions (any cookie based server will do fine) and an external_acl helper which can query the same server to check if a cookie is valid. No modifications to Squid itself are required. But it's worth noting that cookie based authentication can never work very well. There will always be cases where the proxy either has to allow access or break communication (non-GET methods without a valid cookie). Another possibility is to abuse NTLM authentication. As NTLM is connection oriented, it kind of works to authenticate to multiple hops. I have never done this with Squid, and it would require a bit of modification to make it work. > Use WPAD+proxy.pac to autodiscover proxy services for a LAN. Yes. Regards Henrik
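A sketch of the squid.conf side of the cookie scheme Henrik describes (the helper path and acl names are assumptions; the helper itself must be written to query your login server):

```
# Pass the Cookie header to an external helper that validates the session
external_acl_type session_check ttl=60 %{Cookie} /usr/local/bin/check_session.pl
acl logged_in external session_check

# Deny unauthenticated users (pair with deny_info to bounce them to a login page)
http_access deny !logged_in
http_access allow logged_in
```

Per the external_acl helper protocol, the helper reads one request per line on stdin and answers `OK` or `ERR`.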
Re: [squid-users] forward and reverse proxy - the difference
On sön, 2007-08-05 at 16:46 +0200, Matus UHLAR - fantomas wrote: > the vhost and vport directives tell squid that connections to its local > IP/port should be automatically redirected to different host/port, so you > don't have to play with hosts/dns nor be that careful about acl's No they don't. They tell Squid how it should reconstruct the requested URL from the webserver request received. How Squid then forwards the request is defined by always/never_direct (and its relatives), cache_peer and cache_peer_access. For requests where Squid goes direct without using cache_peer, DNS and/or /etc/hosts is consulted. Regards Henrik
Re: [squid-users] Question on redirector and Squid accelerator mode
On lör, 2007-08-04 at 15:37 -0400, fulan Peng wrote: > I have set up Squid to accelerate a web site, say, yahoo.com with > another domain name, https://proxy.mydomain.com. In the web page, > there is a link, google.com; when the browser clicks this link, it will go > to http://www.google.com. What I want is for it to go to my other proxy site, > say, https://proxy.mydomain.com:. You can't do this with Squid alone. Squid can only map URLs to servers, not rewrite the content returned by those servers. In addition to Squid you'll need an ICAP processor which rewrites the returned content. Regards Henrik
Re: [squid-users] Can I block CONNECT to any IP (but allow hostnames)?
On tis, 2007-08-07 at 15:03 +, Vadim Pushkin wrote: > OK, so now I have these questions: > > 1. Which one of these regexes is the right one to use? > > acl numeric_IPs url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ > > OR > > acl numeric_IPs urlpath_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ Neither. dstdom_regex is the right acl type. > 2. The following will first allow all IPs as per acl numeric_IPs, so > long > as they are a member of allowed-CONNECT, then afterwards do a deny for > acl > numeric_IPs, which will be all other IPs? > > http_access allow CONNECT numeric_IPs allowed-CONNECT > http_access deny CONNECT numeric_IPs I would recommend to just deny the unwanted stuff here, and let the allows go down to where you normally allow stuff. http_access deny CONNECT !allowed-CONNECT numeric_IPs Regards Henrik
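Henrik's advice put together as a squid.conf fragment (the allowed-CONNECT definition is an assumption; it would normally list your permitted destinations):

```
# Match requests whose destination "host" is a raw IP address
acl numeric_IPs dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$
acl allowed-CONNECT dstdomain .example.com

# Deny CONNECT to raw IPs unless explicitly allowed; the normal allow rules follow later
http_access deny CONNECT !allowed-CONNECT numeric_IPs
```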
Re: [squid-users] Can I block CONNECT to any IP (but allow hostnames)?
On fre, 2007-08-03 at 15:18 +1000, Tim Bates wrote: > Can someone tell me if it's possible to block "CONNECT" attempts that > only specify an IP address (rather than a hostname)? acl byip dstdom_regex ^[0-9\.]*$ http_access deny CONNECT byip Regards Henrik
Re: [squid-users] What is the most data anyone has cached with squid?
On tor, 2007-08-02 at 20:20 -0400, Mark Vickers wrote: > I was thinking of building several boxes with between 10TB and 20TB of > SATA drives, for some squid caches. You will require quite a bit of memory to use all of that for cache. Rule of thumb: 10 MB of memory per GB of cache. > Has anyone used squid to cache that much data? > > Any idea what the upper limit is? The practical limit? The memory usage quite quickly puts an upper limit on the cache size. To go above about 3GB of memory for Squid (per instance) you also need a 64-bit server, as 32-bit servers can't support very big processes due to their 32-bit limitation. Regards Henrik
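Applying Henrik's rule of thumb to the sizes asked about (the 10 MB-per-GB figure is his estimate; real usage depends on the mean object size):

```shell
# Index memory needed for a 10 TB and a 20 TB cache at ~10 MB RAM per GB of cache
for tb in 10 20; do
    mb=$((tb * 1024 * 10))
    echo "${tb} TB cache -> ~${mb} MB (~$((mb / 1024)) GB) of RAM for the index"
done
```

At 20 TB that is roughly 200 GB of RAM, which is why memory, not disk, sets the practical ceiling he mentions.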
Re: [squid-users] Squid too slow.Please Help.Urgent
Probably squidGuard can be of help Francesco
Re: [squid-users] NTLM_Auth & LDAP_Group help needed.
On tor, 2007-08-09 at 08:14 +0200, Angel Mieres wrote: > If you need to use the -S option then you have to use it in a different > manner, and obviously the different manner here is the way to call the > helper, right? 8=) The difference is in what is being used as the login name. When doing an NTLM login the login name is usually domain/user, while in most other login methods the login name is just the user name. wbinfo_group understands both forms, since it is focused on being used in Windows networks. squid_ldap_group expects a plain user name as login, as its focus is LDAP networks, but it can be told via the -S option to ignore any NT domain component of the login name. Regards Henrik
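A sketch of the two helper invocations being contrasted (helper paths, the base DN, and the LDAP host are placeholders, not real values; check your squid_ldap_group version's documentation for the exact filter substitutions):

```
# wbinfo_group: accepts both DOMAIN\user and plain user login names
external_acl_type nt_group %LOGIN /usr/lib/squid/wbinfo_group.pl

# squid_ldap_group: expects plain user names; -S strips any NT domain prefix first
external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group \
    -S -b "dc=example,dc=com" -h ldap.example.com \
    -f "(&(objectclass=posixGroup)(cn=%g)(memberUid=%u))"

acl InternetUsers external ldap_group internetusers
http_access allow InternetUsers
```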