[squid-users] Issues with DNS and Transparent Proxy

2010-06-04 Thread trainier
We have a squid 3.1.1 box running transparently.  Certain hostnames for 
local web servers are not getting resolved by the name server that both 
squid and the local system point at. 

If the users point their web browser at the IP address of these web 
servers, they can access them just fine.  If I look up these hostnames on 
the system itself, they resolve just fine.  I've even explicitly told 
squid which name servers to use via the 'dns_nameservers' 
directive. 

Still, certain hostnames don't get resolved, yet others do - from the same 
zone.  Squid's logs aren't telling me much of anything.


Regards,

Tim R. Rainier
Systems Administrator II
Kalsec Inc.
http://www.kalsec.com 
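
For reference, 'dns_nameservers' takes a space-separated list of server 
addresses, and 'append_domain' is worth checking when only unqualified 
local hostnames fail, since squid's internal resolver may not apply the 
system's resolv.conf search list (the addresses and domain below are 
hypothetical):

```
# squid.conf sketch -- addresses and domain are hypothetical
dns_nameservers 192.168.1.10 192.168.1.11
# if unqualified names like "intranet" fail, try appending the local zone:
append_domain .example.com
```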





The information contained in this e-mail message may be privileged, 
confidential, and protected from disclosure, and no waiver of any privilege is 
intended. If you are not the intended recipient, any dissemination, 
distribution, or copying is strictly prohibited. If you think that you have 
received this e-mail message in error, please e-mail the sender and delete all 
copies.




[squid-users] Too large?

2008-05-23 Thread trainier
Getting the following error when attempting to access:  
http://hiring.monster.com/jobs/createtitle.aspx?:

The request or reply is too large. 
If you are making a POST or PUT request, then your request body (the thing 
you are trying to upload) is too large. If you are making a GET request, 
then the reply body (what you are trying to download) is too large. These 
limits have been established by the Internet Service Provider who operates 
this cache. Please contact them directly if you feel this is an error. 

Squid version: (squid/2.6.STABLE18)

I dug through the old archive posts and have tried the following two 
directives:

reply_body_max_size 0 (This directive is not recognized by squid 2.6 
stable 18)
request_body_max_size 0

I've played with the numbers by setting them to random ceilings.
I can't get it to budge.

Any advice or suggestions?

Thanks,

Tim Rainier
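
For anyone hitting the same error: in 2.6 the reply limit takes an ACL 
list, so a bare '0' may be rejected as unrecognized.  A sketch to try 
(verify against your release's squid.conf.default, since the syntax 
changed between versions):

```
# sketch -- 0 disables the limit in both directives
request_body_max_size 0 KB
reply_body_max_size 0 allow all
```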




Re: [squid-users] Compile errors

2008-01-23 Thread trainier
Okay, I have the latest stable source.
The exact same issue continues.

Amos Jeffries [EMAIL PROTECTED] wrote on 01/22/2008 10:21:46 PM:

  My build/configure command:
  ./configure --prefix=/services/proxy --enable-icmp --enable-snmp
  --enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl 
--disable-select
  --disable-poll --enable-epoll --enable-large-cache-files
  --disable-ident-lookups --enable-stacktraces --with-large-files
  --enable-storeio=coss,ufs,null
 
  Then I did: make && make install
 
  It failed with:
 
  fs/libcoss.a fs/libufs.a fs/libnull.a auth/libbasic.a -lcrypt
  ../snmplib/libsnmp.a -lmiscutil -lrt -lm -lnsl
  fs/libcoss.a(store_dir_coss.o): In function `storeCossDirCheckLoadAv':
  coss/store_dir_coss.c:687: undefined reference to `aioQueueSize'
  coss/store_dir_coss.c:688: undefined reference to `squidaio_magic1'
  fs/libcoss.a(store_dir_coss.o): In function `storeDirCoss_ReadStripe':
  coss/store_dir_coss.c:1227: undefined reference to `aioRead'
  fs/libcoss.a(store_dir_coss.o): In function `storeCossDirInit':
  coss/store_dir_coss.c:222: undefined reference to `aioInit'
  coss/store_dir_coss.c:223: undefined reference to `squidaio_init'
  fs/libcoss.a(store_dir_coss.o): In function `storeCossDirCallback':
  coss/store_dir_coss.c:735: undefined reference to `aioCheckCallbacks'
  fs/libcoss.a(store_io_coss.o): In function `storeCossWriteMemBuf':
  coss/store_io_coss.c:738: undefined reference to `aioWrite'
  fs/libcoss.a(store_io_coss.o): In function 
`storeCossNewPendingRelocate':
  coss/store_io_coss.c:1086: undefined reference to `aioRead'
  fs/libcoss.a(store_io_coss.o): In function `storeCossSync':
  coss/store_io_coss.c:679: undefined reference to `aioSync'
  collect2: ld returned 1 exit status
  make[3]: *** [squid] Error 1
  make[3]: Leaving directory `/usr/src/kalproxy/squid-2.6.STABLE12/src'
  make[2]: *** [all-recursive] Error 1
  make[2]: Leaving directory `/usr/src/kalproxy/squid-2.6.STABLE12/src'
  make[1]: *** [all] Error 2
  make[1]: Leaving directory `/usr/src/kalproxy/squid-2.6.STABLE12/src'
  make: *** [all-recursive] Error 1
 
  help would be appreciated as I've searched the FAQs and list archives.
 
 First thing, since you are building, get the latest stable source to 
work
 with:
   http://www.squid-cache.org/Versions/v2/2.6/
 
 Amos
 
 



Re: [squid-users] Compile errors

2008-01-23 Thread trainier
Actually, I originally tried that, although coss was first in my list. I 
assume that doesn't matter.
I took your idea and ran with it, though.  I got things to compile 
properly when I omitted the quotes.

So:  --enable-storeio=coss,ufs,aufs,null

Thanks for everyone's suggestions.

Adrian Chadd [EMAIL PROTECTED] wrote on 01/23/2008 09:43:30 AM:

 Hm, try --enable-storeio=ufs,aufs,null,coss ?
 
 
 
 Adrian
 
 On Wed, Jan 23, 2008, [EMAIL PROTECTED] wrote:
  Okay, I have the latest stable source.
  The exact same issue continues.
  
  Amos Jeffries [EMAIL PROTECTED] wrote on 01/22/2008 10:21:46 
PM:
  
My build/configure command:
./configure --prefix=/services/proxy --enable-icmp --enable-snmp
--enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl 
  --disable-select
--disable-poll --enable-epoll --enable-large-cache-files
--disable-ident-lookups --enable-stacktraces --with-large-files
--enable-storeio=coss,ufs,null
   
  Then I did: make && make install
   
It failed with:
   
fs/libcoss.a fs/libufs.a fs/libnull.a auth/libbasic.a -lcrypt
../snmplib/libsnmp.a -lmiscutil -lrt -lm -lnsl
fs/libcoss.a(store_dir_coss.o): In function 
`storeCossDirCheckLoadAv':
coss/store_dir_coss.c:687: undefined reference to `aioQueueSize'
coss/store_dir_coss.c:688: undefined reference to 
`squidaio_magic1'
fs/libcoss.a(store_dir_coss.o): In function 
`storeDirCoss_ReadStripe':
coss/store_dir_coss.c:1227: undefined reference to `aioRead'
fs/libcoss.a(store_dir_coss.o): In function `storeCossDirInit':
coss/store_dir_coss.c:222: undefined reference to `aioInit'
coss/store_dir_coss.c:223: undefined reference to `squidaio_init'
fs/libcoss.a(store_dir_coss.o): In function 
`storeCossDirCallback':
coss/store_dir_coss.c:735: undefined reference to 
`aioCheckCallbacks'
fs/libcoss.a(store_io_coss.o): In function `storeCossWriteMemBuf':
coss/store_io_coss.c:738: undefined reference to `aioWrite'
fs/libcoss.a(store_io_coss.o): In function 
  `storeCossNewPendingRelocate':
coss/store_io_coss.c:1086: undefined reference to `aioRead'
fs/libcoss.a(store_io_coss.o): In function `storeCossSync':
coss/store_io_coss.c:679: undefined reference to `aioSync'
collect2: ld returned 1 exit status
make[3]: *** [squid] Error 1
make[3]: Leaving directory 
`/usr/src/kalproxy/squid-2.6.STABLE12/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
`/usr/src/kalproxy/squid-2.6.STABLE12/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory 
`/usr/src/kalproxy/squid-2.6.STABLE12/src'
make: *** [all-recursive] Error 1
   
help would be appreciated as I've searched the FAQs and list 
archives.
   
   First thing, since you are building, get the latest stable source to 

  work
   with:
 http://www.squid-cache.org/Versions/v2/2.6/
   
   Amos
   
   
 
 -- 
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial 
 Squid Support -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



[squid-users] Compile errors

2008-01-22 Thread trainier
My build/configure command:
./configure --prefix=/services/proxy --enable-icmp --enable-snmp 
--enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl --disable-select 
--disable-poll --enable-epoll --enable-large-cache-files 
--disable-ident-lookups --enable-stacktraces --with-large-files 
--enable-storeio=coss,ufs,null

Then I did: make && make install

It failed with:

fs/libcoss.a fs/libufs.a fs/libnull.a auth/libbasic.a -lcrypt 
../snmplib/libsnmp.a -lmiscutil -lrt -lm -lnsl
fs/libcoss.a(store_dir_coss.o): In function `storeCossDirCheckLoadAv':
coss/store_dir_coss.c:687: undefined reference to `aioQueueSize'
coss/store_dir_coss.c:688: undefined reference to `squidaio_magic1'
fs/libcoss.a(store_dir_coss.o): In function `storeDirCoss_ReadStripe':
coss/store_dir_coss.c:1227: undefined reference to `aioRead'
fs/libcoss.a(store_dir_coss.o): In function `storeCossDirInit':
coss/store_dir_coss.c:222: undefined reference to `aioInit'
coss/store_dir_coss.c:223: undefined reference to `squidaio_init'
fs/libcoss.a(store_dir_coss.o): In function `storeCossDirCallback':
coss/store_dir_coss.c:735: undefined reference to `aioCheckCallbacks'
fs/libcoss.a(store_io_coss.o): In function `storeCossWriteMemBuf':
coss/store_io_coss.c:738: undefined reference to `aioWrite'
fs/libcoss.a(store_io_coss.o): In function `storeCossNewPendingRelocate':
coss/store_io_coss.c:1086: undefined reference to `aioRead'
fs/libcoss.a(store_io_coss.o): In function `storeCossSync':
coss/store_io_coss.c:679: undefined reference to `aioSync'
collect2: ld returned 1 exit status
make[3]: *** [squid] Error 1
make[3]: Leaving directory `/usr/src/kalproxy/squid-2.6.STABLE12/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/kalproxy/squid-2.6.STABLE12/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/kalproxy/squid-2.6.STABLE12/src'
make: *** [all-recursive] Error 1

help would be appreciated as I've searched the FAQs and list archives.

Tim Rainier


[squid-users] FS Modules

2008-01-11 Thread trainier
As of 2004, the COSS storage module was experimental and not intended 
for everyday use, according to Squid: The Definitive Guide (O'Reilly).

With the default UFS module enabled, we constantly run into issues when 
squid reaches its maximum storage limit.  When I say issues, I mean 
intermittent connectivity because the cache is busy expiring its oldest 
entries.  The COSS approach sounds beneficial because squid maintains its 
entire cache in a single file.  Instead of requiring an open and close 
call for every object, squid can maintain the whole cache with one file, 
and when the cache reaches its maximum size it can simply wrap around to 
the beginning of the file and overwrite from there.

The theory of this process sounds wonderful.  I wonder how COSS has 
matured since 2004?  Has it matured beyond the developmental phase?
Is there a different storage module I should consider?

I'm running 2.6STABLE12.

Thanks,

Tim Rainier
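
For context, a COSS cache_dir line in 2.6 looks roughly like the 
following; the numbers are illustrative only, not recommendations, and 
the path names a single stripe file rather than a directory tree:

```
# illustrative sketch -- tune sizes for your hardware
cache_dir coss /var/spool/squid/coss 4096 max-size=524288 block-size=512
```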


Re: [squid-users] cache_dir

2008-01-10 Thread trainier
Manoj_Rajkarnikar [EMAIL PROTECTED] wrote on 01/10/2008 12:13:15 AM:

 
 
 
 On Wed, 9 Jan 2008, [EMAIL PROTECTED] wrote:
 
  I have been asked to continue proxying connections out to the 
Internet,
  but to discontinue caching web traffic.
 
  After reading the FAQ and the config guide (2.6STABLE12) I found that:
  'cache_dir null'  Is the approach.  It's failing.
  The error is: Daemon: FATAL: Bungled squid.conf line 19: cache_dir 
null
 
  For giggles, I even tried giving cache_dir null an option of a 
directory,
  which also failed.
 
  The FAQ shows that this option is not enabled or available in the 
default
  build of squid.  I'm reading through the configure script trying to 
find
  some verbiage that might help me locate the compile option.
 
  Here are the compile-time options I'm setting for this build:
 
  ./configure --prefix=/services/proxy --enable-icmp --enable-snmp
  --enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl --enable-ssl
  --disable-select --disable-poll --enable-epoll 
-enable-large-cache-files
  --disable-ident-lookups --enable-stacktraces --with-large-files 
   && make && make install
 
 need to add configure option --enable-storeio=null in above. a snip from 

 configure help..
 
  --enable-storeio="list of modules"
                          Build support for the list of store I/O modules.
                          The default is only to build the ufs module.
                          See src/fs for a list of available modules, or
                          Programmers Guide section not yet written for
                          details on how to build your custom store module

Bah.  I never checked src/fs.  I bet 'null' is listed there.
Thanks a bunch.


Re: [squid-users] cache_dir

2008-01-10 Thread trainier
Amos Jeffries [EMAIL PROTECTED] wrote on 01/10/2008 07:53:44 AM:

 [EMAIL PROTECTED] wrote:
  I have been asked to continue proxying connections out to the 
Internet, 
  but to discontinue caching web traffic.
  
  After reading the FAQ and the config guide (2.6STABLE12) I found that: 

  'cache_dir null'  Is the approach.  It's failing. 
  The error is: Daemon: FATAL: Bungled squid.conf line 19: cache_dir 
null
 
 For 2.6 you need to add --enable-storeio=null with the other FS you want 

 to use.
 

Yeah, I figured this was the option, I just didn't know the parameter to 
hand it.  Which is probably documented in src/fs.
Thanks for clarifying.

  For giggles, I even tried giving cache_dir null an option of a 
directory, 
  which also failed.
 
 That was the right approach :-) just a tad early
 
 The config line does need a path still in 2.6, a small problem that has 
 been cleaned up for 3.1 and 2.7, 2.6stable18 when they arrive.
 
cache_dir null /tmp

Very good to know.  Thanks again, Amos.

 Amos
 
  
  The FAQ shows that this option is not enabled or available in the 
default 
  build of squid.  I'm reading through the configure script trying to 
find 
  some verbiage that might help me locate the compile option.
  
  Here are the compile-time options I'm setting for this build:
  
  ./configure --prefix=/services/proxy --enable-icmp --enable-snmp 
  --enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl --enable-ssl 
  --disable-select --disable-poll --enable-epoll 
-enable-large-cache-files 
  --disable-ident-lookups --enable-stacktraces --with-large-files 
   && make && make install
  
  Can someone please help me disable local caching?
  
  Thanks,
  
  Tim Rainier
 
 
 -- 
 Please use Squid 2.6STABLE17 or 3.0STABLE1.
 There are serious security advisories out on all earlier releases.
 



[squid-users] cache_dir

2008-01-09 Thread trainier
I have been asked to continue proxying connections out to the Internet, 
but to discontinue caching web traffic.

After reading the FAQ and the config guide (2.6STABLE12) I found that 
'cache_dir null' is the approach.  It's failing. 
The error is: Daemon: FATAL: Bungled squid.conf line 19: cache_dir null

For giggles, I even tried giving cache_dir null an option of a directory, 
which also failed.

The FAQ shows that this option is not enabled or available in the default 
build of squid.  I'm reading through the configure script trying to find 
some verbiage that might help me locate the compile option.

Here are the compile-time options I'm setting for this build:

./configure --prefix=/services/proxy --enable-icmp --enable-snmp 
--enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl --enable-ssl 
--disable-select --disable-poll --enable-epoll  -enable-large-cache-files 
--disable-ident-lookups --enable-stacktraces --with-large-files 
&& make && make install

Can someone please help me disable local caching?

Thanks,

Tim Rainier


Re: [squid-users] UTC

2007-03-16 Thread trainier
Please don't top post?  I'm not sure what you mean.

Chris Robertson [EMAIL PROTECTED] wrote on 03/15/2007 06:22:35 PM:

 [EMAIL PROTECTED] wrote:
  access.log stores the time/date stamp as: nnn.nnn where 'n' is a 
digit 
  between 0 and 9.
 
  I'd like to read timestamps in human-readable form.  :-)
 
  Like I said, there was a simple perl command to convert it.  I just 
don't 
  know where to find it.
 
  
 
 Please don't top post.
 
 Here's what you are looking for: 
 http://www.squid-cache.org/mail-archive/squid-users/200503/0690.html
 
 Chris

_ 
THIS E-MAIL is private correspondence and is intended only for the 
identified recipients. We attempt to correctly address all e-mails, but if 
for any reason you have received this message in error, please take notice 
that you should not disclose or distribute this message to any other 
person. You should immediately notify the sender and delete this message. 
If the message contains or attaches CONFIDENTIAL information, you must 
treat that information confidentially. For questions, please contact the 
sender.


[squid-users] UTC

2007-03-15 Thread trainier
I know I've had to ask this before, but I went to the FAQ and searched for 
UTC and couldn't find what I'm looking for.

Someone, quite a while back, sent me a utc.pl script to convert standard 
input from UTC to GMT.

Can someone point me to that script?  Google was frustrating because UTC 
was found in a lot of timestamps.

Thanks in advance,

Tim Rainier


Re: [squid-users] UTC

2007-03-15 Thread trainier
access.log stores the time/date stamp as: nnn.nnn where 'n' is a digit 
between 0 and 9.

I'd like to read timestamps in human-readable form.  :-)

Like I said, there was a simple perl command to convert it.  I just don't 
know where to find it.

Henrik Nordstrom [EMAIL PROTECTED] wrote on 03/15/2007 09:26:15 
AM:

 tor 2007-03-15 klockan 08:57 -0400 skrev [EMAIL PROTECTED]:
  I know I've had to ask this before, but I went to the FAQ and searched 
for 
  UTC and couldn't find what I'm looking for.
  
  Somone, quite a while back, sent me a utc.pl script to convert 
standard 
  input from UTC to GMT.
 
 UTC and GMT are two names for the exact same thing. 
 
 What exactly is it you want to convert, into what?
 
 Regards
 Henrik
 
 
 [attachment signature.asc deleted by Tim Rainier/KAL/Kalsec] 


Re: [squid-users] Problem with client browsing and squid

2006-10-05 Thread trainier
What is your cache.log showing?
And your access.log, particularly entries related to sites that don't 
return a response.

pierre [EMAIL PROTECTED] wrote on 10/05/2006 11:08:10 AM:

 Hello,
 
 I'm a newbie with squid.
 I just installed it on a FreeBSD station with 2 interfaces (one on 
 the internet, the other on the intranet LAN).
 
 On my LAN I have some client browsers.
 My clients can browse the internet, but when browsing some web pages they 
 never get a response.
 
 It's the case for example with Google and Yahoo search.
 
 Can u help me pls ???
 Pab



[squid-users] Limited site access

2006-06-09 Thread trainier
We have a situation at our facility where specific clients sit in a 
static IP address block.  These clients are considered restricted, and I 
need a way to limit them to a set of websites that I've defined. 
There are probably 20 or 30 sites.

Can I get some recommendations on how to do this most efficiently?

Much appreciated,

Tim Rainier
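
One common way to do this is an src ACL for the restricted block plus a 
dstdomain file listing the allowed sites; a sketch (the addresses, file 
path, and ACL names are hypothetical):

```
# squid.conf sketch -- addresses, path, and ACL names are hypothetical
acl restricted_clients src 192.168.50.0/24
acl allowed_sites dstdomain "/etc/squid/allowed-sites.txt"
http_access allow restricted_clients allowed_sites
http_access deny restricted_clients
```

The file holds one domain per line; a leading dot (e.g. .example.com) 
also matches subdomains.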


Re: [squid-users] Blacklisting problem, simple fix?

2006-06-09 Thread trainier
Did you try blocking: .playboy.com ?

Dave Mullen [EMAIL PROTECTED] wrote on 06/09/2006 04:09:11 PM:

 Fellow Users,
 
 I have squid running with a blacklist, but I seem to have found an issue 
with
 my config.  The blacklist lists a domain, but it's not blocking any 
subdomains
 of that domain.  Should it?  Is there an option that turns on this 
recursion
 or something?  For example:
 
 playboy.com is blocked in domains.  www.playboy.com or 
members.playboy.com are
 still reachable.  Shouldn't they be stopped as well as the playboy.com? 
Any
 thoughts?
 
 Thanks in advance,
 
 Dave Mullen
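
The one-line suggestion above is the usual fix: a dstdomain entry with a 
leading dot matches the domain and all of its subdomains, while an entry 
without the dot matches only the exact name.  A sketch (the ACL name is 
arbitrary):

```
acl blacklist dstdomain .playboy.com
http_access deny blacklist
```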



Re: [squid-users] Trying to block IM's

2006-05-22 Thread trainier
Nor will it.  Those IM applications are designed to work around firewalls 
and blocking mechanisms.  They'll even use port 80 to communicate, if they 
have to.

If you really want to block IMs (it's debatable whether doing so is truly 
worth the effort), you need to use an Intrusion Detection System like 
snort.
The snort community has already developed the definitions/signatures to 
use for blocking IMs.  There is a learning curve with setting up snort, 
but it's an incredibly sophisticated and powerful tool.

Hope this helps.

Tim Rainier

Chris Boyd [EMAIL PROTECTED] wrote on 05/22/2006 11:47:29 AM:

 I'm trying to block IM's like MSN, Yahoo..etc...etc
 I've taken acl's from this list but it doesn't seem to be working. 
 
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443 563
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 22  # ssh
 acl Safe_ports port 443 563 # https, snews
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 
 
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl Safe_ports port 4156
 acl CONNECT method CONNECT
 acl usit src 10.133.0.0/16 10.1.0.0/16
 acl ICQ url_regex -i .icq.com
 acl MSN req_mime_type ^application/x-msn-messenger$
 acl YAHOO url_regex .msg.yahoo.com
 acl CHAT url_regex -i webmessenger .webmessenger .messenger.* 
 messenger.yahoo gateway.dll messenger.msn mirc icq.com go.icq 
miranda-im.org
 acl WEBMSN url_regex -i .webmessenger.msn.com
 acl EMESS url_regex -i .e-messenger.net .webmessenger.msn.com/* 
iloveim.com
 acl TALK url_regex -i .google.com/talk talk.google.com .google.
 com/talk* .google.*/talk*
 http_access allow manager usit
 http_access deny manager
 http_access deny !Safe_ports
 http_access allow CONNECT
 http_access allow localhost
 http_access allow usit
 http_access deny MSN
 http_access deny ICQ
 http_access deny YAHOO
 http_access deny CHAT
 http_access deny WEBMSN
 http_access deny EMESS
 http_access deny TALK
 http_access deny all
 
 
 
 
 -
 This email message is intended only for the addressee(s) 
 and contains information that may be confidential and/or 
 copyrighted.  If you are not the intended recipient please 
 notify the sender by reply email and immediately delete 
 this email. Use, disclosure or reproduction of this email 
 by anyone other than the intended recipient(s) is strictly 
 prohibited. USIT has scanned this email for viruses and 
 dangerous content and believes it to be clean. However, 
 virus scanning is ultimately the responsibility of the recipient.
 -
 



Re: [squid-users] Re: Unknown error/warning in cache.log

2006-05-22 Thread trainier
It's simply telling you that the peer squid box was not compiled to 
support digest mode, but this squid box was, and you have digest mode 
enabled for the peer.
If you really need digest mode, recompile the peer squid box with digest 
support.  :-)

Tim Rainier

news [EMAIL PROTECTED] wrote on 05/22/2006 10:51:14 AM:

 Am Montag, den 22. Mai 2006 schrubte Henrik:
 
   temporary disabling (Bad Gateway) digest from localhost
  The cache_peer directive in squid.conf..
  That your Squid is build with support for cache digests, and it's peer
  doesn't support digest but you have not told this to Squid..
 
 I added the option no-digest and the message disappeared.
 
 Thank you!
 
 -- 
 ) .--.
 )#=+  '
/## | .+.   Greetings,
 ,,/###,|,,|Michael
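
The fix the thread converges on is the 'no-digest' cache_peer option; a 
sketch with a hypothetical peer:

```
# squid.conf sketch -- hostname and ports are hypothetical
cache_peer peer.example.com parent 3128 3130 no-digest
```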
 



Re: [squid-users] Squid crash root cause

2006-05-02 Thread trainier
The screen blanking issue could be related to hitting the SysReq key.

Out of curiosity, how do the following log files look:

/var/log/messages
cache.log (located in var/logs/ under the root of your squid directory)
/etc/crontab
ls -la /etc/cron.weekly

Tim

Neil A. Hillard [EMAIL PROTECTED] wrote on 05/02/2006 08:30:50 AM:

 Hi,
 
 Dwayne Hottinger wrote:
  Quoting Mitesh P Choksi [EMAIL PROTECTED]:
  
  Dear All,
 
  I have had a very unusual situation when my Squid crashes almost 
every
  7 days. However, it is not depending on day of the week, time of the
  day, etc. I have changed hardware, moved from RH7.3 to WBEL, but
  nothing works.
 
  I have not manage to pinpoint the problem. I need assistance to
  identify the root cause of this situation.
 
  I have 5 disk system at edge of the network doing NAT + Squid on 
WBEL.
  I have a satellite delay to the Internet fibre backbone.
 
  I tried vmstat output to be appended to identify the cpu/io
  utilization also SysRq but nothing helps. The console goes blank and
  does not allow anymore activity. I use ping timeouts to alert and
  hardboot the system.
 
  I need guidance in order to find out why does squid crashes so
  regularly as I can almost identify what next couple of days it is
  about to crash.
 
  Any pointers will be appreciated.
 
  Regards,
 
  Mitesh
 
 
  How big is your access.log file?  There is a limit on squid and if it 
reaches that limit it crashes.  You should probably rotate the squid log 
files.
 
 This doesn't sound like a Squid problem.  If your console is 
 unresponsive then it must be an OS / hardware issue.  Squid shouldn't be 

 able to make your server lock up.  You may be better off asking on the 
 mailing list for your OS.
 
 An oversize log file would cause Squid to terminate but wouldn't 'kill' 
 the server.
 
 Try taking a look at disabling power management if you haven't already 
 (if your server even has it).
 
 
 HTH,
 
 
 Neil.
 
 -- 
 Neil Hillard[EMAIL PROTECTED]
 Westland Helicopters Ltd.   http://www.whl.co.uk/
 
 Disclaimer: This message does not necessarily reflect the
  views of Westland Helicopters Ltd.



Re: [squid-users] proxy.pac

2006-04-18 Thread trainier
I think it's important to note that WPAD (Proxy Autodiscovery) is a hosed 
implementation in Internet Explorer.
You'll notice that few other browsers even have the functionality.

WPAD, (Automatically Detect Settings check box in IE) was established to 
either set a DNS entry for wpad.domain.com to point to the proxy server, 
or for DHCP to assign a proxy server automatically.
The short answer is that IE's implementation is flaky, as they've 
essentially ignored the RFC's suggestions.  I've been told that WPAD works 
flawlessly in IE, so long as your DNS/DHCP server is running on a 
Microsoft Windows Server. 

It's also important to note that the IETF has discontinued efforts to 
standardize WPAD.  They did so several years ago.
I hope you weren't planning on using WPAD.  My recommendation is to use an 
autoconfiguration script, then point your clients directly to the script.

Proxy Auto Config scripts are really quite simple.  Using some simple 
javascript functions, the script can be easily set up to do what you want 
to do.
The pac (we call ours wpad.dat) sits in the root of the web server, 
running on the proxy server.  The clients are then pointed to the script, 
via the following script setting in their browser:
http://proxyserver/proxy.pac

Our script reads as follows (explanation is below the script.  Take out 
the line numbers, if you plan to copy/paste the code)

1 function FindProxyForURL(url, host)
  {
2   if (isPlainHostName(host) || isInNet(host, "172.24.0.0", "255.255.0.0")
3       || isInNet(host, "172.16.0.0",  "255.255.0.0")
4       || isInNet(host, "192.168.0.0", "255.255.0.0"))
5     return "DIRECT";
6   else
7     return "PROXY wpad.kal.kalsec.com:8000; PROXY wpad.kal.kalsec.com:8080; DIRECT";
  }

1.)  Required.  The browser calls this function for every URL it 
requests, in order to determine the appropriate proxy server (if any).
2-5.)  This check tests whether the requested host is a plain 
(unqualified) hostname or is on one of our local networks.  If it is, the 
browser fetches the page directly, without going through the proxy.  We 
did this because we didn't want the proxy server to bother with 
processing requests made to local services on our network.
6.)  If the URL being requested does not exist on our network...
7.)  First try port 8000 on our proxy server (wpad.kal.kalsec.com).  If 
that request is not fulfilled, try port 8080 on our proxy server.  If that 
request is not fulfilled, access the site directly.

Netscape has put together a great tutorial for writing proxy autoconfig 
scripts and also explains several different advantages and methods to 
using them.
The tutorial is at: 
http://wp.netscape.com/eng/mozilla/2.0/relnotes/demo/proxy-live.html

Good luck.

Tim Rainier

dharam paul [EMAIL PROTECTED] wrote on 04/18/2006 09:10:01 
AM:

 Hi,
 Here is a newbie not very technical trying to
 implement a proxy pac system where if the proxy dies
 the Browser connects directly to the internet.
 Help on this is requested please.
 Regards
 
 
 
 __ 
 Yahoo! India Matrimony: Find your partner now. Go to 
http://yahoo.shaadi.com



Re: [squid-users] proxy.pac

2006-04-18 Thread trainier
Yes, the truncating problem was simple to work around.  Just copy 
proxy.pac to proxy.pa.

I take the autodiscovery comment back about not being supported in other 
browsers.

I stand by my recommendation, however, to use the configuration script, as 
opposed to autodiscovery.

Merton Campbell Crockett [EMAIL PROTECTED] wrote on 04/18/2006 
10:17:39 AM:

 On 18 Apr 2006, at 06:24 , [EMAIL PROTECTED] wrote:
 
 I think it's important to note that WPAD (Proxy Autodiscovery) is a 
hosed 
 implementation in Internet Explorer.
 
 Microsoft did introduce errors into Web Proxy Automatic Detection 
 (WPAD) when they released Windows 2000.  Some of the damage was 
 repaired with the release of Windows XP.  The former truncates the 
 URL.  The latter requires the URL to be in lowercase.
 
 You'll notice that few other browsers even have the functionality.
 
 Interesting statement.  I find that all of the browsers that I use 
 support Web Proxy Automatic Detection:  Konqueror, Safari, Mozilla, 
 Firefox, and Internet Explorer.
 
 WPAD, (Automatically Detect Settings check box in IE) was established 
to 
 either set a DNS entry for wpad.domain.com to point to the proxy 
server, 
 or for DHCP to assign a proxy server automatically.
 The short answer is that IE's implementation is flakey as they've 
 essentially ignored RFC suggestions.  I've been told that WPAD works 
 flawlessly in IE, so long as your DNS/DHCP server is running on a 
 Microsoft Windows Server.
 
 It appears to work better when using the ISC DNS and DHCP 
 implementations.  With the introduction of Active Directory, 
 Microsoft appears to have changed their implementation and the WPAD 
 functionality is only available to Windows systems.
 
 I agree with your assessment of the Internet Explorer 
 implementation.  It is fairly functional when DHCP is used to assign
 the system an IP address.  For Windows systems with static IP 
 address, it's better to manually set the autoconfiguration URL in the 
browser.
 
 Merton Campbell Crockett
 [EMAIL PROTECTED]


RE: [squid-users] proxy.pac

2006-04-18 Thread trainier
Not true at all.  The web browser tries to access the configuration 
script.  If it doesn't get to it, the request is submitted directly.
We wouldn't have been able to use the functionality otherwise.

Jason Gauthier [EMAIL PROTECTED] wrote on 04/18/2006 12:45:29 PM:

  
  Yes, the truncating problem was simple to work around.  Just 
  copy proxy.pac to proxy.pa.
  
  I take the autodiscovery comment back about not being 
  supported in other browsers.
  
  I stand by my recommendation, however, to use the 
  configuration script, as opposed to autodiscovery.
 
 My biggest disagreement in this case is mobility.  In an enterprise
 environment forcing a .pac file, or forcing any proxy server, is not a
 good idea.  As soon as a mobile user leaves your network they can no
 longer access the internet.  Most users aren't savvy enough to handle
 this.  So, in my case I am forced to use IE/WPAD trickery.
 
 And yes.. It's far from perfect!



RE: [squid-users] proxy.pac

2006-04-18 Thread trainier
Yes, it does.  It won't always find a cached version though.
In either case, it still ends up direct.



Joost de Heer [EMAIL PROTECTED] 
04/18/2006 02:42 PM
Please respond to
[EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
RE: [squid-users] proxy.pac






[EMAIL PROTECTED] wrote:
 Not true at all.  The web browser tries to access the configuration
 script.  If it doesn't get to it, the request is submitted directly.
 We wouldn't have been  able to use the functionality otherwise.

I think it uses the cached proxy.pac.

All our PACs include something like 'if
!isresolvable(some.internal.host) return DIRECT;' to check whether they're in
the internal or in an external network.
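Joost's isresolvable() trick can be sketched as follows. This is a hypothetical stand-alone version: in a real PAC file the browser supplies isResolvable(), so the stub and the host/proxy names below are assumptions for illustration only.

```javascript
// Stand-in for the PAC built-in isResolvable(); a browser provides the real one.
var internalHosts = { "intranet.example.local": true }; // assumed-resolvable names

function isResolvable(host) {
  return internalHosts[host] === true;
}

function FindProxyForURL(url, host) {
  // If a known internal name does not resolve, we are outside the internal
  // network, so go direct instead of pointing at an unreachable proxy.
  if (!isResolvable("intranet.example.local"))
    return "DIRECT";
  return "PROXY proxy.example.local:3128; DIRECT";
}
```

The effect is that a laptop that leaves the internal network stops trying to use the proxy without any browser reconfiguration.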

Joost





Re: [squid-users] Can this be done ?

2006-01-23 Thread trainier
If you use the canned lists from SquidGuard, you're good to go.

3rd party blacklists have a tendency to be illegitimate.  I found one 
person that had geocities.com in the blacklist.
I strongly disagree with that entry.

However, this does not belittle the effectiveness of redirectors.  They 
work and they're reliable.





Christoph Haas [EMAIL PROTECTED] 
01/23/2006 11:15 AM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Can this be done ?






On Monday 23 January 2006 15:36, S t i n g r a y wrote:
 i am planning to build a Linux based firewall+proxy
 server, currently i am using windows 2003 ISA 2000
 with surfcontrol webfilter, which works fine except
 for the performance point of view.
 now cause this is my first time with linux firewall i
 have chossen MNF firewall from mandiva, mandiva uses
 squid for proxy caching, now i want to know, is it
 possible to do these things with squid or one of its
 plugins ?

 1. block specific catagory related websites ?
 porn,advertiesments,sports,games etc etc ...

Rumors say that redirectors like SquidGuard can do this. But it's not 
reliable IMHO, because the quality of the URL lists is too bad to really be 
useful in production. Serious porn surfers will probably only need seconds 
to circumvent your security. And think of public anonymizing proxies. I 
could not yet reliably block such categories with free software. 
Maintaining the URL lists needs a lot of manpower. So don't expect too 
much. Many administrators are happy enough with it though.

 2. give quota to certian webcontent to everyuser, for
 example allow single to download 100MB worth .mp3 or
 .zip files ?

You would need to write your own external scripts that trace Squid's 
access.log and block access for a certain user - perhaps based on external 

ACLs doing a database lookup. But that's not trivial.

Additional problem: you can't reliably tell whether a file is ZIP or MP3 
because Squid doesn't look at the content to determine the correct MIME 
type. You could of course parse the URL with url_regex ACLs and try to 
detect such files.
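The url_regex idea above can be sketched as a squid.conf fragment; the ACL name and patterns are illustrative only, and as noted they are trivially evaded by renaming the file:

```
# Crude extension matching on the URL path (optionally with a query string).
acl media_files url_regex -i \.(mp3|zip)(\?.*)?$
http_access deny media_files
```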

 Christoph (yes, I have a real name)
-- 
Never trust a system administrator who wears a tie and suit.




Re: [squid-users] Can this be done ?

2006-01-23 Thread trainier
Blacklists are not restricted to domains (at least SquidGuard's isn't). 
Obviously that would be ineffective.
SquidGuard's regex matching works great, for one.  And its URL blocking 
is especially effective at blocking sites that keep registering new 
domains to evade bans (they're not all smart enough to change the 
directory structure of the phishing/spyware kit they got).

You are right though.  We can argue all day about why it is/isn't 
effective.

Tim



Christoph Haas [EMAIL PROTECTED] 
01/23/2006 03:40 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Can this be done ?






On Monday 23 January 2006 19:07, [EMAIL PROTECTED] wrote:
 However, this does not belittle the effectiveness of redirectors.  They
 work and they're reliable.

Redirectors in general work well. But whether blacklists are effective or 
not is surely hard to decide and more a religion than a science. But 
considering that in my country alone ~3000 new domains are registered 
every day, and some domains even contain multiple types of content 
belonging to different categories, I can't imagine how a blacklist claims 
to be even remotely effective. And consider the more inventive users in 
our organisation: such a blacklist would be no obstacle.

Cheers
 Christoph
-- 
Never trust a system administrator who wears a tie and suit.




Re: [squid-users] Redundancy

2006-01-19 Thread trainier
Okay, once again,

I mistyped the -z.  I'm not using -z, I'm using -k reconfigure.

Matus UHLAR - fantomas [EMAIL PROTECTED] wrote on 01/19/2006 04:17:31 
AM:

   I realize this isn't normal.  That's why I asked the question.  Are 
you
   using SquidGuard too?
 
 On 18.01 20:21, Mark Elsen wrote:
Yes.
  
   This happens when I update a squidguard database (should have no 
adverse
   affect on squid, but it seems to), then do the squid -z.
 
Why do you issue -z ??
 
 he misread the man page probably. And I guess this is reason why his 
squid
 stops responding for some time...
 
 -- 
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 Due to unexpected conditions Windows 2000 will be released
 in first quarter of year 1901



[squid-users] Redundancy

2006-01-18 Thread trainier
Squid Version:  squid/2.5.STABLE12 

I've configured a proxy script that my clients point to.  It reads as 
follows:

function FindProxyForURL(url, host)
{
    if (isPlainHostName(host) || isInNet(host, "172.24.0.0", "255.255.0.0")
        || isInNet(host, "192.168.0.0", "255.255.0.0"))
        return "DIRECT";
    else
        return "PROXY wpad.kal.kalsec.com:8000; PROXY wpad.kal.kalsec.com:8080; DIRECT";
}
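The replies below debate whether the browser really falls through to DIRECT when a proxy dies. A browser consumes a PAC result string like the one above by trying each entry in order; here is a small hypothetical sketch of that walk (the isUp callback and the function name are illustrative, not browser API):

```javascript
// Walk a PAC result string such as "PROXY a:8000; PROXY b:8080; DIRECT",
// returning the first usable entry. isUp is a caller-supplied reachability test.
function chooseRoute(pacResult, isUp) {
  var entries = pacResult.split(";");
  for (var i = 0; i < entries.length; i++) {
    var entry = entries[i].trim();
    if (entry === "DIRECT") return "DIRECT";   // last resort: no proxy at all
    var hostPort = entry.replace(/^PROXY\s+/, "");
    if (isUp(hostPort)) return entry;          // first live proxy wins
  }
  return "FAIL";                               // nothing in the list was reachable
}

// Example: first proxy down, second up.
var route = chooseRoute(
  "PROXY wpad.kal.kalsec.com:8000; PROXY wpad.kal.kalsec.com:8080; DIRECT",
  function (hp) { return hp === "wpad.kal.kalsec.com:8080"; });
console.log(route); // → "PROXY wpad.kal.kalsec.com:8080"
```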


I'm doing this because, when squid is "Store rebuilding", it is very slow 
about carrying out cache requests.

Squid mostly only does this when I do a squid -k reconfigure.

My first question is: why does squid have to do this every time I send it 
a reconfigure command?
My second question is: I see there's a command-line option to tell squid 
to ignore cache requests until store rebuilding is complete.

Is there a way to force this, maybe with a compilation parameter?  This 
would effectively enable my redundancy and make it completely automated. 
Otherwise, I have to know that squid is rebuilding its store file and send 
it that command-line parameter every time this happens.

Thanks,

Tim


Re: [squid-users] Redundancy

2006-01-18 Thread trainier
Mark Elsen [EMAIL PROTECTED] wrote on 01/18/2006 11:53:33 AM:

  Squid Version:  squid/2.5.STABLE12
 
  I've configured a proxy script that my clients point to.  It reads as
  follows:
 
  function FindProxyForURL(url, host)
  {
   if (isPlainHostName(host) || isInNet(host, 172.24.0.0,
  255.255.0.0)
 || isInNet(host, 192.168.0.0,
  255.255.0.0))
   return DIRECT;
   else
   return PROXY wpad.kal.kalsec.com:8000; PROXY
  wpad.kal.kalsec.com:8080; DIRECT;
  }
 
 
  I'm doing this because, when squid is Store rebuilding it is very 
slow
  about carrying out cache requests.
 
  Squid mostly only does this when I do a squid -k reconfigure.
 
  My first question is, why does squid have to do this every time I send 
it
  a reconfigure command?
 
   It doesn´t , at least not for me, and I mean upon :
 
  % squid -k reconfigure
 
 using STABLE12 (too).

I realize this isn't normal.  That's why I asked the question.  Are you 
using SquidGuard too?
This happens when I update a squidguard database (should have no adverse 
affect on squid, but it seems to), then do the squid -z.

 
  My second question is, I see there's a command-line option to tell 
squid
  to ignore cache requests until store rebuilding is complete.
 
   -F
 
 But it´s known to be somewhat broken, because SQUID still accepts
 connections at the TCP level; so you get stale connections.
 
 There is a bugzilla for that somewhere.
 
 M.



Re: [squid-users] Redundancy

2006-01-18 Thread trainier
[EMAIL PROTECTED] wrote on 01/18/2006 01:42:06 PM:

 Mark Elsen [EMAIL PROTECTED] wrote on 01/18/2006 11:53:33 AM:
 
   Squid Version:  squid/2.5.STABLE12
  
   I've configured a proxy script that my clients point to.  It reads 
as
   follows:
  
   function FindProxyForURL(url, host)
   {
if (isPlainHostName(host) || isInNet(host, 172.24.0.0,
   255.255.0.0)
  || isInNet(host, 192.168.0.0,
   255.255.0.0))
return DIRECT;
else
return PROXY wpad.kal.kalsec.com:8000; PROXY
   wpad.kal.kalsec.com:8080; DIRECT;
   }
  
  
   I'm doing this because, when squid is Store rebuilding it is very 
 slow
   about carrying out cache requests.
  
   Squid mostly only does this when I do a squid -k reconfigure.
  
   My first question is, why does squid have to do this every time I 
send 
 it
   a reconfigure command?
  
It doesn´t , at least not for me, and I mean upon :
  
   % squid -k reconfigure
  
  using STABLE12 (too).
 
 I realize this isn't normal.  That's why I asked the question.  Are you 
 using SquidGuard too?
 This happens when I update a squidguard database (should have no adverse 

 affect on squid, but it seems to), then do the squid -z.

I obviously mean squid -k, not -z.
 
  
   My second question is, I see there's a command-line option to tell 
 squid
   to ignore cache requests until store rebuilding is complete.
  
-F
  
  But it´s known to be somewhat broken, because SQUID still accepts
  connections at the TCP level; so you get stale connections.
  
  There is a bugzilla for that somewhere.
  
  M.
 



Re: [squid-users] blacklist

2006-01-17 Thread trainier
I'm surprised squid even recovered from trying to do an acl for every 
blacklist entry.

Use SquidGuard; it is very simple to use.

www.squidguard.org

Tim



Christoph Haas [EMAIL PROTECTED] 
01/17/2006 02:28 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] blacklist






On Tuesday 17 January 2006 19:01, Cristina Tanzi Tolenti wrote:
 I would like to use the blacklists that I downloaded from
 http://urlblacklist.com/, some files are very large: 10-15MB. I create
 an ACL for every blacklist
 Why SQUID don't work (or it's very very slow) with the largest
 blacklists?

I believe that redirectors like SquidGuard will be faster with such large 
lists. Which ACL type did you choose in Squid? dstdomain? Because 
url_regex will surely slow the process down a lot.
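For comparison, the dstdomain form asked about above, which Squid matches much faster than url_regex; a sketch with a hypothetical file path:

```
# /etc/squid/blacklist.domains: one domain per line, e.g. ".example.com"
acl blacklist_domains dstdomain "/etc/squid/blacklist.domains"
http_access deny blacklist_domains
```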

Also: what does "don't work" mean?

Kindly
 Christoph
-- 
Never trust a system administrator who wears a tie and suit.




Re: [squid-users] Bumping users from website

2006-01-10 Thread trainier
Look into SquidGuard or DansGuardian and: 
http://www.squid-cache.org/related-software.html

Tim

David Lynum [EMAIL PROTECTED] wrote on 01/10/2006 01:13:45 PM:

 Dear List,
 
 I've created ACL's in squid to keep my users from going to certain 
 websites during certain parts of the day.  The acl's are working just 
 fine.  But is there a way to kick those same users off of these acl 
 restricted sites if they already happen to be on the site when the acl 
 kicks in?  Let's say the the acl restricts users from visiting a 
 particular website from 4PM-6PM.  If the users are already on the site 
 at 3:59PM, once 4PM hits, so far they're still able to browse the site. 
 Of course once they leave the site they can't get back in though until 
 after 6PM.
 
 Thanks,
 
 David
 [attachment dlynum.vcf deleted by Tim Rainier/KAL/Kalsec] 


RE: [squid-users] Firefox and Squid - How to force the browser to use proxy just like with IE?

2006-01-09 Thread trainier
We use a configuration script (proxy.pac), which IE and firefox clients 
access via the autoconfiguration script setting in both browsers.

Our router is configured to deny access to the internet unless the 
requests are coming from the proxy server.

Windows also has an internal proxy function called proxycfg.  Take a look 
at that as well.

Tim



Brian Sheets [EMAIL PROTECTED] 
01/09/2006 12:14 PM

To
squid-users@squid-cache.org
cc

Subject
RE: [squid-users] Firefox and Squid - How to force the browser to use 
proxy just like with IE?






Hmm.. in my limited experience, configure your router to force all 80
and 443 traffic to go to a proxy config url.

There may be better ways, but this is the only one I could come up with

b

-Original Message-
From: Benedek Frank [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 09, 2006 9:54 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Firefox and Squid - How to force the browser to
use proxy just like with IE?

Hi

We are using Squid, and in native mode, not transparent. All users are
forced to go through the Proxy via Windows Active Directory GPO. This is
good to set people who use IE to get through Proxy, but firefox users
are browsing happily without even hitting the Proxy server once :(

Is there a way that I can force all browsers to go through Proxy?

Thanks

Ben 





Re: [squid-users] restriciting spyware

2006-01-06 Thread trainier
  hello gurus,
 
  Is it possible to stop spyware entering in to the
  network with help of squid.
 
 
  Basically not, because SQUID only deals with the http transport
 layer. You can use a virus-scanning box as a parent for
 your SQUID server.

 Use adequate anti virus protection on every PC; which has
 access to the Internet , proxy-based or not.
 
 The term virus is meant in a broadened sense here; i.e. modern
 anti-virus sellers include spyware blocking too in their product(s).
 
 M.

I do agree with Mark, in terms of using adequate malware protection 
(malware was the term decided upon to include any software that's 
malicious: spyware, adware, grayware, viruses, trojans, etc) primarily for 
this problem.

However, proxying the network's traffic is an excellent way to control 
things.  There are quite a few blacklist databases out there which you can 
plug into squid.
I personally use SquidGuard to do it.  The few occasions at which our 
proxy server is down, the users notice it because malware manages to 
affect their machines.  Unfortunately I don't feel like the databases are 
updated often enough.  This is mainly because malware is an exponential 
problem and it's incredibly hard to keep up with.

I'm putting a project team together to specifically set up a database 
which manages malware sources more efficiently.  Currently, the database 
will be built for squidguard, because I like the simplicity of SquidGuard. 
 I've heard DansGuardian is better because it's more feature-rich. 

Additionally, you can check out 
http://www.squid-cache.org/related-software.html

There's all kinds of options out there.

On a side-note, if anyone is interested in working on my source database, 
please let me know off-list.

Regards,

Tim Rainier



[squid-users] Fatal Error

2005-12-21 Thread trainier
I receive the following email from squid semi-frequently:

From: squid
To: [EMAIL PROTECTED]
Subject: The Squid Cache (version 3.0-PRE3-20050510) died.

You've encountered a fatal error in the Squid Cache version 
3.0-PRE3-20050510.
If a core file was created (possibly in the swap directory),
please execute 'gdb squid core' or 'dbx squid core', then type 'where',
and report the trace back to [EMAIL PROTECTED]

There's never any core file, and squid continues to function just fine.

What's the scoop?

Tim



RE: [squid-users] IPv6 Support

2005-12-02 Thread trainier
Caceres [EMAIL PROTECTED] wrote on 12/02/2005 09:09:08 AM:

 Hi,
 Squid work or dosen't work in IPv6?

It works.

 Mark you already test squid HTTP prxy in IPv6 enviorments??
 
 Regards,
 Paulo Ferreira
 
 ./Caceres
 -
 [EMAIL PROTECTED]
 
  Mark Elsen [EMAIL PROTECTED] wrote on 11/30/2005 01:14:43 PM:
  
Hi, I have a question for you.
   
Squid supports HTTP and FTP proxying over IPv6?
  
No.
  
  
  No?  Squid 2.5, in the least, supports http opver IPv6.
  Not sure on FTP.
  
   
I'm searching a proxy Server to perform HTTP and FTP proxy over 
IPv6
  in my
network, and I ask if Squid support IPv6 because I used it in IPv4
  networks
in one school project a few years ago.
   
If Squid doesn't support IPv6, and somebody know another Proxy 
Server
  that
supports, please reply to me the name of that application.
   
  
M.
 



Re: [squid-users] To Use winmx and emule

2005-11-30 Thread trainier
Squid should not be getting in the way of these applications, unless they 
require some sort of http transaction in order for them to work.
If the latter is the case, you should be able to configure them to access 
the web via http through a proxy server.

Are you using your proxy transparently?

Tim Rainier

sasa [EMAIL PROTECTED] wrote on 11/30/2005 01:08:27 PM:

 Hi, I have a problem with access to software like Winmx ed Emule.
 My squid.conf is:
 
 http_port 10.0.0.121:3128
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY
 acl windowsupdate dstdomain .windowsupdate.microsoft.com
 no_cache deny windowsupdate
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl Safe_ports port 80   # http
 acl CONNECT method CONNECT
 acl local_net src 10.0.0.0/255.255.255.0
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access deny to_localhost
 acl our_networks src 10.0.0.0/24
 http_access allow our_networks
 http_access allow local_net
 http_access allow localhost
 http_access deny all
 http_reply_access allow all
 
 ..and in squidguard.conf I have:
 
 destination ok {
   domainlist  ok/domains
   urllist ok/urls
 }
 
 destination ok-for-number1 {
   domainlist  ok1/domains
   urllist   ok1/urls
 }
 
 destination ad {
   domainlist  ad/domains
   urllist   ad/urls
   redirect http://10.0.0.122/
 }
 
 .. therefore this address can view all web sites with no restriction, but 
 it cannot use winmx and emule.
 How can I modify squid.conf for this problem?  Without the proxy, this 
 address (ip 10.0.0.122 is an internal client) can use winmx and emule 
 with no problems.
 Thanks.
 
 --
 Salvatore. 
 



Re: [squid-users] IPv6 Support

2005-11-30 Thread trainier
Mark Elsen [EMAIL PROTECTED] wrote on 11/30/2005 01:14:43 PM:

  Hi, I have a question for you.
 
  Squid supports HTTP and FTP proxying over IPv6?
 
  No.
 

No?  Squid 2.5, at the least, supports HTTP over IPv6.
Not sure on FTP.

 
  I'm searching a proxy Server to perform HTTP and FTP proxy over IPv6 
in my
  network, and I ask if Squid support IPv6 because I used it in IPv4 
networks
  in one school project a few years ago.
 
  If Squid doesn't support IPv6, and somebody know another Proxy Server 
that
  supports, please reply to me the name of that application.
 
 
  M.



Re: [squid-users] recommendation need for building server

2005-11-29 Thread trainier
The CPU doesn't really play all that much of a role in the performance of 
Squid.  (Obviously faster CPUs are nice, but really not important with 
squid)

Disk I/O and Memory are much more important than the speed of the cpu.
Disk size and memory size are contingent on each other.

Unfortunately, it's tough to give any suggestions without knowing any 
specifics about your environment.

Tim Rainier


Fw: [squid-users] recommendation need for building server

2005-11-29 Thread trainier
Can I suggest that if you plan to take part in email-based discussions 
that you not use a hideous mechanism like 
this to manage your email?

Using a service like UOL Antispam for personal use is fine.
Public email lists are not.  That service you're using requires the 
following in order for us to answer your questions.

1.)  Read the reply sent back from your account automatically.
2.)  Click the link inside the email  (html email?  blech)
3.)  Wait for the darn web page to load.
4.)  Attempt to read the squiggly characters so the system doesn't think 
you're trying to cheat it.
5.)  Confirm the email (that you already tried to send) to ensure the 
message is delivered.

Again, if you plan to join a discussion list and expect people to respond 
to your questions/comments, I would consider
not using a service like UOL Antispam because people will not bother.

Tim Rainier


- Forwarded by Tim Rainier/KAL/Kalsec on 11/29/2005 02:26 PM -

AntiSpam UOL [EMAIL PROTECTED] wrote on 11/29/2005 02:21:09 PM:

 ANTISPAM UOL » TIRA-TEIMA
 
 
 Olá,
 
 Você enviou uma mensagem para [EMAIL PROTECTED]
 Para que sua mensagem seja encaminhada, por favor, clique aqui

 
 
 
 Esta confirmação é necessária porque [EMAIL PROTECTED] usa o 
 Antispam UOL, um programa que elimina mensagens enviadas por robôs, 
 como pornografia, propaganda e correntes.
 
 As próximas mensagens enviadas para [EMAIL PROTECTED] não precisarão
 ser confirmadas*.
 *Caso você receba outro pedido de confirmação, por favor, peça para 
 [EMAIL PROTECTED] incluí-lo em sua lista de autorizados.

 
 Atenção! Se você não conseguir clicar no atalho acima, acesse este 
endereço:
 http://tira-teima.as.uol.com.br/challengeSender.html?
 data=Mxbw0wV15u4NbKEgEJ619uI96Mp5NjqdtsjEhgRc%
 2F5IufywPv1Do8ASspQOSQKs%2FjBA0RauSgywQ%
 
0AgthAYJKNBVzooOk9N3Ufl3tqEmbJAKvPmKbK19x8TSuZxqklFUP4UTjAnTiLXwhYBJBLKiEuAwc8%
 0AsREKJlTSgB9MMNjN4Vz9V9jtkAWylzaqvCF%2B3dOg
 
 
 
 
 Hi,
 
 You´ve just sent a message to [EMAIL PROTECTED]
 In order to confirm the sent message, please click here

 
 
 
 This confirmation is necessary because [EMAIL PROTECTED] uses 
 Antispam UOL, a service that avoids unwanted messages like 
 advertising, pornography, viruses, and spams.
 
 Other messages sent to [EMAIL PROTECTED] won't need to be confirmed*.
 *If you receive another confirmation request, please ask [EMAIL PROTECTED]
 com.br to include you in his/her authorized e-mail list.

 
 Warning! If the link doesn´t work, please copy the address below and
 paste it on your browser:
 http://tira-teima.as.uol.com.br/challengeSender.html?
 data=Mxbw0wV15u4NbKEgEJ619uI96Mp5NjqdtsjEhgRc%
 2F5IufywPv1Do8ASspQOSQKs%2FjBA0RauSgywQ%
 
0AgthAYJKNBVzooOk9N3Ufl3tqEmbJAKvPmKbK19x8TSuZxqklFUP4UTjAnTiLXwhYBJBLKiEuAwc8%
 0AsREKJlTSgB9MMNjN4Vz9V9jtkAWylzaqvCF%2B3dOg
 
 
 
 Use o AntiSpam UOL e proteja sua caixa postal


Re: Fw: [squid-users] recommendation need for building server

2005-11-29 Thread trainier
I know it looks at the FROM header and maybe this note didn't need to go 
to the whole list.

My point was that every time (s)he asks a question and someone wants to 
answer it, they'll get this
hideous response back, unless they've answered it before.


Tim Rainier



H [EMAIL PROTECTED] 
11/29/2005 05:03 PM

To
squid-users@squid-cache.org
cc

Subject
Re: Fw: [squid-users] recommendation need for building server






On Tuesday 29 November 2005 17:29, [EMAIL PROTECTED] wrote:
 Can I suggest that if you plan to take part in email-based discussions
 that you not use a hideous mechanism like
 this to manage your email?


probably only you are bothering  the list with this ...

if I am not very wrong their service replies to the sender FROM tag :)


H.




-- 


H.







This message was scanned by the e-mail system and can be considered 
safe.
Service provided by the Matik Datacenter  https://datacenter.matik.com.br




Re: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-09 Thread trainier
The "disk space is over limit" error is not saying the disk is full.  The 
cache has reached the limit that's been set in the squid.conf file.
It could be causing squid to die, but how likely is that if squid dies 6 
minutes after every hour?

My suggestion is to check and see what cron jobs are running: 
cat /etc/crontab
or (as root): crontab -l, and then crontab -u <user> -l for any other 
users that might be running cron jobs.

If there's a timely pattern to the connectivity issue, the root of the 
problem probably has something to do with a schedule for something.
Cron would be a good place to start.

On the "disk space is over limit" issue...
You really shouldn't have to tend to this.  Squid should use whatever 
replacement policy was specified at compile time (I forget which one is 
the default if none is specified) to remove old/unused cache objects in 
an effort to free up space.  However, if squid is trying to do this while 
actively handling proxy requests at the same time, squid could be running 
out of resources.  What specs do you have on this machine?  CPU/RAM/etc.
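As an aside, the replacement policy can also be chosen in squid.conf rather than only at compile time; a sketch (LRU is the default, and the heap variants require Squid built with --enable-removal-policies="heap"):

```
# Default policy; heap GDSF/LFUDA/LRU are alternatives if compiled in.
cache_replacement_policy lru
```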

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



[EMAIL PROTECTED] 
11/09/2005 09:45 AM

To
[EMAIL PROTECTED], [EMAIL PROTECTED], 
squid-users@squid-cache.org
cc

Subject
AW: [squid-users] Squid unreachable every hour and 6 minutes.






Please repeat that again.

(1) stop squid

(2) find out what cache directories squid uses, for example

   # grep cache_dir squid.conf
   cache_dir ufs  /data1/squid_cache 6000 32 512
   cache_dir ufs  /data2/squid_cache 1 32 512
   #

 In this example /data1/squid_cache and /data2/squid_cache are the 
cache dirs.

(3) Clean all cache dirs - in this example:

   cd /data1/squid_cache
   rm -f *
   cd /data2/squid_cache
   rm -f *

(4) create the cache structures again:   squid -z

(5) Start squid.
What happens?
Is squid running? ps -ef | grep squid
What does cache.log say since starting of squid?
Is squid reachable?

(6) What happens after 1 hour and 6 minutes?

Werner Rost

-Ursprüngliche Nachricht-
Von: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 9. November 2005 15:10
An: Dave Raven; squid-users@squid-cache.org
Betreff: RE: [squid-users] Squid unreachable every hour and 6 minutes.


I already tried to :
- Stop Squid, delete swap.state, restart squid
- Stop Squid, format my cache parition, squid -z, start squid
- change cache_dir ufs /cache 5000 16 256 to cache_dir ufs 
/cache 100 16 256, squid -k restart.
- reboot completely the server

But nothing worked.




-Original Message-
From: Dave Raven [mailto:[EMAIL PROTECTED] 
Sent: Mittwoch, 9. November 2005 14:58
To: Gix, Lilian (CI/OSR) *; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Try using my method posted earlier to search for core files. 
The fact that your log suddenly shows squid restarting means 
it died unexpectedly. If there is a core file it'll be squid's 
problem - if not, it's probably something else causing the problem. 

Also, you should try cleaning out your cache_dir... 
Remove everything and run squid -z to recreate it. 

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED]
Sent: 09 November 2005 03:32 PM
To: Mike Cudmore
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Great, thanks for your answer and questions :
 
1- I have a message from my browser (IE, Firefox) which says 
the proxy is unreachable. My MSN and Yahoo messengers lose their access.
2- Ping like all other services still work perfectly. (SSH, Apache,
Ping,...)
3- Cache.log part is on the previous mail. You can see that 
there is nothing special between 09:44:45 and 10:07:05 (when 
squid come back)
 
Thanks for help.
 
L.G.



From: Mike Cudmore [mailto:[EMAIL PROTECTED]
Sent: Mittwoch, 9. November 2005 14:10
To: Gix, Lilian (CI/OSR) *
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.


Lilian

I may have missed earlier entries in this thread so apologies 
if I ask you to repeat any info.
 
 
1) How do you know squid is unreachable at these times ?
 
2) try pinging the host squid is running on for a period that 
covers the time that the squid is unreachable. Does the host 
become unreachable as well?
 
3) What does cache.log say for these periods ?
 
I have more thoughts depending on the answers to these 
 
Regards
 
Mike
 

 Gix, Lilian (CI/OSR) * [EMAIL PROTECTED] 09/11/05
12:44:30 

Hello,


Really, nobody has an idea?

:(

L.G. 

-Original Message-
From: Gix, Lilian (CI/OSR) *
Sent: Mittwoch, 2. November 2005 10:26
To: squid-users@squid-cache.org
Subject: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,

I have a problem with my squid:
every hour and 6 minutes, it is unreachable for a few seconds.

Here is a 

Re: [squid-users] Large Solaris (2.8) Squid Server Advice Needed

2005-11-07 Thread trainier
 I am not sure if I am using both my hardware resources and my squid.conf 

 properly, especially with regards to: cache_dir ufs /usr/squidcache 8192 
16 
 256

In terms of cache_dir, it looks fine (assuming you're not using Veritas 
Volume Manager on the partition from which you're running your squid 
cache).
I have some issues with other portions of squid.conf, but they're noted 
below.

 Many apologies for such a long email, but I have done my best to be as 
 informative as possible.

Quite honestly, too much is better than too little; I personally prefer too much 
information. :-) 


 acl all src 192.9.65.0/255.255.255.0 192.9.64.0/255.255.255.0
 acl all src 10.90.0.0-10.95.0.0/255.255.0.0 
 172.16.0.0-172.19.0.0/255.255.0.0 192.168.0.0/255.255.0.0

No offense at all, but this is hideous.
acl all src needs to be exactly that: something that pertains to 
everything.
In fact, the default acl for all is really what should be left there, 
i.e.:

acl all src 0.0.0.0/255.255.255.255

This accounts for everything.  The idea is that you deny anything that 
matches
the all acl entry.  The deny statement goes at the very bottom of your 
ACL list.
It states: if you haven't matched any of my allow acls, you are denied 
access to my cache.

As an example, consider the following:
acl one_nine_two src 192.9.64.0/23
acl ten_ninety src 10.90.0.0/16
acl ten_ninety_five src 10.95.0.0/16
acl one_seven_two src 172.16.0.0/14
acl one_six_eight src 192.168.0.0/16
acl all src 0.0.0.0/255.255.255.255
http_access allow one_nine_two
http_access allow ten_ninety
http_access allow ten_ninety_five
http_access allow one_seven_two
http_access allow one_six_eight
http_access deny all

Concatenating all of your subnets into one acl makes for a real 
troubleshooting nightmare.
Plus, seeing http_access deny all missing from any squid config 
really makes me cringe.
I personally don't want people to be able to anonymously access my squid 
proxy (I don't care what kind of firewalls or physical security are in 
place).

Cisco routers, for example, have an implicit deny all at the bottom of 
their acls (it's not overridable either) to serve the same purpose.

The only other issue I have that's worth noting deals with my history 
and experience with Solaris.  I have multiple vendors that have written 
products that run on Solaris.  Three of them (names are not important 
here) have complained countless times about inconsistencies in how 
Solaris terminates TCP sessions.  At a mere glance at the problem, I've 
seen sockets opened for connections in Solaris stay open for the 
duration of the machine's uptime.  The symptom suggests that Solaris or 
the application is not terminating the TCP connection properly (FIN, 
FIN-ACK, etc.).  Regardless, I've seen a few vendors complain about this 
and wanted to warn you of it.

Speaking of which, has anyone on-list experienced anything like this with 
squid on Solaris?  I've a couple of SPARC machines here at work and 
wouldn't mind tinkering with squid if I found it to be worth my while.

I guess that's enough for today. :-)

Tim Rainier


Re: [squid-users] email

2005-11-01 Thread trainier
How ironic that you sent that message from an MSN account.  :-)

Basically, good luck.

I would block the standard mail ports, then use a content filter to block 
the HTML-based email sites.
But you'd have to do that manually for each and every site.  It's not 
practical and it's a lot of work.

And you'll never block it all.  Ever.

What exactly, if you don't mind me asking, would you do this for?
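
For what it's worth, the port-blocking half of that advice is plain squid ACL work; it's the webmail list that never ends. A sketch (squid.conf fragment; the domain list is purely illustrative):

```
# Refuse requests and CONNECT tunnels to the standard mail ports
acl mail_ports port 25 110 143
http_access deny mail_ports

# Refuse known webmail sites by destination domain (never complete)
acl webmail dstdomain .hotmail.com .mail.yahoo.com
http_access deny webmail
```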

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



azeem ahmad [EMAIL PROTECTED] 
11/01/2005 08:41 AM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] email






hi list
i want to block my users from any kind of email. is it possible? because if i 
block smtp/pop3 then there are so many web-based mail servers like 
msn/yahoo, and if i block http then it means i have blocked everything on 
my 
network. i'd like to block every kind of email, please tell me how i can do 
that.
Regards
Azeem

_
Don't just search. Find. Check out the new MSN Search! 
http://search.msn.com/





RE: [squid-users] secure web sites wont show on my clients

2005-10-31 Thread trainier
 I have a brand new Gentoo Linux install set up with the following:
 
 Arno's Firewall 1.8.4d is firewalling my internet connection and 
 forwarding all outgoing port 80 traffic through a transparent proxy 
 setup.

Cool.  Is it doing the same for outgoing port 443?
If not, that's why secure websites aren't working.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Sent: Saturday, October 29, 2005 4:35 PM
  To: squid-users@squid-cache.org
  Subject: [squid-users] secure web sites wont show on my clients
  
  
  I have a brand new Gentoo Linux install set up with the following:
  
  Arno's Firewall 1.8.4d is firewalling my internet connection and 
  forwarding all outgoing port 80 traffic through a transparent proxy 
  setup.
  
 
 Is it preventing clients from accessing the outside world on port 443?
 
  dnsmasq is both my dns server and dhcp server (both of these 
  work, no problem).
  
  I've installed dansguardian with the default config file (for now)
  
  I've installed squid 2.5.STABLE11 with an altered 
  /etc/squid/squid.conf file.
  
  My sequence is internal internet request - dansguardian - squid - 
  out to internet
  
  I just couldn't follow all the comments in such a large config 
  file, so I 
  copied the sample one that comes with squid to squid.conf.sample
  
  and started over with a blank squid.conf file
  
  here it is:
  
  
  http_port 127.0.0.1:3128
  httpd_accel_host virtual
  httpd_accel_port 80
  httpd_accel_with_proxy on
  httpd_accel_uses_host_header on
  
  
  acl all src 0.0.0.0/0.0.0.0
  acl localhost src 127.0.0.1
  follow_x_forwarded_for allow localhost
  acl_uses_indirect_client on
  delay_pool_uses_indirect_client on
  log_uses_indirect_client on
  
  
  acl homenet src 192.168.0.0/24
  
  http_access allow localhost
  http_access allow homenet
  http_access deny all
  
  Ok:
  
  this setup seems to work for regular port 80 traffic ok
 
 So Squid is working fine...
 
  
  (please note, I'm going for an unfiltered setup for now; I 
  want to make 
  sure everything that needs to work does, BEFORE the access 
  rules start 
  changing stuff. I want to know for sure that my problem was 
  in my last 
  rule change, not a setup issue.)
  
  My problem with this setup is web sites that require you to log in,
  
  e.g. www.hotmail.com:
  
  the log-in part doesn't work.
  
  there are no error messages, just timeouts on the connection and 
  windows shows the DNS error page.
 
 It's likely not a squid problem.  You can't intercept SSL traffic 
 (and it doesn't look like you are trying), so you have to let it go 
 direct, (and obviously let the responses back in).  Check your firewall 
rules.
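 
 Chris's point can be illustrated with a typical transparent-proxy firewall fragment (an iptables sketch, assuming squid listens on 3128 on the gateway and the LAN sits on eth1; interface names are illustrative, and the DansGuardian hop is ignored for clarity):
 
```shell
# Redirect outgoing HTTP (port 80) from the LAN into the local proxy
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128

# Do NOT redirect 443: intercepted SSL cannot work with squid 2.5,
# so HTTPS must be forwarded straight out (with replies allowed back)
iptables -A FORWARD -i eth1 -p tcp --dport 443 -j ACCEPT
```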
 
  
  What am I missing? Is it safe_ports?  (I read about those in 
  my master 
  copy of the .conf.default file)
  
  I want to make sure that squid allows all of my normal 
  traffic before I 
  start restricting any.
  
  Could someone please tell me what I've missed here, Thanks
  
  Rance
  
  
 
 Chris



Re: [squid-users] Spam mail through Squid server

2005-10-26 Thread trainier
 SMTP is allowed through your squid program itself, not the squid server.
This is not correct.  Although it might be possible to pass email through 
squid, squid does not natively
allow smtp proxying.  Squid proxies and caches http traffic and nothing 
more.  Unfortunately, due to variations in how CONNECT is used,
I suppose this is possible.

 Disable squid from allowing itself to connect to foreign hosts on port 
25, 
 or else you will continually be tracking people down rather than just 
 preventing the problem from happening in the first place.

I'm curious to know your recommendation on this one.  It's not like 
there's an acl or config directive that
states: allow_smtp yes|no

How would you suggest doing this?
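
One way to express such a restriction is squid's standard port ACLs; in fact the stock squid.conf already denies this, since port 25 appears in neither list. A condensed sketch of those defaults (from memory of the 2.x distributed config):

```
# Condensed from the stock squid.conf defaults; port 25 is in
# neither list, so SMTP requests and CONNECT tunnels are refused.
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 280 488 591 777 1025-65535
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
```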

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Covington, Chris [EMAIL PROTECTED] wrote on 10/26/2005 
10:55:04 AM:

 On Tue, Oct 25, 2005 at 11:02:51PM +0100, lokesh.
 [EMAIL PROTECTED] wrote:
  I want to insert actual PC IP address also in header of that mail.
  SMTP is not allowed through my squid server.
 
 SMTP is allowed through your squid program itself, not the squid server. 
 
 Disable squid from allowing itself to connect to foreign hosts on port 
25, 
 or else you will continually be tracking people down rather than just 
 preventing the problem from happening in the first place.
 
 ---
 Chris Covington
 IT
 Plus One Health Management
 75 Maiden Lane Suite 801
 NY, NY 10038
 646-312-6269
 http://www.plusoneactive.com
 



Re: [squid-users] can't get to certain sites through proxy

2005-10-26 Thread trainier
The error message, or a copy of cache.log, would be a good start.
Second, you appear to be trying to accelerate an http server.  Are you doing 
this on purpose?
This is NOT proxying as you intend it.  Acceleration is used to speed up web 
servers and should not be used here.
This applies to all your httpd_accel entries.

Also, what happened to your 'http_access deny all' line?  You don't really 
want anyone from anywhere to be
able to use your proxy, do you?  If so, can you save me the trouble of 
hunting you down and give me your IP
address so I can toss it into the blacklist?  :-)

Please get a better understanding of how ACLs work before you drop a proxy 
device out there.  That http_access deny all line
is VERY important.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Mark Drago [EMAIL PROTECTED] wrote on 10/26/2005 11:00:47 AM:

 Hello,
 
 Is there a page or an FAQ somewhere that may help me troubleshoot a
 problem where a site works fine when not going through squid but has
 errors when accessed through squid?  I'm having trouble logging in to a
 site when the connection is going through the proxy and I'm not really
 sure where to start.
 
 I'm running squid version 2.5.STABLE9 and my configuration file is
 included below.
 
 Any hints, tips, or links are greatly appreciated.
 
 Thank You,
 Mark Drago
 
 /etc/squid.conf
 ---
 
 http_port 3128
 hierarchy_stoplist cgi-bin ?
 
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY
 
 cache_dir ufs /var/squid/cache 7727 16 256
 
 cache_access_log /dev/null
 cache_log /dev/null
 cache_store_log none
 
 refresh_pattern ^ftp:   144020% 10080
 refresh_pattern .   0   20% 4320
 
 acl all src 0.0.0.0/0.0.0.0
 
 http_access allow all
 icp_access allow all
 miss_access allow all
 
 half_closed_clients off
 server_persistent_connections off
 client_persistent_connections off
 
 visible_hostname serial_number.bascom.net
 unique_hostname serial_number.bascom.net
 
 httpd_accel_host virtual
 httpd_accel_port 80
 httpd_accel_with_proxy on
 httpd_accel_uses_host_header on
 
 maximum_object_size 12 KB
 
 redirect_program /usr/local/bin/jesred
 redirect_children 40
 
 uri_whitespace deny
 [attachment signature.asc deleted by Tim Rainier/KAL/Kalsec] 


Re: [squid-users] Spam mail through Squid server

2005-10-25 Thread trainier
Uhm, yeah.  Why aren't you trying to prevent this activity?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



D  E Radel [EMAIL PROTECTED] 
10/25/2005 05:00 PM

To
[EMAIL PROTECTED], squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Spam mail through Squid server






If that really is the case, how about blocking access to that 
server and cancelling your customer's account?

- Original Message - 
From: [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Wednesday, October 26, 2005 9:36 AM
Subject: [squid-users] Spam mail through Squid server


Hi

I am running a transparent squid server on a Red Hat ES 3.0 box. I noticed
that some of my users sometimes establish http connections with a server on
the internet and send spam mail. The header of that mail always contains the
squid server's IP address. Is there any way I can also insert the IP address
of the customer's PC which is actually sending that mail?

Thanks - LK





Re: [squid-users] Which the best OS for Squid?

2005-10-13 Thread trainier
  kalproxy:/var/log/squid # free -m
               total   used   free  shared  buffers  cached
  Mem:          1007    995     12       0        4      33
  -/+ buffers/cache:     957     50
  Swap:         1027     18   1008

 I call this running low on memory. I recommend adding more RAM. If you
 can't, first limit memory_pools_size, then you probably should decrease
 cache_mem a bit (32, then 16, then 8 MB).

Actually, -/+ buffers/cache generally shows only around 400-500 MB used.  Not 
terrible.  My cache_mem is currently set to 8 MB.
Is there a FAQ that explains memory pools and how to use them?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Matus UHLAR - fantomas [EMAIL PROTECTED] wrote on 10/13/2005 03:44:37 
AM:

 On 12.10 16:57, [EMAIL PROTECTED] wrote:
   On a side-note.  Your 4x33 are set up as RAID or LVM?
   neither one is a good idea.
   http://www.squid-cache.org/Doc/FAQ/FAQ-3.html#ss3.11
  
  Indeed.  I was making sure he wasn't raiding his squid cache. :-)
  
  if your computes has enough of memory left for metadata cache (inodes 
and
  directories where squid data are left), it's OK. if not, you have 
huge
  performance bottleneck here (and with only 50 MB of cache_mem, buy 
more
  memory).
  
  kalproxy:/var/log/squid # free -m
               total   used   free  shared  buffers  cached
  Mem:          1007    995     12       0        4      33
  -/+ buffers/cache:     957     50
  Swap:         1027     18   1008
 
 I call this running low on memory. I recommend adding more RAM. If you
 can't, first limit memory_pools_size, then you probably should decrease
 cache_mem a bit (32, then 16, then 8 MB).
 
 -- 
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 - Have you got anything without Spam in it?
 - Well, there's Spam egg sausage and Spam, that's not got much Spam in 
it.



Re: [squid-users] Crashed squid 2.5.STABLE11

2005-10-12 Thread trainier
My guess is that, yes, you're filling the /var partition when you rotate 
those logs.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Lucia Di Occhi [EMAIL PROTECTED] wrote on 10/12/2005 09:12:17 AM:

 Here is the output from df -h
 
 FilesystemSize  Used Avail Use% Mounted on
 /dev/sda3 7.9G  1.5G  6.1G  20% /
 /dev/sda1  99M   12M   83M  13% /boot
 /dev/sda6 109G   45G   59G  44% /cache
 none  1.8G 0  1.8G   0% /dev/shm
 /dev/sda2  16G   13G  2.6G  83% /var
 
 It looks like squid started crashing only 2 days ago after the 
logrotate, 
 here is the script:
 
 /var/log/squid/access.log {
 daily
 rotate 30
 copytruncate
 compress
 notifempty
 missingok
 }
 /var/log/squid/cache.log {
 daily
 rotate 30
 copytruncate
 compress
 notifempty
 missingok
 }
 
 /var/log/squid/store.log {
 daily
 rotate 30
 copytruncate
 compress
 notifempty
 missingok
 # This script asks squid to rotate its logs on its own.
 # Restarting squid is a long process and it is not worth
 # doing it just to rotate logs
 #postrotate
 #  /usr/sbin/squid -k rotate
 #endscript
 }
 
 It has been working fine for a long time, so I am not quite sure why it is
 now crashing.  Could it be that our internet traffic has increased and the
 logs are getting so large that when they are rotated and zipped there
 isn't enough space on /var?
 From: [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Crashed squid 2.5.STABLE11
 Date: Tue, 11 Oct 2005 16:21:47 -0400
 
 Sorry.  That's `df -h` as opposed to `du -h`.
 
 Tim Rainier
 Information Services, Kalsec, INC
 [EMAIL PROTECTED]
 
 
 
 [EMAIL PROTECTED]
 10/11/2005 03:38 PM
 
 To
 squid-users@squid-cache.org
 cc
 
 Subject
 Re: [squid-users] Crashed squid 2.5.STABLE11
 
 
 
 
 
 
 First, and foremost, I would hesitate rotating the store log.  Henrik 
and
 probably several others, can verify that notion.
 Second, do a `du -h` and email the output back.
 
 Tim Rainier
 Information Services, Kalsec, INC
 [EMAIL PROTECTED]
 
 
 
 Lucia Di Occhi [EMAIL PROTECTED]
 10/11/2005 02:29 PM
 
 To
 [EMAIL PROTECTED], squid-users@squid-cache.org
 cc
 
 Subject
 Re: [squid-users] Crashed squid 2.5.STABLE11
 
 
 
 
 
 
 Sure it does and I keep 30 days worth of logs:
 
 squid[~]ls -alh /var/log/squid/
 total 12G
 drwxr-x---   2 squid squid 4.0K Oct 11 04:07 .
 drwxr-xr-x  12 root  root  4.0K Oct  9 04:04 ..
 -rw-r--r--   1 squid squid 365M Oct 11 14:28 access.log
 -rw-r--r--   1 squid squid  47M Oct  2 04:02 access.log.10.gz
 -rw-r--r--   1 squid squid 140M Oct  1 04:04 access.log.11.gz
 -rw-r--r--   1 squid squid 172M Sep 30 04:04 access.log.12.gz
 -rw-r--r--   1 squid squid 174M Sep 29 04:04 access.log.13.gz
 -rw-r--r--   1 squid squid 155M Sep 28 04:04 access.log.14.gz
 -rw-r--r--   1 squid squid 174M Sep 27 04:04 access.log.15.gz
 -rw-r--r--   1 squid squid  90M Sep 26 04:03 access.log.16.gz
 -rw-r--r--   1 squid squid  66M Sep 25 04:03 access.log.17.gz
 -rw-r--r--   1 squid squid 146M Sep 24 04:04 access.log.18.gz
 -rw-r--r--   1 squid squid 158M Sep 23 04:04 access.log.19.gz
 -rw-r--r--   1 squid squid 177M Oct 11 04:04 access.log.1.gz
 -rw-r--r--   1 squid squid 160M Sep 22 04:04 access.log.20.gz
 -rw-r--r--   1 squid squid 152M Sep 21 04:04 access.log.21.gz
 -rw-r--r--   1 squid squid 163M Sep 20 04:04 access.log.22.gz
 -rw-r--r--   1 squid squid  85M Sep 19 04:09 access.log.23.gz
 -rw-r--r--   1 squid squid  64M Sep 18 04:02 access.log.24.gz
 -rw-r--r--   1 squid squid 127M Sep 17 04:04 access.log.25.gz
 -rw-r--r--   1 squid squid 143M Sep 16 04:04 access.log.26.gz
 -rw-r--r--   1 squid squid 145M Sep 15 04:04 access.log.27.gz
 -rw-r--r--   1 squid squid 143M Sep 14 04:04 access.log.28.gz
 -rw-r--r--   1 squid squid 135M Sep 13 04:03 access.log.29.gz
 -rw-r--r--   1 squid squid 100M Oct 10 04:03 access.log.2.gz
 -rw-r--r--   1 squid squid  63M Sep 12 04:03 access.log.30.gz
 -rw-r--r--   1 squid squid  87M Oct  9 04:03 access.log.3.gz
 -rw-r--r--   1 squid squid 144M Oct  8 04:04 access.log.4.gz
 -rw-r--r--   1 squid squid 157M Oct  7 04:04 access.log.5.gz
 -rw-r--r--   1 squid squid 169M Oct  6 04:05 access.log.6.gz
 -rw-r--r--   1 squid squid 158M Oct  5 04:04 access.log.7.gz
 -rw-r--r--   1 squid squid 176M Oct  4 04:04 access.log.8.gz
 -rw-r--r--   1 squid squid  80M Oct  3 04:03 access.log.9.gz
 -rw-r--r--   1 squid squid  34K Oct 11 14:08 cache.log
 -rw-r--r--   1 squid squid 8.5K Oct  2 04:02 cache.log.10.gz
 -rw-r--r--   1 squid squid  12K Oct  1 04:04 cache.log.11.gz
 -rw-r--r--   1 squid squid  19K Sep 30 04:04 cache.log.12.gz
 -rw-r--r--   1 squid squid  31K Sep 29 04:04 cache.log.13.gz
 -rw-r--r--   1 squid squid  17K Sep 28 04:04 cache.log.14.gz
 -rw-r--r--   1 squid squid  21K Sep 27 04:04 cache.log.15.gz
 -rw-r--r--   1 squid squid  13K Sep 26 04:03 cache.log.16.gz
 -rw-r--r--   1 squid 

RE: [squid-users] Which the best OS for Squid?

2005-10-12 Thread trainier
Very cool!  Thanx!

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Chris Robertson [EMAIL PROTECTED] wrote on 10/11/2005 06:09:53 PM:

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, October 11, 2005 1:20 PM
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] Which the best OS for Squid?
  
  
  First off, there's no possible way my cache would fill the '/' 
  partition.  There's a cache size directive in squid that's 
  designed to 
  limit the amount of disk space usage.
  Not to mention the fact that I have a utility script that 
  runs every 15 
  minutes, which pages me if partitions are >= 90% of their 
  capacity.  I 
  mean, honestly, who would run a 146GB cache?
  
  Second off, it's a performance thing.  The fact is, the box 
  and the web 
  run quite fine.  This was a test server that was thrown into 
  production 
  because it works.
  My plans to upgrade the device are set, I'm just trying to 
  find the time 
  to do them.  :-)
  
  Thirdly, can someone PLEASE answer my question about setting / to 
  'noatime', as opposed to avoiding it by telling me how and 
  why what I'm 
  doing
  is stupid?
  
  Once again, are there pitfalls to having '/' set to 'noatime'?
  
  :-)
  
  Tim Rainier
  Information Services, Kalsec, INC
  [EMAIL PROTECTED]
  
  
 
 Simple solutions: don't set noatime in fstab.  Just set it for the cache 
dir. 
 http://www.faqs.org/docs/securing/chap6sec73.html
 
 As for pitfalls, I don't really see any outside of forensics.  All 
 the atime option does is keep track of when a file is read (accessed).
 
 Chris
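 
 The per-partition approach Chris describes is a one-line fstab change (a sketch; the device, mount point, and filesystem type are illustrative):
 
```
# /etc/fstab: noatime only on the dedicated cache partition, not on /
/dev/sdb1   /var/squid/cache   ext3   defaults,noatime   0 0
```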



Re: [squid-users] Which the best OS for Squid?

2005-10-12 Thread trainier
Oh yeah.  I definitely see the advantages.

The fact is, we're small enough that it hasn't sorely affected us much at 
all.  My access log for squid grows to about 4-10 GB in a week.
I made it adamantly clear that I would only retain 1 week's worth of access 
logging information.

When it comes down to it, I would've never moved this box into production 
if we ran anything else off from it.  It's dedicated to caching and 
blocking content (squidguard).
I have had very few complaints on performance (with the exception of the 
period when we were testing with authentication routines).

On a side-note.  Your 4x33 are set up as RAID or LVM?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Rodrigo A B Freire [EMAIL PROTECTED] 
10/11/2005 10:52 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Which the best OS for Squid?






In my cache server, every mount is set noatime. Since the server is 
dedicated to web caching, I don't need the last time someone accessed a 
given file.

BTW, FYI.. On my cache.log startup:

Max Swap size: 132592000 KB

4x33 GB UW-SCSI2 drives. It fills almost my entire RAM. My cache_mem is set 
to only 50 MB, since the object table grows quite large in memory. 
But why mind? The disk access is fast enough (avg. 9 ms to access cached 
objects).

The OS lies in a 9-GB disk drive. Logs rotated and compressed 
(access.log and cache.log, don't log store.log) daily. Monthly, FTPed to 
another server, backup and remove the old ones from cache machine 
[access.log may reach 1 GB/day].

My cache occupies entire hard disks, with no partitions [mkfs.reiser4 
(yeah!) -f /dev/sdb, sdc, sdd, sde].

The advantage? Well, if I want to zero the cache swap, I just stop 
squid, umount the partition, kick off a mkfs and re-mount the drives. Elapsed 
time? 
10 seconds, I'd say.
rm -f /usr/local/squid/var/cachexx ?
Damn, that may take up to 20 minutes (with a 27 GB cache_dir).

Another thing I take into account is the intense I/O in the 
cache dir. Definitely, I don't feel comfy about all this I/O on my OS's main 
disk. And, with the logs placed on different disks, the log writing isn't 
concurrent with an eventual cache-related disk op.

A power loss might lead to a long fsck (the cache mounts aren't 
fsck'ed), 
which results in a long time bringing the machine back up and lots of users 
whining (although we use WPAD with quite good results, directing the 
traffic to another server in case of failure).

My 2 cents: I would seriously consider creating a separate partition 
for 
the cache (if a second disk or more isn't an option). Both of them with 
noatime  ;-)

Best regards,

Rodrigo.

- Original Message - 
From: [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Tuesday, October 11, 2005 6:19 PM
Subject: Re: [squid-users] Which the best OS for Squid?


 First off, there's no possible way my cache would fill the '/'
 partition.  There's a cache size directive in squid that's designed to
 limit the amount of disk space usage.
 Not to mention the fact that I have a utility script that runs every 15
 minutes, which pages me if partitions are >= 90% of their capacity.  I
 mean, honestly, who would run a 146GB cache?

 Second off, it's a performance thing.  The fact is, the box and the web
 run quite fine.  This was a test server that was thrown into production
 because it works.
 My plans to upgrade the device are set, I'm just trying to find the time
 to do them.  :-)

 Thirdly, can someone PLEASE answer my question about setting / to
 'noatime', as opposed to avoiding it by telling me how and why what I'm
 doing
 is stupid?

 Once again, are there pitfalls to having '/' set to 'noatime'?

 :-)

 Tim Rainier
 Information Services, Kalsec, INC
 [EMAIL PROTECTED]



 Joost de Heer [EMAIL PROTECTED]
 10/11/2005 05:07 PM
 Please respond to
 [EMAIL PROTECTED]


 To
 [EMAIL PROTECTED]
 cc
 squid-users@squid-cache.org
 Subject
 Re: [squid-users] Which the best OS for Squid?






 [EMAIL PROTECTED] said:
 What if the squid cache is stored on the / partition?

 That's a bad idea. Your cache could potentially fill up the root
 partition.

 Wouldn't that be a hideous mistake to set / to 'noatime' ?

 Wouldn't it be a hideous mistake to put the cache on the same partition 
as
 /?

 Joost



 





Re: [squid-users] Which the best OS for Squid?

2005-10-12 Thread trainier
Oh.  You're running 4 separate caches?
Yeah, I couldn't see why anyone would want to RAID a squid cache.  :-)

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Rodrigo A B Freire [EMAIL PROTECTED] 
10/12/2005 01:24 PM

To
[EMAIL PROTECTED]
cc

Subject
Re: [squid-users] Which the best OS for Squid?






Hello, Tim!

The disks are completely stand-alone. No volume manager, no RAID:

#/dev/sdb/usr/local/squid/var/cacheb reiser4 rw,noatime  0   0
#/dev/sdc/usr/local/squid/var/cachec reiser4 rw,noatime  0   0
#/dev/sdd/usr/local/squid/var/cached reiser4 rw,noatime  0   0
#/dev/sde/usr/local/squid/var/cachee reiser4 rw,noatime  0   0

I've read somewhere that Squid is a worst-case scenario for RAID, due to 
the atomicity of the files and the fact that they're scattered.

Best regards,

Rodrigo.

- Original Message - 
From: [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Wednesday, October 12, 2005 11:03 AM
Subject: Re: [squid-users] Which the best OS for Squid?


 Oh yeah.  I definitely see the advantages.

 The fact is, we're small enough that it hasn't sorely affected us much 
at
 all.  My access log for squid grows to about 4-10 GB in a week.
 I made it adamantly clear that I would only retain 1 week's worth of 
access
 logging information.

 When it comes down to it, I would've never moved this box into 
production
 if we ran anything else off from it.  It's dedicated to caching and
 blocking content (squidguard).
 I have had very few complaints on performance (with the exception of the
 period when we were testing with authentication routines).

 On a side-note.  Your 4x33 are set up as RAID or LVM?

 Tim Rainier
 Information Services, Kalsec, INC
 [EMAIL PROTECTED]



 Rodrigo A B Freire [EMAIL PROTECTED]
 10/11/2005 10:52 PM

 To
 squid-users@squid-cache.org
 cc

 Subject
 Re: [squid-users] Which the best OS for Squid?






In my cache server, every mount is set noatime. Since the server is
 dedicated to the web caching, I don't need the last time someone 
accessed
 a
 given file.

BTW, FYI.. On my cache.log startup:

 Max Swap size: 132592000 KB

 4x33 GB UW-SCSI2 drives. I fills almost my entire RAM. My cache_mem is 
set

 to only 50 MBs, since the object table goes quite large into the memory.
 But, why mind? The disk access is fast enough (avg. 9 ms to access 
cached
 objects).

The OS lies in a 9-GB disk drive. Logs rotated and compressed
 (access.log and cache.log, don't log store.log) daily. Monthly, FTPed to
 another server, backup and remove the old ones from cache machine
 [access.log may reach 1 GB/day].

My cache occupies entire hard disks, with no partitions [mkfs.reiser4
 (yeah!) -f /dev/sdb, sdc, sdd, sde].

The advantage? Well, If I want to zero the cache swap, I just stop
 squid, umount the partition, kick a mkfs and re-mount drives. Elapsed
 time?
 10 seconds, I say.
rm -f /usr/local/squid/var/cachexx ?
Damn, it may take up to 20 minutes (with a 27 GB cache_dir).

Another thing I have into account is the fact of the intense I/O in
 the
 cache dir. Definitely, I don't feel comfy about all this I/O in my OS 
main

 disk. And, placing in different disks, the log writting isn't 
concomitant
 with a eventual disk op cache-related.

A power loss might led to a long fsck (the cache mounts aren't
 FSCKed),
 which results to long time bringing the machine back up and lots of 
users
 whining (altough we use WPAD with quite good results when directing the
 traffic to another server in case of failure).

My 2 cents: I would consider seriously creating a separate partition
 to
 the cache (if a second or more disks isn't an option). Both of them with
 noatime  ;-)

 Best regards,

 Rodrigo.

 - Original Message - 
 From: [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Tuesday, October 11, 2005 6:19 PM
 Subject: Re: [squid-users] Which the best OS for Squid?


 First off, there's no possible way my cache would fill the '/'
 partition.  There's a cache size directive in squid that's designed to
 limit the amount of disk space usage.
 Not to mention the fact that I have a utility script that runs every 15
 minutes, which pages me if partitions are >= 90% of their capacity.  I
 mean, honestly, who would run a 146GB cache?

 Second off, it's a performance thing.  The fact is, the box and the web
 run quite fine.  This was a test server that was thrown into production
 because it works.
 My plans to upgrade the device are set, I'm just trying to find the 
time
 to do them.  :-)

 Thirdly, can someone PLEASE answer my question about setting / to
 'noatime', as opposed to avoiding it by telling me how and why what I'm
 doing
 is stupid?

 Once again, are there pitfalls to having '/' set to 'noatime'?

 :-)

 Tim Rainier
 Information Services, Kalsec, INC
 [EMAIL PROTECTED]



 Joost de Heer [EMAIL PROTECTED]
 10/11/2005 05:07 PM
 Please respond to
 [EMAIL PROTECTED]


 

Re: [squid-users] Which the best OS for Squid?

2005-10-12 Thread trainier
 On a side-note.  Your 4x33 are set up as RAID or LVM?
 neither one is a good idea.
 http://www.squid-cache.org/Doc/FAQ/FAQ-3.html#ss3.11

Indeed.  I was making sure he wasn't raiding his squid cache. :-)

 if your computes has enough of memory left for metadata cache (inodes 
and
directories where squid data are left), it's OK. if not, you have huge
performance bottleneck here (and with only 50 MB of cache_mem, buy more
memory).

kalproxy:/var/log/squid # free -m
             total   used   free  shared  buffers  cached
Mem:          1007    995     12       0        4      33
-/+ buffers/cache:     957     50
Swap:         1027     18   1008


Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Matus UHLAR - fantomas [EMAIL PROTECTED] 
10/12/2005 02:12 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Which the best OS for Squid?






On 12.10 10:03, [EMAIL PROTECTED] wrote:
 The fact is, we're small enough that it hasn't sorely affected us much 
at 
 all.  My access log for squid grows to about 4-10 GB in a week.

wow, that's a lot of data transferred in a week.

 I made it adimently clear that I would only retain 1 weeks worth of 
access 
 logging information.

1-2 weeks are recommended, if you can set up a few FAST disks.



you should use each drive as separate cache_dir, unless you use mirroring
(which is useless in many cases)
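
The one-cache_dir-per-drive layout suggested here looks like this in squid.conf (a sketch; the paths match Rodrigo's four stand-alone disks, and the 30000 MB sizes are illustrative):

```
cache_dir ufs /usr/local/squid/var/cacheb 30000 16 256
cache_dir ufs /usr/local/squid/var/cachec 30000 16 256
cache_dir ufs /usr/local/squid/var/cached 30000 16 256
cache_dir ufs /usr/local/squid/var/cachee 30000 16 256
```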

[someone other...]

 In my cache server, every mount is set noatime. Since the server 
is 
 dedicated to the web caching, I don't need the last time someone 
accessed
 a given file.

the 'noatime' option for most filesystems does not matter.

 4x33 GB UW-SCSI2 drives. It fills almost my entire RAM. My cache_mem is
set to only 50 MB, since the object table grows quite large in memory.
 But, why mind? The disk access is fast enough (avg. 9 ms to access
cached objects).

if your computer has enough memory left for metadata cache (inodes and
directories where squid data are kept), it's OK. if not, you have a huge
performance bottleneck here (and with only 50 MB of cache_mem, buy more
memory).

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety. -- Benjamin Franklin, 1759




Re: [squid-users] Performance tweaks

2005-10-11 Thread trainier
Not much, no. 
Seems to me I found a couple compilation errors when I first tried to 
install it.
It was no big deal or anything.

If you run into trouble, you can contact me off-list since it's probably 
beyond the scope of this list.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



D  E Radel [EMAIL PROTECTED] 
10/10/2005 05:40 PM

To
squid-users@squid-cache.org, [EMAIL PROTECTED]
cc

Subject
Re: [squid-users] Performance tweaks







- Original Message - 
From: [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Tuesday, October 11, 2005 9:56 AM
Subject: Re: [squid-users] Performance tweaks


 Squid's url_regex is a hideously slow way of managing blackholed
 urls/sites/domains.
 I'm not necessarily blaming the program itself; the fact is, regular
 expressions can be quite computationally expensive.
  SquidGuard, on the other hand, is VERY fast and works quite well.

Is it much of a feat to make the switch from url_regex to SquidGuard? 
Any gotchas?
D.Radel.






Re: [squid-users] Crashed squid 2.5.STABLE11

2005-10-11 Thread trainier
Does this file exist? - /var/log/squid/store.log
Does the user running squid have permission to write to it?

Basically, do an ls -lah /var/log/squid
and paste the output into the reply email.



Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Lucia Di Occhi [EMAIL PROTECTED] wrote on 10/11/2005 10:21:52 AM:

 Linux squid 2.6.12-1.1378_FC3smp #1 SMP Wed Sep 14 04:52:36 EDT 2005 
i686 
 i686 i386 GNU/Linux
 
 Squid Cache: Version 2.5.STABLE11
 configure options:  --program-prefix= --prefix=/usr --exec-prefix=/usr 
 --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc 
--datadir=/usr/share 
 --includedir=/usr/include --libdir=/usr/lib --libexecdir=/usr/libexec 
 --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man 
 --infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin 
 --libexecdir=/usr/lib/squid --localstatedir=/var --sysconfdir=/etc/squid 

 --enable-poll --enable-removal-policies=heap,lru 
 --enable-storeio=aufs,coss,diskd,null,ufs --enable-linux-netfilter 
 --with-pthreads --enable-useragent-log --enable-referer-log 
 --disable-dependency-tracking --enable-cachemgr-hostname=localhost 
 --disable-ident-lookups --enable-truncate --enable-underscores 
 --datadir=/usr/share
 
 Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.4/specs
 Configured with: ../configure --prefix=/usr --mandir=/usr/share/man 
 --infodir=/usr/share/info --enable-shared --enable-threads=posix 
 --disable-checking --with-system-zlib --enable-__cxa_atexit 
 --disable-libunwind-exceptions --enable-java-awt=gtk 
 --host=i386-redhat-linux
 Thread model: posix
 gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)
 
 From cache.log
 
 2005/10/11 02:31:33| WARNING: unparseable HTTP header field 
 {403,14,URL,/vanityParser.cfm}
 2005/10/11 02:31:33| WARNING: unparseable HTTP header field 
 {404,*,URL,/test.htm}
 2005/10/11 02:31:33| WARNING: unparseable HTTP header field 
 {500,100,URL,/iisHelp/common/500-100.asp}
 2005/10/11 02:31:33| ctx: exit level  0
 2005/10/11 02:31:33| WARNING: unparseable HTTP header field 
 {403,14,URL,/vanityParser.cfm}
 2005/10/11 02:31:33| WARNING: unparseable HTTP header field 
 {404,*,URL,/test.htm}
 2005/10/11 02:31:33| WARNING: unparseable HTTP header field 
 {500,100,URL,/iisHelp/common/500-100.asp}
 2005/10/11 02:49:09| httpReadReply: Excess data from GET 
 http://hpdjjs.com/cgi-bin/jumpstation.py?query=overlandserver;
 2005/10/11 03:04:53| httpReadReply: Excess data from HEAD 
 http://www.overture.com/;
 2005/10/11 03:23:51| ctx: enter level  0: 
 'http://viewmorepics.myspace.com/index.cfm?
 
fuseaction=viewImage&friendID=28636111&imageID=251293587&Mytoken=8B0D8DCC-7FD4-4333-991911DE8DEF62D12839554968'
 2005/10/11 03:23:51| WARNING: unparseable HTTP header field 
 {403,14,URL,/vanityParser.cfm}
 2005/10/11 03:23:51| WARNING: unparseable HTTP header field 
 {404,*,URL,/test.htm}
 2005/10/11 03:23:51| WARNING: unparseable HTTP header field 
 {500,100,URL,/iisHelp/common/500-100.asp}
 2005/10/11 03:23:51| ctx: exit level  0
 2005/10/11 03:23:51| WARNING: unparseable HTTP header field 
 {403,14,URL,/vanityParser.cfm}
 2005/10/11 03:23:51| WARNING: unparseable HTTP header field 
 {404,*,URL,/test.htm}
 2005/10/11 03:23:51| WARNING: unparseable HTTP header field 
 {500,100,URL,/iisHelp/common/500-100.asp}
 2005/10/11 03:29:59| httpReadReply: Excess data from HEAD 
 http://www.overture.com/;
 2005/10/11 04:02:53| storeDirWriteCleanLogs: Starting...
 2005/10/11 04:02:53| WARNING: Closing open FD   12
 2005/10/11 04:02:54| 65536 entries written so far.
 2005/10/11 04:02:54|131072 entries written so far.
 2005/10/11 04:02:54|196608 entries written so far.
 2005/10/11 04:02:54|262144 entries written so far.
 2005/10/11 04:02:54|327680 entries written so far.
 2005/10/11 04:02:54|393216 entries written so far.
 2005/10/11 04:02:54|458752 entries written so far.
 2005/10/11 04:02:54|524288 entries written so far.
 2005/10/11 04:02:54|589824 entries written so far.
 2005/10/11 04:02:54|655360 entries written so far.
 2005/10/11 04:02:54|720896 entries written so far.
 2005/10/11 04:02:54|786432 entries written so far.
 2005/10/11 04:02:54|851968 entries written so far.
 2005/10/11 04:02:54|917504 entries written so far.
 2005/10/11 04:02:54|983040 entries written so far.
 2005/10/11 04:02:54|   1048576 entries written so far.
 2005/10/11 04:02:54|   1114112 entries written so far.
 2005/10/11 04:02:54|   1179648 entries written so far.
 2005/10/11 04:02:54|   1245184 entries written so far.
 2005/10/11 04:02:54|   1310720 entries written so far.
 2005/10/11 04:02:55|   1376256 entries written so far.
 2005/10/11 04:02:55|   1441792 entries written so far.
 2005/10/11 04:02:55|   1507328 entries written so far.
 2005/10/11 04:02:55|   1572864 entries written so far.
 2005/10/11 04:02:55|   1638400 entries written so far.
 2005/10/11 04:02:55|   1703936 entries written so far.
 2005/10/11 04:02:55|   1769472 entries 

Re: [squid-users] Which the best OS for Squid?

2005-10-11 Thread trainier
What if the squid cache is stored on the / partition?
Wouldn't that be a hideous mistake to set / to 'noatime' ?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Henrik Nordstrom [EMAIL PROTECTED] wrote on 10/11/2005 10:07:21 AM:

 On Tue, 11 Oct 2005 [EMAIL PROTECTED] wrote:
 
  This is more of a filesystem question than it is an operating
  system/distro question.
  Based on my research, the benchmarks on the web claim ReiserFS to 
provide
  up to 15-20% faster results.
 
  I've not had any time to do any benchmarking.  My cache is currently
  running on an ext3 partition running
  under SLES8 SP3
 
 Regardless of which filesystem you select the most important tuning 
aspect 
 for filesystem performance for Squid (after selection of hardware) is 
the 
 noatime mount option.
 
 A more complete list, in priority order:
 
1. Amount of memory available
 
2. Number of harddrives used for cache
 
3. noatime mount option
 
4. type of filesystem (except for a few really bad choices).
 
 
 On systems with synchronous directory updates (Solaris, some BSD
versions)
 
1.5 Mount option to enable asynchronous directory updates, or preferably
 a filesystem meta journal on a separate device taking the heat off
 directory updates.
 
 Regards
 Henrik
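As a concrete illustration of the noatime advice above (the device name and mount point below are made up for this sketch, not taken from the thread), a cache mount would typically carry the option in /etc/fstab:

```
# /etc/fstab - illustrative entry for a dedicated squid cache disk
/dev/sdb1   /var/spool/squid   ext3   rw,noatime   0   2
```

An already-mounted filesystem can usually be switched without a reboot via `mount -o remount,noatime /var/spool/squid`.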



Re: [squid-users] Crashed squid 2.5.STABLE11

2005-10-11 Thread trainier
First, and foremost, I would hesitate to rotate the store log.  Henrik, and
probably several others, can verify that notion.
Second, do a `du -h` and email the output back.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Lucia Di Occhi [EMAIL PROTECTED] 
10/11/2005 02:29 PM

To
[EMAIL PROTECTED], squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Crashed squid 2.5.STABLE11






Sure it does and I keep 30 days worth of logs:

squid[~]ls -alh /var/log/squid/
total 12G
drwxr-x---   2 squid squid 4.0K Oct 11 04:07 .
drwxr-xr-x  12 root  root  4.0K Oct  9 04:04 ..
-rw-r--r--   1 squid squid 365M Oct 11 14:28 access.log
-rw-r--r--   1 squid squid  47M Oct  2 04:02 access.log.10.gz
-rw-r--r--   1 squid squid 140M Oct  1 04:04 access.log.11.gz
-rw-r--r--   1 squid squid 172M Sep 30 04:04 access.log.12.gz
-rw-r--r--   1 squid squid 174M Sep 29 04:04 access.log.13.gz
-rw-r--r--   1 squid squid 155M Sep 28 04:04 access.log.14.gz
-rw-r--r--   1 squid squid 174M Sep 27 04:04 access.log.15.gz
-rw-r--r--   1 squid squid  90M Sep 26 04:03 access.log.16.gz
-rw-r--r--   1 squid squid  66M Sep 25 04:03 access.log.17.gz
-rw-r--r--   1 squid squid 146M Sep 24 04:04 access.log.18.gz
-rw-r--r--   1 squid squid 158M Sep 23 04:04 access.log.19.gz
-rw-r--r--   1 squid squid 177M Oct 11 04:04 access.log.1.gz
-rw-r--r--   1 squid squid 160M Sep 22 04:04 access.log.20.gz
-rw-r--r--   1 squid squid 152M Sep 21 04:04 access.log.21.gz
-rw-r--r--   1 squid squid 163M Sep 20 04:04 access.log.22.gz
-rw-r--r--   1 squid squid  85M Sep 19 04:09 access.log.23.gz
-rw-r--r--   1 squid squid  64M Sep 18 04:02 access.log.24.gz
-rw-r--r--   1 squid squid 127M Sep 17 04:04 access.log.25.gz
-rw-r--r--   1 squid squid 143M Sep 16 04:04 access.log.26.gz
-rw-r--r--   1 squid squid 145M Sep 15 04:04 access.log.27.gz
-rw-r--r--   1 squid squid 143M Sep 14 04:04 access.log.28.gz
-rw-r--r--   1 squid squid 135M Sep 13 04:03 access.log.29.gz
-rw-r--r--   1 squid squid 100M Oct 10 04:03 access.log.2.gz
-rw-r--r--   1 squid squid  63M Sep 12 04:03 access.log.30.gz
-rw-r--r--   1 squid squid  87M Oct  9 04:03 access.log.3.gz
-rw-r--r--   1 squid squid 144M Oct  8 04:04 access.log.4.gz
-rw-r--r--   1 squid squid 157M Oct  7 04:04 access.log.5.gz
-rw-r--r--   1 squid squid 169M Oct  6 04:05 access.log.6.gz
-rw-r--r--   1 squid squid 158M Oct  5 04:04 access.log.7.gz
-rw-r--r--   1 squid squid 176M Oct  4 04:04 access.log.8.gz
-rw-r--r--   1 squid squid  80M Oct  3 04:03 access.log.9.gz
-rw-r--r--   1 squid squid  34K Oct 11 14:08 cache.log
-rw-r--r--   1 squid squid 8.5K Oct  2 04:02 cache.log.10.gz
-rw-r--r--   1 squid squid  12K Oct  1 04:04 cache.log.11.gz
-rw-r--r--   1 squid squid  19K Sep 30 04:04 cache.log.12.gz
-rw-r--r--   1 squid squid  31K Sep 29 04:04 cache.log.13.gz
-rw-r--r--   1 squid squid  17K Sep 28 04:04 cache.log.14.gz
-rw-r--r--   1 squid squid  21K Sep 27 04:04 cache.log.15.gz
-rw-r--r--   1 squid squid  13K Sep 26 04:03 cache.log.16.gz
-rw-r--r--   1 squid squid  14K Sep 25 04:03 cache.log.17.gz
-rw-r--r--   1 squid squid  27K Sep 24 04:04 cache.log.18.gz
-rw-r--r--   1 squid squid  20K Sep 23 04:04 cache.log.19.gz
-rw-r--r--   1 squid squid  15K Oct 11 04:04 cache.log.1.gz
-rw-r--r--   1 squid squid  20K Sep 22 04:04 cache.log.20.gz
-rw-r--r--   1 squid squid  18K Sep 21 04:04 cache.log.21.gz
-rw-r--r--   1 squid squid  14K Sep 20 04:04 cache.log.22.gz
-rw-r--r--   1 squid squid  12K Sep 19 04:09 cache.log.23.gz
-rw-r--r--   1 squid squid 8.6K Sep 18 04:02 cache.log.24.gz
-rw-r--r--   1 squid squid  17K Sep 17 04:04 cache.log.25.gz
-rw-r--r--   1 squid squid  19K Sep 16 04:04 cache.log.26.gz
-rw-r--r--   1 squid squid  22K Sep 15 04:04 cache.log.27.gz
-rw-r--r--   1 squid squid  17K Sep 14 04:04 cache.log.28.gz
-rw-r--r--   1 squid squid  17K Sep 13 04:03 cache.log.29.gz
-rw-r--r--   1 squid squid 7.8K Oct 10 04:03 cache.log.2.gz
-rw-r--r--   1 squid squid  12K Sep 12 04:03 cache.log.30.gz
-rw-r--r--   1 squid squid 9.1K Oct  9 04:03 cache.log.3.gz
-rw-r--r--   1 squid squid  10K Oct  8 04:04 cache.log.4.gz
-rw-r--r--   1 squid squid  13K Oct  7 04:04 cache.log.5.gz
-rw-r--r--   1 squid squid  16K Oct  6 04:05 cache.log.6.gz
-rw-r--r--   1 squid squid  11K Oct  5 04:04 cache.log.7.gz
-rw-r--r--   1 squid squid  17K Oct  4 04:04 cache.log.8.gz
-rw-r--r--   1 squid squid 9.3K Oct  3 04:03 cache.log.9.gz
-rw-r--r--   1 root  root  2.2K Oct 11 08:08 squid.out
-rw-r--r--   1 squid squid 395M Oct 11 14:28 store.log
-rw-r--r--   1 squid squid  85M Oct  2 04:03 store.log.10.gz
-rw-r--r--   1 squid squid 235M Oct  1 04:06 store.log.11.gz
-rw-r--r--   1 squid squid 282M Sep 30 04:07 store.log.12.gz
-rw-r--r--   1 squid squid 285M Sep 29 04:06 store.log.13.gz
-rw-r--r--   1 squid squid 255M Sep 28 04:06 store.log.14.gz
-rw-r--r--   1 squid squid 287M Sep 27 04:06 store.log.15.gz
-rw-r--r--   1 squid squid 157M Sep 26 04:04 store.log.16.gz
-rw-r--r--   1 squid squid 117M Sep 25 04:04 store.log.17.gz
-rw-r--r--   1 squid 

RE: [squid-users] Which the best OS for Squid?

2005-10-11 Thread trainier
I realize that and agree.  My situation was screwy because of the server
I'm running squid on.
It has several internal partitions that are used for BIOS/POST, which
prevented me from setting up partitions the way I wanted to.

Not to mention the fact that this was really just a test squid box that I 
had tuned to the point where we were satisfied with it being in 
production.
It's in the works to have the services transferred to our new server 
platform standard, from which squid WILL have its own disks.

I guess I just wanted to clarify for myself, and anyone else interested, 
that (like the document states) setting the 'noatime' mount parameter is 
contingent
on the squid cache being on its own disk(s).

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Chris Robertson [EMAIL PROTECTED] wrote on 10/11/2005 03:06:09 PM:

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, October 11, 2005 10:27 AM
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] Which the best OS for Squid?
  Henrik Nordstrom [EMAIL PROTECTED] wrote on 10/11/2005 
  10:07:21 AM:
  
   On Tue, 11 Oct 2005 [EMAIL PROTECTED] wrote:
   
 This is more of a filesystem question than it is an operating
 system/distro question.
Based on my research, the benchmarks on the web claim ReiserFS to 
  provide
up to 15-20% faster results.
   
I've not had any time to do any benchmarking.  My cache 
  is currently
running on an ext3 partition running
under SLES8 SP3
   
   Regardless of which filesystem you select the most important tuning 
  aspect 
   for filesystem performance for Squid (after selection of 
  hardware) is 
  the 
   noatime mount option.
   
   A more complete list, in priority order:
   
  1. Amount of memory available
   
  2. Number of harddrives used for cache
   
  3. noatime mount option
   
  4. type of filesystem (except for a few really bad choices).
   
   
   On systems with synchronous directory updates (Solaris, some BSD
  versions)
   
  1.5 Mount option to enable asynchronous directory updates, or preferably
   a filesystem meta journal on a separate device taking the heat off
   directory updates.
   
   Regards
   Henrik
  
  
  
  What if the squid cache is stored on the / partition?
  Wouldn't that be a hideous mistake to set / to 'noatime' ?
  
  Tim Rainier
  Information Services, Kalsec, INC
  [EMAIL PROTECTED]
  
  
 
 If you are concerned enough about the performance of Squid to use 
 the noatime option, Squid should not only have its own partition, it
 should have its own disks.  Having Squid's cache on / indicates 
 (to me) a multi-use box, not a dedicated Squid server.  Notice as 
 well, the recommendation for mounting Squid's cache noatime falls 
 below having multiple disks for cache.
 
 Chris



Re: [squid-users] Crashed squid 2.5.STABLE11

2005-10-11 Thread trainier
Sorry.  That's `df -h` as opposed to `du -h`.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



[EMAIL PROTECTED] 
10/11/2005 03:38 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Crashed squid 2.5.STABLE11






First, and foremost, I would hesitate to rotate the store log.  Henrik, and
probably several others, can verify that notion.
Second, do a `du -h` and email the output back.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Lucia Di Occhi [EMAIL PROTECTED] 
10/11/2005 02:29 PM

To
[EMAIL PROTECTED], squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Crashed squid 2.5.STABLE11






Sure it does and I keep 30 days worth of logs:

squid[~]ls -alh /var/log/squid/
[full listing snipped - identical to the listing quoted earlier in this thread]

Re: [squid-users] Which the best OS for Squid?

2005-10-11 Thread trainier
What is it about browsing the web that's not fast enough?
It could simply be that authentication routines are slowing it down.

Part of the whole reason behind caching data is to prevent having to 
download popular sites/images/files/etc more than once.
For example, if 20 people request the current weather information from 
weather.com, then squid downloads it once from the web and uses said 
downloaded data
for the 20 people that requested it.  Thus, decreasing the amount of
bandwidth required and speeding things up (since it's a lot faster to
download data from a system on your LAN than it is to grab it 20 times
from the web).

Six of one, half a dozen of the other, really.  If the machine you're 
running squid on has other responsibilities (especially those that are I/O 
intensive), then disabling the cache
would speed the server up.  However, I don't think it would necessarily 
speed up squid.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Covington, Chris [EMAIL PROTECTED] 
10/11/2005 03:49 PM

To
Squid Users squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Which the best OS for Squid?






 This is more of a filesystem question than it is an operating
 system/distro question.

Let's say one is using Squid primarily for access control.  What
benefits would a cache provide?  Would eliminating the cache help speed
up squid, assuming there is ample bandwidth?


---
Chris Covington
IT
Plus One Health Management
75 Maiden Lane Suite 801
NY, NY 10038
646-312-6269
http://www.plusoneactive.com




Re: [squid-users] Which the best OS for Squid?

2005-10-11 Thread trainier
First off, there's no possible way my cache would fill the '/'
partition.  There's a cache size directive in squid that's designed to
limit the amount of disk space usage.
Not to mention the fact that I have a utility script that runs every 15
minutes, which pages me if partitions are >= 90% of their capacity.  I
mean, honestly, who would run a 146GB cache?

Second off, it's a performance thing.  The fact is, the box and the web 
run quite fine.  This was a test server that was thrown into production 
because it works.
My plans to upgrade the device are set, I'm just trying to find the time 
to do them.  :-)

Thirdly, can someone PLEASE answer my question about setting / to 
'noatime', as opposed to avoiding it by telling me how and why what I'm 
doing
is stupid?

Once again, are there pitfalls to having '/' set to 'noatime'?

:-)

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Joost de Heer [EMAIL PROTECTED] 
10/11/2005 05:07 PM
Please respond to
[EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Which the best OS for Squid?






[EMAIL PROTECTED] said:
 What if the squid cache is stored on the / partition?

That's a bad idea. Your cache could potentially fill up the root 
partition.

 Wouldn't that be a hideous mistake to set / to 'noatime' ?

Wouldn't it be a hideous mistake to put the cache on the same partition as
/?

Joost





[squid-users] Sparse Objects in Memory

2005-10-10 Thread trainier
I'm getting the following in my messages log, quite frequently:

Oct 10 07:46:31 kalproxy (squid): Squid has attempted to read data from 
memory that is not present. This is an indication of of (pre-3.0) code 
that hasn't been updated to deal with sparse objects in memory. Squid 
should coredump.allowing to review the cause. Immediately preceeding this 
message is a dump of the available data in the format [start,end). The [ 
means from the value, the ) means up to the value. I.e. [1,5) means that 
there are 4 bytes of data, at offsets 1,2,3,4.

Is there a patch written for this?  Do I need to force squid to core in 
order to do the backtrace?  Is the backtrace still needed?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]


Re: [squid-users] Squid: no_cache directive issue.

2005-10-10 Thread trainier
Pardon the standard "is it plugged in?" question, but

Does wget know there's a proxy server it needs to go through?
Unless you're running the proxy via port 80 (or it's transparent), wget 
does not appear to be going through a proxy, which would
make your test useless.

If your proxy is not set up transparently, you need to set the environment
variable http_proxy to the <ip address>:<port> of the proxy server.
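For example (the proxy address and port below are hypothetical, not from this thread), the variable can be set before running wget:

```shell
# Hypothetical values - substitute your squid box's address and port.
proxy_host=192.168.1.10
proxy_port=3128
export http_proxy="http://${proxy_host}:${proxy_port}/"
echo "$http_proxy"   # prints http://192.168.1.10:3128/
# wget will now route its requests through the proxy, e.g.:
#   wget -S -O x http://domain.com/
```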

Just a thought.  :-)

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Fabiano Silos Reis [EMAIL PROTECTED] 
10/10/2005 12:31 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] Squid: no_cache directive issue.






Hello,

Is there any kind of memory cache even if the object is marked to not to
be cached?

On my tests:

I tried with 

acl someserver dstdomain domain.com
no_cache deny someserver

Using wget to debug it I have:

[EMAIL PROTECTED] bin]# wget -S  -O x http://domain.com/
--10:04:43--  http://domain.com/
   = `x'
Resolving domain.com... 10.0.19.2
Connecting to domain.com[10.0.19.2]:80... connected.
HTTP request sent, awaiting response...
 1 HTTP/1.0 302 Moved Temporarily
 2 Date: Mon, 10 Oct 2005 13:04:42 GMT
 3 Server: Apache/2.0.54 (Unix)
 4 Location: http://domain.com/@/
 5 Content-Length: 286
 6 Content-Type: text/html; charset=iso-8859-1
 7 X-Cache: MISS from squid.domain.com
 8 Connection: keep-alive
Location: http://domain.com/@/ [following]
--10:04:43--  http://domain.com/@/
   = `x'
Connecting to domain.com[10.0.19.2]:80... connected.
HTTP request sent, awaiting response...
 1 HTTP/1.0 200 OK
 2 Date: Mon, 10 Oct 2005 13:04:42 GMT
 3 Server: Apache/2.0.54 (Unix)
 4 Content-Length: 1252
 5 Content-Type: text/html
 6 X-Cache: MISS from squid.domain.com
 7 Connection: keep-alive

100%[===] 1,252 --.--K/s

10:04:43 (11.94 MB/s) - `x' saved [1,252/1,252]

As you can see the answer is not being cached by the squid server. 

Explaining more about domain.com:

Domain.com is being served by 2 webservers that are on the back of a
Alteon Switch. These two are real servers for a virtual server. Alteon
manipulates the connection as a transparent balancer and it is
configured with round robin algorithm, it is, each connection to port 80
of the virtual server is redirected to a real server. 

In the body of the HTML I put a debug message to know where the answer
came from. In this case if the answer came from 'webserver1' I will see
in the body the phrase: "this came from webserver1".

My problem:

If the content is not being cached why do I receive only answers from
webserver1 when I get from squid? I tried running wget lots of times and
I always get in the body the same answer coming from webserver1.

If I point wget to get the content direct from the virtual server
(Configured in the alteon switch) I receive different body on each get. 


What I think it is happening:

I think this is being caused by some memory cache algorithm. Even when
explicitly telling squid not to cache the content, it retains the object
for some seconds in its memory.

I did a test on my server which is very simple. I wget just one time from
http://domain.com and after exactly 15 seconds I wget it again, and there
it is: the body came with the debug flag from webserver2. Is it
proof of something or not?

I appreciate any help.

Fabiano

Here is my squid.conf:

http_port 10.0.19.2:80
cache_mem 64 MB
maximum_object_size 4096 KB
maximum_object_size_in_memory 100 KB
fqdncache_size 1024
cache_replacement_policy lru
memory_replacement_policy lru
cache_dir ufs /iG/http_servers/squid/bin/var/cache 4 16 256
cache_access_log /iG/logs/squidzone/zis-01-access_log
logfile_rotate 0
cache_log /iG/http_servers/squid/bin/var/logs/cache.log
cache_store_log /iG/http_servers/squid/bin/var/logs/store.log
#cache_store_log none
emulate_httpd_log on
debug_options NONE,0
log_fqdn off
redirect_rewrites_host_header on

acl someserver dstdomain zone.ig.com.br
no_cache deny someserver
#always_direct allow someserver

acl alteon src 10.0.0.0/255.0.0.0
acl PURGE method PURGE
http_access allow PURGE alteon
http_access deny PURGE

quick_abort_min 0 KB
quick_abort_max 0 KB
half_closed_clients off
shutdown_lifetime 10 seconds
acl all src 0.0.0.0/0.0.0.0
acl in_localhost src 127.0.0.0/8
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80
acl CONNECT method CONNECT
acl to_rz dst 10.0.0.0/255.0.0.0
acl internal_port port 80
http_access allow all to_rz
http_access allow Safe_ports
http_access allow internal_port
http_access allow in_localhost to_localhost
http_access deny CONNECT SSL_ports
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname squid.domain.com
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_single_host off
httpd_accel_with_proxy off
httpd_accel_uses_host_header on
client_db off
coredump_dir /var/cache

squid -v
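A side note on the `no_cache deny` lines in the configuration above: in Squid 2.6 and later the `no_cache` directive is deprecated in favor of plain `cache`, so the same bypass would be spelled (acl name and domain kept from the example):

```
# Squid 2.6+ spelling of the same cache bypass
acl someserver dstdomain domain.com
cache deny someserver
```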


Re: [squid-users] Sparse Objects in Memory

2005-10-10 Thread trainier
 3.0-PRE3-20050510

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Mark Elsen [EMAIL PROTECTED] wrote on 10/10/2005 12:47:44 PM:

   I'm getting the following in my messages log, quite frequently:
 
  Oct 10 07:46:31 kalproxy (squid): Squid has attempted to read data 
from
  memory that is not present. This is an indication of of (pre-3.0) code
  that hasn't been updated to deal with sparse objects in memory. Squid
  should coredump.allowing to review the cause. Immediately preceeding 
this
  message is a dump of the available data in the format [start,end). The 
[
  means from the value, the ) means up to the value. I.e. [1,5) means 
that
  there are 4 bytes of data, at offsets 1,2,3,4.
 
  Is there a patch written for this?  Do I need to force squid to core 
in
  order to do the backtrace?  Is the backtrace still needed?
 
  Tim Rainier
  Information Services, Kalsec, INC
  [EMAIL PROTECTED]
 
 
  Squid version being used ?
 
  M.



RE: [squid-users] Squid: no_cache directive issue.

2005-10-10 Thread trainier
That isn't transparent at all, actually.  Set the environment variable 
http_proxy to the ip address (or name) and port of your
squid machine, so wget requests are proxied, which is a requirement for 
your testing purposes.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Fabiano Silos Reis [EMAIL PROTECTED] wrote on 10/10/2005 02:17:29 PM:

 Squid is configured as a kind of transparent proxy.
 
 wget resolves the name for domain.com to the ip address of the squid
 server. When squid tries to resolve domain.com, it asks a dns server,
 which answers with the ip address of the Alteon VIP. In that way it
 works as a transparent proxy.
 
 Thanks in advance,
 
 Fabiano
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Monday, October 10, 2005 12:40 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid: no_cache directive issue.
 
 Pardon the standard "is it plugged in?" question, but
 
 Does wget know there's a proxy server it needs to go through?
 Unless you're running the proxy via port 80 (or it's transparent), wget 
 does not appear to be going through a proxy, which would
 make your test useless.
 
 If your proxy is not set up transparently, you need to set the
 environment 
 variable http_proxy to the ip:port of the proxy server.
 
 Just a thought.  :-)
 
 Tim Rainier
 Information Services, Kalsec, INC
 [EMAIL PROTECTED]
 
 
 
 Fabiano Silos Reis [EMAIL PROTECTED] 
 10/10/2005 12:31 PM
 
 To
 squid-users@squid-cache.org
 cc
 
 Subject
 [squid-users] Squid: no_cache directive issue.
 
 
 
 
 
 
 Hello,
 
 Is there any kind of memory cache even if the object is marked not to
 be cached?
 
 On my tests:
 
 I tried with 
 
 acl someserver dstdomain domain.com
 no_cache deny someserver
 
 Using wget to debug it I have:
 
 [EMAIL PROTECTED] bin]# wget -S  -O x http://domain.com/
 --10:04:43--  http://domain.com/
= `x'
 Resolving domain.com... 10.0.19.2
 Connecting to domain.com[10.0.19.2]:80... connected.
 HTTP request sent, awaiting response...
  1 HTTP/1.0 302 Moved Temporarily
  2 Date: Mon, 10 Oct 2005 13:04:42 GMT
  3 Server: Apache/2.0.54 (Unix)
  4 Location: http://domain.com/@/
  5 Content-Length: 286
  6 Content-Type: text/html; charset=iso-8859-1
  7 X-Cache: MISS from squid.domain.com
  8 Connection: keep-alive
 Location: http://domain.com/@/ [following]
 --10:04:43--  http://domain.com/@/
= `x'
 Connecting to domain.com[10.0.19.2]:80... connected.
 HTTP request sent, awaiting response...
  1 HTTP/1.0 200 OK
  2 Date: Mon, 10 Oct 2005 13:04:42 GMT
  3 Server: Apache/2.0.54 (Unix)
  4 Content-Length: 1252
  5 Content-Type: text/html
  6 X-Cache: MISS from squid.domain.com
  7 Connection: keep-alive
 
 100%[===] 1,252 --.--K/s
 
 10:04:43 (11.94 MB/s) - `x' saved [1,252/1,252]
 
 As you can see the answer is not being cached by the squid server. 
 
 Explaining more about domain.com:
 
 Domain.com is served by 2 webservers that sit behind an
 Alteon switch. These two are real servers for a virtual server. The Alteon
 manipulates the connection as a transparent balancer and is
 configured with a round-robin algorithm; that is, each connection to port 80
 of the virtual server is redirected to a real server. 
 
 In the body of the HTML I put a debug message so I know where the answer
 came from. In this case, if the answer came from 'webserver1', I will see
 in the body the phrase: this came from webserver1.
 
 My problem:
 
 If the content is not being cached, why do I receive only answers from
 webserver1 when I fetch through squid? I tried running wget lots of times and
 I always get the same answer in the body, coming from webserver1. 
 
 If I point wget to get the content direct from the virtual server
 (Configured in the alteon switch) I receive different body on each get. 
 
 
 What I think is happening:
 
 I think this is being caused by some memory cache algorithm. Even when
 explicitly told not to cache the content, squid retains the object
 for seconds in its memory.
 
 I did a very simple test on my server. I ran wget just once against
 http://domain.com, and after exactly 15 seconds I ran it again, and there
 it is: the body came with the debug flag from webserver2. Is that
 proof of something or not?
 
 I appreciate any help.
 
 Fabiano
 
 Here is my squid.conf:
 
 http_port 10.0.19.2:80
 cache_mem 64 MB
 maximum_object_size 4096 KB
 maximum_object_size_in_memory 100 KB
 fqdncache_size 1024
 cache_replacement_policy lru
 memory_replacement_policy lru
 cache_dir ufs /iG/http_servers/squid/bin/var/cache 4 16 256
 cache_access_log /iG/logs/squidzone/zis-01-access_log
 logfile_rotate 0
 cache_log /iG/http_servers/squid/bin/var/logs/cache.log
 cache_store_log /iG/http_servers/squid/bin/var/logs/store.log
 #cache_store_log none
 emulate_httpd_log on
 debug_options NONE,0
 log_fqdn off
 redirect_rewrites_host_header on
 
 acl someserver dstdomain 

Re: [squid-users] Performance tweaks

2005-10-10 Thread trainier
Squid's url_regex is a hideously slow way of managing blackholed 
urls/sites/domains.
I'm not necessarily blaming the program itself; the fact is, regular 
expressions can be quite computationally expensive.
SquidGuard, on the other hand, is VERY fast and works quite well.

Lots of folks around here swear by DansGuardian too.  I personally didn't 
need the added features/complications
that DansGuardian provides, so I stick with squidGuard.
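If an external redirector is overkill, a plain-text dstdomain ACL is also far cheaper than url_regex, since squid matches domains with a lookup rather than running every regex against every request. A minimal sketch (the file path and its contents are assumptions):

```
# squid.conf -- one domain per line in the file, e.g. ".badsite.example"
acl blocked_sites dstdomain "/usr/local/etc/squid/blocked-domains"
http_access deny blocked_sites
```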

Semi-off-topic
Per a discussion with some of the admins at the North American Network 
Operators Group, I'm setting up a
service to automate updates to a centrally managed squidguard 
database.

The hope is to get more users to contribute to it (making it more 
effective) and to also make it more portable
so other services can use it.

Fyi, if anyone is interested in contributing, please email me offlist.
/semi-off-topic

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]

Hendrik Voigtländer [EMAIL PROTECTED] wrote on 10/10/2005 
04:47:37 PM:

 
 
  
  
   I set full debug, then checked my cache log.  The slowdown seems to 
   be my acl, for example:
   acl noporn1 url_regex /usr/local/etc/squid/noporn1
   which is a file I picked off the web that contains a list of porn sites, 
   about 44,318 in total.  Silly me :)
   So that is not the way to do it; I will search out other methods.
   
   Cheers, Terry
 
 I am using squidGuard with no performance impact at all to achieve 
 blacklisting. It comes with the distro I'm using (debian) and I didn't 
 bother to compile it by myself :-)
 
 Best regards,
 
 Hendrik Voigtländer



Re: [squid-users] Audio Problems

2005-10-05 Thread trainier
That's contingent on the audio player they're using and the way you've 
set up squid.

Is squid set up transparently?
Is the audio player(s) being used supportive of proxies?
If so, is the proxy parameter set?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



david brown [EMAIL PROTECTED] 
10/05/2005 03:19 PM

To
[EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Audio Problems







I don't know which FAQ you mean, because on the Squid web page the first 
entry in the FAQ says What is Squid?
I need to allow users to play audio streaming. Without a proxy that is 
possible, but when a proxy is used (in IE) it is impossible.
What do I need to do?
Thanks!

From: Mark Elsen [EMAIL PROTECTED]
Reply-To: Mark Elsen [EMAIL PROTECTED]
To: david brown [EMAIL PROTECTED]
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Audio Problems
Date: Wed, 5 Oct 2005 18:52:21 +0200

  The LAN users can't play music from the Internet with the proxy server.
  How do I resolve this?
 
 Check the first entry of the FAQ.
 
 M.

_
MSN Amor: busca tu ½ naranja http://latam.msn.com/amor/





Re: [squid-users] transparent proxy with squid

2005-10-04 Thread trainier
I would assume you'd need to do something similar to:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT 
--to-port 3128 
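For reference, the commonly documented rule for intercepting plain HTTP looks like this (a sketch -- eth0 and port 3128 are assumptions). Note that redirecting port 443 the same way generally does not work, because squid cannot decrypt or transparently relay HTTPS traffic; browsers must be told about the proxy explicitly so they issue CONNECT requests:

```
# Intercept plain HTTP from the LAN interface and hand it to squid:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```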

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Sushil Deore [EMAIL PROTECTED] 
10/04/2005 01:06 PM

To
Henrik Nordstrom [EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
[squid-users] transparent proxy with squid







Dear Henrik,

I configured a transparent proxy with squid by referring to
http://www.faqs.org/docs/Linux-mini/TransparentProxy.html#ss2.3

Though Daniel has strictly mentioned not to ask him about HTTPS with a
transparent proxy, I still have certain doubts in my mind which I'll
try to explain here.

I am setting up a wireless network for which I'll be using a transparent
proxy with squid. My transparent proxy is working fine and
serving all port 80 traffic, but I also need to serve https requests, and at
this stage I am stuck.

With a separate squid box as a proxy server I can serve the HTTPS
requests.

Kindly let me know or suggest how I can serve HTTPS requests using
the transparent proxy, or do I need any additional setup to serve
HTTPS traffic?

Thanks in advance.

With Regards,

-- Sushil.





Re: [squid-users] korean site slow and unable to open chinese sites

2005-09-30 Thread trainier
Can't get to the site at all, or does it take forever to load/time out?

If the latter is the case, consider the fact that the site is 22 hops away 
(at least from me).

If it's the former, what is the error message?  Connection Timed Out?

You might need to boost the timeout setting, because it literally took me 4-6 
minutes to load that site completely.

Tim Rainier




Abdock [EMAIL PROTECTED] 
09/30/2005 01:02 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] korean site slow and unable to open chinese sites







Hello,

Some of our users cannot open the below site via squid.  Any help??

www.mofcom.gov.cn 

Thanks and Rgds,




RE: [squid-users] problem about squid exhaust all memory

2005-09-27 Thread trainier
I'd be interested in seeing your squid.conf as well.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Chris Robertson [EMAIL PROTECTED] 
09/27/2005 04:11 PM

To
squid-users@squid-cache.org
cc

Subject
RE: [squid-users] problem about squid exhaust all memory






 -Original Message-
 From: djx [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, September 27, 2005 1:18 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] problem about squid exhaust all memory
 
 
 hi, everyone:
 I have encountered a problem; I need some help.
 
   Squid uses more and more memory continuously while it's 
 running, and it restarts when all physical memory is 
 exhausted, so my squid restarts many times a day.  It's frustrating; 
 how can I solve the problem?
 

How much physical memory does your Squid box have?  Is it doing anything 
but Squid?  Are you perhaps suffering from 
http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE6-client_db_gc?

Upgrading would not be a bad course of action in any case...

 every time it restart ,the following information is logged:
 
 
 FATAL: xcalloc: Unable to allocate 1 blocks of 4104 bytes!
 
 Squid Cache (Version 2.5.STABLE6): Terminated abnormally.
 CPU Usage: 61.889 seconds = 33.408 user + 28.481 sys
 Maximum Resident Size: 323524 KB
 Page faults with physical i/o: 1725
 2005/09/27 17:03:57| Not currently OK to rewrite swap log.
 2005/09/27 17:03:57| storeDirWriteCleanLogs: Operation aborted.
 2005/09/27 17:04:00| Starting Squid Cache version 2.5.STABLE6 
 for i386-unknown-freebsd5.0...
 2005/09/27 17:04:00| Process ID 7561
 2005/09/27 17:04:00| With 7232 file descriptors available
 2005/09/27 17:04:00| DNS Socket created at 0.0.0.0, port 49428, FD 4
 2005/09/27 17:04:00| Adding nameserver 202.99.23.252 from 
 /etc/resolv.conf
 2005/09/27 17:04:00| Unlinkd pipe opened on FD 9
 2005/09/27 17:04:00| Swap maxSize 512 KB, estimated 393846 objects
 2005/09/27 17:04:00| Target number of buckets: 19692
 2005/09/27 17:04:00| Using 32768 Store buckets
 2005/09/27 17:04:00| Max Mem  size: 262144 KB
 2005/09/27 17:04:00| Max Swap size: 512 KB
 2005/09/27 17:04:00| Store logging disabled
 2005/09/27 17:04:00| Rebuilding storage in /cms/squidcache (DIRTY)
 2005/09/27 17:04:00| Using Least Load store dir selection
 2005/09/27 17:04:00| chdir: /usr/local/squid/var/cache: (2) 
 No such file or directory
 2005/09/27 17:04:00| Current Directory is /cms/squidcache
 2005/09/27 17:04:00| Loaded Icons.
 2005/09/27 17:04:00| Accepting HTTP connections at 0.0.0.0, 
 port 80, FD 8.
 2005/09/27 17:04:00| Accepting ICP messages at 0.0.0.0, port 
 3130, FD 10.
 2005/09/27 17:04:00| WCCP Disabled.
 2005/09/27 17:04:00| Ready to serve requests.
 2005/09/27 17:04:02| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:02| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 2005/09/27 17:04:02| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:02| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 2005/09/27 17:04:04| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:04| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 2005/09/27 17:04:04| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:04| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 2005/09/27 17:04:04| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:04| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 2005/09/27 17:04:04| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:04| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 2005/09/27 17:04:07| comm_accept: FD 8: (53) Software caused 
 connection abort
 2005/09/27 17:04:07| httpAccept: FD 8: accept failure: (53) 
 Software caused connection abort
 

As for these messages: 
http://www.squid-cache.org/mail-archive/squid-users/200401/0239.html

Chris




RE: [squid-users] which is better for me ?

2005-09-26 Thread trainier
DansGuardian has WAY more flexibility.
SquidGuard is WAY faster, in my opinion.

What's your priority in terms of filtering needs?

Tim Rainier
Information Services, Kalsec, INC




Piszcz, Justin [EMAIL PROTECTED] 
09/26/2005 01:33 PM

To
Odhiambo Washington [EMAIL PROTECTED], squid-users@squid-cache.org
cc

Subject
RE: [squid-users] which is better for me ?






HTB is a way to limit bandwidth via QoS/Kernel/TC/iproute2/iptables.

However, he can use squid to limit bandwidth, which is what I would
recommend.

-Original Message-
From: Odhiambo Washington [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 26, 2005 12:47 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] which is better for me ?

* On 26/09/05 13:29 -0300, LinuXKiD wrote:
 Hi,
 
 I want to put a XXX filter on my squid.
 
 I've:
- kernel 2.4.28
- squid 2.5.stable10 with zph patch
- htb bandwidth traffic control
- 1Gb RAM
- 40 Gb SATA Disk
- AMD Athlon 64  - 2000Mhz.
- 200 hosts LAN.
 
 
 Which XXX filter is more accurate for me ?
 squid guard  ? DansGuardian ?  Other ?


DansGuardian, naturally! Out of the box, DG will do it. It's the ONLY
true content filter I know of.

What is htb?? Software or hardware? How do you use it with Squid?
Some URL of howto, please?


-Wash

http://www.netmeister.org/news/learn2quote.html

--
+==+
|\  _,,,---,,_ | Odhiambo Washington[EMAIL PROTECTED]
Zzz /,`.-'`'-.  ;-;;,_ | Wananchi Online Ltd.   www.wananchi.com
   |,4-  ) )-,_. ,\ (  `'-'| Tel: +254 20 313985-9  +254 20 313922
  '---''(_/--'  `-'\_) | GSM: +254 722 743223   +254 733 744121
+==+
The intelligence of any discussion diminishes with the square of the
number of participants.
 -- Adam Walinsky




Re: [squid-users] Problem with tcp_outgoing_address

2005-09-23 Thread trainier
Seems to me you need to change:
tcp_outgoing_address 192.168.29.254 network_local
To:
tcp_outgoing_address external_address_of_adsl_link network_local

Or am I not understanding your question?  :-)

Tim Rainier
Information Services, Kalsec, INC




Fabio Silva [EMAIL PROTECTED] 
09/23/2005 05:46 PM
Please respond to
Fabio Silva [EMAIL PROTECTED]


To
squid-users@squid-cache.org
cc

Subject
[squid-users] Problem with tcp_outgoing_address






Hi all,
I have a problem with squid;
my English is not very good.
I have 2 adsl links and I want the users to use the internet across
one of them, for example:

LINK 1 - just to receive mail and http requests
  LINUX (squid) - (network users)
LINK 2 - just for internet users (downloads..)

I've configured squid to use tcp_outgoing_address,
but when I try to use it I get the following message:

The following error was encountered:

* Socket Failure

The system returned:

(99) Cannot assign requested address

Squid is unable to create a TCP socket, presumably due to excessive
load. Please retry your request.

Your cache administrator is [EMAIL PROTECTED]

I need help to resolve it, and, if it can't be fixed, how else can I do it?

my squid.conf is..

acl network_local src 192.168.29.0/255.255.255.0
tcp_outgoing_address 192.168.29.254 network_local


Thanks..


--
---
Fabio S. Silva
[EMAIL PROTECTED]




Re: [squid-users] Squid disconnects internet...

2005-09-20 Thread trainier
Heh

This actually sounds a lot like a retransmit issue
From a shell (as root) try the following command:

while true
do
    netstat -s | grep retrans
    sleep 3
    clear
done

This will report the network retransmits occurring on the network 
interface(s).
If you see this number grow rapidly (several per second, or per 3 seconds 
in this case),
then it's a good indication that your NIC is retransmitting packets with 
the switch or
router it's plugged into.  I know this to be a common issue with Cisco 
devices.  It generally
occurs because of a duplex mismatch, i.e. the router/switch has the port for 
your server configured as
auto-negotiate, but your server is attempting to force 100/full (or 
vice-versa).

Might be something to consider/look at.
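To check the negotiated speed and duplex from the Linux side, ethtool reports the current settings. A sketch -- the real command needs root and your actual interface name (eth0 is an assumption), so a canned sample of typical output is parsed here to show the filtering step:

```shell
# On the real box you would run:  ethtool eth0
# Canned sample of typical ethtool output (an assumption, for illustration):
sample='Settings for eth0:
        Speed: 100Mb/s
        Duplex: Full
        Auto-negotiation: on'
printf '%s\n' "$sample" | grep -E 'Speed|Duplex|Auto-negotiation'
full=$(printf '%s\n' "$sample" | grep -c 'Duplex: Full')
echo "full-duplex lines: $full"
```

If the server reports Full while the switch port is hard-set to half (or vice-versa), that mismatch is exactly the condition that produces the retransmit storms described above.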

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Jorge A. Rodriguez [EMAIL PROTECTED] 
09/20/2005 02:43 PM

To
[EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Squid disconnects internet...






Hi, thank you for your response.  Yes, I used the package from debian 
(apt); it has been precompiled with that option.  As a transparent proxy 
it works, it just becomes intermittent: it caches some pages, then it 
doesn't connect to anything.  I even get some statistics from sarg (squid 
report tool), but every 2 to 3 minutes the clients lose all connection to 
the internet, and then it comes back in another 2 to 3 minutes, and so on...
MSN clients disconnect, Firefox just says it's looking up the website and 
that's it, and then everything works fine again... the strangest thing.
(I am changing the network cards today, just to rule that out...)
[EMAIL PROTECTED] wrote:

Jorge,

Squid requires specific compilation parameters if you plan to run the 
cache as transparent:
  --enable-ipf-transparent
or
  --enable-pf-transparent
respectively...

Did you use either of these?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Jorge A. Rodriguez [EMAIL PROTECTED] 
09/20/2005 02:10 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] Squid disconnects internet...






Hi,
I am having a strange problem.  My sarge (debian) box uses squid 2.5 
stable 9.  If I use the box to share internet it works fine, but if I add 
squid (as a transparent proxy) it works for a little while, then everything 
gets disconnected after 2 minutes (more or less...), and then after some 
time it gets connected again (I don't get responses from the internet, 
msn disconnects, telnet connections hang...).  All I get from cache.log is
 CACHEMGR: unknown@127.0.0.1 requesting 'storedir'
 CACHEMGR: unknown@127.0.0.1 requesting 'counters'
  httpReadReply: Excess data from GET
 From Access.log I get TCP MISS/(with different numbers 200, 0, 304)
Thank you.


NOTICE: This electronic transmission contains information from
GlobalVantage Design Source, which may be confidential or privileged.
This information is intended to be for the use of the individual or
entity named above. If you are not the intended recipient, please notify
GlobalVantage immediately of your receipt of this transmission, delete
it, and be aware that any disclosure, copying, distribution or use of
the contents of this transmission is prohibited.





 



-- 
Jorge Rodriguez
GlobalVantage Design Source
Information Technologies, Supervisor.
Phone: (+52 33)3121 34 32 xt. 4004
Fax: (+52 33)3121 34 32 xt. 4017
www.globalvantage.biz







RE: [squid-users] Squid disconnects internet...

2005-09-20 Thread trainier
The only WCCP-specific compile-time option I'm aware of is the one to disable 
WCCP.
If you want to use WCCP through a transparent proxy, I would assume that 
you need to specifically compile squid using the two noted compile options.
It sounds, however, like some of the linux distributions out there are 
setting specific compile options when they build their packages.

I've never used a package install for squid, so I can't be sure this is 
true.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Shoebottom, Bryan [EMAIL PROTECTED] 
09/20/2005 02:53 PM

To
[EMAIL PROTECTED], squid-users@squid-cache.org
cc

Subject
RE: [squid-users] Squid disconnects internet...







Hello,

This is off topic, but do you need this compile time option for WCCP
transparent caches?

Thanks,
Bryan


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: September 20, 2005 2:29 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid disconnects internet...

Jorge,

Squid requires specific compilation parameters if you plan to run the 
cache as transparent:
  --enable-ipf-transparent
or
  --enable-pf-transparent
respectively...

Did you use either of these?

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Jorge A. Rodriguez [EMAIL PROTECTED] 
09/20/2005 02:10 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] Squid disconnects internet...






Hi,
I am having a strange problem, my sarge (debian) box uses squid 2.5 
stable 9, if I use the box to share internet it works fine, but if I add

squid (as transparent proxy) it works for a little while then everything

gets disconnected after  2 minutes(more or less...) and then after some 
other time it gets connected again (I dont get response from internet, 
msn disconnects, telnet connections hangs...) all I get from cache.log
is
 CACHEMGR: unknown@127.0.0.1 requesting 'storedir'
 CACHEMGR: unknown@127.0.0.1 requesting 'counters'
  httpReadReply: Excess data from GET
 From Access.log I get TCP MISS/(with different numbers 200, 0, 304)
Thank you.










Re: [squid-users] What means squid:  ???

2005-09-16 Thread trainier
How are you attempting to start squid?

Tim Rainier
Information Services, Kalsec, INC




Daniel Navarro [EMAIL PROTECTED] 
09/16/2005 11:29 AM

To
Squid Cache squid-users@squid-cache.org
cc

Subject
[squid-users] What means squid: ???






Hi all fellows,

My squid is not starting at boot time, and yes, it is
chkconfig'd on at run levels 3, 4 and 5.

Started manually, it works fine.

I see these lines in boot.log:

Sep 13 15:48:21 ngproxy squid: Starting squid: 
Sep 13 15:48:22 ngproxy squid: .
Sep 13 15:48:22 ngproxy squid: 
Sep 13 15:48:22 ngproxy squid: 
Sep 13 15:48:22 ngproxy rc: Starting squid:  succeeded

What is going on, and how can I solve it?

Regards in advance, from Venezuela, Daniel Navarro

__
Correo Yahoo!
Espacio para todos tus mensajes, antivirus y antispam ¡gratis! 
Regístrate ya - http://correo.espanol.yahoo.com/ 




Re: [squid-users] Reverse proxy question

2005-09-15 Thread trainier
Personally, I'd use a proxy configuration script that exempts internal 
requests from being proxied,
then set your clients up to use the script.

Note that I'm not suggesting the use of WPAD.  IE and firefox/mozilla, for 
example, have an option in their network settings to
use an automatic proxy configuration script.  The script uses javascript 
to define where/how requests should be handled.

The squid FAQ has several links to instructions on how to 
create autoconfiguration scripts.

I find the script to be good practice because you can adjust 
the proxy setup simply by updating one script.

It will most likely work around your 403 error, as well.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
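A minimal proxy autoconfiguration script along those lines might look like this (a sketch; the internal domain, subnet, and proxy address are assumptions, and the helper functions isPlainHostName, dnsDomainIs, and isInNet are supplied by the browser's PAC engine):

```
function FindProxyForURL(url, host) {
    // Internal requests bypass the proxy entirely.
    if (isPlainHostName(host) ||
        dnsDomainIs(host, ".internal.example.com") ||
        isInNet(host, "192.168.0.0", "255.255.0.0"))
        return "DIRECT";
    // Everything else goes through squid, falling back to direct.
    return "PROXY 192.168.1.10:3128; DIRECT";
}
```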



Oleksii Krykun [EMAIL PROTECTED] 
09/15/2005 09:34 AM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] Reverse proxy question






I have WWW server www.myserver on Apache with some links to internal 
servers 
as

http://mywwwserver/link1
http://mywwwserver/link2
http://mywwwserver/link3

On apache I rewrite:

RewriteEngine on
RewriteRule ^/link1(.*) http://myproxyserver/link1$1 [P]
RewriteRule ^/link2(.*) http://myproxyserver/link2$1 [P]
RewriteRule ^/link3(.*) http://myproxyserver/link3$1 [P]

I used MS Proxy 2.0 as myproxyserver before.
On my proxyserver following rules were applied:

http://myproxyserver/link1 - http://10.1.1.1/dir1
http://myproxyserver/link2 - http://10.1.1.1/dir2
http://myproxyserver/link3 - http://10.1.1.2/

All works fine.

Now I change MS Proxy with Squid+SquidGuard.
In squid.conf I use:
http_port=80
httpd_accel_host mynewproxy
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

mywwwserver is allowed by acl settings.

Rewrite rules from squidGuard.conf are following:

[EMAIL PROTECTED]://mynewproxy/[EMAIL PROTECTED]://10.1.1.1/dir1/@i
[EMAIL PROTECTED]://mynewproxy/[EMAIL PROTECTED]://10.1.1.1/dir2/@i
[EMAIL PROTECTED]://mynewproxy/[EMAIL PROTECTED]://10.1.1.2/@i

All works for outside requests but for internal users request to e.g. 
http://mywwwserver/link1/file.html gives me 403 error.

Where I am wrong?

Thanks,
Oleksii





[squid-users] Replacement/Removal Policy Type

2005-09-01 Thread trainier
If I want to use the heap LFUDA replacement/removal policy, what needs 
to go into the list of modules section of the --enable-removal-policies= 
parameter for configuring/compiling?
I went into src/repl to look for policy names.  Tried a few names by 
guessing and am unable to come up with anything that works.

I've no specific need for LFUDA, short of wanting to benchmark some 
different setup scenarios.

Can anyone help me with getting squid compiled to support the other policy 
types?

Tim Rainier


Re: [squid-users] Replacement/Removal Policy Type

2005-09-01 Thread trainier
I take that back, I was falsely using quotes.
Your suggestion is working, thank you.

Tim Rainier




Mark Elsen [EMAIL PROTECTED] 
09/01/2005 09:27 AM

To
[EMAIL PROTECTED] [EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Replacement/Removal Policy Type






On 9/1/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 If I want to use the heap LFUDA replacement/removal policy, what needs
 to go into the list of modules section of the 
--enable-removal-policies=
 parameter for configuring/compiling?
 I went into src/repl to look for policy names.  Tried a few names by
 guessing and am unable to come up with anything that works.
 
 I've no specific need for LFUDA, short of wanting to benchmark some
 different setup scenarios.
 
 Can anyone help me with getting squid compiled to support the other 
policy
 types?
 
 Tim Rainier
 

  You just need :

 --enable-removal-policies=heap

  M.




Re: [squid-users] Replacement/Removal Policy Type

2005-09-01 Thread trainier
Once I got the configure option sorted, I added the following line to 
squid.conf:
cache_replacement_policy heap LFUDA

Is this correct?
If so, I'm missing something, because squid continually page faults and bogs 
the machine down.  (average of 100 processes waiting on the CPU).

What gives?

Tim Rainier




Mark Elsen [EMAIL PROTECTED] 
09/01/2005 10:13 AM

To
[EMAIL PROTECTED] [EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Replacement/Removal Policy Type






  Tried that already.  It fails with a no policy named 'heap' is 
available
 
 Tim Rainier
 
 
Please post the exact output of the configure session, using
for instance the 'script' tool.

M.




Re: [squid-users] Replacement/Removal Policy Type

2005-09-01 Thread trainier
Added memory_replacement_policy heap LFUDA
Things appear to be up and running just fine now.

Just for good measure, am I missing anything else?

Tim Rainier
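For anyone finding this thread later, the working combination described above amounts to the following sketch (the lru entry in the module list is an assumption, included so the default replacement policy remains available):

```
# at build time (no quotes around the module list):
#   ./configure --enable-removal-policies=heap,lru
# in squid.conf:
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA
```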




[EMAIL PROTECTED] 
09/01/2005 12:43 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] Replacement/Removal Policy Type






Once I got the configure option sorted, I added the following line to 
squid.conf:
cache_replacement_policy heap LFUDA

Is this correct?
If so, I'm missing something, because squid continually page faults and bogs 

the machine down.  (average of 100 processes waiting on the CPU).

What gives?

Tim Rainier




Mark Elsen [EMAIL PROTECTED] 
09/01/2005 10:13 AM

To
[EMAIL PROTECTED] [EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Replacement/Removal Policy Type






  Tried that already.  It fails with a no policy named 'heap' is 
available
 
 Tim Rainier
 
 
Please post the exact output of the configure session, using
for instance the 'script' tool.

M.






[squid-users] Fatal Errors....

2005-08-29 Thread trainier
This is the third crash.  No core files.

1.)  Is there some specific explanation for why there isn't a core file?
2.)  The cache/store logs give absolutely nothing that explains these 
fatal errors at all.  Is there something else I could look at?

Maybe attach an strace session to the running process.

All I know is that all the sudden, squid stops servicing requests. Machine 
resources are plentiful (CPU, disk space, memory and network)
Not sure what else to look at.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
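On question 1: core dumps are commonly disabled by the shell's resource limit rather than by squid itself. A sketch of the usual checks (the binary and cache paths are assumptions):

```shell
# Raise the core-file limit in squid's init script before the daemon starts:
ulimit -c unlimited 2>/dev/null
cur=$(ulimit -c)
echo "core file size limit: $cur"
# squid drops cores in coredump_dir (or its current directory), e.g.:
#   coredump_dir /var/cache
# After a crash, get the backtrace (paths assumed):
#   gdb /usr/local/squid/sbin/squid /var/cache/core
#   (gdb) where
```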

- Forwarded by Tim Rainier/KAL/Kalsec on 08/29/2005 11:10 AM -

[EMAIL PROTECTED] (nobody) 
08/29/2005 11:06 AM

To
[EMAIL PROTECTED]
cc

Subject







From: squid
To: [EMAIL PROTECTED]
Subject: The Squid Cache (version 3.0-PRE3-20050510) died.

You've encountered a fatal error in the Squid Cache version 
3.0-PRE3-20050510.
If a core file was created (possibly in the swap directory),
please execute 'gdb squid core' or 'dbx squid core', then type 'where',
and report the trace back to [EMAIL PROTECTED]

Thanks!




Re: [squid-users] dead squid, Still looking,

2005-08-29 Thread trainier
Yes:
-F    Don't serve any requests until the store is rebuilt.

My guess is that squid is trying to rebuild the store, but is too busy 
servicing requests.
If that doesn't work, you can manually clear the cache and the store file, and 
have squid rebuild the cache hierarchy (squid -z).
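The manual rebuild sketched above would look something like this (the cache path is an assumption; adjust to your cache_dir setting and run as root or the squid user):

```
squid -k shutdown              # stop the running squid cleanly
rm -rf /var/cache/squid/*      # clear the on-disk store (path assumed)
squid -z                       # recreate the cache_dir swap directories
squid -F                       # start, serving nothing until the store is rebuilt
```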

Tim Rainier
Information Services, Kalsec, INC




John R. Van Lanen, Network Operations - TCCSA [EMAIL PROTECTED] 
08/29/2005 12:12 PM

To
squid-users@squid-cache.org
cc
[EMAIL PROTECTED]
Subject
[squid-users] dead squid, Still looking,






Me again, sorry,

My squid dies; when it does, the cache.log almost always shows the same 
last entry:

Store rebuilding is 4.4% (or some other number) complete

Then nothing. The squid status shows:

squid dead but pid file exists

Does anyone know in ES 3 if there is a problem with processes just
disappearing?

I've tried increasing cache_mem from 8 to 768,
disabled store.log, and
increased ifp_redirector processes from 10 to 30.

Any other settings anyone could think of?

Originally posted:

I'm running RedHat ES 3.0, with the Squid 2.5.STABLE3 that comes with it.

The issue I have is that squid will stop dead.

Doing a squid status shows:

squid dead but pid file exists
squid: ERROR Could not send signal 0 to process 5530: (3) No such process

I have seen this before; restarting squid or rebooting solves the issue 
for a while, and the 5530 number changes.

Cache.log shows nothing but normal stuff.

Searching the net shows others with the same issues but can't find a 
solution.

Let me know if you can help.  Thanks.
__
| John R. Van Lanen, Manager of Network Operations  Voice: (330) 264-6047 |
| Tri-County Computer Services Association (TCCSA)  Fax:   (330) 264-5703 |
| Do not meddle in the affairs of dragons,      E-mail: [EMAIL PROTECTED] |
|  because you are crunchy and taste good with ketchup                    |
--




Re: [squid-users] dead squid, Still looking,

2005-08-29 Thread trainier
While we're out there talking about cache size limits, I need a 
refresher

What is the general rule in terms of setting a cache size limit?
I obviously want the cachable space to be as large as possible, but I seem 
to experience a huge decrease in performance and an increase in crashes as 
I increase the cache size.

Is there some general rule comparing physical RAM and CPU which would 
help me set the optimal cache size limit?

I have 1 GB of RAM, 2 GB of swap space, and a 2.4 GHz Xeon processor.
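As a back-of-the-envelope sketch (the roughly 10 MB of RAM per GB of disk cache figure is the commonly cited rule of thumb for squid 2.x's in-memory store index, not something stated in this thread; the reserved amount is likewise an illustrative assumption):

```shell
RAM_MB=1024                      # the 1 GB box described above
RESERVED_MB=384                  # hypothetical: OS, cache_mem, helper processes
INDEX_MB_PER_GB=10               # approx. in-core index cost per GB of cache_dir
# RAM left for the store index, divided by the per-GB index cost, bounds
# how large a cache_dir this box can comfortably index.
MAX_CACHE_GB=$(( (RAM_MB - RESERVED_MB) / INDEX_MB_PER_GB ))
echo "${MAX_CACHE_GB}"
```

Under these assumptions the box tops out around a few tens of GB of cache before the index starts pushing the machine into swap, which matches the "bigger cache, more crashes" symptom above.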

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



John R. Van Lanen, Network Operations - TCCSA [EMAIL PROTECTED] 
08/29/2005 04:46 PM

To
[EMAIL PROTECTED]
cc
squid-users@squid-cache.org, [EMAIL PROTECTED]
Subject
Re: [squid-users] dead squid, Still looking,






Start debug level 1 in squid.conf.  The problem with strace is that squid
will start another process, and that new process is the one that it can't
find before it dies.  This happens in milliseconds: no time to enter the
command or to know which process it will be.  Squid, from what I see,
always starts two processes, a main one and another.  The other is the one
it can't find, therefore making squid die.

Example
squid dead but pid file exists
squid: ERROR Could not send signal 0 to process 5530: (3) No such
process,

This process was generated seconds before it attempts to send signal 0 to 
the process.

Could not possibly do a trace before it happens.

Sorry if this is confusing but it's confusing me. I know other people run 
squid
with more users than I do on more limited machines.

 The squid cache has reached its size limit and is being rebuilt to
 compensate...

 You could load squid in debug mode.  There are a few debug mode options
 available for you...
 Just use squid -? to get a list of options.

 Alternatively, you can attach an strace session to squid to see exactly
 what it's doing.
 strace -p <squid pid> > /var/log/squid.trace.out 2>&1

 Tim Rainier
 Information Services, Kalsec, INC




 John R. Van Lanen, Network Operations - TCCSA [EMAIL PROTECTED]
 08/29/2005 03:29 PM

 To
 [EMAIL PROTECTED]
 cc
 squid-users@squid-cache.org, [EMAIL PROTECTED]
 Subject
 Re: [squid-users] dead squid, Still looking,






 If I do this, it's a matter of hours and the same occurs again.  What
 triggers the process such that squid feels it needs to start again and
 add a new process, and why did the old process die to begin with?  In
 other words, what triggers the need to do this over and over again?
 When doing a squid status I only ever see 2 processes.

  Here is the last 1/8 of the file:

   2005/08/19 17:28:21| Starting Squid Cache version 2.5.STABLE3 for i386-redhat-linux-gnu...
   2005/08/19 17:28:21| Process ID 5486
   2005/08/19 17:28:21| With 1024 file descriptors available
   2005/08/19 17:28:21| DNS Socket created at 0.0.0.0, port 32785, FD 4
   2005/08/19 17:28:21| Adding nameserver 10.1.1.101 from /etc/resolv.conf
   2005/08/19 17:28:21| helperOpenServers: Starting 10 'ifp_redirector' processes
   2005/08/19 17:28:21| User-Agent logging is disabled.
   2005/08/19 17:28:21| Referer logging is disabled.
   2005/08/19 17:28:21| Unlinkd pipe opened on FD 19
   2005/08/19 17:28:21| Swap maxSize 2048000 KB, estimated 157538 objects
   2005/08/19 17:28:21| Target number of buckets: 7876
   2005/08/19 17:28:21| Using 8192 Store buckets
   2005/08/19 17:28:21| Max Mem  size: 8192 KB
   2005/08/19 17:28:21| Max Swap size: 2048000 KB
   2005/08/19 17:28:21| Rebuilding storage in /usr/squid (DIRTY)
   2005/08/19 17:28:21| Using Least Load store dir selection
   2005/08/19 17:28:21| chdir: /var/spool/squid: (2) No such file or directory
   2005/08/19 17:28:21| Current Directory is /root
   2005/08/19 17:28:21| Loaded Icons.
   2005/08/19 17:28:22| Accepting HTTP connections at 0.0.0.0, port 7034, FD 21.
   2005/08/19 17:28:22| Accepting ICP messages at 0.0.0.0, port 3130, FD 22.
   2005/08/19 17:28:22| WCCP Disabled.
   2005/08/19 17:28:22| Ready to serve requests.
   2005/08/19 17:28:22| Store rebuilding is  3.5% complete
   2005/08/19 17:28:25| Starting Squid Cache version 2.5.STABLE3 for i386-redhat-linux-gnu...
   2005/08/19 17:28:25| Process ID 5511
   2005/08/19 17:28:25| With 1024 file descriptors available
   2005/08/19 17:28:25| DNS Socket created at 0.0.0.0, port 32785, FD 4
   2005/08/19 17:28:25| Adding nameserver 10.1.1.101 from /etc/resolv.conf
   2005/08/19 17:28:25| helperOpenServers: Starting 10 'ifp_redirector' processes
   2005/08/19 17:28:25| User-Agent logging is disabled.
   2005/08/19 17:28:25| Referer logging is disabled.
   2005/08/19 17:28:25| Unlinkd pipe opened on FD 19
   2005/08/19 17:28:25| Swap maxSize 2048000 KB, estimated 157538 objects
   2005/08/19 17:28:25| Target number of buckets: 7876
   2005/08/19 17:28:25| Using 8192 Store buckets
   2005/08/19 17:28:25| Max Mem  size: 8192 KB
   2005/08/19 17:28:25| Max Swap size: 2048000 KB
   2005/08/19 17:28:25| Rebuilding storage in 

Re: [squid-users] new to squid

2005-08-17 Thread trainier
Actually,  ./configure --help is quite sufficient at displaying 
compile-time options and their descriptions.
I would start there.

Tim Rainier




Abdock [EMAIL PROTECTED] 
08/17/2005 01:09 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] new to squid







Dear All,

I need to set up a transparent squid box and want to use CentOS 4. 
Getting squid from source is great; can anybody help me with the compile 
options?

We have around 1,000 users and a bandwidth of 4 Mb in / 1 Mb out.


Thanks a lot,

Ab.





[squid-users] Unable to rebuild cache

2005-08-05 Thread trainier
Squid's cache limit is set to 4GB.
When the cache fills up and squid attempts to rebuild, it dies and reloads 
itself continually, failing to rebuild the cache.
My squid.conf is below:

--- squid.conf ---
cache_effective_user nobody
log_fqdn on
http_port 8000
icp_port 3130
htcp_port 4827
udp_incoming_address 0.0.0.0
udp_outgoing_address 255.255.255.255
tcp_outgoing_address 208.224.3.155
icp_query_timeout 2000
maximum_icp_query_timeout 2000
mcast_icp_query_timeout 2000
dead_peer_timeout 10 seconds
cache_mem 64 MB
maximum_object_size 2 KB
half_closed_clients off
#cache_mem 4048 MB
cache_swap_low 95 
cache_swap_high 99 
#maximum_object_size 100 KB
minimum_object_size 0 KB
visible_hostname kalproxy.kalsec.com
cache_dir ufs /services/squid/var/cache 4048 16 256
cache_access_log /services/squid/var/logs/access.log
cache_log /services/squid/var/logs/cache.log
ftp_passive on
ftp_sanitycheck off
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 95
acl kalmail src mail.kalsec.com
acl hitachi dst 192.168.1.117
acl manager proto cache_object
acl intranet dst 192.168.1.0/255.255.255.0
acl one_nine_two src 192.168.1.0/255.255.255.0
acl one_seven_two_twenty_four src 172.24.0.0/255.255.0.0
acl one_seven_two_sixteen_two_fifty src 172.16.250.0/255.255.255.0
acl one_seven_two_twenty_four_ten src 172.24.10.0/255.255.255.0
acl hr src 172.16.100.0/255.255.255.0
acl vine src 192.168.2.0/255.255.255.0
acl vpn src 192.168.99.0/255.255.255.0
acl all src 0.0.0.0/0.0.0.0
http_access allow kalmail
http_access allow intranet
http_access allow one_nine_two
http_access allow one_seven_two_twenty_four
http_access allow one_seven_two_sixteen_two_fifty
http_access allow one_seven_two_twenty_four_ten
http_access allow hr
http_access allow hitachi
http_access allow vpn
http_access allow manager
http_access allow vine
redirect_program /services/squidGuard/bin/squidGuard -c 
/services/squid/etc/squidguard.conf
redirect_children 50
cachemgr_passwd 8675309
cache_mgr [EMAIL PROTECTED]
coredump_dir /services/squid/core
http_access deny all
--- squid.conf ---

Tim Rainier



RE: [squid-users] Unable to rebuild cache

2005-08-05 Thread trainier
All it ever reported was that the store was 1.5% rebuilt, and then it 
would show it starting back up.
Yes, that's correct: it never even reported in the log that it was 
quitting and re-starting.

Tim Rainier




Chris Robertson [EMAIL PROTECTED] 
08/05/2005 03:29 PM

To
squid-users@squid-cache.org
cc

Subject
RE: [squid-users] Unable to rebuild cache






 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 05, 2005 5:40 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Unable to rebuild cache
 
 
 Squid's cache limit is set to 4GB.
 When the cache fills up and squid attempts to rebuild, it 
 dies and reloads 
 itself continually, failing to rebuild the cache.
 My squid.conf is below:
 
  --- squid.conf ---
 cache_effective_user nobody
 log_fqdn on
 http_port 8000
 icp_port 3130
 htcp_port 4827
 udp_incoming_address 0.0.0.0
 udp_outgoing_address 255.255.255.255
 tcp_outgoing_address 208.224.3.155
 icp_query_timeout 2000
 maximum_icp_query_timeout 2000
 mcast_icp_query_timeout 2000
 dead_peer_timeout 10 seconds
 cache_mem 64 MB
 maximum_object_size 2 KB
 half_closed_clients off
 #cache_mem 4048 MB
 cache_swap_low 95 
 cache_swap_high 99 
 #maximum_object_size 100 KB
 minimum_object_size 0 KB
 visible_hostname kalproxy.kalsec.com
 cache_dir ufs /services/squid/var/cache 4048 16 256
 cache_access_log /services/squid/var/logs/access.log
 cache_log /services/squid/var/logs/cache.log
 ftp_passive on
 ftp_sanitycheck off
 refresh_pattern ^ftp:   144020% 10080
 refresh_pattern ^gopher:14400%  1440
 refresh_pattern .   0   20% 4320
 quick_abort_min 0 KB
 quick_abort_max 0 KB
 quick_abort_pct 95
 acl kalmail src mail.kalsec.com
 acl hitachi dst 192.168.1.117
 acl manager proto cache_object
 acl intranet dst 192.168.1.0/255.255.255.0
 acl one_nine_two src 192.168.1.0/255.255.255.0
 acl one_seven_two_twenty_four src 172.24.0.0/255.255.0.0
 acl one_seven_two_sixteen_two_fifty src 172.16.250.0/255.255.255.0
 acl one_seven_two_twenty_four_ten src 172.24.10.0/255.255.255.0
 acl hr src 172.16.100.0/255.255.255.0
 acl vine src 192.168.2.0/255.255.255.0
 acl vpn src 192.168.99.0/255.255.255.0
 acl all src 0.0.0.0/0.0.0.0
 http_access allow kalmail
 http_access allow intranet
 http_access allow one_nine_two
 http_access allow one_seven_two_twenty_four
 http_access allow one_seven_two_sixteen_two_fifty
 http_access allow one_seven_two_twenty_four_ten
 http_access allow hr
 http_access allow hitachi
 http_access allow vpn
 http_access allow manager
 http_access allow vine
 redirect_program /services/squidGuard/bin/squidGuard -c 
 /services/squid/etc/squidguard.conf
 redirect_children 50
 cachemgr_passwd 8675309
 cache_mgr [EMAIL PROTECTED]
 coredump_dir /services/squid/core
 http_access deny all
 --- squid.conf ---
 
 Tim Rainier
 
 

The cache.log might hold some clues.  Check it out, and if you don't see 
anything obvious, share it with the list.

Chris




Re: [squid-users] still having problems with Microsoft Update

2005-08-04 Thread trainier
This issue has been discussed numerous times on this list.
For an archive search, try: 
http://www.google.com/search?q=site:squid-cache.org+%2B%22Windows+Update%22&hl=en&lr=&start=30&sa=N

Tim Rainier




Matt Ashfield [EMAIL PROTECTED] 
08/04/2005 11:36 AM
Please respond to
[EMAIL PROTECTED]


To
squid-users@squid-cache.org
cc

Subject
[squid-users] still having problems with Microsoft Update






Hi All,

I'm running squid 2.5 on a RedHat Enterprise server. I'm running it as a
transparent proxy, but am having problems getting it to allow users to get
to windowsupdate properly. It can get to the site, but when it checks for
updates, it fails. I've tried adjusting my MTU size, but this did not 
help.
I think my issue is that WindowsUpdate uses port 443 and I'm not sure if 
I'm
accommodating this in my configuration correctly. Below is as much info as 
I
think may be useful.

My iptables rules are below. The first line redirects all port 80 
requests to my squid port of 3128. The other two lines are for DNS:
iptables -t nat -A PREROUTING -s 192.168.144.0/23 -p tcp --dport 80 -j 
REDIRECT --to-port 3128
iptables -t nat -A POSTROUTING -p tcp --dport 53 -j SNAT --to-source
x.x.144.200
iptables -t nat -A POSTROUTING -p udp --dport 53 -j SNAT --to-source
x.x.144.200

Within my squid.conf, I am running squid on port 3128. I have a redirector
script, and have the following lines that seem pertinent: 
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 563 443
acl Safe_ports port 80 21 70 210 911 1025-65535
acl Safe_ports port 280 # http-mgmt 
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

redirector_access allow REDIRECT !SSL_ports

As well, to allow windowsupdate sites I have:
acl NoRedirect url_regex -i .*microsoft\.com
acl NoRedirect url_regex -i .*akamai\.net
acl NoRedirect url_regex -i .*akamai\.com
acl NoRedirect url_regex -i .*windows\.com
acl NoRedirect url_regex -i .*windowsupdate\.com
acl NoRedirect url_regex -i .*windowsupdate\.net
acl NoRedirect url_regex -i .*msft\.com
acl NoRedirect url_regex -i .*msft\.net
acl NoRedirect url_regex -i .*nsatc\.com
acl NoRedirect url_regex -i .*nsatc\.net
acl NoRedirect url_regex -i .*edgesuite\.net
acl NoRedirect url_regex -i .*akadns\.com
acl NoRedirect url_regex -i .*akadns\.net
acl NoRedirect url_regex -i .*207.46.*
acl NoRedirect url_regex -i .*64.2.21.*
acl NoRedirect url_regex -i .*unb\.ca.*
acl NoRedirect url_regex -i windowsupdate
redirector_access deny NoRedirect
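As a hedged alternative sketch (not from this thread, and untested against this setup): most of the url_regex lines above match whole domains, which squid's dstdomain acl type can express without regex cost. The IP-based entries would still need dst or url_regex. A condensed version might look like:

```
# Hypothetical consolidation; a leading dot makes dstdomain match the
# domain and all of its subdomains.
acl NoRedirect dstdomain .microsoft.com .windows.com .msft.com .msft.net
acl NoRedirect dstdomain .windowsupdate.com .windowsupdate.net
acl NoRedirect dstdomain .akamai.net .akamai.com .akadns.com .akadns.net
acl NoRedirect dstdomain .nsatc.com .nsatc.net .edgesuite.net
redirector_access deny NoRedirect
```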


Any help is greatly appreciated.

Cheers

Matt






Re: [squid-users] tproxy

2005-08-04 Thread trainier
First and foremost, you'll need to use the --enable-ipf-transparent 
option when you run configure.
Second, you'll want to search the Squid FAQ.  There are some useful tips 
there about how to set that up.

Tim Rainier




Rodrigo Gesswein [EMAIL PROTECTED] 
08/04/2005 02:00 PM
Please respond to
Rodrigo Gesswein [EMAIL PROTECTED]


To
squid-users@squid-cache.org
cc

Subject
[squid-users] tproxy






Hi!

I'm new to the squid users mailing list and I have some questions
regarding the implementation of cttproxy and squid.

Basically, I can't put all the pieces together to build a really
transparent squid box. Does anyone have a mini-howto or tips about
this implementation?

What I did:

1.- Downloaded and installed the latest version of cttproxy, for kernel
2.6.12 and iptables 1.2.7a.

2.- Compiled squid-2.5.stable5 with the JES patch:
http://www.squid-cache.org/mail-archive/squid-dev/200404/0032.html

3.- Followed the instructions from:
http://www.sanog.org/resources/sanog4-devdas-transproxy.pdf

In all the tests I made, the http request comes with the IP of my squid 
box and not the client IP.
Any tips or ideas?

Thank you very much

Rodrigo.




Re: [squid-users] 407 Error

2005-08-03 Thread trainier
Why not use log rotation?

Or is this not a *nix box?

Tim Rainier




Carlos Eduardo Gomes Marins [EMAIL PROTECTED] 
08/03/2005 04:50 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] 407 Error






Hi all,
 
Due to a large number of users (5000) and a lack of disk space for
logging, I'm trying to find out how to stop logging 407 error messages.
Is there a way to change this behavior at compile time or in the source
files (before compiling)?
Thanks,
 
Carlos.




Re: [squid-users] proxy.pac

2005-08-02 Thread trainier
Yes, only when using WPAD.

Although some of the proxy.pa requests did make it to the webserver, the 
majority of those requests required me to actually sniff the machines 
manually to find them. 

I wonder if they resolved the issue in a relatively recent service pack 
or something similar. 
I know for a fact that, about a year ago, it was happening on a lot of 
machines here.

Like I said, we stopped using WPAD around the time they stopped working 
on the RFC.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Kevin [EMAIL PROTECTED] 
08/01/2005 07:25 PM
Please respond to
Kevin [EMAIL PROTECTED]


To
[EMAIL PROTECTED] [EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] proxy.pac






On 8/1/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I'd reply to the question sent to the list, but I deleted it already.
 
 There's a bug in IE that truncates the last character of the
 autoconfiguration file.

If I'm reading this right, you're saying that this problem only happens
when using Proxy Automatic Configuration (PAC) in combination with
the automatic discovery feature (WPAD) of MSIE?


 The problem is that the packet which requests that file sometimes gets
 fragmented, not always.
 This essentially causes IE to request two files: proxy.pa and c
 A very simple work-around is to copy proxy.pac to proxy.pa
 You should see somewhat more consistency here.

Interesting.  We support many thousands of Windows workstations,
all using proxy.pac but none using WPAD.

I do not see any requests for proxy.pa in the logs on the web server
hosting the PAC file.  I do see a ton of errors for some really humorous
typos -- it's amazing how many different ways there are to creatively
spell proxy :)


Kevin Kadow

(P.S.  I do see a high number of requests for proxy.pac?Type=WMT,
is anybody else seeing these?)




Re: [squid-users] proxy.pac

2005-08-02 Thread trainier
I should've clarified that. 
We used the DNS records for AutoDiscovery.

I'm not sure if that matters, but we didn't use DHCP.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



Rodrigo A B Freire [EMAIL PROTECTED] 
08/01/2005 08:33 PM

To
squid-users@squid-cache.org
cc

Subject
Re: [squid-users] proxy.pac






We use both WPAD and PAC, including setting the WPAD option in DHCP. No 
problem at all; no truncated requests in the access.log.

The proxy.pac?Type=WMT request happens when Windows Media Player tries 
to open a web URN.

- Original Message - 
From: Kevin [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Monday, August 01, 2005 8:25 PM
Subject: Re: [squid-users] proxy.pac


On 8/1/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I'd reply to the question sent to the list, but I deleted it already.

 There's a bug in IE that truncates the last character of the
 autoconfiguration file.

If I'm reading this right, you're saying that this problem only happens
when using Proxy Automatic Configuration (PAC) in combination with
the automatic discovery feature (WPAD) of MSIE?


 The problem is that the packet which requests that file sometimes gets
 fragmented, not always.
 This essentially causes IE to request two files: proxy.pa and c
 A very simple work-around is to copy proxy.pac to proxy.pa
 You should see somewhat more consistency here.

Interesting.  We support many thousands of Windows workstations,
all using proxy.pac but none using WPAD.

I do not see any requests for proxy.pa in the logs on the web server
hosting the PAC file.  I do see a ton of errors for some really humorous
typos -- it's amazing how many different ways there are to creatively
spell proxy :)


Kevin Kadow

(P.S.  I do see a high number of requests for proxy.pac?Type=WMT,
is anybody else seeing these?)





Re: [squid-users] proxy.pac

2005-08-02 Thread trainier
Completely the opposite for us.
At the time of testing, XP was the only machine that seemed to work 
consistently.
We only had a couple XP machines at that time.

Tim Rainier




Merton Campbell Crockett [EMAIL PROTECTED] 
08/01/2005 10:28 PM

To
Rodrigo A B Freire [EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] proxy.pac






On Mon, 1 Aug 2005, Rodrigo A B Freire wrote:

We use both WPAD and PAC... Inclusive, setting the WPAD in DHCP. No 
problem
 at all, no truncated requests in the access.log.

WPAD worked reasonably well for WindowsNT and Windows2000; however, there 
was a problem with the file name in Windows2000 and the initial release of 
WindowsXP.  The Microsoft DHCP Service returned the wrong byte count for 
the string returned for option 252.  The DHCP Client compensated for this 
by decrementing the string length.  This resulted in the file name being 
truncated when the ISC DHCP daemon was used.  The solution was to define a 
symlink proxy.pa -> proxy.pac.

Another issue was case of the file name.  The complete URL needed to be 
lowercase.  I have a bad habit of using uppercase for host names.  It's a 
cheap shot way of identifying if a local name was used to resolve the 
name.  :-)

WindowsXP SP1 introduced another problem:  the proxy.pac file does not 
appear to be used except when it is passed as option 252 in response to a 
DHCPINFORM request.  It appears that WindowsXP systems with static IP 
addresses will not use the proxy.pac file.

Another problem that appears to affect both Windows2000 and WindowsXP is 
the caching of Web connections, which is done by host name rather than 
URL.  I wrote a load-balancing proxy.pac file.  If the first reference 
causes proxy A to be selected, proxy A is used for all URLs involving that 
host, although the hashing algorithm would suggest that proxy B should 
have been used for some of the requests.  The Windows Registry needs to 
be modified to overcome this caching behaviour.
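A minimal load-balancing PAC sketch of the kind described (the proxy names and hash are hypothetical, not the poster's actual file). Because the hash depends only on the host, every URL on a given host maps to the same proxy, which is exactly the per-host stickiness the Windows connection cache then amplifies:

```javascript
// Hypothetical two-proxy hash; the browser calls FindProxyForURL per request.
function FindProxyForURL(url, host) {
  var h = 0;
  for (var i = 0; i < host.length; i++) {
    h = (h * 31 + host.charCodeAt(i)) % 2;   // stable 0/1 bucket per host name
  }
  // List the other proxy second so it serves as a failover.
  return h === 0
    ? "PROXY proxyA.example.com:3128; PROXY proxyB.example.com:3128"
    : "PROXY proxyB.example.com:3128; PROXY proxyA.example.com:3128";
}
```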

Merton Campbell Crockett




 
The proxy.pac?Type=WMT is when the Windows Media Player try to open a 
web
 URN.
 
 - Original Message - From: Kevin [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Sent: Monday, August 01, 2005 8:25 PM
 Subject: Re: [squid-users] proxy.pac
 
 
 On 8/1/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  I'd reply to the question sent to the list, but I deleted it already.
  
  There's a bug in IE that truncates the last character of the
  autoconfiguration file.
 
 If I'm reading this right, you're saying that this problem only happens
 when using Proxy Automatic Configuration (PAC) in combination with
 the automatic discovery feature (WPAD) of MSIE?
 
 
  The problem is that the packet which requests that file sometimes gets
  fragmented, not always.
  This essentially causes IE to request two files: proxy.pa and c
  A very simple work-around is to copy proxy.pac to proxy.pa
  You should see somewhat more consistency here.
 
 Interesting.  We support many thousands of Windows workstations,
 all using proxy.pac but none using WPAD.
 
 I do not see any requests for proxy.pa in the logs on the web server
 hosting the PAC file.  I do see a ton of errors for some really humorous
 typos -- it's amazing how many different ways there are to creatively
 spell proxy :)
 
 
 Kevin Kadow
 
 (P.S.  I do see a high number of requests for proxy.pac?Type=WMT,
 is anybody else seeing these?)
 
 

-- 
Merton Campbell Crockett
General Dynamics Advanced Information Systems;
 Intelligence and Exploitation Systems
E-mail: [EMAIL PROTECTED]
Work:   +1 (805) 497-5045
Fax:    +1 (805) 497-5050
Cell:   +1 (805) 377-6762




Re: [squid-users] acl issues

2005-08-01 Thread trainier
What's your config look like?

Tim



Joe Acquisto [EMAIL PROTECTED] 
08/01/2005 03:21 PM

To
squid-users@squid-cache.org
cc

Subject
[squid-users] acl issues






Still trying to get PC restrictions to work.

I just don't get it.  I have ACLs defined, and I can see squid checking 
them in the cache.log.  However, it seems to be tripping up on the IP 
check: it always seems to be checking 127.0.0.1 instead of the actual 
connection's IP. 

Below is an example from the log:

2005/08/01 14:54:56| aclCheck: checking 'http_access allow JOESPC LETIN1'
2005/08/01 14:54:56| aclMatchAclList: checking JOESPC
2005/08/01 14:54:56| aclMatchAcl: checking 'acl JOESPC src 192.168.0.16'
2005/08/01 14:54:56| aclMatchIp: '127.0.0.1' NOT found
2005/08/01 14:54:56| aclMatchAclList: returning 0

Why is aclMatchIp checking 127.0.0.1?  What am I missing?




Re: [squid-users] ftp connect ?

2004-09-17 Thread trainier
My understanding is that CONNECT was originally designed to allow a 
proxy to dynamically switch to being a tunnel, i.e. for SSL.
The problem is that application vendors are misusing the CONNECT method 
because it's easy. 

These particular vendors and their products are regarded as tainted 
because they're not following RFC suggestions and/or recommendations.

RFC 2817 might be helpful to you.  - http://www.ietf.org/rfc/rfc2817.txt
RFC 3143 might also be interesting - 
ftp://ftp.rfc-editor.org/in-notes/rfc3143.txt
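For reference, a squid.conf sketch of what explicitly allowing CONNECT to port 21 would look like (hypothetical fragment, not an endorsement; whether to do it is exactly the security question being asked):

```
# Hypothetical squid.conf fragment. CONNECT opens a raw tunnel, so
# allowing it to port 21 lets clients run arbitrary protocols through
# the proxy -- which is why squid denies it there by default.
acl FTP_port port 21
acl CONNECT method CONNECT
http_access allow CONNECT FTP_port   # accept the tunnel risk consciously
```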

Best regards,

Tim Rainier




[EMAIL PROTECTED]
09/17/2004 09:56 AM
 
To: [EMAIL PROTECTED]
cc: 
Subject:[squid-users] ftp connect ?



Hello,


Most ftp clients that support http proxies use the CONNECT method, once 
they
have authenticated.
This method is not allowed by default on the FTP port.
So these clients (inc. Filezilla, ...) don't get much further than
authentication.


Is it a security breach to allow CONNECT method on port 21 ?

Where could I find more info about this topic ?


Thank You,

Andrew.





Re: [squid-users] squid + masquerade

2004-08-25 Thread trainier
Do the following:

[internet] --- [mail server] --- [fw & gateway coyote] --- [proxy server] --- [LAN]


Tim




Fabrice Régnier [EMAIL PROTECTED]
08/25/2004 09:54 AM
 
To: [EMAIL PROTECTED]
cc: 
Subject:[squid-users] squid + masquerade


Hi all :)

Inside the access.log, the client column is always the IP of the firewall 
that does the masquerading, and I would like to see the IP of my LAN 
clients. The squid server is on the mail server. I use ipchains for the 
firewall.
Any idea?

network architecture:

internet (0)---[mail server](1)---(2)[fw & gateway coyote](3)---(4)[LAN]

(0) : public IP
(1) : 192.168.99.10
(2) : 192.168.99.20
(3) : 172.25.100.252
(4) : 172.25.100.x

thanx

fab.


-- 
Fabrice Régnier
Informaticien du GDS 56
tel:02.97.63.09.09
fax:02.97.63.37.10




Re: [squid-users] squid + masquerade

2004-08-25 Thread trainier
Squid needs to be running on your LAN.

Tim




Fabrice Régnier [EMAIL PROTECTED]
08/25/2004 12:27 PM
 
To: [EMAIL PROTECTED]
cc: 
Subject:Re: [squid-users] squid + masquerade


You mean I should create a third network? The proxy server should be a 
stand-alone machine? Squid should be running in my LAN? I'm sorry, I 
don't get you.

fab

[EMAIL PROTECTED] wrote:
 Do the following:
 
 [internet] --- [mail server] --- [fw & gateway coyote] --- [proxy server] --- [LAN]
 
 
 Tim
 
 
 
 
 Fabrice Régnier [EMAIL PROTECTED]
 08/25/2004 09:54 AM
 
 To: [EMAIL PROTECTED]
 cc: 
 Subject:[squid-users] squid + masquerade
 
 
 Hi all :)
 
 Inside the access.log, the client column is always the IP of the firewall 
 that does the masquerading, and I would like to see the IP of my LAN 
 clients. The squid server is on the mail server. I use ipchains for the 
 firewall.
 any idea ?
 
 network architecture:
 
 internet (0)---[mail server](1)---(2)[fw & gateway coyote](3)---(4)[LAN]
 
 (0) : public IP
 (1) : 192.168.99.10
 (2) : 192.168.99.20
 (3) : 172.25.100.252
 (4) : 172.25.100.x
 
 thanx
 
 fab.
 
 


-- 
Fabrice Régnier
Informaticien du GDS 56
tel:02.97.63.09.09
fax:02.97.63.37.10




[squid-users] High CPU Utilization

2004-08-24 Thread trainier
This has happened before and I guess I have never gotten to the bottom of 
it.  All of a sudden, squid took and held onto 98% of the CPU.
The machine has plenty of CPU and RAM, not to mention disk space.  There 
were no warnings in the cache.log. 

cache_effective_user is set to nobody.
coredump_dir is set to /services/squid/core (which exists and 
cache_effective_user has write access to)

Yet, I can't force squid to dump a core file, or I don't know how.  Can 
anyone tell me how I can do this?

I thought SIGABRT was supposed to do that.

Tim
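One common reason for no core file is the shell's core-size limit, which squid's coredump_dir directive does not change (it only tells squid where to chdir). A sketch of the precondition, run in the shell that starts squid:

```shell
# Raise the soft core-file size limit for processes started from this shell.
ulimit -c unlimited
echo "core limit: $(ulimit -c)"
# With the limit lifted, sending SIGABRT (kill -ABRT <squid pid>) should
# produce a core in coredump_dir, provided cache_effective_user can write
# there. The <squid pid> above is deliberately left as a placeholder.
```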


[squid-users] High CPU Utilization

2004-08-23 Thread trainier
This has happened before and I guess I have never gotten to the bottom of 
it.  All of a sudden, squid took and held onto 98% of the CPU.
The machine has plenty of CPU and RAM, not to mention disk space.  There 
were no warnings in the cache.log. 

cache_effective_user is set to nobody.
coredump_dir is set to /services/squid/core (which exists and 
cache_effective_user has write access to)

Yet, I can't force squid to dump a core file, or I don't know how.  Can 
anyone tell me how I can do this?

I thought SIGABRT was supposed to do that.

Tim




André Füchsel [EMAIL PROTECTED]
08/23/2004 01:39 PM
 
To: Chris Perreault [EMAIL PROTECTED], 
[EMAIL PROTECTED]
cc: 
Subject:RE: [squid-users] accelerating proxy and default 
page  (index.html )


At 16:42 23.08.2004 +0200, Chris Perreault wrote:
http://www.mydomain.com/  with a trailing slash...does that work? And
once
you connect to index.html you can navigate throughout the rest of the
site
ok?

Trailing slash does not work. Yes, once connected, the rest works fine. 
Strange...

A. 





RE: [squid-users] FW: no client IP address

2004-07-07 Thread trainier
Angela is correct. 
If you set client_netmask to 255.255.255.255, it will block IP addresses 
from being logged.  That is the whole point of client_netmask.
Here's the tag, from the documentation:

# TAG: client_netmask 
# A netmask for client addresses in logfiles and cachemgr output. 
# Change this to protect the privacy of your cache clients. 
# 
#client_netmask 255.255.255.255

What you want to do, is comment out the client_netmask feature.
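For illustration of how the directive is applied (the addresses here are hypothetical):

```
# The logged client address is the real address ANDed with this mask,
# so with the mask below a client at 192.168.1.57 would appear in
# access.log as 192.168.1.0.
client_netmask 255.255.255.0
```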

Regards,

Tim Rainier




Angela Burrell [EMAIL PROTECTED]
07/07/2004 03:59 PM
 
To: [EMAIL PROTECTED]
cc: squid users [EMAIL PROTECTED]
Subject:RE: [squid-users] FW: no client IP address


A question: Why would you set the netmask to 255.255.255.255?

This effectively filters out the entire IP address... ?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
Sent: July 7, 2004 12:37 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] FW: no client IP address



I have a RedHat Linux server running Squid.  My squid.conf file has the
client_netmask set to 255.255.255.255 but access.log is not capturing the
client ip address.  I have restarted the server many times.  As far as web
browsing, all users have access, and that is functional.  However, the
following is what is logged into access.log:

proxy - - [07/Jul/2004:11:15:59 -0500] GET http://www.google.com/search?
HTTP/1.0 200 18902 TCP_MISS:DIRECT
proxy - - [07/Jul/2004:11:16:01 -0500] GET
http://www.google.com/images/logo_sm.gif HTTP/1.0 304 194
TCP_CLIENT_REFRESH_MISS:DIRECT

Any idea why the client ip address isn't there when the netmask is set to
255.255.255.255?





Re: [squid-users] SO_FAIL

2004-07-07 Thread trainier
My first suggestion would be to look at the permissions of the cache 
directory.
Does the squid user have write permissions to it?

Regards,

Tim Rainier






Sunil Mohan Ranta [EMAIL PROTECTED]
07/07/2004 02:20 PM
 
To: [EMAIL PROTECTED]
cc: 
Subject:[squid-users] SO_FAIL



I am getting SO_FAIL messages in store.log,
and no cache is being created on my system;
the size of the cache_dir is constant at 17MB.

How do I enable caching and get rid of this SO_FAIL message?

Help please!

-- 
killall -9 killall
http://students.iiit.net/~sunilmohan/




Re: [squid-users] WHERE does Squid cache DNS lookups?

2004-06-27 Thread trainier
I'm not sure where squid stores the DNS cache.  However, your test of 
attempting to use a transparent proxy is probably not sufficient.  I had 
a lot of trouble using the auto-configuration script with Internet 
Explorer.  I mean, a LOT.  I found out that there was/is a bug in IE 5.0+ 
wherein, when IE sends out the auto-discovery packets, it chops the last 
character off the script name.  So it looks for proxy.pa instead of 
proxy.pac. 
When you said you got a couldn't-find-page error that wasn't from squid, 
it's probably because IE didn't attempt to connect to the page through 
squid, because it couldn't find your squid configuration script.

Just an FYI.

TimR






dravya [EMAIL PROTECTED]
06/25/2004 02:19 PM
 
To: Squid Mailing List [EMAIL PROTECTED]
cc: 
Subject:[squid-users] WHERE does Squid cache DNS lookups?


Hi folks,

When Squid does a DNS lookup, where does it save the cached entries? I 
know you can specify how many entries you want it to save, but where does 
it actually save them on disk? 

In fact, how does squid do DNS lookups? Squid is listening on port 3128, 
and a DNS lookup sends a udp packet on port 53. How does squid intercept 
such a lookup? 

If I set my browser settings manually (non-transparent) and type a 
non-existing url, I get a message from squid saying it couldn't do a 
lookup. However, when I make squid transparent, I don't see the same 
page again. I only see a message that says couldn't find page (which is 
not from squid).
So is squid caching DNS entries?

Thanks guys... I really want this to work... any help would be greatly 
appreciated.
dravya






