Re: [squid-users] New to Squid

2009-03-16 Thread Brett Lymn
On Fri, Mar 13, 2009 at 04:14:42PM +0100, Kinkie wrote:
 
 Making AD work in a firewalled environment is not really that easy
 (nor secure), but I'd assume that that side of things has already been
 covered.
 

This is totally off-topic but the above statement is not true.  What you
need to do is use an IPSEC tunnel - there are MS documents that
describe how you can do this.

-- 
Brett Lymn




Re: [squid-users] delay pools not used yet

2009-03-16 Thread Amos Jeffries

ronnie nyaruwabvu wrote:

Here is how I defined the ACLs:

acl staff_net src 196.2.2.0/24
acl staff_net src 10.0.0.0/16
acl staff_net src  168.167.8.0/21
acl staff_net src  168.167.30.0/24
acl staff_net src  168.167.32.0/24
acl staff_net src  168.167.34.0/24
acl FastInternet src 10.0.5.229



Okay, no reason why those should not work.
You are giving http_access allow to them, right?
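I.e. something like this, ahead of your final deny rule (a sketch; rule
order is what matters):

http_access allow FastInternet
http_access allow staff_net
http_access deny all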

Amos


thanks.

regards,

ronnie



- Original Message 
From: Amos Jeffries squ...@treenet.co.nz
To: ronnie nyaruwabvu mobileron...@yahoo.co.uk
Cc: squid-users@squid-cache.org
Sent: Friday, 13 March, 2009 3:34:49
Subject: Re: [squid-users] delay pools not used yet

ronnie nyaruwabvu wrote:

hi,

I have configured 2 delay pools, one for fast internet access and the other for normal internet access.
delay_pools 2

delay_initial_bucket_level 50
#
#allow fast internet access to hosts defined in FastInternet
delay_class 1 1
delay_access 1 allow FastInternet
delay_access 1 deny all
delay_parameters 1 65536/655360
#
#allocate 5120k bandwidth to staff and students subnets
delay_class 2 3
delay_access 2 allow staff_net
delay_access 2 deny all
delay_parameters 2 655360/1310720 -1/-1 4096/16384
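#
# (for reference: delay_parameters takes restore/max pairs, with restore in
# bytes/second and max, the bucket size, in bytes; a class 3 pool orders them
# aggregate, network, individual, and -1/-1 means no limit - so the line above
# gives 640KB/s aggregate with a 1.25MB burst, and 4KB/s with a 16KB burst
# per individual host)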

When I check with squidclient whether the delay pools are working I get:
Delay pools configured: 2

Pool: 1
 Class: 1
  Aggregate:
   Max: 655360
   Restore: 65536
   Current: 655360
Pool: 2
 Class: 3
  Aggregate:
   Max: 1310720
   Restore: 655360
   Current: 1310720
  Network:
   Disabled.
  Individual:
   Max: 16384
   Restore: 4096
   Current [All networks]: Not used yet.

From this, the pools are not working. Where am I going wrong?

regards,

ronnie



We will need to see how you define the FastInternet and staff_net ACLs.

Amos
-- Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6



  



--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


[squid-users] Squid 3.0 stable 13 all NTLM connectors Reserved/Deferred

2009-03-16 Thread Берсенев Виктор Сергеевич
On the working 2.7 STABLE4 everything works fine; I installed the new Squid
3.0 STABLE13 with the old config.

And Squid began to restart often with the error "All ntlmauthenticator
processes are busy".

Cachemgr.cgi:


#   FD  PID   # Requests  # Deferred Requests  Flags  Time   Offset  Request
1   58  9801  2608        0                    R      0.000  0       (none)
2   59  9802  343         0                    R      0.000  0       (none)
3   60  9803  167         0                    R      0.000  0       (none)
4   61  9804  79          0                    R      0.000  0       (none)
5   62  9805  49          0                    R      0.000  0       (none)
6   63  9806  23          0                    R      0.000  0       (none)
7   64  9807  662         0                    R      0.000  0       (none)
8   65  9808  193         0                    R      0.000  0       (none)
9   66  9809  62          0                    R      0.000  0       (none)
10  67  9810  29          0                    R      0.000  0       (none)
11  68  9811  17          0                    R      0.000  0       (none)
12  69  9812  7           0                    R      0.001  0       (none)
13  70  9813  910         0                    R      0.000  0       (none)
14  71  9814  327         0                    R      0.000  0       (none)

All NTLM connectors' status is RESERVED / Deferred
All basic auth helpers work fine

Squid restarts every 1-3 hours during the day, or every 4-5 hours at night

After using squid -k reconfigure:
Cachemgr.cgi:
#   FD  PID   # Requests  # Deferred Requests  Flags  Time   Offset  Request
1   58  9801  2608        0                    RS     0.000  0       (none)
2   59  9802  343         0                    RS     0.000  0       (none)
3   60  9803  167         0                    RS     0.000  0       (none)
4   61  9804  79          0                    RS     0.000  0       (none)
5   62  9805  49          0                    RS     0.000  0       (none)
6   63  9806  23          0                    RS     0.000  0       (none)
7   64  9807  662         0                    RS     0.000  0       (none)
8   65  9808  193         0                    RS     0.000  0       (none)
9   66  9809  62          0                    RS     0.000  0       (none)
1   90  9810  290         0                    R      0.000  0       (none)
2   91  9811  1700        0                           0.000  0       (none)
3   92  9812  75          0                    R      0.000  0       (none)
4   93  9813  2800        0                    R      0.000  0       (none)
5   94  9814  327         0                           0.000  0       (none)

And the RS status remains until squid -k shutdown



Squid is configured with:
external_acl_type nt_group ttl=200 children=30 %LOGIN
/usr/local/squid/libexec/wbinfo_group.pl

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 80
auth_param ntlm keep_alive off

auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
#auth_param basic children 10
auth_param basic children 20
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
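The helper states can also be checked from the command line, something like
this (assuming squidclient and the default manager ACLs; the exact page name
may vary by version):

squidclient mgr:ntlmauthenticator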

P.S. I installed other versions:
3.0 STABLE7 - the error again
2.7 STABLE6 - works fine



Re: [squid-users] restart url_redirector process when it dies

2009-03-16 Thread Dieter Bloms
Hi Amos,

On Sun, Mar 15, Amos Jeffries wrote:

 I use a url_rewrite_program, which seems to die after about 40
 requests.
 Squid starts 15 processes, which are enough, but after some time one
 process after another dies and in the end all processes are gone.

 Is it possible to have squid restart a url_rewrite_program when it dies?



 What version of Squid are you using that does not do this restart 
 automatically?
 Squid only dies when ALL helpers for a needed service are dying too fast to 
 recover quickly.

I use squid 2.7.STABLE6.
I have 15 processes running; when I kill 2 of them, I see only 13 of 15
processes running in the cache manager menu.

like:

--snip from cache manager menu --
Redirector Statistics:
program: /usr/local/bin/webcatredir
number running: 13 of 15
requests sent: 2482
replies received: 2481
queue length: 0
avg service time: 3.33 msec
--snip--
 
To me it looks like the 2 killed processes will not be restarted - or does
it just take some time?

--snip from squid -v 
Squid Cache: Version 2.7.STABLE6
configure options:  '--prefix=/usr' '--sysconfdir=/etc/squid'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var'
'--libexecdir=/usr/sbin' '--datadir=/usr/share/squid' '--disable-carp'
'--disable-htcp' '--disable-icap-client' '--disable-ident-lookups'
'--disable-wccp' '--disable-wccpv2' '--enable-async-io=128'
'--enable-auth=basic digest' '--enable-basic-auth-helpers=LDAP'
'--enable-digest-auth-helpers=ldap'
'--enable-default-err-language=German_Datev'
'--enable-err-languages=German' '--enable-snmp'
'--enable-storeio=aufs,ufs,diskd,null' '--enable-referer-log'
'--enable-useragent-log' '--enable-large-cache-files'
'--enable-removal-policies=lru,heap' '--mandir=/usr/share/man'
'--with-default-user=squid' '--with-filedescriptors=8192'
'--with-large-files' '--with-pthreads' '--with-aio'
--snip--


-- 
Gruß

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


[squid-users] Can Squid act as streaming server?

2009-03-16 Thread Adi Gamliel
Hi,

 

Can Squid act as a streaming server?

For example, instead of opening many sessions toward an Internet radio
station, it would open one and distribute it on the LAN.

 

Regards,

Adi

 

 



Re: [squid-users] restart url_redirector process when it dies

2009-03-16 Thread Amos Jeffries

Dieter Bloms wrote:

Hi Amos,

On Sun, Mar 15, Amos Jeffries wrote:


I use a url_rewrite_program, which seems to die after about 40
requests.
Squid starts 15 processes, which are enough, but after some time one
process after another dies and in the end all processes are gone.

Is it possible to have squid restart a url_rewrite_program when it dies?


What version of Squid are you using that does not do this restart 
automatically?
Squid only dies when ALL helpers for a needed service are dying too fast to 
recover quickly.


I use squid 2.7.STABLE6.
I have 15 processes running; when I kill 2 of them, I see only 13 of 15
processes running in the cache manager menu.

like:

--snip from cache manager menu --
Redirector Statistics:
program: /usr/local/bin/webcatredir
number running: 13 of 15
requests sent: 2482
replies received: 2481
queue length: 0
avg service time: 3.33 msec
--snip--
 
To me it looks like the 2 killed processes will not be restarted - or does
it just take some time?


It may take some time.
They are restarted when needed, up to the configured maximum number of
children, or sooner if their deaths are noticed by other means.


IIRC the too-fast-to-recover threshold is one death per request handled or
fewer (a helper dying right after its first request is fatal; dying before
responding to its first request is even more fatal).



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Can Squid act as streaming server?

2009-03-16 Thread Amos Jeffries

Adi Gamliel wrote:

Hi,

 

Can Squid act as a streaming server?


For example, instead of opening many sessions toward an Internet radio
station, it would open one and distribute it on the LAN.


Not really.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


[squid-users] RE: Squid 3.0 stable 13 all NTLM connectors Reserved/Deferred

2009-03-16 Thread Берсенев Виктор Сергеевич
The same error is described here:

http://www.nabble.com/NTLM-Authenticator-Reserved-status-problem-td21430460.
html

http://www.nabble.com/Too-many-queued-ntlmauthenticator-requests-td18343369.
html

with no answers.

-Original Message-
From: Берсенев Виктор Сергеевич [mailto:twobo...@ek.apress.ru] 
Sent: Monday, March 16, 2009 12:35 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid 3.0 stable 13 all NTLM connectors
Reserved/Deferred

On the working 2.7 STABLE4 everything works fine; I installed the new Squid
3.0 STABLE13 with the old config.

And Squid began to restart often with the error "All ntlmauthenticator
processes are busy".





[squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Gavin McCullagh
Hi,

we're running a reasonably busy squid proxy system here which peaks at
about 130-150 requests per second.  

The OS is Ubuntu Hardy and at the minute, I'm using the packaged 2.6.18
squid version.  I'm considering a hand-compile of 2.7, though it's quite
nice to get security patches from the distro. 

We have 2x SATA disks, a 150GB and a 1TB.  The linux system is on software
RAID1 across the two disks.  The main cache is 600GB in size on a single
non-RAID 970GB partition at the end of the 1TB disk.  A smaller partition
is reserved on the other disk as a secondary cache, but that's not in use
yet and the squid logs are currently written there.  The filesystems for
the caches are reiserfs v3 and the cache format is AUFS. 

We've been monitoring the hit rates, cpu usage, etc. using munin.   We
average about 13% byte hit rate.  Iowait is now a big issue -- perhaps not
surprisingly.  I had 4GB RAM in the server and PAE turned on.  I upped this
to 8GB with the idea of expanding squid's RAM cache.  Of course, I forgot
that the squid process can't address anything like that much RAM on a
32-bit system.  I think the limit is about 3GB, right?

I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
notice squid terminates with signal 6 and restarts as the cache_mem fills.
I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
a little more politely in this situation -- either not attempting to
allocate the extra RAM, giving a warning or an error?

My main question is, is there a sensible way for me to use the extra RAM?
I know the OS does disk caching with it but with a 600GB cache, I doubt
that'll be much help.  I thought of creating a 3-4GB ramdisk and using it
as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.  The thing is, I
don't know how squid addresses multiple caches.  If one cache is _much_
faster but smaller than the other, can squid prioritise using it for the
most regularly hit data or does it simply treat each cache as equal?  Are
there docs on these sorts of issues?
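What I am picturing is roughly this (illustrative values only):

# /etc/fstab: a tmpfs-backed cache directory (empty at every boot)
tmpfs /var/spool/squid/ramcache tmpfs size=3g 0 0

# squid.conf: a small fast cache_dir alongside the big disk one
cache_dir aufs /var/spool/squid/ramcache 2800 16 256
cache_dir aufs /var/spool/squid/cache2 600000 64 512

# after boot: squid -z to recreate the swap directories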

Any suggestions would be most welcome.

Gavin



Re: [squid-users] Squid server as transparent proxy and problem with Rapid Share

2009-03-16 Thread Matus UHLAR - fantomas
Hello,

On 14.03.09 14:38, Azhar H. Chowdhury wrote:
 We are running ISP and have few cache+proxy servers running Squid as 
 transparent. Lots of our clients have been
 using site like rapidshare from where they download files/program without 
 having  an account.

as an ISP, you should NOT use a proxy this way, imho. There are other
problems you may have. You should (imho) allow users to decide whether to
use the proxy or not.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Emacs is a complicated operating system without good text editor.


[squid-users] how to allow ftp connection through squid proxy

2009-03-16 Thread goody goody


Hi there,

I am currently using squid stable v.3 as a transparent proxy on FreeBSD 6.4.

I am facing a problem when accessing FTP sites. Can anybody guide me or
provide some useful links for tweaking the settings to allow FTP access
through squid?


many thanks,
.Goody.



  


Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Amos Jeffries

Gavin McCullagh wrote:

Hi,

we're running a reasonably busy squid proxy system here which peaks at
about 130-150 requests per second.  


The OS is Ubuntu Hardy and at the minute, I'm using the packaged 2.6.18
squid version.  I'm considering a hand-compile of 2.7, though it's quite
nice to get security patches from the distro. 


FYI: The latest Intrepid or Jaunty package should work just as well in 
Hardy.




We have 2x SATA disks, a 150GB and a 1TB.  The linux system is on software
RAID1 across the two disks.  The main cache is 600GB in size on a single
non-RAID 970GB partition at the end of the 1TB disk.  A smaller partition
is reserved on the other disk as a secondary cache, but that's not in use
yet and the squid logs are currently written there.  The filesystems for
the caches are reiserfs v3 and the cache format is AUFS. 


We've been monitoring the hit rates, cpu usage, etc. using munin.   We
average about 13% byte hit rate.  Iowait is now a big issue -- perhaps not
surprisingly.  I had 4GB RAM in the server and PAE turned on.  I upped this
to 8GB with the idea of expanding squid's RAM cache.  Of course, I forgot
that the squid process can't address anything like that much RAM on a
32-bit system.  I think the limit is about 3GB, right?


For 32-bit I think it is yes. You can rebuild squid as 64-bit or check 
the distro for a 64-bit build.


However keep this in mind:  rule-of-thumb is 10MB index per GB of cache.

So your 600 GB disk cache is likely to use ~6GB of RAM for index + 
whatever cache_mem you allocate for RAM-cache + index for RAM-cache + OS 
and application memory.




I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
notice squid terminates with signal 6 and restarts as the cache_mem fills.
I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
a little more politely in this situation -- either not attempting to
allocate the extra RAM, giving a warning or an error?


cache.log should contain a FATAL: message and possibly a line or two 
beforehand about why and where the crash occurred.

Please can you post that info here.



My main question is, is there a sensible way for me to use the extra RAM?
I know the OS does disk caching with it but with a 600GB cache, I doubt
that'll be much help.


RAM swapping (disk caching by the OS) is one major performance killer. 
Squid needs direct access to all its memory for fast index searches and 
in-transit processing.




 I thought of creating a 3-4GB ramdisk and using it
as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.   The thing is, I
don't know how squid addresses multiple caches.  If one cache is _much_
faster but smaller than the other, can squid prioritise using it for the
most regularly hit data or does it simply treat each cache as equal?  Are
there docs on these sorts of issues?


No need that is already built into Squid. cache_mem defines the amount 
of RAM-cache Squid uses.


Squid allocates the disk space based on free space and attempts to 
spread the load evenly over all dirs to minimize disk access/seek times. 
cache_mem is used for the hottest objects to minimize delays even further.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] how to allow ftp connection through squid proxy

2009-03-16 Thread Amos Jeffries

goody goody wrote:


Hi there,

I am currently using squid stable v.3 as transparent proxy on freebsd 6.4.

i am facing problem when accessing the ftp site. can any body guide me or 
provide me some useful link, for tweaking the settings to allow ftp access 
through squid.



Squid can only map FTP objects into HTTP objects.
To do that, control them with http_access the same as any other request.

http://www.squid-cache.org/Doc/config/
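E.g. a minimal sketch, assuming a localnet ACL is already defined:

acl ftp proto FTP
http_access allow ftp localnet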


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Gavin McCullagh
Hi,

thanks for the reply.

On Tue, 17 Mar 2009, Amos Jeffries wrote:

 FYI: The latest Intrepid or Jaunty package should work just as well in  
 Hardy.

I'll look into this.  I tried to build the intrepid debian package from
source, but I came across a build dependency which was apparently not
available on hardy: libgssglue-dev.  I'll look into installing the
pre-built package, but I would've thought it would need newer versions of
libraries.

In general, I'm looking for simple maintenance and patching, but not at the
expense of too much performance.  Would we benefit much from a hand-built
squid install?  In what way?

 Of course, I forgot that the squid process can't address anything like
 that much RAM on a 32-bit system.  I think the limit is about 3GB,
 right?

 For 32-bit I think it is yes. You can rebuild squid as 64-bit or check  
 the distro for a 64-bit build.

The server hardware isn't 64-bit so surely I can't run a 64-bit squid
build, can I?

 However keep this in mind:  rule-of-thumb is 10MB index per GB of cache.

 So your 600 GB disk cache is likely to use ~6GB of RAM for index +  
 whatever cache_mem you allocate for RAM-cache + index for RAM-cache + OS  
 and application memory.

Ouch.  That's not a rule of thumb I'd seen anywhere.  I'm really not
observing it either.  Squid runs stably for days with a 1.7GB cache_mem
and a 600GB disk cache. 

It may help that we're allowing large objects into the cache and using
heap lfuda.  We plot the average object size with munin and it's about
90KB.  Presumably the 10MB per 1GB is strongly a function of average object
size.  
http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#Squid

The drops in RAM usage are all due to squid restarting.  As long as I keep
the cache_mem below about 1.8-2GB

 I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
 notice squid terminates with signal 6 and restarts as the cache_mem fills.
 I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
 a little more politely in this situation -- either not attempting to
 allocate the extra RAM, giving a warning or an error?

 cache.log should contain a FATAL: message and possibly a line or two  
 beforehand about why and where the crash occurred.
 Please can you post that info here.

My apologies, there is a useful error, though in syslog not cache.log.

Mar 15 22:50:24 watcher squid[6751]: httpReadReply: Excess data from POST 
http://im.studivz.net/webx/re;
Mar 15 22:52:50 watcher squid[6748]: Squid Parent: child process 6751 exited 
due to signal 6
Mar 15 22:52:53 watcher squid[4206]: Starting Squid Cache version 2.6.STABLE18 
for i386-debian-linux-gnu...
Mar 15 22:52:53 watcher squid[4206]: Store logging disabled
Mar 15 22:52:53 watcher squid[4206]: Rebuilding storage in 
/var/spool/squid/cache2 (DIRTY)
Mar 15 22:54:29 watcher squid[4206]:  262144 Entries Validated so far.
Mar 15 22:54:29 watcher squid[4206]:  524288 Entries Validated so far.

I read this before and missed the out of memory error which appears in
the syslog:

Mar 15 22:52:50 watcher out of memory [6751]

this seems to happen every time:

Mar 10 11:58:12 watcher out of memory [22646]
Mar 10 17:52:03 watcher out of memory [24620]
Mar 11 00:57:52 watcher out of memory [31626]

 My main question is, is there a sensible way for me to use the extra RAM?
 I know the OS does disk caching with it but with a 600GB cache, I doubt
 that'll be much help.

 RAM swapping (disk caching by the OS) is one major performance killer.  
 Squid needs direct access to all its memory for fast index searches and  
 in-transit processing.

Of course.  We definitely don't see any swapping to disk.  I watch our
munin memory graphs carefully for this.  What I mean is that the linux OS
does the opposite where RAM is unused -- it caches data in RAM, reads ahead
open files, etc. but this probably won't help much where the amount of data
on disk is very large.

http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#System

 I thought of creating a 3-4GB ramdisk and using it
 as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.   The thing is, I
 don't know how squid addresses multiple caches.  If one cache is _much_
 faster but smaller than the other, can squid prioritise using it for the
 most regularly hit data or does it simply treat each cache as equal?  Are
 there docs on these sorts of issues?

 No need that is already built into Squid. cache_mem defines the amount  
 of RAM-cache Squid uses.

Right, but if the squid process is hitting its 32-bit memory limit, I
can't increase this any more, can I?  This is why I'm suggesting a ramdisk
cache as that won't expand squid's internal memory usage.

 Squid allocates the disk space based on free space and attempts to  
 spread the load evenly over all dirs to minimize disk access/seek times.  
 cache_mem is used for the hottest objects to minimize delays 

Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Matus UHLAR - fantomas
Hello,

although Amos explained much, I think there may be something to add...

 Gavin McCullagh wrote:
 We've been monitoring the hit rates, cpu usage, etc. using munin.   We
 average about 13% byte hit rate.  Iowait is now a big issue -- perhaps not
 surprisingly.  I had 4GB RAM in the server and PAE turned on.  I upped this
 to 8GB with the idea of expanding squid's RAM cache.  Of course, I forgot
 that the squid process can't address anything like that much RAM on a
 32-bit system.  I think the limit is about 3GB, right?

On 17.03.09 00:31, Amos Jeffries wrote:
 For 32-bit I think it is yes. You can rebuild squid as 64-bit or check 
 the distro for a 64-bit build.

I think it was mentioned that 32-bit squid running on 64-bit system can use
nearly 4GB of RAM.

Note that it's always good for squid to leave some RAM for the OS, disk cache
etc., especially on a system with that big a cache. Objects fetched from disk
aren't cached by squid, only by the OS.

 So your 600 GB disk cache is likely to use ~6GB of RAM for index + 
 whatever cache_mem you allocate for RAM-cache + index for RAM-cache + OS 
 and application memory.

That's not so good. But with that big a cache, I'd increase
maximum_object_size, which would lower the object count (not sure by how much).

 I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
 notice squid terminates with signal 6 and restarts as the cache_mem fills.
 I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
 a little more politely in this situation -- either not attempting to
 allocate the extra RAM, giving a warning or an error?
 
 cache.log should contain a FATAL: message and possibly a line or two 
 beforehand about why and where the crash occurred.
 Please can you post that info here.

However, it's very probable that squid's address space outgrew what is
possible. Recompile the OS/squid, or lower cache_mem, and maybe even the
cache_dir size - for performance reasons, people have already mentioned using
50% of their cache partition size (can your single disk handle the traffic?)

 My main question is, is there a sensible way for me to use the extra RAM?
 I know the OS does disk caching with it but with a 600GB cache, I doubt
 that'll be much help.
 
 RAM swapping (disk caching by the OS) is one major performance killer. 
 Squid needs direct access to all its memory for fast index searches and 
 in-transit processing.

OS disk caching will still help you read from the disk faster (caching of
directory contents etc). If you leave some RAM for the OS, of course.

  I thought of creating a 3-4GB ramdisk and using it
 as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.   The thing is, I
 don't know how squid addresses multiple caches.  If one cache is _much_
 faster but smaller than the other, can squid prioritise using it for the
 most regularly hit data or does it simply treat each cache as equal?  Are
 there docs on these sorts of issues?
 
 No need that is already built into Squid. cache_mem defines the amount 
 of RAM-cache Squid uses.

Although there are, afaik, some problems in older squid versions that
discouraged using too big a memory cache. And, again, the memory cache is only
for objects fetched from the network, not for those on disk.

 Squid allocates the disk space based on free space and attempts to 
 spread the load evenly over all dirs to minimize disk access/seek times. 
 cache_mem is used for the hottest objects to minimize delays even further.


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
How does cat play with mouse? cat /dev/mouse


[squid-users] Squid exitin periodicaly ( Preparing for shut down after )

2009-03-16 Thread twinturbo
Squid 2.5STABLE12 on SLES10

I know this is quite an old version but it's on our production machine.

Anyway we have a strange issue where squid seems to be shutting down every 22
minutes or so; the log says "Preparing for shut down after XXX requests".

Now every minute we do a squid -k reconfigure, as we run squidGuard and its
config can change all the time. This has never seemed to be a problem in the
past.

I am building up a fresh machine to take over, but would like to get this one
working properly too.

So far I have stopped the store.log being written and got the other logs
rotating more than once a day to keep them small.

I was previously getting errors about there being too few redirectors, so I
upped that to 30; I have now set it back down to 10 to see what happens.

Rob




Re: [squid-users] squid SNMP acl

2009-03-16 Thread Daniel Kühl
The SNMP section in your squid.conf should look like this:

# SNMP
acl snmpcommunity snmp_community public
snmp_port 3401
snmp_access allow snmpcommunity localhost
snmp_access deny all

And your snmpd.conf, on the same server as squid, must contain this
line:

proxy -v 2c -c public localhost:3401 .1.3.6.1.4.1.3495.1
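A quick test directly against squid would then be something like this
(assuming the Net-SNMP command line tools are installed):

snmpwalk -v 2c -c public localhost:3401 .1.3.6.1.4.1.3495.1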





On Fri, Mar 13, 2009 at 11:11 AM, Merdouille
jgerha...@r-advertising.com wrote:

 Hi everybody

 Now that one of my squid servers does everything I wanted it to, I'm trying
 the snmp features

 acl     snmppublic      snmp_community  public
 acl     manager         src             192.168.100.194
 snmp_port 3401
 snmp_access     allow   snmppublic
 snmp_access     allow   manager
 snmp_access     deny    all

 snmp_incoming_address 0.0.0.0
 snmp_outgoing_address 255.255.255.255


 It's impossible to retrieve data from 192.168.100.194

 And I tried snmp_access allow all for testing, and I can only retrieve data
 from localhost.

 Any idea?

 I tried
 --
 View this message in context: 
 http://www.nabble.com/squid-SNMP-acl-tp22497151p22497151.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Amos Jeffries

Gavin McCullagh wrote:

Hi,

thanks for the reply.

On Tue, 17 Mar 2009, Amos Jeffries wrote:

FYI: The latest Intrepid or Jaunty package should work just as well in  
Hardy.


I'll look into this.  I tried to build the intrepid debian package from
source, but I came across a build dependency which was apparently not
available on hardy: libgssglue-dev.  I'll look into installing the
pre-built package, but I would've thought it would need newer version of
libraries.

In general, I'm looking for simple maintenance and patching, but not at the
expense of too much performance.  Would we benefit much from a hand-built
squid install?  In what way?


I'm not aware of any major benefits. Just some minor gains from reduced
binary size when unused components are build-disabled. Someone reported
a major reduction when building with LVM recently.  That's about it.






Of course, I forgot that the squid process can't address anything like
that much RAM on a 32-bit system.  I think the limit is about 3GB,
right?
For 32-bit I think it is yes. You can rebuild squid as 64-bit or check  
the distro for a 64-bit build.


The server hardware isn't 64-bit so surely I can't run a 64-bit squid
build, can I?


Ah, no, I believe that's a problem. I kind of assumed that since your
system could take more than 2GB of RAM it was 64-bit enabled hardware.





However keep this in mind:  rule-of-thumb is 10MB index per GB of cache.

So your 600 GB disk cache is likely to use ~6GB of RAM for index +  
whatever cache_mem you allocate for RAM-cache + index for RAM-cache + OS  
and application memory.


Ouch.  That's not a rule of thumb I'd seen anywhere.  I'm really not
observing it either.  Squid runs stably for days with a 1.7GB cache_mem
and a 600GB disk cache. 


It may help that we're allowing large objects into the cache and using
heap lfuda.  We plot the average object size with munin and it's about
90KB.  Presumably the 10MB per 1GB is strongly a function of average object
size.  
	http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#Squid


(sites restricted, but never mind)
Yes, the rule-of-thumb was from past measurements, mediated by object size
(averages between 64KB and 128KB). I'm surprised you are seeing such a
low index size.




The drops in RAM usage are all due to squid restarting.  As long as I keep
the cache_mem below about 1.8-2GB


Maybe the large-file changes in 2.7 will help then.




I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
notice squid terminates with signal 6 and restarts as the cache_mem fills.
I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
a little more politely in this situation -- either not attempting to
allocate the extra RAM, giving a warning or an error?
cache.log should contain a FATAL: message and possibly a line or two  
beforehand about why and where the crash occurred.

Please can you post that info here.


My apologies, there is a useful error, though in syslog not cache.log.

Mar 15 22:50:24 watcher squid[6751]: httpReadReply: Excess data from POST 
http://im.studivz.net/webx/re;
Mar 15 22:52:50 watcher squid[6748]: Squid Parent: child process 6751 exited 
due to signal 6
Mar 15 22:52:53 watcher squid[4206]: Starting Squid Cache version 2.6.STABLE18 
for i386-debian-linux-gnu...
Mar 15 22:52:53 watcher squid[4206]: Store logging disabled
Mar 15 22:52:53 watcher squid[4206]: Rebuilding storage in 
/var/spool/squid/cache2 (DIRTY)
Mar 15 22:54:29 watcher squid[4206]:  262144 Entries Validated so far.
Mar 15 22:54:29 watcher squid[4206]:  524288 Entries Validated so far.

I read this before and missed the out of memory error which appears in
the syslog:

Mar 15 22:52:50 watcher out of memory [6751]

this seems to happen every time:

Mar 10 11:58:12 watcher out of memory [22646]
Mar 10 17:52:03 watcher out of memory [24620]
Mar 11 00:57:52 watcher out of memory [31626]


My main question is, is there a sensible way for me to use the extra RAM?
I know the OS does disk caching with it but with a 600GB cache, I doubt
that'll be much help.
RAM swapping (disk caching by the OS) is one major performance killer.  
Squid needs direct access to all its memory for fast index searches and  
in-transit processing.


Of course.  We definitely don't see any swapping to disk.  I watch our
munin memory graphs carefully for this.  What I mean is that the linux OS
does the opposite where RAM is unused -- it caches data in RAM, reads ahead
open files, etc. but this probably won't help much where the amount of data
on disk is very large.

http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#System


I thought of creating a 3-4GB ramdisk and using it
as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.   The thing is, I
don't know how squid addresses multiple caches.  If one cache is _much_
faster but smaller than the other, can squid prioritise using it for the
most regularly hit 

Re: [squid-users] Squid exitin periodicaly ( Preparing for shut down after )

2009-03-16 Thread Amos Jeffries

twintu...@f2s.com wrote:

Squid 2.5STABLE12 on SLES10

I know this is quite an old version but it's on our production machine.



Yes. Please bug SLES about using a newer release.



Anyway we have a strange issue where squid seems to be shutting down every 22
minutes or so; the log says "Preparing for shut down after XXX requests".

Now every minute we do a squid -k reconfigure, as we run squidGuard and its
config can change all the time. This has never seemed to be a problem in the
past.

I am building up a fresh machine to take over, but would like to get this one
working properly too.

So far I have stopped the store.log being written and got the other logs
rotating more than once a day to keep them small.

I was previously getting errors about there being too few redirectors, so I
upped that to 30; I have now set it back down to 10 to see what happens.

Rob


"Preparing for shut down after XXX requests" occurs when Squid receives
its proper shutdown signal. A clean/graceful shutdown then follows.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Gavin McCullagh
Hi,

On Tue, 17 Mar 2009, Amos Jeffries wrote:

 The server hardware isn't 64-bit so surely I can't run a 64-bit squid
 build, can I?

 Ah, no, I believe that's a problem. I kind of assumed that since your
 system could take more than 2GB of RAM it was 64-bit enabled hardware.

Ah well.

 It may help that we're allowing large objects into the cache and using
 heap lfuda.  We plot the average object size with munin and it's about
 90KB.  Presumably the 10MB per 1GB is strongly a function of average object
 size.
   http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#Squid

 (sites restricted, but never mind)

Sorry, that's fixed now if you want a look.

 Yes the rule-of-thumb was from past measures mediated by object size  
 (averages between 64KB and 128KB). I'm surprised you are seeing such a  
 low index size.

It is quite strange -- 90KB is bang in the middle of your range.  We're
getting about 5-10% of cache hits served from RAM too so there's definitely
cache_mem in use.

 The drops in RAM usage are all due to squid restarting.  As long as I keep
 the cache_mem below about 1.8-2GB

 Maybe the large-file changes in 2.7 will help then.

That's interesting.

 Sorry, some of you may be scratching your heads and wondering why one would
 do something so crazy.  I've just got 4GB RAM sitting more or less idle,
 a really busy disk and would like to use one to help the other :-)

 Aha, in that case maybe. It would be an interesting setup anyhow.

I'll give it some thought, but it sounds like there's no other easy way to
use the spare RAM.

Gavin



Re: [squid-users] Squid exitin periodicaly ( Preparing for shut down after )

2009-03-16 Thread twinturbo
(Amos) Sorry, I did not reply to the list. Ignore...

I wish SLES10 was more up to date on a few packages!!

I can't find anything that may be shutting down squid; there seem to be no
cron jobs, and the issues are happening at approximately 22-minute intervals,
which is not consistent with a cron schedule.

It's very odd, and has been happening for a while, but we had not noticed.

I may just try a full restart on the system.

Thanks

Rob



twintu...@f2s.com wrote:
 Squid 2.5STABLE12 on SLES10

 I know this is quite an old version but it's on our production machine.


Yes. Please bug SLES about using a newer release.


 Anyway we have a strange issue where squid seems to be shutting down every 22
 minutes or so; the log says "Preparing for shut down after XXX requests".

 Now every minute we do a squid -k reconfigure, as we run squidGuard and its
 config can change all the time. This has never seemed to be a problem in the
 past.

 I am building up a fresh machine to take over, but would like to get this one
 working properly too.

 So far I have stopped the store.log being written and got the other logs
 rotating more than once a day to keep them small.

 I was previously getting errors about there being too few redirectors, so I
 upped that to 30; I have now set it back down to 10 to see what happens.

 Rob

"Preparing for shut down after XXX requests" occurs when Squid receives
its proper shutdown signal. A clean/graceful shutdown then follows.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
   Current Beta Squid 3.1.0.6




Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Marcello Romani

Gavin McCullagh wrote:

Hi,

we're running a reasonably busy squid proxy system here which peaks at
about 130-150 requests per second.  


The OS is Ubuntu Hardy and at the minute, I'm using the packaged 2.6.18
squid version.  I'm considering a hand-compile of 2.7, though it's quite
nice to get security patches from the distro. 


We have 2x SATA disks, a 150GB and a 1TB.  The linux system is on software
RAID1 across the two disks.  The main cache is 600GB in size on a single
non-RAID 970GB partition at the end of the 1TB disk.  A smaller partition
is reserved on the other disk as a secondary cache, but that's not in use
yet and the squid logs are currently written there.  The filesystems for
the caches are reiserfs v3 and the cache format is AUFS. 


We've been monitoring the hit rates, cpu usage, etc. using munin.   We
average about 13% byte hit rate.  Iowait is now a big issue -- perhaps not
surprisingly.  I had 4GB RAM in the server and PAE turned on.  I upped this
to 8GB with the idea of expanding squid's RAM cache.  Of course, I forgot
that the squid process can't address anything like that much RAM on a
32-bit system.  I think the limit is about 3GB, right?

I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
notice squid terminates with signal 6 and restarts as the cache_mem fills.
I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
a little more politely in this situation -- either not attempting to
allocate the extra RAM, giving a warning or an error?

My main question is, is there a sensible way for me to use the extra RAM?
I know the OS does disk caching with it but with a 600GB cache, I doubt
that'll be much help.  I thought of creating a 3-4GB ramdisk and using it
as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.  The thing is, I
don't know how squid addresses multiple caches.  If one cache is _much_
faster but smaller than the other, can squid prioritise using it for the
most regularly hit data or does it simply treat each cache as equal?  Are
there docs on these sorts of issues?

Any suggestions would be most welcome.

Gavin





From my little experience I would suggest that you give squid cache_mem
a value of just some hundreds of MBs, and leave the other GBs of RAM to
squid for indexes and to the OS for disk caching. I guess after some time
this will take you close to a ramdisk-only setup.
Also, this would move the problem of accessing a very large ram address 
space from squid (which being only 32-bit can lead to problems) to the 
OS, which IMHO is better suited for this task.
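A minimal sketch of that split (the numbers are illustrative, not a
recommendation):

cache_mem 512 MB
maximum_object_size_in_memory 128 KB
cache_dir aufs /var/spool/squid/cache2 600000 64 512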


Also, I don't understand why you're spending so much on memory instead of
buying some more spindles to have a more balanced server in the end
(maybe space constraints?)


Just my 2 cents.

--
Marcello Romani



Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Gavin McCullagh
Hi,

On Mon, 16 Mar 2009, Marcello Romani wrote:

 From my little experience I would suggest that you give squid cache_mem
 a value of just some hundreds of MBs, and leave the other GBs of RAM to
 squid for indexes and to the OS for disk caching. I guess after some time
 this will take you close to a ramdisk-only setup.

Really?  I would have thought the linux kernel's disk caching would be far
less optimised for this than using a large squid cache_mem (whatever about
a ramdisk).

 Also, this would move the problem of accessing a very large ram address  
 space from squid (which being only 32-bit can lead to problems) to the  
 OS, which IMHO is better suited for this task.

It's starting to look that way alright.

 Also, I don't understand why you're spending so much on memory instead of
 buying some more spindles to have a more balanced server in the end
 (maybe space constraints?)

The cost of 8GB of ram was about €100, so it was relatively cheap.  As you
guessed, the machine itself is 1U and doesn't have space for any more hard
drives.

Gavin



Re: [squid-users] restart url_redirector process when it dies

2009-03-16 Thread Chris Woodfield
To elaborate: squid should start new url_rewrite_program instances
when the number of live children falls to <= 50% of the configured
number. So once 8 processes out of 15 die, squid should launch a whole
new set of 15. You'll then have 23 url_rewriter processes, but squid
will launch 15 more when they die and only 7 are left. The circle of
life continues...


In squid 2.7, if you don't want to wait for half your helpers to die
before squid launches more, you can adjust the threshold in
helper.c:helperServerFree() by changing the following line:

if (hlp->n_active <= hlp->n_to_start / 2) {

and changing the right-hand side of the comparison from / 2 to, say,
* 0.75 or similar.


This can be used as a somewhat ghetto way of dealing with a rewriter  
that has a memory leak - put a dead-man's counter into the rewriter  
that causes the process to exit after X number of requests, and let  
squid launch new ones as needed. (Not that I've ever done anything  
like that myself, nosiree...)
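As an illustration of that last idea, a minimal pass-through rewriter with
such a counter might look like this (a sketch; the limit and the no-rewrite
reply are assumptions, not from any real deployment):

#!/usr/bin/env python
# Dead-man's counter sketch: exit cleanly after MAX_REQUESTS so that
# Squid respawns a fresh helper before a leak gets out of hand.
import sys

MAX_REQUESTS = 10000  # hypothetical limit; tune to the leak rate

handled = 0
while True:
    line = sys.stdin.readline()  # one request per line from Squid
    if not line:
        break  # Squid closed the pipe; shut down
    # Squid 2.x rewriter protocol: reply with a new URL, or an empty
    # line meaning "no rewrite". This helper never rewrites.
    sys.stdout.write("\n")
    sys.stdout.flush()
    handled += 1
    if handled >= MAX_REQUESTS:
        break  # exit; Squid starts replacements as described above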


-C

On Mar 15, 2009, at 5:14 AM, Amos Jeffries wrote:


Dieter Bloms wrote:

Hi,
I use a url_rewrite_program, which seems to die after about 40
requests.
Squid starts 15 processes, which are enough, but after some time one
process after another dies and in the end all processes are gone.
Is it possible to have squid restart a url_rewrite_program when it
dies?


What version of Squid are you using that does not do this restart  
automatically?
Squid only dies when ALL helpers for a needed service are dying too  
fast to recover quickly.


Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.6





Re: [squid-users] squid SNMP acl

2009-03-16 Thread Merdouille

Can I use squid and snmpd together?

Squid can respond directly to SNMP requests.


-- 
View this message in context: 
http://www.nabble.com/squid-SNMP-acl-tp22497151p22540328.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] SquidNT cache_peer authentication/encryption/failover

2009-03-16 Thread y_o_u

Hi everyone,

Here is my goal:
I would like my laptop users to be routed through my public squid proxy
so that when they are anywhere, including in a hotel/coffee shop/etc., their
traffic comes through us.
 
After asking around and researching some options, one interesting option
includes using squid in a way I had never thought of.

The setup I saw had one Squid server as a standard Internet facing proxy
server (nothing special about that). The interesting thing was they also had
installed Squid locally on the laptop clients and ran Squid using the
cache_peer option to forward all traffic to the standard Squid server (this
is somewhat guesswork because the setup I saw is closed source, but I had
some clues that point me in this direction).

NOTE:
- SquidNT (2.7.STABLE5 on server, 2.7.STABLE6 on clients)
- Windows XP SP3 on clients, Windows 2003 Server for the server role
- Clients all have proxy settings set to localhost on port 3128
- Server is listening on port 80
- This is an AD domain, all settings pushed by GPOs
- Users running with user rights, and machines are fairly locked down (can't
  touch the options in IE)
- server: internet-facing Squid server running the normal proxy service
- client: laptop computers running Squid and forwarding all requests to the
  server using cache_peer

I have gotten both the Squid server and Squid client up and running, with a
few problems/questions.

1. I have the forwarding working in some manner. The problem is that I have
the public facing server set up to require authentication. I have tried the
login=user:password option for cache_peer on the client side, still with no
luck. When looking at the access logs on the server, the Squid client
isn't passing the supplied credentials to the Squid server (i.e. see below):

1237037364.556  0 68.117.163.156 TCP_DENIED/407 1804 GET
http://yahoo.com/ - NONE/- text/html

2. I am also looking for an option in squid.conf that essentially says "if
you can't reach the cache_peer, then go direct for this query". Hopefully
this will resolve the problems I have been having in hot spots with captive
portals. I have looked at always_direct and never_direct, but after reading,
it doesn't look like either of those is what I am looking for. I would expect
this to allow the authentication to the captive portal, but as soon as the
cache_peer was reachable, for all traffic to be redirected to it.
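Something like the following is what I am picturing (untested; hostname and
credentials are placeholders):

cache_peer proxy.example.com parent 80 0 no-query default login=user:password
prefer_direct off
# no never_direct rule here, so squid can still go direct when the
# parent is unreachable

but I am not sure how well that handles the captive portal case.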

3. Lastly, I am interested in using SSL to encrypt the traffic going from
the client to the server. I am using the SSL version of SquidNT
(http://squid.acmeconsulting.it/download/squid-2.7.STABLE6-bin-SSL.zip), so
the options are available; I just have not found any good documentation on
how to set this up (specifically, what has to be configured on the server
side; the client settings seem pretty straightforward - the stuff I have
found is always for setting up SSL redirects etc. for web servers using
acceleration mode).
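From the docs, I am guessing it would be something along these lines (paths,
port and hostname are placeholders):

# server side
https_port 8443 cert=c:/squid/etc/proxy.pem key=c:/squid/etc/proxy-key.pem

# client side
cache_peer proxy.example.com parent 8443 0 ssl sslflags=DONT_VERIFY_PEER

Is that roughly right?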

If I need to provide any further information, please let me know. Thanks for
any help/suggestions, have a good one!
 
Josh
 
 
Before I came to trying Squid in this manner, here are the other ways I tried
to proxy users' sessions.
 
1. A simple proxy setting in Internet Explorer to point all traffic back
through the public proxy. This fails in the following way: say a user is at
a hot spot with a captive portal that requires authentication. This sets up
a problem where the browser can't get to the proxy server because of the
captive portal, and can't authenticate to the captive portal because of the
proxy settings (and "bypass proxy server for local addresses" won't work
because a lot of captive portals authenticate to non-local addresses).
2. PAC files - I thought this was going to be the answer, but unfortunately
this leaves too many holes open. The problem is that when the user first
logs on to a hot spot, the browser can't talk to the proxy (because the user
isn't authenticated yet to the captive portal), so everything goes direct
(unfiltered, not good). It takes the user closing and reopening the
browser before the session is directed to the proxy (again, not
good).

NOTE: All this is worthless in some situations, as some hot spots don't
allow proxy connections (even if they are going to a proxy on port 80).

Here is my client squid.conf (without commenting for brevity ;) 
WELCOME TO SQUID 2.7.STABLE6 
auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
auth_param ntlm children 5
auth_param ntlm keep_alive on
external_acl_type win_domain_group ttl=300 %LOGIN
c:/squid/libexec/mswin_check_lm_group.exe -G -c
acl all src all
acl manager proto cache_object 
acl localhost src 127.0.0.1/32 
acl to_localhost dst 127.0.0.0/8 
acl localnet src 10.0.0.0/8 
acl localnet src 172.16.0.0/12 
acl localnet src 192.168.0.0/16 
acl SSL_ports port 443 
acl Safe_ports port 80 
acl Safe_ports port 21 
acl Safe_ports port 443 
acl Safe_ports port 70 
acl Safe_ports port 210 
acl Safe_ports port 1025-65535 
acl Safe_ports port 280 
acl 

Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-16 Thread Marcello Romani

Gavin McCullagh wrote:

Hi,

On Mon, 16 Mar 2009, Marcello Romani wrote:

From my little experience I would suggest that you give squid cache_mem
a value of just some hundreds of MBs, and leave the other GBs of RAM to
squid for indexes and to the OS for disk caching. I guess after some time
this will take you close to a ramdisk-only setup.


Really?  I would have thought the linux kernel's disk caching would be far
less optimised for this than using a large squid cache_mem (whatever about
a ramdisk).


As others have pointed out, squid's cache_mem is not used to serve
on-disk cache objects, while the OS's disk cache will hold those objects in
RAM after squid requests them for the first time.
So if you leave most of the RAM to the OS for disk caching you'll end up
having many on-disk objects loaded from RAM, i.e. very quickly.
Also, squid needs memory besides cache_mem, for its own internal
structures and for managing the on-disk repository. If its address space
is already almost filled up by cache_mem alone, it might have problems
allocating its own memory structures.
The OS's disk cache, on the other hand, is not allocated from squid's
process memory space and also has a variable size, automatically adjusted
by the OS when app memory needs grow or shrink.


This is how I understand the whole story so far, others might correct me 
of course :-)
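
To make that concrete, here is a sketch for a hypothetical 8GB 32-bit box
(the numbers are illustrative assumptions, not tuned values):

# keep squid's own in-memory object cache modest; the kernel page
# cache will hold the hot on-disk objects in the remaining RAM
cache_mem 512 MB
maximum_object_size_in_memory 64 KB
cache_dir aufs /var/spool/squid 20000 16 256

squid's process stays well clear of the 32-bit address space limit, and the
OS is free to use most of the 8GB to cache the busiest parts of the 20GB
on-disk store.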




Also, this would move the problem of accessing a very large ram address  
space from squid (which being only 32-bit can lead to problems) to the  
OS, which IMHO is better suited for this task.


It's starting to look that way alright.

Also, I don't understand why you'd spend so much on memory instead of  
buying some more spindles to have a more balanced server in the end  
(maybe space constraints?)


The cost of 8GB of ram was about €100, so it was relatively cheap.  As you
guessed, the machine itself is 1U and doesn't have space for any more hard
drives.

Gavin





--
Marcello Romani


[squid-users] Squid approach to C10K problem

2009-03-16 Thread Roy M.
Hi,

Just out of curiosity, how does squid, as a single-threaded server,
handle massive amounts of clients nowadays?

http://www.kegel.com/c10k.html

Especially since some others (http://varnish.projects.linpro.no/) say squid
is old-fashioned and doesn't honor approaches such as epoll/kqueue,
which are famous for handling many clients with minimal effort.

But I know it is not true; I have been using Squid for years and think
squid is very efficient and fast enough to serve massive numbers of clients.
So I want to know if there is any hidden secret behind it?

What are you guys' comments on those issues (c10k/epoll etc.)?

Thank you.


[squid-users] Don't log clientParseRequestMethod messages

2009-03-16 Thread Herbert Faleiros
Is there a way to avoid logging clientParseRequestMethod: Unsupported method
in request... messages in my cache.log?


[squid-users] Config suggestion

2009-03-16 Thread Herbert Faleiros
Hardware (only running Squid):

# cat /proc/cpuinfo  | egrep -i xeon | uniq
model name  : Intel(R) Xeon(R) CPU   E5405  @ 2.00GHz
# cat /proc/cpuinfo  | egrep -i xeon | wc -l
8

# free -m
             total       used       free     shared    buffers     cached
Mem:         32148       2238      29910          0        244        823
-/+ buffers/cache:   1169  30978
Swap:15264  0  15264

# lsscsi
[0:0:0:0]diskMAXTOR   ATLAS15K2_73WLS  JNZH  /dev/sda
[0:0:1:0]diskSEAGATE  ST3300655LW  0003  /dev/sdb
[0:0:4:0]diskSEAGATE  ST3146807LC  0007  /dev/sdc
[3:0:0:0]diskSEAGATE  ST3300655SS  0004  /dev/sdd
[3:0:1:0]diskSEAGATE  ST3300655SS  0004  /dev/sde

# fdisk -l | grep GB
Disk /dev/sda: 73.5 GB, 73557090304 bytes
Disk /dev/sdb: 300.0 GB, 3000 bytes
Disk /dev/sdc: 146.8 GB, 146815737856 bytes
Disk /dev/sdd: 300.0 GB, 3000 bytes
Disk /dev/sde: 300.0 GB, 3000 bytes

# lspci | grep -Ei 'sas|scsi'
04:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET
PCI-Express Fusion-MPT SAS (rev 04)
06:02.0 SCSI storage controller: Adaptec ASC-29320LP U320 (rev 03)


# uname -srm
Linux 2.6.27.7 x86_64

Squid:

# squid -v
Squid Cache: Version 3.0.STABLE13
configure options:  '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/libexec' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--includedir=/usr/include'
'--mandir=/usr/man' '--localstatedir=/var' '--enable-async-io'
'--with-pthreads' '--enable-xmalloc-statistics' '--enable-storeio=aufs'
'--enable-removal-policies' '--enable-err-languages=English Portuguese'
'--enable-linux-netfilter' '--disable-wccp' '--disable-wccpv2'
'--disable-ident-lookups' '--enable-snmp' '--enable-kill-parent-hack'
'--enable-delay-pools' '--enable-follow-x-forwarded-for'
'--with-large-files' '--with-filedescriptors=65536' 'CFLAGS= -march=native'
'CXXFLAGS= -march=native'

# cat /etc/squid/squid.conf | grep -E 'cache_(mem|dir)'
cache_mem 8192 MB
cache_dir aufs /var/cache/proxy/cache1 102400 16 256
cache_dir aufs /var/cache/proxy/cache2 102400 16 256
cache_dir aufs /var/cache/proxy/cache3 102400 16 256
cache_dir aufs /var/cache/proxy/cache4 102400 16 256
cache_dir aufs /var/cache/proxy/cache5 102400 16 256
cache_dir aufs /var/cache/proxy/cache6 102400 16 256
cache_dir aufs /var/cache/proxy/cache7 102400 16 256
cache_dir aufs /var/cache/proxy/cache8 102400 16 256


# cat /etc/fstab  | grep proxy
/dev/vg00/cache  /var/cache/proxy ext3defaults 1   2


Yes, I know, LVM, ext3 and aufs are bad ideas... I'm particularly
interested in a better cache_dir configuration (maximizing disk usage)
and the correct cache_mem parameter for this hardware (and any other
possible/useful tips)

Thanks,

-- 
Herbert


[squid-users] Cache digest question

2009-03-16 Thread Chris Woodfield

Hi,

I'm looking into setting up cache peering - I currently have small  
sets of reverse-proxy squids sitting behind a load balancer, with no  
URI hashing or other content-based switching in play (thanks to a nice  
bug/feature in Foundry's IOS that prevents graceful rehashing when  
new servers are added to a VIP..) So I'm looking at other ways to  
scale our cache capacity horizontally (and increase hit rates as I go)  
- so cache-peering in proxy-only mode seems to be a good solution.


Due to various reasons, it's looking like cache digests are going to  
be the best way to go in our environment (Option #2 is multicast, but,  
ew). However, one big question I have is this - are cache digests  
intended to replace, or to supplement, normal ICP cache query behavior?


For example, let's say squid A and squid B exchange cache digests  
every 10 minutes. squid A has just retrieved a cache digest from squid  
B, and then gets a new request for an object one minute after the  
cache exchange. One minute later (8 minutes before the next digest  
exchange), squid A gets a request for the same URL. This object is a  
local miss to squid A, but it is in-cache for squid B, although it's not  
in the latest digest that squid A has received from B.


Will squid A either 1. Do a normal ICP query to squid B due to the  
fact that it's a cache miss, or 2. Presume that squid B doesn't have  
the object since it wasn't in the last digest, and retrieve it itself?  
In other words, do digest exchanges preclude ICP queries for object  
requests that are local cache misses and are not in the most-recent  
cache digests that a squid has received?


Personally, I'm hoping the answer is #1, as #2 can easily result in  
duplicated content between the squids, which is exactly what I'm  
trying to avoid here.


Thanks,

-Chris




Re: [squid-users] Squid exitin periodicaly ( Preparing for shut down after )

2009-03-16 Thread Amos Jeffries
 (Amos) Sorry, did not reply to the list. Ignore.

 I wish SLES10 was more up to date on a few packages!!

 I can't find anything that may be shutting down squid; certainly there
 seems to be no cron jobs, and the issues are happening at approximately
 22 minute intervals, which is not consistent with a cron schedule.

 It's very odd, and it's been happening for a while, but we had not noticed.

 I may just try a full restart on the system.

 Thanks

 Rob

You could also check what type of controls you have around the 'manager'
ACL in squid.conf. Every visitor with an allow line before the deny
manager line may have the option to restart Squid with an HTTP request.
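
For reference, the stock ordering looks something like this (a sketch of the
defaults, not your actual file):

acl manager proto cache_object
acl localhost src 127.0.0.1/32
http_access allow manager localhost
http_access deny manager

Any 'http_access allow' rule that matches before the 'deny manager' line
leaves cache-manager actions, including shutdown, open to those clients.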

Amos




 twintu...@f2s.com wrote:
 Squid 2.5STABLE12 on SLES10

 I know this is quite an old version but it's on our production machine.


 Yes. Please bug SLES about using a newer release.


 Anyway we have a strange issue where squid seems to be shutting down
 every 22 minutes or so; the log says Preparing for shut down after XXX
 requests.

 Now every minute we do a squid -k reconfigure, as we run squidGuard and
 its config can change all the time. This has never seemed to be a problem
 in the past.

 I am building up a fresh machine to take over but would like to get this
 one
 working properly too.

 So far I have stopped the store.log being written and got the other logs
 rotating more than once a day to keep them small.

 I was previously getting errors about there being too few redirectors, so
 I upped that to 30; I have now set it back down to 10 to see what happens.

 Rob

 Preparing for shut down after XXX requests occurs when Squid receives
 its proper shutdown signal. A clean/graceful shutdown then follows.

 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6







Re: [squid-users] Squid exitin periodicaly ( Preparing for shut down after )

2009-03-16 Thread ROBIN
I will have a look; the basic config file has been in use for about 10
years with no major issues. (God, I feel old now thinking about that.)

Will examine and post the manager ACLs


Rob


On Tue, 2009-03-17 at 10:59 +1200, Amos Jeffries wrote:
 You could also check what type of controls you have around the 'manager'
 ACL in squid.conf. Every visitor with an allow line before the deny
 manager line may have the option to restart Squid with an HTTP request.
 
 Amos
 
 
 
 



Re: [squid-users] Squid approach to C10K problem

2009-03-16 Thread Kinkie
On Mon, Mar 16, 2009 at 6:08 PM, Roy M. setesting...@gmail.com wrote:
 Hi,

 Just out of curiosity, how does squid, as a single-threaded server,
 handle massive amounts of clients nowadays?

 http://www.kegel.com/c10k.html

 Especially since some others (http://varnish.projects.linpro.no/) say squid
 is old-fashioned and doesn't honor approaches such as epoll/kqueue,
 which are famous for handling many clients with minimal effort.

Squid uses epoll and kqueue on Linux and FreeBSD, and has been doing
so for some time.
Squid 2.7 also supports Solaris' /dev/poll and MS Windows' Overlapped
I/O with Completion Ports; squid 3.x supports neither.
I suspect that the Varnish documentation is a bit out of date.

 But I know it is not true; I have been using Squid for years and think
 squid is very efficient and fast enough to serve massive clients. So
 I want to know if there is any hidden secret behind it?

It's hard to have any secrets when the code is in the open :P
Squid is reasonably efficient. It can be - and hopefully is being -
still improved.

 What are you guys' comments on those issues (c10k/epoll etc.)?

Some of the most obvious issues have been addressed. There's still
lots to do tho.

-- 
/kinkie


Re: [squid-users] Squid approach to C10K problem

2009-03-16 Thread Amos Jeffries
 Hi,

 Just out of curiosity, how does squid, as a single-threaded server,
 handle massive amounts of clients nowadays?

 http://www.kegel.com/c10k.html

 Especially since some others (http://varnish.projects.linpro.no/) say squid
 is old-fashioned and doesn't honor approaches such as epoll/kqueue,
 which are famous for handling many clients with minimal effort.

 But I know it is not true; I have been using Squid for years and think
 squid is very efficient and fast enough to serve massive clients.
 So I want to know if there is any hidden secret behind it?

 What are you guys' comments on those issues (c10k/epoll etc.)?

 Thank you.


(FYI: this is just my opinion here, not any other developers)

There really is no accurate direct comparison. varnish is a niche
reverse-proxy, Squid is a general caching proxy.

The varnish write-up about Squid appears to be based on the obsolete 2.5
or such. Squid has had epoll/kqueue since 2.6, when built to enable them.
Squid-3.1 now operates in a fully async model which avoids event-driven
bottlenecks.

The fantastic new cache management system varnish goes on about being its
cornerstone has an almost exact equivalent in the Squid COSS cache_dir type,
which is extremely fast for small objects but not recommended for large
objects. (A problem often reported about varnish IME.)
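
For reference, a 2.7 COSS cache_dir looks something like this (the sizes
here are illustrative assumptions, not tuning advice):

cache_dir coss /var/spool/squid/coss 4000 max-size=131072 block-size=512

COSS packs objects into a single cyclically-written file, so a small-object
hit costs roughly one disk read.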

There are still some legacy design issues, but we are steadily working at
eliminating those on the way to full SMP and HTTP/1.1 support.

We don't exactly have high-end hardware to test on, so we aim mostly at
cracking the C1K/C5K problems for the low-end stuff we have, with the hope
that it scales up towards 10K on faster hardware.

The C10K problem in a general proxy is hampered by network lag times to
remote servers and obfuscated sites keeping hit ratios around 50%. Varnish is
aimed at the reverse-proxy market, where an 80% hit ratio is considered bad.
For those setups Squid performance rises dramatically (about an order of
magnitude), though we don't have accurate benchmarks with current Squid to
use for direct comparison to anything.

In the last year I've seen graphs from a few Squid that are pushing
50MB/sec at 40-50% hit ratio forward-proxy under 500 requests/sec. Are
you sure you want Squid to push C10K ;)

The closest thing to a 'trick' we have is 100% non-blocking code running
async on a 'greedy' CPU algorithm to get as much of each transaction
finished as possible before anything needs to be switched/paged for the
next transaction.

Amos




Re: [squid-users] squid SNMP acl

2009-03-16 Thread Chris Woodfield
You can use them together, but you can't bind squid to the standard  
SNMP UDP/161 port if snmpd is also bound to that port.


In my setup, I have snmpd configured to proxy requests for squid's MIB  
to squid, which is listening on localhost:1610:


squid.conf:
acl snmpcommunity snmp_community foobar
snmp_port 1610
snmp_access allow snmpcommunity localhost
snmp_access deny all

Then in /etc/snmpd.conf:

proxy -m /usr/local/squid/share/mib.txt -v 1 -c foobar localhost:1610 .1.3.6.1.4.1.3495.1


This will save you from having to configure a custom port in your snmp  
queries.
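
For anyone wanting to check their own box, a query then goes to snmpd on the
standard port, e.g. (assuming the net-snmp tools and the hypothetical
'foobar' community above):

snmpwalk -v 1 -c foobar localhost .1.3.6.1.4.1.3495.1

snmpd relays the squid subtree to localhost:1610 behind the scenes.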


-C

On Mar 16, 2009, at 11:13 AM, Merdouille wrote:



May I use squid and snmpd together?

Can Squid respond directly to SNMP requests?







Re: [squid-users] restart url_redirector processe when it dies

2009-03-16 Thread Amos Jeffries
 To elaborate, squid should restart new url_rewrite_program instances
 when the number of live children falls to <= 50% of the configured
 number. So once 8 processes out of 15 die, squid should launch a whole
 new set of 15. You'll then have 23 url_rewriter processes, but squid
 will launch 15 more when they die and only 7 are left. The circle of
 life continues...

PS: yes that is a known bug. Squid should only start the 8 dead ones back.
It's on the queue for fixing one day.


 In squid 2.7, if you don't want to wait for half your helpers to die
 before squid launches more, you can adjust the threshold in
 helper.c:helperServerFree() by changing the following line:

 if (hlp->n_active <= hlp->n_to_start / 2) {

 And changing the right side of the evaluation from / 2 to, say, * 0.75
 or similar.

 This can be used as a somewhat ghetto way of dealing with a rewriter
 that has a memory leak - put a dead-man's counter into the rewriter
 that causes the process to exit after X number of requests, and let
 squid launch new ones as needed. (Not that I've ever done anything
 like that myself, nosiree...)
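
 A minimal sketch of such a counter, assuming the 2.x rewriter protocol of
 one request per line on stdin and the rewritten URL (or a blank line for
 no change) on stdout; the 10000 limit is an arbitrary assumption:

 #!/bin/sh
 # pass every request through unchanged, but exit cleanly after a
 # fixed number of requests so squid starts a fresh helper
 count=0
 while read url rest; do
     echo ""                     # blank line = leave the URL unchanged
     count=$((count + 1))
     [ "$count" -ge 10000 ] && exit 0
 done

 The real rewrite logic would replace the echo; the counter is the only
 addition.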


Hmm, inspiration for a new idea and explanation of a bug in one post.
Thank you very much.

Amos

 -C

 On Mar 15, 2009, at 5:14 AM, Amos Jeffries wrote:

 Dieter Bloms wrote:
 Hi,
 I use an url_rewrite_program, which seems to die after about 40
 requests.
 Squid starts 15 processes, which are enough, but after some time one
 process after another dies and in the end all processes are gone.
 Is it possible to let squid restart an url_rewrite_program when it
 dies?

 What version of Squid are you using that does not do this restart
 automatically?
 Squid only dies when ALL helpers for a needed service are dying too
 fast to recover quickly.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6







Re: [squid-users] squid SNMP acl

2009-03-16 Thread Amos Jeffries

 May I use squid and snmpd together?

 Can Squid respond directly to SNMP requests?


You may use any tool you like to contact Squid for SNMP data.
As long as the tool can handle SNMPv2 non-bulk request/response it is
expected to do fine.
The catch that needs v2 is that some tables, e.g. of client IP addresses,
are not always incrementally increasing values.

Amos




Re: [squid-users] SquidNT cache_peer authentication/encryption/failover

2009-03-16 Thread Amos Jeffries

 Hi everyone,

 Here is my goal:
 I would like for my laptop users to be routed through my public squid
 proxy
 so that when they are anywhere, including in a hotel/coffee shop/etc,
 their
 traffic is coming through us.

 After asking around and researching some options, one interesting option
 includes using squid in a way I had never thought of.

 The setup I saw had one Squid server as a standard Internet facing proxy
 server (nothing special about that). The interesting thing was they also
 had
 installed Squid locally on the laptop clients and ran Squid using the
 cache_peer option to forward all traffic to the standard Squid server
 (this
 is somewhat guesswork because the setup I saw is closed source, but I had
 some clues that points me in this direction).

 NOTE:
 SquidNT (2.7.STABLE5 on server, 2.7.STABLE6 on clients)
 Windows XP SP3 on clients
 Windows 2003 Server for the server role
 Clients all have proxy settings set to localhost on port 3128
 Server is listening on port 80
 This is an AD domain, all settings pushed by GPOs.
 Users running with user rights, and machines are fairly locked down (can't
 touch the options in IE)
 server - internet facing Squid server running the normal proxy service
 client - laptop computers running Squid and forwarding all requests to the
 server using cache_peer

 I have gotten both the Squid server and Squid client up and running, with
 a
 few problems/questions.

 1. I have the forwarding working in some manner. The problem I have is I
 have the public facing server set up to require authentication. I have
 tried the login=user:password option for cache_peer on the client side,
 still with no luck. When looking at the access logs on the server, the
 Squid client isn't passing the credentials supplied to the Squid server
 (i.e. see below)

 1237037364.556  0 68.117.163.156 TCP_DENIED/407 1804 GET
 http://yahoo.com/ - NONE/- text/html

It's likely your squid is only sending basic authentication back to the
peer. Squid does not do NTLM etc between peers. You will need to add basic
as one of the auth methods last on the list.
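
Something like this on the server side (a sketch; the mswin_auth.exe helper
path is an assumption for the Windows build):

auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
auth_param ntlm children 5
# basic offered last: browsers still negotiate NTLM first, while the
# peer's login=user:password credentials arrive as basic and are accepted
auth_param basic program c:/squid/libexec/mswin_auth.exe
auth_param basic children 5
auth_param basic realm proxy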


 2. I am also looking for an option in squid.conf that essentially says if
 you can't reach the cache_peer, then go direct for this query. Hopefully
 this will resolve the problems I have been having in hot spots with captive
 portals. I have looked at always_direct and never_direct, but after reading,
 it doesn't look like either of those is what I am looking for. I would expect
 this to allow the authentication to the captive portal, but as soon as the
 cache_peer was reachable, for all traffic to be redirected to it.

Setting cache_peer as a parent cache with the default option, plus
prefer_direct off, should do what you want.
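
For example, on the laptop side (a sketch; the hostname and port are
placeholders):

cache_peer proxy.example.com parent 80 0 no-query default login=user:password
prefer_direct off
# 'default' makes the peer the preferred route while it is alive;
# once squid marks it dead, requests can fall back to going direct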


 3. Lastly, I am interested in using SSL to encrypt the traffic going from
 the client to the server. I am using the SSL version of SquidNT
 (http://squid.acmeconsulting.it/download/squid-2.7.STABLE6-bin-SSL.zip),
 so
 the options are available, I just have not found any good documentation on
 how to set this up (specifically, what has to be configured on the server
 side; the client settings seem pretty straightforward, and the stuff I have
 found is always for setting up SSL redirects/etc. for web servers using
 acceleration mode).

Search the squid wiki: http://wiki.squid-cache.org


 If I need to provide any further information, please let me know. Thanks
 for
 any help/suggestions, have a good one!

 Josh


 Before I came to trying Squid in this manner, here are the other ways I
 tried to proxy users' sessions.

 1. A simple proxy setting in Internet Explorer to point all traffic back
 through the public proxy. This fails in the following way: say a user is at
 a hot spot with a captive portal that requires authentication. This sets up
 a problem where the browser can't get to the proxy server because of the
 captive portal, and can't authenticate to the captive portal because of the
 proxy settings (and bypass proxy server for local addresses won't work
 because a lot of captive portals authenticate to non-local addresses).
 2. PAC files - I thought this was going to be the answer, but unfortunately
 this leaves too many holes open. The problem is that when the user first
 logs on to a hot spot, the browser can't talk to the proxy (because the user
 isn't authenticated yet to the captive portal), so everything goes direct
 (unfiltered, not good). The user has to close and then reopen the
 browser before the session is directed to the proxy (again, not
 good).

 NOTE: All this is worthless in some situations, as some hot spots don't
 allow proxy connections (even if they are going to a proxy on port 80).

 Here is my client squid.conf (without commenting for brevity ;)
 WELCOME TO SQUID 2.7.STABLE6
 auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
 auth_param ntlm children 5
 auth_param ntlm keep_alive on
 external_acl_type win_domain_group ttl=300 %LOGIN
 

Re: [squid-users] Config suggestion

2009-03-16 Thread Amos Jeffries
 Hardware (only running Squid):

 # cat /proc/cpuinfo  | egrep -i xeon | uniq
 model name  : Intel(R) Xeon(R) CPU   E5405  @ 2.00GHz
 # cat /proc/cpuinfo  | egrep -i xeon | wc -l
 8

 # free -m
              total       used       free     shared    buffers     cached
 Mem:         32148       2238      29910          0        244        823
 -/+ buffers/cache:   1169  30978
 Swap:15264  0  15264

 # lsscsi
 [0:0:0:0]diskMAXTOR   ATLAS15K2_73WLS  JNZH  /dev/sda
 [0:0:1:0]diskSEAGATE  ST3300655LW  0003  /dev/sdb
 [0:0:4:0]diskSEAGATE  ST3146807LC  0007  /dev/sdc
 [3:0:0:0]diskSEAGATE  ST3300655SS  0004  /dev/sdd
 [3:0:1:0]diskSEAGATE  ST3300655SS  0004  /dev/sde

 # fdisk -l | grep GB
 Disk /dev/sda: 73.5 GB, 73557090304 bytes
 Disk /dev/sdb: 300.0 GB, 3000 bytes
 Disk /dev/sdc: 146.8 GB, 146815737856 bytes
 Disk /dev/sdd: 300.0 GB, 3000 bytes
 Disk /dev/sde: 300.0 GB, 3000 bytes

 # lspci | grep -Ei 'sas|scsi'
 04:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET
 PCI-Express Fusion-MPT SAS (rev 04)
 06:02.0 SCSI storage controller: Adaptec ASC-29320LP U320 (rev 03)


 # uname -srm
 Linux 2.6.27.7 x86_64

 Squid:

 # squid -v
 Squid Cache: Version 3.0.STABLE13
 configure options:  '--bindir=/usr/bin' '--sbindir=/usr/sbin'
 '--libexecdir=/usr/libexec' '--datadir=/usr/share/squid'
 '--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--includedir=/usr/include'
 '--mandir=/usr/man' '--localstatedir=/var' '--enable-async-io'
 '--with-pthreads' '--enable-xmalloc-statistics' '--enable-storeio=aufs'
 '--enable-removal-policies' '--enable-err-languages=English Portuguese'
 '--enable-linux-netfilter' '--disable-wccp' '--disable-wccpv2'
 '--disable-ident-lookups' '--enable-snmp' '--enable-kill-parent-hack'
 '--enable-delay-pools' '--enable-follow-x-forwarded-for'
 '--with-large-files' '--with-filedescriptors=65536' 'CFLAGS=
 -march=native'
 'CXXFLAGS= -march=native'

 # cat /etc/squid/squid.conf | grep -E 'cache_(mem|dir)'
 cache_mem 8192 MB
 cache_dir aufs /var/cache/proxy/cache1 102400 16 256
 cache_dir aufs /var/cache/proxy/cache2 102400 16 256
 cache_dir aufs /var/cache/proxy/cache3 102400 16 256
 cache_dir aufs /var/cache/proxy/cache4 102400 16 256
 cache_dir aufs /var/cache/proxy/cache5 102400 16 256
 cache_dir aufs /var/cache/proxy/cache6 102400 16 256
 cache_dir aufs /var/cache/proxy/cache7 102400 16 256
 cache_dir aufs /var/cache/proxy/cache8 102400 16 256


 # cat /etc/fstab  | grep proxy
 /dev/vg00/cache  /var/cache/proxy ext3defaults 1   2


 Yes, I know, LVM, ext3 and aufs are bad ideas... I'm particularly
 interested in a better cache_dir configuration (maximizing disk usage)
 and the correct cache_mem parameter for this hardware (and any other
 possible/useful tips)

You have 5 physical disks by the looks of it. Best usage of those is to
split the cache_dir one per disk (sharing a disk leads to seek clashes).

I'm not too up on the L1/L2 efficiencies, but 64 256 or a higher L1 seems
to be better for larger dir sizes.
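
As a sketch against the disk list above (the mount points and sizes are
assumptions, leaving headroom on each filesystem):

cache_dir aufs /cache/sdb 250000 64 256
cache_dir aufs /cache/sdc 120000 64 256
cache_dir aufs /cache/sdd 250000 64 256
cache_dir aufs /cache/sde 250000 64 256

with /dev/sda left for the OS and logs.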

For a quad or higher CPU machine, you may do well to have multiple Squid
running (one per 2 CPUs or so): one squid doing the caching on the 300GB
drives and one on the smaller ~100GB drives (to get around a small bug
where mismatched AUFS dirs cause starvation in the small dir), peered together
with the proxy-only option to share info without duplicating cache.

Keep memory swapping to an absolute minimum.

Amos




Re: [squid-users] Cache digest question

2009-03-16 Thread Amos Jeffries
 Hi,

 I'm looking into setting up cache peering - I currently have small
 sets of reverse-proxy squids sitting behind a load balancer, with no
 URI hashing or other content-based switching in play (thanks to a nice
 bug/feature in Foundry's IOS that prevents graceful rehashing when
 new servers are added to a VIP..) So I'm looking at other ways to
 scale horizontally our cache capacity (and increase hit rates as I go)
 - so cache-peering in proxy-only mode seems to be a good solution

 Due to various reasons, it's looking like cache digests are going to
 be the best way to go in our environment (Option #2 is multicast, but,
 ew). However, one big question I have is this - are cache digests
 intended to replace, or to supplement, normal ICP cache query behavior?

I believe it's replace. Though I may be wrong. I have not seen both in
action together yet.


 For example, let's say squid A and squid B exchange cache digests
 every 10 minutes. squid A has just retrieved a cache digest from squid
 B, and then gets a new request for an object one minute after the
 cache exchange. One minute later (8 minutes before the next digest
 exchange), squid A gets a request for the same URL. This object is a
 local miss to squid A, but it is in-cache for squid B, although it's not
 in the latest digest that squid A has received from B.

 Will squid A either 1. Do a normal ICP query to squid B due to the
 fact that it's a cache miss, or 2. Presume that squid B doesn't have
 the object since it wasn't in the last digest, and retrieve it itself?
 In other words, do digest exchanges preclude ICP queries for object
 requests that are local cache misses and are not in the most-recent
 cache digests that a squid has received?

 Personally, I'm hoping the answer is #1, as #2 can easily result in
 duplicated content between the squids, which is exactly what I'm
 trying to avoid here.

A 2-layer CARP mesh is the 'standard' topology recommended for this,
since Wikipedia had such success with it: the underlayer does all the
caching, and the load-balancing Squid overlayer splits requests across
the underlayer using CARP.

Amos




Re: [squid-users] Cache digest question

2009-03-16 Thread Chris Woodfield


On Mar 16, 2009, at 9:07 PM, Amos Jeffries wrote:


Hi,

I'm looking into setting up cache peering - I currently have small
sets of reverse-proxy squids sitting behind a load balancer, with no
URI hashing or other content-based switching in play (thanks to a nice
bug/feature in Foundry's IOS that prevents graceful rehashing when
new servers are added to a VIP..) So I'm looking at other ways to
scale our cache capacity horizontally (and increase hit rates as I go)
- so cache-peering in proxy-only mode seems to be a good solution

Due to various reasons, it's looking like cache digests are going to
be the best way to go in our environment (Option #2 is multicast, but,
ew). However, one big question I have is this - are cache digests
intended to replace, or to supplement, normal ICP cache query behavior?


I believe it's replace. Though I may be wrong. I have not seen both in
action together yet.



Answered my own question with some lab testing - cache digests are
*supplemental* to normal ICP behavior. When receiving a URL request
that's an internal miss, squid will look up cache digests first, then do
an ICP query, then go direct. This is the behavior I was hoping it
would have :) It even works with multicast ICP, which was a pleasant
surprise.


The mgr:peer_select page even gives you a nice statistic as to how many
queries were cache-digest hits vs. ICP hits:


...
Algorithm usage:
Cache Digest:2390 ( 62%)
Icp: 1457 ( 38%)
Total:   3847 (100%)
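
For anyone wanting to check their own boxes, that report comes from the
cache manager; with the bundled client it is simply:

squidclient mgr:peer_select

run against the local squid (assuming the default port).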






A 2-layer CARP mesh is the 'standard' topology recommended for this,
since Wikipedia had such success with it: the underlayer does all the
caching, and the load-balancing Squid overlayer splits requests across
the underlayer using CARP.



I was really hoping I could do this with our existing load balancers,  
but Foundry boned the pony on their content-hashing functionality -  
there's no way to do a graceful hash redistribution when adding a  
new real server to the pool.



Amos






Re: [squid-users] restart url_redirector processe when it dies

2009-03-16 Thread Chris Woodfield


On Mar 16, 2009, at 8:13 PM, Amos Jeffries wrote:


To elaborate, squid should restart new url_rewrite_program instances
when the number of live children falls to <= 50% of the configured
number. So once 8 processes out of 15 die, squid should launch a whole
new set of 15. You'll then have 23 url_rewriter processes, but squid
will launch 15 more when they die and only 7 are left. The circle of
life continues...


PS: yes that is a known bug. Squid should only start the 8 dead ones back.
It's on the queue for fixing one day.



In squid 2.7, if you don't want to wait for half your helpers to die
before squid launches more, you can adjust the threshold in
helper.c:helperServerFree() by changing the following line:

if (hlp->n_active <= hlp->n_to_start / 2) {

And changing the right side of the evaluation from / 2 to, say, * 0.75
or similar.

This can be used as a somewhat ghetto way of dealing with a rewriter
that has a memory leak - put a dead-man's counter into the rewriter
that causes the process to exit after X number of requests, and let
squid launch new ones as needed. (Not that I've ever done anything
like that myself, nosiree...)



Hmm, inspiration for a new idea and explanation of a bug in one post.
Thank you very much.


Not a problem - I guess you can consider this a feature request to  
make that threshold tunable in squid.conf :)


-C







Re: [squid-users] Don't log clientParseRequestMethod messages

2009-03-16 Thread Amos Jeffries

Herbert Faleiros wrote:

Is there a way to avoid logging clientParseRequestMethod: Unsupported method
in request... messages in my cache.log?


No, it's a debug log, and those messages are important/useful for tracking
bad clients in your traffic.


What unknown methods is it recording?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


[squid-users] Large-scale Reverse Proxy for serving images FAST

2009-03-16 Thread David Tosoff

All,

I'm new to Squid and I have been given the task of optimizing the delivery of 
photos from our website. We have 1 main active image server which serves up the 
images to the end user via 2 chained CDNs. We want to drop the middle CDN as 
it's not performing well and is a waste of money; in its stead we plan to 
place a few reverse proxy web accelerators between the primary CDN and our 
image server.

We currently receive 152 hits/sec on average, with about 550 hits/sec max to our 
secondary CDN from cache misses at the Primary.
I would like to serve a lot of this content straight from memory to get it out 
there as fast as possible.

I've read around that there are memory and processing limitations in Squid on 
the order of 2-4GB RAM and 1 core/1 thread, respectively. So, my solution 
was to run multiple instances, as we don't have the rackspace to scale this out 
otherwise.

I've managed to build a working config of 1:1 squid:origin, but I am having 
trouble scaling this up and out.

Here is what I have attempted to do, maybe someone can point me in the right 
direction:

Current config:
User Browser - Prim CDN - Sec CDN - Our Image server @ http port 80

New config idea:
User - Prim CDN - Squid0 @ http :80 - round-robin to parent squid 
instances on same machine @ http :81, :82, etc - Our Image server @ http :80


Squid0's (per diagram above) squid.conf:

acl Safe_ports port 80
acl PICS_DOM_COM dstdomain pics.domain.com
acl SQUID_PEERS src 127.0.0.1
http_access allow PICS_DOM_COM
icp_access allow SQUID_PEERS
miss_access allow SQUID_PEERS
http_port 80 accel defaultsite=pics.domain.com
cache_peer localhost parent 81 3130 name=imgCache1 round-robin proxy-only
cache_peer localhost parent 82 3130 name=imgCache2 round-robin proxy-only
cache_peer_access imgCache1 allow PICS_DOM_COM
cache_peer_access imgCache2 allow PICS_DOM_COM
cache_mem 8192 MB
maximum_object_size_in_memory 100 KB
cache_dir aufs /usr/local/squid0/cache 1024 16 256  -- This one isn't really 
relevant, as nothing is being cached on this instance (proxy-only)
icp_port 3130
visible_hostname pics.domain.com/0

Everything else is per the defaults in squid.conf.


Parent squids' (from above diagram) squid.conf:

acl Safe_ports port 81
acl PICS_DOM_COM dstdomain pics.domain.com
acl SQUID_PEERS src 127.0.0.1
http_access allow PICS_DOM_COM
icp_access allow SQUID_PEERS
miss_access allow SQUID_PEERS
http_port 81 accel defaultsite=pics.domain.com
cache_peer 192.168.0.223 parent 80 0 no-query originserver name=imgParent
cache_peer localhost sibling 82 3130 name=imgCache2 proxy-only
cache_peer_access imgParent allow PICS_DOM_COM
cache_peer_access imgCache2 allow PICS_DOM_COM
cache_mem 8192 MB
maximum_object_size_in_memory 100 KB
cache_dir aufs /usr/local/squid1/cache 10240 16 256
visible_hostname pics.domain.com/1
icp_port 3130
icp_hit_stale on

Everything else per defaults.



So, when I run this config and test I see the following happen in the logs:

From Squid0 I see that it resolves to grab the image from one of its parent 
caches. This is great! (some show as TIMEOUT_FIRST_UP_PARENT and others as 
just FIRST_UP_PARENT)

1237253713.769 62 127.0.0.1 TCP_MISS/200 2544 GET 
http://pics.domain.com:81/thumbnails/59/78/45673695.jpg - 
TIMEOUT_FIRST_UP_PARENT/imgParent image/jpeg

From the parent cache that it resolves to, I see that it grabs the image from 
ITS parent, the originserver (our image server). Subsequent requests are 
'TCP_HIT' or mem hit. Great stuff!

1237253713.769 62 127.0.0.1 TCP_MISS/200 2694 GET 
http://pics.domain.com/thumbnails/59/78/45673695.jpg - 
FIRST_PARENT_MISS/imgCache1 image/jpeg


Problem is, it doesn't round-robin the requests to both of my parent squids, 
and you end up with a very one-sided cache. If I stop the parent instance that 
is resolving the items, the second parent doesn't take over either. If I then 
proceed to restart the Squid0 instance, it will then direct the requests to 
the second parent, but then the first won't receive any requests. So I know 
both parent configs work, but I must be doing something wrong somewhere, or 
is this all just a silly idea...?


Can anyone comment on the best way to run a high-traffic set of accel cache 
instances similar to this, or how to fix what I've tried to do? Or another way 
to put a LOT of data into a squid instance's memory. (We have ~150 million x 2KB 
images that are randomly requested.)
I'd like to see different content cached on each instance with little or no 
overlap, with round-robin deciding which squid gets to cache an item and ICP 
handling which squid has that item.

I'm open to other ideas too..
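
One variation worth sketching against the peer lines above (untested here):
replace round-robin+ICP with CARP on the frontend, which hash-partitions the
URL space so each parent caches a disjoint subset:

cache_peer localhost parent 81 0 name=imgCache1 carp no-query proxy-only
cache_peer localhost parent 82 0 name=imgCache2 carp no-query proxy-only

With carp the frontend derives the parent from a hash of the URL, so no ICP
lookup is needed to find which parent holds an object, and the parents never
duplicate content.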

Sorry for the long email.

Thanks all!

David

