[squid-users] WCCP Routing

2008-01-23 Thread Dave Raven
Hi all,
Is it possible to send the reply back out through the router that sent in
the WCCP-redirected packet to begin with? For example, if you have two routers,
and router A sends request A and router B sends request B, can you send the
replies back through their originating routers, regardless of your default
route etc., so that B sticks with B and A with A?

Thank you
Dave



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-14 Thread Dave Raven
I have seen the error messages before, but not during these tests. diskd 
definitely seems to delay the time-till-crash by a lot - as I understand it the 
problems in diskd are crashes under high load, not that it slows things down, right?

Thanks for the help
Dave

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of John Moylan
Sent: Wednesday, November 14, 2007 12:39 PM
To: Dave Raven
Subject: Re: [squid-users] Squid Performance (with Polygraph)

Doesn't diskd have a bug whereby it has issues under heavy load?
http://www.squid-cache.org/bugs/show_bug.cgi?id=761 . If so, I am
surprised that it is behaving best under heavy load.
http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html

J



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-14 Thread Dave Raven
Hi Tek,
I've had to make several modifications to the standard setup to get it 
to handle the actual requests coming in; the cache (without disks) can 
maintain around 1800RPS now - of course I don't expect the disks to ever get 
that high. 

I'm running 4.11, the relevant kernel tweaks are -- 

options SMP 
options APIC_IO

options MSGMNB=32768
options MSGMNI=160
options MSGSEG=2048
options MSGSSZ=256
options MSGTQL=8192

options MAXDSIZ=(1536*1024*1024)
options DFLDSIZ=(1536*1024*1024)

maxusers 1024

In loader.conf --
kern.ipc.nmbclusters=32768

That leaves me with a per-process memory limit of 1.5 GB, enough memory for diskd, 
and a netstat -m that looks like this:

258/1040/131072 mbufs in use (current/peak/max):
258 mbufs allocated to data
256/1018/32768 mbuf clusters in use (current/peak/max)


As for sysctl tunables --

vfs.vmiodirenable=1 
kern.ipc.maxsockbuf=2097152 
kern.ipc.somaxconn=8192 
kern.ipc.maxsockets=16424 
kern.maxfiles=65536 
kern.maxfilesperproc=32768 
net.inet.tcp.rfc1323=1
net.inet.tcp.delayed_ack=0 
net.inet.tcp.sendspace=32768 
net.inet.tcp.recvspace=65535
net.inet.ip.portrange.last=44999
net.inet.ip.portrange.hifirst=45000
net.inet.tcp.keepidle=15000
net.inet.tcp.keepintvl=5000
net.inet.tcp.keepinit=6
net.inet.tcp.msl=6000


To sum up the above, I have increased my maxfiles, changed the send/receive 
space, and increased the ports available to squid. I've also modified the 
timeout and MSL settings for TCP to get it to drop FIN_WAIT, TIME_WAIT, etc. 
sessions that were wasting ports. 
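
A quick way to confirm the running system actually picked those values up
(just the sysctl names used above):

sysctl kern.maxfiles kern.maxfilesperproc net.inet.tcp.sendspace net.inet.tcp.recvspace
netstat -m    # watch current/peak mbuf and cluster usage during a run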

From my experience, I'm almost certain the diskd crash is an actual crash and 
not a slowdown.


Thanks 
Dave




-Original Message-
From: Tek Bahadur Limbu [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 14, 2007 1:48 PM
To: Dave Raven
Cc: 'John Moylan'; 'squid-users'
Subject: Re: [squid-users] Squid Performance (with Polygraph)

Hi Dave,

Dave Raven wrote:
 I have seen the error messages before, but not during these tests. diskd 
 definitely seems to delay the time-till-crash by a lot - as I understand it 
 the problems in diskd are crashes under high load, not that it slows it down 
 right?

 From my experience, YES, DISKD crashes under high load but does not 
actually slow Squid down. It slows Squid initially while rebuilding 
its cache after the crash, but it recovers quickly enough not to hamper 
performance.
Only under certain circumstances will it cause the cache to go beyond 
repair, and then the only way out is to wipe out the cache and rebuild it 
from scratch.

The frequency of the DISKD crashes also seems to vary a lot, from multiple 
crashes a day to a single crash every week or two.

 From your earlier posts, since all your tests lasted from 10 minutes 
to 18 hours, maybe the DISKD crash simply did not appear during that time.

Your FreeBSD 4.x version might also have made a difference!

Can you post the FreeBSD 4.x kernel parameters that you compiled with for 
your testing?


Thanking you...




 
 Thanks for the help
 Dave
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of John Moylan
 Sent: Wednesday, November 14, 2007 12:39 PM
 To: Dave Raven
 Subject: Re: [squid-users] Squid Performance (with Polygraph)
 
 Doesn't diskd have a bug whereby it has issues under heavy load.
 http://www.squid-cache.org/bugs/show_bug.cgi?id=761 . If so, I am
 surprised that it is behaving best under heavy load.
 http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html
 
 J
 
 
 
 


-- 

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np

http://teklimbu.wordpress.com



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-14 Thread Dave Raven
Hi Adrian,
Will do - I'll set up PolyMix-4 tomorrow and try starting with a full
cache. Something interesting though - my processor usage never really gets
over 50% or so (SMP or single processor) until it crashes; but with SMP,
800RPS lasts 200+ minutes, and without it only 80 minutes...

Thanks
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 14, 2007 1:51 PM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

What you may need to do is run the tests at lower req/sec rates to find out
where it's stable; or actually run PolyMix-4 properly.

Disk caches - UFS to a large extent, COSS somewhat - take a while to reach
a 'steady state'. With UFS (which I think you're using here, right?) you end
up initially laying down objects in some kind of linear fashion on disk, then
over time some objects are replaced and others aren't; you end up with
no temporal locality and various types of fragmentation.

Furthermore, if you're using datacomm-1 then you should know that the working
set increases without bounds over time - squid writes all of the cachable
objects to disk even though there's no chance of reading them back.

Try stopping the test and restarting squid but don't clear out the cachedir;
see if the performance takes that long to drop or whether it drops
immediately.
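
Something like this, roughly (assuming squid is on the PATH and squid.conf
still points at the filled cache_dir):

squid -k shutdown    # stop the running squid cleanly
# do NOT run squid -z here; leave the existing cache contents in place
squid                # start again; it rebuilds its index from the existing store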

I'd love to help out more but my only test environments right now have one
IDE disk in them; the most disk-plentiful array I have is a Sun E250,
so it's difficult to replicate results :)





Adrian

On Wed, Nov 14, 2007, Dave Raven wrote:
 Hi Adrian and all,
  Sorry it's taken me so long to get back but I wanted to be sure I had all
 my ducks in a row. I couldn't find a program to graph what we needed, so I
 wrote a small script to log results - I've now done multiple tests, with
 multiple disk types etc and settled on using diskd as it lasts by far the
 longest. I'm graphing tps (transactions per second) on the drives and
there
 is no plateau but a steady rise. 
 
 Right now the best configuration I have (from my tests) is 2xLSI MegaRaid
 SCSI U320-1 controllers, with 4 Seagate Cheetah 15K.5 drives on each of
 them. In this configuration it lasts longer than the SATA controller, but
 exhibits the exact same behavior. 
 
 With 800RPS my CPU usage never goes over 50%. When I start the cache my
TPS
 across all the drives starts at about 400, and remains there for a few
 minutes - 10-20 probably. After that it begins a steep climb, which later
 flattens out a bit. This is the pattern seen in all of my tests (only the
 time and total tps differ). 
 
 From my reading it would appear that the original steep climb is because
 buffers become full - I wonder why it wouldn't write to the drives at full
 speed to start with if needed, but this part I understand I guess. The
 confusing part is why after that does it slowly and steadily rise -
forever.
 
 
 For example, my 800RPS test runs for 240 minutes (graph of the TPS is
 attached to email), until it reaches the max tps the controller/drives
seem
 to be able to handle, and fails. If I do it at 600RPS it lasts 350
minutes,
 also climbing though until it fails. At 1200 RPS it fails after 30
minutes. 
 
 What is causing it to constantly climb - surely if it was a queue or build
 up of some type it's not reasonable to assume it might take  300 minutes
to
 actually break ? If that was the case it obviously has more steam in the
 first 100 minutes so why not utilize it?  
 
 I have graphs available for all the tests, and I can arrange any
 stats/figures/configs etc needed - even access. I have run it at 1200 RPS
 for 18 hours with a null cache directory and it did not fail, so it's
 definitely disk drive handling (I guess) - but 350 minutes before it
 actually fails, and a slow, steady, predictable pattern? If it was
 plateau'ing like originally suggested I'd agree it's obviously hitting a
 limit - but my rawio tests show each drive is capable of 450 random
 writes/reads per second which is far higher than its doing.
 
 Thanks
 Dave
 
 
 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, November 10, 2007 12:13 AM
 To: Dave Raven
 Cc: 'Adrian Chadd'; squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid Performance (with Polygraph)
 
 On Fri, Nov 09, 2007, Dave Raven wrote:
  Hi Adrian,
  
  It works for the full 4 hours with a null cache directory. How would
  I see any kind of stats/information on disk IO? From the stats I can see
 so
  far, the disk stats don't change at all when it fails ...
 
 That'd be because you're probably maxing the disk system out early on and
 what you're seeing is a slowly-growing disk service queue?
 
  I'm currently using COSS, but I've also tried this with ufs and diskd
 (with
  the same results, just different times that it fails after).
 
 COSS should handle small object loads better but some

RE: [squid-users] Squid Performance (with Polygraph)

2007-11-14 Thread Dave Raven
I've been testing quite a few combinations; for this one I had hyperthreading
disabled in the BIOS, and SMP enabled on BSD - so 2 processors. If I enable
hyperthreading it's 4. 

I understand that squid would favour only one processor, yet with SMP on it
lasts 3x longer. My guess would be that it's because the diskd processes
are able to use more than one processor? Except their cpu usage never goes
over ~3%...

Thanks
Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 15, 2007 3:07 AM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid Performance (with Polygraph)

On ons, 2007-11-14 at 14:29 +0200, Dave Raven wrote:

   Will do - I'll setup polymix-4 tomorrow and try starting on a full 
 cache. Something interesting though - my processor usage never really 
 gets over 50% or so (SMP or single processor) until it crashes; but 
 with SMP 800RPS lasts 200+ minutes, and without it only 80 minutes...

How many processors is reported by the kernel (including hyperthreading)?

Squid is only a single thread, so if you have more than one processor pipe
then Squid will only use one of them.

Regards
Henrik



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-13 Thread Dave Raven
Hi Adrian and all,
 Sorry it's taken me so long to get back but I wanted to be sure I had all
my ducks in a row. I couldn't find a program to graph what we needed, so I
wrote a small script to log results - I've now done multiple tests, with
multiple disk types etc and settled on using diskd as it lasts by far the
longest. I'm graphing tps (transactions per second) on the drives and there
is no plateau but a steady rise. 

Right now the best configuration I have (from my tests) is 2xLSI MegaRaid
SCSI U320-1 controllers, with 4 Seagate Cheetah 15K.5 drives on each of
them. In this configuration it lasts longer than the SATA controller, but
exhibits the exact same behavior. 

With 800RPS my CPU usage never goes over 50%. When I start the cache my TPS
across all the drives starts at about 400, and remains there for a few
minutes - 10-20 probably. After that it begins a steep climb, which later
flattens out a bit. This is the pattern seen in all of my tests (only the
time and total tps differ). 

From my reading it would appear that the original steep climb is because
buffers become full - I wonder why it wouldn't write to the drives at full
speed to start with if needed, but this part I understand, I guess. The
confusing part is why, after that, it slowly and steadily rises - forever.


For example, my 800RPS test runs for 240 minutes (graph of the TPS is
attached to email), until it reaches the max tps the controller/drives seem
to be able to handle, and fails. If I do it at 600RPS it lasts 350 minutes,
also climbing though until it fails. At 1200 RPS it fails after 30 minutes. 

What is causing it to constantly climb - surely if it was a queue or build-up
of some type it's not reasonable to assume it might take over 300 minutes to
actually break? If that was the case it obviously has more steam in the
first 100 minutes, so why not utilize it?

I have graphs available for all the tests, and I can arrange any
stats/figures/configs etc needed - even access. I have run it at 1200 RPS
for 18 hours with a null cache directory and it did not fail, so it's
definitely disk drive handling (I guess) - but 350 minutes before it
actually fails, and a slow, steady, predictable pattern? If it was
plateau'ing like originally suggested I'd agree it's obviously hitting a
limit - but my rawio tests show each drive is capable of 450 random
writes/reads per second, which is far higher than it's doing.

Thanks
Dave


-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Saturday, November 10, 2007 12:13 AM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

On Fri, Nov 09, 2007, Dave Raven wrote:
 Hi Adrian,
 
   It works for the full 4 hours with a null cache directory. How would
 I see any kind of stats/information on disk IO? From the stats I can see
so
 far, the disk stats don't change at all when it fails ...

That'd be because you're probably maxing the disk system out early on and
what you're seeing is a slowly-growing disk service queue?

 I'm currently using COSS, but I've also tried this with ufs and diskd
(with
 the same results, just different times that it fails after).

COSS should handle small object loads better but some have reported little
benefit beyond a pair of COSS disks.

Try graphing the aggregate disk and per-disk transactions/second; I bet
you'll find it plateau'ing relatively quickly early on.




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
attachment: 800RPS-SCSI.jpg

RE: [squid-users] How can I do this??

2007-11-13 Thread Dave Raven
You could use refresh_pattern to force everything to be cached...
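
For example, something along these lines (numbers and options are only an
illustration, not a recommended policy; min/max are in minutes and the rules
are matched top-down, so put more specific patterns first):

# force long lifetimes on everything, even objects the server marks stale
refresh_pattern -i . 10080 100% 43200 override-expire override-lastmod ignore-reload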

-Original Message-
From: Robert Collins [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 14, 2007 3:37 AM
To: murrah boswell
Cc: squid-users
Subject: Re: [squid-users] How can I do this??

On Tue, 2007-11-13 at 18:28 -0700, murrah boswell wrote:
 Hello,
 
 I have asked this recently, but have still not figured it out, so 
 please excuse me for asking it again.
 
 I am trying to setup Squid to only allow one user through to the Web and 
   configure it so all other users only have access to information 
 stored in the cache.
 
 I am using squid-2.6.STABLE16 on a single server, so there are no 
 siblings relationships.
 
 The idea is to use wget and a special privileged user to fetch pages 
 from the Web and store them in the cache for other users in the system.
 
 Can this be done, and if so, how?

To some degree. miss_access can be used to stop other users accessing data;
however you may find uncachable data will make the users see many errors.
(things like web dots tend to be uncachable).
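
Roughly like this, for example (the address is only an illustration, and it
assumes the usual 'all' acl is defined):

acl fetcher src 192.168.0.10   # the one privileged client
miss_access allow fetcher
miss_access deny all           # everyone else only gets what is already cached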

-Rob
--
GPG key available at: http://www.robertcollins.net/keys.txt.



RE: [squid-users] How can I do this??

2007-11-13 Thread Dave Raven
What about using offline mode - if it's educational I assume you could
probably just download after hours? Turn it off when you download with
wget, and put it back on afterwards?
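
The toggle itself is a single squid.conf directive, e.g. (switch it and apply
with squid -k reconfigure):

# while users browse: serve from cache, don't revalidate with origin servers
offline_mode on
# while the after-hours wget pre-fetch runs, flip it back:
offline_mode off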

-Original Message-
From: murrah boswell [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 14, 2007 4:04 AM
To: squid-users
Subject: Re: [squid-users] How can I do this??

Hello,


 
 To some degree. miss_access can be used to stop other users accessing
 data; however you may find uncachable data will make the users see many
 errors. (things like web dots tend to be uncachable).

I know that miss_access must be part of the puzzle to allow my 
privileged user to go out to the Web on misses, but using miss_access where I 
allow only my privileged user raises "Relaying Denied" error conditions 
for the other users.

Once I get the system working, I will address the issue of uncachable 
data. I am developing a system for an educational environment, so I 
believe that most of the needed data will be static and cachable.


Thanks,
Murrah Boswell



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-09 Thread Dave Raven
Hi all,
Okay I managed to do a lot more testing at the office today. Firstly
some of the questions asked --

CPU Usage: The cpu usage is around 30% during the test, when the unit begins
to fail it actually goes down a bit. 

Mbufs/Clusters: All fine - they do rise quickly after the problem happens,
but this is because the established network connections are still coming in
at 600 a second, but only being satisfied at a rate of, say, 200 a second. The
send queues then get big, and mbuf usage goes up - this is not the cause of
the failure though, it's a side effect. For the first x minutes it's between
250 and 3000 mbufs (and clusters) used, and my max is 65k/32k.

As for system logs there are none - there is nothing suspicious anywhere
until the side effects kick in, e.g. mbufs running out etc. Squid also logs
nothing at all. I've also checked if I'm using too much memory and that's
not the case - swap is not used at all during the entire test. 

This is the process of what happens --

1. PolyClt + PolySrv begin, 800 RPS. 

2. ESTABLISHED netstat connections are around 2000 once 800RPS is reached
(about 20 seconds). CPU load is 30%, mbufs are available etc.

3. Once memory becomes full (quickly) disk drive usage begins - squid -z
puts the TPS per drive at well over 1000/s when I run it, when the cache is
doing 800 RPS the tps is about 30 per drive (low..). 

4. After a period of time (almost always the same (+/- 60 seconds) depending
on RPS) the ESTABLISHED connections start rising, at the exact same time the
PolyClt starts showing less RPS. This is the slow down as such. 

5. Because of this, polyclt continues to send requests which the unit
continues to accept - quickly all available sockets are used, and the unit
will then crash

Interestingly enough though - if I stop the polyclt when this happens and
restart it - in under 10 seconds - it continues on for another x minutes
without problem. If I leave it running the unit never comes right.

I have used systat -vmstat 1, systat -tcp 1, systat -iostat 1 and all
the stats from Munin, and a MRTG graphing config for squid and they all show
nothing of interest. The only result that changes between working time and
slow down is that the connections go through the roof as explained above...

I have also seen it fail at 300RPS, but only after 82 minutes - which seems
like a very long time if it was going to fail because of disk load. The
entire time the disks are very underloaded. That said, if I use a null cache
directory this doesn't happen

I know that sounds like it's clearly the drives - but 82 minutes??

Thanks for all the help
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 09, 2007 11:55 AM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

Check netstat -mb and see if you're running out of mbufs?
You haven't mentioned whether the CPU is being pegged at this point?



Adrian

On Fri, Nov 09, 2007, Dave Raven wrote:
 Hi all,
   Okay I've done some of what you requested, and unfortunately failed
 to find anything specific. I can pretty much guarantee the times at which
 the requests will slow down now. 600RPS = 15 minutes, 800 RPS = 11
minutes,
 400 RPS = ~80 minutes. 
 
 During that time (before and during the problem) systat -vmstat 1 shows
the
 same interrupts - about 4000 on em1 (ifac) and 250 on hptmv0 - my
controller
 for the SATA drives. 
 
 If I use a systat -iostat 1 I can see that none of the drives are 100%
 utilized at any time during the test. Systat -tcp 1 also doesn't show me
 anything out of the ordinary. I have setup munin to monitor the host but
 unfortunately its not showing much. 
 
 Also the problem is that when the problem begins, it starts filling up
 network connections - once it fills all the available ports nothing can
 monitor it :/
 
 I'm going to try use a different network card, then a different
motherboard
 etc - try some different setups today. Thanks again for all the help and
 please let me know if anyone has any ideas...
 
 Thanks
 Dave
 
 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Friday, November 09, 2007 4:08 AM
 To: Dave Raven
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid Performance (with Polygraph)
 
 On Thu, Nov 08, 2007, Dave Raven wrote:
  Hi Adrian,
   What would cause it to fail after a specific time though - if the
 cache_mem
  is already full and its using the drives? I would have thought it would
 fail
  immediately ? 
  
  Also there are no log messages about failures or anything...
 
 Who knows :) its hard without having remote access, or lots of logging/
 statistics to correlate the trouble times with.
 
 Try installing munin and graph all the system-specific stuff. See what
 correlates against the failure time. You might notice something, like
 out of memory/paging, or an increase in interrupts, or something

RE: [squid-users] Squid Performance (with Polygraph)

2007-11-09 Thread Dave Raven
Hi Adrian,

It works for the full 4 hours with a null cache directory. How would
I see any kind of stats/information on disk IO? From the stats I can see so
far, the disk stats don't change at all when it fails ...

I'm currently using COSS, but I've also tried this with ufs and diskd (with
the same results, just different times that it fails after).

Thanks
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 09, 2007 3:35 PM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

Rightio; this reads like you're running out of disk IO.
Try running the test with a null cache dir and make sure the box can handle
that load.

Squid unfortunately has crap disk IO code for what's available these days.




Adrian

On Fri, Nov 09, 2007, Dave Raven wrote:
 Hi all,
   Okay I managed to do a lot more testing at the office today. Firstly
 some of the questions asked --
 
 CPU Usage: The cpu usage is around 30% during the test, when the unit
begins
 to fail it actually goes down a bit. 
 
 Mbufs/Clusters: All fine - they do rise quickly after the problem happens
 but this is because the established network connections are still coming
in
 600 a second, but only being satisfied at a rate of say 200 a second. The
 send queues then get big, and mbuf usage goes up - this is not the cause
of
 the failure though, it's a side effect. For the first x minutes its
between
 250 and 3000 mbufs (and clusters) used, and my max is 65k/32k
 
 As for system logs there are none - there is nothing suspicious anywhere
 until the side effects kick in, e.g. mbufs running out etc. Squid also
logs
 nothing at all. I've also checked if I'm using too much memory and that's
 not the case - swap is not used at all during the entire test. 
 
 This is the process of what happens --
 
 1. PolyClt + PolySrv begin, 800 RPS. 
 
 2. ESTABLISHED netstat connections are around 2000 once 800RPS is reached
 (about 20 seconds). CPU load is 30%, mbufs are available etc.
 
 3. Once memory becomes full (quickly) disk drive usage begins - squid -z
 puts the TPS per drive at well over 1000/s when I run it, when the cache
is
 doing 800 RPS the tps is about 30 per drive (low..). 
 
 4. After a period of time (almost always the same (+/- 60 seconds)
depending
 on RPS) the ESTABLISHED connections start rising, at the exact same time
the
 PolyClt starts showing less RPS. This is the slow down as such. 
 
 5. Because of this, polyclt continues to send requests which the unit
 continues to accept - quickly all available sockets are used, and the unit
 will then crash
 
 Interestingly enough though - if I stop the polyclt when this happens and
 restart it - in under 10 seconds - it continues on for another x minutes
 without problem. If I leave it running the unit never comes right.
 
 I have used systat -vmstat 1, systat -tcp 1, systat -iostat 1 and
all
 the stats from Munin, and a MRTG graphing config for squid and they all
show
 nothing of interest. The only result that changes between working time and
 slow down is that the connections go through the roof as explained
above...
 
 I have also seen it fail at 300RPS, but only after 82 minutes - which
seems
 like a very long time if it was going to fail because of disk load. The
 entire time the disks are very underloaded. That said, if I use a null
cache
 directory this doesn't happen
 
 I know that sounds like its clearly drives - but 82 minutes ??
 
 Thanks for all the help
 Dave
 
 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Friday, November 09, 2007 11:55 AM
 To: Dave Raven
 Cc: 'Adrian Chadd'; squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid Performance (with Polygraph)
 
 Check netstat -mb and see if you're running out of mbufs?
 You haven't mentioned whether the CPU is being pegged at this point?
 
 
 
 Adrian
 
 On Fri, Nov 09, 2007, Dave Raven wrote:
  Hi all,
  Okay I've done some of what you requested, and unfortunately failed
  to find anything specific. I can pretty much guarantee the times at
which
  the requests will slow down now. 600RPS = 15 minutes, 800 RPS = 11
 minutes,
  400 RPS = ~80 minutes. 
  
  During that time (before and during the problem) systat -vmstat 1 shows
 the
  same interrupts - about 4000 on em1 (ifac) and 250 on hptmv0 - my
 controller
  for the SATA drives. 
  
  If I use a systat -iostat 1 I can see that none of the drives are 100%
  utilized at any time during the test. Systat -tcp 1 also doesn't show me
  anything out of the ordinary. I have setup munin to monitor the host but
  unfortunately its not showing much. 
  
  Also the problem is that when the problem begins, it starts filling up
  network connections - once it fills all the available ports nothing can
  monitor it :/
  
  I'm going to try use a different network card, then a different
 motherboard
  etc - try some different setups today. Thanks

[squid-users] Squid Performance (with Polygraph)

2007-11-08 Thread Dave Raven
Hi all, 
I'm busy testing a squid box with 8xSATA drives, 4gig of DDRII
memory and 2x 2.6gig dual core processors. I'm using the basic datacomm test
from polygraph. I've configured 6 of the drives to use COSS, and the other
two diskd (I've also done basic ufs tests). During all of the tests below
I'm using 96mb for cache memory...
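
For reference, the cache_dir layout is along these lines (paths and sizes
here are illustrative, not the exact test config):

cache_mem 96 MB
# six COSS stripes for the small objects
cache_dir coss /cache1/coss 4096 max-size=524288 block-size=512
# ...and the same for /cache2 through /cache6...
# two diskd dirs for anything bigger
cache_dir diskd /cache7 16384 16 256
cache_dir diskd /cache8 16384 16 256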

If I run 600RPS the unit handles it fine, for about 23 minutes - at that
stage network connections start rising very quickly (and eventually running
out) and responses on the polygraph client slow down until it dies. If I use
1200 RPS it happens after about 10 minutes. 

Then if I use a single IDE drive (just using ufs), at 600RPS it handles fine
for about 14 minutes at which stage it dies.

My main question is what is it that's causing the connections to rise. To me
it's that the responses are taking longer to fulfill - the reason for this I
assumed would be the disk drive. But how is the IDE drive going for so long?
96mb of memory is filling up way faster than that, and I can see it
accessing the drive. The transfers per second and transfer speeds on the
drives don't change when it begins to fail, and neither do any real squid
stats...

I've also tested just having a 500mb cache on one IDE drive, filling it
first and then doing this - it also lasts just as long (having to delete
files as well etc)...

Any idea what's happening at that stage? 

Thanks for the help
Dave



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-08 Thread Dave Raven
Hi, 
I've been looking for a way to do the profiling, but I'm stuck with
FreeBSD 4 - any ideas? Cache_mem is at 96mb; it's almost definitely getting
filled immediately - I've also tried setting it to 8 just to be sure, no
difference...

It's a bit difficult to graph -- disk IO I can see with iostat, it seems to
stay the same even after my slow down period...

Thanks
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 08, 2007 5:17 PM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

Do some system-level profiling runs (oprofile under Linux, dtrace under
Solaris) during the fill phase, the disk intensive phase and the disk
overload phase. Are you graphing statistics? Can you graph stuff like
CPU, swapping/paging, disk IO?

What's cache_mem set to?




Adrian

On Thu, Nov 08, 2007, Dave Raven wrote:
 Hi all, 
   I'm busy testing a squid box with 8xSATA drives, 4gig of DDRII
 memory and 2x 2.6gig dual core processors. I'm using the basic datacomm
test
 from polygraph. I've configured 6 of the drives to use COSS, and the other
 two diskd (I've also done basic ufs tests). During all of the tests below
 I'm using 96mb for cache memory...
 
 If I run 600RPS the unit handles it fine, for about 23 minutes - at that
 stage network connections start rising very quickly (and eventually
running
 out) and responses on the polygraph client slow down until it dies. If I
use
 1200 RPS it happens after about 10 minutes. 
 
 Then if I use a single IDE drive (just using ufs), at 600RPS it handles
fine
 for about 14 minutes at which stage it dies.
 
 My main question is what is it that's causing the connections to rise. To
me
 it's that the responses are taking longer to fulfill - the reason for this
I
 assumed would be the disk drive. But how is the IDE drive going for so
long?
 96mb of memory is filling up way faster than that, and I can see it
 accessing the drive. The transfers per second and transfer speeds on the
 drives don't change when it begins to fail, and neither do any real squid
 stats...
 
 I've also tested just having a 500mb cache on one IDE drive, filling it
 first and then doing this - it also lasts just as long (having to delete
 files as well etc)...
 
 Any idea what's happening at that stage? 
 
 Thanks for the help
 Dave

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-08 Thread Dave Raven
Hi Adrian,
I've got diskd configured to be used for objects over 500k - the
datacomm run is all 13K objects so essentially it's doing nothing.
Interestingly though I see the same stuff if I use ufs only, or just diskd. 

I am using kqueue - I will try to get you stats on what that shows. If I
push it too far (1800 RPS) I can see squid visibly failing - error messages,
too much drive load etc. But at 1200RPS it runs fine for over 10 minutes - I'd
really like to get this solved as I think there is potential for a lot more
performance.

I've just run a test now at 300RPS and it failed after 80 minutes -- very
weird...

I'll try to get you all the stats I can tomorrow morning 

Thanks again for the help
Dave


-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 08, 2007 5:37 PM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

On Thu, Nov 08, 2007, Dave Raven wrote:
 Hi, 
   I've been looking for a way to do the profiling, but I'm stuck with
 FreeBSD 4 - any ideas? Cache_mem is at 96mb, its almost definitely getting
 filled immediately - I've also tried setting it to 8 just to be sure, no
 difference...

Hm. FreeBSD-4 doesn't have pmc, but pmc is proving to be a bit useless when
profiling high-syscall-throughput applications. tsk.

 It's a bit difficult to graph -- disk IO I can see with iostat, it seems
to
 stay the same even after my slow down period...

I'll assume you're running with kqueue. I'd run systat -vmstat 1 under FreeBSD
and watch all the key values, see what peaks.

Also, are you using diskd when you're not using COSS?




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-08 Thread Dave Raven
Hi Adrian,
 What would cause it to fail after a specific time though - if the cache_mem
is already full and it's using the drives? I would have thought it would fail
immediately?

Also there are no log messages about failures or anything...

Thanks
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 08, 2007 8:05 PM
To: Dave Raven
Cc: 'Adrian Chadd'; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

On Thu, Nov 08, 2007, Dave Raven wrote:
 Hi Adrian,
   I've got diskd configured to be used for objects over 500k - the
 datacomm run is all 13K objects so essentially it's doing nothing.
 Interestingly though I see the same stuff if I use ufs only, or just
diskd. 

Ok.

 I am using kqueue - I will try to get you stats on what that shows. If I
 push it too far (1800 RPS) I can see squid visibly failing - error
messages,
 too much drive load etc. But at 1200RPS it runs fine for  10 minutes -
I'd
 really like to get this solved as I think there is potential for a lot of
 performance.
 
 I've just run a test now at 300RPS and it failed after 80 minutes -- very
 weird...

Well, firstly rule out the disk subsystem. Configure a null cache_dir and say
128mb RAM. Run Squid and see if it falls over.
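
i.e. something along the lines of (assuming squid was built with the null
store type included via --enable-storeio):

cache_dir null /tmp
cache_mem 128 MB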

There's plenty of reasons the disk subsystem may be slow, especially if the
hardware chipsets are commodity in any way. But Squid won't get you more than
about 80-120 req/sec out of commodity hard disks, perhaps even less if you
start trying to use modern enormous disks.


Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



RE: [squid-users] Squid Performance (with Polygraph)

2007-11-08 Thread Dave Raven
Hi all,
Okay I've done some of what you requested, and unfortunately failed
to find anything specific. I can pretty much guarantee the times at which
the requests will slow down now. 600RPS = 15 minutes, 800 RPS = 11 minutes,
400 RPS = ~80 minutes. 

During that time (before and during the problem) systat -vmstat 1 shows the
same interrupts - about 4000 on em1 (ifac) and 250 on hptmv0 - my controller
for the SATA drives. 

If I use a systat -iostat 1 I can see that none of the drives are 100%
utilized at any time during the test. Systat -tcp 1 also doesn't show me
anything out of the ordinary. I have setup munin to monitor the host but
unfortunately its not showing much. 

Also the problem is that when the problem begins, it starts filling up
network connections - once it fills all the available ports nothing can
monitor it :/

I'm going to try use a different network card, then a different motherboard
etc - try some different setups today. Thanks again for all the help and
please let me know if anyone has any ideas...

Thanks
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 09, 2007 4:08 AM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Performance (with Polygraph)

On Thu, Nov 08, 2007, Dave Raven wrote:
 Hi Adrian,
  What would cause it to fail after a specific time though - if the
cache_mem
 is already full and its using the drives? I would have thought it would
fail
 immediately ? 
 
 Also there are no log messages about failures or anything...

Who knows :) its hard without having remote access, or lots of logging/
statistics to correlate the trouble times with.

Try installing munin and graph all the system-specific stuff. See what
correlates against the failure time. You might notice something, like
out of memory/paging, or an increase in interrupts, or something. ;)

Thats all I can offer at the present time, sorry.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



[squid-users] Same Domain Caching

2007-10-22 Thread Dave Raven
Hi all,
Is there a way to assume that anything under a certain domain is
similar across servers? For example, www.youtube.com videos come from
various servers --

1191839044.533  53841 10.10.108.250 TCP_MISS/200 1770189 GET
http://sjc-v180.sjc.youtube.com/get_video? - DIRECT/64.15.120.171 video/flv
1140
1191917902.678 610481 10.10.100.198 TCP_MISS/200 9465378 GET
http://v194.youtube.com/get_video? - DIRECT/208.65.154.167 video/flv 1068

And so on.. if I force caching on video/flv files it should make for good
caching of the content, but 100 users viewing a video could all go to
different servers to get it - meaning instead of getting 100 hits I get 100x
the content size in cache?

Is there a way around this?

Thanks for the help
Dave



RE: [squid-users] Same Domain Caching

2007-10-22 Thread Dave Raven
I had considered doing that as well - using a redirector to match on
youtube.com/get_video, but then I'll need to save those to disk and
manage them myself as opposed to using squid's method. Is there a successful
use of ETags for something like this / is it worth looking at? As I
understand it that's essentially their point (if used correctly)?

Something else that might be worth looking into (for me) is what's after the
? - I suppose that if it in some way identifies the video I could rewrite
the url to be the one I know is cached (e.g. the first ever request for it).
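
As a rough sketch of that idea (everything here is illustrative: the
url_rewrite_program hook, the made-up helper path, the hypothetical host
v1.youtube.com, and the guess that any of the video servers will serve any
get_video query string):

#!/bin/sh
# read "URL ip/fqdn ident method" lines from squid, write back one URL per line
while read url rest; do
  case "$url" in
    http://*.youtube.com/get_video\?*)
      # collapse the server-specific hostnames onto one host so repeat
      # requests for the same video share a single cache entry
      echo "http://v1.youtube.com/get_video?${url#*\?}"
      ;;
    *)
      echo "$url"
      ;;
  esac
done

and in squid.conf:

url_rewrite_program /usr/local/etc/squid/youtube_rewrite.sh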

Thanks again for the help
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 22, 2007 9:38 AM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Same Domain Caching

On Mon, Oct 22, 2007, Dave Raven wrote:
 Hi all,
   Is there a way to assume that anything under a certain domain is
 similar across servers? For example, www.youtube.com videos come from
 various servers --
 
 1191839044.533  53841 10.10.108.250 TCP_MISS/200 1770189 GET
 http://sjc-v180.sjc.youtube.com/get_video? - DIRECT/64.15.120.171
video/flv
 1140
 1191917902.678 610481 10.10.100.198 TCP_MISS/200 9465378 GET
 http://v194.youtube.com/get_video? - DIRECT/208.65.154.167 video/flv 1068
 
 And so on.. if I force caching on video/flv files it should make for good
 caching of the content, but 100 users viewing a video could all go to
 different servers to get it - meaning instead of getting 100 hits I get
100x
 the content size in cache?
 
 Is there a way around this?

There have been a few attempts at it but no one yet seems to have implemented
what I've suggested. Someone recently posted how he does it via log
post-processing and rewriter rules.





Adrian



[squid-users] tcp_recv_bufsize and performance

2007-10-03 Thread Dave Raven
Hi all, 
I've been doing some high performance testing with squid (2.6) and
if you use enough hardware the problem shifts to being with network
connections (for me at least). Above around 300 RPS on a unit with latency
on both sides and many clients you start to chew up network buffers (on
BSD). The maximum value for mbuf clusters is around 32k and by reducing a
lot of the timeouts and flushing connections faster you can directly affect
the amount of mbuf clusters being used (it would appear). 

Without those changes, the cache hits the 32k mbuf cluster limit pretty
quickly, netstat -na has about 10,000 lines in it of TIME_WAIT, ESTABLISHED
etc. 

My question is actually to do with tcp_recv_bufsize - can this cause network
buffer overuse? If no value is specified it uses the maximum the OS
allows, but the example is 500 bytes, which is significantly less than my
maximum receive buffer of 65535. 

What effect does tcp_recv_bufsize have on performance - what's a reasonable
value, will it help to use fewer mbufs, and will a smaller value degrade
performance?

Or are there other tunables that would do a better job of reducing it?
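
For reference, the directive I mean is just this (the value shown is an
arbitrary example, not a recommendation):

# 0 (the default) means use the operating system's buffer size; a smaller
# value caps the kernel receive buffer on each client socket
tcp_recv_bufsize 16384 bytes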

Thanks for the help
D



[squid-users] FTP Proxy ?

2006-12-13 Thread Dave Raven
Hi, 
 Is squid able to properly proxy ftp - e.g. support uploads and
authentication through a web browser (like IE), or just downloads? I did do
some googling but most of my findings were older 

Thanks in advance
Dave



[squid-users] Strange Mbuf Problem

2006-09-18 Thread Dave Raven
Hi all,
Are there any known bugs or config problems etc that might be able
to cause mbufs on a FreeBSD box to be completely utilised? This is not under
high load - it's during the down time, and they are completely used within 5
minutes, whereas the cache has been running under much higher load all day.

I believe I have narrowed it down to specific traffic - a machine behind the
cache was mirroring content - getting only 206 messages (partial content).
Could this be the cause?

Thanks in advance
Dave 



[squid-users] Failure Ratio?

2006-08-30 Thread Dave Raven
Hi all, 
 I have a strange problem with the Failure Ratio messages --

2006/08/30 02:41:09| Failure Ratio at 1.37
2006/08/30 02:41:09| Going into hit-only-mode for 5 minutes...
2006/08/30 02:46:14| Failure Ratio at 1.46
2006/08/30 02:46:14| Going into hit-only-mode for 5 minutes...
2006/08/30 02:51:20| Failure Ratio at 1.52

This has occurred on a cache of ours and continued to throughout the night.
However, upon restarting it everything began to work fine again. 

If there was some real failure, would it not have automatically resumed
fetching requests? Given that after manually restarting it the cache worked
fine, we can assume the link was not down at that stage? It also has no
peers...

Thanks in advance
Dave



RE: [squid-users] Need To Bind ICP To Specific Interface

2005-11-13 Thread Dave Raven
 udp_incoming_address is correct; I don't know why it's breaking. Can you
send us a config file?

You could of course just deny access to the port on the other cards, through
some other mechanism, for an easy fix...
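
For reference, the relevant squid.conf fragment is just (address is a
placeholder):

# bind the ICP socket to the ICP-LAN interface
udp_incoming_address 192.168.10.1
# leave udp_outgoing_address at its default (255.255.255.255) so replies
# go out the same socket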

-Original Message-
From: Vadim Pushkin [mailto:[EMAIL PROTECTED] 
Sent: 11 November 2005 11:10 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Need To Bind ICP To Specific Interface

hello again;

I have built three squid proxy servers, 2.5.STABLE11, on Solaris.  All
three machines have two network interfaces (Gbit) each.  The first
interface is used for connections to/from my clients, as well as to the
Internet (allowed by a firewall rule).  The second interface I am trying to
use to establish an ICP LAN.  So far I have been able to get all three
machines talking to each other via this interface, but I have not been able
to lock down ICP so that it listens on just one interface, i.e.:

# netstat -a -f inet | grep 3130
  *.3130                 Idle

If I tweak my squid.conf to use the second interface's IP address under
udp_incoming_address, then I can see that it is listening on that address,
but then all requests to the proxy fail, ICP or not.

I have already created an acl for this interface as well.

Could someone please help?

Thank you,

.vp




RE: [squid-users] software caused connection abort

2005-11-13 Thread Dave Raven
Try 'debug_options'
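
Those accept-failure lines come out of cache.log via the debug facility, so
something like the following may quieten them (a guess: it assumes they are
logged from the comm/socket code in debug section 5 and the client-side code
in section 33, and lowering those sections also hides other, possibly useful,
errors):

debug_options ALL,1 5,0 33,0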

-Original Message-
From: Wojciech Puchar [mailto:[EMAIL PROTECTED] 
Sent: 12 November 2005 11:03 PM
To: squid-users@squid-cache.org
Subject: [squid-users] software caused connection abort

can such messages

Nov 12 22:01:47 hel squid[22265]: comm_accept: FD 8: (53) Software caused connection abort
Nov 12 22:01:47 hel squid[22265]: httpAccept: FD 8: accept failure: (53) Software caused connection abort


be disabled in logs?

it's quite useless; it looks like it's produced when someone presses stop in
the browser.

Disabling them would save many MB of logs daily :)



RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-11 Thread Dave Raven
Run squid under some sort of trace program - you'll need to see what's
causing it to crash... 
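
For example, just as a sketch (the binary path is whatever yours is; strace
on Linux, truss or ktrace on FreeBSD):

# run squid in the foreground (-N) under strace so the trace follows the
# process that actually dies
strace -f -tt -o /tmp/squid.trace /usr/local/squid/sbin/squid -N -d1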

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED] 
Sent: 11 November 2005 09:45 AM
To: Serassio Guido; Chris Robertson; squid-users@squid-cache.org
Subject: RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,

Webalizer is software that creates statistics from squid log files.

But even if I disable it, I don't see any difference. The restarts continue.

L.G.


-Original Message-
From: Serassio Guido [mailto:[EMAIL PROTECTED]
Sent: Friday, 11 November 2005 08:36
To: Chris Robertson; squid-users@squid-cache.org
Subject: RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

Hi,

At 19.53 10/11/2005, Chris Robertson wrote:
   0 0 * * * /etc/webmin/webalizer/webalizer.pl
  /cache_log/access.log
 
  What is the content of webalizer.pl ?
 
  Regards
 
  Guido
 
 

Does it matter? It only runs once per day (at midnight).

It's the only custom script related to squid present in crontab, so why
not check it while squid is still doing unexpected things? It's half a
minute's work.

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Dave Raven
Run some memory processor burn tests... E.g. 'memtest' and 'burnP6' 

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2005 09:37 AM
To: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,


Thanks for your help :

proxy1:~#  crontab  -l
0 0 * * * /etc/webmin/webalizer/webalizer.pl /cache_log/access.log
proxy1:~# more /etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
17 *    * * *   root    run-parts --report /etc/cron.hourly
1 0     * * *   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.daily
47 6    * * 7   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.weekly
52 6    1 * *   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.monthly

proxy1:~# ls /etc/cron.hourly/
proxy1:~#


The server is a Compaq DL580 (2x Xeon 700MHz, 1GB of RAM, RAID 5: 32GB),
running Debian.


L.G.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 9 November 2005 16:53
To: squid-users@squid-cache.org
Subject: Re: AW: [squid-users] Squid unreachable every hour and 6 minutes.

The "disk space is over limit" error is not saying the disk is full.  The
cache has reached the limit that's been set in the squid.conf file.
It could be causing squid to die, but how likely is it that this would be
the cause, if squid dies 6 minutes after every hour?

My suggestion is to check and see what cron jobs are running: 
cat /etc/crontab
or (as root) crontab -l, and then the crontabs of any other users that might
be running cron jobs.

If there's a timely pattern to the connectivity issue, the root of the
problem probably has something to do with a schedule for something.
Cron would be a good place to start.

On the "disk space is over limit" issue...
You really shouldn't have to tend to this.  Squid should use whatever
replacement policy was specified at compile time (I forget which one is the
default if none is specified) to remove old/unused cache objects in an
effort to free up space. However, if squid is trying to do this, and is
actively handling proxy requests at the same time, squid could be running
out of resources.  What specs do you have on this machine?  CPU/RAM/etc.
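
The knobs that control when that replacement kicks in are the swap
watermarks in squid.conf, e.g. (the percentages shown are the usual
defaults, quoted from memory):

cache_swap_low  90     # start replacing objects at 90% of the cache_dir size
cache_swap_high 95     # replace aggressively above 95%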

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



[EMAIL PROTECTED]
11/09/2005 09:45 AM

To
[EMAIL PROTECTED], [EMAIL PROTECTED],
squid-users@squid-cache.org cc

Subject
AW: [squid-users] Squid unreachable every hour and 6 minutes.






Please repeat that again.

(1) stop squid

(2) find out what cache directories squid uses, for example

   # grep cache_dir squid.conf
   cache_dir ufs  /data1/squid_cache 6000 32 512
   cache_dir ufs  /data2/squid_cache 1 32 512
   #

 In this example /data1/squid_cache and /data2/squid_cache are the cache
dirs.

(3) Clean all cache dirs - in this example:

   cd /data1/squid_cache
   rm -f *
   cd /data2/squid_cache
   rm -f *

(4) Create the cache structures again:   squid -z

(5) Start squid.
What happens?
Is squid running? ps -ef | grep squid
What does cache.log say since squid was started?
Is squid reachable?

(6) What happens after 1 hour and 6 minutes?

Werner Rost

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 9 November 2005 15:10
To: Dave Raven; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.


I already tried to :
- Stop Squid, delete swap.state, restart squid
- Stop Squid, format my cache parition, squid -z, start squid
- change cache_dir ufs /cache 5000 16 256 to cache_dir ufs /cache 100
16 256, squid -k restart.
- reboot completely the server

But nothing worked.




-Original Message-
From: Dave Raven [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 9 November 2005 14:58
To: Gix, Lilian (CI/OSR) *; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Try using my method posted earlier to search for core files. 
The fact that your log suddenly shows squid restarting means it died 
unexpectedly. If there is a core file it'll be squid's problem - if not, 
it's probably something else causing the problem.

Also, you could potentially try cleaning out your cache_dir... 
Remove everything and run squid -z to recreate it

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED]
Sent: 09 November 2005 03:32 PM
To: Mike Cudmore
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Great, thanks for your answer and questions:
 
1- I have a message from my browser (IE, Firefox) which says the proxy 
is unreachable. My MSN, yahoo messengers

RE: [squid-users] Getting error Too few basicauthenticator processes are running

2005-11-10 Thread Dave Raven
Run  '/usr/local/squid/libexec/ncsa_auth /usr/local/squid/etc/passwd'

Type   'USERNAME PASSWORD'

And see what it says - I suspect you won't get that far though. Once you try
to run it, it should give you an error.

-Original Message-
From: ads squid [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2005 09:40 AM
To: Chris Robertson; squid-users@squid-cache.org
Subject: RE: [squid-users] Getting error Too few basicauthenticator
processes are running

--- Chris Robertson [EMAIL PROTECTED] wrote:

  -Original Message-
  From: ads squid [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 09, 2005 3:42 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] Getting error Too few
 basicauthenticator
  processes are running
  
  
  Hi,
  I am trying to configure squid version squid-2.5.STABLE12 as 
  follows :
  
  [EMAIL PROTECTED] squid-2.5.STABLE12]# /usr/local/squid/sbin/squid 
  -NCd1
  
  
  I am getting following error 
  
  2005/11/09 18:03:40| Accepting HTTP connections at 0.0.0.0, port 
  3128, FD 15.
  2005/11/09 18:03:40| WCCP Disabled.
  2005/11/09 18:03:40| Ready to serve requests.
  2005/11/09 18:03:41| WARNING: basicauthenticator
 #1
  (FD 6) exited
  2005/11/09 18:03:41| WARNING: basicauthenticator
 #2
  (FD 7) exited
  2005/11/09 18:03:41| WARNING: basicauthenticator
 #3
  (FD 8) exited
  2005/11/09 18:03:41| Too few basicauthenticator processes are 
  running
  FATAL: The basicauthenticator helpers are crashing
 too
  rapidly, need help!
  
  Aborted
  
  
  
  I have configured squid with minimum options as
  follows:
  [EMAIL PROTECTED] squid-2.5.STABLE12]# ./configure
 

--enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,SASL,MSNT
  
  .
  
  Please help me to solve the problem.
  I want to use basic authentication.
  
  Thanks for support.
  
 
 What does your auth_param line look like?
 
 Chris
 

It looks like as following :


auth_param basic program
/usr/local/squid/libexec/ncsa_auth
/usr/local/squid/etc/passwd
###

Thanks for support.




__
Yahoo! FareChase: Search multiple travel sites in one click.
http://farechase.yahoo.com



RE: [squid-users] Urgent Samba / Squid NTLM Auth Problems

2005-11-09 Thread Dave Raven
Hi Abbas, 
Unfortunately we're still experimenting with ntlm_auth ourselves -
it would probably be best to ask the Samba user group your question. I
suspect your smb.conf may not be set up correctly...

Does anyone have any ideas on our problem below? Sorry to nag - we're
willing to try anything.

Thanks
Dave 

-Original Message-
From: Abbas Salehi [mailto:[EMAIL PROTECTED] 
Sent: 09 November 2005 12:22 PM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Urgent Samba / Squid NTLM Auth Problems

Dear sir

I followed all of your recommendations from the document step by step

I succeeded in joining the domain and Active Directory, and I can see the
domain users and groups

kinit command works properly,

net ads testjoin
Join is OK

net ads join administrator
Joined 'squid-server' to realm 'TEST.COM'

But ntlm_auth does not work properly,

I have following error when i run it :

ntlm_auth --username=administrator
password: **
NT_STATUS_CANT_ACCESS_DOMAIN_INFO: NT_STATUS_CANT_ACCESS_DOMAIN_INFO
(0xc0da)

When I run Squid and set the machine as the proxy, Squid asks for
authentication but does not accept the user.

When I browse some web pages, the dialog box comes up asking for user,
password and domain, but it does not accept them.

We have the following error in my logs:

Winbind :

[2005/10/30 14:02:11, 0] nsswitch/winbindd_util.c:get_trust_pw(1033)
  get_trust_pw: could not fetch trust account password for my domain
TEST.COM

Can anybody help me?

How can I solve this problem?

Regards
Abbas Salehi

- Original Message -
From: Dave Raven [EMAIL PROTECTED]
To: 'Serassio Guido' [EMAIL PROTECTED]; 'Ian Barnes'
[EMAIL PROTECTED]; squid-users@squid-cache.org
Sent: Tuesday, November 08, 2005 6:49 PM
Subject: RE: [squid-users] Urgent Samba / Squid NTLM Auth Problems


 Hi all,
 I'm currently working on this problem with Ian. It seems like
 ntlm_auth is handling the requests fine -

 [EMAIL PROTECTED] /usr/local/bin # ./ntlm_auth --username=ianb
 --configfile=/usr/local/etc/smb.conf
 password:
 NT_STATUS_OK: Success (0x0)

 It also works through squid when using wget

 [2005/11/08 17:15:09, 3] utils/ntlm_auth.c:check_plaintext_auth(292)
   NT_STATUS_OK: Success (0x0)

 Note that it says check_plaintext_auth though, when using a browser (e.g.
 IE) we see the following messages

 [2005/11/08 15:16:36, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(606)
   Got user=[IANB] domain=[MASTERMIND] workstation=[IANB] len1=24 len2=24
 [2005/11/08 15:16:37, 3] utils/ntlm_auth.c:winbind_pw_check(427)
   Login for user [EMAIL PROTECTED] failed due to [Wrong Password]

 Why is it using a different method? It seems like the problem only occurs
 when it doesn't use check_plaintext_auth.  Is there anything we can do to
 learn more?

 Thanks for all the help so far
 Dave










RE: [squid-users] Urgent Samba / Squid NTLM Auth Problems

2005-11-09 Thread Dave Raven
Okay I have an update with more progress - it seems the problem is only to
do with ntlmssp. If I only have a basic authenticator - which looks like the
following, it works perfectly:

auth_param basic program /usr/optec/ntlm_auth.sh basic
auth_param basic children 10
auth_param basic realm server.opteqint.net Cache NTLM Authentication
auth_param basic credentialsttl 2 hours

(ntlm_auth.sh runs the ntlm_auth squid-2.5-basic helper) 
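
A wrapper like that might be as simple as the following sketch (the
ntlm_auth path is an assumption; the argument selects the helper protocol):

#!/bin/sh
# "$1" is "basic" or "ntlmssp", matching the auth_param lines above
exec /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-"$1"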

I see the following debug messages:

[2005/11/09 13:20:43, 3] utils/ntlm_auth.c:check_plaintext_auth(292)
  NT_STATUS_OK: Success (0x0)


However, when I use ntlmssp in the squid config, shown below, it does not
work:

auth_param ntlm program /usr/optec/ntlm_auth.sh ntlmssp 
auth_param ntlm children 10 
auth_param ntlm use_ntlm_negotiate yes 

I see the following debug messages:
[2005/11/09 13:22:37, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(606)
  Got user=[ianb] domain=[MASTERMIND] workstation=[LUCY] len1=24 len2=24
[2005/11/09 13:22:37, 3] utils/ntlm_auth.c:winbind_pw_check(427)
  Login for user [EMAIL PROTECTED] failed due to [Wrong Password]


If I type ian instead of ianb, I see an error saying the user does not
exist. This must mean that somehow the password is being passed on
incorrectly - even though it is typed correctly. 

For anyone who hasn't read the rest of this thread please note: this only
happens with the security option on the AD server set to ONLY allow
NTLMv2/LMv2 and not anything else. If we turn that off it works perfectly...

As I understand it the password doesn't come to squid in plaintext when it's
using ntlmssp, and I believe that there is some kind of handling problem
with that now. If I type in the password on the command line with the
ntlm_auth program, it is able to validate it just fine using NTLMv2 -
reinforcing my belief that something is wrong here...

Any suggestions AT ALL would be appreciated...

Thanks
Dave








RE: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-09 Thread Dave Raven
Are there any .core files for squid?

find / -name squid.core -print


It seems like your cache is crashing for some unknown reason - are you
perhaps killing it every few hours somehow? It's highly unlikely that squid is
dying after EXACTLY 66 minutes. Also try using the latest version...

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED] 
Sent: 09 November 2005 02:45 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,


Really, nobody has an idea?

:(

L.G. 

-Original Message-
From: Gix, Lilian (CI/OSR) *
Sent: Mittwoch, 2. November 2005 10:26
To: squid-users@squid-cache.org
Subject: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,
 
I have a problem with my squid:
every hour and 6 minutes, it is unreachable for a few seconds.
 
Here is a part of Cache.log :
 

 
2005/11/02 09:44:45| Detected REVIVED Parent: virus2.com/8080/0
2005/11/02 10:07:05| Starting Squid Cache version 2.5.STABLE6 for
i386-debian-li
nux-gnu...
2005/11/02 10:07:05| Process ID 1430
2005/11/02 10:07:05| With 4096 file descriptors available
2005/11/02 10:07:05| DNS Socket created at 0.0.0.0, port 32772, FD 5
2005/11/02 10:07:05| Adding nameserver 193.108.217.70 from
/etc/resolv.conf
2005/11/02 10:07:05| User-Agent logging is disabled.
2005/11/02 10:07:05| Referer logging is disabled.
2005/11/02 10:07:05| Unlinkd pipe opened on FD 10
2005/11/02 10:07:05| Swap maxSize 102400 KB, estimated 7876 objects
2005/11/02 10:07:05| Target number of buckets: 393
2005/11/02 10:07:05| Using 8192 Store buckets
2005/11/02 10:07:05| Max Mem  size: 102400 KB
2005/11/02 10:07:05| Max Swap size: 102400 KB
2005/11/02 10:07:05| Local cache digest enabled; rebuild/rewrite
every 3600/3600
 sec
2005/11/02 10:07:05| Store logging disabled
2005/11/02 10:07:05| Rebuilding storage in /cache (DIRTY)
2005/11/02 10:07:05| Using Least Load store dir selection
2005/11/02 10:07:05| Set Current Directory to /cache_log/
2005/11/02 10:07:05| Loaded Icons.
2005/11/02 10:07:15| Accepting HTTP connections at 0.0.0.0, port
8080, FD 11.
2005/11/02 10:07:15| Accepting ICP messages at 0.0.0.0, port 3130,
FD 12.
2005/11/02 10:07:15| HTCP Disabled.
2005/11/02 10:07:15| WCCP Disabled.
2005/11/02 10:07:15| Configuring Parent 10.4.13.184/8080/0
2005/11/02 10:07:15| Ready to serve requests.
2005/11/02 10:07:15| Configuring Parent virus1.com/8080/0
2005/11/02 10:07:15| Configuring Parent virus2.com/8080/0
2005/11/02 10:07:15| Store rebuilding is  1.9% complete
2005/11/02 10:07:16| WARNING: newer swaplog entry for dirno 0,
fileno 031F
2005/11/02 10:07:16| WARNING: newer swaplog entry for dirno 0,
fileno 0371

2005/11/02 10:07:19| WARNING: newer swaplog entry for dirno 0,
fileno 39E4
2005/11/02 10:07:19| WARNING: newer swaplog entry for dirno 0,
fileno 39E8
2005/11/02 10:07:19| Done reading /cache swaplog (215057
entries)
2005/11/02 10:07:19| Finished rebuilding storage from disk.
2005/11/02 10:07:19|111709 Entries scanned
2005/11/02 10:07:19| 0 Invalid entries.
2005/11/02 10:07:19| 0 With invalid flags.
2005/11/02 10:07:19| 35086 Objects loaded.
2005/11/02 10:07:19| 0 Objects expired.
2005/11/02 10:07:19|  6194 Objects cancelled.
2005/11/02 10:07:19|  3443 Duplicate URLs purged.
2005/11/02 10:07:19| 70406 Swapfile clashes avoided.
2005/11/02 10:07:19|   Took 14.5 seconds (2411.4 objects/sec).
2005/11/02 10:07:19| Beginning Validation Procedure
2005/11/02 10:07:19|   Completed Validation Procedure
2005/11/02 10:07:19|   Validated 31666 Entries
2005/11/02 10:07:19|   store_swap_size = 400260k
2005/11/02 10:07:20| WARNING: Disk space over limit: 399480 KB > 102400 KB
2005/11/02 10:07:20| storeLateRelease: released 2 objects
2005/11/02 10:07:31| WARNING: Disk space over limit: 384820 KB > 102400 KB
2005/11/02 10:07:39| WARNING: 1 swapin MD5 mismatches
2005/11/02 10:07:42| WARNING: Disk space over limit: 377240 KB > 102400 KB
2005/11/02 10:07:53| WARNING: Disk space over limit: 369680 KB > 102400 KB
2005/11/02 10:08:04| WARNING: Disk space over limit: 351148 KB > 102400 KB
2005/11/02 10:08:15| WARNING: Disk space over limit: 340112 KB > 102400 KB
2005/11/02 10:08:26| WARNING: Disk space over limit: 320184 KB > 102400 KB
2005/11/02 10:08:37| WARNING: Disk space over limit: 309412 KB > 102400 KB
2005/11/02 10:08:41| clientProcessHit: Vary object loop!
2005/11/02 10:08:48| WARNING: Disk space over limit: 295500 KB > 102400 KB


RE: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-09 Thread Dave Raven
Try using my method posted earlier to search for core files. The fact that
your log suddenly shows squid restarting means it died unexpectedly. If
there is a core file it'll be squid's problem - if not, it's probably something
else causing the problem. 

Also, you could try cleaning out your cache_dir... Remove
everything and run squid -z to recreate it.
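
For example, something along these lines (using the /cache directory shown
in your log; adjust to whatever cache_dir points to in squid.conf):

squid -k shutdown
rm -rf /cache/*
squid -z
squid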

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED] 
Sent: 09 November 2005 03:32 PM
To: Mike Cudmore
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Great, thanks for your answer and questions:
 
1- I have a message from my browser (IE, Firefox) which says the proxy is
unreachable. My MSN and Yahoo messengers lose their access.
2- Ping, like all other services, still works perfectly. (SSH, Apache,
Ping, ...)
3- The cache.log part is in the previous mail. You can see that there is nothing
special between 09:44:45 and 10:07:05 (when squid comes back).
 
Thanks for help.
 
L.G.



From: Mike Cudmore [mailto:[EMAIL PROTECTED]
Sent: Mittwoch, 9. November 2005 14:10
To: Gix, Lilian (CI/OSR) *
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.


Lilian

I may have missed earlier entries in this thread so apologies if I ask
you to repeat any info.
 
 
1) How do you know squid is unreachable at these times ?
 
2) try pinging the host squid is running on for a period that covers the
time that the squid is unreachable. Does the host become unreachable as
well?
 
3) What does cache.log say for these periods ?
 
I have more thoughts depending on the answers to these 
 
Regards
 
Mike
 

 Gix, Lilian (CI/OSR) * [EMAIL PROTECTED] 09/11/05 12:44:30 

Hello,


Really, nobody has an idea?

:(

L.G. 

-Original Message-
From: Gix, Lilian (CI/OSR) * 
Sent: Mittwoch, 2. November 2005 10:26
To: squid-users@squid-cache.org
Subject: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,

I have a problem with my squid:
every hour and 6 minutes, it is unreachable for a few seconds.

Here is a part of Cache.log :



2005/11/02 09:44:45| Detected REVIVED Parent: virus2.com/8080/0
2005/11/02 10:07:05| Starting Squid Cache version 2.5.STABLE6
for i386-debian-li
nux-gnu...
2005/11/02 10:07:05| Process ID 1430
2005/11/02 10:07:05| With 4096 file descriptors available
2005/11/02 10:07:05| DNS Socket created at 0.0.0.0, port 32772,
FD 5
2005/11/02 10:07:05| Adding nameserver 193.108.217.70 from
/etc/resolv.conf
2005/11/02 10:07:05| User-Agent logging is disabled.
2005/11/02 10:07:05| Referer logging is disabled.
2005/11/02 10:07:05| Unlinkd pipe opened on FD 10
2005/11/02 10:07:05| Swap maxSize 102400 KB, estimated 7876
objects
2005/11/02 10:07:05| Target number of buckets: 393
2005/11/02 10:07:05| Using 8192 Store buckets
2005/11/02 10:07:05| Max Mem  size: 102400 KB
2005/11/02 10:07:05| Max Swap size: 102400 KB
2005/11/02 10:07:05| Local cache digest enabled; rebuild/rewrite
every 3600/3600
sec
2005/11/02 10:07:05| Store logging disabled
2005/11/02 10:07:05| Rebuilding storage in /cache (DIRTY)
2005/11/02 10:07:05| Using Least Load store dir selection
2005/11/02 10:07:05| Set Current Directory to /cache_log/
2005/11/02 10:07:05| Loaded Icons.
2005/11/02 10:07:15| Accepting HTTP connections at 0.0.0.0, port
8080, FD 11.
2005/11/02 10:07:15| Accepting ICP messages at 0.0.0.0, port
3130, FD 12.
2005/11/02 10:07:15| HTCP Disabled.
2005/11/02 10:07:15| WCCP Disabled.
2005/11/02 10:07:15| Configuring Parent 10.4.13.184/8080/0
2005/11/02 10:07:15| Ready to serve requests.
2005/11/02 10:07:15| Configuring Parent virus1.com/8080/0
2005/11/02 10:07:15| Configuring Parent virus2.com/8080/0
2005/11/02 10:07:15| Store rebuilding is  1.9% complete
2005/11/02 10:07:16| WARNING: newer swaplog entry for dirno 0,
fileno 031F
2005/11/02 10:07:16| WARNING: newer swaplog entry for dirno 0,
fileno 0371

2005/11/02 10:07:19| WARNING: newer swaplog entry for dirno 0,
fileno 39E4
2005/11/02 10:07:19| WARNING: newer swaplog entry for dirno 0,
fileno 39E8
2005/11/02 10:07:19| Done reading /cache swaplog (215057
entries)
2005/11/02 10:07:19| Finished rebuilding storage from disk.
2005/11/02 10:07:19|111709 Entries scanned
2005/11/02 10:07:19| 0 Invalid entries.
2005/11/02 10:07:19| 0 With invalid flags.
2005/11/02 10:07:19| 35086 Objects loaded.
2005/11/02 10:07:19| 0 Objects expired.
2005/11/02 10:07:19|  6194 Objects cancelled.
2005/11/02 10:07:19|  3443 Duplicate URLs purged.
2005/11/02 10:07:19| 70406 Swapfile clashes avoided.
2005/11/02 10:07:19|   Took 14.5 seconds (2411.4 objects/sec).
2005/11/02 10:07:19| Beginning Validation Procedure
2005/11/02 10:07:19|   Completed Validation Procedure


RE: [squid-users] RE: Urgent Samba / Squid NTLM Auth Problems

2005-11-09 Thread Dave Raven
Hi Adam, 
We are currently talking to Samba, but we are able to join the
domain. Where we sit right now is that if we use -basic instead of -ntlmssp
it works fine. I've narrowed it down to the password being the problem -
it's obtaining the user, domain and workstation just fine. All the command
line tools work perfectly - only when using auth_param ntlm * does it
fail...

As far as I have been able to understand it, there is either a problem with
the way squid is passing the reply to the ntlm challenge to the helper, or a
problem with the helper...

At the moment I'll take any options that are possible...
 

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Adam Aube
Sent: 09 November 2005 09:12 PM
To: squid-users@squid-cache.org
Subject: [squid-users] RE: Urgent Samba / Squid NTLM Auth Problems

Dave Raven wrote:

 Okay I have an update with more progress - it seems the problem is 
 only to do with ntlmssp. If I only have a basic authenticator - which 
 looks like the following, it works perfectly:

 However, when I use ntlmssp in the squid config, shown below, it does 
 not
 work:
 
 auth_param ntlm program /usr/optec/ntlm_auth.sh ntlmssp auth_param 
 ntlm children 10 auth_param ntlm use_ntlm_negotiate yes
 
 I see the following debug messages:
 [2005/11/09 13:22:37, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(606)
   Got user=[ianb] domain=[MASTERMIND] workstation=[LUCY] len1=24
 len2=24
 [2005/11/09 13:22:37, 3] utils/ntlm_auth.c:winbind_pw_check(427)
   Login for user [EMAIL PROTECTED] failed due to [Wrong 
 Password]
 
 If I type ian instead of ianb, I see an error saying the user does not 
 exist. This must mean that somehow the wrong password is being passed 
 in the wrong way - even though it is typed right.
 
 For anyone who hasn't read the rest of this thread please note: this 
 only happens with the security option on the AD server set to ONLY 
 allow
 NTLMv2/LMv2 and not anything else. If we turn that off it works 
 perfectly...

It looks like this might be a Samba issue - Ian had stated that if only
NTLMv2 is allowed, then Samba can't even join the domain. I would suggest
taking this to the Samba list.

Adam






RE: [squid-users] Urgent Samba / Squid NTLM Auth Problems

2005-11-08 Thread Dave Raven
Hi all, 
I'm currently working on this problem with Ian. It seems like
ntlm_auth is handling the requests fine - 

[EMAIL PROTECTED] /usr/local/bin # ./ntlm_auth --username=ianb
--configfile=/usr/local/etc/smb.conf
password: 
NT_STATUS_OK: Success (0x0)

It also works through squid when using wget

[2005/11/08 17:15:09, 3] utils/ntlm_auth.c:check_plaintext_auth(292)
  NT_STATUS_OK: Success (0x0)

Note that it says check_plaintext_auth though, when using a browser (e.g.
IE) we see the following messages

[2005/11/08 15:16:36, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(606)
  Got user=[IANB] domain=[MASTERMIND] workstation=[IANB] len1=24 len2=24
[2005/11/08 15:16:37, 3] utils/ntlm_auth.c:winbind_pw_check(427)
  Login for user [EMAIL PROTECTED] failed due to [Wrong Password]

Why is it using a different method? It seems like the problem only occurs
when it doesn't use check_plaintext_auth.  Is there anything we can do to
learn more?

Thanks for all the help so far
Dave






RE: [squid-users] Squid 2.5-Stable10 With Negotiate Patch and Sambe 3.x

2005-09-29 Thread Dave Raven
Hello,
How does this login=*:secret option work? I have set up two caches
and put the authentication on the bottom unit, setting a cache peer with
login=*:secret (instead of PASS) and it doesn't work? Well, it all works, but
with no username in the log file at the top...

Any advice?

Thanks
Dave 

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 28 September 2005 12:57 AM
To: Cole
Cc: 'Henrik Nordstrom'; 'Squid Users'
Subject: RE: [squid-users] Squid 2.5-Stable10 With Negotiate Patch and Sambe
3.x

On Wed, 28 Sep 2005, Cole wrote:

 I understand SPNEGO to be the Kerberos Authentication Method that is 
 being built into the latest browsers? Like firefox and IE 5.5+?

Firefox has experimental SPNEGO support available. By default disabled from
what I have been told, but once enabled happily uses SPNEGO both to web
servers and proxies.

IE has support for SPNEGO to web servers only, not proxies. Why Microsoft
has not added SPNEGO support to proxy connections is a mystery that only
Microsoft can answer.

 The main problem stopping us from using ntlm is that we have multiple 
 levels of cache. The top level cache is responsible for user auth and 
 acls. According to your previous posts, this cannot be done with ntlm.

And it cannot be done with Negotiate either. Both share the same design
flaws causing breakage when run over HTTP compliant proxies.

In setups requiring NTLM or Negotiate authentication you need to run the
authentication on the leaf caches closest to the client. With a little
tinkering you can then have the login (but not the password) forwarded in the
proxy chain by using the login=*:secret cache_peer option if needed, but this
is an extra bonus. The simpler path is to allow requests from trusted child
caches without requiring authentication again.
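
As a sketch, the child cache would then carry something like this (the peer
name and ports are only examples):

cache_peer parent.example.com parent 8080 3130 login=*:secret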

 Thats why I was trying to use a Samba-3.x, but I used the wrong helper 
 obviously. Is there a specific Samba-3.x that I would have to use 
 here, that has SPNEGO built into it? Or are all the Samba-3.x SPNEGO
enabled?

The exact Samba versions needed to use SPNEGO over HTTP are still a bit
uncertain. From what it looks like, Samba 4 may be required at this time, but
maybe it works in current Samba-3.3.X as well.

Regards
Henrik






RE: [squid-users] SPNEGO patch

2005-09-21 Thread Dave Raven
Hi Henrik, 
Thanks for the reply. I have downloaded the patch and applied it
successfully but I can't get it to compile... 

/usr/bin/ar cru libheap.a heap/store_heap_replacement.o
heap/store_repl_heap.o 
ranlib libheap.a
Making all in auth
Making all in basic
Making all in ntlm
Making all in digest
Making all in negotiate
make: don't know how to make all. Stop
*** Error code 1

That's the error I get. There is only a Makefile.am:
[EMAIL PROTECTED]
/usr/ports/www/squid/work/squid-2.5.STABLE10/src/auth/negotiate # cat
Makefile.am
SUBDIRS =

I've tried various things but they all meet with failure... Is there
something I'm missing? Thanks again for the help

Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 21 September 2005 02:40 AM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] SPNEGO patch



On Tue, 20 Sep 2005, Dave Raven wrote:

 I saw you guys mentioned a patch in the following thread :
 http://www.squid-cache.org/mail-archive/squid-users/200508/0144.html 
 (SPNEGO
 generic)

 I was wondering if this patch is publically available, cause I cannot 
 seem to find that patch on the website. If I could please get a direct 
 link, or any info regarding this would be welcome...

Sorry, it seems I had forgotten to update the projects list.

The code can be found in the devel.squid-cache.org CVS repository, and the
project should become visible on the web pages shortly.

Regards
Henrik






RE: [squid-users] SPNEGO patch

2005-09-21 Thread Dave Raven
Thanks Chris, after running bootstrap.sh and a little tweaking it's compiled!

Thanks again
Dave  

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: 21 September 2005 10:35 PM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] SPNEGO patch

 -Original Message-
 From: Dave Raven [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, September 21, 2005 12:19 PM
 To: 'Henrik Nordstrom'
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] SPNEGO patch
 
 
 Hi Henrik, 
   Thanks for the reply. I have downloaded the patch and applied it 
 successfully but I can't get it to compile...
 
 /usr/bin/ar cru libheap.a heap/store_heap_replacement.o 
 heap/store_repl_heap.o ranlib libheap.a Making all in auth Making all 
 in basic Making all in ntlm Making all in digest Making all in 
 negotiate
 make: don't know how to make all. Stop
 *** Error code 1
 
 That's the error I get. There is only a Makefile.am:
 [EMAIL PROTECTED]
 /usr/ports/www/squid/work/squid-2.5.STABLE10/src/auth/negotiate # cat 
 Makefile.am SUBDIRS =
 
 I've tried various things but they all meet with failure... Is there 
 something I'm missing? Thanks again for the help
 
 Dave
 

Did you run bootstrap.sh?
http://www.squid-cache.org/mail-archive/squid-users/200506/0102.html

Chris






[squid-users] Digest + NTLM Auth

2005-09-20 Thread Dave Raven
Hi all,
Is it possible to use digest as a failover to ntlmssp? E.g. in most
configurations with ntlm the cache uses ntlm and then falls back to basic to
authenticate browsers like netscape. Is it possible to make that fallback
use digest authentication-ntlm, eliminating cleartext between the cache and
the user (for authentication at least). 

So summed up - is it possible to authenticate against an ntlm server as
basic does, but with digest between the client and the cache?
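
For reference, the usual ntlm-plus-basic fallback I'm referring to looks
roughly like this (paths and helper counts are only examples):

auth_param ntlm program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param basic program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server

The question is whether that second, basic step could be digest instead.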

Thanks
Dave






RE: [squid-users] Single Signon and Cache Peers

2005-09-13 Thread Dave Raven
Is anyone interested in this? We are willing to pay for the development. 

I have read up on it some more and it seems to be possible, but not as easy
as I described below. 

-Original Message-
From: Dave Raven [mailto:[EMAIL PROTECTED] 
Sent: 31 August 2005 10:22 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Single Signon and Cache Peers

Hi all,
A while ago I did a few tests to see why single signon was breaking
through cache peers. It seems like a valid single signon request comes with
a DOMAIN\user format, and no domain once it's passed through the peers -
causing it to fail?

I did it a long time ago and didn't really do enough work to know for sure
but I do know it doesn't work. Can anyone suggest a way to get this to work,
or is it even possible? Even just adding on the domain? Although I'm sure
it's not that simple...

We'd be willing to pay for the development work, and release it to the
public - I just want to know if its possible?

Thanks
Dave






[squid-users] Single Signon and Cache Peers

2005-08-31 Thread Dave Raven
Hi all,
A while ago I did a few tests to see why single signon was breaking
through cache peers. It seems like a valid single signon request comes with
a DOMAIN\user format, and no domain once it's passed through the peers -
causing it to fail?

I did it a long time ago and didn't really do enough work to know for sure
but I do know it doesn't work. Can anyone suggest a way to get this to work,
or is it even possible? Even just adding on the domain? Although I'm sure
it's not that simple...

We'd be willing to pay for the development work, and release it to the
public - I just want to know if its possible?

Thanks
Dave






RE: [squid-users] HTTP1.1 Protocol

2005-06-07 Thread Dave Raven
Not as far as I know - pretty sure, in fact. The main reason for this has
something to do with hop-to-hop gzip, but it's getting pretty close. Someone
else can hopefully give you more info.

-Original Message-
From: squid squid [mailto:[EMAIL PROTECTED] 
Sent: 07 June 2005 07:06 AM
To: squid-users@squid-cache.org
Subject: [squid-users] HTTP1.1 Protocol

Does Squid 2.5 Stable10 support the HTTP/1.1 protocol by default, or will it
drop HTTP/1.1 to HTTP/1.0? Is there any configuration required to be set in
order to support HTTP/1.1?

_
Take a break! Find destinations on MSN Travel. http://www.msn.com.sg/travel/






RE: [squid-users] User Authentification ?

2005-06-06 Thread Dave Raven
AFAIK the only way is using challenge/response - you'll need winbindd to
communicate with the logon server, as the Windows session (u/p) isn't in
cleartext. On that note, winbindd support is pretty reliable - maybe we can
solve your problem with that?
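
For illustration, a typical challenge/response setup with Samba's ntlm_auth
looks roughly like this (the helper path is an assumption):

auth_param ntlm program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5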

-Original Message-
From: Phibee Network operation Center [mailto:[EMAIL PROTECTED] 
Sent: 06 June 2005 11:03 PM
To: squid-users@squid-cache.org
Subject: [squid-users] User Authentification ?

Hi

Can I set up Squid authentication based on a login/password file on
my Linux proxy server, without opening a login/pass box, based only on the
login/pass of the Windows session?

Actually I use winbindd, but I have a lot of problems with it and
can't continue...

thanks for your help








RE: [squid-users] Stopping Movies / Sound traffic in Squid using ACL

2005-06-06 Thread Dave Raven
I haven't tested this, but you may just have to fiddle with the regex or my
late-night typos

acl blockedstuff urlpath_regex -i \.(mpe|mov|wmf|asf|divx|mpg|mpeg|mp3|wav|avi|ogg)$
http_access deny blockedstuff

-Original Message-
From: John Walubengo [mailto:[EMAIL PROTECTED] 
Sent: 06 June 2005 08:03 AM
To: dev singh
Cc: squidrunner support; Zero One; squid-users@squid-cache.org
Subject: Re: [squid-users] Stopping Movies / Sound traffic in Squid using
ACL

Dev,

please give me the actual lines to implement the above using ACLs.

walu.

--- dev singh [EMAIL PROTECTED] wrote:

 Hi walu,
 
 You can use Squid's delay pools to limit users' bandwidth usage, and you 
 can also use ACLs to restrict movie and music channels.
 
 Another option is to use CBQ to limit bandwidth usage on an IP, port and 
 service basis.
 
 Regards
 dev
 
 On 6/3/05, John Walubengo [EMAIL PROTECTED] wrote:
  
  I am running squid 2.5 Stable 1-2 in a college environment
  of about 200 PCs.
  Most of my students are clogging up the bandwidth by tuning
  into movie and music channels...
  
  How do I stop this in squid? What commands should I insert in the 
  squid.conf?
  
  thanks.
  walu.
  
  
  
  __
  Do You Yahoo!?
  Tired of spam?  Yahoo! Mail has the best spam
 protection around
  http://mail.yahoo.com
 
 




__
Discover Yahoo! 
Use Yahoo! to plan a weekend, have fun online and more. Check it out! 
http://discover.yahoo.com/






[squid-users] Myip / cache peer problems

2005-04-18 Thread Dave Raven
Hi all,
I'm having a problem using a combination of myip and cache peer
access. What I want to do is: if a user is pointing to the cache on IP
10.10.5.199, go to cache peer 10.10.0.1; and if it's pointing to me on
10.10.5.200, go to the cache peer on 10.10.0.2.

To do this I'm using two acls with myip defining the 5.199 and 5.200 IPs; I
then use four cache_peer_access lines, two per host. One set forces myip1
to 10.10.0.1 and denies all after it, and the other forces myip2 to
10.10.0.2 and denies all after it.
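
Roughly, the lines in question look like this (the peer ports here are just
placeholders):

acl myip1 myip 10.10.5.199
acl myip2 myip 10.10.5.200
cache_peer 10.10.0.1 parent 3128 3130
cache_peer 10.10.0.2 parent 3128 3130
cache_peer_access 10.10.0.1 allow myip1
cache_peer_access 10.10.0.1 deny all
cache_peer_access 10.10.0.2 allow myip2
cache_peer_access 10.10.0.2 deny all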

I also have never_direct all and always_direct none. If I use the IP that's
bound to the card as the main IP (.200) it works perfectly and I get sent
through the right host. If I use the aliased IP, however, I get a 'failed to
select source' error and a message to say my peer is down! I can access the
peer perfectly fine through telnet and ICP... yet squid believes it to be down?

Any ideas?

I can email the config file tomorrow if required


Thanks in advance



RE: [squid-users] Myip / cache peer problems

2005-04-18 Thread Dave Raven
Yes, and if I configure both to use round robin or use both with ICP they
both work. It's something to do with the _access and myip 

I hope to have more info tomorrow though

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 18 April 2005 09:52 PM
To: Dave Raven
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Myip / cache peer problems

On Mon, 18 Apr 2005, Dave Raven wrote:

 I also have never direct all and always direct none. If I use the ip 
 that's bound to the card as the main ip (.200) it works perfectly and 
 I get sent through the right host. If I use the aliased ip however, I 
 get a failed to select source, and a message to say my peer is down! I 
 can access the peer perfectly fine through telnet, and icp... Yet squid
believes it to be down?

Does the peer work without warnings in cache.log if you only have this
failing peer defined and no cache_peer_access lines?

Regards
Henrik



[squid-users] cache_peer_access limitations

2005-04-03 Thread Dave Raven
Hi all,
    I have a rather serious problem - and can't think of any way to
solve it. I have a cache hierarchy with TWO cache boxes running on different
internet links at the core, then 10 regional caches peering to the top. The
reason I say two is because one link is for a specific set of users, and
the other is for normal users. These "business" users need to go through the
regional caches and out the "business" link - but I'm not transparently
caching, so they are actually pointing at the regional cache. 

My thinking was that I could use cache_peer_access and use a different port
for business users on the regional caches - changing their setup is not a
problem. However, cache_peer_access port specification only works with the
destination port, and if they are pointing to me on say 3129 but going to
google.com:80 it won't pick it up... 

So I need a way to maintain (through multiple caches) a business user's
state and have those sessions go out the one peer at the top. Does anyone
know of a way to do that? If not, I'll have to task someone to "modify"
cache_peer_access - or something else? What would be the best place to look?

Thanks
Dave




RE: [squid-users] Challenge/Response with Cache Peers (NTLM)

2005-01-31 Thread Dave Raven
Hello,

the main cache unit forwards requests to the two peers, which are
set as parents with ICP enabled. There is no logging or authentication until
the squid NTLM unit, at which stage the user is authenticated against the
Windows 2003 machine. I have it working perfectly if I point directly to
squid NTLM, but if I point to the main cache it fails. If I look in the log
when it's successful I get DOMAIN\user - when it fails all I see is user...

I hope this has explained it more...

The main goal is to do single sign-on through multiple caches with
login=PASS set on the peers
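
On the main cache the peer lines look more or less like this (the IPs and
ports are placeholders):

cache_peer 10.0.0.2 parent 3128 3130 login=PASS
cache_peer 10.0.0.3 parent 3128 3130 login=PASS
never_direct allow all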

-Original Message-
From: Kinkie [mailto:[EMAIL PROTECTED] 
Sent: 29 January 2005 11:34 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Challenge/Response with Cache Peers (NTLM)

On Thu, 2005-01-27 at 21:26 +0200, Dave Raven wrote:
 Hi all,
   I've been testing the behavior of Challenge/Response today with
 cache peers. the versions etc are not relevant as I have
Challenge/Response
 and BASIC working fine if I point directly to the unit. Below is a
makeshift
 diagram of how I've set this up now:
 
-
| squid |
| NTLM  |  Windows 2003
-
   |
 /   \
 peer1 -- peer2
\/
 \  /
main cache
 
 I point to main cache, which has two parents which are the only routes
 (never_direct + always_direct) - login=PASS is on my peer lines. On those
 two I have setup each of them as siblings with login=PASS, and a parent of
 the squid NTLM authenticating unit (which works fine if I point direct),
 also with login=PASS.
 
 The behavior I see is that if I'm using the auth box, I have to login
(with
 basic) with DOMAIN\user (and challenge response works). If I go through
the
 peers I have to login with only the user - if I add the domain it doesn't
 work at _all_. When I try challenge response it naturally doesn't work as
 the username gets passed with no domain...

Could you paste the relevant lines in the three boxes' squid.conf?

 Is the fix for this as simple as it seems? Or is the problem more
 complicated. I'd really like to get this working...

Do you want the two peers to be directly accessed? If the purpose is for
them to only cache, you might want to distinguish roles: main cache does
auth + logging + request routing, the others do caching (you might want
use CARP to balance the parents to maximize efficiency). If so, it would
be enough for you to use a 'src' type acl on the parents locked on the
main cache ip and log usernames only on the main cache log.

Kinkie



[squid-users] Challenge/Response with Cache Peers (NTLM)

2005-01-27 Thread Dave Raven
Hi all,
I've been testing the behavior of Challenge/Response today with
cache peers. the versions etc are not relevant as I have Challenge/Response
and BASIC working fine if I point directly to the unit. Below is a makeshift
diagram of how I've set this up now:

   -
   | squid |
   | NTLM  |  Windows 2003
 -
  |
/   \
peer1 -- peer2
   \/
\  /
   main cache

I point to main cache, which has two parents which are the only routes
(never_direct + always_direct) - login=PASS is on my peer lines. On those
two I have setup each of them as siblings with login=PASS, and a parent of
the squid NTLM authenticating unit (which works fine if I point direct),
also with login=PASS.

The behavior I see is that if I'm using the auth box, I have to login (with
basic) with DOMAIN\user (and challenge response works). If I go through the
peers I have to login with only the user - if I add the domain it doesn't
work at _all_. When I try challenge response it naturally doesn't work as
the username gets passed with no domain...

Is the fix for this as simple as it seems? Or is the problem more
complicated. I'd really like to get this working...

Any suggestions?

Thanks
Dave



[squid-users] Cache_peer_access + NTLM groups

2004-08-25 Thread Dave Raven
Hi all, 
Is there any way that I might direct requests to different cache
peers based on a group reply from an NTLM authentication? I wish to make a
certain group go through one peer, and another group through the other...
Are there any other ways of doing this?

RE: [squid-users] Uses a Windows NT authentication domain.

2004-08-21 Thread Dave Raven
Look into NTLM with squid, there is a lot of info on the site
(www.squid-cache.org)

-Original Message-
From: Hiu Yen Onn [mailto:[EMAIL PROTECTED] 
Sent: 21 August 2004 07:12 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Uses a Windows NT authentication domain.


hi,

I am new to squid. Is there anyone who knows about using Windows NT 
authentication with squid? Please advise. Thanks.


Cheers,
yenonn



RE: [squid-users] user auth

2004-08-21 Thread Dave Raven
Absolutely - look into the possibility of a redirector like squidGuard as
well as using basic auth. You can find more about it on
www.squid-cache.org - you'll need to create password files etc., but it's not
terribly difficult, and it's very possible.
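
As an untested starting point (the paths here are only examples), the relevant
squid.conf bits would look something like:

auth_param basic program /usr/local/libexec/ncsa_auth /usr/local/etc/squid/passwd
auth_param basic realm Squid proxy
redirect_program /usr/local/bin/squidGuard -c /usr/local/etc/squidGuard/squidGuard.conf

The password file itself can be created with Apache's htpasswd, e.g.
htpasswd -c /usr/local/etc/squid/passwd barry (the username is made up).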


-Original Message-
From: Barry Rumsey [mailto:[EMAIL PROTECTED] 
Sent: 21 August 2004 05:30 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] user auth


Hi

I am wondering if it is possible to auth by user name instead of IP.

My main computer is a Linux box which connects to the net; my sister connects 
from a Windows machine and is allowed full access to the net. The problem is 
that my daughter also uses the Windows machine and I want to limit her 
internet access to only certain days and certain times. Is this possible 
with squid?

Thanks in advance
B.Rumsey

ps. I am a newly converted Windows user so I don't know too much about Linux.



RE: [squid-users] user auth

2004-08-21 Thread Dave Raven
I suspect the problem is that you allow non-authenticated traffic as well -
you need to disallow all http access by default and allow authenticated
users - something like this:

acl NCSA proxy_auth REQUIRED
http_access allow NCSA
http_access deny all
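
For the days-and-times part of the question quoted below, an untested sketch
(the username and hours are made up) that would go above the allow line:

acl daughter proxy_auth daughter
acl daughter_hours time MTWHF 16:00-20:00
http_access deny daughter !daughter_hours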

-Original Message-
From: Barry Rumsey [mailto:[EMAIL PROTECTED] 
Sent: 21 August 2004 01:23 PM
To: [EMAIL PROTECTED]
Subject: Re: [squid-users] user auth


On Saturday 21 August 2004 15:29, Barry Rumsey wrote:
 Hi

 I am wondering if it is possible to auth by user name instead of IP.

 My main computer is a linux box which connects to the net, my sister
 connects from a windows machine and is allowed full access to the net. the
 problem is that my daughter also uses the windows machine and I want to
 limit the internet to only on certain days at certain times for her. Is
 this possible with squid?

 Thanks in advance
 B.Rumsey

 ps. If am newly convert windows user so I don't know to  much about linux.
Thanks to those that replied. I looked into acls and have set up 
ncsa_auth and an htpasswd file. I have got the Windows machine asking for 
username and password. I have set a test user as user=abc password=abc and 
this works, but if I put in the password as abcd it also works?

The other question is how do I set up the days and times allowed for each user?

Thanks in advance
B.Rumsey



RE: [squid-users] ip setup

2004-08-21 Thread Dave Raven
That's squid connecting to the foreign websites on port 80 (http). If you
want to firewall it, allow squid to set up a state out on port 80 (make sure
it's stateful), and firewall everything else in...

You don't have a security problem with it connecting from a random port to
port 80 on another machine - you should be careful about people browsing
through you if it's open though (e.g. block port 3128)
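
Purely as an illustration - assuming a Linux box with iptables and eth1 as the
external NIC (substitute your own firewall tool and interface names) - that
policy could look like:

iptables -A OUTPUT -o eth1 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth1 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth1 -p tcp --dport 3128 -j DROP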

-Original Message-
From: devendra [mailto:[EMAIL PROTECTED] 
Sent: 21 August 2004 02:05 PM
To: Henrik Nordstrom
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] ip setup


Hello,

from foreign IP port 80 to our external network IP at different ports (above 
1024) like 55963,55965,55964,55871 and so on.

Can you suggest on which NIC I should block incoming or outgoing requests?

Deven

At 01:36 PM 21/08/2004, Henrik Nordstrom wrote:

On Sat, 21 Aug 2004, devendra wrote:

for client machines and other is configured with external network, but i 
found that lot of foreign IP connecting to the external ip, with 
connection ESTABLISHED and SYN_SENT.

These are most likely the requests Squid is making out to the Internet to 
fetch the content requested by the users.

What are the local port numbers of these connections? Are many going to the 
same local port number, or all to different ports above 1024?

Regards
Henrik



RE: [squid-users] Squid + ICAP

2004-08-21 Thread Dave Raven
I do. I have been unable to get it working with the latest version though -
so I'm using a snapshot from the page. It appears to be perfectly compliant
so it should work with Symantec - as far as I know the only problem is with
Trend's IWSS.

-Original Message-
From: Christoph Nagelreiter [mailto:[EMAIL PROTECTED] 
Sent: 21 August 2004 03:29 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Squid + ICAP


Hello,

does anybody use Squid + ICAP -- scanning http traffic with Anti-Virus
software (especially Symantec ScanEngine)?

Thanks.
Regards,
Chris



RE: [squid-users] LDAP groups with a redirector

2004-08-20 Thread Dave Raven
Hi,
NTLM authentication passes domain and user information to
squidGuard, I think in the format domain/username or vice versa. Would it be
possible/worthwhile (with group_ldap - and perhaps some code changes from
us) to pass username/group to the redirector, at which stage we'll handle
splitting it etc?

Thanks again
Dave


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: 19 August 2004 02:56 PM
To: Dave Raven
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] LDAP groups with a redirector


On Thu, 19 Aug 2004, Dave Raven wrote:

 I have been looking into the group_ldap acl's in squid, and they
 look great. My problem is, are there any redirectors or ways to pass the
 group to a redirector, that will act on these ldap groups. Or ones with
ldap
 support? At the moment I'm using squidguard and its not looking likely

The external_acl information can not be passed to redirectors.

But you should be able to extend your redirector to perform the required
lookups. Not very efficient thou..


What is needed for this to work is some kind of tagging mechanism where
http_access can assign a tag to the request and this passed to redirectors
etc.

Regards
Henrik



RE: [squid-users] hierarchy problerms

2004-08-20 Thread Dave Raven
Add login=PASS to the end of your cache_peer line; this will instruct it
to pass up any login information in the request.
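
For example, with the parent from the message below (the ports are just
placeholders):

cache_peer 10.0.1.1 parent 3128 3130 login=PASS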



-Original Message-
From: Swaroop Shere [mailto:[EMAIL PROTECTED] 
Sent: 20 August 2004 09:40 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] hierarchy problerms
Importance: High


Hello,
I am a college student trying to implement a
hierarchy of proxy servers as a part of a project. The
parent is at 10.0.1.1, while the child is at
10.0.2.55. While, i am configuring the child, I have
no control over the parent. (I have to give a written
application for any services from the authorities wrt
the parent) They have enabled the icp port. After my
initial configuration, the child squid started
successfully. I have kept no authentication at the
child (comments kept at all authentication
parameters), but the parent requires authentication,
it uses some microsoft module (sorry, i am still a
newbie).
Now i have 2 problems. 
First:

Whenever a browser configured to request the child
tries to access a local website (eg
http://10.0.0.222), it gets a prompt for username and
password, whose title shows 10.0.2.55 and even if i
enter the same username and passwd that i use for the
parent, it gives me an authentication error. Through
tcpdump, i found out that the child squid is not
forwarding the username and password to the parent.
The error page that the client recieves, shows that
the error is generated by the parent squid (10.0.1.1).
Also, all requests from the clients through the parent
directly are served.

Second:

Whenever a browser configured to request the child
squid tries to access a remote website (eg
www.google.com), the client does not get any prompt
for proxy username and password, and finally gives an
error, that the page is not found. If the same client
is configured to request the parent (10.0.1.1), it
gets the pages. What could be this problem?

Please help,
Thank you.








RE: [squid-users] squid chroot jail no running copy error

2004-08-20 Thread Dave Raven
I'm not what you would call a Fedora pro, but I suspect you will need to
chroot to the environment in order to run the squid -k reconfigure... E.g.
chroot /wka usr/local/squidSTABLE6/sbin/squid -k reconfigure

?


-Original Message-
From: Rick G. Kilgore [mailto:[EMAIL PROTECTED] 
Sent: 20 August 2004 04:02 PM
To: Mohsin Khan
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] squid chroot jail no running copy error


I am starting squid via the /etc/rc.d/rc3.d/S99local init file right
now. The command in the file is: chroot /wka 
usr/local/squidSTABLE6/sbin/squid -sD

I can see the process running as squid with ps -ef | grep squid. The
pid from ps matches the pid in the squid.pid in the logs directory.

OS type: Fedora Core 1.



Mohsin Khan wrote:
 do you run the squid from jail chroot enviroment. ? Do
 you see the process when you do ps awux.
 --- Rick G. Kilgore [EMAIL PROTECTED]
 wrote:
 
 
Hello all,

I am new to the squid proxy. I do like it a lot and
it has already been very useful.
Problem: I have put squid in a chroot jail. I tested
the squid configuration and function prior to building the
chroot volume. Everything seems to work OK, logging and caching.
When I try to do a squid -k shutdown or rotate, the
system tells me that there is no running copy.
I did look at the FAQ and did try the solution for
11.43, did not help.
The squid pid is on the correct volume in the correct place.
I was unable to locate anything in the mail archive.
Hopefully I am not




RE: [squid-users] Transparent config OK if not used?

2004-08-20 Thread Dave Raven
As long as your iptables rules only affect traffic that's not destined to
your squid port, you should be fine
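
Something along these lines (untested; assumes the LAN interface is eth0 and
squid listens on 3128):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

Traffic that the browsers already send to port 3128 never matches the
--dport 80 rule, so the explicitly configured clients carry on exactly as
before.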

-Original Message-
From: Steve Snyder [mailto:[EMAIL PROTECTED] 
Sent: 20 August 2004 04:06 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Transparent config OK if not used?


I understand that there are some problems associated with configuring 
Squid (2.5S6 + patches) as a transparent proxy.  Are there any negative 
affects from having a transparent config even if the browsers are 
directly addressing the cache?

On my (Linux, RedHat v9) LAN the browsers should all be configured to 
point to the Squid proxy.  However there is the occasional lapse, such as 
from newly-installed browser or a guest system on the network.  I would 
like to have a transparent config in place to ensure that all HTTP 
traffic goes through the proxy, but not at the cost of introducing 
problems for all the correctly configured browsers.

So, if I enable transparent proxying with these options:

  httpd_accel_port 80
  httpd_accel_host virtual
  httpd_accel_with_proxy on
  httpd_accel_uses_host_header on

(together with an iptables rule) will there be problems on the browsers 
that are already explicitly configured to address the proxy?

Thanks.



[squid-users] LDAP groups with a redirector

2004-08-19 Thread Dave Raven
Hi all,
I have been looking into the group_ldap acl's in squid, and they
look great. My problem is, are there any redirectors or ways to pass the
group to a redirector, that will act on these ldap groups. Or ones with ldap
support? At the moment I'm using squidguard and its not looking likely

Thanks
Dave



[squid-users] FW: LDAP search through a AD Forest

2004-08-16 Thread Dave Raven
Hi all,
I have ldap pretty much fully working, but I'm wondering if its
possible to search through multiple domains, under one AD forest (ldap
connection)?

Other web cache's seem to have problems with this is squid able to do it? It
is in Native mode so we have to use Kerberos to connect to it - apparently
the problem is that there is only one REALM allowed with the Kerberos
connection ?

Thanks
Dave



[squid-users] IWSS + squid-icap

2004-07-27 Thread Dave Raven
Hello all,
Saw some mention on the lists of a patch needed to use IWSS - is it
possible to just change an option on the Trend server to fix it? Or do we
definitely need the patch? If so, please can someone tell me how to get the
patch - on the list it says [EMAIL PROTECTED] for the contact Henrik posted..

Thanks
Dave

P.s. please copy me on replies (not on the list)



[squid-users] ICAP configuration

2004-05-14 Thread Dave Raven
Hi all, 
Just a few questions on ICAP configuration if anyone can help -
there is not much documentation on it. I'm wondering about the difference
between load balancing (round robin or smart) across multiple servers, and
forcing squid to send each request through multiple ICAP servers... My best
guess is this:

#Preset acls:
acl HTTP proto HTTP
acl GET method GET

---
icap_service s1 respmod_precache 0 icap://10.10.10.1:1344/virus_checking
icap_service s2 respmod_precache 0 icap://10.10.10.2:1344/virus_checking
icap_class c1 s1 s2
icap_access c1 allow HTTP GET
---

That would load balance c1 between the two servers


icap_service s1 respmod_precache 0 icap://10.10.10.1:1344/virus_checking
icap_service s2 respmod_precache 0 icap://10.10.10.2:1344/content_checking
icap_class c1 s1
icap_class c2 s2

icap_access c1 allow HTTP GET
icap_access c2 allow HTTP GET
---

That would cause the request to go through BOTH servers...



Are my assumptions correct?

Thanks
Dave Raven



[squid-users] ICAP build

2004-04-20 Thread Dave Raven
Hi all,
I've been trying to build squid with ICAP support lately (applied
the patch to the latest squid) and with the squid-icap-2.5-200404051745
snapshot and had no success (lots of automake/autoconf errors with the
snapshot and a lot of errors in icap_common.c with the patch). Is anyone
currently using it or is there another version or something? Also I intend
to try and interface this with the netapp icap server release, and then get
that to talk to clamav - is there a better way of handing virus scanning
over to Clam ?

Thanks
Dave



RE: [squid-users] ICAP build

2004-04-20 Thread Dave Raven
: `type' undeclared (first use in this function)
icap_common.c:304: `ICAP_SERVICE_MAX' undeclared (first use in this
function)
icap_common.c:305: `icap_service_type_str' undeclared (first use in this
function)
icap_common.c:307: warning: return from incompatible pointer type
icap_common.c:308: warning: control reaches end of non-void function
icap_common.c: In function `icapCheckAcl':
icap_common.c:331: `icap_access' undeclared (first use in this function)
icap_common.c:331: `iter' undeclared (first use in this function)
icap_common.c:331: warning: statement with no effect
icap_common.c:332: syntax error before `*'
icap_common.c:334: structure has no member named `icapcfg'
icap_common.c:336: `icapChecklist' undeclared (first use in this function)
icap_common.c:342: structure has no member named `class'
icap_common.c: At top level:
icap_common.c:390: syntax error before `IcapStateData'
icap_common.c: In function `icapReadHeader':
icap_common.c:398: `fd' undeclared (first use in this function)
icap_common.c:442: `icap' undeclared (first use in this function)
icap_common.c:446: `isIcap' undeclared (first use in this function)
icap_common.c: At top level:
icap_common.c:460: syntax error before `*'
icap_common.c:461: warning: return-type defaults to `int'
icap_common.c: In function `icapParseConnectionClose':
icap_common.c:462: `s' undeclared (first use in this function)
icap_common.c:462: `e' undeclared (first use in this function)
icap_common.c: At top level:
icap_common.c:469: syntax error before `*'
icap_common.c: In function `icapSetKeepAlive':
icap_common.c:473: `icap' undeclared (first use in this function)
icap_common.c:475: `hdrs' undeclared (first use in this function)
icap_common.c: At top level:
icap_common.c:578: syntax error before `*'
icap_common.c: In function `icapParseChunkedBody':
icap_common.c:582: `icap' undeclared (first use in this function)
icap_common.c:593: warning: implicit declaration of function `store'
icap_common.c:593: `store_data' undeclared (first use in this function)
icap_common.c: In function `icapAddAuthUserHeader':
icap_common.c:663: structure has no member named `icapcfg'
icap_common.c:667: structure has no member named `icapcfg'
icap_common.c:669: structure has no member named `icapcfg'
icap_common.c:671: structure has no member named `icapcfg'
icap_common.c:672: structure has no member named `icapcfg'
icap_common.c:655: warning: `userofslen' might be used uninitialized in this
function
gmake[3]: *** [icap_common.o] Error 1
gmake[3]: Leaving directory `/root/dave-squid/squid-2.5.STABLE5/src'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/root/dave-squid/squid-2.5.STABLE5/src'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/root/dave-squid/squid-2.5.STABLE5/src'
gmake: *** [all-recursive] Error 1



Any help would be MOST appreciated, thanks
Dave



-Original Message-
From: Dave Raven [mailto:[EMAIL PROTECTED] 
Sent: 20 April 2004 12:03 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] ICAP build


Hi all,
I've been trying to build squid with ICAP support lately (applied
the patch to the latest squid) and with the squid-icap-2.5-200404051745
snapshot and had no success (lots of automake/autoconf errors with the
snapshot and a lot of errors in icap_common.c with the patch). Is anyone
currently using it or is there another version or something? Also I intend
to try and interface this with the netapp icap server release, and then get
that to talk to clamav - is there a better way of handing virus scanning
over to Clam ?

Thanks
Dave




[squid-users] Squid_ldap_auth with groups

2004-02-24 Thread Dave Raven
Hi all,
I have my ldap auth working with users and all now, and -f
sAMAccountName=%s works perfectly, but I need to also check that the user is a
member of iNet Users. Now my first guess is that maybe it's not working
because I don't have quotes around iNet Users - but I can't get it to accept
them anyway.. Is this the right way to do what I'm trying to?

/usr/local/libexec/ldap_auth -b OU=Users,OU=**,DC=*,DC=co,DC=za -h
10.9.9.5 -D CN=Proxy User,OU=Users,OU=Phalaborwa,DC=foskor,DC=co,DC=za -w
proxy2004 -f ((sAMAccountName=%s)(memberOf=CN=iNet Users,OU=Groups,OU=*,DC=,DC=co,DC=za))

Any help will be appreciated
Thanks
Dave




[squid-users] Memory usage

2004-02-20 Thread Dave Raven
Top:
43236 nobody 2   0   368M   347M poll13:38  0.00%  0.00% squid

Ps -ax |grep squid:
43234  ??  Is 0:00.02 /usr/local/sbin/squid
43236  ??  D 13:37.96 (squid) (squid)
87951  ??  Ss 0:01.61 (squidGuard) (squidGuard)
87952  ??  Ss 0:00.24 (squidGuard) (squidGuard)
87953  ??  Is 0:00.10 (squidGuard) (squidGuard)
There are also 2x ldap_auth's running:
[EMAIL PROTECTED] /home/opteq # ps -ax|grep ldap
87955  ??  Is 0:00.48 (ldap_auth) -b OU=Users,
87956  ??  Is 0:00.18 (ldap_auth) -b OU=Users,

Squid.conf:
cache_mem 128 MB

Any suggestions as to why it's using so much memory?
It seems to have only started post ldap auth a few days 
ago... but that's not definite


Thanks
Dave Raven



[squid-users] Squid_ldap_auth stupid question

2004-02-17 Thread Dave Raven
Hi all,
I have a stupid question with ldap_auth, 
it's really a squid question - when 
I use a user of test\test to get in the 
ldap domain it removes the \ on the 
authenticate parameters line, if I escape it
(\\) it puts two backslashes - I've tried 
a few different weird combinations and can't 
get it right... Any ideas?



RE: [squid-users] Squid_ldap_auth stupid question

2004-02-17 Thread Dave Raven
To bind a search user - I have to use the test\ part or the login fails and
I can't change the AD server..

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 17 February 2004 01:40 PM
To: Dave Raven
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Squid_ldap_auth stupid question


On Tue, 17 Feb 2004, Dave Raven wrote:

 I have a stupid question with ldap_auth, 
 its really a squid question - when 
 I use a user of test\test to get in the 
 ldap domain it removes the \ on the 
 authenticate parameters line

Is this in the auth_param basic program line for binding as a search user,
or in the login request from the browser?

LDAP very rarely have \ in login names. LDAP is not NT Domain and is 
structured very differently from NT domains.

Regards
Henrik




RE: [squid-users] Squid_ldap_auth stupid question

2004-02-17 Thread Dave Raven
-D binddn   DN to bind as to perform searches
-w bindpasswd   password for binddn

I'm using those two options - I assumed that -D domain\user -w
userpassword was correct for what I'm trying - is this wrong?

I have a Java ldap program - if I append the base DN or anything to that to
login it fails, including if I just use the user - but if I have the
domain\user it logs in fine. I've spoken to the people who run the AD server
and they also say I will have to login with domain\user ?

Is there a way around this?


Thanks
Dave
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 17 February 2004 02:21 PM
To: Dave Raven
Cc: [EMAIL PROTECTED]
Subject: RE: [squid-users] Squid_ldap_auth stupid question


On Tue, 17 Feb 2004, Dave Raven wrote:

 To bind a search user - I have to use the test\ part or the login fails
and
 I can't change the AD server..

I never used \ in any AD LDAP logins, but I have to admit that I never 
have tried to create a user with \ in his name if this is what you refer 
to.

What is the exact login DN you specify to squid_ldap_auth?

The login DN is not a login name, it is the LDAP object name of the user
object to bind to, usually cn=user name, cn=users, dc=company, dc=com

Regards
Henrik
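
In other words, the helper gets invoked with a full DN, something along these
lines (the server address, DN components and password are made-up examples):

/usr/local/libexec/squid_ldap_auth -b "dc=example,dc=com" -h 10.9.9.5 \
    -D "cn=proxy search,cn=Users,dc=example,dc=com" -w secret \
    -f "sAMAccountName=%s"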




[squid-users] squid_ldap_auth

2004-02-06 Thread Dave Raven
Hi all, 
I have a need with squid_ldap_auth, 
and am entirely unsure how to get it 
working..

I need to authenticate users in one OU, 
but only if they are a member of a 
group in another OU --

This would be the user:
CN=Test User,OU=Users,OU=Branch1,DC=test,DC=co,DC=za

And this is the group he is a member of, that means 
He has internet access:
CN=iNet,OU=Groups,OU=Branch1,DC=test,DC=co,DC=za

How might I accomplish this?
Any idea's will be helpful

Thanks
Dave



RE: [squid-users] squid_ldap_auth

2004-02-06 Thread Dave Raven
BSD - ldap directory is an AD server running 2000

-Original Message-
From: Lewars, Mitchell (EM, PTL) [mailto:[EMAIL PROTECTED] 
Sent: 06 February 2004 01:55 PM
To: 'Dave Raven'
Subject: RE: [squid-users] squid_ldap_auth


Are you running on Linux ?

-Original Message-
From: Dave Raven [mailto:[EMAIL PROTECTED]
Sent: Friday, February 06, 2004 6:12 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] squid_ldap_auth


Hi all, 
I have a need with squid_ldap_auth, 
and am entirely unsure how to get it 
working..

I need to autheticate users in one OU, 
but only if they are a member of a 
group in another OU --

This would be the user:
CN=Test User,OU=Users,OU=Branch1,DC=test,DC=co,DC=za

And this is the group he is a member of, that means 
He has internet access:
CN=iNet,OU=Groups,OU=Branch1,DC=test,DC=co,DC=za

How might I accomplish this?
Any idea's will be helpful

Thanks
Dave



RE: [squid-users] squid 2.5.STABLE4 + FreeBSD 5.x = crash after a while...

2004-02-06 Thread Dave Raven
Agreed - get info from cache.log, and try recompiling your squid now on FreeBSD 5

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED] 
Sent: 06 February 2004 03:08 PM
To: Evren Yurtesen; [EMAIL PROTECTED]
Subject: RE: [squid-users] squid 2.5.STABLE4 + FreeBSD 5.x = crash after a
while...



  
 Hello,
 
 I have been using squid 2.5 stable for a while with 4.9 
 version of FreeBSD 
 and it was working fine for months. Now I had to upgrade to 
 5.x version to 
 get better support for hyperthreading and sata drives.
 When I am using squid with 5.x version of the freebsd. It 
 crash after 1-3 
 days of usage randomly.
 
 The symptoms are that squid use 98% of the cpu and it doesnt respond.
 It just stucks and I cant even send kill -TERM signal to it. 
 I have tried 
 using half closed clients option on and off in my conf file 
 with the same 
 result.
 
 Is there anybody else who is having similar problem? I dont 
 have any ACLs 
 at all and the same conf file of squid was working with 4.9 
 stable anyhow.
 
 Any suggestions?

   - Which 2.5 stable release are you using?
   - Does squid crash or just consume lots of CPU?
   - Anyway, is there anything in cache.log which could provide
 more info?
 Or error-related info in cache.log, prior to the real problem
 situation you are encountering at a certain point?
   - Possibly weird incompatibility issues related to OS-provided shared libs
 may be the culprit. Did you try re-building squid?

  M.

   
 
 Thanks,
 Evren 
 
 



Re: [squid-users] Squid

2003-04-01 Thread Dave Raven
check cache.log
tail cache.log   and mail if you can figure it out from there..

--Dave

- Original Message -
From: Kevin Hoffer [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 5:31 PM
Subject: [squid-users] Squid


: Squid will not stay running. I start it up with
: /usr/local/squid/sbin/squid -sY -f /usr/local/squid/etc/squid.conf but it
: won't stay running. I copied the default.squid.conf to squid.conf so I am
: using the default config file.
:
: Any ideas?
:
: If you get more than one of these messages I am sorry. I have sent it like
: twice but it never ends up coming back to my email; no clue if it should
: or not but other lists I am on do.
:



Re: [squid-users] Ftp help

2003-04-01 Thread Dave Raven
If people are pointing to squid -
not transparently -
then it will log all requests as it does with http.

--Dave

- Original Message - 
From: darlene [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 6:29 PM
Subject: [squid-users] Ftp help


: 
: Is it possible to log all incoming and outgoing files for ftp through
: squid?
: 
: Thanks
: 
: 
: 



Re: [squid-users] Logs and Aol.

2003-04-01 Thread Dave Raven
acl aol dst 64.12.163.198
http_access allow aol

Don't make aol go through authentication - put the allow line above your
proxy_auth http_access rules. You'll probably find your users aren't auth'ing
with aol... I assume they have multiple IPs though,
so...


- Original Message -
From: Ampugnani, Fernando [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 9:17 PM
Subject: [squid-users] Logs and Aol.


: Hi all,
: Anybody know why aol makes many entries in access.log like this...
:
: 1049223695.538  1 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223696.894  1 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223698.158  1 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223698.730  3 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223700.744  4 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223711.584  2 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223714.246  3 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223714.573  1 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223715.425  1 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
: 1049223719.224  2 207.169.88.210 TCP_DENIED/407 1716 POST
: http://64.12.163.198:20480/data? - NONE/- text/html
: 1049223719.288  1 207.169.88.210 TCP_DENIED/407 1725 GET
: http://64.12.163.198:20480/monitor? - NONE/- text/html
:
: these entries grow my access.log very quickly; in one day access.log
: grows by 90MB more or less.
:
: Is this regular?
:
: There are any way to solve this?
:
: Fernando Ampugnani
: EDS Argentina - Software, Storage  Network
: Global Operation Solution Delivery
: Tel: 5411 4704 3428
: Mail: [EMAIL PROTECTED]
:
:
:



[squid-users] 2.5-stable1: mbuf clusters on fBSD

2003-03-26 Thread Dave Raven
Hi all,
Having some serious trouble with a client's squid box;
it's running on FreeBSD-4.7-RELEASE-p9, and I've
just installed the squid through ports, -STABLE1 +
all the patches in the ports Makefile.

26229/26624/133120 mbufs in use (current/peak/max):
23620 mbufs allocated to data
2609 mbufs allocated to packet headers
23367/23478/33280 mbuf clusters in use (current/peak/max)
53612 Kbytes allocated to network (11% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines


That's the output I am currently getting from netstat -mb; as you
can see the mbuf cluster usage is EXTREMELY high, considering
this box is on a 256k link with +/- 50 users. Not high load at all.

When I reboot the box the mbuf clusters value is obviously reset
to 0, but from there it steadily rises as the cache is used, and continues
to rise until reaching 33280 (an enormous value for mbuf clusters). I've
pushed this value up throughout the day, and am certain that something
is wrong. What could be causing squid to chew through these mbuf
clusters at a constant rate until they are completely used? The value
does drop a little now and then, like from 23367 to 23365, then continues
to rise.

What can I do to find out what's causing the usage? Or how can I halt it?
I've checked most of the mailing lists and people are just having problems
because they don't have enough mbuf clusters; I have far too many - a
usual box like this uses less than 5000 in my experience.

Any help will be much appreciated

Thanks
Dave