RE: [squid-users] Getting error Too few basicauthenticator processes are running

2005-11-10 Thread ads squid
--- Chris Robertson [EMAIL PROTECTED] wrote:

  -Original Message-
  From: ads squid [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 09, 2005 3:42 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] Getting error Too few
 basicauthenticator
  processes are running
  
  
  Hi,
  I am trying to configure squid version
  squid-2.5.STABLE12 as follows :
  
  [EMAIL PROTECTED] squid-2.5.STABLE12]#
  /usr/local/squid/sbin/squid -NCd1
  
  
  I am getting the following error:
  
  2005/11/09 18:03:40| Accepting HTTP connections at
  0.0.0.0, port 3128, FD 15.
  2005/11/09 18:03:40| WCCP Disabled.
  2005/11/09 18:03:40| Ready to serve requests.
  2005/11/09 18:03:41| WARNING: basicauthenticator #1 (FD 6) exited
  2005/11/09 18:03:41| WARNING: basicauthenticator #2 (FD 7) exited
  2005/11/09 18:03:41| WARNING: basicauthenticator #3 (FD 8) exited
  2005/11/09 18:03:41| Too few basicauthenticator processes are running
  FATAL: The basicauthenticator helpers are crashing too rapidly, need help!
  
  Aborted
  
  
  
  I have configured squid with minimum options as
  follows:
  [EMAIL PROTECTED] squid-2.5.STABLE12]# ./configure \
  --enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,SASL,MSNT
  
  .
  
  Please help me to solve the problem.
  I want to use basic authentication.
  
  Thanks for support.
  
 
 What does your auth_param line look like?
 
 Chris
 

It looks like the following:


auth_param basic program 
/usr/local/squid/libexec/ncsa_auth
/usr/local/squid/etc/passwd
###
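A quick way to sanity-check this setup (paths taken from the auth_param line above; the test credentials are hypothetical) is to confirm the password file exists and drive the helper by hand:

```shell
# Hypothetical paths, copied from the auth_param line above;
# adjust them to your installation.
AUTH=/usr/local/squid/libexec/ncsa_auth
PASSWD=/usr/local/squid/etc/passwd

# A missing or unreadable password file is a common reason basic-auth
# helpers exit the moment squid starts them.
if [ ! -r "$PASSWD" ]; then
    echo "password file missing or unreadable: $PASSWD"
fi

# If the helper exists, drive it by hand: it reads "username password"
# lines on stdin and answers OK or ERR on stdout.
if [ -x "$AUTH" ]; then
    printf 'testuser testpass\n' | "$AUTH" "$PASSWD"
fi
```

ncsa_auth expects an htpasswd-style password file; if the file is missing or in the wrong format, the helper exits immediately, which matches the "crashing too rapidly" error above.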

Thanks for support.






RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Dave Raven
Run some memory and processor burn tests, e.g. 'memtest' and 'burnP6'.

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2005 09:37 AM
To: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

Hello,


Thanks for your help :

proxy1:~#  crontab  -l
0 0 * * * /etc/webmin/webalizer/webalizer.pl /cache_log/access.log
proxy1:~# more /etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
17 *  * * *   root    run-parts --report /etc/cron.hourly
1 0   * * *   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.daily
47 6  * * 7   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.weekly
52 6  1 * *   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.monthly

proxy1:~# ls /etc/cron.hourly/
proxy1:~#


The server is a compaq DL580 (2*Xeon700Mhz, 1G of Ram, Raid 5: 32G), working
on Debian


L.G.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Mittwoch, 9. November 2005 16:53
To: squid-users@squid-cache.org
Subject: Re: AW: [squid-users] Squid unreachable every hour and 6 minutes.

The disk space is over limit error is not saying the disk is full.  The
cache has reached the limit that's been set in the squid.conf file.
It could be causing squid to die, but how likely is it that this would be
the cause, if squid dies 6 minutes after every hour?

My suggestion is to check and see what cron jobs are running:
cat /etc/crontab
or (as root): crontab -l, and then crontab -l -u USER for any other users
that might be running cron jobs.

If there's a timely pattern to the connectivity issue, the root of the
problem probably has something to do with a schedule for something.
Cron would be a good place to start.

On the disk space is over limit issue...
You really shouldn't have to tend to this. Squid should use whatever
replacement policy was specified at compile time (I forget which one is the
default if none is specified) to remove old/unused cache objects in an
effort to free up space. However, if squid is trying to do this while
actively handling proxy requests at the same time, it could be running
out of resources. What specs do you have on this machine? CPU/RAM/etc.

Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]



[EMAIL PROTECTED]
11/09/2005 09:45 AM

To
[EMAIL PROTECTED], [EMAIL PROTECTED],
squid-users@squid-cache.org cc

Subject
AW: [squid-users] Squid unreachable every hour and 6 minutes.






Please try the following steps again:

(1) stop squid

(2) find out wht are the cache directories squid uses, for example

   # grep cache_dir squid.conf
   cache_dir ufs  /data1/squid_cache 6000 32 512
   cache_dir ufs  /data2/squid_cache 1 32 512
   #

 In this example /data1/squid_cache and /data2/squid_cache are the cache
dirs.

(3) Clean all cache dirs - in this example:

   cd /data1/squid_cache
   rm -rf *
   cd /data2/squid_cache
   rm -rf *

(4) Create the cache structures again:   squid -z

(5) Start squid.
What happens?
Is squid running? ps -ef | grep squid
What does cache.log say since starting squid?
Is squid reachable?

(6) What happens after 1 hour and 6 minutes?

Werner Rost

-Ursprüngliche Nachricht-
Von: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED]
Gesendet: Mittwoch, 9. November 2005 15:10
An: Dave Raven; squid-users@squid-cache.org
Betreff: RE: [squid-users] Squid unreachable every hour and 6 minutes.


I have already tried to:
- Stop Squid, delete swap.state, restart squid
- Stop Squid, format my cache partition, squid -z, start squid
- Change cache_dir ufs /cache 5000 16 256 to cache_dir ufs /cache 100
16 256, squid -k restart.
- Reboot the server completely

But nothing worked.




-Original Message-
From: Dave Raven [mailto:[EMAIL PROTECTED]
Sent: Mittwoch, 9. November 2005 14:58
To: Gix, Lilian (CI/OSR) *; squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Try using my method posted earlier to search for core files.
The fact that your log suddenly shows squid restarting means it died
unexpectedly. If there is a core file, it'll be squid's problem - if not,
it's probably something else causing the problem.

Also, you could try cleaning out your cache_dir:
remove everything and run squid -z to recreate it.

-Original Message-
From: Gix, Lilian (CI/OSR) * [mailto:[EMAIL PROTECTED]
Sent: 09 November 2005 03:32 PM
To: Mike Cudmore
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid unreachable every hour and 6 minutes.

Great, thanks for your answer and questions:
 
1- I have a message from my browser (IE, Firefox) which says the proxy
is unreachable. My MSN, yahoo messengers 

RE: [squid-users] Getting error Too few basicauthenticator processes are running

2005-11-10 Thread Dave Raven
Run  '/usr/local/squid/libexec/ncsa_auth /usr/local/squid/etc/passwd'

Type   'USERNAME PASSWORD'

And see what it says - I suspect you won't get that far, though. Once you
try to run it, it should give you an error.

-Original Message-
From: ads squid [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2005 09:40 AM
To: Chris Robertson; squid-users@squid-cache.org
Subject: RE: [squid-users] Getting error Too few basicauthenticator
processes are running

--- Chris Robertson [EMAIL PROTECTED] wrote:

  -Original Message-
  From: ads squid [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 09, 2005 3:42 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] Getting error Too few
 basicauthenticator
  processes are running
  
  
  Hi,
  I am trying to configure squid version squid-2.5.STABLE12 as 
  follows :
  
  [EMAIL PROTECTED] squid-2.5.STABLE12]# /usr/local/squid/sbin/squid 
  -NCd1
  
  
  I am getting the following error:
  
  2005/11/09 18:03:40| Accepting HTTP connections at 0.0.0.0, port 
  3128, FD 15.
  2005/11/09 18:03:40| WCCP Disabled.
  2005/11/09 18:03:40| Ready to serve requests.
  2005/11/09 18:03:41| WARNING: basicauthenticator #1 (FD 6) exited
  2005/11/09 18:03:41| WARNING: basicauthenticator #2 (FD 7) exited
  2005/11/09 18:03:41| WARNING: basicauthenticator #3 (FD 8) exited
  2005/11/09 18:03:41| Too few basicauthenticator processes are running
  FATAL: The basicauthenticator helpers are crashing too rapidly, need help!
  
  Aborted
  
  
  
  I have configured squid with minimum options as
  follows:
  [EMAIL PROTECTED] squid-2.5.STABLE12]# ./configure \
  --enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,SASL,MSNT
  
  .
  
  Please help me to solve the problem.
  I want to use basic authentication.
  
  Thanks for support.
  
 
 What does your auth_param line look like?
 
 Chris
 

It looks like the following:


auth_param basic program
/usr/local/squid/libexec/ncsa_auth
/usr/local/squid/etc/passwd
###

Thanks for support.









[squid-users] delay pools configuration question

2005-11-10 Thread kfliong

Hi,

I am trying to configure delay pools but have come across a few questions
that are not explained in some of the delay pools howto articles.


So basically I have 4 levels of delay pools, with 1 being the fastest
and 4 the slowest.


My questions on delay pools :

1) can I jumble up delay_access numbers like below?

delay_access 2 allow blahblah
delay_access 4 allow blahblah
delay_access 1 allow blahblah
delay_access 3 allow blahblah

How are the lines in delay_access interpreted?

2) Let's say delay_access 1 and delay_access 2 have intersecting
rules. Who wins?


eg. delay_access 1 has users from ip 192.168.1.1-192.168.1.100 AND
delay_access 2 has users named john and james which also fall within
the IP range of delay_access 1. So, which delay_access will john and
james get? What if I specify the delay_access 2 lines above the
delay_access 1 lines? Will delay_access 2 get matched first and then
delay_access 1 be ignored?


3) Let's say I have mary and kate in delay_access 1 for fast speed,
but I want certain sites to be slow for them. delay_access 2 handles
the slow sites. How do I do that?


(a)
delay_access 1 allow marykate
delay_access 2 allow slowsites

(b)
delay_access 1 allow marykate
delay_access 1 deny slowsites
delay_access 2 allow slowsites

Which one is correct, (a) or (b)?


Also, is (a1) same as (b1)?

(a1)
delay_access 1 allow marykate !slowsites

(b1)
delay_access 1 allow marykate
delay_access 1 deny slowsites


Hope you can understand my question.

Thanks for helping.
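For what it's worth, here is a sketch of how two such pools are often written (ACL names are hypothetical). Pools are evaluated in pool-number order, so a request matching both pool 1 and pool 2 is assigned to pool 1 regardless of where the lines appear in squid.conf:

```
# Sketch only; ACL names are made up for illustration.
acl marykate proxy_auth mary kate
acl slowsites dstdomain .slow.example.com

delay_pools 2
delay_class 1 1
delay_class 2 1
delay_parameters 1 -1/-1          # pool 1: unrestricted
delay_parameters 2 8000/8000      # pool 2: ~64 kbit/s
delay_access 1 allow marykate !slowsites
delay_access 1 deny all
delay_access 2 allow slowsites
delay_access 2 deny all
```

With this layout, requests from mary and kate to slowsites fall through to pool 2 because pool 1 explicitly excludes them.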




Re: [squid-users] Binding IP address to username

2005-11-10 Thread Matus UHLAR - fantomas
On 09.11 13:45, Pieter De Wit wrote:
 I would like to know how I can bind an IP address to a username in
 squid. So let's say I have a user called user1 and a machine on IP
 1.2.3.4. I would like squid to log any requests that come from 1.2.3.4 as
 if the user user1 logged in.

Why? Do you mean this as a form of weak user authentication?

You can probably turn on log_fqdn and set up a hosts file where you map
each IP to a username, so the IP field will contain usernames.

Also, you can parse log files after rotation and replace IPs with users,
or change the ident field to the user according to the connected IP. A
simple awk/perl script can do that.
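As a sketch of that post-rotation rewrite (the log line and the IP-to-user mapping here are made up for illustration):

```shell
# Rewrite client IPs to usernames in a squid access.log line.
# The sample line and the mapping are hypothetical.
SAMPLE='1131560733.123 45 1.2.3.4 TCP_HIT/200 1234 GET http://example.com/ - NONE/- text/html'

REWRITTEN=$(printf '%s\n' "$SAMPLE" | awk '
BEGIN {
    # hypothetical mapping; in practice, load it from a file
    user["1.2.3.4"] = "user1"
}
{
    if ($3 in user) $3 = user[$3]   # field 3 of access.log is the client IP
    print
}')
echo "$REWRITTEN"
```

In practice the mapping would be loaded from a file and the script run over the rotated access.log before feeding it to the reporting tool.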

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The only substitute for good manners is fast reflexes. 


Re: [squid-users] Large Solaris (2.8) Squid Server Advice Needed

2005-11-10 Thread Matus UHLAR - fantomas
On 08.11 14:01, Vadim Pushkin wrote:
 My responses below.  Thank you all for the assistance, very much 
 appreciated.  Is anyone interested in my posting the final squid.conf when 
 this is all said and done?

 I hope you configured squid with heap removal policies and async IO allowed
 
 I've configured squid like this:
 
 ./configure --prefix=/usr/local/squid --enable-storeio=diskd,ufs \
   --enable-icmp --enable-snmp --enable-err-languages=English \
   --enable-default-err-language=English --disable-hostname-checks \
   --enable-underscores --enable-stacktrace
 
 What am I missing, if anything?
 These?
 
 --enable-heap-replacement

--enable-removal-policies=heap,lru

 --enable-async-io[=N_THREADS]  (Leave N blank?)

yes.

 I will test with your suggests using aufs.  Thank you very much, though I 
 did not even think of using aufs as an option.  Shall I compile like this?
 
 --with-aufs-threads=N_THREADS (Leave N blank?, or do not use?)

I think you don't need to use this.

 --enable-storeio=ufs,aufs

yes.

 At the moment I am having a discussion on why we should not be using
 Veritas Disk Suite. I couldn't care less if we lose this data, and the
 mirror overhead will slow things down a lot, no?

If you have a HW mirror, it should not slow writes much, and it would speed
up reads. It depends on how much you will miss your cache if you lose it.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam = (S)tupid (P)eople's (A)dvertising (M)ethod


Re: [squid-users] Long Query String results in Invalid response

2005-11-10 Thread Matus UHLAR - fantomas
On 09.11 13:25, Sears, Shawn wrote:
 Attached is a sample query string and error response.
 
 
 GET /articles/asearch.html?which_index=both&meta-dc=10&func=simple_search&field-Name=stowe&collection-label=black+history+month&collection-label=womens+history+month&collection-label=asian+pacific+american+heritage+month&collection-label=hispanic+heritage+month&collection-label=american+indian+heritage&collection-text=Black+History&collection-text=Women%27s+History&collection-text=Asian+Pacific+American+Heritage&collection-text=Hispanic+Heritage&collection-text=American+Indian+Heritage&update-version=Feb.+2000&update-version=June+2000&update-version=Sept.+2000&update-version=Jan.+2001&update-version=Apr.+2001&update-version=July+2001&update-version=Oct.+2001&update-version=Jan.+2002&update-version=Apr.+2002&update-version=July+2002&update-version=Oct.+2002&update-version=Jan.+2003&update-version=Apr.+2003&update-version=Aug.+2003&update-version=Dec.+2003&update-version=Apr.+2004&update-version=July+2004&update-version=Feb.+2005&update-version=Sept.+2005&subj_name= HTTP/1.0
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
 application/x-shockwave-flash, application/vnd.ms-excel,
 application/vnd.ms-powerpoint, application/msword, */*
 Referer: http://www.anb.org/subscriber-home.html
 Accept-Language: en-us
 Proxy-Connection: Keep-Alive
 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET
 CLR 1.1.4322)
 Host: www.anb.org
 Cookie: anb=carvlib:aNbcUGjoLMuAk:1131560733
 
 The following error was encountered: 
 
 Invalid Response

Is this the browser's or squid's message? Check the logs.
Maybe you should tune up request_header_max_size.
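For reference, the knob in question is a one-line squid.conf setting (the value below is only illustrative):

```
# Raise the maximum size squid accepts for request headers;
# requests with bigger headers are refused.
request_header_max_size 64 KB
```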



-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Emacs is a complicated operating system without good text editor.


[squid-users] cache size and replacement policy

2005-11-10 Thread lokesh.khanna
Hi 

I am using Squid 2.5.10 on RedHat 3.0 ES. I noticed the usage of my cache1
and cache2 directories doesn't go higher than 12.7 GB even though I have
free space there.

Is it because of my replacement policy? Or is it because of something
else? I want to increase the cache. The current byte hit ratio is less
than 25%. I want to increase it to save more bandwidth.

The number of objects stored is always 1800 K. I am doing snmp polling to
check the number of objects.

Below is my configuration.

cache_dir diskd /cache1/squid 20480 16 256 Q1=64 Q2=72
cache_dir diskd /cache2/squid 20480 16 256 Q1=64 Q2=72

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA


Thanks - LK 


RE: [squid-users] Long Query String results in Invalid response

2005-11-10 Thread Sears, Shawn
This is the message I receive in the browser from Squid. I turned the
debugging up to 9 and didn't see any glaring errors. Is there something I
should be looking for?


What should I make the request_header_max_size? I played around with that
setting, making it as large as 2 MB, and I did not get any different results.
 
 

 



From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED]
Sent: Thu 11/10/2005 4:40 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Long Query String results in Invalid response



On 09.11 13:25, Sears, Shawn wrote:
 Attached is a sample query string and error response.


 GET /articles/asearch.html?which_index=both&meta-dc=10&func=simple_search&field-Name=stowe&collection-label=black+history+month&collection-label=womens+history+month&collection-label=asian+pacific+american+heritage+month&collection-label=hispanic+heritage+month&collection-label=american+indian+heritage&collection-text=Black+History&collection-text=Women%27s+History&collection-text=Asian+Pacific+American+Heritage&collection-text=Hispanic+Heritage&collection-text=American+Indian+Heritage&update-version=Feb.+2000&update-version=June+2000&update-version=Sept.+2000&update-version=Jan.+2001&update-version=Apr.+2001&update-version=July+2001&update-version=Oct.+2001&update-version=Jan.+2002&update-version=Apr.+2002&update-version=July+2002&update-version=Oct.+2002&update-version=Jan.+2003&update-version=Apr.+2003&update-version=Aug.+2003&update-version=Dec.+2003&update-version=Apr.+2004&update-version=July+2004&update-version=Feb.+2005&update-version=Sept.+2005&subj_name= HTTP/1.0
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
 application/x-shockwave-flash, application/vnd.ms-excel,
 application/vnd.ms-powerpoint, application/msword, */*
 Referer: http://www.anb.org/subscriber-home.html
 Accept-Language: en-us
 Proxy-Connection: Keep-Alive
 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET
 CLR 1.1.4322)
 Host: www.anb.org
 Cookie: anb=carvlib:aNbcUGjoLMuAk:1131560733

 The following error was encountered:

 Invalid Response

Is this the browser's or squid's message? Check the logs.
Maybe you should tune up request_header_max_size.



--
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Emacs is a complicated operating system without good text editor.




[squid-users] RE: AW: Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Adam Aube
Gix, Lilian (CI/OSR) * wrote:

 Thanks for your help :
 
 proxy1:~#  crontab  -l
 0 0 * * * /etc/webmin/webalizer/webalizer.pl /cache_log/access.log
 proxy1:~# more /etc/crontab
 SHELL=/bin/sh
 PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
 
 # m h dom mon dow user  command
 17 *  * * *   root    run-parts --report /etc/cron.hourly
 1 0   * * *   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.daily
 47 6  * * 7   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.weekly
 52 6  1 * *   root    test -x /usr/sbin/anacron || run-parts --report /etc/cron.monthly
 
 proxy1:~# ls /etc/cron.hourly/
 proxy1:~#
 
 
 The server is a compaq DL580 (2*Xeon700Mhz, 1G of Ram, Raid 5: 32G),
 working on Debian

What about /etc/cron.d/ and /var/spool/cron/crontabs/?
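Beyond those two locations, a short script can sweep every usual cron location at once (a sketch for a Debian-style layout; paths vary per distribution, and crontab -l -u needs root):

```shell
# List the system crontab and the run-parts directories, ignoring
# any that do not exist on this machine.
cat /etc/crontab 2>/dev/null || true
ls /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.d 2>/dev/null || true
ls /var/spool/cron/crontabs 2>/dev/null || true

# Per-user crontabs for every account on the system.
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | sed "s/^/$u: /"
done
SCAN=done
```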

Adam



[squid-users] Re: squid_ldap_auth and Windows 2003 AD

2005-11-10 Thread Adam Aube
Colin Farley wrote:

 We have a few production squid proxy servers running various STABLE
 versions of squid 2.5 and are encountering some issues as we upgrade our
 Domain controllers from windows 2000 to windows 2003.  The proxy servers
 query the LDAP directory for user access control.

 Ideally, we would like all proxy servers to use a base dn that allows them
 to search the entire domain (dn=domain,dn=lan), when querying Windows
 2000 domain controllers this works perfectly.  However, when we point
 these proxy servers to Windows 2003 domain controllers for LDAP queries
 squid_ldap_auth fails.

 I have found that if I specify an ou for the base dn it works fine
 (ou=site1,dn=domain,dn=lan).  So, it seems that Windows 2003 domain
 controllers have added security that stops searches beginning from the
 base of the domain and searches must start within an ou.

Have you configured squid_ldap_auth to bind using a user account?

Adam



Re: [squid-users] squid_ldap_auth and Windows 2003 AD

2005-11-10 Thread Colin Farley
Thanks for the reply.  I had a look at the article and I don't think it
explains my situation.  My squid_ldap_auth command points to a squid
user and supplies a password, so I am not doing anonymous searches.  I think
the fact that it works when I specify an OU means that it's not an
authentication problem but rather a search restriction.  Any thoughts are
appreciated.

Thanks,
 Colin


   
From: Serassio Guido [EMAIL PROTECTED]
To: Colin Farley [EMAIL PROTECTED], squid-users@squid-cache.org
Date: 11/10/2005 01:35 AM
Subject: Re: [squid-users] squid_ldap_auth and Windows 2003 AD




Hi,

At 22.25 09/11/2005, Colin Farley wrote:
So, it seems that Windows 2003 domain controllers have added security
that stops searches beginning from the base of the domain, and searches
must start within an ou. Has anyone encountered this? Are there any
fixes that anyone is aware of? Any help is greatly appreciated.

Correct, look here:

http://support.microsoft.com/default.aspx?scid=326690

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/




Re: [squid-users] squid_ldap_auth and Windows 2003 AD

2005-11-10 Thread Serassio Guido

Hi,

At 16.32 10/11/2005, Colin Farley wrote:


Thanks for the reply.  I had a look at the article and I don't think that
it explains my situation.  My squid_ldap_auth command points to a squid
user and supplies a password so I am not doing anonymous searches.  I think
the fact that it works when a specify an OU means that it's not an
authentication problem but rather a search restriction.  Any thoughts are
appreciated.


This SHOULD BE the solution to your problem; it fixed a similar
problem of mine with LDAP authentication with Apache, so please try it.


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Serassio Guido

Hi,

At 08.36 10/11/2005, Gix, Lilian (CI/OSR) * wrote:

0 0 * * * /etc/webmin/webalizer/webalizer.pl /cache_log/access.log


What is the content of webalizer.pl ?

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] Build error! Help! -client_side.o(.text+0xf65): In function `gzip_data':/home/lq/squid-2.5.S12/src/client_side.c:2053: undefined reference to `deflate'

2005-11-10 Thread ro vencentro
I want to make squid support gzip, but I have a problem when compiling:

source='string_arrays.c' object='string_arrays.o' libtool=no \
depfile='.deps/string_arrays.Po' tmpdepfile='.deps/string_arrays.TPo' \
depmode=gcc3 /bin/sh ../cfgaux/depcomp \
gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include-g -O2 -Wall -c
`test -f string_arrays.c || echo './'`string_arrays.c
gcc  -g -O2 -Wall  -g -o squid  access_log.o acl.o asn.o
authenticate.o cache_cf.o CacheDigest.o cache_manager.o carp.o
cbdata.o client_db.o client_side.o comm.o comm_select.o debug.o 
disk.o dns_internal.o errorpage.o ETag.o event.o external_acl.o fd.o
filemap.o forward.o fqdncache.o ftp.o gopher.o helper.o  http.o
HttpStatusLine.o HttpHdrCc.o HttpHdrRange.o HttpHdrContRange.o
HttpHeader.o HttpHeaderTools.o HttpBody.o HttpMsg.o HttpReply.o
HttpRequest.o icmp.o icp_v2.o icp_v3.o ident.o internal.o ipc.o
ipcache.o  logfile.o main.o mem.o MemPool.o MemBuf.o mime.o
multicast.o neighbors.o net_db.o Packer.o pconn.o peer_digest.o
peer_select.o redirect.o referer.o refresh.o send-announce.o  ssl.o 
stat.o StatHist.o String.o stmem.o store.o store_io.o store_client.o
store_digest.o store_dir.o store_key_md5.o store_log.o store_rebuild.o
store_swapin.o store_swapmeta.o store_swapout.o tools.o unlinkd.o
url.o urn.o useragent.o wais.o wccp.o whois.o  repl_modules.o
auth_modules.o store_modules.o globals.o string_arrays.o -L../lib
repl/liblru.a fs/libufs.a auth/libbasic.a -lcrypt -lmiscutil -lm
-lresolv -lbsd -lnsl
client_side.o(.text+0xf65): In function `gzip_data':
/home/lq/squid-2.5.S12/src/client_side.c:2053: undefined reference to `deflate'
client_side.o(.text+0xf9c):/home/lq/squid-2.5.S12/src/client_side.c:2059:
undefined reference to `crc32'
client_side.o(.text+0x54ba): In function `clientSendMoreData':
/home/lq/squid-2.5.S12/src/client_side.c:2082: undefined reference to `deflate'
client_side.o(.text+0x68f7):/home/lq/squid-2.5.S12/src/client_side.c:1539:
undefined reference to `deflateInit2_'
client_side.o(.text+0x6905):/home/lq/squid-2.5.S12/src/client_side.c:1541:
undefined reference to `crc32'
collect2: ld returned 1 exit status
make[3]: *** [squid] Error 1
make[3]: Leaving directory `/home/lq/squid-2.5.S12/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/lq/squid-2.5.S12/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/lq/squid-2.5.S12/src'
make: *** [all-recursive] Error 1
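The undefined symbols (deflate, deflateInit2_, crc32) are all provided by zlib, so the link command above is simply missing -lz. Re-running configure with zlib added to the link flags (for example `env LIBS=-lz ./configure ...`) and rebuilding is the usual fix. The snippet below only checks that a linkable zlib is present; the compiler name is an assumption:

```shell
# Compile a trivial program against -lz to confirm zlib is installed
# and linkable on this machine.
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
if cc conftest.c -lz -o conftest 2>/dev/null; then
    ZLIB=linkable
else
    ZLIB=missing
fi
rm -f conftest.c conftest
echo "zlib: $ZLIB"
```

If this reports "missing", install the zlib development package before rebuilding squid with the gzip patch.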


Re: [squid-users] Re: squid_ldap_auth and Windows 2003 AD

2005-11-10 Thread Colin Farley
Yes, I have. The searches are being performed by an authenticated user.

Thanks,
Colin


   
From: Adam Aube [EMAIL PROTECTED]
Sent by: news [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Date: 11/10/2005 08:51 AM
Subject: [squid-users] Re: squid_ldap_auth and Windows 2003 AD




Colin Farley wrote:

 We have a few production squid proxy servers running various STABLE
 versions of squid 2.5 and are encountering some issues as we upgrade our
 Domain controllers from windows 2000 to windows 2003.  The proxy servers
 query the LDAP directory for user access control.

 Ideally, we would like all proxy servers to use a base dn that allows
them
 to search the entire domain (dn=domain,dn=lan), when querying Windows
 2000 domain controllers this works perfectly.  However, when we point
 these proxy servers to Windows 2003 domain controllers for LDAP queries
 squid_ldap_auth fails.

 I have found that if I specify an ou for the base dn it works fine
 (ou=site1,dn=domain,dn=lan).  So, it seems that Windows 2003 domain
 controllers have added security that stops searches beginning from the
 base of the domain and searches must start within an ou.

Have you configured squid_ldap_auth to bind using a user account?

Adam




[squid-users] can the squid reverse proxy enque some get request?

2005-11-10 Thread Pakozdi Tibor
Hi!

I have the following situation:

At the web server I have a time and resource consuming page
which should be cached by the Squid (configured as reverse
proxy / http accelerator). 
Let's suppose the page generation takes 5 seconds. At first there is
nothing in the Squid reverse proxy's cache, and then several HTTP GET
requests arrive within less than 5 seconds. In my configuration, all
of those HTTP GET requests went to the web server, each individually
triggering a page generation at the web server.
The required behaviour would be that only the first HTTP GET goes to
the web server and the others wait at the Squid reverse proxy for the
cache entry to be generated, so those requests can then be served
from the cache.
Requests arriving after the generation time were served from the
reverse proxy cache, so that was OK; only the requests that came
within the web server's page-generation time frame (the 5 seconds
mentioned above) passed through the reverse proxy.

I do not know whether I have failed to configure the reverse proxy or
something else happened, but on my computer it simply did not work.

Please could you help me about this?
What can I change at the configuration to work as I have
written?
Or is it a common (reverse) proxy behaviour?

And here is my setup, in case this is not a general
(reverse) proxy issue but only the behaviour of some
installations:
OS: Cygwin 2.510.2.2 on Win2000 SP4
Web server: Apache Web Server 2.0.53
Squid proxy: squid-2.5.STABLE12.tar.gz

Thanks:
Tibor Pakozdi





RE: [squid-users] Getting error Too few basicauthenticator processes are running

2005-11-10 Thread Chris Robertson
 -Original Message-
 From: ads squid [mailto:[EMAIL PROTECTED] 
 Sent: 10 November 2005 09:40 AM
 To: Chris Robertson; squid-users@squid-cache.org
 Subject: RE: [squid-users] Getting error Too few basicauthenticator
 processes are running
 
 --- Chris Robertson [EMAIL PROTECTED] wrote:
 
   -Original Message-
   From: ads squid [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, November 09, 2005 3:42 AM
   To: squid-users@squid-cache.org
   Subject: [squid-users] Getting error Too few
  basicauthenticator
   processes are running
   
   
   Hi,
   I am trying to configure squid version squid-2.5.STABLE12 as 
   follows :
   
   [EMAIL PROTECTED] squid-2.5.STABLE12]# 
 /usr/local/squid/sbin/squid 
   -NCd1
   
   
   I am getting following error 
   
   2005/11/09 18:03:40| Accepting HTTP connections at 0.0.0.0, port 
   3128, FD 15.
   2005/11/09 18:03:40| WCCP Disabled.
   2005/11/09 18:03:40| Ready to serve requests.
   2005/11/09 18:03:41| WARNING: basicauthenticator
  #1
   (FD 6) exited
   2005/11/09 18:03:41| WARNING: basicauthenticator
  #2
   (FD 7) exited
   2005/11/09 18:03:41| WARNING: basicauthenticator
  #3
   (FD 8) exited
   2005/11/09 18:03:41| Too few basicauthenticator processes are 
   running
   FATAL: The basicauthenticator helpers are crashing
  too
   rapidly, need help!
   
   Aborted
   
   
   
   I have configured squid with minimum options as
   follows:
   [EMAIL PROTECTED] squid-2.5.STABLE12]# ./configure
  
 
 --enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,SASL,MSNT
   
   .
   
   Please help me to solve the problem.
   I want to use basic authentication.
   
   Thanks for support.
   
  
  What does your auth_param line look like?
  
  Chris
  
 
 It looks like as following :
 
 
 auth_param basic program
 /usr/local/squid/libexec/ncsa_auth
 /usr/local/squid/etc/passwd
 ###
 
 Thanks for support.
 
 
 -Original Message-
 From: Dave Raven [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 09, 2005 11:29 PM
 To: 'ads squid'; squid-users@squid-cache.org
 Subject: RE: [squid-users] Getting error Too few basicauthenticator
 processes are running
 
 
 Run  '/usr/local/squid/libexec/ncsa_auth /usr/local/squid/etc/passwd'
 
 Type   'USERNAME PASSWORD'
 
 And see what it says - I suspect you won't get that far,
 though. Once you try to run it, it should give you an error.
 

Make sure you are logged in as the cache_effective_user when you run this 
command.  Otherwise, use:

su squid -c "/usr/local/squid/libexec/ncsa_auth /usr/local/squid/etc/passwd"
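
The manual test looks roughly like the session below (a sketch; paths follow the /usr/local/squid prefix used in this thread, and testuser/testpass are placeholders).  The helper reads "username password" pairs on stdin and answers OK or ERR:

```
$ su squid -c "/usr/local/squid/libexec/ncsa_auth /usr/local/squid/etc/passwd"
testuser testpass
OK
testuser wrongpass
ERR
```

If the helper instead exits immediately (as the crash messages in this thread suggest), the error it prints before exiting is the thing to chase.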

Chris


RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Chris Robertson
 -Original Message-
 From: Serassio Guido [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 10, 2005 7:23 AM
 To: Gix, Lilian (CI/OSR) *; [EMAIL PROTECTED];
 squid-users@squid-cache.org
 Subject: RE: AW: [squid-users] Squid unreachable every hour and 6
 minutes.
 
 
 Hi,
 
 At 08.36 10/11/2005, Gix, Lilian (CI/OSR) * wrote:
  0 0 * * * /etc/webmin/webalizer/webalizer.pl 
 /cache_log/access.log
 
 What is the content of webalizer.pl ?
 
 Regards
 
 Guido
 
 

Does it matter? It only runs once per day (at midnight).

Personally at this point, I would just run Squid under strace or some other 
debugging interface and see if that gives any indication of what is happening.

While I certainly agree that the regularity of the crashes points to an outside 
influence, it doesn't preclude something internal (such as cache digest 
creation)...  *shrug*

Chris


Re: [squid-users] Large Solaris (2.8) Squid Server Advice Needed

2005-11-10 Thread Vadim Pushkin


Here is my draft squid.conf file, and my configure options from when I built 
squid:


NOTE **  I am now looking to turn both of my squid servers into cache peers 
of each other.  Both machines have two network interfaces, and I plan on 
dedicating one of these to a private LAN connection solely for ICP use.  
Am I stating this properly within my squid.conf? I wish to ensure that 
(a) inter-cache traffic does not leak out of interface A, only interface B 
(my private LAN), and (b) that the two machines on LAN B (again, the 
private LAN) are able to access each other's caches freely.


Thank you all!

.vp

--BUILD LINE---

./configure --prefix=/opt/squid/current --enable-storeio=ufs,aufs 
--enable-icmp --enable-err-languages=English 
--enable-default-err-language=English --disable-hostname-checks 
--enable-underscores --enable-stacktrace --enable-async-io --enable-snmp 
--enable-removal-policies=heap,lru


##  Is there any purpose to specifying both ufs *and* aufs for 
--enable-storeio?
## I built with just aufs and it seems to be working fine, though I haven't 
## really stressed it much.

 SQUID.CONF ---

http_port 8080
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_dir aufs /usr/local/squid/cache 51200 64 256
# Increase maximum object size ?
maximum_object_size 32 MB
# Use this instead?
# maximum_object_size 500 KB
cache_mem  4 MB
cache_swap_low  97
cache_swap_high 100

ipcache_size 4096
ipcache_low  90
ipcache_high 95
fqdncache_size 4096
buffered_logs off
# Use heap LFUDA replacement policy:
cache_replacement_policy heap LFUDA
cache_access_log /usr/local/squid/var/logs/access.log
# cache_access_log /usr/local/squid/cache
# cache_log /dev/null
# cache_store_log none
ftp_user squid_ftp@
# Keep?
# diskd_program /usr/local/squid/libexec/diskd
debug_options ALL,1
#reference_age 6 month
quick_abort_min 1 KB
quick_abort_max 1048576 KB
quick_abort_pct 90
connect_timeout 30 seconds
read_timeout 5 minutes
request_timeout 30 seconds
client_lifetime 2 hour
half_closed_clients off
pconn_timeout 120 seconds
ident_timeout 10 seconds
shutdown_lifetime 15 seconds
# request_body_max_size 50 MB
request_header_max_size 100 KB
request_body_max_size 1000 KB

refresh_pattern ^ftp:       1440    50%     86400   reload-into-ims
refresh_pattern ^gopher:    1440     0%      1440   reload-into-ims
refresh_pattern .              0    50%     86400   reload-into-ims


acl DIALUPS  src 192.168.0.0/16
acl IntraNet_One   src 12.20.0.0/16
acl IntraNet_Two  src 12.30.0.0/16
acl BACKUPS src 12.40.0.0/16
acl ICP_ONE src 10.20.30.2/255.255.255.252
acl ICP_ONE src 10.20.30.2/255.255.255.252
#
# Everyone Else
#
acl all src 0.0.0.0/255.255.255.255
#
http_access allow DIALUPS
http_access allow IntraNet_One
http_access deny IntraNet_Two
http_access allow BACKUPS
#
http_access deny all
acl manager proto cache_object

acl localhost src 127.0.0.1/255.255.255.255
#
# Define Safe Ports to use.
#
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
#
# Define SSL Ports
#
acl SSL_ports port 443 563

acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

#
# http_access allow all
#
# ??? One per each network as above?
#
http_reply_access allow Remote_Access
#
http_reply_access allow DIALUPS
http_reply_access allow IntraNet_One
http_reply_access deny IntraNet_Two
http_reply_access allow BACKUPS
#
http_reply_access deny all

cache_mgr [EMAIL PROTECTED]

visible_hostname squidproxy-1

logfile_rotate 14

coredump_dir /usr/local/squid/var/cache

cache_effective_user nobody
cache_effective_group nobody

# CACHE PEER
icp_port 3130
# icp_access allow all
# Is this correct?
icp_access allow ICP_ONE
icp_access allow ICP_TWO

#
cache_peer 10.20.30.2 sibling   3128  3130

# The other host has
# cache_peer 10.20.30.3 sibling   3128  3130

peer_connect_timeout 10 seconds
dns_testnames localhost

--- END OF SQUID.CONF FILE 


From: Matus UHLAR - fantomas [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Large Solaris (2.8) Squid Server Advice Needed
Date: Thu, 10 Nov 2005 10:37:59 +0100



On 08.11 14:01, Vadim Pushkin wrote:



 My responses below.  Thank you all for the assistance, very much
 appreciated.  Is anyone interested in my posting the final squid.conf
 when this is all said and done?

I hope you configured squid with heap removal policies and async IO
allowed.


 

RE: [squid-users] Large Solaris (2.8) Squid Server Advice Needed

2005-11-10 Thread Chris Robertson
 -Original Message-
 From: Vadim Pushkin [mailto:[EMAIL PROTECTED]
 Sent: Thursday, November 10, 2005 10:40 AM
 To: [EMAIL PROTECTED]; squid-users@squid-cache.org
 Subject: Re: [squid-users] Large Solaris (2.8) Squid Server Advice
 Needed
 
 
 
 Here is my draft squid.conf file, and my configure options 
 when I built 
 squid..
 
 NOTE **  I am now looking to turn both of my squid servers 
 into cache peers 
 of each other.  Both machines have two network interfaces, 
 and I plan on 
 dedicating one of these for a private LAN connection solely 
 for ICP use.  
 Am I stating this properly within my squid.conf? I wish to 
 ensure that 
 inter-caching a) does not leak out of interface A, only 
 interface B (my 
 private LAN) and that between these two machines on LAN B 
 (again, private 
 LAN), that they are able to access each others cache freely.
 
 Thank you all!
 
 .vp
 
 --BUILD LINE---
 
 ./configure --prefix=/opt/squid/current --enable-storeio=ufs,aufs 
 --enable-icmp --enable-err-languages=English 
 --enable-default-err-language=English --disable-hostname-checks 
 --enable-underscores --enable-stacktrace --enable-async-io 
 --enable-snmp 
 --enable-removal-policies=heap,lru
 
 ##  Is there any purpose to specifying both ufs *and* aufs for 
 --enable-storeio?
 ## I built with just aufs and it seems to be working fine, 
 though I haven't 
 really
 ## stressed it much.

As I understand it, specifying both lets you use either.  If you are only going 
to use aufs, just specify aufs.

 
  SQUID.CONF ---
 
 http_port 8080
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY
 cache_dir aufs /usr/local/squid/cache 51200 64 256
 # Increase maximum object size ?
 maximum_object_size 32 MB
 # Use this instead?
 # maximum_object_size 500 KB

Depends on your customers' usage patterns.  One ~5GB item will save a lot of 
bandwidth if it's cacheable and requested more than once.  On the other hand, 
it will prevent a bundle of 5MB images from being cached.

 cache_mem  4 MB
 cache_swap_low  97
 cache_swap_high 100

I'd lower cache_swap_high to 98.  With a cache as large as you have, each 
percent is in the neighborhood of 500MB of data.  Setting cache_swap_high to 98 
will start aggressively purging cached objects when you have around 1GB of 
cache space free. 
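
As a squid.conf fragment, the suggestion would look like this (97 is the poster's existing low-water mark; only the high-water mark changes):

```
# Each percent of this 51200 MB cache is roughly 500 MB.
cache_swap_low  97
cache_swap_high 98
```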

 
 ipcache_size 4096
 ipcache_low  90
 ipcache_high 95
 fqdncache_size 4096
 buffered_logs off
 # Use heap LFUDA replacement policy:
 cache_replacement_policy heap LFUDA
 cache_access_log /usr/local/squid/var/logs/access.log
 # cache_access_log /usr/local/squid/cache
 # cache_log /dev/null
 # cache_store_log none
 ftp_user squid_ftp@
 # Keep?
 # diskd_program /usr/local/squid/libexec/diskd

If you are using aufs as the cache_dir type, you don't need to specify diskd.  
Actually, you only need to specify it if it's different from the default.

 debug_options ALL,1
 #reference_age 6 month
 quick_abort_min 1 KB
 quick_abort_max 1048576 KB
 quick_abort_pct 90
 connect_timeout 30 seconds
 read_timeout 5 minutes
 request_timeout 30 seconds
 client_lifetime 2 hour
 half_closed_clients off
 pconn_timeout 120 seconds
 ident_timeout 10 seconds
 shutdown_lifetime 15 seconds
 # request_body_max_size 50 MB
 request_header_max_size 100 KB
 request_body_max_size 1000 KB
 
 refresh_pattern ^ftp:       1440    50%     86400   reload-into-ims
 refresh_pattern ^gopher:    1440     0%      1440   reload-into-ims
 refresh_pattern .              0    50%     86400   reload-into-ims
 
 acl DIALUPS  src 192.168.0.0/16
 acl IntraNet_One   src 12.20.0.0/16
 acl IntraNet_Two  src 12.30.0.0/16
 acl BACKUPS src 12.40.0.0/16
 acl ICP_ONE src 10.20.30.2/255.255.255.252
 acl ICP_ONE src 10.20.30.2/255.255.255.252

Why is ICP_ONE specified twice?  I imagine it should either be ICP_TWO (used 
below) or should just be removed (if ICP_ONE covers the whole subnet).
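
If the duplicate was meant to be the second peer, the corrected pair might read as follows (10.20.30.3 is taken from the commented cache_peer line later in the posted config; verify it matches your private LAN):

```
acl ICP_ONE src 10.20.30.2/255.255.255.252
acl ICP_TWO src 10.20.30.3/255.255.255.252
```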

 #
 # Everyone Else
 #
 acl all src 0.0.0.0/255.255.255.255
 #
 http_access allow DIALUPS
 http_access allow IntraNet_One
 http_access deny IntraNet_Two
 http_access allow BACKUPS

http_access allow ICP_ONE #  Otherwise requests for cached content from peers 
will fail.

 #
 http_access deny all
 acl manager proto cache_object
 
 acl localhost src 127.0.0.1/255.255.255.255
 #
 # Define Safe Ports to use.
 #
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 563 # https, snews
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 #
 # Define SSL Ports
 #
 acl SSL_ports port 443 563
 
 acl CONNECT method CONNECT
 
 http_access allow manager localhost
 

Re: [squid-users] Re: squid_ldap_auth and Windows 2003 AD

2005-11-10 Thread Colin Farley
Yes, I can in some cases.  If I am querying a Windows 2003 DC and the base DN
is the base of the domain (dn=domain,dn=lan), then I get the following:

squid_ldap_auth: WARNING, LDAP search error 'Operations error'
ERR Success

But if I specify an ou (ou=site1,dn=domain,dn=lan) then it works
correctly.  If I query a Windows 2000 DC, then it works either way.
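
For what it's worth, a command-line test of the helper can look like the sketch below (bind DN, password, host and filter are placeholders for your environment; -b, -D, -w, -f and -h are standard squid_ldap_auth options):

```
$ /usr/local/squid/libexec/squid_ldap_auth \
    -b "ou=site1,dn=domain,dn=lan" \
    -D "cn=proxyuser,ou=site1,dn=domain,dn=lan" -w bindpassword \
    -f "sAMAccountName=%s" \
    -h dc1.domain.lan
someuser somepassword
OK
```

Swapping the -b value for the domain base should reproduce the 'Operations error' against a 2003 DC.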

Colin




Hi Colin, I had a tough time getting the syntax right; can you do
command-line lookups using squid_ldap_auth?


On Thu, 2005-11-10 at 11:29 -0600, Colin Farley wrote:
 Yes, I have. The searches are being performed by an authenticated user.

 Thanks,
 Colin



From: Adam Aube [EMAIL PROTECTED] (sent by news [EMAIL PROTECTED])
To: squid-users@squid-cache.org
Date: 11/10/2005 08:51 AM
Subject: [squid-users] Re: squid_ldap_auth and Windows 2003 AD

 Colin Farley wrote:

  We have a few production squid proxy servers running various STABLE
  versions of squid 2.5 and are encountering some issues as we upgrade
our
  Domain controllers from windows 2000 to windows 2003.  The proxy
servers
  query the LDAP directory for user access control.

  Ideally, we would like all proxy servers to use a base dn that allows
 them
  to search the entire domain (dn=domain,dn=lan), when querying Windows
  2000 domain controllers this works perfectly.  However, when we point
  these proxy servers to Windows 2003 domain controllers for LDAP queries
  squid_ldap_auth fails.

  I have found that if I specify an ou for the base dn it works fine
  (ou=site1,dn=domain,dn=lan).  So, it seems that Windows 2003 domain
  controllers have added security that stops searches beginning from the
  base of the domain and searches must start within an ou.

 Have you configured squid_ldap_auth to bind using a user account?

 Adam







Re: [squid-users] can the squid reverse proxy enque some get request?

2005-11-10 Thread Henrik Nordstrom

On Thu, 10 Nov 2005, Pakozdi Tibor wrote:


Let's suppose that the page generation lasts 5 seconds. At
first there is no cache at the Squid reverse proxy, then
come some HTTP GET requests in less than 5 seconds. Now at
my configuration all of the HTTP GET requests went to the
web server, individually making page generations at the web
server.
The required behaviour would be that only the first HTTP GET
goes to the web server and the others are waiting at the
Squid reverse proxy for the cache to be generated, and from
the cache those requests could be served.


This is not possible out-of-the-box, but look for collapsed_forwarding at 
devel.squid-cache.org.
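
For reference, on a build that includes the feature, enabling it is a one-line squid.conf directive (the directive name comes from the collapsed_forwarding work at devel.squid-cache.org; it is not in stock 2.5):

```
# Merge concurrent cache misses for the same URL into a single
# server-side request (requires a collapsed_forwarding-capable build).
collapsed_forwarding on
```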


Regards
Henrik


RE: [squid-users] Getting error Too few basicauthenticator processes are running

2005-11-10 Thread ads squid
I created the file /usr/local/squid/etc/passwd, added user
accounts with passwords, and it works now.
Perhaps creating /usr/local/squid/etc/passwd was what was
required.
Thanks for support.
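
For the archives, an NCSA-style passwd file can be built like this (a sketch: /tmp/squid_passwd_demo and someuser are placeholders; the usual tool is Apache's `htpasswd -c`, with openssl shown as a fallback for generating the hash -- note that older ncsa_auth builds understand only crypt()-style hashes, so check what your helper supports):

```shell
# Demo path -- in production this would be /usr/local/squid/etc/passwd,
# readable by the cache_effective_user.
PASSWD=/tmp/squid_passwd_demo

# Apache's htpasswd is the usual tool:
#   htpasswd -c /usr/local/squid/etc/passwd someuser
# Fallback: generate an MD5 (apr1) hash with openssl and append the
# username:hash line by hand.
HASH=$(openssl passwd -apr1 secretpass)
printf 'someuser:%s\n' "$HASH" > "$PASSWD"

cat "$PASSWD"
```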


--- Dave Raven [EMAIL PROTECTED] wrote:

 Run  '/usr/local/squid/libexec/ncsa_auth
 /usr/local/squid/etc/passwd'
 
 Type   'USERNAME PASSWORD'
 
 And see what it says - I suspect you won't get that
 far, though. Once you try to
 run it, it should give you an error.
 
 -Original Message-
 From: ads squid [mailto:[EMAIL PROTECTED] 
 Sent: 10 November 2005 09:40 AM
 To: Chris Robertson; squid-users@squid-cache.org
 Subject: RE: [squid-users] Getting error Too few
 basicauthenticator
 processes are running
 
 --- Chris Robertson [EMAIL PROTECTED] wrote:
 
   -Original Message-
   From: ads squid [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, November 09, 2005 3:42 AM
   To: squid-users@squid-cache.org
   Subject: [squid-users] Getting error Too few
  basicauthenticator
   processes are running
   
   
   Hi,
   I am trying to configure squid version
 squid-2.5.STABLE12 as 
   follows :
   
   [EMAIL PROTECTED] squid-2.5.STABLE12]#
 /usr/local/squid/sbin/squid 
   -NCd1
   
   
   I am getting following error 
   
   2005/11/09 18:03:40| Accepting HTTP connections
 at 0.0.0.0, port 
   3128, FD 15.
   2005/11/09 18:03:40| WCCP Disabled.
   2005/11/09 18:03:40| Ready to serve requests.
   2005/11/09 18:03:41| WARNING: basicauthenticator
  #1
   (FD 6) exited
   2005/11/09 18:03:41| WARNING: basicauthenticator
  #2
   (FD 7) exited
   2005/11/09 18:03:41| WARNING: basicauthenticator
  #3
   (FD 8) exited
   2005/11/09 18:03:41| Too few basicauthenticator
 processes are 
   running
   FATAL: The basicauthenticator helpers are
 crashing
  too
   rapidly, need help!
   
   Aborted
   
   
   
   I have configured squid with minimum options as
   follows:
   [EMAIL PROTECTED] squid-2.5.STABLE12]# ./configure
  
 

--enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,SASL,MSNT
   
   .
   
   Please help me to solve the problem.
   I want to use basic authentication.
   
   Thanks for support.
   
  
  What does your auth_param line look like?
  
  Chris
  
 
 It looks like as following :
 
 
 auth_param basic program
 /usr/local/squid/libexec/ncsa_auth
 /usr/local/squid/etc/passwd
 ###
 
 Thanks for support.
 
 
 
   
 
 






[squid-users] How to make squid work for access.log

2005-11-10 Thread suresh kumar
Hi all,
I am a new member of this group and new to squid as well. My
machine's IP is 192.168.10.172 and my gateway's IP is
192.168.10.200. Can I install squid on my machine and cache
all websites (seeing them in the access log), or must squid
be installed on the gateway machine alone? I have tried
installing and starting squid on my machine, but nothing
appears in the access log even though squid is running, and
the squid configuration looks correct. What should I do to
get entries in the access log using squid? The http_port I
am using for squid is 3128. Is anything additional needed to
cache web requests going to port 80 with squid? If anybody
knows, kindly assist me.

Suresh Kumar



RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Serassio Guido

Hi,

At 19.53 10/11/2005, Chris Robertson wrote:

  0 0 * * * /etc/webmin/webalizer/webalizer.pl
 /cache_log/access.log

 What is the content of webalizer.pl ?

 Regards

 Guido



Does it matter? It only runs once per day (at midnight).


It's the only custom script related to squid present in the crontab, so 
why not check it while squid is still doing unexpected things?  It's 
half a minute's work... 


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



RE: AW: [squid-users] Squid unreachable every hour and 6 minutes.

2005-11-10 Thread Gix, Lilian (CI/OSR) *
Hello,

Webalizer is software that creates statistics from squid log files.

But even when I disable it, I don't see any difference; the restarts
continue.

L.G.


-Original Message-
From: Serassio Guido [mailto:[EMAIL PROTECTED] 
Sent: Freitag, 11. November 2005 08:36
To: Chris Robertson; squid-users@squid-cache.org
Subject: RE: AW: [squid-users] Squid unreachable every hour and 6
minutes.

Hi,

At 19.53 10/11/2005, Chris Robertson wrote:
   0 0 * * * /etc/webmin/webalizer/webalizer.pl
  /cache_log/access.log
 
  What is the content of webalizer.pl ?
 
  Regards
 
  Guido
 
 

Does it matter? It only runs once per day (at midnight).

It's the only custom script related to squid present in the crontab, so 
why not check it while squid is still doing unexpected things?  It's 
half a minute's work... 

Regards

Guido


