Re: [squid-users] Unable to Compile Source After Applying ntlm_auth_popups.patch

2004-02-12 Thread Henrik Nordstrom
On Wed, 11 Feb 2004, Jim Richey wrote:

> After installing squid-2.5.STABLE4-ntlm_auth_popups.patch  I am no 
> longer able to compile the source code.

Please download the patch again. The patch was updated yesterday, as it was 
missing a large part of the changes. All changes in src/auth/ntlm/ were 
missing from the patch file.

The correct file size of the patch is 63653 bytes.

Regards
Henrik



[squid-users] Re: Is it possible to handle 200reqs/s?

2004-02-12 Thread Johnston Orr
Which benchmarking tool are you using?  Web Polygraph?
And what sort of network?  100M Ethernet?

Johnston

> On Wed, 11 Feb 2004 16:02:16 +0200, Andriy Korud wrote:
>
> > Hi,
> > I wonder if Squid can handle load on single machine (with traffic approx
> > 15Mbps)? If anybody achieved that, can you please share on which
> > OS/hardware that was done?
> > I have Xeon 2.8/1G RAM, 2x15k SCSI disks dedicated for chache (OS will
be
> > Linux or FreeBSD) - so, what can I expect from Squid on such hardware?
>
> At least 400/s I'd say. It depends on your disk cache size and disk I/O.
> Benchmarking a small Pentium 3 with a single small HTML file does ~900
> requests per second here; the same machine in production does ~120/second at
> around 30% CPU.
>
> --
> Gabriel Wicke




Re: [squid-users] squid, forwarding some specific requests to another proxy

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Shahriar Mokhtari wrote:

> Your assumption is right. Any HTTP request goes through my cache server 
> running Squid, which I control. The ISP is using HTTP interception. The 
> problem is that I am not sure exactly which sites the ISP filters, so I 
> wonder if I can send the filtered HTTP requests to another proxy. The way 
> Squid would understand that a page is filtered is from the ISP's output 
> (what Squid receives for an HTTP request). The ISP generates exactly the 
> same message for any filtered page.

You may be able to achieve something along these lines by configuring
Squid to use a non-ICP parent (no-query no-cache-digests
no-netdb-exchanges cache_peer options) and "prefer_direct on".

It is not 100% perfect, but will work most of the time.
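As a rough sketch in squid.conf terms (the parent host and port are placeholders, and the option spellings should be checked against the squid.conf.default shipped with your version):

```
# Hypothetical upstream proxy; replace host/port with your real parent.
cache_peer upstream.example.com parent 3128 0 no-query no-digest no-netdb-exchange
prefer_direct on
```

The idea is that Squid tries to go direct first and falls back to the parent when the direct attempt fails; as noted above, this is not 100% perfect.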

Regards
Henrik



RE: [squid-users] Is it possible to handle 200reqs/s?

2004-02-12 Thread Chris Wilcox
> To give you an idea:
> Pentium III - 1 GHz, 1.5 GB RAM, RedHat 7.3, about 4000 clients
> accessing the server.
>
> Normally we have a 3 Mbps line for our users and we handle 160 req/sec,
> but we once gave the users 5 Mbps and we handled 220 req/sec.
> So I'm not sure what the limit of this machine is.
> BTW, before optimizing Squid we could only handle 60 req/sec.
> Optimizing is necessary; the defaults of Squid aren't really
> good for big loads.
  How did you achieve your Squid optimization?

  M.
I'm interested in the same question!  How did you optimise Squid to get such 
an improvement?

Regards,

Chris

_
Tired of 56k? Get a FREE BT Broadband connection 
http://www.msn.co.uk/specials/btbroadband



Re: [squid-users] Massive problems with https connections to Domino Server (long) -partly solution

2004-02-12 Thread Rainer Traut
Hi,
thanks to both of you, Henrik and vda(?),
for being so patient with me. ;)
[EMAIL PROTECTED] wrote:

- you can block other programs like icq.
The only way of really blocking things like ICQ I can think of is
by changing DNS resolution for those hosts, simply done on the proxy
server and not for the whole network.


Doable with iptables (block by port#)
Not that easy...
You can configure ICQ to use nearly any port for connecting to
their login servers, and ICQ will try them all out for you... ;)
So if you have any open port through your firewall, chances are
that ICQ works.
What can you do against someone plugging a preconfigured laptop
into your intranet which will NOT ask Novell about anything
before going direct?
That's right, sure.
But we usually do not allow anyone or any IP address to go directly.
In this case we allowed it, to test with and without the proxy.
OK, here is what we did so that we cannot reproduce the error anymore.
The images of our application are loaded by JavaScript and switched from 
visible to invisible and back again.
But there seemed to be a mistake so that every image was requested again 
and again by the browser though it should not have been.
Just the navigator part had about 50 images loaded on every click.
We reduced this dramatically so that I cannot reproduce this behaviour
anymore. I know this does not explain why I could DOS the server, but it 
works now... The only explanation I have is that the traffic caused by the
client was simply too high?!

Rainer



RE: [squid-users] Is it possible to handle 200reqs/s?

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Chris Wilcox wrote:

> I'm interested in the same question!  How did you optimise Squid to get such 
> an improvement?

Things one needs to look into:

* Disk subsystem. Use more than one drive and one of the async cache_dir 
types. One cache_dir per physical drive, maybe more if using diskd.

* If using diskd, remember to carefully read the Squid FAQ on how to 
configure the OS to support diskd.

* Number of file descriptors. The default of 1024 is not sufficient for 
high loads.

* Total number of sockets, file handles etc allowed to be open in the 
system, and per-process limits of the same.

* SYN backlog size in the OS settings. Especially if you have WAN or 
Dialup users connecting.

* Unbound TCP port range available for outgoing connections. Some systems 
default to a range of only 4000 ports, which quickly runs out when 
approaching 150-200 req/s.

* Sufficient amount of memory, and in some cases OS tuning to allow for
large processes. Some OSes also require swap to be disabled to 
prevent swapping even if there is sufficient memory.

Then monitor the system and tweak things until you see desired results.
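A few of these limits can be inspected from a shell before tuning; the /proc paths below are Linux-specific assumptions, and other OSes expose the same settings differently:

```shell
# Per-process file descriptor limit; the 1024 default is too low for high loads.
ulimit -n

# Local port range available for outgoing connections (Linux).
cat /proc/sys/net/ipv4/ip_local_port_range 2>/dev/null

# SYN backlog size (Linux).
cat /proc/sys/net/ipv4/tcp_max_syn_backlog 2>/dev/null
```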

Regards
Henrik



[squid-users] windows update problems

2004-02-12 Thread Emilio Salgari
Hi!
In my squid.conf I have set
acl windowsupdate dstdomain .windowsupdate.microsoft.com
http_access allow our_network1 windowsupdate
http_access allow our_network2 windowsupdate
no_cache deny windowsupdate
In this way all users from our_network1 and our_network2 can access Windows 
Update, but when it tries to look for new updates it ends up saying that 
there has been an error.

Do any of you update your Windows machines regularly through Squid?

Any hint?

Thanks in advance!

_
New MSN Messenger with backgrounds and games! http://messenger.msn.it/ Try 
it now!



[squid-users] Squid optimize settings

2004-02-12 Thread Peter van der Does
Hi,

It's fairly easy to optimize:

We recompiled Squid to handle 8192 file handles; we ran into problems
there after a while.

Settings in squid.conf
dns_children 32
cache_dir aufs /var/cache/cacheA 1000 16 256
cache_mem 450 MB

Now, use as much cache_mem as you can; that really makes a difference.
Don't use all your memory for caching, though. I remember reading that you
should use half of your free memory, but I can't remember on what site I
read that.

We're using a very small cache_dir as we also have a second cache on
the outside of our firewall. That is where we have our major
cache_dirs:
cache_dir aufs /cache/cacheA 16000 32 256
cache_dir aufs /cache/cacheB 16000 32 256
cache_dir aufs /var/cache/cacheC 14000 32 256

Our filesystem is ext3; as far as I've read it's better to use ReiserFS,
although people differ on that as well.
I'm sure there are more things we could do, but nobody is complaining
anymore and the effort of optimizing the last few bits and bytes isn't
worth it.
Besides, like I said earlier, it seems our line speed is holding us
back more than Squid is at the moment.

In follow-up to Henrik's mail:
We used diskd at first, but after switching to aufs the system became
more stable.
We are not running the cache on a separate drive, but that would surely
make a difference. The first implementation of the Linux system could
have been better; if we were to set up a new Squid machine we would
make that change as well.

Greetings
Peter


RE: [squid-users] windows update problems

2004-02-12 Thread Elsen Marc

 
> 
> Hi!
> In my squid.conf I have set
> 
> acl windowsupdate dstdomain .windowsupdate.microsoft.com
> http_access allow our_network1 windowsupdate
> http_access allow our_network2 windowsupdate
> no_cache deny windowsupdate
> 
> In this way all users from our_network1 and our_network2 can 
> access Windows Update, but when it tries to look for new updates 
> it ends up saying that there has been an error.
> 
> Does anyone of you update his win machine regularly through squid?
> 
> Any hint?

  Checkout

  http://www.squid-cache.org/mail-archive/squid-users/200312/0109.html

  and subsequent thread. Maybe informative for you.
 
  M.

> 
> Thanks in advance!


[squid-users] R: [squid-users] how to for activate transparent

2004-02-12 Thread Net Mail
So, to activate the transparent function, I must set:

http_port 8080
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy  on
httpd_accel_uses_host_header on

Is that correct?

-
Andrea S.
IT :: El.Mo S.p.A.


-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 11 February 2004, 16:25
To: Net Mail; [EMAIL PROTECTED]
Subject: RE: [squid-users] how to for activate transparent



 : [squid-users] how to for activate transparent
>
>
> Hi,
> I don't remember how to activate the transparent option of the proxy:
> httpd_accel_uses_host_header off, or on??

   http://www.squid-cache.org/Doc/FAQ/FAQ-17.html

   M.



[squid-users] Blacklist

2004-02-12 Thread Yemi Fowe

Hello all,
I would appreciate it if someone could give me a hint on how I
can block my clients from having access to some
spam-mail sites.
Thanks

--Yemi

__
Do you Yahoo!?
Yahoo! Finance: Get your refund fast by filing online.
http://taxes.yahoo.com/filing.html


RE: [squid-users] Blacklist

2004-02-12 Thread Chris Wilcox
www.squidguard.org
www.dansguardian.org
You can also do some filtering with Squid itself, using regexps on 
URLs etc., and you can ban URLs/domains, but I've never done this personally.
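A minimal sketch of doing it in Squid itself (the acl name and domains here are made-up placeholders; the deny rule must come before your allow rules):

```
acl spamsites dstdomain .example-spam.com .example-ads.net
http_access deny spamsites
```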

hth

Regards,

Chris

Hello all,
I would appreciate it if someone could give me a hint on how I
can block my clients from having access to some
spam-mail sites.
Thanks
--Yemi

_
Stay in touch with absent friends - get MSN Messenger 
http://www.msn.co.uk/messenger



Re: [squid-users] Massive problems with https connections to Domino Server (long) -partly solution

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Rainer Traut wrote:

> Ok, here is what we did so that we cannot reproduce the error anymore.
> The images of our application are loaded by javascript and switched from 
> visible to invisible and back again.

OK, the standard onmouseover thing.

> But there seemed to be a mistake so that every image was requested again 
> and again by the browser though it should not.

Maybe the responses from your server are not cacheable, or similar.

> Just the navigator part had about 50 images loaded on every click.
> We reduced this dramatically so that I cannot reproduce this behaviour
> anymore. I know this does not explain why I could DOS the server but it 
> works now...

Maybe your browser is configured to allow very many connections. If so, 
such a storm of requests will cause a lot of connections to be opened, 
and with persistent connections enabled on the server these connections 
will stay open for a long time.

> Only explanation I have is traffic caused by the client was simply too
> high?!

If the client is working properly then there should only be 2 connections 
per client.

The default setting is to only open 2 connections, but if you have
installed a download accelerator or similar software which reconfigured
these browser settings, or if your browser is buggy, then there may
be very many connections initiated, up to one per concurrently 
requested image.

Regards
Henrik



Re: [squid-users] R: [squid-users] how to for activate transparent

2004-02-12 Thread Henrik Nordstrom
And somehow intercept the traffic.

Regards
Henrik

On Thu, 12 Feb 2004, Net Mail wrote:

> for activate the transparent function therefore I must set:
> 
> http_port 8080
> httpd_accel_host virtual
> httpd_accel_port 80
> httpd_accel_with_proxy  on
> httpd_accel_uses_host_header on
> 
> just ?
> 
> -
> Andrea S.
> IT :: El.Mo S.p.A.
> 
> 
> -Original Message-
> From: Elsen Marc [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, 11 February 2004, 16:25
> To: Net Mail; [EMAIL PROTECTED]
> Subject: RE: [squid-users] how to for activate transparent
> 
> 
> 
>  : [squid-users] how to for activate transparent
> >
> >
> > hi
> > i don't remember how to for activate the option transparent of proxy:
> > httpd_accel_uses_host_header off --ON??
> 
>http://www.squid-cache.org/Doc/FAQ/FAQ-17.html
> 
>M.
> 



Re: [squid-users] Squid optimize settings

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Peter van der Does wrote:

> Now use as much cache_mem as you can, that really makes a difference.

The general recommendation is not to use very much cache_mem, unless
you are running a reverse proxy/accelerator. All benchmarks
indicate it only gives marginal benefits in a normal Internet proxy, 
and the memory is usually better spent on being able to have a large 
cache.

See the Squid FAQ on memory usage for details.

> Don't use all your memory for caching. I remember reading you should use
> half of your free memory but I can't remember on what site I read that.

The general recommendation is to have twice the amount of physical memory 
compared to what the rule of thumb says Squid will use.

> Our filesystem is EXT3, as far as I read it's better to use ReiserFS
> altho people differ on that as well.

In all results I have seen, ReiserFS wins on speed, but not by a very
large margin.

> In follow-up to Henrik's mail:
> We used diskd at first but afer switchting to aufs the system became
> more stable.

To use diskd you must configure the system properly as per the 
requirements for using diskd (see the Squid FAQ). If not, it will be very 
unstable.

> We are not running the cache on a separate drive but that would surely
> make a difference.

Since you only have a very small local cache and a lot of memory I would
recommend you to run with no disk cache at all. This makes a huge
difference in performance. See the null cache_dir type.

> The first implementation of the linux system could have been better, if
> we would have set up a new squid machine we would make that change as
> well.

It is always good to select the proper hardware for the job. Most people 
setting up their first Squid go with hardware suitable for a normal 
file server, which has somewhat different requirements than Squid.

For a high performance Squid you want

 * A single fast CPU. SMP is not of any use for Squid.
 * Several hard drives. Seek time is the first bottleneck you will run 
into, and the most cost-effective way of optimizing seek time is to add 
more hard drives (two drives seek twice as fast as one).
 * No RAID for the cache drives. Certainly not RAID5. But with a good RAID
controller there is no problem from the RAID until you reach around
150-200 req/s on a 5-drive RAID5.
 * Plenty of memory, as per the memory usage guidelines in the FAQ.

Note: There is an upper limit on how large a Squid you can build on most 
hardware. It is not practical to plan a Squid where the Squid memory usage 
will go above 2 GB. But at the same time there is no noticeable benefit 
in cache hit ratio from storing more than 1 week's worth of content, so 
this limitation isn't really a problem, assuming one is willing to accept 
that the hard drives may be too large and cannot be fully used.

Regards
Henrik



Re: [squid-users] Squid optimize settings

2004-02-12 Thread Peter van der Does
Is it enough to change:
cache_dir aufs /var/cache/cacheA 1000 16 256
 to
cache_dir null /tmp

to make that happen?

The FAQ talks about just proxying with no caching at all, but we would like
to keep the memory caching.

Greets
Peter



>>> Henrik Nordstrom <[EMAIL PROTECTED]> 12-02-2004 13:43:36 >>>
> We are not running the cache on a separate drive but that would
surely
> make a difference.

Since you only have a very small local cache and a lot of memory I
would
recommend you to run with no disk cache at all. This makes a huge
difference in performance. See the null cache_dir type.





[squid-users] transparent to implicit

2004-02-12 Thread Ted Kaczmarek
I need a transparent setup for the transition, but I also need to be able to
filter SSL URLs with squidGuard.

Any thoughts on this?

Ted

 



Re: [squid-users] Squid optimize settings

2004-02-12 Thread Peter van der Does
I was referring to that, as you can read in the FAQ:

4.20 Can I make Squid proxy only, without caching anything? 

Sure, there are a few things you can do. 

You can use the no_cache access list to make Squid never cache any
response: 

acl all src 0/0
no_cache deny all

"no_cache deny all" means not to cache anything, but does that
include or exclude memory caching?

Peter
>>> unixware <[EMAIL PROTECTED]> 12-02-2004 15:26:58 >>>

See this link
http://www.squid-cache.org/Doc/FAQ/FAQ-4.html#ss4.20 

Regards





--- Peter van der Does <[EMAIL PROTECTED]>
wrote:
> Is it enough to change:
> cache_dir aufs /var/cache/cacheA 1000 16 256
>  to
> cache_dir null /tmp
> 
> to make that happen?
> 
> The FAQ talks about just proxy and no caching at
> all, but we would like
> to keep the mem caching.
> 
> Greets
> Peter
> 
> 
> 
> >>> Henrik Nordstrom <[EMAIL PROTECTED]>
> 12-02-2004 13:43:36 >>>
> > We are not running the cache on a separate drive
> but that would
> surely
> > make a difference.
> 
> Since you only have a very small local cache and a
> lot of memory I
> would
> recommend you to run with no disk cache at all. This
> makes a huge
> difference in performance. See the null cache_dir
> type.
> 
> 




[squid-users] Problem with ACLs and access

2004-02-12 Thread mclinden
First, I've read the FAQ, searched this list, and tried various 
incantations of the configuration below, but it simply isn't doing what I 
expect. What I want to do is to allow access from two networks:

172.16.0.0/22 (split subnet)
172.26.9.75-172.26.9.100/32

Deny access to 172.16.19.246.
Deny access to everyone else.

my acls are:

acl all src 0.0.0.0/0.0.0.0
acl nw1 src 172.16.0.0/22
acl nw2 src 172.26.9.75-172.26.9.100/32
acl block src 172.16.19.246/32

and

http_access deny block
http_access allow nw1
http_access allow nw2
http_access deny all

But instead of having the desired effect, I seem to be blocking all 
access. This is squid-2.5-STABLE4.

Thanks in advance.

Sean McLinden
Allegheny County Health Department


Re: [squid-users] Squid optimize settings

2004-02-12 Thread unixware

See this link
http://www.squid-cache.org/Doc/FAQ/FAQ-4.html#ss4.20

Regards





--- Peter van der Does <[EMAIL PROTECTED]>
wrote:
> Is it enough to change:
> cache_dir aufs /var/cache/cacheA 1000 16 256
>  to
> cache_dir null /tmp
> 
> to make that happen?
> 
> The FAQ talks about just proxy and no caching at
> all, but we would like
> to keep the mem caching.
> 
> Greets
> Peter
> 
> 
> 
> >>> Henrik Nordstrom <[EMAIL PROTECTED]>
> 12-02-2004 13:43:36 >>>
> > We are not running the cache on a separate drive
> but that would
> surely
> > make a difference.
> 
> Since you only have a very small local cache and a
> lot of memory I
> would
> recommend you to run with no disk cache at all. This
> makes a huge
> difference in performance. See the null cache_dir
> type.
> 
> 




[squid-users] need help debugging

2004-02-12 Thread JOHNSON DAVID R
I want to debug my Squid process so I can find out what might be causing the
"FATAL: Segment Fault received... dying" error.
What level can I set debug_options to so that I get this info without my
cache.log filling up in under 15 minutes?



David Johnson | Network Administrator |
Hampton University | Hampton, VA | 23669 |
office 757.728.6528 | fax 757.727.5438
mailto:[EMAIL PROTECTED]




Re: [squid-users] need help debugging

2004-02-12 Thread Duane Wessels



On Thu, 12 Feb 2004, JOHNSON DAVID R wrote:

> I want to debug my squid process so I can find out what might be causing the
> FATAL: Segment Fault received... dying error.
> What level can I set the debug_options to so that I get this info without my
> cache.log filling up in under 15 minutes?

Changing debug_options probably won't help.  The best way to debug
this is to make sure Squid can leave a core dump file and then use
gdb to get a stack trace.  See
http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.19 and/or Chapter
16 of Squid: The Definitive Guide.
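A sketch of the procedure (the install paths are assumptions; adjust them to your layout, and see the FAQ entry above for the details):

```
# Allow core files in the shell that starts Squid
ulimit -c unlimited

# In squid.conf, point cores at a writable directory:
#   coredump_dir /usr/local/squid/var/cache

# After a crash, load the core into gdb and get a stack trace:
gdb /usr/local/squid/sbin/squid /usr/local/squid/var/cache/core
(gdb) backtrace
```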

Duane W.


Re: [squid-users] squid, forwarding some specific requests to another proxy

2004-02-12 Thread Duane Wessels
> Your assumption is right. Any HTTP request goes through my cache server
> running Squid, which I control. The ISP is using HTTP interception. The
> problem is that I am not sure exactly which sites the ISP filters, so I
> wonder if I can send the filtered HTTP requests to another proxy. The way
> Squid would understand that a page is filtered is from the ISP's output
> (what Squid receives for an HTTP request). The ISP generates exactly the
> same message for any filtered page.

I see.  You want Squid to automatically retry its request after
getting an error from the intercepting proxy.  Then Henrik's
suggestion to use 'prefer_direct on' is probably what you want.

Duane W.



Re: [squid-users] Problem with ACLs and access

2004-02-12 Thread mclinden
I see the logic here, but it makes no difference. All users on 172.16.x 
are still being denied, even after making the recommended change.

Sean






"Muthukumar" <[EMAIL PROTECTED]>
02/12/2004 01:04 PM
 
To: <[EMAIL PROTECTED]>
cc: 
Subject:Re: [squid-users] Problem with ACLs and access



> acls
> acl all src 0.0.0.0/0.0.0.0
> acl nw1 src 172.16.0.0/22
> acl nw2 src 172.26.9.75-172.26.9.100/32
> acl block src 172.16.19.246/32
> 
> and
> 
> http_access deny block
> http_access allow nw1
> http_access allow nw2
> http_access deny all
>>

Ok.
 http_access allow nw1 !block
 http_access allow nw2
 http_access deny all

Check this. You will get some other responses.


Regards,
Muthukumar.
India: 0-91-94431-01756




RE: [squid-users] need help debugging

2004-02-12 Thread JOHNSON DAVID R
Can you force a Squid crash to test and see if a dump file is generated?

David Johnson | Network Administrator |
Hampton University | Hampton, VA | 23669 |
office 757.728.6528 | fax 757.727.5438
mailto:[EMAIL PROTECTED]


-Original Message-
From: JOHNSON DAVID R [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 11:53 AM
Cc: [EMAIL PROTECTED]
Subject: [squid-users] need help debugging


I want to debug my Squid process so I can find out what might be causing the
"FATAL: Segment Fault received... dying" error.
What level can I set debug_options to so that I get this info without my
cache.log filling up in under 15 minutes?



David Johnson | Network Administrator |
Hampton University | Hampton, VA | 23669 |
office 757.728.6528 | fax 757.727.5438
mailto:[EMAIL PROTECTED]



Re: [squid-users] Squid optimize settings

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Peter van der Does wrote:

> Is it enough to change:
> cache_dir aufs /var/cache/cacheA 1000 16 256
>  to
> cache_dir null /tmp
> 
> to make that happen?

Provided your binary is built with support for the null cache_dir type.

> The FAQ talks about just proxy and no caching at all, but we would liek
> to keep the mem caching.

null was designed primarily for use with "no_cache deny all" type
configurations, but works just as well with caching enabled.

Regards
Henrik



Re: [squid-users] Squid optimize settings

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Peter van der Does wrote:

> no_cache deny all, means not to cache anything, but does that
> in/exclude memory caching??

It does.

But the null cache_dir type does not.


By using no_cache + null you get a Squid completely without cache.

By just using null you get a Squid which only caches in memory.

By using no_cache without null you get a Squid which does not cache but 
still requires a cache directory.
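As a squid.conf sketch of the three combinations (the paths are placeholders):

```
# No caching at all:
no_cache deny all
cache_dir null /tmp

# Memory-only caching:
cache_dir null /tmp

# No caching, but a cache directory is still required:
no_cache deny all
cache_dir ufs /usr/local/squid/var/cache 100 16 256
```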

Regards
Henrik



Re: [squid-users] transparent to implicit

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Ted Kaczmarek wrote:

> Need a transparent setup for transition, but also need to be able to
> filter ssl urls with squidguard. 

You can't filter SSL URLs, only domain names, and only if the browser is 
configured to use the proxy.

Without the browser being configured to use the proxy all that you have to 
work with is the destination IP address, which is not very interesting for 
filtering.

Regards
Henrik



Re: [squid-users] Problem with ACLs and access

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004 [EMAIL PROTECTED] wrote:

> http_access deny block
> http_access allow nw1
> http_access allow nw2
> http_access deny all

This is perfectly fine from what I can tell.

> But instead of having the desired effect, I seem to be blocking all 
> access. This is squid-2.5-STABLE4.

Make sure "squid -k parse" is happy and that there are no other rules 
before this which block the request.

In addition, what does access.log say?

Regards
Henrik



RE: [squid-users] need help debugging

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, JOHNSON DAVID R wrote:

> can you force a squid crash to test and see if a dump file is generated.

kill -ABRT `cat /usr/local/squid/var/logs/squid.pid`

Regards
Henrik



Re: [squid-users] Problem with ACLs and access

2004-02-12 Thread mclinden
For every access by every user I get TCP_DENIED.

Sean





Henrik Nordstrom <[EMAIL PROTECTED]>
02/12/2004 03:08 PM
 
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Subject:Re: [squid-users] Problem with ACLs and access


On Thu, 12 Feb 2004 [EMAIL PROTECTED] wrote:

> http_access deny block
> http_access allow nw1
> http_access allow nw2
> http_access deny all

This is perfectly fine from what I can tell.

> But instead of having the desired effect, I seem to be blocking all 
> access. This is squid-2.5-STABLE4.

Make sure "squid -k parse" is happy and that there are no other rules 
before this which block the request.

In addition, what does access.log say?

Regards
Henrik





[squid-users] Time limits ?

2004-02-12 Thread Dominik Jais
Hello there, 
I'm new on the list and I've got a little problem.
We have times with high traffic, say from 17:00 to 21:00.
I want the users during that time to be online for 30 minutes.
The rest of the time they can surf until dawn. 

Is there a possibility in Squid + squidGuard to make this work?
Even if authentication is done by NCSA. 
Thanks
D. Jais



[squid-users] CONNECT method(s)

2004-02-12 Thread trainier
1076134181.846148 .kal.kalsec.com TCP_MISS/200 3551 
CONNECT ad.doubleclick.net:443 - DIRECT/216.73.87.22 -

How do I get around this problem? 
That request should've been denied; it seems it was allowed because the 
requesting agent is using the CONNECT method. 
Is there anything I can do about this?

Tim


Re: [squid-users] Problem with ACLs and access

2004-02-12 Thread Henrik Nordstrom
And does the IP address of the user, as logged in access.log, make sense 
for your ACLs?

On Thu, 12 Feb 2004 [EMAIL PROTECTED] wrote:

> For every access by every user I get TCP_DENIED.
> 
> Sean



Re: [squid-users] Time limits ?

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004, Dominik Jais wrote:

> I want the users during that time to be online for 30 minutes. At the
> rest they can surf until dawn.
> 
> Is there a possibilty in squid + squidguard to get this work.

No need for squidguard for this.

See the time acl in squid.conf.
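As a sketch (the acl name is made up; check squid.conf.default for the exact time-acl syntax):

```
# Match the busy period every day; day letters (S M T W H F A) may be
# prepended to restrict it to certain weekdays.
acl peakhours time 17:00-21:00
```

The time acl gates access by clock time; limiting each user to 30 minutes within that window is session accounting, which the acl by itself does not do.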

> Even if authentification is done by ncsa. 

Yes.

Regards
Henrik



Re: [squid-users] CONNECT method(s)

2004-02-12 Thread Henrik Nordstrom
On Thu, 12 Feb 2004 [EMAIL PROTECTED] wrote:

> 1076134181.846148 .kal.kalsec.com TCP_MISS/200 3551 
> CONNECT ad.doubleclick.net:443 - DIRECT/216.73.87.22 -
> 
> How do I get around this problem? 
> That request should've been denied, it seems it was allowed because the 
> requesting agent is using the CONNECT method. 

Then you have http_access rules saying this should be allowed.

> Is there anything I can do about this?

Yes, deny the request in http_access.

If unsure post your http_access rules and we will try to help you out.

Squid FAQ Chapter 10 Access Controls is also helpful.
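As an illustration, the usual pattern from squid.conf.default restricts CONNECT to known SSL ports, and a dstdomain acl can then deny the ad hosts outright (the acl names here are conventional placeholders, not taken from the poster's config):

```
acl SSL_ports port 443
acl CONNECT method CONNECT
acl ads dstdomain .doubleclick.net

# Deny CONNECT to anything but the allowed SSL ports, then deny the ad domain.
http_access deny CONNECT !SSL_ports
http_access deny ads
```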

Regards
Henrik



RE: [squid-users] need help debugging

2004-02-12 Thread Duane Wessels



On Thu, 12 Feb 2004, JOHNSON DAVID R wrote:

> can you force a squid crash to test and see if a dump file is generated.

Yes, you can send it a signal manually, like this

% ps ax|grep squid
 5981 ??   IW   0:01.51 /usr/local/squid/sbin/squid -sD
13646 ??   R   02:55:03 (squid) -sD
% kill -SEGV 13646



RE: [squid-users] windows update problems

2004-02-12 Thread mwestern
You might also like http://www.glob.com.au/windowsupdate_cache/ - we are
successfully using it with no dramas...

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 8:57 PM
To: Emilio Salgari; [EMAIL PROTECTED]
Subject: RE: [squid-users] windows update problems



 
> 
> Hi!
> In my squid.conf I have set
> 
> acl windowsupdate dstdomain .windowsupdate.microsoft.com
> http_access allow our_network1 windowsupdate
> http_access allow our_network2 windowsupdate
> no_cache deny windowsupdate
> 
> In this way all users from our_network1 and our_network2 can 
> access Windows Update, but when it tries to look for new updates 
> it ends up saying that there has been an error.
> 
> Does anyone of you update his win machine regularly through squid?
> 
> Any hint?

  Checkout

  http://www.squid-cache.org/mail-archive/squid-users/200312/0109.html

  and subsequent thread. Maybe informative for you.
 
  M.

> 
> Thanks in advance!


[squid-users] Squid Performance Analysis

2004-02-12 Thread Merton Campbell Crockett
For several years, I've used Webalizer to generate periodic web reports
detailing Squid activity and the crufty, old NLANR scripts to generate
extremely plain weekly reports.  Both Webalizer and NLANR generate reports
from the client perspective.

I have a requirement to generate a different report.  One that answers the
question:  "What bandwidth would we have used had we not installed a Squid
proxy?"  If possible, this needs to be generated from the standard Squid
access log.

Is there something that analyzes the various "*_HIT" statuses in the log
and produces a "what might have been report"?  Does anyone know of any
tools that are not listed on the Squid Cache web site that would provide
this type of report?
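As a rough sketch, such a "what might have been" number can be pulled straight out of the native access.log with awk; the field positions assume the default log format, and the two sample entries below are fabricated for illustration:

```shell
# Fabricated sample of the native access.log format:
#   time elapsed client result_code/status bytes method URL ...
cat > access.log <<'EOF'
1076134181.846 148 10.0.0.1 TCP_HIT/200 5000 GET http://example.com/a - NONE/- text/html
1076134182.001 210 10.0.0.2 TCP_MISS/200 15000 GET http://example.com/b - DIRECT/1.2.3.4 text/html
EOF

# Field 4 is result_code/status and field 5 is the reply size in bytes.
# Bytes served from *_HIT entries approximate traffic that, without the
# proxy, would have been fetched from the origin servers again.
awk '$4 ~ /_HIT\// { hit += $5 } { total += $5 }
     END { printf "hit: %d of %d bytes (%.1f%% saved)\n",
           hit, total, (total ? 100 * hit / total : 0) }' access.log
# prints: hit: 5000 of 20000 bytes (25.0% saved)
```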

Merton Campbell Crockett

-- 
BEGIN:  vcard
VERSION:3.0
FN: Merton Campbell Crockett
ORG:General Dynamics Advanced Information Systems;
Intelligence and Exploitation Systems
N:  Crockett;Merton;Campbell
EMAIL;TYPE=internet:[EMAIL PROTECTED]
TEL;TYPE=work,voice,msg,pref:   +1(805)497-5045
TEL;TYPE=work,fax:  +1(805)497-5050
TEL;TYPE=cell,voice,msg:+1(805)377-6762
END:vcard



RE: [squid-users] Squid Performance Analysis

2004-02-12 Thread Jay Turner
> Is there something that analyzes the various "*_HIT" statuses in the log
> and produces a "what might have been report"?  Does anyone know of any
> tools that are not listed on the Squid Cache web site that would provide
> this type of report?

Your requirements sound like you are looking for a cache reporting tool.

Have you tried Calamaris?
It can provide information like the following:

Incoming TCP-requests by status

status                       request      %       Byte      %  sec  kB/sec
--------------------------- --------  -----  ---------  -----  ---  ------
HIT                          1488651  37.68   5481382K  21.96    0    9.74
  TCP_IMS_HIT                 486076  12.30    139571K   0.56    0    7.82
  TCP_REFRESH_HIT             413379  10.46   1626804K   6.52    0    4.87
  TCP_MEM_HIT                 280950   7.11    492567K   1.97    0   41.14
  TCP_HIT                     223217   5.65   3122269K  12.51    0   16.05
  TCP_NEGATIVE_HIT             85029   2.15    100170K   0.40    0   24.26
MISS                         2435997  61.65     19010M  77.99    2    3.53
  TCP_MISS                   2206700  55.85     18375M  75.39    2    3.50
  TCP_CLIENT_REFRESH_MISS     184121   4.66    369832K   1.48    0    5.17
  TCP_REFRESH_MISS             45138   1.14    279813K   1.12    1    4.22
  TCP_SWAPFAIL_MISS               38   0.00      19094   0.00    0    3.97
ERROR                          26514   0.67   11954009   0.05   70    0.01
  TCP_MISS                     22614   0.57   10625538   0.04   78    0.01
  TCP_REFRESH_MISS              2685   0.07          0   0.00   37    0.00
  NONE                           901   0.02    1140085   0.00    0   41.78
  TCP_DENIED                     159   0.00     182942   0.00    0   42.76
  TCP_CLIENT_REFRESH_MISS        155   0.00       5444   0.00    8    0.00
--------------------------- --------  -----  ---------  -----  ---  ------
Sum                          3951162             24374M           2    3.14

It is formatted more nicely via the web interface, though.

http://cord.de/tools/squid/calamaris/Welcome.html

Regards
Jay

 




[squid-users] Windows Media Player 9 reverts to basic authentication after NTLM failure?

2004-02-12 Thread mwestern
Hi All,
IE6 SP1, XP with all the latest winBLOWS updates. The latest security fix
has broken Windows Media Player NTLM auth.

I thought at first it was the problem mentioned in this thread:
http://www.squid-cache.org/mail-archive/squid-users/200402/0066.html

I've worked through that.  My wininet.dll is 6.0.2800.1400, and I've tried
making the ERR_CACHE_ACCESS_DENIED message larger than 1460 bytes,
substantially larger, as suggested by Henrik.

But it's not IE that is the problem, it's Windows Media Player 9.  It
takes 20 seconds to time out and throws up a dialog box that prompts for a
username and password; if I retype the password it works, using basic auth
which I have turned on for our Linux users.

Browsing actually works in WMP, but as soon as I click an audio link it
has the problem.

to test:

Start, Run, wmplayer, wait for the page.  Click "seal" (or anything else
that takes your fancy), select an audio link, and wait.

squid 2.5_Stable3

access_log:
1076636556.018    221 10.160.0.200 TCP_MISS/200 2799 GET http://windowsmedia.com/CAPS/ImagesContent/A39590F5-ACF0-4A63-A11E-8CF3F80C3AA9.jpg lonsdale\mwestern DIRECT/207.46.248.112 image/jpeg
1076636556.065    221 10.160.0.200 TCP_MISS/200 2393 GET http://windowsmedia.com/CAPS/ImagesContent/75214D0A-92A2-4E22-9952-1B62716423DD.jpg lonsdale\mwestern DIRECT/207.46.196.100 image/jpeg
1076636556.107    218 10.160.0.200 TCP_MISS/200 2078 GET http://windowsmedia.com/CAPS/ImagesContent/255D4381-F9BF-4FB5-B0B8-F9CCA5785E2D.jpg lonsdale\mwestern DIRECT/207.46.196.101 image/jpeg
1076636556.128    225 10.160.0.200 TCP_MISS/200 3067 GET http://windowsmedia.com/CAPS/ImagesContent/BD45D765-8C54-4990-9434-6B996184C8C7.jpg lonsdale\mwestern DIRECT/207.46.248.113 image/jpeg
1076636571.445    502 10.160.0.200 TCP_MISS/302 190 GET http://redir.windowsmedia.com/CT/WM3-en-us/7/146/S3/L5/d.htm? lonsdale\mwestern DIRECT/207.46.130.110 -
1076636572.541   1095 10.160.0.200 TCP_MISS/200 579 GET http://www.b2klovesyou.com/videos/B2K_BadaboomVidFull_300.asx lonsdale\mwestern DIRECT/64.14.39.199 video/x-ms-asf
1076636598.950     11 10.160.0.200 TCP_DENIED/407 1906 GET http://wm.sony.global.speedera.net/wm.sony.global/B2K/B2K_BadaboomVidFull_300.wmv - NONE/- text/html




I don't want to just upgrade to 2.5_STABLE4 *unless* I know that this is
fixed, at least not in a hurry.  I will eventually, but it's a
production server.

Regards
Matthew


[squid-users] RE: Windows Media Player 9 reverts to basic authentication after NTLM failure?

2004-02-12 Thread mwestern
Sorry to reply to my own post.  It may not be basic auth, because when the
box pops up a simple username and password will not work; it requires
domain\username plus password.  That is obviously not basic auth, but NTLM?



-Original Message-
From: Matthew Western,R&D Aust 
Sent: Friday, February 13, 2004 12:19 PM
To: [EMAIL PROTECTED]
Subject: Windows Media Player 9 reverts to basic authentication after
NTLM failure?




RE: [squid-users] Windows Media Player 9 reverts to basic authent ication after NTLM failure?

2004-02-12 Thread mwestern
 

MSKB has a hotfix.  

FIX: Windows Media Player 9 Series Prompts User for Credentials with NTLM
Authenticated Proxy
http://support.microsoft.com/default.aspx?scid=kb;en-us;816089
FIX: Windows Media Player 9 Series May Not Be Able to Connect Through an
Authenticated Proxy 2.0 Proxy Server
http://support.microsoft.com/default.aspx?scid=kb;en-us;830414

I HATE Microsoft.  HATE HATE HATE DOUBLE HATE, LOATHE ENTIRELY.  (Thankye,
Grinch.)

regards
Matthew




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, February 13, 2004 12:19 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Windows Media Player 9 reverts to basic
authentication after NTLM failure?




[squid-users] Run time errors

2004-02-12 Thread Deepa D
Hi,
   I am having a problem with squid.  When I visit a few websites like
www.mail.yahoo.com through the squid proxy, some run-time error popups
occur, whereas they don't appear when not using the proxy.
   I am currently using squid-2.5.STABLE4.
   Please let me know why this problem occurs and whether it can be
solved by some means.

  Regards and TIA,
 Deepa  


Yahoo! India Education Special: Study in the UK now.
Go to http://in.specials.yahoo.com/index1.html


[squid-users] Does Squid support content delivery ?

2004-02-12 Thread aiggno
Hi all,

Does Squid support content delivery like some Cisco Content Engines do?

Thanks and best regards,
Aiggno



RE: [squid-users] Squid Performance Analysis

2004-02-12 Thread Merton Campbell Crockett
On Fri, 13 Feb 2004, Jay Turner wrote:

> > Is there something that analyzes the various "*_HIT" statuses in the log
> > and produces a "what might have been report"?  Does anyone know of any
> > tools that are not listed on the Squid Cache web site that would provide
> > this type of report?
> 
> Your requirements sound like you are looking for a cache reporting tool.
> 
> Have you tried Calamaris?

No, I haven't.  The link on the Squid-Cache log analyzer page pointed to
the page put up when Europe was in an uproar over the EC permitting
corporations to patent software.  Thanks for the link to a valid page.

> http://cord.de/tools/squid/calamaris/Welcome.html

After asking the list, I went through the list again and spotted a package 
called Squeezer2.  The one thing going for it is that it reports the 
percentage of bandwidth.  Just the right thing for the "pointy-hair boss", 
Erbsenzaehler, etc.

Merton Campbell Crockett




Re: [squid-users] Run time errors

2004-02-12 Thread Durai
Hi,
  It would be useful if you told us exactly what run-time error you got.

Regs,
Durai.

- Original Message - 
From: "Deepa D" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, February 13, 2004 10:05 AM
Subject: [squid-users] Run time errors


> Hi,
>I am having a problem with squid. When I visit a
> few websites like www.mail.yahoo.com through the squid
> proxy, some run time error popups occur, whereas these
> don't appear when not using the proxy. 
>I am currently using squid-2.5.STABLE4.
>Plz let me know why this problem occurs and if it
> can be solved by some means.
> 
>   Regards and TIA,
>  Deepa  
> 
> 


---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.576 / Virus Database: 365 - Release Date: 1/30/2004


Re: [squid-users] squid, forwarding some specific requests to another proxy

2004-02-12 Thread Shahriar Mokhtari
Henrik Nordstrom wrote:

> On Thu, 12 Feb 2004, Shahriar Mokhtari wrote:
>
> > Your assumption is right. Any http request goes through my cache server
> > running squid which I control. The ISP is using HTTP interception. The
> > problem is that I am not sure exactly which sites the ISP filters, so I
> > wonder if I can send the filtered http requests to a proxy. The way my
> > squid can understand that a page is filtered is by looking at my ISP's
> > output (what squid receives for a http request): the ISP generates
> > exactly the same message for any filtered page.
>
> You may be able to achieve something along these lines by configuring
> Squid to use a non-ICP parent (no-query no-cache-digests
> no-netdb-exchanges cache_peer options) and "prefer_direct on".
> It is not 100% perfect, but will work most of the time.
>
> Regards
> Henrik

Thanks for the reply. I was wondering if I can set up another small
cache server that uses a parent (say one of ircache) and then set it to
be a sibling for my main cache server. This helps me to do whatever
modifications I need in the future without messing a lot with the main
cache server. Any particular advice?

Mokhtari
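
For reference, the non-ICP parent arrangement Henrik describes earlier in
this thread might look like this in squid.conf (the parent hostname and
ports are placeholders, not a recommendation):

```
# Non-ICP parent: never send it ICP queries or exchange digests;
# "prefer_direct on" means Squid only falls back to the parent
# when fetching the object directly fails.
cache_peer parent.example.com parent 3128 0 no-query no-cache-digests no-netdb-exchanges
prefer_direct on
```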



[squid-users] Why Rebuilding Storage is (DIRTY)

2004-02-12 Thread Siao Yuan Tan
Dear All,

I have managed to get squid 2.5 with all current patches running.  Everything
seems to be fine, except that in the cache.log file the "Rebuilding storage
in /home/.squid (DIRTY)" message is troubling me; does it mean anything?

What does DIRTY mean?  I have deleted all swap files in the directory and
run squid -z, and I still get the same thing.  If anyone knows anything, I
would appreciate your advice on this matter.

Thanks,
Siao Tan

2004/02/13 13:32:36| Starting Squid Cache version 2.5.STABLE4 for i686-pc-linux-gnu...
2004/02/13 13:32:36| Process ID 3111
2004/02/13 13:32:36| With 1024 file descriptors available
2004/02/13 13:32:36| DNS Socket created at 0.0.0.0, port 33077, FD 4
2004/02/13 13:32:36| Adding nameserver 165.21.83.88 from /etc/resolv.conf
2004/02/13 13:32:36| Adding nameserver 203.121.16.85 from /etc/resolv.conf
2004/02/13 13:32:36| Adding nameserver 165.21.100.88 from /etc/resolv.conf
2004/02/13 13:32:36| Adding nameserver 202.188.0.133 from /etc/resolv.conf
2004/02/13 13:32:36| Adding nameserver 203.121.16.120 from /etc/resolv.conf
2004/02/13 13:32:36| Adding nameserver 202.188.1.5 from /etc/resolv.conf
2004/02/13 13:32:36| Unlinkd pipe opened on FD 9
2004/02/13 13:32:36| Swap maxSize 1024000 KB, estimated 78769 objects
2004/02/13 13:32:36| Target number of buckets: 3938
2004/02/13 13:32:36| Using 8192 Store buckets
2004/02/13 13:32:36| Max Mem  size: 8192 KB
2004/02/13 13:32:36| Max Swap size: 1024000 KB
2004/02/13 13:32:36| Rebuilding storage in /home/.squid (DIRTY)
2004/02/13 13:32:36| Using Least Load store dir selection
2004/02/13 13:32:36| Current Directory is /home/.squid
2004/02/13 13:32:36| Loaded Icons.
2004/02/13 13:32:36| Accepting HTTP connections at 0.0.0.0, port 3128, FD 10.
2004/02/13 13:32:36| Accepting ICP messages at 0.0.0.0, port 3130, FD 11.
2004/02/13 13:32:36| Accepting SNMP messages on port 3401, FD 12.
2004/02/13 13:32:36| WCCP Disabled.
2004/02/13 13:32:36| Ready to serve requests.
2004/02/13 13:32:36| Done scanning /home/.squid swaplog (0 entries)
2004/02/13 13:32:36| Finished rebuilding storage from disk.
2004/02/13 13:32:36| 0 Entries scanned
2004/02/13 13:32:36| 0 Invalid entries.
2004/02/13 13:32:36| 0 With invalid flags.
2004/02/13 13:32:36| 0 Objects loaded.
2004/02/13 13:32:36| 0 Objects expired.
2004/02/13 13:32:36| 0 Objects cancelled.
2004/02/13 13:32:36| 0 Duplicate URLs purged.
2004/02/13 13:32:36| 0 Swapfile clashes avoided.
2004/02/13 13:32:36|   Took 0.4 seconds (   0.0 objects/sec).
2004/02/13 13:32:36| Beginning Validation Procedure
2004/02/13 13:32:36|   Completed Validation Procedure
2004/02/13 13:32:36|   Validated 0 Entries
2004/02/13 13:32:36|   store_swap_size = 0k
2004/02/13 13:32:37| storeLateRelease: released 0 objects



Re: [squid-users] squid, forwarding some specific requests to another proxy

2004-02-12 Thread Henrik Nordstrom
On Fri, 13 Feb 2004, Shahriar Mokhtari wrote:

> Thanks for the reply. I was wondering if I can set up another small
> cache server that uses a parent (say one of ircache) and then set it to
> be a sibling for my main cache server.

You can configure things in this manner, but not for doing what you ask 
for.

A sibling will not get any traffic unless it has clients of its own.
Requests are only forwarded to a sibling if the sibling reports that it
already has the requested object cached.

You can set it up as a parent however.
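
In squid.conf terms (hostnames and ports here are illustrative), the
difference is a single keyword, but the forwarding semantics differ as
described above:

```
# A parent can receive any request we choose to forward to it:
cache_peer parent.example.com  parent  3128 3130
# A sibling is only asked for objects it already claims to have cached:
cache_peer sibling.example.com sibling 3128 3130
```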

> This helps me to do whatever modification I need in future without
> messing a lot with the main cache server.

What is the problem with doing this on the main cache server?

Regards
Henrik



Re: [squid-users] Run time errors

2004-02-12 Thread Henrik Nordstrom
On Fri, 13 Feb 2004, Deepa D wrote:

>I am having a problem with squid. When I visit a
> few websites like www.mail.yahoo.com through the squid
> proxy, some run time error popups occur, whereas these
> don't appear when not using the proxy. 

What does the error say?

Regards
Henrik



Re: [squid-users] Windows Media Player 9 reverts to basic authentication after NTLM failure?

2004-02-12 Thread Henrik Nordstrom
On Fri, 13 Feb 2004 [EMAIL PROTECTED] wrote:

> I don't want to just upgrade to 2.5_Stable4 *unless* i know that this is
> fixed. at least not in a hurry.  i will eventually, but it's a
> production server.  

The release which includes the recently discussed NTLM bug fixes will be 
2.5.STABLE5.

But I think your problem is the media player, not Squid.

Regards
Henrik



Re: [squid-users] Why Rebuilding Storage is (DIRTY)

2004-02-12 Thread Henrik Nordstrom
On Fri, 13 Feb 2004, Siao Yuan Tan wrote:

> I have managed to get squid 2.5 with all current patches running.  Everything 
> seems to be fine except in cache.log file, the "Rebuilding storage 
> in /home/.squid (DIRTY)" is troubling me whether it means anything.

It means that the previous time you ran Squid you did not let it
terminate in a clean manner, and Squid needs to verify the consistency of
the cache a little harder while rebuilding the internal index of what is
cached.

> What does DIRTY mean?  I have deleted all swap files in the directory and
> run squid -z, and I still get the same thing.  If anyone knows anything, I
> would appreciate your advice on this matter.

Seeing this on a newly created cache directory is OK. Just means Squid 
will verify the consistency of the new cache directory a little harder.

Seeing this after what you think is a normal restart is not OK.
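
(For reference, a clean termination is normally requested through Squid's
signal interface rather than by killing the process, for example:

```
# Ask the running Squid to finish current requests and write out its
# swap.state index before exiting; a SIGKILL or machine crash is what
# leaves the index DIRTY on the next start.
squid -k shutdown
```

The exact invocation may vary with how your Squid was installed.)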

Regards
Henrik



RE: [squid-users] Why Rebuilding Storage is (DIRTY)

2004-02-12 Thread Siao Yuan Tan
Oh no, I am seeing this after numerous normal restarts.  What could be
wrong?

Please advise.

Thanks,
Siao Tan

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 13, 2004 3:30 PM
To: Siao Yuan Tan
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Why Rebuilding Storage is (DIRTY)

