RE: [squid-users] squidGuard 1.3.0 released

2007-11-06 Thread Paul Cocker
Someone care to explain the difference, or history, behind squidGuard
and squidGuard? :)


Paul Cocker

-Original Message-
From: Guido Serassio [mailto:[EMAIL PROTECTED] 
Sent: 05 November 2007 22:07
To: squid-users@squid-cache.org
Subject: [squid-users] squidGuard 1.3.0 released

We are pleased to announce the availability of the release 1.3.0 of
squidGuard.

squidGuard-1.3.0 is based on the original squidGuard-1.2.0 codebase, but
adds many publicly available enhancements and features developed in the
six years since squidGuard-1.2.0 was released; these have now been rolled
into this formal squidGuard-1.3.0 release. This version also adds native
Windows support using the MSYS+MinGW build environment.

This new release can be downloaded from the squidGuard Sourceforge
project:

http://sourceforge.net/project/showfiles.php?group_id=184120


The most important new additions in this squidGuard-1.3.0 release are:


   * Imported squidguard-sed.patch from the K12LTSP project. This allows
 squidGuard to rewrite the Google URL with the safe=active tag


   * Updated the redirector protocol to Squid 2.6 version


   * Imported netdirect-squidGuard-full.patch based on work of
 Chris Frey and Adam Gorski


   * Native Windows port using MSYS+MinGW environment


We openly welcome and encourage bug reports should you run into any
issues with the new release. Bug reports can be entered into the
squidGuard Bug Tracker at:
http://sourceforge.net/tracker/?group_id=184120&atid=907981


This squidGuard-1.3.0 software was brought to you by Guido Serassio and
Norbert Szasz, and is mainly based on many third-party contributions
made available over the years. Many thanks to all contributors who have
submitted new features.


This work is not related in any way to the so-called official
squidGuard project at the new www.squidguard.org.


Note: If there is interest in becoming an official sponsor of the
ongoing squidGuard maintenance or development efforts, please contact us
via the project forum at http://sourceforge.net/forum/?group_id=184120


Best regards
Guido Serassio & Norbert Szasz








RE: [squid-users] Optimal maximum cache size

2007-11-06 Thread Paul Cocker
I assume the in-memory index is in addition to the memory_cache? So if
you have a 100GB disk cache you would need 1GB RAM... but that would
only cover the index and so you would need more memory for squid itself
and the memory_cache? 


Paul Cocker

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: 05 November 2007 23:44
To: Paul Cocker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Optimal maximum cache size

 Is there such a thing as too much disk cache? Presumably squid has to 
 have some way of checking this cache, and at some point it takes 
 longer to look for a cached page than to serve it direct. At what 
 point do you hit that sort of problem, or is it so large no human mind
should worry?
 :)

 Paul
 IT Systems Admin

Disk cache is limited by access time and ironically RAM.

Squid holds an in-memory index of 10MB-ram per GB-disk. With large disk
caches this can fill RAM pretty fast, particularly if the cache is full
of small objects. Large objects use less index space, more disk.

Some with smaller systems hit the limit at 20-100GB, others in cache
farms reach TB.

As for the speed of lookup vs DIRECT. If anyone has stats, please let us
know.

Amos








RE: [squid-users] Domain & URL blacklists

2007-11-06 Thread Paul Cocker
Apologies for my ignorance, but what then does squidGuard add as I was
under the impression that filtering was its big job. Would I be right at
assuming then that squidGuard is faster at processing block lists?


Paul Cocker

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: 01 November 2007 22:09
To: Paul Cocker
Cc: jeff donovan; squid
Subject: RE: [squid-users] Domain & URL blacklists

 Just squid, it's running on a Windows box and I don't have the time 
 currently to figure out how to run cygwin and squidguard together, so 
 I'm looking simply to hook the most useful lists direct into squid and

 see how much it harms performance.


 Paul Cocker
 IT Systems Administrator

 -Original Message-
 From: jeff donovan [mailto:[EMAIL PROTECTED]
 Sent: 01 November 2007 17:29
 To: squid
Subject: Re: [squid-users] Domain & URL blacklists


 On Nov 1, 2007, at 10:23 AM, Paul Cocker wrote:

 My bad, in fact from further analysis it seems that the domain files 
 are the mysite.com listings and URLs are things like 
 mysite.com/something/?somethingelse.htm. Does the latter have any 
 relevance or use within Squid?


Squid can handle these by itself, with a regular squid -k reconfigure
after updating the files.

For the list of pure hostnames a dstdomain acl is best.
For the list of URI snippets a urlpath_regex acl, probably with -i, is
needed.

If the domain/IP file is a pruned version of the domains from the URL
entries, then the URL file may not be useful, as it's all caught by the
domain list. If they are different then yes, both have a use.

Amos
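
A minimal squid.conf sketch of the approach above (the file paths and acl
names are hypothetical; it simply follows the dstdomain / urlpath_regex
suggestion in this thread):

acl shalla_domains dstdomain "/etc/squid/blacklists/domains"
acl shalla_urls urlpath_regex -i "/etc/squid/blacklists/urls"
http_access deny shalla_domains
http_access deny shalla_urls

Place the deny lines before your general allow rules, and run
squid -k reconfigure after updating the files.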



 Paul Cocker
 IT Systems Administrator

 -Original Message-
 From: Paul Cocker [mailto:[EMAIL PROTECTED]
 Sent: 01 November 2007 13:23
 To: squid-users@squid-cache.org
 Subject: [squid-users] Domain & URL blacklists

 I am using elements of Shalla's blacklists to block content. However,

 they ship in two files, domains and URLs, the former being IP 
 addresses and the latter URLs. Since our squid proxy is running on 
 Windows I would need to experiment with cygwin to get SquidGuard 
 running, and that isn't something I have time for at the moment, so I

 am trying to plug in what I can without crippling performance (and 
 what is the likely performance impact?).

 Do I call both files via acl {aclname} dstdomain {filepath}, or 
 should

 IP lists be called using a different command?

 Paul Cocker
 IT Systems Administrator


 Hi Paul, are you using DansGuardian or SquidGuard? Or trying to do 
 this with just squid?

 -jeff















Re: [squid-users] Optimal maximum cache size

2007-11-06 Thread Amos Jeffries

Paul Cocker wrote:

I assume the in-memory index is in addition to the memory_cache? So if
you have a 100GB disk cache you would need 1GB RAM... but that would
only cover the index and so you would need more memory for squid itself
and the memory_cache? 



I believe so. Plus memory for all the ACLs and active connections etc.
Squid is unfortunately very RAM hungry and nobody has sponsored much in 
the way of memory optimisation in a long while.


Amos
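
As a rough sizing sketch of the 10MB-of-index-RAM-per-GB-of-disk rule of
thumb in this thread (figures are illustrative only; the real index size
depends on the mean object size):

# 100 GB of cache_dir implies roughly 100 x 10 MB = ~1 GB of RAM for the
# index alone; cache_mem, ACLs and per-connection buffers come on top.
cache_dir aufs /var/spool/squid 102400 16 256
cache_mem 512 MB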



Paul Cocker

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: 05 November 2007 23:44

To: Paul Cocker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Optimal maximum cache size

Is there such a thing as too much disk cache? Presumably squid has to 
have some way of checking this cache, and at some point it takes 
longer to look for a cached page than to serve it direct. At what 
point do you hit that sort of problem, or is it so large no human mind

should worry?

:)

Paul
IT Systems Admin


Disk cache is limited by access time and ironically RAM.

Squid holds an in-memory index of 10MB-ram per GB-disk. With large disk
caches this can fill RAM pretty fast, particularly if the cache is full
of small objects. Large objects use less index space more disk.

Some with smaller systems hit the limit at 20-100GB, others in cache
farms reach TB.

As for the speed of lookup vs DIRECT. If anyone has stats, please let us
know.

Amos










Re: [squid-users] Optimal maximum cache size

2007-11-06 Thread Tek Bahadur Limbu

Hi Amos,

Amos Jeffries wrote:

Is there such a thing as too much disk cache? Presumably squid has to
have some way of checking this cache, and at some point it takes longer
to look for a cached page than to serve it direct. At what point do you
hit that sort of problem, or is it so large no human mind should worry?
:)

Paul
IT Systems Admin


Disk cache is limited by access time and ironically RAM.

Squid holds an in-memory index of 10MB-ram per GB-disk. With large disk
caches this can fill RAM pretty fast, particularly if the cache is full of
small objects. Large objects use less index space more disk.

Some with smaller systems hit the limit at 20-100GB, others in cache farms
reach TB.

As for the speed of lookup vs DIRECT. If anyone has stats, please let us
know.


I can't understand under what circumstances the cache Lookup will be 
slower than DIRECT lookup unless one has a net connection faster than 
the disks!


For a 20 GB cache with 1175539 on-disk objects:

Median Service Times (seconds):  5 min    60 min:
HTTP Requests (All):   1.24267  1.38447
Cache Misses:  1.54242  1.71839
Cache Hits:0.00919  0.00865
Near Hits: 1.38447  1.62803
Not-Modified Replies:  0.00179  0.00091
DNS Lookups:   0.04237  0.04433
ICP Queries:   0.00102  0.00096

The cache Lookup is 170 times faster than DIRECT lookups!


MAYBE, if I use a bigger cache, say 100-300 GB, the results could be 
different. But I believe that running multiple Squid boxes with smaller 
caches (10-30 GB) is always better than running 1 single Squid box with 
a (100-300 GB) cache.


The benefits of running multiple smaller caches far outweigh running a 
single large cache.


But this is only my opinion.

From my guess and experience, to run a 300 GB cache, one needs about 6 
GB of memory! But I can't imagine how to manage a 300 GB cache if it 
gets corrupted!



Thanking you...




Amos







--

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np

http://teklimbu.wordpress.com


RE: [squid-users] squid3 WindowsUpdate failed

2007-11-06 Thread Jorge Bastos
Alex,
The only ACL i have in squid.conf is:

---
acl all_cache src 0.0.0.0/0.0.0.0
no_cache deny all_cache
---

I'm one of the people who's having this problem.
Now I'm using 3.0.PRE6 until this is fixed.



-Original Message-
From: Alex Rousskov [mailto:[EMAIL PROTECTED] 
Sent: segunda-feira, 5 de Novembro de 2007 16:31
To: Amos Jeffries
Cc: John Mok; squid-users@squid-cache.org
Subject: Re: [squid-users] squid3 WindowsUpdate failed

On Sun, 2007-11-04 at 19:30 +1300, Amos Jeffries wrote:
 I have just had the opportunity to do WU on a customer's box and
 managed to reproduce one of the possible WU failures.
 
 This one was using WinXP, and the old WindowsUpdate (NOT 
 MicrosoftUpdate, that remains untested). With squid configured to
 permit 
 client access to:
 
 # Windows Update / Microsoft Update
 #
 redir.metaservices.microsoft.com
 images.metaservices.microsoft.com
 c.microsoft.com
 windowsupdate.microsoft.com
 #
 # WinXP / Win2k
 .update.microsoft.com
 download.windowsupdate.com
 # Win Vista
 .download.windowsupdate.com
 # Win98
 wustat.windows.com
 crl.microsoft.com
 
 AND also CONNECT access to www.update.microsoft.com:443
 
 PROBLEM:
The client box detects a needed update,
then during the Download Updates phase it says ...failed! and
 stops.
 
 CAUSE:
 
 This was caused by a bug in squid reading the ACL:
download.windowsupdate.com
   ...
.download.windowsupdate.com
 
   - squid would detect that download.windowsupdate.com was a subdomain
 of .download.windowsupdate.com and .download.windowsupdate.com would be
 culled off the ACL as unneeded.
 
   - That culled bit held the wildcard letting v4.download.* and 
 www.download.* be retrieved later in the process.
 
   - BUT, specifying JUST .download.windowsupdate.com would cause 
 download.windowsupdate.com/fubar to FAIL under the same circumstances.
 
 during the WU process requests for application at 
 www.download.windowsupdate.com/fubar and K/Q updates at 
 v(3|4|5).download.windowsupdate.com/fubar2
 would result in a 403 and thus the FAIL.
 
 
 SOLUTION:
   Changing the wildcard match to an explicit form fixes this and WU 
 succeeds again.
 OR,
   Changing the wildcard to .windowsupdate.com also fixes the problem
 for this test.

Can other folks experiencing Windows Update troubles with Squid3 confirm
that their setup does not have the same ACL problem?

In general, if we do not find a way to get more information about the
Windows Update problem, we would have to assume it does not exist in
most environments and release Squid3 STABLE as is. If you want the
problem fixed before the stable Squid3 release, please help us reproduce
or debug the problem.

Thank you,

Alex.
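
To illustrate the SOLUTION quoted above (a sketch only, reusing the
hostnames from that report; the acl name is hypothetical): drop the
conflicting download.windowsupdate.com / .download.windowsupdate.com pair
and keep a single wildcard, e.g.

acl windowsupdate dstdomain .windowsupdate.com .update.microsoft.com
acl windowsupdate dstdomain windowsupdate.microsoft.com crl.microsoft.com wustat.windows.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com images.metaservices.microsoft.com c.microsoft.com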





Re: [squid-users] squid3 WindowsUpdate failed

2007-11-06 Thread Adrian Chadd
On Tue, Nov 06, 2007, Jorge Bastos wrote:
 Alex,
 The only ACL i have in squid.conf is:
 
 ---
 acl all_cache src 0.0.0.0/0.0.0.0
 no_cache deny all_cache
 ---
 
 I'm one of the people who's having this problems.
 Now I'm using 3.0.PRE6 until this is fixed.

So wait - Squid-3.0.PRE6 works but Squid-3.0.PRE7 with exactly the same
configuration file doesn't?



Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] RE: Running Squid on NT default domain on client

2007-11-06 Thread Wever, J.
Hi People,

I have set up Squid 2.6 on a Win 2003 server with NTLM authentication.
It works great; the only problem is that my clients (Linux thin clients)
are not joined to the domain, and whenever they are prompted for a
user/pass the user has to type domain\user for it to work.

If the client (user) types just his username and password, the hostname
is used as the domain.

I have searched the FAQ and the email archive and found many replies
about configuring Samba with smb.conf to use the default domain; however,
I'm not using Samba.

Is there anywhere else where I might set the default domain so my users
only have to fill in a username and a password (without domain\)?

Thanks a lot!

Jelle Wever
 


Re: [squid-users] RE: Running Squid on NT default domain on client

2007-11-06 Thread Guido Serassio

Hi,

At 11.11 06/11/2007, Wever, J. wrote:

Hi  People,

I have set up Squid 2.6 on a Win 2003 server with ntlm authentication,
it works great only one problem is that my client (linux thinclients)
are not joined to the domain and whenever they are prompted for a
user/pass the user has to fill: domain\user for it to work.

If the client (user) types just his username and password the hostname
is used as the domain.


This is correct NTLM behaviour: it also happens on Windows clients 
not joined to a domain.
Internet Explorer correctly displays a login dialog box with three 
fields (username, password and domain) for NTLM authentication, while 
Firefox always displays a two-field dialog box for both basic and 
NTLM authentication.



I have searched the faq and the email database and found many replies
about configuring samba with smb.conf to use the default domain, however
i'm not using samba.

Is there anywhere else where i might set the default domain so my users
only have to fill in a username and a password (without domain\)?


This is a client-side problem, not a server-side problem: it's the 
client that fills the domain field of the NTLM request with the local 
machine name. I don't know if it's possible to set the default NTLM 
domain used for authentication on the Linux client.


Regards

Guido Serassio



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] Problem while using reply_body_max_size

2007-11-06 Thread sridhar panaman
Hi,
I am trying to configure my squid to block users from downloading
anything less than 3MB. But when I give reply_body_max_size 3000 I
am unable to view certain websites, like www.microsoft.com.
Is there something else that I should add to this line, or is it totally wrong?

Requesting your help
-- 
Sridhar Panaman


Re: [squid-users] squid accel peer load balancing weighted round robin?

2007-11-06 Thread Sylvain Viart

Hi,

To sum up:

   * proxy: squid 2.6.STABLE16
   * accelerator setup: squid only speaks to apache2 (originserver), no
     other proxies talking to each other.
   * I want to weight-loadbalance the squid queries to the parents (origin)
   * plus I want to filter the URLs into 2 types, static and php:
       o static URLs are directed to the static peer, not balanced
       o any php content URL is directed to a php peer with weighted
         round-robin selection.

squid behavior seems to be bugged when I put :
cache_peer php-01 parent 80 0 no-query originserver round-robin weight=2 
login=PASS
#cache_peer php-02 parent 80 0 no-query originserver round-robin 
weight=0 login=PASS
cache_peer php-03 parent 80 0 no-query originserver round-robin weight=2 
login=PASS
cache_peer php-04 parent 80 0 no-query originserver round-robin weight=1 
login=PASS
cache_peer php-05 parent 80 0 no-query originserver round-robin weight=2 
login=PASS
cache_peer php-06 parent 80 0 no-query originserver round-robin weight=2 
login=PASS
cache_peer php-07 parent 80 0 no-query originserver round-robin weight=3 
login=PASS
cache_peer php-08 parent 80 0 no-query originserver round-robin weight=2 
login=PASS


Weights are not respected and all the load seems to fall on the last 
declared peer. Note that I also declare some other peers not involved in 
the load-balancing scheme.
In particular, I've re-implemented the round-robin behaviour via my 
redirector, and I first produced a bugged algorithm which was also 
counting the static peers in the peer rotation.


The bugged algorithm: static peer selections are also counted and break
the round-robin selection.

# Simplified redirector loop (Perl sketch). The bug: $n is incremented
# even for static URLs, so static requests still advance the rotation.
my $n = 0;
while (<>)
{
    if (static($_))          # placeholder test for static content
    {
        s/url/static/;       # rewrite towards the static peer
    }
    else
    {
        $peer = $all_peer[$n % $nb_peer];
    }
    print;
    $n++;
}

I measure load by looking at some MRTG-like graphs of the whole server 
pool, and clearly the load is badly divided across the peers.


config problem.

use round-robin for strictly old fashioned round-robin, 
weighted-round-robin for round-robin with weight= load balancing

weighted-round-robin, starts from squid3 I think.

CARP is a purpose-built load balancing algorithm, and as far as I 
know, it should work with originserver.  
http://docs.huihoo.com/gnu_linux/squid/html/x2398.html

No. It's a parent proxy/server thing.

use 'carp' to define a set of parents which should
be used as a CARP array. The requests will be
distributed among the parents based on the CARP load
balancing hash function based on their weight

says so twice to be sure.

FWIW, originserver only affects the replies squid produces. Whether 
it spoofs being a web server for the data requested.
Yes, but it's somewhat confusing, because parent seems to name both 
originserver and hierarchical proxy.
I've read some old posts which say that the algorithm was only available 
for parent proxies. Which means, for me, that it can apply to another 
proxy, not an origin server.


http://www.mail-archive.com/squid-users@squid-cache.org/msg09265.html

But as Amos said, it may be the same for squid.

I tested the CARP config and squid complains about the conf syntax:

cache_peer php-01 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-03 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-04 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-05 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-06 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-07 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.187500
cache_peer php-08 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-09 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.437500


squid -k parse
2007/11/06 16:08:36| parse_peer: token='carp-load-factor=.062500'
FATAL: Bungled squid.conf line 592: cache_peer varan-01 parent 80 0 
no-query no-digest originserver login=PASS carp-load-factor=.062500


Squid Cache: Version 2.6.STABLE16
configure options:  '--prefix=/usr' '--exec_prefix=/usr' 
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' 
'--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' 
'--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' 
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' 
'--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' 
'--enable-snmp' '--enable-delay-pools' '--enable-htcp' 
'--enable-cache-digests' '--enable-underscores' '--enable-referer-log' 
'--enable-useragent-log' '--enable-auth=basic,digest,ntlm' 
'--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' 
'--with-maxfd=65536' 'i386-debian-linux' 'build_alias=i386-debian-linux' 
'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux'


carp seems to be enabled...

Regards,

RE: [squid-users] squid3 WindowsUpdate failed

2007-11-06 Thread Jorge Bastos
On my machine it's, 3.0-PRE6 and 3.0-RC1



-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: terça-feira, 6 de Novembro de 2007 10:02
To: Jorge Bastos
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squid3 WindowsUpdate failed

On Tue, Nov 06, 2007, Jorge Bastos wrote:
 Alex,
 The only ACL i have in squid.conf is:
 
 ---
 acl all_cache src 0.0.0.0/0.0.0.0
 no_cache deny all_cache
 ---
 
 I'm one of the people who's having this problems.
 Now I'm using 3.0.PRE6 until this is fixed.

So wait - Squid-3.0.PRE6 works but Squid-3.0.PRE7 with exactly the same
configuration file doesn't?



Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



RE: [squid-users] squid3 WindowsUpdate failed

2007-11-06 Thread Alex Rousskov

On Tue, 2007-11-06 at 09:24 +, Jorge Bastos wrote:
 Alex,
 The only ACL i have in squid.conf is:
 
 ---
 acl all_cache src 0.0.0.0/0.0.0.0
 no_cache deny all_cache
 ---

OK, thanks.

 I'm one of the people who's having this problems.
 Now I'm using 3.0.PRE6 until this is fixed.

Can you help us troubleshoot the problem? Can you run the latest Squid3
daily snapshot and collect full debugging (debug_options ALL,9) logs
when Windows Update is malfunctioning?

Thank you,

Alex.

 -Original Message-
 From: Alex Rousskov [mailto:[EMAIL PROTECTED] 
 Sent: segunda-feira, 5 de Novembro de 2007 16:31
 To: Amos Jeffries
 Cc: John Mok; squid-users@squid-cache.org
 Subject: Re: [squid-users] squid3 WindowsUpdate failed
 
 On Sun, 2007-11-04 at 19:30 +1300, Amos Jeffries wrote:
  I have just had the opportunity to do WU on a customers box and
  managed to reproduce one of the possible WU failures.
  
  This one was using WinXP, and the old WindowsUpdate (NOT 
  MicrosoftUpdate, teht remains untested). With squid configured to
  permit 
  client access to:
  
  # Windows Update / Microsoft Update
  #
  redir.metaservices.microsoft.com
  images.metaservices.microsoft.com
  c.microsoft.com
  windowsupdate.microsoft.com
  #
  # WinXP / Win2k
  .update.microsoft.com
  download.windowsupdate.com
  # Win Vista
  .download.windowsupdate.com
  # Win98
  wustat.windows.com
  crl.microsoft.com
  
  AND also CONNECT access to www.update.microsoft.com:443
  
  PROBLEM:
 The client box detects a needed update,
 then during the Download Updates phase it says ...failed! and
  stops.
  
  CAUSE:
  
  This was caused by a bug in squid reading the ACL:
 download.windowsupdate.com
...
 .download.windowsupdate.com
  
- squid would detect that download.windowsupdate.com was a
  subdomain 
  of .download.windowsupdate.com  and .download.windowsupdate.com would
  be 
  culled off the ACL as unneeded.
  
- That culled bit held the wildcard letting v4.download.* and 
  www.download.* be retrieved later in the process.
  
- BUT, specifying JUST .download.windowsupdate.com would cause 
  download.windowsupdate.com/fubar to FAIL under the same circumstances.
  
  during the WU process requests for application at 
  www.download.windowsupdate.com/fubar and K/Q updates at 
  v(3|4|5).download.windowsupdate.com/fubar2
  would result in a 403 and thus the FAIL.
  
  
  SOLUTION:
Changing the wildcard match to an explicit for fixes this and WU 
  succeeds again.
  OR,
Changing the wildcard to .windowsupdate.com also fixes the problem
  for this test.
 
 Can other folks experiencing Windows Update troubles with Squid3 confirm
 that their setup does not have the same ACL problem?
 
 In general, if we do not find a way to get more information about the
 Windows Update problem, we would have to assume it does not exist in
 most environments and release Squid3 STABLE as is. If you want the
 problem fixed before the stable Squid3 release, please help us reproduce
 or debug the problem.
 
 Thank you,
 
 Alex.
 
 



[squid-users] NIC and Squid

2007-11-06 Thread stephane lepain aka riganta
Hi Guys, 

I am wondering if there is any possibility for me to tell squid to act only 
on one NIC. Indeed, I have two of them on my PC and would like Squid to use 
only one. 
-- 

Stephen
Cordialement, Best Regards


[squid-users] carp doc bug : parse_peer: token='carp-load-factor=0.5' SQUID2.6

2007-11-06 Thread Sylvain Viart

Hi,

I'm trying to test the CARP load balancing.

squid-2.6.16/src

But the documentation seems to be bugged.

From the source:
cache_cf.c

#if USE_CARP
    } else if (!strcasecmp(token, "carp")) {
        if (p->type != PEER_PARENT)
            fatalf("parse_peer: non-parent carp peer %s/%d\n",
                   p->host, p->http_port);

        p->options.carp = 1;
#endif

The only supported parameter in the cache_peer directive's parsing seems 
to be 'carp', not 'carp-load-factor'.


Also from the source, in void carpInit() (carp.c):

    /* and load factor */
    p->carp.load_factor = ((double) p->weight) / (double) W;

It seems to me that the load_factor is, in fact, calculated from the peer
weight?


this works:
cache_peer php-01 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=1
cache_peer php-03 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=1
cache_peer php-04 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=1
cache_peer php-05 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=1
cache_peer php-06 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=1
cache_peer php-07 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=2
cache_peer php-08 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=1
cache_peer php-09 parent 80 0 carp no-query no-digest originserver 
login=PASS weight=7


this one fails:

cache_peer php-01 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-03 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-04 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-05 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-06 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-07 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.187500
cache_peer php-08 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.062500
cache_peer php-09 parent 80 0 no-query no-digest originserver login=PASS 
carp-load-factor=0.437500


Regards,
Sylvain.


Re: [squid-users] Optimal maximum cache size

2007-11-06 Thread Matus UHLAR - fantomas
On 05.11.07 19:00, Paul Cocker wrote:
 Is there such a thing as too much disk cache? Presumably squid has to
 have some way of checking this cache, and at some point it takes longer
 to look for a cached page than to serve it direct. At what point do you
 hit that sort of problem, or is it so large no human mind should worry?

you usually don't need too much content; I guess content more than one
month old is useless. It only takes up space on disk and in memory, mostly
if it isn't being HIT anymore. However, if your machine is fast enough, it
should not cause any trouble and you can keep as much as you want.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...


[squid-users] WCCPv2 and HTTPS problems

2007-11-06 Thread Dalibor Dukic
Hi,

I configured a transparent squid box and WCCPv2 with a Cisco 6k5. After some
time I noticed that clients have problems with HTTPS sites. If I
manually configure the proxy setting in the browser and bypass WCCP,
everything is OK.

I'm using the standard service group (web-cache). Maybe some web servers
check that the HTTP and HTTPS requests are coming from the same source
address and block HTTPS access. Clients and squid are on public addresses
and these requests come from different source IPs. I can't change this and
put the clients and squid boxes behind a NAT machine. :(
Has anyone noticed the same behaviour?
Maybe I can set up a service group with ports 80 and 443 so I can resolve
the issue with different IPs; is this correct?

Thanks in advance, Dalibor



Re: [squid-users] NIC and Squid

2007-11-06 Thread Beavis
I'm not sure if it's possible to bind it to a physical interface, but
you can certainly bind it to an IP address:

http://www.squid-cache.org/Versions/v2/2.6/cfgman/tcp_outgoing_address.html


regards,
-pf

On 11/6/07, stephane lepain aka riganta [EMAIL PROTECTED] wrote:
 Hi Guys,

 I am wondering if there is any possibilities for me to tell squid to act only
 on one NIC. Indeed, I have two of them on my PC and would like Squid to use
 only one.
 --

 Stephen
 Cordialement, Best Regards



Re: [squid-users] squid accel peer load balancing weighted round robin?

2007-11-06 Thread Chris Robertson

Sylvain Viart wrote:

Hi,

To sum up:

   * proxy squid 2.6.STABLE16
   * accelerator, squid only speak to apache2 (originserver), no other
 proxy speaking together.
   * I want to weight loadblance the squid query to the parent (origin)
   * + I want to filter the url in 2 type static and php. static
 o URL are directed to the static peer, not balanced
 o any php content URL are directed to php peer with weighted
   round-robin selection.

squid behavior seems to be bugged when I put :
cache_peer php-01 parent 80 0 no-query originserver round-robin 
weight=2 login=PASS
#cache_peer php-02 parent 80 0 no-query originserver round-robin 
weight=0 login=PASS
cache_peer php-03 parent 80 0 no-query originserver round-robin 
weight=2 login=PASS
cache_peer php-04 parent 80 0 no-query originserver round-robin 
weight=1 login=PASS
cache_peer php-05 parent 80 0 no-query originserver round-robin 
weight=2 login=PASS
cache_peer php-06 parent 80 0 no-query originserver round-robin 
weight=2 login=PASS
cache_peer php-07 parent 80 0 no-query originserver round-robin 
weight=3 login=PASS
cache_peer php-08 parent 80 0 no-query originserver round-robin 
weight=2 login=PASS


weight are not respected and all the load seems to fall on the last 
declared peer. 

SNIP



I tested the CARP config and squid complain about the conf syntax:

cache_peer php-01 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.062500


2.6 STABLE16 changed the cache_peer CARP directive. See 
http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html#s2 
and http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html


carp_load_factor has been done away with, and now it seems you would 
just replace round-robin in your config above with carp.


cache_peer php-03 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.062500
cache_peer php-04 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.062500
cache_peer php-05 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.062500
cache_peer php-06 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.062500
cache_peer php-07 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.187500
cache_peer php-08 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.062500
cache_peer php-09 parent 80 0 no-query no-digest originserver 
login=PASS carp-load-factor=0.437500


squid -k parse
2007/11/06 16:08:36| parse_peer: token='carp-load-factor=.062500'
FATAL: Bungled squid.conf line 592: cache_peer varan-01 parent 80 0 
no-query no-digest originserver login=PASS carp-load-factor=.062500


Squid Cache: Version 2.6.STABLE16
configure options:  '--prefix=/usr' '--exec_prefix=/usr' 
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' 
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' 
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' 
'--enable-async-io' '--with-pthreads' 
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' 
'--enable-arp-acl' '--enable-epoll' 
'--enable-removal-policies=lru,heap' '--enable-snmp' 
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' 
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log' 
'--enable-auth=basic,digest,ntlm' '--enable-carp' 
'--enable-follow-x-forwarded-for' '--with-large-files' 
'--with-maxfd=65536' 'i386-debian-linux' 
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux' 
'target_alias=i386-debian-linux'


carp seems to be enabled...

Regards,
Sylvain.


Chris


[squid-users] Differentiating http and ssl requests.

2007-11-06 Thread Srinivas B
Hi All,

I am using Squid 2.6 Stable 12. I have my configuration like below.

http_port 8080 accel defaultsite=myhttpsite.net
https_port 8081 accel defaultsite=myhttpsite.net cert=path_to_cert
key=path_to_key protocol=http

By looking at the above, you might understand the setup. It's

Client ---http & ssl--- Squid ---http--- HTTP Server

My question is this:

The HTTP server sees no difference between the actual requests, i.e., whether
the client request is http or https.

Is there any way to find this out at the backend (HTTP web server) through
HTTP headers or any other configuration?

Thank you All.

Srini


Re: [squid-users] NIC and Squid

2007-11-06 Thread Leonardo Rodrigues Magalhães


   No, it cannot bind to a physical interface. Anyway, you can ensure 
that with firewall rules.


   What squid is capable of doing, as Beavis wrote, is to bind to a 
specific IP address. But instead of using tcp_outgoing_address as 
proposed by Beavis, I would recommend using the bound IP address on 
the http_port parameter:


http://www.squid-cache.org/Versions/v2/2.6/cfgman/http_port.html

something like

http_port 192.168.1.10:3128

   and then squid would NOT answer on your other NIC (192.168.2.10 for 
example).


   tcp_outgoing_address deals with outgoing addresses, not incoming ones. 
I don't think tcp_outgoing_address is the right tool for achieving 
what you need; http_port does the IP-specific bind you need.



Beavis escreveu:

I'm not sure if it's possible to bind it to a physical interface but
you can sure bind it to an IP address

http://www.squid-cache.org/Versions/v2/2.6/cfgman/tcp_outgoing_address.html


On 11/6/07, stephane lepain aka riganta [EMAIL PROTECTED] wrote:
  

Hi Guys,

I am wondering if there is any possibilities for me to tell squid to act only
on one NIC. Indeed, I have two of them on my PC and would like Squid to use
only one.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






smime.p7s
Description: S/MIME Cryptographic Signature


Re: [squid-users] NIC and Squid

2007-11-06 Thread Amos Jeffries
 Hi Guys,

 I am wondering if there is any possibilities for me to tell squid to act
 only
 on one NIC. Indeed, I have two of them on my PC and would like Squid to
 use
 only one.

Squid doesn't know about NICs, but it does know about IPs.
You need to configure squid to listen (*_port) and send
(*_outgoing_address) on a specific IP address which is assigned to the NIC
you want to use.
http://www.squid-cache.org/Versions/v2/2.6/cfgman/

Amos
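
A minimal sketch of the above, assuming 192.168.1.10 is the address
assigned to the NIC you want Squid to use (substitute your own address
and port):

http_port 192.168.1.10:3128
tcp_outgoing_address 192.168.1.10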




Re: [squid-users] squid accel peer load balancing weighted round robin?

2007-11-06 Thread Amos Jeffries
 Hi,

 To sum up:

 * proxy squid 2.6.STABLE16
 * accelerator, squid only speak to apache2 (originserver), no other
   proxy speaking together.
 * I want to weight loadblance the squid query to the parent (origin)
 * + I want to filter the url in 2 type static and php. static
   o URL are directed to the static peer, not balanced
   o any php content URL are directed to php peer with weighted
 round-robin selection.

 squid behavior seems to be bugged when I put :
 cache_peer php-01 parent 80 0 no-query originserver round-robin weight=2
 login=PASS
 #cache_peer php-02 parent 80 0 no-query originserver round-robin
 weight=0 login=PASS
 cache_peer php-03 parent 80 0 no-query originserver round-robin weight=2
 login=PASS
 cache_peer php-04 parent 80 0 no-query originserver round-robin weight=1
 login=PASS
 cache_peer php-05 parent 80 0 no-query originserver round-robin weight=2
 login=PASS
 cache_peer php-06 parent 80 0 no-query originserver round-robin weight=2
 login=PASS
 cache_peer php-07 parent 80 0 no-query originserver round-robin weight=3
 login=PASS
 cache_peer php-08 parent 80 0 no-query originserver round-robin weight=2
 login=PASS

 weight are not respected and all the load seems to fall on the last
 declared peer. Note, I also declare some other peer not involved in the
 load balancing scheme.

round-robin means old fashioned Round-Robin. One query per peer, looping,
no exceptions.

weighted-round-robin means Weight Balanced Round-Robin

... and yes weighted-round-robin is only provided in Squid 3.0 or later.

 Particularly, I've re-implemented the round robin behavior via my
 redirector. And I first produced a bugged algorithm with was also
 counting the static peer in the peer rotation.

I'd advise going to squid-3.0-RC1, unless it's running on Windows.
We believe Squid3 is ready for release on any non-Windows platform; there
is just one known bug on win32 holding the RC back from final.


 bugged algo, static peer selection are also counted and break the round
 robin selection.
 $n=0;
 while()
 {
 if(static)
{
   s/url/static/;
}
else
{
 $peer = $all_peer[$n%nb_peer];
}
   print;
   $n++;
 }

 I measure load by looking on some MRTG like graph of all the server
 pool. And clearly it see, than the load is badly divided on each peer.

 config problem.

 use round-robin for strictly old fashioned round-robin,
 weighted-round-robin for round-robin with weight= load balancing
 weighted-round-robin, starts from squid3 I think.

 CARP is purpose build load balancing algorithm, and as far as I
 know, it should work with originserver.
 http://docs.huihoo.com/gnu_linux/squid/html/x2398.html
 No. It's a parent proxy/server thing.
 
 use 'carp' to define a set of parents which should
 be used as a CARP array. The requests will be
 distributed among the parents based on the CARP load
 balancing hash function based on their weight
 
 says so twice to be sure.

 FWIW, originserver only affects the replies squid produces. Whether
 it spoofs being a web server for the data requested.
 Yes, but it's some what confusing, because parent seems to name
 orginserver and hierarchical proxy.
 I've read some old post which say that the algorithm was only available
 for parent proxy. Which means for me that it can apply to another
 proxy not an origin server.

 http://www.mail-archive.com/squid-users@squid-cache.org/msg09265.html

 But as Amos said, it may be the same for squid.

 I tested the CARP config and squid complain about the conf syntax:

 cache_peer php-01 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.062500

carp-load-factor= was replaced by weight= in some early 2.6 release.

Amos

 cache_peer php-03 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.062500
 cache_peer php-04 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.062500
 cache_peer php-05 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.062500
 cache_peer php-06 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.062500
 cache_peer php-07 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.187500
 cache_peer php-08 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.062500
 cache_peer php-09 parent 80 0 no-query no-digest originserver login=PASS
 carp-load-factor=0.437500

 squid -k parse
 2007/11/06 16:08:36| parse_peer: token='carp-load-factor=.062500'
 FATAL: Bungled squid.conf line 592: cache_peer varan-01 parent 80 0
 no-query no-digest originserver login=PASS carp-load-factor=.062500

 Squid Cache: Version 2.6.STABLE16
 configure options:  '--prefix=/usr' '--exec_prefix=/usr'
 '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid'
 '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid'
 '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads'
 

Re: [squid-users] carp doc bug : parse_peer: token='carp-load-factor=0.5' SQUID2.6

2007-11-06 Thread Amos Jeffries
 Hi,

 I'm trying to test the CARP load balancing.

 squid-2.6.16/src

 But the documentation seems to be bugged.

snip

 The only supported parametter in the cache_peer parsing directive seems
 to be 'carp' not 'carp-load-factor'.


The authoritative documentation provided appears to be correct.
Looks like the docs you are using were written for squid 2.5

http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html


 Also from the source:  void carpInit(void)
 carp.c 193 lines --49%--
    /* and load factor */
    p->carp.load_factor = ((double) p->weight) / (double) W;

 it seems to me, that the load_factor is in fact, calculated from peer
 weight?

Yes. carp-load-factor has been obsoleted by the weight= option since 2.6.

Amos





Re: [squid-users] Squid cluster - flat or hierarchical

2007-11-06 Thread John Moylan
Hi,

My loadbalancing is handled very well by LVS.  My caches are using
unicast ICP with the no-proxy option for their cache_peers. I don't
think Carp or round robin anything would help me much. My concern is
whether or not my caches performance could suffer from forwarding
loops if they are all siblings of each other? Is it OK to ignore the
forwarding loop warnings in cache.log?

J





On Nov 6, 2007 7:29 AM, Amos Jeffries [EMAIL PROTECTED] wrote:

 John Moylan wrote:
  Hi,
 
  I have 4 Squid 2.6 reverse proxy servers sitting behind an LVS
  loadbalancer with 1 public IP address. In order to improve the hit
  rate all 4 servers are all peering with eachother using ICP.
 
 
  squid1 - sibling squid{2,3,4}
  squid2 - sibling squid{1,3,4}
  squid3 - sibling squid{1,2,4}
  squid4 - sibling squid{1,2,3}
 
  This works fine, apart from lots of warnings about forwarding loops in
  the cache.log
 
  I would like to ensure that the configs are optimized for an up and
  coming big traffic event.
 
  Can I disregard these forwarding loops and keep my squids in a flat
  structure or should I break them up into parent sibling relationships.
  Will the forwarding loop errors I am experiencing cause issues during
  a quick surge in traffic?
 

 The CARP peering algorithm has been specially designed and added to cope
 efficiently with large arrays or clusters of squid.

 AFAIK it's as simple as adding the 'carp' option to your cache_peer
 lines in place of others such as round-robin.

 http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html

 Amos
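
As a rough sketch of that suggestion (hypothetical hostnames; note that
CARP members must be declared as parent type, typically with ICP disabled),
each box would list the other three along the lines of:

cache_peer squid2.example.com parent 3128 0 no-query carp
cache_peer squid3.example.com parent 3128 0 no-query carp
cache_peer squid4.example.com parent 3128 0 no-query carp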



Re: [squid-users] carp doc bug : parse_peer: token='carp-load-factor=0.5' SQUID2.6

2007-11-06 Thread Chris Robertson

Amos Jeffries wrote:

Hi,

I'm trying to test the CARP load balancing.

squid-2.6.16/src

But the documentation seems to be bugged.



snip
  

The only supported parametter in the cache_peer parsing directive seems
to be 'carp' not 'carp-load-factor'.




The authoritative documentation provided appears to be correct.
Looks like the docs you are using were written for squid 2.5

http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html


  

Also from the source:  void carpInit(void)
carp.c 193 lines --49%--
    /* and load factor */
    p->carp.load_factor = ((double) p->weight) / (double) W;

it seems to me, that the load_factor is in fact, calculated from peer
weight?



Yes. carp-load-factor has been obsoleted by the weight= option since 2.6.

Amos
  


More specifically, since 2.6 STABLE 16.  See 
http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html#s2.


Chris


Re: [squid-users] carp doc bug : parse_peer: token='carp-load-factor=0.5' SQUID2.6

2007-11-06 Thread Amos Jeffries
 Amos Jeffries wrote:
 Hi,

 I'm trying to test the CARP load balancing.

 squid-2.6.16/src

 But the documentation seems to be bugged.


 snip

 The only supported parametter in the cache_peer parsing directive seems
 to be 'carp' not 'carp-load-factor'.



 The authoritative documentation provided appears to be correct.
 Looks like the docs you are using were written for squid 2.5

 http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html



 Also from the source:  void carpInit(void)
 carp.c 193 lines --49%--
    /* and load factor */
    p->carp.load_factor = ((double) p->weight) / (double) W;

 it seems to me, that the load_factor is in fact, calculated from peer
 weight?


 Yes. carp-load-factor has been obsoleted by the weight= option since
 2.6.

 Amos


 More specifically, since 2.6 STABLE 16.  See
 http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html#s2.

 Chris


well, if we are going to be that pedantic. :)

It was introduced in 2.6-14 or earlier, and left undocumented (bug 2052).
During regular squid.conf cleanups of 2.6s15, the 3.0 format using weight=
was documented in 2.6s14-20070821.
/squidward

Amos





Re: [squid-users] Squid cluster - flat or hierarchical

2007-11-06 Thread Amos Jeffries
 Hi,

 My loadbalancing is handled very well by LVS.  My caches are using
 unicast ICP with the no-proxy option for their cache_peer's. I don't
 think Carp or round robin anything would help me much. My concern is
 whether or not my caches performance could suffer from forwarding
 loops if they are all siblings of each other? Is it OK to ignore the
 forwarding loop warnings in cache.log?

I'm not entirely sure. The warning appears when a request is dropped due
to the VERY nasty routing situation.
You may need to tweak the options a bit to remove them for siblings.
As an educated guess I'd expect digests etc. to be leading to some of the
loops, as peer A tells peer B that peer C has access to it, when peer C
actually gets it from peer A, etc.
Still, tweaking a flat hierarchy to work as a cloud is harder than using an
efficiency-designed algorithm.

You WILL need some default for going direct though, either to a default
parent or allow_direct permissions.

Amos


 On Nov 6, 2007 7:29 AM, Amos Jeffries [EMAIL PROTECTED] wrote:

 John Moylan wrote:
  Hi,
 
  I have 4 Squid 2.6 reverse proxy servers sitting behind an LVS
  loadbalancer with 1 public IP address. In order to improve the hit
  rate all 4 servers are all peering with eachother using ICP.
 
 
  squid1 - sibling squid{2,3,4}
  squid2 - sibling squid{1,3,4}
  squid3 - sibling squid{1,2,4}
  squid4 - sibling squid{1,2,3}
 
  This works fine, apart from lots of warnings about forwarding loops in
  the cache.log
 
  I would like to ensure that the configs are optimized for an up and
  coming big traffic event.
 
  Can I disregard these forwarding loops and keep my squids in a flat
  structure or should I break them up into parent sibling relationships.
  Will the forwarding loop errors I am experiencing cause issues during
  a quick surge in traffic?
 

 The CARP peering algorithm has been specialy designed and added to cope
 efficiently with large arrays or clusters of squid.

 IFAIK it's as simple as adding the 'carp' option to your cache_peer
 lines in place of other such as round-robin.

 http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html

 Amos






Re: [squid-users] Differentiating http and ssl requests.

2007-11-06 Thread Amos Jeffries
 Hi All,

 I am using Squid 2.6 Stable 12. I have my configuration like below.

 http_port 8080 accel defaultsite=myhttpsite.net
 https_port 8081 accel defaultsite=myhttpsite.net cert=path_to_cert
 key=path_to_key protocol=http

'tis usually better to accelerate on port 80. That way you don't have to
publish every URL as myhttpsite.com:8080, and http://myhttpsite.com
will 'just work'.


 By looking above, you might understand the setup. Its

 Client ---http  ssl---Squid---http---HTTP Server

 My question is that

 HTTP Server sees no difference between actual requests, i.e., whether
 client request is http or https.

aha, the certificates on https_port are what squid sends to clients for
that part of the SSL.

There is another set of certs needed on cache_peer to SSL the link between
squid and the actual web server.


 Is there anyway to find out this at backend(Http web server) through
 HTTP Headers or any other configuration?

Depends on the web server capabilities. Nothing to do with squid.

Amos
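
For the squid-to-origin leg mentioned above, a sketch only (assumes Squid
was built with --enable-ssl; the hostname and certificate paths are
placeholders, and sslcert=/sslkey= are needed only if the origin requires
a client certificate):

cache_peer myhttpsite.net parent 443 0 no-query originserver ssl sslcert=/path/to/client_cert.pem sslkey=/path/to/client_key.pem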




Re: [squid-users] Optimal maximum cache size

2007-11-06 Thread Colin Campbell
Hi,

On Tue, 2007-11-06 at 17:33 +0100, Matus UHLAR - fantomas wrote:
 On 05.11.07 19:00, Paul Cocker wrote:
  Is there such a thing as too much disk cache?

I recall seeing something on the squid-cache web site that said about 1
week is the optimal age for content. How you size the cache is a bit of
a guess. I have two identical systems with 36 GBytes of cache each. One
is doing about 3 times as much traffic as the other (so much for manual
load balancing :-). The busy one has an LRU of 6 days while the quiet
one has dropped down to about 9 days (I'm sure it was about 20 days not
that long ago). That tells me they're not too bad size-wise. Both
systems have just over 2,000,000 store entries. The cache_dir parameters
of interest are: aufs, 19150, 46, 256.

The whole theory behind squid is that it's quicker to get the object
from disk than to retrieve it over the net. Getting objects from memory
is nice but I wouldn't over-emphasise its importance.

Colin
-- 
Colin Campbell
Unix Support/Postmaster/Hostmaster
Citec
+61 7 3227 6334
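
For reference, the LRU age Colin mentions can be read from the cache
manager (a sketch; assumes squidclient is installed and the manager
interface is reachable on the default port):

squidclient mgr:info | grep -i "LRU"

which prints a line such as Storage LRU Expiration Age: 6.24 days (the
value shown here is illustrative).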


Re: [squid-users] WCCPv2 and HTTPS problems

2007-11-06 Thread Adrian Chadd
On Tue, Nov 06, 2007, Dalibor Dukic wrote:
 Hi,
 
 I configured transparent squid box and WCCPv2 with CISCO 6k5. After some
 time I noticed that clients have problems with HTTPS sites. If I
 manually configure proxy setting in browser and bypass WCCP everything
 goes OK. 
 
 I'm using standard service group (web-cache). Maybe some web server
 check that HTTP and HTTPS request are coming with same source address
 and block HTTPS access. Clients and squid are on public addresses and
 this requests come with different source IPs. I can't change this and
 put clients and squid boxes behind NAT machine. :(
 Is anyone notice that same behavior? 
 Maybe I can setup service-group with 80 and 443 port so I can resolve
 issues with different IPs, is this correct?

Squid doesn't currently handle transparently intercepting SSL, even for
the situation you require above.

You should investigate the TPROXY Squid integration which, when combined
with a correct WCCPv2 implementation and compatible network design,
will allow your requests to look like they're coming from your client
IPs.

The other alternative is to write or use a very basic TCP connection proxy
which will handle transparently intercepted connections and just connect
to the original destination server. This will let the requests come from
the same IP as the proxy.

(Yes, I've done the above in the lab and verified the concept works fine.)



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] WCCPv2 and HTTPS problems

2007-11-06 Thread Hemant Raj Chhetri

On Wed, 7 Nov 2007 12:45:11 +0900, Adrian Chadd [EMAIL PROTECTED] wrote:

On Tue, Nov 06, 2007, Dalibor Dukic wrote:

Hi,

I configured transparent squid box and WCCPv2 with CISCO 6k5. After some
time I noticed that clients have problems with HTTPS sites. If I
manually configure proxy setting in browser and bypass WCCP everything
goes OK.

I'm using standard service group (web-cache). Maybe some web server
check that HTTP and HTTPS request are coming with same source address
and block HTTPS access. Clients and squid are on public addresses and
this requests come with different source IPs. I can't change this and
put clients and squid boxes behind NAT machine. :(
Is anyone notice that same behavior?
Maybe I can setup service-group with 80 and 443 port so I can resolve
issues with different IPs, is this correct?


Squid doesn't currently handle transparently intercepting SSL, even for
the situation you require above.

You should investigate the TPROXY Squid integration which, when combined
with a correct WCCPv2 implementation and compatible network design,
will allow your requests to look like they're coming from your client
IPs.

The other alternative is to write or use a very basic TCP connection proxy
which will handle transparently intercepted connections and just connect
to the original destination server. This will let the requests come from
the same IP as the proxy.

(Yes, I've done the above in the lab and verified the concept works fine.)


Adrian

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -



Hi Adrian,
  I am also facing the same problem with https sites. Yahoo works fine
for me but I am having problems with Hotmail. Please advise me on how to
handle this, or is there any guide I can refer to?


Thanking you,
Hemant.


[squid-users] Solaris/OpenSSL/MD5 Issues

2007-11-06 Thread Randall DuCharme



Greetings,



I've recently run into a problem with building Squid on the latest Solaris 10 
release from Sun as well as the Nevada 74 release of OpenSolaris and before I 
start hacking and wasting time, I'm wondering if someone else has 
encountered/solved this.  I've done a pretty exhaustive Google and BLOG 
(several) search(es) but am still empty-handed.



The problem seems to be about /usr/include/sys/md5.h  as follows:



In file included from /usr/include/inet/ip_stack.h:37,
                 from /usr/include/inet/ip.h:50,
                 from /usr/include/netinet/ip_compat.h:189,
                 from IPInterception.cc:59:
/usr/include/sys/md5.h:62: error: conflicting declaration 'typedef struct MD5_CTX MD5_CTX'
../include/md5.h:59: error: 'MD5_CTX' has a previous declaration as `typedef struct MD5Context MD5_CTX'
/usr/include/sys/md5.h:62: error: declaration of `typedef struct MD5_CTX MD5_CTX'
../include/md5.h:59: error: conflicts with previous declaration `typedef struct MD5Context MD5_CTX'
/usr/include/sys/md5.h:62: error: declaration of `typedef struct MD5_CTX MD5_CTX'
../include/md5.h:59: error: conflicts with previous declaration `typedef struct MD5Context MD5_CTX'
/usr/include/sys/md5.h:62: error: declaration of `typedef struct MD5_CTX MD5_CTX'
../include/md5.h:59: error: conflicts with previous declaration `typedef struct MD5Context MD5_CTX'
/usr/include/sys/md5.h:66: error: declaration of C function `void MD5Final(void*, MD5_CTX*)' conflicts with
../include/md5.h:63: error: previous declaration `void MD5Final(uint8_t*, MD5Context*)' here
gmake[1]: *** [IPInterception.lo] Error 1
gmake[1]: Leaving directory `/export/home/randy/Download/squid-3.0.RC1/src'
gmake: *** [all-recursive] Error 1





I'm still running an older (2.6-STABLE5) release that was built on an earlier 
release of OpenSolaris so I'm not exactly sure what changed or when.  





Further, I've tried to build 3.0.RC1 with Sun Studio 12, but it complains
about operator overloading, like so:

CC: Warning: Option -fhuge-objects passed to ld, if ld is invoked, ignored otherwise
HttpRequestMethod.h, line 138: Error: Overloading ambiguity between operator!=(const HttpRequestMethod, const _method_t) and operator!=(int, int).
1 Error(s) detected.
*** Error code 1
make: Fatal error: Command failed for target `cf_gen.o'
Current working directory /export/home/randy/Download/squid-3.0.RC1/src



I can hack my way past the overloading ambiguity problem as well as the GNU
assumption about the -f switch (-fhuge-objects), but end up
with roughly the same problem with /usr/include/sys/md5.h.



What the heck am I missing??





Kind regards!



-- 

Randall D. DuCharme (Radio AD5GB)

Powered by OpenSolaris!

http://www.opensolaris.org





Re: [squid-users] WCCPv2 and HTTPS problems

2007-11-06 Thread Adrian Chadd
On Wed, Nov 07, 2007, Hemant Raj Chhetri wrote:

 Hi Adrian,
   I am also facing the same problem with https 
 sites. Yahoo works fine with me but I am having problem 
 with hotmail. Please advice me on how do I handle this or 
 is there any guide which I can refer to.

I don't know of an easy way to handle this, I'm sorry. I know how I'd handle
it in Squid-2.6 but it'd require a couple weeks of work and another few weeks
of testing.

(Considering how much of a problem this has caused people in the past I'm
surprised a solution hasn't been contributed back to the project..)



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -