[squid-users] "This cache is currently building its digest."

2010-11-03 Thread david robertson
Hello, I'm having a cache-digest related issue that I'm hoping someone
here can help me with.

I've got a few frontend servers, which talk to a handful of backend
servers.  Everything is working swimmingly, with the exception of
cache digests.

The digests used to work without issue, but suddenly all of my backend
servers have stopped building their digests.  They all say "This cache
is currently building its digest." when you try to access the digest.
It's as if the digest rebuild never finishes.  Nothing has changed
with my configuration, and all of the backends (6 of them) have
started doing this at roughly the same time.

My first thought would be cache corruption, but I've reset all of the
caches, and the issue still persists.

Any ideas?


Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/squid2' '--enable-async-io'
'--enable-icmp' '--enable-useragent-log' '--enable-snmp'
'--enable-cache-digests' '--enable-follow-x-forwarded-for'
'--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
-O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
-fno-exceptions'
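
For anyone chasing the same symptom, these are the digest-related
squid.conf directives in 2.7 (the values shown are the defaults,
illustrative rather than a fix), plus the internal URL the digest is
served from:

```
# Digest generation (Squid 2.7, built with --enable-cache-digests)
digest_generation on            # master switch for building a digest
digest_rebuild_period 1 hour    # how often the in-memory digest is rebuilt
digest_rewrite_period 1 hour    # how often it is written out for peers

# Peers fetch the digest from an internal URL, e.g.
#   http://backend.example.com:3128/squid-internal-periodic/store_digest
```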


RE: [squid-users] Re: Authentication using squid_kerb_auth with Internet Explorer 8 on Windows Server 2008 R2

2010-11-03 Thread Paul Freeman
Markus
After further investigation using gdb I have been able to determine the
problem is caused by a particular combination of encryption and checksum
types which seems to only occur (at this stage) in Windows 2008 R2 and
possibly Windows 7 although I have not confirmed this.

In my Windows 2008 R2 environment (including Active Directory, running in
Windows 2003 mode rather than Windows 2008), the keytab which I created for
squid using msktutil (with enctypes = 28) gave me keys encrypted with ArcFour
with HMAC/md5, AES-128 CTS mode with 96-bit SHA-1 HMAC and AES-256 CTS mode
with 96-bit SHA-1 HMAC.
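
For readers decoding that "enctypes = 28": msktutil's value is a
bit-mask over the AD attribute msDS-SupportedEncryptionTypes, and 28 is
exactly the three key types listed above (a quick sanity check; the
keytab path in the comment is purely illustrative):

```shell
# 0x04 = RC4-HMAC (ArcFour with HMAC/md5)
# 0x08 = AES128-CTS-HMAC-SHA1-96
# 0x10 = AES256-CTS-HMAC-SHA1-96
echo $((0x04 + 0x08 + 0x10))    # prints 28

# To see which key types actually ended up in a keytab:
#   klist -ke /etc/squid/squid.keytab
```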

The problem lies with the Kerberos libraries installed with Ubuntu 10.04 LTS
(1.8.1+dfsg-2ubuntu0.3).  They return an error when working with AES-256 and
the checksum encryption type ArcFour with HMAC/md5.  This has been reported
on the MIT Kerberos developers list
(http://mailman.mit.edu/pipermail/krbdev/2010-July/009148.html), assigned
ticket 6751, and resolved in the MIT Kerberos 1.8.3 release.  However,
the fix does not appear to have been backported to Ubuntu 10.04 LTS yet.

I compiled the MIT Kerberos 1.8.3 source and re-built squid_kerb_auth against
these libraries, and the problem no longer occurs, i.e. a domain user logged
into a Windows 2008 R2 server can authenticate using Kerberos in IE8.
Kerberos authentication continues to work with IE8 and Firefox in Windows XP
for domain users.

I greatly appreciate the assistance of Markus Moeller in resolving this.
Without his guidance and suggestions it would have taken me a lot longer to
nail down the problem.

Hopefully this information will be of some use to others.

Regards

Paul

> -Original Message-
> From: Markus Moeller [mailto:hua...@moeller.plus.com]
> Sent: Sunday, 31 October 2010 6:45 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Re: Authentication using squid_kerb_auth with
> Internet Explorer 8 on Windows Server 2008 R2
> 
> My tests show the same.  RC4 works but AES 128/256 fail.  It seems to
> be some incompatibility between the MS and MIT/Heimdal Kerberos
> libraries introduced in R2.
> 
> Markus
> 
> "DmitrySh"  wrote in message
> news:1288361044027-3019158.p...@n4.nabble.com...
> >
> > I solved the problem on Win7 (temporarily).
> > I set the RC4-HMAC type for Kerberos transactions in the Local
> > Security Policy:
> > http://technet.microsoft.com/en-us/library/dd560670%28WS.10%29.aspx
> > Now both keys on the client machine are of the RC4-HMAC type (krbtgt
> > and HTTP/fqdn_of_proxy)
> > That helps in my case.
> > It would be better with AES256, but I think this is because of the
> > mixed mode of AD (2003 and 2008).
> > Try to communicate with Microsoft about this.
> > P.S. Sorry for my English :)
> >
> > Regards,
> > Dmitry
> > --
> > View this message in context:
> > http://squid-web-proxy-cache.1019090.n4.nabble.com/Authentication-
> using-squid-kerb-auth-with-Internet-Explorer-8-on-Windows-Server-2008-
> R2-tp3013070p3019158.html
> > Sent from the Squid - Users mailing list archive at Nabble.com.
> >
> 
 

 


Re: [squid-users] squid_session

2010-11-03 Thread Amos Jeffries
On Wed, 3 Nov 2010 16:04:13 -0500, jon jon  wrote:
> Hello,
> 
> I have looked through the mail archive, and not found what I am
> looking for. I have Squid 2.6 installed on my Slackware box. I am new
> to Linux, so I apologize for the noob question. How do install
> squid_session the external helper program? Do I need to run the
> ./configure --enable squid_session

Maybe yes, maybe no.

Running the command "locate bin/squid_session" should tell you where it
is.
Probably /usr/bin/squid/squid_session or
/usr/local/bin/squid/squid_session

> 
> Or is this program already installed, and I just need to add a line to
> my squid.conf file?
> I don't understand how to install that program.

If it's not already installed with Squid it can be added by re-building
with the option:
 ./configure --enable-external-acl-helpers=session

Amos
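
As a concrete sketch of using the helper once built (paths, timeout, and
splash URL are illustrative; see the helper's own usage text for exact
options):

```
# Define the session helper as an external ACL (Squid 2.6/2.7 syntax);
# %SRC keys sessions by client IP
external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC \
    /usr/local/squid/libexec/squid_session -t 7200

# Mark requests whose source already has an active session
acl existing_session external session

# Send users without a session to a splash page first
deny_info http://example.com/splash.html existing_session
http_access deny !existing_session
```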


[squid-users] squid_session

2010-11-03 Thread jon jon
Hello,

I have looked through the mail archive, and not found what I am
looking for. I have Squid 2.6 installed on my Slackware box. I am new
to Linux, so I apologize for the noob question. How do I install
squid_session, the external helper program? Do I need to run the
./configure --enable squid_session

Or is this program already installed, and I just need to add a line to
my squid.conf file?
I don't understand how to install that program.

Thanks for any help


Re: [squid-users] header_replace: Can items be merged?

2010-11-03 Thread Amos Jeffries

On 04/11/10 03:43, Jenny Lee wrote:
>> Date: Thu, 4 Nov 2010 03:39:49 +1300
>> From: squ...@treenet.co.nz
>> To: squid-users@squid-cache.org
>> Subject: Re: [squid-users] header_replace: Can items be merged?
>>
>> On 04/11/10 03:32, Jenny Lee wrote:
>>>
>>> Hello folks,
>>>
>>> acl BADGUY dstdomain somewhere
>>> header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit) BADGUY
>>>
>>> Is there any way to do this? It is sucking in the whole line as UA
>>> (as expected). Quotation marks did not help.
>>>
>>> Any ideas?
>>
>> request_header_access User-Agent deny BADGUY
>>
>> The replace only happens if the header has been removed first.
>
> Sorry, I left it out assuming this is a given. Yes, it is removed.
>
> request_header_access User-Agent deny all
>
> It just does not work with BADGUY or anything else. It is correctly
> replaced though... but with the entire line... UA becomes:
>
> User-Agent Nutscrape/1.0 (CP/M; 8-bit) BADGUY
> "User-Agent Nutscrape/1.0 (CP/M; 8-bit)" BADGUY
> BADGUY User-Agent Nutscrape/1.0 (CP/M; 8-bit)

Oh, I see. No, ACLs are not accepted on that directive.
The only tuning available is by not doing some removals.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2
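
In other words, the only supported form of header_replace is global; a
minimal sketch (directive names as used earlier in this thread):

```
# header_replace takes no ACL: the replacement applies to every request
# whose User-Agent header was removed by the access rule above it
request_header_access User-Agent deny all
header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit)
```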


RE: [squid-users] header_replace: Can items be merged?

2010-11-03 Thread Jenny Lee




> Date: Thu, 4 Nov 2010 03:39:49 +1300
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] header_replace: Can items be merged?
>
> On 04/11/10 03:32, Jenny Lee wrote:
> >
> > Hello folks,
> >
> > acl BADGUY dstdomain somewhere
> > header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit) BADGUY
> >
> > Is there any way to do this? It is sucking in the whole line as UA (as 
> > expected). Quotation marks did not help.
> >
> > Any ideas?
>
> request_header_access User-Agent deny BADGUY
>
> The replace only happens if the header has been removed first.

 
Sorry I left it out assuming this is given. Yes it is removed.
 
request_header_access User-Agent deny all
 
It just does not work with BADGUY or anything else. It is correctly replaced 
though... but with the entire line... UA becomes: 
 
User-Agent Nutscrape/1.0 (CP/M; 8-bit) BADGUY
"User-Agent Nutscrape/1.0 (CP/M; 8-bit)" BADGUY
BADGUY User-Agent Nutscrape/1.0 (CP/M; 8-bit)
 
No difference.
 
3.2.0.1
 
J



Re: [squid-users] header_replace: Can items be merged?

2010-11-03 Thread Amos Jeffries

On 04/11/10 03:32, Jenny Lee wrote:
>
> Hello folks,
>
> acl BADGUY dstdomain somewhere
> header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit) BADGUY
>
> Is there any way to do this? It is sucking in the whole line as UA
> (as expected). Quotation marks did not help.
>
> Any ideas?

request_header_access User-Agent deny BADGUY

The replace only happens if the header has been removed first.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Set-Cookie2 and Cookie2 Non-Existant

2010-11-03 Thread Amos Jeffries

On 04/11/10 03:28, Jenny Lee wrote:
>> Date: Mon, 1 Nov 2010 00:58:50 +
>> From: squ...@treenet.co.nz
>> To: squid-users@squid-cache.org
>> Subject: Re: [squid-users] Set-Cookie2 and Cookie2 Non-Existant
>>
>> On Fri, 29 Oct 2010 08:44:25 +, Jenny Lee wrote:
>>> Hello Folks,
>>>
>>> I checked in HTTPheader.c and these headers are not available in
>>> 3.2.0.1:
>>> Set-Cookie2, Cookie2
>>>
>>> parse_http_header_access: unknown header name 'Set-Cookie2'
>>>
>>> I downloaded all the other sources and they are not available in
>>> them either.
>>>
>>> How can I block these headers?
>>
>> Thanks for the heads-up. I've added these to the sets of RFC defined
>> headers for 3.HEAD. In future releases they should be blockable. A
>> patch will be available here shortly:
>> http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-10999.patch
>
> Thank you Amos.
>
> Will this be incorporated into the next release of the 3.2 series?

Yes. It is already in the snapshots.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


[squid-users] header_replace: Can items be merged?

2010-11-03 Thread Jenny Lee

Hello folks,
 
acl BADGUY dstdomain somewhere
header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit) BADGUY
 
Is there any way to do this? It is sucking in the whole line as UA (as 
expected). Quotation marks did not help.
 
Any ideas?
 
J
 

  

RE: [squid-users] Set-Cookie2 and Cookie2 Non-Existant

2010-11-03 Thread Jenny Lee



> Date: Mon, 1 Nov 2010 00:58:50 +
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Set-Cookie2 and Cookie2 Non-Existant
> 
> On Fri, 29 Oct 2010 08:44:25 +, Jenny Lee  wrote:
> > Hello Folks,
> > 
> > I checked in HTTPheader.c and these headers are not available in
> 3.2.0.1:
> > Set-Cookie2, Cookie2
> > 
> > parse_http_header_access: unknown header name 'Set-Cookie2'
> > 
> > I downloaded all the other sources and they are not available in them
> > either.
> > 
> > How can I block these headers?
> 
> Thanks for the heads-up. I've added these to the sets of RFC defined
> headers for 3.HEAD. Future releases they should be blockable. A patch will
> be available here shortly
> http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-10999.patch
 
Thank you Amos.
 
Will this be incorporated into the next release of the 3.2 series?
  

[squid-users] http-returncode 417 and POST-request

2010-11-03 Thread Tom Tux
Hi

I have servers which need to connect to microsoft.com with a
POST request. The POST request fails (417 error) until I configure
"ignore_expect_100 on" in squid.conf (using squid 3.1.6).
Are there known problems or issues with enabling the
"ignore_expect_100" parameter? Could it be a security issue?

Is there another way in squid.conf to allow these servers to make the POST request?

xx,xx.xx.xx NONE/417 4362 POST http://go.microsoft.com/fwlink/? -
NONE/- text/html

Thanks a lot.
Tom
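
For reference, the workaround mentioned above is a single squid.conf
directive (Squid 3.1); it sits among the HTTP-violation options because it
makes Squid silently ignore the client's Expect: 100-continue header
instead of answering 417:

```
# squid.conf (Squid 3.1): suppress 417 Expectation Failed responses
# for requests carrying "Expect: 100-continue"
ignore_expect_100 on
```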


[squid-users] Squid Rewriter

2010-11-03 Thread Daniel Echizen
Hi, I'm modding a HAVP proxy to work as a squid parent, rewriting some
dynamic URLs (youtube, facebook, megaupload, etc.).
That all works, but I have a small issue with hits. On a miss I can read
the header, store the file's total size in a database, and then download
it; on a hit I compare the size of the file on disk with the size in the
database to decide whether it really is a hit. My problem: I'm wondering
if I can check the hit size without consulting the database. I tried to
do this with Content-Length but had no success; I can't access the
header after the hit check, and I still haven't managed to mod HAVP to
do it. I know it's not a squid issue, but perhaps someone can help me.


[squid-users] icp peering multicast option

2010-11-03 Thread My LinuxHAList
Hi,

I may run multiple instances of squid inside a box.
Those instances may be serving out of the same eth0 or some bonded interface.

I have a question on the icp multicast option. 

Does squid send its ICP multicast with the "loopback" socket option
enabled, so that when one instance sends out a multicast query, the
other instances on the same box will receive it?

Thanks
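
For context, the usual multi-instance multicast ICP wiring looks like
this (the group address and ports are examples only):

```
# Sender side: a multicast "peer" that only receives ICP queries,
# never HTTP fetches
cache_peer 224.0.14.1 multicast 3128 3130 ttl=1

# Each instance that should answer the queries joins the group:
mcast_groups 224.0.14.1

# Responders must still be listed as real peers to be used for HTTP,
# flagged so their unsolicited ICP replies are accepted:
#   cache_peer instance2.example.com sibling 3129 3131 multicast-responder
```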


RE: [squid-users] Squid network read()'s only 2k long?

2010-11-03 Thread Martin Sperl
To add to the confusion on the buffers used by squid:

I have observed something that looks similar to this issue: squid is
writing to the network in 4k chunks, even when it has everything in its
local buffers and could write the whole buffer at once...

Here is an example that delivers 17961 body bytes to the peer from the
filled buffers:

1288787592.475200 epoll_ctl(6, EPOLL_CTL_ADD, 58, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=58, u64=18133351923770}}) = 0
1288787592.475324 epoll_wait(6, {{EPOLLOUT, {u32=58, u64=18133351923770}}}, 32768, 10) = 1
1288787592.475358 gettimeofday({1288787592, 475366}, NULL) = 0
1288787592.475387 write(58, "HTTP/1.0 200 OK\r\nContent-Type: text/html;charset=UTF-8\r\nCache-Control: no-cache\r\nContent-Length: 17961\r\nDate: Wed, 03 Nov 2010 12:33:11 GMT\r\nVary: User-Agent,Accept-Encoding\r\nPragma: no-cache\r\nConnection: keep-alive\r\n\r\n"..., 4094) = 4094
1288787592.456083 gettimeofday({1288787592, 456092}, NULL) = 0
1288787592.456119 epoll_ctl(6, EPOLL_CTL_DEL, 72, {0, {u32=72, u64=20385083017920584}}) = 0
1288787592.456158 epoll_wait(6, {{EPOLLIN, {u32=48, u64=17583596109872}}}, 32768, 10) = 1
1288787592.456188 gettimeofday({1288787592, 456196}, NULL) = 0
1288787592.456217 read(48, "LO \302\241B\303\201JA "..., 4094) = 3733
1288787592.456359 gettimeofday({1288787592, 456367}, NULL) = 0
1288787592.456390 epoll_ctl(6, EPOLL_CTL_ADD, 72, {EPOLLIN|EPOLLERR|EPOLLHUP, {u32=72, u64=281470681743432}}) = 0
1288787592.456510 epoll_ctl(6, EPOLL_CTL_MOD, 72, {EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=72, u64=7829725380680}}) = 0
1288787592.456555 epoll_wait(6, {{EPOLLOUT, {u32=72, u64=7829725380680}}}, 32768, 10) = 1
1288787592.456585 gettimeofday({1288787592, 456594}, NULL) = 0
1288787592.456629 write(72, "RESPMOD icap://127.0.0.1:1344/respmod ICAP/1.0\r\nHost: 127.0.0.1:1344\r\nDate: ", 1823) = 1823
1288787592.456894 gettimeofday({1288787592, 456903}, NULL) = 0
1288787592.456934 epoll_wait(6, {{EPOLLOUT, {u32=72, u64=7829725380680}}}, 32768, 10) = 1
1288787592.456963 gettimeofday({1288787592, 456972}, NULL) = 0
1288787592.456992 write(72, "100\r\n\n \"..., 4096) = 4096
1288787592.476044 gettimeofday({1288787592, 476069}, NULL) = 0
1288787592.476098 epoll_ctl(6, EPOLL_CTL_DEL, 72, {0, {u32=72, u64=20385083017920584}}) = 0
1288787592.476140 epoll_wait(6, {{EPOLLOUT, {u32=58, u64=18133351923770}}}, 32768, 10) = 1
1288787592.476585 gettimeofday({1288787592, 476594}, NULL) = 0
1288787592.476615 write(58, "refront?item=1997025&BK=index "..., 4096) = 4096
1288787592.476758 gettimeofday({1288787592, 476767}, NULL) = 0



> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Dienstag, 02. November 2010 02:15
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Squid network read()'s only 2k long?
> 
> On Mon, 1 Nov 2010 23:20:52 +, Declan White 
> wrote:
> > On Mon, Nov 01, 2010 at 10:55:12PM +, Declan White wrote:
> >> On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
> >> > On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
> >> > > I went for a rummage in the code for the buffer size decisions, but
> >> > > got
> >> > > very very lost in the OO abstractions very quickly. Can anyone
> point
> >> > > me at
> >> > > anything I can tweak to fix this?
> >> >
> >> > It's a global macro defined by auto-probing your operating systems
> TCP
> >> > receiving buffer when building. Default is 16KB and max is 64KB.
> There
> >> > may
> >> > also be auto-probing done at run time.
> >> >
> >> > It is tunable at run-time with
> >> > http://www.squid-cache.org/Doc/config/tcp_recv_bufsize/
> >>
> >> Oh thank God! Thanks :) (and annoyed with myself that I missed that)
> >
> > Nuts.. actually, that didn't do anything :(
> >
> > 17314:  write(16, " G E T   / c f g m a n .".., 639)= 639
> > 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
> > 17314:  write(6, "\0\0\010\b\0\0\0\0\0\010".., 16)  = 16
> > 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
> > 17314:  read(11, " H T T P / 1 . 1   2 0 0".., 2046)= 2046
> > 17314:  write(6, "\0\0\0\n\004\0\0", 8) = 8
> > 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 2
> > 17314:  write(10, " H T T P / 1 . 0   2 0 0".., 2180)   = 2180
> > 17314:  read(11, " f o n t - s i z e :   3".., 2046)= 834
> > 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 2
> > 17314:  write(10, " f o n t - s i z e :   3".., 834)= 834
> > 17314:  read(11, "   n o n e ;\n }\n\n # m".., 2046)= 1066
> > 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
> > 17314:  write(10, "   n o n e ;\n }\n\n # m".., 1066)   = 1066
> > 17314:  write(8, " [ 0 1 / N o v / 2 0 1 0".., 403) = 403
> >
> > It's still reading from the remote server in 2046 byte lumps, which
> meant
> > three trips round the event loop where it might only have needed one.
> >
> > I'm guessing that setting is for the kernel level TCP receive bu

[squid-users] Squid 3.1.9 OSX client_side.cc okToAccept: WARNING! Your cache is running out of filedescriptors

2010-11-03 Thread donovan jeffrey j
greetings
I updated 2 transparent proxies last night, and both are spewing noise
about file descriptors. This is coming from the system.

2010/11/03 08:48:36| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:48:52| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:49:08| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:49:24| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:49:40| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:49:56| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:50:12| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:50:28| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:50:44| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors
2010/11/03 08:51:00| client_side.cc(2980) okToAccept: WARNING! Your cache is 
running out of filedescriptors

Here is what sysctl -a gives me.


kern.exec: unknown type returned
kern.maxfiles = 12288
kern.maxfilesperproc = 10240
kern.corefile = /cores/core.%P
kern.maxfiles: 12288
kern.maxfilesperproc: 10240


What should I set these to, and do I need to recompile with any special
adjustments?

./configure --enable-icmp --enable-storeio=diskd,ufs,aufs --enable-delay-pools 
--disable-htcp --enable-ssl --enable-ipfw-transparent --enable-snmp 
--enable-underscores --enable-basic-auth-helpers=NCSA,LDAP,getpwnam
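
A sketch of the usual remedy (the numbers are illustrative; size them to
your workload):

```shell
# Raise the OS X kernel limits (persist them in /etc/sysctl.conf so
# they survive a reboot):
sysctl -w kern.maxfiles=32768
sysctl -w kern.maxfilesperproc=16384
```

On the Squid side, 3.1 also honours a `max_filedescriptors` directive in
squid.conf (subject to the compiled-in maximum, but no recompile needed
to lower/raise within it), and `squidclient mgr:info` reports the limit
Squid actually obtained.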



Re: [squid-users] Errors with sasl while compiling Squid 3.1.4

2010-11-03 Thread lieven
I had this same issue and could, ehrm, "guess" (sorry) from the logs
that I was missing g++.


After apt-getting g++, everything went smoothly.

thanks for pointing to the solution.

cheers!
Lieven


Henrik Nordström wrote:

On Wed, 2010-06-30 at 14:25 +0200, Babelo Gmvsdm wrote:
> Hi, when I run ./configure to prepare compilation of Squid 3.1.4 I get
> these errors:
>
> checking /usr/include/sasl.h usability... no
> checking /usr/include/sasl.h presence... no
> checking for /usr/include/sasl.h... no
>
> checking sasl.h usability... no
> checking sasl.h presence... no
> checking for sasl.h... no
> configure: error: Neither SASL nor SASL2 found
>
> Whereas /usr/include/sasl.h is present in the right directory

Check config.log for more information.

Regards
Henrik



Re: [squid-users] Multisite ICP peering

2010-11-03 Thread Chris Toft
Thanks for the reply, I actually fixed it. Removed the multicast-responder 
option and just left multicast-sibling.

Man this thing flies on 5 boxes with 64gb memory and 10x 50gb solid state 
drives for the cache :-)

I will post working config tomorrow for anyone interested.



Chris Toft
Mob: 0459 029 454

On 03/11/2010, at 19:51, Josip Almasi  wrote:

> Chris Toft wrote:
>> Hi,
>>
>> I am having an issue with getting multisite ICP peering to work correctly.
>>
>> Let me lay out the scenario.
>>
>> Two sites, one primary and one DR.
>>
>> 3 squid caches on the primary site, 2 on the secondary site in front of a 
>> cluster of 4 back end web servers on each site.
>>
>> Primary Site
>> I have set them up so when requests hit the squid caches (behind a 
>> loadbalancer) on each site they will check their squid peers for the image 
>> before passing to the back end. This is exactly what we wanted.
>>
>> Secondary Site
>> Again setup the same as the primary site but in the configs all 5 squid 
>> caches are listed as multicast peers, but when the secondary site is hit it 
>> only checks the caches on that site and then goes to the origin webservers.
>>
>> The idea being the primary site will get hit 90% of the time and therefore 
>> have a "hot" cache, with 10% of traffic hitting the secondary site to give a 
>> "warm" cache. This being the case we wanted the secondary site squid caches 
>> to use the multicast peers in the primary site before hitting the backend 
>> webservers.
>>
>> I have tried various options of multicast, sibling, parent for the type of 
>> server.
>> I have tried using weighting to make the primary site caches a higher 
>> priority than the backend web servers
>> There are no connectivity issues because if I comment out the secondary site 
>> webservers in the configs then it will try the primary site squid caches and 
>> then fall back to the primary site webservers.
>>
>> How can I setup the config so that when a request comes in to the secondary 
>> site it will check the primary site cache for an object before falling back 
>> to the webservers???
>
> You need a class D address to get UDP multicast working, or mbone,
> or VPN/private addresses.
> An alternative is cache digests; just shorten the time period. And
> then I'd recommend using 'proxy' instead of 'sibling' on the sec site.
>
> Regards...



Re: [squid-users] Multisite ICP peering

2010-11-03 Thread Josip Almasi

Chris Toft wrote:
> Hi,
>
> I am having an issue with getting multisite ICP peering to work
> correctly.
>
> Let me lay out the scenario.
>
> Two sites, one primary and one DR.
>
> 3 squid caches on the primary site, 2 on the secondary site in front
> of a cluster of 4 back end web servers on each site.
>
> Primary Site
> I have set them up so when requests hit the squid caches (behind a
> loadbalancer) on each site they will check their squid peers for the
> image before passing to the back end. This is exactly what we wanted.
>
> Secondary Site
> Again setup the same as the primary site but in the configs all 5
> squid caches are listed as multicast peers, but when the secondary
> site is hit it only checks the caches on that site and then goes to
> the origin webservers.
>
> The idea being the primary site will get hit 90% of the time and
> therefore have a "hot" cache, with 10% of traffic hitting the
> secondary site to give a "warm" cache. This being the case we wanted
> the secondary site squid caches to use the multicast peers in the
> primary site before hitting the backend webservers.
>
> I have tried various options of multicast, sibling, parent for the
> type of server.
> I have tried using weighting to make the primary site caches a higher
> priority than the backend web servers.
> There are no connectivity issues because if I comment out the
> secondary site webservers in the configs then it will try the primary
> site squid caches and then fall back to the primary site webservers.
>
> How can I setup the config so that when a request comes in to the
> secondary site it will check the primary site cache for an object
> before falling back to the webservers???

You need a class D address to get UDP multicast working, or mbone,
or VPN/private addresses.
An alternative is cache digests; just shorten the time period. And then
I'd recommend using 'proxy' instead of 'sibling' on the secondary site.

Regards...
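
A hedged sketch of that digest-based alternative for the secondary
site's squid.conf (hostnames and periods are illustrative, and it
assumes both sites' builds have --enable-cache-digests):

```
# Secondary site: prefer the primary-site caches before the origin servers
cache_peer cache1.primary.example.com sibling 3128 3130 proxy-only
cache_peer cache2.primary.example.com sibling 3128 3130 proxy-only

# With cache digests enabled, digests replace ICP queries between peers;
# rebuild/advertise more often than the 1-hour default so the "warm"
# site sees a fresh view of the "hot" site:
digest_rebuild_period 10 minutes
digest_rewrite_period 10 minutes
```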