Re: [squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-12 Thread Amos Jeffries

On 12/06/2013 8:24 a.m., Guillermo Javier Nardoni - Grupo GERYON wrote:

Hello everyone,

We have this situation and we tried a lot of configurations without success.

• 1000 customers
• 4 cache boxes running Squid 2.7 on Debian Squeeze
• Caches are full-meshed to each other
• Every Squid is running in transparent mode (http_port 3128 transparent)
• Every Squid is running HAARPCACHE on localhost at port 8080 (HAARPCACHE is a Thundercache 3.1 fork which works perfectly for caching sites like YouTube, with lots of HITs)
• Every Squid is connected to the Internet through RB1
• RB2 (Mikrotik RouterOS) is doing round-robin selection over the Squids, redirecting all traffic destined to port 80 on the Internet to port 3128 on a Squid


snip


As you can see, the same file is downloaded twice (at least) if the request
is not redirected to the same cache box.
How can I achieve the goal that every cache is asked, so that if the file is
cached on any sibling or parent it is fetched from that cache instead of
being downloaded from the Internet?


The simplest thing you can do with your existing proxies is to set them 
up into a CARP installation.


With the CARP design you have two layers of proxies:
- layer 1 is the Squid acting as gateways between clients and wherever 
the data comes from.
- layer 2 is the HAARPCACHE proxies acting as caches for that specific 
content.



To change your current configuration into a CARP system all you need to 
do is:


1) make all HAARP proxies listen on an IP:port which is accessible from 
any of the Squids.


2) add a cache_peer line to each squid.conf pointing at each HAARP proxy.
 + Use the carp option on every one of these cache_peer lines.
 + Use the same cache_peer_access ACL setup that you have now, but for 
every one of those new cache_peer lines as well.


3) ensure each of your Squids always has identical cache_peer settings.
 - you can do that by writing all the HAARP-related settings into a 
separate file which is mirrored between the Squids, and using the 
squid.conf include directive to load it.


After this, no matter which Squid receives the client request, it will 
hash the URL and point it at the HAARP proxy which is most likely to 
have already cached it.
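
For example, a minimal sketch of the shared peer section (the 192.168.x.1 
addresses and port 8080 are assumptions based on your description; adjust 
to wherever the HAARP instances actually listen):

  # /etc/squid/haarp-carp.conf -- mirrored to all four Squid and loaded
  # from each squid.conf with:  include /etc/squid/haarp-carp.conf
  acl haarp_lst url_regex -i /etc/haarp/haarp.lst
  cache deny haarp_lst
  # one parent line per HAARP instance; carp makes selection hash-based
  cache_peer 192.168.1.1 parent 8080 0 carp proxy-only no-digest
  cache_peer 192.168.2.1 parent 8080 0 carp proxy-only no-digest
  cache_peer 192.168.3.1 parent 8080 0 carp proxy-only no-digest
  cache_peer 192.168.4.1 parent 8080 0 carp proxy-only no-digest
  # only the HAARP URLs go to the array; everything else goes direct
  cache_peer_access 192.168.1.1 allow haarp_lst
  cache_peer_access 192.168.1.1 deny all
  cache_peer_access 192.168.2.1 allow haarp_lst
  cache_peer_access 192.168.2.1 deny all
  cache_peer_access 192.168.3.1 allow haarp_lst
  cache_peer_access 192.168.3.1 deny all
  cache_peer_access 192.168.4.1 allow haarp_lst
  cache_peer_access 192.168.4.1 deny all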


Done.


Extra notes:

* in a textbook CARP array all requests go to the parent caches - this 
is optional. In your case only the HAARP URLs will go there.


* in a textbook CARP array the frontend does not cache - this is optional.

Amos



Re: [squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-12 Thread Eliezer Croitoru

Hey Amos,

I am unsure about one thing.
In the case of a CARP array the related documents are:
- http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+10.+Talking+to+Other+Squids/10.9+Cache+Array+Routing+Protocol/
- http://docs.huihoo.com/gnu_linux/squid/html/x2398.html
- http://wiki.squid-cache.org/Features/LoadBalance#CARP_:_Cache_Array_Routing_Protocol


His case is dynamic URLs pointing to the same content.
Say 10 URLs of the same YouTube video: they are not guaranteed to hash to 
the same cache.

This is why using ICP or HTCP comes in handy.
We don't need to know the request hash in order to get a cached object; 
the whole array can rely on each other's ICP capabilities.
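
(With the existing sibling lines from the original setup, e.g.

  cache_peer 192.168.2.1 sibling 3128 3130 proxy-only

each Squid first sends an ICP query to port 3130 on every sibling and 
fetches from whichever sibling answers with a HIT.)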

What do you think?

Eliezer

On 6/12/2013 9:33 AM, Amos Jeffries wrote:

snip




[squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-11 Thread Guillermo Javier Nardoni - Grupo GERYON
Hello everyone,

We have this situation and we tried a lot of configurations without success.

• 1000 customers
• 4 cache boxes running Squid 2.7 on Debian Squeeze
• Caches are full-meshed to each other
• Every Squid is running in transparent mode (http_port 3128 transparent)
• Every Squid is running HAARPCACHE on localhost at port 8080 (HAARPCACHE is a Thundercache 3.1 fork which works perfectly for caching sites like YouTube, with lots of HITs)
• Every Squid is connected to the Internet through RB1
• RB2 (Mikrotik RouterOS) is doing round-robin selection over the Squids, redirecting all traffic destined to port 80 on the Internet to port 3128 on a Squid

root@cpe-58-1-26-172:/etc/haarp# cat /etc/haarp/haarp.lst
http.*\.4shared\.com.*(\.exe|\.iso|\.torrent|\.zip|\.rar|\.pdf|\.doc|\.tar|\.mp3|\.mp4|\.avi|\.wmv)
http.*\.avast\.com.*(\.def|\.vpu|\.vpaa|\.stamp)
http.*(\.avg\.com|\.grisoft\.com|\.grisoft\.cz).*(\.bin|\.exe)
http.*(\.avgate\.com|\.avgate\.net|\.freeav\.net|\.freeav\.com).*(\.gz)
http.*\.bitgravity\.com.*(\.flv\.mp4)
http.*\.etrustdownloads\.ca\.com.*(\.tar|\.zip|\.exe|\.pkg)
http.*flashvideo\.globo\.com.*(\.mp4|\.flv)
http.{1,4}vsh\.r7\.com\/.*(\.mp4)$
74\.125\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
#http.*\.googlevideo\.com.*videoplayback
#http.*fpatch\.grandchase\.com\.br.*(\.kom|\.mkom|\.mp3)
http.*(\.kaspersky-labs\.com|\.geo\.kaspersky\.com|kasperskyusa\.com).*(\.avc|\.kdc|\.klz|\.bz2|\.dat|\.dif)
#http.*\.mccont\.com.*\.flv
http.*\.metacafe\.com.*\.flv
http.{1,4}media\w*\.justin.tv\/archives\/(\w|\/|-)*\.flv(\?.*|$)
http.{1,4}\w*juegos\w*\.juegosdiarios\.com\/(\w|\/|-)*\.swf$
http.{1,4}\w*\.juegosjuegos\.com\/games(\w|\/|-)*\.swf$
##http.*(\.windowsupdate\.com|(\.microsoft\.com)).*(\.cab|\.exe|\.iso|\.zip|\.psf)
http.*(\.windowsupdate\.com|(update|download|dlservice|windowsupdate)\.microsoft\.com)\/.*(\.cab|\.exe|\.iso|\.zip|\.psf|\.txt|\.crt)$
http.*\.pornotube\.com.*\.flv
http.*\.terra\.com.*\.flv
#http.*uol\.com\.br.*\.flv
http.*\.viddler\.com.*\.flv
#http.*\.video\.msn\.com.*\.flv
http.*(porn|img).*\.xvideos\.com\/videos\/(thumbs\/)?.*(\.jpg|\.flv\?.*|\.mp4\?.*)$
http.*\.youtube\.com.*videoplayback\?
http.*\.ziddu\.com.*(\.exe|\.iso|\.torrent|\.zip|\.rar|\.pdf|\.doc|\.tar|\.mp3|\.mp4|\.avi|\.wmv)
http.*edgecastcdn\.net/.*(\.mp4|\.flv)
http.*adobe\.com/.*(\.cab|\.aup|\.exe|\.msi|\.upd|\.msp)
http.*\.eset\.com.*\.nup
http.*\.nai\.com.*(\.zip|\.tar|\.exe|\.gem)
http.*\.pop6\.com.*(\.flv)
http.*\.symantecliveupdate\.com.*(\.zip|\.exe)
#http.*\.xpg\.com\.br.*
http.{1,4}\w*\.ytimg\.com.*(hqdefault(\.jpg|\.mp4)$|M[0-9]+\.jpg\?sigh=)
http.{1,4}\w*google(\.\w|\w)*\.doubleclick\.net\/pagead\/ads\?.*
http.*img[0-9]\.submanga\.com\/(hd)?pages\/.*(\.jpg|\.webp)
http.*(profile|s?photos|video).{0,5}\.ak\.fbcdn\.net\/.*(\.mp4\?.*|\_[a-z]\.jpg$|\.mp4$|\_[a-z]\.png$)
#http.*(profile|s?photos|video).{0,5}\.ak\.fbcdn\.net\/.*(\.mp4\?.*|\_n\.jpg$|\.mp4$|\_n\.png$)
http.*\.video\.pornhub\.\w*\.com\/videos\/.*\.flv\?.*
http.*\.(publicvideo|publicphoto)\.xtube\.com\/(videowall\/)?videos?\/.*(\.flv\?.*|\_Thumb\.flv$)
http.*public\.tube8\.com\/.*\.mp4.*
http.*videos\..*\.redtubefiles\.com\/.*\.flv
(205\.196\.|199\.91\.)[0-9]{2,3}\.[0-9]{1,3}\/.*
#http.*\.rapidshare\.com\/cgi-bin\/.*\.cgi\?.*sub=download
http.*\.vimeo.com\/.*\.mp4(\?.*)?$
http.*images\.orkut\.com\/orkut\/photos\/.*\.jpg$
http.{1,4}(\w|\/|\.|-)*media\.tumblr\.com\/(\w|\/|-|\.)*tumblr(\w|\/|-)*(\.png|\.jpg)$
#http.{1,7}speedtest(\w|-)*(\.|\w)+\/speedtest\/(random.*\.jpg|latency\.txt)\?.*
#http.{1,10}testdevelocidad.{1,5}\/speedtest\/(random.*\.jpg|latency\.txt)\?.*
#http.{1,7}(\.|[a-z]|[0-9]|-)+(\/\w+)?(\/speedtest)+\/(random[0-9]+x[0-9]+\.jpg|latency\.txt)

As you can see, YouTube and many other sites are being cached by
HAARPCACHE rather than by Squid itself. By the way, it works great.

Configuration in every squid.conf at /etc/squid:

Proxy1:
IP: 192.168.1.1

cache_peer 192.168.2.1 sibling 3128 3130 proxy-only
cache_peer 192.168.3.1 sibling 3128 3130 proxy-only
cache_peer 192.168.4.1 sibling 3128 3130 proxy-only

acl haarp_lst url_regex -i /etc/haarp/haarp.lst
cache deny haarp_lst
cache_peer 127.0.0.1 parent 8080 0 proxy-only no-digest
dead_peer_timeout 2 seconds
cache_peer_access 127.0.0.1 allow haarp_lst
cache_peer_access 127.0.0.1 deny all


Proxy2:
IP: 192.168.2.1

cache_peer 192.168.1.1 sibling 3128 3130 proxy-only
cache_peer 192.168.3.1 sibling 3128 3130 proxy-only
cache_peer 192.168.4.1 sibling 3128 3130 proxy-only

acl haarp_lst url_regex -i /etc/haarp/haarp.lst
cache deny haarp_lst
cache_peer 127.0.0.1 parent 8080 0 proxy-only no-digest
dead_peer_timeout 2 seconds
cache_peer_access 127.0.0.1 allow haarp_lst
cache_peer_access 127.0.0.1 deny all

Proxy3:
IP: 192.168.3.1

cache_peer 192.168.2.1 sibling 3128 

Re: [squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-11 Thread Eliezer Croitoru

On 6/11/2013 11:24 PM, Guillermo Javier Nardoni - Grupo GERYON wrote:

SNIP

Re: [squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-11 Thread Eliezer Croitoru

On 6/11/2013 11:24 PM, Guillermo Javier Nardoni - Grupo GERYON wrote:

Hello everyone,

We have this situation and we tried a lot of configurations without success.

• 1000 customers
• 4 cache boxes running Squid 2.7 on Debian Squeeze
• Caches are full-meshed to each other
• Every Squid is running in transparent mode (http_port 3128 transparent)
• Every Squid is running HAARPCACHE on localhost at port 8080 (HAARPCACHE is a Thundercache 3.1 fork which works perfectly for caching sites like YouTube, with lots of HITs)
• Every Squid is connected to the Internet through RB1
• RB2 (Mikrotik RouterOS) is doing round-robin selection over the Squids, redirecting all traffic destined to port 80 on the Internet to port 3128 on a Squid


This is my latest StoreID helper:
http://www1.ngtech.co.il/paste/1009/

If somebody wants to update and publish it, please feel free.
The HAARPCACHE sources for the patterns are here:
https://github.com/keikurono/haarpcache/tree/master/haarp/plugins


cat /etc/haarp/haarp.lst
root@cpe-58-1-26-172:/etc/haarp# cat /etc/haarp/haarp.lst

SNIP

Eliezer


Re: [squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-11 Thread Alex Rousskov
On 06/11/2013 02:49 PM, Eliezer Croitoru wrote:

 There is a small bug: when StoreID is being used, the proxy asks the
 sibling only for the StoreID URL in the ICP requests.
 If you ask me, I think that it should work this way.

No, it should not. The StoreID effect should be local. If somebody wants a
global effect, they should rewrite the request URL.
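
(For illustration only; the helper paths are hypothetical, the ACL name is 
made up, and the store_id_* directives use the Squid 3.4+ syntax:

  # local effect: only the cache key changes; peers still see the original URL
  store_id_program /usr/local/bin/storeid_helper
  store_id_access allow dynamic_cdn_urls
  # global effect: the request URL itself is rewritten before peer selection
  url_rewrite_program /usr/local/bin/rewrite_helper
)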

Alex.



Re: [squid-users] Peering caches (squid and 3rd parties) - How to

2013-06-11 Thread Eliezer Croitoru

On 6/12/2013 1:30 AM, Alex Rousskov wrote:

On 06/11/2013 02:49 PM, Eliezer Croitoru wrote:


There is a small bug: when StoreID is being used, the proxy asks the
sibling only for the StoreID URL in the ICP requests.
If you ask me, I think that it should work this way.


No, it should not. StoreID effect should be local. If somebody wants a
global effect, they should rewrite the request URL.

Alex.


I had a brief talk with Amos about it.
I know about this issue, and it's a bit complex to fix the ICP and HTCP 
code since we need to add the StoreID lookup into the ICP request handling.

It was never done, even in old versions.

I am not sure if I filed a bug about it, so later this week I will file a 
detailed bug report on each one of the problems to make sure it will be fixed.


Eliezer


FW: [squid-users] Peering squid multiple instances.

2010-03-24 Thread GIGO .




 From: gi...@msn.com
 To: squ...@treenet.co.nz
 Subject: RE: [squid-users] Peering squid multiple instances.
 Date: Wed, 24 Mar 2010 07:12:15 +


 Dear Amos,

 Thank you for your response and the better design tips. However, I am not 
 able to comprehend it well (due to my current lack of both experience and 
 knowledge), so I request you to elaborate it a bit more. Your guidance 
 would be really valuable.

 Question 1:

 You said that under my configuration this is the case:

 Client - squidinstance1 - squidinstance2 - (web servers)

 or

 client - squidinstance2 - webserver

 Well, I am failing to understand how clients can talk to squidinstance2 
 directly when:

 1. squidinstance2 is configured with an ACL to accept traffic from 
 localhost only.
 2. On the Squid clients (browsers), port 8080 of the first instance is 
 configured, and this is the only traffic that is accepted through 
 iptables as well.

 According to my perception, isn't this the case:

 client - squidinstance1 - webserver
 client - squidinstance1 - squidinstance2 - webserver

 Please guide me in this respect.


 Question 2:

 I have created multiple instances running on the same machine because my 
 server has three hard drives. The OS is on physical RAID1, and the cache 
 directory is on the third hard drive (comprising 80% of total space). This 
 setup was done because I wanted to survive a cache-directory failure: even 
 if all the drives holding cache directories fail, my clients will still be 
 able to browse the Internet through the proxy-only instance until the disk 
 system holding the OS fails. I am not sure whether this approach is 
 correct, but it is what I have learnt in these days through the available 
 FAQs and, of course, guidance through the Squid mailing list. Please guide 
 me on this.


 Question 3:


 What does it mean that parent is the peering method for origin web 
 servers? You also wrote that, by reason of the parent selection, it does 
 not matter which protocol I am using. Please guide me.



 Question 4:

 I interpret you to mean that two instances running on the same machine 
 should have a sibling-type relationship, configured identically, with the 
 digest protocol between them. That means I should run two instances 
 pointing to different cache directories on my third hard drive and, 
 instead of one 50 GB cache, give let's say 25 GB of space to each 
 (wouldn't holding two cache directories on the same hard drive degrade 
 performance? So is this only sensible when I have multiple drives for 
 holding caches?). Both would be permitted to cache data from origin 
 servers; however, in case of a cache miss, each first checks the sibling 
 before going to the origin server. Am I correct in understanding you?
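
 For concreteness, this is the shape I have in mind (paths and sizes are my 
 own guesses, 25 GB each instead of one 50 GB cache):

   # instance A squid.conf
   cache_dir aufs /cache01/squid-a 25600 48 256
   # instance B squid.conf
   cache_dir aufs /cache01/squid-b 25600 48 256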


 You further wrote about failover, which I am sorry to say I failed to 
 understand at this point in time due to my current skill/competency. 
 However, I am eager to learn and determined to work hard; your detailed 
 response will be really valuable to me (I have just started a couple of 
 weeks back). Is the following setup for failover of a whole Squid proxy 
 server, or for failover of Squid processes?

 * a cache_peer parent type to the web server. With originserver
 and default selection enabled.
 This topology utilizes a single layer of multiple proxies. Possibly with
 hardware load balancing in iptables etc sending alternate requests to
 each of the two proxies listening ports.
 Useful for small-medium businesses requiring scale with minimal
 hardware. Probably their own existing load balancers already purchased
 from earlier attempts. IIRC the benchmark for this is somewhere around
 600-700 req/sec.

 The next step up in performance and HA is to have an additional layer of
 Squid acting as the load-balancer doing CARP to reduce cache duplication
 and remove sibling data transfers. This form of scaling out is how
 WikiMedia serve their sites up.
 It is documented somewhat in the wiki as ExtremeCarpFrontend. With a
 benchmark so far for a single box reaching 990 req/sec.

 These maximum speed benchmarks are only achievable by reverse-proxy
 people. Regular ISP setups can expect their maximum to be somewhere
 below 1/2 or 1/3 of that rate due to the content diversity and RTT lag
 of remote servers. (well that part i understood)

 Question 5:

 can you please tell some good read for knowledge/concepts builder? I have get 
 hold of squid definitve guide though a very good one however isnt'it a bit 
 outdated.Can you recommend please? Specially on the topics of Authenticating 
 Active directory users in squid proxy.








 
 Date: Wed, 24 Mar 2010 18:06:46 +1300
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Peering squid multiple instances.

 GIGO . wrote:
 I have successfully setup running of multiple instances of squid for the 
 sake of surviving a Cache directory failure. However I

Re: [squid-users] Peering squid multiple instances.

2010-03-24 Thread Amos Jeffries

GIGO . wrote:

Dear Amos,
 
Thank you for your response and the better design tips. However, I am not able to comprehend it well (due to my current lack of both experience and knowledge), so I request you to elaborate it a bit more. Your guidance would be really valuable.
 
Question 1:
 
You said that under my configuration this is the case:
 
Client - squidinstance1 - squidinstance2 - (web servers)
 
or 
 
client - squidinstance2 - webserver
 
Well, I am failing to understand how clients can talk to squidinstance2 directly when:
 
1. squidinstance2 is configured with an ACL to accept traffic from localhost only.


I did not see any http_access lines in your displayed config. I assumed 
some things wrongly, it seems, and also mixed your questions up with 
someone else's similar questions.


What you posted was a good setup for failover if the normal caching proxy 
(squid2) dies: a non-caching frontend instance (squid1) that prefers 
fetching from the cache, with a direct non-caching route as a backup.


In this case the parent type was correct, and with only two Squids the 
ICP/HTCP/digest selection methods should be avoided.
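
A minimal sketch of that shape, condensed from the config you posted 
(ports and paths are yours):

  # squid1 -- frontend, non-caching; prefers squid2, goes direct as backup
  http_port 8080
  cache_peer 127.0.0.1 parent 3128 0 default no-query no-digest proxy-only
  prefer_direct off
  cache deny all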


(The bits I said that led to your Q2-4 were intended for that other 
setup. Very sorry.)




Question 5:
 
Can you please recommend some good reading for building knowledge and concepts? I have got hold of the Squid Definitive Guide; though a very good one, isn't it a bit outdated? Can you recommend something else, please? Especially on the topic of authenticating Active Directory users in Squid.
 


The wiki is where we point people. It started as a copy of the 
definitive guide and the older FAQ guide. Then we tried to improve it, 
expand it and update things for the currently supported Squid releases.


Hopefully it's easy enough to read and learn from. Suggestions for 
improvement are always welcome.






Date: Wed, 24 Mar 2010 18:06:46 +1300
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Peering squid multiple instances.

GIGO . wrote:

snip

Amos

--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] Peering squid multiple instances.

2010-03-23 Thread GIGO .

I have successfully set up multiple instances of Squid for the sake of 
surviving a cache-directory failure. However, I still have a few confusions 
regarding peering multiple instances of Squid. Please guide me in this respect.
 
 
In my setup, am I correct that my second instance is doing the caching on 
behalf of requests sent to Instance 1?
 
 
 
What protocol should I select for the peers in this scenario? What is the 
recommendation (CARP, digest, or ICP/HTCP)?
 
 
 
Is the syntax of my cache_peer directive correct, or should the local 
loopback address not be used this way?
 
 
 
What is the recommended protocol for peering Squids with each other?
 
 
 
What is the recommended protocol for peering Squid with ISA Server?
 
 
 
Instance 1:

visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log  /var/logs/access.log
cache_log /var/logs/cache.log

cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all
 
 
 
Instance 2:
 
visible_hostname SquidProxylhr
unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log
 

coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 5 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1
 
 
 
regards,

  

Re: [squid-users] Peering squid multiple instances.

2010-03-23 Thread Amos Jeffries

GIGO . wrote:

I have successfully set up multiple instances of Squid for the sake of 
surviving a cache-directory failure. However, I still have a few confusions 
regarding peering multiple instances of Squid. Please guide me in this respect.
 
 
In my setup, am I correct that my second instance is doing the caching on behalf of requests sent to Instance 1?
 


You are right in your understanding of what you have configured. I have 
some suggestions below on a better topology, though.


 
 
What protocol should I select for the peers in this scenario? What is the recommendation (CARP, digest, or ICP/HTCP)?
 


Under your current config there is no selection; ALL requests go through 
both peers.


Client - Squid1 - Squid2 - WebServer

or

Client - Squid2 - WebServer

thus Squid2 and WebServer are both bottleneck points.

 
 
Is the syntax of my cache_peer directive correct, or should the local loopback address not be used this way?
 


Syntax is correct.
Use of localhost does not matter. It's a useful choice for providing 
some security and extra speed to the inter-proxy traffic.



 
What is the recommended protocol for peering Squids with each other?
 


Does not matter for your existing config, by reason of the parent 
selection.


 
 
What is the recommended protocol for peering Squid with ISA Server?
 


parent is the peering method for origin web servers, with the 
originserver selection method.


 
Instance 1:


visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log  /var/logs/access.log
cache_log /var/logs/cache.log

cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all
 
 
 
Instance 2:
 
visible_hostname SquidProxylhr

unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log
 


coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 5 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1
 


What I suggest for failover is two proxies configured identically:

 * a cache_peer sibling type between them, using digest selection, to 
localhost (different ports)
 * permitting both to cache data from the origin (optionally from the 
peer)
 * a cache_peer parent type to the web server, with originserver 
and default selection enabled (see the sketch after this list).
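
A rough sketch of one of the pair (the ports and the origin hostname are 
placeholders, not tested config):

  # proxy A; proxy B is identical with the 8081/8082 ports swapped
  http_port 8081
  icp_port 0
  # the other instance as sibling; selection via its cache digest
  cache_peer 127.0.0.1 sibling 8082 0 proxy-only
  # the origin web server as default parent
  cache_peer origin.example.com parent 80 0 originserver default no-query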



This topology utilizes a single layer of multiple proxies, possibly with 
hardware load balancing, iptables, etc. sending alternate requests to 
each of the two proxies' listening ports.
  Useful for small-medium businesses requiring scale with minimal 
hardware, probably with their own existing load balancers already 
purchased from earlier attempts. IIRC the benchmark for this is somewhere 
around 600-700 req/sec.



The next step up in performance and HA is to have an additional layer of 
Squid acting as the load balancer, doing CARP to reduce cache duplication 
and remove sibling data transfers. This form of scaling out is how 
WikiMedia serve their sites.
 It is documented somewhat in the wiki as ExtremeCarpFrontend, with a 
benchmark so far for a single box reaching 990 req/sec.



These maximum speed benchmarks are only achievable by reverse-proxy 
people. Regular ISP setups can expect their maximum to be somewhere 
below 1/2 or 1/3 of that rate due to the content diversity and RTT lag 
of remote servers.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] peering

2005-04-27 Thread Hermann-Marcus Behrens
Hello,
I'm using the latest Squid 3 beta (squid-3.0-PRE3-20050427) as a reverse 
proxy.
I have one server which does heavy image calculations (it renders maps, 
2-3 seconds for each image). Now I have added a second server, and I would 
like to use the cache_peer option so that each web accelerator first 
checks whether the requested image has already been rendered on the other 
cache.

Unfortunately I was not able to get this working. I tried to change the 
always_direct option, but if I delete this option, the cache does not 
work any more.

My configuration looks like this:
http_port 213.133.a.c:80 accel defaultsite=127.0.0.1
cache_peer 127.0.0.1 parent 80 0 no-query originserver no-digest 
name=mydomain
cache_peer 213.133.a.b  sibling 80 3130
cache_peer 213.133.a.c  sibling 80 3130

acl my_domains dstdomain www.domain.de
cache_peer_access mydomain allow my_domains
http_access allow my_domains
always_direct allow all

Does someone know how to get this working? Or is the combination of a 
reverse proxy and the use of other caches in a hierarchy not possible?

Greetings from Germany,
Hermann Behrens
--
Hermann-Marcus Behrens / citybeat.de
E-Mail:  [EMAIL PROTECTED]
Web: www.citybeat.de
Telefon: 0421 - 16 80 80 - 0
Fax: 0421 - 16 80 80 -80
Adresse: Zum Huchtinger Bahnhof 13 / 28259 Bremen


[squid-users] Peering squid caches from non ICP parent cache

2003-09-06 Thread Karmila Sari
Hi,


I have a problem with peering Squid caches: since the parent cache does 
not support ICP, the rule below does not work.

cache_peer 192.168.1.13 parent 3128 3130 no-query

Is it possible to configure the child cache to use a parent cache which 
has its icp_port disabled?

regards,
karmila



Re: [squid-users] Peering squid caches from non ICP parent cache

2003-09-06 Thread Kenneth Oncinian
On Saturday 06 September 2003 2:02 pm, Karmila Sari wrote:
 Hi,


 I have a problem with peering Squid caches: since the parent cache does 
 not support ICP, the rule below does not work.

 cache_peer 192.168.1.13 parent 3128 3130 no-query

 Is it possible to configure the child cache to use a parent cache which 
 has its icp_port disabled?

Yes, you can disable the ICP query by setting the peer's ICP port to 0:
cache_peer 192.168.1.13 parent 3128 0 no-query

 regards,
 karmila




Re: [squid-users] Peering squid caches from non ICP parent cache

2003-09-06 Thread Henrik Nordstrom
On Saturday 06 September 2003 08.05, Kenneth Oncinian wrote:

 yes, you could disable it by
 cache_peer 192.168.1.13 parent 3128 0 no-query

You also need prefer_direct off or never_direct allow all.
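
For example (a sketch combining the two lines, untested):

  cache_peer 192.168.1.13 parent 3128 0 no-query
  never_direct allow all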

Regards
Henrik
-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


Re: [squid-users] peering

2003-07-20 Thread Chris Knipe
  On my parent proxy however, I get constant 403's when the sibling
  tries to query it.  I suspect it is an ACL that I am missing, but
  I'm not sure which...

 The other peer needs to be allowed to access the server in
 http_access. If not they will be given 403 on attempt to access the
 cache, just as any other http client not allowed by http_access.

Yup.  Thanks Henrik, I seem to have sorted it out.  Apart from a small glitch
in the ACL, I had made a mistake with miss_access as well.  A couple of
minutes on Google fixed it, however.
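
(The missing bits were roughly of this shape, with x.x.x.x being the
sibling's address as in the log line below; the ACL name is made up:

  acl mysibling src x.x.x.x
  http_access allow mysibling
  miss_access allow mysibling
)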


  1058645715.781  4 x.x.x TCP_DENIED/403 1469 GET
  y.y.y:3128/squid-internal-dynamic/netdb - NONE/- text/html

 Is it intentional to use netdb exchanges? If not, disable them in the
 cache_peer line.

Okkies, will do that...

It's all working brilliantly now though...  My hit rate went up by an
additional 40-odd percent, so I'm quite impressed. :)

--
me



[squid-users] peering

2003-07-19 Thread Chris Knipe
Lo everyone,

I have set up two Squid servers in a parent & sibling relation.  The peering
itself seems to be set up correctly: both proxies start, and I can see via
the cache log that both proxies contact each other.

On my parent proxy however, I get constant 403's when the sibling tries to
query it.  I suspect it is an ACL that I am missing, but I'm not sure which...

1058645715.781  4 x.x.x TCP_DENIED/403 1469 GET
y.y.y:3128/squid-internal-dynamic/netdb - NONE/- text/html

x.x.x.x is my sibling proxy, plainly and simply set up with:
cache_peer y.y.y.y parent 3128 3130 default

I have given x.x.x.x ICP query access (ACL), as well as HTTP query
access.

What am I missing?




Re: [squid-users] peering

2003-07-19 Thread Schelstraete Bart
Chris,

Isn't it possible that your cache peer requires authentication, or that 
it doesn't allow your host?

rgrds,

  Bart

Chris Knipe wrote:

Lo everyone,

I have set up two Squid servers in a parent & sibling relation.  The peering
itself seems to be set up correctly: both proxies start, and I can see via
the cache log that both proxies contact each other.
On my parent proxy however, I get constant 403's when the sibling tries to
query it.  I suspect it is an ACL that I am missing, but I'm not sure which...
1058645715.781  4 x.x.x TCP_DENIED/403 1469 GET
y.y.y:3128/squid-internal-dynamic/netdb - NONE/- text/html
x.x.x.x is my sibling proxy, plainly and simply set up with:
cache_peer y.y.y.y parent 3128 3130 default
I have given x.x.x.x ICP query access (ACL), as well as HTTP query
access.
What am I missing?