Re: [squid-users] Re: Re: [squid-users] centralized storage for squid

2008-03-11 Thread Kinkie
2008/3/11 Neil Harkins [EMAIL PROTECTED]:
 F5 has some documents on how to implement consistent hashes in BIG-IP
  iRules (TCL), but I wound up writing a custom one for use in front of
  our squids that only does one checksum per request, as opposed to one
  per squid in the pool, to avoid wasting CPU cycles on the LB.

  It uses a precomputed table for the nodes, but it doesn't need to be
  recomputed when you add/remove a few; they just fit in between the
  others. I'll try to finish the write-up and submit it to devcentral soon.

You're also welcome to write about this issue in the squid wiki (be it
a link to devcentral or an article in the wiki itself)

-- 
/kinkie


Re: [squid-users] Re: Re: [squid-users] centralized storage for squid

2008-03-10 Thread Neil Harkins
F5 has some documents on how to implement consistent hashes in BIG-IP
iRules (TCL), but I wound up writing a custom one for use in front of our
squids that only does one checksum per request, as opposed to one per
squid in the pool, to avoid wasting CPU cycles on the LB.

It uses a precomputed table for the nodes, but it doesn't need to be
recomputed when you add/remove a few; they just fit in between the
others. I'll try to finish the write-up and submit it to devcentral soon.
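
A minimal sketch of that idea in Python rather than the actual iRule TCL:
a precomputed table of node points on a hash ring, with one checksum per
request. The node names, the number of points per node, and the use of MD5
are illustrative assumptions, not details of Neil's implementation.

    # Sketch only: consistent hash ring with a precomputed table.
    # Node names, point count, and MD5 are illustrative assumptions.
    import bisect
    import hashlib

    class HashRing:
        def __init__(self, nodes, points_per_node=100):
            self._keys = []   # sorted hash values (the precomputed table)
            self._nodes = []  # node owning the matching entry in _keys
            for node in nodes:
                self.add_node(node, points_per_node)

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def add_node(self, node, points_per_node=100):
            # A new node just slots its points in between the existing ones;
            # nothing already in the table has to be recomputed.
            for i in range(points_per_node):
                h = self._hash("%s#%d" % (node, i))
                idx = bisect.bisect(self._keys, h)
                self._keys.insert(idx, h)
                self._nodes.insert(idx, node)

        def remove_node(self, node):
            keep = [(h, n) for h, n in zip(self._keys, self._nodes) if n != node]
            self._keys = [h for h, _ in keep]
            self._nodes = [n for _, n in keep]

        def pick(self, url):
            # One checksum per request plus a table lookup -- not one
            # checksum per pool member.
            idx = bisect.bisect(self._keys, self._hash(url)) % len(self._keys)
            return self._nodes[idx]

    ring = HashRing(["squid1", "squid2", "squid3"])
    print(ring.pick("http://example.com/some/object"))

Because adding or removing a node only touches that node's own points, most
URLs keep mapping to the same squid when the pool changes.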

-neil

2008/3/10 Mark Nottingham [EMAIL PROTECTED]:
 This is the problem that CARP and other consistent hashing approaches
  are supposed to solve. Unfortunately, the Squid in the front will
  often be a bottleneck...
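
A rough sketch of the CARP-style selection mentioned here, assuming a simple
highest-score scheme: combine the URL hash with a per-parent hash, apply the
parent's weight, and pick the highest score. This is not Squid's exact CARP
code, and the host names and weights are made up.

    # Sketch only: CARP-like parent selection.
    import hashlib

    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def carp_pick(url, parents):
        # parents: {hostname: relative weight}
        url_hash = _h(url)
        best, best_score = None, -1.0
        for host, weight in parents.items():
            combined = _h("%s|%d" % (host, url_hash))
            # Normalise the combined hash to [0, 1), then apply the weight.
            score = (combined % 10**9) / float(10**9) * weight
            if score > best_score:
                best, best_score = host, score
        return best

    parents = {"squid1": 1.0, "squid2": 1.0, "squid3": 2.0}
    print(carp_pick("http://example.com/some/object", parents))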

  Cheers,




  On 07/03/2008, at 1:43 PM, Siu Kin LAM wrote:

   Hi Pablo
  
   Actually, that is my case.
   URL-hash is helpful for reducing duplicated objects. However, once a
   squid server is added or removed, the load balancer needs to
   recalculate the URL hashes, which causes a lot of TCP_MISS in the
   squid servers at the initial stage.
  
   Do you have the same experience?
  
   Thanks
  
  
   --- Pablo García [EMAIL PROTECTED] wrote:
  
   I dealt with the same problem using a load balancer in front of the
   cache farm, using a URL-HASH algorithm to send the same URL to the
   same cache every time. It works great, and also increases the hit
   ratio a lot.
  
   Regards, Pablo
  
   2008/3/6 Siu Kin LAM [EMAIL PROTECTED]:
   Dear all
  
   At this moment, I have several squid servers for HTTP caching. Many
   duplicated objects have been found on different servers. I would like
   to minimize data storage by installing a large centralized storage
   system and having the squid servers mount it as their data disk.
  
   Has anyone tried this before?
  
   Thanks a lot
  
  

  --
  Mark Nottingham   [EMAIL PROTECTED]





Re: [squid-users] Re: Re: [squid-users] centralized storage for squid

2008-03-07 Thread Pablo García
I have the same problem, though I don't remove my squid servers very often.
I've partially solved it thanks to the way my load balancer implements the
algorithm. What it does is calculate the hash taking into account all the
squids in the pool, whether they're up or down. If the algorithm chooses a
server that is down, the calculation simply happens again. So if one of my
squids restarts, as soon as it comes back up it receives the same URLs as
before, serving them from its disk cache instead of memory.
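
A small sketch of that behaviour: hash over the full configured pool, and only
rehash when the chosen member is down. The hostnames and the salt-based rehash
are illustrative, not the load balancer's actual algorithm.

    # Sketch only: hash over the full pool (up or down members alike),
    # and only rehash when the chosen member is down.
    import hashlib

    def pick_server(url, pool, is_up):
        # pool is the full, fixed member list, so a member that comes back
        # up immediately receives the same URLs it was serving before.
        for salt in range(4 * len(pool)):
            h = int(hashlib.md5(("%s#%d" % (url, salt)).encode()).hexdigest(), 16)
            candidate = pool[h % len(pool)]
            if is_up(candidate):
                return candidate
        raise RuntimeError("no pool member is up")

    pool = ["squid1", "squid2", "squid3"]
    print(pick_server("http://example.com/img/logo.png", pool,
                      lambda host: host != "squid2"))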

What you can also try is to link all the squids together with ICP to
create a sibling relationship between them, though I guess the
hierarchical cache scenario would help you best to reduce the load on
your web servers.
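
For the ICP sibling idea, a minimal squid.conf sketch; the hostnames are
examples, and 3128/3130 are just the conventional HTTP and ICP ports.

    # On squid1 -- peer with the other caches as ICP siblings.
    # proxy-only keeps objects fetched from a sibling from also being
    # stored locally, which avoids duplicating them across the farm.
    icp_port 3130
    cache_peer squid2.example.com sibling 3128 3130 proxy-only
    cache_peer squid3.example.com sibling 3128 3130 proxy-only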

Hope this helps,

Regards, Pablo


2008/3/7 Siu Kin LAM [EMAIL PROTECTED]:
 Hi Pablo

  Actually, that is my case.
  URL-hash is helpful for reducing duplicated objects. However, once a
  squid server is added or removed, the load balancer needs to
  recalculate the URL hashes, which causes a lot of TCP_MISS in the
  squid servers at the initial stage.

  Do you have the same experience?

  Thanks


  --- Pablo García [EMAIL PROTECTED] wrote:



   I dealt with the same problem using a load balancer in front of the
   cache farm, using a URL-HASH algorithm to send the same URL to the
   same cache every time. It works great, and also increases the hit
   ratio a lot.
  
   Regards, Pablo
  
   2008/3/6 Siu Kin LAM [EMAIL PROTECTED]:
 Dear all

 At this moment, I have several squid servers for HTTP caching. Many
 duplicated objects have been found on different servers. I would like
 to minimize data storage by installing a large centralized storage
 system and having the squid servers mount it as their data disk.

 Has anyone tried this before?

 Thanks a lot
   
   