[squid-users] Large-scale Reverse Proxy for serving images FAST

2009-03-16 Thread David Tosoff

All,

I'm new to Squid and I have been given the task of optimizing the delivery of 
photos from our website. We have 1 main active image server which serves up the 
images to the end user via 2 chained CDNs. We want to drop the middle CDN as 
it's not performing well and is a waste of money; in its stead we plan to 
place a few reverse proxy web accelerators between the primary CDN and our 
image server.

We currently receive 152 hits/sec on average, with about 550 hits/sec peak, to 
our secondary CDN from cache misses at the primary.
I would like to serve a lot of this content straight from memory to get it out 
there as fast as possible.

I've read around that there are memory and processing limitations in Squid on 
the order of 2-4GB RAM and 1 core/1 thread, respectively. So, my solution 
was to run multiple instances, as we don't have the rackspace to scale this out 
otherwise.

I've managed to build a working config of 1:1 squid:origin, but I am having 
trouble scaling this up and out.

Here is what I have attempted to do, maybe someone can point me in the right 
direction:

Current config:
User Browser -> Prim CDN -> Sec CDN -> Our Image server @ http port 80

New config idea:
User -> Prim CDN -> Squid0 @ http :80 -> round-robin to "parent" squid 
instances on same machine @ http :81, :82, etc -> Our Image server @ http :80


Squid0's (per diagram above) squid.conf:

acl Safe_ports port 80
acl PICS_DOM_COM dstdomain pics.domain.com
acl SQUID_PEERS src 127.0.0.1
http_access allow PICS_DOM_COM
icp_access allow SQUID_PEERS
miss_access allow SQUID_PEERS
http_port 80 accel defaultsite=pics.domain.com
cache_peer localhost parent 81 3130 name=imgCache1 round-robin proxy-only
cache_peer localhost parent 82 3130 name=imgCache2 round-robin proxy-only
cache_peer_access imgCache1 allow PICS_DOM_COM
cache_peer_access imgCache2 allow PICS_DOM_COM
cache_mem 8192 MB
maximum_object_size_in_memory 100 KB
cache_dir aufs /usr/local/squid0/cache 1024 16 256  -- This one isn't really 
relevant, as nothing is being cached on this instance (proxy-only)
icp_port 3130
visible_hostname pics.domain.com/0

Everything else is per the defaults in squid.conf.


"Parent" squids' (from above diagram) squid.conf:

acl Safe_ports port 81
acl PICS_DOM_COM dstdomain pics.domain.com
acl SQUID_PEERS src 127.0.0.1
http_access allow PICS_DOM_COM
icp_access allow SQUID_PEERS
miss_access allow SQUID_PEERS
http_port 81 accel defaultsite=pics.domain.com
cache_peer 192.168.0.223 parent 80 0 no-query originserver name=imgParent
cache_peer localhost sibling 82 3130 name=imgCache2 proxy-only
cache_peer_access imgParent allow PICS_DOM_COM
cache_peer_access imgCache2 allow PICS_DOM_COM
cache_mem 8192 MB
maximum_object_size_in_memory 100 KB
cache_dir aufs /usr/local/squid1/cache 10240 16 256
visible_hostname pics.domain.com/1
icp_port 3130
icp_hit_stale on

Everything else per defaults.



So, when I run this config and test I see the following happen in the logs:

>From "Squid0" I see that it resolves to grab the image from one of it's parent 
>caches. This is great! (some show as "Timeout_first_up_parent" and others as 
>just "first_up_parent")

1237253713.769 62 127.0.0.1 TCP_MISS/200 2544 GET 
http://pics.domain.com:81/thumbnails/59/78/45673695.jpg - 
TIMEOUT_FIRST_UP_PARENT/imgParent image/jpeg

From the parent cache that it resolves to, I see that it grabs the image from 
ITS parent, the originserver (our image server). Subsequent requests are 
'TCP_HIT' or a memory hit. Great stuff!

1237253713.769 62 127.0.0.1 TCP_MISS/200 2694 GET 
http://pics.domain.com/thumbnails/59/78/45673695.jpg - 
FIRST_PARENT_MISS/imgCache1 image/jpeg


Problem is, it doesn't round-robin the requests to both of my "parent" squids, 
and you end up with a very one-sided cache. If I stop the "parent" instance that 
is resolving the items, the second "parent" doesn't take over either. If I then 
restart the "Squid0" instance, it will direct the requests to the second 
"parent", but then the first won't receive any requests. So I know both 
"parent" configs work, but I must be doing something wrong somewhere, or 
is this all just a silly idea...?


Can anyone comment on the best way to run a high-traffic set of accelerator 
cache instances similar to this, or how to fix what I've tried to do? Or another 
way to put a LOT of data into a Squid instance's memory? (We have ~150 million x 
2KB images that are randomly requested.)
I'd like to see different content cached on each instance with little or no 
overlap, with round-robin deciding which Squid gets to cache an item and ICP 
determining which Squid has that item.
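
For scale, here is a back-of-envelope sketch of that working set (the ~100 
bytes/object index overhead is an assumed rule-of-thumb figure, not a measured 
one):

```python
# Rough sizing for ~150 million images averaging ~2 KB each.
objects = 150_000_000
avg_size_kb = 2
index_bytes_per_object = 100  # assumed per-object index/metadata overhead

payload_gb = objects * avg_size_kb / (1024 * 1024)
index_gb = objects * index_bytes_per_object / (1024 ** 3)

print(f"payload ~{payload_gb:.0f} GB, index ~{index_gb:.0f} GB")
# -> payload ~286 GB, index ~14 GB: far more than one box's RAM can hold,
#    which is why partitioning the cache across instances matters.
```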

I'm open to other ideas too..

Sorry for the loong email.

Thanks all!

David




Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-03-17 Thread David Tosoff

OK. Thanks Amos.

Changing the icp_port to a unique value for each instance worked. I should have 
thought of that, as all instances were on the same host (localhost/127.0.0.1) 
with the same port... duhh.
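
Concretely, the fix amounts to giving each parent its own ICP port and matching 
it in Squid0's cache_peer lines (port numbers illustrative):

```
# parent on http_port 81:
icp_port 3131
# parent on http_port 82:
icp_port 3132

# Squid0's cache_peer lines then reference the matching ICP ports:
cache_peer localhost parent 81 3131 name=imgCache1 round-robin proxy-only
cache_peer localhost parent 82 3132 name=imgCache2 round-robin proxy-only
```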

So, I have a few other questions then: we're going to scale this up to a 
single-machine, single-instance setup of 64-bit Linux and 64-bit Squid 3.0 --
 - What OS would you personally recommend running Squid 3.x on for best 
performance?
 - Is there no limit to the cache_mem we can use in squid 3? I'd be working 
with about 64GB of memory in this machine.
 - Can you elaborate on "heap replacement/garbage policy"??
 - Any other options to watch for, for optimizing memory cache usage?

Thanks again!

David


--- On Tue, 3/17/09, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Large-scale Reverse Proxy for serving images FAST
> To: dtos...@yahoo.com
> Cc: squid-users@squid-cache.org
> Received: Tuesday, March 17, 2009, 12:10 AM
> David Tosoff wrote:
> > All,
> > 
> > I'm new to Squid and I have been given the task of
> optimizing the delivery of photos from our website. We have
> 1 main active image server which serves up the images to the
> end user via 2 chained CDNs. We want to drop the middle CDN
> as it's not performing well and is a waste of money; in
> it's stead we plan to place a few reverse proxy web
> accelerators between the primary CDN and our image server.
> > 
> 
> You are aware then that a few reverse-proxy accelerators
> are in fact the definition of a CDN? So you are building
> your own instead of paying for one.
> 
> Thank you for choosing Squid.
> 
> > We currently recieve 152 hits/sec on average with
> about 550hps max to our secondary CDN from cache misses at
> the Primary.
> > I would like to serve a lot of this content straight
> from memory to get it out there as fast as possible.
> > 
> > I've read around that there are memory and
> processing limitations in Squid in the magnitude of 2-4GB
> RAM and 1 core/1 thread, respectively. So, my solution was
> to run multiple instances, as we don't have the
> rackspace to scale this out otherwise.
> > 
> 
> Memory limitations on large objects only exist in Squid-2.
> And 2-4GB RAM  issues reported recently are only due to
> 32-bit build + 32-bit hardware.
> 
> Your 8GB cache_mem settings below and stated object size
> show these are not problems for your Squid.
> 
> 152 req/sec is not enough to raise the CPU temperature with
> Squid, 550 might be noticeable but not a problem. 2700
> req/sec has been measured in accelerator Squid-2.6 on a
> 2.6GHz dual-core CPU and more performance improvements have
> been added since then.
> 
> 
> > I've managed to build a working config of 1:1
> squid:origin, but I am having trouble scaling this up and
> out.
> > 
> > Here is what I have attempted to do, maybe someone can
> point me in the right direction:
> > 
> > Current config:
> > User Browser -> Prim CDN -> Sec CDN -> Our
> Image server @ http port 80
> > 
> > New config idea:
> > User -> Prim CDN -> Squid0 @ http :80 ->
> round-robin to "parent" squid instances on same
> machine @ http :81, :82, etc -> Our Image server @ http
> :80
> > 
> > 
> > Squid0's (per diagram above) squid.conf:
> > 
> > acl Safe_ports port 80
> > acl PICS_DOM_COM dstdomain pics.domain.com
> > acl SQUID_PEERS src 127.0.0.1
> > http_access allow PICS_DOM_COM
> > icp_access allow SQUID_PEERS
> > miss_access allow SQUID_PEERS
> > http_port 80 accel defaultsite=pics.domain.com
> > cache_peer localhost parent 81 3130 name=imgCache1
> round-robin proxy-only
> > cache_peer localhost parent 82 3130 name=imgCache2
> round-robin proxy-only
> > cache_peer_access imgCache1 allow PICS_DOM_COM
> > cache_peer_access imgCache2 allow PICS_DOM_COM
> > cache_mem 8192 MB
> > maximum_object_size_in_memory 100 KB
> > cache_dir aufs /usr/local/squid0/cache 1024 16 256  --
> This one isn't really relevant, as nothing is being
> cached on this instance (proxy-only)
> > icp_port 3130
> > visible_hostname pics.domain.com/0
> > 
> > Everything else is per the defaults in squid.conf.
> > 
> > 
> > "Parent" squids' (from above diagram)
> squid.conf:
> > 
> > acl Safe_ports port 81
> > acl PICS_DOM_COM dstdomain pics.domain.com
> > acl SQUID_PEERS src 127.0.0.1
> > http_access allow PICS_DOM_COM
> > icp_access allow SQUID_PEERS
> > miss_access allow SQUID_PEERS
> > http_port 81 accel defaultsite=pics.domain.com
> > c

Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-04-01 Thread David Tosoff


Thanks Chris & Amos for your comments thus far.

I've finally located a machine I can place this "Memory-only" squid on. I've 
got a 32GB, AMD 64-bit, blah blah.

Anyway, since I'm a bit of a Linux n00b, I was asking the OS question even 
after having read the wiki and postings about this topic. For me, the OS I use 
doesn't matter from a comfort/familiarity standpoint, as it's all fairly 
new to me anyway. The only requirements are that it's 64-bit and will work with 
my 32GB of RAM.

I was thinking of using Fedora, CentOS, or Ubuntu 64-bit editions. What do you 
think will be the easiest OS to compile & run a 64-bit version of Squid on?

That leads me to my next question... How DO I compile or get a binary of 64-bit 
Squid 3.0 STABLE13? The few sources and binaries I've seen don't differentiate 
between 32 & 64. I've downloaded the 3.0 STABLE13 tar.gz, but I have no idea 
how to go about compiling it to run as 64-bit.

Once I know this, I think I'll be all set.

Any help would be very much appreciated.

Thanks all!!

David

--- On Tue, 3/17/09, Chris Robertson  wrote:

> From: Chris Robertson 
> Subject: Re: [squid-users] Large-scale Reverse Proxy for serving images FAST
> To: squid-users@squid-cache.org
> Received: Tuesday, March 17, 2009, 1:21 PM
> David Tosoff wrote:
> > OK. Thanks Amos.
> > 
> > Changing up the icp_port to a unique for each instance
> worked. I should have thought about that as all instances
> were on the same host (localhost/127.0.0.1) w/ the same
> port... duhh.
> > 
> > So, I have a few other questions then: We're going
> to scale this up to a single-machine single-instance of 64
> linux and 64 squid 3.0 --
> >  - What OS would you personally recommend running
> Squid 3.x on for best performance?
> >   
> 
> This space intentionally left blank.
> 
> >  - Is there no limit to the cache_mem we can use in
> squid 3? I'd be working with about 64GB of memory in
> this machine.
> >   
> 
> Of course there's a limit.  You just aren't likely
> to hit it with the hardware you are using.  Of course, as of
> Q3 2007 here's the official answer:
> http://www.squid-cache.org/mail-archive/squid-users/200709/0559.html
> 
> >  - Can you elaborate on "heap replacement/garbage
> policy"??
> >   
> 
> http://www.squid-cache.org/Doc/config/cache_replacement_policy/
> and
> http://www.squid-cache.org/Doc/config/memory_replacement_policy/
> (The second link references the first, but would be the more
> relevant directive if you are going to be using a
> memory-only Squid).
> 
> >  - Any other options to watch for, for optimizing
> memory cache usage?
> >   
> 
> http://www.squid-cache.org/Doc/config/memory_pools_limit/
> 
> > Thanks again!
> > 
> > David
> >   
> 
> Chris




Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-04-01 Thread David Tosoff

OK. Thanks Chris. I'll give your suggestions a go! I guess I'll go for CentOS 
or Ubuntu and try compiling on them.

Cheers,

DT


--- On Wed, 4/1/09, Chris Robertson  wrote:

> From: Chris Robertson 
> Subject: Re: [squid-users] Large-scale Reverse Proxy for serving images FAST
> To: squid-users@squid-cache.org
> Received: Wednesday, April 1, 2009, 12:25 PM
> David Tosoff wrote:
> > Thanks Chris & Amos for your comments thus far.
> > 
> > I've finally located a machine I can place this
> "Memory-only" squid on. I've got a 32GB, AMD
> 64-bit, blah blah.
> > 
> > Anyway, since I'm a bit of a linux n00b, I was
> asking the OS question even after having read the wiki and
> postings about this topic. For me, the OS i use doesn't
> matter from a comfortability/familiarity standpoint, as
> it's all fairly new to me anyway. The only requirements
> are that it's 64-bit and will work with my 32GB of RAM.
> > 
> > I was thinking of using fedora, centOS, or ubuntu
> 64-bit editions. What do you think will be the easiest OS to
> compile & run a 64-bit version of Squid on?
> >   
> 
> Fedora runs on a 6 month release cycle with support for the
> current + last release(1).  Perhaps a poor choice for a
> server.  CentOS is a clone of RHEL, which has a seven year,
> multi-phased support cycle(2).   The first four years
> include hardware upgrades with bug fixes and security
> patches.  The fifth year has limited new hardware support,
> bug fixes and security patches.  The last two years are
> exclusively bug fixes and security patches.  Ubuntu Long
> Term Support version (LTS) offers up a 5 year support cycle
> with "Seamless upgrade from one LTS to the
> other"(3,4).
> 
> > That leads me to my next question... How DO I compile
> or get a binary of 64-bit squid 3.0 stable13? The few source
> and binaries i've seen don't differentiate between
> 32 & 64.
> 
> The source won't differentiate (as the same code can be
> compiled into a 32 or 64 bit binary), but binaries should. 
> Usually, the default is 32 bit, with special marking for 64
> bit binaries.
> 
> >  I've dowmloaded the 3.0 Stable13 tar.gz, but I
> have no idea how to go about compiling it to run as 64-bit.
> >   
> 
> The simplest method is to compile it on the system it's
> intended to run on using the distribution supplied tools.
> 
> > Once I know this, I think i'll be all set.
> > 
> > Any help would be very much appreciated.
> > 
> > Thanks all!!
> > 
> > David
> >   
> 
> Chris
> 
> 1. http://fedoraproject.org/wiki/LifeCycle
> 2. http://markmail.org/message/vi2xbxms6tcmm3cd
> 3.
> http://www.ubuntu.com/products/whatisubuntu/serveredition/benefits/lifecycle
> 4. http://www.ubuntu.com/products/ubuntu/release-cycle
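
Chris's point about using the distribution's own tools can be sanity-checked 
with a tiny probe (a sketch; `cc` and `file` are assumed to be installed):

```shell
# Quick probe: does the host toolchain emit 64-bit binaries by default?
# (If so, a plain ./configure && make of the Squid tarball will too.)
printf 'int main(void){return 0;}\n' > conftest.c
cc conftest.c -o conftest
file conftest    # on a 64-bit distro this reports e.g. "ELF 64-bit LSB executable"
rm -f conftest conftest.c
```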




Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-04-02 Thread David Tosoff

I have 1 more question:

I noticed in the 3.0 STABLE13 release notes that there is support for 
compiling the code in native Windows format... Has anyone done this 
successfully? And can it compile to 64-bit to use my 32GB of memory? I've 
looked at Guido S's version, but it's 32-bit. :(

Ideally, running on Windows would be better for me, as it's my comfort zone. As 
I said... I'm a Linux n00b.

Thanks all!

David


--- On Wed, 4/1/09, Chris Robertson  wrote:

> From: Chris Robertson 
> Subject: Re: [squid-users] Large-scale Reverse Proxy for serving images FAST
> To: squid-users@squid-cache.org
> Received: Wednesday, April 1, 2009, 12:25 PM
> David Tosoff wrote:
> > Thanks Chris & Amos for your comments thus far.
> > 
> > I've finally located a machine I can place this
> "Memory-only" squid on. I've got a 32GB, AMD
> 64-bit, blah blah.
> > 
> > Anyway, since I'm a bit of a linux n00b, I was
> asking the OS question even after having read the wiki and
> postings about this topic. For me, the OS i use doesn't
> matter from a comfortability/familiarity standpoint, as
> it's all fairly new to me anyway. The only requirements
> are that it's 64-bit and will work with my 32GB of RAM.
> > 
> > I was thinking of using fedora, centOS, or ubuntu
> 64-bit editions. What do you think will be the easiest OS to
> compile & run a 64-bit version of Squid on?
> >   
> 
> Fedora runs on a 6 month release cycle with support for the
> current + last release(1).  Perhaps a poor choice for a
> server.  CentOS is a clone of RHEL, which has a seven year,
> multi-phased support cycle(2).   The first four years
> include hardware upgrades with bug fixes and security
> patches.  The fifth year has limited new hardware support,
> bug fixes and security patches.  The last two years are
> exclusively bug fixes and security patches.  Ubuntu Long
> Term Support version (LTS) offers up a 5 year support cycle
> with "Seamless upgrade from one LTS to the
> other"(3,4).
> 
> > That leads me to my next question... How DO I compile
> or get a binary of 64-bit squid 3.0 stable13? The few source
> and binaries i've seen don't differentiate between
> 32 & 64.
> 
> The source won't differentiate (as the same code can be
> compiled into a 32 or 64 bit binary), but binaries should. 
> Usually, the default is 32 bit, with special marking for 64
> bit binaries.
> 
> >  I've dowmloaded the 3.0 Stable13 tar.gz, but I
> have no idea how to go about compiling it to run as 64-bit.
> >   
> 
> The simplest method is to compile it on the system it's
> intended to run on using the distribution supplied tools.
> 
> > Once I know this, I think i'll be all set.
> > 
> > Any help would be very much appreciated.
> > 
> > Thanks all!!
> > 
> > David
> >   
> 
> Chris
> 
> 1. http://fedoraproject.org/wiki/LifeCycle
> 2. http://markmail.org/message/vi2xbxms6tcmm3cd
> 3.
> http://www.ubuntu.com/products/whatisubuntu/serveredition/benefits/lifecycle
> 4. http://www.ubuntu.com/products/ubuntu/release-cycle




[squid-users] Memory-only Squid questions

2009-04-04 Thread David Tosoff

Hi all, I've got a 64-bit reverse-proxy Squid running on Ubuntu as of yesterday 
(3.0 STABLE13). I've got it configured how I want it, as far as I can tell.

I've compiled the cache_dir null type in, and configured it as "cache_dir null 
/tmp", as I do not want to cache to disk on this machine at all.
I want all the data to be cached in memory, but all I'm seeing in my access.log 
is TCP_IMS_HIT & TCP_MISS. No TCP_MEM_HIT for anything. Now, I know TCP_IMS_HIT 
is ambiguous and can indicate IMS_HIT from memory or disk, but in this case 
where I am "cache_dir null /tmp", are these IMS_HIT coming from memory, or am I 
misunderstanding the 'null' type?
Also, the items that are receiving TCP_MISS are items that should be MEM_HITs 
all the time, as they are loaded with EVERY page load (static menu, page 
structure images, etc). This concerns me, as with a ufs or aufs cache_dir, these 
items hit from memory.
Also, in store.log, I'm only seeing RELEASE & SO_FAIL.

I do see memory usage increasing when watching "top", which is good. I'm just 
curious if these indicators in the logs are something to be concerned about in 
this type of config.

NOTABLE CONFIG OPTIONS:
cache_mem 28672 MB
maximum_object_size_in_memory 150 KB   -- My objects are between 2KB - 20KB 
(so, 150 KB is a limit that should likely never be reached)
memory_replacement_policy heap GDSF
cache_dir null /tmp


Is this the best way to go about a memory only cache? I've seen a few posts re: 
using RAM disks for cache_dir instead...
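
For reference, the RAM-disk variant mentioned above would look roughly like 
this (mount point and sizes are illustrative, not tested here):

```
# tmpfs-backed cache_dir instead of the null store (sketch):
#   mount -t tmpfs -o size=24g tmpfs /var/cache/squid-ram
cache_dir aufs /var/cache/squid-ram 20480 16 256
```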

Thanks All,

David




[squid-users] ...Memory-only Squid questions

2009-04-06 Thread David Tosoff

Hey all, haven't heard anything on this and could really use some help. :)

You can disregard the HIT related questions, as once I placed this into a full 
scale test, it started hitting from memory wonderfully (~40% offload from the 
origin)

The config works great, to a point. It fills up memory, but keeps going 
way past the cache_mem that I set. I've dropped it down to 24GB, but it chews 
up all the memory on the system (32GB) and then continues into swap and chews 
that up too. At that point, Squid hangs, crashes, then reloads, and the cache 
has to spend another few hours building everything up into memory again. 
Like I said, though, it works great... until the mem is full... 
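
That behaviour lines up with the FAQ link Chris gives below: cache_mem caps only 
cached object payload, while the store index, in-transit objects, and allocator 
overhead come on top. A rough budget (the per-object overhead is an assumed 
figure, for illustration only):

```python
# cache_mem bounds cached payload only; index/metadata comes on top.
cache_mem_gb = 24
avg_object_kb = 2
overhead_bytes_per_object = 500  # assumed StoreEntry/MemObject/index cost

objects = cache_mem_gb * 1024 * 1024 // avg_object_kb  # objects fitting in cache_mem
overhead_gb = objects * overhead_bytes_per_object / (1024 ** 3)

print(f"{objects:,} objects -> ~{overhead_gb:.1f} GB overhead beyond cache_mem")
# With ~2 KB objects the per-object overhead alone adds several GB, before
# in-transit buffers and heap fragmentation, so total resident size can run
# well past cache_mem.
```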

I'm now going to test with a 4GB cache_mem and see what she does.

Can anyone offer any suggestions for the best, most stable way of running a 
memory-only cache? Is 'cache_dir null /tmp' actually what I want to be using 
here? The SO_FAILs concern me, but I'm not sure if they should.

Thanks!

David


--- On Sat, 4/4/09, David Tosoff  wrote:

> From: David Tosoff 
> Subject: [squid-users] Memory-only Squid questions
> To: squid-users@squid-cache.org
> Received: Saturday, April 4, 2009, 3:04 PM
> Hi all, I've got a 64-bit Reverse-proxy squid running on
> Ubuntu as of yesterday (3.0Stable13). I've got it
> configured how I want it as far as I can tell.
> 
> I've compiled the cache_dir null type in, and
> configured it as "cache_dir null /tmp", as I do
> not want to cache to disk on this machine at all.
> I want all the data to be cached in memory, but all I'm
> seeing in my access.log is TCP_IMS_HIT & TCP_MISS. No
> TCP_MEM_HIT for anything. Now, I know TCP_IMS_HIT is
> ambiguous and can indicate IMS_HIT from memory or disk, but
> in this case where I am "cache_dir null /tmp", are
> these IMS_HIT coming from memory, or am I misunderstanding
> the 'null' type?
> Also, the items that are recieving TCP_MISS are items that
> should be in MEM_HIT all the time, as they are loaded with
> EVERY page load (static menu, page structure images, etc).
> This concerns me, as in a ufs or aufs cache_dir, these items
> hit from memory
> Also, in store.log, I'm only seeing RELEASE &
> SO_FAIL.
> 
> I do see memory usage increasing when watching
> "top", which is good. I'm just curious if
> these indicators in the logs are something to be concerned
> about in this type of config.
> 
> NOTEABLE CONFIG OPTIONS:
> cache_mem 28672 MB
> maximum_object_size_in_memory 150 KB   -- My objects are
> between 2KB - 20KB (so, 150 is a limit that likely should
> never be reached)
> memory_replacement_policy heap GDSF
> cache_dir null /tmp
> 
> 
> Is this the best way to go about a memory only cache?
> I've seen a few posts re: using RAM disks for cache_dir
> instead...
> 
> Thanks All,
> 
> David
> 
> 




Re: [squid-users] ...Memory-only Squid questions

2009-04-06 Thread David Tosoff

Thanks Chris.

I had already read both the wiki post and the thread you directed me to 
before I posted this to the group.

I had already compiled heap support into my Squid before this issue happened. I 
am using heap GDSF. And I wasn't able to find "--enable-heap-replacement" as a 
compile option in './configure --help'... perhaps it's deprecated? Is it still 
a valid compile option for 3.0 STABLE13?

In any event, a gentleman named Gregori Parker responded and helped me with 
some suggestions, and I've managed to stabilize the Squid at ~20480 MB cache_mem.

The only thing I seem to be missing now is the SO_FAIL issue.
Correct me if I'm wrong, but I assume 'SO' stands for 'Swap Out'... but how 
does this apply to a system where there is nowhere for Squid to swap out to 
(cache_dir null /tmp)...?

Thanks for all your help so far.

Cheers,

David

--- On Mon, 4/6/09, Chris Robertson  wrote:

> From: Chris Robertson 
> Subject: Re: [squid-users] ...Memory-only Squid questions
> To: squid-users@squid-cache.org
> Received: Monday, April 6, 2009, 4:56 PM
> David Tosoff wrote:
> > Hey all, haven't heard anything on this and could
> really use some help. :)
> > 
> > You can disregard the HIT related questions, as once I
> placed this into a full scale test, it started hitting from
> memory wonderfully (~40% offload from the origin)
> >   
> 
> Good news...
> 
> > The config works great, to a point. It fills up my
> memory up, but keeps going way past the
> "cache_mem" that I set.
> 
> http://wiki.squid-cache.org/SquidFaq/SquidMemory
> 
> >  I've dropped it down to 24GB, but it chews up all
> the memory on the system (32GB) and then continues into the
> swap and chews that up too. At that point, squid hangs,
> crashes then reloads and the cache has to spend another few
> hours building everything up into memory again. Like I said
> though, it works great...until the mem is full... 
> > I'm now going to test with a 4GB cache_mem and see
> what she does.
> > 
> > Can anyone offer any suggestions for the best, most
> stable way of running a memory-only cache? is 'cache_dir
> null /tmp' actually what I want to be using here?
> 
> Yes.
> 
> >  The SO_FAIL's concern me, but I'm not sure if
> they should?
> >   
> 
> Perhaps
> http://www.mail-archive.com/squid-users@squid-cache.org/msg19824.html
> gives some insight.  Are you using a
> (cache|memory)_replacement_policy that you didn't
> compile support for?
> 
> > Thanks!
> > 
> > David
> 
> Chris

