Can you pastebin the script here:
http://www1.ngtech.co.il/paste/

Just to put it before the eyes of the public..

Eliezer

On 6/5/2013 2:48 AM, Ricardo Klein wrote:
I think the problem is that the squid init script from Fedora/CentOS has
a shorter timeout than needed and fails to kill all squid processes (and
to clean the pid files under /var/run/squid/), so when squid comes back
up there are already some processes running, causing those errors.

I have made some changes to the squid init script and we are testing them.
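
For illustration, a minimal sketch of the kind of change I mean, in the stop()
part of the init script; the SQUID_SHUTDOWN_TIMEOUT default, the pid file paths
and the assumption that every worker shows up under the process name "squid"
are taken from a stock CentOS layout and may not match what we actually deployed:

stop() {
    # ask squid to shut down gracefully
    /usr/sbin/squid -k shutdown -f /etc/squid/squid.conf
    # wait up to SQUID_SHUTDOWN_TIMEOUT seconds (assumed default: 100) for
    # every squid process, including the SMP kids, to exit
    timeout=${SQUID_SHUTDOWN_TIMEOUT:-100}
    while pgrep -x squid >/dev/null && [ "$timeout" -gt 0 ]; do
        sleep 1
        timeout=$((timeout - 1))
    done
    # anything still alive here would leave stale pid files and shared
    # memory segments behind, so force it off before declaring stop done
    if pgrep -x squid >/dev/null; then
        pkill -9 -x squid
    fi
    # remove stale pid files so the next start does not see phantom kids
    rm -f /var/run/squid.pid /var/run/squid/*.pid
}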
--
Att...

Ricardo Felipe Klein
klein....@gmail.com


On Tue, Jun 4, 2013 at 12:03 PM, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
On 06/04/2013 06:15 AM, Ricardo Klein wrote:

About having more than one rock store, I don't know; I may have gotten
confused when reading about SMP and cache_dir types other
than "rock",

The primary reason to use multiple rock cache_dirs is to utilize
multiple hard drives (i.e., multiple disk spindles) without RAID and
similar overheads.
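
For example (hypothetical mount points; the point is one rock cache_dir
per physical drive rather than several directories on the same drive):

# each directory below is assumed to be a mount point for a separate physical disk
cache_dir rock /cache/disk1 4096 max-size=31000
cache_dir rock /cache/disk2 4096 max-size=31000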


maybe THAT is what is generating the "FATAL:
Ipc::Mem::Segment::open failed to
shm_open(/squid-squid-page-pool.shm): (2) No such file or directory"
errors...

Multiple cache_dirs should not cause FATAL errors. If they do, it is a bug.


HTH,

Alex.

cache_mem 2048 MB
workers 6
cache_dir rock /var/spool/squid/cache1 4096 max-size=31000 swap-timeout=1000 max-swap-rate=100
cache_dir rock /var/spool/squid/cache2 4096 max-size=31000 swap-timeout=1000 max-swap-rate=100
cache_dir rock /var/spool/squid/cache3 4096 max-size=31000 swap-timeout=1000 max-swap-rate=100
cache_dir rock /var/spool/squid/cache4 4096 max-size=31000 swap-timeout=1000 max-swap-rate=100
cache_dir rock /var/spool/squid/cache5 4096 max-size=31000 swap-timeout=1000 max-swap-rate=100
cache_dir rock /var/spool/squid/cache6 4096 max-size=31000 swap-timeout=1000 max-swap-rate=100

