Re: [squid-users] Oom-killer and Squid

2007-03-16 Thread Henrik Nordstrom
Fri 2007-03-16 at 17:05 +0100, Matus UHLAR - fantomas wrote:
> On 14.03.07 13:07, Dave Rhodes wrote:
> > Have you had a chance to look at the file I sent you a few days ago?
> 
> I wonder why you sent it to Henrik.

Because I asked him to collect certain large-volume information about
his problem.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-16 Thread Dave Rhodes
Matus,
I was having difficulty getting the file onto the mailing list because
of its size.  Henrik and I had already begun a dialog, so I sent the
file to him.

Anyway, I think Henrik solved it.  The early releases of the HTTP 1.1
patch caused a memory leak.  I've upgraded to 2.6.STABLE10 and I'm
monitoring.

I'll let you know.
Thanks,
Dave

-Original Message-
From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 16, 2007 12:06 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Oom-killer and Squid


On 14.03.07 13:07, Dave Rhodes wrote:
> Have you had a chance to look at the file I sent you a few days ago?

I wonder why you sent it to Henrik.

> It looks like the mem_node is growing and never stops.

What are your cache_mem and cache_dir settings? How much memory do you
have in the machine, and what's the architecture?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
2B|!2B, that's a question!


Re: [squid-users] Oom-killer and Squid

2007-03-16 Thread Matus UHLAR - fantomas
On 14.03.07 13:07, Dave Rhodes wrote:
> Have you had a chance to look at the file I sent you a few days ago?

I wonder why you sent it to Henrik.

> It looks like the mem_node is growing and never stops.

What are your cache_mem and cache_dir settings? How much memory do you
have in the machine, and what's the architecture?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
2B|!2B, that's a question!


RE: [squid-users] Oom-killer and Squid

2007-03-15 Thread Dave Rhodes
Chris,
Thanks! I'll certainly be switching to aufs before this goes live and
reducing the number of dirs.  Most of what I read about aufs was about
dual CPUs, and nothing really about performance otherwise.  I'll also
reduce the size of the cache; I didn't know about the 20% performance
issue.  It's a 70GB drive, so I should probably go to about 55GB.
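
A minimal sketch of what that revised line might look like, using the
rough rule of thumb from the post Chris links (L1 directories = cache
size in MB / 500); the exact values are illustrative, not from the
thread:

  # ~55GB aufs cache; 55000 / 500 = 110 first-level directories
  cache_dir aufs /cache/normal 55000 110 256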
Dave

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 15, 2007 6:19 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Oom-killer and Squid


Dave Rhodes wrote:
> Thanks for the reply Henrik,
> The settings:
>
> cache_mem 1 GB
> cache_dir ufs /cache/normal 60000 9600 256

Off the subject of the original problem, are you REALLY running a 60GB
cache dir with ufs (as opposed to aufs)?  I guess it's not so much the
size of the directory as the number of requests per second, but it seems
to me (unless you are caching some pretty big objects) the two would be
a little bit related...

> I'm not sure about the cache_dir stuff, didn't know if it was better
> to have a lot of small dirs or a few large ones, I think I pulled this
> setting from someone setting up a cache about the same size as mine in
> the archives.

From the Squid master himself:
http://www.squid-cache.org/mail-archive/squid-users/200701/0433.html

What you have looks a bit off from the suggested values (60000 / 500 =
120)...

> I think 60000 is 60000MB, or 60GB?
> Dave

Chris


Re: [squid-users] Oom-killer and Squid

2007-03-15 Thread Chris Robertson

Dave Rhodes wrote:

Thanks for the reply Henrik,
The settings:

cache_mem 1 GB
cache_dir ufs /cache/normal 60000 9600 256


Off the subject of the original problem, are you REALLY running a 60GB
cache dir with ufs (as opposed to aufs)?  I guess it's not so much the
size of the directory as the number of requests per second, but it seems
to me (unless you are caching some pretty big objects) the two would be
a little bit related...



I'm not sure about the cache_dir stuff, didn't know if it was better to
have a lot of small dirs or a few large ones, I think I pulled this
setting from someone setting up a cache about the same size as mine in
the archives.


From the Squid master himself:
http://www.squid-cache.org/mail-archive/squid-users/200701/0433.html

What you have looks a bit off from the suggested values (60000 / 500 =
120)...


I think 60000 is 60000MB, or 60GB?
Dave


Chris


RE: [squid-users] Oom-killer and Squid

2007-03-15 Thread Henrik Nordstrom
Thu 2007-03-15 at 10:26 -0400, Dave Rhodes wrote:
> Henrik,
> I thought so but just wanted to be sure.  Were there any other changes
> in the daily that I should look for that may impact the system?

The complete list of changes is found in the same directory:
http://www.squid-cache.org/Versions/v2/2.6/changesets/

The patch for Bug #1915 is the only significant one since 2.6.STABLE10,
and the reason for the pretty quick 2.6.STABLE11 release. The rest is
mostly minor cleanups.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-15 Thread Dave Rhodes
Henrik,
I thought so but just wanted to be sure.  Were there any other changes
in the daily that I should look for that may impact the system?
Thanks,
Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 14, 2007 7:42 PM
To: Dave Rhodes
Cc: Squid Users
Subject: RE: [squid-users] Oom-killer and Squid


Wed 2007-03-14 at 19:27 -0400, Dave Rhodes wrote:
> OK Henrik, thanks.  I just finished the upgrade to the daily release. 
> I hope that one didn't need the patch?

The daily naturally includes the patch.

Regards
Henrik




RE: [squid-users] Oom-killer and Squid

2007-03-14 Thread Henrik Nordstrom
Wed 2007-03-14 at 19:27 -0400, Dave Rhodes wrote:
> OK Henrik, thanks.  I just finished the upgrade to the daily release. I
> hope that one didn't need the patch?

The daily naturally includes the patch.

Regards
Henrik




signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-14 Thread Dave Rhodes
OK Henrik, thanks.  I just finished the upgrade to the daily release. I
hope that one didn't need the patch?

I'll let you know how it goes.
Thanks again,
Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 14, 2007 6:09 PM
To: Dave Rhodes
Cc: Squid Users
Subject: RE: [squid-users] Oom-killer and Squid


Wed 2007-03-14 at 17:20 -0400, Dave Rhodes wrote:
> Henrik,
> The HTTP 1.1 patch is applied and it's 2.6 stable5.  The only odd 
> thing is that I am running IWSS on the same server.

Ok. That explains a lot. Early http11 patches had some problems,
including a very noticeable FwdState leak...

Upgrade to 2.6.STABLE10 +
http://www.squid-cache.org/Versions/v2/2.6/changesets/11323.patch, or a
current nightly 2.6 release (or 2.6.STABLE11 when released in a few
days) and the problem should be gone. 2.6.STABLE10 and later includes
the HTTP/1.1 dechunking. The above patch fixes a critical error in the
2.6.STABLE10 release.

If you use the http11 patch for purposes other than dechunking of
broken servers, then please let me know. Quite likely you will then
need Squid-2.HEAD + the current http11 patch.

Regards
Henrik


RE: [squid-users] Oom-killer and Squid

2007-03-14 Thread Henrik Nordstrom
Wed 2007-03-14 at 17:20 -0400, Dave Rhodes wrote:
> Henrik,
> The HTTP 1.1 patch is applied and it's 2.6 stable5.  The only odd thing is 
> that I am running IWSS on the same server.

Ok. That explains a lot. Early http11 patches had some problems,
including a very noticeable FwdState leak...

Upgrade to 2.6.STABLE10 +
http://www.squid-cache.org/Versions/v2/2.6/changesets/11323.patch, or a
current nightly 2.6 release (or 2.6.STABLE11 when released in a few
days) and the problem should be gone. 2.6.STABLE10 and later includes
the HTTP/1.1 dechunking. The above patch fixes a critical error in the
2.6.STABLE10 release.
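
In practice that upgrade looks something like the sketch below; the
tarball URL and the patch level are assumptions, not stated in the
thread:

  # fetch and unpack the release (URL assumed from the site layout)
  wget http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE10.tar.gz
  tar xzf squid-2.6.STABLE10.tar.gz
  cd squid-2.6.STABLE10
  # apply the critical fix on top of STABLE10
  wget http://www.squid-cache.org/Versions/v2/2.6/changesets/11323.patch
  patch -p0 < 11323.patch   # try -p1 if -p0 does not apply
  ./configure && make && make install   # use your usual configure options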

If you use the http11 patch for purposes other than dechunking of
broken servers, then please let me know. Quite likely you will then
need Squid-2.HEAD + the current http11 patch.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-14 Thread Dave Rhodes
Henrik,
The HTTP 1.1 patch is applied and it's 2.6 stable5.  The only odd thing is that 
I am running IWSS on the same server.

Here are the squid.conf entries:

cache_peer 127.0.0.1 parent 8080 0 no-query default
acl our_networks src 10.0.0.0/8
acl numconn maxconn 100
http_access deny our_networks numconn
http_access allow our_networks
http_access deny all
never_direct allow our_networks

Also, I'm using file lists to block some IPs:

acl DSBlock dst "/usr/local/squid/etc/dshieldblock.conf"
http_access deny DSBlock
acl BlockIP dst "/usr/local/squid/etc/blockip.conf"
http_access deny BlockIP
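
For reference, a dst ACL file like those holds one destination IP
address or network per line; an illustrative blockip.conf (addresses
invented, not from the thread) might contain:

  192.0.2.15
  203.0.113.0/24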

Otherwise, it's pretty vanilla.
Thanks,
Dave


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 14, 2007 4:41 PM
To: Dave Rhodes
Cc: Squid Users
Subject: RE: [squid-users] Oom-killer and Squid


Mon 2007-03-12 at 17:33 -0400, Dave Rhodes wrote:
> Henrik,
> The Squid proc continues to grow.  I ran the memory monitor between 
> 1.5 and 1.7GB.  Hopefully that's enough.
> 
> Let me know if you see anything interesting in there or if there is 
> something specific I can look for.

Lots of things are growing which should not be growing.

Most interesting is that FwdState is growing a lot. This should be a
fairly small number, as each FwdState represents a request currently
being processed. The rest is probably related to this.

Which Squid version is this? (You probably told me already, but please remind me.)

Standard release, or any patches applied?

Any odd configuration?

Regards
Henrik


RE: [squid-users] Oom-killer and Squid

2007-03-14 Thread Henrik Nordstrom
Mon 2007-03-12 at 17:33 -0400, Dave Rhodes wrote:
> Henrik,
> The Squid proc continues to grow.  I ran the memory monitor between 1.5
> and 1.7GB.  Hopefully that's enough.
> 
> Let me know if you see anything interesting in there or if there is
> something specific I can look for.

Lots of things are growing which should not be growing.

Most interesting is that FwdState is growing a lot. This should be a
fairly small number, as each FwdState represents a request currently
being processed. The rest is probably related to this.

Which Squid version is this? (You probably told me already, but please
remind me.)

Standard release, or any patches applied?

Any odd configuration?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-14 Thread Dave Rhodes
Henrik,
Have you had a chance to look at the file I sent you a few days ago?  It
looks like the mem_node count is growing and never stops.  I've searched
the archives and found others with the same problem, but no solution.
Right now I'm restarting Squid every other day to clean it up and keep
from running out of memory, but that won't work if I add more users.  I'm
at about 30 now, but I'm looking at a few thousand if I get this working.
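
A scheduled restart like that is typically driven from cron; an
illustrative crontab entry (the init script path is an assumption, not
from the thread):

  # restart Squid at 04:00 every second day to reclaim leaked memory
  0 4 */2 * * /etc/init.d/squid restart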

The glibc is 2.4 on SuSE Linux 10.1.
Thanks,
Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Saturday, March 10, 2007 6:41 AM
To: Dave Rhodes
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Oom-killer and Squid


Fri 2007-03-09 at 16:23 -0500, Dave Rhodes wrote:

> Last night the squid proc grew to over 2GB so I lowered the cache_mem 
> and restarted squid to free the 2GB.  The current size is 765MB and 
> I'm hoping it will level off at just over 1GB.  If it continues to 
> grow beyond 2GB then I'm guessing it's the malloc issue.

More likely it's a problem of some other kind. The glibc malloc is
quite stable for most workloads.

So monitor the memory utilization page using cachemgr. 


  #!/bin/sh
  # poll Squid's cachemgr memory page every 5 minutes
  while sleep 300; do
      /usr/local/squid/bin/squidclient mgr:mem
  done

save the output to a file.

Regards
Henrik


RE: [squid-users] Oom-killer and Squid

2007-03-10 Thread Henrik Nordstrom
Fri 2007-03-09 at 16:23 -0500, Dave Rhodes wrote:

> Last night the squid proc grew to over 2GB so I lowered the cache_mem
> and restarted squid to free the 2GB.  The current size is 765MB and I'm
> hoping it will level off at just over 1GB.  If it continues to grow
> beyond 2GB then I'm guessing it's the malloc issue.  

More likely it's a problem of some other kind. The glibc malloc is
quite stable for most workloads.

So monitor the memory utilization page using cachemgr. 


  #!/bin/sh
  # poll Squid's cachemgr memory page every 5 minutes
  while sleep 300; do
      /usr/local/squid/bin/squidclient mgr:mem
  done

save the output to a file.
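
For example, assuming the script is saved as squid-mem-watch.sh (the
name and log path are illustrative):

  nohup ./squid-mem-watch.sh >> /var/log/squid-mem.log 2>&1 &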

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-09 Thread Dave Rhodes
Thanks Adrian, I'll give them a shot.
Dave

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 09, 2007 7:48 PM
To: Dave Rhodes
Cc: Henrik Nordstrom; squid-users@squid-cache.org
Subject: Re: [squid-users] Oom-killer and Squid


On Fri, Mar 09, 2007, Dave Rhodes wrote:
> Henrik,
> Sorry for the late reply, I've changed the cache_mem to 250MB and I am
> monitoring the growth.  The server does have 6GB of RAM.  The swap is
> small based on SuSE recommendations, but I can add swap if necessary.
> 
> Last night the squid proc grew to over 2GB so I lowered the cache_mem 
> and restarted squid to free the 2GB.  The current size is 765MB and 
> I'm hoping it will level off at just over 1GB.  If it continues to 
> grow beyond 2GB then I'm guessing it's the malloc issue.
> 
> "dlmalloc" isn't an option, as I understand it, as it is limited to 
> 2GB. Know of any good 64-bit compatible mallocs?

The Linux libc malloc isn't bad. The Google malloc seems 64-bit happy.



Adrian



Re: [squid-users] Oom-killer and Squid

2007-03-09 Thread Adrian Chadd
On Fri, Mar 09, 2007, Dave Rhodes wrote:
> Henrik,
> Sorry for the late reply, I've changed the cache_mem to 250MB and I am
> monitoring the growth.  The server does have 6GB of RAM.  The swap is
> small based on SuSE recommendations, but I can add swap if necessary.
> 
> Last night the squid proc grew to over 2GB so I lowered the cache_mem
> and restarted squid to free the 2GB.  The current size is 765MB and I'm
> hoping it will level off at just over 1GB.  If it continues to grow
> beyond 2GB then I'm guessing it's the malloc issue.  
> 
> "dlmalloc" isn't an option, as I understand it, as it is limited to 2GB.
> Know of any good 64-bit compatible mallocs?

The Linux libc malloc isn't bad. The Google malloc seems 64-bit happy.
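
A hedged way to try an alternative allocator without rebuilding Squid
(the library path is an assumption; tcmalloc is Google's malloc):

  # preload Google's tcmalloc and run Squid in the foreground
  LD_PRELOAD=/usr/lib/libtcmalloc.so /usr/local/squid/sbin/squid -N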



Adrian



RE: [squid-users] Oom-killer and Squid

2007-03-09 Thread Dave Rhodes
Henrik,
Sorry for the late reply, I've changed the cache_mem to 250MB and I am
monitoring the growth.  The server does have 6GB of RAM.  The swap is
small based on SuSE recommendations, but I can add swap if necessary.

Last night the squid proc grew to over 2GB so I lowered the cache_mem
and restarted squid to free the 2GB.  The current size is 765MB and I'm
hoping it will level off at just over 1GB.  If it continues to grow
beyond 2GB then I'm guessing it's the malloc issue.  

"dlmalloc" isn't an option, as I understand it, as it is limited to 2GB.
Know of any good 64-bit compatible mallocs?
Thanks,
Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 08, 2007 5:50 PM
To: Dave Rhodes
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Oom-killer and Squid


Thu 2007-03-08 at 13:55 -0500, Dave Rhodes wrote:

> Any idea what would make the heap keep growing?

You have a cache_mem of 1 GB, plus 60 GB of cache_dir, whose index
costs roughly 10 MB of memory per GB of disk cache. That alone is about
1.6 GB of memory. Then your OS needs some...

How much memory does the server have? In a previous post you claimed
6GB. Why then only 2GB of swap? If you have swap, it should be at least
as large as the memory, preferably larger... but I do not think this is
your problem.

If you suspect there is a memory problem with Squid, then monitor the
process size over time, and also check the cachemgr memory page for
indications of what the memory is being used for. Check the general
runtime info page as well (even if many numbers there break down when
the process grows above 2GB, due to glibc mallinfo reporting
limitations, not our fault).

Regards
Henrik


RE: [squid-users] Oom-killer and Squid

2007-03-08 Thread Henrik Nordstrom
Thu 2007-03-08 at 13:55 -0500, Dave Rhodes wrote:

> Any idea what would make the heap keep growing?

You have a cache_mem of 1 GB, plus 60 GB of cache_dir, whose index
costs roughly 10 MB of memory per GB of disk cache. That alone is about
1.6 GB of memory. Then your OS needs some...

How much memory does the server have? In a previous post you claimed
6GB. Why then only 2GB of swap? If you have swap, it should be at least
as large as the memory, preferably larger... but I do not think this is
your problem.

If you suspect there is a memory problem with Squid, then monitor the
process size over time, and also check the cachemgr memory page for
indications of what the memory is being used for. Check the general
runtime info page as well (even if many numbers there break down when
the process grows above 2GB, due to glibc mallinfo reporting
limitations, not our fault).
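
A minimal sketch of monitoring the process size over time (the
interval and log path are assumptions):

  # record Squid's resident and virtual size every 5 minutes
  while sleep 300; do
      date
      ps -o rss=,vsz= -C squid
  done >> /tmp/squid-size.log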

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-08 Thread leongmzlist
Check how many objects are in your cache (either via Squid's SNMP
support or /bin/find).  Check my previous posts regarding out-of-memory
errors.  Basically, more objects = more RAM use.
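
Illustrative commands for that check (the cache path is taken from the
thread; mgr:info reports the in-memory StoreEntries count):

  # count on-disk cache objects
  find /cache/normal -type f | wc -l
  # or ask Squid itself via cachemgr
  /usr/local/squid/bin/squidclient mgr:info | grep -i entries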


mike

At 10:55 AM 3/8/2007, Dave Rhodes wrote:

Colin,
Thanks for your reply.  I checked into hugemem and it looks like about
16GB is hugemem for 64-bit systems.

After looking at pmap, it looks like the Squid heap is growing without
end.  It's up to about 1.6GB resident at the moment and I suspect that
it crashes when the heap is larger than swap (2GB) and Squid decides to
swap or it just keeps growing until it consumes all the memory.

Any idea what would make the heap keep growing?
Thanks,
Dave

-Original Message-
From: Colin Campbell [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 06, 2007 7:43 PM
To: Dave Rhodes
Cc: Henrik Nordstrom; squid-users@squid-cache.org
Subject: RE: [squid-users] Oom-killer and Squid


Hi,

I've been bitten by oom-killer in the past, not just on Squid boxes. The
problem comes from Linux's memory model, which splits RAM into three
parts, the most important of which are what's called "Low" and "High".
Essentially Low mem = 0..892 MBytes and High mem is the rest. If you run
"free -l" you can see how much of each is in use. You'll probably find
that most of your Low mem is gone and little or none of your High mem is
in use.

Red Hat ships two kernel types, a "normal" one and a "hugemem" one,
which is for machines with > 4GB of RAM. On my 32-bit systems, changing
from the normal kernel to "hugemem" changed things. Here are two boxes,
both running squid:

HOST1 /root # uname -r
2.6.9-42.0.3.ELhugemem

HOST1 /root # free -l
             total       used       free     shared    buffers     cached
Mem:       4146972    4025500     121472          0     521828    2838672
Low:       3360540    3240092     120448
High:       786432     785408       1024
-/+ buffers/cache:     665000    3481972
Swap:      3967976        192    3967784

HOST2 /root # uname -r
2.6.9-42.0.8.ELsmp

HOST2 /root # free -l
             total       used       free     shared    buffers     cached
Mem:       4149240    4024556     124684          0     468904    2911876
Low:        872696     857516      15180
High:      3276544    3167040     109504
-/+ buffers/cache:     643776    3505464
Swap:      3967976        192    3967784

You might want to look at SuSE to see if they do something similar,
although you might find you need to rebuild your kernel.

Colin

On Tue, 2007-03-06 at 16:05 -0500, Dave Rhodes wrote:
> Thanks for the reply Henrik,
> The settings:
>
> cache_mem 1 GB
> cache_dir ufs /cache/normal 60000 9600 256
>
> I'm not sure about the cache_dir stuff, didn't know if it was better
> to have a lot of small dirs or a few large ones, I think I pulled this
> setting from someone setting up a cache about the same size as mine in
> the archives.
>
> I think 60000 is 60000MB, or 60GB?
> Dave
>
> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 06, 2007 3:54 PM
> To: Dave Rhodes
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Oom-killer and Squid
>
>
> Tue 2007-03-06 at 14:47 -0500, Dave Rhodes wrote:
>
> > Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 2GB swap
> > w/SuSE 10.1.  As a rule, thanks to some help from Henrik, everything
> > runs well.  Twice now, though, I've had oom-killer jump in and kill
> > Squid and only Squid.  I am running a very small test group of about
> > 30 users, so it takes a while (about 3 weeks) to run out of memory.
>
> You should not run out of memory unless you configured something very
> wrong...
>
> What are your cache_mem and cache_dir settings?
>
> Regards
> Henrik
>
--
Colin Campbell
Unix Support/Postmaster/Hostmaster
Citec
+61 7 3227 6334




RE: [squid-users] Oom-killer and Squid

2007-03-08 Thread Dave Rhodes
Colin,
Thanks for your reply.  I checked into hugemem and it looks like about
16GB is hugemem for 64-bit systems.

After looking at pmap, it looks like the Squid heap is growing without
end.  It's up to about 1.6GB resident at the moment and I suspect that
it crashes when the heap is larger than swap (2GB) and Squid decides to
swap or it just keeps growing until it consumes all the memory.

Any idea what would make the heap keep growing?
Thanks,
Dave

-Original Message-
From: Colin Campbell [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 06, 2007 7:43 PM
To: Dave Rhodes
Cc: Henrik Nordstrom; squid-users@squid-cache.org
Subject: RE: [squid-users] Oom-killer and Squid


Hi,

I've been bitten by oom-killer in the past, not just on Squid boxes. The
problem comes from Linux's memory model, which splits RAM into three
parts, the most important of which are what's called "Low" and "High".
Essentially Low mem = 0..892 MBytes and High mem is the rest. If you run
"free -l" you can see how much of each is in use. You'll probably find
that most of your Low mem is gone and little or none of your High mem is
in use.

Red Hat ships two kernel types, a "normal" one and a "hugemem" one,
which is for machines with > 4GB of RAM. On my 32-bit systems, changing
from the normal kernel to "hugemem" changed things. Here are two boxes,
both running squid:

HOST1 /root # uname -r
2.6.9-42.0.3.ELhugemem

HOST1 /root # free -l
             total       used       free     shared    buffers     cached
Mem:       4146972    4025500     121472          0     521828    2838672
Low:       3360540    3240092     120448
High:       786432     785408       1024
-/+ buffers/cache:     665000    3481972
Swap:      3967976        192    3967784

HOST2 /root # uname -r
2.6.9-42.0.8.ELsmp

HOST2 /root # free -l
             total       used       free     shared    buffers     cached
Mem:       4149240    4024556     124684          0     468904    2911876
Low:        872696     857516      15180
High:      3276544    3167040     109504
-/+ buffers/cache:     643776    3505464
Swap:      3967976        192    3967784

You might want to look at SuSE to see if they do something similar,
although you might find you need to rebuild your kernel.

Colin

On Tue, 2007-03-06 at 16:05 -0500, Dave Rhodes wrote:
> Thanks for the reply Henrik,
> The settings:
> 
> cache_mem 1 GB
> cache_dir ufs /cache/normal 60000 9600 256
> 
> I'm not sure about the cache_dir stuff, didn't know if it was better
> to have a lot of small dirs or a few large ones, I think I pulled this
> setting from someone setting up a cache about the same size as mine in
> the archives.
> 
> I think 60000 is 60000MB, or 60GB?
> Dave
> 
> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 06, 2007 3:54 PM
> To: Dave Rhodes
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Oom-killer and Squid
> 
> 
> Tue 2007-03-06 at 14:47 -0500, Dave Rhodes wrote:
> 
> > Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 2GB swap
> > w/SuSE 10.1.  As a rule, thanks to some help from Henrik, everything
> > runs well.  Twice now, though, I've had oom-killer jump in and kill
> > Squid and only Squid.  I am running a very small test group of about
> > 30 users, so it takes a while (about 3 weeks) to run out of memory.
> 
> You should not run out of memory unless you configured something very 
> wrong...
> 
> What are your cache_mem and cache_dir settings?
> 
> Regards
> Henrik
> 
-- 
Colin Campbell
Unix Support/Postmaster/Hostmaster
Citec
+61 7 3227 6334



RE: [squid-users] Oom-killer and Squid

2007-03-07 Thread Colin Campbell
Hi,

I've been bitten by oom-killer in the past, not just on Squid boxes. The
problem comes from Linux's memory model, which splits RAM into three
parts, the most important of which are what's called "Low" and "High".
Essentially Low mem = 0..892 MBytes and High mem is the rest. If you run
"free -l" you can see how much of each is in use. You'll probably find
that most of your Low mem is gone and little or none of your High mem is
in use.

Red Hat ships two kernel types, a "normal" one and a "hugemem" one,
which is for machines with > 4GB of RAM. On my 32-bit systems, changing
from the normal kernel to "hugemem" changed things. Here are two boxes,
both running squid:

HOST1 /root # uname -r
2.6.9-42.0.3.ELhugemem

HOST1 /root # free -l
             total       used       free     shared    buffers     cached
Mem:       4146972    4025500     121472          0     521828    2838672
Low:       3360540    3240092     120448
High:       786432     785408       1024
-/+ buffers/cache:     665000    3481972
Swap:      3967976        192    3967784

HOST2 /root # uname -r
2.6.9-42.0.8.ELsmp

HOST2 /root # free -l
             total       used       free     shared    buffers     cached
Mem:       4149240    4024556     124684          0     468904    2911876
Low:        872696     857516      15180
High:      3276544    3167040     109504
-/+ buffers/cache:     643776    3505464
Swap:      3967976        192    3967784

You might want to look at SuSE to see if they do something similar,
although you might find you need to rebuild your kernel.

Colin

On Tue, 2007-03-06 at 16:05 -0500, Dave Rhodes wrote:
> Thanks for the reply Henrik,
> The settings:
> 
> cache_mem 1 GB
> cache_dir ufs /cache/normal 60000 9600 256
> 
> I'm not sure about the cache_dir stuff, didn't know if it was better to
> have a lot of small dirs or a few large ones, I think I pulled this
> setting from someone setting up a cache about the same size as mine in
> the archives.
> archives.
> 
> I think 60000 is 60000MB, or 60GB?
> Dave
> 
> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, March 06, 2007 3:54 PM
> To: Dave Rhodes
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Oom-killer and Squid
> 
> 
> Tue 2007-03-06 at 14:47 -0500, Dave Rhodes wrote:
> 
> > Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 2GB swap 
> > w/SuSE 10.1.  As a rule, thanks to some help from Henrik, everything 
> > runs well.  Twice now, though, I've had oom-killer jump in and kill
> > Squid and only Squid.  I am running a very small test group of about
> > 30 users, so it takes a while (about 3 weeks) to run out of memory.
> 
> You should not run out of memory unless you configured something very
> wrong...
> 
> What are your cache_mem and cache_dir settings?
> 
> Regards
> Henrik
> 
-- 
Colin Campbell
Unix Support/Postmaster/Hostmaster
Citec
+61 7 3227 6334



RE: [squid-users] Oom-killer and Squid

2007-03-06 Thread Dave Rhodes
Thanks for the reply Henrik,
The settings:

cache_mem 1 GB
cache_dir ufs /cache/normal 60000 9600 256

I'm not sure about the cache_dir stuff, didn't know if it was better to
have a lot of small dirs or a few large ones, I think I pulled this
setting from someone setting up a cache about the same size as mine in
the archives.
archives.

I think 60000 is 60000MB, or 60GB?
Dave

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 06, 2007 3:54 PM
To: Dave Rhodes
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Oom-killer and Squid


Tue 2007-03-06 at 14:47 -0500, Dave Rhodes wrote:

> Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 2GB swap 
> w/SuSE 10.1.  As a rule, thanks to some help from Henrik, everything 
> runs well.  Twice now, though, I've had oom-killer jump in and kill
> Squid and only Squid.  I am running a very small test group of about
> 30 users, so it takes a while (about 3 weeks) to run out of memory.

You should not run out of memory unless you configured something very
wrong...

What are your cache_mem and cache_dir settings?

Regards
Henrik


Re: [squid-users] Oom-killer and Squid

2007-03-06 Thread Henrik Nordstrom
Tue 2007-03-06 at 14:47 -0500, Dave Rhodes wrote:

> Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 2GB
> swap w/SuSE 10.1.  As a rule, thanks to some help from Henrik,
> everything runs well.  Twice now, though, I've had oom-killer jump in
> and kill Squid and only Squid.  I am running a very small test group of
> about 30 users, so it takes a while (about 3 weeks) to run out of memory.

You should not run out of memory unless you configured something very
wrong...

What are your cache_mem and cache_dir settings?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] Oom-killer and Squid

2007-03-06 Thread Dave Rhodes
Denys,
Thanks for the reply, the system is 64-bit.  Right now, "ps axv" shows
squid as the largest memory user at almost 600MB and growing at about
2MB/min (which makes sense with the 3-week crash time).  Memory leak
maybe?
Dave

-Original Message-
From: Denys [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 06, 2007 3:32 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Oom-killer and Squid


Is it a 32-bit system?
If yes, I guess with PAE a process cannot take more than 2GB of memory.
But I think I am wrong (is it 64-bit?).
Try running "ps axv" occasionally and see how much RAM is eaten by
processes.

On Tue, 6 Mar 2007 14:47:02 -0500, Dave Rhodes wrote
> Hello All:
> 
> I am running Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache,
> 2GB swap w/SuSE 10.1.  As a rule, thanks to some help from Henrik,
> everything runs well.  Twice now, though, I've had oom-killer jump in
> and kill Squid and only Squid.  I am running a very small test group
> of about 30 users, so it takes a while (about 3 weeks) to run out of
> memory.
> 
> After the last crash, I put in memory monitors to see what the status
> was just before and just after the crash.  I also added Webmin
> monitors to let me know when the process stopped.  Just before this
> particular crash, the free memory was only about 15MB, but that is not
> unusual with Linux, and I had watched it on several occasions jump from
> that number to almost 100MB.  Apparently, it ran out of buffers and
> oom-killer shut the Squid process down.  After restarting the Squid
> process, a check of the free memory showed 5.4GB.
> 
> If anyone can tell me why this happens, I would certainly appreciate
> it.
> 
> Below is the system log output detailing the oom-killer sequence.
> The access.log and cache.log show no problems at all. Thanks, Dave
> 
> System Log:
> 
> Mar  6 11:01:10 Squid1 kernel: oom-killer: gfp_mask=0x201d2, order=0
> Mar  6 11:01:10 Squid1 kernel:
> Mar  6 11:01:10 Squid1 kernel: Call Trace: {out_of_memory+93}
> {__alloc_pages+552}
> Mar  6 11:01:10 Squid1 kernel: {__do_page_cache_readahead+149}
> {__wait_on_bit_lock+91}
> Mar  6 11:01:10 Squid1 kernel: {__lock_page+114} {filemap_nopage+323}
> Mar  6 11:01:10 Squid1 kernel: {__handle_mm_fault+911}
> {do_page_fault+965}
> Mar  6 11:01:10 Squid1 kernel: {default_wake_function+0}
> {do_gettimeofday+80}
> Mar  6 11:01:10 Squid1 kernel: {error_exit+0}
> Mar  6 11:01:10 Squid1 kernel: Mem-info:
> Mar  6 11:01:10 Squid1 kernel: Node 0 DMA per-cpu:
> Mar  6 11:01:10 Squid1 kernel: cpu 0 hot: high 0, batch 1 used:0
> Mar  6 11:01:10 Squid1 kernel: cpu 0 cold: high 0, batch 1 used:0
> Mar  6 11:01:10 Squid1 kernel: cpu 1 hot: high 0, batch 1 used:0
> Mar  6 11:01:11 Squid1 squid[5079]: Squid Parent: child process 5081
> exited due to signal 9
> Mar  6 11:01:12 Squid1 kernel: cpu 1 cold: high 0, batch 1 used:0
> Mar  6 11:01:15 Squid1 kernel: Node 0 DMA32 per-cpu:
> Mar  6 11:01:16 Squid1 kernel: cpu 0 hot: high 186, batch 31 used:161
> Mar  6 11:01:17 Squid1 kernel: cpu 0 cold: high 62, batch 15 used:14
> Mar  6 11:01:17 Squid1 kernel: cpu 1 hot: high 186, batch 31 used:158
> Mar  6 11:01:17 Squid1 kernel: cpu 1 cold: high 62, batch 15 used:61
> Mar  6 11:01:17 Squid1 kernel: Node 0 Normal per-cpu:
> Mar  6 11:01:17 Squid1 kernel: cpu 0 hot: high 186, batch 31 used:177
> Mar  6 11:01:17 Squid1 kernel: cpu 0 cold: high 62, batch 15 used:15
> Mar  6 11:01:17 Squid1 kernel: cpu 1 hot: high 186, batch 31 used:156
> Mar  6 11:01:17 Squid1 kernel: cpu 1 cold: high 62, batch 15 used:11
> Mar  6 11:01:17 Squid1 kernel: Node 0 HighMem per-cpu: empty
> Mar  6 11:01:17 Squid1 kernel: Free pages:   32040kB (0kB HighMem)
> Mar  6 11:01:17 Squid1 kernel: Active:1046725 inactive:437196 dirty:0
> writeback:0 unstable:0 free:8010 slab:5467 mapped:1480421
> pagetables:5102
> Mar  6 11:01:17 Squid1 kernel: Node 0 DMA free:12388kB min:16kB
> low:20kB high:24kB active:0kB inactive:0kB present:12032kB
> pages_scanned:2868 all_unreclaimable? yes
> Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 3512 5974 5974
> Mar  6 11:01:17 Squid1 kernel: Node 0 DMA32 free:15616kB min:5808kB
> low:7260kB high:8712kB active:2228388kB inactive:1308212kB
> present:3596460kB pages_scanned:5390781 all_unreclaimable? yes
> Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 2461 2461
> Mar  6 11:01:17 Squid1 kernel: Node 0 Normal free:4036kB min:4068kB
> low:5084kB h

Re: [squid-users] Oom-killer and Squid

2007-03-06 Thread Denys
Is it a 32-bit system?
If yes, I guess with PAE a process cannot take more than 2GB of memory.
But I think I am wrong (is it 64-bit?).
Try running "ps axv" occasionally and see how much RAM is eaten by
processes.
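
An equivalent, more targeted check (illustrative procps syntax):

  # top 5 processes by resident set size
  ps -eo pid,rss,vsz,comm --sort=-rss | head -5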

On Tue, 6 Mar 2007 14:47:02 -0500, Dave Rhodes wrote
> Hello All:
> 
> I am running Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 
> 2GB swap w/SuSE 10.1.  As a rule, thanks to some help from Henrik, 
> everything runs well.  Twice now, though, I've had oom-killer jump in
> and kill Squid and only Squid.  I am running a very small test group
> of about 30 users, so it takes a while (about 3 weeks) to run out of memory.
> 
> After the last crash, I put in memory monitors to see what the status
> was just before and just after the crash.  I also added Webmin monitors
> to let me know when the process stopped.  Just before this particular
> crash, the free memory was only about 15MB, but that is not unusual with
> Linux, and I had watched it on several occasions jump from that
> number to almost 100MB.  Apparently, it ran out of buffers and
> oom-killer shut the Squid process down.  After restarting the Squid
> process, a check of the free memory showed 5.4GB.
> 
> If anyone can tell me why this happens, I would certainly appreciate 
> it.
> 
> Below is the system log output detailing the oom-killer sequence.  
> The access.log and cache.log show no problems at all. Thanks, Dave
> 
> System Log:
> 
> Mar  6 11:01:10 Squid1 kernel: oom-killer: gfp_mask=0x201d2, order=0
> Mar  6 11:01:10 Squid1 kernel:
> Mar  6 11:01:10 Squid1 kernel: Call Trace: {out_of_memory+93}
> {__alloc_pages+552}
> Mar  6 11:01:10 Squid1 kernel: {__do_page_cache_readahead+149}
> {__wait_on_bit_lock+91}
> Mar  6 11:01:10 Squid1 kernel: {__lock_page+114} {filemap_nopage+323}
> Mar  6 11:01:10 Squid1 kernel: {__handle_mm_fault+911}
> {do_page_fault+965}
> Mar  6 11:01:10 Squid1 kernel: {default_wake_function+0}
> {do_gettimeofday+80}
> Mar  6 11:01:10 Squid1 kernel: {error_exit+0}
> Mar  6 11:01:10 Squid1 kernel: Mem-info:
> Mar  6 11:01:10 Squid1 kernel: Node 0 DMA per-cpu:
> Mar  6 11:01:10 Squid1 kernel: cpu 0 hot: high 0, batch 1 used:0
> Mar  6 11:01:10 Squid1 kernel: cpu 0 cold: high 0, batch 1 used:0
> Mar  6 11:01:10 Squid1 kernel: cpu 1 hot: high 0, batch 1 used:0
> Mar  6 11:01:11 Squid1 squid[5079]: Squid Parent: child process 5081
> exited due to signal 9
> Mar  6 11:01:12 Squid1 kernel: cpu 1 cold: high 0, batch 1 used:0
> Mar  6 11:01:15 Squid1 kernel: Node 0 DMA32 per-cpu:
> Mar  6 11:01:16 Squid1 kernel: cpu 0 hot: high 186, batch 31 used:161
> Mar  6 11:01:17 Squid1 kernel: cpu 0 cold: high 62, batch 15 used:14
> Mar  6 11:01:17 Squid1 kernel: cpu 1 hot: high 186, batch 31 used:158
> Mar  6 11:01:17 Squid1 kernel: cpu 1 cold: high 62, batch 15 used:61
> Mar  6 11:01:17 Squid1 kernel: Node 0 Normal per-cpu:
> Mar  6 11:01:17 Squid1 kernel: cpu 0 hot: high 186, batch 31 used:177
> Mar  6 11:01:17 Squid1 kernel: cpu 0 cold: high 62, batch 15 used:15
> Mar  6 11:01:17 Squid1 kernel: cpu 1 hot: high 186, batch 31 used:156
> Mar  6 11:01:17 Squid1 kernel: cpu 1 cold: high 62, batch 15 used:11
> Mar  6 11:01:17 Squid1 kernel: Node 0 HighMem per-cpu: empty
> Mar  6 11:01:17 Squid1 kernel: Free pages:   32040kB (0kB HighMem)
> Mar  6 11:01:17 Squid1 kernel: Active:1046725 inactive:437196 dirty:0
> writeback:0 unstable:0 free:8010 slab:5467 mapped:1480421
> pagetables:5102
> Mar  6 11:01:17 Squid1 kernel: Node 0 DMA free:12388kB min:16kB
> low:20kB high:24kB active:0kB inactive:0kB present:12032kB
> pages_scanned:2868 all_unreclaimable? yes
> Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 3512 5974 5974
> Mar  6 11:01:17 Squid1 kernel: Node 0 DMA32 free:15616kB min:5808kB
> low:7260kB high:8712kB active:2228388kB inactive:1308212kB
> present:3596460kB pages_scanned:5390781 all_unreclaimable? yes
> Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 2461 2461
> Mar  6 11:01:17 Squid1 kernel: Node 0 Normal free:4036kB min:4068kB
> low:5084kB high:6100kB active:1958512kB inactive:440572kB
> present:2520960kB pages_scanned:9821648 all_unreclaimable? yes
> Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 0 0
> Mar  6 11:01:17 Squid1 kernel: Node 0 HighMem free:0kB min:128kB
> low:128kB high:128kB active:0kB inactive:0kB present:0kB
> pages_scanned:0 all_unreclaimable? no
> Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 0 0
> Mar  6 11:01:17 Squid1 kernel: Node 0 DMA: 7*4kB 5*8kB 4*16kB 5*32kB
> 3*64kB 3*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 2*4096kB = 12388kB
> Mar  6 11:01:17 Squid1 kernel: Node 0 DMA32: 2*4kB 1*8kB 1*16kB 3*32k

[squid-users] Oom-killer and Squid

2007-03-06 Thread Dave Rhodes
Hello All:

I am running Squid 2.6 Stable5 on an HP DL390 w/6GB RAM, 60GB cache, 2GB
swap w/SuSE 10.1.  As a rule, thanks to some help from Henrik,
everything runs well.  Twice now, though, I've had oom-killer jump in and
kill Squid and only Squid.  I am running a very small test group of
about 30 users, so it takes a while (about 3 weeks) to run out of memory.

After the last crash, I put in memory monitors to see what the status
was just before and just after the crash.  I also added Webmin monitors
to let me know when the process stopped.  Just before this particular
crash, the free memory was only about 15MB, but that is not unusual with
Linux, and I had watched it on several occasions jump from that number
to almost 100MB.  Apparently, it ran out of buffers and oom-killer shut
the Squid process down.  After restarting the Squid process, a check of
the free memory showed 5.4GB.

If anyone can tell me why this happens, I would certainly appreciate it.

Below is the system log output detailing the oom-killer sequence.  The
access.log and cache.log show no problems at all. 
Thanks, Dave

System Log:

Mar  6 11:01:10 Squid1 kernel: oom-killer: gfp_mask=0x201d2, order=0
Mar  6 11:01:10 Squid1 kernel:
Mar  6 11:01:10 Squid1 kernel: Call Trace: {out_of_memory+93}
{__alloc_pages+552}
Mar  6 11:01:10 Squid1 kernel: {__do_page_cache_readahead+149}
{__wait_on_bit_lock+91}
Mar  6 11:01:10 Squid1 kernel: {__lock_page+114} {filemap_nopage+323}
Mar  6 11:01:10 Squid1 kernel: {__handle_mm_fault+911}
{do_page_fault+965}
Mar  6 11:01:10 Squid1 kernel: {default_wake_function+0}
{do_gettimeofday+80}
Mar  6 11:01:10 Squid1 kernel: {error_exit+0}
Mar  6 11:01:10 Squid1 kernel: Mem-info:
Mar  6 11:01:10 Squid1 kernel: Node 0 DMA per-cpu:
Mar  6 11:01:10 Squid1 kernel: cpu 0 hot: high 0, batch 1 used:0
Mar  6 11:01:10 Squid1 kernel: cpu 0 cold: high 0, batch 1 used:0
Mar  6 11:01:10 Squid1 kernel: cpu 1 hot: high 0, batch 1 used:0
Mar  6 11:01:11 Squid1 squid[5079]: Squid Parent: child process 5081
exited due to signal 9
Mar  6 11:01:12 Squid1 kernel: cpu 1 cold: high 0, batch 1 used:0
Mar  6 11:01:15 Squid1 kernel: Node 0 DMA32 per-cpu:
Mar  6 11:01:16 Squid1 kernel: cpu 0 hot: high 186, batch 31 used:161
Mar  6 11:01:17 Squid1 kernel: cpu 0 cold: high 62, batch 15 used:14
Mar  6 11:01:17 Squid1 kernel: cpu 1 hot: high 186, batch 31 used:158
Mar  6 11:01:17 Squid1 kernel: cpu 1 cold: high 62, batch 15 used:61
Mar  6 11:01:17 Squid1 kernel: Node 0 Normal per-cpu:
Mar  6 11:01:17 Squid1 kernel: cpu 0 hot: high 186, batch 31 used:177
Mar  6 11:01:17 Squid1 kernel: cpu 0 cold: high 62, batch 15 used:15
Mar  6 11:01:17 Squid1 kernel: cpu 1 hot: high 186, batch 31 used:156
Mar  6 11:01:17 Squid1 kernel: cpu 1 cold: high 62, batch 15 used:11
Mar  6 11:01:17 Squid1 kernel: Node 0 HighMem per-cpu: empty
Mar  6 11:01:17 Squid1 kernel: Free pages:   32040kB (0kB HighMem)
Mar  6 11:01:17 Squid1 kernel: Active:1046725 inactive:437196 dirty:0
writeback:0 unstable:0 free:8010 slab:5467 mapped:1480421 pagetables:5102
Mar  6 11:01:17 Squid1 kernel: Node 0 DMA free:12388kB min:16kB low:20kB
high:24kB active:0kB inactive:0kB present:12032kB pages_scanned:2868
all_unreclaimable? yes
Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 3512 5974 5974
Mar  6 11:01:17 Squid1 kernel: Node 0 DMA32 free:15616kB min:5808kB
low:7260kB high:8712kB active:2228388kB inactive:1308212kB
present:3596460kB pages_scanned:5390781 all_unreclaimable? yes
Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 2461 2461
Mar  6 11:01:17 Squid1 kernel: Node 0 Normal free:4036kB min:4068kB
low:5084kB high:6100kB active:1958512kB inactive:440572kB
present:2520960kB pages_scanned:9821648 all_unreclaimable? yes
Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 0 0
Mar  6 11:01:17 Squid1 kernel: Node 0 HighMem free:0kB min:128kB
low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0
all_unreclaimable? no
Mar  6 11:01:17 Squid1 kernel: lowmem_reserve[]: 0 0 0 0
Mar  6 11:01:17 Squid1 kernel: Node 0 DMA: 7*4kB 5*8kB 4*16kB 5*32kB
3*64kB 3*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 2*4096kB = 12388kB
Mar  6 11:01:17 Squid1 kernel: Node 0 DMA32: 2*4kB 1*8kB 1*16kB 3*32kB
0*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15616kB
Mar  6 11:01:17 Squid1 kernel: Node 0 Normal: 1*4kB 0*8kB 0*16kB 2*32kB
0*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 4036kB
Mar  6 11:01:17 Squid1 kernel: Node 0 HighMem: empty
Mar  6 11:01:17 Squid1 kernel: Swap cache: add 1031992, delete 1031737,
find 761