'sysctl -a | grep buf' shows: vm.buffermem = 2        10      60

When I checked the /proc/sys/vm/ this is what it shows:

-rw-------    1 root     root            0 Dec  7 19:46 bdflush
-rw-r--r--    1 root     root            0 Dec  7 19:46 buffermem
-rw-r--r--    1 root     root            0 Dec  7 19:46 freepages
-rw-r--r--    1 root     root            0 Dec  7 19:46 kswapd
-rw-r--r--    1 root     root            0 Dec  7 19:46 overcommit_memory
-rw-------    1 root     root            0 Dec  7 19:46 page-cluster
-rw-r--r--    1 root     root            0 Dec  7 19:46 pagecache
-rw-------    1 root     root            0 Dec  7 19:46 pagetable_cache

They all show as empty.  When I did an 'ls' on /proc, this shows up:

-r--------    1 root     root     268177408 Dec  7 19:56 kcore
-r--------    1 root     root            0 Nov 29 18:10 kmsg
-rw-r--r--    1 root     root           66 Dec  7 19:56 mtrr
dr-xr-xr-x    4 root     root            0 Nov 29 18:10 net
-r--r--r--    1 root     root            0 Dec  7 19:56 partitions
-r--r--r--    1 root     root            0 Dec  7 19:56 pci
-r--r--r--    1 root     root            0 Dec  7 19:56 rtc
dr-xr-xr-x    3 root     root            0 Dec  7 19:56 scsi
lrwxrwxrwx    1 root     root           64 Dec  7 19:56 self -> 15964

Everything above kcore is empty.  What is this kcore, and why is it so big?
Is it like a core dump?  The 'file' command reports: /proc/kcore:
Linux/i386 core file.  Is this file important, and can I delete it?

Also, here's an update of 'top':

 10:35am  up 8 days, 16:25,  2 users,  load average: 0.15, 0.12, 0.09
58 processes: 57 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  0.9% user,  1.3% system,  0.0% nice, 97.6% idle
Mem:   257492K av,  211044K used,   46448K free,   32904K shrd,  139212K buff
Swap:  530104K av,    2224K used,  527880K free                   44488K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU %MEM   TIME COMMAND
 4415 root       0   0  4004 3492  3296 S       0  0.0  1.3   0:00 httpd
 4418 nobody     0   0  3732 2336  1768 S       0  0.0  0.9   0:00 httpd
 4422 nobody     0   0  3740 2336  1756 S       0  0.0  0.9   0:00 httpd
 4419 nobody     0   0  3720 2284  1744 S       0  0.0  0.8   0:00 httpd
 4420 nobody     0   0  3364 1892  1576 S       0  0.0  0.7   0:00 httpd
30619 named      0   0  2060 1760   756 S       0  0.0  0.6   0:12 named
 4423 nobody     0   0  3032 1492  1284 S       0  0.0  0.5   0:00 httpd
 4424 nobody     0   0  3032 1492  1284 S       0  0.0  0.5   0:00 httpd
 4425 nobody     0   0  3032 1492  1284 S       0  0.0  0.5   0:00 httpd
 4421 nobody     0   0  3032 1432  1224 S       0  0.0  0.5   0:00 httpd
21321 root       0   0  1196 1196   924 S       0  0.0  0.4   0:00 login
21358 root       0   0  1196 1196   924 S       0  0.0  0.4   0:00 login
  597 xfs        0   0  1020 1016   532 S       0  0.0  0.3   0:00 xfs
21322 vvuong     0   0   968  968   740 S       0  0.0  0.3   0:00 bash
21359 vvuong     0   0   968  968   740 S       0  0.0  0.3   0:00 bash
23322 root       3   0   968  968   736 S       0  0.0  0.3   0:00 bash
23318 root       0   0   948  948   732 S       0  0.0  0.3   0:00 su
21348 vvuong    16   0   864  864   668 R       0  0.5  0.3   0:29 top
21320 root       0   0   760  760   604 S       0  0.0  0.2   0:03 in.telnetd
21357 root       0   0   756  756   604 S       0  0.0  0.2   0:00 in.telnetd
23407 root      10   0   756  756   692 S       0  0.0  0.2   0:00 sh
23406 root       5   0   716  716   656 S       0  0.0  0.2   0:00 sh
23411 root      10   0   688  688   560 S       0  0.0  0.2   0:00 less
  410 root       0   0   684  680   304 S       0  0.0  0.2   0:03 klogd
23403 root       5   0   680  680   392 S       0  0.0  0.2   0:00 man
  456 root       0   0   564  560   456 S       0  0.0  0.2   0:00 crond
  424 nobody     0   0   544  532   424 S       0  0.0  0.2   0:00 identd
  426 nobody     0   0   544  532   424 S       0  0.0  0.2   0:00 identd
  427 nobody     0   0   544  532   424 S       0  0.0  0.2   0:00 identd
  428 nobody     0   0   544  532   424 S       0  0.0  0.2   0:00 identd
  429 nobody     0   0   544  532   424 S       0  0.0  0.2   0:00 identd
  336 root       0   0   516  512   428 S       0  0.0  0.1   0:00 rpc.statd
  401 root      13   0   504  500   404 S       0  0.7  0.1  14:36 syslogd
  488 root       0   0   488  484   404 S       0  0.0  0.1   0:00 lpd
    1 root       0   0   476  476   404 S       0  0.0  0.1   0:05 init
  474 root      18   0   472  468   384 S       0  0.0  0.1   0:14 inetd
10835 qmaill     0   0   428  428   348 S       0  0.0  0.1   0:01 splogger
  635 root       0   0   408  408   340 S       0  0.0  0.1   0:00 mingetty
  636 root       0   0   408  408   340 S       0  0.0  0.1   0:00 mingetty
  637 root       0   0   408  408   340 S       0  0.0  0.1   0:00 mingetty
  638 root       0   0   408  408   340 S       0  0.0  0.1   0:00 mingetty

That means I've just lost another 14MB of free memory.  But the stats show
that httpd is using less than it was yesterday :(
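Applying Jeff's subtraction trick from his message below to the Mem: line
above, as a rough sanity check (a sketch only; it treats buff and cached as
fully reclaimable by the kernel):

```shell
# Figures copied from the top header above (all in KB).
used=211044     # Mem: used
buff=139212     # buffer cache
cached=44488    # page cache
# "Real" application memory = used minus reclaimable buffers and page cache.
echo "memory actually held by programs: $((used - buff - cached))K"   # 27344K
```

So only about 27MB is actually pinned down by programs; the rest of the
"used" figure is cache the kernel will give back on demand.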

vav



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Jason Holland
Sent: Thursday, December 07, 2000 7:42 PM
To: [EMAIL PROTECTED]
Subject: RE: Memory Leak


Vav,
  this looks normal for linux.  the load on your box is so low it looks
like it's asleep. :)  i have boxes at work and home that both do the same
thing.  linux is just being very, very generous with its allocation of
buffer cache.  if you really want to get brave, you can change that value
in the /proc directory with sysctl.  i think it's /proc/sys/vm/buffermem.
it would be a good test to see if you can grab back some memory.  just a
thought.
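Something like the following (a sketch only, for 2.2-era kernels where
vm.buffermem exists; the three fields are min/borrow/max percentages of
memory for buffers, you need root to write them, and the "2 10 30" values
are illustrative, not a recommendation):

```shell
# Read the current buffer-memory percentages (min, borrow, max)
cat /proc/sys/vm/buffermem
# Lower the maximum percentage via the /proc interface (as root)
echo "2 10 30" > /proc/sys/vm/buffermem
# Equivalent using sysctl
sysctl -w vm.buffermem="2 10 30"
```

Check Documentation/sysctl/vm.txt in your kernel source before changing
anything on a production box.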

Jason


>
> That does make sense.  But it's strange.  This server is my main email
> server, also running a webserver.  When I saw how much memory it took, I
> logged off of Xwindow and that brought me up to about 180MB.  Then as the
> week went by, it started slowly losing its free memory again.  Top seems
> to show a usage of maybe 60MB.  The other thing is that memory usage
> increases faster during the day.  I expected it to come back up in the
> evening, but it didn't.  I just hope that it stops at 62MB. ^_^  Thankx
> all for your help.  Please take a look at the data below and tell me if
> it looks right.  Thankx again.
>
> vav
>
>
>
> vmstat:
>    procs                      memory    swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
>  1  0  0      0  61920 118144  46796   0   0     0     3   44    30   0   0  37
> -----------------------------------------------------------------
>
> Top:
>   6:42pm  up 8 days, 32 min,  3 users,  load average: 0.00, 0.00, 0.00
> 59 processes: 58 sleeping, 1 running, 0 zombie, 0 stopped
> CPU states:  0.0% user,  0.5% system,  0.0% nice, 99.4% idle
> Mem:   257492K av,  195320K used,   62172K free,   55052K shrd,  118144K buff
> Swap:  530104K av,       0K used,  530104K free                   46796K cached
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU %MEM   TIME COMMAND
>  4418 nobody     0   0  4620 4620  4052 S       0  0.0  1.7   0:00 httpd
>  4419 nobody     0   0  4596 4596  4100 S       0  0.0  1.7   0:00 httpd
>  4422 nobody     0   0  4580 4580  4128 S       0  0.0  1.7   0:00 httpd
>  4420 nobody     0   0  4220 4220  4012 S       0  0.0  1.6   0:00 httpd
>  4421 nobody     0   0  4220 4220  4012 S       0  0.0  1.6   0:00 httpd
>  4423 nobody     0   0  4220 4220  4012 S       0  0.0  1.6   0:00 httpd
>  4424 nobody     0   0  4220 4220  4012 S       0  0.0  1.6   0:00 httpd
>  4425 nobody     0   0  4220 4220  4012 S       0  0.0  1.6   0:00 httpd
>  4415 root       0   0  4144 4144  3944 S       0  0.0  1.6   0:00 httpd
> 30619 named      0   0  2220 2220   916 S       0  0.0  0.8   0:10 named
>  3884 root       0   0  1196 1196   924 S       0  0.0  0.4   0:00 login
>  4125 root       0   0  1196 1196   924 S       0  0.0  0.4   0:00 login
>  7381 root       0   0  1196 1196   924 S       0  0.0  0.4   0:00 login
>   597 xfs        0   0  1136 1136   648 S       0  0.0  0.4   0:00 xfs
>  3886 Knic       0   0   992  992   760 S       0  0.0  0.3   0:00 bash
>  4109 root       0   0   976  976   736 S       0  0.0  0.3   0:00 bash
> 10647 root       0   0   976  976   736 S       0  0.0  0.3   0:00 bash
>  4126 Knic       0   0   964  964   740 S       0  0.0  0.3   0:00 bash
>  7383 Knic       0   0   964  964   740 S       0  0.0  0.3   0:00 bash
>  4108 root       0   0   948  948   732 S       0  0.0  0.3   0:00 su
> 10646 root       0   0   948  948   732 S       0  0.0  0.3   0:00 su
> 14553 Knic      14   0   868  868   668 R       0  0.5  0.3   0:17 top
>   410 root       0   0   768  768   388 S       0  0.0  0.2   0:03 klogd
>  3883 root       0   0   760  760   604 S       0  0.0  0.2   0:00 in.telnetd
>  4124 root       0   0   760  760   604 S       0  0.0  0.2   0:02 in.telnetd
>  7379 root       0   0   760  760   604 S       0  0.0  0.2   0:00 in.telnetd
>   424 nobody     0   0   640  640   520 S       0  0.0  0.2   0:00 identd
>   426 nobody     0   0   640  640   520 S       0  0.0  0.2   0:00 identd
>   427 nobody     0   0   640  640   520 S       0  0.0  0.2   0:00 identd
>   428 nobody     0   0   640  640   520 S       0  0.0  0.2   0:00 identd
>   429 nobody     0   0   640  640   520 S       0  0.0  0.2   0:00 identd
>   456 root       0   0   620  620   512 S       0  0.0  0.2   0:00 crond
>   336 root       0   0   560  560   472 S       0  0.0  0.2   0:00 rpc.statd
>   401 root       4   0   552  552   452 S       0  0.0  0.2  12:43 syslogd
>   488 root       0   0   532  532   448 S       0  0.0  0.2   0:00 lpd
>   474 root      15   0   528  528   440 S       0  0.0  0.2   0:13 inetd
>   442 daemon     0   0   496  496   416 S       0  0.0  0.1   0:00 atd
>   537 root       0   0   496  496   420 S       0  0.0  0.1   0:00 gpm
>   350 root       0   0   480  480   412 S       0  0.0  0.1   0:00 apmd
>     1 root       0   0   476  476   404 S       0  0.0  0.1   0:05 init
>   311 bin        0   0   428  428   340 S       0  0.0  0.1   0:00 portmap
> -------------------------------------------------------------------
>
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Jeff Hogg
> Sent: Thursday, December 07, 2000 6:20 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Memory Leak
>
>
>
> -----Original Message-----
> From: Vu Vuong <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
> Date: Thursday, December 07, 2000 5:16 PM
> Subject: Memory Leak
>
>
> >Hi,
> >    Could someone help me diagnose my system?  I use 'top' to monitor
> >the system activities.  As I watch, the memory used increases.  I don't
> >know what is causing it.  Earlier this week it was at 130MB free, now
> >it's at 62MB free.  I used 'vmstat 1' to see if there was any large
> >file swapping involved, but could not see it.  Any help would be most
> >appreciated.  Thank you.
> >
>
>
> Take a look at the output of top again.  Do you see how much memory is
> being shared and used for disk buffers?  Subtract those out from the
> amount shown used and you get the real amount of memory your programs
> are using.  The buffers and shared bits will shrink if and when a
> program needs the RAM.  It's the normal behavior for linux.  My system
> shows 192Megs in use of 256Megs, but 154Megs is just buffers, leaving
> only 38Megs in use by programs.  If you start using lots of swap and you
> don't see a high buffers total, then you should worry.  Hope this helps.
>
> Jeff Hogg
>
>
>
> _______________________________________________
> Redhat-list mailing list
> [EMAIL PROTECTED]
> https://listman.redhat.com/mailman/listinfo/redhat-list
>
>
>
>
>
> vav,
>   linux is typically aggressive when it comes to memory use, which means
> it will use MORE than it needs.  you mentioned your system has not begun
> to swap.  are applications crashing?  is there any one application
> suffering performance-wise?  it doesn't sound like a memory leak, just
> linux being a little overaggressive in memory use.
>
> Jason
>
>
>
>





