On 26.05.2011 14:31, SADA SIVA REDDY S wrote:
My Questions:
1. Is there a provision in Linux to automatically clean up old
core files when we reach a certain limit?
I don't think such a feature exists. A core dump is a regular file
written to the crashed process's working directory, and the system does
not track these files: it simply generates the dump and forgets about it
(in other words, the system treats a core dump like any other regular
file and does not know it is a core dump).
2. Is there a provision in Linux to set an upper limit for the space
occupied by all core files (not individual core files)?
I don't think so. You can only limit the size of each generated core
dump, per process or per user (ulimit -c).
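For example, a per-shell cap on individual core files looks like this
(note it limits the size of each file, not the total):

```shell
# Per-shell cap on the size of each core file (this is not a global quota).
ulimit -c                 # print the current soft limit
ulimit -c 0               # 0 disables core dumps for this shell's children
# `ulimit -c unlimited` lifts the cap again (allowed only up to the hard
# limit); persistent per-user limits belong in /etc/security/limits.conf.
```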
But you can change the destination of all core dump files by adding the
line
kernel.core_pattern = /vol/allcoredumps/%u/%e
to /etc/sysctl.conf.
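Assuming the destination directories already exist, the change can be
persisted and verified like this (as root; the path is just an example):

```shell
# As root: create the destination, persist the pattern, reload, verify.
mkdir -p /vol/allcoredumps
echo 'kernel.core_pattern = /vol/allcoredumps/%u/%e' >> /etc/sysctl.conf
sysctl -p                          # re-read /etc/sysctl.conf
cat /proc/sys/kernel/core_pattern  # should print the new pattern
```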
After that, you can write a simple script that checks the amount of free
space, and schedule it in crontab. When free space drops below a certain
limit, the script should remove the oldest or biggest files from the
above location.
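A minimal sketch of such a cleanup job might look like the following
(the function name, directory, and threshold are made up for
illustration; adjust them to your setup):

```shell
#!/bin/sh
# Sketch: delete the oldest core files until free space recovers.

prune_cores() {
    coredir=$1        # directory holding the core dumps
    min_free_kb=$2    # minimum free space to keep, in KiB

    free_kb=$(df -Pk "$coredir" | awk 'NR==2 {print $4}')
    while [ "$free_kb" -lt "$min_free_kb" ]; do
        # Oldest regular file under $coredir (sorted by mtime).
        oldest=$(find "$coredir" -type f -printf '%T@ %p\n' \
                 | sort -n | head -n 1 | cut -d' ' -f2-)
        [ -n "$oldest" ] || break      # nothing left to remove
        rm -f -- "$oldest"
        free_kb=$(df -Pk "$coredir" | awk 'NR==2 {print $4}')
    done
}

# e.g. keep at least 1 GiB free:
# prune_cores /vol/allcoredumps 1048576
```

A crontab entry running it every few minutes would then keep the
partition from filling up.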
Below is the list of available patterns:

%p: pid
%<NUL>: '%' is dropped
%%: output one '%'
%u: uid
%g: gid
%s: signal number
%t: UNIX time of dump
%h: hostname
%e: executable filename
%<OTHER>: both are dropped
--
regards
Andrzej Kardas
http://www.linux.mynotes.pl
_______________________________________________
Kernelnewbies mailing list
[email protected]
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies