I use this script on my system (I run it from cron) to raise this limit if it
is ever exceeded.  I do this because I have a co-located server, and if the
limit is reached I find it difficult to log in via ssh to fix it.


#!/bin/sh
# Raise fs.file-max (and inode-max) when the number of allocated
# file handles has reached the current limit.
max=`cat /proc/sys/fs/file-max`
nr=`cat /proc/sys/fs/file-nr | awk '{ print $1 }'`
newmax=$(( max + 4096 ))
# Rule of thumb on 2.2 kernels: inode-max should be 3-4x file-max.
new_i_max=$(( newmax * 3 ))
echo "Max $max Nr $nr Newmax $newmax NewInodeMax $new_i_max"
if [ "$nr" -gt "$max" ]
then
        echo "limit exceeded - raising"
        echo $newmax > /proc/sys/fs/file-max
        echo $new_i_max > /proc/sys/fs/inode-max
else
        echo "within limit"
fi
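
To run it from cron, an entry like the following would work (the script
path and five-minute schedule here are just an example -- adjust to
wherever you saved it):

```shell
# Hypothetical crontab entry for root -- path and schedule are assumptions.
# Checks and raises the file-handle limit every five minutes.
*/5 * * * * /usr/local/sbin/raise_file_max.sh
```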


JC

----- Original Message -----
From: "dbrett" <[EMAIL PROTECTED]>
To: "Sites, Brad" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, February 19, 2003 7:30 PM
Subject: RE: too many files open?


> This has been a big help; this and the lsof command will put me well on
> the way to solving the problem.
>
> Thanks again for your help.
>
> I do have one more question.  Is it possible for lsof to indicate more
> open files than /proc/sys/fs/file-max says is possible?
>
> david
>
> On Wed, 19 Feb 2003, Sites, Brad wrote:
>
> > Jan wrote:
> > > dbrett wrote:
> > >> I have a RH 6.2 server, which seems to be unable to keep up with the
> > >> load it is under.  I have to keep rebooting it about every other
> > >> day.  One of the first clues I have is an error saying there are too
> > >> many files open and another operation can't be performed.
> > >>
> > >> How do I find out how many files are open and by what programs?  Is
> > >> it possible to increase the number of files which can be open?
> > >>
> > > lsof may be a good place to start - it lists all open files; it is a
> > > LONG list! Perhaps you should run it at intervals (and save the
> > > output) to see if there is a single program that runs amok.
> > >
> > > /jan
> >
> > You may be running out of file descriptors.  Open TCP sockets and things
> > like Apache and database servers are prone to opening a large number of
> > file descriptors.  The default number of file descriptors available is
> > 4096.  This probably needs to be upped in your scenario.  The theoretical
> > limit is somewhere around a million file descriptors, but a number much
> > lower would be more reasonable.  Try doubling the default number and
> > seeing if that takes care of things.  If not, double that number and see
> > how it works.
> > Here is the command to do this on the fly:
> >
> > echo 8192 > /proc/sys/fs/file-max
> >
> > To make this happen each time at boot, edit your /etc/sysctl.conf file
> > and add the following line:
> >
> > fs.file-max = 8192
> >
> >
> >
> > Brad Sites
> >
>
>
>
> --
> redhat-list mailing list
> unsubscribe mailto:[EMAIL PROTECTED]?subject=unsubscribe
> https://listman.redhat.com/mailman/listinfo/redhat-list
>


