It depends on your OS (and I forget which one you said you were using),
but generally, there is.  Normally, this is given by ulimit -Sn, but
I've seen systems that have another value that won't show up with
ulimit.  And it's possible that your shell environment has a different
limit than the Apache environment, since your shell init scripts can
adjust ulimit -Sn, and Apache normally doesn't run through them.
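As a quick sanity check, here is a sketch (assuming a modern Linux with
/proc; the PID used below is just a placeholder you would swap for a
real httpd PID):

```shell
#!/bin/sh
# Soft and hard per-process open-file limits in this shell:
ulimit -Sn
ulimit -Hn

# On Linux (2.6.24+), the limit actually in effect for any running
# process can be read from /proc -- substitute a real httpd PID for
# "$pid"; our own shell's PID is used here only as a placeholder:
pid=$$
grep 'Max open files' "/proc/$pid/limits"
```

If the number reported for an httpd process differs from what your
shell reports, the limits are being set in different places.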

Note that on some systems lsof will not list files that have been
deleted since they were opened; I don't know if Apache uses this trick
for temporary cache files, but I have seen it used by a number of
programs.
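Whether a deleted-but-still-open file shows up this way depends on the
lsof version and the OS; on Linux the kernel keeps the old name around
with '(deleted)' appended, which you can check directly through /proc
(a minimal sketch, assuming Linux):

```shell
#!/bin/sh
# Create a temp file, hold it open on descriptor 3, then unlink it.
tmp=$(mktemp)
exec 3>"$tmp"
rm "$tmp"

# The file is gone from the directory, but the descriptor still holds
# it; on Linux the /proc symlink shows the old name plus '(deleted)'.
# lsof derives its NAME column from this same information.
readlink "/proc/$$/fd/3"

exec 3>&-   # close it; only now is the disk space actually released
```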

Ed

On Wed, 6 Feb 2002, Brian Burke wrote:

> I'm thinking that maybe I'm running into a user limit (for the httpd
> user) rather than a process limit.  I show only 60 or so open handles
> per httpd process, with a system limit of 1024.  Is there such a thing
> as a user limit?  I know there are limits on the number of user
> processes, but I'm not sure about open file handles.
> 
> I may try to attack the problem short-term by having Apache throttle
> back to fewer httpds when idle, and lowering MaxRequestsPerChild so
> the children die earlier.
> 
> Brian
> 
> 
> On Thu, 7 Feb 2002, Axel Beckert wrote:
> 
> > Hi!
> > 
> > On Wed, Feb 06, 2002 at 04:48:06PM -0500, Brian Burke wrote:
> > > When I run ulimit -Hn and ulimit -Sn, the system shows I can have
> > > 1024 open handles. Does that mean if I run lsof | fgrep httpd | wc
> > > -l and it is close to 1024, I have a problem?
> > 
> > Only if you run Apache with the -X flag (single-process mode, meant
> > for debugging), because 'lsof | fgrep httpd' matches the open files
> > of all httpd processes combined. And even when I grepped for the PID
> > of a single httpd process, wc -l did not always get near the ulimit.
> > My guess is that you need the right timing for the lsof call.
> > 
> > I tried the following: 
> > 
> >                 lsof | fgrep httpd | sort -k9
> > 
> > (you may need a different column number than 9, depending on the
> > options passed to lsof) to sort by the paths of the open files. If
> > you see one file very often (tens of times per httpd process),
> > that's usually the one causing the trouble. In my case it was the
> > magic file, so I knew I had to search in or around File::MMagic for
> > the problem.
> > 
> > But since with Apache (1.x) each child can only handle one request
> > at a time, something must go really wrong to reach that limit with a
> > single request. (The Solaris limit of 64 was easier to reach... ;-)
> > 
> >             Regards, Axel
> > 
> 
> -- 
> ______________________________________             
> Brian Burke
> [EMAIL PROTECTED]
> ______________________________________
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 



