On 4 October 2012 16:36, Jason Aubrey <aubre...@gmail.com> wrote:
> Thanks all for your replies to my question.
>
> Because of the nature of our application, we can't really load everything at
> start up, but I did some digging and there are clearly some inefficiencies
> here and the situation would indeed be improved by cleaning these up.
> However, it does look like it may be the interaction between these
> inefficiencies and freebsd in particular that is causing the error.
>
> So, here's what I did: on a linux development box I wrote a script using
> Linux::Inotify2 to watch the directories containing the code for our
> application that we (think we) have to load at run time.  A typical request
> generated an average of 225 IN_OPEN inotify events.  (FWIW, it also
> generated around 850 IN_ACCESS events.  I don't understand the difference,
> but IN_OPEN events seem more relevant to what I'm getting to.)
>
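
For what it's worth, as far as I understand it IN_OPEN fires once per
open(2) and IN_ACCESS fires on each read(2) of an already-open file, so
IN_OPEN is indeed the event worth counting here.  In case it is useful to
others on the list, here is a minimal sketch of that kind of watcher; the
$root path is a placeholder, and I am assuming File::Find to add one watch
per directory, since inotify watches are not recursive:

    use strict;
    use warnings;
    use File::Find;
    use Linux::Inotify2;

    # Placeholder: point this at the directories you load code from at run time
    my $root = '/path/to/app/lib';

    my $inotify = Linux::Inotify2->new
        or die "cannot create inotify object: $!";

    # inotify watches are per-directory, so walk the tree and watch each one
    my %opens;
    find({ no_chdir => 1, wanted => sub {
        return unless -d $File::Find::name;
        $inotify->watch($File::Find::name, IN_OPEN, sub {
            my $event = shift;
            $opens{ $event->fullname }++;
        }) or warn "cannot watch $File::Find::name: $!";
    } }, $root);

    # Block and dispatch callbacks; dump the per-file counts on Ctrl-C
    $SIG{INT} = sub {
        for my $file (sort { $opens{$b} <=> $opens{$a} } keys %opens) {
            printf "%6d  %s\n", $opens{$file}, $file;
        }
        exit 0;
    };
    1 while $inotify->poll;
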
> Now, we have MaxClients set super high, but with just 150 concurrent
> requests that's 33,750 open files and that's not taking into account files
> opened for other processes on the machine, such as the kernel or the web
> server, etc.
>
> But, here's where freebsd comes in.  The freebsd kernel has a bunch of
> tunables, one of which is kern.maxfiles. The freebsd manual says "This
> variable indicates the maximum number of file descriptors on your system...
> Each open file, socket, or fifo uses one file descriptor. A large-scale
> production server may easily require many thousands of file descriptors,
> depending on the kind and number of services running concurrently"
>
> Well, we had kern.maxfiles = 12328.  Also possibly relevant is
> kern.ipc.somaxconn which "limits the size of the listen queue for accepting
> new TCP connections".  We had that at the default of 128.  I also don't
> understand the interaction between this setting and Apache's MaxClients,
> since when we had kern.ipc.somaxconn = 128 and MaxClients 150 we quickly
> maxed out MaxClients.
>
> So, we bumped up kern.maxfiles to 50000 and kern.ipc.somaxconn to 8192 and
> things appeared to be better last night under heavy load. But, as I said,
> there are clearly some inefficiencies too in the way our application loads
> code at run time, and cleaning these up would certainly help.
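
For the record, both of those can be changed on a running system with
sysctl(8) and made persistent in /etc/sysctl.conf; the values below are just
the ones you quoted, not a recommendation, and the per-process cap
kern.maxfilesperproc is worth checking at the same time:

    # at run time
    sysctl kern.maxfiles=50000
    sysctl kern.ipc.somaxconn=8192

    # across reboots, in /etc/sysctl.conf
    kern.maxfiles=50000
    kern.ipc.somaxconn=8192

As for the interaction you mention: as I understand it kern.ipc.somaxconn
only caps the listen(2) backlog, i.e. how many connections the kernel will
queue before Apache accepts them (the effective queue is the smaller of
Apache's ListenBacklog and kern.ipc.somaxconn), while MaxClients caps how
many accepted connections the children serve concurrently.  A small backlog
means bursts get refused before Apache ever sees them; it does not change
how quickly you saturate MaxClients.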

You would also see reduced memory pressure and a performance
improvement. Perl is actually pretty slow at compiling code (in
comparison to executing it), and code that is compiled pre-fork is
shared between the parent and its children via copy-on-write, whereas
code compiled post-fork is duplicated in every child process.

So by preloading you win twice: each request skips the compilation
cost, and each child carries less private memory.
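
The usual way to do that under mod_perl is to pull the heavy modules in from
a startup file that httpd.conf loads before the children are forked.  A
minimal sketch, assuming mod_perl; the path and the module names are
placeholders standing in for whatever your inotify trace showed being loaded
per request:

    # httpd.conf:
    #   PerlRequire /usr/local/apache/conf/startup.pl

    # startup.pl
    use strict;
    use warnings;

    # Compile these once in the parent; every child then shares the compiled
    # code instead of re-reading and re-compiling it after the fork.
    use My::App;            # placeholder names
    use My::App::Schema;
    use DBI;

    # Loads the driver code without opening a connection (substitute your
    # own driver for 'Pg'); never open file handles, sockets, or database
    # handles pre-fork, only compile code.
    DBI->install_driver('Pg');

    1;

The one thing to watch for is resources: anything that opens a handle or a
connection has to be created after the fork (a child-init handler, or
something like Apache::DBI for database handles), but plain code can and
should be compiled up front.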

I strongly recommend you approach this subject aggressively.

yves

-- 
perl -Mre=debug -e "/just|another|perl|hacker/"
