Regarding Steve's helpful comments, I have to push back on the idea to "grab as much of the 2gb worth of ram that the 32bit OS will allow jrun to have by bumping the -xmx and -xms to 1400, and turning on AggressiveHeap -- plus other params you can mess with."

No offense intended to Steve: I appreciate that he's trying to help, and that he's sharing what seemed to work for him. I know it's a common recommendation, but I'm telling you, as someone who helps people with these problems every day, it's generally not the right thing to do, and it may well make things worse.

To be clear, I'm not disputing that it may have helped Steve in his specific situation. I'm saying beware of taking such recommendations of what worked for others, or things to "mess with", and just trying them on your server to see if they help.

Instead, I can't say it enough: always, always, always dig into the available diagnostics to determine what to do. I outlined this in the blog entry I referred to earlier, but I appreciate that some will not have read it. Still, for them and anyone else here, I'll say it again.

For instance, just because memory is high doesn't mean you have a problem. The key question is whether you get an outofmemory error, as I discuss in the blog entry on memory problems.

The JVM may well let memory climb to as much as 95%, only to then perform a full GC on its own when it realizes memory is running low, and that GC may reduce used memory back to perhaps 25%. In that case, the "high memory" was not a problem at all: the JVM was just being lazy. It's a common source of confusion (and it's new behavior since JVM 1.5, introduced between CF 7 and 8, referred to in the JVM literature as "ergonomics").
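If you want to see that lazy-collection behavior for yourself rather than guess, one low-risk option is to turn on GC logging. As a sketch (these flag names are for the Sun/HotSpot JVMs of that era, and the log path is just an example; adjust for your install), you could add something like this to the java.args line in jrun.config:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:c:\jrun4\logs\gc.log

If the log shows a full GC dropping used heap from ~95% back to ~25%, that's the "lazy JVM" case, not a leak.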

So the point is that you may not need to (and perhaps even should not) raise memory just because you see it being "high". The question simply is: did you get any outofmemory errors in those logs I pointed to?

Indeed, another argument against "just raising the heap" is that you may actually cause another kind of outofmemory error: "unable to create new native thread". It's not the kind that happens immediately on startup; instead it may happen over time.

Here's the deal: if you let the heap be larger, then over time, as that extra space actually gets used (again, perhaps simply because the JVM's being lazy and hasn't done a major GC), it may constrain another memory space, called the stack, and you may then get "outofmemory: unable to create new native thread". The problem (generally, despite the words in the message) is that the stack space (which is allocated out of whatever remains of the 2gb on 32-bit systems after the heap, system DLLs, and other object spaces are loaded) may now be squeezed. A stack is allocated whenever a thread is started (CF request threads, database connection pool threads, cfthread threads, and so on). If there's no space to allocate one, you get this error. It's one of those challenging ones that may not happen for a long time, then bam. Again, the clues are in the runtime/jrun logs.
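To make that squeeze concrete, here's a back-of-the-envelope sketch in Java. All the numbers are illustrative assumptions, not measurements: roughly 2gb of user address space on 32-bit Windows, a rough 300 MB allowance for permgen/DLLs/code, and a 512 KB per-thread stack (the kind of value you'd see or set via -Xss).

```java
// Rough estimate of how many thread stacks fit in the address space
// left over after the heap on a 32-bit JVM. Illustrative numbers only.
public class StackSpaceEstimate {

    // addressSpaceMb: total usable address space
    // heapMb: -Xmx setting; nonHeapMb: assumed permgen/DLLs/code
    // stackKb: per-thread stack size (as set via -Xss)
    static long estimateThreads(long addressSpaceMb, long heapMb,
                                long nonHeapMb, long stackKb) {
        long leftoverKb = (addressSpaceMb - heapMb - nonHeapMb) * 1024;
        return leftoverKb / stackKb;
    }

    public static void main(String[] args) {
        // with a 1400 MB heap, only ~348 MB is left for thread stacks
        System.out.println(estimateThreads(2048, 1400, 300, 512)); // 696
        // with a 1024 MB heap, the same math leaves room for far more
        System.out.println(estimateThreads(2048, 1024, 300, 512)); // 1448
    }
}
```

The point isn't the exact numbers (real layouts vary), just that every MB given to the heap is a MB no longer available for thread stacks.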

So, bottom line: don't just raise the heap unless you have clear evidence from those logs that you have an outofmemory error due to a heap space (or GC ratio) problem. Otherwise, one "correction" may lead to another problem. (Some of this is what I need to add to the remaining "parts" of the series I started on memory myths.)

 

/charlie

 

From: ad...@acfug.org [mailto:ad...@acfug.org] On Behalf Of Steven
Sent: Tuesday, February 01, 2011 1:53 PM
To: discussion@acfug.org
Subject: Re: [ACFUG Discuss] CF9 Performance

 

Frank, 

One thing to look at during your investigations through FR is to watch those memory spikes as you mentioned, and determine if CF itself has enough memory allocated to it. If you've never made adjustments to the jrun.config file on the server or servers in question (specifically the -xmx and -xms settings), now may be the time.

 

This may be one piece of the puzzle in your ultimate solution, but one to keep in mind. To give an example: at a former job, I had experience with a crippled system that was very cfc-heavy, and during peak times of the day CF didn't have any room to breathe. It was a 32bit win2003 machine left at default jrun.config settings -- and part of our solution was to grab as much of the 2gb worth of ram that the 32bit OS will allow jrun to have, by bumping the -xmx and -xms to 1400 and turning on AggressiveHeap -- plus other params you can mess with.
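For anyone curious what that looks like concretely, the jrun.config change would be along these lines (a sketch based on the values above; your existing java.args line will have other entries that should be kept, and see Charlie's caveats before copying this blindly):

java.args=-server -Xms1400m -Xmx1400m -XX:+AggressiveHeap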

 

FR will definitely give you that overall picture to point you toward performance issues. Watch Task Manager/Process Viewer as well, and take note of how much memory jrun is gunning for during those peak times.

 

As another side note, if solutions don't arise: I also recall a time when we had to offload pdf creation to a separate server/instance because we had just too much going on on one specific server. We had a "perfect storm" of too many scheduled processes and intensive queries, which required us to offload. Not an answer for everyone's infrastructure, but one to consider, if only to give yourself some breathing room to optimize existing app processes.

 

FR has saved my butt a number of times, and I would definitely start there.

 

-Steve Duys

Senior Systems Dev

Care Solutions, Inc.

 

On Tue, Feb 1, 2011 at 1:18 PM, Frank Moorman <stretch...@franksdomain.net>
wrote:

Thank you Charlie...  And Ajas and Chris...

We are on the same mental page. My whole reasoning 
|<snip>

 




-------------------------------------------------------------
To unsubscribe from this list, manage your profile @ 
http://www.acfug.org?fa=login.edituserform

For more info, see http://www.acfug.org/mailinglists
Archive @ http://www.mail-archive.com/discussion%40acfug.org/
List hosted by http://www.fusionlink.com
-------------------------------------------------------------
