Charlie,

Thanks so much for lending your expertise.  We will take a look at lowering the 
heap space.  We are on a 32-bit server, but it has run for several years 
without issue.  Recently, we added more apps that rely on LDAP calls via 
CFLDAP. So far, every time this error was thrown, the CFLDAP tag was in 
process. Do you know of any code-optimization strategies specific to CFLDAP?

Thanks,
Rob

From: ad...@acfug.org [mailto:ad...@acfug.org] On Behalf Of Charlie Arehart
Sent: Tuesday, October 09, 2012 1:39 PM
To: discussion@acfug.org
Subject: RE: [ACFUG Discuss] CFLDAP and memory errors

I suspect it may only be made worse by increasing the heap. :-) 
Strap on your seatbelts. If the problem was easily understood and solved, you'd 
have found the answer elsewhere. I'll share here some thoughts you don't 
see/hear often. It will take a few paragraphs to give proper context, though. 
Sorry for the long-ish email.

Often (though not always) the "unable to create new native thread" is due to 
CF/the JVM being unable to allocate a new thread in a part of the address space 
OUTSIDE the heap (but within the jrun.exe process space), and you don't "set" 
this "stack space". It just gets what's left over after the heap, permgen, and 
other spaces use memory.

Well, if you are on 32-bit (where most people got this error), then you get 
only 2GB total addressable space for each process. And the bigger you make the 
heap, the less space is left for this "stack space". There are more details and 
nuances, but this should help get you started. So believe it or not, sometimes 
lowering the heap is the solution.
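
To make that arithmetic concrete, here is a back-of-envelope sketch in plain 
Java (all of the numbers are illustrative, not measurements from any real 
server):

```java
// Rough estimate of how many native thread stacks fit in whatever address
// space is left after the heap and permgen. Purely illustrative arithmetic;
// a real JVM also loses space to DLLs, the JIT code cache, and other native
// allocations, so treat the result as an upper bound.
public class StackSpaceEstimate {

    // Heap/permgen/process sizes in MB, per-thread stack size in KB.
    static long estimateMaxThreads(long processLimitMb, long heapMb,
                                   long permGenMb, long stackSizeKb) {
        long leftoverKb = (processLimitMb - heapMb - permGenMb) * 1024;
        return leftoverKb / stackSizeKb;
    }

    public static void main(String[] args) {
        // 32-bit process: ~2048MB total; 768MB heap, 192MB permgen,
        // 512KB per-thread stack. Shrinking the heap raises the ceiling.
        System.out.println(estimateMaxThreads(2048, 768, 192, 512)); // 2176
        System.out.println(estimateMaxThreads(2048, 512, 192, 512)); // 2688
    }
}
```

The point of the sketch: with a fixed address-space budget, every MB added to 
the heap comes straight out of the room left for thread stacks.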

How far can you lower it? Well, until you start getting "outofmemory: java heap 
space" errors (or "gc overhead limit reached", which means merely that the JVM 
has been trying to do GCs as it gets full, but it's not able to reclaim much.) 
So yes, it's a balancing act, to make both errors go away.
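
For reference, on CF 9 the heap settings live on the java.args line of 
jvm.config; a lowered-heap configuration might look something like this (the 
values are purely illustrative, not a recommendation for your server):

```
# jvm.config (ColdFusion 9) -- illustrative values only; tune for your server
java.args=-server -Xms512m -Xmx512m -XX:MaxPermSize=192m
```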

As always, there's more to this than meets the eye: while sometimes the 
solution is to change the resource limits (add heap, or in this case reduce 
it), another thought is to find what's putting pressure on the exhausted 
resource instead. So there may be some other explanation for why you are trying 
to create so many "threads" that can't fit in that "stack space", or other 
things putting pressure on that available addressable space, thus squeezing the 
stack space. There can be many reasons for that, again too much to get into in 
email.
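
One low-risk way to start that investigation is to enumerate the live threads 
and see what they are named; thread names often hint at who created them 
(schedulers, LDAP connection pools, web server workers, etc.). A minimal sketch 
using only standard java.lang APIs (you could reach the same data from a CF 
diagnostic page, or with the JDK's jstack tool):

```java
import java.util.Map;

// List every live thread in this JVM by name and state, to see what is
// actually consuming the thread budget.
public class ThreadCensus {

    static int liveThreadCount() {
        return Thread.getAllStackTraces().size();
    }

    public static void main(String[] args) {
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        System.out.println("Live threads: " + all.size());
        for (Thread t : all.keySet()) {
            System.out.println("  " + t.getName() + " [" + t.getState() + "]");
        }
    }
}
```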

BTW, if someone were to point to the -Xss JVM argument as a "solution" for this 
"new native thread" error (I have seen it offered), I'd argue against that. 
Here's why: it doesn't set the size of the stack space, but rather the size of 
each thread's stack. And since I can find no documentation of what the default 
is for different versions of the JVM and OS, I would be leery of "just 
changing that" to see if it may help this problem.

(Similarly, and related, when one gets outofmemory in the heap, the natural 
reaction is "increase the heap", but I'd argue instead you should find out 
what's putting pressure on the heap, which might be an unusually large amount 
of sessions, perhaps caused by excessive visits by spiders and bots, etc. 
Lowering that "pressure" may be the real "right" answer, and may do your server 
a world of other good.)

No, you don't often hear the above, but that's because a) it can't be 
communicated in a tweet and b) people tend to repeat what they've heard from 
others (in tweets or perhaps in old blog entries, which in my estimation often 
seem to have been more "making guesses" at what was amiss, and offering a 
bunch of "solutions", like JVM argument tweaks, rather than focusing on real 
root causes, etc.)

BTW, I mean no disrespect to any here on the list who may have done that, or 
may well disagree with me. I'm just commenting on my observation of this CF 
server troubleshooting space, where I live day in and day out. I know there may 
be others here who don't agree with something I've said above. It's clearly an 
area with divergent opinions. I'm just basing my observations and suggestions 
on what I have specifically helped many people with.

One last thing about the "new native thread" error: there's one other thing 
that can lead to it. For some releases of CF, there was a problem where image 
handling was loading lots of DLLs into the address space, and that TOO was 
putting pressure on the stack space. Applying that hotfix (which was sometimes 
not included in the CHFs) would be necessary. But since you say, Rob, that 
you're on 9.0.2, you have ALL the hotfixes, and that's not the issue for 
you. Even so, it just goes to show that sometimes the cause is NOT about the 
JVM and NOT about CF itself, but can be influenced by code (which I'll argue 
could be exacerbated by spiders and bots). So often, it's one thing that leads 
to another. But you have to start somewhere. :-)

Perhaps the simplest solution (if you can't lower the heap much) would be to 
move to a 64-bit environment. That removes the 2GB-per-process limit, and often 
lets CF/the JVM "breathe" better.

Hope that's helpful.

/charlie

PS I probably ought to make a blog entry out of this, but I'll wait to hear 
what others may say, and especially how things may go for Rob if he tries any 
of the suggestions. Sadly, not all answers are always "right". This can be an 
area with lots of subtleties.  Even so, the above has proven helpful and true 
for many I've helped, and it's different from what most offer in response to 
the problem, so let's see how it goes.

From: ad...@acfug.org [mailto:ad...@acfug.org] On Behalf Of Rob Saxon
Sent: Tuesday, October 09, 2012 11:56 AM
To: ACFUG (discussion@acfug.org)
Subject: [ACFUG Discuss] CFLDAP and memory errors

CF Gurus,
We have several apps that are generating the following error at different times:
"unable to create new native thread The specific sequence of files included or 
processed is: [file path and line #] "
java.lang.OutOfMemoryError: unable to create new native thread
The errors eventually require a CF restart.  We first experienced this while 
running CF7, but it continues to occur after upgrading to CF 9.0.2 and 
increasing the JVM allocation to three-quarters of a gigabyte.

The error consistently occurs when using the CFLDAP tag to query an LDAP 
server. Does anyone have any suggestions about how to troubleshoot it?

Thanks,
Rob

---------------------------------------------------------------------------
Rob Saxon
Director
Web Management
Mercer University
478-301-5550


-------------------------------------------------------------
To unsubscribe from this list, manage your profile @
http://www.acfug.org?fa=login.edituserform

For more info, see http://www.acfug.org/mailinglists
Archive @ http://www.mail-archive.com/discussion%40acfug.org/
List hosted by FusionLink<http://www.fusionlink.com>
-------------------------------------------------------------
