Hello,
I'm seeing some odd behaviour from JavaMonitor in an EC2-based deployment. On
a couple of EC2 instances (both running JavaMonitor, wotaskd and the same
application, but otherwise unconnected and not identical), JavaMonitor itself
dies after some period of time (on the order of days after launch), apparently
because it has run out of memory:
---
2014-01-03 22:08:56,759 WARN 26.47 MB/30.28 MB [WorkerThread11]
logging.ERXNSLogLog4jBridge (ERXNSLogLog4jBridge.java:44) -
<er.extensions.appserver.ERXComponentRequestHandler>: Exception occurred
while handling request:
com.webobjects.foundation.NSForwardException
[java.lang.reflect.InvocationTargetException]
null:java.lang.reflect.InvocationTargetException
2014-01-03 22:08:56,761 WARN 26.47 MB/30.28 MB [WorkerThread11]
logging.ERXNSLogLog4jBridge (ERXNSLogLog4jBridge.java:44) - Ran out of memory,
killing this instance
2014-01-03 22:08:56,762 FATAL 26.47 MB/30.28 MB [WorkerThread11]
appserver.ERXApplication (ERXApplication.java:1947) - Ran out of memory,
killing this instance
2014-01-03 22:08:56,763 FATAL 26.47 MB/30.28 MB [WorkerThread11]
appserver.ERXApplication (ERXApplication.java:1948) - Ran out of memory,
killing this instance
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:679)
at com.webobjects.monitor._private.MHost.sendRequestToWotaskdArray(MHost.java:320)
at com.webobjects.monitor.application.WOTaskdHandler.sendRequest(WOTaskdHandler.java:160)
at com.webobjects.monitor.application.WOTaskdHandler.sendQueryToWotaskds(WOTaskdHandler.java:354)
at com.webobjects.monitor.application.WOTaskdHandler.getApplicationStatusForHosts(WOTaskdHandler.java:618)
at com.webobjects.monitor.application.WOTaskdHandler.updateForPage(WOTaskdHandler.java:105)
at com.webobjects.monitor.application.ApplicationsPage.<init>(ApplicationsPage.java:27)
---
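For what it's worth, that OutOfMemoryError is "unable to create new native
thread" rather than plain heap exhaustion, and the trace shows MHost starting
a new thread for each request to wotaskd, so I'm wondering about either a slow
thread leak or an OS-level limit (ulimit, or native memory for thread stacks)
on these instances. The rough sketch below is the kind of check I have in
mind: poll JavaMonitor's thread counts for a few days and see whether they
climb. The class name and JMX port are mine, and it assumes JavaMonitor is
launched with remote JMX enabled, which the stock init script doesn't do as
far as I know.
---
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MonitorThreadCount {

    public static void main(String[] args) throws Exception {
        // Hypothetical JMX URL: assumes JavaMonitor was launched with something
        // like -Dcom.sun.management.jmxremote.port=9999 (port is arbitrary).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            // A live count that climbs steadily over days suggests a thread leak
            // inside JavaMonitor; a flat count points back at the OS or native
            // memory rather than anything JavaMonitor is doing.
            System.out.printf("live=%d peak=%d totalStarted=%d%n",
                    threads.getThreadCount(),
                    threads.getPeakThreadCount(),
                    threads.getTotalStartedThreadCount());
        } finally {
            connector.close();
        }
    }
}
---
The idea would be to run that from cron every few minutes and graph the output.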
By way of background, I'm only seeing this on a couple of specific EC2
instances; I've got other instances that have been running JavaMonitor with
uptimes of months, if not years.
According to the logging pattern in JavaMonitor's Properties, those log entries
for the OutOfMemoryError are claiming 26.47 MB used and 30.28 MB free, which
seems a bit suspect on the face of it. JavaMonitor is being started by
/etc/init.d/webobjects with the default heap size, which I assume to be 64M.
I've always found that to be sufficient, and can't find any reference to
JavaMonitor being a memory hog on the list or elsewhere. The JavaMonitors that
are failing in this way are used very lightly—monitoring a few instances at
most of a single application on a single host, with occasional stops and starts
for deployments.
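For reference, my reading of those two figures is something like the arithmetic
below. That's purely a guess on my part (I haven't checked it against Wonder's
pattern layout), but on any reading of it the heap looks nowhere near a 64M
ceiling at the point the error fires.
---
// Just my guess at where figures like "26.47 MB/30.28 MB" could come from;
// not checked against Wonder, so treat it as illustrative only.
public class HeapFigures {

    private static final double MB = 1024.0 * 1024.0;

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // in use within the committed heap
        long headroom = rt.maxMemory() - used;          // what's left before hitting -Xmx
        System.out.printf("used=%.2f MB, free (committed)=%.2f MB, headroom=%.2f MB, max=%.2f MB%n",
                used / MB, rt.freeMemory() / MB, headroom / MB, rt.maxMemory() / MB);
    }
}
---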
So, has anyone seen JavaMonitor itself fall over in this way? Is anyone using
non-default JVM memory settings when launching JavaMonitor? Does anyone want to
have a guess at the root cause here? I can provide the full stack trace if
anyone wants to see it.
--
Paul Hoadley
http://logicsquad.net/