On Fri, 2 Jan 2009 12:11:07 -0500, Dennis Nezic
<dennisn at dennisn.dyndns.org> wrote:

> On Wed, 31 Dec 2008 14:41:42 +0000, Matthew Toseland
> <toad at amphibian.dyndns.org> wrote:
> 
> > On Friday 26 December 2008 17:50, Dennis Nezic wrote:
> > > After a day or so, with no problems mentioned in wrapper.log, my
> > > wrapper will suddenly crash:
> > > 
> > > "JVM appears hung: Timed out waiting for signal from JVM."
> > > "JVM did not exit on request, terminated"
> > > "JVM received a signal SIGKILL (9)."
> > > "Reloading Wrapper configuration..."
> > > 
> > > It's not a new problem.
> > > 
> > > Does this happen to anyone else?
> > > 
> > > Here is a 24h graph of the free memory on my computer, during my
> > > latest jvm crash. I have other things running, but most of the
> > > activity is probably due to freenet. The crash occurred around
> > > 7am, and you can see how just over 100M is freed up... though,
> > > this is barely half of what I allot to it (220M). So I don't think
> > > it's an out-of-memory problem.
> > > 
> > > http://dennisn.dyndns.org/guest/pubstuff/freenetcrashes-freememory.png
> > > 
> > > But what else could crash the JVM? Maybe it's a bug in the
> > > wrapper? System CPU activity is acceptably low and normal (~25%
> > > average usage)--I can't imagine how it could hang the JVM for a
> > > few minutes.
> > 
> > You can verify whether it is in fact a memory problem by adding this to
> > your wrapper.conf:
> > 
> > wrapper.java.additional.3=-Xloggc:freenet.loggc
> > 
> > Then tail -f freenet.loggc : when it crashes, do you see lots of
> > Full GC's, very frequently (approx every second)?
> 
> I have a growing suspicion that there's a bug in either the wrapper,
> or freenet's interface with the wrapper. I am currently running
> without any wrapper, and I already have a few days of
> uptime--normally it would crash after about a day. Fingers crossed.

Never mind :\. After almost a week of uptime, it crashed, or, more
likely, stopped itself. Here's what wrapper.log said:

Restarting node: MessageCore froze for 3 minutes!
Exiting on deadlock....
Restarting node: MessageCore froze for 3 minutes!
Exiting on deadlock....
Restarting node: PacketSender froze for 3 minutes! (264273)
Exiting on deadlock....
Goodbye. from freenet.node.Node (USM deadlock)
Goodbye. from freenet.node.Node (PacketSender deadlock)
run() exiting for UdpSocketHandler.....
Goodbye. from freenet.node.Node (USM deadlock)

It then goes on to successfully close the key stores.
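
My guess is the node has some kind of internal freeze watchdog that fired
here: the core threads (MessageCore, PacketSender) presumably report in
periodically, and if one of them goes silent for three minutes the node
assumes a deadlock and exits so it can be restarted. Something along these
lines -- all class and method names below are invented for illustration,
this is not the actual freenet.node code:

    // Hypothetical sketch of a "froze for 3 minutes" watchdog.
    // None of these names come from the real Freenet source.
    class FreezeWatchdog extends Thread {
        private static final long MAX_SILENCE_MS = 3 * 60 * 1000; // 3 minutes

        private final String watchedName;            // e.g. "MessageCore", "PacketSender"
        private volatile long lastSeen = System.currentTimeMillis();

        FreezeWatchdog(String watchedName) {
            super("FreezeWatchdog-" + watchedName);
            this.watchedName = watchedName;
            setDaemon(true);
        }

        /** Called by the watched thread each time it completes a loop iteration. */
        void ping() {
            lastSeen = System.currentTimeMillis();
        }

        @Override
        public void run() {
            while (true) {
                try {
                    Thread.sleep(10 * 1000); // check every 10 seconds
                } catch (InterruptedException e) {
                    return;
                }
                if (System.currentTimeMillis() - lastSeen > MAX_SILENCE_MS) {
                    System.err.println("Restarting node: " + watchedName + " froze for 3 minutes!");
                    System.err.println("Exiting on deadlock....");
                    System.exit(1); // leave restarting to the wrapper (or an init script)
                }
            }
        }
    }

If that's roughly what's happening, then whatever is wedging MessageCore
and PacketSender is the real problem, and the shutdown is just the
messenger.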

Well, at least it doesn't appear to be a memory problem? Memory usage was
steadily hovering at the same level (below the max I set for it) for
days. Then it just suddenly shut down.
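
(For the record, if this had been a heap problem, the -Xloggc output
Matthew suggested should have made it obvious: grepping freenet.loggc for
"Full GC" around the crash time would show full collections piling up
roughly every second.)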

Ideas?
