Let me conclude this thread.
After several days of observation, I found that my OOM had nothing to do with
either VM tuning or a memory leak. I actually got the hint from the issue
"permanent solution for OOM" (see Paul Chen's comment). In my case the OOM
happens when the writeRequestQueue blows up. I figured this out because when
I put my client on the same machine as the server, no OOM happens even after
2 days. However, when I move my client to a different network, OOM sometimes
happens, presumably due to slow traffic. That's all I can conclude at this
moment, and I want to share it with all of you. Maybe a write throttle filter
is needed.
Finally, dear committee, please consider pointing this out to all MINA users;
I believe a read/write throttle filter would be really helpful.
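The backpressure idea can be sketched in plain Java. This is a hypothetical, self-contained illustration of the pattern only, not MINA's actual filter API: a bounded outbound queue that makes a fast producer wait (or fail) instead of letting pending writes grow without bound when the peer reads slowly.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical write-throttle sketch: the producer blocks (up to a timeout)
 * once the pending-write queue is full, rather than queueing unbounded
 * buffers and eventually exhausting direct or heap memory.
 */
public class ThrottledWriter {
    private final BlockingQueue<byte[]> pending;

    public ThrottledWriter(int capacity) {
        this.pending = new ArrayBlockingQueue<>(capacity);
    }

    /** Producer side: returns false if the peer is too slow to keep up. */
    public boolean write(byte[] buf, long timeoutMs) throws InterruptedException {
        return pending.offer(buf, timeoutMs, TimeUnit.MILLISECONDS);
    }

    /** I/O side: called when the channel becomes writable again. */
    public byte[] nextBuffer() {
        return pending.poll();
    }

    public static void main(String[] args) throws InterruptedException {
        ThrottledWriter w = new ThrottledWriter(2);
        System.out.println(w.write(new byte[16], 10)); // true
        System.out.println(w.write(new byte[16], 10)); // true
        // Queue full: the third write fails fast instead of piling up memory.
        System.out.println(w.write(new byte[16], 10)); // false
        w.nextBuffer(); // the "network" drained one buffer
        System.out.println(w.write(new byte[16], 10)); // true
    }
}
```

In MINA itself the equivalent check would be made against the session's scheduled write bytes before writing, but the class and method names above are illustrative only.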


On 7/25/07, Mark Webb <[EMAIL PROTECTED]> wrote:
>
> Not sure if this article will help, but check this out:
> http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html
>
> Maybe if you choose a different garbage collector, your memory might get
> cleaned up more efficiently.
>
>
> On 7/24/07, mat <[EMAIL PROTECTED]> wrote:
> >
> > I got the following error message again, 40 hours after starting my
> > server. It seems the direct buffer memory OOM happens only when heap
> > memory usage is high. I also recorded the memory usage history in the
> > Windows task manager. It seems GC doesn't collect until the heavy load is
> > gone. However, after every GC, my memory still increases. Eventually OOM
> > happens.
> >
> > 2007-7-25 4:45:51 org.apache.mina.common.support.DefaultExceptionMonitor exceptionCaught
> > warning: Unexpected exception.
> > java.lang.OutOfMemoryError: Direct buffer memory
> >         at java.nio.Bits.reserveMemory(Unknown Source)
> >         at java.nio.DirectByteBuffer.<init>(Unknown Source)
> >         at java.nio.ByteBuffer.allocateDirect(Unknown Source)
> >         at sun.nio.ch.Util.getTemporaryDirectBuffer(Unknown Source)
> >         at sun.nio.ch.IOUtil.write(Unknown Source)
> >         at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
> >         at org.apache.mina.transport.socket.nio.SocketIoProcessor.doFlush(SocketIoProcessor.java:428)
> >         at org.apache.mina.transport.socket.nio.SocketIoProcessor.doFlush(SocketIoProcessor.java:366)
> >         at org.apache.mina.transport.socket.nio.SocketIoProcessor.access$600(SocketIoProcessor.java:44)
> >         at org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run(SocketIoProcessor.java:509)
> >         at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:43)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> >         at java.lang.Thread.run(Unknown Source)
> > 2007-7-25 5:16:53 org.apache.mina.common.support.DefaultExceptionMonitor exceptionCaught
> > warning: Unexpected exception.
> > java.lang.OutOfMemoryError: Direct buffer memory
> >         at java.nio.Bits.reserveMemory(Unknown Source)
> >         at java.nio.DirectByteBuffer.<init>(Unknown Source)
> >         at java.nio.ByteBuffer.allocateDirect(Unknown Source)
> >         at sun.nio.ch.Util.getTemporaryDirectBuffer(Unknown Source)
> >         at sun.nio.ch.IOUtil.write(Unknown Source)
> >         at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
> >         at sun.nio.ch.PipeImpl$Initializer.run(Unknown Source)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at sun.nio.ch.PipeImpl.<init>(Unknown Source)
> >         at sun.nio.ch.SelectorProviderImpl.openPipe(Unknown Source)
> >         at java.nio.channels.Pipe.open(Unknown Source)
> >         at sun.nio.ch.WindowsSelectorImpl.<init>(Unknown Source)
> >         at sun.nio.ch.WindowsSelectorProvider.openSelector(Unknown Source)
> >         at java.nio.channels.Selector.open(Unknown Source)
> >         at org.apache.mina.transport.socket.nio.SocketIoProcessor.startupWorker(SocketIoProcessor.java:84)
> >         at org.apache.mina.transport.socket.nio.SocketIoProcessor.addNew(SocketIoProcessor.java:69)
> >         at org.apache.mina.transport.socket.nio.SocketAcceptor$Worker.processSessions(SocketAcceptor.java:319)
> >         at org.apache.mina.transport.socket.nio.SocketAcceptor$Worker.run(SocketAcceptor.java:239)
> >         at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:43)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> >         at java.lang.Thread.run(Unknown Source)
> > Exception in thread "pool-7-thread-6" java.lang.OutOfMemoryError: Java heap space
> > Exception in thread "Thread-5"
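One note on the traces above: both OOMs originate in java.nio.Bits.reserveMemory, which means the off-heap pool used for direct NIO buffers is what ran out, not the ordinary heap. That pool can be capped explicitly with the HotSpot flag -XX:MaxDirectMemorySize so exhaustion fails fast and visibly. The sizes and jar name below are placeholders, not recommendations:

```shell
# Cap the direct (off-heap) NIO buffer pool explicitly;
# sizes and the jar name are examples only.
java -Xmx512m -XX:MaxDirectMemorySize=128m -jar my-mina-server.jar
```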
> >
> >
> > On 7/23/07, peter royal <[EMAIL PROTECTED]> wrote:
> > >
> > > On Jul 23, 2007, at 3:45 AM, Paddy O'Neill wrote:
> > > > This sounds similar to a problem that we encountered where mina was
> > > > serving multiple sessions and forwarding to a single connection,
> > > > similar to your environment.  We found that the thread management
> > > > in mina on the single connection side would not work efficiently
> > > > and end up only allocating jobs to a single thread.  This
> > > > effectively caused the jobs to be executed sequentially, which in
> > > > turn caused the backlog and memory buildup.  I suspect that the
> > > > issue is related to there only being a single connection on the
> > > > IoConnector as we found that the jobs were allocated correctly
> > > > across threads on the IoAcceptor side.
> > >
> > > sure..  not surprising. that's how the ExecutorFilter works. what
> > > you did was the right thing.. recognizing you have some
> > > parallelization in processing of messages for your app and managing
> > > it yourself.
> > >
> > > -pete
> > >
> > >
> > >
> > > --
> > > [EMAIL PROTECTED] - http://fotap.org/~osi
> > >
> > >
> > >
> > >
> > >
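The single-connection serialization described above can be seen in a small self-contained sketch. The names here are hypothetical, not MINA's OrderedThreadPoolExecutor: tasks sharing a key (one session) are chained so they run in order, while distinct keys spread across the pool, so a single key behaves exactly like the single-thread backlog in the thread.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical per-session ordered dispatch: tasks with the same key run
 * sequentially (preserving message order for one connection), while tasks
 * with different keys run in parallel on a shared pool.
 */
public class KeyedExecutor {
    private final ExecutorService pool;
    private final ConcurrentHashMap<Object, CompletableFuture<Void>> tails =
            new ConcurrentHashMap<>();

    public KeyedExecutor(int threads) {
        this.pool = Executors.newFixedThreadPool(threads);
    }

    /** Chain the task after the previous one submitted for the same key. */
    public CompletableFuture<Void> execute(Object key, Runnable task) {
        return tails.compute(key, (k, tail) ->
                (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
                        .thenRunAsync(task, pool));
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        KeyedExecutor ex = new KeyedExecutor(4);
        List<Integer> order = new CopyOnWriteArrayList<>();
        CompletableFuture<Void> last = null;
        for (int i = 0; i < 5; i++) {
            final int n = i;
            last = ex.execute("session-1", () -> order.add(n));
        }
        last.join(); // the chain guarantees 0..4 were appended in order
        System.out.println(order); // [0, 1, 2, 3, 4]
        ex.shutdown();
    }
}
```

With only one connection on the IoConnector all work shares one key and serializes as described; doing your own parallelization downstream, as suggested, amounts to choosing finer-grained keys.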
> >
>
>
>
> --
> ..Cheers
> Mark
>
