No, I don't think it is a bug in the VM.
Running on JDK 6 is worth a try; JDK 6 may be smart enough to handle direct
buffer references.
I did a quick check of the MINA code and found no code that releases direct
buffers.
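(For context: a direct buffer's native memory has no public free() call; it is reclaimed only when the DirectByteBuffer object itself is garbage-collected. A minimal sketch of what "releasing" amounts to in portable Java; the class name is mine, not from MINA:)

```java
import java.nio.ByteBuffer;

public class DirectBufferRelease {
    public static void main(String[] args) {
        // Direct buffers allocate native memory outside the Java heap.
        ByteBuffer buf = ByteBuffer.allocateDirect(512);
        System.out.println("direct=" + buf.isDirect() + " capacity=" + buf.capacity());

        // There is no public free(): the native memory is released only when
        // the DirectByteBuffer object is garbage-collected. Dropping the last
        // reference ("set to null") is the portable way to allow that.
        buf = null;
        System.gc(); // only a hint; the JVM decides when the memory actually goes
    }
}
```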

Is there any result when running on JDK 6?
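As a side note on Luis's suggestion further down in the thread (write, then wait on the WriteFuture before writing again): the idea is plain backpressure, and can be sketched with JDK-only classes. The ExecutorService below is a stand-in for the I/O processor; this is an analogy, not MINA's API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BlockingWriteSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for MINA's I/O processor thread.
        ExecutorService ioProcessor = Executors.newSingleThreadExecutor();

        int completed = 0;
        for (int i = 0; i < 5; i++) {
            // Analogous to: WriteFuture wf = ioSession.write(message);
            Future<Integer> wf = ioProcessor.submit(() -> 1);
            // Analogous to: wf.join() -- block until the write completes, so
            // the sender cannot queue buffers faster than the channel drains.
            completed += wf.get();
        }
        ioProcessor.shutdown();
        System.out.println("writes completed: " + completed);
    }
}
```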

2007/7/21, mat <[EMAIL PROTECTED]>:

Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_08-b03)
Java HotSpot(TM) Client VM (build 1.5.0_08-b03, mixed mode, sharing)
Windows 2003

So you think it is a bug in the JVM?

On 7/21/07, 向秦贤 <[EMAIL PROTECTED]> wrote:
>
> Not sure which JDK version you are using.
> http://issues.apache.org/jira/browse/DIRMINA-320 was not running on JDK 6.
> Try running on JDK 6 to check whether it happens again.
> One thing is sure: some direct buffer reference is held by the application,
> so the VM cannot clean it up.
>
> 2007/7/21, mat <[EMAIL PROTECTED]>:
> >
> > http://issues.apache.org/jira/browse/DIRMINA-320
> >
> > Please check this out. It seems that I am not the only one who faces this
> > problem, and it happens in the MINA core. I quote something written by
> > Trustin: "Other session might have allocated huge memory block and other
> > session might be being affected by its side-effect." However, I only had
> > one client connected to my server when the OOM occurred.
> >
> > On 7/21/07, 向秦贤 <[EMAIL PROTECTED]> wrote:
> > >
> > > Maybe a direct buffer was not released.
> > > A direct buffer must be released explicitly,
> > > so somewhere the code may need to check for direct buffers and set the
> > > references to null.
> > >
> > > 2007/7/21, mat <[EMAIL PROTECTED]>:
> > > >
> > > > I captured the exception message.
> > > >
> > > >
> > > > org.apache.mina.common.support.DefaultExceptionMonitor exceptionCaught
> > > > Unexpected exception.
> > > > java.lang.OutOfMemoryError: Direct buffer memory
> > > > at java.nio.Bits.reserveMemory
> > > > at java.nio.DirectByteBuffer.<init>
> > > > at java.nio.ByteBuffer.allocateDirect
> > > > at sun.nio.ch.Util.getTemporaryDirectBuffer
> > > > at sun.nio.ch.IOUtil.write
> > > > at sun.nio.ch.SocketChannelImpl.write
> > > > at org.apache.mina.transport.socket.nio.SocketIoProcessor.doFlush(SocketIoProcessor.java:428)
> > > > at org.apache.mina.transport.socket.nio.SocketIoProcessor.doFlush(SocketIoProcessor.java:366)
> > > > at org.apache.mina.transport.socket.nio.SocketIoProcessor.access$600(SocketIoProcessor.java:44)
> > > > at org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run(SocketIoProcessor.java:509)
> > > > at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:43)
> > > > at java.util.concurrent.ThreadPoolExecutor$Worker.runTask
> > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run
> > > > at java.lang.Thread.run
> > > >
> > > >
> > > > On 7/20/07, Luis Neves <[EMAIL PROTECTED]> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > mat wrote:
> > > > > > My server sometimes faced an "OOM" problem. (I couldn't profile it
> > > > > > with TPTP, since TPTP crashed my server before the OOM occurred.) I
> > > > > > didn't see a major memory leak when profiling. Therefore, I believe
> > > > > > the OOM happens when the READ or WRITE operation can't keep up with
> > > > > > the incoming or outgoing messages. (However, my incoming messages
> > > > > > are normally 20 * 512 bytes/sec, NOT too fast, right?) Last time I
> > > > > > saw my server memory usage go over 600 MB on Windows XP.
> > > > >
> > > > > Your code is broken ... the question is where. MINA can handle that
> > > > > amount of messages without breaking a sweat.
> > > > > Do you have some kind of heavy processing on the receiving end that
> > > > > delays the acceptance of messages?
> > > > >
> > > > > Did you try using the ReadThrottleFilter?
> > > > > How are you doing your writes?
> > > > > A simple "iosession.write()"?
> > > > > Did you try something like:
> > > > > WriteFuture wf = iosession.write();
> > > > > wf.join();
> > > > >
> > > > > Can we see the code of your Encoder/Decoder?
> > > > >
> > > > >
> > > > > --
> > > > > Luis Neves
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > 向秦贤
> > >
> >
>
>
>
> --
> 向秦贤
>




--
向秦贤
