Please search the list archive for this thread:
*1000+ simultaneous connections with data transfer*

On 9/6/07, velytreuzien <[EMAIL PROTECTED]> wrote:

>
> No, it's an echo over my custom protocol. CPU usage is at 100%. I've
> tried it on an Athlon 3500+.
>
> Yes, I used the given lines for ByteBuffer, but the latest version uses
> PooledByteBufferAllocator instead of Simple* - basically nothing changes.
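
For anyone following along, the allocator swap is just a different argument to the same static setter, applied once at startup before any buffers are allocated. A minimal sketch of both variants (MINA 1.x API; package names assumed to be org.apache.mina.common):

import org.apache.mina.common.ByteBuffer;
import org.apache.mina.common.PooledByteBufferAllocator;
import org.apache.mina.common.SimpleByteBufferAllocator;

// Heap buffers instead of direct buffers, as suggested earlier in the thread.
ByteBuffer.setUseDirectBuffers(false);

// Either allocator is plugged in the same way; velytreuzien reports no
// noticeable difference between the two for this test.
ByteBuffer.setAllocator(new PooledByteBufferAllocator());
// ByteBuffer.setAllocator(new SimpleByteBufferAllocator());
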
>
> Today I've managed to run some tests on a dual Xeon 2.8 HT machine (4 virtual
> cores) with 4 threads for the SocketAcceptor:
> acceptor = new SocketAcceptor(4, Executors.newCachedThreadPool());
> It goes well up to 1300-1500 clients, but then it's the same story.
>
>
> mat-29 wrote:
> >
> > Did you use the echo example? What about the CPU usage? What's your CPU
> > model?
> >
> > Did you add the following?
> > ByteBuffer.setUseDirectBuffers(false);
> > ByteBuffer.setAllocator(new SimpleByteBufferAllocator());
> >
> >
> > On 9/6/07, velytreuzien <[EMAIL PROTECTED]> wrote:
> >>
> >>
> >> I'm running my application on a single-CPU machine on a 100 Mbit LAN.
> >> The other three workstations in the LAN are emulating multiple clients.
> >>
> >> The server uses a custom protocol codec filter and a custom handler. The
> >> configuration is: manual thread model, one thread for the SocketAcceptor,
> >> no ExecutorFilter in the filter chain (because with the ExecutorFilter the
> >> total thread count rapidly increases to hundreds!! why?..)
> >>
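
To make the setup described above concrete, here is a rough sketch of how such a configuration is typically expressed with the MINA 1.x API. The codec factory, handler class, and port are placeholders, not the actual ones from this application:

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.apache.mina.common.ThreadModel;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.transport.socket.nio.SocketAcceptor;
import org.apache.mina.transport.socket.nio.SocketAcceptorConfig;

// One I/O processor thread for the acceptor, as described above.
SocketAcceptor acceptor = new SocketAcceptor(1, Executors.newCachedThreadPool());

SocketAcceptorConfig config = new SocketAcceptorConfig();
config.setThreadModel(ThreadModel.MANUAL);               // manual thread model
config.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(new MyCodecFactory()));  // placeholder codec factory
// Note: no ExecutorFilter anywhere in the chain.

acceptor.bind(new InetSocketAddress(8080), new MyHandler(), config);  // placeholder port/handler
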
> >> A fifth workstation in the LAN is running a test application that also
> >> emulates a client and measures the server's echo response time.
> >> The average results are:
> >> 1 client - 300 ms
> >> 333 clients - 400-500 ms
> >> 666 clients - 1000-1500 ms
> >> 1000 clients - strange fluctuations in 7-20 s range
> >>
> >> So we can clearly see the problem appears as the client count increases.
> >>
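
For clarity, the numbers above are wall-clock round-trip times as seen by the test client. A rough illustration of how such a measurement can be taken with a plain blocking socket (this is not the actual test application; buildRequest() and the host/port are placeholders):

import java.io.DataInputStream;
import java.net.Socket;

// Send one protocol message and time how long the echoed reply takes.
Socket socket = new Socket("server-host", 12345);   // placeholder host/port
byte[] request = buildRequest();                     // placeholder: one encoded protocol message
byte[] reply = new byte[request.length];             // echo => reply of the same length expected

long start = System.currentTimeMillis();
socket.getOutputStream().write(request);
socket.getOutputStream().flush();
new DataInputStream(socket.getInputStream()).readFully(reply);
long roundTripMillis = System.currentTimeMillis() - start;

socket.close();
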
> >> After thorough profiling I've managed to eliminate all hot spots from my
> >> own code (though almost nothing changed), and now the time is stuck in the
> >> sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(long, int, int[], int[],
> >> int[], long) method, which takes 64% of total execution time. That's
> >> enormous, because my own logic thread takes only 16% of the total, and the
> >> asynchronous writing thread (that responds to clients) takes 11%.
> >>
> >> The call tree descending to the mentioned method is:
> >> java.lang.Thread.run() 71%
> >>   java.util.concurrent.ThreadPoolExecutor$Worker.run() 71%
> >>     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Runnable) 71%
> >>       org.apache.mina.util.NamePreservingRunnable.run() 71%
> >>         org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run() 71%
> >>           sun.nio.ch.SelectorImpl.select(long) 70%
> >>             sun.nio.ch.SelectorImpl.lockAndDoSelect(long) 70%
> >>               sun.nio.ch.WindowsSelectorImpl.doSelect(long) 70%
> >>                 sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl$SubSelector) 64%
> >>                   sun.nio.ch.WindowsSelectorImpl$SubSelector.poll() 64%
> >>                     sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(long, int, int[], int[], int[], long) 64%
> >>
> >> Is there any workaround? Maybe somehow configure the SocketIoProcessor or
> >> something? How should it scale on a multi-CPU machine?
> >>
> >> Thanks for any kind of help in advance!
> >>
> >
> >
>
>
>
