Thanks! I found that thread
(http://www.nabble.com/1000%2B-simultaneous-connections-with-data-transfer--tf3607564s16868.html)
very useful!

Summarising the tips given in that thread:

1. Use a fixed-size thread pool executor for IoServices, e.g. for IoAcceptor:
acceptor = new SocketAcceptor(Runtime.getRuntime().availableProcessors() + 1,
        Executors.newFixedThreadPool(50));

But http://mina.apache.org/configuring-thread-model.html says:

"Executors.newCachedThreadPool() is always preferred by IoService. It is
because using other thread pool type can lead to unpredictable performance
side effect in IoService. Once all threads in the pool become in use,
IoService will start to block while it tries to acquire a thread from the
pool and to start to show weird performance degradation, which is sometimes
very hard to trace."

What value should I choose for the fixed pool size (50 in the example above)
to avoid IoService blocking?
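
For comparison, here is a minimal sketch of the cached-pool setup that the
docs recommend, reusing the processor count from the example above:

import java.util.concurrent.Executors;
import org.apache.mina.transport.socket.nio.SocketAcceptor;

// A cached pool grows on demand, so the IoService never has to block
// waiting for a free worker thread (per the thread-model docs quoted above).
SocketAcceptor acceptor = new SocketAcceptor(
        Runtime.getRuntime().availableProcessors() + 1,
        Executors.newCachedThreadPool());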

2. Reduce the size of send and receive buffers:
((SocketSessionConfig) cfg.getSessionConfig()).setReceiveBufferSize(512);
((SocketSessionConfig) cfg.getSessionConfig()).setSendBufferSize(512); 

What are the most suitable values if the messages in my protocol are quite
short - from 8 bytes to, let's say, 128 bytes? Is 64 bytes OK? Or can the
buffer hold several messages, so it needs to be larger?
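
My understanding is that these setters map to the kernel's per-connection
SO_RCVBUF/SO_SNDBUF, which live for the socket's whole lifetime rather than
holding a single message, so I suspect 64 bytes would be too small. A rough
sketch of what I mean (the burst factor is just my guess, and cfg is the
config object from the example above):

import org.apache.mina.transport.socket.nio.SocketSessionConfig;

// SO_RCVBUF/SO_SNDBUF are per-connection kernel buffers reused for the
// socket's whole lifetime, so size them for a burst of messages in
// flight, not for a single message.
int maxMessageSize = 128; // largest message in my protocol
int burstFactor = 4;      // guess: messages in flight per client
SocketSessionConfig sessionCfg = (SocketSessionConfig) cfg.getSessionConfig();
sessionCfg.setReceiveBufferSize(maxMessageSize * burstFactor); // = 512 bytes
sessionCfg.setSendBufferSize(maxMessageSize * burstFactor);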

3. Try Linux due to its better socket performance.
I'll try it as soon as I have a test machine for it. I'm also going to set
some aggressive JVM options like:
... -Xms512m -Xmx512m -Xss128k -XX:+AggressiveOpts -XX:+UseParallelGC
-XX:+UseBiasedLocking -XX:NewSize=64m


But... as I mentioned in my first post, the profiler shows that the
bottleneck is the sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(long, int,
int[], int[], int[], long) method, taking 64% of total execution time. Are
there any special tweaks for it?.. Well, I suppose suggestion no. 3 can
help, because the problem seems to be in sun.nio.ch.WindowsSelectorImpl...
Am I right?
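
In the meantime, one thing I can try is spreading the sessions across more
SocketIoProcessor instances - as far as I know each one runs its own selector
thread, so every select() call would poll fewer channels. Just a sketch; the
2x-cores count is my own guess:

import java.util.concurrent.Executors;
import org.apache.mina.transport.socket.nio.SocketAcceptor;

// Each SocketIoProcessor runs its own selector, so more processors
// means fewer channels polled per select() call on Windows.
int ioProcessors = Runtime.getRuntime().availableProcessors() * 2; // guess
acceptor = new SocketAcceptor(ioProcessors, Executors.newCachedThreadPool());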




mat-29 wrote:
> 
> Please search this thread:
> *1000+ simultaneous connections with data transfer*
> 
> On 9/6/07, velytreuzien <[EMAIL PROTECTED]> wrote:
> 
>>
>> No, it's an echo over my custom protocol. CPU usage is full - 100%. I've
>> tried it on an Athlon 3500+.
>>
>> Yes, I used the given lines for ByteBuffer, but the latest version works
>> with PooledByteBufferAllocator instead of Simple* - nothing changes, in
>> general.
>>
>> Today I've managed to run some tests on a Dual Xeon 2.8 HT (4 virtual
>> cores) with 4 threads for SocketAcceptor:
>> acceptor = new SocketAcceptor(4, Executors.newCachedThreadPool());
>> It goes well up to 1300-1500 clients, but then it's the same story.
>>
>>
>> mat-29 wrote:
>> >
>> > Did you use the echo example? What about the CPU usage? What's your CPU
>> > model?
>> >
>> > Did you add?
>> > ByteBuffer.setUseDirectBuffers(false);
>> > ByteBuffer.setAllocator(new SimpleByteBufferAllocator());
>> >
>> >
>> > On 9/6/07, velytreuzien <[EMAIL PROTECTED]> wrote:
>> >>
>> >>
>> >> I'm running my application on a single-CPU machine in a 100 Mbit LAN.
>> >> Three other workstations in the LAN are emulating multiple clients.
>> >>
>> >> The server uses a custom protocol codec filter and a custom handler.
>> >> The configuration is: manual thread model, one thread for
>> >> SocketAcceptor, no ExecutorFilter in the filter chain (because with
>> >> the ExecutorFilter the total thread count increases rapidly up to
>> >> hundreds!! Why?..)
>> >>
>> >> The fifth workstation in the LAN is running a test application, also
>> >> emulating a client, and measures the server's echo response time.
>> >> The average results are:
>> >> 1 client - 300 ms
>> >> 333 clients - 400-500 ms
>> >> 666 clients - 1000-1500 ms
>> >> 1000 clients - strange fluctuations in 7-20 s range
>> >>
>> >> So we can clearly see the problem comes as the client count increases.
>> >>
>> >> After thorough profiling I've managed to eliminate all hot spots
>> >> from my own code (anyway, almost nothing changed), and now the
>> >> performance is stuck at the
>> >> sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(long, int, int[],
>> >> int[], int[], long) method, taking 64% of total execution time. It's
>> >> enormous, because my own logic thread takes only 16% of the total, and
>> >> the asynchronous writing thread (that responds to clients) takes 11%.
>> >>
>> >> The call tree descending to the mentioned method is:
>> >> java.lang.Thread.run() 71%
>> >>   java.util.concurrent.ThreadPoolExecutor$Worker.run() 71%
>> >>     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Runnable) 71%
>> >>       org.apache.mina.util.NamePreservingRunnable.run() 71%
>> >>         org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run() 71%
>> >>           sun.nio.ch.SelectorImpl.select(long) 70%
>> >>             sun.nio.ch.SelectorImpl.lockAndDoSelect(long) 70%
>> >>               sun.nio.ch.WindowsSelectorImpl.doSelect(long) 70%
>> >>                 sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl$SubSelector) 64%
>> >>                   sun.nio.ch.WindowsSelectorImpl$SubSelector.poll() 64%
>> >>                     sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(long, int, int[], int[], int[], long) 64%
>> >>
>> >> Is there any workaround? Maybe somehow configure SocketIoProcessor or
>> >> something?.. How should it scale on a multi-CPU machine?
>> >>
>> >> Thanks for any kind of help in advance!