Re: New Wiki documentation organization

2010-02-15 Thread Emmanuel Lecharny

On 2/15/10 4:41 AM, Ashish wrote:

I was migrating some pages to the new wiki and had some thoughts:

1. We can move all the current pages (in the new wiki) from under
Documentation to a new User Guide page, which shall be a child of
Documentation.
2. We can also gather all the FAQs and other documentation-related
material under one roof, Documentation.

This should help us keep all the documentation material in one place.

Something like

Documentation
   -  User Guide
   --  ...

   -  FAQ
   -  ...

wdyt?
   

I like the idea !

We could also add a menu where we expose common samples.


--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com




Huge contention problem

2010-02-15 Thread Emmanuel Lecharny

Hi guys,

yesterday, I did some experiments with the code provided by Omry on 
DIRMINA-762. I ran several tests, did some profiling, and what I got was 
horrific.


Basically, I never got above 15,000 messages/s on my pretty fast 
computer (dual core). The CPU was at 100%, with almost 85% system time: 
a contention problem. When I tried to send messages without any delay, I 
saturated the server memory in a matter of seconds (about 30); the server 
wasn't able to serve responses fast enough, and they all piled up in the 
writer queue.


This is very bad. We have to analyse what's going on and fix it asap. 
Actually, with these performance issues, using MINA 2.0 in production is 
an absolute no-go.


We should also build some stress scenarios that can be run against 
different versions of MINA, in order to be able to compare numbers. 
Running those scenarios against other stacks (Netty 3, Grizzly) can 
also help us get some clues about the kind of performance we can expect.


Sadly, I don't see how we can possibly fix that in the next couple of days.

Anyway, let's start working on the issue now...

--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com




First sent message is not garbage collected per session.

2010-02-15 Thread Michelberger, Joerg
Hi there,

I did a heap dump of my application containing MINA 2.0.0-RC1 and found a lot 
of my messages not being garbage collected.
I use ProtocolCodecFilter.
After an investigation with VisualVM, I found my already-sent messages stuck 
in a DefaultWriteRequest as the attribute "message".
The DefaultWriteRequest is the attribute "writeRequest" in 
ProtocolCodecFilter$ProtocolEncoderOutputImpl.
ProtocolCodecFilter$ProtocolEncoderOutputImpl is stuck under the key ENCODER_OUT 
in the session's attribute map.
The WriteRequest in ProtocolCodecFilter$ProtocolEncoderOutputImpl is not 
released after flushing...
The attribute ENCODER_OUT is never removed from the attributes.
It seems that only the first message gets stuck in ProtocolEncoderOutputImpl, 
as a result of the constructor call at ProtocolCodecFilter line 543. The 
ProtocolEncoderOutputImpl is stored for later use at ProtocolCodecFilter line 
298, but only to provide the public void write(Object encodedMessage) API.

Hmm, ProtocolEncoderOutputImpl should not store the whole WriteRequest, only 
the significant data.
If there is login data inside this message, it is kept in memory for the whole 
session lifetime.

ProtocolCodecFilter.java

    private ProtocolEncoderOutput getEncoderOut(IoSession session,
            NextFilter nextFilter, WriteRequest writeRequest) {
        ProtocolEncoderOutput out =
                (ProtocolEncoderOutput) session.getAttribute(ENCODER_OUT);

        if (out == null) {
            // Create a new instance, and stores it into the session
            out = new ProtocolEncoderOutputImpl(session, nextFilter,
                    writeRequest);
            session.setAttribute(ENCODER_OUT, out);
        }

        return out;
    }
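
A minimal illustrative sketch (not MINA's actual code; the class and method 
names below are invented) of the direction suggested above: keep the 
per-session output object, but drop the heavy message reference once the flush 
is done, so the encoded data can be garbage collected even though the output 
object lives as long as the session.

    // Illustrative only, not part of MINA.
    class LeakFreeEncoderOutput {
        private Object pendingMessage;   // only what is needed for the flush

        void prepare(Object message) {
            this.pendingMessage = message;
        }

        void flush() {
            try {
                // ... hand pendingMessage to the next filter here ...
            } finally {
                // release the reference so the message can be collected
                this.pendingMessage = null;
            }
        }
    }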

Regards
Jörg Michelberger

About CircularQueue

2010-02-15 Thread Emmanuel Lecharny

Hi guys,

yesterday I removed all references to the non-thread-safe CircularQueue 
data structure, and replaced them with ConcurrentLinkedQueue.


Not only is this a comparable data structure, it's also thread-safe, 
and tested.


Now, the question: should we remove the CircularQueue data structure 
from the code base, assuming that it should only be used by MINA core 
and not by users, or simply deprecate it?


I also have some concerns about the use of HashMap in MINA. We have 
149 references to this data structure, which is really slow when used in a 
multi-threaded environment. I suggest we replace all of those references 
in core with ConcurrentHashMap 
(http://www.javamex.com/tutorials/synchronization_concurrency_8_hashmap.shtml). 
The very same goes for HashSet, which is also used a lot.
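
For illustration, a minimal sketch of the kind of drop-in replacement being 
discussed (the class and field names below are invented); as long as the code 
is written against the Queue and Map interfaces, swapping the implementation 
is a one-line change per field:

    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class SessionRegistry {
        // before: CircularQueue / HashMap, neither of which is safe when
        // touched by several IoProcessor threads at the same time
        private final Queue<Runnable> pendingEvents =
                new ConcurrentLinkedQueue<Runnable>();
        private final Map<Long, Object> sessions =
                new ConcurrentHashMap<Long, Object>();

        void offerEvent(Runnable event) {
            pendingEvents.offer(event);    // lock-free, thread-safe
        }

        Runnable pollEvent() {
            return pendingEvents.poll();   // returns null when empty
        }

        void addSession(long id, Object session) {
            sessions.put(id, session);     // safe under concurrent access
        }
    }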


Thoughts ?

--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com




[MINA 3.0] Acceptor/Connector

2010-02-15 Thread Emmanuel Lecharny

Hi guys,

since day one, I have found that Acceptor/Connector are technical names, not 
user-friendly names.


Let's face the real world: we are not developing Acceptors or 
Connectors, but Servers and Clients. Can't we rename those two guys to 
IoServer and IoClient instead of IoAcceptor and IoConnector?


I know this is just cosmetic, but if it helps people understand the 
kind of objects they are manipulating, I think it would be worth the change...


thoughts ?

--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com




[jira] Commented: (DIRMINA-681) Strange CPU peak occuring at fixed interval when several thousand connections active

2010-02-15 Thread Mauritz Lovgren (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833777#action_12833777
 ] 

Mauritz Lovgren commented on DIRMINA-681:
-

I performed a new test run from trunk today, and after the last few check-ins 
(revisions after 27.jan.2010), the performance problem seems to be gone.
The CPU peaks are still present, though. Did you simply revert the epoll fix in 
trunk?

 Strange CPU peak occuring at fixed interval when several thousand connections 
 active
 

 Key: DIRMINA-681
 URL: https://issues.apache.org/jira/browse/DIRMINA-681
 Project: MINA
  Issue Type: Task
  Components: Core
Affects Versions: 2.0.0-M4, 2.0.0-RC1
 Environment: Windows Vista Ultimate 64-bit (on 64-bit Sun JDK 
 1.6.0_18). Intel Core 2 Quad Core Q9300 2,5 GHz, 8 GB RAM
Reporter: Mauritz Lovgren
 Fix For: 2.0.0

 Attachments: screenshot-1.jpg, screenshot-2.jpg, screenshot-3.jpg, 
 screenshot-4.jpg


 Observing strange CPU activity occurring at a regular (seemingly fixed) 
 interval with no protocol traffic activity.
 See the attached window capture of the task manager that shows this with 3000 
 active connections.
 Is there some kind of cleanup occurring within MINA core at a predefined 
 interval?
 The 3000 connections in the example above connect within 250 seconds. A 
 normal situation would be that these connections are established over a 
 longer period of time, perhaps spreading the CPU peaks shown above as well, 
 flattening the curve.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [MINA 3.0] Acceptor/Connector

2010-02-15 Thread Julien Vermillard
On Mon, 15 Feb 2010 12:20:49 +0100,
Emmanuel Lecharny elecha...@gmail.com wrote:

 Hi guys,
 
 since day one, I found that Acceptor/Connector are technical names,
 not user friendly names.
 
 Let's face the real world : we are not developping Acceptors, not 
 Connectors, but Servers and Clients. Can't we rename those two guys
 to IoServer and IoClient instead of IoAcceptor and IoConnector ?
 
 I know this is just cosmetic, but if it helps people to understand
 the kind of objects they are manipulating, I think it would worth the
 change...
 
 thoughts ?
 

+1



-- 
Julien Vermillard

Archean Technologies
http://www.archean.fr


signature.asc
Description: PGP signature


[jira] Commented: (DIRMINA-681) Strange CPU peak occuring at fixed interval when several thousand connections active

2010-02-15 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833789#action_12833789
 ] 

Emmanuel Lecharny commented on DIRMINA-681:
---

No. I simply fixed the patch :)

I had made a mistake: to detect whether the selector was dead, I checked 3 things:
- first, the select(1000) should return 0
- second, it should return 0 immediately
- third, it should not have been woken up

For the third condition, I used a boolean flag which was set to true when the 
IoProcessor was woken up, but in that case I forgot to reset it after the test 
(i.e., when select returns 0 in 0 ms after it has been woken up). The consequence 
was that we just went back to the select(1000), doing nothing at all, forever.

I fixed that.
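
For the record, a hedged sketch of that detection logic with the reset in 
place (wakeupCalled and registerNewSelector() are assumed names, not 
necessarily the ones used in the actual IoProcessor code):

    long t0 = System.currentTimeMillis();
    int selected = selector.select(1000);
    long delta = System.currentTimeMillis() - t0;

    if ((selected == 0) && (delta < 100)) {
        // select() returned 0 immediately: either we were woken up on
        // purpose, or the selector is spinning (the epoll bug)
        if (wakeupCalled.getAndSet(false)) {
            // a legitimate wakeup; getAndSet(false) also resets the flag,
            // which is exactly the reset that was missing and made the
            // loop go back to select(1000) forever, doing nothing
        } else {
            // all three conditions met: the selector is considered dead
            registerNewSelector();
        }
    }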




[jira] Created: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)
DDOS possible in only a few seconds...
--

 Key: DIRMINA-764
 URL: https://issues.apache.org/jira/browse/DIRMINA-764
 Project: MINA
  Issue Type: Bug
Affects Versions: 2.0.0-RC1
Reporter: Emmanuel Lecharny
Priority: Blocker
 Fix For: 2.0.0


We can kill a server in just a few seconds using the stress test found in 
DIRMINA-762.

If we inject messages with no delay, using 50 threads to do that, the 
ProtocolCodecFilter$MessageWriteRequest is stuffed with hundreds of thousands 
of messages waiting to be written back to the client, with no success.

On the client side, we receive almost no messages :
0 messages/sec (total messages received 1)
2 messages/sec (total messages received 11)
8 messages/sec (total messages received 55)
8 messages/sec (total messages received 95)
9 messages/sec (total messages received 144)
3 messages/sec (total messages received 162)
1 messages/sec (total messages received 169)
...

On the server side, the memory is totally swamped in 20 seconds, with no way to 
recover :
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: Java heap 
space

(see graph attached)

On the server, a ConcurrentLinkedQueue contains the messages to be written (in 
my case, 724,499 Nodes are present). There are also 361,629 DefaultWriteRequests, 
361,628 DefaultWriteFutures, 361,625 SimpleBuffers, 361,618 
ProtocolCodecFilter$MessageWriteRequests and 361,614 
ProtocolCodecFilter$EncodedWriteRequests.

That means we don't flush them to the client at all. 


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)

 [ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emmanuel Lecharny updated DIRMINA-764:
--

Attachment: screenshot-1.jpg

The CPU when running the test : 100%, with a lot of System CPU




[jira] Updated: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)

 [ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emmanuel Lecharny updated DIRMINA-764:
--

Attachment: screenshot-2.jpg

The memory consumption. All the memory is eaten in 20 seconds.




[jira] Commented: (DIRMINA-762) WARN org.apache.mina.core.service.IoProcessor - Create a new selector. Selected is 0, delta = 0

2010-02-15 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833798#action_12833798
 ] 

Emmanuel Lecharny commented on DIRMINA-762:
---

I moved this issue to https://issues.apache.org/jira/browse/DIRMINA-764

 WARN org.apache.mina.core.service.IoProcessor  - Create a new selector. 
 Selected is 0, delta = 0
 

 Key: DIRMINA-762
 URL: https://issues.apache.org/jira/browse/DIRMINA-762
 Project: MINA
  Issue Type: Bug
 Environment: Linux (2.6.26-2-amd64),  java version 1.6.0_12 and also 
 1.6.0_18.
Reporter: Omry Yadan
Priority: Critical
 Fix For: 2.0.0-RC2

 Attachments: BufferCodec.java, NettyTestServer.java, 
 RateCounter.java, Screen shot 2010-02-02 at 7.48.39 PM.png, Screen shot 
 2010-02-02 at 7.48.46 PM.png, Screen shot 2010-02-02 at 7.48.59 PM.png, 
 Screen shot 2010-02-02 at 7.49.13 PM.png, Screen shot 2010-02-02 at 7.49.18 
 PM.png, Server.java, StressClient.java


 Mina server gets into a bad state where it constantly prints:
 WARN org.apache.mina.core.service.IoProcessor  - Create a new selector. 
 Selected is 0, delta = 0
 When this happens, the server throughput drops significantly.
 To reproduce, run the attached server and client for a short while (30 seconds 
 on my box).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Victor N (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833804#action_12833804
 ] 

Victor N commented on DIRMINA-764:
--

Emmanuel, are your clients in this test fast enough to read at the speed 
proposed by the server? Also, is the network between the server and the client 
fast enough?
Maybe the read buffer is too small in the client? I do not see it configured in 
the stress client.
I would say that it is typical - when a server writes into a socket too quickly 
for the client to read at that speed, the server will die with an 
OutOfMemory :)
You need to throttle/limit the write speed somehow. As far as I know, in mina 
the writeRequestQueue in IoSession is unbounded :(
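
If the client-side read buffer is the suspect, a quick check would be to bump 
the socket's receive buffer before connecting. A hedged sketch with plain 
java.net (host, port and size are made up, and this is not taken from the 
attached StressClient):

    Socket socket = new Socket();
    // ask the OS for a larger TCP receive window; it must be set before
    // connect(), and the size actually granted can be checked afterwards
    // with getReceiveBufferSize()
    socket.setReceiveBufferSize(256 * 1024);
    socket.connect(new InetSocketAddress("localhost", 8080));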




[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833811#action_12833811
 ] 

Emmanuel Lecharny commented on DIRMINA-764:
---

The read is blocking, so I guess it reads as soon as something comes back...

The network is fast enough, fortunately, as I ran the test locally!

Also, the messages are 9 bytes long. No need for extra-large buffers here :/

I think there is a huge problem in the way the server handles the channel being 
ready for write: it seems to send just one single message. I have to check 
that, though.




[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Victor N (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833869#action_12833869
 ] 

Victor N commented on DIRMINA-764:
--

I am not 100% sure, but IMHO when you run the stress clients and the server on 
the same host, so that the CPU and I/O activity are both high, the test results 
may be skewed.
I would propose running the same test in a LAN environment - all clients on a 
separate machine, or even on multiple machines.

As for TCP buffers, they do not depend on how you use your socket - via 
blocking or non-blocking I/O, locally or remotely. If your client works slowly 
(under high load on your computer), it will read slowly; in addition, if it has 
a small TCP buffer for reading, the whole TCP transmission stalls and the 
server will not send to the socket anymore (remember how the congestion 
control algorithm in TCP works?).

Of course, maybe this is not the case in your test, so it would be useful to 
compare with another mina build before you start digging into the code ;)




[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833873#action_12833873
 ] 

Emmanuel Lecharny commented on DIRMINA-764:
---

Ok, there is a slight problem in the client: we don't wait for the 
response, we immediately send another message. The server does not have time 
to send the response, as it is pounded with new requests.

I have slightly modified the client code to wait until some bytes are 
available, instead of immediately sending a new message.

The server is now stable, dealing with around 12,000 messages per second. No 
OOM, but a very high CPU consumption.

Sadly, I can't tell if the system CPU is caused by the server at this point. I 
have to run the test on different machines.

However, I'm keeping this issue open, because a malevolent client can kill a 
mina server in a matter of seconds. This has to be fixed.
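
Roughly what that client-side change looks like - a hedged sketch using plain 
blocking sockets (method and variable names are invented, this is not the 
actual StressClient from DIRMINA-762; java.io imports omitted):

    // Send a request, then block until the fixed-size (9-byte) response has
    // been fully read, before sending the next request.
    void pingPong(InputStream in, OutputStream out, byte[] request)
            throws IOException {
        byte[] response = new byte[request.length];
        while (true) {
            out.write(request);
            out.flush();

            int read = 0;
            while (read < response.length) {
                int n = in.read(response, read, response.length - read);
                if (n < 0) {
                    throw new EOFException("server closed the connection");
                }
                read += n;
            }
        }
    }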




Re : About CircularQueue

2010-02-15 Thread Edouard De Oliveira
If CircularQueue is a core class then we don't have to support anything 
external using it.
If someone needs it for their own code, it can be extracted from the svn history.

HashMap, HashSet:
as long as the interfaces are used and performance is improved, I think it's 
pretty clearly
a good move IMHO
 Cordialement, Regards,
-Edouard De Oliveira-
Blog: http://tedorgwp.free.fr
WebSite: http://tedorg.free.fr/en/main.php



----- Original Message -----
From: Ashish paliwalash...@gmail.com
To: dev@mina.apache.org; elecha...@apache.org
Sent: Mon, 15 February 2010, 11:54:51
Subject: Re: About CircularQueue

On Mon, Feb 15, 2010 at 4:12 PM, Emmanuel Lecharny elecha...@gmail.com wrote:
 Hi guys,

 yesterday I removed all references to the non-thread-safe CircularQueue data
 structure, and replaced it with a ConcurrentLinkedQueue.
 Not only this is a comparable data structure, but it's also thread safe, and
 tested.

 Now, the question : should we remove the CircularQueue data structure from
 the code base, assuming that it should only be used by MINA core, and not by
 user, or simply deprecate it ?


Will marking it as deprecated help anyone?
We have removed it from core :-). I think we need to ask this question
on the user ML as well

 I also have some concern about the existence of HashMap in MINA. We have 149
 references to this data structure which is really slow when run on
 mlti-threaded environment. I suggest we replace all of those references in
 core by ConcurrentHashMap
 (http://www.javamex.com/tutorials/synchronization_concurrency_8_hashmap.shtml).
 The very same for HashSet which is also used a lot.

 Thoughts ?

hmm.. would love to get rid of them. I hope it doesn't break anything :-)

thanks
ashish







[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833878#action_12833878
 ] 

Emmanuel Lecharny commented on DIRMINA-764:
---

Victor, you are perfectly right.

My intention is to build a test environment, as I have a 4-way CPU with 16 GB of 
RAM, 5 injectors, and a Gb Ethernet network. On my local machine, I'm most 
certainly bound by the clients, which are eating 2/3 of the CPU.

Right now, I'm just worried about the server crash I get.




Re : [MINA 3.0] Acceptor/Connector

2010-02-15 Thread Edouard De Oliveira
Absolutely right here 
BIG +1

 Cordialement, Regards,
-Edouard De Oliveira-
Blog: http://tedorgwp.free.fr
WebSite: http://tedorg.free.fr/en/main.php



----- Original Message -----
From: Julien Vermillard jvermill...@archean.fr
To: dev@mina.apache.org
Sent: Mon, 15 February 2010, 13:30:37
Subject: Re: [MINA 3.0] Acceptor/Connector

On Mon, 15 Feb 2010 12:20:49 +0100,
Emmanuel Lecharny elecha...@gmail.com wrote:

 Hi guys,
 
 since day one, I found that Acceptor/Connector are technical names,
 not user friendly names.
 
 Let's face the real world : we are not developping Acceptors, not 
 Connectors, but Servers and Clients. Can't we rename those two guys
 to IoServer and IoClient instead of IoAcceptor and IoConnector ?
 
 I know this is just cosmetic, but if it helps people to understand
 the kind of objects they are manipulating, I think it would worth the
 change
 
 thoughts ?
 

+1



-- 
Julien Vermillard

Archean Technologies
http://www.archean.fr







Re: [MINA 3.0] Acceptor/Connector

2010-02-15 Thread Ashish
On Mon, Feb 15, 2010 at 4:50 PM, Emmanuel Lecharny elecha...@gmail.com wrote:
 Hi guys,

 since day one, I found that Acceptor/Connector are technical names, not user
 friendly names.

 Let's face the real world : we are not developping Acceptors, not
 Connectors, but Servers and Clients. Can't we rename those two guys to
 IoServer and IoClient instead of IoAcceptor and IoConnector ?

 I know this is just cosmetic, but if it helps people to understand the kind
 of objects they are manipulating, I think it would worth the change...

 thoughts ?


+1 on name change

I have a little hesitation with IoServer/IoClient; it kind of gives the
impression of a complete implementation, but I don't have a better
suggestion either

-- 
thanks
ashish


Re: [MINA 3.0] Acceptor/Connector

2010-02-15 Thread Julien Vermillard
On Mon, 15 Feb 2010 22:10:12 +0530,
Ashish paliwalash...@gmail.com wrote:

 On Mon, Feb 15, 2010 at 4:50 PM, Emmanuel Lecharny
 elecha...@gmail.com wrote:
  Hi guys,
 
  since day one, I found that Acceptor/Connector are technical names,
  not user friendly names.
 
  Let's face the real world : we are not developping Acceptors, not
  Connectors, but Servers and Clients. Can't we rename those two guys
  to IoServer and IoClient instead of IoAcceptor and IoConnector ?
 
  I know this is just cosmetic, but if it helps people to understand
  the kind of objects they are manipulating, I think it would worth
  the change...
 
  thoughts ?
 
 
 +1 on name change
 
 Have a little hesitation with IoServer/IoClient, it kindof gives an
 impression of complete implementation, but don't have a better
 suggestion either
 

an IoServer with an IoHandler and the chain is a complete
implementation, no?

-- 
Julien Vermillard

Archean Technologies
http://www.archean.fr


signature.asc
Description: PGP signature


Re: [MINA 3.0] Acceptor/Connector

2010-02-15 Thread Ashish
On Mon, Feb 15, 2010 at 10:14 PM, Julien Vermillard
jvermill...@archean.fr wrote:
 On Mon, 15 Feb 2010 22:10:12 +0530,
 Ashish paliwalash...@gmail.com wrote:

 On Mon, Feb 15, 2010 at 4:50 PM, Emmanuel Lecharny
 elecha...@gmail.com wrote:
  Hi guys,
 
  since day one, I found that Acceptor/Connector are technical names,
  not user friendly names.
 
  Let's face the real world : we are not developping Acceptors, not
  Connectors, but Servers and Clients. Can't we rename those two guys
  to IoServer and IoClient instead of IoAcceptor and IoConnector ?
 
  I know this is just cosmetic, but if it helps people to understand
  the kind of objects they are manipulating, I think it would worth
  the change...
 
  thoughts ?
 

 +1 on name change

 Have a little hesitation with IoServer/IoClient, it kindof gives an
 impression of complete implementation, but don't have a better
 suggestion either


 an IoServier With an IoHandler and the chain is a complete
 implementation no ?

 --
 Julien Vermillard

hmm.. yeah, I didn't think of it that way.. :-)

Hey, are we planning this for 2.0 or 3.0?


Re: [MINA 3.0] Acceptor/Connector

2010-02-15 Thread Julien Vermillard
On Mon, 15 Feb 2010 22:17:49 +0530,
Ashish paliwalash...@gmail.com wrote:

 On Mon, Feb 15, 2010 at 10:14 PM, Julien Vermillard
 jvermill...@archean.fr wrote:
  On Mon, 15 Feb 2010 22:10:12 +0530,
  Ashish paliwalash...@gmail.com wrote:
 
  On Mon, Feb 15, 2010 at 4:50 PM, Emmanuel Lecharny
  elecha...@gmail.com wrote:
   Hi guys,
  
   since day one, I found that Acceptor/Connector are technical
   names, not user friendly names.
  
   Let's face the real world : we are not developping Acceptors, not
   Connectors, but Servers and Clients. Can't we rename those two
   guys to IoServer and IoClient instead of IoAcceptor and
   IoConnector ?
  
   I know this is just cosmetic, but if it helps people to
   understand the kind of objects they are manipulating, I think it
   would worth the change...
  
   thoughts ?
  
 
  +1 on name change
 
  Have a little hesitation with IoServer/IoClient, it kindof gives an
  impression of complete implementation, but don't have a better
  suggestion either
 
 
  an IoServier With an IoHandler and the chain is a complete
  implementation no ?
 
  --
  Julien Vermillard
 
 hmm.. yeah didn't thought this way.. :-)
 
 Hey are we planning this in 2.0 or 3.0?

in 3.0, because the 2.0 API is frozen

-- 
Julien Vermillard

Archean Technologies
http://www.archean.fr


signature.asc
Description: PGP signature


[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Omry Yadan (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833886#action_12833886
 ] 

Omry Yadan commented on DIRMINA-764:


Looks like a minor client bug indeed, which would manifest itself if the server 
is slow.
I don't think running on the same machine is really the issue here: when I run 
the same stress client against a Netty test server which does exactly the same 
thing (also attached to 762), I get a throughput of 200k-300k messages/sec.





[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Victor N (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833888#action_12833888
 ] 

Victor N commented on DIRMINA-764:
--

 a malevolent client can kill a mina server in a matter of seconds. This has 
 to be fixed.

In fact, this is not a mina-specific problem; it is common in the networking 
world. But I agree, we should propose some solutions, e.g.:

1) writeRequestQueue may be bounded - somewhere we could configure its size and 
a policy for what to do when the queue is full (like in Executors)
2) some kind of (optional) write throttling - as I remember, mina already has 
an IoEventQueueThrottle class, but I have never used it and I do not know if it 
is up to date

If some client (an IoSession) is slow, that is, there are many events waiting 
for a socket write, it is the server application's responsibility to decide 
what to do - ignore new events, send some kind of warning to the client (hey 
mister, your network is too slow, you risk being disconnected!), maybe even 
disconnect the client after some time, etc. If the client and the server can 
negotiate in this situation, everything will work well. We did something like 
this for Flash clients using the Red5 server (based on mina) - we checked the 
writeRequestQueue (or calculated the number of pending write requests, maybe) 
and tuned the frame rate of the video stream; sometimes we sent a warning to 
the client :)

Of course, there may be bad clients trying to do a DDOS - this way we can also 
handle such situations.
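
For what it's worth, a hedged sketch of that application-level check using what 
IoSession already exposes (the thresholds and the reactions are invented, and 
I'm assuming getScheduledWriteMessages(), write() and close(boolean) behave as 
their names suggest):

    // Called periodically, or before queueing another write for the session.
    void checkSlowClient(IoSession session) {
        int pending = session.getScheduledWriteMessages(); // not yet flushed
        if (pending > 10000) {
            // hopeless: the client is far too slow, drop it
            session.close(true);
        } else if (pending > 1000) {
            // warn the client, or lower the frame rate as in the Red5 case
            session.write("WARNING: your connection is too slow");
        }
    }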




[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833891#action_12833891
 ] 

Emmanuel Lecharny commented on DIRMINA-764:
---

Netty deals with messages in a completely different way: it has 2 chains, one 
for incoming messages and one for outgoing messages (something MINA should have 
had since day one...). It allows for a much better throughput.

I haven't read Netty's code, but I also suspect that no copy is done, and that 
it does not use queues to transfer messages from one filter to another. That 
could help a lot.




[jira] Commented: (DIRMINA-764) DDOS possible in only a few seconds...

2010-02-15 Thread Victor N (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12833903#action_12833903
 ] 

Victor N commented on DIRMINA-764:
--

I found this on Netty's documentation page:

# No more OutOfMemoryError due to fast, slow or overloaded connection.
# No more unfair read / write ratio often found in a NIO application under 
high-speed network

This is what we should implement in mina 2.0 - protect ourselves from clients 
writing too quickly or reading too slowly.
Emmanuel, it seems that the unfair read / write ratio is what you have seen in 
your test!




Dealing with potential DDOS with MINA 2.0

2010-02-15 Thread Emmanuel Lecharny

Hi,

today we discussed DIRMINA-764, and solutions for dealing with 
rogue clients (which are not necessarily malevolent).


The problem is that a client which sends a lot of messages and does not 
read the responses fast enough will impact the server in a very bad way: 
at some point, you'll be hit by an OOM.


So the question arose of how to deal with such a situation. There are 
many things we can control:

- the number of clients per server
- the number of messages accepted from a client per unit of time
- the number of messages a client can have in the writing queue before we 
stop accepting new requests
- the size of the messages we accept from a client
- the number of messages in the writing queue
- the size of the messages being processed globally

All those parameters (and I may have missed some) have an impact on the 
server. The problem here is that we are at the limit between 
configuration and protection. If we decide we accept up to 100,000 
clients on a MINA server, then how do we set the other limits? What sizes 
should we allocate to handle the load?


Another problem is that if we limit the global number of messages being 
processed, or the global size, then we will have to select which clients 
to block.


Also, limiting the writeQueue size might slow down the processing.

Right now, in order to avoid a situation where the server simply dies, I 
suggest implementing a very simple strategy on the server: we add a 
parameter to the session config indicating the maximum number of 
messages allowed in the writeQueue for a specific session, before this 
session blocks new incoming messages (see the sketch below). This is easy 
to implement, and will protect us a bit against fast clients that are slow 
readers.
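
A rough sketch of that strategy as a filter, assuming MINA 2's IoFilterAdapter, 
IoSession.getScheduledWriteMessages() and suspendRead()/resumeRead() can be 
combined this way (untested, and the threshold handling is deliberately 
simplistic):

    import org.apache.mina.core.filterchain.IoFilterAdapter;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.core.write.WriteRequest;

    // Stops reading from a session whose write queue grows beyond a
    // configured maximum, and resumes once enough responses were flushed.
    public class WriteQueueGuardFilter extends IoFilterAdapter {
        private final int maxPendingWrites;

        public WriteQueueGuardFilter(int maxPendingWrites) {
            this.maxPendingWrites = maxPendingWrites;
        }

        @Override
        public void messageReceived(NextFilter nextFilter, IoSession session,
                Object message) throws Exception {
            if (session.getScheduledWriteMessages() >= maxPendingWrites) {
                // too many unflushed responses: stop accepting new requests
                // from this client until the queue drains
                session.suspendRead();
            }
            nextFilter.messageReceived(session, message);
        }

        @Override
        public void messageSent(NextFilter nextFilter, IoSession session,
                WriteRequest writeRequest) throws Exception {
            if (session.getScheduledWriteMessages() < maxPendingWrites / 2) {
                session.resumeRead();
            }
            nextFilter.messageSent(session, writeRequest);
        }
    }

Where exactly such a filter should sit in the chain (before or after the codec 
filter) is part of what we would need to discuss.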


We can think more about those typical use cases in MINA 3.

thoughts ?

--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com




[MINA 3.0] Init phase

2010-02-15 Thread Emmanuel Lecharny

Hi,

some random thoughts about server initialization. The way a server is 
initialized is by binding the addresses we want the server to listen on to 
an IoAcceptor. Note that this initialization is done in three steps:
- the first step is to bind the address. A future is created, put in a 
queue (registerQueue), and the acceptor is started, which does a select() 
with no timeout, and the selector is woken up. That is it for the first 
step, for the moment, as we are waiting on the future to be done.
- the second step kicks in when the selector is woken up. The 
registerHandles() method is called; it pulls the futures from the 
registerQueue, then for each of them it opens a ServerSocketChannel 
and initializes it. At the end, the channel is registered on the selector 
with the OP_ACCEPT flag, and the future is informed that it's done. The thread 
now continues processing incoming connection handles and unbind requests 
(unlikely to have any...), and at the end blocks on the select(), 
waiting for bind(), unbind() or connect events.
- the third step consists of firing all the attached listeners, one of 
them being the statistics listener.


And that's it. I tried to be dense here, but trust me, you need aspirin 
to the power of 2 to get a clue about what all this does.


Now, some ideas about MINA 3.0:
- first, in the three steps described, you can be sure that the third one 
is not done properly, as the listeners may perfectly well be executed 
*after* the server has started receiving some new connection requests, 
and may even have executed some processing on them, f*cking up the stats. 
This is bloody wrong, but, meh, this is just about stats, so who cares ???
- second, I'm not sure we want to bind a new address once the server is 
started. There may be some use cases, but in any case, I would not let 
the Acceptor thread deal with the bind operation
- third, and this is the major point: why do we have to use a future, a 
queue and some complex synchronization between two threads when all this 
can be done in one single thread? We can perfectly well create and configure 
the new ServerSocketChannel *before* registering it on the selector, up 
to the point where all the listeners and data structures have been processed. 
Then, and only then, we attach the channel to the selector (see the sketch 
below).
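
A hedged sketch of that single-threaded bind sequence in plain NIO (this is not 
MINA code, just the java.nio.channels calls involved):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;

    ServerSocketChannel bindSimply(Selector selector, int port) throws IOException {
        ServerSocketChannel channel = ServerSocketChannel.open();
        channel.configureBlocking(false);
        channel.socket().setReuseAddress(true);
        channel.socket().bind(new InetSocketAddress(port));

        // initialize listeners, statistics and data structures here,
        // before any connection can possibly be accepted

        // only now does the selector see the channel
        channel.register(selector, SelectionKey.OP_ACCEPT);
        return channel;
    }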


Really, I think MINA 2.0 is way too complex. Simple things should remain 
simple...


The unbind operation should also be handled in a simple way: it's just 
a matter of unregistering the channel from the selector, then handling 
the cleanup...


Did I miss something?

--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com