I have developed a server application to process charging messages that arrive in a binary format. The server is built on the MINA core libraries, following the state machine pattern.

In my scenario, once a client initiates a connection, it is never closed. The client keeps the connection alive by periodically sending echo messages, each 24 bytes in size. When the first message hits the server, the server allocates a 30000-byte buffer and reads the content into it.

An echo message reaches the server every 30 seconds. The MINA framework watches this behaviour and starts halving the read buffer [30000 -> 15000 -> 7500 -> 3750 -> ... -> 58]. It settles at 58 bytes, since the incoming messages are only 24 bytes long.
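To illustrate the arithmetic, here is a toy model of the shrinking as I observe it. This is only my paraphrase of what the framework appears to do, not actual MINA source; the 58-byte floor is simply what I see in practice.

    public class BufferShrinkModel {
        public static void main(String[] args) {
            int bufferSize = 30000; // initial allocation on the first message
            int floor = 58;         // lower bound I observe in practice
            int echoBytes = 24;     // size of the periodic echo message
            for (int i = 1; i <= 12; i++) {
                // each echo fills less than half the buffer, so it is halved
                if (echoBytes * 2 < bufferSize && bufferSize / 2 >= floor) {
                    bufferSize /= 2; // 30000 -> 15000 -> ... -> 58
                }
                System.out.printf("after echo %2d: buffer = %d%n", i, bufferSize);
            }
        }
    }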

The problem starts when an actual message arrives, which is 704 bytes long. Since the read buffer is only 58 bytes, the server reads just the first 58 bytes of the message. Only then does the framework realize the buffer is too small and start doubling it, but by that point it is too late to react: the decoder state machine has already rejected the message because it does not tally with the size stated in its header.
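To make the failure concrete, here is a simplified illustration of the kind of length check that rejects the truncated read. The method name and the 4-byte length field are mine for illustration only; my real decoder is a MINA state machine.

    import org.apache.mina.core.buffer.IoBuffer;

    public class FrameCheckSketch {
        // Returns true only when the buffer holds the complete frame.
        static boolean frameLooksComplete(IoBuffer in) {
            if (in.remaining() < 4) {
                return false; // not even a full header yet
            }
            // peek the frame length declared in the header
            int declaredLength = in.getInt(in.position());
            // with a 58-byte read buffer, remaining() is 58 while the
            // header declares 704, so the frame is judged invalid
            return in.remaining() >= declaredLength;
        }
    }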

So, is there a way to stop this optimization? I cannot live with it given the message flow described above. If there is a configuration setting that disables it, please let me know; otherwise, please point me to where in the source code I should make a modification to stop this optimization.
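For what it is worth, this is what I was hoping a configuration fix would look like. I am assuming MINA 2.x here, where IoSessionConfig exposes setReadBufferSize and setMinReadBufferSize; 1024 is just a value comfortably above my largest frame (704 bytes). Would pinning the minimum like this actually stop the halving, or is a source change still required?

    import org.apache.mina.core.session.IoSessionConfig;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class AcceptorConfigSketch {
        public static void main(String[] args) {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();
            IoSessionConfig config = acceptor.getSessionConfig();
            config.setReadBufferSize(1024);    // initial read buffer size
            config.setMinReadBufferSize(1024); // floor for the halving logic
            // ... codec filter, handler and bind() omitted ...
        }
    }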

