I think I'm not communicating my thoughts well enough. A single algorithm
can handle large data pipes and provide extremely low latency for variable,
small and large message sizes at the same time.

On the Producer side:
Application code should determine the block sizes that are pushed onto the
output queue. Logic would be as previously stated:
- write until there's nothing left to write, unregister for the write
event, return to event processing
- write until the channel is congestion controlled, stay registered for
the write event, return to event processing
This handles very low latency for 1K message blocks and ensures optimal
usage of a socket for large data blocks.
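In java.nio terms, the producer-side loop I'm describing looks roughly like the sketch below. The class and queue names are mine for illustration, not MINA API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.WritableByteChannel;
import java.util.Queue;

class WriteHandler {
    // Called when the selector fires the write event for this key.
    static void onWritable(SelectionKey key, Queue<ByteBuffer> outputQueue)
            throws IOException {
        WritableByteChannel channel = (WritableByteChannel) key.channel();
        while (true) {
            ByteBuffer buf = outputQueue.peek();
            if (buf == null) {
                // Nothing left to write: unregister for the write event
                // and return to event processing.
                key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
                return;
            }
            channel.write(buf);
            if (buf.hasRemaining()) {
                // Channel is congestion controlled (socket buffer full):
                // stay registered for the write event and return.
                return;
            }
            outputQueue.poll(); // block fully written, try the next one
        }
    }
}
```

Small blocks go out in one pass and the key drops OP_WRITE immediately; large blocks back off as soon as the socket stops accepting bytes.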

On the Consumer side:
Do a 64K non-blocking read of the channel when the read selector fires.
Don't loop reading until there's nothing left to read. Let the Selector
tell you when it's time to read again.
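The consumer side, sketched the same way (again, the class name is mine; one bounded read per event, no drain loop):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;

class ReadHandler {
    static final int READ_BUFFER_SIZE = 64 * 1024;

    // Called when the selector fires the read event for this key.
    // Returns the data read, or null if the peer closed the connection.
    static ByteBuffer onReadable(SelectionKey key) throws IOException {
        ReadableByteChannel channel = (ReadableByteChannel) key.channel();
        ByteBuffer buf = ByteBuffer.allocate(READ_BUFFER_SIZE);
        int n = channel.read(buf); // one bounded read, not a drain loop
        if (n < 0) {
            key.cancel(); // end of stream
            return null;
        }
        buf.flip();
        // Hand the buffer to the application; if more data is pending,
        // the Selector will simply fire the read event again.
        return buf;
    }
}
```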





On Thu, Dec 1, 2011 at 11:53 AM, Emmanuel Lecharny <elecha...@gmail.com> wrote:

> On 12/1/11 5:28 PM, Steve Ulrich wrote:
>
>> Hi (quickly reading),
>>
>> reading everything-you-can-get might starve the application logic.
>> We currently have some "realtime" stuff which must be transferred as
>> quickly as possible, but it's just some bytes (Biggest messages are 1K,
>> smallest about 10 bytes). This logic would increase roundtrip times to
>> numbers where we can shut our servers down.
>>
>
> Yes, Chad pointed out that it was not an option, so I reverted my changes.
>
>
>> In such a setup it would be nice if every 1K ByteBuffer is pushed to the
>> chain, since in most cases it's a full message and waiting any longer just
>> increases roundtrip times.
>> In this case, streaming big data would be very inefficient, so don't
>> expect a simple solution that fits all problems.
>>
>
> Right now, we use one single buffer associated with the selector, and it's
> now set to 64Kb, so it works for streaming big data as small ones. We can
> make this size configurable.
>
>
>> Maybe the application/decoder logic should set some hints to the
>> Processor on a session base. This way you could even switch a running
>> session between short reaction time and efficient streaming.
>>
>> A quick and unfinished thought about a hint-class:
>>
>> class DecodingHints {
>>   static final DecodingHints MASS_DATA = new DecodingHints(65535, 10);
>>   static final DecodingHints NORMAL = new DecodingHints(16384, 10);
>>   static final DecodingHints QUICK = new DecodingHints(1024, 1);
>>
>>   DecodingHints(int bufferSize, int maxBufferedBuffersCount) {
>> ...
>>   }
>> }
>>
>> Usage:
>>
>> class MyDecoder {
>>   ...
>>   if (isStreamingBegin) {
>>     session.setDecodingHints(DecodingHints.MASS_DATA);
>>   } else if (isStreamingEnd) {
>>     session.setDecodingHints(DecodingHints.NORMAL);
>>   }
>>   ...
>> }
>>
>
> This is something we can probably implement in the selector's logic, sure.
> We can even let the session define which size best fits its needs,
> starting with a small buffer and increasing it later.
>
> On a highly loaded server, it can even be worth processing small chunks
> of data, so that other sessions get a chance to be processed.
>
> A kind of adaptive system...
>
>
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
>
