Re: [MINA 3.0] IoBuffer usage

2011-12-01 Thread Chad Beaulac
Hi Emmanuel,

A 1k ByteBuffer will be too small for large data pipes. Consider using 64k
like you mentioned yesterday.
Draining the channel before returning control to the program can be
problematic. This thread can monopolize the CPU and other necessary
processing could get neglected. The selector will fire again when there's
more data to read. Suggest removing the loop below and using a 64k input
buffer.
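
Something like this on the read side, roughly (a compilable sketch in plain NIO,
not MINA code; the 64k size and the processMessageReceived() placeholder are only
for illustration):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ReadOnceExample {
    // one 64k read buffer per selector thread, reused for every read
    private final ByteBuffer readBuffer = ByteBuffer.allocate(64 * 1024);

    // called when the selector reports OP_READ for this channel
    void onReadable(SocketChannel channel) throws IOException {
        readBuffer.clear();
        int readCount = channel.read(readBuffer); // read once, no drain loop

        if (readCount < 0) {
            channel.close();                      // remote peer closed the session
            return;
        }
        if (readCount > 0) {
            readBuffer.flip();
            processMessageReceived(readBuffer);   // hand the bytes up the chain
        }
        // if more data is pending, the selector will simply fire again
    }

    private void processMessageReceived(ByteBuffer data) {
        // placeholder for the decoder / filter chain
    }
}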

Regards,
Chad


On Thu, Dec 1, 2011 at 4:00 AM, Emmanuel Lecharny wrote:

> Hi guys,
>
> yesterday, I committed some changes that make the NioSelectorProcessor
> use the IoBuffer class instead of a single buffer to store the incoming
> data. Here is the snippet of changed code :
>
>    int readCount = 0;
>    IoBuffer ioBuffer = session.getIoBuffer();
>
>    do {
>        ByteBuffer readBuffer = ByteBuffer.allocate(1024);
>        readCount = channel.read(readBuffer);
>        LOGGER.debug("read {} bytes", readCount);
>
>        if (readCount < 0) {
>            // session closed by the remote peer
>            LOGGER.debug("session closed by the remote peer");
>            sessionsToClose.add(session);
>            break;
>        } else if (readCount > 0) {
>            readBuffer.flip();
>            ioBuffer.add(readBuffer);
>        }
>    } while (readCount > 0);
>
>    // we have read some data
>    // limit at the current position & rewind buffer back to start & push to the chain
>    session.getFilterChain().processMessageReceived(session, ioBuffer);
>
> As you can see, instead of reading one buffer and calling the chain, we
> gather as much data as we can (ie as much as the channel can provide), and
> then we call the chain.
> This has one major advantage : we don't call the chain many times if the
> data is bigger than the buffer size (currently set to 1024 bytes), and as a
> side effect does not require that we define a bigger buffer (not really a
> big deal, we can afford to use a 64kb buffer here, as there is only one
> buffer per selector)
> The drawback is that we allocate ByteBuffers on the fly. This can be
> improved by using a pre-allocated buffer (say a 64kb buffer), and if we
> still have something to read, then we allocate some more (this is probably
> what I will change).
>
> The rest of the code is not changed widely, except the decoder and every
> filter that expected to receive a ByteBuffer (like the LoggingFilter). It's
> just a matter of casting the Object to IoBuffer and processing the data, as
> the IoBuffer methods are the same as ByteBuffer's (except that you
> can't inject anything but ByteBuffers into an IoBuffer, so no put method,
> for instance).
>
> The decoders for Http and Ldap have been changed to deal with the
> IoBuffer. The big gain here, in the Http case, is that we don't have to
> accumulate the data into a new ByteBuffer : the IoBuffer already accumulates
> data itself.
>
> The IoBuffer is stored into the session, which means we can reuse it over
> and over, no need to create a new one. I still have to implement the
> compact() method which will remove the used ByteBuffers, in order for this
> IoBuffer not to grow out of bounds.
>
> thoughts, comments ?
>
> Thanks !
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
>


Re: [MINA 3.0] IoBuffer usage

2011-12-01 Thread Chad Beaulac
The reverse is true for the producer. Let's assume the writer/producer has a 
list of ByteBuffers. When the selector fires to say the channel is writable, 
then:
- write until there's nothing left to write, unregister for the write event, 
return to event processing
- write until the channel is congestion controlled, stay registered for the 
write event, return to event processing

This assumes you register for the write event when a ByteBuffer is added to the 
output queue and the output queue is currently empty. 

(not at a computer to send more comprehensive info)

-Chad

Sent from my iPhone

On Dec 1, 2011, at 8:52 AM, Emmanuel Lecharny  wrote:

> On 12/1/11 2:49 PM, Emmanuel Lécharny wrote:
>> Forwarded to the ML
>> 
>> Quick note below.
>> 
>> Sent from my iPhone
>> 
>> On Dec 1, 2011, at 7:51 AM, Emmanuel Lécharny  wrote:
>> 
>>> On 12/1/11 1:32 PM, Chad Beaulac wrote:
>>>> Hi Emmanuel,
>>>> 
>>>> A 1k ByteBuffer will be too small for large data pipes. Consider using 64k
>>>> like you mentioned yesterday.
>>> 
>>> Yes, this is probably what I'll do.
>>>> Draining the channel before returning control to the program can be
>>>> problematic. This thread can monopolize the CPU and other necessary
>>>> processing could get neglected. The selector will fire again when there's
>>>> more data to read. Suggest removing the loop below and using a 64k input
>>>> buffer.
>>> if we poll the channel with small buffers, what will happen is that we 
>>> will generate a messageReceived event, which will be processed immediately. 
>>> Then we will reset the selector to be put in OP_READ state, and we will 
>>> immediately read again the data from the channel.
>>> 
>>> It's a bit difficult for me to see how it could be less CPU consuming than 
>>> reading everything immediately, and then going down the chain.
>>> 
>>> Do you have any technical information to back your claim ? I would be very 
>>> interested to avoid falling into a trap I haven't seen.
>> 
>> It's not a question of CPU consumption. It's managing the reactor thread 
>> fairly.
>> 
>> If you sit in a tight loop like that and one channel is a high data rate you 
>> may not get out in a timely fashion to service change operations (connect, 
>> disconnect, registering for write operations).
> 
> Totally makes sense, right !
> 
>> 
>> I can forward a URL to an example later.
> Yes, I'd like to read more about this issue.
> 
> Anyway, I'll remove the loop, and use a wider buffer.
> 
> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
> 


Re: [MINA 3.0] IoBuffer usage

2011-12-02 Thread Chad Beaulac
I think I'm not communicating my thoughts well enough. A single algorithm
can handle large data pipes and provide extremely low latency for variable,
small and large message sizes at the same time.

On the Producer side:
Application code should determine the block sizes that are pushed onto the
output queue. Logic would be as previously stated:
- write until there's nothing left to write, unregister for the write
event, return to event processing
- write until the channel is congestion controlled, stay registered for the
write event, return to event processing
This handles very low latency for 1K message blocks and ensures optimal
usage of a socket for large data blocks.

On the Consumer side:
64K non-blocking read of channel when read selector fires. Don't read until
there's nothing left to read. Let the Selector tell you when it's time to
read again.





On Thu, Dec 1, 2011 at 11:53 AM, Emmanuel Lecharny wrote:

> On 12/1/11 5:28 PM, Steve Ulrich wrote:
>
>> Hi (quickly reading),
>>
>> reading everything-you-can-get might starve the application logic.
>> We currently have some "realtime" stuff which must be transferred as
>> quickly as possible, but it's just some bytes (Biggest messages are 1K,
>> smallest about 10 bytes). This logic would increase roundtrip times to
>> numbers where we can shut our servers down.
>>
>
> Yes, Chad pointed out that it was not an option, so I reverted my changes.
>
>
>> In such a setup it would be nice if every 1K ByteBuffer is pushed to the
>> chain, since in most cases it's a full message and waiting any longer just
>> increases roundtrip times.
>> In this case, streaming big data would be very inefficient, so don't
>> expect a simple solution that fits all problems.
>>
>
> Right now, we use one single buffer associated with the selector, and it's
> now set to 64Kb, so it works for streaming big data as well as small. We can
> make this size configurable.
>
>
>> Maybe the application/decoder logic should set some hints to the
>> Processor on a session base. This way you could even switch a running
>> session between short reaction time and efficient streaming.
>>
>> A quick and unfinished thought about a hint-class:
>>
>> class DecodingHints {
>>   static DecodingHints MASS_DATA = new DecodingHints(65535, 10)
>>   static DecodingHints NORMAL = new DecodingHints(16384, 10)
>>   static DecodingHints QUICK = new DecodingHints(1024, 1)
>>
>>   DecodingHints(int bufferSize, int maxBufferedBuffersCount){
>> ...
>>   }
>> }
>>
>> Usage:
>>
>> class MyDecoder {
>>   ...
>>   if (isStreamingBegin){
>> session.setDecodingHints(DecodingHints.MASS_DATA);
>>   } else if (isStreamingEnd) {
>> session.setDecodingHints(NORMAL);
>>   }
>>   ...
>> }
>>
>
> This is something we can probably implement in the selector's logic, sure.
> We can even let the session define which size fits the best its need,
> starting with a small buffer and increase it later.
>
> It can even be interesting on a highly loaded server to process small
> chunks of data, in order to allow other sessions to be processed.
>
> A kind of adaptive system...
>
>
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
>


Re: [MINA 3.0] IoBuffer usage

2011-12-03 Thread Chad Beaulac
On Fri, Dec 2, 2011 at 10:19 AM, Emmanuel Lécharny wrote:

> On 12/2/11 2:02 PM, Chad Beaulac wrote:
>
>> I think I'm not communicating my thoughts well enough.
>>
> Well, I hope I have understood what you said, at least :)
>
>
>   A single algorithm
>> can handle large data pipes and provide extremely low latency for
>> variable,
>> small and large message sizes at the same time.
>>
> AFAIU, it's not because you use a big buffer that you will put some strain
> when dealing with small messages : the buffer will only contain a few
> useful bytes, and that's it. In any case, this buffer won't be allocated
> every time we read from the channel, so it's just a container. But it's way
> better to have a big buffer when dealing with big messages, because then
> you'll have fewer roundtrips between the read and the processing. But the
> condition, as you said, is that you don't drain the channel until there are
> no more bytes to read. You just read *once*, take what you get, and hand
> these bytes to the processing part of your application.
>
> The write has exactly the same kind of issue, as you said : don't pound
> the channel, give other channels the opportunity to be written too...
>
>
The write has the same sort of issue but it can be handled more optimally
in a different manner. The use case is slightly different because it's the
client producer code driving the algorithm instead of the Selector.
Producer Side
- Use a queue of ByteBuffers as a send queue.
- When the selector says the channel is writable, lock the queue, loop over the
output queue and write until SocketChannel.write(ByteBuffer src) returns
(returnVal < src.remaining() || returnVal == 0) or you catch an exception.
- This is a fair algorithm when dealing with multiple selectors because the
amount of time the sending thread will spend inside the "send" method is
bounded by how much data is in the outputQueue and nothing can put data into
the queue while draining the queue to send data out (a sketch follows below).
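
Roughly, in plain NIO (a sketch, not MINA code; send() is what client code
calls, onWritable() is invoked when the selector fires OP_WRITE):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;

public class ProducerSideExample {
    private final Queue<ByteBuffer> outputQueue = new ArrayDeque<ByteBuffer>();

    // client code enqueues a buffer; register for OP_WRITE on the empty -> non-empty transition
    public synchronized void send(SelectionKey key, ByteBuffer data) {
        if (outputQueue.isEmpty()) {
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        }
        outputQueue.add(data);
    }

    // called when the selector reports OP_WRITE for this channel
    public synchronized void onWritable(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!outputQueue.isEmpty()) {
            ByteBuffer head = outputQueue.peek();
            channel.write(head);
            if (head.hasRemaining()) {
                // partial write: the channel is congestion controlled,
                // stay registered for OP_WRITE and return to event processing
                return;
            }
            outputQueue.remove(); // buffer fully written, move on to the next one
        }
        // nothing left to write: unregister for the write event
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}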

Consumer Side
- Use a ByteBuffer(64k) as a container to receive data into.
- Only call SocketChannel.read(inputBuffer) once for the channel that's
ready to read.
- Create a new ByteBuffer for the size read. Copy the inputBuffer into
the new ByteBuffer. Give the new ByteBuffer to the session to process.
Rewind the input ByteBuffer. An alternative to creating a new ByteBuffer
every time for the size read is to allow client code to specify a custom
ByteBuffer factory. This allows client code to pre-allocate memory and
create a ring buffer or something like that (see the sketch below).
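
A sketch of that consumer side in plain NIO (ByteBufferFactory is a
hypothetical interface, just to show where a ring buffer or pre-allocated pool
could plug in):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ConsumerSideExample {
    // hypothetical factory so client code can pre-allocate (ring buffer, pool, ...)
    public interface ByteBufferFactory {
        ByteBuffer acquire(int size);
    }

    private final ByteBuffer inputBuffer = ByteBuffer.allocate(64 * 1024); // reused container
    private final ByteBufferFactory factory;

    public ConsumerSideExample(ByteBufferFactory factory) {
        this.factory = factory;
    }

    // called once per OP_READ; no read loop
    ByteBuffer onReadable(SocketChannel channel) throws IOException {
        inputBuffer.clear();
        int readCount = channel.read(inputBuffer);
        if (readCount <= 0) {
            return null; // nothing read, or channel closed (readCount < 0)
        }
        inputBuffer.flip();
        ByteBuffer message = factory.acquire(readCount); // sized to what was actually read
        message.put(inputBuffer);                        // copy out of the shared container
        message.flip();
        return message;                                  // hand this to the session to process
    }
}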

I use these algorithms in C++ (using ACE - Adaptive Communications
Environment) and Java. The algorithm is basically the same in C++ and Java
and handles protocols with a lot of small messages, variable message size
protocols and large data block sizes.



>
>> On the Producer side:
>> Application code should determine the block sizes that are pushed onto the
>> output queue. Logic would be as previously stated:
>> - write until there's nothing left to write, unregister for the write
>> event, return to event processing
>>
> This is what we do. I'm afraid that it may be a bit annoying for the other
> sessions, waiting to send data. At some point, it could be better to write
> only a limited number of bytes, then give back control to the selector, and
> be awakened when the selector sets the OP_WRITE flag again (which will be
> during the next loop anyway, or maybe a later one).
>
>  - write until the the channel is congestion controlled, stay registered
>> for
>> write event, return to event processing
>>
>
> And what about a third option : write until the buffer we have prepared is
> empty, even if the channel is not full ? That means even if the producer has
> prepared a -say- 1Mb block of data to write, it will be written in 16
> blocks of 64Kb, even if the channel can absorb more.
>
> Does it make sense ?
>
>
No. Doesn't make sense to me. Let the TCP layer optimize how large
chunks of data are handled. If the client puts a ByteBuffer of 1MB or 20MB
or whatever onto the outputQueue, call
SocketChannel.write(outputByteBuffer). Don't chunk it up.



>  This handles very low latency for 1K message blocks and ensures optimal
>> usage of a socket for large data blocks.
>>
>> On the Consumer side:
>> 64K non-blocking read of channel when read selector fires. Don't read
>> until
>> there's nothing left to read. Let the Selector tell you when it's time to
>> read again.
>>
> Read you. Totally agree.
>
>
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
>


Re: Re: [MINA 3.0] IoBuffer usage

2011-12-04 Thread Chad Beaulac
On Sun, Dec 4, 2011 at 8:04 AM, Emmanuel Lecharny wrote:

> Posted on the wrong mailing list... Forwarding there.
>
> Hi Chad,
>
>
>
> On 12/4/11 1:25 AM, Chad Beaulac wrote:
>
>>  A single algorithm
>>
>>>  can handle large data pipes and provide extremely low latency for
>>>>  variable,
>>>>  small and large message sizes at the same time.
>>>>
>>>>   AFAIU, it' snot because you use a big buffer that you will put some
>>> strain
>>>  when dealing with small messages : the buffer will only contain a few
>>>  useful bytes, and that's it. In any case, this buffer won't be allocated
>>>  everytime we read from the channel, so it's just a container. But it's
>>> way
>>>  better to have a big buffer when dealing with big messages, because then
>>>  you'll have less roundtrips between the read and the processing. But the
>>>  condition, as you said, is that you don't read the channel until there
>>> is
>>>  no more bytes to read. You just read *once* get what you get, and go
>>> fetch
>>>  the processing part of your application with these bytes.
>>>
>>>  The write has exactly the same kind of issue, as you said : don't pound
>>>  the channel, let other channel the opportunity to be written too...
>>>
>>>
>>>   The write has the same sort of issue but it can be handled more
>> optimally
>>  in a different manner. The use case is slightly different because it's
>> the
>>  client producer code driving the algorithm instead the Selector.
>>  Producer Side
>>  - Use a queue of ByteBuffers as a send queue.
>>  - When send is possible for the selector, block on the queue, loop over
>> the
>>  output queue and send until SocketChannel.send(ByteBuffer src)
>>  (returnVal
>>  <   src.remaining || returnVal == 0) or you catch exception.
>>  - This is a fair algorithm when dealing with multiple selectors because
>> the
>>  amount of time the sending thread will spend inside the "send" method is
>>  bounded by how much data is in the ouputQueue and nothing can put data
>> into
>>  the queue while draining the queue to send data out.
>>
>
> Right, but there are some cases where, for one session, there is a lot to
> write while the other sessions are waiting, as the thread is in use
> flushing all its data. This is why I proposed to chunk the writes in
> small chunks (well, small does not mean 1kb here).
>
>
This won't work when you have channels with very large data pipes and
channels with small data pipes in the same selector. It will end up being
inefficient for the large data pipe channel.  Chunking the writes is
unnecessary and will consume extra resources.
Yes, the other sessions will be waiting when you're writing for one
channel. This is true for the entire algorithm.
An example of the fairness of the algorithm is as follows:
Consider a selector with two channels in it that you're writing to.
Channel-1 is a 300Mb/second stream.
Channel-2 is a 2Mb/second stream.
To be fair, the system will need to spend a lot more time writing data for
channel-1. Chunking the data creates overhead at the TCP layer that is best
avoided. Let the TCP layer figure out how it wants to segment TCP packets.
If you have 40MB to write, just call channel1.write(outputBuffer). It is ok
that output for channel-2 is waiting while you're writing for channel-1.
Either the call to write will immediately succeed and all data will be
written, some portion of it will be written, or some error occurs because
the socket is closed or something. In case 1, you'll look in the queue for
more output; the queue is synchronized, so nobody can put more data into it
while this write is occurring.


> If we have more than one selector, it's still the same issue, as a
> session will always use the same selector.
>
>
Not sure why you'll need more than one selector.


>
>

>>  Consumer Side
>>  - Use a ByteBuffer(64k) as a container to receive data into
>>  - Only call SocketChannel.read(**inputBuffer) once for the channel
>> that's
>>  ready to read.
>>  - Create a new ByteBuffer for the size read. Copy the the intputBuffer
>> into
>>  the new ByteBuffer. Give the new ByteBuffer to the session to process.
>>
> Not sure we want to copy the ByteBuffer. It could be an option, but if we
> can save us this copy, that would be cool.
>
>   Rewind the input ByteBuffer. An alternative to creating a new ByteBuffer
>>  every time for the size read is allow client code to specify a custom
>>  ByteBuffer factory. This allo

Re: Re: [MINA 3.0] IoBuffer usage

2011-12-05 Thread Chad Beaulac
Looks perfect except one thing. Don't allow clients to put WriteRequests
into the queue while you're draining it. If you allow other threads to
enqueue more WriteRequests, the algorithm is unbounded and you risk
getting stuck writing for only one channel. I added a synchronized block
below.

On Mon, Dec 5, 2011 at 4:07 AM, Julien Vermillard wrote:

> Hi,
> snipped a lot of the message :)
> >> So, do you mean that the underlying layer will not allow us to push say,
> >> 20M, without informing the session that it's full ? In other word, there
> >> is a limited size that can be pushed and we don't have to take care of
> >> this limit ourselves ?
> >
> >
> >>
> > Sort of. If the TCP send window (OS layer) has less room in it than the
> > outputBuffer.remaining(), the write will only write a portion of
> > outputBufffer. Consider this the CONGESTION_CONTROLLED state. If the TCP
> > send window is full when you try to write, the write will return 0. The
> > algorithm should never see this case because you should always stop
> trying
> > to write when only a portion of the outputBuffer is written. And, always
> > continue to try and write when an entire outputBuffer is written and
> there
> > are more outputBuffers to write in the output queue.
> >
>
> Here is the write algorithm used in trunk (3.0); we give up writing if
> the buffer is not written completely, because we consider the kernel
> buffer full or congested :
>
> Queue queue = session.getWriteQueue();
>
> do {
>     // get a write request from the queue

      synchronized (queue) {

>         WriteRequest wreq = queue.peek();
>         if (wreq == null) {
>             break;
>         }
>         ByteBuffer buf = (ByteBuffer) wreq.getMessage();
>
>         int wrote = session.getSocketChannel().write(buf);
>         if (LOGGER.isDebugEnabled()) {
>             LOGGER.debug("wrote {} bytes to {}", wrote, session);
>         }
>
>         if (buf.remaining() == 0) {
>             // completed write request, let's remove it
>             queue.remove();
>             // complete the future
>             DefaultWriteFuture future = (DefaultWriteFuture) wreq.getFuture();
>             if (future != null) {
>                 future.complete();
>             }
>         } else {
>             // output socket buffer is full, we need
>             // to give up until next selection for writing
>             break;
>         }

      } // end synchronized(queue)

> } while (!queue.isEmpty());
>


Re: [MINA 3.0] IoBuffer usage

2011-12-05 Thread Chad Beaulac
Excellent. I like the counter in the write algorithm. 

Sent from my iPhone

On Dec 5, 2011, at 8:50 AM, Emmanuel Lecharny  wrote:

> On 12/5/11 2:39 PM, Chad Beaulac wrote:
>> Looks perfect except one thing. Don't allow clients to put WriteRequest's
>> into the queue while you're draining it. If you allow other threads to
>> enqueue more WriteRequest's, the algorithm is unbounded and you risk
>> getting stuck writing for only one channel. I added a synchronized block
>> below.
> 
> We can avoid the synchronized section by getting the queue size at the 
> beginning :
> 
> Queue  queue = session.getWriteQueue();
> 
> int size = queue.size();
> 
> while (size > 0) {
>... // process each entry in the queue
> size--;
> }
> 
> But I don't even think it's necessary : a session can only be processed by 
> one single thread, which is the one which processes the write (the selector's 
> thread).
> 
> Of course, if we add an executor somewhere in the chain, this is a different 
> story, but then, we will need to add some synchronization at the executor 
> level, not on the selector level, IMO.
> 
> 
> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
> 


Re: [MINA 3.0] IoBuffer usage

2011-12-05 Thread Chad Beaulac
Looks good Emmanuel. 

Sent from my iPhone

On Dec 5, 2011, at 10:13 AM, Emmanuel Lecharny  wrote:

> On 12/5/11 3:50 PM, Julien Vermillard wrote:
>> since it's a ConcurrentLinkedQueue it could be a perf killer to do a .size() 
>> from the oracle javadoc : ""Beware that, unlike in most collections, the 
>> size method is NOT a constant-time operation. Because of the asynchronous 
>> nature of these queues, determining the current number of elements requires 
>> a traversal of the elements.""
> Damn right...
> 
> What about using a read/write lock instead of a synchronized block ? The 
> problem with the synchronized block on the queue is that we must still protect 
> the queue with another synchronized block when it's being written to, whereas if we 
> use a read/write lock, we can allow parallel writes in the queue, but once it 
> comes to writing to the channel, we acquire a write lock and the queue is now 
> protected. Something like :
> 
> private final ReadWriteLock lock = new ReentrantReadWriteLock();
> private final Lock queueReadLock = lock.readLock();
> private final Lock queueWriteLock= lock.writeLock();
> ...
> try {
> queueWriteLock.lock();
> 
>do {
>WriteRequest wreq = queue.peek();
> 
>if (wreq == null) {
>break;
>}
>...
>} while (!queue.isEmpty());
> } finally {
>queueWriteLock.unlock();
> }
> 
> ...
> 
>public WriteRequest enqueueWriteRequest(Object message) {
>DefaultWriteRequest request = new DefaultWriteRequest(message);
> 
>try {
>        queueReadLock.lock();
>writeQueue.add(request);
>} finally {
> queueReadLock.unlock();
>}
>}
> 
> -- Regards, Cordialement, Emmanuel Lécharny www.iktek.com


Re: [MINA 3] Session's state

2011-12-07 Thread Chad Beaulac
+1 unit tests for all transitions ;-)
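
For what it's worth, a minimal sketch of the guarded changeState() Emmanuel
describes below (the state names are illustrative, not the real MINA 3 enum);
every transition funnels through one method, which makes the unit tests easy:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GuardedSessionState {
    enum SessionState { CONNECTED, SECURING, SECURED, CLOSING } // illustrative subset

    private final ReadWriteLock stateLock = new ReentrantReadWriteLock();
    private SessionState state = SessionState.CONNECTED;

    // cheap concurrent reads of the current state
    public SessionState getState() {
        stateLock.readLock().lock();
        try {
            return state;
        } finally {
            stateLock.readLock().unlock();
        }
    }

    // single entry point for transitions: check-and-change is atomic under the write lock
    public void changeState(SessionState from, SessionState to) {
        stateLock.writeLock().lock();
        try {
            if (state != from) {
                throw new IllegalStateException("expected " + from + " but was " + state);
            }
            state = to;
        } finally {
            stateLock.writeLock().unlock();
        }
    }
}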

On Tue, Dec 6, 2011 at 3:05 PM, Julien Vermillard wrote:

> On Tue, Dec 6, 2011 at 7:26 PM, Emmanuel Lecharny 
> wrote:
> > Hi,
> >
> > following the different states a session can be in, plus the possible
> > transitions from one state to another, we will have an issue if we don't
> > protect the sessions state against concurrent access and modifications.
> >
> > For instance, as right now, the session's state is a volatile variable in
> > AbstractIoSession :
> >
> >protected volatile SessionState state;
> >
> > it looks like it's protected. It is, only if we consider it from a
> > read/write perspective. That means it's safe to read the state and safe to
> > change it; each of those is an atomic operation.
> >
> > But there is something we can't do : changing the state *and* checking
> > that the transition is ok :
> >
> >  if (state == CONNECTED ) {
> >state = SECURING
> >  }
> >
> > for instance, might fail as the session's state may have changed before
> we
> > try to change it.
> >
> > So we need to protect the state transition from concurrent accesses.
> Again,
> > one possible solution would be to use a ReentrantReadWrite lock, to allow
> > fast session's state reads, and safe transition.
> >
> > I would also suggest that we have only one way to change the state,
> > through a method like :
> >
> > void changeState( SessionState from, SessionState to)
> >
> > in order to be able to check that the transition does not violate the
> table
> > of possible transitions.
> >
> > wdyt ?
> >
>  + 1
> anyway I'm not sure it's handled correctly today (who said unit tests ?)
>


Re: [MINA 3] Session attributes

2011-12-07 Thread Chad Beaulac
Why not use an enum for all the keys?
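
Something along these lines is what I had in mind (a rough sketch; the key
names are made up, and you lose the per-key Class token unless each enum
constant carries one):

import java.util.EnumMap;
import java.util.Map;

public class EnumKeyedAttributes {
    // hypothetical application-defined key set: no string typos, no per-key allocation
    public enum AttrKey { REMOTE_USER, SSL_ENGINE, LAST_HEARTBEAT }

    private final Map<AttrKey, Object> attributes = new EnumMap<AttrKey, Object>(AttrKey.class);

    public void set(AttrKey key, Object value) {
        attributes.put(key, value);
    }

    @SuppressWarnings("unchecked")
    public <T> T get(AttrKey key) {
        return (T) attributes.get(key);
    }
}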


On Mon, Dec 5, 2011 at 10:34 AM, Emmanuel Lécharny wrote:

> On 12/5/11 4:32 PM, Christian Schwarz wrote:
>
>> As a user, having to create a new instance to hold the key and value might
>>
>>> be seen as heavy, don't you think ?
>>>
>>> session.set(new AttributeKey(String.class, "myKey"), "myAttribute");
>>>
>>> is a bit more complex than
>>>
>>> session.set( "myKey", "myAttribute" );
>>>
>>> Or is it just me ?
>>>
>>>  It's true for your example! I don't know, but I think most developers
>> would use a 'final static' constant for their keys, to avoid misspelling
>> and to reduce the memory footprint. In my humble opinion the heavier
>> construction process (+ ~5sec) could save time because I don't have to test
>> and/or debug if I set an attribute of the wrong type accidentally. I think we
>> need more opinions here, to see what future users might think about the
>> pros & cons.
>>
> Totally agree. And we can still change in a near future, before freezing
> the code.
>
> Note that your proposal is closer to MINA 2, so a migration would be
> easier (another pro for your proposal).
>
>
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
>


Re: [MINA 3.0] IoBuffer usage

2012-01-16 Thread Chad Beaulac
Emmanuel, (all)

I'm working on this Camel ticket:
https://issues.apache.org/jira/browse/CAMEL-2624

I finished the initial cut of
https://issues.apache.org/jira/browse/CAMEL-3471 to create a mina2
component in Camel.

CAMEL-2624 adds the full async behavior for Mina2 endpoints in Camel.

Would it be possible to backport the IoBuffer reading and writing discussed
in this email thread from Mina3 to Mina2?
Following the depth of the stack trace through
AbstractIoSession.write(...), I'm a little concerned about the throughput.
My current code (mina-less) is supporting single TCP channels with 320+
Mb/s rates. I'm porting my code to use Mina with a Mina Google Protocol
Buffers codec I wrote. I'll know if this is a real problem soon, when I finish
CAMEL-2624 and set up some throughput tests.

Regards,
Chad


On Tue, Dec 6, 2011 at 1:27 AM, Chad Beaulac  wrote:

> Looks good Emmanuel.
>
> Sent from my iPhone
>
> On Dec 5, 2011, at 10:13 AM, Emmanuel Lecharny 
> wrote:
>
> > On 12/5/11 3:50 PM, Julien Vermillard wrote:
> >> since it's a ConcurrentLinkedQueue it could be a perf killer to do a
> .size() from the oracle javadoc : ""Beware that, unlike in most
> collections, the size method is NOT a constant-time operation. Because of
> the asynchronous nature of these queues, determining the current number of
> elements requires a traversal of the elements.""
> > Damn right...
> >
> > What about using a read/write lock instead of a synchronized block ? The
> problem with the synchornized block on queue is that we must still protect
> the queue when it's being written with another synchronized block, when if
> we use a read/write lock, we can allow parallel writes in the queue, but
> once it comes to write in the channel, we acquire a write lock and the
> queue is now protected. Something like :
> >
> > private final ReadWriteLock lock = new ReentrantReadWriteLock();
> > private final Lock queueReadLock = lock.readLock();
> > private final Lock queueWriteLock= lock.writeLock();
> > ...
> > try {
> > queueWriteLock.lock();
> >
> >do {
> >WriteRequest wreq = queue.peek();
> >
> >if (wreq == null) {
> >break;
> >}
> >...
> >} while (!queue.isEmpty());
> > } finally {
> >queueWriteLock.unlock();
> > }
> >
> > ...
> >
> >public WriteRequest enqueueWriteRequest(Object message) {
> >DefaultWriteRequest request = new DefaultWriteRequest(message);
> >
> >try {
> > queueReadLock().lock()
> >writeQueue.add(request);
> >} finally {
> > queueReadLock.unlock();
> >}
> >}
> >
> > -- Regards, Cordialement, Emmanuel Lécharny www.iktek.com
>


Re: [MINA 3.0] IoBuffer usage

2012-01-17 Thread Chad Beaulac
On Mon, Jan 16, 2012 at 1:10 PM, Emmanuel Lecharny wrote:

> On 1/16/12 2:56 PM, Chad Beaulac wrote:
>
>> Emmanuel, (all)
>>
>> I'm working on this Camel ticket:
>> https://issues.apache.org/jira/browse/CAMEL-2624
>>
>> I finished the initial cut of
>> https://issues.apache.org/jira/browse/CAMEL-3471 to create a mina2
>> component in Camel.
>>
>> CAMEL-2624 adds the full async behavior for Mina2 endpoints in Camel.
>>
>> Would it be possible to backport the IoBuffer reading and writing
>> discussed
>> in this email thread from Mina3 to Mina2?
>> Following the depth of the stack trace through
>> AbstractIoSession.write(...), I'm a little concerned about the throughput.
>> My current code (mina-less) is supporting single TCP channels with 320+
>> Mb/s rates. I'm porting my code to use Mina with a Mina Google Protocol
>> Buffers codec I wrote. I know if this is a real problem soon when I finish
>> CAMEL-2624 and setup some throughput tests.
>>
>
> Another option would be to port this to MINA3, and make it work with
> Camel. Right now, MINA 3 is pretty rough, but we have made it work well
> with Http and LDAP. I'd rather spend some time making it a bit more solid
> and working well in your case instead of trying to inject the code in MINA2.
>
> Now, it's your call. We can discuss the pros and cons of both approaches if
> you like.
>
>
Hi Emmanuel,

One of my pros/cons trade-offs is time-to-market. I can have a solution in
Camel with Mina2 fairly quickly. Although I might have issues with high
data rate streams.
With that said, my approach would be the following:
1) Finish CAMEL-2624 with Mina2. This will give me the asynchronous
endpoints I need and a quick time-to-market. I'll put off issues concerning
high throughput.
2) Work on Mina3 to ensure it has low latency with small data rate streams
and high throughput with large data pipes.
3) Upgrade my Google protocol buffers codec for Mina3.
4) When Mina3 is ready, open a new Camel ticket and create a new mina3
Camel Component.

What do you think?

Regards,
Chad
www.objectivesolutions.com


>
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
>


Re: [MINA 3.0] IoBuffer usage

2012-01-17 Thread Chad Beaulac


On Jan 17, 2012, at 9:32 AM, Emmanuel Lécharny  wrote:

> On 1/17/12 3:02 PM, Chad Beaulac wrote:
>> On Mon, Jan 16, 2012 at 1:10 PM, Emmanuel Lecharnywrote:
>> 
>>> On 1/16/12 2:56 PM, Chad Beaulac wrote:
>>> 
>>>> Emmanuel, (all)
>>>> 
>>>> I'm working on this Camel ticket:
>>>> https://issues.apache.org/jira/browse/CAMEL-2624
>>>> 
>>>> I finished the initial cut of
>>>> https://issues.apache.org/jira/browse/CAMEL-3471 to create a mina2
>>>> component in Camel.
>>>> 
>>>> CAMEL-2624 adds the full async behavior for Mina2 endpoints in Camel.
>>>> 
>>>> Would it be possible to backport the IoBuffer reading and writing
>>>> discussed
>>>> in this email thread from Mina3 to Mina2?
>>>> Following the depth of the stack trace through
>>>> AbstractIoSession.write(...), I'm a little concerned about the throughput.
>>>> My current code (mina-less) is supporting single TCP channels with 320+
>>>> Mb/s rates. I'm porting my code to use Mina with a Mina Google Protocol
>>>> Buffers codec I wrote. I know if this is a real problem soon when I finish
>>>> CAMEL-2624 and setup some throughput tests.
>>>> 
>>> Another option would be to port this to MINA3, and make it work with
>>> Camel. Right now, MINA 3 is pretty rough, but we have made it works well
>>> with Http and LDAP. I'd rather spend some time making it a bit more solid
>>> and working well in your case instead of trying to inject the code in MINA2.
>>> 
>>> Now, it's your call. We can discuss the pros and cons of both approach if
>>> you like.
>>> 
>>> 
>> Hi Emmanuel,
>> 
>> One of my pros/cons trade-offs is time-to-market. I can have a solution in
>> Camel with Mina2 fairly quickly. Although I might have issues with high
>> data rate streams.
>> With that said, my approach would be the following:
>> 1) Finish CAMEL-2624 with Mina2. This will give me the asynchronous
>> endpoints I need and a quick time-to-market. I'll put off issues concerning
>> high throughput.
>> 2) Work on Mina3 to ensure it has low latency with small data rate streams
>> and high throughput with large data pipes.
>> 3) Upgrade my Google protocol buffers codec for Mina3.
>> 4) When Mina3 is ready, open a new Camel ticket and create a new mina3
>> Camel Component.
>> 
>> What do you think?
> 
> I'll try to squeeze 2 hours to backport the patch to MINA 2 today or tomorrow.
> 
> Feel free to ping me on mail or on #mina if I don't send any feedback in the 
> next 2 days (I'm pretty busy and may slip)
> 
> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
> 
Wow. That is nice! Look forward to checking it out. I'll move forward with my 
plan in the meantime. 

Chad
Sent from my iPhone


Re: [MINA 3.0] IoBuffer usage

2012-01-17 Thread Chad Beaulac
Ok. I'll test the throughput on it with the tests I have. 

Thanks
Chad

Sent from my iPad

On Jan 17, 2012, at 11:33 AM, Emmanuel Lecharny  wrote:

> AFAICT, on MINA 2, you should not have any issue.
> 
> The loop where we write buffers into the channel is :
> 
>...
>final int maxWrittenBytes = session.getConfig().getMaxReadBufferSize()
>+ (session.getConfig().getMaxReadBufferSize() >>> 1);
>int writtenBytes = 0;
>WriteRequest req = null;
> 
>try {
>// Clear OP_WRITE
>setInterestedInWrite(session, false);
> 
>do {
>// Check for pending writes.
>req = session.getCurrentWriteRequest();
> 
>if (req == null) {
>req = writeRequestQueue.poll(session);
> 
>if (req == null) {
>break;
>}
> 
>session.setCurrentWriteRequest(req);
>}
> 
>int localWrittenBytes = 0;
>Object message = req.getMessage();
> 
>if (message instanceof IoBuffer) {
>localWrittenBytes = writeBuffer(session, req,
>hasFragmentation, maxWrittenBytes - writtenBytes,
>currentTime);
> 
>if (( localWrittenBytes > 0 )
> && ((IoBuffer) message).hasRemaining()) {
>// the buffer isn't empty, we re-interest it in writing
>writtenBytes += localWrittenBytes;
>setInterestedInWrite(session, true);
>return false;
>}
>} else { // Blahhh }
> 
>if (localWrittenBytes == 0) {
>// Kernel buffer is full.
>setInterestedInWrite(session, true);
>return false;
>}
> 
>writtenBytes += localWrittenBytes;
> 
>if (writtenBytes >= maxWrittenBytes) {
>// Wrote too much
>scheduleFlush(session);
>return false;
>}
>} while (writtenBytes < maxWrittenBytes);
>...
> 
> with :
> 
>private int writeBuffer(S session, WriteRequest req,
>boolean hasFragmentation, int maxLength, long currentTime)
>throws Exception {
>IoBuffer buf = (IoBuffer) req.getMessage();
>int localWrittenBytes = 0;
> 
>if (buf.hasRemaining()) {
>int length;
> 
>if (hasFragmentation) {
>length = Math.min(buf.remaining(), maxLength);
>} else {
>length = buf.remaining();
>}
> 
>localWrittenBytes = write(session, buf, length);
>}
> 
> and :
> 
>protected int write(NioSession session, IoBuffer buf, int length)
>throws Exception {
>if (buf.remaining() <= length) {
>return session.getChannel().write(buf.buf());
>}
> 
>int oldLimit = buf.limit();
>buf.limit(buf.position() + length);
>try {
>return session.getChannel().write(buf.buf());
>    } finally {
>buf.limit(oldLimit);
>}
>}
> 
> So we try our best to stuff the channel with as many bytes as possible, 
> before giving up (either because we don't have anything to write, or because 
> the channel is full...)
> 
> I don't see if we can do any better.
> 
> On 1/17/12 3:57 PM, Chad Beaulac wrote:
>> 
>> On Jan 17, 2012, at 9:32 AM, Emmanuel Lécharny  wrote:
>> 
>>> On 1/17/12 3:02 PM, Chad Beaulac wrote:
>>>> On Mon, Jan 16, 2012 at 1:10 PM, Emmanuel 
>>>> Lecharnywrote:
>>>> 
>>>>> On 1/16/12 2:56 PM, Chad Beaulac wrote:
>>>>> 
>>>>>> Emmanuel, (all)
>>>>>> 
>>>>>> I'm working on this Camel ticket:
>>>>>> https://issues.apache.org/**jira/browse/CAMEL-2624<https://issues.apache.org/jira/browse/CAMEL-2624>
>>>>>> 
>>>>>> I finished the initial cut of
>>>>>> https://issues.apache.org/**jira/browse/CAMEL-3471<https://issues.apache.org/jira/browse/CAMEL-3471>to
>>>>>>  create a mina2
>>>>>> component in Camel.
>>>>>> 
>>>>>> CAMEL-2624 adds the full async behavior for Mina2 endpoints in Camel.
>>>>>> 
>>

Re: MINA 3.0 Acceptor/Processor

2012-05-31 Thread Chad Beaulac
Julien,

I agree. A single thread with one Selector can be optimized for non-blocking 
I/O to handle a large number of streams simultaneously.
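
Something like this bare-bones loop is what I have in mind (plain NIO sketch;
the port number and the empty read/write branches are placeholders):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleReactorExample {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        // one thread, one selector: accept, read and write events all handled here
        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ); // new session
                } else if (key.isReadable()) {
                    // non-blocking read, pass the bytes up the chain
                } else if (key.isWritable()) {
                    // drain this session's output queue
                }
            }
        }
    }
}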

-Chad


On May 31, 2012, at 2:55 PM, Julien Vermillard wrote:

> Hi,
> In mina 2 the event loops are composed of Acceptor/Connector (for
> accepting/connecting TCP sessions) and IoProcessor for handling
> session read and write events. For each service you need at least one
> Acceptor/connector and one IoProcessor, so at least two threads. For a
> simple TCP-based asynchronous proxy it should be doable with one thread.
> 
> The IoProcessors for TCP and for UDP are totally different : the TCP
> one selects TCP client sockets, while the UDP one handles read/write events
> passed by the Acceptor, because you have only one socket in a UDP
> server.
> 
> Anyway this logic makes the code uber complicated, so here is what I propose :
> 
> Two technical event loop constructs :
> A SelectorProcessor, whose job is to select SelectableChannels for IO
> events (read, write, accept) and push events to listeners (e.g.
> TcpServer for accept events, UdpServer for read events on new
> sessions)
> An EventProcessor, whose job is to process read/write events coming from
> a SelectorProcessor using session chains, with one event thread.
> 
> So we can wire SelectorProcessors and EventProcessors as we want,
> from just one SelectorProcessor for one-thread logic in a quick
> server, to multiple SelectorProcessors pushing events to multiple
> EventProcessors.
> 
> I think it'll make the code really simpler, testable with mocks and
> more modular.
> 
> Julien



Re: [MINA 3.0] Thoughts on TCP server

2012-10-01 Thread Chad Beaulac
Hi Emmanuel,

1) One Reactor is preferable. Easier to manage, easier to code, meets all the 
requirements.

2) Event Processing: Use non-blocking sockets. Accept, connect, read and write 
won't hang each other up.

I have some code that does this. I'll get it and attach to a subsequent email. 
It might take a couple days.

Regards,
Chad Beaulac
Objective Solutions, Inc.
www.objectivesolutions.com
c...@objectivesolutions.com



On Oct 1, 2012, at 6:45 AM, Emmanuel Lécharny wrote:

> Hi guys, some thoughts about the TCP server. feel free to comment.
> 
> TCP server MINA 3
> -
> 
> As we are reworking the server part of MINA, we can review the current 
> architecture. There are a few problems we can address.
> 
> 1) One vs Many selectors
> In MINA 2, we do have at least 2 selectors used :
> - one for the OP_ACCEPT event
> - One or many for the OP_READ/OP_WRITE
> 
> I don't think that it makes a lot of sense to force such a choice. IMO, we 
> could perfectly start with one single selector to handle all the events, 
> assuming we are fast enough to process them. Otherwise, we can also configure 
> the server to use more selectors, if needed.
> 
> Typically, the acceptor selector will just deal with incoming new 
> connections, and the created channel will be registered on an IoProcessor 
> selector on MINA 2. We could register this channel on the acceptor selector.
> 
> In any case, if we do create more than one selector, I suggest that the first 
> one would always handle OP_ACCEPT events, and that's all.
> 
> The general algorithm will look like :
> 
> signal started
> 
> while true
>  nbSelect = select( delay )
> 
>  if nbSelect != 0
>for each selectionKey do
>  case isAccept // Only if the selector is the first one, otherwise we 
> don't need to check this case
>create session
> 
>  case isRead
>process read
> 
>  case isWrite
>process write
>done
> 
>  if dispose // In order to stop the server
>process dispose
>break
> done
> 
> The key is to start all the selector workers *before* accepting any 
> connection, otherwise we may lose some messages. One more thing : each 
> selector should signal that it has started before entering the loop, 
> so that the caller doesn't have to wait a random period of time for the 
> selectors to be started.
> 
> 2) Events processing
> Now, in order to limit the number of selectors to use, we need to limit the 
> time it takes to process the read/write/accept events. But even if we have 
> many selectors, we should try to minimize the risk that one selector is 
> blocked by a single session stuck somewhere doing some heavy 
> processing, as it would block all the other sessions.
> 
> Having more than a selector is one way to mitigate this issue : as we have 
> many threads (one per selector), we spread the loads on as many threads.
> Another solution would be to use an executor in charge of processing the 
> events, with a queue between the selector and the executor, queue that is 
> used to process the events as fast as possible on the selector (this is 
> really important for UDP, as we don't want to lose messages simply because 
> the OS buffer is full).
> 
> The problem is that we are just not solving the problem of a rogue service 
> that block a thread for a long time (if we use a limited size executor), or 
> we may end with so many threads that it may kill the server. But anyway, it 
> sounds like a better solution, as incoming events that won't require a long 
> processing will be favored in the long term.
> 
> 3) Write processing
> This is a complex issue too : we may not be able to push all the data we want 
> into the socket, if it becomes full (or was already full). In this case, we 
> will have to store the data in a queue. The following algorithm describes this 
> situation and a proposal to solve it
> 
>if there are some data in the writeQueue
>  then
>// We need to enqueue the data, and write the head of the queue
>enqueue data
> 
>// Now, we should try to write as much of the queue as we can
>while ( queue not empty)
>  do
>  poll the data from the queue
>nbWritten = channel.write( remaining data ) // We may have already 
> written some part of the head data
> 
>if nbWritten < data.remaining
>  then
>// the socket is already full, set its OP_WRITE interest, and 
> don't touch the queue
>selectionKey.ops = OP_WRITE
>break // We can stop, the socket is full anyway
>  else
>  

Re: [MINA3] Writting messages directly : some pb

2013-01-07 Thread Chad Beaulac
If you don't push the data onto a queue and wait for the selector to fire, you 
could be trying to write data to a socket that is congestion controlled, which 
you shouldn't do. The writer should wait for the selector to fire saying the 
socket is writable before trying to write.

Chad Beaulac
Objective Solutions, Inc.
www.objectivesolutions.com
c...@objectivesolutions.com



On Jan 7, 2013, at 4:20 AM, Julien Vermillard wrote:

> I have another problem (probably hitting you too) with the Idle event which
> is spawned from another thread (the idle checker thread). We should perhaps
> pass a message to the selector and wake it up.
> 
> 
> 
> 
> On Mon, Jan 7, 2013 at 6:42 AM, Emmanuel Lécharny wrote:
> 
>> Hi !
>> 
>> as you know, I was modifying the code to write the messages directly
>> into the channel instead of pushing it into a queue, and wait for the
>> SelectorLoop to process it. So far, so good it works well, except that
>> the messageSent( event is difficult to generate.
>> 
>> When we push the message into a queue, what happens is that we feed the
>> DefaultWriteRequest with the original message (ie before it goes through
>> the codec filter), and we also add a future into it. This is done
>> *after* the message has been enqueued. Why ? Because in
>> AbstractIoSession.processMessageWriting(), we get back the enqueued
>> message using the lastWriteRequest, which is produced by the
>> AbstractIoSession.enqueueFinalWriteMessage().
>> 
>> Now, the messageSent() event will be produced by the selectorLoop() when
>> it processes the message to write, polling it from the queue. At this
>> point, this message has everything needed : the data to write, the
>> original data and the future.
>> 
>> But when we write the message directly, we don't have the future nor the
>> original message, yet, but still we have to generate the messageSent
>> event. This is problematic...
>> 
>> Here, what I suggest, is that the original message is to be passed
>> through the full chain of filters, so that we can update it in any case.
>> That would probably mean that the AbstractIoSession.doWriteWithFuture()
>> create this envelop, feed it with the original message and the future,
>> and pass this object down the chain.
>> 
>> I don't see any other way to solve this issue. If any of you have any
>> better idea, I'm all ears :)
>> 
>> --
>> Regards,
>> Cordialement,
>> Emmanuel Lécharny
>> www.iktek.com
>> 
>> 









Re: [MINA3] Writting messages directly : some pb

2013-01-07 Thread Chad Beaulac

On Jan 7, 2013, at 11:58 AM, Emmanuel Lécharny wrote:

> Le 1/7/13 2:19 PM, Chad Beaulac a écrit :
>> If you don't push the data onto a queue and wait for the selector to fire, 
>> you could be trying to write data to a socket that is congestion controlled, 
>> which you shouldn't do. The writer should wait for the selector to fire 
>> saying the socket is writable before trying to write.
> 
> The thing is that in any case, we do try to write into the socket. If
> the socket internal buffer gets full, we won't be able to write any more
> data into it, and we will have to wait for the socket to inform the
> selector when it's ready to accept more data.
> 
> What I'm proposing is not any different, except that I bypass the queue
> *if* and only *if* the socket is ready to accept new data. In other
> words, the algorithm is :
>  o if there is a queue containing messages waiting for the socket to be
> ready (OP_WRITE set to true), then enqueue the new message.
>  o else try to write the message into the socket
>- if we can't write the full message, set the SelectionKey to be
> interested in OP_WRITE, and enqueue the remaining data.
>- Otherwise, the data have already been written anyway, we are done
> 
> This is the way MINA 2 has worked since its inception.
> 
> Is there anything wrong with that ?
> 

Potentially. You should not attempt to write to the socket unless the OP_WRITE 
selector key says the socket is writable. If the selector has fired and says 
the socket is writable before you try to write, it's not a problem. Otherwise, 
you should queue the data, let the Selector fire and then try to write. I don't 
see any way around queuing the data.
You never want to try to write to a socket that is not writable. If it isn't 
writable, it's closed or congestion controlled. This "edge-triggered" I/O 
approach works well. You don't know what "edge" you're on if you haven't 
registered for an event before you try to write.

Pseudocode Algorithm.

// Client provides data to this class to output onto the socket.
public synchronized void put(Data data) {
  if (queue.isEmpty()) {
    // Output queue is empty. We need to register for the write event so the
    // OS can tell us when it's ok to write.
    register(key, OP_WRITE);
    enqueue(data);  // Put the data onto the ConcurrentLinkedQueue
  } else {
    // Just put the data onto the queue. We're already registered for the
    // write event.
    enqueue(data);
  }
}

public void runEventLoop() {
  // handle events on Selector
}

// OP_WRITE fired for key
public synchronized void writeData(key) {
  while (!queue.isEmpty()) {
    Data data = queue.peek();
    writeForKey(key, data);
    if (data.remaining() > 0) {
      // Socket is congestion controlled; leave the partially written buffer
      // at the head of the queue so ordering is preserved. We're still
      // registered for the write event, so when the TCP queue drains, the
      // write event will fire again.
      break;
    }
    queue.dequeue();  // fully written, move on to the next buffer
  }
  if (queue.isEmpty()) {
    // We don't have data to write to this socket anymore. Unregister for the
    // write event.
    unregister(key, OP_WRITE);
  }
}


> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com 
> 

Chad Beaulac
Objective Solutions, Inc.
www.objectivesolutions.com
c...@objectivesolutions.com








Re: [MINA3] Writting messages directly : some pb

2013-01-08 Thread Chad Beaulac
Non-blocking mode for all sockets, for sure. I like the algorithm. It seems you 
almost certainly still avoid trying to write multiple times to a 
congestion-controlled socket, which must not happen. I'll modify my code to try 
your slightly different approach. 
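
For my own notes, the approach as I understand it (rough sketch, hypothetical
names; the channel is in non-blocking mode):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;

public class WriteFirstExample {
    private final Queue<ByteBuffer> writeQueue = new ArrayDeque<ByteBuffer>();

    // called by the producer thread
    public synchronized void write(SelectionKey key, ByteBuffer data) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();

        if (writeQueue.isEmpty()) {
            channel.write(data);   // optimistic write; returns immediately if congested
            if (!data.hasRemaining()) {
                return;            // everything went out: no queue, no OP_WRITE needed
            }
        }
        // either something was already queued, or only part of the data was accepted:
        // enqueue the remainder and let the selector say when the socket drains
        writeQueue.add(data);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
    }
}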

Chad 

Sent from my iPhone

On Jan 8, 2013, at 3:45 AM, Julien Vermillard  wrote:

> the trick is to try the write first, with the socket configured in
> non-blocking mode, so if it's congestion controlled the write will
> fail instantaneously; we then enqueue the message and set the OP_WRITE flag
> on the selection key.
> 
> 
> On Tue, Jan 8, 2013 at 3:24 AM, Chad Beaulac  wrote:
> 
>> 
>> On Jan 7, 2013, at 11:58 AM, Emmanuel Lécharny wrote:
>> 
>>> Le 1/7/13 2:19 PM, Chad Beaulac a écrit :
>>>> If you don't push the data onto a queue and wait for the selector to
>> fire, you could be trying to write data to a socket that is congestion
>> controlled, which you shouldn't do. The writer should wait for the selector
>> to fire saying the socket is writable before trying to write.
>>> 
>>> The thing is that in any case, we do try to write into the socket. If
>>> the socket internal buffer gets full, we won't be able to write any ore
>>> data into it, and we will have to wait for the socket to inform the
>>> selector when it's ready to accept more data.
>>> 
>>> What I'm proposing is not any different, except that I bypass the queue
>>> *if* and only *if* the socket is ready to accept new data. In other
>>> words, the algorithm is :
>>> o if there is a queue containing messages waiting for the socket to be
>>> ready (OP_WRITE set ti true), then enqueue the new message.
>>> o else try to write the message into the socket
>>>   - if we can't write the full message, set the SelectionKey to be
>>> interested in OP_WRITE, and enqueue the remaining data.
>>>   - Otherwise, the data have already been written anyway, we are done
>>> 
>>> This is the way MINA 2 works since it's inception.
>>> 
>>> Is there anything wrong with that ?
>> 
>> Potentially. You should not attempt to write to the socket unless the
>> OP_WRITE selector key says the socket is writable. If the selector has
>> fired and says the socket is writable before you try to write, it's not a
>> problem. Otherwise, you should queue the data, let the Selector fire and
>> then try to write. I don't see any way around queuing the data.
>> You never want to try to write to a socket that is not writable. If it isn't
>> writable, it's closed or congestion controlled. This "edge-triggered" I/O
>> approach works well. You don't know what "edge" you're on if you haven't
>> registered for an event before you try to write.
>> 
>> Pseudocode Algorithm.
>> 
>> // Client provides data to this class to output onto the socket.
>> public synchronized void put(Data data) {
>>  if (queue.isEmpty) {
>>register(key, OP_WRITE);  // output queue is empty. We need to
>> register for the write event so the OS can tell us when it's ok to write.
>>enqueue(data);  // Put the data onto the ConcurrentLinkedQueue
>>  }
>>  else {
>>enqueue(data); // Just put the data onto the queue. We're already
>> registered for the write event.
>>  }
>> }
>> 
>> public void runEventLoop() {
>>// handle events on Selector
>> }
>> 
>> 
>> // OP_WRITE fired for key
>> public synchronized void writeData(key) {
>>while (!queue.isEmpty()) {
>>Data data = queue.dequeue();
>>writeForKey(key,data);
>>if (data.remaining() > 0) {
>>  enqueue(data);
>>  break; // socket is congestion controlled. We're still
>> registered for the write event. When the TCP queue drains, the write event
>> will fire again
>>}
>>}
>>if (queue.isEmpty()) {
>>  unregister(key, OP_WRITE);  // We don't have data to write to
>> this socket anymore. Unregister for the write event.
>>}
>> }
>> 
>> 
>>> 
>>> --
>>> Regards,
>>> Cordialement,
>>> Emmanuel Lécharny
>>> www.iktek.com
>> 
>> Chad Beaulac
>> Objective Solutions, Inc.
>> www.objectivesolutions.com
>> c...@objectivesolutions.com
>> 
>> 
>> 
>> 
>> 
>> 
>> 


Re: [MINA3] Writting messages directly : some pb

2013-01-08 Thread Chad Beaulac
Understood on the change in state: between the writable event firing and the 
write, the socket could close. In that case the read event would fire next and 
return an end-of-stream read, signaling the socket is closed. 

Your statement below is exactly what I have dealt with on many systems, 
resulting in TCP buffer overflow and RAM bloat/leak: 
"One possible case would be if the underlying OS were to accept anything
you try to write into the socket..."

Sent from my iPhone

On Jan 8, 2013, at 9:21 AM, Emmanuel Lécharny  wrote:

> One possible case would be if the underlying OS were to accept anything
> you try to write into the socke


Re: [MINA3] Writting messages directly : some pb

2013-01-08 Thread Chad Beaulac
Hi Emmanuel.

It's all good. We want the same thing. I'm not feeling like we're arguing at 
all. :-)
Understood on setting SO_SNDBUF.
I think your algorithm is good. Writing before the select fires the write 
event, only when the queue is empty, should be ok, and it should lower initial 
latency as you suggested earlier.
I'm going to test this change but I expect that it will work well.

Regards,
Chad



On Jan 8, 2013, at 11:40 AM, Emmanuel Lécharny wrote:

> Just a clarification :
> 
> Chad, I don't want to feel like I'm arguing. I'm really trying to see if
> the change I have pushed should be made optional, and if doing so will
> really help when we are dealing with a congestion situation.
> 
> What I don't get is how waiting for the select() to wake up the thread
> when the socket is ready to write is any different from what I'm doing.
> Do you have any link explaining the problem ?
> 
> Thanks !
> 
> 
> Le 1/8/13 4:46 PM, Emmanuel Lécharny a écrit :
>> Le 1/8/13 4:06 PM, Chad Beaulac a écrit :
>>> Understood on change in state between writable event firing and socket 
>>> could close by the time you goto write. In which case the read event would 
>>> fire next and result in zero bytes read signaling the socket is closed. 
>>> 
>>> Your statement below is exactly what I have dealt with on many systems. 
>>> Resulting in TCP buffer overflow and RAM bloat/leak. 
>>> "One possible case would be if the underlying OS were to accept anything
>>> you try to write into the socket..."
>> In my understanding, you can mitigate such a problem by setting
>> SO_SNDBUF according to your client's capacity. At some point, if your
>> client is slow, you can perfectly well end up with your server being swamped
>> with data waiting to be sent, but I don't see how using a queue can help
>> here : as soon as the socket refuses to accept more data than the
>> buffer it uses to store the data, you are safe.
>> 
>> In other words, your application should take care of such a scenario, or,
>> better, MINA should provide the tools to inform the application when the
>> sending queue is getting too large...
>> 
> 
> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com 
> 

Chad Beaulac
Objective Solutions, Inc.
www.objectivesolutions.com
c...@objectivesolutions.com
410.707.5842 (cell)








Re: CumulativeProtocolDecoder and UDP

2016-03-01 Thread Chad Beaulac
UDP datagrams are fragmented based on MTU size. So if your messages are bigger 
than the MTU, you need to handle multiple callbacks of MTU-sized chunks in your 
decoder. So it doesn't work very well with UDP. 

Sent from my iPhone

> On Mar 1, 2016, at 11:37 AM, Ashish  wrote:
> 
> TCP handles fragmentation at its level, but for UDP you have to do it
> at the application layer, meaning the UDP data has to carry message sequences
> and then you merge them at the receiving end. Here your packets can come in
> a different order, so you've got to keep them somewhere before the complete
> message is constructed.
> 
> 
> 
>> On Tue, Mar 1, 2016 at 7:58 AM, tunca  wrote:
>> Our customer has a strange requirement about merging multiple UDP/TCP packets
>> to create a single message.
>> There is a well defined protocol that defines message boundaries.
>> The decoder is working well with TCP packets.  It can create a single
>> message from multiple TCP packets.
>> However when a message is fragmented into multiple packets the doDecode
>> method always gives the same ioBuffer.
>> I'll try 2.0.13 next day.
>> I'll try 2.0.13 next day.
>> Thanks
>> 
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://apache-mina.10907.n7.nabble.com/CumulativeProtocolDecoder-and-UDP-tp18927p50274.html
>> Sent from the Apache MINA Developer Forum mailing list archive at Nabble.com.
> 
> 
> 
> -- 
> thanks
> ashish
> 
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal


[jira] [Commented] (DIRMINA-654) Google Protocol Buffers Codec

2012-08-23 Thread Chad Beaulac (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440620#comment-13440620
 ] 

Chad Beaulac commented on DIRMINA-654:
--

Hi,

I wrote a Mina 2.x GPB codec. 
* Tests pass with Mina 2.x
* Supports multiple messages on single endpoint using extensions
* Varint encoding for wire protocol works fine. No need for fixed length header.

The Maven project I have that builds it required protoc at maven build time. I 
think donating this codec to Apache would be a good idea. How shall we proceed?

Regards,
Chad
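
P.S. For anyone curious, the varint length prefix is just protobuf's base-128
encoding. A standalone sketch of the framing (no protobuf dependency,
illustrative only, not the codec itself):

import java.nio.ByteBuffer;

public class VarintFramingExample {
    // write a length as a base-128 varint, least-significant 7-bit group first
    static void writeVarint(ByteBuffer out, int value) {
        while ((value & ~0x7F) != 0) {
            out.put((byte) ((value & 0x7F) | 0x80)); // high bit set: more bytes follow
            value >>>= 7;
        }
        out.put((byte) value);
    }

    // read a varint length; returns -1 if the buffer doesn't hold a complete varint yet
    // (a cumulative decoder would mark()/reset() the buffer around this call)
    static int readVarint(ByteBuffer in) {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
            if (!in.hasRemaining()) {
                return -1; // wait for more bytes
            }
            byte b = in.get();
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) {
                return result;
            }
        }
        throw new IllegalStateException("malformed varint");
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        writeVarint(buf, 300); // encodes as 0xAC 0x02
        buf.flip();
        System.out.println(readVarint(buf)); // prints 300
    }
}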


> Google Protocol Buffers Codec
> -
>
> Key: DIRMINA-654
> URL: https://issues.apache.org/jira/browse/DIRMINA-654
> Project: MINA
>  Issue Type: New Feature
>  Components: Filter
>Reporter: Tomasz Blachowicz
> Fix For: 2.0.6
>
> Attachments: protobuf.txt
>
>
> Google Protocol Buffers (protobuf) are language-neutral, platform-neutral, 
> extensible mechanism for serializing structured data. Apache MINA already 
> provides several protocol codecs (decoders and encodes) that allows to deal 
> with different wire formats. The Protocol Buffer is the perfect candidate for 
> new codec available as separate and optional module within Apache MINA.
> More details on Protocol Buffers can be found on the website:
> http://code.google.com/apis/protocolbuffers





[jira] [Comment Edited] (DIRMINA-654) Google Protocol Buffers Codec

2012-08-23 Thread Chad Beaulac (JIRA)

[ 
https://issues.apache.org/jira/browse/DIRMINA-654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440620#comment-13440620
 ] 

Chad Beaulac edited comment on DIRMINA-654 at 8/24/12 7:26 AM:
---

Hi,

I wrote a Mina 2.x GPB codec. 
* Tests pass with Mina 2.x
* Supports multiple messages on single endpoint using extensions
* Varint encoding for wire protocol works fine. No need for fixed length header.

The Maven project I have requires protoc at maven build time. I think donating 
this codec to Apache would be a good idea. How shall we proceed?

Regards,
Chad


  was (Author: cabeaulac):
Hi,

I wrote a Mina 2.x GPB codec. 
* Tests pass with Mina 2.x
* Supports multiple messages on single endpoint using extensions
* Varint encoding for wire protocol works fine. No need for fixed length header.

The Maven project I have that builds it required protoc at maven build time. I 
think donating this codec to Apache would be a good idea. How shall we proceed?

Regards,
Chad

  
> Google Protocol Buffers Codec
> -
>
> Key: DIRMINA-654
> URL: https://issues.apache.org/jira/browse/DIRMINA-654
> Project: MINA
>  Issue Type: New Feature
>  Components: Filter
>Reporter: Tomasz Blachowicz
> Fix For: 2.0.6
>
> Attachments: protobuf.txt
>
>
> Google Protocol Buffers (protobuf) are language-neutral, platform-neutral, 
> extensible mechanism for serializing structured data. Apache MINA already 
> provides several protocol codecs (decoders and encodes) that allows to deal 
> with different wire formats. The Protocol Buffer is the perfect candidate for 
> new codec available as separate and optional module within Apache MINA.
> More details on Protocol Buffers can be found on the website:
> http://code.google.com/apis/protocolbuffers
