Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-11 Thread Oleg Kalnichevski
On Sat, 2010-10-09 at 16:58 +0200, Emmanuel Lécharny wrote:
 On 10/6/10 9:18 PM, Oleg Kalnichevski wrote:
  Here is my private fork of HttpCore NIO code with all HTTP specific
  stuff ripped out. It might be easier for you to get a feel of the
  framework by looking at the protocol independent code.
 
  http://github.com/ok2c/lightnio
 
  This is the code I would like to potentially see replaced by MINA 3.0.
 
 Looking at this code (well, browsing it; I still have to do my homework 
 ;), it seems that the internal logic is really close to what we have in 
 MINA, which is quite logical.
 
 I see that the processEvent() method is the place that propagates the 
 events to the underlying application, and probably what you'd like to 
 have in MINA to get direct control over the channel, if I understand 
 correctly. There is a slight difference here between lightnio and MINA: 
 once we have created a session, it has a chain of filters attached to 
 it, and we call the method corresponding to the event we received, which 
 goes through the chain up to the handler. This is where you have your 
 application code. In other words, an application based on lightnio has 
 to implement IOReactor, whereas in MINA we require that the application 
 implement IoHandler.
 
 

Hi Emmanuel

Actually, HttpCore comes with the two most common IOReactor
implementations: a listener and a connector. What the user code is
expected to provide is a custom implementation of the IOEventDispatch
interface. The I/O dispatch layer is where I/O events get pre-processed
and propagated to a protocol handler. That would be the place to
implement a filter pipeline similar to the one employed by MINA.


 One other difference is that we process the read and write parts in 
 the main loop (in IoProcessor), something you want to handle directly.
 
 If we remove this processing from the IoProcessor and move it into 
 filters (ReadFilter, WriteFilter), then it becomes very similar to 
 what you want: either we add those filters to the chain and the 
 application does not have to deal with the read/write operations (buffer 
 creation, queues, and such), or we let the application deal with them.
 
 That might work. I still have to think more about the impact on MINA (I 
 would hate asking MINA users to inject those filters manually, but they 
 can be part of the default chain, and we can define another chain 
 without them).
 
 

I see no good reason why an I/O framework could not support both modes
equally well, or even have a filter pipeline run on top of a
Channel-based I/O reactor.

If you are willing to invest some time into exploring the possibility of
exposing the lower level machinery of MINA through a public interface of
some sort, I would also happily do my bit by helping debug the code and
by contributing a test suite for it.

Potentially I could also contribute a pretty much feature-complete
SMTP/LMTP transport implementation (both client and server side), which
is currently based on my private fork of HttpCore NIO code (lightnio),
if there is interest.

Cheers

Oleg



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-11 Thread Emmanuel Lécharny

 On 10/11/10 2:06 PM, Oleg Kalnichevski wrote:

On Sat, 2010-10-09 at 16:58 +0200, Emmanuel Lécharny wrote:

On 10/6/10 9:18 PM, Oleg Kalnichevski wrote:

Here is my private fork of HttpCore NIO code with all HTTP specific
stuff ripped out. It might be easier for you to get a feel of the
framework by looking at the protocol independent code.

http://github.com/ok2c/lightnio

This is the code I would like to potentially see replaced by MINA 3.0.

Looking at this code (well, browsing it; I still have to do my homework
;), it seems that the internal logic is really close to what we have in
MINA, which is quite logical.

I see that the processEvent() method is the place that propagates the
events to the underlying application, and probably what you'd like to
have in MINA to get direct control over the channel, if I understand
correctly. There is a slight difference here between lightnio and MINA:
once we have created a session, it has a chain of filters attached to
it, and we call the method corresponding to the event we received, which
goes through the chain up to the handler. This is where you have your
application code. In other words, an application based on lightnio has
to implement IOReactor, whereas in MINA we require that the application
implement IoHandler.


Hi Emmanuel

Actually, HttpCore comes with the two most common IOReactor
implementations: a listener and a connector. What the user code is
expected to provide is a custom implementation of the IOEventDispatch
interface. The I/O dispatch layer is where I/O events get pre-processed
and propagated to a protocol handler. That would be the place to
implement a filter pipeline similar to the one employed by MINA.


We have removed the IoHandler class from MINA 3.0 code. That's a first 
step toward a more generic approach.

One other difference is that we process the read and write parts in
the main loop (in IoProcessor), something you want to handle directly.

If we remove this processing from the IoProcessor and move it into
filters (ReadFilter, WriteFilter), then it becomes very similar to
what you want: either we add those filters to the chain and the
application does not have to deal with the read/write operations (buffer
creation, queues, and such), or we let the application deal with them.

That might work. I still have to think more about the impact on MINA (I
would hate asking MINA users to inject those filters manually, but they
can be part of the default chain, and we can define another chain
without them).



I see no good reason why an I/O framework could not support both modes
equally well, or even have a filter pipeline run on top of a
Channel-based I/O reactor.
I agree. However, we don't have an event like 'readyToWrite' to inform 
the application that the channel is ready to accept more write 
operations. We have to refactor this part; one possible solution would be 
to let the user determine the write queue size: if it's zero, then the 
application has direct access to the channel, with all the needed 
control over it.

If you are willing to invest some time into exploring the possibility of
exposing the lower level machinery of MINA through a public interface of
some sort, I would also happily do my bit by helping debug the code and
by contributing a test suite for it.
Work in progress :) As we support many different kinds of transports 
(NIO, APR, VmPipe, serial, and potentially plain IO), we would like to 
have a common approach for all of them. That might lead to major 
refactoring of the MINA code (in fact, we think that MINA 3 will be a 
complete rewrite).


Don't expect that we will be ready anytime soon :/

Potentially I could also contribute a pretty much feature-complete
SMTP/LMTP transport implementation (both client and server side), which
is currently based on my private fork of HttpCore NIO code (lightnio),
if there is interest.

Of course there is!


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-11 Thread Oleg Kalnichevski
On Mon, 2010-10-11 at 14:19 +0200, Emmanuel Lécharny wrote:

...

 We have removed the IoHandler class from MINA 3.0 code. That's a first 
 step toward a more generic approach.
  One other difference is that we process the read and write parts in
  the main loop (in IoProcessor), something you want to handle directly.
 
  If we remove this processing from the IoProcessor and move it into
  filters (ReadFilter, WriteFilter), then it becomes very similar to
  what you want: either we add those filters to the chain and the
  application does not have to deal with the read/write operations (buffer
  creation, queues, and such), or we let the application deal with them.
 
  That might work. I still have to think more about the impact on MINA (I
  would hate asking MINA users to inject those filters manually, but they
  can be part of the default chain, and we can define another chain
  without them).
 
 
  I see no good reason why an I/O framework could not support both modes
  equally well, or even have a filter pipeline run on top of a
  Channel-based I/O reactor.
 I agree. However, we don't have an event like 'readyToWrite' to inform 
 the application that the channel is ready to accept more write 
 operations. We have to refactor this part; one possible solution would be 
 to let the user determine the write queue size: if it's zero, then the 
 application has direct access to the channel, with all the needed 
 control over it.
  If you are willing to invest some time into exploring the possibility of
  exposing the lower level machinery of MINA through a public interface of
  some sort, I would also happily do my bit by helping debug the code and
  by contributing a test suite for it.
 Work in progress :) As we support many different kinds of transports 
 (NIO, APR, VmPipe, serial, and potentially plain IO), we would like to 
 have a common approach for all of them. That might lead to major 
 refactoring of the MINA code (in fact, we think that MINA 3 will be a 
 complete rewrite).
 
 Don't expect that we will be ready anytime soon :/

I'll be lurking on the list, but please do feel free to ping me directly
as soon as you have something you would like me to start looking at. 

Cheers

Oleg



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lecharny

 On 10/6/10 2:49 PM, Oleg Kalnichevski wrote:

MINA devs,

Would you be receptive to the idea of supporting a message-less (or
bare-metal, if you like) I/O mode? Essentially, I would like MINA 3.0 to
make it possible to interact directly with the underlying I/O channels
bypassing the read / write data queues altogether.
There is no such thing as a queue used for the read operation. We read the 
data from the channel into a buffer, and then call the messageReceived() 
method through the filter chain, up to the handler.


Of course, if you add an executor in the middle, then that's a different 
story.


Anyway, maybe what you want is for the handler to do the read directly, 
sparing the creation of an intermediate buffer. You currently can't do 
that, so the buffer will always be created and filled with data.


For the write operation, what is currently missing in MINA 2 is the 
transferTo method, which would allow you to push file contents directly 
to the socket without copying them into memory. This is most certainly 
something we want to have in 3.0.
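For reference, this is what such zero-copy support rests on in plain NIO; a minimal sketch (the class and method names below are illustrative, not MINA or HttpCore API), looping because transferTo may move fewer bytes than requested per call:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class ZeroCopySend {
    // Push a file's contents to a channel without staging them in a
    // user-space buffer; the kernel moves the bytes directly.
    static long sendFile(Path file, WritableByteChannel target) throws IOException {
        try (FileChannel src = FileChannel.open(file, StandardOpenOption.READ)) {
            long position = 0;
            long size = src.size();
            while (position < size) {
                long sent = src.transferTo(position, size - position, target);
                if (sent == 0) {
                    break; // target temporarily full (possible with non-blocking targets)
                }
                position += sent;
            }
            return position; // number of bytes actually transferred
        }
    }
}
```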



In other words,
whenever a channel signals read / write readiness, the I/O selector
thread would fire an event referring to the originating channel. The
decision as to how much data can be read from or written to the channel
would be left up to the I/O handler.
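The mode described above maps naturally onto a plain NIO selector loop. A minimal sketch, with illustrative interface and method names (this is neither existing MINA nor lightnio API): the loop only reports readiness and hands the handler the selection key, so the handler decides how much to read or write.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Hypothetical handler contract: the framework reports readiness only;
// all actual reading and writing is left to the application.
interface IoEventHandler {
    void readable(SelectionKey key) throws IOException;
    void writable(SelectionKey key) throws IOException;
}

final class ReactorLoop {
    // One pass of the event loop: wait for readiness, then dispatch
    // raw channel events without touching any data queues.
    static void dispatchOnce(Selector selector, IoEventHandler handler) throws IOException {
        selector.select();
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isReadable()) {
                handler.readable(key);
            }
            if (key.isValid() && key.isWritable()) {
                handler.writable(key);
            }
        }
    }
}
```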


Well, that does not seem very useful. If you defer the read and write 
operations on the channel to the handler, then why not directly 
write your NIO server from scratch? It's about 500 lines of code, all 
in all, and you can even disassemble MINA to do that...

I am perfectly aware of the downsides of this approach, which are several,
but it would enable data-intensive protocols such as HTTP and SMTP to
manage connection memory footprint more conservatively. For instance,
the protocol handler could attempt to preallocate a fixed and invariable
amount of memory at connection initialization time and either succeed
or fail early, instead of risking an out-of-memory condition halfway
through a transaction due to memory flooding.
You can already manage the flooding by limiting the number of bytes 
you read. As you have complete control over the initial RcvBufferSize, 
plus control over the created buffer, you can always at some point 
'kill' a session which has received too much data.


It is also possible to manage a threshold for every session, which 
might help in this case.


For instance, in Apache DirectoryServer, we have added a parameter that 
limits the size of the received PDU, and this parameter can be modified for 
any session. If we receive more bytes than this limit, we close the 
session. It works pretty well.
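The counting logic behind such a limit is straightforward. A hypothetical sketch (names are illustrative; this is not the actual ApacheDS code):

```java
// Tracks bytes received on one session and flags when a configured
// PDU size limit is exceeded, at which point the session is closed.
final class PduLimitGuard {
    private final int maxPduSize;
    private int received;

    PduLimitGuard(int maxPduSize) {
        this.maxPduSize = maxPduSize;
    }

    // Returns false when the limit is exceeded and the session
    // should be closed.
    boolean onBytesReceived(int count) {
        received += count;
        return received <= maxPduSize;
    }
}
```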

I understand this might be too much of a radical shift from the existing
architecture, so feel free to ignore me if you find this approach
incompatible with the MINA 3.0 design concepts.
No, that's fine; it's just important to know what we already have and 
what we can offer without cutting off two of MINA's legs :)


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Oleg Kalnichevski
On Wed, 2010-10-06 at 16:55 +0200, Emmanuel Lecharny wrote:
 On 10/6/10 2:49 PM, Oleg Kalnichevski wrote:
  MINA devs,
 
  Would you be receptive to the idea of supporting a message-less (or
  bare-metal, if you like) I/O mode? Essentially, I would like MINA 3.0 to
  make it possible to interact directly with the underlying I/O channels
  bypassing the read / write data queues altogether.
 There is no such thing as a queue used for the read operation. We read the 
 data from the channel into a buffer, and then call the messageReceived() 
 method through the filter chain, up to the handler.
 
 Of course, if you add an executor in the middle, then that's a different 
 story.
 
 Anyway, maybe what you want is for the handler to do the read directly, 
 sparing the creation of an intermediate buffer. You currently can't do 
 that, so the buffer will always be created and filled with data.
 

There are enough scenarios (especially when streaming large entities)
in which this extra copy is wasteful and unnecessary. Moreover, what if the
I/O handler is simply unable to consume the message entirely without
allocating more memory? The content of the message cannot be unread
until some space frees up in the input buffer. The beauty of Channel is
that the I/O handler can read only as much as it is able to process at a
given point in time.


 For the write operation, what is currently missing in MINA 2 is the 
 transferTo method, which would allow you to push file contents directly 
 to the socket without copying them into memory. This is most certainly 
 something we want to have in 3.0.
 

Again, what if the transmitted entity is not backed by a file?


  In other words,
  whenever a channel signals read / write readiness, the I/O selector
  thread would fire an event referring to the originating channel. The
  decision as to how much data can be read from or written to the channel
  would be left up to the I/O handler.
 
 Well, that does not seem very useful.

It all depends how you look at it.


 If you defer the read and write operations on the channel to the 
 handler, then why not directly write your NIO server from scratch? 
 It's about 500 lines of code, all in all, and you can even disassemble 
 MINA to do that...


It is a bit more with the SSL support and other bits and pieces, but this
is exactly what I have to do at the moment.


  I am perfectly aware of the downsides of this approach, which are several,
  but it would enable data-intensive protocols such as HTTP and SMTP to
  manage connection memory footprint more conservatively. For instance,
  the protocol handler could attempt to preallocate a fixed and invariable
  amount of memory at connection initialization time and either succeed
  or fail early, instead of risking an out-of-memory condition halfway
  through a transaction due to memory flooding.
 You can already manage the flooding by limiting the number of bytes 
 you read. As you have complete control over the initial RcvBufferSize, 
 plus control over the created buffer, you can always at some point 
 'kill' a session which has received too much data.
 

I would very much prefer to avoid memory flooding instead of
having to manage it.


 It is also possible to manage a threshold for every session, which 
 might help in this case.
 
 For instance, in Apache DirectoryServer, we have added a parameter that 
 limits the size of the received PDU, and this parameter can be modified for 
 any session. If we receive more bytes than this limit, we close the 
 session. It works pretty well.

By dropping the connection in the middle of the session? Would it not be
better to pre-allocate a fixed amount of memory and have a guarantee
that more memory will not be required (at least at the transport level)?


  I understand this might be too much of a radical shift from the existing
  architecture, so feel free to ignore me if you find this approach
  incompatible with the MINA 3.0 design concepts.
 No, that's fine, it's just important to know what we already have and 
 what we can offer without cutting two legs to MINA :)
 

It is a shame that by abstracting away java.nio.channels.Channel, both MINA
and Netty throw away one of the best features of NIO, which is memory
efficiency.

Anyways, I understand that MINA is based on a different philosophy and I
also understand the convenience of having a chain of filters that work
with messages that are essentially small chunks of memory. I just
thought the core framework might be able to support both modes.

I apologize for the intrusion.

Cheers   

Oleg



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Bernd Fondermann
On Wed, Oct 6, 2010 at 17:37, Oleg Kalnichevski ol...@apache.org wrote:

 I apologize for the intrusion.

Oleg, no apology needed, I highly appreciate this thread.
Very informative.

  Bernd


Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lécharny

 On 10/6/10 5:37 PM, Oleg Kalnichevski wrote:

On Wed, 2010-10-06 at 16:55 +0200, Emmanuel Lecharny wrote:

On 10/6/10 2:49 PM, Oleg Kalnichevski wrote:

MINA devs,

Would you be receptive to the idea of supporting a message-less (or
bare-metal, if you like) I/O mode? Essentially, I would like MINA 3.0 to
make it possible to interact directly with the underlying I/O channels
bypassing the read / write data queues altogether.

There is no such thing as a queue used for the read operation. We read the
data from the channel into a buffer, and then call the messageReceived()
method through the filter chain, up to the handler.

Of course, if you add an executor in the middle, then that's a different
story.

Anyway, maybe what you want is for the handler to do the read directly,
sparing the creation of an intermediate buffer. You currently can't do
that, so the buffer will always be created and filled with data.


There are enough scenarios (especially when streaming large entities)
in which this extra copy is wasteful and unnecessary. Moreover, what if the
I/O handler is simply unable to consume the message entirely without
allocating more memory?


Use MemoryMappedFile. It's supported.

For the write operation, what is currently missing in MINA 2 is the
transferTo method, which would allow you to push file contents directly
to the socket without copying them into memory. This is most certainly
something we want to have in 3.0.


Again, what if the transmitted entity is not backed by a file?


This is just an example, but here you can perfectly well do something like:

do {
    WriteFuture future = session.write( data );

    future.await();

    if ( !future.isWritten() ) {
        // error
        break;
    }

    // grab next piece of data
} while ( true );


In other words,
whenever a channel signals read / write readiness, the I/O selector
thread would fire an event referring to the originating channel. The
decision as to how much data can be read from or written to the channel
would be left up to the I/O handler.

Well, that does not seem very useful.

It all depends how you look at it.
(My response was supposed to be clarified by the next paragraph. 
As it stands, however, it sounds like I'm stating a bare opinion, which 
was not my intent.)



If you defer the read and write operations on the channel to the
handler, then why not directly write your NIO server from scratch?
It's about 500 lines of code, all in all, and you can even disassemble
MINA to do that...


It is a bit more with the SSL support and other bits and pieces, but this
is exactly what I have to do at the moment.
And it makes complete sense. I mean, MINA is a heavy beast which offers a 
lot of goodies, but that comes at a price. Sometimes this price is simply 
too high...



I am perfectly aware of the downsides of this approach, which are several,
but it would enable data-intensive protocols such as HTTP and SMTP to
manage connection memory footprint more conservatively. For instance,
the protocol handler could attempt to preallocate a fixed and invariable
amount of memory at connection initialization time and either succeed
or fail early, instead of risking an out-of-memory condition halfway
through a transaction due to memory flooding.

You can already manage the flooding by limiting the number of bytes
you read. As you have complete control over the initial RcvBufferSize,
plus control over the created buffer, you can always at some point
'kill' a session which has received too much data.


I would very much prefer to avoid memory flooding instead of
having to manage it.
There is no way you can 'avoid' memory flooding, unless you limit 
everything:

- the message size
- the number of connected clients

which is, somehow, managing the flooding.


It is also possible to manage a threshold for every session, which
might help in this case.

For instance, in Apache DirectoryServer, we have added a parameter that
limits the size of the received PDU, and this parameter can be modified for
any session. If we receive more bytes than this limit, we close the
session. It works pretty well.

By dropping the connection in the middle of the session? Would it not be
better to pre-allocate a fixed amount of memory and have a guarantee
that more memory will not be required (at least at the transport level)?
If a session is consuming more memory than allowed, that means 
something wrong is going on. I would rather kill this session than let it 
kill my server. Now, allocating a fixed-size buffer for each 
session does not mitigate this risk: it's just a way to consume memory you 
are likely not going to use if your messages are smaller.


What I mean here is that just because some sessions might send large 
chunks of data, you should not assume that all sessions will do the 
same; and if that is the case, then allocating a fixed size for every 
session will not help much: what will you do when a session's memory 
is full?



I understand this 

Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lecharny

 On 10/6/10 5:49 PM, Bernd Fondermann wrote:

On Wed, Oct 6, 2010 at 17:37, Oleg Kalnichevskiol...@apache.org  wrote:

I apologize for the intrusion.

Oleg, no apology needed, I highly appreciate this thread.
Very informative.
Indeed. It's fairly certain that MINA is lacking documentation, and 
that its internals are quite complex.


All inputs are very welcome at this stage of the design (basically, a blueprint).

In fact, I would be really interested in knowing exactly what kind of 
service you want to provide on top of NIO/MINA (if it fits your needs), 
because it might be useful for the design decisions we will take.


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Oleg Kalnichevski
On Wed, 2010-10-06 at 18:10 +0200, Emmanuel Lécharny wrote:
 On 10/6/10 5:37 PM, Oleg Kalnichevski wrote:
  On Wed, 2010-10-06 at 16:55 +0200, Emmanuel Lecharny wrote:
  On 10/6/10 2:49 PM, Oleg Kalnichevski wrote:
  MINA devs,
 
  Would you be receptive to the idea of supporting a message-less (or
  bare-metal, if you like) I/O mode? Essentially, I would like MINA 3.0 to
  make it possible to interact directly with the underlying I/O channels
  bypassing the read / write data queues altogether.
  There is no such thing as a queue used for the read operation. We read the
  data from the channel into a buffer, and then call the messageReceived()
  method through the filter chain, up to the handler.
 
  Of course, if you add an executor in the middle, then that's a different
  story.
 
  Anyway, maybe what you want is for the handler to do the read directly,
  sparing the creation of an intermediate buffer. You currently can't do
  that, so the buffer will always be created and filled with data.
 
  There are enough scenarios (especially when streaming large entities)
  in which this extra copy is wasteful and unnecessary. Moreover, what if the
  I/O handler is simply unable to consume the message entirely without
  allocating more memory?
 
 Use MemoryMappedFile. It's supported.

Errr. Thank you very much.

  For the write operation, what is currently missing in MINA 2 is the
  transferTo method, which would allow you to push file contents directly
  to the socket without copying them into memory. This is most certainly
  something we want to have in 3.0.
 
  Again, what if the transmitted entity is not backed by a file?
 
 This is just an example, but here you can perfectly well do something like:
 
 do {
WriteFuture future = session.write( data);
 
future.await();
 
if ( !future.isWritten() ) {
  // error
  break;
}
 
// grab next piece of data
 } while ( true );
 

The whole point is that I would like to write out only as much data as a
channel is able to take _without_ blocking. This is precisely what the
Channel interface enables me to do. I do not want just to block waiting
for a chunk of memory to be written out. 
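In plain NIO terms, this is what such a bounded write looks like: a non-blocking write() transfers at most what the socket buffer will take, and OP_WRITE interest asks the selector to call back once the socket drains. A sketch with illustrative names, not an actual MINA API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.WritableByteChannel;

final class PartialWrite {
    // Write only what the channel accepts right now; leave the rest in
    // the buffer and let the selector signal when the socket can take more.
    static void writeNonBlocking(SelectionKey key, ByteBuffer out) throws IOException {
        WritableByteChannel ch = (WritableByteChannel) key.channel();
        ch.write(out); // may consume part of the buffer, or nothing at all
        if (out.hasRemaining()) {
            // socket buffer full: ask for a writability callback
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            // everything flushed: stop asking for write events
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}
```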


  In other words,
  whenever a channel signals read / write readiness, the I/O selector
  thread would fire an event referring to the originating channel. The
  decision as to how much data can be read from or written to the channel
  would be left up to the I/O handler.
  Well, that does not seem very useful.
  It all depends how you look at it.
 (My response was supposed to be clarified by the next paragraph. 
 As it stands, however, it sounds like I'm stating a bare opinion, which 
 was not my intent.)
 
  If you defer the read and write operations on the channel to the
  handler, then why not directly write your NIO server from scratch?
  It's about 500 lines of code, all in all, and you can even disassemble
  MINA to do that...
 
  It is a bit more with the SSL support and other bits and pieces, but this
  is exactly what I have to do at the moment.
 And it makes complete sense. I mean, MINA is a heavy beast which offers a 
 lot of goodies, but that comes at a price. Sometimes this price is simply 
 too high...
 
  I am perfectly aware of the downsides of this approach, which are several,
  but it would enable data-intensive protocols such as HTTP and SMTP to
  manage connection memory footprint more conservatively. For instance,
  the protocol handler could attempt to preallocate a fixed and invariable
  amount of memory at connection initialization time and either succeed
  or fail early, instead of risking an out-of-memory condition halfway
  through a transaction due to memory flooding.
  You can already manage the flooding by limiting the number of bytes
  you read. As you have complete control over the initial RcvBufferSize,
  plus control over the created buffer, you can always at some point
  'kill' a session which has received too much data.
 
  I would very much prefer to avoid memory flooding instead of
  having to manage it.
 There is no way you can 'avoid' memory flooding, unless you limit 
 everything:
 - the message size
 - the number of connected clients
 
 which is, somehow, managing the flooding.
 

I respectfully disagree. One can use fixed-size input/output session
buffers and suspend/resume the channel using interest ops whenever the
session buffers get full or drain, all _without_ allocating a single extra
byte of memory at the transport level. Believe me, it works quite well,
thanks to the Channel interface.
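A minimal sketch of this scheme in plain NIO (names are illustrative, not the actual HttpCore code): one fixed buffer per session, with OP_READ interest dropped while the buffer is full and restored once the handler has consumed it, so no extra memory is ever allocated at the transport level.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;

final class SessionInput {
    private final ByteBuffer buffer; // fixed size, allocated once per session

    SessionInput(int capacity) {
        this.buffer = ByteBuffer.allocate(capacity);
    }

    // Called on a read-readiness event: fill the session buffer and,
    // if it is full, suspend reads so the peer is back-pressured.
    void onReadable(SelectionKey key) throws IOException {
        ((ReadableByteChannel) key.channel()).read(buffer);
        if (!buffer.hasRemaining()) {
            key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
        }
    }

    // Called once the protocol handler has drained the buffer:
    // resume reads (simplified: the whole buffer is consumed at once).
    void onConsumed(SelectionKey key) {
        buffer.clear();
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);
    }
}
```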


  It is also possible to manage a threshold for every session, which
  might help in this case.
 
  For instance, in Apache DirectoryServer, we have added a parameter that
  limits the size of the received PDU, and this parameter can be modified for
  any session. If we receive more bytes than this limit, we close the
  session. It works pretty well.
  By dropping the connection in the middle of the session? Would it not be
  

Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Oleg Kalnichevski
On Wed, 2010-10-06 at 18:13 +0200, Emmanuel Lecharny wrote:
 On 10/6/10 5:49 PM, Bernd Fondermann wrote:
  On Wed, Oct 6, 2010 at 17:37, Oleg Kalnichevskiol...@apache.org  wrote:
  I apologize for the intrusion.
  Oleg, no apology needed, I highly appreciate this thread.
  Very informative.
 Indeed. It's fairly certain that MINA is lacking documentation, and 
 that its internals are quite complex.
 
 All inputs are very welcome at this stage of the design (basically, a blueprint).
 
 In fact, I would be really interested in knowing exactly what kind of 
 service you want to provide on top of NIO/MINA (if it fits your needs), 
 because it might be useful for the design decisions we will take.
 

I am a developer/maintainer of Apache HttpComponents. We have our own
NIO framework optimized specifically for data-intensive protocols such
as HTTP, and it works quite well for us. However, Apache HC is a small
project with just a handful of folks involved. If another ASLv2-licensed
I/O framework met our requirements, we could gradually drop our own NIO
code, thus freeing up bandwidth for HTTP-related stuff. Presently, the
memory management in MINA just kills it for me. The same goes for Netty.

Take it as food for thought. Nothing more.

Cheers

Oleg



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lécharny

 On 10/6/10 6:54 PM, Oleg Kalnichevski wrote:

On Wed, 2010-10-06 at 18:13 +0200, Emmanuel Lecharny wrote:

On 10/6/10 5:49 PM, Bernd Fondermann wrote:

On Wed, Oct 6, 2010 at 17:37, Oleg Kalnichevskiol...@apache.org   wrote:

I apologize for the intrusion.

Oleg, no apology needed, I highly appreciate this thread.
Very informative.

Indeed. It's fairly certain that MINA is lacking documentation, and
that its internals are quite complex.

All inputs are very welcome at this stage of the design (basically, a blueprint).

In fact, I would be really interested in knowing exactly what kind of
service you want to provide on top of NIO/MINA (if it fits your needs),
because it might be useful for the design decisions we will take.


I am a developer/maintainer of Apache HttpComponents. We have our own
NIO framework optimized specifically for data-intensive protocols such
as HTTP, and it works quite well for us. However, Apache HC is a small
project with just a handful of folks involved. If another ASLv2-licensed
I/O framework met our requirements, we could gradually drop our own NIO
code, thus freeing up bandwidth for HTTP-related stuff. Presently, the
memory management in MINA just kills it for me. The same goes for Netty.

Take it as food for thought. Nothing more.

I'm going to give HttpComponents a shot, to see how these two projects can fit together.

One solution would be to define a common base, with all the filter 
intricacy separated from the NIO part. That might fit your needs, then.


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lécharny

 On 10/6/10 6:43 PM, Oleg Kalnichevski wrote:

Use MemoryMappedFile. It's supported.

Errr. Thank you very much.

:)

This is just an example, but here you can perfectly well do something like:
do {
WriteFuture future = session.write( data);

future.await();

if ( !future.isWritten() ) {
  // error
  break;
}

// grab next piece of data
} while ( true );


The whole point is that I would like to write out only as much data as a
channel is able to take _without_ blocking. This is precisely what the
Channel interface enables me to do. I do not want just to block waiting
for a chunk of memory to be written out.
Here, MINA won't help. We don't generate an event when the socket is 
ready to be written to, and this is probably wrong. It would be *way* better 
to be able to inform the handler that it can send a new piece of data; 
you are quite right. And that could perfectly well be something we want 
to do in MINA 3.0.
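The write-ready event being discussed maps directly onto NIO interest ops. A minimal sketch of the idea (hypothetical names; neither MINA nor lightnio API): write only what the channel accepts without blocking, and keep OP_WRITE set only while data remains pending.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.WritableByteChannel;

public class WriteReadySketch {
    /**
     * Called when the selector reports the channel as writable. Writes at
     * most what the kernel buffer accepts (non-blocking write may transfer
     * fewer bytes than remaining()), and clears OP_WRITE once the pending
     * buffer is drained so the selector stops firing write-ready events.
     */
    public static void onWritable(SelectionKey key, ByteBuffer pending)
            throws IOException {
        WritableByteChannel ch = (WritableByteChannel) key.channel();
        ch.write(pending);
        if (pending.hasRemaining()) {
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}
```

Once OP_WRITE is cleared, the handler is free to feed the next chunk; nothing is ever queued beyond the one pending buffer.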


Currently, as I said in my first mail, written messages are enqueued, 
waiting to be sent. This is really an issue if you have tons of messages 
to send (we already experienced OOMs due to this 'design', and the only 
way to bypass this problem was to wait for the previous message to be 
completely sent, i.e. wait on the future).


I hear you on that...

I would very much rather prefer to avoid memory flooding instead of
having to manage it.

There is no way you can 'avoid' memory flooding, except if you limit
everything:
- the message size
- the number of connected clients

which is, somehow, managing the flooding.


I respectfully disagree. One can use a fixed size input / output session
buffers and suspend / resume the channel using interest ops whenever
session buffers get full / empty all _without_ allocating any extra bit
of memory on the transport level. Believe me, it works quite well.
Thanks to the Channel interface.
See my previous point. I now understand your issue, and I agree that 
MINA is not solving it atm, so yes, we can provide the interface you 
need for MINA 3.0.
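The fixed-buffer scheme described above can be sketched with plain NIO interest ops (hypothetical names; a sketch of the idea, not lightnio's actual code): dropping OP_READ while the session buffer is full lets TCP back-pressure throttle the peer, with no allocation at the transport level.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;

public class ReadThrottleSketch {
    /** Read into the fixed session buffer; suspend reads when it is full. */
    public static void onReadable(SelectionKey key, ByteBuffer sessionIn)
            throws IOException {
        ReadableByteChannel ch = (ReadableByteChannel) key.channel();
        ch.read(sessionIn);
        if (!sessionIn.hasRemaining()) {
            // Buffer full: stop selecting for reads until the app drains it.
            // The kernel socket buffer then fills and the peer is throttled.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
        }
    }

    /** Called by the application once it has consumed the session buffer. */
    public static void onDrained(SelectionKey key, ByteBuffer sessionIn) {
        sessionIn.clear();
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);
    }
}
```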


Very interesting thread !

--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Oleg Kalnichevski
On Wed, 2010-10-06 at 19:16 +0200, Emmanuel Lécharny wrote:
 On 10/6/10 6:54 PM, Oleg Kalnichevski wrote:
  On Wed, 2010-10-06 at 18:13 +0200, Emmanuel Lecharny wrote:
  On 10/6/10 5:49 PM, Bernd Fondermann wrote:
  On Wed, Oct 6, 2010 at 17:37, Oleg Kalnichevski ol...@apache.org wrote:
  I apologize for intrusion.
  Oleg, no apology needed, I highly appreciate this thread.
  Very informative.
  Indeed. It's certainly true that MINA is lacking documentation, and
  that its internals are quite complex.
 
  All input is very welcome at this stage of the design (basically, a
  blueprint).
 
  In fact, I would be really interested in knowing exactly what kind of
  service you want to provide on top of NIO / MINA (if it fits your needs),
  because it might be useful in the design decisions we would take.
 
  I am developer / maintainer of Apache HttpComponents. We have our own
  NIO framework optimized specifically for data intensive protocols such
  as HTTP, which works quite well for us. However, Apache HC is a small
  project with just a handful of folks involved. If another ASLv2 licensed
  I/O framework met our requirements, we could gradually drop our own NIO
  code thus freeing up bandwidth for HTTP related stuff. Presently, the
  memory management in MINA just kills it for me. Same goes for Netty.
 
  Take it as food for thought. Nothing more.
 I'm going to give HttpComponents a shot, to see how those two can fit together.
 
 One solution would be to define a common base, with all the Filter 
 intricacy separated from the NIO part. That might fit your need then.
 

Yep. This is precisely what I was trying to hint at. 

Here is my private fork of HttpCore NIO code with all HTTP specific
stuff ripped out. It might be easier for you to get a feel of the
framework by looking at the protocol independent code. 

http://github.com/ok2c/lightnio

This code is what I would like to get potentially replaced by MINA 3.0

Cheers

Oleg




Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lécharny

 On 10/6/10 9:18 PM, Oleg Kalnichevski wrote:

Take it as food for thought. Nothing more.

I'm going to give HttpComponents a shot, to see how those two can fit together.

One solution would be to define a common base, with all the Filter
intricacy separated from the NIO part. That might fit your need then.


Yep. This is precisely what I was trying to hint at.

Took me a while to figure this out :)

Here is my private fork of HttpCore NIO code with all HTTP specific
stuff ripped out. It might be easier for you to get a feel of the
framework by looking at the protocol independent code.

I have downloaded the original code (httpcore-nio).

A few remarks:
- mvn eclipse:eclipse fails, due to some strange errors with the 
'filtering' tags. I removed them and got the projects available in Eclipse.
- in the select() loop, you will experience, from time to time, 100% 
CPU usage. This is a known bug in the Sun code base. What happens is that 
select() returns something > 0 but no SelectionKey is available, because 
some clients simply connect and disconnect immediately, triggering the 
select(), but weren't present anymore when the selector tried to add a 
SelectionKey to its internal table (a kind of race condition). As a 
consequence, there is nothing to do, so you immediately return to the 
select(), which returns immediately with a value > 0, and so on => 100% 
CPU. See https://issues.apache.org/jira/browse/DIRMINA-678


A fix for the second point consists in detecting that we exited the 
select() quickly and that we don't have any SelectionKey to process. You 
have to get the time before entering the select(), the time when you get 
out, and compare them. If the difference is almost 0, the returned value is 
> 0, and there is no SelectionKey available, then you have been hit by 
the epoll bug. You then have to create a new selector, register all the 
existing SelectionKeys on the new selector, and substitute the old 
selector with the new one. Yuck. But it works.
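The work-around described here could be sketched like this (illustrative names; the actual DIRMINA-678 patch differs in detail): once select() keeps returning instantly with nothing to process, migrate every valid key to a fresh Selector.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorRebuildSketch {

    /**
     * Heuristic from the thread: select() came back almost immediately,
     * yet there is no selected key to process. Real implementations also
     * require this to happen several times in a row before acting.
     */
    public static boolean looksLikeEpollSpin(long enteredNanos, long exitedNanos,
                                             Selector sel) {
        boolean returnedInstantly = (exitedNanos - enteredNanos) < 1_000_000L;
        return returnedInstantly && sel.selectedKeys().isEmpty();
    }

    /**
     * Replace the spinning selector: register every valid channel, with
     * its interest ops and attachment, on a fresh Selector, then close
     * the old one (closing cancels any remaining keys).
     */
    public static Selector rebuild(Selector broken) throws IOException {
        Selector fresh = Selector.open();
        for (SelectionKey key : broken.keys()) {
            if (key.isValid()) {
                key.channel().register(fresh, key.interestOps(), key.attachment());
                key.cancel();
            }
        }
        broken.close();
        return fresh;
    }
}
```

Cancelling each old key during iteration is safe because cancelled keys are only removed from the key set during the next selection operation.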



http://github.com/ok2c/lightnio

This code is what I would like to get potentially replaced by MINA 3.0


Let's work on that.

--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Oleg Kalnichevski
On Wed, 2010-10-06 at 22:21 +0200, Emmanuel Lécharny wrote:
 On 10/6/10 9:18 PM, Oleg Kalnichevski wrote:
  Take it as food for thought. Nothing more.
  I'm going to give HttpComponents a shot, to see how those two can 
  fit together.
 
  One solution would be to define a common base, with all the Filter
  intricacy separated from the NIO part. That might fit your need then.
 
  Yep. This is precisely what I was trying to hint at.
 Took me a while to figure this out :)
  Here is my private fork of HttpCore NIO code with all HTTP specific
  stuff ripped out. It might be easier for you to get a feel of the
  framework by looking at the protocol independent code.
 I have downloaded the original code (httpcore-nio).
 
 A few remarks:
 - mvn eclipse:eclipse fails, due to some strange errors with the 
 'filtering' tags. I removed them and got the projects available in Eclipse.
 - in the select() loop, you will experience, from time to time, 100% 
 CPU usage. This is a known bug in the Sun code base. What happens is that 
 select() returns something > 0 but no SelectionKey is available, because 
 some clients simply connect and disconnect immediately, triggering the 
 select(), but weren't present anymore when the selector tried to add a 
 SelectionKey to its internal table (a kind of race condition). As a 
 consequence, there is nothing to do, so you immediately return to the 
 select(), which returns immediately with a value > 0, and so on => 100% 
 CPU. See https://issues.apache.org/jira/browse/DIRMINA-678
 
 A fix for the second point consists in detecting that we exited the 
 select() quickly and that we don't have any SelectionKey to process. You 
 have to get the time before entering the select(), the time when you get 
 out, and compare them. If the difference is almost 0, the returned value is 
 > 0, and there is no SelectionKey available, then you have been hit by 
 the epoll bug. You then have to create a new selector, register all the 
 existing SelectionKeys on the new selector, and substitute the old 
 selector with the new one. Yuck. But it works.
 

I am aware of the issue.

We have gone through the same painful experience a while ago, but
ultimately decided against committing the patch with the work-around to
the main codeline, as the epoll spin problem had been fixed in the
latest Sun JRE releases. One of the upstream users that makes
extensive use of HttpCore NIO in their product conducted a series of
thorough load tests with and without the patch and confirmed the problem
could be resolved by upgrading to JRE 1.6.0.21. 

I still have the patch handy just in case.

Cheers

Oleg




Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Emmanuel Lécharny

 On 10/6/10 10:51 PM, Oleg Kalnichevski wrote:

On Wed, 2010-10-06 at 22:21 +0200, Emmanuel Lécharny wrote:

On 10/6/10 9:18 PM, Oleg Kalnichevski wrote:

Take it as food for thought. Nothing more.

I'm going to give HttpComponents a shot, to see how those two can fit together.

One solution would be to define a common base, with all the Filter
intricacy separated from the NIO part. That might fit your need then.


Yep. This is precisely what I was trying to hint at.

Took me a while to figure this out :)

Here is my private fork of HttpCore NIO code with all HTTP specific
stuff ripped out. It might be easier for you to get a feel of the
framework by looking at the protocol independent code.

I have downloaded the original code (httpcore-nio).

A few remarks:
- mvn eclipse:eclipse fails, due to some strange errors with the
'filtering' tags. I removed them and got the projects available in Eclipse.
- in the select() loop, you will experience, from time to time, 100%
CPU usage. This is a known bug in the Sun code base. What happens is that
select() returns something > 0 but no SelectionKey is available, because
some clients simply connect and disconnect immediately, triggering the
select(), but weren't present anymore when the selector tried to add a
SelectionKey to its internal table (a kind of race condition). As a
consequence, there is nothing to do, so you immediately return to the
select(), which returns immediately with a value > 0, and so on => 100%
CPU. See https://issues.apache.org/jira/browse/DIRMINA-678

A fix for the second point consists in detecting that we exited the
select() quickly and that we don't have any SelectionKey to process. You
have to get the time before entering the select(), the time when you get
out, and compare them. If the difference is almost 0, the returned value is
> 0, and there is no SelectionKey available, then you have been hit by
the epoll bug. You then have to create a new selector, register all the
existing SelectionKeys on the new selector, and substitute the old
selector with the new one. Yuck. But it works.


I am aware of the issue.

We have gone through the same painful experience a while ago, but
ultimately decided against committing the patch with a work-around to
the main codeline, as the epoll spin problem had been fixed in the
latest Sun JRE releases. One of the upstream users that makes
extensive use of HttpCore NIO in their product conducted a series of
thorough load tests with and without the patch and confirmed the problem
could be resolved by upgrading to JRE 1.6.0.21.
Seems to have been fixed in 
http://www.oracle.com/technetwork/java/javase/6u18-142093.html, right.


Has it been fixed in Java 5?


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com



Re: [MINA 3.0] Message-less, non-flooding I/O mode

2010-10-06 Thread Oleg Kalnichevski
On Wed, 2010-10-06 at 23:25 +0200, Emmanuel Lécharny wrote:
 On 10/6/10 10:51 PM, Oleg Kalnichevski wrote:
  On Wed, 2010-10-06 at 22:21 +0200, Emmanuel Lécharny wrote:
  On 10/6/10 9:18 PM, Oleg Kalnichevski wrote:
  Take it as food for thought. Nothing more.
  I'm going to give HttpComponents a shot, to see how those two can 
  fit together.
 
  One solution would be to define a common base, with all the Filter
  intricacy separated from the NIO part. That might fit your need then.
 
  Yep. This is precisely what I was trying to hint at.
  Took me a while to figure this out :)
  Here is my private fork of HttpCore NIO code with all HTTP specific
  stuff ripped out. It might be easier for you to get a feel of the
  framework by looking at the protocol independent code.
  I have downloaded the original code (httpcore-nio).
 
  A few remarks:
  - mvn eclipse:eclipse fails, due to some strange errors with the
  'filtering' tags. I removed them and got the projects available in Eclipse.
  - in the select() loop, you will experience, from time to time, 100%
  CPU usage. This is a known bug in the Sun code base. What happens is that
  select() returns something > 0 but no SelectionKey is available, because
  some clients simply connect and disconnect immediately, triggering the
  select(), but weren't present anymore when the selector tried to add a
  SelectionKey to its internal table (a kind of race condition). As a
  consequence, there is nothing to do, so you immediately return to the
  select(), which returns immediately with a value > 0, and so on => 100%
  CPU. See https://issues.apache.org/jira/browse/DIRMINA-678
 
  A fix for the second point consists in detecting that we exited the
  select() quickly and that we don't have any SelectionKey to process. You
  have to get the time before entering the select(), the time when you get
  out, and compare them. If the difference is almost 0, the returned value is
  > 0, and there is no SelectionKey available, then you have been hit by
  the epoll bug. You then have to create a new selector, register all the
  existing SelectionKeys on the new selector, and substitute the old
  selector with the new one. Yuck. But it works.
 
  I am aware of the issue.
 
  We have gone through the same painful experience a while ago, but
  ultimately decided against committing the patch with a work-around to
  the main codeline, as the epoll spin problem had been fixed in the
  latest Sun JRE releases. One of the upstream users that makes
  extensive use of HttpCore NIO in their product conducted a series of
  thorough load tests with and without the patch and confirmed the problem
  could be resolved by upgrading to JRE 1.6.0.21.
 Seems to have been fixed in 
 http://www.oracle.com/technetwork/java/javase/6u18-142093.html, right.
 
 Has it been fixed in java 5 ?
 

I am not sure whether or not Java 1.5 was affected in the first place. I
do not think it uses an epoll-based selector by default to start with. In
our case the problem was reported when running Java 1.6.

Oleg