On Wed, 2010-10-06 at 18:10 +0200, Emmanuel Lécharny wrote:
> On 10/6/10 5:37 PM, Oleg Kalnichevski wrote:
> > On Wed, 2010-10-06 at 16:55 +0200, Emmanuel Lecharny wrote:
> >> On 10/6/10 2:49 PM, Oleg Kalnichevski wrote:
> >>> MINA devs,
> >>>
> >>> Would you be receptive to the idea of supporting a message-less (or
> >>> bare-metal, if you like) I/O mode? Essentially, I would like MINA 3.0 to
> >>> make it possible to interact directly with the underlying I/O channels
> >>> bypassing the read / write data queues altogether.
> >> There is no such thing as a queue used for the read operation. We read
> >> the data from the channel into a buffer, and then call the
> >> messageReceived() method through the filter chain up to the handler.
> >>
> >> Of course, if you add an executor in the middle, then that's a different
> >> story.
> >>
> >> Anyway, maybe what you want is for the handler to do the read
> >> directly, sparing the creation of an intermediate buffer. You can't
> >> currently do that, so the buffer will always be created and filled with data.
> >>
> > There are enough scenarios (especially when streaming large entities)
> > where this extra copy is wasteful and unnecessary. Moreover, what if the
> > I/O handler is simply unable to consume the message entirely without
> > allocating more memory?
> 
> Use MemoryMappedFile. It's supported.

Errr. Thank you very much.

> >> For the write operation, what is currently missing in MINA 2 is the
> >> transferTo method, which would allow you to push file contents directly
> >> to the socket without copying them into memory. This is most certainly
> >> something we want to have in 3.0
> >>
> > Again, what if the transmitted entity is not backed by a file?
> 
> This is just an example, but here you can perfectly well do something like:
> 
> do {
>    WriteFuture future = session.write(data);
> 
>    future.await();
> 
>    if (!future.isWritten()) {
>      // error
>      break;
>    }
> 
>    // grab next piece of data
> } while (true);
> 

The whole point is that I would like to write out only as much data as a
channel is able to take _without_ blocking. This is precisely what the
Channel interface enables me to do. I do not want to just block, waiting
for a chunk of memory to be written out.
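To make the distinction concrete: a non-blocking Channel's write() returns the number of bytes the channel actually took, possibly zero, instead of blocking. A minimal sketch of that behaviour (plain NIO, with an in-process Pipe standing in for a socket; the class name and sizes are mine, not from any MINA API):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

// Sketch: write only as much as the channel accepts, never blocking.
// A Pipe sink stands in for a non-blocking SocketChannel.
public class NonBlockingWrite {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Pipe.SinkChannel sink = pipe.sink();
        sink.configureBlocking(false);

        // 8 MB to send, far more than the pipe's internal buffer can hold.
        ByteBuffer data = ByteBuffer.allocate(8 * 1024 * 1024);
        long total = 0;
        while (data.hasRemaining()) {
            int n = sink.write(data); // returns 0 instead of blocking
            if (n == 0) {
                // Channel is saturated: a real server would register
                // OP_WRITE interest here and return to the selector loop.
                break;
            }
            total += n;
        }
        System.out.println("short write observed: "
                + (total > 0 && total < data.capacity()));
        sink.close();
        pipe.source().close();
    }
}
```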


> >>> In other words,
> >>> whenever a channel signals read / write readiness, the I/O selector
> >>> thread would fire an event referring to the originating channel. The
> >>> decision as to how much data can be read from or written to the channel
> >>> would be left up to the I/O handler.
> >> Well, that does not seem very useful.
> > It all depends how you look at it.
> (my response was supposed to be clarified by the next paragraph. 
> However, it sounds like I'm stating an opinion, which is not 
> the case)
> >
> >>   If you defer the read
> >> and write operations on the channel to the handler, then why not
> >> directly write your NIO server from scratch? It's about 500 lines of
> >> code, all in all, and you can even disassemble MINA to do that ...
> >
> > It is a bit more with SSL support and other bits and pieces, but this
> > is exactly what I have to do at the moment.
> And it makes complete sense. I mean, MINA is a heavy beast which offers a 
> lot of goodies, but that comes at a price. Sometimes, this price is simply 
> too high...
> >
> >>> I am perfectly aware of the downsides of this approach, which are
> >>> several, but it would enable data-intensive protocols such as HTTP and
> >>> SMTP to manage connection memory footprint more conservatively. For
> >>> instance, the protocol handler could attempt to preallocate a fixed and
> >>> invariable amount of memory at connection initialization time and either
> >>> succeed or fail early, instead of risking an out-of-memory condition
> >>> halfway through a transaction due to memory flooding.
> >> You can already manage the flooding by limiting the number of bytes
> >> you read. As you have complete control over the initial RcvBufferSize,
> >> plus control over the created buffer, you can always at some point
> >> 'kill' a session which has received too much data.
> >>
> > I would very much prefer to avoid memory flooding instead of having to
> > manage it.
> There is no way you can 'avoid' memory flooding, unless you limit 
> everything:
> - the message size
> - the number of connected clients
> 
> which is, somehow, managing the flooding.
> 

I respectfully disagree. One can use fixed-size input / output session
buffers and suspend / resume the channel using interest ops whenever the
session buffers get full / empty, all _without_ allocating any extra bit
of memory at the transport level. Believe me, it works quite well,
thanks to the Channel interface.
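A minimal sketch of that scheme in plain NIO (names and sizes are illustrative, and an in-process Pipe stands in for a non-blocking SocketChannel): one fixed buffer is allocated per session up front, reads go straight into it, and read interest is dropped the moment it fills, so the transport never allocates beyond it:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Sketch: a fixed, preallocated session buffer plus interest-op based
// flow control. The Pipe stands in for a non-blocking SocketChannel.
public class FixedSessionBuffer {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        Selector selector = Selector.open();
        SelectionKey key = pipe.source().register(selector, SelectionKey.OP_READ);

        // Peer sends 64 bytes while our session buffer holds only 16.
        pipe.sink().write(ByteBuffer.wrap(new byte[64]));
        ByteBuffer session = ByteBuffer.allocate(16); // fixed, allocated once

        selector.select();
        pipe.source().read(session); // reads at most session.remaining() bytes
        if (!session.hasRemaining()) {
            // Buffer full: stop asking the selector for read readiness
            // until the application consumes data. No extra allocation.
            key.interestOps(0);
        }
        System.out.println("buffered " + session.position()
                + " bytes, read suspended: " + (key.interestOps() == 0));
    }
}
```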


> >> It is also possible to manage a threshold for every session, which
> >> might help in this case.
> >>
> >> For instance, in Apache DirectoryServer, we have added a parameter that
> >> limits the size of the received PDU; this parameter can be modified for
> >> any session. If we receive more bytes than this limit, then we close the
> >> session. It works pretty well.
> > By dropping the connection in the middle of the session? Would it not
> > be better to pre-allocate a fixed amount of memory and have a guarantee
> > that more memory will not be required (at least at the transport level)?
> If a session is consuming more memory than allowed, that means 
> something wrong is going on. I would then rather kill this session, which 
> might otherwise well kill my server. Now, allocating a fixed-size buffer 
> for each session does not mitigate this risk:

And why is that?


> It's just a way to consume memory you 
> are likely not going to use, if your messages are smaller.
> 
> What I mean here is that just because some sessions might send large 
> chunks of data, you should not assume that all the sessions will do the 
> same, and if that is the case, then allocating a fixed size for all the 
> sessions will not help a lot: what will you do when the session's memory 
> is full?
> >


How about suspending the session until the session buffer frees up or,
in other words, how about I/O throttling?
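The throttling half can be sketched in plain NIO (illustrative names and sizes; an in-process Pipe stands in for the socket): read interest goes off while the session buffer is full, and comes back on once the consumer drains it, at which point the bytes the kernel was holding back are delivered:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Sketch: read throttling via interest ops. While the session buffer is
// full, OP_READ is off; draining the buffer switches it back on.
public class ReadThrottle {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        Selector selector = Selector.open();
        SelectionKey key = pipe.source().register(selector, SelectionKey.OP_READ);
        pipe.sink().write(ByteBuffer.wrap(new byte[24])); // 24 bytes in flight

        ByteBuffer session = ByteBuffer.allocate(16); // fixed session buffer
        selector.select();
        pipe.source().read(session); // fills all 16 bytes of the buffer
        key.interestOps(0);          // suspend: buffer is full

        session.clear();                       // consumer drained the buffer
        key.interestOps(SelectionKey.OP_READ); // resume reading
        int n = pipe.source().read(session);   // the held-back bytes arrive
        System.out.println("resumed, read " + n + " more bytes");
    }
}
```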


> >>> I understand this might be too much of a radical shift from the existing
> >>> architecture, so feel free to ignore me if you find this approach
> >>> incompatible with the MINA 3.0 design concepts.
> >> No, that's fine, it's just important to know what we already have and
> >> what we can offer without cutting two legs off MINA :)
> >>
> > It is a shame that by abstracting away java.nio.channels.Channel, both
> > MINA and Netty throw away one of the best features of NIO, which is
> > memory efficiency.
> In fact, it's not throwing memory efficiency out of the door. I even 
> think that NIO is *not* at all designed to manage memory efficiently. 
> It's just a layer on top of sockets and select() which provides an 
> efficient way to handle potentially hundreds of thousands of connections 
> without having to create the same number of threads.
> 

This is where we differ quite profoundly.


> Everything else is in the hands of the server designer, i.e. you. Should 
> you develop your own server on top of NIO instead of on top of Netty or 
> MINA, you will face the *exact* same issues.
> 
> > Anyways, I understand that MINA is based on a different philosophy and I
> > also understand the convenience of having a chain of filters that work
> > with messages that are essentially small chunks of memory.
> No, we are not working with messages that are small chunks of memory: 
> we work with the data being read directly from the socket, and trust me, 
> there is no way you can manipulate those bytes without first loading 
> them in memory, MINA or not.
> 

Right. However, I would like to load those bytes into a session buffer
bypassing an intermediate buffer.


> The filters do nothing but add some logic around those buffers, which 
> are not copied again and again: once they have been read, they 
> aren't copied anymore.
> 
> So in the end, in the Handler, you just get what has been read from the 
> socket, if of course you haven't used a codec or a cumulative decoder.
> 
> All in all, if you don't put *any* filter in the chain, you'll get 
> *exactly* what you are looking for: the bytes obtained from *one single* 
> read from the socket.
>

Again, bytes get copied into an intermediate buffer. Before they can be
used for anything useful, they need to be copied at least once into a
session buffer that may contain some content from previous reads. Even
if the intermediate copy can be avoided on the reading side, I just do
not see how it can be avoided on the writing side.
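For what it's worth, the write-side session buffer can at least be reused across partial writes via ByteBuffer.compact(): the unwritten tail is moved to the front and the next chunk appended behind it, so no fresh buffer is needed per write (this amortizes allocation; it does not remove the copy itself). A small sketch of that pattern, with the 3-byte partial write simulated rather than taken from a real channel.write() return value:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: one reusable write-side session buffer. After a partial write,
// compact() keeps the unwritten tail and makes room for the next chunk.
public class SessionWriteBuffer {
    public static void main(String[] args) {
        ByteBuffer out = ByteBuffer.allocate(16);
        out.put("hello ".getBytes(StandardCharsets.US_ASCII));

        out.flip();        // ready to drain: 6 bytes pending
        out.position(3);   // pretend channel.write(out) took only 3 bytes
        out.compact();     // move the unwritten "lo " to the front

        // Append the next chunk into the same buffer, no new allocation.
        out.put("world".getBytes(StandardCharsets.US_ASCII));

        out.flip();
        byte[] pending = new byte[out.remaining()];
        out.get(pending);
        System.out.println(new String(pending, StandardCharsets.US_ASCII));
    }
}
```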

Anyhow, I will not waste your time anymore. There are different ways of
seeing and not seeing. We just have to decide what works for us.

cheers

Oleg

