On 04/28/2008 10:46 AM, Emmanuel Lecharny wrote:
David M. Lloyd wrote:
- should we serialize the stream at some point?

What do you mean by "serialize"?
Write to disk if the received data is too big. See my previous point (it's up to the decoder to deal with this).

Ah I understand. Yes, it would be up to the decoder. Though hopefully most decoders can process the entire buffer without having to keep the buffer data around. The faster you can get rid of incoming buffers, the less memory you will consume overall.
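
For illustration, a minimal sketch of the spooling idea, assuming the decoder simply appends oversized buffers to a temporary file (the file name and policy here are made up for the example):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class SpillToDisk {
    // Drain the buffer to an overflow file instead of keeping it
    // in memory; FileChannel.write() consumes the buffer's contents,
    // possibly in several partial writes, hence the loop.
    public static void spill(ByteBuffer buf) throws IOException {
        FileChannel out = new FileOutputStream("overflow.tmp", true).getChannel();
        try {
            while (buf.hasRemaining()) {
                out.write(buf);
            }
        } finally {
            out.close();
        }
    }
}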

Note that a buffer might contain data from more than one message as well. So it's important to use only a slice of the buffer in this case.
Not a big deal. Again, it's the decoder's task to handle such a case. We have encountered such a case in LDAP too.

Yes and no - we should at least make support methods available to make this easier. MINA won't win any fans by being labeled "framework that makes decoder implementers do the most work" :-)
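
For example, a support method along these lines (the name and signature are illustrative only) would carve one message out of a buffer that may contain several, without copying:

import java.nio.ByteBuffer;

public final class DecoderSupport {
    private DecoderSupport() {}

    // Returns a view covering exactly one message and advances the
    // source buffer past it; duplicate() and slice() share content
    // with the original buffer, so no bytes are copied.
    public static ByteBuffer readSlice(ByteBuffer buf, int messageLength) {
        ByteBuffer view = buf.duplicate();
        view.limit(view.position() + messageLength);
        buf.position(buf.position() + messageLength);
        return view.slice();
    }
}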

- how to write an efficient encoder when you have no idea about the size of the data you are going to send?

Use a buffer factory, such as IoBufferAllocator, or use an even simpler interface like this:

import java.nio.ByteBuffer;

public interface BufferFactory {
    // Hands out a fresh buffer for the encoder to fill.
    ByteBuffer createBuffer();
}
[..]
That's an idea. But this does not solve one little problem: if the reader is slow, you may saturate the server memory with prepared ByteBuffers. So you may need a kind of throttle mechanism, or a blocking queue, to manage this issue: a new ByteBuffer should not be created until the previous one has been completely sent.

Well that would really depend on the use case. If you're sending the buffers as soon as they're filled, it might not be a problem. If you saturate the output channel though, you'll have to be able to handle that situation somehow. It ultimately has to be up to the protocol decoder to detect a saturated output channel and perform whatever action is necessary to squelch the output message source until the output channel can be cleared again.

The right answer might be to spool output data to a disk-backed queue, or it might be to block further requests, etc. Basically the same situation that exists today.

One of the "classic" NIO fallacies is to assume that the output channel will never block. :-)
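
To make the blocking-queue throttle concrete, here's a minimal sketch, assuming one writer thread drains the queue as the channel becomes writable (the class and method names are made up for the example):

import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThrottledBufferQueue {
    private final BlockingQueue<ByteBuffer> pending;

    public ThrottledBufferQueue(int maxPending) {
        pending = new ArrayBlockingQueue<ByteBuffer>(maxPending);
    }

    // Encoder side: blocks once maxPending buffers are waiting,
    // so a slow reader cannot make us queue buffers without bound.
    public void enqueue(ByteBuffer buf) throws InterruptedException {
        pending.put(buf);
    }

    // Writer side: called when the channel is writable again.
    public ByteBuffer next() throws InterruptedException {
        return pending.take();
    }
}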

Another option is to skip ByteBuffers and go with raw byte[] objects (though this closes the door completely to direct buffers).
Well, ByteBuffers are so intimately wired into NIO that I don't think we can easily use byte[] without losing performance... (not sure though...)

Not that I'm in favor of using byte[] directly, but it's easy enough to wrap a (single) byte[] with a ByteBuffer using the ByteBuffer.wrap() methods.
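
For instance (the sizes and offsets here are arbitrary):

import java.nio.ByteBuffer;

public class WrapExample {
    public static void main(String[] args) {
        byte[] raw = new byte[1024];
        // Wrapping is free: the buffer is a view over the array,
        // so no bytes are copied.
        ByteBuffer whole = ByteBuffer.wrap(raw);
        // Wrap just a region: position = 128, limit = 128 + 256.
        ByteBuffer region = ByteBuffer.wrap(raw, 128, 256);
        System.out.println(whole.remaining() + " / " + region.remaining());
    }
}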

Yet another option is to have a simplified abstraction for byte arrays like Trustin proposes, and use the stream classes for the buffer state implementation.

This is all in addition to Trustin's idea of providing a byte array abstraction and a buffer state abstraction class.
I'm afraid that offering a byte[] abstraction might lead to more complexity, with respect to what you wrote about the way the codec should handle data. At some point, your ideas are just the right ones, IMHO: use ByteBuffer, and let the codec deal with it. No need to add a more complex data structure on top of it.

Maybe. If this route is taken though, a very comprehensive set of "helper" methods will probably be needed. ByteBuffer has a pretty lousy API. :-)
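
As one illustration of what such helpers might look like (the names are mine): ByteBuffer has no unsigned getters, so every decoder ends up re-writing the same masks:

import java.nio.ByteBuffer;

public final class MoreBuffers {
    private MoreBuffers() {}

    // Read an unsigned 8-bit value, widened to int.
    public static int getUnsignedByte(ByteBuffer buf) {
        return buf.get() & 0xFF;
    }

    // Read an unsigned 16-bit value, widened to int.
    public static int getUnsignedShort(ByteBuffer buf) {
        return buf.getShort() & 0xFFFF;
    }

    // Read an unsigned 32-bit value, widened to long.
    public static long getUnsignedInt(ByteBuffer buf) {
        return buf.getInt() & 0xFFFFFFFFL;
    }
}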

So I'd cast my (useless and non-binding) vote behind either using ByteBuffer with static support methods, or using a byte array abstraction object with a separate buffer abstraction like Trustin suggests.

Otherwise, the idea may be to define some simple codec which transforms a ByteBuffer to a byte[], for those who need it. As we have a cool filter chain, let's use it...

Any non-direct buffer is already backed by a byte[], so this would be pretty easy. Though you'd have to pass up a byte[], int offs, int len to avoid copying. Copying is really the #1 problem with IoBuffer as it exists today.
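
Something like this, assuming a heap buffer (hasArray() guards against direct buffers, which have no accessible backing array):

import java.nio.ByteBuffer;

public class BackingArrayExample {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(512);
        buf.put((byte) 1).put((byte) 2).flip();
        if (buf.hasArray()) {
            // The triple (array, offs, len) describes the readable
            // bytes without copying anything.
            byte[] array = buf.array();
            int offs = buf.arrayOffset() + buf.position();
            int len = buf.remaining();
            System.out.println("offs=" + offs + " len=" + len);
        }
    }
}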

- DML
