On 8/20/11 12:59 PM, Alex Karasulu wrote:
On Sat, Aug 20, 2011 at 2:46 AM, Emmanuel Lecharny <elecha...@gmail.com> wrote:

On 8/19/11 2:26 PM, Alan D. Cabrera wrote:

On Aug 19, 2011, at 5:19 AM, Emmanuel Lecharny wrote:

  On 8/19/11 2:16 PM, Alan D. Cabrera wrote:
I'm wondering.  Do you guys think it's a good idea?  It seems to make
things pretty complicated and adds another dimension to grokking the behavior
of your service.  I'm not sure that it's necessary.

Definitely a bad idea.

What we need is an abstraction on top of an array of ByteBuffers (the Java
NIO class), which extends its size by adding new ByteBuffers on the fly.

The array must behave exactly like a ByteBuffer.

I wrote such a class 2 years ago, but I can't find the code. Will look
again on my USB keys this weekend.
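Something along these lines, perhaps (a minimal sketch, not the lost code; the class name and API here are hypothetical): a list of ByteBuffers that grows by appending new buffers instead of copying into a bigger one, while still reading like a single ByteBuffer.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a ByteBuffer-like view over a growing list of
// buffers. Grows by appending a new ByteBuffer, never by copy-and-double.
public class CompositeByteBuffer {
    private final List<ByteBuffer> buffers = new ArrayList<>();
    private int current = 0; // index of the buffer we are reading from

    // Append a filled buffer (already flipped, ready for reading); no copy
    public void add(ByteBuffer buffer) {
        buffers.add(buffer);
    }

    // Total number of readable bytes left, across all chained buffers
    public int remaining() {
        int total = 0;
        for (int i = current; i < buffers.size(); i++) {
            total += buffers.get(i).remaining();
        }
        return total;
    }

    // Behaves like ByteBuffer.get(): reads one byte, crossing buffer
    // boundaries transparently
    public byte get() {
        while (current < buffers.size() && !buffers.get(current).hasRemaining()) {
            current++;
        }
        return buffers.get(current).get();
    }

    public static void main(String[] args) {
        CompositeByteBuffer cbb = new CompositeByteBuffer();
        cbb.add(ByteBuffer.wrap(new byte[] { 1, 2 }));
        cbb.add(ByteBuffer.wrap(new byte[] { 3 }));
        System.out.println(cbb.remaining()); // 3
        System.out.println(cbb.get());       // 1
        System.out.println(cbb.get());       // 2
        System.out.println(cbb.get());       // 3 (crossed the buffer boundary)
    }
}
```

A real version would also need the relative/absolute get variants, position/limit bookkeeping, etc., but the shape is the point.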

So, what is the scenario that we're trying to support?  I imagine
appending headers to binary data would be one.  In this case, is an array of
ByteBuffers really needed?  Why not just send down one ByteBuffer for the
header and another for the body that was sent to you?

Julien and Steve already responded with clear examples where expandable BBs
are necessary. I do think this is a common case: the data may well arrive
byte by byte, and you want to store those bytes in a place which does not
need to be reallocated every time.

The current IoBuffer, when full, is copied into a new IoBuffer whose size is
doubled. If you are expecting big PDUs, you might allocate a huge IoBuffer for
nothing, wasting space. Plus you have to copy the whole old IoBuffer into the
new one. Not really a good thing.

*if* we are using DirectBuffers, we might even want to use fixed-size
IoBuffers (say 1k), and store them in a queue of available free buffers for
reuse, sparing the cost of allocating them (to be validated; I'm not sure
that the cost of managing concurrent access to this queue does not
outweigh the cost of allocating a new direct buffer)
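Such a pool could be sketched like this, assuming a ConcurrentLinkedQueue as the free list (the class name and the 1k size are illustrative; whether this actually beats plain allocation is exactly what would need benchmarking):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical pool of fixed-size direct buffers kept in a lock-free queue.
// Whether managing this queue costs less than allocateDirect() would
// have to be measured, as noted above.
public class DirectBufferPool {
    private static final int BUFFER_SIZE = 1024; // fixed 1k buffers
    private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();

    // Hand out a recycled buffer if one is available, otherwise allocate
    public ByteBuffer acquire() {
        ByteBuffer buffer = free.poll();
        return (buffer != null) ? buffer : ByteBuffer.allocateDirect(BUFFER_SIZE);
    }

    // Return a buffer to the free list, cleared for reuse
    public void release(ByteBuffer buffer) {
        buffer.clear();
        free.offer(buffer);
    }
}
```

The appeal for direct buffers specifically is that allocateDirect() is far more expensive than a heap allocation, so recycling has a better chance of paying off there.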

For HeapBuffers, I'm quite sure that we should allocate a new ByteBuffer with
the exact size of the received data (if it's 10 bytes, then the new
ByteBuffer will be 10 bytes long; if it's bigger, then fine). This
additional ByteBuffer can be appended at the end of the ByteBuffer array, and
the filter will be able to process the list when it has everything needed.
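The exact-size copy is trivial; a sketch (the names are hypothetical, and `readBuffer` stands for whatever buffer the socket read just filled and flipped):

```java
import java.nio.ByteBuffer;

public class ExactSizeCopy {
    // Copy exactly the received bytes into a right-sized heap buffer,
    // so a 10-byte read costs a 10-byte ByteBuffer, not a doubled one
    public static ByteBuffer exactCopy(ByteBuffer readBuffer) {
        ByteBuffer copy = ByteBuffer.allocate(readBuffer.remaining());
        copy.put(readBuffer);
        copy.flip(); // ready for reading, to be appended to the array
        return copy;
    }

    public static void main(String[] args) {
        ByteBuffer read = ByteBuffer.wrap("10 bytes!!".getBytes());
        System.out.println(exactCopy(read).capacity()); // 10
    }
}
```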

Julien mentioned the cumulative decoder, which can be used in many cases:
- we are waiting for a \n (HTTP protocol)
- we are waiting for a closing tag (XML)
- we are dealing with LV (Length/Value) PDUs, and the value is not
fully received yet (we are expecting Length bytes)
- we are dealing with fixed-size PDUs
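For the first case, the cumulative logic boils down to something like this sketch (class and method names are hypothetical, not MINA's actual CumulativeProtocolDecoder API):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical cumulative line decoder: accumulates fragments and only
// emits a message once the '\n' delimiter has arrived.
public class LineAccumulator {
    private final StringBuilder cumulated = new StringBuilder();

    // Returns the complete line when '\n' is seen, or null if we must
    // keep cumulating and wait for more bytes
    public String decode(ByteBuffer fragment) {
        cumulated.append(StandardCharsets.US_ASCII.decode(fragment));
        int eol = cumulated.indexOf("\n");
        if (eol < 0) {
            return null; // PDU not complete yet
        }
        String line = cumulated.substring(0, eol);
        cumulated.delete(0, eol + 1); // keep any bytes of the next PDU
        return line;
    }
}
```

The other three cases follow the same pattern; only the "is the PDU complete?" test changes (closing tag seen, Length bytes received, fixed size reached).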

One more thing to consider: at some point, we may want to flush the
ByteBuffer to disk if we are waiting for a huge PDU, to avoid sucking up all
the memory. This is typically the case in LDAP when receiving a jpegPhoto
attribute inside a PDU (we don't know that it's a jpegPhoto attribute until
we have read and processed the full PDU). With a proxy on top of the
ByteBuffer, we can eventually hide the fact that the bytes are on disk.
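A very rough sketch of how such a proxy could work (everything here, names and threshold included, is hypothetical): bytes live in memory below a threshold and spill to a temporary file above it, and the reader just calls get() either way.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;

// Hypothetical proxy hiding where the bytes live: in memory below a
// threshold, spilled to a temporary file above it.
public class SpillingBuffer {
    private final int threshold;        // spill to disk past this size
    private final ByteBuffer memory;    // in-memory storage
    private RandomAccessFile spill;     // disk storage, created lazily
    private long size = 0;              // total bytes appended
    private long readPos = 0;           // next byte to hand to the reader

    public SpillingBuffer(int threshold) {
        this.threshold = threshold;
        this.memory = ByteBuffer.allocate(threshold);
    }

    public void append(byte[] data) {
        try {
            if (spill == null && size + data.length <= threshold) {
                memory.put(data);            // still fits in memory
            } else {
                if (spill == null) {         // first overflow: move everything to disk
                    spill = new RandomAccessFile(
                            Files.createTempFile("mina-pdu", ".tmp").toFile(), "rw");
                    spill.write(memory.array(), 0, (int) size);
                }
                spill.seek(size);
                spill.write(data);
            }
            size += data.length;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Reads one byte, wherever it happens to be stored
    public byte get() {
        try {
            if (spill == null) {
                return memory.get((int) readPos++);
            }
            spill.seek(readPos++);
            return spill.readByte();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public long size() {
        return size;
    }
}
```

A production version would buffer the file I/O and clean up the temp file, but it shows that the caller's view (append/get) never changes when the spill happens.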

Although these PDUs are large, you can store arbitrary data as well; who
knows what kinds of data and what scenarios users will come up with. It
would be very nice if MINA allowed some way, behind the scenes, to stream
large PDUs to disk and transparently allow upstream consumers to read from
disk as if it were coming directly from the wire. However, this might be
something added on top of the framework. WDYT?

This is what I have in mind. It should be transparent to the user.


--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com
