Re: Codecs and CumulativeByteBuffer

2013-06-17 Thread Raphaël Barazzutti
Hi Emmanuel,

Thanks for your comments!

I imagine that the slice() method would behave like ByteBuffer's. As
in ByteBuffer, the position-based values (position, limit and mark) of
the slice would be independent copies of the original IoBuffer's, but
the buffer content itself would be shared. The position-based values
would then logically stay consistent between the original and the new
slice.
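For reference, the NIO ByteBuffer semantics described here (shared content, independent position-based values) can be shown directly:

```java
import java.nio.ByteBuffer;

public class SliceDemo {
    public static void main(String[] args) {
        ByteBuffer original = ByteBuffer.allocate(8);
        original.put(new byte[] {1, 2, 3, 4}).flip();
        original.position(1);

        // slice() shares content but gets its own position/limit/mark
        ByteBuffer slice = original.slice();

        slice.position(2);                 // moving the slice...
        if (original.position() != 1)      // ...leaves the original untouched
            throw new AssertionError();

        slice.put(0, (byte) 42);           // writing through the slice...
        if (original.get(1) != 42)         // ...is visible in the original
            throw new AssertionError();
    }
}
```

An IoBuffer slice would presumably follow the same contract.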

In decoders, the IoBuffer would be used as a kind of sliding window;
consequently the position-based values would form a monotonically
growing series (and thus be limited to Integer.MAX_VALUE). What do you
think?

Currently CumulativeByteBuffer provides most of the methods available
in NIO's ByteBuffer (it doesn't offer position-aware methods, which I
think are not really useful in a decoding phase).

Kind regards,

Raphaël

On Mon, Jun 17, 2013 at 12:59 AM, Emmanuel Lécharny elecha...@gmail.com wrote:
 On 6/16/13 9:31 PM, Raphaël Barazzutti wrote:
 Hi Julien,

 Thanks for your comments!

 You're right, there are not that many differences between those two classes.

 In some situations I find IoBuffer very interesting, but in the case
 of a decoder there are some concepts taken from ByteBuffer that, in
 my opinion, don't fit very well.

 Let's briefly describe a case: in a decoder, the IoBuffer could be
 used as a connection's buffer; a given instance would then be the
 context of the connection. That buffer would somehow be used as a
 queue, and position p0 would be the first byte of the communication.
 After having enough bytes to process a message, the position would
 be moved to position p1. We could then expect the buffer to release
 the byte buffer(s) containing the bytes in the range between p0 and
 p1. It then becomes a bit complicated to define the behaviour of
 position(int) when a call to position(p0) is made.
 We just need to implement the slice() method in the IoBuffer to offer
 the kind of features you expect.


 --
 Regards,
 Cordialement,
 Emmanuel Lécharny
 www.iktek.com



Re: Codecs and CumulativeByteBuffer

2013-06-17 Thread Emmanuel Lécharny
On 6/17/13 1:32 PM, Raphaël Barazzutti wrote:
 Hi Emmanuel,

 Thanks for your comments!

 I imagine that the slice() method would behave like ByteBuffer's. As
 in ByteBuffer, the position-based values (position, limit and mark)
 of the slice would be independent copies of the original IoBuffer's,
 but the buffer content itself would be shared. The position-based
 values would then logically stay consistent between the original and
 the new slice.
Absolutely. Is this a problem, assuming that once the decoding is done,
the IoBuffer will be discarded?

 In decoders, the IoBuffer would be used as a kind of sliding window;
 consequently the position-based values would form a monotonically
 growing series (and thus be limited to Integer.MAX_VALUE). What do
 you think?
This is not exactly what I have in mind.

The need, as I understand it, is to keep track of incoming data up to
the point where you can consider the full data decoded. This can take
many calls to the channel.read() method, as the data may be
fragmented.

In any case, either you consider that what has been decoded can be
discarded, or it must be kept up to the end.

Let's consider the first case: you need to write a stateful decoder
to 'remember' where you stopped, so that you can resume when more
data is received. This is the best possible solution, but it is also
the most complex one. One of the constraints is that you must keep a
data structure in memory to hold the partially decoded result.

In the second case, you have to keep all the incoming bytes until you
know that you can decode them. This is typically the case when you
know the size of the data you will get (either because the PDU has a
fixed size, or because you were able to decode the data length from
the incoming data). In this case, you don't really need a specific
system that keeps track of the current position: either the total
length of data in your IoBuffer is what you expect, or you need to
read more bytes.
One small thing to consider, though: the length itself might be split,
too...
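The second case can be sketched with a minimal, hypothetical accumulator (not MINA's actual API); note how it also handles a length prefix that arrives split across fragments:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Hypothetical sketch: accumulate fragments until a length-prefixed PDU is complete.
public class LengthPrefixDecoder {
    private final ByteArrayOutputStream acc = new ByteArrayOutputStream();

    /** Feed one fragment; returns the decoded payload, or null if more bytes are needed. */
    public byte[] decode(byte[] fragment) {
        acc.write(fragment, 0, fragment.length);
        byte[] all = acc.toByteArray();
        if (all.length < 4) return null;              // the 4-byte length prefix may itself be split
        int length = ByteBuffer.wrap(all).getInt();   // big-endian length prefix
        if (all.length < 4 + length) return null;     // PDU not complete yet
        byte[] payload = new byte[length];
        System.arraycopy(all, 4, payload, 0, length);
        acc.reset();
        acc.write(all, 4 + length, all.length - 4 - length); // keep any trailing bytes
        return payload;
    }
}
```

No explicit position tracking is needed: the decoder only compares the total accumulated length against the expected one.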

So all in all, whatever the use case, it seems to me that IoBuffer
covers all the needs. Did I miss something?


-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com 



Re: Codecs and CumulativeByteBuffer

2013-06-17 Thread Emmanuel Lécharny
On 6/17/13 1:47 PM, Emmanuel Lécharny wrote:
 On 6/17/13 1:32 PM, Raphaël Barazzutti wrote:
 Hi Emmanuel,

 Thanks for your comments!

 I imagine that the slice() method would behave like ByteBuffer's. As
 in ByteBuffer, the position-based values (position, limit and mark)
 of the slice would be independent copies of the original IoBuffer's,
 but the buffer content itself would be shared. The position-based
 values would then logically stay consistent between the original and
 the new slice.
 Absolutely. Is this a problem, assuming that once the decoding is done,
 the IoBuffer will be discarded?
 In decoders, the IoBuffer would be used as a kind of sliding window;
 consequently the position-based values would form a monotonically
 growing series (and thus be limited to Integer.MAX_VALUE). What do
 you think?
 This is not exactly what I have in mind.

 The need, as I understand it, is to keep track of incoming data up to
 the point where you can consider the full data decoded. This can take
 many calls to the channel.read() method, as the data may be
 fragmented.

 In any case, either you consider that what has been decoded can be
 discarded, or it must be kept up to the end.

 Let's consider the first case: you need to write a stateful decoder
 to 'remember' where you stopped, so that you can resume when more
 data is received. This is the best possible solution, but it is also
 the most complex one. One of the constraints is that you must keep a
 data structure in memory to hold the partially decoded result.

 In the second case, you have to keep all the incoming bytes until you
 know that you can decode them. This is typically the case when you
 know the size of the data you will get (either because the PDU has a
 fixed size, or because you were able to decode the data length from
 the incoming data). In this case, you don't really need a specific
 system that keeps track of the current position: either the total
 length of data in your IoBuffer is what you expect, or you need to
 read more bytes.
 One small thing to consider, though: the length itself might be
 split, too...

 So all in all, whatever the use case, it seems to me that IoBuffer
 covers all the needs. Did I miss something?

Forgot to mention that the IoBuffer *must* be stored in the Session
context, and not shared.

Also note that we are using one single ByteBuffer to read data from
the Channel, so this ByteBuffer must be copied into the IoBuffer. In
other words, you need to dup it.
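A sketch of that copy step (a hypothetical helper; the real MINA code differs): the selector loop reads into one shared ByteBuffer, then copies the received bytes into a session-owned buffer before the shared one is reused.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: copy the shared read buffer into per-session storage.
public class ReadLoop {
    /** Copy the readable bytes of the shared buffer into a fresh, session-owned buffer. */
    static ByteBuffer copyForSession(ByteBuffer shared) {
        shared.flip();                                  // switch from writing to reading
        ByteBuffer copy = ByteBuffer.allocate(shared.remaining());
        copy.put(shared).flip();                        // deep copy: safe to keep after reuse
        shared.clear();                                 // the shared buffer can now be reused
        return copy;
    }
}
```

Note that ByteBuffer.duplicate() alone would not be enough here, since a duplicate still shares the content that the next read overwrites; a deep copy is required.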


-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com 



Re: Codecs and CumulativeByteBuffer

2013-06-17 Thread Raphaël Barazzutti
Hi Emmanuel,

Thanks for your helpful comments,

I think I now have a better picture of how you expect things to work
with IoBuffer.

For sure, the IoBuffer is per-session and not shared. But more
specifically, I see that the context should be an object pointing to
the IoBuffer, and not an IoBuffer directly.

Thanks again and have a nice afternoon,

Regards,

Raphaël



On Mon, Jun 17, 2013 at 2:50 PM, Emmanuel Lécharny elecha...@gmail.com wrote:
 On 6/17/13 1:47 PM, Emmanuel Lécharny wrote:
 On 6/17/13 1:32 PM, Raphaël Barazzutti wrote:
 Hi Emmanuel,

 Thanks for your comments!

 I imagine that the slice() method would behave like ByteBuffer's. As
 in ByteBuffer, the position-based values (position, limit and mark)
 of the slice would be independent copies of the original IoBuffer's,
 but the buffer content itself would be shared. The position-based
 values would then logically stay consistent between the original and
 the new slice.
 Absolutely. Is this a problem, assuming that once the decoding is done,
 the IoBuffer will be discarded?
 In decoders, the IoBuffer would be used as a kind of sliding window;
 consequently the position-based values would form a monotonically
 growing series (and thus be limited to Integer.MAX_VALUE). What do
 you think?
 This is not exactly what I have in mind.

 The need, as I understand it, is to keep track of incoming data up to
 the point where you can consider the full data decoded. This can take
 many calls to the channel.read() method, as the data may be
 fragmented.

 In any case, either you consider that what has been decoded can be
 discarded, or it must be kept up to the end.

 Let's consider the first case: you need to write a stateful decoder
 to 'remember' where you stopped, so that you can resume when more
 data is received. This is the best possible solution, but it is also
 the most complex one. One of the constraints is that you must keep a
 data structure in memory to hold the partially decoded result.

 In the second case, you have to keep all the incoming bytes until you
 know that you can decode them. This is typically the case when you
 know the size of the data you will get (either because the PDU has a
 fixed size, or because you were able to decode the data length from
 the incoming data). In this case, you don't really need a specific
 system that keeps track of the current position: either the total
 length of data in your IoBuffer is what you expect, or you need to
 read more bytes.
 One small thing to consider, though: the length itself might be
 split, too...

 So all in all, whatever the use case, it seems to me that IoBuffer
 covers all the needs. Did I miss something?

 Forgot to mention that the IoBuffer *must* be stored in the Session
 context, and not shared.

 Also note that we are using one single ByteBuffer to read data from the
 Channel, so this ByteBuffer must be copied into the IoBuffer. You need
 to dup it, in other words.


 --
 Regards,
 Cordialement,
 Emmanuel Lécharny
 www.iktek.com



Re: Codecs and CumulativeByteBuffer

2013-06-16 Thread Raphaël Barazzutti
Hi Julien,

Thanks for your comments!

You're right, there are not that many differences between those two classes.

In some situations I find IoBuffer very interesting, but in the case
of a decoder there are some concepts taken from ByteBuffer that, in my
opinion, don't fit very well.

Let's briefly describe a case: in a decoder, the IoBuffer could be
used as a connection's buffer; a given instance would then be the
context of the connection. That buffer would somehow be used as a
queue, and position p0 would be the first byte of the communication.
After having enough bytes to process a message, the position would be
moved to position p1. We could then expect the buffer to release the
byte buffer(s) containing the bytes in the range between p0 and p1.
It then becomes a bit complicated to define the behaviour of
position(int) when a call to position(p0) is made.
Another thing which is a bit confusing: I imagine that the position
should be consistent over time in an IoBuffer (i.e. p1-p0-1 is equal
to the number of bytes strictly between p0 and p1); the drawback of
enforcing this consistency is that the communication cannot exceed
Integer.MAX_VALUE bytes.
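A minimal sketch of such a queue-like buffer that releases consumed chunks (hypothetical names, not an actual MINA class; a long position sidesteps the Integer.MAX_VALUE limit):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: a buffer used as a queue of chunks, where fully
// consumed chunks are released, so the absolute position only moves forward.
public class ChunkQueue {
    private final Deque<ByteBuffer> chunks = new ArrayDeque<>();
    private long consumed; // monotonically growing "p0": bytes released so far

    void add(ByteBuffer chunk) { chunks.add(chunk); }

    /** Consume n bytes; chunks that become empty are released (unreachable afterwards). */
    void consume(int n) {
        while (n > 0) {
            ByteBuffer head = chunks.peek();
            int take = Math.min(n, head.remaining());
            head.position(head.position() + take);
            if (!head.hasRemaining()) chunks.poll(); // release the exhausted chunk
            consumed += take;
            n -= take;
        }
    }

    long position() { return consumed; } // cannot move backwards: released bytes are gone
}
```

In such a design, position(p0) for an already-released p0 simply cannot be honoured, which is exactly the difficulty described above.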

That's why I didn't use the concept of positions/limits/etc. at all
in CumulativeByteBuffer.

Otherwise, I think I'll add support for
(get|put)(Short|Int|Long|Float|Double) to CumulativeByteBuffer.
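Multi-byte accessors have to cope with values that straddle chunk boundaries; a minimal hypothetical sketch (not the actual CumulativeByteBuffer API):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: getInt() over a queue of chunks, where the four
// bytes of the value may straddle a chunk boundary.
public class MultiChunkReader {
    private final Deque<ByteBuffer> chunks = new ArrayDeque<>();

    void add(byte[] chunk) { chunks.add(ByteBuffer.wrap(chunk)); }

    /** Read one byte, dropping exhausted chunks along the way. */
    byte get() {
        while (!chunks.peek().hasRemaining()) chunks.poll();
        return chunks.peek().get();
    }

    /** Big-endian int assembled byte by byte, so chunk boundaries don't matter. */
    int getInt() {
        int value = 0;
        for (int i = 0; i < 4; i++) value = (value << 8) | (get() & 0xFF);
        return value;
    }
}
```

The other accessors (getShort, getLong, getFloat via Float.intBitsToFloat, and so on) would follow the same byte-by-byte pattern.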

You're right, CumulativeByteBuffer has to be moved.

Thanks again,

Raphaël



On Sat, Jun 15, 2013 at 10:26 PM, Julien Vermillard
jvermill...@gmail.com wrote:
 Hi,
 Thanks for the contribution! Keep up the good work.

 The CumulativeDecoder is a nice addition for those who will miss the
 old one, and it is more in line with the new codec API.

 Regarding CumulativeByteBuffer, it's not that far from IoBuffer. I
 don't think we should have two classes; they should be merged.
 If IoBuffer could discard bytes and provide a byte iterator, would
 that be fine for you?

 BTW, CumulativeDecoder should be in its own package.

 Julien
 --
 Julien Vermillard  http://people.apache.org/~jvermillard/


 On Sat, Jun 15, 2013 at 11:42 AM, Raphaël Barazzutti
 raphael.barazzu...@gmail.com wrote:
 Hi all,

 Following Ashish's helpful comment (thanks!), I had to add a
 cumulative mechanism to the codec in order to get correct
 deserialization in the Thrift module as well as in the Protobuf
 module.

 Look here : https://github.com/rbarazzutti/mina.git on branch
 serialization-fixes-1

 For this I implemented a class named CumulativeByteBuffer to
 properly handle these accumulation needs in the decoding phase.

 Nevertheless, another class, IoBuffer, provides mechanisms close to
 those of CumulativeByteBuffer. IoBuffer has an API more similar to
 NIO's ByteBuffer, while CumulativeByteBuffer only provides a subset
 of that API. IMHO, CumulativeByteBuffer is simpler and offers some
 additional tools which are convenient for decoders.

 Ready for comments,

 Kind regards,

 Raphaël


Re: Codecs and CumulativeByteBuffer

2013-06-16 Thread Emmanuel Lécharny
On 6/16/13 9:31 PM, Raphaël Barazzutti wrote:
 Hi Julien,

 Thanks for your comments!

 You're right, there are not that many differences between those two classes.

 In some situations I find IoBuffer very interesting, but in the case
 of a decoder there are some concepts taken from ByteBuffer that, in
 my opinion, don't fit very well.

 Let's briefly describe a case: in a decoder, the IoBuffer could be
 used as a connection's buffer; a given instance would then be the
 context of the connection. That buffer would somehow be used as a
 queue, and position p0 would be the first byte of the communication.
 After having enough bytes to process a message, the position would
 be moved to position p1. We could then expect the buffer to release
 the byte buffer(s) containing the bytes in the range between p0 and
 p1. It then becomes a bit complicated to define the behaviour of
 position(int) when a call to position(p0) is made.
We just need to implement the slice() method in the IoBuffer to offer
the kind of features you expect.


-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com 



Codecs and CumulativeByteBuffer

2013-06-15 Thread Raphaël Barazzutti
Hi all,

Following Ashish's helpful comment (thanks!), I had to add a
cumulative mechanism to the codec in order to get correct
deserialization in the Thrift module as well as in the Protobuf
module.

Look here: https://github.com/rbarazzutti/mina.git on branch
serialization-fixes-1

For this I implemented a class named CumulativeByteBuffer to properly
handle these accumulation needs in the decoding phase.

Nevertheless, another class, IoBuffer, provides mechanisms close to
those of CumulativeByteBuffer. IoBuffer has an API more similar to
NIO's ByteBuffer, while CumulativeByteBuffer only provides a subset of
that API. IMHO, CumulativeByteBuffer is simpler and offers some
additional tools which are convenient for decoders.

Ready for comments,

Kind regards,

Raphaël


Re: Codecs and CumulativeByteBuffer

2013-06-15 Thread Julien Vermillard
Hi,
Thanks for the contribution! Keep up the good work.

The CumulativeDecoder is a nice addition for those who will miss the
old one, and it is more in line with the new codec API.

Regarding CumulativeByteBuffer, it's not that far from IoBuffer. I
don't think we should have two classes; they should be merged.
If IoBuffer could discard bytes and provide a byte iterator, would
that be fine for you?
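A byte iterator over several chained ByteBuffers, as suggested, might look like this sketch (hypothetical, not IoBuffer's actual API):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Hypothetical sketch: iterate over the bytes of several chained ByteBuffers
// without touching their positions (absolute gets only).
public class ChainedBytes implements Iterable<Byte> {
    private final List<ByteBuffer> buffers;

    ChainedBytes(ByteBuffer... buffers) { this.buffers = Arrays.asList(buffers); }

    @Override
    public Iterator<Byte> iterator() {
        return new Iterator<Byte>() {
            private int buf = 0, pos = 0;

            @Override
            public boolean hasNext() {
                while (buf < buffers.size() && pos >= buffers.get(buf).limit()) {
                    buf++;          // skip exhausted buffers
                    pos = 0;
                }
                return buf < buffers.size();
            }

            @Override
            public Byte next() {
                if (!hasNext()) throw new NoSuchElementException();
                return buffers.get(buf).get(pos++); // absolute get: positions untouched
            }
        };
    }
}
```

Discarding would then amount to dropping fully iterated buffers from the chain.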

BTW, CumulativeDecoder should be in its own package.

Julien
--
Julien Vermillard  http://people.apache.org/~jvermillard/


On Sat, Jun 15, 2013 at 11:42 AM, Raphaël Barazzutti
raphael.barazzu...@gmail.com wrote:
 Hi all,

 Following Ashish's helpful comment (thanks!), I had to add a
 cumulative mechanism to the codec in order to get correct
 deserialization in the Thrift module as well as in the Protobuf
 module.

 Look here: https://github.com/rbarazzutti/mina.git on branch
 serialization-fixes-1

 For this I implemented a class named CumulativeByteBuffer to
 properly handle these accumulation needs in the decoding phase.

 Nevertheless, another class, IoBuffer, provides mechanisms close to
 those of CumulativeByteBuffer. IoBuffer has an API more similar to
 NIO's ByteBuffer, while CumulativeByteBuffer only provides a subset
 of that API. IMHO, CumulativeByteBuffer is simpler and offers some
 additional tools which are convenient for decoders.

 Ready for comments,

 Kind regards,

 Raphaël