On 11/18/10 4:54 PM, Björn Thelberg wrote:
No problem Emmanuel, thanks for looking into this now!
Yes I can post a JIRA bug, I just want to make sure I have the whole picture
clear first.
It's probably better to file a JIRA issue, so it doesn't get lost in the mailing
list. We can also add comments to the JIRA issue, and you can add yourself as a
watcher on it, so you are notified directly whenever a comment is added.
As I see it, we are talking about two problems with the same origin: an
Exception thrown in a decode() method.
The first is that the DemuxingProtocolDecoder's state.currentDecoder is not
cleared after an Exception is thrown.
Which makes sense, in a way: just because you get an exception does
not mean you want to discard the message being decoded. There are many
possible cases where you still want to continue decoding.
Now, that does not mean that clearing the buffer should not be an
option. This is what I proposed in my previous mail.
So any subsequently received messages are passed directly to the previously used
MessageDecoder's decode() method, without selecting a new decoder via
decodable() calls. This differs from what happens if you return
MessageDecoder.NOT_OK, even though both are documented as protocol specification
violations.
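To make problem #1 concrete, here is a minimal sketch of such a MessageDecoder;
the class name and the frame layout (one type byte, a 4-byte length, then the
payload) are assumptions made up for the example. Once decode() throws, the
demuxer keeps this decoder as state.currentDecoder, so the next buffer is fed
straight back to decode() without a new decodable() round.

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;
import org.apache.mina.filter.codec.demux.MessageDecoderAdapter;
import org.apache.mina.filter.codec.demux.MessageDecoderResult;

public class FooMessageDecoder extends MessageDecoderAdapter {
    private static final byte FOO_TYPE = 0x01;

    @Override
    public MessageDecoderResult decodable(IoSession session, IoBuffer in) {
        if (in.remaining() < 1) {
            return MessageDecoderResult.NEED_DATA;
        }
        // Only claim buffers that start with our type byte.
        return in.get(in.position()) == FOO_TYPE
                ? MessageDecoderResult.OK
                : MessageDecoderResult.NOT_OK;
    }

    @Override
    public MessageDecoderResult decode(IoSession session, IoBuffer in,
            ProtocolDecoderOutput out) throws Exception {
        if (in.remaining() < 5) {
            return MessageDecoderResult.NEED_DATA;
        }
        int length = in.getInt(in.position() + 1); // peek the length after the type byte
        if (length < 0) {
            // After this throw, DemuxingProtocolDecoder still has this decoder
            // as state.currentDecoder, so the next buffer comes straight back
            // here without decodable() being consulted again.
            throw new IllegalArgumentException("negative length: " + length);
        }
        if (in.remaining() < 5 + length) {
            return MessageDecoderResult.NEED_DATA;
        }
        in.skip(5); // consume the type byte and the length field
        byte[] payload = new byte[length];
        in.get(payload);
        out.write(payload);
        return MessageDecoderResult.OK;
    }
}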
You could work around this by catching all exceptions and returning NOT_OK, but
since you can't both return and re-throw the exception, the nice
ExceptionHandler functionality in MINA goes to waste.
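For reference, a sketch of that workaround; the class name and the
doRealDecode() hook are made up for the example. Swallowing the exception and
returning NOT_OK keeps the decoder selection sane, but the root cause never
reaches the normal exception path unless you log it yourself.

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;
import org.apache.mina.filter.codec.demux.MessageDecoderAdapter;
import org.apache.mina.filter.codec.demux.MessageDecoderResult;

public abstract class SwallowingMessageDecoder extends MessageDecoderAdapter {

    // Hypothetical hook where a subclass does the actual parsing and may throw.
    protected abstract MessageDecoderResult doRealDecode(IoSession session,
            IoBuffer in, ProtocolDecoderOutput out) throws Exception;

    @Override
    public MessageDecoderResult decode(IoSession session, IoBuffer in,
            ProtocolDecoderOutput out) {
        try {
            return doRealDecode(session, in, out);
        } catch (Exception e) {
            // We cannot both return and re-throw here, so the root cause never
            // reaches the normal exception handling; the handler only sees
            // whatever the demuxer does with NOT_OK.
            return MessageDecoderResult.NOT_OK;
        }
    }
}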
I would rather suggest that the DemuxingProtocolDecoder, in its doDecode(),
catches exceptions, resets the current decoder and re-throws the exception. But
I'm no expert on MINA internals.
I still have to get back to the code to see exactly what the impact could be.
Here again, a JIRA issue will help, as it will probably gather more
information and use cases than this mailing list.
FYI, I use neither a DemuxingProtocolDecoder nor a
CumulativeProtocolDecoder, I found them a bit too intrusive... Nothing
forbids you from defining your own codec that manages fragmentation and
demuxing by itself.
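For what it's worth, here is a rough sketch of what such a hand-rolled codec
could look like. The frame layout (type byte + 4-byte length + payload), the
attribute key, the class name and the message classes are all made up for the
example; only the accumulate-then-dispatch pattern is the point.

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderAdapter;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

public class HandRolledDecoder extends ProtocolDecoderAdapter {
    private static final String BUFFER_KEY = HandRolledDecoder.class.getName() + ".buffer";

    @Override
    public void decode(IoSession session, IoBuffer in, ProtocolDecoderOutput out)
            throws Exception {
        // Fragmentation: append the incoming bytes to a per-session buffer.
        IoBuffer acc = (IoBuffer) session.getAttribute(BUFFER_KEY);
        if (acc == null) {
            acc = IoBuffer.allocate(256).setAutoExpand(true);
            session.setAttribute(BUFFER_KEY, acc);
        }
        acc.put(in);
        acc.flip();

        try {
            // Demuxing: dispatch every complete frame on its type byte.
            while (acc.remaining() >= 5) {
                byte type = acc.get(acc.position());
                int length = acc.getInt(acc.position() + 1);
                if (acc.remaining() < 5 + length) {
                    break; // incomplete frame, wait for more data
                }
                acc.skip(5); // consume the type byte and the length field
                byte[] payload = new byte[length];
                acc.get(payload);
                switch (type) {
                case 0x01:
                    out.write(new FooMessage(payload));
                    break;
                case 0x02:
                    out.write(new BarMessage(payload));
                    break;
                default:
                    throw new IllegalArgumentException("unknown message type " + type);
                }
            }
        } finally {
            // Keep only the unread tail for the next round, even if we threw.
            acc.compact();
        }
    }

    // Made-up message types, only here so the sketch is complete.
    static final class FooMessage { final byte[] body; FooMessage(byte[] b) { body = b; } }
    static final class BarMessage { final byte[] body; BarMessage(byte[] b) { body = b; } }
}

The finally block is what keeps the per-session buffer consistent even when a
frame blows up, which is essentially the behaviour problem #2 below is asking
for.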
The second is that the buffer can be "half read" when the Exception is thrown,
and must somehow be reset before being passed to a new decoder when a new
message is received.
Here, I can see a problem. What if the buffer contains more than one
message, and the exception occurred while processing the first one? I know
it's quite theoretical, but this can happen when you have fixed-size messages.
Otherwise the decoding will certainly fail again, and we will probably end up
with a new Exception and be stuck in a loop. And even if nothing was read from
the buffer when the Exception was thrown, chances are high that a new decoding
attempt on that same buffer will fail in the same way.
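One defensive pattern that at least keeps the buffer consistent today is
sketched below; the class name and the parseOneMessage() hook are hypothetical
stand-ins for the real parsing code. It restores the read position when parsing
fails, but it does nothing about the same bytes failing again on the next
attempt.

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;
import org.apache.mina.filter.codec.demux.MessageDecoderAdapter;
import org.apache.mina.filter.codec.demux.MessageDecoderResult;

public abstract class PositionRestoringDecoder extends MessageDecoderAdapter {

    // Hypothetical hook that does the real parsing and may throw mid-message.
    protected abstract MessageDecoderResult parseOneMessage(IoSession session,
            IoBuffer in, ProtocolDecoderOutput out) throws Exception;

    @Override
    public MessageDecoderResult decode(IoSession session, IoBuffer in,
            ProtocolDecoderOutput out) throws Exception {
        int start = in.position();
        try {
            return parseOneMessage(session, in, out);
        } catch (Exception e) {
            // Undo the partial read before propagating, so the buffer is not
            // left "half read" for the next attempt. This does not stop the
            // same bytes from failing again; it only keeps the buffer consistent.
            in.position(start);
            throw e;
        }
    }
}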
Making the removeSessionBuffer() method accessible sounds nice, but it still
forces the framework users to do a try-catch and re-throw in every decoder.
Also, I don't know of an easy way to access the DemuxingProtocolDecoder
instance from the MessageDecoder.decode() method.
Maybe a catch block in the DemuxingProtocolDecoder, as I suggested for problem
#1, could fix this as well.
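Very roughly, the proposed change might have the shape below. This is
simplified pseudo-Java, not the actual DemuxingProtocolDecoder source; the
surrounding cumulation and decodable() selection logic is omitted.

try {
    MessageDecoderResult result = state.currentDecoder.decode(session, in, out);
    // ... existing handling of OK / NEED_DATA / NOT_OK ...
} catch (Exception e) {
    // Forget the failed decoder so decodable() runs again on the next buffer,
    // and optionally reset or drop the session buffer as well.
    state.currentDecoder = null;
    throw e; // re-throw so exceptionCaught() still sees the real cause
}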
What kind of protocol are you implementing, by the way? I'm wondering if some
other codec design could fit your needs...
Should I post one or two bugs in JIRA?
One should be enough.
--
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com