Kathey Marsden (JIRA) wrote:
Looking again at the patch, I have some error handling concerns.

I am intending to take a more detailed look at this patch over
the next week, as time permits, specifically with respect to error
handling, but I wanted to make two initial comments, and see what
the reaction was:

1) I think that before this patch, the server tended to behave as
follows: first it would read the entire large object into memory
in order to compute its length; then it would write that large object
out into the DRDA messages.

With the new implementation, the server does not need to pre-compute
the length, so it can feed the bytes directly from their storage
location in the database to the DRDA messages.
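
(To make the difference concrete, here is a rough sketch of the two
shapes of the code. This is not Derby's actual DDMWriter code; the
class and helper names are invented and the length encoding is
schematic.)

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class LobSendSketch {

        // Old shape: buffer the whole LOB to learn its length, then send.
        static void sendBuffered(InputStream lob, OutputStream net) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = lob.read(chunk)) != -1) {
                buf.write(chunk, 0, n);     // a "bad spot" in the LOB fails here,
            }                               // before anything is on the wire
            writeLength(net, buf.size());   // length known up front
            buf.writeTo(net);
        }

        // New shape: no pre-computed length; bytes flow straight from the
        // store to the wire.
        static void sendStreamed(InputStream lob, OutputStream net) throws IOException {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = lob.read(chunk)) != -1) {
                net.write(chunk, 0, n);     // the same fault now happens mid-reply
            }
        }

        // Schematic 4-byte big-endian length; the real DRDA encoding differs.
        static void writeLength(OutputStream net, int len) throws IOException {
            net.write(new byte[] { (byte) (len >>> 24), (byte) (len >>> 16),
                                   (byte) (len >>> 8), (byte) len });
        }
    }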

I believe this means that, for a whole class of errors, the temporal
location of the error has changed. With the old code, if there was a
"bad spot" in the large object, we would most likely hit it during the
initial marshalling of the object into memory, before any of the reply
had gone out on the wire. With the new code, if there is some problem
with the large object that only shows up once you've read 2/3 of it,
we are going to encounter that error at a different point in
processing, after part of the reply has already been sent.

Is this true? What sort of faults are we likely to hit? Are things
like "corrupt page in database" the most likely?

If somebody could clarify the fault model for fetching a LOB from
the database, that would be very interesting.

2) In the old code, we used the DRDA "extended length" feature to
write the large object in a "length-delimited" format. That is, at
the start of sending the large object back to the client, we figured
out its overall length, and wrote that as the extended length in the
DRDA message.

Writing this length into the initial message is, I believe,
an implicit contract that the server is going to send that many bytes
back to the client, unless the server closes the connection.

That is, I don't see anything in the DRDA spec which allows the server
to start sending a large DDM object using DSS Layer B segmenting, and
then to change its mind and send some sort of "premature end of message"
indicator. If we say that we're going to send the client a 5 megabyte
BLOB, and then we hit an error after sending the first 50 thousand
bytes, we still have to send all of the remaining 4.95 megabytes,
because those are the rules of Layer B segmenting.

However, in the new code, we don't use extended length. Instead, we
use Layer B streaming. Therefore, we no longer have this implicit
contract to deliver the entire blob, and we are free to terminate our
send at any point. The client will compute the length of the data by
simply adding up all the segments that it receives, so if we should hit
an error at any point midway through the process, we can just stop
sending data, and immediately move on to returning the next DSS
message (which is presumably the error indication exception).
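
(Here is a rough sketch of that early-termination property. The
framing below -- a two-byte segment length whose high bit means
"another segment follows" -- is my schematic reading of Layer B
continuation, not Derby's DDMWriter code, so treat the bit layout as
illustrative.)

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Arrays;

    public class LayerBStreamSketch {

        // Payload per segment, capped so the 2-byte length (which counts
        // itself) stays below the continuation bit.
        static final int MAX_PAYLOAD = 0x7FFD;
        static final int CONTINUATION = 0x8000;  // "another segment follows"

        static void streamLob(InputStream lob, OutputStream net) throws IOException {
            byte[] cur = readSegment(lob);
            while (cur != null) {
                byte[] next;
                try {
                    next = readSegment(lob);     // look ahead one segment
                } catch (IOException e) {
                    // Fault mid-LOB: close off the current segment without
                    // the continuation bit, then let the caller send the
                    // error as the next DSS.
                    writeSegment(net, cur, false);
                    throw e;
                }
                writeSegment(net, cur, next != null);
                cur = next;
            }
        }

        static byte[] readSegment(InputStream lob) throws IOException {
            byte[] buf = new byte[MAX_PAYLOAD];
            int filled = 0, n;
            while (filled < buf.length
                    && (n = lob.read(buf, filled, buf.length - filled)) != -1) {
                filled += n;
            }
            if (filled == 0) return null;
            return filled == buf.length ? buf : Arrays.copyOf(buf, filled);
        }

        static void writeSegment(OutputStream net, byte[] payload, boolean more)
                throws IOException {
            int header = (payload.length + 2) | (more ? CONTINUATION : 0);
            net.write(header >>> 8);
            net.write(header & 0xFF);
            net.write(payload);
        }
    }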

So I think that the presence or absence of the 'padScalarStreamForError'
processing is directly linked to whether or not we emit the extended
length bytes.

But as Kathey noted earlier in this thread, it seems as though there are
still some cases where the new code will emit extended length bytes
when writeScalarStream is called. Specifically, Kathey noted that
VARCHAR FOR BIT DATA takes this path.

If this is true, then I think the patch may have a problem, because if
an error occurs in the midst of such processing, we still need to
be able to pad the scalar stream (see the sketch below).
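
(Roughly what that padding has to do, schematically -- the real
padScalarStreamForError presumably differs in details such as the fill
byte.)

    import java.io.IOException;
    import java.io.OutputStream;

    public class PadForErrorSketch {

        // On the extended-length path the byte count was already promised, so
        // on a mid-stream fault the server must write filler until the promise
        // is honored; only then can the error DSS follow.
        static void padScalarStreamForError(OutputStream net, long promised,
                                            long sent) throws IOException {
            byte[] filler = new byte[8192];      // fill byte (0x00) is schematic
            long remaining = promised - sent;
            while (remaining > 0) {
                int n = (int) Math.min(filler.length, remaining);
                net.write(filler, 0, n);
                remaining -= n;
            }
        }
    }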

Or, we could take Kathey's suggestion and change that path so that it
also uses Layer B streaming. That is, have writeScalarStream() *always*
do Layer B streaming, even if the length is known ahead of time;
roughly, something like the sketch below.
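
(Schematically, reusing streamLob from the earlier sketch -- again
invented names, not the actual DDMWriter signature.)

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class AlwaysStreamSketch {
        private final OutputStream netOut;

        AlwaysStreamSketch(OutputStream netOut) { this.netOut = netOut; }

        // Ignore any pre-computed length and always take the Layer B streaming
        // path: no extended-length contract is created, so no padding-on-error
        // is ever needed.
        void writeScalarStream(InputStream in, long lengthIfKnown) throws IOException {
            // lengthIfKnown deliberately unused: even a known length is streamed.
            LayerBStreamSketch.streamLob(in, netOut);
        }
    }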

thanks,

bryan



