From: "Tim Bunce" <[EMAIL PROTECTED]>

> > What's an example where having finish() after the loop actually makes
> > a difference?  I'm trying to understand why it is a "mistake", as
> > opposed to simply being superfluous?
> 
> Consider doing a select returning many rows, some of which are either
>    a) LONGs where one row has a length>LongReadLen and LongTruncOk=0
> or b) fields calculated like "foo/bar" where a row has bar=0
> 
> The application is merrily calling fetchrow_*() in a while loop.
> When it reaches the bad row, the fetch will fail. Without RaiseError,
> and with no error check after the loop, the only way you'll notice
> the premature end of fetching is by the warning. But you won't get
> the warning if you've called finish().
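
To make that scenario concrete, here is a minimal sketch (the
connection details, table, and column names are invented). Since
most DBI method calls reset err/errstr, the post-loop error check
has to come before any finish():

    use DBI;
    my ($dsn, $user, $pass) = @ARGV;   # placeholder connection details
    my $dbh = DBI->connect($dsn, $user, $pass,
                           { RaiseError => 0, PrintError => 1 });
    my $sth = $dbh->prepare("SELECT foo/bar FROM some_table");
    $sth->execute;
    while (my @row = $sth->fetchrow_array) {
        # ... process the row; when a row has bar = 0 the fetch
        # fails, and the loop ends just as it would at end of data
    }
    # Without this check, the PrintError warning is the only hint
    # that fetching stopped prematurely.
    die "fetch aborted early: " . $sth->errstr if $sth->err;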

What I've been wondering for a while is whether finish() is wrong or
just superfluous when you DO have RaiseError set, on either select or
insert/update/delete statements. Say you have a generic SQL handler,
and you'd like to just call finish() no matter how many rows were
fetched, or no matter what kind of statement it is. Would you run
into any problems then? Would it maybe depend on the DBD?
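
In other words, something like this (a rough sketch only; run_sql and
its interface are made up). NUM_OF_FIELDS is zero for non-select
statements, so the fetch loop is skipped for insert/update/delete,
and with RaiseError set a failed fetch dies before the unconditional
finish() is ever reached:

    sub run_sql {
        my ($dbh, $sql, @bind) = @_;
        my $sth = $dbh->prepare($sql);
        $sth->execute(@bind);
        my @rows;
        if ($sth->{NUM_OF_FIELDS}) {      # select-like statement
            while (my $row = $sth->fetchrow_arrayref) {
                push @rows, [@$row];      # copy; the ref is reused
            }
        }
        $sth->finish;   # called no matter what kind of statement
        return \@rows;
    }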

I recently 'fixed' such a case to only call finish() on statements
that fetched some, but not all, rows. In benchmarking, the saved call
to finish() did not quite make up for the extra logic to determine
whether some but not all rows were fetched (but the difference was
negligible). It just seemed more 'correct' that way, but I was
wondering if it was necessary.
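
(For reference, the check amounted to roughly this; DBI's Active
attribute is one way to approximate "some but not all rows fetched",
though exactly when a driver turns Active off may vary:)

    # finish() only if the handle may still have unfetched rows
    $sth->finish if $sth->{Active};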

-Douglas Wilson
