On 01/02/2011 20:50, Tim Bunce wrote:
On Tue, Feb 01, 2011 at 10:58:14AM -0500, John Scoles wrote:
  On 01/02/2011 10:48 AM, Tim Bunce wrote:
On Mon, Jan 31, 2011 at 08:39:40PM +0000, Martin J. Evans wrote:
I imagine most DBDs [if not all] that implement execute_array
[Just a reminder that drivers can opt to implement just
execute_for_fetch() and use the DBI's default execute_array() method,
which then calls execute_for_fetch().]

themselves) the rows affected will be -1 per execute (as the driver
does not know affected rows) and so I'd expect $rows to be -1 and
not 18. DBI gets away with this as it does an execute for each row
The thing is, OCI will know the total number of rows affected; it just
does not know the row count for each tuple.

So on an update with, say, 4 tuples and 6 rows updated, the per-tuple
statuses would look like this:

-1
-1
-1
-1

while in list context one would get:

Tuples = 4
rows = 6
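To make that concrete, here is a minimal sketch of the list-context call; it assumes an already-connected $dbh, and the table and column names ('people', 'age', 'surname') are illustrative, not from the thread:

```perl
# Sketch only: assumes a connected $dbh; table/column names are made up.
my $sth = $dbh->prepare("UPDATE people SET age = ? WHERE surname = ?");

my @tuple_status;
my ($tuples, $rows) = $sth->execute_array(
    { ArrayTupleStatus => \@tuple_status },
    [ 30, 40, 50, 60 ],                      # 4 values for the first placeholder
    [ 'Smith', 'Jones', 'Evans', 'Bunce' ],  # 4 values for the second
);

# With a driver that only knows the overall count (as described above for
# OCI), $tuples would be 4, $rows could be 6 if some surnames matched more
# than one row, and each element of @tuple_status would be -1.
```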
I think we could factor into the spec the fact that some drivers can
return an overall row count but not per-tuple counts.

Did that a few hours ago but not committed yet.

My only concern is that when it does error (no matter what the setting of
AutoCommit) you always get undef.
Umm, yes. Returning undef (or an empty list) if there was any kind of
error causes useful information (the total row count) to be discarded.

I don't see why, when called in list context. That execute_array partially succeeded (or failed) only means undef needs to be returned for the first element of the list; it does not affect the rows-affected count.
This brings us back to the question of whether a failure of a single
tuple should cause execute_array() itself to return an error.

ODBC doesn't do that. It treats that situation as SUCCESS_WITH_INFO
and that does seem very reasonable.

The funny thing is, despite being a big user of ODBC, I don't (and others in this thread expressed the same belief). For me, if I use execute_array (with AutoCommit on or not), my principal issue is that I want to know if anything failed, so I do:

$dbh->{RaiseError} = 1;
my $ret = eval {
    $sth->execute_array(...);
};
do_something_on_error() if ($@ || !$ret); # amended for DBD::Oracle behavior

NOTE: I've only started doing this since realising DBD::Oracle does not raise an error.

Why put the onus of checking ArrayTupleStatus on people who cannot accept any tuple failing, instead of on those who can accept some failures? My feeling is that it is very serious to ignore a partial failure, so the default should ensure you know about it via RaiseError. Those who can accept a tuple failing can look at ArrayTupleStatus and still commit the changes.
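For the latter group, a sketch of tolerating partial failure by examining ArrayTupleStatus; it assumes a connected $dbh with RaiseError off for this call (so a tuple failure does not throw), and the bind arrays are illustrative:

```perl
# Sketch only: assumes a connected $dbh (RaiseError off), a prepared $sth,
# and bind arrays @ages/@surnames; all names here are illustrative.
my @tuple_status;
my ($tuples, $rows) = $sth->execute_array(
    { ArrayTupleStatus => \@tuple_status },
    \@ages, \@surnames,
);

for my $i (0 .. $#tuple_status) {
    my $status = $tuple_status[$i];
    # A failed tuple appears as a reference to [ $err, $errstr, $state ];
    # a successful one as its row count (or -1 if the driver doesn't know).
    if (ref $status eq 'ARRAY') {
        warn "tuple $i failed: $status->[1]\n";
    }
}
$dbh->commit;   # commit the tuples that did succeed
```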
but I can live with that.
Perhaps we could default to the SUCCESS_WITH_INFO model and add an
attribute for people to use if they want tuple errors to cause
execute_array() itself to return an error.

We can debate the name later, but for now let's call it:

     ArrayPromoteTupleError =>  1

Tim.
which defaults to 1, I hope.

Incidentally, I just blogged about this a few hours ago, as I think it is a serious problem to ignore potential failures. Add to that the fact that some (broken) versions of the Oracle database return success with info but then don't commit the OK tuples.

Martin
