Yes, they most likely were implicitly started by the select 
statement. The lock is taken as a full-table IS lock, which
is a very normal result of the DB2 query optimizer.

The question is why the lock is not released when the statement
handle goes out of scope, after ->finish has been called and the
handle has been destroyed:

sub main
{
  my $dbh = DBI->connect($dsn, $user, $pass, { AutoCommit => 0 });

  {
    my $stmt = $dbh->prepare("SELECT ... FROM ...");
    $stmt->execute;
    while ($stmt->fetch) { ... }
    $stmt->finish;
  }

  # Now we are out of scope, but the lock is still present here in the DB
  # because DBI still keeps an internal reference?

  $dbh->disconnect();
  return undef;
}
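
One thing I can do to check whether DBI itself still references the
handle (rather than trying to read that out of the log) is to look at
the parent handle's bookkeeping attributes right after the inner block.
A minimal sketch, assuming a DBI recent enough to have ChildHandles
(it relies on weak-reference support):

  # ... immediately after the inner block in sub main above ...

  # Kids       = statement handles created from $dbh that still exist
  # ActiveKids = the subset of those that are still Active
  warn "Kids=$dbh->{Kids} ActiveKids=$dbh->{ActiveKids}\n";

  # ChildHandles holds weak references to the child handles; any entry
  # that is still defined is a statement handle that has not been
  # destroyed yet, i.e. something still references it.
  my @alive = grep { defined } @{ $dbh->{ChildHandles} || [] };
  warn scalar(@alive), " statement handle(s) still alive\n";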

How do you interpret the DESTROY part of the log? Is it normal? I
see "ref cnt = 1" without a final DESTROY on the outer handle,
which makes me believe that the reason the driver does not release
the lock is that DBI actually still holds a reference to the
statement handle outside the scope of the function where the
statement handle was created.

A possible explanation is that DB2 wants me to explicitly commit()
transactions that I never started myself, even for plain select
statements? But not for all columns, only when there happens to be an
(unfetched) BLOB column in the result? That seems slightly unintuitive.
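
If that is what is going on, then I suppose the workaround is simply
to end the implicitly started transaction myself once the fetch loop
is done. A rough sketch of the inner block with that added (nothing
DB2-specific assumed, just plain DBI with AutoCommit => 0):

  {
    my $stmt = $dbh->prepare("SELECT ... FROM ...");
    $stmt->execute;
    while (my $row = $stmt->fetch) { }   # process $row here
    $stmt->finish;

    # With AutoCommit => 0 the select started a transaction implicitly,
    # so end it explicitly once fetching is done. rollback() would work
    # equally well here since nothing was modified; either way DB2
    # should then release the locks held by that transaction.
    $dbh->commit;
  }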

But really, I am mostly just hoping/asking for your analysis of
DBI's part in this: 1) How is DBI supposed to kill off its
statement handles? 2) Can I somehow see in the log file whether
that happened properly?
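
For 2), I assume the relevant knob is DBI's own trace level; a sketch
of how I would turn it up to make the handle DESTROYs visible (the
level and file name below are just example values):

  # Global: affects all handles created from now on. Level 2 and up
  # logs method calls, which should include DESTROY on the handles.
  DBI->trace(2, "/tmp/dbi_trace.log");

  # Or per handle, to limit the noise:
  $dbh->trace(2);
  $stmt->trace(3);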

/K
 

On Wednesday, 29 October 2014 12:47, Tim Bunce <tim.bu...@pobox.com> wrote:

 On Tue, Oct 28, 2014 at 02:20:52PM +0000, Knal Astrfisson wrote:
>  Hi,
> 
>  The problem: database locks are left behind after the statement
>  handles of simple select statements are closed, e.g. $sth->finish;
>  and undef $sth; when I fetch BLOB fields from DB2 using AutoCommit=0.

I'd guess that the locks are being held by the transaction that was
implicitly started by the select statement.

Read the DB2 docs on Transaction Isolation.

Tim.


   
