On 4/6/04 10:54 AM, "Jan Wieck" <[EMAIL PROTECTED]> wrote:

>> Cursors seem as if they have some nice performance benefits (esp. if
>> you're not using all rows found), but their usefulness drops
>> considerably since you must leave a transaction open.
> 
> And now you know why they are so good if you don't use all rows. This
> benefit I think goes away if you use Joe Conway's suggestion of WITH HOLD.

I tried using WITH HOLD in the following case (using an ecpg C program):

foreach row in table A
   update table B with value from table A
   commit once every 10,000 updates
   forend
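
At the SQL level, the loop corresponds to a session like the sketch below. Table and column names (a, b, a_id, val) are illustrative, not from the original post. Note that PostgreSQL materializes a held cursor's remaining result set at COMMIT, which would likely explain a long pause with 20 million rows:

    DECLARE cur_a CURSOR WITH HOLD FOR
        SELECT a_id, val FROM a;

    FETCH 10000 FROM cur_a;
    -- ... UPDATE b SET x = ... WHERE b_a_id = ... for each fetched row ...

    COMMIT;    -- the held cursor survives this, but the backend must first
               -- copy cur_a's remaining rows into a tuplestore

    FETCH 10000 FROM cur_a;
    -- ... repeat until no rows remain ...
    CLOSE cur_a;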

I created a cursor on table A.  Without WITH HOLD, I (predictably) got an
error on the next table A fetch because the COMMIT closed the cursor.  I added
'WITH HOLD' to the cursor.  On the first COMMIT, the application hung.  I
assume the COMMIT would have completed after some period of time, but I
didn't wait that long.

There are 20 million rows in table A and 60 million in table B (one to many
relationship).

Is this hang on COMMIT when using WITH HOLD to be expected?  Is there a way
around it?  I don't think it's reasonable to put all 60 million updates
in a single transaction.  The kludge solution I implemented was to write out
all the data I needed from table A to a file, then read that file and update
table B.  

Wes

