[ https://issues.apache.org/jira/browse/DERBY-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12535673 ]

Knut Anders Hatlen commented on DERBY-3093:
-------------------------------------------

That's a good catch, James! Seems like you have found the bug. If the savepoint 
name only needs to be unique within a transaction, I think we can fix it by 
maintaining a counter in GenericLanguageConnectionContext and adding a 
getUniqueSavepointName() method which returns "ISSP1", "ISSP2", ...
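
Something along these lines, as a rough sketch (the field name and the reset
point are my assumptions; only the getUniqueSavepointName() method and the
"ISSPn" naming scheme come from the proposal above):

    // in GenericLanguageConnectionContext
    private int nextSavepointId; // assumed field; reset on commit/rollback

    /**
     * Returns a savepoint name that is unique within the current
     * transaction: "ISSP1", "ISSP2", ...
     */
    public String getUniqueSavepointName() {
        return "ISSP" + (++nextSavepointId);
    }

The code that currently sets the internal savepoint
(GenericStatementContext.setSavePoint in the stack trace below) would then
ask for a fresh name instead of reusing one that can collide.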

> intermittent transaction failure caused by internal duplicate savepoint name, 
> for trigger
> -----------------------------------------------------------------------------------------
>
>                 Key: DERBY-3093
>                 URL: https://issues.apache.org/jira/browse/DERBY-3093
>             Project: Derby
>          Issue Type: Bug
>    Affects Versions: 10.3.1.4
>         Environment: Java 1.6.0_02
>            Reporter: Jim Newsham
>
> While running our app overnight, performing some intensive database 
> operations (primarily deletes), we experienced a transaction failure with 
> the error message "A SAVEPOINT with the passed name already exists in the 
> current transaction".  This failure appears to be very intermittent: I've 
> run the same operation successfully for hours and hours without failure.  
> Some more information:
> 1.  I see that there are four JIRA issues which report the same message 
> (DERBY-2773, DERBY-1457, DERBY-2808, DERBY-2804).  I am not sure to what 
> extent, if any, the problem I experienced is related to these issues.
> • At least three of the above JIRA issues are related to triggers.  Our 
> transaction failure occurred while performing a delete on a table which has 
> delete triggers.
> • We aren't using savepoints explicitly.  Of course, it can be seen in 
> the stack trace that the trigger operation is setting a savepoint.
> • One of the JIRA issues mentions exceeding the maximum depth of nested 
> triggers.  I'm not sure what a nested trigger is, but the trigger we have 
> defined should not (in theory) cause another trigger event.
> 2.  Here's the stack trace:
> Caused by: java.sql.SQLException: A SAVEPOINT with the passed name already exists in the current transaction.
>         at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
>         at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
>         at com.referentia.sdf.monitor.samplebase.derby.DerbySampleBase$3.perform(DerbySampleBase.java:682)
>         at com.referentia.sdf.monitor.samplebase.derby.DerbySampleBase.performInTransaction(DerbySampleBase.java:2747)
>         ... 7 more
> Caused by: java.sql.SQLException: A SAVEPOINT with the passed name already exists in the current transaction.
>         at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
>         ... 18 more
> Caused by: ERROR 3B501: A SAVEPOINT with the passed name already exists in the current transaction.
>         at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>         at org.apache.derby.impl.store.raw.xact.Xact.setSavePoint(Unknown Source)
>         at org.apache.derby.impl.store.access.RAMTransaction.setSavePoint(Unknown Source)
>         at org.apache.derby.impl.sql.conn.GenericStatementContext.setSavePoint(Unknown Source)
>         at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
>         at org.apache.derby.impl.sql.execute.GenericTriggerExecutor.executeSPS(Unknown Source)
>         at org.apache.derby.impl.sql.execute.RowTriggerExecutor.fireTrigger(Unknown Source)
>         at org.apache.derby.impl.sql.execute.TriggerEventActivator.notifyEvent(Unknown Source)
>         at org.apache.derby.impl.sql.execute.DeleteResultSet.fireAfterTriggers(Unknown Source)
>         at org.apache.derby.impl.sql.execute.DeleteResultSet.open(Unknown Source)
>         at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
>         ... 12 more
> 3.  Here are the relevant tables and triggers.  We have different tables for 
> different "sample" data types in our application (currently int, long, 
> float, double, boolean, string).  I have shown the int_sample table; the 
> other tables are identical but have a different data type for the value 
> field.
> Sample records are first class; time records are second class: they only 
> exist to support sample records.  When there are no remaining sample records 
> for a given time record, the time record can be removed.  It is the job of 
> the trigger to perform this cleanup: when a record is deleted from any of 
> the sample tables, the time record corresponding to *_sample.fk_time_id is 
> deleted only if there are no remaining records in any of the sample tables 
> which have the same fk_time_id.  So although there are cascading deletes 
> (deleting a time record deletes all of its samples), the trigger should not 
> recurse, because it only deletes a time record when no samples reference it 
> (a concrete example follows the DDL below).
>  
> create table time (
>   id                  int not null generated always as identity,
>   time                timestamp not null,
>   constraint time_pk primary key (id),
>   constraint time_unique unique (time)
> );
> create table int_sample (
>   fk_band_id          int not null,
>   fk_time_id          int not null,
>   value               int not null,
>   constraint int_sample_pk primary key (fk_band_id, fk_time_id),
>   constraint int_sample_fk_band foreign key (fk_band_id)
>     references band (id) on delete cascade,
>   constraint int_sample_fk_time foreign key (fk_time_id)
>     references time (id) on delete cascade
> );
> create trigger cascade_delete_unused_time_for_int_sample_trigger
>   after delete on int_sample
>   referencing old as oldrow
>   for each row
>   delete from time where 
>   time.id = oldrow.fk_time_id
>   and id not in (select fk_time_id from int_sample) 
>   and id not in (select fk_time_id from long_sample) 
>   and id not in (select fk_time_id from float_sample) 
>   and id not in (select fk_time_id from double_sample) 
>   and id not in (select fk_time_id from boolean_sample) 
>   and id not in (select fk_time_id from string_sample);
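> For example (hypothetical ids and connection handling, just to make the 
> cleanup rule concrete):
>
>     // using java.sql.*; assumes band 1 / time 10 exist, and that this is
>     // the last sample row anywhere that references time id 10
>     Statement s = conn.createStatement();
>     // deleting the last sample row for time 10 fires the after-delete
>     // trigger on int_sample...
>     s.executeUpdate(
>         "delete from int_sample where fk_band_id = 1 and fk_time_id = 10");
>     // ...which should in turn remove the now-unreferenced time row
>     ResultSet rs = s.executeQuery("select count(*) from time where id = 10");
>     rs.next();
>     System.out.println(rs.getInt(1) == 0 ? "time row cleaned up"
>                                          : "time row still present");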
> 4.  Invocation.  This is very rough pseudocode focusing on the core logic.  
> In reality there's a lot more code, but I'm doing the usual stuff, such as 
> using prepared statements and closing all resources when done.  The key 
> thing is that I'm using two statements in a nested fashion: in the outer 
> statement I iterate over records in the time table, and in the inner loop I 
> use a second statement to delete records in one of the sample tables.  I can 
> provide the actual code if desired.
> void deleteSamples(int bandId, String sampleTable, Object someQueryCriteria)
>     throws SQLException {
>   // within a transaction: auto-commit is off, commit/rollback happens in
>   // the caller (performInTransaction in the stack trace above)
>   Statement stmt1 = conn.createStatement();
>   // timeQuery: "select id from time where ...", restricted by
>   // someQueryCriteria (construction elided)
>   ResultSet timeIds = stmt1.executeQuery(timeQuery);
>   while (timeIds.next()) {
>     PreparedStatement stmt2 = conn.prepareStatement(String.format(
>         "delete from %s where fk_band_id = ? and fk_time_id = ?",
>         sampleTable));
>     stmt2.setInt(1, bandId);
>     stmt2.setInt(2, timeIds.getInt(1));  // current time id
>     stmt2.executeUpdate();  // fires the after-delete trigger on sampleTable
>     stmt2.close();
>   }
>   timeIds.close();
> }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
