[
https://issues.apache.org/jira/browse/DERBY-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12852911#action_12852911
]
Rick Hillegas commented on DERBY-1482:
--------------------------------------
Thanks for the patch, Mamta. I have a couple of comments:
CreateTriggerNode:
o In the comments there are several references to the columns mentioned in the
REFERENCING clause. I think the meaning would be a little clearer and more
specific if these comments talked about the columns mentioned in the trigger
action.
o There are two places in bindReferencesClause() where -1s are squeezed out of
arrays: once for triggerColsAndTriggerActionCols and once for
triggerActionColsOnly. It seems to me that there is an opportunity here to
factor this squeezing code into a helper method; a rough sketch of what I mean
follows this list. That would make the code a little easier to read and
maintain.
o I believe that the new code (computing the columns referenced in trigger
actions and optimizing the generated text) should only be performed if the
database is at level 10.6 or above; a sketch of the kind of version check I
mean also follows this list. I am worried that the trigger will fail if the
user soft-downgrades back to 10.5. If the database is at level 10.5 or lower,
then the old code should be used.
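For illustration, here is a minimal sketch of the kind of helper I have in mind
for the -1 squeezing; the method name is just a placeholder, not something that
is already in the patch:
    // Hypothetical helper: returns a copy of the input array with all
    // -1 entries squeezed out, preserving the order of the remaining
    // column positions.
    private static int[] squeezeOutMinusOnes(int[] columns)
    {
        int count = 0;
        for (int i = 0; i < columns.length; i++)
        {
            if (columns[i] != -1) { count++; }
        }

        int[] result = new int[count];
        int next = 0;
        for (int i = 0; i < columns.length; i++)
        {
            if (columns[i] != -1) { result[next++] = columns[i]; }
        }
        return result;
    }
Both triggerColsAndTriggerActionCols and triggerActionColsOnly could then be
run through the same method.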
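And as a sketch of the kind of version check I mean, assuming the usual
DataDictionary.checkVersion() pattern and the DD_VERSION_DERBY_10_6 constant
(the exact names used in the patch may differ; this class is purely
illustrative):
    import org.apache.derby.iapi.error.StandardException;
    import org.apache.derby.iapi.sql.dictionary.DataDictionary;

    // Hypothetical guard, not taken from the patch: true only when the
    // data dictionary is at level 10.6 or higher, so the new trigger
    // action column logic can be skipped for databases that may be
    // soft-downgraded back to 10.5.
    class TriggerDictionaryLevel
    {
        static boolean atLeastTenSix(DataDictionary dd)
            throws StandardException
        {
            // Passing null for the feature name asks for a boolean
            // answer instead of an exception when the level is too low.
            return dd.checkVersion(DataDictionary.DD_VERSION_DERBY_10_6, null);
        }
    }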
TriggerDescriptor:
o This patch changes the serialized form of TriggerDescriptor. This is ok as
long as we can convince ourselves that these objects are never persisted. These
objects do live in the ConstantActions of query plans for INSERT, UPDATE, and
DELETE statements. Can we convince ourselves that the compiled forms of INSERT,
UPDATE, and DELETE statements never persist across soft-upgrades?
ReferencedColumnDescriptorImpl:
o I think that readExternal() won't be able to read descriptors from
soft-upgraded 10.5 databases. And writeExternal() will write descriptors which
won't be readable if the user soft-downgrades to 10.5. I think that the
following alternative implementations will work.
public void readExternal(ObjectInput in) throws IOException
{
    int rcLength;

    // The old (10.5 and earlier) format starts with the length of
    // referencedColumns. The new format starts with a negative version
    // marker, followed by that length.
    int versionNumber = in.readInt();
    if ( versionNumber < 0 ) { rcLength = in.readInt(); }
    else { rcLength = versionNumber; }

    referencedColumns = new int[rcLength];
    for (int i = 0; i < rcLength; i++)
    {
        referencedColumns[i] = in.readInt();
    }

    if ( versionNumber < 0 )
    {
        // Only the new format carries the trigger action columns.
        int rctaLength = in.readInt();
        referencedColumnsInTriggerAction = new int[rctaLength];
        for (int i = 0; i < rctaLength; i++)
        {
            referencedColumnsInTriggerAction[i] = in.readInt();
        }
    }
}
public void writeExternal(ObjectOutput out) throws IOException
{
    // If there are no trigger action columns, write the old (10.5)
    // format so that a soft-downgraded database can still read the
    // descriptor; otherwise lead with a negative version marker.
    int versionNumber = referencedColumnsInTriggerAction == null ?
        referencedColumns.length : -1;

    out.writeInt( versionNumber );
    if ( versionNumber < 0 ) { out.writeInt( referencedColumns.length ); }
    for (int i = 0; i < referencedColumns.length; i++)
    {
        out.writeInt(referencedColumns[i]);
    }

    if ( versionNumber < 0 )
    {
        out.writeInt(referencedColumnsInTriggerAction.length);
        for (int i = 0; i < referencedColumnsInTriggerAction.length; i++)
        {
            out.writeInt(referencedColumnsInTriggerAction[i]);
        }
    }
}
Thanks,
-Rick
> Update triggers on tables with blob columns stream blobs into memory even
> when the blobs are not referenced/accessed.
> ---------------------------------------------------------------------------------------------------------------------
>
> Key: DERBY-1482
> URL: https://issues.apache.org/jira/browse/DERBY-1482
> Project: Derby
> Issue Type: Bug
> Components: SQL
> Affects Versions: 10.2.1.6
> Reporter: Daniel John Debrunner
> Assignee: Mamta A. Satoor
> Priority: Minor
> Attachments: derby1482_patch1_diff.txt, derby1482_patch1_stat.txt,
> derby1482_patch2_diff.txt, derby1482_patch2_stat.txt,
> derby1482DeepCopyAfterTriggerOnLobColumn.java, derby1482Repro.java,
> derby1482ReproVersion2.java, junitUpgradeTestFailureWithPatch1.out,
> TriggerTests_ver1_diff.txt, TriggerTests_ver1_stat.txt
>
>
> Suppose I have 1) a table "t1" with blob data in it, and 2) an UPDATE trigger
> "tr1" defined on that table, where the triggered-SQL-action for "tr1" does
> NOT reference any of the blob columns in the table. [ Note that this is
> different from DERBY-438 because DERBY-438 deals with triggers that _do_
> reference the blob column(s), whereas this issue deals with triggers that do
> _not_ reference the blob columns--but I think they're related, so I'm
> creating this as a subtask of DERBY-438 ]. In such a case, if the trigger is fired,
> the blob data will be streamed into memory and thus consume JVM heap, even
> though it (the blob data) is never actually referenced/accessed by the
> trigger statement.
> For example, suppose we have the following DDL:
> create table t1 (id int, status smallint, bl blob(2G));
> create table t2 (id int, updated int default 0);
> create trigger tr1 after update of status on t1 referencing new as n_row
> for each row mode db2sql update t2 set updated = updated + 1 where t2.id =
> n_row.id;
> Then if t1 and t2 both have data and we make a call to:
> update t1 set status = 3;
> the trigger tr1 will fire, which will cause the blob column in t1 to be
> streamed into memory for each row affected by the trigger. The result is
> that, if the blob data is large, we end up using a lot of JVM memory when we
> really shouldn't have to (at least, in _theory_ we shouldn't have to...).
> Ideally, Derby could figure out whether or not the blob column is referenced,
> and avoid streaming the lob into memory whenever possible (hence this is
> probably more of an "enhancement" request than a bug)...
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.