[ https://issues.apache.org/jira/browse/DERBY-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12872333#action_12872333 ]

Rick Hillegas commented on DERBY-1482:
--------------------------------------

Hi Mamta,

Concerning the first comment, let's just wait for the new patch. If the 
comments still puzzle me, I'll let you know.

Concerning the second comment: If the new serialized form is only used in new 
databases and hard-upgraded databases, then there should be no problem, 
provided that new trigger descriptors created in soft-upgraded databases keep 
the same serialized form as 10.6 trigger descriptors. This is the situation you 
want to avoid:

1) You create a new trigger in a soft-upgraded database.

2) Then you soft-downgrade to 10.6.

3) Because the serialized form has changed, the 10.6 server raises a 
deserialization error (or worse) every time the new trigger fires.

Note that, because this patch didn't make it into 10.6, there is now another 
serialization issue which we have to deal with:

In 10.5 and earlier, the objects stored in system tables were converted into 
strings when they were selected by clients. That is, a 10.5 or earlier server 
returned ReferencedColumnDescriptorImpl.toString() for the following query:

   select referencedcolumns from sys.systriggers

In 10.6, the above query uses the writeExternal()/readExternal() machinery to 
send the ReferencedColumnDescriptor object if both the client and the server 
are at 10.6 or higher. That query will choke a 10.6 client when it hits a 
trigger with a referencing clause that is created in a 10.7 database. For this 
sort of problem, the Formatable machinery calls for introducing a new 
formatable id. In the end, this may make your serialization logic easier to 
read but it may not improve the situation for 10.6 clients. I think that the 
following will work:

Introduce a new subclass of ReferencedColumnDescriptorImpl called 
ReferencedColumnDescriptorImpl_7_0. That class will have its own formatable id 
and will serialize itself differently from the old 
ReferencedColumnDescriptorImpl. If one of these new objects is sent to a 10.6 
client, then you will get an error looking up the formatable id. I don't know 
whether that error will be any better or worse than the current 
deserialization error. We may want to add a release note saying that we don't 
handle this edge case gracefully.
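To make the idea concrete, here is a minimal, self-contained sketch of the 
format-id pattern. The class names, format-id values, and field layout below 
are hypothetical illustrations, not Derby's actual constants or the real 
ReferencedColumnDescriptorImpl wire format: the writer emits a format id ahead 
of the payload, the new subclass uses its own id, and an "old" reader that 
only knows the old id fails fast at the id lookup instead of misreading the 
new body.

```java
import java.io.*;

// Hypothetical sketch of the Formatable-style versioning pattern: every
// descriptor writes a format id before its body, and readers dispatch on it.
public class FormatIdSketch {
    static final int OLD_FORMAT_ID = 470;   // hypothetical id for the 10.6 form
    static final int NEW_FORMAT_ID = 471;   // hypothetical id for the 10.7 subclass

    // Old descriptor: serializes only the referenced-column array.
    static class ReferencedColumns {
        final int[] referencedCols;
        ReferencedColumns(int[] cols) { this.referencedCols = cols; }
        int formatId() { return OLD_FORMAT_ID; }
        void writeBody(DataOutput out) throws IOException {
            out.writeInt(referencedCols.length);
            for (int c : referencedCols) out.writeInt(c);
        }
    }

    // New subclass (the ReferencedColumnDescriptorImpl_7_0 idea): its own
    // format id, plus an extra field appended after the old body.
    static class ReferencedColumns_7_0 extends ReferencedColumns {
        final int[] actionCols;
        ReferencedColumns_7_0(int[] cols, int[] actionCols) {
            super(cols);
            this.actionCols = actionCols;
        }
        @Override int formatId() { return NEW_FORMAT_ID; }
        @Override void writeBody(DataOutput out) throws IOException {
            super.writeBody(out);
            out.writeInt(actionCols.length);
            for (int c : actionCols) out.writeInt(c);
        }
    }

    static byte[] serialize(ReferencedColumns rc) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(rc.formatId());   // the format id always precedes the body
        rc.writeBody(out);
        return bos.toByteArray();
    }

    // An "old" (10.6-style) reader: it only knows OLD_FORMAT_ID, so a stream
    // written by the new subclass is rejected at the id check rather than
    // being misinterpreted partway through the body.
    static ReferencedColumns deserializeOldReader(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int id = in.readInt();
        if (id != OLD_FORMAT_ID) {
            throw new IOException("unknown format id: " + id);
        }
        int n = in.readInt();
        int[] cols = new int[n];
        for (int i = 0; i < n; i++) cols[i] = in.readInt();
        return new ReferencedColumns(cols);
    }

    public static void main(String[] args) throws IOException {
        // The old form round-trips through the old reader.
        ReferencedColumns old =
            deserializeOldReader(serialize(new ReferencedColumns(new int[]{1, 3})));
        if (old.referencedCols[1] != 3) throw new AssertionError();

        // The new form is rejected by the old reader at the format-id check.
        try {
            deserializeOldReader(
                serialize(new ReferencedColumns_7_0(new int[]{1}, new int[]{2})));
            throw new AssertionError("expected unknown format id");
        } catch (IOException expected) {
            System.out.println("old reader rejected new form: "
                + expected.getMessage());
        }
    }
}
```

The error a 10.6 client would actually raise comes from Derby's real 
format-id registry rather than a hand-rolled check like this, but the failure 
mode is the same: the id lookup fails before any of the new fields are read.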


Hope this answers your questions.

Regards,
-Rick

> Update triggers on tables with blob columns stream blobs into memory even 
> when the blobs are not referenced/accessed.
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1482
>                 URL: https://issues.apache.org/jira/browse/DERBY-1482
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.2.1.6
>            Reporter: Daniel John Debrunner
>            Assignee: Mamta A. Satoor
>            Priority: Minor
>         Attachments: derby1482_patch1_diff.txt, derby1482_patch1_stat.txt, 
> derby1482_patch2_diff.txt, derby1482_patch2_stat.txt, 
> derby1482DeepCopyAfterTriggerOnLobColumn.java, derby1482Repro.java, 
> derby1482ReproVersion2.java, junitUpgradeTestFailureWithPatch1.out, 
> TriggerTests_ver1_diff.txt, TriggerTests_ver1_stat.txt
>
>
> Suppose I have 1) a table "t1" with blob data in it, and 2) an UPDATE trigger 
> "tr1" defined on that table, where the triggered-SQL-action for "tr1" does 
> NOT reference any of the blob columns in the table. [ Note that this is 
> different from DERBY-438 because DERBY-438 deals with triggers that _do_ 
> reference the blob column(s), whereas this issue deals with triggers that do 
> _not_ reference the blob columns--but I think they're related, so I'm 
> creating this as a subtask of DERBY-438 ]. In such a case, if the trigger is 
> fired, 
> the blob data will be streamed into memory and thus consume JVM heap, even 
> though it (the blob data) is never actually referenced/accessed by the 
> trigger statement.
> For example, suppose we have the following DDL:
>     create table t1 (id int, status smallint, bl blob(2G));
>     create table t2 (id int, updated int default 0);
>     create trigger tr1 after update of status on t1 referencing new as n_row 
> for each row mode db2sql update t2 set updated = updated + 1 where t2.id = 
> n_row.id;
> Then if t1 and t2 both have data and we make a call to:
>     update t1 set status = 3;
> the trigger tr1 will fire, which will cause the blob column in t1 to be 
> streamed into memory for each row affected by the trigger. The result is 
> that, if the blob data is large, we end up using a lot of JVM memory when we 
> really shouldn't have to (at least, in _theory_ we shouldn't have to...).
> Ideally, Derby could figure out whether or not the blob column is referenced, 
> and avoid streaming the lob into memory whenever possible (hence this is 
> probably more of an "enhancement" request than a bug)... 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
