[
https://issues.apache.org/jira/browse/TRAFODION-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
liu ming resolved TRAFODION-1566.
-
Resolution: Duplicate
Fix Version/s: 2.1-incubating
Duplicate of TRAFODION-2046.
> Ungraceful failure when transaction size limit reached
> --
>
> Key: TRAFODION-1566
> URL: https://issues.apache.org/jira/browse/TRAFODION-1566
> Project: Apache Trafodion
> Issue Type: Bug
> Components: dtm, sql-exe
> Affects Versions: 1.3-incubating
> Reporter: David Wayne Birdsall
> Assignee: liu ming
> Priority: Minor
> Fix For: 2.1-incubating
>
>
> When exceeding transaction size limits, DELETE fails with a puzzling error
> message.
> The following script reproduces the problem on a workstation (using
> install_local_hadoop, so the HMaster process is handling all four regions).
> Best results are obtained when the setup section is run in a separate sqlci
> session from the deleteTest section (so that current statistics are available):
> ?section setup
> create schema DeleteFailure;
> set schema DeleteFailure;
> -- create a table saltx with 458752 (=7*65536) rows, and another table
> -- salty, that is a copy of saltx
> CREATE TABLE saltx
> (
>   A INT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
> , B INT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
> , C VARCHAR(20) CHARACTER SET ISO88591 COLLATE DEFAULT
>     NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
> , PRIMARY KEY (A ASC, B ASC)
> )
> SALT USING 4 PARTITIONS
> ;
> insert into saltx values (1,1,'hi there!'),
> (2,1,'bye there!'),(3,1,'Happy Tuesday!'),(4,1,'Huckleberry Pie');
> insert into saltx select a+4,b,c from saltx;
> insert into saltx select a+8,b,c from saltx;
> insert into saltx select a+16,b,c from saltx;
> insert into saltx select a+32,b,c from saltx;
> insert into saltx select a+64,b,c from saltx;
> insert into saltx select a+128,b,c from saltx;
> insert into saltx select a+256,b,c from saltx;
> insert into saltx select a+512,b,c from saltx;
> insert into saltx select a+1024,b,c from saltx;
> upsert into saltx select a+2048,b,c from saltx;
> upsert into saltx select a+4096,b,c from saltx;
> upsert into saltx select a+8192,b,c from saltx;
> upsert into saltx select a+16384,b,c from saltx;
> upsert into saltx select a+32768,b,c from saltx;
> upsert using load into saltx select a,b+1,c from saltx;
> upsert using load into saltx select a,b+2,c from saltx where b = 1;
> upsert using load into saltx select a,b+3,c from saltx where b = 1;
> upsert using load into saltx select a,b+4,c from saltx where b = 1;
> upsert using load into saltx select a,b+5,c from saltx where b = 1;
> upsert using load into saltx select a,b+6,c from saltx where b = 1;
> update statistics for table saltx on every column;
> create table salty like saltx;
> upsert using load into salty select * from saltx;
> update statistics for table salty on every column;
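A quick sanity check on the advertised row count (an illustrative sketch, not part of the original script): the 14 insert/upsert statements over the full table each double the set of distinct a-values, and the six "upsert using load" statements expand b to seven distinct values, matching the comment's 458752 (= 7 * 65536) rows.

```python
# Verify the row-count arithmetic in the setup section's comment.
a_values = 4                    # initial INSERT: a in 1..4, b = 1
for _ in range(14):             # 14 "select a+N ... from saltx" statements
    a_values *= 2               # each doubles the a-range: 4 * 2**14 = 65536
b_values = 2                    # "select a, b+1, c" doubles b from {1} to {1, 2}
b_values += 5                   # five "b+N where b = 1" loads add b = 3..7
total = a_values * b_values
print(total)                    # 458752 = 7 * 65536, as the comment states
```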
> ?section deleteTest
> set schema DeleteFailure;
> set param ?b '4'; -- change it to '5' and the delete will succeed
> prepare xx from delete from saltx where b > ?b;
> explain options 'f' xx;
> execute xx; -- fails with ungracious error message
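For context on why changing the parameter to '5' makes the delete succeed (an illustrative sketch, assuming the uniform data distribution the setup section produces): b takes each value 1..7 on exactly 65536 rows, so the predicate b > ?b touches (7 - ?b) * 65536 rows, and the failing case touches 65536 more rows than the succeeding one.

```python
# Rows touched by "delete from saltx where b > ?b", given that the
# setup section puts 65536 rows at each b value in 1..7.
ROWS_PER_B = 65536

def rows_deleted(b_param):
    # b values strictly greater than b_param are 7 - b_param distinct values
    return (7 - b_param) * ROWS_PER_B

print(rows_deleted(4))  # 196608 rows: fails per the repro (limit exceeded)
print(rows_deleted(5))  # 131072 rows: succeeds per the repro
```

The exact transaction size limit is not stated in the report; the repro only shows that it falls somewhere between these two figures.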
> Here is a log showing the deleteTest section failing:
> [birdsall@dev02 IUDCosting]$ sqlci
> Apache Trafodion Conversational Interface 1.3.0
> Copyright (c) 2015 Apache Software Foundation
> >>obey deleteFailure.sql(deleteTest);
> >>?section deleteTest
> >>
> >>set schema DeleteFailure;
> --- SQL operation complete.
> >>
> >>set param ?b '4';
> >> -- change it to '5' and the delete will succeed
> >>
> >>prepare xx from delete from saltx where b > ?b;
> --- SQL command prepared.
> >>
> >>explain options 'f' xx;
> LC   RC   OP   OPERATOR              OPT   DESCRIPTION   CARD
> ---- ---- ---- --------------------  ----  ------------  ---------
> 4    .    5    root                  x                   1.52E+005
> 3    .    4    esp_exchange                1:4(hash2)    1.52E+005
> 1    2    3    tuple_flow                                1.52E+005
> .    .    2    trafodion_vsbb_delet        SALTX         1.00E+000
> .    .    1    trafodion_scan              SALTX         1.52E+005
> --- SQL operation complete.
> >>
> >>execute xx;
> *** ERROR[8448] Unable to access Hbase interface. Call to
> ExpHbaseInterface::nextRow returned error HBASE_ACCESS_ERROR(-706). Cause:
> java.util.concurrent.ExecutionException: java.io.IOException: PerformScan
> error on coprocessor call, scannerID: 14 java.io.IOException: performScan
> encountered Exception txID: 70081 Exception:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
>